Do LLMs laugh at electric memes?

We know that LLMs have a pretty good understanding of the world. Previous research has shown that transformer-style models outperform traditional cognitive models on many different tasks, even when modelling cognition across species. However, do they understand what humans experience when they look at memes?

What might sound entertaining or nonsensical at first is actually an interesting question: while pure sentence continuation by brute-force probability learning only trains a stochastic parrot (or so the rumor has it), it is not clear what higher-order functions emerge within this parrot. Is the parrot able to understand what goes on in an average human mind?

Enter this cool paper from 2017 by Cowen & Keltner: “Self-report captures 27 distinct categories of emotion bridged by continuous gradients”. Adding to the decades-old dispute over whether emotions are categorical or dimensional, they come up with something that could be paraphrased as – “why not both” (obviously, their discussion of the subject is a bit more nuanced than that). Leaving aside this academic question, to me, the neat thing is not the paper but the resulting dataset! They took ~2000 GIFs (short memes) and had several thousand people evaluate them using free-text, categorical, or dimensional labels – then used PCA to extract a meaningful category-dimensional hybrid for each GIF. They place each GIF in a 48-dimensional emotional hyperspace, giving us ground-truth values for how an average human will experience the GIF emotionally.

We have successfully used this dataset in a paradigm to elicit emotions in participants – it works much better than the overused IAPS. Plus it’s quite enjoyable for the participants: it was the first study where people recommended that their friends take part because it was so entertaining.

Explore the GIFs yourself using this map – it’s pretty fun!

Can an LLM predict human emotions when seeing memes?

Coming back to the topic: can we use this dataset to test the understanding of an LLM?
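One way to frame the test: ask an LLM to rate a GIF (or a description of it) on the same emotion dimensions the humans used, and measure how well its vector lines up with the ground-truth vector from the dataset. Below is a minimal sketch of that comparison using Pearson correlation; the rating values, the five example dimensions, and the idea of scoring per GIF are all my own illustrative assumptions, not the actual study design.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: human ground-truth ratings for one GIF on a few
# emotion dimensions (the real dataset uses 48), next to made-up LLM
# predictions for the same GIF.
human = [0.9, 0.1, 0.7, 0.2, 0.05]  # e.g. amusement, fear, joy, sadness, disgust
llm   = [0.8, 0.2, 0.6, 0.1, 0.10]

print(f"per-GIF agreement: r = {pearson(human, llm):.2f}")
```

Averaging this per-GIF correlation over all ~2000 GIFs would give one simple score for how closely the model tracks the average human's emotional read of a meme.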

TBC
