Social Cognition in AI

Photo by cottonbro studio. https://www.pexels.com/photo/bionic-hand-and-human-hand-finger-pointing-6153354/

When Alan Turing first proposed his Imitation Game (later known as the Turing test) in 1950, he predicted that within 50 years it would be possible to programme computers so well that a human interrogator would have no more than a 70 percent chance of correctly identifying them as machines. Part of this proposition was to circumvent what he argued was the intractable problem of whether machines can think by replacing it with the question of whether machines can appear to think.

It turned out that Turing overshot his estimate, both in terms of time and the necessary sophistication of the computer: the Turing test was first ‘passed’ in 1991 by a mindless programme that imitated human typos and fooled interrogators into identifying it as human (an event that led to the coining of the phrase ‘artificial stupidity’). But while the Turing test has been widely criticised for sidestepping important philosophical questions about the nature of mind and thought, and about how these might be detected or identified in artificial agents, it remains an important feature of AI research because of its emphasis on interaction. Put simply, whether a computer can think is less important for everyday users than whether it can appear to think.

With the recent rise of Large Language Models (LLMs), for the first time in history the general public have access to sophisticated, powerful, articulate AI models that serve as viable interaction partners. LLM chatbots such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s LLaMA offer unprecedented access to artificial intelligence, and with that access comes a vast expansion of the kinds of interactions that humans and AI can have.

For humans, social interactions are complex tasks that we navigate using a diverse suite of cognitive and behavioural mechanisms. Tracking other people’s mental states, reading their emotions, inferring their intended meaning from indirect requests, and reverse-engineering their decision-making processes from limited information are all necessary skills that humans use to navigate the complex social environment. In fact, these abilities are so fundamental to social interaction that we take them for granted, making it all the more jarring when we have to interact without them.

These social signals, subtle cues, pragmatics, and theories of others’ minds pose a hugely complicated problem even for humans to solve, and building them into AI has been a long-term goal of the field. In this project, I am investigating social cognition in LLMs using psychological tools, examining how their (appearance of) cognitive abilities stack up against humans’ and where they differ, with a view to studying how these differences shape interactions with humans.

James W.A. Strachan
Humboldt Fellow
he/him 🏳️‍🌈