A growing body of research suggests that people have a commonsense psychology — an intuitive way to infer what others want and might decide — that is already sophisticated from infancy. Now researchers have proposed a model for this commonsense psychology that suggests people understand others via a “naive utility calculus” in which they intuit that others choose actions that minimize costs and maximize rewards.
The scientists note that computer models based on their idea could help test how well it describes human behavior. Such research might help artificial intelligences better approximate people’s states of mind. The scientists detailed their findings online July 4 in Trends in Cognitive Sciences.
Over the past few decades, researchers have suggested that the mind has two broad, fundamental capacities: figuring out how to act based on what we know and want, and figuring out what others know and want. “This second faculty is what we call our commonsense psychology—the way we think other people think,” explains study lead author Julian Jara-Ettinger, a cognitive scientist at MIT.
Jara-Ettinger says that he wanted to learn more about human social intelligence in order to engineer artificial social intelligence. This, he says, entails “algorithms that can take as an input how someone is behaving, and infer what they think, what they want, what they are trying to do, whether they have good or bad intentions, and so on.”
Jara-Ettinger and his colleagues note that even in toddlers and young children, commonsense psychology appears to be guided by the assumption that other people decide how to act based on "utilities"—that is, the rewards they might get weighed against the costs they might incur. The researchers developed a testable computational model, building on long-standing ideas that such a naive utility calculus follows a statistical approach known as Bayesian reasoning, in which prior knowledge helps compute the probability that an uncertain hypothesis is correct. Altogether, this naive utility calculus would support both the predictions that people mostly unconsciously make about the future behavior of others and their analyses of the causes of behavior they observe.
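The core of this idea can be sketched in a few lines of code. The following is a minimal illustration, not the researchers' actual model: all the actions, costs, and reward hypotheses are made-up numbers. The forward model assumes an agent picks the action maximizing reward minus cost; the observer then uses Bayes' rule to invert that model and recover the agent's likely preferences from a single observed choice.

```python
# Hypothetical sketch of a naive utility calculus.
# Forward model: an agent picks the action with the highest utility
# (reward minus cost). Inverse model: an observer applies Bayes' rule
# to infer the agent's rewards from the action it chose.

costs = {"near_apple": 1.0, "far_orange": 3.0}  # effort to reach each item

def choose(rewards, costs):
    """Forward model: pick the action maximizing reward - cost."""
    return max(costs, key=lambda a: rewards[a] - costs[a])

# The observer's hypotheses about the agent's rewards (illustrative values).
hypotheses = {
    "likes_apples":  {"near_apple": 5.0, "far_orange": 2.0},
    "likes_oranges": {"near_apple": 2.0, "far_orange": 5.0},
}
prior = {h: 0.5 for h in hypotheses}  # no prior reason to favor either

def posterior(observed_action):
    """Bayesian inversion: P(hypothesis | action) ~ P(action | hypothesis) * prior."""
    likelihood = {h: 1.0 if choose(r, costs) == observed_action else 0.0
                  for h, r in hypotheses.items()}
    z = sum(likelihood[h] * prior[h] for h in hypotheses)
    return {h: likelihood[h] * prior[h] / z for h in hypotheses}

print(posterior("far_orange"))  # walking farther reveals a strong preference
```

The key move is that costs make choices informative: an agent who pays a higher cost for an option must, under this model, assign it a correspondingly higher reward.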
“If this idea is right, we should see signatures of this reasoning in all kinds of domains where we are reasoning about other people, and we should see it in the earliest ages, before we have received any formal education about utilities,” Jara-Ettinger says. A host of recent research supports this idea, he notes.
For instance, the naive utility calculus suggests that people who are ignorant about the costs and rewards of actions should be more likely to make poor choices. In experiments described in this latest work, the researchers introduced 4-year-olds to two puppets, both of which reached for and chose a rambutan over an African cucumber. However, only one of the puppets said "yuck" upon making the choice. From this reaction, the children successfully identified which puppet already knew all about these fruits and which had never seen them before.
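The inference the children made can be cast in the same Bayesian terms. Here is a toy sketch of that reasoning, with invented probabilities rather than anything fitted to the study's data: a knowledgeable chooser is assumed to pick a fruit it will like, while an ignorant one picks at random, so a disgusted reaction is strong evidence of ignorance.

```python
# Hypothetical sketch of the puppet inference. A knowledgeable puppet
# almost always enjoys the fruit it chose; an ignorant puppet guessed,
# so whether it likes the outcome is a coin flip. (Illustrative numbers.)

p_enjoy = {"knowledgeable": 0.95,  # knew the fruits, chose what it likes
           "ignorant": 0.5}        # guessed, 50/50 it likes the rambutan
prior = {"knowledgeable": 0.5, "ignorant": 0.5}

def p_state_given_reaction(said_yuck):
    """Bayes: P(state | reaction) ~ P(reaction | state) * P(state)."""
    like = {s: (1 - p_enjoy[s]) if said_yuck else p_enjoy[s] for s in prior}
    z = sum(like[s] * prior[s] for s in prior)
    return {s: like[s] * prior[s] / z for s in prior}

print(p_state_given_reaction(said_yuck=True))   # "yuck" -> probably ignorant
print(p_state_given_reaction(said_yuck=False))  # no complaint -> probably knew
```

Under these assumptions, a "yuck" shifts the posterior heavily toward the ignorant puppet, matching the judgment the 4-year-olds made.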
The concept put forth in the Trends paper formalizes and systematizes ideas that had been proposed and discussed for a while, says cognitive scientist Noah Goodman at Stanford University, who did not take part in this research. “A lot of people have been investigating a rational view of social cognition, where one assumes people are acting in their own interests, and this work is a really nice crystallization of those ideas and works them out in both the developmental domain and the computational modeling side.”
Future research can focus on how precisely the brain makes these computations regarding utility, Jara-Ettinger says. Another research direction would be to see "whether the naive utility calculus is already at work when we're born, or if it takes some time to develop," he says, noting that some of his colleagues are already working on experiments along these lines.
Future studies may also try to unravel how abstract concepts such as reputation, laziness and selfishness are connected with the naive utility calculus. "It's possible that all these concepts can ultimately boil down to notions of costs and rewards," Jara-Ettinger says, "but it may be that we need something extra to explain how we come to learn these concepts."