

Cecilia Heyes: Cognitive scientist at the University of Oxford and the author of “Cognitive Gadgets: The Cultural Evolution of Thinking”

Prof. Cecilia Heyes

“My work concerns the evolution of cognition. It explores the ways in which natural selection, learning, developmental and cultural processes combine to produce the mature cognitive abilities found in adult humans. I am especially interested in social cognition. Most of my current projects examine the possibility that the neurocognitive mechanisms enabling cultural inheritance – social learning, imitation, mirror neurons, mind reading, etc. – are themselves products of cultural evolution.”

Below is a transcript of the Q&A session.

What are the alternative models or hypotheses opposing the claim that empathy is not genetically inherited, and which ones are commonly accepted in the scientific literature today?

There is one: the perception-action model (PAM) of empathy, associated with Frans de Waal, originally put forward, I think, in 2002 and relaunched in 2017. Both of those papers are very widely cited, and I think they are at the core of the belief that emotional contagion is genetically inherited in us, and indeed in a range of other mammalian species. They make an apparently very compelling case for that conclusion, but using a confirmation strategy: they survey evidence which is consistent with the genetic inheritance account without considering whether that evidence is equally or more consistent with the learning account, so no alternative hypothesis is considered. I put forward the Learned Matching hypothesis in a paper in Neuroscience & Biobehavioral Reviews. There I explicitly compared the perception-action model with the Learned Matching model against each piece of evidence and asked: which model does the evidence fit better? I think that is a healthier scientific approach.

I am very surprised, because all my research is about empathy, and I have never heard that babies are not born empathetic. Then I was thinking: if this hypothesis is true, and the opposite of empathy is apathy, then all babies would be born indifferent. Are you saying that we create empathy through social connection?

Starting out with indifference is not going to last very long: a baby could begin to show empathetic crying within ten or fifteen minutes of first hearing its own cries. So you could see indifference at the outset, but enculturation starts immediately; there might even be opportunities for intrauterine learning when it comes to this low-level empathy. Also, I would not rule out that there are genetically inherited things like inequity aversion; indeed, there is evidence of inequity aversion in rats. So I think there are going to be departures from indifference. But you are perhaps pointing to the scary aspect of this: if the Learned Matching hypothesis is correct, our empathetic responses are very fragile. An individual who grew up in a nasty, brutalizing society would be a good deal less empathetic than people with the good fortune to grow up in a broadly empathetic society.
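To make the Learned Matching idea concrete, here is a minimal, purely illustrative Python sketch; the single association, the pairing values, and the learning rate are assumptions for illustration, not anything taken from Heyes's papers. Because a baby's own cries are heard as they are produced, "hearing crying" and "feeling distress" reliably co-occur, and a simple delta rule strengthens the link between them.

```python
# Toy delta-rule associator for Learned Matching (hypothetical values):
# a baby's own cries pair "hearing crying" with "feeling distress", so a
# single associative weight between them grows. After learning, hearing
# another infant's cry evokes distress, i.e. emotional contagion.

def learn_matching(pairings: int = 50, lr: float = 0.2) -> float:
    w = 0.0                                   # heard-cry -> distress weight
    for _ in range(pairings):
        heard, distress = 1.0, 1.0            # own cry: input and outcome co-occur
        w += lr * (distress - w * heard) * heard   # delta-rule update
    return w

w = learn_matching()
print(f"distress evoked by another infant's cry: {w:.2f}")  # approaches 1.0
```

On this account, the response is only as strong as the correlations the environment supplies, which is why the answer above calls our empathetic responses fragile.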

I was wondering about your view on this high capacity for general learning; that is where a lot of the pain points with AI are at the moment. AI is capable of statistical correlation, but that does not seem to cut it for general learning. Do you have some views on how this works for humans in this social context?

I think unguided statistical learning would not get you very far with humans either; I think everybody is agreed on that in the case of humans. The question is: where does the guidance come from? Nativists, the traditional evolutionary psychologists, say that the guidance comes from what is genetically inherited. I am saying that the guidance comes from cultural learning: you can get a long way with statistical learning, even with model-free reinforcement learning, provided it is guided by experts, deliberately or inadvertently.
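As a rough illustration of that contrast, here is a minimal Python sketch, entirely hypothetical rather than anything from the discussion: a model-free Q-learner on a toy corridor task, with and without an "expert" whose choices it occasionally copies.

```python
import random

# Toy model-free reinforcement learning (Q-learning) on a short corridor:
# the only reward sits at the far right end. "Unguided" exploration is
# blind; "guided" exploration sometimes copies an expert who heads right,
# a stand-in for deliberate or inadvertent social guidance. The task,
# names, and parameters are hypothetical, purely for illustration.

N_STATES = 8
ACTIONS = (-1, +1)  # step left or step right

def run_episode(q, guide_prob, eps=0.2, alpha=0.5, gamma=0.9, max_steps=500):
    state, steps = 0, 0
    while state < N_STATES - 1 and steps < max_steps:
        if random.random() < guide_prob:
            action = +1                          # copy the expert
        elif random.random() < eps:
            action = random.choice(ACTIONS)      # blind exploration
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # greedy
        nxt = max(0, state + action)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state, steps = nxt, steps + 1
    return steps  # hitting max_steps means the goal was never found

for label, guide_prob in (("unguided", 0.0), ("expert-guided", 0.5)):
    random.seed(1)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    episode_lengths = [run_episode(q, guide_prob) for _ in range(30)]
    print(f"{label:13s} first: {episode_lengths[0]:3d} steps, "
          f"last: {episode_lengths[-1]:3d} steps")
```

The unguided learner typically hits the step cap without ever finding the reward, while the guided one homes in quickly; the contrast, not the particular numbers, is the point.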

Okay, so if we look at AI, for supervised learning we have human experts labelling data for the AI, but it still does not generalize nearly as well as humans would. Do you think there is a difference there, maybe in how we are guiding AI? Is there some extra guidance you would need to give over and above saying: "In this picture, there is a banana"?

I do not know enough about AI. What I should point out is that I am with the traditional evolutionary psychologists in thinking that what you are developing in the course of childhood is a set of specialized cognitive mechanisms. So I am not saying that language or mindreading or episodic memory is nothing but statistical learning; I am saying that domain-general learning works on the input from other agents to produce specialized mechanisms.

How does the topic of AI and emotions relate to other academic disciplines?

One kind of connection is this: on the view I am presenting, historians and classicists in particular can make a major contribution to cognitive science. They can tell us how conceptions of emotions have changed over historical time, and, on my account, that is a cultural evolutionary process. On a nativist account, there is a stock of emotions that are programmed in the genes, and, yes, you will get a little change around the edges between cultures, but nothing too radical. Whereas I think what many people in the humanities have been inclined to emphasize is how deep the cultural changes in the conception of emotion can be, and that is compatible with the cognitive gadget view in a way that is less compatible with the cognitive instinct view. That is a bit meta, but I think that is the best I can do [in answering this question].

What do you think of religious thinking being a cognitive gadget? Secondly, has there been any pushback from the traditional evolutionary psychologists on your cognitive gadgets theory, or on cultural evolution in general?

Religion as a cognitive gadget… I like it! Would it be distinguishable from morality as a cognitive gadget, I wonder? Religion would be one of the major real-world shapers of a morality gadget, but I am not sure that there are grounds for saying that inside the heads of religious people there is, on the one hand, a religion gadget and, on the other hand, a morality gadget. I think instead there is probably a morality gadget that might work slightly differently in religious people than in non-religious people: if there is such a thing as non-religious people, and if atheism is not in effect a stealth religion. I think religion is terribly important in forming our views about the social world and about morality, but I do not think I want to give it a gadget of its own.

Have I had pushback? In a way, not as much as I would have liked. The kind I would really have liked is: "Here is some evidence that fits the cognitive instinct view much better than it fits the cognitive gadget view"; that is how we would really make progress on this. I do not often get that. What I get instead is: "You might be right about the origins of distinctively human cognitive mechanisms, or some of them. But if they were good for reproductive fitness, they would have sunk into the genes, a process known as genetic assimilation or the Baldwin effect. They might start out needing a lot of specific experience in order to develop, but any mutation that allowed the development to occur with less experiential input would have been favoured by natural selection, and therefore by now, as it were, they would be in the genes." It is a very interesting hypothesis, but having sifted a lot of evidence on genetic assimilation, it looks to me as if, in other animals, genetic assimilation operates on perceptual and motor processes but not on core processes of thinking. That is one problem. The other problem is more theoretical: if an attribute reliably develops on the basis of experience, then it can be screened off from selection pressure operating on genes. So genetic assimilation is conceivable, but the opposite is also conceivable: that social inheritance screens off genetic inheritance. That, I think, is the most interesting challenge I have had. I would like more of: "No, look, here is a piece of evidence relating to development which shows they have to be instincts."

The thing I want to come back to is something you said about a society that is brutal and leaves individuals disadvantaged. On the one hand, you suggest that we could devise robots that are social in some sense: if someone is in an environment where they do not get the right kind of stimulus from their parents, we could introduce it. But why would it be empathy? Empathy is also the flip side of sadism, and to some extent it tends to cause us to favour some people over others, specifically in situations that have a moral content. I wonder, what would it take for robots to teach people who are brought up in these bad places how to empathise better?

What I would like to see is a transfer from cultural selection to what Dan Dennett calls intelligent design. If it is true that distinctively human cognitive mechanisms have come about through cultural selection, as I suggest, then there is the potential to look at what supports their development now, in the state of nature, and to use that as inspiration for what could be done to intervene, to try to achieve desirable outcomes. So, in a way, you would no longer be leaving this to cultural selection; you would be educating people in these things. What the desirable outcomes are is always going to be a complicated matter. But Paul Bloom's idea that empathy is antithetical to moral action is rooted in the nativist view: you genetically inherit a little widget that makes you empathetic towards people like you. This has to be a really clever widget; it has to work out who is like me, and then it experiences empathy only for those individuals. On the Learned Matching account, there is a tendency to show empathy for people like me, but only because, growing up, I tend to be surrounded by people like me. If instead you arranged things differently, through the use of robotics or through more contact with people who look and sound different, then you would be able to get rid of this kind of group specificity of empathy.

If we consider that several aspects of cognition develop through interaction in a social environment, what I am wondering is: what is the effect of people treating robotic agents differently than they would treat other people? Ultimately, it would be fascinating to have an embodied robot go to kindergarten and learn with the kids through social interaction. What would be the behavior of the kids with this robot? I am wondering how these effects relate to the theories you have been explaining to us.

I think my recipe implies that if you are going to apply it to artificial agents, you would put them with adults. You would have them develop in social interaction with adults, that is, with people who already have, in some full-blown form, the skills that you want the artificial agent to develop. I grant you there would be ethical questions about splitting a kindergarten class into half robots and half children, because you would expect some impact on the children's development as well as the other way around; splitting a kindergarten class would be a bit radical. But I do anticipate that our lives will become more bound up with artificial agents: maybe we will have sophisticated robots living with us, doing childcare, doing housework; this kind of thing does not seem impossible. There could be changes to our social cognition which are quite positive in their way, which make us better at mindreading artificial agents. The general assumption, particularly on the part of Japanese roboticists, is that if elderly people are going to find robotic carers acceptable, the robots have to act exactly like people. Whereas I think that if people were exposed to robots earlier in life, there is the potential for some movement in the other direction: for human models of other agents, human theory of mind, to become more inclusive of robots, so that the robots do not have to behave exactly like we do now; we would be able to meet them halfway.

A deep dive into the topic:
Crandall, J. W., Oudah, M., Ishowo-Oloko, F., Abdallah, S., Bonnefon, J. F., Cebrian, M., … & Rahwan, I. (2018). Cooperating with machines. Nature Communications, 9(1), 1-12. [link]

Herrmann, E., Call, J., Hernández-Lloreda, M. V., Hare, B., & Tomasello, M. (2007). Humans have evolved specialized skills of social cognition: The cultural intelligence hypothesis. Science, 317(5843), 1360-1366. [pdf]

Heyes, C. M., & Moore, R. (2021). The cognitive foundations of cultural evolution. In R. Kendal, J. Tehrani, & J. Kendal (Eds.), The Oxford Handbook of Cultural Evolution. Oxford University Press. [pdf]

Heyes, C. (2018). Cognitive Gadgets: The Cultural Evolution of Thinking. Harvard University Press.

Heyes, C. (2010). Where do mirror neurons come from? Neuroscience & Biobehavioral Reviews, 34(4), 575-583. [pdf]

Legare, C. H. (2019). The development of cumulative cultural learning. Annual Review of Developmental Psychology, 1, 119-147. [pdf]