

Daniel White: Cultural anthropologist at the University of Cambridge

Dr. Daniel White

His research examines the mutual production of emotion, politics and emerging media technologies, with geographic concentrations on Japan and the UK. Currently he is investigating practices of emotion modeling in the development of affect-sensitive software, social robots and artificial emotional intelligence. Through an ongoing project called Model Emotion, he works across disciplines with anthropologists, psychologists, computer scientists and robotics engineers to trace how theoretical models of emotion are built into machines with the capacity to evoke, read or even, in a philosophical sense, have emotion in ways that foster care and wellbeing. Comparing how this process unfolds differently in places like Japan and the UK, he explores how designing robots with emotional intelligence is shifting research agendas within the psychological science of emotion, as well as transforming people’s capacities to relate affectively with emerging forms of artificial life.

Below is a transcript of the Q&A session.

Social media recommendation systems tailor the information we get, creating perfectly personalized representations of the world, which makes it more difficult for us to connect with other people. Simultaneously, we’re building robots that make being lonely more bearable. Isn’t this a scary feedback loop that we might want to avoid?

Absolutely, I think there are many reasons to worry about this feedback loop, which might turn into a vicious cycle of seeking forms of care and intimacy in ways that are coded with the limitations of the algorithms we currently have. At the same time, there has already been a long history of this kind of discussion in robotics, and particularly in emotional robotics. If you think of people like Sherry Turkle, she has been talking about this concern for a very long time. As an anthropologist, I want to hold these concerns at the forefront of my mind, but also keep myself open to different cultural contexts where people, such as Aya, whom I talked about in Japan, can see this part of the argument very well. They can see that companies like Facebook already do emotion profiling in a certain way, and that maybe we should therefore be hesitant, apprehensive, about companies like GrooveX doing similar things when creating a robot. At the same time, adopting her perspective and that of many like her, as an anthropologist I want to recognize that she draws a distinction between a company like Facebook and a company like GrooveX. She sees a lot of potential in companies like GrooveX and, speaking on behalf of people who genuinely feel a lack of connection, perhaps even more so during Covid, she thinks it’s okay to experiment with other kinds of connection with other beings like LOVOT.

I think LOVOT is very interesting because if you compare LOVOT to something like SoftBank’s Pepper, which I think you all know and which is much more human in form, LOVOT is this animated creature. From that point of view, LOVOT represents a change in social robot design in Japan, where robots, and the software systems running them, are used not just as a way to capture what we think are distinct, fixed human emotions but as experimental devices to discover what human-robot intimacy might look like. GrooveX’s designers, when building LOVOT, don’t exactly know what they’re going to find once they put LOVOT out into the wild, as they have now, to get feedback from customers. They have obviously drawn on psychological models, as companies like Fujisoft and SoftBank also do, but they also leave the space open to say, ‘We don’t know what people need, not from a human but from a robot like this. We’re not trying to replace the human, and that’s why we build something that is specifically non-humanlike. We’re designing something that is evocative and interactive in order to see if we can, as makers, expand the capacity for love.’

Now, one might take a very sceptical attitude toward that and say, ‘This is just a company trying to manipulate people.’ But from the feedback we get from people in Japan, I think it’s important to leave open the possibility that people might get forms of satisfaction from robots that can be constructive, because they claim they can. That’s the point of our work in this capacity: to try to describe what those people might mean by those statements.

You were talking about this interaction between non-human entities and humans, and mentioned that non-humanlike shapes are also being created. Do you think there is anything specific to the relationship between non-human agents and humans, as opposed to human-human or human-pet interactions? Are there characteristics that set this type of relationship apart?

Yeah, that’s a great question. You asked it in a wonderful way that allows me to skip over the concern I have with making hard categorical distinctions between human-human emotion, human-animal emotion, and human-machine emotion. You already alluded to the fact that these are overly simplified categories, and this is precisely what we find in our fieldwork: people find many different kinds of emotional things in interaction with robots, based on the kind of evocative design those robots have.

I can try to illustrate this by contrasting two examples, staying focused on your question, which I understand as asking what we might learn from people interacting specifically with non-human artificial agents, and what that might evoke in people. I think the first thing it evokes is an experimental and playful attitude. When you invite people to take this playful approach to robots, you find that emotions become very complex things. They aren’t simply a feeling of happiness or of surprise, but a mixture of both, or something that vacillates back and forth. When you talk with people who are interacting with robots like LOVOT, or with robots like Sony’s AIBO, which I showed you a brief picture of at the robot funerary service, you find that, in the case of LOVOT, people adopt a very playful attitude. In the case of AIBO, at least in the context of the ceremony, people adopt a very sorrowful attitude, but at the same time it’s both playful and melancholic. That also parallels not only how people feel about robots but how they think about robots. They think about these robots not as just artificial agents or as alive, but as something that is actually both.

That’s not something we’re used to taking very seriously from an engineering perspective. What does it mean to say that people treat a robot as both artificial and alive? It means that people treat robots in the category of the conditional, the ‘as if’: ‘I understand this robot very much is not alive, but I am willing to treat this robot as alive because of what that does for me affectively. And what it does is provide comfort, and forms of surprise, where, when I open myself up to the possibility of the robot being alive, I find myself surprised to be comforted by that robot in a way that is not the same as when I am interacting with a human or a virtual agent. Maybe I can’t explain why that is’, speaking in the voice of the interlocutor, ‘but I’m curious to find out.’ And this is exactly what our interlocutors, social robot users and social robot creators alike, are all trying to figure out. This is a moving target because it sits in a cultural context. They’re all trying to figure out what it means to construct intimacy in a human-robot context, and the reason this is an interesting question for all of us, I think, is that we’re used to framing emotion, as in Ekman’s or other engineering models, as something that is simply codable in a single emotion category. Here we have an emotional experience placed in a context that requires a much deeper description in order to document all of its dimensions and capacities.

Can you elaborate on how social emotions are dynamic?

Yeah, very simply, what I mean by that, at least in the context of that paper, is this: suppose you’re in a laboratory analysing how a person reacts to a robot that touches their hand. On the first instance the person smiles slightly, then does this again in a series of ten trials over the course of an hour-long experiment, then comes back over the next two weeks, does it again, and gives that same smile that a computer system recognizes as the same smile. From an anthropological point of view, that smile would very likely indicate different emotional experiences, different affective or physiological experiences, each time. I think this is something engineers can obviously see, admit, and understand as a problem, but it’s also something we want to highlight and make central for people who are designing these systems. The question is not only how we would distinguish between different kinds of smiles (I know engineers are doing a lot of this challenging work, distinguishing the slight difference between this corner of the mouth raising and that corner of the mouth raising, and asking what kinds of smiles those are, and how many there are). Even beyond that, people in the humanities and social sciences, and psychologists like Lisa Feldman Barrett, would say you’re still never going to get to what for many people is the reality of emotion, which is a history of experience. So, in this case, it’s a history of people interacting with that robot in a laboratory over three weeks of time, and the way they feel at that first touch with the robot is not the same way they feel at that same touch three weeks later, even though they have the same smile on their face.

Neuroscience seems to find a lot of similarity across people with respect to emotions.  How can we incorporate those findings? Additionally, doesn’t a lot of human behavior towards robots mirror what was obvious already in Weizenbaum’s ELIZA experiments?

Neuroscience does seem to find a lot of similarity across people with respect to emotions, and yet people like Lisa Feldman Barrett, who consider themselves neuroscientifically trained, would argue that we don’t find strong neurological fingerprints for what an emotion category is. Now, I think again that’s a kind of unproductive way to frame this debate, and this question makes a concession to that. So I very much want to make a concession too, and say there’s obviously got to be something more satisfying in the middle here, something that addresses the fact that even though a smile doesn’t in all cases refer to happiness, or a general state of happiness, or however you want to define that, nonetheless, in many places around the world people use the smile to signal a kind of goodwill, openness, and joy. You have to recognize that, and there is a lot of strong physical and biological-anthropological evidence suggesting that people are operating unconsciously through those codes. So I very much appreciate that point of view, and we do have to find some way to talk about that middle ground better than we do now.

In terms of the ELIZA experiments, this is a good point. This experimental design, exploring what a machine elicits as a model for humans, has been going on for a long time. At the same time, I think what’s happening in Japan, with its own very different history of human-robot interactions and of stories about human-robot interactions in media and animation, manga and film, shows there’s a lot to be said for how you embody that system, how you give a robot a body. Of course, in Japan one might say that robots have always come with a particular kind of body, often very masculinized bodies or, especially today, hyper-feminized bodies. In this sense, as Jennifer Robertson has shown in her work, robot embodiment is always gendered in ways that have to be taken into consideration. I think that’s a point which is different from Weizenbaum’s experiment: these aren’t disembodied virtual conversations; they always take place in a context where a body is having a certain evocative effect on people.

It seems that while in Japan there’s a huge market potential for these cute little robots, in Euro-American contexts a lot of efforts to build social robots get discontinued. For example, Cynthia Breazeal’s robot Jibo, or SoftBank’s Pepper (it too was discontinued recently). I wonder, is that because in Western cultures we expect a lot from robots and do not project that much onto them, whereas perhaps in Japan and other Eastern cultures it’s more common to think that it’s possible to bond with an inanimate object? Is there a cultural factor in this?

There’s absolutely a cultural factor, and as a starting point it’s fair to think in terms of places like Japan versus places like the United States. At the same time, I think we want to move very quickly beyond that as the starting point for how we think about culture, and beyond those nation-state levels, as if the nation state were coterminous with the culture. You characterized very well the usual depiction of what’s happening in Japan, how people there feel about robots, in contrast with the West, where robots are often seen as the Terminator and people are supposedly “more fearful” because of different cultural histories. So I think the cultural aspect is maybe better understood as histories of storytelling, and in Japan you have histories of storytelling which are very different from those in the West. A lot of those depict robots like Astro Boy or Doraemon as helpers for people, that’s true. At the same time, you also have, I’d say just as much, people in Japan getting tired of robots, as you do in the West or other cultures. In fact, there is a phrase in Japanese which translates to English as “the three-month wall”: robot engineers and corporations are trying desperately to build robots that can hold users’ attention beyond three months, because most people seem to lose interest after three months, much as people do in the States. So that finding invites us to think about culture in a more complex way: what stories do people bring to human-robot interactions, what stories are people being told within those interactions, which actually help construct the feelings within those interactions moment-to-moment, and what new alternative futures are being imagined about possible human-robot interactions. At least all three of those dynamics need to be incorporated, and if that’s done fairly, they can move us beyond thinking about cultural difference as a difference between one nation and another.

If we can agree that our emotional reactions can be quite irrational, do we want robots that elicit an emotional reaction? Suppose LOVOT malfunctions and causes some real harm. Do we want to have to get over our emotional reactions before disposing of the robot? Is there a threshold where this emotional reaction breaks down?

This is a very interesting question. In many ways, this idea of emotions as irrational, as opposed to the rational, is part of our history of how we talk about emotions. In other ways, I understand where you’re coming from, and I understand that the question is not meant to frame it this way, but I think it might be worth flagging that for a moment. Especially in Euro-American history, we have histories of the rational being associated with science and the Enlightenment, and often with men, and of emotion being associated with the Romantic movement, and often with women. Obviously, that’s not a division that holds true very well today or across cultures. Catherine Lutz famously showed in the 1980s how these divisions don’t work very well at all in Micronesia, and how they are, in fact, gendered in different ways there. And, of course, you have Antonio Damasio’s work in “Descartes’ Error”, making precisely the point that we need to think about reason and emotion in a complex relationship. I think this question does point us to that, so I don’t want to make it seem like it’s following a simple binary, not at all; it highlights the history of that binary very well.

What I do want to say, to address the question more directly, is that I think you’re never going to have an emotion-neutral or emotionless interaction with a robot. You can have what David Hume called reason, which was simply a state of low affect, but that does not mean no affect. From that point of view, the question is no longer ‘How do we get rid of irrational emotion, and how do we get more rational before we make our decisions?’ It becomes, ‘What kinds of emotions do we want to cultivate? What kinds of emotions do we think provide the best state of mind for making smart decisions?’ A lot of my current and future work actually looks at how Buddhist perspectives from Japan and elsewhere think about how emotions might be built into robots differently than we do from North American perspectives. In Buddhist traditions, emotions are incredibly important, and you can’t have insight, knowledge, reason, rationality, or wisdom without certain kinds of emotion, such as compassion. So we can flip this question, or approach it in a different way, and ask, ‘How can we get more emotional in order to make a good decision about throwing out a robot or not?’ To provocatively adopt the perspective of some of my Buddhist interlocutors: how can we cultivate a kind of compassion that recognizes a form of reverence toward humans, and a form of reverence toward robots that may not be the same but might be good for us as humans to cultivate in ourselves? It might serve us well, both on an emotional level and on the level of reason and insight, to cultivate feelings of compassion for robots that are, in fact, productive. And then maybe we will make the best decision on whether to dispose of that robot based on those feelings, rather than on feelingless reason.

A deep dive into the topic:
Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20(1), 1-68. [link]

Bell, G. (2018). Making life: A brief history of human-robot interaction. Consumption Markets & Culture, 21(1), 22-41. [pdf]

Robertson, J. (2017). Robo sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation. Berkeley: University of California Press.

White, D., & Katsuno, H. (2021). Toward an affective sense of life: Artificial intelligence, animacy, and amusement at a robot pet memorial service in Japan. Cultural Anthropology, 36(2), 222-251. [link]