Human-robot interaction

Maryam Alimardani: Associate Professor in the domains of human-robot interaction and brain-computer interfaces at Tilburg University

Dr. Maryam Alimardani with the Pepper robot

“Future interaction between machines and humans requires a high level of awareness on the user’s side and automatic communication of the user’s commands and mental states to the machines. My research bridges the fields of brain-computer interfaces, robotics, and cognitive science by developing adaptive BCI systems that enhance human-machine interaction and extend human cognitive capacities. In the past, I focused on the sense of embodiment that operators experienced during BCI operation of a humanlike robot, and I introduced a new neurofeedback training paradigm that could improve their learning of a motor imagery task. Currently, I am working on the development of BCI-controlled robots/avatars that monitor users’ brain activity in real time and perform a user-specific therapeutic intervention.”

This talk is organized in collaboration with LT Big Brother.

LT Big Brother is the first voluntary professional mentoring program for young Lithuanian professionals worldwide. The aim of the project is to facilitate the transfer of professional knowledge and experience from established Lithuanian professionals (Big Brothers/Sisters) to ambitious Lithuanian students and early-career professionals (Small Brothers/Sisters). The LT Big Brother project was first launched in 2009 in London by the Lithuanian City of London Club (LCLC).

Below is a transcript of the Q&A session.

Was haptic feedback part of the embodiment experiment?

No, in that experiment there was no sensory information from the robotic body. Participants received only visual feedback: no tactile input, no proprioception. With just that one channel of information, combined with the thought of movement, we could show that people experienced a sensation of embodiment. That was novel, because all the work before was about the integration of multiple sources of sensory information. For example, in the rubber hand illusion what you get is the integration of tactile input together with vision.

I was wondering about the study on changing views when it comes to nature and sustainability. How were participants informed about the study without forming a bias beforehand?

That’s a very good question. We are going to inform them that the purpose of the study is to measure objective responses of the brain toward climate change and toward different types of products, and we will also collect subjective reports. I don’t think we can bias brain activity in that respect, and by including multiple distractors in the experiment we aim to remove the effect of bias in the neurophysiological responses. In most neuromarketing research, it is true that subjective bias is a factor that limits the interpretation of the results: people can be embarrassed by others observing how they respond, so they might change what they would actually do. If you ask them whether their shopping behavior is this way or that way, they might report differently. Very interestingly, in the experiment we are designing, we would like to ask people to provide receipts of their shopping baskets, one week before and one week after the experiment. We don’t know how many people are willing to provide that, but we would like to go beyond the subjects’ self-reports.

But were they aware of the possible changes in mindset?

So, neurofeedback training is going to be the last stage in our project. The first stage is to confirm that frontal asymmetry, the indicator I just talked about, which has been used in the past in affective BCIs for emotion processing, is also related to the processing of climate change images. The second stage of the project is to see whether we can couple those images with product images and extend those emotions toward the products. And the last step is to see whether, by repeating this in the long term in a longitudinal study, we can actually train the brain to seek reward toward positively loaded products versus negatively loaded ones.
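For readers unfamiliar with the indicator: the talk does not spell out the computation, but frontal alpha asymmetry is conventionally defined as the difference in log alpha-band power between homologous frontal electrodes, typically F4 minus F3. Below is a minimal sketch in Python, with synthetic signals standing in for real EEG; the function names and the toy data are illustrative, not from the study.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Alpha-band (8-13 Hz) power of one EEG channel via Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return trapezoid(psd[mask], freqs[mask])

def frontal_alpha_asymmetry(f3, f4, fs):
    """FAA = ln(alpha power at F4) - ln(alpha power at F3).

    Alpha power is inversely related to cortical activity, so a positive
    score is conventionally read as relatively greater left-frontal
    activation, associated in the literature with approach/positive affect.
    """
    return np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))

# Toy demonstration on synthetic 10 s signals sampled at 256 Hz.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
f3 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)        # strong alpha
f4 = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # weaker alpha
print(f"FAA: {frontal_alpha_asymmetry(f3, f4, fs):.3f}")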

And wouldn’t asking for receipts change their behavior in line with what is expected of them?

Yes, we thought of that; we thought that they could be biased by this. It is particularly related to one of the theories I discussed regarding climate change and environmental behavior, called cognitive dissonance: you believe there is climate change, that it is impacting the planet, and that humans are responsible for it, but on the other hand you don’t really change your behavior to bring about any change. We think that for these types of people the behavioral change might be a one-time event; they would not really sustain it in the long term. So we have to think of alternatives, like whether we can have access to multiple purchase receipts, or whether we can, for example, ask supermarkets to provide receipts associated with a certain type of product. We might go with other alternatives that compensate for this limitation. We acknowledge that there will always be human bias here. But for us, the most important thing is whether we can actually train the brain to process these products differently.

What does BCI technology mean for the future of how we monitor or assess cognitive tasks? Does it mean that behavioral measures are becoming obsolete? Or is it enough to have biosignals only, and how do we combine the two?

Very good question. I am not a behaviorist; I always liked the brain better. You can train it, you can trick it, but it does not lie to you; it tells you the truth, though people say the same about behavior. For me, behavior and subjective reports are always secondary measures in my experiments, used to validate the indicators that I find within the EEG for my systems. I never disregard them. I think behavior, even more than subjective reports, can be a great support for validating the neuromarkers found in neuroscientific studies, and later for monitoring the classifiers. A classifier might predict something, but we can only verify whether that prediction is correct based on the performance that the subject shows. So behavior always remains part of my experiments.
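As a concrete illustration of that validation step (not from the talk; the labels and trial records below are hypothetical), one can tabulate per-trial classifier predictions against the behavior the subject actually showed:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# Hypothetical per-trial records: what the BCI classifier decoded as the
# user's intention vs. the behavior the subject actually performed.
decoded   = ["left", "left", "right", "right", "left", "right", "left", "right"]
performed = ["left", "right", "right", "right", "left", "right", "left", "left"]

print("agreement:", accuracy_score(performed, decoded))
print("kappa:    ", cohen_kappa_score(performed, decoded))
print(confusion_matrix(performed, decoded, labels=["left", "right"]))
```

Cohen’s kappa is often preferred over raw agreement here because it corrects for the chance level of a two-class BCI.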

What characteristics define a female robot?

A female robot is modelled after a female human. The robots that you saw were replicas of real humans, and that’s why they are called geminoids, from geminus, Latin for twin. So the idea is that they are modelled after real people. We did know the identity of those people; in fact, for the female robot that you saw in the pictures, the human model participated in our experiments. Often we collected videos of the real human and the robot replica and then showed them to participants, for example in an MRI machine, to see how people respond to the uncanny valley. The uncanny valley is the response that people show to human-like but not yet fully human agents. We say it’s female because it was modelled after a female human.

Where do you see the biggest BCI potential in the future?

I can talk about where I wish it to be most useful. I think the giant companies have become really big players in this field, similar to VR. VR was a research domain for such a long time, but then commercial companies came into the picture and invested in it, and suddenly you had hardware and software that were so easy to use, and you saw research around VR blossom. I think BCI is going to be like that in a decade or so. Where I think BCI is going to be mostly employed is in pedagogy. With regard to learning, BCIs can really help you tune your attention and increase engagement with technology, just by having an assistive system that passively helps you concentrate and get involved in the task. I think that is one main area where we will see more progress over the next few years. And I say that because with medical BCIs, and particularly motor imagery BCIs, where patients who have lost limbs have to think about movement to move their prosthetics, we have already done a lot; we see a lot of research in terms of development and also human factors, whereas passive BCIs are a newer topic that is only now receiving more attention.
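To make the motor imagery example concrete: a classical pipeline for such BCIs spatially filters the EEG with common spatial patterns (CSP) and classifies the resulting log-variance features with a linear discriminant. The sketch below is a generic illustration on synthetic data, assuming the MNE and scikit-learn libraries; it is not the specific system described in the talk.

```python
import numpy as np
from mne.decoding import CSP  # common spatial patterns for EEG epochs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for epoched EEG: 80 trials x 8 channels x 256 samples.
rng = np.random.default_rng(42)
X = rng.standard_normal((80, 8, 256))
y = np.repeat([0, 1], 40)   # 0 = imagined left hand, 1 = imagined right hand
X[y == 1, 0] *= 2.0         # inject a class-dependent variance difference

# CSP learns spatial filters whose log-variance features separate the classes;
# an LDA then classifies each trial from those features.
clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),
    ("lda", LinearDiscriminantAnalysis()),
])
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

On real recordings, X would come from band-pass filtered (e.g., 8-30 Hz) epochs time-locked to the imagery cue rather than from a random generator.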

A deep dive into the topic:
Alimardani, M., & Hiraki, K. (2020). Passive Brain-Computer Interfaces for Enhanced Human-Robot Interaction. Frontiers in Robotics and AI, 7. [link]