Georgios Yannakakis: Professor and Director of the Institute of Digital Games at the University of Malta
Georgios N. Yannakakis is a Professor and Director of the Institute of Digital Games, University of Malta (UM), and the co-founder of modl.ai. He received his Ph.D. degree in Informatics from the University of Edinburgh in 2006. His research sits at the crossroads of artificial intelligence, computational creativity, affective computing, advanced game technology, and human-computer interaction. He explores research concepts such as user experience modeling and procedural content generation for the design of personalized interactive systems for entertainment, education, training, and health. He has published over 300 journal and conference papers in the aforementioned fields (Google Scholar Profile Page). His research has been supported by numerous national and European grants (including a Marie Skłodowska-Curie Fellowship) and has appeared in Science Magazine and New Scientist, among other venues. He has served on a number of journal editorial boards and is currently the Editor-in-Chief of the IEEE Transactions on Games and an Associate Editor of the IEEE Transactions on Evolutionary Computation. He is the co-author of the Artificial Intelligence and Games textbook and the co-organiser of the Artificial Intelligence and Games summer school series.
Below is a transcript of the Q&A session.
You mentioned that AI can generate levels or the content of a game, but I’m also curious: can AI generate the concept of a game, which developers could then build?
That’s a very, very good question. Concepts can have many forms, because different designers get inspired by different types of concepts. You might have a visual concept, some sort of sketch of gameplay, or some novel music or sound that you can be inspired by and build a game around. So yes, AI can create abstract representations of any type, and there’s a whole community dedicated to that. There’s a large community working on computational creativity, investigating ways that computers can inspire humans when they create. There’s a lot of concept-blending work there, where we investigate how AI can blend really weird concepts, in the way that Stable Diffusion or Midjourney do today with image generation. So yeah, there are many possibilities in this space when it comes to concept creation.
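As a rough illustration of the concept blending described above, here is a minimal sketch using the open-source diffusers library to fuse two unrelated game concepts into a single piece of inspiration art. The model ID, prompts, and file name are illustrative assumptions, not anything discussed in the talk.

```python
from diffusers import StableDiffusionPipeline
import torch

# Load a publicly available text-to-image model (model ID is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Blend two unrelated game concepts into one visual seed for a designer.
concept_a = "a gothic cathedral"
concept_b = "a coral reef"
prompt = f"concept art for a video game level: {concept_a} blended with {concept_b}"

image = pipe(prompt, num_inference_steps=30).images[0]
image.save("blended_concept.png")  # hand this to a human designer as inspiration
```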
I was wondering, do you think it’s feasible to have some sort of AI provide feedback to teams who are playing a game? I mean, on a simpler level, there are chess engines that can predict the best next moves for a chess player; maybe something similar could be done for teams when they want to improve themselves, train for a tournament, or whatever it is?
Yeah, I didn’t talk a lot about teams, and there is a good reason for that: it’s a much harder problem. When you bring teams into the game, like two opposing teams, you complexify the problem for artificial intelligence. It was just a few days ago that Meta’s AI managed to come up with a decent algorithm that matches a human expert in the game Diplomacy. I don’t know if you know Diplomacy, have you played it? It’s a very old board game, from just after the Second World War, more or less, where people would play over mail; they would actually exchange moves through the post. That game relies a lot on negotiation between humans. They sit together, they negotiate, they can negotiate for hours, and then they come back and play. And the nature of the game is both competitive and collaborative: you have your own goals, you’re competing, you want to win, but you need to collaborate with others to do that. That problem is very difficult for humans to start with. For AI, it’s even more difficult, because of the verbal plus non-verbal interactions that we have between us when it comes to negotiation. So Meta’s AI made a fantastic step forward using a very interesting multi-agent reinforcement learning algorithm to play this game, but not the full Diplomacy game: what is called No-press Diplomacy, which does not allow for negotiation among players. So all the negotiation part of the game, which is the coolest part, has gone away so that AI can play. In that very simplified version of the game, yes, we have some decent results these days. It’s like Risk, I don’t know if you play Risk; No-press Diplomacy is similar to Risk. So games like StarCraft, Diplomacy and so on, that involve many players, are much harder for AI to play. And that’s why we haven’t seen many advancements, and I didn’t talk about them too much.
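To make the difficulty concrete, here is a toy sketch of independent multi-agent learning (not Meta’s algorithm): two Q-learners adapt simultaneously in a simple game with mixed competitive and collaborative pressure, so each agent’s environment keeps shifting as the other learns. All payoffs and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2x2 game in the spirit of "compete, but you must also collaborate".
# Actions: 0 = cooperate, 1 = defect. Keys: (a1, a2) -> (reward1, reward2).
payoff = {
    (0, 0): (3, 3), (0, 1): (0, 4),
    (1, 0): (4, 0), (1, 1): (1, 1),
}

q1 = np.zeros(2)  # agent 1's value estimate per action
q2 = np.zeros(2)  # agent 2's value estimate per action
alpha, eps = 0.1, 0.1

for step in range(10_000):
    a1 = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q1))
    a2 = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q2))
    r1, r2 = payoff[(a1, a2)]
    # Each agent learns as if the world were stationary, but the "world"
    # includes the other learning agent -- this is the core difficulty.
    q1[a1] += alpha * (r1 - q1[a1])
    q2[a2] += alpha * (r2 - q2[a2])

print(q1, q2)  # both typically drift toward mutual defection here
```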
So perhaps personalizing games is also very hard, then, when you have a multi-agent game?
Oh, yes, everything that has to do with multiple entities is hard. Every time you go beyond one human, it becomes hard. I mean, when you’re alone, it’s so much easier, and so much more boring. What makes life exciting is when you bring more humans to the same table: they discuss possibilities, group and team formation, common decision-making. All of these problems are hard, but they are the very reason why we have advanced so much as a human race, because we are social and we need to rely on others. So yeah, I think this is what makes AI applications in these domains with multiple agents or multiple humans quite tough. This is the next big challenge for artificial intelligence, among many.
I’ve got a special interest in what you’re doing, because I game a lot, especially first-person shooters, all the top names from Tom Clancy and Ubisoft to Electronic Arts. What I come across very frequently when I play with my friends in multiplayer is that there are a lot of cheaters, right? Guys who use some algorithms. And it is so obvious to us human players that not just us but everybody starts typing in the game chat or the global chat that “there’s a cheater, kick him out”. There are also anti-cheat systems that come by default when you install these games. So, I want to understand whether it is possible for your AI algorithms to immediately identify such things, because it’s so obvious from the kill-death ratios that everybody identifies it except the anti-cheat.
That’s a very important question. There are many, many answers to that. I can start with the very fact that, yes, indeed, I agree there are anti-cheat detection systems out there; modl.ai offers one as part of its services, for instance. Some of them are quite good. The question is: how do I notify humans that, oh, I have just detected cheating behaviour here? You need to be a bit careful with labelling people if you’re an algorithm, right? As an algorithm, obviously you can label people, but there are a lot of ethical complications in doing so. So, one of the reasons why you maybe don’t see automatic banning of these players is that the companies themselves don’t want to do that, because the algorithm often fails. And then again, it’s not really right to automatically label or ban people. So, one of the better ideas is to have a system that works in the background, in the back-end, that detects toxic or cheating behaviour and then reports back to humans. And then humans have to decide whether that was toxic behaviour or not. And obviously, as a player, you’re providing all these labels: you keep saying, you know, this guy is a toxic player, this guy is cheating, and so on. And all of that is considered by algorithms. Some algorithms are better than others, so maybe you play games where the cheat detection system is not good enough. But my team, for instance, has been working with Ubisoft on a toxicity detection system for For Honor. For Honor is one of these very toxic games where you have multiple players talking trash to each other, where there’s a lot of bullying going on and a lot of toxicity. You can use AI algorithms to detect that, and then you can inform Ubisoft about different types of things.
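The “detect in the back-end, report to humans” pattern described here can be sketched roughly as follows. The class and function names are hypothetical, not modl.ai’s or Ubisoft’s actual API, and the thresholds are arbitrary stand-ins.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Flag:
    player_id: str
    score: float   # model's confidence that the behaviour is toxic/cheating
    evidence: str  # e.g. a chat excerpt or stats snapshot, for the reviewer

@dataclass
class ReviewQueue:
    """The model never bans anyone on its own; it only surfaces flagged
    behaviour for a human decision, as described in the answer above."""
    threshold: float = 0.9
    pending: List[Flag] = field(default_factory=list)

    def report(self, player_id: str, score: float, evidence: str) -> None:
        if score >= self.threshold:  # hedge against false positives
            self.pending.append(Flag(player_id, score, evidence))

    def resolve(self, decide: Callable[[Flag], bool]) -> List[str]:
        """`decide` stands in for the human moderator; returns the ids
        the human confirmed for action."""
        confirmed = [f.player_id for f in self.pending if decide(f)]
        self.pending.clear()
        return confirmed

# Usage: the score would come from any behaviour classifier (hypothetical).
queue = ReviewQueue(threshold=0.9)
queue.report("player42", score=0.97, evidence="kill-death ratio outlier")
banned = queue.resolve(lambda flag: flag.score > 0.95)  # human sign-off here
```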
Yeah, sorry. My question was not about toxic chat, but about cheating. And I don’t understand why companies don’t ban it, because when you actually play, there are so many people who just exit the server, because it’s no fun playing any more.
Yes, I know. I know, and yeah, there are many problems with human behaviour. So, we talked about cheat detection already, and I just moved on to toxic behaviour. I think the two fall under the same umbrella of being an evil person: not being fair, toxicity, being a bully, I see them as the same cluster of human behaviour. That’s why I jumped to toxicity, because I think they’re related phenomena; they usually come from the same people, similar personalities. I think, yes, we have algorithms, and some of them work better than others. We have community management systems that will consider your labels, and they will consider what the algorithms say about a particular behaviour: this is a cheater, this is toxic behaviour, and so on. And then decisions are made about what to do with these players. Now, different companies have different policies on how to deal with this, and there are better and worse algorithms. So, I don’t know. I mean, we will get better. AI algorithms can detect those behaviours quite easily, believe me. It’s just a question of what humans do with this information, right?
And also, AI systems cheat, because sometimes they have much more information than human players, right?
Yeah, unfair AI systems do that, cheating systems. Sure, and unfortunately it has been very common within games: when you play against advanced AI systems, they usually have more information than you have, especially back in the old days of early game development, in the 80s and the 90s.
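A minimal sketch of this point: the difference between a fair agent and a cheating one often amounts to which part of the game state the agent is allowed to read. The data structures here are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GameState:
    visible_enemies: List[int]  # threat levels a human player could see
    hidden_enemies: List[int]   # behind fog of war or walls

def fair_policy(state: GameState):
    # A fair agent acts only on information a human player would have.
    return max(state.visible_enemies, default=None)

def cheating_policy(state: GameState):
    # The classic old-school "hard mode": read the full hidden game state.
    return max(state.visible_enemies + state.hidden_enemies, default=None)

state = GameState(visible_enemies=[2], hidden_enemies=[9])
print(fair_policy(state), cheating_policy(state))  # 2 vs 9
```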
I want to come back to where you talked about the model of the learner and the teacher, right? I was just wondering, could that be an approach for other domains too? Perhaps where you have more modalities for training, but then, let’s say, you have limited information later on. I guess that’s what happens in real life: if you collect images and train your model, say, in the medical domain, the decision typically depends on multiple factors, right? You don’t just need the images to diagnose something, but also the previous history of the patient and many more factors. So, this transfer you talked about: is it game-specific, or do you think it could be a useful approach for other domains?
I think it’s a general method; it’s just that we were, to the best of our knowledge, the first ones to apply it to affect in the lab. When you leave the lab, can those affect models retain their predictive capacity? This is what we tested. They do, to a very good degree, and that’s very promising, because then you can apply a similar method to other subjective notions like comfort. Imagine you’re in your office and you have several sensors that measure your comfort levels, but when you walk around, you only have your mobile phone; you don’t have those sensors available. Still, the model should be able to retain its predictive capacity because it has learned a lot from you in your office space. I think this idea of privileged information is quite powerful. We have had discussions with game developers and HCI researchers about how this is a very common problem: you have a lab, and all of a sudden you don’t have access to all this expensive equipment, so what do you do? I think privileged information is a very powerful idea. You should check the paper for the details, but I think it’s general enough as a concept.
And this works because there are statistical similarities between modalities, or why are you able to do that?
Yeah, exactly: you basically transfer the information of the missing modalities indirectly, let me put it this way, through a loss function. So, they’re indirectly embedded in the neural network’s representation while you’re training it. So, when you don’t have them available, the model still performs well. This is the best I can do now without getting into the maths, right? But there’s a loss function; it’s like a form of transfer learning, but I think it’s a more powerful form of transfer learning.
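A minimal sketch of this idea in PyTorch, assuming a teacher-student setup in the spirit of learning using privileged information: the teacher is trained with all the lab modalities, while the student sees only the cheap modality and absorbs the rest through an extra loss term. Network sizes, feature shapes, and the loss weighting are assumptions for illustration; see the paper mentioned above for the actual method.

```python
import torch
import torch.nn as nn

# Teacher: assumed pre-trained in the lab on ALL modalities (e.g. gameplay
# features + physiological sensors). Random weights here, just to run.
teacher = nn.Sequential(nn.Linear(48, 32), nn.ReLU(), nn.Linear(32, 1))
# Student: deployed "outside the lab" with only the cheap modality.
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def train_step(gameplay, sensors, label, lam=0.5):
    """gameplay: (B,16) cheap modality; sensors: (B,32) privileged modality."""
    with torch.no_grad():
        teacher_out = teacher(torch.cat([gameplay, sensors], dim=1))
    student_out = student(gameplay)
    # Supervised loss on the affect label, plus imitation of the teacher.
    # The privileged modality enters ONLY through this second loss term,
    # so at test time the student needs the gameplay features alone.
    loss = bce(student_out, label) + lam * mse(student_out, teacher_out)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# One step on random stand-in data (shapes are assumptions, not the paper's).
loss = train_step(torch.randn(8, 16), torch.randn(8, 32), torch.rand(8, 1))
```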
Since we started talking about toxic and violent behaviours: I found something very interesting in your book, and I will quote what you wrote: “Games are frequently used to train and test AI algorithms. This is the main aim, for example, of the General Video Game AI competition. However, given how many games are focused on violent competition, does this mean that we focus on the development of violence in artificial intelligence?” I find it very interesting. So, what are your thoughts on that?
Yeah, it’s a very good question again, and a hard one, because it comes down to the uses of technology and technological advancements. There are always evil uses and good uses, and we can start with the evil ones. Last year, in the summer school that we organized with Julian, we had a person from the military coming into our sessions and just, you know, brutally and bluntly admitting: “all this technology, yeah, we know about this. We have flight simulators. We have AI that navigates a plane better than any human these days”, which is scary if you think about it, right? Then you have good uses of all of that, of privileged information, for instance, for education. You can train your model in a game lab where you collect data about students, and then, when you don’t have those sensors in a classroom, your models can still predict the frustration of students and try to assist them in a systematic way. So, I can come up with endless evil examples of game artificial intelligence versus good examples, game AI for good. It boils down to the ethical considerations and the constraints of each one of us, because any technology out there has been abused already; AI and games is not an exception, I’m afraid.
A deep dive into the topic:
Yannakakis, G. N., & Togelius, J. (2018). Artificial Intelligence and Games. Springer. [pdf]