
Citizen science for AI

May 27 of this year marked the start of the AI Forward Forum initiative, which we hope will grow into a citizen science project. Over its first months, we had numerous exchanges about the initiative, from friends and colleagues to people in academia and non-profit organizations. Some of these conversations raised questions. One particularly important one was: “Isn’t it for the tech companies to worry about creating systems that can flexibly operate beyond narrow domains?” We think it is certainly not only for the tech companies, nor solely for researchers… it is for everyone. The more people across the board get involved in actively thinking about how to solve the problems that plague current AI, the better. Technological innovation needs voices of all kinds, and, importantly, diversity is likely to catalyze ingenuity and action. Below are highlights from some of these questions, along with our answers.

What is the AI Forward Forum initiative essentially about?

This initiative is about attracting a diverse range of people into the endeavour of creating smart systems. We can call it ‘citizen science’ of sorts, because we invite both professionals and enthusiasts from different areas: building these systems takes more than computer scientists, ML engineers or even neuroscientists. At the same time, we aim to raise awareness that developing artificial intelligence needs concerted effort from a wide range of domains, cutting across the sciences, humanities and arts. If AI is to become intelligent beyond narrow domains, which is currently a huge challenge, then everyone’s involvement is vital. The citizen science idea has social implications too: do we want smart systems to be developed by a small group of people, or do we want their development to be a societally driven process?

What are the limitations of current AI systems?

Current AI systems are intelligent only in very narrow respects: each can do one specific thing. A system that is good at playing chess cannot also process natural language, and so on. Humans, and animals too, are far more versatile: one person can play chess and other board games, write books, make music and more. On a deeper level, current AI is not quite intelligent because it lacks a general understanding of the world, and this lack hinders its applicability to many real-world scenarios, such as healthcare and education (healthcare assistants, care robots, robot tutors).

Why should society get involved in solving the “AI is not quite intelligent” problem? Isn’t that what the tech companies should do anyway?

Many of us already live in a world permeated by devices, mobile applications, algorithms and so on; many could hardly imagine getting by without the internet. These technologies have already started to shape societies: how people communicate, work and consume, to mention just a few examples. It is also likely that in the future we will live in a kind of symbiosis, or co-existence, with AI systems, which raises a question: do we want to adapt ourselves to these systems, with potentially unfavourable outcomes (e.g. the flattening of our cultural expressions), or do we want to design them so that the relationship is more balanced? If the latter is preferred, then the development of these systems should reflect a variety of domains and values, which is not necessarily the case with big tech today. On top of that, the arena is dominated by Euro-American tech companies (and researchers, for that matter) and Chinese companies, which does not reflect the variety of cultures in the world.

Are you saying that diversity is required when designing technology?

Essentially yes, but diversity at multiple levels: not just gender or culture, but also different domains, not only computer science but also psychology, anthropology, biology and so on.

Could you give a concrete example where technology built without a broader perspective can inadvertently have a negative impact?

Let’s take emotion. Many will have heard of emotion recognition systems that supposedly can read emotions. What do they read off our faces? To answer this, we need to ask what “understanding” emotion entails for a computer. It entails building a computational model that produces a quantitative output. This means we simplify emotion and box facial movements into distinct categories, typically only a handful, such as happiness, sadness, anger and so on. The problem with this approach, as various experiments have shown, is that emotion is not universal across cultures, nor do the same facial movements fit neatly into one category. In real life, the same facial expression can mean different things depending on the situation: we may smile when we are happy or satisfied, but also when we are embarrassed.
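To make that simplification concrete, here is a minimal, purely illustrative sketch in Python. None of the class names or numbers come from a real system; they only show the shape of a typical pipeline, which ends in a forced choice among a fixed set of labels.

    # Hypothetical sketch of an emotion-recognition pipeline's final step.
    # All names and numbers are illustrative, not from any real system.

    EMOTION_CLASSES = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

    def classify_expression(scores):
        """Pick the single highest-scoring class (a forced choice).

        `scores` is a hypothetical per-class confidence vector, e.g. the
        output of a face-analysis model. The argmax step is where nuance
        is lost: context and culture play no role in the decision.
        """
        best = max(range(len(scores)), key=lambda i: scores[i])
        return EMOTION_CLASSES[best]

    # An ambiguous smile: nearly equal evidence for several classes,
    # yet the system must emit exactly one label.
    ambiguous_smile = [0.31, 0.02, 0.03, 0.28, 0.30, 0.06]
    print(classify_expression(ambiguous_smile))  # -> "happiness"

The point of the sketch is the final argmax: whatever the situation, the output must be one of the predefined labels, so an embarrassed smile and a happy smile collapse into the same category.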

Why is art important for science and technology?

Art helps us imagine. Art can also be prescient of science. For example, long before it was scientifically shown that planets can orbit two suns, the image of a planet beneath twin suns appeared in a Star Wars movie. Sci-fi movies have already given us visions of intelligent robots, but it is time artists helped move the vision of smart machines, at least as smart as any animal, closer to reality.

What are you striving for? 

We aim to spur a creative mixing of ideas and interaction between different areas. We hope this will help uncover parallels and analogies that would otherwise be difficult to come across.