
‘Language isn’t just a bunch of arbitrary code’ Professor Max Louwerse

In his recent popular science book “Keeping Those Words in Mind: How Language Creates Meaning”, Professor Louwerse discusses how patterns of sounds and words create meaning in language.

Conversing is like dancing. Every day we take part in many little dances with one another. How do we know the moves – in conversations and on the dance floor? To stay with the dancing metaphor, picture someone who dances well. You come up to them and ask, “How are you able to dance to the music?” The person looks at you and shrugs, unable to spell out a definite answer. You keep thinking, though. Could it be that they have carefully trained all the different moves? Or do they have an instinct for dancing? Could it rather come down to neural networks – the more music they listen to, the better they dance? And what about the body? One dances with the whole body…

The four explanations – training, innateness, neural networks, and embodied cognition – are the ones that have, in fact, been proposed to explain not dancing, but the origins of language. While it remains a mystery how language evolved, Max Louwerse, Professor of Cognitive Psychology and Artificial Intelligence at Tilburg University, points out that one thing is often forgotten. “It is far too easy to take a single explanation as the only explanation,” says Louwerse. He adds that it is surprising how rarely the language system itself has been mentioned as a cause. “Language is not just a bunch of arbitrary code. Language evolved over millennia and has structures that language users take advantage of. It’s the music that causes one to dance, and I see language the same way,” says Louwerse.

The four explanations also form the basis of several chapters in Louwerse’s recent book “Keeping Those Words in Mind: How Language Creates Meaning” (2021). This popular science book is not only about language, but also about cognition. You might immediately say that language is part of cognition, and you would be right. Indeed, we recently sat down with Professor Louwerse to talk about how language and psychology are intertwined, how humans are cognitively lazy, and the degree to which machines can converse with humans.

In the book, you mention that you’ve always been fascinated by language. What is fascinating about it? In addition, which unresolved conundrums in the language sciences do you consider the most interesting?

What I am fascinated by is how we are able to get meaning out of nonsensical sounds. I think the language sciences are moving in a direction where this question becomes more important than it was a few decades ago. In particular, its importance is highlighted by more recent research on systematicities in language, and by iconicity. The comment in the margin is that the sound of a word is arbitrarily related to its meaning – the famous De Saussure statement. Yet there is more and more research arguing that the relation between the sound of a word and its meaning is not arbitrary. Everybody agrees that it is not fixed, but it is now becoming clearer that it is not arbitrary either – there is something in between. The language-is-arbitrary claim can be linked to the Sapir-Whorf hypothesis, linguistic relativity. It is clearly not the case that language alone drives thought, but it might have an effect. More and more pieces of research seem to put that together. I think research on these century-old claims, made by De Saussure and Whorf, is becoming one of the primary questions in the language sciences. This also needs to be put in the perspective of embodied cognition, where researchers have argued that only perceptual simulation can assign meaning to words. I think that explanation holds as well, but that there is more to it. And that more-to-it lies in the language system itself.

From something very broad, language, let’s turn to a narrower domain, conversations. What is conversation, and what is a good conversation?

I will make the link to conversing, but I’ll first come back to the four explanations. In the book, I argue that the four explanations assume that humans go to deep mental depths in language processing. But it turns out that we are cognitively lazy. In everything we do, we take cognitive shortcuts. That makes sense: we try to save energy. So, I’d argue that the language system evolved in such a way that we can take cognitive shortcuts by offloading resources onto the patterns in language. We easily pick up these patterns, and we can do that well because humans – more than other species – are good at seeking out patterns. Now, with regard to conversation, for me, conversations are social glue – lazy ways to make sure that we keep in social contact. You could say that’s a rather pessimistic view! If I give a lecture, for instance, I do more than create social contact – I transfer knowledge. Yet I think in the majority of cases what we are doing is just making sure that we have a social bond in the cognitively easiest way. For me, most conversations humans have are small talk; they are social glue.

What does that mean in terms of conversing? There are linguistic things (words, grammar, structures), but isn’t there also psychology (situational awareness, mental states) involved when conversing?

I think it is hard to see language as separate from psychology. Understanding psychological mechanisms is critical for the language sciences, even though theoretical linguists denied that point several decades ago. Perhaps that was a mistake and the reason why psychology – that is, psycholinguistics – has flourished while theoretical linguistics had a really hard time for a while, namely because it kept psychological evidence and language theory apart. I think you need to put the two together. At the same time, however, there is an important role for the linguist if you argue that language structures say something about the meaning of words. Linguists, and particularly computational linguists, can extract those structures, and psychologists can shed light on the cognitive mechanisms that link these structures to meaning.

What about machines and whether they can understand language? Is conversing a tough nut to crack for computers because you need more than pattern understanding?

Yeah, but you’re making an assumption that the field doesn’t necessarily make. One sees many astonishing developments in computer science, computational linguistics, and natural language processing. Yet many of these researchers are not looking at mechanisms; they say, ‘as long as you can get the maximum performance out of the algorithms, one should be happy’. And, of course, one ought to be excited about maximum performance, but it should not come at the expense of understanding the mechanisms behind it. As soon as you try to understand those mechanisms, I think you are bound to think in terms of cognitive psychology and psycholinguistics, because language creates meaning in the mind of the language user.

This makes me think: what is understanding? If a system performs at a certain level, does that mean it shows understanding? There are those who argue that these systems do not understand anything because they cannot figure out how one thing refers to another in a sentence, so they struggle with referents and pronouns – for example, what ‘it’ refers to in an utterance.

It is a valid question; it goes back to Searle’s Chinese Room argument and Harnad’s symbol merry-go-round idea – the idea that you need symbol grounding for language to be meaningful. But I can also turn the question upside down. You pointed out that computers may not understand things because they are doing nothing more than symbol crunching. My question to you is, “What is it, then, that humans do?” It may seem that humans have a deep understanding of the nature of the conversation, but I think we don’t. If I were to do a recall experiment at the end of this conversation, I bet you would fail miserably, at least on the verbatim words that I said. And I would fail a recall test as well. I think all we do in conversation – and it is not much different for reading – is try to get the gist of what is communicated; we are desperately trying to piece bits of meaning together to arrive at a good-enough representation of the language that has been communicated.

One could argue that humans are good at piecing together multiple streams of information, whereas machines may not always be good at that…

So, when I say cognitively lazy, I don’t mean it as a bad thing; it can be a good thing to take shortcuts. As one of the chapters in the book discusses, when it comes to picking out structures, humans do really well compared to other animals: we are better at pattern recognition, but considerably worse at logic. We are able to pick up patterns even if they don’t exist. Imagine a dataset in which there are absolutely zero patterns – it is entirely random. Other animals would go with the majority response and ignore any pattern, because there isn’t one. Humans, on the other hand, still desperately detect a pattern and try to extract meaning out of it.

There is an explanation for why people come across as cognitively lazy or, more precisely, why certain studies make it look as though we are terrible with, for example, probability theory – bad “Bayesians”, so to speak. A lot of those experiments are based on text. When one contrasts the evidence from text-based studies, as in Kahneman’s work, with the evidence from experiments in which participants can sample the input naturally themselves, people perform rather well.

I think it is not an either/or question. What you see in a lot of research, from Daniel Kahneman to Gerd Gigerenzer to Nick Chater, is that ‘our mental depths may not be as deep as we consider them’, to quote Nick. We have always considered humans to be the rational species, the one that can do deep mathematical thinking, while all the other species never reached that level. There is now sufficient research showing that this distinction is made too easily and that we often do not go for rational behavior; a lot of our behavior is irrational, and when it comes to language processing it is also worthwhile to think about that irrational behavior rather than assume that we make the most difficult computations.

Do you mean to point out that humans can be quite delusional in thinking that they are the most rational ones?

Absolutely. So, you asked about the gaps that exist in the language sciences. I think the gaps in the cognitive sciences relate to the difference between humans and non-human animals. For a long time, we assumed that humans are the superior cognitive species and all non-human animals are just plain stupid. It is becoming clearer and clearer that this is not quite the case. We can actually learn a lot from non-human species, but we may not be smart enough to know how smart animals are, to quote Frans de Waal.

We’re moving towards the end of the interview, could you share what books you are currently reading?

Over the summer, I have finally had more time for reading novels. I have been reading Joël Dicker’s novels, which I absolutely love. And there are several non-fiction books that I would like to read.

Can you recommend one of those books in addition to your book?

I would like to recommend some of the work we discussed: Nick Chater’s “The Mind Is Flat” and Frans de Waal’s “Are We Smart Enough to Know How Smart Animals Are?”. There is also a book coming out next year by Nick Chater and Morten Christiansen that I am looking forward to. More and more popular science books are coming out, and one nice thing about popular science – it’s the reason I wrote this book – is that you can reach a wider audience and make more overarching claims than you ever can in a journal article, and that makes it exciting.

When you were writing your book, whom did you imagine as your potential reader?

I would be happiest if the average person on the street picked up this book, read through it and thought, “Gosh, I’ve never considered that”. Or if an undergraduate student read it – one of the reviewers mentioned it would be a great textbook for an introduction to cognitive science or psycholinguistics – and thought, “Hmmm, I never thought about that, let me look into that a bit more”. That is the audience I wrote it for. Of course, I don’t mind if fellow academics read the book too.

We wished Professor Louwerse that those who read the book would indeed be inspired – to which he said with a smile, “I hope so too; if not, I need to write another book”.

“Keeping Those Words in Mind: How Language Creates Meaning” is published by Prometheus Books.


Note
1. Training refers to the idea that by receiving feedback one updates their behaviors.
2. Innateness or instinct refers to the idea that one is born with the ability to do something.
3. Neural networks refer to the idea that cognitive processes emerge from the activity of simple interconnected units that operate as a network.
4. Embodied cognition refers to the idea that one’s body and what it affords shape one’s cognition.

Credits
Text: Julija Vaitonytė and Max Louwerse
Edits: Judita Rudokaitė

Image courtesy: Max Louwerse and Prometheus Books