
Marlies van Eck: Algorithms are a magic show: super cool, but we just don’t get it

18 May 2018
Artificial intelligence | Decent Digitisation | Blog

Many people believe passionately in algorithms. It would be better for them to admit that they don’t understand them at all and take a closer, more critical look at how they operate.

By Marlies van Eck, writing in a private capacity. Her doctoral research at Tilburg University concerned automated chain decisions and legal protection. She works for the Dutch Ministry of Finance.

Reading time: 2-3 minutes | Be sure to read the other articles in the Decent Digitisation series. 

Comedian Jochem Myjer’s hilarious ‘School Excursion’ routine in his Yeee-Haa! cabaret show introduces us to country bumpkin Harm. Harm is confused. Life to him is like a magic show; it’s wonderful, but he doesn’t understand it one bit. His standard expression is ‘I just don’t get it’.

In his 1976 book Computer Power and Human Reason: From Judgment to Calculation, Joseph Weizenbaum wrote that human beings approach artificial intelligence with too much respect and conviction, even though we don’t understand it. He reached that conclusion after seeing how people responded to his ELIZA experiment, conducted between 1964 and 1966. Weizenbaum coded what we would today call a chatbot: a computer program that allowed people to communicate with a computer. The program, which he called ELIZA, parodied a ‘real’ psychotherapist by repeating the user’s statements back or asking follow-up questions. A sample conversation:

Men are all alike.

IN WHAT WAY

They’re always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE

He says I’m depressed much of the time.

I AM SORRY TO HEAR YOU ARE DEPRESSED
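ELIZA’s trick — matching a keyword pattern and echoing the user’s own words back with the pronouns flipped — can be sketched in a few lines. This is a minimal illustration of the technique, not Weizenbaum’s original script; the rules and reflection table here are invented for the example.

```python
import re

# Pronoun "reflection" table: turns the user's words back on them.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "i'm": "you're"}

# Ordered keyword rules; the catch-all at the end keeps the conversation going.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"(.*)"), "Please go on."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    """Return the first matching rule's template, filled with reflected text."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I am depressed much of the time"))
# → Why do you say you are depressed much of the time?
```

The program understands nothing: it never parses meaning, only surface patterns — which was exactly Weizenbaum’s point.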
It was patently clear to Weizenbaum that only a human being could help another human being resolve emotional issues. To his astonishment, however, the world beyond his lab embraced ELIZA with excitement. Various universities copied the program and it became famous across the United States. Weizenbaum was especially shocked at the enthusiasm of psychiatrists themselves, who felt that the program could be turned into an almost fully automated form of psychotherapy.

Illustration: Max Kisman

Weizenbaum observed something else: people who talked to ELIZA became addicted to the conversations they were having with it and anthropomorphised the system. Even his secretary, who had seen him write the code, asked him to leave the room so that she could talk to ELIZA privately. While Weizenbaum’s whole point was to prove that computers cannot truly understand our language, people thought that ELIZA demonstrated precisely the opposite.

This led Weizenbaum to observe that humans, regardless of their level of education, tend to ascribe exaggerated traits to technologies that they do not understand. This worried him, because he felt that some things were too important to entrust to computers.

It isn’t a civil servant, but a computer that decides

Since Weizenbaum’s day, government has come to use artificial intelligence (AI) to take decisions on its behalf. It is no longer a civil servant who decides whether a person is entitled to a benefit or how large a traffic penalty should be, but a computer. I wanted to study how the law is translated into computer instructions. But even when I was allowed to inspect certain documents, I ended up feeling just like Harm: I just didn’t get it.


I had to conclude that in terms of automated decision-making, it isn’t at all clear how government has interpreted the law. I was not able to study whether its interpretation is correct and which choices it had made. That means that the law offers people less protection than before. Neither the public nor the courts know why a computer reaches its decision.

How strong are control mechanisms today?

In essence, a judicial review consists of a conversation about the reasoning that government has applied. We are familiar with this mechanism in analogue society: when government takes action (grants a subsidy, frisks a person or issues a permit), that action is subject to numerous control mechanisms. People can object, they can submit a complaint, and they can take their complaint to a higher authority such as the national or municipal ombudsman or the court. The executive body must also account for the way in which it performs its tasks to the people’s democratically elected representatives, such as the municipal council or the House of Representatives.

But how strong are these control mechanisms when government uses algorithms? Over the next few years, this question will be an extremely important one in the relationship between government and the public, but also in the relationship between executive and judiciary powers. Perhaps we will need to seek new mechanisms, or perhaps algorithms will be required to explain themselves. Who knows?


We can start by disabusing ourselves of our belief in a smart algorithm that only does good things. Because what if it isn’t so smart after all? Or if the black box turns out to be completely empty? To quote Weizenbaum, let’s stop the ‘sloppy thinking’ and look critically at the things that we don’t understand.

Van Eck also spoke at the Living with Algorithms event in Utrecht on 25 May.
