
Medialab SETUP: We need civil weapons to protect ourselves against uncivil algorithms

01 May 2018

What decisions do we want to let algorithms take for us? To launch a public debate on this question, says medialab SETUP, we need tools that make decision-making by algorithms transparent and comprehensible.

By Siri Beerends, Cultural Sociologist at medialab SETUP

Reading time: 2-3 minutes | Be sure to read the other articles in the Decent Digitisation series.

From tracking potential terrorists to hiring new employees, authorities and businesses are letting algorithms take more and more of their decisions. But what are the moral assumptions and mathematical simplifications lurking beneath these algorithms? And what decisions do we want to let algorithms take for us? To launch a public debate on these questions, we need tools that make decision-making by algorithms transparent and comprehensible.

Weapons of Math Destruction

One of the biggest misconceptions about algorithms is that they are neutral and that their decisions are therefore fair. Mathematician Cathy O’Neil shatters this illusion in her bestseller Weapons of Math Destruction. O’Neil claims that algorithms become ‘weapons of math destruction’ when:

  • we don’t know what moral assumptions underlie the scores that they produce and the decisions that they take, making it impossible to contest an algorithmic decision;
  • algorithms encode human prejudices into software systems and disseminate them widely; and
  • the decisions that algorithms take are destructive to society.

One example of a ‘weapon of math destruction’ is an algorithm that companies use to select the best CEO. Because such an algorithm is trained on historical appointments in which women are under-represented, it learns to regard ‘female’ as a predictive factor for being an unsuitable CEO candidate.
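To make that mechanism concrete, here is a minimal sketch (synthetic data and hypothetical features, not any real company's system) of how a model trained on biased historical appointments ends up treating gender itself as a negative predictor:

```python
# Illustrative sketch only (synthetic data, hypothetical features): a classifier
# trained on historical CEO appointments in which women are under-represented
# learns to treat gender itself as a negative predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 1000

female = rng.integers(0, 2, size=n)          # 1 = female, 0 = male
experience = rng.normal(15, 5, size=n)       # years of relevant experience

# Biased historical outcomes: equally experienced women were appointed less often.
appointed = (experience + rng.normal(0, 3, size=n) - 6 * female) > 14

X = np.column_stack([female, experience])
model = LogisticRegression().fit(X, appointed)

# The learned coefficient for 'female' comes out strongly negative: the bias in
# the historical outcomes has been encoded into the model's decision rule.
print(dict(zip(["female", "experience"], model.coef_[0])))
```

The point of the sketch is that nobody has to program the prejudice explicitly: the model simply reproduces the pattern it finds in the historical outcomes.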

Illustrations: Max Kisman

GDPR offers weak protection

The EU’s General Data Protection Regulation (GDPR) is due to enter into force on 25 May. One of its purposes is to protect us against detrimental decisions by algorithms. It’s a first step, but it doesn’t go far enough. The regulation is vague about companies and authorities that take decisions based on derived data. It is also unclear about how detrimental forms of algorithmic decision-making can be prohibited.

It does give individuals the right to object to their personal data being processed, but by then the damage has usually already been done, and it is uncertain to what extent they can actually exercise that right effectively. How easy is it for an individual to intervene when algorithms take decisions in increasingly complex environments and contexts?

There is still a clear distinction between police officers, the devices that they use, and the IT company that develops their software, for example. But these separate worlds are merging, and devices are taking more and more stand-alone decisions. Researchers warn that we are becoming a ‘black box’ society in which no one truly understands how algorithmic decisions are made or how to intervene in them.

On 25 May, the Rathenau Instituut and medialab SETUP are organising a debate on Living with Algorithms. Experts, researchers, designers and artists will discuss their views on algorithms and decision-making. You can register for this event here.

Weapons of Math Retaliation

All these developments make it vitally important that we understand the moral values, mathematical simplifications and biases that are encoded in algorithms. In addition to a public debate, we need tools to arm ourselves against detrimental decision-making by algorithms. Legal standards that limp along behind the commercial market are not enough. 

That is why medialab SETUP is cooperating with artists and experts in a research programme that we have entitled Civil Weapons of Math Retaliation. The programme is meant to uncover the moral implications of algorithmic decision-making and ensure a fairer distribution of the power of moralising algorithms. That way people will not only have the right to object to a decision taken by an algorithm but will also be able to use algorithms to empower themselves.

We will explore issues of autonomy, human dignity, and the desirability of algorithmic decision-making in a series of four design research projects. We will present two of our results on 25 May during the Living with Algorithms debate, organised by SETUP and the Rathenau Instituut.

Designer Isabel Mager is responsible for one of the four projects. On 25 May, she will introduce us to the world of recruitment algorithms.

A growing number of companies are choosing to let algorithms screen job candidates. One of the leaders of the recruitment algorithm industry is HireVue, a company worth millions of euros that has Unilever and Goldman Sachs as clients. HireVue assesses videos of job candidates on word choice, tone of voice and micro-expressions that are said to reveal our true emotions.


Anyone who thinks that they are no longer being measured by algorithms once they’ve landed a job is wrong. The Dutch firm KeenCorp has developed an algorithm that measures employee engagement by searching internal e-mails and chat messages for unconscious language patterns that indicate tension and personal involvement. In an extensive interview with SETUP, KeenCorp explains what its algorithm measures and how it contributes to improvements in the workplace.

Consistency is not the same as neutrality

Recruitment software companies claim that they liberate job candidates from the whims of biased employers. But this marketing promise is based on the misconception that technology is neutral. Consistency is not the same as neutrality, after all; the fact that an algorithm assesses every candidate in the same way does not mean that its assessment is neutral. In fact, the scores awarded by recruitment algorithms are based on all sorts of moral assumptions about personality types. Take people with friendly faces, for example: are they actually nice people?
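As a minimal sketch of that distinction (the features, weights and names below are hypothetical, not taken from any real recruitment product), consider a scoring rule that is applied identically to every candidate and is therefore perfectly consistent, yet still encodes a contestable assumption about what a suitable person looks and sounds like:

```python
# A deliberately simple, hypothetical scoring rule (not any real product's
# algorithm) to show that a consistent rule can still be far from neutral.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    smile_intensity: float   # 0.0 to 1.0, from a hypothetical video analysis
    speech_rate: float       # words per second

def suitability_score(c: Candidate) -> float:
    # The same formula is applied to everyone (consistent), but the weights
    # embed a moral assumption: friendly-looking, fast-talking people score higher.
    return 0.7 * c.smile_intensity + 0.3 * min(c.speech_rate / 3.0, 1.0)

candidates = [
    Candidate("A", smile_intensity=0.9, speech_rate=2.8),
    Candidate("B", smile_intensity=0.3, speech_rate=2.8),  # equally articulate, less smiley
]
for c in candidates:
    print(c.name, round(suitability_score(c), 2))
```

Both candidates are assessed by exactly the same formula, yet candidate B is marked down for nothing more than a less expressive face.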

Isabel Mager will show us on 25 May how recruitment algorithms categorise and appraise job candidates during their video interviews. How do algorithms analyse our personalities and can we turn this to our advantage during a job interview?

Empowerment

Making algorithmic decision-making more transparent is the first step on the road to empowerment. The next step is to understand what we’re seeing. The ‘Civil Weapons of Math Retaliation’ programme will offer society telling examples, designs, presentations and a vocabulary that makes algorithmic decision-making comprehensible for a wider audience.

After all, all of us – and not just technicians – need to be able to talk about algorithms and decide, along with others, how we’re going to live with them.
