
Amnesty International: Algorithms must respect human rights

04 October 2017
Decent Digitisation Blog | Human rights

Algorithms have infiltrated deep into our society without our always being aware of the associated risks. The Amsterdam police force, for example, uses software that predicts break-ins and muggings. That’s why Amnesty International believes that now is the time to start talking about how to deal with artificial intelligence.

By Eduard Nazarski, Director of Amnesty International Netherlands

Reading time: 3-4 minutes
Be sure to read the other articles in the Decent Digitisation series.

The world will look very different in the near future than it does now. We are leaving more and more of our decisions to devices, based on data that they have collected and processed. Artificial intelligence is a tremendous advantage in many respects. Algorithms – sets of rules that draw conclusions from data – can relieve us of hazardous work and save us time. But we must be aware that they also pose numerous risks. Amnesty International has decided to investigate this subject.

Governments let computers draw conclusions

Algorithms have infiltrated deep into our society:

  • The Royal Netherlands Constabulary patrols the Dutch borders using profiles developed by an algorithm, in this case drawing on a database assembled from road traffic surveillance data.
  • The Municipality of Apeldoorn wants to use Big Data to predict the likelihood of juvenile crime in certain neighbourhoods.
  • The Amsterdam police force uses software that predicts break-ins and muggings.

These organisations let computers search for data patterns and sometimes draw far-reaching conclusions from the findings. This form of data mining poses certain risks to human rights.

Illustrations: Max Kisman

Data aren’t impartial

Algorithms seem objective. People have their biases, but technology doesn’t – or at least, that’s what we believe. Unfortunately, it’s not that straightforward. Software depends on the data that people feed into it. And those data aren’t impartial by any means.

A study carried out by two researchers in the US, Kristian Lum and William Isaac, is a case in point. They applied a predictive policing algorithm used in US police force software to the Oakland police department’s drug crime databases. And what did they find? The software predicted that future drug crimes would occur in areas where police officers had already encountered many drug crimes. The researchers then added public health data on drug use. Based on the new data, the software predicted that drug crimes would also occur in many other parts of the city.

Fairness at risk

The police databases turned out to have a blind spot, in other words, and one that would have caused the Oakland police to overlook crimes in certain neighbourhoods. Self-learning software only makes it more likely that those crimes would keep being overlooked. Going by the software’s advice, police officers would patrol only the neighbourhoods they already knew well and record the crimes that occurred there in their database. The software would then use that database for its next round of predictions, again missing any crimes unknown to the police in neighbourhoods that saw little policing. Not only would this undermine the effectiveness of policing, it would also be unfair: the police would address crime in one neighbourhood but not in another.
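
To make this feedback loop concrete, here is a minimal, purely illustrative simulation sketch in Python. The neighbourhood names, incident rates and patrol rule are assumptions for the sake of the example, not data from the Lum and Isaac study; the point is only to show how a predictor trained on recorded incidents keeps sending patrols back to the same places.

```python
import random

random.seed(0)

# Two hypothetical neighbourhoods with the same true rate of incidents per day.
TRUE_RATE = {"north": 0.5, "south": 0.5}

# The historical records are skewed: the north was patrolled far more often.
recorded = {"north": 50, "south": 5}

def predict_hotspot(records):
    """Naive 'predictive policing': patrol wherever most incidents were recorded."""
    return max(records, key=records.get)

for day in range(365):
    patrolled = predict_hotspot(recorded)
    for area, rate in TRUE_RATE.items():
        incident_happened = random.random() < rate
        # Only incidents in the patrolled area ever reach the database.
        if incident_happened and area == patrolled:
            recorded[area] += 1

print(recorded)  # roughly {'north': 230, 'south': 5}: the south stays invisible
```

Because new records only ever come from the patrolled neighbourhood, the database confirms its own bias, and the prediction never changes.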

Attempts to circumvent this problem produce a human rights paradox: government either has databases with blind spots, or it links up many different databases, which is undesirable for privacy reasons.

Black box algorithm

One of Amnesty’s other major concerns is the lack of transparency regarding what computer systems do with their data input. It is often impossible to see how a system reaches a particular outcome, which makes it difficult to test its accuracy. Users may not even notice when the system has made a mistake.

The lack of transparency has negative implications for the rule of law. Once an algorithm has identified you as a potential risk, how can you prove you are not if you have no insight into the rationale behind that assumption?

Who is responsible?

That brings us to the third important risk: who do we hold responsible for decisions, especially wrong decisions? If we allow algorithms to take decisions for us, either directly or because we base our own decisions on their advice, then who is ultimately responsible for that decision, the software developer or the person who uses the software? And who monitors whether the advice issued by the algorithm is actually correct? Where do victims obtain justice when no one can tell them who is responsible? These are questions that we need to discuss.

Who monitors whether the advice issued by the algorithm is actually correct?
Eduard Nazarski

Urgent call for dialogue

In June 2017, Amnesty International founded the Artificial Intelligence and Human Rights Initiative. Its purpose is to arrive at a set of human rights principles for artificial intelligence and to launch a debate about the ethics of AI. In the Netherlands, we plan to organise a round-table meeting in the autumn about the police and the courts using predictive analyses. These are the topics that urgently call for dialogue:

  1. Prioritise human rights. Programmers must understand the potential effects of their work on people and adhere to human rights standards.
  2. Promote justice. We can do that by seeking solutions to the problem of biased data and software that compounds existing biases.
  3. Guarantee transparency. We should not allow crucial decisions to depend on systems that we cannot monitor. The algorithms and the data that they use should be transparent for individuals as well.
  4. Agree on who’s responsible. We have to agree on who is responsible if an algorithm produces an erroneous outcome, and where victims can obtain justice.

Now is the time to start talking about how to deal with artificial intelligence. We need to talk to software developers and to the organisations that use their software: the government, businesses, the police and the court system. We also need a dialogue about this as a society. Because once algorithms become ubiquitous, there will be no way back.


Read more

Be sure to read the other articles in the Decent Digitisation series, and the related reports: