
Artificial Intelligence, What's New?

29 November 2018

Some years ago, the world was shocked by Edward Snowden, who leaked American intelligence data and published it for the whole world to see. How was it possible that so much data had been collected? And that this data could be leaked? Big data and the safe storage of data became an issue. But we soon reassured ourselves: this was America, and this data belonged to important people. It would not happen to us.

Melanie Peters, director of the Rathenau Instituut

The American elections and the Brexit referendum in the UK proved us wrong and taught us that it could happen to all of us. We too are interesting targets. Our online behaviour can be analysed to profile us and divide us into certain groups. This information can be used to target us, for instance by modifying the messages we get to read on Facebook. This means we are of interest to those who care to manipulate our thoughts and personal behaviour, even though we don't know them and they don't care about us in person. The issue of big data, privacy and security became broader. These incidents showed that even without releasing our data, and thus violating our privacy directly, combining different data sets, analysing them and using them without our knowledge to influence what we see on social media has a huge impact on our personal lives and on our democratic society as a whole.

Human Rights

At the Rathenau Instituut we analysed different use cases, ranging from care robots to local government services, to establish in what way we ourselves are affected by these new technologies that can be used for good or for bad. Our studies show that they definitely have effects on us, and on our societies as well. Sometimes we find this technology handy, for instance when we learn that "other readers of this book were also interested in...". But very often we are not aware of the process by which data are captured, combined and used to profile us. And very often these profiles are not used in our interest.


Our studies show that our most intimate rights, the rights that the European Union declared fundamental and that were adopted by the UN, are at stake. These new technologies affect our autonomy when they take decisions for us, for instance in determining which news items we see. When they take over decisions in our workplace, they can lead to the deterioration of our professional decision-making skills. They might lead to exclusion and discrimination in the marketplace when services are offered to some and not to others, undermining our rights as consumers to buy what is good for us and to access services without paying with our data. In this way they affect our individual rights, but also our collective rights as citizens: the right to be heard, to voice our opinions and to take part in public life, for instance when police use data for predictive policing. In fact these technologies fundamentally change many relationships, such as those between workers and employers, between patients and doctors, and between citizens and government, and in this way affect the existing rights we have defined in these domains.

So what do we need to do?

First of all, we need to look at these effects for what they are. Digital technologies and data are not magically going to fulfil our basic needs, even though this is what we often hear. On the contrary, a lot of misuse is possible. So we need conceptual clarity: the new players who collect, combine and use our data, and those who design and own these new technologies, are also very much responsible for our wellbeing and are not allowed to violate our rights. This conceptual clarity also points to which laws or rules apply and which agencies are to supervise their behaviour. The ruling of the Court of Justice of the European Union, for example, helped to show that Uber does have responsibilities towards its drivers and passengers and towards road safety. This clarification is needed first of all. It does not always mean we need to design new rules.

New rights and rules

In analysing our use cases we found that, in addition to our existing rights, we need two new rights to satisfy our basic needs in the data society: the right not to be measured in certain situations, and the right to meaningful human contact in certain important situations. To give an example, a care robot can help a person to stay at home for longer, but in some situations using one can also be dehumanising. A person should have the right to speak to a doctor or nurse to discuss certain decisions, for instance. In some cases this calls for political decision-making; in other cases it is about personal decisions. Introducing these technologies takes real effort: we have to rethink how to incorporate these new possibilities in a way that works for the good of all, for society as a whole and in our personal lives. We will need additional rules in some domains. But it starts with all actors taking responsibility and taking their duty of care seriously.


Competition rules and cyber

One area where we need to think about new rules is competition. The following question needs to be answered: how can we create fair competition in a data society? We know that data companies and their platforms tend to evolve into monopolies. In this way they almost become utilities, and the question is how they can be privately owned yet governed to serve the common good. This question requires very fundamental rethinking. Related to competition, data ownership is another issue. Under the European GDPR you have far-reaching rights over your own data. But will you actually be able to exercise control over these data? And how do we prevent misuse and cybercrime? The area of cybercrime, and even cyberwar, needs our fullest attention.

And AI?

In the last few months, AI has become a buzzword. In fact, what I described above is AI: we call an artificial system an intelligent system when it is able to sense (collect data), decide (profile) and act (advise us or show us a piece of news). We now all talk about algorithms and artificial intelligence as if they were new, but in fact a calculator is artificial intelligence, and an IQ test designed to select children for certain schools is an algorithm. We have used algorithms for centuries.

The scary bit is not the artificial intelligence or the algorithm; it is automated decision-making without the ability to appeal. When our children take an entry test, we discuss the result with the teacher, and everyone agrees that one bad test should not determine a child's future. This is why interpreting the test is always the teacher's job. Even if the test algorithm is self-learning (so the formula changes), we would want the teacher to be in charge.


What is new is that we will be able to collect more and more data. With 5G networks we will be able to collect data from our homes, from every device, from our cars and from the built environment. These data can be instantly combined, processed by algorithms and used to profile us and influence our behaviour. This only becomes possible with an internet that is fast enough, such as a 5G network.


Our use cases show that 5G will accelerate what we already see today. The use of algorithms crunching big data will rise in many sectors: think of banking, health, justice, government services and especially the commercial domain. The examples above show which human rights are at stake. It is not the use of AI per se, but the way data are used and combined in ways we don't know, in order to arrive at decisions we cannot follow or appeal against: decisions which have an unacceptable impact on our personal lives and on public life.

AI for the good of all

For the High-Level Expert Group on AI, this means deciding in which situations or domains AI is problematic. When do we really need to be sure how decisions were taken and what data were used? Whom can we hold responsible? How can we prevent commercial or other misuse, such as cyber attacks on the data? And how can we trust our government with our data?

Again, the question will be what standards already apply, what ways there are to appeal, and how to form new practices around the use of these tools. Keep asking the teacher for advice, even though school entry will no longer be based on a single test but perhaps on data from a child's whole school career. In any case, it is human judgement that should determine what is best for a child.

Just as in education, extra safeguards are needed in banking, health, justice and government services. We believe that in these domains extra care should be taken when innovating, generating data and sharing them with third parties. On the basis of our research, we think this should be done through embedded innovation, in which human rights are not an afterthought but part of the design. And this has to be tested in practice, monitored, and embedded in the democratic decision-making process.

What we learned is to differentiate between the public sector, which creates public goods and requires democratic control, and the private sector. Private actors of course also work for governments, but they need to know that even more safeguards are required when they do. We need to look at dual-use applications (products that can be both used and misused), which could pose security threats, and at vulnerable sectors such as our energy networks. Decisions that affect these networks could, when tampered with, become geopolitical hazards.


Now what?

All of the above means that we need to increase our knowledge of these technologies and of how they can be used for the good of all. And we should acknowledge the impact they can have on our lives if we don't design and use them the right way.

We also need good governance frameworks that apply to the different actors. Many companies put their ethical codes on AI and data technologies at the top of their CSR codes. Governments, too, are already bound by international legislation and soft law. But the question is one of collaboration. How will all these actors work together to protect us as citizens, to create a world in which we are not controlled by data but in which data help us shape our world and fulfil our basic needs? How can we build trust in the digital society?

It is good to see that the UN, ISO, OECD, UNESCO, the Council of Europe and the European Union have put AI and the safe and inclusive digital society high on the agenda. They are all building part of the governance framework in which companies, governments, civil society groups and we as citizens can take responsibility for this development.

This article was written at the request of the High-Level Expert Group on Artificial Intelligence (AI HLEG) of the European Commission.