In this blog we discuss why and how manipulative artificial intelligence (AI) should be regulated. To this end, we talked to Paul Nemitz, Principal Adviser in the Directorate-General for Justice and Consumers of the European Commission, and Yordanka Ivanova, legal and policy officer in the Directorate-General for Communications Networks, Content and Technology of the European Commission. Both contributed to the AI Act of the European Commission, which proposes to prohibit certain manipulative applications of AI entirely.
- This fourth blog in the blog series "AI and manipulation" is about the draft AI regulation.
- This regulation is currently being negotiated in the European Parliament and in the Council.
- The European Commission recognises that values such as individual freedom and autonomy are at stake with manipulative AI.
How can AI be manipulative?
Paul Nemitz distinguishes three categories of manipulative AI. The first category covers computers that make users believe they are dealing with a human being. Nemitz: ‘Technology now makes it possible to imitate humans in written language exchanges and also in terms of speech. Artificial voices can really sound lifelike, as if you are talking to another human being. To make people believe that they are dealing with a human, while in reality they are dealing with a machine, is not an innocent matter. This is a problem from an ethical, philosophical, theological and also legal point of view. So it’s very important that we have binding rules, enforceable against everyone and sanctionable too, which ensure that people always know when they are dealing with a machine or with a human being.’
A second category of AI manipulation is persuasive design. Nemitz explains: ‘People can still be manipulated when they know they are dealing with a machine, because technology can be programmed to deceive. There is a whole school of engineering for persuasion, which goes into manipulation for commercial purposes, amongst others.’
“Artificial voices can really sound lifelike, as if you are talking to another human being.”
The third category Nemitz mentions is the economic incentive for personalised or targeted advertising. ‘There is a huge commercial incentive to cross the line of what might be acceptable marketing. Manipulative advertising is, for example, targeted at poor people, or at people that are not so smart. It has become normal to profile people and to predict and manipulate their behaviour, views, or mood, for the sake of profit. This is a very serious problem. Ethics has failed in this area. To tackle this problem, we need enforceable laws and severe sanctions. Fortunately, there is an increasing conviction among members of parliament that targeted advertising based on human traits and profiling is a terrible thing.’
Why should manipulative AI be regulated?
‘What is at stake,’ says Nemitz, ‘is our idea of the human as it was passed on to us from the Enlightenment. Our societies are based on the assumption that we as humans are free and exercise self-determination, individually and collectively in democracy. Manipulative technologies put the ability to act freely and autonomously into question. By doing so, manipulative technologies put into question the very tenet of Western civilisation, of how we see and value the human. Tech absolutists bring us back to a pre-Enlightenment time, by trying to make us trust technology in such a way that we don’t ask for reasons or explanations anymore.’
It is exactly this concern that underlies the AI Act that was published by the European Commission in April 2021. Yordanka Ivanova explains: ‘It puts people at the centre, allowing them to exercise control and freedom in their personal autonomy and decision-making.’
What is the purpose of the AI Act?
Ivanova tells us a little bit more about the EU’s AI Act. ‘Of course, we already have the GDPR as the main legal framework for data protection. And then there is the European AI strategy from 2018 and the White Paper on AI from 2020. But these only offer recommendations and ethical guidelines – they are not binding. This was considered to be insufficient to guide all development and use of AI.’
‘The AI Act of April 2021 is meant to complement existing legal frameworks, such as the GDPR and consumer protection legislation. The AI Act proposes to have horizontal legislation on AI, meaning that it is applicable to the use of AI in all sectors and to the whole AI life cycle. Its aim is to remain proportionate – to prohibit only those practices that really pose an unacceptable risk and are incompatible with EU fundamental rights and values. For lower risk categories we propose mitigating measures, requirements, risk management, and so on. We don’t want to go beyond what is strictly necessary to address, because we don’t want to limit innovation.’
How does the AI Act address manipulation?
Two of the prohibitions proposed in the AI Act relate to manipulation: the prohibition of the use of subliminal techniques, and the prohibition of the exploitation of vulnerabilities, when these practices are likely to cause physical or psychological harm.
Ivanova explains why the European Commission decided on these prohibitions. ‘Our concern with AI manipulation actually stems from the wish to make AI future proof. But manipulative practices are already prevalent today. So many AI applications are now entering our everyday lives and collecting our personal data – personal assistants such as Siri or Alexa, for example. They are able to really influence what we see and what we choose, which brings with it a lot of risk regarding our autonomy.’
Regulating manipulation is not an easy task, Ivanova admits. ‘Firstly, it is difficult to translate the concept of manipulation into specific prohibitions. It’s also challenging because not all forms of manipulation are harmful in principle, of course. And because the Act is supposed to be horizontal, not technology-specific, we could phrase it in only one way.’
“Our concern with AI manipulation stems from the wish to make AI future proof.”
But can you ban harmful AI before any harm occurs?
The prohibitions apply only to manipulative or exploitative AI that can cause physical or psychological harm. But whether or not an AI system really causes such harm can only be determined after the technology has already been used. Banning harmful AI, therefore, seems a little contradictory.
Ivanova clarifies the matter: ‘Damage is indeed an important element in the prohibition of manipulative AI, but it is not necessary that the damage has actually occurred. Therefore, the prohibitions are formulated as: something "may cause harm" or "there is a possibility of harm". Even if the damage would never have happened in the end, or if it was not intended by the developer, it is sufficient for a prohibition that there is a possibility of damage. Of course, it is always difficult to subsequently prove that there is such a possibility, which is true of all legal prohibitions. That is why I think it is important to give public parties the authority to approve AI applications.’
What is the next step?
In the months after the AI Act became public, the European Commission received a lot of responses. Ivanova: ‘Generally people are supportive – even businesses. But of course, not everyone is happy with the proposed regulations. There are NGOs that want to go much further, while some companies and governments would benefit from less regulation on AI. We have tried to find a balance.’
Nemitz: ‘This proposal is currently being discussed within the European Parliament and the Council. I think we can be optimistic that this process will move forward with the necessary speed and that policy makers will ensure that this was not just symbolic politics, but that there will actually be a legal framework around AI and manipulation.’
Targeted advertisements, personalized news feeds or smart home devices: artificial intelligence (AI) creates new ways to influence people’s behaviour. The European Commission has proposed a ban on certain (harmful) instances of AI manipulation. In a new blog series, the Rathenau Instituut talks to experts about the phenomenon of manipulative AI and the desirability of this emerging technology.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497
This blog was written by Rosalie Waelen, PhD candidate at University of Twente, where she focuses her research on the ethics of artificial intelligence.