What do we mean when we say that artificial intelligence (AI) can be manipulative? And why is manipulation a moral problem? In this first blog of the series ‘AI and manipulation’, we interview Dr. Michael Klenk (TU Delft) and Dr. Tjerk Timan (TNO) about the moral questions that manipulative AI raises.
- This first blog of the series ‘AI and manipulation’ deals with the ethical questions regarding manipulative AI.
- Manipulation is a form of influence that becomes more effective and more widely applicable through the use of AI.
- There are various views on the moral permissibility of manipulation.
What is manipulation?
In order to have an informed debate about manipulative AI, we first need to clarify what manipulation itself entails. Manipulation is a common term in day-to-day language, but philosophers have multiple definitions for it.
Philosopher Dr. Michael Klenk, who works at TU Delft, describes manipulation as follows: ‘Manipulation is a type of influence. It is to be distinguished from rational persuasion and coercion. It falls in between. Coercion takes freedom of choice away completely; manipulation perhaps lowers it a bit.’
Klenk himself defines manipulation as ‘negligent influence’. ‘Good forms of influence are accompanied by a certain care for the motives of the other person. Manipulation is negligent in the sense that it is defined by the absence of such care. The manipulating party wants to achieve a certain result, and that is all that matters.’
Dr. Tjerk Timan, who is familiar with the policy and ethics of AI through his job as a policy analyst at TNO in the Netherlands, describes manipulation as a kind of strategy: ‘The manipulating party knows something that the other doesn’t and takes advantage of that.’ In other words, there is a knowledge asymmetry between the manipulator and the manipulee. ‘It also has to do with subconsciousness. Manipulation entails influencing persons in such a way that they do something that they do not actually, or not entirely, agree with.’
Each of these definitions concentrates on different aspects of manipulation. Klenk looks at the role of the manipulating party, whom he believes to be negligent in their influence on other people. Timan, in turn, focuses on the role of the person who is being manipulated. Manipulation can only be a successful strategy, according to Timan, when there is a certain asymmetry in knowledge, or an element of subconsciousness, on the side of the manipulee.
What’s wrong with manipulation?
Manipulation is not necessarily seen as a problem when we focus on its consequences for the manipulee. We can also manipulate people in order to benefit them, for example by stimulating them to eat healthier. In such cases, manipulation is referred to as ‘nudging’. In other words, sometimes someone is better off for being manipulated.
However, we already learned that manipulation can also be defined from the perspective of the manipulating person, or manipulator. In that case, it is the act itself, or the intention behind it, that matters. Those who believe that the act of manipulation is undesirable as such would reject it even when it has positive consequences. Klenk opts for this point of view: ‘When you look at the act of manipulation itself, I would say that the negligence is always a failing on the part of the manipulator.’
How does manipulation by AI differ from other forms of manipulation?
What differentiates AI manipulation from so-called traditional manipulation is its efficiency. By means of big data, AI can influence an individual’s behaviour in a very targeted manner. Moreover, online one can reach a much bigger audience than with, say, a billboard next to the highway. Klenk calls these ‘aggregating factors’.
Digital manipulation is not worse than other forms of manipulation, according to Klenk. ‘I don’t see that there is a moral difference, that the act is worse per se. But it might be that it just has more impact on our lives and increases the amount of manipulation in our lives. That’s an empirical question that should be looked into.’
What are the ethical questions regarding manipulative AI?
If there is no moral difference between AI manipulation and other forms of manipulation, why then are people so worried about it? Timan explains that manipulative AI brings a lot of new ethical questions and problems with it. For instance, manipulative AI raises questions regarding data protection and the matter of scale and scope. Timan: ‘When you use AI to manipulate behaviour, even when you do so with the best of intentions, the question remains: are you allowed to use people’s data to manipulate them?’
Another issue that Timan mentions is the risk that new problems are passed on to users, sometimes called ‘responsibilisation’. ‘Take the example of fitness trackers. AI is a useful tool to stimulate people to move more, but it can lead to a problematic narrative: “it’s your own mistake that you are in bad health, look at the data”.’
The risk that the user ends up dealing with new problems connects to a larger cultural challenge of AI and manipulation, says Timan: ‘I think that digital colonialism is the biggest cultural problem regarding manipulative AI. The influence that takes place, for example through content filtering, spreads American values, which constitutes a form of subtle, yet pervasive and hidden mass manipulation.’ American values are, for example, reflected in the kinds of topics that have priority on our social media’s news feeds.
‘On an individual level, the biggest problem is the decline of freedom of choice. Especially due to the rise of immersive technologies, such as voice-activated smart-home devices like Alexa or Google Home, we lack an opt-out more and more often. We can no longer choose not to participate or not to be the subject of analysis.’
Klenk raises a similar issue. ‘One of the main reasons we use AI is to make design more user-friendly. However, there is a certain tension between user-friendliness and manipulation. AI makes decisions very easy for you. It reduces complexity. But this robs you of your opportunity to think for yourself. It stands in stark contrast with autonomy.’
What do these ethical issues mean for AI policy?
Although manipulative AI raises many moral questions, there is nevertheless no consensus on what exactly is wrong with manipulation. If we reject the act of manipulation a priori, we would need legislation to prohibit manipulative AI as much as possible. But if we reject manipulation only on the basis of its consequences, we should only tackle specific instances of manipulative AI.
In April 2021, the European Commission published its proposal for the AI Act. In this proposal, the Commission suggested, amongst other things, prohibiting manipulative and exploitative AI that leads to physical or psychological harm. A prohibition might suggest that manipulation is considered wrong as such. However, what actually matters in the proposed prohibition are manipulation’s consequences. Forms of AI manipulation that would do little harm would remain permissible under the EU’s proposed legislation.
‘The AI Act is a weird piece,’ Timan acknowledges. ‘We see that, for the first time in a long time, the Commission moves away from a form of policy that is solely risk-based. The Commission makes an odd, bold move by saying something substantial about a technology, namely by banning certain applications.’
About the EU’s focus on harm, Timan says: ‘We see that manipulation is rejected because of its consequences. That utilitarian, risk-based approach is very Anglo-Saxon. This is problematic in this case, because of the burden of proof. It’s not easy to prove as an individual that you are suffering psychological damage as a result of some hidden algorithm, on a social media platform that you voluntarily subscribed to. Moreover, most harms are long-term and cumulative, whilst the AI Act is set up to deal only with singular instances or cases, on which a claim of harm must be based. This renders most harms resulting from long-term, small but incremental manipulation inadmissible.’
Targeted advertisements, personalized news feeds or smart home devices: artificial intelligence (AI) creates new ways to influence people’s behaviour. The European Commission has proposed a ban on certain (harmful) instances of AI manipulation. In a new blog series, the Rathenau Instituut talks to experts about the phenomenon of manipulative AI and the desirability of this emerging technology.
This blog was written by Rosalie Waelen, PhD candidate at University of Twente, where she focuses her research on the ethics of artificial intelligence.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497.