
Five conclusions about AI and manipulation

25 August 2022

Photo: Fons Heijnsbroek - Unsplash


Targeted advertisements, personalised news feeds or smart home devices: artificial intelligence (AI) creates new ways to influence people’s behaviour. The European Commission has proposed a ban on certain (harmful) instances of AI manipulation. In recent months, we have had experts discuss the ethical objections to AI manipulation. In this concluding blog, we draw five conclusions from our blog series.

In short:

  • The AI and manipulation blog series covered the ethical concerns raised by manipulative AI applications and their regulation under the European AI strategy.
  • AI manipulation takes various forms, with consequences at both an individual and a societal level.
  • The effectiveness of the AI Strategy remains to be seen, but effective regulation depends on effective supervision.

Experts on AI manipulation

In the blog series AI and manipulation, we talked to experts about manipulative AI and the desirability of this emerging technology. We talked to Dr Michael Klenk (TU Delft) and Dr Tjerk Timan (TNO), Dries Cuijpers (ACM supervisor), Nadia Benaissa (Bits of Freedom), Richard Rogers (UvA) and finally Paul Nemitz and Yordanka Ivanova (both from the European Commission).

What is manipulation? Why is it a moral problem? What does AI manipulation look like in practice? Based on these discussions, we draw five conclusions about effectively regulating AI manipulation.

“Manipulation can be understood as an act itself, or as the consequences for an individual.”

1: Manipulation raises several moral objections

The experts broadly agreed on what manipulation is and why manipulation raises ethical concerns: it is an undesirable form of influence that restricts people's free choice and autonomy. 

Yet, we also saw differences. You can, for instance, look at the person who 'influences'. Michael Klenk said that manipulation occurs when the manipulating party does not take the other person into account: manipulation is done 'carelessly' - without concern for the other person.

But you can also look at the role of the person who is being manipulated. Tjerk Timan pointed out that the manipulating party knows something that the other party does not, and takes advantage of this: there is a form of unawareness on the part of the person being manipulated.

Manipulation can therefore be understood as an act itself, or as the consequences for an individual. 

2: AI manipulation takes different forms

The blog series also showed that AI manipulation takes different forms, with various underlying mechanisms at play. For example, AI manipulation occurs when companies collect enormous amounts of data and use it to persuade specific people or target groups to buy a product or service. This makes influencing far more widespread and sophisticated. According to Dries Cuijpers and Nadia Benaissa, this gives companies an information advantage.

Besides companies influencing consumers, AI manipulation also takes place on social media. This happens in different ways, as Richard Rogers described. Platform algorithms prioritise certain content, such as posts that are tagged as angry. In addition, users themselves can be the manipulating party, for example through fake likes, fake followers or fake views. Organisations can use the online mechanisms of platforms, such as a political party that wants to influence public opinion through political micro-targeting.

In addition to these techniques, Paul Nemitz mentioned two other aspects of AI manipulation: computers that make users believe they are dealing with a human, and dark patterns. The latter is a form of persuasive design in which the system is deliberately programmed to mislead - for example, by pre-ticking a box for an insurance policy and placing it in an obvious location.

3: AI manipulation raises ethical concerns at both an individual and societal level

At its core, AI manipulation is similar to other forms of manipulation: an individual is restricted in their autonomy. 

The blog series also shows other effects, such as the negative impact AI manipulation can have on the wellbeing of individuals, for instance through the culture around Instagram. Moreover, the experts point out cultural and societal consequences. Tjerk Timan, for example, mentions the risk of societal problems being interpreted as individual problems: a pedometer can encourage people to exercise more, but it can also lead to health being seen as an individual responsibility.

He also points to digital colonialism: through content filtering - the mechanism that determines what is shown - predominantly American values start to prevail online. Rogers and Benaissa show how AI on social media plays a role in the way people absorb information, gather news and form political opinions. In this way, AI influences the social and political debate and, with it, our democratic society.

“The call for regulation of AI has been growing in recent years.”

4: An individual has limited defences against AI manipulation

An overarching problem of AI manipulation is that it is (by definition) not transparent. 'The characteristic of many influencing techniques is that they tap into unconscious behaviour. Sometimes they still work even if you know how they work', Cuijpers noted. That makes it difficult for individuals to arm themselves against them. 

It is, therefore, not surprising that the call for regulation of AI has been growing in recent years. Part of the discussion is about creating more responsibilities for the influencing or manipulating party. For social media companies in particular, more obligations have already been created in recent years; for AI manipulation on social media, the recently passed Digital Services Act (DSA) and the Digital Markets Act (DMA), among others, are important. The other part of the discussion revolves around a more fundamental question: are some forms of AI manipulation simply unacceptable?

5: There are different views on how AI manipulation should be regulated

With the proposed AI regulation, the European Commission proposes two bans: a ban on 'subliminal AI' and a ban on AI applications that exploit vulnerable groups. Both forms are incompatible with fundamental rights and values in the EU, says Yordanka Ivanova. For less risky AI manipulation, the EU proposes risk management measures.

In the blog series, experts raised several questions. What exactly will be prohibited? Does the European Commission condemn manipulation - the act as such - or is it about the consequences? Several experts conclude that the latter is the case. 

This is comparable to current consumer law, and not without reason, as ACM supervisor Cuijpers makes clear: even if an application was not intended to mislead, but in practice that is its effect, a company can be held responsible.

The problem with this is that it is difficult to demonstrate these effects - partly because manipulation is not transparent. That is why the European Commission also looks at potential damage and not only at actual damage suffered. But even potential damage is difficult to prove.

Another solution in the AI regulation is to impose requirements on the development process of AI, for example, conducting a timely ethical risk analysis. Finally, Ivanova mentions that it will become possible for supervisors to (pre)approve AI applications.

Various experts raise the question whether regulating the consequences is sufficient. We see the differences in the definition of manipulation reflected in views on how AI should be regulated: is the problem the act itself or its consequences? 

AI for society

Our blog series discussed the undesirable effects that AI can have. The consequences affect not only individuals, but also democracy and society as a whole. 

With the AI Strategy, the European Commission has proposed a ban on certain (harmful) forms of manipulative AI. The question remains: is regulating the consequences enough? Or is it necessary to set stricter limits? This is something that science, society and politics will have to address in the coming period.

Finally, for all legal frameworks, effectiveness stands or falls with effective supervision. Supervisory authorities will increasingly have to cooperate intensively, although it is not yet clear exactly what that cooperation should look like. The organisation of supervision, therefore, remains a point for attention.



This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497
