
What does manipulative AI mean for consumers?

Article
22 April 2022
AI and manipulation | AI Ethics

Photo: Shutterstock

AI recognises the age of passers-by and automatically adjusts the content of an advertisement accordingly

Businesses often use artificial intelligence (AI) to steer the behaviour of consumers. This category of manipulative AI is the topic of this blog. We discussed the issue with Dries Cuijpers, who works for the Netherlands Authority for Consumers and Markets (ACM). We asked him how AI can be used to influence consumer behaviour, when such influence can be deemed manipulative, what its societal implications are, and how such AI applications are regulated.

In short:

  • This second blog in the series "AI and manipulation" is about the use of AI by companies.
  • Consumer protection focuses on influence that is misleading.
  • Because the use of AI, as a new technology, creates new situations, the regulator must consider on a case-by-case basis whether an unfair commercial practice has occurred in the eyes of the law.

In what ways do consumers come in contact with AI?

Companies can employ AI for many purposes. Influencing or manipulating consumer behaviour is one of them. Dries Cuijpers explains: ‘When we look at the customer journey, there are many moments in which AI plays a role. AI not only facilitates marketing, it also determines what you get to see on a website, makes it possible to use a chatbot for customer service, determines when your package will be delivered, and maybe even what price you are going to pay for a product or service.’

As a supervisor at the ACM, it is Cuijpers’ task to deal with consumer protection. ‘From the perspective of consumer protection, the first question I ask myself is: which of these AI applications is the most risky? And I think influence is one of the highest-risk AI applications in the customer journey.’

Why then is influence through AI so high-risk?

Influencing consumer behaviour is nothing new. Marketing practices have always aimed at steering, or even manipulating, the consumer. According to Cuijpers, the use of AI to influence consumers differs in two ways from traditional forms of marketing. ‘The first difference is scalability. With AI, influence can take place at a very large scale. That completely changes the cost-benefit analysis. The second difference is the intricacy of influence by means of AI. AI makes it possible to target a group or person very specifically, because it can easily process large amounts of data. An average shop owner could never collect and use that much information about a customer.’

These differences contribute to the fact that Cuijpers considers the use of AI to influence consumers to be high-risk. ‘Consumers are vulnerable to this influence because of the huge information advantage that companies have. With the data they collect and the psychological techniques they apply, companies’ ability to convince a certain person or group to buy their products or services continuously increases.’
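
To make the scalability point concrete, here is a minimal, purely illustrative Python sketch of fine-grained targeting. Everything in it is invented for the example (the profile fields, the segments, the messages); a real system would use a trained model over far richer data, but the loop shows why the marginal cost of targeting one more person is effectively zero.

```python
from dataclasses import dataclass

# Purely hypothetical sketch: the profile fields, segments and messages are
# all invented for illustration. Once a rule (or model) exists, applying it
# to millions of profiles costs essentially nothing.

@dataclass
class Profile:
    age: int
    recent_searches: list[str]
    late_night_browsing: bool

def pick_message(profile: Profile) -> str:
    """Select ad copy per person, the way a targeting model would."""
    # A real system would use a trained model; fixed rules keep this
    # example self-contained.
    if "diet" in profile.recent_searches and profile.late_night_browsing:
        return "Limited offer: feel better about yourself tonight"
    if profile.age < 25:
        return "Everyone in your city is already using this"
    return "Free shipping on your first order"

# Two example profiles; in production this loop would run over millions.
for p in [Profile(22, ["sneakers"], False),
          Profile(34, ["diet", "energy"], True)]:
    print(pick_message(p))
```

An average shop owner could apply such a rule to a handful of regulars at most; software applies it to every visitor, which is exactly the change in scale and intricacy that Cuijpers describes.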

When can we consider influence to be manipulative?

So, AI increases the ability of companies to influence consumers. But can we say in all cases that this influence is manipulative? Cuijpers: ‘Under current consumer protection regulation, influence is forbidden when it is misleading. But it is not always obvious whether something is misleading or not.’ As an example, Cuijpers mentions green purchase buttons that nudge the consumer to click on them. ‘This is a form of influence. But is it misleading? I don’t think so.’

‘With new technologies we are often dealing with new situations. So most of the time, legislation has not yet been applied to these technologies. Therefore, we do not have a straightforward answer to the question: “Are we dealing with an unfair commercial practice here?” At the ACM, we published a guideline in which we try to frame how the law applies to online influence, and where the line between acceptable and unacceptable lies.’

Misleading influence can have multiple causes, says Cuijpers. ‘One can consciously give AI an assignment that is not okay from the start. For example, letting it exploit people’s vulnerabilities. But it can also be the case that an unsupervised, self-learning algorithm picks up and incorporates things that, in the end, turn out to have misleading effects. In that case, it was not the intention to mislead, but it is the effect. The law focuses on the effect on consumers, intention is not so important. I think that’s a good thing, because AI shouldn’t become something which companies can hide behind with the excuse “I also didn’t know what the algorithm was doing”.’
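
Cuijpers’ point about unintended misleading effects can be illustrated with a small simulation. The sketch below is hypothetical (the page variants and click-through rates are invented): a standard epsilon-greedy bandit is asked only to maximise clicks, yet it reliably converges on the misleading “fake countdown” variant, because nothing told it that false urgency is off-limits.

```python
import random

# Hypothetical simulation: variant names and click-through rates are invented.
# The optimizer is told only to maximise clicks; it converges on the
# misleading variant without ever being instructed to mislead.

TRUE_CTR = {                 # assumed true click-through rates,
    "plain_button": 0.05,    # unknown to the algorithm
    "green_button": 0.08,
    "fake_countdown": 0.12,  # misleading: false sense of urgency
}

counts = {v: 0 for v in TRUE_CTR}
clicks = {v: 0 for v in TRUE_CTR}

def choose(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-known variant, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_CTR))
    return max(TRUE_CTR, key=lambda v: clicks[v] / counts[v] if counts[v] else 0.0)

for _ in range(100_000):                        # simulated visitors
    v = choose()
    counts[v] += 1
    clicks[v] += random.random() < TRUE_CTR[v]  # simulate the visitor

print(max(counts, key=counts.get))              # almost always 'fake_countdown'
```

Under the effect-based test Cuijpers describes, it would not matter that no human chose the countdown variant: the misleading effect on consumers is what counts.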

What are the societal implications of the use of AI to influence consumers?

Consumers are vulnerable to AI’s influence. When influence is misleading, it is an unfair commercial practice under current law. But what, then, are the consequences of such risky, misleading influence? Cuijpers mentions the research of Arunesh Mathur, a scientist at Princeton University. ‘He explained in a very insightful way how influence mechanisms, also referred to as “dark patterns”, can cause harm on three levels.’

‘First of all, such influence mechanisms can harm the consumer financially or perhaps emotionally – for example, when they end up buying a product that they absolutely did not need. The second level is that of society. Influence mechanisms can affect market competition. When a company employs many influence techniques, it also gains more customers and profit. So the company gets a competitive advantage. I don’t think it is desirable to let the market depend on who can manipulate best.’

‘The third level of harm concerns the consequences of dark patterns for democracy. Online influence might harm the autonomy of the individual, not only with respect to the purchases they make but also regarding their political views.’ We discuss this effect of digital manipulation on citizens further in the next episode of this blog series.

The EU proposal to prohibit manipulative AI

In the AI Act, the European Commission proposes to prohibit manipulative AI that can cause physical or psychological harm. We asked Cuijpers what he thinks of this proposal. Cuijpers: ‘What stood out to me was that the kinds of AI applied in marketing actually fall under risk category 3 or 4: low-risk applications to which the prohibition would not apply. But I wonder whether these AI applications in marketing are really as low-risk as the AI Act would have us believe.’

The AI Act suggests that marketing-related AI applications can be sufficiently addressed by existing consumer protection frameworks. But these frameworks are not enough, according to Cuijpers. ‘Consumer protection law only looks at the outcomes of AI. It says nothing about procedural requirements or information disclosure requirements. If we only judge on the basis of the outcome, how can a consumer protection agency like the ACM supervise AI? I think what we need are more norms regarding the process, and stricter boundaries.’

Can consumers guard themselves against manipulative AI?

In addition to better legislation on the use of AI by commercial parties, we could also look at the ways in which consumers could protect themselves against manipulative practices. But Cuijpers is sceptical about the possibilities. ‘What characterizes many influence mechanisms is precisely the fact that they exploit subconscious behaviour. Sometimes these mechanisms are even successful when you know how they work. Therefore, I think it is difficult to guard yourself against them. It also doesn’t make much sense to me to put energy into educating consumers on how to protect themselves against manipulative AI, when we know that a number of practices are absolutely undesirable. Why permit such technology and then teach people how to resist it? Shouldn’t we just say in some cases, “we don’t want this anymore”?’

Targeted advertisements, personalized news feeds or smart home devices: artificial intelligence (AI) creates new ways to influence people’s behaviour. The European Commission has proposed a ban on certain (harmful) instances of AI manipulation. In a new blog series, the Rathenau Instituut talks to experts about the phenomenon of manipulative AI and the desirability of this emerging technology.

The flag of the European Union

This blog was written by Rosalie Waelen, PhD candidate at University of Twente, where she focuses her research on the ethics of artificial intelligence.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497.
