If there is no moral difference between AI manipulation and other forms of manipulation, why then are people so worried about it? Timan explains that manipulative AI raises a host of new ethical questions and problems. For instance, it raises questions about data protection and about scale and scope. Timan: ‘When you use AI to manipulate behaviour, even when you do so with the best of intentions, the question remains: are you allowed to use people’s data to manipulate them?’
Another issue that Timan mentions is the risk that new problems are shifted onto users, sometimes called ‘responsibilisation’. ‘Take the example of fitness trackers. AI is a useful tool to stimulate people to move more, but it can lead to a problematic narrative: “it’s your own fault that you are in bad health, look at the data”.’
The risk that users end up shouldering new problems connects to a larger cultural challenge of AI and manipulation, says Timan: ‘I think that digital colonialism is the biggest cultural problem regarding manipulative AI. The influence that takes place, for example through content filtering, spreads American values, which constitutes a form of subtle yet pervasive and hidden mass manipulation.’ American values are reflected, for example, in the kinds of topics that are given priority in the news feeds of our social media platforms.
‘On an individual level, the biggest problem is the decline of freedom of choice. Especially with the rise of immersive technologies, such as voice-activated smart-home devices like Alexa or Google Home, we increasingly lack an opt-out. We can no longer choose not to participate, or not to be the subject of analysis.’
Klenk raises a similar issue. ‘One of the main reasons we use AI is to make design more user-friendly. However, there is a certain tension between user-friendliness and manipulation. AI makes decisions very easy for you. It reduces complexity. But this robs you of the opportunity to think for yourself. That stands in stark contrast to autonomy.’