
AI and manipulation on social and digital media

Article
03 June 2022

Photo: Greg Bulla - Unsplash

The Facebook like button displayed prominently on the Facebook sign

How can AI on social and digital media influence our behaviour? In the third blog of our blog series 'AI and Manipulation': an interview with policy advisor Nadia Benaissa from Bits of Freedom and Richard Rogers, Professor of New Media and Digital Culture at the University of Amsterdam.

In short

  • Social and digital media manipulation begins by promoting certain content and analysing the behaviour of users.
  • Due to AI techniques and large amounts of available user data, manipulation on social and digital media platforms is supercharged.
  • Lack of transparency and adverse societal and psychological effects are only some of the issues that arise with digital and social media manipulation.

How is AI used to manipulate us on digital and social media?

According to Nadia Benaissa, policy advisor at Bits of Freedom, manipulation is based on the collection of huge amounts of data. “The platforms want to know users better than they know themselves - what scares them, makes them laugh, what they search on Google or what spelling mistakes they make. This information produces a psychological profile of potential voters, which can be used by companies to target them with information or misinformation they would be sensitive to”, notes Benaissa, referring to manipulation during the 2016 US elections. Advertisements, news articles, fun quizzes or other posts influence people to form a certain opinion.  
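To make the mechanism concrete, below is a minimal sketch of how behavioural signals might be aggregated into a profile and used to select a target audience. The event format, action weights, and threshold are all hypothetical; real platforms draw on far richer signals and proprietary models.

```python
# Toy sketch of behaviour-based profiling for ad targeting.
# All field names, weights, and the example data are hypothetical.

from collections import Counter

def build_profile(events):
    """Aggregate a user's interaction events into topic affinities."""
    affinity = Counter()
    for event in events:
        # Sharing or liking signals stronger interest than merely viewing.
        weight = {"view": 1, "like": 3, "share": 5}.get(event["action"], 0)
        affinity[event["topic"]] += weight
    return affinity

def select_audience(users, topic, threshold=5):
    """Pick users whose inferred affinity for a topic exceeds a threshold."""
    return [uid for uid, events in users.items()
            if build_profile(events)[topic] >= threshold]

users = {
    "alice": [{"action": "share", "topic": "immigration"},
              {"action": "like",  "topic": "immigration"}],
    "bob":   [{"action": "view",  "topic": "immigration"}],
}
print(select_audience(users, "immigration"))  # ['alice']
```

The point of the toy example is only the shape of the pipeline Benaissa describes: collect interactions, infer affinities, and select the audience most susceptible to a given message.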

Artificial intelligence techniques such as machine learning and natural language processing are used to analyse the data and learn what users like – and to consequently supply more of that content. Prof. Richard Rogers from the University of Amsterdam explains that this is referred to as 'content optimisation'. “We could consider this a first form of manipulation, although we call it optimisation. The question is: what is being optimised?”, says Rogers. The Facebook Files revealed that Facebook favoured posts tagged as angry. The result is an anger-driven feed, since the platform algorithms favour angry, even hateful commentary as long as it garners attention. Such background arrangements made by platform algorithms can be seen as one type of manipulation.

“The question is: which content is being optimised through AI?”
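As an illustration of such optimisation, here is a hedged sketch of an engagement-weighted feed ranker. The reaction weights below are invented for the example, although reporting on the Facebook Files indicated that 'angry' reactions were at one point weighted several times higher than ordinary likes.

```python
# Minimal sketch of engagement-driven feed ranking.
# The weights and scoring formula are illustrative, not any platform's real values.

REACTION_WEIGHTS = {"like": 1, "love": 2, "haha": 2, "angry": 5}

def engagement_score(post):
    """Score a post by weighted reactions plus comments and shares."""
    reactions = sum(REACTION_WEIGHTS.get(r, 1) * n
                    for r, n in post["reactions"].items())
    return reactions + 2 * post["comments"] + 3 * post["shares"]

def rank_feed(posts):
    """Order the feed so the most 'engaging' posts appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm",  "reactions": {"like": 100}, "comments": 5,  "shares": 2},
    {"id": "angry", "reactions": {"angry": 40}, "comments": 30, "shares": 10},
]
print([p["id"] for p in rank_feed(posts)])  # ['angry', 'calm']
```

Note how the angry post outranks the calm one despite receiving fewer total reactions: ranking by weighted engagement rather than reach is what produces the anger-driven feed Rogers describes.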

The second type of manipulation comes from other users, who help spread certain content through fake likes, fake followers, or fake views.

Lastly, the third type is manipulation via platforms carried out neither by the individual user nor by the platform itself, but by other actors. An example would be a political organisation looking to influence public opinion.

 

Is manipulation through AI worse than ‘traditional’ forms of manipulation?

Rogers: “Traditionally, with manipulation such as propaganda, we see manipulation as something done on the publication side rather than the user side. In the digital space, publishers and users alike can seek to manipulate others towards a specific goal, be it gathering influence or creating revenue.”

A second difference is that in print media everyone sees the same information. Online, however, content circulation is 'supercharged' and tailored, creating individual informational bubbles. Some refer to this as 'computational propaganda' or 'artificial amplification', made possible by AI techniques. Personal bubbles or 'echo chambers' cause societal rifts among different groups of people.


The amplification of specific information can also have psychological consequences for the well-being of individuals. Instagram experimented with hiding interactive features such as like counts and other metrics on posts. “Instagram claimed that they did not want people to feel like it is a competition. Research shows that when a post does not get a lot of likes, users feel bad. We saw the negative psychological effects, particularly in certain segments of the population such as teenage girls”, says Rogers. The experiment gathered attention, and the company enjoyed great PR, until it was eventually revealed that engagement had dropped only slightly, prompting Instagram to bring back the likes feature.

“There are a couple of ethical issues here, such as platforms continuing to amplify hateful posts because these promote the sharing of information, which in turn achieves the greater interaction desired by the platform”, says Rogers.

Another problem is the overall lack of transparency around this amplified influence on users. While Facebook claims that promoting hateful statements is in neither its own nor the advertisers' interest, time and again we learn that it is a common occurrence – and always after the fact. As Benaissa notes: “There is a lack of transparency and accountability. The companies are as transparent as they choose to be.” Platforms avoid being transparent about their decisions to remove or amplify certain content.

Is AI also used to combat manipulation on platforms?

Rogers explains that the propagation of misinformation on social media is a very current problem; we saw it at the time of elections and during the COVID-19 pandemic. Platforms use AI to flag and remove content. They have created safelists of credible sources for sensitive topics such as extremism or pornography. However, using AI to automate credibility judgements does not work consistently, and humans are needed to curate the information. “AI can help manage credibility, but it can also amplify conspiracy”, says Rogers.


"AI can help manage credibility, but it can also amplify conspiracy.”

Another instance of manipulation via social media is political microtargeting. Google and Facebook have shared some insight in an attempt to be transparent about the type of political advertising allowed on their platforms, but have omitted to say anything about the context in which users view the advertisements. “We often do not know which AI techniques nor which data they use. Google and Facebook label political ads as such but hide the parameters based on which the ads are shown to specific users”, says Benaissa. The print newspaper offered a straightforward context for everyone, but online it becomes difficult for users to distinguish between organic and targeted content. “This lack of transparency can result in digital manipulation”, emphasises Benaissa. However, the long-awaited Digital Services Act of the European Commission will regulate political microtargeting.

What do you think of the public debate on AI manipulation in digital and social media?

Benaissa: “People have become more aware of the dangers of online manipulation, and we have become less naive, I hope. But I also fear that political advertisement is becoming normalised, and many political parties partake. Political parties often fear that political microtargeting has become essential to their work. This is problematic, because they are supposed to provide a solution by setting the rules, instead of contributing to the problem by playing by the platforms’ rules”, she explains.

Digital media platforms are now under pressure because of public and political backlash over issues such as the spread of vaccination misinformation, extremist content, or unlawful data breaches. The responses differ across platforms: some have established transparency reports demonstrating their active role in fighting extremism and disinformation. Accordingly, public pressure has instigated a rise in regulation, primarily in the European Union. “This is interesting because we see uneven restrictions globally, yet the platforms are global. For example, the right to be forgotten is valid only in the European Union”, notes Rogers.

Targeted advertisements, personalized news feeds or smart home devices: artificial intelligence (AI) creates new ways to influence people’s behaviour. The European Commission has proposed a ban on certain (harmful) instances of AI manipulation. In a new blog series, the Rathenau Instituut talks to experts about the phenomenon of manipulative AI and the desirability of this emerging technology.

The flag of the European Union

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497

Other publications in this series: