
Digital threats to democracy

Report
04 December 2020
On new technology and disinformation

Until recently, the Netherlands could boast that disinformation had not had a major impact on its society. However, the flood of misleading reports spread about the coronavirus outbreak shows that Dutch society is not immune either. At the same time, it is still too early to make a definitive judgment about what this means for the resilience of Dutch society to disinformation.

However, rapid technological developments in the field of IT could overturn this picture in the foreseeable future. This report, written at the request of the Ministry of the Interior and Kingdom Relations and part of the Digital Society theme in our Work Programme, provides a broad overview of the technological developments that could play a role in the production and spread of disinformation in the coming years.


Summary

The Ministry of the Interior and Kingdom Relations asked the Rathenau Instituut to conduct research into the impact of technological developments on the production and dissemination of disinformation and measures that could be taken to mitigate their potential negative effects. The report focuses mainly on disinformation aimed at disrupting public debate and the democratic process. The study reflects the action lines that the minister, Kajsa Ollongren, announced in her letter to the House of Representatives on 18 October 2019 as part of the government’s strategy to protect Dutch society against disinformation.

This study focuses on the following questions:

  • What is the impact of technological developments on the production and dissemination of disinformation?
     
  • What measures have already been taken to contain the threats that disinformation poses for public debate and the democratic process?
     
  • What new measures can be taken to counter those threats, taking account of freedom of speech and press freedom?
     
  • Who are the relevant actors in that respect?

Approach
This study is based on desk research, interviews with experts and two case studies. The case studies relate to important technological developments connected with the production and dissemination of disinformation: deepfakes and psychographing. The interim results of the study were discussed during an online meeting with experts. This report describes the results of the research.

All of the technological developments that were investigated have a digital component. The developments discussed are already underway or are expected to occur within the next five years. None of the technologies described in this study can be regarded as ‘entirely new’. However, we show how technological innovations that are already in development or which are starting to emerge could evolve and what impact those innovations could have on the production and dissemination of disinformation.

Disinformation
In this study we adopt the definition of disinformation used by the Minister of the Interior and Kingdom Relations in the aforementioned letter to the House of Representatives: ‘the conscious, usually covert, dissemination of misleading information with the aim of causing damage to the public debate, democratic processes, the open economy or national security’. We make the reservation that this study focuses primarily on disinformation that undermines or disrupts public debate and the democratic process, for example by stirring up social divisions or feeding distrust in political institutions.

Previous research has shown no visible signs that disinformation is currently having a major impact on Dutch society. Most of the examples of disinformation in this study are therefore taken from other countries, but they illustrate what the Netherlands might come to face in the coming years.

The study consists of three parts, each with its own distinct character: a quick scan with a survey of technological developments; case studies that explore two specific technologies in more depth; and a preview of new measures that could be taken.

Quick scan and case studies

Part I: Quick scan

The quick scan provides an overview of technological developments that could play a role in the production and dissemination of disinformation in the coming years. It also presents a concise survey of measures that have already been taken to combat the negative effects of disinformation. In the quick scan we make a distinction between general technologies, production technologies and dissemination technologies.

General technologies

  • Database technology: the large-scale collection and analysis of (personal) data;

  • Artificial intelligence: self-learning algorithms and systems.
     

Technologies with which disinformation can be produced

  • Text synthesis: algorithms that generate readable and logical news reports and messages (see the sketch after this list);
     
  • Voice cloning: manipulation of voice messages using artificial intelligence;
     
  • Image synthesis and deepfakes: generation and modification of videos using artificial intelligence;
     
  • Augmented and virtual reality and avatars: presentation of information in a virtual environment;
     
  • Memes: images designed to be widely shared on social media.
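
As a concrete illustration of text synthesis, the minimal sketch below uses a publicly available pretrained language model to continue a news-style prompt with fluent, plausible-sounding text. The choice of the Hugging Face transformers library and the GPT-2 model is an illustrative assumption; the report does not name specific tools.

```python
# Illustrative sketch of text synthesis: a pretrained language model continues a
# news-style prompt. The library (transformers) and model (gpt2) are example choices,
# not tools named in the report.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Local authorities confirmed today that"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```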

Technologies with which disinformation can be disseminated

  • Social media platforms: online platforms such as Facebook, Twitter and TikTok, which use recommendation algorithms to select messages;
     
  • Micro-targeting: reaching specific target groups with a message geared to them (using campaign software, dynamic prospecting, programmatic advertising, psychographing and influencer marketing);
     
  • Chat apps: sharing (encrypted) messages, one-to-one or in small groups;
     
  • Bots: (partially) automated accounts on social media;
     
  • Search engines: platforms that enable the internet to be searched;
     
  • Virtual assistants: voice-operated devices which can be used to consult search engines, among other things;
     
  • Distributed autonomous applications: online platforms with no central control;
     
  • Games: online games;
     
  • Cross-media storytelling: reaching a specific person or target group via various channels and devices.

Part II: Case studies

Building on the quick scan, two case studies were elaborated to provide a more coherent picture of how technological developments in the area of disinformation could evolve in the coming years and what impact they could have on public debate and the democratic process. The case studies concern deepfakes and psychographing.

Deepfakes
Artificial intelligence can be used to manipulate audiovisual material. This can make it difficult for people to distinguish manipulated videos – deepfakes – from the real thing. For example, the face in an image can be changed with ‘face swaps’ or an artificial head or body can be generated with ‘digital puppetry’. Deepfakes can be used, for example, to create the impression that a certain person made a particular statement, which can impair public debate.
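
The original face-swap tools typically rely on an autoencoder with one shared encoder and a separate decoder per identity: a face of person A is encoded and then decoded with person B's decoder. The sketch below illustrates that architecture in a highly simplified form; the layer sizes, the use of PyTorch and the placeholder input are assumptions made for illustration, not details from the report.

```python
# Highly simplified sketch of the shared-encoder / two-decoder architecture behind
# classic face-swap deepfakes. Layer sizes, training and data loading are omitted or
# invented for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector shared between identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for one specific identity from the shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, one decoder per identity. Each autoencoder is trained to
# reconstruct faces of its own identity; the swap happens only at inference time.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_of_a = torch.rand(1, 3, 64, 64)     # placeholder for a detected face crop
swapped = decoder_b(encoder(face_of_a))  # person A's expression rendered with person B's face
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```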

It is likely that further technological innovation will make deepfakes increasingly difficult to distinguish from authentic, non-manipulated images. In addition, increasingly advanced deepfake technologies will come onto the market in easy-to-use apps and gadgets. Accordingly, the use of deepfakes will become increasingly common. Given the growing importance of video on the internet, that could undermine the credibility of visual material published by established news media.

In response to the increasing ability of platform companies to detect deepfakes, producers and disseminators of deepfakes could switch to closed channels without moderators.

Psychographing
Psychographing is an advanced form of micro-targeting. It is an advertising technology that can be used to gear messages in an automated way to the personality traits of a target group. The idea behind the method is that people can be influenced by feeding them information that is tailored to their psychological profile. Large numbers of internet users could be misled or manipulated in this way.
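
As a hypothetical illustration of the idea, the sketch below selects the advertising message variant that matches the dominant trait in a user's (Big Five style) personality profile. The trait names, scores and message texts are invented; the report does not describe a concrete implementation.

```python
# Hypothetical illustration of psychographic message selection: pick the ad variant
# that matches the dominant trait in a user's personality profile. All names, scores
# and messages are invented for illustration.

MESSAGE_VARIANTS = {
    "openness":          "Discover the facts the mainstream media will not show you.",
    "conscientiousness": "Protect what you have built: read the full report.",
    "extraversion":      "Everyone is talking about this. Join the conversation.",
    "agreeableness":     "Help protect your community: share this with your friends.",
    "neuroticism":       "Are you safe? What they are not telling you about this risk.",
}

def select_message(profile: dict) -> str:
    """Return the message variant matching the user's highest-scoring trait."""
    dominant_trait = max(profile, key=profile.get)
    return MESSAGE_VARIANTS[dominant_trait]

# Example psychological profile, assumed to have been inferred from (personal) data.
profile = {
    "openness": 0.42,
    "conscientiousness": 0.31,
    "extraversion": 0.28,
    "agreeableness": 0.35,
    "neuroticism": 0.71,
}

print(select_message(profile))  # -> the fear-oriented 'neuroticism' variant
```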

The case study sketches a scenario in which a group sets out to influence public debate with the help of psychographing. By involving itself in sensitive social issues, the group endeavours to stir up social divisions and undermine public confidence in established institutions. To cause maximum unease, the messages could be disseminated via non-public channels, such as private groups on Facebook or Telegram. Since there is little chance of the messages being contradicted on those channels, the disinformation campaign would have an even greater impact.

Outlook

In the outlook we describe new measures that could be taken to combat the most important technology-driven threats to public debate and the democratic process.

Measures against deepfakes

Investment in detection of deepfakes
Platform companies could invest in the active detection of deepfakes in order to combat them. They will need to do so if they are to keep up in a possible arms race with the producers and disseminators of increasingly advanced deepfakes.

Establishment of a hotline for malicious image manipulation
Companies like Snapchat, Instagram and TikTok, on whose platforms deepfakes are increasingly common, could create a hotline where users can report suspicions of malicious image manipulation.

Authentication of visual material and other messages
The digital authentication of visual material and other messages would enable internet users to verify whether material comes from a source they regard as reliable. That calls for a reliable system for registering digital hallmarks. The government and large technology companies could take the lead in this.
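
A minimal sketch of what such a digital hallmark could look like in practice: the publisher signs a hash of the file with a private key, and a user or platform verifies the material with the publisher's public key. The choice of Ed25519 signatures and the Python cryptography library is an illustrative assumption; the report does not prescribe a specific scheme.

```python
# Minimal sketch of a digital hallmark for published material: the publisher signs a
# hash of the file; anyone with the publisher's public key can check that the file is
# unchanged and comes from that publisher. Ed25519 and the 'cryptography' library are
# illustrative choices.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a key pair once, then sign each published file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw bytes of the published video file..."  # placeholder content
signature = private_key.sign(hashlib.sha256(video_bytes).digest())

# User or platform side: recompute the hash and verify the signature.
def is_authentic(file_bytes: bytes, signature: bytes, publisher_key) -> bool:
    try:
        publisher_key.verify(signature, hashlib.sha256(file_bytes).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(video_bytes, signature, public_key))                # True
print(is_authentic(video_bytes + b"tampered", signature, public_key))  # False
```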

Restricting possibilities for micro-targeting

Monitoring the use of advertising technology
Platform companies could build a monitoring function into their services to combat abuse of the advertising technology they provide.

Technical possibilities for limiting advertising technology
Platform companies could impose restrictions on advertisers' selection of target groups and monitor whether advertisers use the advertising technology they provide responsibly.
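
As a hypothetical illustration, the sketch below checks an advertiser's targeting request against two simple restrictions: a ban on sensitive attributes and a minimum audience size to prevent hyper-narrow targeting. The attribute names and the threshold are invented; the report does not specify concrete limits.

```python
# Hypothetical illustration of platform-side restrictions on ad targeting: reject
# campaigns that select on sensitive attributes or target extremely small audiences.
# Attribute names and the threshold are invented for illustration.

SENSITIVE_ATTRIBUTES = {"political_preference", "religion", "health", "ethnicity"}
MINIMUM_AUDIENCE_SIZE = 1000  # illustrative lower bound against hyper-narrow targeting

def review_targeting(campaign: dict) -> list:
    """Return the list of policy violations for an advertiser's targeting request."""
    violations = []
    used_sensitive = SENSITIVE_ATTRIBUTES & set(campaign.get("targeting_attributes", []))
    if used_sensitive:
        violations.append(f"sensitive attributes not allowed: {sorted(used_sensitive)}")
    if campaign.get("estimated_audience_size", 0) < MINIMUM_AUDIENCE_SIZE:
        violations.append("audience below the minimum size for targeted advertising")
    return violations

campaign = {
    "advertiser": "example-advertiser",
    "targeting_attributes": ["region", "age_group", "political_preference"],
    "estimated_audience_size": 250,
}

for violation in review_targeting(campaign):
    print("rejected:", violation)
```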

Providing transparency for internet users
Platform companies could provide internet users with better information about the use that advertisers make of advertising profiles.

Measures against the harmful effects of recommendation algorithms

A built-in pause for reflection in platform services
Recommendation algorithms of platform companies frequently reinforce the social and political preferences of users and – by extension – social divisions. To combat the harmful effects of this, platform companies could build a pause for reflection into the use of their services. In this way, users would be less likely to share information (and disinformation) impulsively.
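
One way such a built-in pause could work is sketched below: a request to share a link the user has not opened is not published immediately but first triggers a prompt to read the article. The flow and the field names are invented for illustration; the report does not prescribe a specific design.

```python
# Minimal sketch of a 'pause for reflection' in a share flow: sharing an unread link
# first triggers a prompt instead of being published immediately. Field names and the
# flow are invented for illustration.
from dataclasses import dataclass

@dataclass
class ShareRequest:
    user_id: str
    url: str
    user_opened_link: bool          # did the user click through before sharing?
    confirmed_after_prompt: bool = False

def handle_share(request: ShareRequest) -> str:
    """Publish immediately, prompt the user first, or publish after explicit confirmation."""
    if request.user_opened_link:
        return "published"
    if not request.confirmed_after_prompt:
        # Friction step: "Do you want to read the article before sharing it?"
        return "prompt_user"
    return "published_after_prompt"

print(handle_share(ShareRequest("u1", "https://example.org/article", user_opened_link=False)))
# -> 'prompt_user': the user is asked to reflect before the message is shared
```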

Providing transparency about recommendation algorithms
To combat the harmful effects of recommendation algorithms, platform companies could be transparent about how the algorithms work. To start with, they could provide scientific researchers with access to them.

Warning system for closed and encrypted channels

One way of combating the dissemination of disinformation on closed and encrypted channels would be to establish an independent warning system that identifies disinformation campaigns on sensitive social issues and issues warnings about them. The government and platform companies could facilitate this warning system.

Critical analysis of the revenue model of platform companies

Measures such as limiting the use of advertising technology and providing transparency about how recommendation algorithms work could conflict with the business model of platform companies. They might therefore be disinclined to take those measures. In that case, the government could go further, for example by compelling greater transparency about the use of recommendation algorithms or critically analysing the platform companies’ revenue model.

Investment in fact-checking remains important

Because fact-checking is important to provide certainty for internet users looking for reliable information, the government and platform companies could invest, or continue to invest, in facilities for fact-checkers.

Investment in media literacy remains important

The production and dissemination of disinformation could be reduced with technological measures and stricter regulation of platform companies. But there will always be safe havens on the internet, and internet users will therefore continue to be confronted with disinformation. The government must therefore continue to invest in media literacy.

Conclusion: platform companies are primarily responsible, but the government can intervene

With many of the above measures to combat disinformation, responsibility lies primarily with the platform companies. But given the public interest in preventing the harmful effects of disinformation, the government could decide to act if platform companies do not fully meet that responsibility. For example, the government could urge the platform companies to adopt an active policy on the detection and prevention of deepfakes or to monitor irresponsible use by advertisers of the possibilities of micro-targeting.

If urging the companies does not help, measures could be made compulsory. Those measures could also come at the expense of the platform companies' earnings model. Whether the government should take this step will depend in part on the seriousness of the threats to public debate and the democratic process arising from the polarising effect of recommendation algorithms or disinformation campaigns by advertisers facilitated by platform companies. To carry sufficient weight, compulsory measures should logically be taken at EU level.