In the outlook we describe new measures that could be taken to combat the most important technology-driven threats to public debate and the democratic process.
Measures against deepfakes
Investment in detection of deepfakes
Platform companies could invest in the active detection of deepfakes in order to combat them. They will need to do so if they are to keep pace in a potential race with the producers and disseminators of increasingly advanced deepfakes.
Establishment of a hotline for malicious image manipulation
Companies like Snapchat, Instagram and TikTok, on whose platforms deepfakes are increasingly common, could create a hotline where users can report suspicions of malicious image manipulation.
Authentication of visual material and other messages
The digital authentication of visual material and other messages would enable internet users to verify whether material comes from a source they regard as reliable. That calls for a reliable system for registering digital hallmarks. The government and large technology companies could take the lead in this.
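One way such a system could work is a shared registry of content digests: a source registers a cryptographic hash of its material (the "hallmark"), and anyone can later check whether a piece of material still matches a registered entry. The sketch below is a minimal illustration of that idea, not any existing system; the registry, function names and example source are assumptions, and a real deployment would use digital signatures and a tamper-resistant registry service.

```python
import hashlib
from typing import Optional

# Hypothetical registry mapping content hashes to the source that registered
# them. In practice this would be a shared, tamper-resistant service operated
# by governments and large technology companies, not an in-memory dict.
TRUSTED_REGISTRY = {}

def register(content: bytes, source: str) -> str:
    """Register material under its SHA-256 digest (the 'hallmark')."""
    digest = hashlib.sha256(content).hexdigest()
    TRUSTED_REGISTRY[digest] = source
    return digest

def verify(content: bytes) -> Optional[str]:
    """Return the registered source of the material, or None if the
    material is unknown or has been altered since registration."""
    return TRUSTED_REGISTRY.get(hashlib.sha256(content).hexdigest())

original = b"...original image bytes..."
register(original, "news-agency.example")

print(verify(original))                # "news-agency.example"
print(verify(original + b"tampered"))  # None: the hallmark no longer matches
```

Even one altered byte changes the digest, so any manipulation after registration makes verification fail; that is what lets internet users distinguish authenticated material from modified copies.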
Restricting possibilities for micro-targeting
Monitoring the use of advertising technology
Platform companies could build a monitoring function into their services to combat abuse of the advertising technology they provide.
Technical possibilities for limiting advertising technology
Platform companies could impose restrictions on advertisers' selection of target groups and monitor whether advertisers use the advertising technology they provide responsibly.
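Such restrictions could be enforced automatically before a campaign runs. The sketch below shows one possible screening rule: reject targeting on sensitive attributes and block audiences small enough to count as micro-targeting. The attribute names, threshold and function name are illustrative assumptions, not any platform's actual policy.

```python
# Assumed policy: no targeting on sensitive attributes, and a minimum
# audience size to prevent narrowly targeted political messaging.
DISALLOWED_ATTRIBUTES = {"political_affiliation", "health_status", "religion"}
MIN_AUDIENCE_SIZE = 1000

def review_campaign(targeting: dict, estimated_audience: int) -> list:
    """Return a list of policy violations; an empty list means approved."""
    violations = []
    for attribute in targeting:
        if attribute in DISALLOWED_ATTRIBUTES:
            violations.append(f"targeting by '{attribute}' is not permitted")
    if estimated_audience < MIN_AUDIENCE_SIZE:
        violations.append("audience below the micro-targeting threshold")
    return violations

print(review_campaign({"age_range": "18-35"}, 50_000))        # [] -> approved
print(review_campaign({"political_affiliation": "x"}, 200))   # two violations
```

Logging the outcome of every review would also give the platform the monitoring function described above, since patterns of rejected campaigns reveal which advertisers repeatedly attempt irresponsible targeting.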
Providing transparency for internet users
Platform companies could provide internet users with better information about the use that advertisers make of advertising profiles.
Measures against the harmful effects of recommendation algorithms
A built-in pause for reflection in platform services
Recommendation algorithms of platform companies frequently reinforce the social and political preferences of users and – by extension – social divisions. To combat the harmful effects of this, platform companies could build a pause for reflection into the use of their services. In this way, users would be less likely to share information (and disinformation) impulsively.
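A pause for reflection can be implemented as simple friction in the share flow: a share request is queued rather than executed, and only goes through once the user confirms after a short delay. The class, method names and delay length below are assumptions made for illustration.

```python
import time

REFLECTION_DELAY = 10.0  # assumed pause, in seconds, before a share can go through

class ShareGate:
    """Queues share requests and releases them only after a reflection delay."""

    def __init__(self):
        self.pending = {}  # item_id -> time the share was first requested

    def request_share(self, item_id: str) -> None:
        """Record the request instead of sharing immediately."""
        self.pending[item_id] = time.monotonic()

    def confirm_share(self, item_id: str) -> bool:
        """Allow the share only if it was requested and the delay has passed."""
        requested = self.pending.get(item_id)
        if requested is None:
            return False
        if time.monotonic() - requested < REFLECTION_DELAY:
            return False  # still within the pause; ask the user to reconsider
        del self.pending[item_id]
        return True
```

The point of the delay is behavioural rather than technical: by the time the user is asked to confirm, the impulse to pass on a sensational (and possibly false) item has often faded.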
Providing transparency about recommendation algorithms
To combat the harmful effects of recommendation algorithms, platform companies could be transparent about how the algorithms work. To start with, they could provide scientific researchers with access to them.
Warning system for closed and encrypted channels
One way of combating the dissemination of disinformation on closed and encrypted channels would be to establish an independent warning system that identifies and issues a warning about disinformation campaigns on sensitive social issues. The government and platform companies could facilitate this warning system.
Critical analysis of the revenue model of platform companies
Measures such as limiting the use of advertising technology and providing transparency about how recommendation algorithms work could conflict with the business model of platform companies. They might therefore be disinclined to take those measures. In that case, the government could go further, for example by compelling greater transparency about the use of recommendation algorithms or critically analysing the platform companies’ revenue model.
Investment in fact-checking remains important
Because fact-checking gives internet users who are looking for reliable information a degree of certainty, the government and platform companies could invest, or continue to invest, in facilities for fact-checkers.
Investment in media literacy remains important
The production and dissemination of disinformation could be reduced with technological measures and stricter regulation of platform companies. But there will always be safe havens on the internet, and internet users will therefore continue to be confronted with disinformation. The government must therefore continue to invest in media literacy.
Conclusion: platform companies are primarily responsible, but the government can intervene
With many of the above measures to combat disinformation, responsibility lies primarily with the platform companies. But given the public interest in preventing the harmful effects of disinformation, the government could decide to act if platform companies do not fully meet that responsibility. For example, the government could urge the platform companies to adopt an active policy on the detection and prevention of deepfakes, or to monitor advertisers for irresponsible use of the possibilities of micro-targeting.
And if urging the companies does not help, measures could be made compulsory. Those measures could also come at the expense of the platform companies' revenue model. Whether the government should take this step will depend in part on the seriousness of the threats to public debate and the democratic process arising from the polarising effect of recommendation algorithms or from disinformation campaigns by advertisers facilitated by platform companies. To carry sufficient weight, compulsory measures should logically be taken at EU level.