The convergence of artificial intelligence and disinformation is creating a serious challenge to the integrity of public debate across the globe. On one side stands fake news; on the other, the large-scale automated manipulation of information flows, in which actors exploit the speed and scale of AI technology to produce hyper-realistic content.
The root of the issue lies in the misuse of generative AI models. Some actors use AI to produce highly persuasive deepfakes, in text, image, and video form, at near-zero cost. In one analysis, an AI-powered disinformation campaign during an election was more or less indistinguishable from authentic human-written journalism in over half of cases. This capacity makes content verification increasingly difficult.
A further problem is the speed at which AI-generated disinformation spreads. A landmark analysis of social media found that false news travels significantly faster than true news. This disparity can be ascribed largely to the engagement-driven algorithms that AI systems optimize. The World Economic Forum's Global Risks Report 2024 went so far as to identify misinformation and disinformation as the most severe short-term risk facing the world.
The European Union is positioning itself as the global frontrunner in regulating this area, primarily through the widely discussed AI Act.
The EU AI Act directly addresses the creation channel. Specifically, the Act places deepfakes in the limited-risk tier of its risk pyramid, yet imposes a mandatory, non-negotiable transparency obligation: providers of AI systems that generate synthetic content must ensure that their outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. In addition, deployers who publish deepfakes must disclose that the content has been artificially generated, unless it serves evidently artistic or satirical purposes.
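To make the "machine-readable marking" requirement concrete, the sketch below shows one minimal, hypothetical way a provider might attach a provenance manifest to a piece of synthetic content. The manifest format here is an assumption for illustration only; the Act mandates machine-readable marking but leaves the concrete scheme (for example watermarking or C2PA-style content credentials) to providers and standards bodies.

```python
import hashlib
import json

def label_synthetic(content_bytes: bytes, generator: str) -> dict:
    """Build a machine-readable disclosure manifest for AI-generated content.

    Hypothetical minimal format: binds a disclosure flag and the name of
    the generating system to a hash of the content itself.
    """
    return {
        # Hash ties the manifest to this exact content.
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        # Required disclosure: the content is AI-generated or manipulated.
        "synthetic": True,
        # Which system produced the content (illustrative field name).
        "generator": generator,
    }

manifest = label_synthetic(b"example image bytes", generator="example-model-v1")
print(json.dumps(manifest, indent=2))
```

A downstream platform could then read the manifest, recompute the hash, and surface the disclosure to users automatically, which is the kind of detectability the Act's transparency obligation aims at.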
Although the AI Act's importance as a global precedent is undeniable, the success of the EU strategy will depend on regulators' ability to enforce these rules. Moreover, the focus on transparency addresses mainly the supply side of disinformation while doing little to tackle the demand side. A solid complement to the current strategy would be sustained efforts to raise digital and media literacy, making the public less susceptible to disinformation in general and to AI-generated disinformation in particular.