Tech companies like TikTok, X and Facebook will soon be required to clearly identify AI-generated content on their platforms. The stated aim is to protect the upcoming EU elections from “disinformation” and “manipulation”.
“We know that this electoral period that’s opening up in the European Union is going to be targeted, either via hybrid attacks or foreign interference of all kinds. We can’t have half-baked measures,” EU Internal Market Commissioner Thierry Breton said in February.
According to Breton, it is not yet clear when tech companies will be forced to put special labels on manipulated content under the EU’s content moderation law, but the rules are expected to apply to TikTok, X, Facebook, Instagram and YouTube, among others.
Deepfakes have already surfaced in various contexts around the world, including an audio spoof imitating US President Joe Biden, and have caused concern among lawmakers. 2024 is also considered a particularly sensitive year, as Europeans elect a new European Parliament in June, while Americans hold a presidential election in November.
“Rapid reaction mechanism”
OpenAI, the company behind ChatGPT, has already announced that it will start tagging fake images, and Meta, which owns Facebook, Instagram and Threads, has said it intends to introduce similar measures soon.
In the EU, lawmakers have long pushed to make it easier to hold social media companies legally responsible for what is published on their platforms – including “disinformation”.
Major online platforms are already required to limit and counter what are known as “coordinated manipulation campaigns” and other “systemic risks” deemed to have a potential negative impact on electoral processes.
According to Breton, it is also important that the platforms put in place a “rapid reaction mechanism for any kind of incident”, and that the EU run simulations to verify that these systems work.
A deepfake is a piece of fabricated media, typically created with advanced artificial intelligence and machine learning to replace a person’s face or voice with someone else’s.
In practice, this produces realistic video clips or audio recordings that are difficult to distinguish from genuine material and can be used for everything from light entertainment and humorous clips to spreading disinformation and manipulating users on social media.