Europe 1 with AFP // Photo credit: Meta 6:31 p.m., April 5, 2024

The American giant Meta will label every piece of AI-generated content on its social networks starting in May, a decision in line with its fight against disinformation. Generative AI could disrupt several elections this year, notably the American presidential election.

The American giant Meta will identify AI-generated audio, images and video on its social networks from May, a decision taken against the backdrop of the fight against disinformation in a year packed with elections.

Meta changes the way it processes content modified by AI

“We plan to start labeling AI-generated content in May 2024,” explained Monika Bickert, vice president of content policy at the parent company of Facebook, Instagram and Threads, in a blog post on Friday, specifying that the “Made with AI” label would be applied “to a greater number of video, audio and image content” than previously. Content will be marked by the platform if it detects “industry-standard AI image indicators” or if “people indicate that they are uploading AI-generated content,” she pointed out.

In addition to detecting these visible markers, Meta also intends to detect any trace of “watermarking”, a technique that consists of inserting an invisible mark inside an image at the moment an AI tool generates it. “A filter is better than nothing, but there will inevitably be gaps in the net,” Nicolas Gaudemet, AI director at Onepoint, told AFP.
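To make the idea concrete, here is a deliberately simplified sketch of how an invisible watermark can work: a known bit pattern is hidden in the least-significant bits of pixel values at generation time, and a platform can later test for that pattern. This is a toy illustration only; the signature, function names and pixel data below are hypothetical, and production schemes (including whatever Meta and the AI vendors actually deploy) are far more robust than this.

```python
# Toy invisible watermark: hide a known bit pattern in pixel LSBs.
# Purely illustrative; real watermarking schemes are far more sophisticated
# and survive compression, cropping and other transformations.

MARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed_watermark(pixels, mark=MARK):
    """Return a copy of `pixels` with `mark` written into the LSBs."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the mark bit
    return out

def detect_watermark(pixels, mark=MARK):
    """True if the first len(mark) pixels carry `mark` in their LSBs."""
    return all((pixels[i] & 1) == bit for i, bit in enumerate(mark))

image = [200, 13, 57, 88, 144, 3, 250, 91, 17, 42]  # fake grayscale pixels
tagged = embed_watermark(image)
assert detect_watermark(tagged)      # the platform's detector finds the mark
assert not detect_watermark(image)   # the unmarked original does not match
```

The “gaps in the net” Gaudemet mentions are easy to see even in this sketch: any tool that skips the embedding step, or any edit that rewrites those pixel values, leaves nothing for the detector to find.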


He cites the example of open-source software, which does not always use this type of watermarking when it creates an image, but notes that most consumer generative AI tools, such as those from Google, Microsoft or OpenAI, “today integrate this type of technology”. More generally, the Californian group announced that it will change the way it handles AI-modified content after consulting its Oversight Board, believing that “transparency and more context are now the best way to handle manipulated content”, “in order to avoid the risk of unnecessarily restricting freedom of expression”.

AI used to sow political chaos 

In practice, Meta now considers it preferable to add “labels and context” to such content rather than removing it, as it has done until now. “Contextualization is absolutely necessary,” agrees Nicolas Gaudemet, even if he believes we must wait to see exactly what form it will take at Meta.

The company nevertheless clarified that it would continue to remove from its platforms any content, whether created by a human or an AI, that violates its rules “against interference in the electoral process, bullying, harassment, violence (...) or any other policy contained in our community standards”. It also relies on its network of “around 100 independent fact-checkers” to identify “false or misleading” AI-generated content.


The parent company of Facebook announced in February its intention to label all AI-generated images, a decision taken against the backdrop of the fight against disinformation. Other tech giants such as Microsoft, Google, OpenAI and Adobe have made similar commitments.

The rise of generative AI has raised fears that people could use these tools to sow political chaos, notably through disinformation or misinformation, in the run-up to several major elections this year, including in the United States. Beyond these ballots, the development of generative AI programs has been accompanied by a flow of degrading content, according to many experts and regulators, such as fake pornographic images (“deepfakes”) of famous women, a phenomenon that also targets anonymous people.