The “Made with AI” label will be applied to more video, audio and photo content across Meta’s platforms (Shutterstock)

The American tech giant Meta intends to introduce a dedicated label to identify audio, images, and video clips generated by artificial intelligence on its social networks starting next May, according to a blog post published Friday whose contents were reported by Agence France-Presse.

Monika Bickert, vice president of content policy at the parent company of Facebook, Instagram, WhatsApp and Threads, explained, “We plan to start labeling AI-generated content in May 2024,” noting that the “Made with AI” label will be placed “on more video, audio and photo content” than before.

She indicated that the group will apply these labels across its platforms when it detects “industry-standard signals of AI-generated images,” or when “people indicate that they are uploading AI-generated content.”

The American group announced that it will change the way it handles content manipulated with artificial intelligence, after consulting its Oversight Board, saying that “transparency and additional context are now the best way to address manipulated content” and to “avoid the risk of unnecessarily restricting freedom of expression.”

Meta now prefers to add “labels and context” to this type of content rather than removing it, as it had done until now.

However, Meta explained that it will continue to remove any content from its platforms, whether created by humans or by artificial intelligence, if it violates its rules “against voter interference, bullying, harassment, violence (…) or any other policy in our Community Standards.”

The group also relies on its network of “about 100 independent fact-checkers” to identify “false or misleading” content produced with artificial intelligence.

Facebook’s parent company announced last February that it wanted to label any image generated by artificial intelligence, a decision made against the backdrop of the fight against misinformation.

Other technology giants such as Microsoft, Google and OpenAI have pledged to take similar measures.

The growth of generative artificial intelligence software has raised fears that these tools will be used to sow political chaos, particularly through misinformation or the distortion of facts, especially in a year that features a series of major elections, most notably the presidential election in the United States.

Beyond these electoral concerns, the spread of generative artificial intelligence programs has been accompanied by a torrent of abusive content, according to many experts and regulators, including fabricated pornographic images and clips of famous women created with deepfake technology, a phenomenon that also targets ordinary people.

Source: Agence France-Presse