
A unicorn in front of the White House: Not every image produced by Meta's image generator is so obviously a fantasy

Photo: Imagine with Meta AI

Together with other companies, Meta wants to push new initiatives for labeling images created with the help of artificial intelligence (AI). Mark Zuckerberg's company is evidently reacting to the political debate that flared up after fake pornographic images of pop star Taylor Swift were published. Also at play is the fear that photorealistic fakes could figure prominently in this year's election campaigns.

The Facebook parent company itself took precautions when it launched its image generator Imagine with Meta AI in the US: every image is marked with a clearly visible note on its origin. "In the coming months," the company announced, Meta will also label images that were generated with other providers' tools and then shared on Facebook, Instagram or Threads, provided they can be recognized as such.

Not visible to the naked eye

However, these visible watermarks can be covered up or cropped out relatively easily. The company therefore also relies on invisible signatures. First, the digital metadata, which today stores details such as the camera used or the place where a photo was taken, will in the future also carry a reference to the image generator that produced the picture. Second, watermarks that are invisible to the human eye but can be reliably identified by a suitable program are to be embedded in the images themselves. The advantage: even if AI fakes are spread via screenshot, the invisible signature should remain.
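What these two layers amount to can be sketched in a few lines of Python with Pillow and NumPy. Both the "ai_provenance" field name and the naive least-significant-bit watermark below are illustrative assumptions, not Meta's actual scheme; production watermarks are designed to survive compression, resizing and cropping, which this toy version is not.

```python
# Toy illustration of the two invisible-signature layers: a provenance
# note in the file's metadata, and a watermark hidden in the pixels.
# "ai_provenance" and the LSB scheme are illustrative assumptions only.
import numpy as np
from PIL import Image, PngImagePlugin

def save_with_metadata(img: Image.Image, path: str) -> None:
    """Layer 1: a provenance note in the PNG text metadata.
    Easy to read, but lost when someone takes a screenshot."""
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_provenance", "generated-by: example-image-generator")
    img.save(path, pnginfo=info)

def embed_pixel_watermark(img: Image.Image, bits: list[int]) -> Image.Image:
    """Layer 2: hide bits in the least significant bit of the red channel.
    Invisible to the eye and preserved by a lossless screenshot, though a
    single JPEG re-compression would already destroy this naive variant."""
    arr = np.array(img.convert("RGB"))
    red = arr[..., 0].reshape(-1)              # copy of the red channel
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | np.array(bits, dtype=arr.dtype)
    arr[..., 0] = red.reshape(arr.shape[:2])   # write the channel back
    return Image.fromarray(arr)

def read_pixel_watermark(img: Image.Image, n_bits: int) -> list[int]:
    """Detector side: recover the hidden bits from the red channel."""
    arr = np.array(img.convert("RGB"))
    return [int(b) for b in arr[..., 0].reshape(-1)[:n_bits] & 1]

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "white")
    marked = embed_pixel_watermark(img, [1, 0, 1, 1, 0, 1, 0, 1])
    save_with_metadata(marked, "marked.png")

    reloaded = Image.open("marked.png")
    print(reloaded.text.get("ai_provenance"))  # metadata layer
    print(read_pixel_watermark(reloaded, 8))   # pixel layer: [1, 0, 1, 1, 0, 1, 0, 1]
```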

Meta promises that in the future it will be able to recognize AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, as soon as those companies embed the signatures in their image generators. In Tuesday's announcement, Meta left open exactly when that will be the case. "This approach represents the cutting edge of what is currently technically possible," explains former British Deputy Prime Minister Nick Clegg, who is now responsible for global affairs at Meta.
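What such a check might look like on the platform side can also be sketched briefly: scan an uploaded file's metadata for known provenance markers before deciding whether to attach a label. The marker strings below are hypothetical stand-ins for whatever fields the providers ultimately agree on, and, as the article notes, a metadata check alone fails as soon as the file is screenshotted.

```python
# Hedged sketch of a platform-side provenance check. The marker strings
# are hypothetical, not the fields any real provider uses.
from PIL import ExifTags, Image

KNOWN_MARKERS = ("ai_provenance", "generated-by")  # hypothetical markers

def looks_ai_generated(path: str) -> bool:
    img = Image.open(path)
    # PNG text chunks, if the format carries any
    for key, value in getattr(img, "text", {}).items():
        if any(m in f"{key}={value}".lower() for m in KNOWN_MARKERS):
            return True
    # EXIF entries, if present
    for tag_id, value in img.getexif().items():
        name = ExifTags.TAGS.get(tag_id, str(tag_id))
        if any(m in f"{name}={value}".lower() for m in KNOWN_MARKERS):
            return True
    return False

print(looks_ai_generated("marked.png"))  # True for the file saved above
```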

AI is supposed to expose AI

But this covers only part of the problem. Meta itself admits that the industry is not yet in a position to label artificially generated audio or video, let alone to reliably identify unlabeled recordings after the fact. Nor will the signatures stop malicious actors from deliberately spreading fakes.

The fake images of Taylor Swift were first circulated by groups whose pastime is circumventing the safety mechanisms of popular image generators; subsequently removing a signature should pose no problem for such people. If it is even necessary: many image generators are released under open licenses and can be adapted by developers to their own needs.

To contain such negative consequences of AI technology, Meta is betting on yet more AI. "We are optimistic that generative AI could help us take down harmful content faster and with greater accuracy," writes Clegg, regardless of whether an AI generator was involved in creating that content or not.

tmk