Surveillance cameras: “There are strict rules for the subsequent evaluation of video material with AI”

Photo: IMAGO / Michael Gstettenbauer

SPIEGEL:

Ms. Brantner, you helped negotiate the first law by a Western economic power on artificial intelligence, the European Union's AI Act. The regulations could come into effect in 2026. Isn't that way too late?

Franziska Brantner:

No, we are on schedule. In the EU we started early and have been working on an AI law for many months. ChatGPT came onto the market at the end of 2022 and showed that AI can already take over human tasks to a certain extent. There had never been such a powerful system before. But the AI Act now takes this new technology into account as well.

SPIEGEL:

The final draft of the law was recently published and has drawn sharp criticism. For some it goes too far, for others it is too lax. Now the final votes in the Council and Parliament are pending. Is the law in danger of failing?

Brantner:

I hope not. One thing is clear: this law must balance a wide variety of interests. On the one hand, we want to use the innovation potential that AI offers: for our health, in research, to automate tasks. On the other hand, we have to get the risks under control. Take, for example, automatic emotion recognition in the workplace: Nobody wants an AI to constantly evaluate whether Mr. Smith is looking grim - possibly while he is reading the boss's email. We have banned such use cases.

SPIEGEL:

Do you see the criticism from various quarters as evidence that the law is balanced?

Brantner:

It is never easy to bring very different interests into a good balance, especially when it comes to the complex regulation of a highly dynamic technology that affects virtually all areas of life. I'll give you another example: copyright. Artificial intelligence is trained on vast amounts of images, information and text so that it can create images and text according to the user's specifications. It's fascinating how well AI can do this. But the question arises: if an AI was trained on images by a certain artist and then creates images in that artist's style, who holds the rights to them? Here too we had to find a balance. In the future, manufacturers will have to make transparent which data they used to train their AI. Authors, in turn, will have ways to prevent their works from being used for AI training.

SPIEGEL:

Automatic facial recognition is particularly hotly debated. Law enforcement authorities are especially interested in it. For civil rights activists, however, the powers go much too far.

Brantner:

In principle, the AI regulation prohibits real-time video surveillance using AI-supported facial recognition. Exceptions apply, for example, to searches for missing persons or the prevention of an imminent terrorist threat. And there are strict rules for the subsequent evaluation of video material with AI. In both cases a judge or an appropriate authority must give approval, the process must be registered with the police, and data protection officers must have access to the systems. If the member states want, they can tighten the requirements even further.

SPIEGEL:

It sounds like you're quite happy with the arrangement.

Brantner:

Yes. For the first time, we are creating a European minimum standard for AI-based automatic facial recognition. I'm happy about that, because these requirements will then apply everywhere in the Union.

SPIEGEL:

Until recently there were heated discussions about the law within the federal government. There is said to have been resistance, particularly from the FDP. What was going on?

Brantner:

The examples I gave show that the AI regulation touches on the foundations of our democratic and economic order - it is about the fundamental rights of each and every individual, intellectual property, freedom of research and our ability to innovate. All of this had to be brought into balance, and that cannot be done without discussions.

SPIEGEL:

And what does the coalition's compromise look like now?

Brantner:

We examined the text intensively and agreed that the federal government will approve the AI regulation in Brussels. In doing so, we sent an early, clear signal of our ability to act and of legal certainty.

SPIEGEL:

Last year, numerous tech giants warned of potential dangers for humanity. That sounded a bit like a concern that AI could take over the world. Was something like this discussed among the member states?

Brantner:

No, at least not in the conversations I was involved in. However, one risk that I consider quite real is that providers who operate powerful AI systems could be hacked. That can potentially cause enormous damage. It is therefore important that we set guidelines here so that providers of high-risk AI protect themselves accordingly. If you run a chemistry laboratory that works with highly toxic substances, you also have to make sure that not just anyone can walk in. We likewise protect our critical infrastructure, such as our energy systems. The same must apply to AI providers. Other risks we have tried to rule out by banning certain use cases entirely.

SPIEGEL:

Which ones?

Brantner:

Social scoring, for example. In China, people's behavior is used to calculate their supposed trustworthiness. We banned that completely. And then there are areas of application that we say are so sensitive that they need to be regulated. For example, when AI is used in personnel selection. Then it must be ensured that it is not discriminatory. So that a woman with the first name Özlem has the same opportunities as one with the first name Franziska.
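To make the non-discrimination requirement for hiring AI concrete, auditors can compare a model's selection rates across applicant groups. The sketch below is a minimal illustration of such a disparate-impact check; the groups, decisions and the 0.8 rule of thumb are illustrative assumptions, not metrics prescribed by the AI Act.

```python
# Illustrative disparate-impact check for an AI hiring tool.
# The data and the 0.8 threshold are hypothetical; the AI Act requires
# non-discrimination but does not prescribe this particular metric.
from collections import defaultdict

# Hypothetical audit log: (applicant group, whether the model selected them)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, chosen in decisions:
    totals[group] += 1
    selected[group] += int(chosen)

rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f}")
# A ratio well below 1.0 (e.g. under the common 0.8 rule of thumb known
# from US employment law) would flag the model for a closer audit.
```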

SPIEGEL:

You already mentioned the example of emotion recognition, i.e. an AI evaluating a person's facial expressions and drawing conclusions from them. It is to be banned in the workplace, but not everywhere. Are there use cases here that are harmless?

Brantner:

In certain cases it can make sense, for example in medical systems used in therapy.

SPIEGEL:

At what point is an AI considered high-risk AI and thus subject to particularly strict requirements?

Brantner:

The AI regulation provides various criteria for classification as high-risk AI, which are set out in the text and the corresponding annex. This covers, for example, AI systems in the medical sector, in critical infrastructure and in education. There are exceptions for systems that do not pose a significant risk to health, safety or fundamental rights - an important point for companies and innovators. Incidentally, for AI models there is now the category of models with systemic risks, which are subject to stricter requirements. For these models, the number of computing operations needed for training determines how potent the model is - and therefore how potentially risky. Technically speaking, a general-purpose AI model is now considered risky if the cumulative computing effort for its training exceeds 10²⁵ FLOPs, i.e. floating-point operations.
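To give a sense of scale for that threshold: a common community heuristic estimates the training compute of a dense transformer model as roughly 6 × parameters × training tokens. The sketch below applies this heuristic to check hypothetical models against the 10²⁵ FLOP line; both the heuristic and the model figures are illustrative assumptions, not part of the regulation.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP threshold for
# general-purpose AI models with systemic risk. The ~6 * params * tokens
# rule of thumb for dense transformer training and the model figures below
# are illustrative assumptions, not taken from the regulation.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Estimate cumulative training compute with the ~6*N*D heuristic."""
    return 6.0 * parameters * training_tokens


models = [
    # (label, parameter count, training tokens) - hypothetical examples
    ("mid-size model", 7e9, 2e12),            # ~8.4e22 FLOPs, below threshold
    ("frontier-scale model", 1.5e12, 15e12),  # ~1.35e26 FLOPs, above threshold
]

for label, params, tokens in models:
    flops = estimated_training_flops(params, tokens)
    status = "systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below threshold"
    print(f"{label}: ~{flops:.2e} FLOPs -> {status}")
```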

SPIEGEL:

But such a standard can quickly become outdated.

Brantner:

That is precisely why it is not the only factor. The Commission made this proposal in the autumn, and we, together with France and Italy, said that we did not find this limit plausible. Now, among other things, the number of users will also play a role. And we have opened up the possibility of adding further criteria, also on the recommendation of the research community. In addition, the text of the regulation now states that the “state of the art” must be taken into account - that is, the current state of technology always serves as the benchmark. Technological progress is priced in.

SPIEGEL:

Can you give examples of AI applications that would be considered high-risk AI?

Brantner:

Credit checks, for example, or candidate selection in the hiring process.

SPIEGEL:

Critics fear that the EU's requirements could stifle innovation. Aren't you worried?

Brantner:

Research and development are clearly excluded from the scope of the regulation. This was important to us because Germany is very good in this area and this is where the nucleus of innovation lies. It is also about striking a good balance between legitimate security requirements, avoiding bureaucracy and openness to the technology. It is then crucial that the regulation is implemented with little bureaucracy and in an innovation-friendly way. We as the federal government will keep a special eye on this.

SPIEGEL:

What will change in 2026 for a person who uses something like ChatGPT? People are already using the program to write applications or create presentations for work.

Brantner:

For example, if this person creates an image using generative AI, it should carry a label, a kind of digital watermark. This transparency is something people will notice. For many, it may even be the main difference from the current situation.
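As a rough illustration of what such machine-readable labeling could look like, the sketch below attaches an "AI-generated" note to a PNG's metadata using Pillow. This is only one conceivable mechanism; the regulation does not mandate this specific format, and robust watermarks are typically embedded in the image pixels themselves rather than in metadata.

```python
# Minimal sketch: attach a machine-readable "AI-generated" label to a PNG
# via a text chunk, using Pillow. This is an illustrative mechanism only;
# the AI Act does not prescribe this format, and metadata labels (unlike
# pixel-level watermarks) are easily stripped.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), "white")  # stand-in for a generated image

meta = PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical model name
image.save("output.png", pnginfo=meta)

# A viewer or platform could later read the label back:
with Image.open("output.png") as labeled:
    print(labeled.text.get("ai-generated"))  # -> "true"
```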

SPIEGEL:

Do you think that providers like OpenAI and Microsoft will no longer offer such products in the EU as a precaution?

Brantner:

I would be very surprised, given the great importance of the European market. What we see is that Americans are looking at our law with great interest and considering what they can use for themselves.

SPIEGEL:

Certain moral questions are not addressed in the AI Act: What should AI do for us in the future? Should it write journalistic articles for us? Do we want actresses or teachers to be replaced by AI?

Brantner:

These are highly relevant questions for us as a society. We have a labor shortage in Germany. This means we have an interest in some jobs being taken over by AI or by robots. But we cannot leave it completely to chance where the best AIs are created and deployed. In education, for example, we certainly want to continue to have people who can teach our children something. In retirement homes, on the other hand, there are tasks that could be taken over by AI, which would give the staff more time for direct contact with the residents - and perhaps for playing a board game with them again. What matters now is which applications our companies develop and advance.

SPIEGEL:

How does a society reach agreement on these questions? The market alone cannot regulate this.

Brantner:

As I said, we have excluded areas in which AI is not allowed to be used because we consider it too dangerous. And high-risk applications must be compatible with our fundamental rights and non-discriminatory. Within what is allowed, as with other technologies and their applications, you will then see where resources and money go. Even with AI, politicians are not responsible for deciding everywhere what is good for consumers and what is good for companies. The products that offer real added value will prevail on the market. And as a society, as with other new technologies, we will discuss how to make the most of the opportunities without ignoring risks.

SPIEGEL:

Which point in the final version of the AI Act bothers you the most?

Brantner:

When implementing it, we have to be very careful to ensure that the AI regulation harmonizes well with other product regulations, for example in the medical sector, and does not lead to undesirable consequences.

SPIEGEL:

What is at stake if the law fails?

Brantner:

Then there would be no specific AI regulation, no standards and no planning certainty, and everything we have just talked about would be permitted. National governments would surely start to regulate, and we would end up with a patchwork that makes life difficult for our companies. There are European elections this summer. Who knows what the majorities in the European Parliament will look like then, and whether they will really want to strengthen fundamental rights. Now everyone has to ask themselves whether they would rather have no regulation at all, or one that brings a wide range of interests into a good balance on the essential points.