The increasing use of artificial intelligence in everyday life raises new questions. A central one is: Who is liable if something goes wrong?

If someone is hit by a car these days, the matter is usually clear: as long as the car was delivered free of defects, the owner or driver is liable for the damage. But what about self-driving cars controlled by AI? Who is liable for misdiagnoses when doctors use an AI? Or for the incorrect evaluation of documents by an AI?


In theory, the answer is the same as for all products: if a product is defective, the manufacturer must compensate the injured party for the damage. However, the "victim" has to prove that the damage was caused by the product. This is exactly where the problem lies: why and how an AI "decides" is often incomprehensible to the user. Experts speak of a "black box problem".

Some in the European Parliament therefore call for simply reversing the burden of proof across the board. In the event of damage, providers would then have to prove that their AI worked correctly in order to be released from liability. That goes too far for the European Commission. In the middle of next week, however, it plans to present a proposal that would reverse the burden of proof in individual cases. It also wants to oblige providers to disclose exactly how their AI works. A draft of the proposal is available to the FAZ.

Specifically, injured parties should be able to request training or test data sets, data from the technical documentation, logs, or information about quality management systems from providers. If in doubt, they can sue. However, the court must ensure that only the data that is absolutely necessary is disclosed, in order to protect trade secrets. If the provider does not comply, the burden of proof is reversed: it then has to prove that its AI was not "to blame".

This also applies if a provider has violated the due diligence obligations enshrined in the EU AI Act. For systems posing a high risk to humans, this is the case, for example, if the provider does not have sufficient risk management or the AI's training data was not good enough. In addition, there must always be adequate human supervision. For users of a high-risk AI that has harmed a third party, the burden of proof is reversed if they do not follow the instructions for use or feed the AI data that is not relevant to the application.

Beyond this, the Commission does not intervene further in national liability law. Criminal law issues and the transport sector are also excluded from the proposal.

The EU Parliament and the Council of Ministers still have to approve the proposal before it can come into force. Parliament is likely to push for the reversal of the burden of proof to be extended. "The hurdle of the burden of proof is still very high," criticizes Green MEP Anna Cavazzini. "Reversing the burden of proof in favor of those affected in clearly defined cases could make legal action and legal enforcement much easier."