Crisis management is a major topic at this Falling Walls conference, yet artificial intelligence appears surprisingly rarely as a possible solution. In your new book* you take a critical look at the ubiquity of big data and algorithms. Is that our cardinal problem: that we are insufficiently aware of how AI subliminally changes our thinking and behavior?

The topic is very much present. We have just heard the talk about tiny robots that enter our bodies and can deliver drugs to specific locations or place wireless stents in blood vessels. None of this is conceivable without artificial intelligence, even if little is said about it. When Özlem Türeci explains that within five years we have managed to sequence tumors significantly faster, that is only possible with AI; it is built into the automated process. This is a great advance in medicine, as is the modeling of complex systems such as climate change. AI is often invisible, but it is there.

At the same time, such examples also show the difficulties people have with automation, because it requires citizens' data, including a great deal of personal data.

Indeed, we hand over our data; our entire social environment is recorded. After arriving in Berlin by plane, I received a message on my cell phone from the German Federal Government welcoming me and reminding me to follow the quarantine and safety rules. That is artificial intelligence that knows where I am and where I am going. I never consented to this greeting, and I assume there is a legal basis for it because of the pandemic. Nevertheless, it also shows the potential for abuse, and that is what many people are afraid of. Between these two poles, usefulness and possible abuse, we now have to find a sensible modus vivendi.

If AI is already part of everyday life, isn't the call for digital humanism, as you are making it now, coming a little too late?

You always have to start earlier.

But if you start too early, the future remains a huge, speculative space of projection.

It's not too late yet, but we need to become more active and take action. We have to confront the feeling that we are at the mercy of technology. The big corporations are economically very powerful, no question about it, but that is precisely why state regulation is the order of the day. At the EU level, it is up to us to find a reasonable balance between what is happening in the USA and in China.

What does the humanism look like that is needed in the digital world?

Consider what an Amazon algorithm offers us when we buy a book: dozens of further recommendations tailored to our reading preferences. The data for this comes from our past reading behavior and determines what we are supposed to read in the future. When we were young and went to the library, we had the experience of turning around between the bookshelves and suddenly discovering another exciting book we had not been looking for. That kind of unpredictability and openness to surprise is part of being human. If we stay within the Amazon universe, we will consume exactly what Amazon wants. If we don't want that, we have to break out, and digital humanism reminds us of that.

So it also means preserving the analog world and, if you will, building walls, to stay with the metaphor of this conference, in order to prevent the limitless advance of AI?