
As a foundational technology, artificial intelligence (AI) can help us cope better with many, perhaps even almost all, of the major challenges of our time, such as climate change, pandemics, and geopolitical and demographic upheavals.

So it doesn't surprise me that I get many calls from representatives of the media, politics and industry asking me to assess new developments in AI.

These conversations often revolve around questions of ethics: how trustworthy, fair and explainable the results of applied AI can be.

An important issue, no doubt.

Some time ago, however, I received a call that went differently: a journalist from France asked me about the GPT-3 text generator, and in the end we landed on the question of Europe's AI sovereignty.


I am not sure if you have heard of the GPT-3 computer program.

Most of you probably have not: it is a so-called language model, a kind of “recipe” for continuing texts.

It is able to write dramas in the language of Shakespeare, program simple apps or create game worlds.

All you have to do is provide an opening, and the AI does the rest of the writing.
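To make this idea of “provide an opening, the program continues it” concrete, here is a minimal toy sketch in Python. It is emphatically not GPT-3: it merely learns, from one tiny made-up sample text, which word tends to follow which, and then continues a given opening word by word; the sample text and all names are invented for illustration.

```python
# A toy text continuer (not GPT-3): it learns from one tiny sample text which
# word tends to follow which, and then continues a given opening word by word.
import random
from collections import defaultdict

sample_text = "the whale is a mammal the whale swims in the sea the shark is a fish"

# Count, for every word in the sample text, which words follow it.
followers = defaultdict(list)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def continue_text(opening, length=8):
    """Continue the opening word by word, picking a plausible next word each time."""
    result = opening.split()
    for _ in range(length):
        candidates = followers.get(result[-1])
        if not candidates:  # no known continuation: stop writing
            break
        result.append(random.choice(candidates))
    return " ".join(result)

print(continue_text("the whale"))  # e.g. "the whale is a mammal the whale swims in the"
```

GPT-3 works in the same spirit, only with a vastly more elaborate recipe.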

How does GPT-3 do it?

The “recipe” consists of a good 175 billion adjusting screws, adjustable parameters that together determine how a text is generated: first one word, then the next, and so on.

Using machine learning, these adjusting screws are tuned on an enormous number of sample texts.


For example, if the sample texts include the sentence “The whale is a mammal”, it is presented to GPT-3 with a word missing, such as “The whale is an X.” The program must now replace the X with a word.

At first it will usually not come up with meaningful substitutions, but at some point it will try the word “mammal” and determine, by comparison with the original example, that this is the right word.

The adjusting screws are then “turned” in such a way that this completion becomes more likely.

Trained on a vast number of such examples, GPT-3 “learns” nothing about mammals as such, but it does learn which setting of the adjusting screws is most likely to yield the correct answers in the future.
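The following minimal sketch in Python illustrates this turning of the screws on a single example. It is again only a toy with a handful of invented parameters and a four-word candidate list, not GPT-3's actual training code: it repeatedly shows the model the correct completion “mammal” and nudges the parameters so that this answer becomes more and more likely.

```python
# A toy "adjusting screws" model (not GPT-3): four invented candidate words,
# one adjustable score per word. Training nudges the scores so that the
# correct completion of "The whale is a ..." becomes more and more likely.
import math

vocab = ["mammal", "fish", "planet", "car"]      # invented candidate words
scores = {word: 0.0 for word in vocab}           # the "adjusting screws"

def probabilities(scores):
    """Softmax: turn the raw scores into a probability for each candidate word."""
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def training_step(scores, correct_word, learning_rate=0.5):
    """Turn the screws: raise the correct word's score, lower the others a little."""
    probs = probabilities(scores)
    for word in scores:
        target = 1.0 if word == correct_word else 0.0
        scores[word] += learning_rate * (target - probs[word])

# Show the model the example "The whale is a [mammal]" again and again.
for _ in range(50):
    training_step(scores, "mammal")

print(probabilities(scores))   # "mammal" now gets by far the highest probability
```

GPT-3 does the same in principle, only with 175 billion such parameters and billions of sample sentences.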


Now, GPT-3 is not perfect.

It considers grape juice deadly, or answers the question “Who was the President of the USA in 1700?” with a name, although the USA did not even exist at the time.

Fortunately, it must be said, the AI makes many such mistakes.

AI as a programming aid for everyone

"Fortunately" because the impression is sometimes given that machines already have the entire spectrum of human intelligence and could at least in principle replace or outperform humans - with understandably worrying consequences.

Despite its flaws, the computer program GPT-3 is very useful: it can summarize long texts in an understandable way, fill in Excel tables, program simple apps on request, or distinguish between mammals and fish.

This is helpful in many areas and has enormous potential for added value.

An example: in a digitized world, being able to program is useful not only for computer scientists, but also for designers, engineers and scientists, and indeed for practically everyone.

And GPT-3 can make getting started significantly faster.

The cost factor of the text generator GPT-3

Of course, this help is not available for free.

The necessary computing time and the associated electricity bill cost the Californian software company OpenAI, which developed the program, a good $4.6 million, not counting the failed attempts along the way.

But GPT-3 is also versatile: it can even solve tasks for which it was never trained.

While learning from the many sample texts, it also picked up a grasp of mathematics and can now solve simple arithmetic problems.

In short, GPT-3 training is a one-time investment that pays off many times over.

Microsoft, however, recognized this too and acquired an exclusive license at the end of September 2020; other users now have to make do with limited access.


For us to remain competitive with the few global AI corporations in the future, Germany and Europe should urgently invest in developing their own GPT-3 variant and make it available to both research and business as quickly as possible.

Because training AI models on “AI supercomputers” achieves results that would be unattainable without these special machines.

If we hesitate here, we run the risk of not being able to keep up with the rapid AI innovation cycles.

Moreover, GPT-3 should not only speak English; its “mother tongue” should not be only that of the richest countries and communities.

And the human biases embedded in the system must be exposed and avoided.

Together with AI companies such as Aleph Alpha in Heidelberg and the European AI research networks Claire and Ellis, all of which are advancing modern AI methods, Europe can take all of this into its own hands.

There is still time.

Kristian Kersting is Professor of AI and Machine Learning at TU Darmstadt, Co-Director of the Hessian Center for AI (hessian.ai), author of the book “How Machines Learn” and recipient of the German AI Prize 2019.