Five years ago, Google CEO Sundar Pichai committed the internet company to a genuine revolution: artificial intelligence (AI) would change the company's products and processes dramatically, and the IT world would move from "mobile first" to "AI first".

In the meantime, AI can indeed be found across all of Google's offerings, improving the search engine, the YouTube video platform, and the company's smartphones and laptops.

Google has even developed its own specialized processors, which are particularly well suited to the core computations of learning algorithms.

Alexander Armbruster

Responsible editor for business online.


"At Google, we have to focus on highly sophisticated, deep technologies, AI in particular, and turn them into helpful products and features in the service of our mission," Pichai confirmed in an interview with several media outlets, including the FAZ. He pointed to a very specific form of AI in which he places high hopes: so-called artificial neural networks of unprecedented size, whose abilities to answer questions or, more generally, to deal with text and images are currently astounding many experts.

"There are big advances in these transformer models and large language models. And I believe that we are still at the very beginning."

AI with common sense?

At Google's recent developer conference, Pichai himself presented LaMDA 2, a dialog system that falls into this category.

The company had previously presented an AI system called PaLM with 540 billion parameters, the model's smallest adjustable values, whose number determines its capacity to learn and perform: the more parameters, the more powerful the model. At this scale, only a few players can currently keep up.
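As an illustrative aside (not from the article), a model's parameter count is simply the number of trainable values it contains. A minimal sketch, assuming a plain fully connected network, shows how quickly the count grows with layer width:

```python
# Illustrative sketch: counting the trainable parameters of a small
# fully connected neural network. Each layer with n_in inputs and
# n_out outputs contributes n_in * n_out weights plus n_out biases.

def count_parameters(layer_sizes):
    """Return the total parameter count for a dense network
    whose layer widths are given by layer_sizes."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights + biases
    return total

# A toy network: 784 inputs, two hidden layers, 10 outputs.
print(count_parameters([784, 256, 128, 10]))  # 235146
```

Models such as PaLM apply the same bookkeeping at a vastly larger scale, reaching hundreds of billions of such values.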

These new giant AI systems impress because their competence does not appear to be narrowly specialized on a single task; they exhibit broader abilities and can handle a wide variety of requests.

Google, however, is not the only company betting on this approach.

OpenAI, a company co-founded by the American entrepreneur Elon Musk, made headlines last year with its GPT-3 model. In China, the Beijing Academy of Artificial Intelligence (BAAI) presented its even larger AI system Wu Dao 2.0. And in Germany, the Heidelberg-based company Aleph Alpha, small compared with the other providers, is currently trying to keep pace with its Luminous model.

But do such AI systems really already have something like general knowledge or common sense, something that computer scientists have been working on for many years?

Are they actually able to understand context and bring it to bear meaningfully when answering questions or writing texts?

"We see that as the models scale, more capabilities emerge, new capabilities come in," Pichai said. "Whether or not they have some sort of common sense is almost a philosophical question. Take AlphaGo as an example: when that program plays Go, it makes moves that no one would ever have expected a machine to make; AlphaGo makes moves that are really new and surprising."

Experts are currently debating intensely what follows from this.

Nando de Freitas, a senior researcher at the AI powerhouse DeepMind, which also belongs to Google and which once developed AlphaGo, recently argued, for example, that making the models even larger, with even more data and even more computing power, is now the key to further progress.

Others, such as Meta's (Facebook's) chief AI researcher Yann LeCun, do not believe that this alone will be enough.