Hello politics, regulate AI in any case, but please do it exactly right
Imagine the automotive elite of the Western world: a few car CEOs, a few leading inventors, absolute car experts, some of whom own the world's biggest car companies. This troop speaks out at maximum volume and with the greatest possible reach, but with only a single sentence: "Mitigating the risk of extinction from automobiles should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
What would your reaction be? You would probably be surprised, and rightly so, or repelled by the obvious hypocrisy. You would probably ask yourself: why the hell are you working on it then, you badly crocheted potholders? Are you, of all people, really the ones who have to do the warning? And then with terms like "extinction", "pandemic" and "nuclear war", no less?
Exactly this situation has occurred, only not with the automobile but with artificial intelligence. The sentence quoted above was actually published by a good part of the world's AI elite: a new variant of the open letter, the open sentence. Addressed to politicians on the one hand and, apparently, to themselves on the other. It is part of a trend: in recent months, those who bear the greatest responsibility for artificial intelligence, its further development and its success have been warning against it.
Elon Musk first signed an open letter demanding that the development of so-called large language models be paused for at least six months, only to announce his own large language model, including an AI laboratory, days later, which does not sound particularly trustworthy in this context. Geoffrey Hinton left Google in order to warn against AI more freely; in the forty years before, the "Godfather of AI" had done more than almost anyone to increase the power of AI. Sam Altman created ChatGPT, the best-known and most powerful AI tool to date, signed the sentence quoted above, and went on a global interview tour on which he sometimes flirted with the message: hello politics, regulate AI in any case, but please do it exactly right, because otherwise either humanity will be wiped out or we will withdraw from Europe.
Personally, based on his publicly available work, I consider Altman to have acted with integrity so far, and he is a global AI leader in any case. The release of ChatGPT on November 30, 2022 is nothing less than the iPhone moment of artificial intelligence; one can divide the tech world, and the societies it shapes, into a "before" and an "after".
Shifting responsibility to politicians
But even leaving Altman aside, there are reasons for these mass warnings that are unfortunately less noble than well-meaning people might assume. From my point of view, the first and most important is: pre-emptive deflection of guilt. Artificial intelligence is powerful, and the likelihood that something bad will happen with it is high. It is then convenient for the AI crowd if their own role was that of the warner, because in the eyes of the public the warner is rarely the guilty party.
Such warnings from the AI elite shift responsibility, at least in part, onto politicians. If something happens, they can say: hey, we wanted you to regulate quickly and well, you didn't, and now you have to deal with the mess.
The second reason, which at least resonates in some of the warnings, has a name: gloom-and-doom marketing, or doomsday marketing. In product communication, fear scenarios can trigger a real run on the product. This is less counterintuitive than it might seem. A tool that could even destroy the world must, after all, be so powerful that you should definitely use it for your email marketing! At least that is a common way of thinking.
How well warnings work as a marketing strategy was already obvious back when records were still being sold in relevant quantities. "Parental Advisory: Explicit Lyrics" was the name of a sticker on records and CDs that generations of young people regarded as a mark of quality.
How marketing-relevant doomsday fears can be is also a regular topic in arms sales in the USA, the area in which, together with politics, gloom-and-doom marketing has its most intense effect. In the case of Elon Musk, this strategy was perhaps most clearly recognizable in his frequent AI warnings: he no longer has to say that he wants to build the biggest, best, most powerful AI tool, he simply warns against it.
In any case, I do not want to rule out that some of the warnings, by at least some of the people, are meant in full seriousness and with genuine concern. On the contrary, I share some of the concerns, as do many people who observe, evaluate and test technology professionally.
But here, too, there are many reasons to take a closer look, or to doubt outright, and for a perhaps surprising reason: the widespread inability of many professionals to properly assess the very technology they are building. It runs like a common thread through the work of many great, recognized, genuine AI experts that they have no idea how further development will play out socially. Or even just technologically.
Yann LeCun, chief AI scientist of Meta/Facebook, for example, is indisputably one of the most important, most cited and most recognized AI experts in the world; he has received many awards, including the Turing Award, the Nobel Prize of computer science. In January 2022, he claimed that even a distant-future GPT-5000 would not be able to predict even the simplest physical processes, specifically that a notebook lying on a table moves with the table when you push the table. GPT would "never learn this". On Twitter, someone cut this video clip together with ChatGPT (version GPT-3.5) explaining exactly this relationship, clearly and precisely, eleven months after the "never" forecast.
By far the worst start to a necessary debate
Geoffrey Hinton, the AI godfather and, of course, also a Turing Award winner, likewise has a track record of being surprised by developments he had considered impossible, or at least not possible so soon. By his own account, he even quit because of this. Elon Musk has so often been so spectacularly wrong with his predictions, especially about his own ventures, that he blocks, or has blocked, people on Twitter who remind him of it.
This means that many of the people now warning of the end of the world have very, very often been completely wrong with their predictions. And this time we are supposed to believe them that either the world is coming to an end or that everything is fine? No, sorry.
We can and must discuss what is perhaps the most powerful technology of all time, its misuse, and how to counter it. But by far the worst way to start such a debate is to conjure up the end of the world, especially if you benefit from the technology yourself and obviously have no intention of changing that.