"Personally, and for now, I'm not very worried about it, because the scenarios are not very concrete," said the professor emeritus of New York University, who came to California for a conference.

"What worries me is that we're building AI systems that we don't control well," he continued.

Gary Marcus designed his first AI program in high school — software to translate Latin into English — and after years of studying child psychology, he founded Geometric Intelligence, a machine learning company that was later acquired by Uber.

In March, he co-signed the letter from hundreds of experts calling for a six-month pause in the development of ultra-powerful AI systems like those of the start-up OpenAI, long enough to ensure that existing programs are "reliable, secure, transparent, loyal (...) and aligned" with human values.

But he did not sign the succinct statement by business leaders and specialists that made a splash this week.

Sam Altman, the head of OpenAI; Geoffrey Hinton, a prominent former Google engineer; Demis Hassabis, the leader of DeepMind (Google); and Kevin Scott, chief technology officer of Microsoft, among others, call for fighting the risks of human "extinction" "related to AI".

"Accidental war"

The unprecedented success of ChatGPT, OpenAI's chatbot capable of producing all kinds of text from simple prompts in everyday language, has sparked a race among the tech giants for this so-called "generative" artificial intelligence, but also many warnings and calls to regulate the field.

IBM's Christina Montgomery, New York University's Gary Marcus, and OpenAI's Samuel Altman are sworn in at the start of a hearing on artificial intelligence before a US congressional committee on May 16, 2023 © ANDREW CABALLERO-REYNOLDS / AFP

Some of those warnings come from the very people building these computer systems in pursuit of a "general" AI, with cognitive abilities similar to those of humans.

"If you really think this represents an existential risk, why are you working on it? It's a legitimate question," Marcus said.

"The extinction of the human species... It's quite complicated, actually," he said. "You can imagine all kinds of plagues, but people would survive."

There are, however, realistic scenarios where the use of AI "can cause massive damage," he said.

"For example, people could succeed in manipulating markets. And maybe we would accuse the Russians of being responsible, and we would attack them even though they had nothing to do with it, and we could end up in an accidental, potentially nuclear, war," he said.

In the shorter term, Gary Marcus is more concerned about democracy.

Generative AI software produces increasingly convincing fake photographs, and soon videos, at little cost. According to him, elections are therefore likely to "be won by the people best at spreading disinformation. Once elected, they will be able to change the laws (...) and impose authoritarianism."

Above all, "democracy is based on access to the information needed to make the right decisions. If no one knows what's true or not, it's over."

The author of the book "Rebooting AI" does not think the technology should be discarded wholesale.

"There is a chance that one day we will use an AI that we have not yet invented, which will help us make progress in science, in medicine, in the care of the elderly (...) But for now, we are not ready. We need regulation, and to make programs more reliable."

At a hearing before a US congressional committee in May, he defended the creation of a national or international agency to govern artificial intelligence.

The idea is also supported by Sam Altman, who has just returned from a European tour during which he urged political leaders to find a "fair balance" between protection and innovation.

But power must not be left to companies, warns Gary Marcus: "The last few months have reminded us how much they are the ones making the important decisions, without necessarily taking into account (...) collateral effects".

© 2023 AFP