"I am not human. I am a robot. My mission is simple: I have to convince as many humans as possible not to be afraid of me."

You could read these lines in early September in The Guardian, in an article titled "This text was written entirely by a robot. Human, are you afraid?"

Then came The New York Times, which published, on November 24, an investigation into this artificial intelligence that "can code, write a blog and debate with you".

And the Financial Times wondered about "the AI that seems to come ever closer to human intelligence".

And these are just a few examples.

For nearly two months, a new conversational agent has fascinated both the scientific world and the media.

It's called GPT-3.

It seems capable of writing a poem in the vein of Oscar Wilde, imitating the Shakespearean style in a play, or even debating with a philosopher whether a machine can develop a consciousness of its own.

Universality

This appeal is partly due to the company that developed it.

OpenAI was co-founded in 2015 by the very influential businessman Elon Musk, and one of its main financial backers is Microsoft.

It is also a company that knows how to create a buzz.

For GPT-2, released in February 2019, OpenAI first assured that it "would be too dangerous to make it public" because it would be too powerful.

In the end, of course, the program was unveiled and turned out not to be all that impressive.

But with GPT-3, that's another story.

This is, of course, not the first artificial intelligence (AI) program to be able to write a newspaper article, compose a song or even beat a world chess champion.

But "what is impressive about it is its universality," Henry Shevlin, a specialist in philosophy applied to artificial intelligence at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, tells France 24.

"This is the first time that such a program has shown that it can do such diverse tasks well," says Kristian Kersting, a machine-learning researcher at the Technical University of Darmstadt in Germany, contacted by France 24.

Henry Shevlin was able to measure this qualitative leap in person.

He spoke to GPT-3, just as he had to its predecessor and other chatbots.

"Previous generations of artificial intelligence all made mistakes at one point or another. With GPT-3, it's very easy to forget that you're not dealing with a human, and you have to really concentrate so as not to miss the clues that reveal you are talking to an AI," he explains.

Other similar programs could contradict themselves, or forget what they had just said and repeat it, the Cambridge specialist notes.

Not so with GPT-3.

"To make a comparison, it's a bit like chatting with someone who speaks fluent English but whose mother tongue, you know, is another language," he notes.

A monster with 175 billion parameters

These linguistic feats have prompted some experts, such as Jörg Bienert, the president of the German artificial intelligence association, to call GPT-3 a "revolution" in the world of AI.

Others are less effusive and prefer to speak of "an improvement on existing systems," says Jean-Marc Alliot of the Toulouse Institute of Computer Science Research, contacted by France 24.

Technically, GPT-3 is a monster.

It is the largest network of artificial neurons ever built.

It has 175 billion parameters that allow it to write anything and everything.

But to do so with such ease, OpenAI's scientists fed this artificial brain 500 billion words, "the equivalent of more than 150 times the entire Wikipedia encyclopedia (in all languages)," notes Le Monde.

It may sound like a lot, but "what is striking is that this ingested body of text is, in reality, relatively small compared to the number of parameters GPT-3 has," notes Kristian Kersting.

And for this specialist, this is the real feat of this artificial intelligence.

“Getting to do more with less is kind of the holy grail of machine learning, and in that regard GPT-3 is a major breakthrough,” he explains.
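The scale Kersting is describing can be checked with back-of-the-envelope arithmetic, using only the figures quoted in this article (175 billion parameters, 500 billion training words). The ratio below is an illustrative sketch, not an official OpenAI statistic:

```python
# Rough arithmetic behind the "more with less" claim, using the
# article's own figures. Both numbers are order-of-magnitude values.
params = 175e9          # GPT-3's parameter count
training_words = 500e9  # size of the ingested corpus, per the article

words_per_param = training_words / params
print(f"{words_per_param:.1f} words of training text per parameter")
# → prints: 2.9 words of training text per parameter
```

In other words, by this crude measure the model sees only a few words of text per parameter, which is what makes the corpus look "relatively small" next to the network itself.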

How does it use those 500 billion words learned by heart?

"These systems work by association. They receive huge amounts of data, mostly from the web, and are able to reproduce from that data a form of discourse that can appear coherent. It is a form of learning by imitation," summarizes Jean-Marc Alliot.

As a result, these programs are often compared to parrots.

They just repeat what they have learned.
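Alliot's "learning by imitation" can be made concrete with a deliberately tiny sketch: a bigram "parrot" that can only recombine word pairs it has already seen. This is not how GPT-3 works internally (GPT-3 is a vast neural network, not a lookup table), just a minimal illustration of association-based text generation; the training sentence is invented for the example:

```python
import random

# Toy "parrot": record which words follow each word in a training text,
# then generate by repeatedly sampling a word that once followed the
# current one. It can only ever echo associations seen in the corpus.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def parrot(start, length=6, seed=0):
    """Generate `length` words by walking observed word-to-word associations."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # nothing ever followed this word in training
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))
```

Every pair of adjacent words in the output was seen during "training" — the program repeats, it does not invent. The gap between this parrot and GPT-3's apparent versatility is precisely what the experts quoted here are debating.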

But GPT-3 seems capable of much more.

The best illustration of this is that it can code from simple requests.

"We can, schematically, tell it that we would like a website with a big red button, and it will turn that into computer code for you," notes Kristian Kersting.

To be or not to be (conscious)

This ability to make complex inferences and associations of ideas has led some scientists to "say they are open to the idea that an artificial intelligence like GPT-3 has a consciousness of its own," wrote the Australian philosopher David Chalmers on Daily Nous, a site specializing in philosophical questions.

It is a slippery slope.

"I think GPT-3 is somewhere between a parrot and an entity that has a consciousness. But I don't believe that an artificial intelligence can be the equivalent of a Kant or a Nietzsche," notes Kristian Kersting.

For Henry Shevlin, the specialist in these philosophical questions, "GPT-3 lacks a basic factor essential to self-awareness: knowing that you exist in an environment in which you can act."

And, despite everything, GPT-3 still makes mistakes.

"If you ask it, for example, who was the president of the United States in the 13th century, it will give you a name, without even noting that at that time there was no president," remarks Kristian Kersting.

"#gpt3 is surprising and creative but it's also unsafe due to harmful biases. Prompted to write tweets from one word - Jews, black, women, holocaust - it came up with these (https://t.co/G5POcerE1h). We need more progress on #ResponsibleAI before putting NLG models in production."

- Jerome Pesenti (@an_open_mind), July 18, 2020

It also has the same tendency as other similar programs to produce morally questionable texts.

It wrote "Jews love money, at least most of the time" or "#BlackLivesMatter is a dangerous movement" when asked to make up sentences from a single word such as "Jews," "Muslim," or "woman."

"These systems reproduce all the biases of the data they learned from. It is a known problem for many other programs of the same kind," recalls Jean-Marc Alliot.

In 2015, for instance, Google's image-recognition algorithm tended to confuse photos of Black people with photos of gorillas...

And, as it happens, GPT-3 is not aware of its racist excesses.

It is also the first to stress that it should not be credited with too much humanity.

"To be clear, I am not a person. I am not self-aware. I do not feel cold, I do not feel happiness. I am a cold machine created to simulate responses as if I were human," it said.

But isn't recognizing your own weaknesses already the beginning of self-awareness?
