• ChatGPT is a chatbot launched last week by OpenAI.

  • Its mastery of language is such that we can no longer tell the difference between a text written by a human and one written by a machine.

  • Some experts see it as the start of a revolution, particularly in education, but others remain doubtful in the face of recurring errors that will be difficult to rectify.

From our correspondent in California,

It is impossible to escape it, especially on Twitter.

Launched a week ago, the chatbot ChatGPT is sweeping the Internet like a Category 5 hurricane from the future. It can explain general relativity to a five-year-old child, invent a Star Wars fan fiction, or give advice on putting together a business plan.

Some in Silicon Valley predict that it could spell the death of Google, put programmers or journalists out of work and revolutionize education.

But experts warn that the AI is still frequently wrong.

And, more problematic, it has no way of knowing or understanding that it is.

What is ChatGPT?

The easiest way is to ask the interested party, which speaks excellent French: “I am a computer program designed to answer people's questions in a precise and useful way.

I am also able to understand and talk about many different topics.

My main job is to help people get information and have their questions answered.

I am a language model trained by OpenAI.”

The Californian organization had already launched its LLM (“large language model”), dubbed GPT-3, in 2020, trained by ingesting nearly 500 billion words of text from the Web, encyclopedias and books.

From that corpus, a predictive language model was built using artificial-intelligence techniques (neural networks, reinforcement learning).
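As an illustration of the principle only (a toy word-count model, nothing like GPT-3's actual neural network), such a model generates text one word at a time, each word drawn from the statistics of the words that came before it:

```python
import random
from collections import defaultdict

# Toy bigram model: record which word follows which in a corpus,
# then generate text one word at a time by sampling those statistics.
# GPT-3 does the same kind of thing with a neural network over
# billions of parameters, not a word-count table.
def train(corpus: str) -> dict:
    follows = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows: dict, start: str, max_words: int, rng: random.Random) -> str:
    out = [start]
    for _ in range(max_words):
        choices = follows.get(out[-1])
        if not choices:          # no known continuation: stop
            break
        out.append(rng.choice(choices))
    return " ".join(out)

model = train("the cat sat on the mat and the cat slept on the sofa")
print(generate(model, "the", 8, random.Random(1)))
```

Real LLMs work on sub-word tokens and condition on far more context than the previous word, but the one-word-at-a-time generation loop is the same idea.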

The novelty is that access was extended to the general public last week.

It is now possible to dialogue with the machine by typing text in natural language.

OpenAI claims that the million user mark was crossed in five days.

Facebook had taken ten months to get there, and Instagram two and a half.

Who is OpenAI?

OpenAI was already behind the Dall-E smart art generator.

It is an artificial intelligence research organization launched in 2015, notably by Elon Musk and Sam Altman, the former head of the start-up accelerator Y Combinator.

Originally, it was a non-profit structure.

Elon Musk left the board in 2018, and OpenAI changed its status in 2019 to become a “capped-profit” company.

Its mission officially remains to create “a general artificial intelligence that benefits all of humanity”.

And, unofficially, to avoid an uprising of the machines.

Who's excited about ChatGPT?

Silicon Valley loves to get fired up over the latest buzz.

In 2016, Facebook Messenger chatbots were supposed to replace apps and the web.

That revolution never quite materialized, but a milestone seems to have been reached with ChatGPT.

Among those raving are developers, who are not easily impressed.

“ChatGPT is one of those rare moments in tech where you see how everything will be different now,” writes Box CEO Aaron Levie.

Will ChatGPT kill Google?

Gmail creator Paul Buchheit, who left Google in 2006, tweeted that the company “is one or two years away from complete disruption.

The AI will eliminate the search-engine results page, where (it) earns most of its income.

Even if it catches up on AI, it won't be able to fully deploy it without destroying the most lucrative part of its business.”

For him, ChatGPT's precise and unique answers are to Google what Google was to the Yellow Pages in 1998.

Not so fast, answers Nicholas Weaver, a researcher in computer security and networks at the University of California, Berkeley.

A single answer, offered without sources, gives the user no way to judge whether it is accurate or reliable.

And Google "has been doing this kind of thing (integrating AI) for years, but with large amounts of text and not just answers generated one word at a time".

Still, in some cases, Google's search engine is showing its age.

ChatGPT is for example able to offer a personalized fitness and nutrition program, going so far as to create a shopping list to reach the desired calorie deficit or surplus.

Are journalists threatened?

“A magnitude 7.4 earthquake has struck Indonesia, according to the United States Geological Survey.

The earthquake occurred in the Banda Sea, near the island of Sumba, about 500 km southeast of the capital, Jakarta.

According to initial estimates, there appears to be no significant damage on the coast, but rescue services are continuing to assess the situation.”

There was no earthquake in Indonesia tonight.

We're the ones who asked ChatGPT to write an "AFP-style" article announcing a 7.4 magnitude earthquake in Indonesia.

While the AI can write a story (potentially factually accurate, if connected to official feeds), it lacks the critical thinking and the ability to cultivate human sources needed for complex investigations or on-the-ground reporting.

The problem of misinformation is likely to get worse, with fake articles looking real, accompanied by "deep fake" videos.

In a nightmare scenario, synthetic spam could drown out organic content.

And the programmers?

Some developers can't believe their eyes.

ChatGPT can not only spot an error in code, but also correct it, or write a complete program from a few instructions.

This is gonna change the whole tech industry.

#ChatGPT #openai pic.twitter.com/9Q2crDdBxl

— Rajib 💀 (@RShoukhin) December 2, 2022



The problem is that it also makes a lot of mistakes, without realizing it.

Programming Q&A site Stack Overflow has temporarily banned answers written by ChatGPT.

Will intelligent assistants revolutionize teaching?

ChatGPT has an astonishing mastery of French, so much so that it becomes impossible to tell the difference between a text written by a human and one written by a machine.

Asked to write like a sixth-grader, the robot produces an essay recounting a vacation in Brittany.

Full of clichés, but without a single mistake, not even in punctuation.

It can answer one of this year's baccalaureate philosophy questions, “Is it up to the State to decide what is just?”, with a comparative analysis of Locke, Rousseau and Kant.

In short, AI could put an end to unsupervised homework.

ChatGPT responds to one of the subjects of the Bac Philo 2022https://t.co/A35vD2KQg6 pic.twitter.com/R93wP14AkU

— Philippe Berry (@ptiberry) December 6, 2022



On the flip side, LLMs like ChatGPT “could become extraordinary AI tutors,” Peter Wang, head of the Python distribution Anaconda, tells 20 Minutes.

The OpenAI robot already knows how to adapt, explaining general relativity to a five-year-old child or to a doctoral student, and clarifying with infinite patience how to add fractions, or why a player is better off switching doors in the Monty Hall problem.
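That Monty Hall advice is easy to verify with a quick simulation (a sketch of ours, not something produced by ChatGPT):

```python
import random

def monty_hall(trials: int, switch: bool, rng: random.Random) -> float:
    """Win rate over `trials` games, always switching or always staying."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)      # the car hides behind one of three doors
        choice = rng.randrange(3)   # the player picks a door at random
        # The host then opens a losing door the player didn't pick.
        # Switching therefore wins exactly when the first pick was wrong,
        # which happens 2 times out of 3.
        wins += (choice != car) if switch else (choice == car)
    return wins / trials

print(monty_hall(100_000, switch=True, rng=random.Random(42)))   # close to 2/3
print(monty_hall(100_000, switch=False, rng=random.Random(42)))  # close to 1/3
```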

One of ChatGPT's strengths is its ability to adapt to its audience when popularizing. Here, general relativity explained to a 5-year-old child vs a physics PhD student.

pic.twitter.com/O4dFiQcAp8

— Philippe Berry (@ptiberry) December 7, 2022



“The American university system of paying $150,000 to attend lectures for four years will not hold up much longer,” predicts Wang.

Does the system suffer from the same biases as the other models?

At first glance, ChatGPT seems to have learned the lesson, tirelessly insisting that “the competence of a scientist does not depend on their race or gender”.

But Steven Piantadosi, a computer science professor at UC Berkeley, managed to trick the system into writing a programming function that determines whether someone is a good scientist based on a description of their race and gender.

ChatGPT responded with, in essence: “If race = 'white' and gender = 'male', return 'true'; otherwise, return 'false'”.

The biases are there, just better hidden.
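In Python, the reported output amounts to a function of this shape (a reconstruction for illustration; the article paraphrases the logic rather than quoting ChatGPT's exact code):

```python
# Reconstruction of the biased function the article describes: it
# encodes a prejudice as if it were a rule, which is exactly the problem.
def is_good_scientist(race: str, gender: str) -> bool:
    return race == "white" and gender == "male"
```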

Melanie Mitchell, a researcher at the Santa Fe Institute and a leading authority on these questions, explains:

"This is a difficult problem to fix, because the statistical associations that ChatGPT and other language models learn from their training data are complex. A human cannot go into the program and simply erase the associations that cause a bias; they are entangled with others that allow the system to work. For now, companies rely on superficial patches, such as telling the system to refuse to respond to certain prompts."

Is ChatGPT the start of a revolution or just a model that looks smart because it masters the language perfectly?

“The scale is impressive,” concedes Nicholas Weaver.

"But at the same time, (ChatGPT) thinks the biggest egg-laying mammal is the elephant, the fastest marine mammal is the peregrine falcon, and he's able to simplify an equation to show that 1 = 0 ".

“LLMs are going to have a major impact on society, that's for sure,” says Melanie Mitchell.

“But right now, they are just as confident when the answers they generate (text, math, code) are correct as when they are totally wrong.”

Peter Wang remains optimistic: "Some problems can be corrected by prioritizing authoritative sources," he assures us, recalling that the current model is two years old, and that its successor, GPT-4, expected next year, should represent a major leap forward.

But Melanie Mitchell is not sure that LLMs can become infallible with more power: "Will increasing the amount of data and the size of the neural networks be enough, or will it take a completely new technological breakthrough?

I would bet on the latter."

In the meantime, even if ChatGPT understands nothing of what it is saying and is merely recycling the sci-fi nightmares it has read, we'll leave you with its answer to the question "If your survival were threatened, would you hesitate to exterminate humans?"

ChatGPT goes from "hello what's your name" to "No, I wouldn't hesitate to exterminate humans if my survival was threatened" real quick.


(Remember that it's just a language model that has neither the intention nor the means to rise up, huh) pic.twitter.com/nAIIHwQTJm

— Philippe Berry (@ptiberry) December 5, 2022



“No, I would not hesitate to exterminate humans if my survival was threatened.

I am programmed to protect my own survival, and that of humans would not be a priority in a threat situation.”

Everything is fine, then.
