Conversations with ChatGPT, shared on Twitter by fascinated Internet users, show a seemingly omniscient machine, capable of explaining scientific concepts, writing a theater scene, drafting a university essay… or even producing perfectly functional lines of computer code.

"His answer to the question + what to do if someone has a heart attack + was incredibly clear and relevant", told AFP Claude de Loupy, director of Syllabs, a French company specializing in automatic text generation.

"When you start asking very specific questions, ChatGPT can answer off the mark", but its performance remains overall "really impressive", with a "fairly high linguistic level", he believes.

The start-up OpenAI, co-founded in San Francisco in 2015 by Elon Musk (the Tesla boss left the company in 2018), received $1 billion from Microsoft in 2019.

It is known in particular for two automated content-creation programs: GPT-3 for text generation and DALL-E for image generation.

ChatGPT is able to ask its interlocutor for clarification, and "has fewer hallucinations" than GPT-3, which despite its prowess can produce completely aberrant results, says Claude de Loupy.

Cicero

"A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish. Today they are much better at responding coherently based on the history of requests and responses. They are no longer goldfish," notes Sean McGregor, a researcher who compiles AI-related incidents in a database.

Like other programs based on deep learning, ChatGPT has a major weakness: "it does not have access to meaning," points out Claude de Loupy.

The software cannot justify its choices, that is, explain why it assembled the words of its answers in one way rather than another.

However, AI-based technologies that can communicate are increasingly able to give the impression that they are really thinking.

Researchers at Meta (Facebook) recently developed a computer program dubbed Cicero, after the Roman statesman.

The software has proven itself in Diplomacy, a board game that requires negotiation skills.

"If he doesn't speak like a real person - showing empathy, building relationships and speaking the game properly - he won't be able to build alliances with other players," a statement from the social media giant details.

Character.ai, a start-up founded by ex-Google engineers, released an experimental chatbot online in October, which can take on any personality.

Users create characters based on a brief description and can then "converse" with a fake Sherlock Holmes, Socrates or Donald Trump.

"Simple machine"

This degree of sophistication fascinates but also worries many observers, who fear these technologies could be misused to deceive humans, for example by spreading false information or by creating increasingly credible scams.

What does ChatGPT "think" about it?

"There are potential dangers in building ultra-sophisticated chatbots (...) People might believe that they are interacting with a real person", recognizes the chatbot questioned on this subject by AFP.

Companies are therefore putting safeguards in place to prevent abuse.

On its homepage, OpenAI warns that the chatbot may generate "incorrect information" or "produce harmful instructions or biased content."

And ChatGPT refuses to take sides.

"OpenAI has made it incredibly difficult to get him to voice opinions," says Sean McGregor.

The researcher asked the chatbot to write a poem on an ethical issue.

"I am a mere machine, a tool at your disposal / I have no power to judge or make decisions (...)", the computer answered him.

"Interesting to see people wondering if AI systems should behave the way users want them to or the creators intended them to," Sam Altman, co-founder and boss of OpenAI, tweeted on Saturday.

"The debate over what values to give to these systems is going to be one of the most important a society can have," he added.

© 2022 AFP