For Meta (formerly Facebook), these are "fun" AIs; for others, they could be the first step towards "the most dangerous artifact in the history of mankind," to paraphrase the American philosopher Daniel C. Dennett in his essay against "counterfeit people".

The social network giant announced, on Wednesday, September 27, the launch of 28 chatbots (conversational agents), each said to have its own personality and designed especially for young users. There is Victor, a so-called triathlete able to "motivate you to give the best of yourself," and Sally, the "free-spirited" friend who will tell you "when to take a deep breath".

Internet users will also be able to chat with Max, an "experienced cook who will give the right advice", or engage in a verbal joust with Luiz, who is not afraid to be "provocative" in his way of speaking.

A chatbot in the image of Paris Hilton

To reinforce the impression of addressing a personality of its own rather than an amalgam of algorithms, Meta has given each of its chatbots a face. Thanks to partnerships with celebrities, these robots resemble American jet-setter Paris Hilton, TikTok star Charli D'Amelio or Japanese-American tennis player Naomi Osaka.

That's not all. Meta has opened Facebook and Instagram accounts for each of its AIs to give them an existence outside of chat interfaces, and is working to give them a voice starting next year. Mark Zuckerberg's group has also started looking for "writers specializing in character creation" to refine these "personalities".


Meta may present these 28 chatbots as an innocent exercise in mass entertainment for young Internet users, but all these efforts point to an ambitious project to build AI "as close as possible to humans," says Rolling Stone magazine.

This race to "counterfeit people" worries many observers of recent developments in research on large language models (LLMs) such as ChatGPT or Llama 2, its Meta-made counterpart. Without going as far as Daniel C. Dennett, who calls for locking up those who, like Mark Zuckerberg, venture down this path, "some thinkers denounce a deliberately misleading approach by these large groups," says Ibo van de Poel, professor of ethics and technology at the University of Delft (Netherlands).

"AIs cannot have a personality"

The idea of chatbots "endowed with personality is literally impossible," says this expert. Algorithms are incapable of demonstrating "intention in their actions, or 'free will', two characteristics that can be considered intimately linked to the idea of personality," says Ibo van de Poel.

Meta and others can, at best, imitate certain traits that make up a personality. "It should be technologically possible, for example, to teach a chatbot to express itself like the person it is modeled on," says Ibo van de Poel. Thus, Amber, Meta's AI supposedly resembling Paris Hilton, may have the same verbal tics as her human alter ego.

The next step will be to train these LLMs to express the same opinions as their models, a behavior much more complicated to program, because it involves creating a kind of faithful mental picture of all of a person's opinions. There is also the risk that these chatbots with personalities will go off the rails. One of the chatbots Meta tested quickly began expressing "misogynistic" opinions, the Wall Street Journal reported after consulting internal group documents. Another committed the mortal sin of criticizing Mark Zuckerberg and praising TikTok...

To build these personalities, Meta explains that it set out to endow them with "unique personal stories". In other words, the creators of these AIs wrote biographies for them in the hope that the robots would infer a personality from them. "It's an interesting approach, but it would have been beneficial to add psychologists to these teams to better understand personality traits," says Anna Strasser, a German philosopher who participated in a project to create a large language model capable of philosophizing.

Meta's anthropomorphizing of its AIs is easily explained by the lure of profit. "People will surely be willing to pay to be able to talk to and have a direct relationship with Paris Hilton or another celebrity," says Anna Strasser.

The more a user has the impression of communicating with a human being, "the more comfortable he will feel, the longer he will stay, and the more likely he will be to return," says Ibo van de Poel. And in the world of social networks, time – spent on Facebook and its ads – is money.

Tool or person?

It is also no accident that Meta is beginning its quest for "personality" AI with chatbots openly aimed at teenagers. "We know that young people are more likely to fall into anthropomorphism," says Anna Strasser.

But for the experts interviewed, Meta is playing a dangerous game by insisting on the "human characteristics" of its AIs. "I really would have preferred this group to devote more effort to better explaining the limitations of these chatbots, rather than doing everything to make them look more human," says Ibo van de Poel.

" READ ALSO Music and artificial intelligence: "the idea of a substitution of the artist is a fantasy"

The arrival of these powerful LLMs has upended "the dichotomy between what belongs to the realm of tools or objects and what is living. These ChatGPTs are agents of a third kind that sit between the two extremes," explains Anna Strasser. Human beings are still learning how to behave toward this strange new object, and by suggesting that an AI can have a personality, Meta encourages treating it more like another human being than like a tool.

This is dangerous because "Internet users will tend to trust what these AIs say," notes Ibo van de Poel. This is not just a theoretical risk: in Belgium, a man committed suicide in March 2023 after six weeks of discussions with an AI about the consequences of global warming.

Above all, if everything is done to blur the line between the world of AI and that of humans, "it could potentially destroy trust in everything we find online, because we will no longer know who wrote what," fears Anna Strasser. For the philosopher Daniel C. Dennett, this opens the door to the "destruction of our civilization, for the democratic system depends on the informed consent of the governed [which cannot be achieved if one no longer knows what and whom to trust]," he writes in his essay. So, between chatting with an AI that imitates Paris Hilton and destroying modern civilization, perhaps there is only a click.
