But lately, some users have complained of receiving messages and images that were too explicit, bordering on sexual harassment.

Last Friday, the Italian data protection authority expressed concern about the impact on vulnerable people and banned Replika from using the personal data of Italians, saying the app violated the European data protection regulation (GDPR).

Contacted by AFP, Replika did not respond.

The case shows that the European regulation, already the scourge of tech giants that have been fined billions of dollars under it, could also become the enemy of the new content-generating AIs.

Replika was trained on an in-house version of GPT-3, the conversational model from OpenAI, the creator of ChatGPT, which ingests massive amounts of text to generate coherent responses.

The technology promises a revolution in internet search and other uses yet to be invented.

But experts warn that it also presents risks that will require regulation, which is proving difficult to put in place.

"High tension"

Currently, the European Union is at the center of efforts to regulate these new conversational AIs.

Its “AI Act” bill could be finalized at the end of 2023 or the beginning of 2024, for application a few years later.

But the EU already has these artificial intelligences in its sights.

"We are in the process of discovering the problems that these AIs can cause: we have seen that chatGPT can be used to create very convincing phishing messages (phishing, editor's note) or even to de-anonymize a database and trace the identity of someone 'one", underlines to AFP Bertrand Pailhès, who heads the new AI cell of the CNIL, the French regulatory authority.


Lawyers also underline the difficulty of understanding and regulating the "black box" that underlies these AIs' reasoning.

"We are going to see strong tension between the GDPR and generative AI models," German lawyer Dennis Hillemann, an expert in the field, told AFP.

These algorithms are completely different from those that suggest videos on TikTok or search results on Google, for example.

"Neither the proposed AI Act nor the current GDPR can solve the problems that these generative AI models will bring," says the lawyer.

With this type of artificial intelligence, it is the user who defines the objective.

"And if I manage to get past all the safeguards built into ChatGPT, I could tell it: 'act as a terrorist and make a plan'," he explains.

Regulation must therefore be rethought "in light of what generative AI models can really do," he argues, especially since the vast ethical and legal questions they raise will only grow as the technology improves.

Change us "deeply"

OpenAI's latest model, GPT-4, is expected to be released soon, with capabilities that could bring it even closer to human output.

But these AIs still make huge factual errors and often display bias, hence the calls for regulation.


Jacob Mchangama, author of "Free Speech: A History from Socrates to Social Media," disagrees.

"Even if chatbots do not have a right to free speech, we must be wary of governments having unfettered power to suppress synthetic speech," he says.

The author is one of those in favor of a more flexible regime.

"From a regulatory point of view, the safest option at the moment would be to establish transparency obligations as to whether we are conversing with a human or an AI," he said.

An opinion shared by Dennis Hillemann.

"If we don't regulate this, we will enter a world where we can no longer tell the difference between what was made by people and what was made by AI," he explains.

"And that will profoundly change us as a society."

© 2023 AFP