The Californian company OpenAI, behind the ChatGPT phenomenon, launched GPT-4 on Tuesday, March 14: a new version of the generative artificial intelligence technology that powers the famous chatbot, and a step closer to computer programs as "intelligent" as humans.

Microsoft, which has invested billions of dollars in the start-up, announced shortly afterward that it had integrated GPT-4 into Bing, its search engine, which has already offered ChatGPT features for a month.

"GPT-4 is a great multimedia model, less adept than humans in many real-life scenarios, but as good as humans in many professional and academic contexts," OpenAI said in a statement.

Enthusiasm and controversy

ChatGPT has generated a lot of enthusiasm, but also controversy, since it is freely available and used by millions of people around the world to write essays, lines of code, and scripts, or simply to test its capabilities.

With GPT-4, the chatbot will become "more creative and collaborative than ever", the company promises.

Unlike previous versions, the new model is equipped with vision: it understands not only text but also images, thanks to a partnership with another start-up, Be My Eyes.

However, it only generates text.

For now, only users of ChatGPT Plus, the paid version of the chatbot, and the million Internet users with access to the new Bing will be able to test GPT-4 (without image processing for the moment).

OpenAI has thus established itself as the leader in generative artificial intelligence (AI) with its programs producing texts or, like DALL-E, images.

The multimodal capabilities of GPT-4 are a step toward so-called "general" artificial intelligence, which the start-up's boss, Sam Altman, advocates.

The concept refers to AI systems with human cognitive abilities, or "smarter than humans in general", according to Sam Altman.

"Our mission is to ensure mainstream AI benefits all of humanity," he said on the company's blog on February 24.

"Never seen"

For now, the model lacks a crucial capacity: memory.

It was trained on data that stops in September 2021 and "does not learn continuously from its experiences", OpenAI explains.

On the other hand, it has gained academic ground: it passed the bar exam with a score in the top 10 percent of test takers, while the previous version, GPT-3.5, scored in the bottom 10 percent, the company boasted.

"GPT-4 can now apply to study at Stanford (a prestigious American university, editor's note). His ability to reason is NEVER SEEN!" Tweeted Jim Fan, an AI specialist who worked for Google and OpenAI. , and now at Nvidia.

He admitted to having performed worse than the model on some exams.

"The power of the algorithm will increase, but it is not a second revolution," said Robert Vesoul, CEO of the French company Illuin Technology, offering a more measured view.

"We didn't go from the Moon to Mars."

"Despite its capabilities, GPT-4 has limitations similar to previous models," OpenAI acknowledged.

"It is not yet completely reliable (it 'hallucinates', inventing facts and making reasoning errors)."

AI Race

The ChatGPT craze has set off a race for generative AI.

In the lead, Microsoft and Google have integrated automated creation tools into their online platforms and software, to facilitate the production of e-mails, advertising campaigns and other documents - not without hiccups and machine hallucinations.

Morgan Stanley announced Tuesday that it will use GPT-4, which makes it possible "to have all the knowledge of the most qualified person in wealth management - instantly", noted Jeff McMillan, one of the bank's executives.

Education giant Khan Academy and payment company Stripe will also integrate GPT-4 features.

This rapid progress in generative AI worries many intellectual and creative professionals, who already picture themselves reduced to the role of supervising chatbots in order to extract the best texts and images from them.

These technologies also have the potential to be used for nefarious purposes.

The company announced that it has hired more than 50 experts to assess new dangers that could emerge, for cybersecurity for example, in addition to the already known risks (generation of dangerous advice, faulty computer code, false information, etc.).

Their feedback and analyses should help improve the model.

"In particular, we collected additional data to ensure that GPT-4 refuses user requests about the manufacture of hazardous chemicals," OpenAI said.

With AFP
