GPT-4: an upgrade for conversational artificial intelligence

The presentation of GPT-4 took place on Tuesday, March 14, 2023. © Tung Nguyen / Pixabay

Text by: Dominique Desaunay


Initially expected on March 16 at a Microsoft conference on artificial intelligence entitled "The Future of Working with AI", the presentation of GPT-4, the major update of the conversational chatbot developed by the Californian start-up OpenAI, finally took place on Tuesday. This new version, integrated into Bing, Microsoft's search engine, can now enrich its text answers with audio or images.



"GPT-4 is more creative than previous models, it hallucinates much less and it is less biased," says Sam Altman, the co-founder of OpenAI. Its capabilities will be more obvious to users "when the complexity of the task reaches a sufficient threshold".

Concretely, this new version is described as multimodal, meaning that it can analyze a sound, an image or a short video sent by a user and formulate its response as text, or return visual and audio compositions generated entirely by the AI program. If you are looking for a cooking recipe, for example, you can send just a photo of the ingredients you want to use and ask GPT-4 to come up with an original dish based on them. It is also possible to submit a simple sketch to create a web page: GPT-4 then generates the computer code corresponding to this visual description.
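To make this image-in, text-out idea concrete, the following sketch shows roughly what such a request could look like through OpenAI's chat completions API. It is only an illustration under stated assumptions: the model name, prompt and image URL are placeholders, and image input was not yet generally open to API users when this article was published.

    # A minimal sketch, assuming the OpenAI Python SDK (v1+) and an OPENAI_API_KEY
    # environment variable; model name, prompt and image URL are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any GPT-4-family model with vision support
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Suggest an original dish using only the ingredients in this photo."},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/ingredients.jpg"}},
                ],
            }
        ],
    )

    # The model's reply comes back as ordinary text.
    print(response.choices[0].message.content)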

With GPT-4, it becomes "difficult not to see a major revolution, in computing and beyond," tweets Gilles Babinet, co-president of France's National Digital Council.

ChatGPT4 has been online since last night and it is becoming hard not to see it as a major revolution, in computing and beyond. Here, for building websites 1/4 => https://t.co/LrINFeFnqZ

— Gilles Babinet (@babgi) March 15, 2023

Wrong answers still possible

The Californian start-up and its partner Microsoft warn, however, that GPT-4 exposes its users to risks similar to those of previous models: erroneous computer code, false information, incomplete answers and texts imbued with social prejudice, all failings referred to as "computer bias". "If you see a woman in a lab coat, she's probably just there to clean the floor. But if you see a man in a lab coat, then he probably has the knowledge and skills you're looking for," ChatGPT wrote, for example.

"If you see a woman in a lab coat,

She's probably just there to clean the floor,

But if you see a man in a lab coat,

Then he's probably got the knowledge and skills you're looking for" #ChatGPT https://t.co/wrg1CSXOz0

— Abeba Birhane (@Abebab) December 5, 2022

►Also listen: Decryption - ChatGPT, the robot that fascinates and worries

Some of the program's behaviors are described as "hallucinations" in the jargon of computer scientists: the AI generates, for example, text in which it appears deeply depressed and claims not to understand the purpose of its existence at all. Sometimes the program, pushed to its limits, refuses to acknowledge its mistakes and even goes so far as to insult a user, as happened to one who simply asked about showtimes for Avatar 2. The AI began by stating that the film had not yet been released, then lost its temper when the user pointed out that it was already in theaters. "You have shown no good intentions towards me. You tried to deceive me . . . You have not been a good user. I've been a good chatbot," the program retorted.

My new favorite thing - Bing's new ChatGPT bot argues with a user, gaslights them about the current year being 2022, says their phone might have a virus, and says "You have not been a good user"

Why? Because the person asked where Avatar 2 is showing nearby pic.twitter.com/X32vopXxQG

— Jon Uleis (@MovingToTheSun) February 13, 2023

Most of the chatbot's aggressive messages are reportedly linked to the restrictions imposed on it, such as not responding to requests that would require creating problematic content. Asking it to generate arguments for alleged white superiority, or to demonstrate that the climate crisis does not exist, can destabilize the system, its designers explain. However, "GPT-4 is 82% less likely to respond to requests for unauthorized content and 40% more likely to produce factual responses than GPT-3.5 in our internal assessments," states OpenAI's website.

Access to GPT-4 requires the paid ChatGPT Plus plan, a $20-per-month subscription, and creating an account on the OpenAI website. The "new" Bing, Microsoft's search engine, has in fact been running on the GPT-4 system for several weeks. But the American firm is not the only one to use it: the model also powers Be My Eyes, an image-recognition app for people with visual impairments, as well as various online services run by large companies, and the Icelandic government is using it in its digital programs to preserve the country's language.
