THE CONVERSATION

ChatGPT and artificial "intelligences": how to tell truth from falsehood

Is ChatGPT that smart? © Busyfingie/Shutterstock

Text by: The Conversation


By Laurence Devillers, Sorbonne University

A genuine sensation of early 2023, the interactive system ChatGPT has triggered a wave of enthusiasm, followed by questions and concerns. In a very short time it attracted a million users and was tested on many, mainly textual, tasks: information requests, essay writing, fiction generation, computer programming, translation, poetry...

One reason for this popularity is that ChatGPT has shown impressive capabilities in many areas, as well as emergent abilities such as computer code generation and "multimodal" generation. Another is that its chat interface lets users interact with the large underlying GPT-3.5 language model more effectively and efficiently than before.

These results have raised the question of whether such large language systems could be used for professional, documentary, educational or artistic purposes. It is possible that they will transform certain professions and have a profound impact on teaching and education – children being particularly vulnerable to these systems.

An "intelligence"... in appearance only

ChatGPT produces texts that are almost grammatically perfect, even though it has no understanding of what it produces. It has some genuinely astonishing capabilities, and some of the cases shown as examples are remarkable. Its texts, often complex, can resemble the original data used for training and share its characteristics.

But beneath this appearance of truth, the results can sometimes be entirely wrong. What is the nature and status of these artificial words, produced without any associated reasoning? Understanding natural language involves complex and varied forms of reasoning – spatial, temporal, ontological, arithmetic – grounded in knowledge and connecting objects and actions in the real world, which ChatGPT is far from integrating, having no phenomenal perception.

François-Michel Letourneau tested ChatGPT on the Amazon.
Result: To understand deforestation, it is better to read his excellent book.
Conclusion: ChatGPT will not put him out of work! (That was his question) https://t.co/AJx286jZjL via @FR_Conversation

— Christian de Perthuis (@chdeperthuis) February 20, 2023

While a few selected examples may suggest that language models are capable of reasoning, they are in fact incapable of any logical reasoning and have no intuition, no thought, no emotions. ChatGPT speaks confidently, in good French as in other languages, after having ingested billions of data points, but it understands nothing of what it says and can very easily generate fake news, discrimination and injustice, and amplify the information war.

How to tell truth from falsehood: from technology to education

These non-transparent approaches can nevertheless be evaluated in many respects against existing datasets (known as benchmarks), which expose the systems' poor performance on logical reasoning problems such as deduction, induction and abduction – and on common sense.
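To make "benchmark" concrete, here is a minimal sketch of such an evaluation harness – not from the article, with invented deduction items; ask_model is a hypothetical stand-in for querying the system under test:

```python
# Toy sketch of a benchmark-style evaluation on logical reasoning.
# ask_model is a hypothetical placeholder for calling the real chatbot under test.
def ask_model(question: str) -> str:
    return "yes"  # a system that always answers "yes", for demonstration

# A few deduction items with gold answers (invented examples).
benchmark = [
    ("All cats are mammals. Tom is a cat. Is Tom a mammal?", "yes"),
    ("No fish can fly. A trout is a fish. Can a trout fly?", "no"),
    ("If it rains, the ground is wet. The ground is wet. Did it necessarily rain?", "no"),
]

# Score each answer against the gold label and report accuracy.
correct = sum(ask_model(q).strip().lower() == gold for q, gold in benchmark)
print(f"Accuracy: {correct}/{len(benchmark)}")
```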

Education can take up this topic to show the limits of this disembodied artificial language, and to have students work toward a better understanding of the concepts of digital modeling, machine learning and artificial intelligence.

Children are more credulous toward AIs

This is especially important because children may be particularly credulous toward systems endowed with speech, such as ChatGPT.

Richard Thaler, the American winner of the Nobel Prize in Economics, highlighted in 2008 the concept of the "nudge", a technique that encourages individuals to change their behavior without coercing them, by exploiting their cognitive biases.

In addition, we have been able to show that young children follow the suggestions of dialogue systems embedded in objects (such as a Google Home or a robot) more than those of a human. Our research, based on a game about altruism, was conducted as part of the Humaaine AI chair (Human-Machine Affective Interaction and Ethics) on digital nudges amplified by AI. This interdisciplinary chair, a kind of laboratory for studying human-machine interaction behavior, brings together researchers in computer science, linguistics and behavioral economics.

Will ChatGPT make us less gullible? https://t.co/ngN4rUchWW via @FR_Conversation #tweetsrevue #cm #socialmedia

— 👁 Patrice Hillaire 👁 (@hillairepatrice) January 27, 2023

Chatbots like ChatGPT could become a means of influencing individuals. They are currently neither regulated nor evaluated, and are very opaque. It is therefore important to understand how they work and what their limits are before using them – and here, schools have a big role to play.

Why is ChatGPT so powerful?

ChatGPT is a multilingual, multitask interactive system based on generative AI, freely available on the Internet. Generative AI systems rely on algorithms capable of encoding huge volumes of data (texts, poems, computer programs, symbols) and of generating syntactically correct texts for a large number of tasks.

Transformers are one such type of algorithm. They are neural networks that learn the most salient regularities among words across many contexts, and can thus predict the word or sequence most likely to follow in a given text.
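As an illustration – a sketch under assumptions, not the article's own material – the following snippet shows this next-word prediction with the small, openly available GPT-2 model through the Hugging Face transformers library; the prompt and the choice of model are arbitrary:

```python
# Minimal sketch: next-token prediction with a small transformer (GPT-2).
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The Amazon rainforest is threatened by"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a score (logit) to every vocabulary word at each position.
with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores for the *next* position into probabilities and print
# the five words the model judges most likely to follow the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p.item():.3f}")
```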

ChatGPT is the successor to the InstructGPT large language model (LLM), to which a dialogue interface has been added. InstructGPT works better than previous approaches: its developers managed to better align the generative AI (of the GPT-3.5 type) with user intent across a wide range of tasks. To do this, they used "reinforcement learning", meaning the AI also learns from the feedback humans give on its texts.

Increasing the size of language models does not in itself make them better at following user intent. Large language models can generate results that are deceptive, toxic, or simply useless to the user, because they are not aligned with the user's intentions.

But the results show that fine-tuning through human feedback is a promising direction for aligning language models with human intent, even if InstructGPT still makes simple mistakes.
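As a rough illustration of learning from human feedback, the toy sketch below nudges a tiny "policy" – a probability table over three canned answers – toward the answers a rater scores highest. Everything here (answers, rewards, update rule) is invented for the example; real RLHF instead applies gradient updates to billions of parameters:

```python
# Toy sketch of reinforcement learning from human feedback, drastically simplified.
import random

# The "policy": one probability per canned answer to a single prompt (invented).
policy = {
    "A confident but wrong answer": 0.4,
    "An honest 'I am not sure'": 0.3,
    "A correct, sourced answer": 0.3,
}

# Stand-in for a human rater (or a reward model trained on human ratings).
reward = {
    "A confident but wrong answer": -1.0,
    "An honest 'I am not sure'": 0.5,
    "A correct, sourced answer": 1.0,
}

learning_rate = 0.05
for _ in range(2000):
    # Sample an answer, collect its reward, scale its probability up or down,
    # then renormalize so the probabilities still sum to one.
    a = random.choices(list(policy), weights=list(policy.values()))[0]
    policy[a] *= 1 + learning_rate * reward[a]
    total = sum(policy.values())
    policy = {k: v / total for k, v in policy.items()}

for answer, p in sorted(policy.items(), key=lambda kv: -kv[1]):
    print(f"{p:.2f}  {answer}")
```

After a few thousand simulated rounds of feedback, most of the probability mass has shifted to the highest-rated answer – the same alignment pressure, in miniature, that human feedback exerts on InstructGPT.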

Thus, ChatGPT's technological performance comes from the size of its transformer-based generative AI (175 billion parameters), from the alignment of the AI's responses through reinforcement learning, and also from the possibility of holding a dialogue with the system.

ChatGPT's impact on the information search market

Microsoft-OpenAI's ChatGPT, with its search and generation power, is a threat to Google's query model. Google is positioning Bard as a more thoughtful and accurate interactive search engine, one not hampered by ChatGPT's current limitation: ChatGPT was trained on data available before September 2021 and therefore does not (yet) know the latest news.

The Chinese company Baidu also has a generative AI project, Ernie Bot. The "BigScience" project, initiated by Hugging Face with funding from the CNRS and the French Ministry of Research, has created "Bloom", a generative AI built on a 176-billion-parameter language model trained on multilingual, multitask data – and, above all, in "open science"! This innovative public/private collaboration involved more than a thousand researchers from many countries. It could give rise to a "ChatBloom".
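Because Bloom is released openly, anyone can query it. The sketch below – an illustrative example, not part of the article – uses the small bigscience/bloom-560m checkpoint, a lighter sibling of the full 176-billion-parameter model, assumed here so the example can run on modest hardware:

```python
# Minimal sketch: generating text with an open BigScience Bloom checkpoint.
# bloom-560m is a small sibling of the full 176B model; requires `transformers`.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")
result = generator("La science ouverte permet", max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```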

Ethical issues

The current context is marked by the achievements and applications of these widely disseminated systems whose massive impact requires ethical reflection.

These multilingual, multitasking and interactive generative AIs raise many questions: the data chosen to train them, the distribution of languages in a multilingual system, system optimization parameters, ownership of generated content, etc.

In addition, the generative power of these AIs is often complemented by filters that censor certain topics, and by logical deduction modules intended to verify the veracity of statements. A handful of humans (engineers, transcribers, evaluators) built this type of system, which is now used by millions of people.

These massively used artificial intelligence systems therefore pose major ethical challenges, including the transformation of the notion of information production, the relationship to the truth and the massive risks associated with disinformation and manipulation.

Laurence Devillers is a professor at Paris-Sorbonne University and a researcher at the CNRS Computer Science Laboratory for Mechanics and Engineering Sciences (Limsi).

This article is republished from The Conversation under a Creative Commons license. Read the original article.
