RFI explains

Artificial intelligence: what are we really talking about?

Initiated in the United States in 1956, artificial intelligence (AI) has now thoroughly permeated our daily lives, from social networks to GPS navigation, from facial recognition to medical diagnostics and industrial robotics. It fascinates, frightens, and monopolizes the attention of researchers, citizens and states alike. To understand what we are talking about, we must go back to basics: what AI is, its history, how it works, what it can offer and what its dangers are. Nicolas Sabouret, professor of computer science at Paris-Saclay University and researcher at the Interdisciplinary Laboratory of Digital Sciences (LISN-Univ), answers RFI's questions.

Artificial intelligence: threat or opportunity for our human societies? © Imaginima/Getty

Text by: Anoushka Notaras


1. RFI: What is artificial intelligence?

Nicolas Sabouret: To understand, we must go back to the origin of the term "artificial intelligence".

In the 1950s, the British researcher Alan Turing worked on computation, in particular the theory of computability and complexity. It must be remembered that computer science is the science of information processing, and that transforming information is computation. Turing, the spearhead of the discipline, defined what is computable and what is complex to compute. A visionary, he imagined that machines doing such computations would one day be able to play chess, drive a car, or perform tasks that we humans do with our intelligence. He said at the time: "If machines can do that, we can talk about 'machine intelligence'." This is what later gave rise to the term "artificial intelligence".

Beyond Turing's vision, there are two important points to understand. The first is that at no point do researchers say that the machine is intelligent. The machine calculates; it does not think. It does things better than we do, or at least as well as we do in some areas. But it does so with calculation, not with intelligence. The Dutch researcher Edsger Dijkstra famously said: "Asking whether a computer can think is as stupid as wondering whether a submarine can swim." That quote sums it all up. When we understand the term "artificial intelligence" in this way, we see that we are not talking about intelligent machines. We are talking about imitating and reproducing human abilities with the help of calculation.

Also listen: Should we be afraid of artificial intelligence?

The second point is the notion of computational complexity: Turing understood that some problems are complicated, even infeasible for a machine. He then wondered how to solve them by calculation. To explain, let's take chess as an example. In the 1930s, the mathematician John von Neumann proposed an algorithm that made it possible to calculate the perfect game of chess and win every time. But achieving it would require a number of operations on the order of ten to the power of 120, which is impossible. AI researchers therefore look for solutions that run in a reasonable computational time, knowing that the exact solution is out of reach. We accept approximate solutions that give a correct result most of the time. In artificial intelligence, these are called heuristics.

The term artificial intelligence is quite misleading. In fact, it is computer calculation in which we have simplified the problem so that it works most of the time, even though we know it is not possible to make it work perfectly.
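The idea described above, cutting a search short and falling back on an approximation, can be sketched in a few lines. This is a toy illustration, not any real chess program: the game tree, its scores and the depth limit are all invented for the example.

```python
# Depth-limited minimax: an exact search of chess would take ~10^120
# operations, so instead we stop at a fixed depth and use a heuristic
# (an approximate evaluation) for the positions we did not explore.

def minimax(node, depth, maximizing, children, evaluate):
    """Search the game tree down to `depth`, then trust the heuristic."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)  # a heuristic guess, not the true game value
    values = [minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids]
    return max(values) if maximizing else min(values)

# A made-up two-ply game tree with invented leaf scores.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = minimax("root", 2, True,
               children=lambda n: TREE.get(n, []),
               evaluate=lambda n: SCORES.get(n, 0))
print(best)  # move "a" guarantees min(3, 5) = 3; move "b" only min(2, 9) = 2
```

The point of the heuristic is exactly the trade-off the interview describes: the answer is only "correct most of the time", but it arrives in a reasonable computational time.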

2. When was AI born and how has it evolved over time?

Artificial intelligence began in 1950 with the publication of Alan Turing's article "Computing Machinery and Intelligence", in which he asked whether "machines can think" and proposed his famous Turing test, the imitation game.

It was in 1956, when researchers met at Dartmouth College in the United States, that the term "artificial intelligence" was used for the first time. The organizer of the conference, the American mathematician John McCarthy, decided not to keep Turing's term "machine intelligence" and chose "artificial intelligence" instead. From the end of the 1950s, there was a frenzy of excitement about machines "capable of doing everything" that "will change our lives", in television reports, on the radio, in newspaper articles, and so on.

AI remained very popular until the early 1970s. But the results were not there, so enthusiasm waned and AI experienced its first "winter" until the early 1980s. Then there was a rebound with expert systems, based on handwritten rules, that tried to mimic human intelligence. It lasted five years, and then the momentum faded. This was the second winter of AI, which lasted from 1985 to 2005.

There was IBM's attempt with Deep Blue in the 1990s to give AI a new lease of life. But it didn't work, because people saw that Kasparov could not keep up with a machine able to calculate an enormous number of moves in advance. Is the machine really intelligent? It didn't really help AI.

In 2005, two pretty incredible things happened that got AI off the ground again.

The first is the development of graphics cards by the video game industry. Graphics cards have the ability to simultaneously and very quickly do a very large number of additions and multiplications.

In the 1990s, the researchers Yann LeCun and Yoshua Bengio proposed neural networks that seemed to work well but required computing power that was not available at the time. Around 2005, under the banner of "deep learning", they showed that graphics cards would allow several layers of neurons to be computed simultaneously, performing better than the two layers that were feasible before. It worked, and the neural network technique began to convince researchers.

The second is Google's creation of a research unit to tackle the still-unsolved problems of games. They took a classical algorithm used for Go and connected a neural network to it so that the algorithm gained in performance. The game of Go is interesting because you can make the machine play against itself, which lets it practice. It automatically adjusts the parameters of the neural network at full speed by playing millions of games against itself, until it learns to play well. This mix of classical computational techniques and neural networks led, in 2015, to AlphaGo's first victory over a professional Go champion, and soon after over the world's best players. This event revived the whole field of AI, and people began investing in it again.

Since 2015, no new AI winter has set in, and I think that is thanks to researchers' cautious communication about the limits of AI, relayed by journalists to the general public: "Be careful, AI does not work every time, it cannot do everything, and we are working on it." This is unlike the 1960s and 1980s, when it was announced that within ten years everything would be solved.

Subsequently, different neural network techniques were developed, including the ones we are talking about today: generative transformer algorithms, which are used to build generative systems such as ChatGPT and DALL-E.

3. What are the areas of application of AI that we find in our daily lives?

AI is hidden everywhere. When you send a letter by mail, the sorting machine has used AI algorithms to read postal codes since the 1990s. Another example: why do elevators manage to stop gently at the right floor? Thanks to AI algorithms built into their electronics.

There are AI algorithms that we use every day. The first is searching for information via a search engine like Google. Then there are GPS navigation aids. And social networks: many people are not aware of it at all, but the algorithm that offers them posts is AI. If you liked Rihanna's post, then you'll like Justin Bieber's post. If you search for new shoes, for example, it is again an AI algorithm that will show you shoe ads on your Instagram account.

4. What other areas are using AI?

In medicine, we have been developing tools to help with medical diagnosis for twenty years, and they work quite well. But the difficulty of putting them into practice lies in the time needed to enter all the information into the machine. A general practitioner does not really want to spend half an hour typing in all the information when he can diagnose the patient by examining him directly.

On the other hand, automatic image recognition algorithms allow real complementarity between humans and machines. The machine makes a kind of pre-diagnosis on the image, which the doctor, who has the entire file, can then confirm or not.

These things are starting to fall into place and I think that in a few years, we will really succeed in doing smart things with the use of AI in the medical context.

AI is also used in decision-support tools. A water utility will use AI to test hypotheses about the water distribution network before implementing them in reality. Or EDF will use AI algorithms to generate load curves that make it possible to study how household consumption will evolve. These AI algorithms replicate things that humans do, which is where the human-simulation side of AI comes in handy.

Another big advance in AI that is not visible to the general public, although it is widely used, concerns sound processing. Today, there are AI algorithms in all sound processing systems, which makes it possible to have very good quality sound on the radio, DVDs, etc.

AI will also revolutionize everything related to the conservation of intangible heritage (text, image, audio) by making it possible to store, search and organize astronomical quantities of data.

AI is also used by aeronautics and industry to optimize production lines.

5. How does AI work?

The first AI technique is based on rule-writing, which was very popular in the 1970s and 1980s because it really was what worked best at the time. If I ask you, for example, how you manage to drive from Paris to Marseille, you will tell me the steps you follow. We then write rules by hand to reproduce human reasoning, programming things like "if I see a road sign, then I turn right; if a bear crosses the road, then I brake so as not to hit it", and so on.

Once these rules are written, there are calculation techniques to implement them, and this yields quite incredible results. When you send robots to Mars, they work with rule-based systems. It is quite adaptive and you control everything that happens. We know where we put the approximations, so we know what mistakes they can cause. When the robot makes a mistake, we know why. There are automatic diagnostic and planning techniques for tasks of this type, and they work very well.
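A rule-based system of the kind described here can be sketched very simply: hand-written conditions checked in order, each tied to an action. The rules below are invented for illustration; the useful property the interview highlights is that when the system acts, you can trace exactly which rule fired.

```python
# A toy rule-based ("expert system") sketch: hand-written if-then rules,
# applied in order. When the system makes a choice, we know which rule
# produced it, which is why mistakes are easy to diagnose.

RULES = [
    (lambda p: p.get("obstacle") == "bear", "brake"),
    (lambda p: p.get("sign") == "turn-right", "turn right"),
]

def decide(perception):
    """Return (action, rule index) for the first matching rule."""
    for i, (condition, action) in enumerate(RULES):
        if condition(perception):
            return action, i      # traceable: rule i explains the action
    return "keep driving", None   # default when no rule matches

print(decide({"obstacle": "bear"}))   # -> ('brake', 0)
print(decide({}))                     # -> ('keep driving', None)
```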

The second AI technique is machine learning. The idea is no longer to tell the machine how to do things. We decide which variables are important to study, and then we ask the machine to automatically calculate the links between these variables. The machine is provided with a program structure in which values are missing, and it is asked to find the values that make it work.

The image I like to give is that of an audio mixing desk. Whatever sound you feed in, you have to find the right position of the knobs to get the desired sound out. Faced with the infinite number of possibilities, the machine is trained by giving it a multitude of examples, so that it gradually adjusts the parameters that allow it to calculate the best possible output. By moving all the knobs a little at a time, we end up finding values that work very well most of the time. Incidentally, the term machine learning is rather ill-chosen; researchers in the field talk about training.

Also listen: How to meet the challenges of artificial intelligence?

The most widely used machine learning technique today is neural networks, which have nothing to do with human neural networks. Although Frank Rosenblatt was inspired by them when he built his machine in 1957, he never claimed to have made artificial neurons.

An artificial neural network is a succession of chained additions and multiplications. In the past, we only knew how to build single-layer neural networks, because we did not have the machines to compute more. Today, a deep network is composed of about ten layers, and a computer like yours or mine can run this type of neural network without any problem. From three layers on, we speak of "deep learning".
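"A succession of additions and multiplications" is literal, as this bare-bones sketch shows: three small layers chained together, with weights picked by hand purely to demonstrate the arithmetic (a real network would learn them by training).

```python
# A tiny neural network forward pass: each layer is just weighted sums
# (multiplications and additions) followed by a simple threshold (ReLU).
# The weights below are invented; no training is involved.

def layer(inputs, weights, biases):
    """One layer: weighted sum of inputs per neuron, then ReLU."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [1.0, 2.0]                                          # input values
h1 = layer(x,  [[0.5, -0.2], [0.3, 0.8]], [0.1, 0.0])   # layer 1
h2 = layer(h1, [[1.0, 0.5], [-0.4, 0.9]], [0.0, 0.2])   # layer 2
out = layer(h2, [[0.7, 0.3]], [0.0])                    # output layer

print(round(out[0], 3))
```

Chaining ten such layers instead of three is what the text calls a deep network; the arithmetic stays exactly the same, there is simply much more of it, which is why graphics cards matter.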

Neural networks can be used in many different ways. With transformers, we train neural networks to transform any concept into numbers, and then, thanks to another neural network, we produce a result to generate text, sound or images. If we ask an image-generating AI for "a pope in a white down jacket", it will combine the numbers for "pope" and "white down jacket" to make an image of a pope in a white down jacket.
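"Transforming a concept into numbers" can be shown in miniature. In this heavily simplified sketch, each word maps to a small list of numbers and a prompt is represented by combining them; the numbers are invented, whereas a real transformer learns its representations from data and combines them in far more sophisticated ways.

```python
# A toy "concept to numbers" illustration: each word gets a hand-made
# vector, and a whole prompt is represented by averaging its word
# vectors. Real models learn these numbers; these are made up.

EMBED = {
    "pope":   [0.9, 0.1, 0.0],
    "white":  [0.0, 0.8, 0.1],
    "jacket": [0.1, 0.2, 0.9],
}

def embed_prompt(words):
    """Average the word vectors to get one vector for the prompt."""
    vecs = [EMBED[w] for w in words]
    return [sum(component) / len(vecs) for component in zip(*vecs)]

v = embed_prompt(["pope", "white", "jacket"])
print([round(c, 3) for c in v])
```

The generating network then works from numbers like these rather than from the words themselves, which is what lets it combine concepts it has never seen together.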

ChatGPT works the same way. It was given all the texts of the French language, from which it can generate text that closely resembles what a human might have written. On the other hand, if the question asked is absurd, for example "tell me about rabbit eggs", it will talk about rabbit eggs. It will say absurd things, but it will say them very well. You should know that there is a little secret behind ChatGPT: it was made to generate a great many sentences, and humans (we are talking about thousands of people over thousands of hours) corrected each sentence by hand to make it the most credible possible answer to the question asked. It is thanks to these rather thankless "fine tuning" tasks that ChatGPT became very good at giving the impression of speaking like a human, at avoiding racist statements, and at knowing how to say "I don't know".

Finally, the difference between these two techniques lies in the limits of programming time, and therefore of the human cost of development. In practice, these learning algorithms very often give much better results than what a computer scientist could achieve in the same time by coding by hand. Parameter tuning is tedious work, and the machine does it very well. That is why machine learning works so well.

6. What is the difference between weak AI and strong AI?

The AIs that exist in our daily lives and that are talked about in the press are all weak AIs, as opposed to the kind of dream that is strong AI. Very few researchers imagine that strong AI will arrive soon.

Strong AI is a notion proposed by John Searle, a philosopher specializing in questions of language and human intelligence. He said that if an artificial intelligence were able to handle completely different problems (driving a car, playing chess, making a sandwich), we could call it "strong" in that it would be close to human intelligence.

We are therefore working instead on improving the various weak AI techniques to solve increasingly complex problems. Each AI algorithm solves a family of very specific problems and is not necessarily reusable in other contexts. For example, an AI that plays chess very well has to be retrained if you want it to play the game of Go, and it cannot be asked to drive a car. For that, another AI is needed.

7. What is changing with the development of so-called generative AI?

I am very skeptical about the fears being expressed about generative AI. Obviously, as a teacher, one can worry about students cheating more easily. We saw the same thing when Wikipedia came out. But teachers spotted quite quickly when a page had been copied, because the student had not understood anything. The extra step is that ChatGPT synthesizes the Wikipedia page, and it does so pretty well. We can worry about students cheating even more, but I don't think they will. On the contrary, I think people need to be taught how to use ChatGPT. The idea is that the machine can produce already well-prepared elements that we can rework afterwards, a bit like when we go to Wikipedia to look for information and then correct it.

When it comes to images, I am more reserved, since you cannot, unless you are an expert in image manipulation, rework the images produced by a generative system. So we take them as they are, and the problem is that they look real when they are totally false; they do not describe any reality. With text, people are somewhat careful. But we have become accustomed to considering that if an image is seen in the media, then it is true. It will therefore be necessary to accept that an image can be the product of a machine. This is where I join the people who express fears.

Generative AI algorithms will also allow researchers to attack other AI problems, such as the notion of causality, which is very complicated to capture and currently has to be written by hand. From my point of view as a researcher, this is really a step forward that allows us to progress. From the point of view of the general public, I fully understand the concerns.

When supermarket cashiers, for example, were replaced by automatic checkouts, there was certainly AI to read barcodes or recognize the fruit placed on the scale. But the loss of cashiers' jobs is not a technology problem; it is a problem of societal choice. We must ensure that those who would be victims of the deployment of these tools are protected. As a society, we have to learn to do that right now. There are jobs that will be modified by generative AI systems and others, because things are constantly evolving. If we manage to make machines that perform certain tasks, which we do with our intelligence, better than we do, how can we support the people who earn their living doing those tasks? I think our societies are capable of taking charge of this and avoiding unemployment or forced retraining, and of teaching people how to use AI so they can work on the skills that are ours.

Journalism is one of the professions most affected by these AI issues, since journalists are the link between individuals and information. Machines that started as information-processing machines have become information-generating machines. So perhaps journalists, who are generators of information, should also become regulators or verifiers of information, attesting that it is true, something a machine will never be able to do. Journalists, through their work, can certify that what is said in an article, or shown in images, corresponds to a certain reality. But one thing is certain: the world of disinformation has a bright future ahead of it.

8. How can AI be potentially dangerous?

Any technology is potentially dangerous. When you create a technology, you know it could be misused. It is not AI that is dangerous in itself; we really must get away from the myth that we have created a monster we do not control. We know what machines do and how they do it. We may not know how to explain the calculations they made, but we control the results. We know they can make mistakes, and when they do, we try to understand why and to improve them.

It is not a technological problem; it is a societal problem. That researchers are concerned about the use of their machines, I understand, and it is legitimate. In the early 1940s, the Austrian physicist Lise Meitner, who co-discovered nuclear fission, declared that she would not participate in the Manhattan Project, because it was not right to use this discovery to make bombs. That a researcher is concerned about systems that manufacture "fake news" using generative AI is quite legitimate.

But in practice, we have to stick to our role as researchers and give our opinion on the scientific level. On the societal level, it is not up to us to decide; it is not our job. It is up to society, to politicians, journalists, artists and others who have something to say on the subject. We scientists are here to create knowledge, not to decide how it is used.

On the other hand, I am against saying that we must stop research in this area because it is potentially dangerous. I think it is a mistake to believe that knowledge is dangerous. It is the use of knowledge that can be dangerous; that is the difference. AI research must continue; it is certain uses that should be paused. But that is complicated, because we will never reach a global agreement on it. For five years now, the Chinese have been using facial recognition to score people in the street. You should know that the algorithm only recognizes people; it is humans who assign the scores. Should we therefore stop the same research in artificial vision, which can detect breast cancer? I don't think so.

People who know AI will raise the alarm about certain uses, and that is normal. They will say: "Be careful, with Dall-E we can make an image of anything at all." Similarly, it is a good thing that ChatGPT came out and showed that we are able to produce text that looks very real. Even when it talks about rabbit eggs, something that does not exist, it speaks very well. With "deepfakes", we are able to make Joe Biden appear to order the bombing of Russia. It is good that people know what we are capable of doing with these AI technologies. They have to come to grips with it and understand what we are talking about. I think that is what is at stake.

Nicolas Sabouret is professor of computer science at Paris-Saclay University and a researcher at the Interdisciplinary Laboratory of Digital Sciences (LISN-Univ). He is the author of Understanding Artificial Intelligence, published by Ellipses, and co-author, with the philosopher Laurent Bibard, of L'Intelligence artificielle n'est pas une question technologique, exchanges between the philosopher and the computer scientist, published by Éditions de l'Aube.

Further reading:

  • Laurence Devillers, for an ethics of artificial intelligence
  • ChatGPT, technological revolution or illusion of an intelligent machine?
  • German artist awarded for a photo made by an artificial intelligence

Listen:

  • Will artificial intelligence ever replace teachers?
  • Is artificial intelligence a threat to young people?
  • Are we already overtaken by artificial intelligence?
  • Artificial intelligence, a danger for democracy?
