Analysis

The moods of artificial intelligence

Since the 1950s, artificial intelligence has travelled a remarkable intellectual and technical road. Are we witnessing its peak or the beginning of a new era? AI, cybernetics, black box, neural network, imitation... Do we really know what we are talking about? A tour of the moods of AI with Anthony Masure, associate professor at HEAD - Geneva.

The moods of AI. © Image generated by NightCafe

Text by: Thomas Bourdeau

The relationship between computers and thought has haunted computing since its beginnings. Pushing the philosophical distinction between body and mind (hardware/software) to its extreme, Alan Turing came to consider the possibility of an electronic brain. He did not ask whether machines can think, but showed that a computer could take the place of a human being in a game based on imitation. In 1950, the same year the National Physical Laboratory team ran the Pilot ACE (Automatic Computing Engine), one of the first programmable machines and built on Turing's design, Turing proposed this imitation game: the machine demonstrates so-called human intelligence if it fools the interrogator more often than the 50% rate of random guessing. However one understands the machine's inner workings, this is a simulation.
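To make the criterion concrete, here is a minimal sketch in Python (our illustration, not Turing's own protocol). The judge function is hypothetical; the point is only that 50% is the score a coin-flipping judge would get:

import random

def interrogate(judge, rounds=10_000):
    # Simulate many rounds of the imitation game: in each round the judge
    # converses with a hidden machine and a hidden human, then guesses
    # which is which. `judge` is a hypothetical callable returning True
    # when it identifies the machine correctly.
    correct = sum(judge() for _ in range(rounds))
    return correct / rounds

# A judge who cannot tell the two apart ends up guessing at random.
random_judge = lambda: random.random() < 0.5

accuracy = interrogate(random_judge)
# Turing's criterion, as described above: the machine "passes" when the
# judge does no better than the 50% rate of random guessing.
print(f"judge accuracy: {accuracy:.1%} -> machine passes: {accuracy <= 0.5}")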

It becomes necessary to shed light on this internal functioning, known as the "black box" (earlier generations spoke of the "ghost in the machine", an image that also haunts Terry Gilliam's film Brazil), because with Turing, efficiency takes precedence over the intelligibility of the technical system. In short, we no longer try to understand how it works: it works, and that is enough.

7 June 1954. Alan Turing died (aged 41). He was a key influence on theoretical computer science and computation with the Turing machine, considered a model of a general-purpose computer. He was not fully recognised during his lifetime. pic.twitter.com/iJjiRs3dxD

— Prof. Frank McDonough (@FXMC1957) June 7, 2023

Understanding the strengths and limitations of recent AI models

The notion of the black box comes above all from behaviorism, a method for studying the statistical relationships between environment and behavior without worrying about the human psyche. The individual, likened to a black box (we do not know, and do not need to know, what happens inside), would be the product of his environment: it is enough to analyze his inputs and outputs. Cybernetics takes up the idea that a machine (a computer) can be compared to the human brain via this black-box idea, but adds a concept that behaviorism lacks: feedback.

Feedback is the dynamic adjustment of input and output data in order to control a given situation. Cybernetics, the science of control (from the Greek kubernetes, "steersman"), makes it possible, for example, to adjust the trajectory of a missile in real time without human intervention. Its principles have shaped many systems of computation, interfaces, and interaction built by engineers and designers. This point helps us better understand the strengths and limitations of recent AI models, but let's continue our journey towards AI...
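To illustrate the principle, here is a minimal proportional feedback loop in Python (a sketch under our own assumptions, not a real guidance system): the output is measured, compared with the target, and the error is fed back as a correction, with no human in the loop:

def feedback_loop(target, state=0.0, gain=0.5, steps=12):
    # Negative feedback: measure the deviation between the goal and the
    # current state, then feed a fraction of that error back as input.
    for step in range(steps):
        error = target - state   # measurement
        state += gain * error    # correction fed back into the system
        print(f"step {step:2d}: state = {state:8.3f}")
    return state

feedback_loop(target=100.0)  # the state converges toward the target

The gain controls how aggressively the loop corrects itself: too low and the system responds sluggishly, too high and it overshoots and oscillates, which is exactly the kind of trade-off cybernetics studied.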

The term artificial intelligence dates back to 1955: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it," wrote the mathematician John McCarthy. Here again is the simulation dear to Turing. But AI and its study have not always had the wind in their sails, to say the least: the years 1974 to 1980 are even described as the first AI winter. In 1982, the physicist John Hopfield showed that a neural network could learn and process information in an entirely new way. Yann LeCun's research later reopened this neural-network path, which proved more effective than the symbolic approach. The symbolic approach is what the second AI winter (1987-1993) buried: expert systems (decision-support tools meant to imitate cognitive abilities) never found their market.
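Hopfield's 1982 idea can be shown in a few lines. The following toy network (a sketch with arbitrary toy values, not Hopfield's experiments) stores one pattern in its weights using a Hebbian rule, then recovers it from a corrupted input by letting the network settle:

import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])  # the stored "memory"
W = np.outer(pattern, pattern)                    # Hebbian learning rule
np.fill_diagonal(W, 0)                            # no self-connections

noisy = pattern.copy()
noisy[:2] *= -1                                   # corrupt two units

state = noisy.copy()
for _ in range(5):                                # let the network settle
    state = np.sign(W @ state)

print("recalled the stored pattern:", np.array_equal(state, pattern))

The network does not look the pattern up; it falls into it: the corrupted input rolls downhill to the nearest stored memory, the "completely new way" of processing information mentioned above.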

The evidence is accumulating. https://t.co/wcaI1UGBcG

— Yann LeCun (@ylecun) June 8, 2023

This is where confusion can take hold: in the public mind, the technologies of the connectionist approach are now conflated with the much broader notion of artificial intelligence. The term deep learning, a branch of machine learning, designates a method by which the machine is meant to learn on its own. Unlike traditional symbolic programming, where the machine executes rules predetermined by humans, deep learning relies on networks of artificial neural layers loosely inspired by the human brain, which process complex data through backpropagation. The training data is essential: the more data the system accumulates, the better it is supposed to perform. It is, at bottom, statistics. This is the AI we are using right now: narrow AI, that is to say a potentially very competent computer system, but one that operates only in a restricted context, often focused on a specific task. A non-exhaustive and vertiginous list here.
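To make backpropagation less abstract, here is a minimal sketch in Python (a toy example of the technique, not the code behind any real product): a two-layer network that learns the XOR function by pushing its output error backwards through the layers and nudging every weight:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer of 4 neurons
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, layer by layer
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # error at the output...
    d_h = (d_out @ W2.T) * h * (1 - h)       # ...propagated backwards
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]

Nothing here "understands" XOR: the weights are simply statistics accumulated from repeated exposure to the data, which is the point made above.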

An impressive list of narrow-AI tools. © https://www.futuretools.io/

Press the OFF button to find meaning

The fear now entering the public debate is the hypothesis of a strong AI, an artificial superintelligence, that is to say an AI with intellectual capacities far superior to those of humans in virtually every area. For now this remains a matter of belief, along with the fear that it might actually come about.

Along this AI path, Anthony Masure, associate professor at HEAD - Geneva and author of the essay Design under artifice: creation at the risk of machine learning (2023), singles out this sentence from the psychoanalyst Jacques Lacan: "What [...] gives its meaning [to the world of signs] is the moment when we stop the machine." It is this state of the machine, its stopping, that seems decisive. If, for the subject, meaning emerges when the machine shuts down, then a total machine (designed for continuous operation) could not be creative, that is, could not bring about new meanings.

From time to time, you have to press the OFF button to find meaning.

Press the button! © Image generated by DALL·E 2

Key moments in the evolution of artificial intelligence from the work of Alan Turing to the present day

We nevertheless pressed ON again and asked ChatGPT for the key dates in AI, as it sees them.

« Here is a summary of the key moments in the evolution of artificial intelligence from the work of Alan Turing to the present day:

  • 1950s: Alan Turing proposes the Turing test, which evaluates the ability of a machine to exhibit intelligent behavior indistinguishable from that of a human being.
  • 1950s-1960s: AI research is mainly focused on logical and symbolic rules. The LISP programming language is developed, becoming a major tool in the field.
  • 1970s: The AI field experiences a period of pessimism, known as the "AI winter," due to technical limitations and disappointing results.
  • 1980s: The expert knowledge approach to AI takes off. Expert systems are developed, using knowledge bases to solve specific problems.
  • 1990s: Artificial neural networks, inspired by the functioning of the human brain, are experiencing a resurgence of interest. Machine learning algorithms are also being developed, allowing machines to learn from data.
  • 2000s: Deep learning and deep neural networks become popular, thanks to technological advances and the availability of large amounts of data. Machines are starting to outperform humans in certain tasks, such as speech recognition and image classification.
  • 2010s: AI becomes ubiquitous in our daily lives. Virtual assistants, such as Siri and Alexa, are widely used. Advances in AI are also being seen in areas such as autonomous driving, machine translation, and strategy games.
  • Recent years: AI-based language models, such as GPT-3 (which I am based on), enable major advances in natural language processing and content generation. AI applications are growing in many fields, such as healthcare, finance, cybersecurity, and robotics. »
And ChatGPT, comfortably lying on the couch of its algorithm, adds:

« In the context of artificial intelligence, Lacan's quote can be interpreted as emphasizing the importance of human interpretation in the process of understanding. Although machines can analyze and process signs and data in an automated way, the meaning we attribute to them is often determined by our interpretation and understanding. Machines and algorithms can help us process large amounts of data and find patterns, but it is often through human intervention and interpretation that we give deep meaning to these results. Machines can provide information, but personal meaning and interpretation often remain in the hands of human beings. Ultimately, Lacan's quote emphasizes the importance of the interaction between machines and human beings in the process of meaning and the need for human reflection and interpretation to make sense of the world of signs. »

What if AI did an analysis? © Image generated by DALL·E 2

Thank you, ChatGPT. Press OFF.
