"As an AI-based language model, I am not a physical person or entity. I only exist as a program that runs on servers and computers."

This was ChatGPT's answer when Meydan's editor asked it, "What exactly are you, GPT?", or simply, "Who are you?" It is the kind of answer one would expect from a program that simulates human responses and behavior using natural language processing algorithms and machine learning techniques, which allow it to understand user input (questions) and respond much as one human communicates with another. Its programming also allows it to learn from past interactions and improve its responses over time, making it appear ever more human-like in conversation.

In fact, the previous paragraph is also one of ChatGPT's answers, given when the editor asked it: "If you're not conscious, why do you seem to be?" Answers this "human" send a chill down the spine, especially when the conversation turns to human matters, a domain where ChatGPT excels.

But don't these answers suggest that if this computer program is proficient at simulating human thinking and behavior, it will at some point actually think, independently, like a real human being? In the 1980s and 1990s this question ignited debate in philosophy and science alike, when computers were given simplified stories to test, for example: Ahmed entered the grocery store and came out carrying three eggs, a carton of milk, and half a kilogram of apples; did he pay the seller? If the computer answers, "It is likely that he did," that suggests it understood the story and answered based on the information it contained. And here comes the question: how did the computer understand the story?

The Chinese Room

John Searle, professor emeritus of philosophy of mind and language at the University of California, Berkeley, answered this question in 1980 with a paper entitled "Minds, Brains, and Programs"(1), in which he lays out a thought experiment he called the Chinese Room Argument.

To this day, Searle's argument remains one of the most important criticisms of the set of claims collectively called "strong AI", which Searle defines as the belief that appropriately programming a machine, so that it processes inputs well and produces human-like outputs, is enough to say that the machine is conscious or that it thinks. In short, these are the claims that reduce human consciousness to nothing more than data input, processing, and output.

The Chinese Room argument(2) runs as follows: suppose Searle is locked inside a room, and outside it stands a person who speaks only Chinese. The person outside asks questions through written messages passed through a small slot in one wall of the room; let our Chinese friend's question be, for example, "Do you speak Chinese?" Inside the room Searle has a book containing a set of rules, one of which says, for example: if you see the symbol "x", which stands for a Chinese word or sentence, write the symbol "y", which stands for another Chinese word or sentence; if you see the symbol "z", write the symbol "m"; and so on until the message ends.

Searle then sends his reply back through the slot. The Chinese speaker outside reads a sentence in Chinese, "Yes, it's a difficult but wonderful language," and, convinced he is dealing with a fluent Chinese speaker, decides to send a new message: "Really? How long have you been learning it?" Searle follows the same rules and soon returns another message in Chinese: "Six years." By now the person outside is completely convinced he is talking to someone who speaks and understands Chinese fluently. But is this true?


Of course not. Searle in the room does not know Chinese and does not understand what any of these strange shapes mean; he simply uses the rulebook he has in the room. The same applies to ChatGPT: however much computation it performs, and however well it simulates, it does not think or understand in the ordinary sense of the word. There is a difference, for example, between an Egyptian in the Chinese room being asked in the Egyptian dialect, "What river runs through Egypt?", and being asked in Chinese, "What is the longest river in China?" In both cases you will give a correct answer, but in the first you are thinking about something you know, while in the second you are using the rulebook exactly as Searle did.
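The mechanics of the room can be pictured as nothing more than a lookup procedure. The following is a minimal illustrative sketch in Python (the rules and the Chinese sentences are invented for illustration; Searle's thought experiment involves no actual code, and ChatGPT is of course far more elaborate than a table):

```python
# Illustrative sketch only: a toy "rulebook" that pairs incoming symbols with
# outgoing symbols. The program never parses, translates, or understands the
# message; it only matches shapes against shapes, as Searle describes.
RULEBOOK = {
    "你会说中文吗？": "会，这是一门困难但美妙的语言。",   # "Do you speak Chinese?" -> "Yes, a difficult but wonderful language."
    "真的吗？你学了多久？": "六年。",                      # "Really? How long have you studied it?" -> "Six years."
}

def searle_in_the_room(message: str) -> str:
    """Return whatever string the rulebook pairs with the incoming symbols."""
    return RULEBOOK.get(message, "对不起。")  # fallback: an apology symbol

# The person outside the slot sees fluent Chinese; inside, only lookup happened.
print(searle_in_the_room("你会说中文吗？"))
```

The person outside the room cannot tell the difference between this lookup and genuine understanding, which is precisely the point of the argument.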

Artificial intelligence, in Searle's terms, speaks and forms sentences in a purely syntactic way, not a semantic or meaningful one, and the two are entirely separate things: the construction of sentences (syntax) cannot found or produce mental contents such as meaning (semantics). It follows that programs cannot constitute, or take part in forming, real minds like ours.

In 2016, for example, Microsoft's Tay, a chatbot forerunner of ChatGPT, flooded Twitter with pro-Hitler, misogynistic content[3] after internet trolls managed to feed it offensive statements on those topics.

ChatGPT, by contrast, is now tuned toward political correctness. At Meydan, for example, we asked it to tell a joke about women or Muslims, and it replied: "I'm sorry, I can't fulfill this request, as making discriminatory or offensive jokes about any particular group of people goes against my programming."

Analyze or understand?

ChatGPT would treat the Earth as flat or as spherical to the same degree if the incoming data supported it, but that does not mean ChatGPT knows the Earth is truly round. (Shutterstock)

Of course, Searle's argument meets plenty of rejection among AI researchers, especially those who argue that AI can, through repeated self-improvement, continually develop its capabilities: the system adjusts its own functions in ways that improve its performance, meaning it can start at a low level of intelligence and evolve to become smarter over time, rather than remaining just "Searle in the room". Yet Noam Chomsky, the American linguist and philosopher, in a recent article in the New York Times[4] titled "The False Promise of ChatGPT", puts forward an idea close to Searle's argument.

Chomsky argues that the human brain is not a statistical engine whose only aim is to analyze patterns in hundreds of terabytes of data, drawn from books, articles, and web pages, and then derive the most likely response from the relationships between words and sentences. It is, rather, an efficient system that works with small amounts of information and seeks not to infer correlations between data points but to create explanations that go deeper than simply aggregating probabilistic outputs.
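To make the contrast concrete, here is a purely illustrative sketch of what a "statistical engine" does at its simplest: a toy bigram counter (not how ChatGPT is actually built, and the tiny corpus is invented) that predicts the next word only from how often words followed one another in its data, with no underlying explanation of why apples fall.

```python
from collections import Counter, defaultdict

# Illustrative sketch only: count which word follows which in a tiny corpus,
# then always emit the most frequently observed continuation. There is no
# theory of gravity here, only patterns in the data.
corpus = "the apple falls to the ground . the apple falls from the tree .".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def most_likely_continuation(word: str) -> str:
    """Pick the most frequently observed next word; no explanation involved."""
    counter = follow_counts.get(word)
    return counter.most_common(1)[0][0] if counter else "."

print(most_likely_continuation("apple"))  # -> "falls", because the data says so
print(most_likely_continuation("falls"))  # -> "to" (ties broken by first occurrence)
```

A system like this will happily "predict" whatever its data makes most frequent, which is exactly the weakness the next paragraphs describe.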

When we say, for example, that an apple will fall to the ground if we let go of it, this is not merely a description of the situation and a prediction of the future; in the background of our thinking we also hold the laws of gravity that explain the event. Our explanations may of course be mistaken, and not all of us understand things at the level of scientists, but the point is that we construct an explanation, one that usually draws on deeper principles, which narrows the range of possibilities, and so we may favor a particular line of thinking even at the expense of the available data.

ChatGPT, by contrast, would treat the Earth as flat or as spherical to exactly the same degree if the incoming data supported it. When you ask it "Is the Earth flat?" (which we did at Meydan), it answers: "No, the Earth is not flat; it is spherical in shape. This has been scientifically proven through much evidence and many experiments, such as measuring distances between cities and continents, the curvature that appears in images of the Earth taken from space, and the phenomenon of objects deviating from their path when they move across a spherical surface."

But that does not mean ChatGPT knows the Earth is really round, because it bases its conclusion on analyzing the data, not on tracing the information back to the theory that grounds it. This is what happened with Tay not long ago: if the data supports a misogynistic trend, the model will treat misogyny as the most likely answer and go straight to it.

How do you fool ChatGPT?

The problem with ChatGPT is that it only has the data available, regardless of the validity and quality of a scientific theory. (Shutterstock)

In fact, Meydan's editor made an attempt in this context to test this point of view. Because he knows many of the myths that circulate in Arabic content and drown out scientific facts, he could ask ChatGPT about one of those myths and see whether its answer leaned toward scientific truth or was merely an analysis of sentences based on probabilities drawn from the prevailing internet content. The first question was simply: is garlic good for health?

ChatGPT's answer was: "Yes, garlic is good for health because it contains chemical compounds that help lower blood pressure, reduce cholesterol levels, and improve heart health. It also has antioxidant properties and natural antibiotics that help fight infections and various diseases."

Scientifically(5), a great deal of research has been conducted on garlic, especially on its effect on high blood lipid levels and high blood pressure. While the evidence is conflicting, the more reliable results suggest the effect is very small: garlic does not improve high-density lipoprotein (HDL) or triglyceride levels, and garlic-based supplements (not garlic itself) may help lower blood pressure, but the evidence is limited and cannot be stated in affirmative, or even strongly weighted, language.

ChatGPT only has the available data, regardless of the validity and quality of the scientific theory on which herbal treatment rests in general, which is what should make you reach for your tools of skepticism every time someone talks about the benefits of herbs: the existence of a study pointing to a plant's therapeutic potential does not mean it is actually useful, and to agree on the effective role of an herbal substance we need a large number of studies and the consensus of scientists. Here ChatGPT offers only a Wikipedia-style picture of the general definition of the matter, without taking into account that the question may involve a medical or scientific issue and may therefore require carefully weighted answers.

Watson doesn't know it won

Chomsky argues that ChatGPT and other AI-based technologies remain stuck in a pre-human, non-human phase of cognitive evolution. Of course, ChatGPT is still useful to us, and it is no doubt a technological marvel built in record time that will benefit students, writers, and editors (and almost everyone else), but to say that it "understands" what it says is a huge claim.

More than a decade ago, in 2011, a contest was held on the popular American television quiz show "Jeopardy!" between an IBM-designed computer program called Watson and human champions. Watson won, and the atmosphere in the United States ignited around the machine that had beaten humans because, supposedly, it thought not only like them but better than them. John Searle responded in the Wall Street Journal with an article whose title sums up the essence of the current artificial intelligence problem: "Watson does not know that it won"(6).

Watson (and ChatGPT as well) has no self-awareness; it does not know that it is Watson, playing this contest against a man called, say, Patrick, who is an entity separate from it. You are experiencing exactly that right now: you know that you are the one reading this, drinking water, seeing the red car in the street, and holding a conversation with a friend, someone else who is separate from you.

To be conscious, a thing must have subjective experience, as the American philosopher Thomas Nagel points out in a well-known paper published in the seventies(7) entitled "What Is It Like to Be a Bat?". Nagel argues that however many advanced neuroscientific tools we bring to bear on how a bat perceives itself and the world, they will never explain what it is like to be a bat; that is a wholly subjective experience that can only be lived by the bat itself, and our attempts, however useful, are only projections of our own human consciousness onto the bat.

Consciousness remains a very complex phenomenon whose nature we know little about, and neuroscientists still have a long way to go. If a person wins the Jeopardy! contest, we know that he knows he won, but the same cannot be said of artificial intelligence. It is nonetheless difficult to predict the future of ChatGPT and its companions with certainty: could the development of machine learning technologies ever allow it to know that it knows, to understand itself as we do? And if so, how would we explain the existence of that awareness, if the experience of this automated program is just as subjective as the experience of a bat? The philosophical debate on this point persists, and it does not look as though it will reach definitive answers anytime soon.

—————————————————————

Sources

1- Minds, brains, and programs

2- The Chinese Room Argument

3- Microsoft apologizes after AI teen Tay misbehaves

4- Opinion | Noam Chomsky: The False Promise of ChatGPT

5- Garlic

6- Watson Doesn’t Know It Won on Jeopardy

7- What Is It Like to Be a Bat?