NewsPeppermint



This is 'News Peppermint', a foreign media curation outlet that carefully selects and delivers news that is not covered in Korea but is necessary for Koreans.

News Peppermint translates New York Times columns on Soup, along with detailed commentary on their background and context.

Drawing on my experience of closely following and interpreting events, news, and debates outside Korea, especially in the United States, I will do my best to make even distant happenings easy and enjoyable to read about.

(Written by Lee Hyo-seok, CEO of News Peppermint)



What does language mean to humans?

Some scholars cite language ability as the main reason humans were able to become the lords of creation and dominate the earth.

Language not only enables accurate and rigorous communication, but also makes it possible to define complex concepts, and plays a key role in enabling humans to understand reality through logic and analogy.

In other words, language is at the root of our civilization.


ChatGPT upends 'Moravec's Paradox'


In the field of artificial intelligence, there is something called 'Moravec's Paradox'.

It holds that what is easy for humans is hard for machines, and what is hard for humans is easy for machines.

Things a three- or four-year-old can do, such as running around, recognizing people, and communicating, are very difficult for machines, while the complex calculations and repetitive tasks that machines excel at are difficult for humans.



But recent advances in AI have undermined Moravec's paradox.

At least in tasks such as distinguishing dogs from cats in photographs, identifying objects, and telling people apart, machines now perform almost as well as humans.

Even recently, after the shock of AlphaGo and the announcement of deep learning's many successes, most experts remained skeptical about whether machines would be able to understand human language and communicate with humans in the near future.



However, the recently released ChatGPT has changed people's thinking about these questions a bit.

Of course, ChatGPT is not really capable of human-like communication.

However, many people who have used ChatGPT now answer 'it seems quite likely' to the question 'Will machines one day understand human speech?'

Even to questions you doubt it could possibly handle, its answers are fairly polished.


Understanding language


I recently received an e-mail from a foreign lawyer I had met, asking to have lunch together.

In the past, I would have composed sentences in English, written a draft, paid an English teacher I know to proofread it, and then sent the reply.

But this time, I deleted the people's names from the email, pasted it into ChatGPT with the note 'I received this email,' and asked it to write a reply.

To be honest, the result was better than the draft I would have written myself.



In the end, ChatGPT offers a 'perhaps' to the question of whether understanding language is no more than understanding the relationships between words, and if so, whether it suffices simply to learn more parameters from more data.

And this question is essentially the same as asking whether the specialness of the human brain can be imitated with nothing but a vast number of connections.



I first got the impression that language might be a surprisingly simple matter when I encountered the concept of 'Word2Vec'.

This is a way to map words, or concepts, to vectors in virtual space.

Simply put, if you subtract the vector for 'man' from the vector for 'king' and add the vector for 'woman', you get a vector close to that of 'queen'.

This enables logical operations between concepts.

Naturally, if you make the space large enough and again represent sentences, which are relationships between concepts, as vectors, it seems plausible that operations between sentences become possible as well.

ChatGPT shows that it can do just that.
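The 'king − man + woman ≈ queen' arithmetic above can be sketched in a few lines of Python. The four-dimensional vectors below are made up purely for illustration; real Word2Vec embeddings are learned from large text corpora and typically have hundreds of dimensions.

```python
import math

# Toy word vectors, invented for illustration only.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.1, 0.8, 0.2],
    "man":   [0.1, 0.9, 0.1, 0.1],
    "woman": [0.1, 0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# king - man + woman, computed component-wise
target = [k - m + w for k, m, w in
          zip(embeddings["king"], embeddings["man"], embeddings["woman"])]

# The closest remaining word to the resulting vector
best = max((word for word in embeddings if word != "king"),
           key=lambda word: cosine(target, embeddings[word]))
print(best)  # queen
```

Libraries such as gensim perform exactly this kind of nearest-neighbor lookup over learned embeddings, just in a much larger space.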



And my experience of 'understanding' also leans toward a positive answer to this question.

I often tell my students that to check whether they truly understand a concept, they should see whether they can express the newly learned concept using concepts they already know, and in a variety of ways.

This, too, is a task of anchoring the new concept in the brain through its relationships with other concepts.



▶Read the New York Times column: How ChatGPT Hijacks Democracy


The concerns of data scientist Nathan Sanders and security expert Bruce Schneier, published in the New York Times on the 15th, are based on this prospect.

They warn that although ChatGPT and related technologies are still imperfect, once they can imitate humans closely enough, democracy, the system that sustains our society today, could be at risk.



Their first concern is the shaping of public opinion through actions such as posting comments in online spaces.

Before ChatGPT, this kind of work at least required hiring people who could do it, at considerable cost.

However, because ChatGPT can imitate humans convincingly, the cost drops dramatically.

Now, when thousands of plausible comments all lean one way on a contentious issue, we have to wonder whether that many people really think that way.



Their bigger concern is the future hacking of democratic institutions by AI.

A prime example is the American lobbying system, which is rather unfamiliar to us.


