Chinanews.com, February 19.

Cool-headed thinking amid the ChatGPT boom: How will artificial intelligence affect the future of mankind?

  Chinanews Finance reporter Song Yusheng

  "In the next five years, thinking computer programs will be reading legal documents and providing medical advice. In the next decade, they'll be doing assembly line work and maybe even companions. In the decades after that , and they will do just about everything, including making new scientific discoveries that expand our notion of 'everything'."

  On March 16, 2021, OpenAI CEO Sam Altman published an essay online titled "Moore's Law for Everything", describing in a prophetic tone a future world in which humans and artificial intelligence coexist.

  In Altman's view, the artificial intelligence revolution is coming, and its results will profoundly affect the future of mankind.

Less than two years later, discussions about ChatGPT are sweeping the world.

This seems to suggest that reality is approaching the future he predicted.

  So, how will the development of artificial intelligence shape or affect the future of mankind?

Let's start with ChatGPT, which has attracted much attention recently.

File photo

How did ChatGPT become popular?

  ChatGPT, a chatbot released on November 30, 2022, is already showing enormous influence.

According to Similarweb, in January this year an average of 13 million unique visitors used ChatGPT each day, more than double the figure for December last year.

  If it is regarded merely as a robot that "can talk to humans", the signs of technological progress are not obvious.

After all, Siri, Xiaoai, Xiaodu and other tools people commonly use today can all provide "dialogue" services; as early as 2020, Xiaoice launched a "virtual boyfriend" chat product.

The artificial intelligence behind these products can complete the "dialogue" with humans to varying degrees.

  The "charm" of ChatGPT depends more on technology.

Among them, "big model" is the key word.

  Li Di, CEO of Xiaoice, described ChatGPT as a product of the "large model" approach and an attempt to turn it into a product.

  What is a "large model"?

Liu Jiang, former vice president of the Zhiyuan Research Institute, told reporters that, taking AlphaGo as an example, such artificial intelligence belongs to the "small model" category.

"It can only be used to play Go, not chess or backgammon. Some of the underlying technologies may be similar, but if AlphaGo is to play chess or backgammon, technicians need to rewrite the code and retrain."

  "But the big model is different, it is universal." For example, Liu Jiang, ChatGPT has a wide range of application scenarios, such as writing emails, copywriting, code, poetry, and even papers.

  The "2022 Top Ten Frontier Application Trends of Digital Technology" released by Tencent Research Institute clearly pointed out that small models not only require a lot of manual parameter adjustment, but also need to feed a large amount of labeled data to the machine, which reduces the efficiency of artificial intelligence research and development. And the cost is higher.

Large models are usually trained using self-supervised learning methods on large unlabeled datasets.
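The idea behind self-supervised training can be sketched in a few lines: the "labels" are simply the next tokens of the raw text itself, so no human annotation is needed. A minimal illustration (the whitespace tokenization and sample sentence are assumptions for demonstration only; real systems use learned subword tokenizers):

```python
# Minimal sketch of how self-supervised language-model training data
# is built: raw, unlabeled text becomes (context, next-token) pairs
# with no human labeling. Tokenization here is naive whitespace
# splitting, purely for illustration.

def next_token_pairs(text):
    """Build (context, target) training pairs from raw text."""
    tokens = text.split()
    pairs = []
    for i in range(1, len(tokens)):
        context = tokens[:i]   # everything seen so far
        target = tokens[i]     # the token the model must learn to predict
        pairs.append((context, target))
    return pairs

for context, target in next_token_pairs("large models learn from raw text"):
    print(context, "->", target)
```

Every sentence on the Internet yields such pairs automatically, which is why large models can consume hundreds of gigabytes of unlabeled text.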

  ChatGPT is a dialogue bot that OpenAI developed by fine-tuning its GPT-3 model, which was released in 2020.

According to reports, the model was trained on text drawn from the Internet, including up to 570 GB of data from books, web texts, Wikipedia, articles and other online sources.

GPT-3.5, the model behind ChatGPT, is even more powerful.

  A research report by CICC argues that the application of such new technologies "marks a step from weak artificial intelligence toward general artificial intelligence".

  In the eyes of industry insiders, the technical shift from small models to large models amounts to an "evolution" of artificial intelligence.

ChatGPT webpage screenshot

The "evolution" of artificial intelligence

  In 1965, Gordon Moore, one of the founders of Intel, proposed Moore's Law: at constant price, the number of components that can fit on an integrated circuit (IC) doubles every 18 to 24 months, and performance doubles along with it.

Since twice as many ICs of the same specification can be produced on the same wafer area every 18 to 24 months, production cost also falls by about 50%.

  Altman's "Moore's Law of Everything" greatly expands the scope of application of this law.

"Moore's Law applies to everything," he wrote, should be the watchword of a generation, although "it sounds utopian."

  In other words, in Altman's view, the speed of technological iteration in this era is visible to the naked eye.

Screenshot of "Moore's Law for Everything"

  In fact, with the help of artificial intelligence, the pace of progress in certain fields has accelerated greatly.

According to OpenAI statistics, from 2012 to 2020 the computing power consumed by training artificial intelligence models increased 300,000-fold, doubling every 3.4 months on average, far outpacing Moore's Law's doubling every 18 months.
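To see how much faster a 3.4-month doubling is than an 18-month one, here is a small illustrative calculation; the five-year window is an assumption chosen purely for comparison, not a figure from the report:

```python
# Illustrative arithmetic only: compare total growth under the two
# doubling rates mentioned above, over the same 60-month window.

def growth(months, doubling_time_months):
    """Total growth factor after `months` at the given doubling time."""
    return 2 ** (months / doubling_time_months)

window = 60  # months, an assumed five-year comparison window
compute_growth = growth(window, 3.4)   # AI training-compute pace
moore_growth = growth(window, 18)      # classic Moore's Law pace

print(f"3.4-month doubling over {window} months: {compute_growth:,.0f}x")
print(f"18-month doubling over {window} months: {moore_growth:,.1f}x")
```

Over the same window, the 3.4-month rate compounds to a factor hundreds of thousands of times larger than the roughly tenfold growth Moore's Law would deliver, which is what "exceeding Moore's Law" means here.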

  Looking back at the evolution of OpenAI's GPT models, the scaling effect is very clear.

The data show that the first-generation GPT in 2018 had 117 million parameters, the second generation in 2019 reached 1.5 billion, and GPT-3 in 2020 leapt directly to 175 billion.
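The scale of those jumps is easy to check with a quick calculation using only the parameter counts just quoted:

```python
# Parameter counts for the three GPT generations, as quoted above.
params = {
    "GPT-1": 117_000_000,       # 2018
    "GPT-2": 1_500_000_000,     # 2019
    "GPT-3": 175_000_000_000,   # 2020
}

gpt2_vs_gpt1 = params["GPT-2"] / params["GPT-1"]   # roughly 13x
gpt3_vs_gpt2 = params["GPT-3"] / params["GPT-2"]   # roughly 117x
gpt3_vs_gpt1 = params["GPT-3"] / params["GPT-1"]   # roughly 1,500x

print(f"GPT-2 over GPT-1: {gpt2_vs_gpt1:.1f}x")
print(f"GPT-3 over GPT-2: {gpt3_vs_gpt2:.1f}x")
print(f"GPT-3 over GPT-1: {gpt3_vs_gpt1:,.0f}x")
```

In two years the parameter count grew by roughly three orders of magnitude, and each generation's jump was larger than the last.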

  Baidu CEO Robin Li once publicly pointed out that artificial intelligence has undergone directional changes, both at the technical level and at the commercial application level.

  Microsoft CEO Satya Nadella likewise said in an interview that the development of GPT is not linear but exponential, which is why the current GPT-3.5 demonstrates far stronger capabilities than GPT-3.

The industry generally predicts that GPT-4 will be launched this year and will have stronger general capabilities.

  There is no doubt that exponential growth has allowed artificial intelligence to "evolve" at a high speed.

  Liu Jiang told reporters that such "evolution" is not merely a quantitative change, nor simply the sum of successive iterations.

"Some researchers have concluded that compared with small models, large models of artificial intelligence have more than 100 kinds of 'mutation capabilities', that is, capabilities that large models possess but small models do not."

  He feels that this is somewhat similar to the process of biological evolution.

"It's as if the brain reaches a critical point after constant quantitative changes, and then organisms produce advanced intelligence."

File photo

Is the dawn of a major breakthrough in sight?

  In 1950, computer scientist Alan Turing proposed a thought experiment known as the "imitation game."

The interviewer talks with two subjects through a typewriter, knowing that one is a human and the other a machine.

Turing proposed that if a machine could consistently convince interviewers that it was human, the machine could be said to be capable of thinking.

This is the famous "Turing test".

  So far, no artificial intelligence model can really pass the Turing test, including ChatGPT.

Moreover, ChatGPT has exposed many problems that remain to be solved and refined.

  Li Di pointed out that ChatGPT has problems with, at a minimum, content accuracy, operating cost, and immediacy.

"These are root problems, which are difficult to solve on ChatGPT, and may have to wait for new products and applications to come out."

  Taking content accuracy as an example, Li Di argued that the most basic requirement for a knowledge system is accuracy, yet ChatGPT's technical architecture makes it difficult for the knowledge it provides to be accurate.

  In fact, the problem is already costing AI companies real money.

  "What new discoveries can I tell my 9-year-old about the James Webb Space Telescope (JWST)?" a photograph".

  But the reality is that the first exoplanet photos were taken by the European Southern Observatory's Very Large Telescope (VLT) in 2004.

That day, Google's stock price fell about 9%, wiping out roughly 100 billion U.S. dollars in market value.

  ChatGPT also has a similar problem.

When this reporter asked ChatGPT "what problems ChatGPT currently exposes that need to be solved and improved", the answer it gave differed from the limitations its developers list on the ChatGPT website.

ChatGPT screenshot

  There is also the question of cost.

According to reports, some studies estimate that training the 175-billion-parameter language model GPT-3 requires tens of thousands of CPUs/GPUs ingesting data 24 hours a day, consumes energy equivalent to driving to the moon and back, and costs about 4.5 million U.S. dollars.

In addition, data quality, broad application scenarios, and continuous capital investment are all indispensable to ChatGPT, to say nothing of the marginal cost of developing AI products and the full-stack integration capability still to be built.

  In this regard, Liu Jiang said bluntly that large models currently demand high computing power and have high barriers to entry, so they are necessarily technology-intensive, capital-intensive, and talent-intensive.

"Artificial intelligence can only be said to have taken a step forward in technology from a small model to a large model. But artificial intelligence must break through the so-called 'singularity', that is, artificial intelligence develops to be 'smarter' than humans and able to 'evolve' itself, and some distance."

  Even so, he still believes the dawn of a major breakthrough in artificial intelligence is already visible.

"It's like we have been groping in the dark for many, many years, and now we finally see a little light, and we are going out."

When will the "singularity" come?

  Believers in the "singularity" hold that rapid, far-reaching technological change will bring irreversible changes to human life.

The integration of biological thinking and technology will allow human beings to transcend their own biological limitations.

  As the American futurist Ray Kurzweil pointed out, the imminence of the singularity implies an important idea: the pace at which humans create technology is accelerating, and the power of technology is growing at an exponential rate.

Exponential growth is deceptive: it starts with tiny increments and then explodes at incredible speed; if one does not pay close attention to the trend, the growth comes as a complete surprise.

  In Kurzweil's words, "Our future is no longer an evolution, but an explosion." He once predicted that the "singularity" would arrive around 2045.

  In fact, this pattern of starting from something very small and then growing explosively has been repeatedly borne out in the history of technology over recent decades.

  The web browser was born in 1990, but it wasn't until the advent of Netscape Navigator in 1994 that most people began to explore the Internet.

MP3 players were popular before the iPod arrived in 2001, but they did not start the digital music revolution.

Similarly, smartphones existed before Apple's iPhone came out in 2007, but there were no applications developed for them.

File photo: dancing robots in the telecommunications, computer and information services hall of the Shougang Park exhibition area at the China International Fair for Trade in Services, Beijing.

Photo by Chinanews.com reporter Li Jun

  The emergence of ChatGPT may be a new node in the history of technology.

  People are already talking about how AI will disrupt their work and their lives.

And the many chat logs between humans and ChatGPT at this moment will all become training data for the next generation of models.

  In Liu Jiang's view, facing the coming changes, human beings should embrace the changes and embrace the future.

"Human beings are constantly changing, and we cannot stick to stereotypes. Of course, we should also actively think about the bottom line that does not allow artificial intelligence to break through."

  He doesn't deny that people are worried about possible changes in future jobs.

"Maybe in the future there will be robots around everyone, just like the secretary next to the boss."

  What matters is how we live with AI.

In other words, the question to be answered is: what is the value of human beings?

  Experts in the field of artificial intelligence have already warned that we must be wary of artificial intelligence weakening human thinking.

  Li Di believes that human creators should treat artificial intelligence as a new means, or tool, for liberating their creativity, allowing them to return to the essence of content creation: "creativity".

  Liu Jiang offered another hypothesis: as artificial intelligence develops and productivity breaks through, perhaps humans will no longer have to work.

Perhaps on that day, human beings will truly be able to labor only as they choose.

(End)
