It is 2015, and we are in Elon Musk's office at the headquarters of his company Tesla. Musk, his friend Sam Altman, and a handful of fellow Silicon Valley billionaires sit down together, fearing an imminent assault by artificial intelligence on humanity. Things have become dangerous; they must move quickly. They are the world's technology leaders, so they must devise a solution to this dilemma. What solution, then, did Elon Musk and his friends come up with?

Simple: establish a non-profit organization that aims to make artificial intelligence and machine learning research available to everyone, open source, under the very creative name "OpenAI". Well, this scenario is imaginary of course, or at least it did not happen quite as we tell it; such is the license of a creative introduction, as you know. But the founding by Elon Musk and Sam Altman of a non-profit organization dedicated to open artificial intelligence research really did happen!

Now you certainly know what is going on in the field of artificial intelligence, and of course you have heard the name of the startup currently leading it. Yes, it is that same company, OpenAI, which has turned into a for-profit company, hides how its new machine learning models are developed, and lives on the investments of one of the biggest giants of the technology world: Microsoft.

None of this is new. What is new is that Elon Musk, and some other friends, decided to object to the rapid development sweeping the field of generative artificial intelligence, and to demand that OpenAI and other companies in the field pause this development for six months, because we as humans are steadily heading toward disaster at the hands of this super-powerful artificial intelligence!

Open Letter

Last Wednesday, March 29, Elon Musk and a group of artificial intelligence researchers and experts, together with the CEOs of a number of companies in the field, called in an open letter for a six-month pause in the training of large language models. The goal is for humanity to catch its breath from this frantic race, so that we can assess the risks these models pose to humanity and society, in an attempt to control them and impose laws that govern them. (1)

The letter, issued by the Future of Life Institute, a non-profit organization that aims to protect the future of humanity from the dangers of technology and is funded in part by the Musk Foundation, has been signed by more than 2,000 people so far, led by Elon Musk himself, Steve Wozniak, co-founder of Apple, and Emad Mostaque, CEO of Stability AI, in addition to a number of important figures in the field of artificial intelligence, such as Yoshua Bengio, a Canadian computer scientist and one of the most prominent contemporary scientists in the field, and Stuart Russell, a computer scientist, engineer, and university professor from the United Kingdom.

When OpenAI recently announced its new GPT-4 model, it did not share any details about how it was developed, or even about the data it was trained on. (Shutterstock)

This letter comes less than a month after OpenAI announced its new GPT-4 model. It highlights how powerful the new model is, the possibility of it slipping out of control, and its potential negative impact on society and human civilization from the signatories' point of view. It also points to the race Google and Microsoft have recently entered to dominate this market, in an attempt to develop artificial intelligence technologies so powerful that the developers themselves "cannot understand, predict, or reliably control" their results, as the signatories warned, adding that if companies do not respond, governments should step in and impose laws prohibiting further development of such models.

The letter stressed that this call does not mean stopping the development of artificial intelligence altogether; it means taking a step back from the dangerous race toward ever-larger "black box" models whose capabilities are unpredictable. Perhaps here lies the root of the problem: no one can know exactly how these models work from the inside, which is why they are called black boxes.

Black Box

"Black Box" models rely on algorithms that use huge amounts of data to predict the next word in the text, and they are called that name because no one can know exactly how they work, even their developers themselves cannot explain how those models predict this information, this can be a problem in situations where this prediction affects a decision about people's lives, such as in healthcare, or in choosing the right person for the job, or who deserves to get Loans and financing.

Companies like OpenAI are opting for the "black box" model because they want to hide their information and maintain their strengths in the heat of competition in a market that has begun to be dominated by big companies such as Microsoft and Google. (Shutterstock)

On the other hand, explainable models offer greater transparency, because they are simpler models that humans can easily understand, showing how different pieces of information are combined to reach a decision. There is a common belief that accuracy and interpretability are mutually exclusive, meaning you can have only one of them. In fact, simpler models can be just as accurate as more complex ones, in addition to being easy to understand and able to justify what they produce. (2)
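As an illustration, here is a minimal sketch of such an interpretable model, assuming the scikit-learn library and a made-up loan-approval dataset (the features, numbers, and labels are invented for the example). Unlike a black box, every factor in the decision can be read directly off the model.

```python
# Minimal sketch of an interpretable ("glass box") model: logistic regression.
# The dataset below is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per applicant: [income in $1000s, years of credit history]
X = np.array([[30, 1], [45, 3], [60, 5], [80, 8], [25, 0], [90, 10]])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = loan approved in past data

model = LogisticRegression().fit(X, y)

# Unlike a black box, the decision rule is fully inspectable: each feature
# has one coefficient whose sign and size explain its effect on the outcome.
for name, coef in zip(["income", "credit_history"], model.coef_[0]):
    print(f"{name}: weight = {coef:+.3f}")
print(f"intercept = {model.intercept_[0]:+.3f}")
```

A loan officer, or the applicant, can see exactly which inputs pushed the decision one way or the other, something no one can do with a model like GPT-4.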

Companies such as OpenAI choose the black box model because they want to hide their information and preserve their advantages amid this frantic competition, in a market that big companies such as Microsoft and Google have begun to dominate. This is what industry experts are trying to warn against: when we place full confidence in a black box model, we are trusting not only the model's equations, but also the entire dataset on which it was built. Not knowing what that data contains remains a risk, and it will certainly affect the results we ultimately get from the model.

But what if you knew that Elon Musk himself was, perhaps indirectly, the cause of OpenAI's transformation, and thus the reason it rushed tools like ChatGPT to market, trying to profit from them quickly while hiding its research in the field? Well, that seems to be part of the truth.

Trying to take control, then failing

By early 2018, Musk had decided to offer to take over direct management of OpenAI, an offer rejected by the rest of the founders, most famously CEO Sam Altman. (Associated Press)

Elon Musk was part of the small group that founded the OpenAI lab in 2015, along with Sam Altman, Peter Thiel, Ilya Sutskever, Greg Brockman, and other famous names in Silicon Valley at the time. (3)

But by early 2018, according to a report by Semafor, Elon Musk saw the company lagging behind Google in the race and decided, as is his habit, to offer to take over direct management of the company himself, an offer that was rejected by the rest of the founders, most notably Sam Altman, the company's CEO, and Greg Brockman, its current president. (4)

Elon Musk then announced his withdrawal from the company, resigning from its board of directors the same year. The reason he gave at the time was a conflict of interest with his work at Tesla, which runs its own artificial intelligence laboratories for self-driving cars. The report indicates, however, that Musk had promised to help fund OpenAI with about $1 billion, but when he withdrew he did not deliver that funding, having contributed only about $100 million.

Microsoft was the first giant company to invest in OpenAI, providing billions of dollars to fund the company's research. (Anadolu Agency)

Musk's sudden withdrawal put OpenAI in a dilemma. The company had by then begun developing generative artificial intelligence models, such as the DALL-E image generator and the GPT series of text-generation models, which of course require enormous sums of money. So in 2019 the company announced the creation of a new for-profit entity to fund its research. The first giant to invest in it was Microsoft, which provided billions of dollars to fund OpenAI's research, in addition to providing its Azure cloud platform and many other resources, in return for an exclusive license to use OpenAI's technologies in its own products in the future.

The loss of Elon Musk's funding may not be the only reason, or even the main one, that pushed OpenAI to switch to the for-profit model and fall into Microsoft's arms, but it remains the best explanation of what happened. What matters is that this rapid turn toward profitability, and the abandonment of the principles on which the company was founded, marked a defining moment for the entire field, and perhaps for the world. This is the problem today: the company has become ravenous to launch new products as quickly as possible, which many believe could have serious consequences in the near future, according to the open letter from the Future of Life Institute.

A change of direction


When OpenAI announced its new GPT-4 model recently, it did not share any details about how it was developed, or even about the data it was trained on. In an interview with The Verge, Ilya Sutskever, the company's chief scientist, explained that this was to maintain the company's competitive advantage in the market. When asked about the company's change of direction and its abandonment of the open-source model, he said that they had been "wrong" to share their research in the past, and that research in artificial intelligence does not have to be open source. (5)

Similarly, when Greg Brockman, the company's president, gave a press interview to TechCrunch, he confirmed (6) that the new model was trained on images alongside text, but when asked for details about those images and texts, Brockman refused to discuss them or to reveal the source of any of GPT-4's training data. This prompted many experts in the field to point out that closing off access to the artificial intelligence models the company develops makes it difficult for society to understand the potential threats posed by these systems, and concentrates power in the hands of giant corporations.

Elon Musk himself has attacked this change in OpenAI's course several times, writing in a tweet on his Twitter account that OpenAI has become "a closed-source company that aims for maximum profit and is controlled by Microsoft," stressing that this was not his intention at all. But does this not lead us to wonder about Musk's motives in asking for a pause in AI development? Is there genuine fear for society and humanity from this danger? Or is he really afraid of not being the leader of this new race in the world of technology, and of losing the hero's image in which he has always liked to appear?

OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.

Not what I intended at all.

— Elon Musk (@elonmusk) February 17, 2023

Elon Musk's intelligence

Elon Musk has repeatedly said that he fears artificial intelligence will one day prevail over humans, which could put us all at risk. In a 2014 interview with CNBC, he stressed that artificial intelligence is the greatest threat to human civilization, even predicting frightening outcomes (7) like those of the famous film series The Terminator. At the same time, Musk insists that if his company Tesla builds its own robot, he can guarantee that this robot will be safe and will not turn against humans!

In 2017, Musk referred to the same film again, citing the technology of his other company, Neuralink, which aims to develop devices implanted in the human brain that can interact with machines, as a defense against artificial intelligence threats like Skynet, the AI that destroys humanity in The Terminator. This shows Musk's ability to employ the same development (artificial intelligence), and even the same argument (The Terminator), twice over in the service of his own ends.

That is the aspiration: to avoid AI becoming other.

— Elon Musk (@elonmusk) April 23, 2017

Perhaps that was his goal in funding and establishing the OpenAI lab in 2015: to control research in this vast and highly complex field. Remarkably, Musk himself has contacted a group of artificial intelligence researchers in recent weeks about establishing a new research laboratory to develop a chatbot that competes with ChatGPT, according to a report from The Information, though the project remains in its early stages, without a clear plan for the products this new laboratory, if he succeeds in establishing it, would offer. (8)

Elon Musk therefore probably wants to slow the pace of development in this field to give himself a chance to catch up and assert control, as he has always done. He wants to lead the field and play a starring role in ushering in artificial intelligence technologies, while at the same time keeping the role of hero and savior of humanity from the danger those technologies pose. But that does not stop us from asking: are Musk's concerns about AI actually justified?

I’m sure it will be fine pic.twitter.com/JWsq62Qkru

— Elon Musk (@elonmusk) March 24, 2023

Realistic risks

Well, the technology is not without risks, and they are certainly worth paying attention to. But instead of dwelling on fears drawn from science fiction movies, such as AI taking over the world and annihilating us, we should turn our attention to the real, impending challenges: privacy problems, cybersecurity, a rise in scams, shifting economic conditions, and the loss of some of the jobs that these new technologies will affect.

Take cybersecurity, for example. Hackers now use more advanced techniques, relying on artificial intelligence, machine learning, and automation. Over the past few years, reliance on these techniques has grown through the use of bots and automation tools to spread malware; their availability and ease of use have lowered the skill barrier required to enter the world of cybercrime, not to mention that the availability of tools like the new ChatGPT chatbot will make things easier still.

Hackers used the ChatGPT chatbot to develop a version of the code of a 2019 malware strain known as InfoStealer. (Shutterstock)

This is not mere speculation; it has actually happened. Hackers found a way to bypass the software restrictions imposed on ChatGPT so that they could use artificial intelligence to develop and improve malware code and fraudulent emails. The cybersecurity company Check Point discovered that hackers had used the new bot (9) to develop a version of the code of a 2019 malware strain known as InfoStealer.

Some experts also believe that the new chatbots will help hackers write more professional fraudulent emails by avoiding the linguistic and spelling errors that make such messages easier to detect. This comes amid warnings from the European police agency Europol about the dangers of the ChatGPT chatbot and other artificial intelligence models, and how they can be exploited to spread false and misleading information, in cybercrime, and in frauds based on social engineering. (10)

Again, this is no longer a prediction. The Washington Post has reported that fraudsters use artificial intelligence models designed to mimic the human voice to imitate the voices of people's relatives and friends, ask them for help, and then defraud them of thousands of dollars. (11) Some AI voice-generation software needs only a few sentences in a person's voice to produce convincing dialogue that reproduces the voice and even the emotional tone that characterizes the speaker's way of talking, while other models need as little as three seconds of audio. For the targeted victims, often elderly, it is difficult to tell whether the voice is real or artificial, even when the emergency described by the fraudster seems far-fetched.

As for the economic side, new chatbots, like any emerging technology, will affect the labor market and change current conditions. A report from the famous investment bank Goldman Sachs notes that if AI-related technologies continue to evolve, they will cause "major disruption" in the labor market, putting about 300 million full-time jobs across large economies at risk, this time including jobs in law and administration, not just low-skilled labor. The report estimates that nearly two-thirds of jobs in the U.S. and Europe are exposed to some degree of AI automation, based on data on the typical tasks of thousands of occupations. (12)

In the end, the advanced, breakthrough AI systems we are witnessing today are the result of decades of steady progress in research and applications, to the point where we can now train neural networks and feed them the enormous amounts of data currently available. Like any new technology, they come with their own challenges, problems, and fears, and they will certainly bring great change to societies, to the economy, and to everything in our lives, just like the technologies that preceded them, whether computers, the Internet, smartphones, or social networks. So we have to prepare for the future and try to adapt, as we always have when all those earlier technologies invaded our lives.

____________________________________________

Sources:

  1. Pause Giant AI Experiments: An Open Letter
  2. Why Are We Using Black Box Models in AI When We Don’t Need To?
  3. Introducing OpenAI
  4. The secret history of Elon Musk, Sam Altman, and OpenAI
  5. OpenAI co-founder on company’s past approach to openly sharing research: ‘We were wrong’
  6. Interview with OpenAI’s Greg Brockman: GPT-4 isn’t perfect, but neither are you
  7. SpaceX CEO Elon Musk Speaks with CNBC’s “Closing Bell”
  8. Fighting ‘Woke AI,’ Musk Recruits Team to Develop OpenAI Rival
  9. $10.5 trillion in expected annual losses. Why do experts predict a cybersecurity catastrophe in two years?
  10. The criminal use of ChatGPT – a cautionary tale about large language models
  11. They thought loved ones were calling for help. It was an AI scam
  12. Generative AI set to affect 300mn jobs across major economies