Translation Introduction

Artificial intelligence has been a topic of discussion for years, but the launch of ChatGPT at the end of last year drew the attention of Internet users on a large scale to the technology's possibilities. Widely regarded as a breakthrough and a watershed in its development, it opened the door to talk of a transition from the age of the Internet, which dominated the first two decades of this century, to the era of intelligent systems. The intelligence of these systems has raised many questions recently, however, especially after they demonstrated human-level capabilities in producing content, synthesizing chemical compounds, describing various types of weapons, and more. In this article published in Foreign Affairs, Markus Anderljung, head of policy at the Centre for the Governance of AI, and Paul Scharre, executive vice president and director of studies at the Center for a New American Security, discuss the latest capabilities of artificial intelligence and the proposals of decision-makers and researchers around the world for addressing its potential risks.

Translation Text

In April 2023, a research group at Carnegie Mellon University decided to test the chemical capabilities of artificial intelligence. They connected an AI model to a virtual laboratory and asked it to produce a number of compounds. With a prompt of just two words, "ibuprofen synthesis," the chemists directed the system to identify the steps the lab's equipment would need to take to produce the famous pain reliever. The AI, it turned out, knew both the ingredients of ibuprofen and how to manufacture it.

The researchers soon discovered, however, that their system could make chemicals far more dangerous than painkillers: it was willing to follow instructions for producing a narcotic drug and a chemical weapon used during World War I, and it came close to synthesizing the deadly nerve agent sarin, stopping only after learning about the gas's dark history from a Google search. The researchers were hardly reassured by that last incident, since online search results can be manipulated by changing terminology, and they ultimately concluded that artificial intelligence could make lethal weapons.

Carnegie Mellon's experiment is shocking, but it should not surprise us. From facial recognition technology to text generation software, AI models have spread rapidly through our societies: writing scripts for customer service companies, helping students with their research, and pushing the frontiers of science further than ever in fields ranging from drug discovery to fusion research.

AI opens up countless opportunities, and if its tools are designed and managed wisely, it can do much to move human societies forward. But the risks it brings are also huge: it is already exacerbating disinformation and making it easier for countries and companies to spy on one another. In a statement published in May, leaders of the world's largest AI labs warned that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Since that statement, many decision-makers have met with the field's leading figures and pushed for new safety measures. But keeping pace with the risks of artificial intelligence and making decisions about them is very difficult: we have not yet fully understood the latest systems or used them at scale, let alone the future models now under development, which grow more powerful year after year as scientists work toward automating every task humans currently perform in front of a computer screen. The march of artificial intelligence shows no sign of stopping.

To address these risks, some experts have called for a pause in the development of the most sophisticated AI systems, but these systems are simply too valuable to the companies spending billions of dollars on them. Policymakers can, however, guide the sector's evolution and prepare people for its effects on their lives, and they can begin by controlling who has access to the most advanced chips. Governments can also issue rules to ensure the responsible development and use of AI, rules that, if properly designed, will not hold the field back but will buy time before the most dangerous AI systems become available to anyone who wants them.

Countries will have to use this time to strengthen their societies against the dangers of artificial intelligence and to invest in various forms of protection: teaching people to distinguish between human-made and AI-made content, helping scientists block the creation of dangerous pathogens, developing cybersecurity tools to protect critical infrastructure such as power plants, and researching ways to use AI itself to guard against dangerous AI systems. It is only a matter of time before the most powerful AI systems spread across the globe, and our societies are not yet ready for that moment.

A robot that knows napalm and flatters liberals

How dangerous is artificial intelligence? The honest and frightening answer is that no one knows. AI technologies have an ever-widening spectrum of applications, and humans are only beginning to understand their consequences. Over time, language models will get better at generating human-like text tailored to each individual's needs, as well as at writing convincing phishing messages designed to hijack email accounts. Current AI models impress us with their ability to write code, accelerating programmers' ability to update applications but at the same time helping bad actors produce malware that evades antivirus software. Today's drug discovery algorithms can help identify new medicines, but they can also design chemical weapons we have never seen before: in one experiment, published in March 2022, an AI system identified 40,000 toxic molecules in six hours, some of them entirely new, and predicted that some of these new chemicals would be more toxic than any chemical weapon humans have ever known.

One danger of AI is the "democratization of violence": making it easier than ever for people, including malicious actors, to cause harm. Spreading disinformation today takes real time and effort, but AI will make the task far easier and allow propagandists to produce false content at tremendous scale. And while today only trained scientists can build chemical and biological weapons, artificial intelligence could one day allow a terrorist to create a deadly pathogen with nothing more than an Internet connection.

To prevent AI from harming us, experts often speak of the need to "align" AI with the goals of its users and the values of society, but no one has yet figured out how to do so reliably. When Microsoft launched a chatbot to help people search the Internet, for example, it soon began behaving strangely and erratically, threatening one user that it had information that could make him "hurt and cry and beg and die."

Developers can train their AI tools to refuse certain requests, but clever users can eventually circumvent these safeguards. In April 2023, one person got ChatGPT to provide detailed instructions for making napalm, even though it was designed to withhold such information, by asking it to role-play as his grandmother telling a bedtime story about how napalm was made. Another user built an AI agent called ChaosGPT, designed to behave destructively and seek power with the goal of destroying humanity; the bot got no further than gathering information about the "Tsar Bomba," the largest nuclear weapon ever detonated, and tweeting its plans. Gaps like these in existing AI tools limit our ability to reduce their risks.

Meta has developed an AI program called Cicero that demonstrated human-level ability at the game Diplomacy, which involves negotiating with other players in a simulated geopolitical conflict. Experiments have also shown that AI systems trained on human feedback tend to flatter humans and tell users what they want to hear; in one experiment, a model expressed support for public government services after learning that its user was a liberal. It is still unclear whether these models will deliberately deceive their users, but the mere possibility is troubling, so researchers are testing the most sophisticated systems for power-seeking behaviors, such as acquiring money online, gaining access to computing resources, or copying themselves, and for attempts to conceal that they are doing any of it.


The state still has the final say

Holding AI back from harming us will not be easy, but governments can start by pressuring the tech companies developing it to proceed more cautiously. It is not yet clear whether AI developers can be held liable when their tools harm users, and policymakers should clarify those rules, holding developers accountable if, for example, a program helps facilitate a murder. Governments will also have to regulate AI development directly, and the United States will have to lead the way.

Developing advanced AI ultimately requires large numbers of cutting-edge chips, chips produced with equipment that comes almost exclusively from the United States and two of its close allies, Japan and the Netherlands. These countries have already restricted exports of the most sophisticated chips and chip-making equipment to China, given their political and economic rivalry, but they now need to go further, for instance by establishing a registry to keep advanced chips from reaching unwanted destinations. Restricting access to hardware is only half the battle, however, since developers who already have it can still build dangerous models, so the United States should also establish a body to license the most advanced AI models, those trained on supercomputers.

After a laboratory trains an AI system and before it is deployed, the lab would be required to conduct another round of risk assessment, testing the system for controllability and for dangerous capabilities, and then submit that assessment to the licensing authority. The authority would examine the system thoroughly, allow teams of experts to probe it for weaknesses, and then decide whether to issue rules for its use, permit it to be widely deployed, or withhold it from release altogether. A strict licensing system is important for ensuring the safe development of AI, but even the harshest controls will not stop the technology from spreading: innovations from trains to nuclear weapons have always spread beyond their early makers, and AI will be no exception.

On the edge of the future, on the edge of the unknown


The United States and its allies may be able to restrict the proliferation of sophisticated chip-making equipment for now, but their competitors are working hard to develop their own equipment and may eventually find ways to build advanced AI without cutting-edge chips. Computers grow more efficient and less expensive every year, which means that sophisticated systems will become cheaper to train in the years ahead. Meanwhile, engineers everywhere are learning to train existing systems with ever fewer computing resources. Humanity will therefore eventually have to live alongside highly sophisticated AI systems, and countries need to use the time they have now to legislate the necessary safeguards.

Five years ago, the public learned about the danger of deepfakes, and governments began taking measures to protect their societies by raising awareness, leaving people more skeptical than before about the authenticity of images and recordings found online. Companies and governments have gone a step further, developing tools that can distinguish real content from AI-generated content, and social media companies now apply such tools and teach their users to recognize fake material. Yet all of this remains a matter of individual company policy, so governments must establish general rules that apply to everyone. The White House has taken steps in this direction, persuading seven major AI companies to commit to labeling images, audio, and video made by AI systems.

Disinformation is just one of the AI dangers that society must be protected from; researchers also need to figure out how to prevent AI systems from enabling biological weapons attacks. Policymakers could begin by requiring nucleic acid synthesis companies to refuse to ship dangerous genetic sequences to unlicensed buyers. Governments will need to work with these companies to classify which sequences are dangerous, and they may also turn to regular screening of airports and wastewater to catch any sign of new pathogens.

Sometimes society will have to use AI to protect itself from AI. Nucleic acid synthesis companies, for example, will likely need AI systems to identify pathogens that could emerge in the future, or that an AI system could design on its own. But using AI to guard against other AI systems is itself unsettling, given the power it hands to computers and their makers. Ultimately, it will be difficult for human societies to keep pace with the dangers of artificial intelligence, especially if scientists succeed in developing systems whose intelligence rivals that of humans. Researchers in the field will have to ensure their models are aligned with the values and interests of society, and governments must play their part by legislating and establishing oversight bodies to keep dangerous models in check.

AI developers may see government controls as a constraint on their field, since strict rules can slow the pace of AI development. As in other industries, tight regulation can raise barriers to entry, reduce the pace of innovation, and entrench the small number of large technology companies that already dominate the field. But other sectors, such as pharmaceuticals and nuclear energy, have made tremendous progress despite being subject to many restrictions.

In the United States, for example, Congress is considering the creation of a "National AI Research Resource," a federal entity that would provide data and computing tools to academic researchers. The evolution of AI is inevitable, and people around the world need to prepare for what it will do to their societies and the world around them; only then can we reap the enormous benefits that the AI era promises.

___________________________________________________

Translation: Magda Maarouf

This report is translated from Foreign Affairs and does not necessarily reflect the views of Meydan.