(Washington Watch) 2023 in the United States: OpenAI's "revolving door" conceals a dispute over the direction of new technology

China News Service, San Francisco, December 12

China News Service reporter Liu Guanguan

Five days "out of control"

On November 17 of this year, OpenAI, the San Francisco-based artificial intelligence research company, abruptly announced that Sam Altman would no longer serve as its chief executive officer (CEO) and would leave the board of directors. The board said Altman had not been "consistently candid" in his communications with it, and that it no longer had confidence in his ability to continue leading OpenAI. The company's co-founder and president, Greg Brockman, would also step down as chairman of the board; Brockman promptly announced that he was leaving the company.

Altman, 38, co-founded OpenAI in 2015. The company released ChatGPT, a chatbot built on a large language model, in November 2022; it attracted more than 1 million registered users within five days.

OpenAI's abrupt leadership shake-up, and the harsh wording of its board's statement, shocked the global tech community. In early November, OpenAI had held its first developer conference, at which Altman introduced a slate of new products to a global audience. At the APEC CEO Summit just before the incident, he had also discussed the broad prospects of artificial intelligence with executives of well-known technology companies.

Pictured on December 12: two personnel change announcements posted in November on OpenAI's company blog. Photo by China News Service reporter Liu Guanguan

The series of "reversals" that followed was even more unexpected. After OpenAI announced Altman's departure, the company's investors tried to engineer his return. On November 19, Altman came to OpenAI's headquarters as a visitor to negotiate with the board about returning. One of his conditions was a restructuring of the board, and the negotiations broke down.

Late that night, Microsoft CEO Satya Nadella announced on social media that Altman, Brockman and their former colleagues would join Microsoft to lead a new advanced AI research team. Microsoft is OpenAI's largest investor, having put roughly $13 billion into the company. Nearly all of OpenAI's roughly 770 employees then signed a letter saying that if Altman could not return, they would resign and join the new Microsoft team.

Late at night on November 21, OpenAI announced that Altman would return as CEO and that the board would be reorganized. Brockman immediately announced his return as well, and OpenAI's five days "out of control" came to an end.

The battle between conservatives and radicals

A month has now passed since OpenAI's leadership upheaval, and accounts of its cause still differ. In essence, it was most likely a battle between conservatives and radicals over how AI should be developed, one that exposed the divergence in the field between idealism and the commercial path.

OpenAI was founded as a non-profit organization whose stated aim is to develop "safe and beneficial" artificial general intelligence for the benefit of all humanity, a mission that dovetails with the idea of "effective altruism". Effective altruists believe that the unbridled development of AI could destroy humanity, and that AI development should prioritize safety over speed.

In a 2015 statement, OpenAI said it was important to have a leading research institution that could prioritize a good outcome for all over its own self-interest. This philosophy laid the foundation for OpenAI's rise: years of spare-no-expense investment and an emphasis on long-term benefits allowed ChatGPT to stand out from similar products.

There is, however, an opposing school in the AI field known as "effective accelerationism", which advocates accelerating technological innovation unconditionally and bringing it to market quickly, even at the cost of disrupting existing social structures. Some observers hold that although Altman also stresses the importance of AI safety and regulation, his position clearly leans toward effective accelerationism.

OpenAI's products have also shown a troubling side. Users have repeatedly reported, based on their own experience, that ChatGPT can supply ideas that are unethical or against the law.

Reuters, citing two people familiar with the matter, reported that before the personnel turmoil, several OpenAI employees had written to the board warning that a breakthrough toward artificial general intelligence by an internal AI project known as "Q*" could pose a threat to humanity.

As the company grew rapidly, Altman's enthusiasm for commercializing artificial intelligence became more pronounced, and his disagreements with board members such as Ilya Sutskever deepened. The Guardian reported that after Altman's dismissal, an OpenAI employee asked Sutskever whether the move was a "coup". Sutskever denied this, saying the board was fulfilling its duty as a nonprofit organization.

Humans need "alignment" even more

In late November, Altman said that after "all this," the company "hadn't lost a single employee." Yet even though the turmoil appeared, on the surface, to change little, the safety of artificial intelligence has again drawn widespread attention, and a number of countries, institutions and organizations are tightening their oversight of AI.

On October 30, U.S. President Joe Biden signed an executive order setting new standards for AI safety; U.S. media described it as the Biden administration's first major binding action on artificial intelligence. On December 8, the European Parliament, EU member states and the European Commission reached agreement on the Artificial Intelligence Act, set to become the world's first comprehensive regulation in the field.

In December, OpenAI made several social media posts about AI safety, noting that its "Superalignment" team had published its first paper. "Superalignment" refers to building a set of control techniques to ensure that the goals of an AI system remain consistent with human values and interests.

The Superalignment team was formed in July of this year, with Sutskever among its leaders, but OpenAI has so far said nothing about his future role at the company. Those who have "experienced all this" may already understand that it is humans themselves, far more than AI, who need to be "aligned". (ENDS)