Brad Smith: "What we really need are safety nets" (Photograph: ANDREW CABALLERO-REYNOLDS/AFP)

According to Brad Smith, president of Microsoft, artificial "superintelligence" is still a long way from reality. "It is absolutely unlikely that we will see a so-called AGI in the next 12 months, where computers are more powerful than humans," Smith said on Thursday. "It's going to take years, if not decades." Even so, he argued, it is already necessary to think about the safety of this technology.

AGI stands for "Artificial General Intelligence". Unlike existing AI systems such as ChatGPT from Microsoft partner OpenAI, such programs would be able to perform a wide range of complex tasks for which they have not been specifically trained. There are, however, several definitions of AGI. OpenAI's is "highly autonomous systems that outperform humans at most economically valuable work." Another is "a computer program that has the ability to understand or learn any intellectual task that a human being can."

The debate over when a "technological singularity" might arrive, the moment artificial intelligence surpasses human intelligence, was given new fuel by the turmoil surrounding the brief dismissal of OpenAI CEO Sam Altman. Disputes over how to handle a breakthrough in AI research may have played a role in the affair. According to unnamed insiders, the developers of the project "Q*" (pronounced "Q-Star") had warned OpenAI's board of directors about the potentially dangerous consequences for humanity of a hasty release of the program. In Microsoft President Smith's view, however, the topic of superintelligence played no role in Altman's dismissal. There had been differences of opinion with the board, but not on fundamental issues such as this.

Even when asked directly, Altman himself has not yet revealed why exactly OpenAI's board withdrew its confidence in him. An internal investigation into the matter is reportedly planned.

Lawmakers around the world are struggling to come up with appropriate AI regulation. At an AI summit at the beginning of November, several countries pledged to cooperate on the issue. "What we really need are safety nets," Smith continued. "Just as there are emergency brakes in elevators or circuit breakers for electricity, there should be similar safeguards in AI systems that control critical infrastructure, so that they always remain under human control."

pbe/Reuters