Sam Altman, the CEO of OpenAI – the company behind tools such as ChatGPT – appeared yesterday before the United States Senate with a message different from the one technology industry leaders usually deliver when facing Washington politicians. Instead of asking for less regulation, he asked for more. "This technology can cause significant harm to the world. If something goes wrong, it can end very, very badly," he said at one point in his testimony.

The head of the company – which last year ceased to be a non-profit organization and received a major investment from Microsoft – even said that it will probably be necessary to create some kind of international supervisory body, similar to the one that exists, for example, for nuclear weapons.

Although it sounds extreme, his proposal for regulating the artificial intelligence market in the US starts from a somewhat different premise than the one chosen by the European Union. Altman, who acknowledged some of the problems with current language models – the engines that power services such as Bard or ChatGPT and other generative artificial intelligence tools – agreed that government regulation can help mitigate some of the risks inherent in this technology.

But his proposal calls for the companies that develop these artificial intelligence systems to have some influence over those decisions, and for regulation to apply only in cases where there is real danger. "For example, when we talk about intelligences that are capable of persuading, manipulating or influencing a person's behavior, or of designing dangerous chemical compounds," he explained. Altman believes that OpenAI's latest language model, GPT-4, would fall into this category.

To that end, he suggests that the US Congress approve the creation of a new agency responsible for licensing language models, especially once they reach certain capabilities.

The agency would also be responsible for conducting independent audits to ensure that these models and tools comply with basic security measures.

Other experts consulted yesterday by the Senate committee also asked the government to require that citizens be informed when they are talking to an artificial intelligence or viewing an image or video created by a generative engine.

Regarding the impact of these tools on the labor market – an issue several senators pressed – Altman and other executives insisted that the development of better artificial intelligence engines will have a positive effect on employment, creating many more jobs than it displaces, a view not all industry experts share.

