Geoffrey Hinton, often described as the godfather of artificial intelligence, on Wednesday urged governments to step in to ensure machines do not take over society.

Hinton made headlines in May when he announced his resignation from Google so he could speak more freely about the dangers of artificial intelligence, shortly after the launch of ChatGPT captivated the world.

The artificial intelligence scientist, who teaches at the University of Toronto, spoke to a crowd at the Collision technology conference in the Canadian city.

The conference attracted more than 30,000 startup founders, investors and tech professionals, most of whom came to learn how to ride the AI wave rather than to hear lessons about its risks or calls for government intervention.

"Before AI gets smarter than us, I think the people who develop it should be encouraged to put in a lot of effort to understand how it might try to take us away from control," Hinton said.

"Now there are 99 very intelligent people trying to make AI better, and one very smart person trying to figure out how to stop it from taking over, and maybe we should be more balanced."

Hinton warned that the risks of artificial intelligence should be taken seriously.

"I think it's important for people to realize that this is not science fiction or just raising fears," he said, adding: "It's a real risk that we have to think about carefully, and we have to figure out how to deal with it first."

Hinton expressed concern that AI could deepen inequality, with the enormous productivity gains it generates going to the wealthy rather than to workers.

"Wealth will not go to the people who work, it will make the rich richer, not the poor, and this is a very bad society," he said.

He also pointed to the danger of "fake news" generated by bots such as ChatGPT, and said he hoped content produced by artificial intelligence programs could be marked, much as central banks mark banknotes.

He stressed that it is crucial that "we try, for example, to mark everything that is fake as fake," adding: "Whether we can do that technically, I don't know."