Will AI be the downfall of humanity or the solution to all the world's problems? Both sides are represented at the "AI Safety Summit" on November 1-2 at Bletchley Park, a small estate just over half an hour by train from London. It was here that the mathematician Alan Turing led the effort to crack the Nazis' "Enigma" encryption machine.

Turing later helped develop one of the world's first computers, and his ideas laid the foundation for what would come to be known as AI. He also devised the "Turing test": if a human converses with a machine and cannot tell whether the other party is human, the machine is deemed to exhibit artificial intelligence.

Today, many would say that ChatGPT has already passed that test.

Companies compete

Since ChatGPT launched almost a year ago, it has gained competitors such as Google's Bard and Anthropic's Claude. More and more companies are competing to push the boundaries of what AI can generate: artificially created text, images, video, music and other work that just a year ago was considered the preserve of human skill.

But AI has its flaws. It is a machine built to guess the most likely answer, and like a person who guesses with bravado, it answers confidently even when it doesn't really know what it's talking about. In itself, that needn't be a problem. Just as we know we can't blindly trust everything on the internet, we should be able to handle artificial nonsense too.

But many people don't read past the headline and believe what they want to believe. AI could therefore benefit troll factories that want to create disinformation on a massive scale. It also lowers the threshold for developing deadly viruses, both the kind that infect computers and the kind that infect human bodies. These are some of the short-term risks.

Trying to agree on the risks

In the long run, the stakes are higher: the survival of all humanity, if one is to believe those who warn of the challenges we will face at the next milestone, the development of "artificial general intelligence". An AI millions of times more intelligent than any human could ever be. Such an AI could easily manipulate us and put all of humanity out of play. Anyone who has seen Terminator knows how that ends.

Those raising that warning flag say it could happen as early as within the next decade. The other side dismisses them as dystopian doomsayers crying wolf, and instead highlights all the possibilities of AI: automating tedious jobs, developing cures for incurable diseases, and finding smart solutions to climate change.

Dystopia or utopia. Hell or El Dorado. Halt AI development or embrace the technology. It is in this field of tension that presidents, tech CEOs and researchers are now meeting for a two-day summit, with the goal of at least agreeing on which risks should be taken seriously and prioritized.