This is the third part of the series on computer chess by ChessBase founder Frederic Friedel. The first part can be found here, the second one under this link.

It was the mid-1990s. I was in London and, as so often, I accompanied World Chess Champion Garry Kasparov on one of his appearances. This time to Home House, a wonderful Georgian villa in Marylebone, where we met a former chess prodigy at dinner. He had achieved master strength (Elo 2300+) at the age of 13 and had captained a number of English youth teams. He was also world-class in computer games. The encounter was interesting, and the young lad enthusiastically talked about a game he was developing. After he left, I said to Garry, "A pretty cheeky young man!" "But very smart," Garry replied. We left it at that.


Twenty years later, I read in the news that Google had bought a company called DeepMind Technologies for 400 million pounds. Working on artificial intelligence, DeepMind had developed a neural network that had taught itself to play first-generation video games like Pong and Space Invaders. The program was not hand-coded, but used methods very similar to those by which humans get better at a game. The goal, DeepMind had stated, was "to develop a universal AI that is useful and effective for almost everything."

Go as the basis for the strongest chess computer in the world

One of the founders of the company was Demis Hassabis, whom we had met at Home House. For a year I watched the progress the company made as a member of the Google family, and I was particularly fascinated by how it managed to solve a problem at which computer experts had failed for decades: with AlphaGo, DeepMind had developed a program that learned to play the ancient board game Go, bringing it first to master strength and then to world-champion level.

The rules of Go are deceptively simple, but the vast number of possibilities makes the game very hard for computers to calculate. In the first article of my series, I described how there are around 10¹²⁰ possible move sequences in a 40-move game of chess - far more than the number of atoms in the universe. In Go, there are around 10¹⁷⁰ possible configurations of the board. By comparison, the number of possible chess games is insignificant.

AlphaGo (photo: AP)

The program used deep neural networks to study a very large number of games and to develop its own understanding of what human play looks like. Afterwards, the program refined its skills by playing against itself in different versions and learning from its mistakes. This process, known as reinforcement learning, resulted in software that plays at master level.
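The self-play loop described here can be sketched in a few lines. The following is purely my own toy illustration of the principle, not DeepMind's code: tabular value learning through self-play on the trivial game of Nim, where players alternately remove one or two stones from a pile and whoever takes the last stone wins. Known theory says the player to move loses whenever the pile size is a multiple of three - the program below is told nothing of this, only the rules, yet discovers it by playing against itself.

```python
import random

# Self-play value learning on Nim (one pile, take 1 or 2 stones,
# taking the last stone wins). Toy illustration, not DeepMind's method.
N = 12                                # starting pile size
V = {s: 0.0 for s in range(N + 1)}    # learned value of a pile for the side to move
ALPHA, EPS = 0.1, 0.2                 # learning rate and exploration rate
random.seed(0)

def best_move(s):
    # Pick the move that leaves the opponent the worst position.
    return min((m for m in (1, 2) if m <= s), key=lambda m: V[s - m])

for episode in range(20000):
    s = N
    while s > 0:
        moves = [m for m in (1, 2) if m <= s]
        # Mostly play the current best move, sometimes explore at random.
        m = random.choice(moves) if random.random() < EPS else best_move(s)
        nxt = s - m
        # Taking the last stone wins (+1); otherwise our value is the
        # negative of the opponent's value in the resulting position.
        target = 1.0 if nxt == 0 else -V[nxt]
        V[s] += ALPHA * (target - V[s])
        s = nxt

# After training, the losing positions (multiples of 3) carry negative values.
print([round(V[s], 2) for s in range(1, 7)])
```

After 20,000 self-play games, the value table marks piles of 3, 6, 9 and 12 stones as losing for the side to move - knowledge nobody programmed in, exactly the kind of self-taught evaluation the article describes, just on a microscopic scale.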

More than twenty years after their first meeting, Garry Kasparov discusses artificial intelligence with Demis Hassabis.

At the time, Hassabis was already working on the development of a chess engine that was different from all previous ones. Traditional engines have their chess knowledge programmed in. DeepMind's neural network took a radically different path: it was shown the rules of chess, how the pieces move, and the goal of the game, checkmate. Nothing more. With the help of the most advanced techniques of artificial intelligence, the program, AlphaZero, played against itself millions of times, independently recognizing patterns and adjusting its evaluations at its own discretion.

Computer plays 44 million games against itself

How was that possible? In the beginning, the system played absurd games in which one side sacrificed three pieces for nothing, yet the other side could not win because it had thrown away four. But with each training run of 10,000 steps, AlphaZero became stronger. It played 44 million games against itself and in the process reached world-class strength at chess.


Nobody had told AlphaZero anything about strategy; no one had stated that material was important, that queens are more valuable than bishops, that mobility plays a role. The program had discovered everything on its own and drawn its own conclusions - conclusions, incidentally, that a human will never fully understand. In the end, AlphaZero played a test match against an open-source engine called Stockfish, one of the three or four strongest traditional engines in the world.

These engines all reach around 3500 points on the rating scale, making them at least 700 points stronger than any human. Stockfish calculated 70 million positions per second; AlphaZero just 80,000. It compensated for this nearly thousand-fold handicap by examining only the most promising variations - moves that had proved particularly effective in similar positions during its games against itself.
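The arithmetic behind this selectivity is striking. With hypothetical numbers of my own choosing - a typical chess position offers roughly 30 legal moves - a small calculation shows why keeping only a few candidate moves per position shrinks the search tree so dramatically:

```python
# Hypothetical illustration: a full-width search with branching factor b
# to depth d visits on the order of b**d positions; a selective search
# keeping only k promising candidate moves per position visits k**d.
b = 30   # rough number of legal moves in a typical chess position
k = 3    # candidate moves a highly selective search might keep
d = 6    # search depth in half-moves (plies)

full_width = b ** d
selective = k ** d
print(full_width, selective, full_width // selective)
# 729 million positions versus 729: a factor of one million
```

With these made-up figures, the selective tree is a million times smaller - which is how a program examining only 80,000 positions per second can keep pace with one examining 70 million.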

All games without opening book

Of the 100 games against Stockfish, AlphaZero won 25 with White, three with Black, and drew the remaining 72. All games were played without access to an opening book. In addition, a series of twelve matches of 100 games each was held, starting from twelve of the most popular human openings. Of these games, AlphaZero won 290, drew 886 and lost 24.

But that's not all. The techniques used by DeepMind are not only applicable to chess. Neural networks can be used to learn almost anything: recognizing images, faces or handwriting; processing language; computing movement (for computer games or robots); understanding economies and markets and making better predictions than human experts.

Young programmers would do well to understand how their field is changing, how the transition from meticulous manual programming to autonomous machine learning is currently taking place, and where this method is superior. AlphaZero is just an early proof of that. And we have to assume that this will apply not only to Go and chess, but to many areas of our lives. This is the future of humanity, and it would be good if we adapted to it at an early stage.