This year we are celebrating the 90th anniversary of Kurt Gödel's groundbreaking work, which founded modern theoretical computer science and the theory of artificial intelligence (AI) in 1931.

Gödel sent shock waves through the academic community of his time when he demonstrated the fundamental limits of computing, AI, logic, and mathematics itself.

This had a huge impact on the science and philosophy of the 20th century.

Born in Brünn (now Brno), Gödel was 25 years old when he wrote this work in Vienna. In the course of it, he designed a universal language for coding any process that can be formalized. It is based on whole numbers and can represent both data, such as axioms for the basic arithmetic operations and provable theorems, and programs, for example sequences of operations on the data that generate proofs. In particular, it allows the behavior of any digital computer to be formalized in axiomatic form. Gödel constructed his famous self-referential, formally undecidable statements, which state that their own truth content cannot be determined by any computation.
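To get a feel for such an integer coding, here is a minimal sketch in Python. The prime-exponent scheme and the tiny symbol alphabet below are illustrative textbook choices, not Gödel's exact 1931 encoding, but they show how an entire formula can be packed into, and recovered from, a single whole number.

```python
# Minimal illustration of integer ("Gödel") coding: a sequence of symbols
# is mapped to a single whole number via prime-power exponents.
# This is a textbook-style variant, not Gödel's exact 1931 encoding.

SYMBOLS = ["0", "S", "=", "(", ")", "+", "*", "x"]  # hypothetical tiny alphabet
CODE = {s: i + 1 for i, s in enumerate(SYMBOLS)}    # each symbol gets a positive integer

def primes():
    """Yield 2, 3, 5, 7, ... (naive trial division is fine for a demo)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def encode(formula):
    """Map a symbol sequence to one integer: the product of p_i ** CODE[symbol_i]."""
    g, ps = 1, primes()
    for sym in formula:
        g *= next(ps) ** CODE[sym]
    return g

def decode(g):
    """Recover the symbol sequence by reading off the prime exponents."""
    out, ps = [], primes()
    while g > 1:
        p, e = next(ps), 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(SYMBOLS[e - 1])
    return out

formula = ["S", "0", "=", "0", "+", "S", "0"]   # just an example symbol string
n = encode(formula)
assert decode(n) == formula
print(n)  # one whole number that represents the entire formula
```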

In doing so, he ultimately identified the fundamental limits of algorithmic theorem proving, computation, and any kind of computational AI.

Some even misunderstood his result and thought that he had shown that humans are superior to AI.

Indeed, much of the early AI of the 1940s to 1970s was concerned with theorem proving and deduction in the style of Gödel (as opposed to the inductive approach of today's dominant machine learning).

To a certain extent, it was possible to support human specialists with inferential expert systems.

Leibniz, Frege, Zuse, Turing

Like almost all great scientists, Gödel stood on the shoulders of others. He combined Georg Cantor's famous diagonalization trick from 1891 (which showed that there are different kinds of infinity) with basic insights from Gottlob Frege, who introduced the first formal language in 1879, from Thoralf Skolem, who introduced primitive recursive functions in 1923 (these are basic building blocks of the computable), and from Jacques Herbrand, who recognized the limits of Skolem's approach. This work in turn built on groundbreaking ideas that the polymath and "first computer scientist" Gottfried Wilhelm Leibniz had formulated much earlier.
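Cantor's diagonal trick can be sketched in a few lines. The following toy example is my own illustration, not notation from the original argument: given any enumeration of infinite 0/1 sequences, flipping the n-th bit of the n-th sequence yields a sequence that differs from every listed one, so no enumeration can cover them all.

```python
# Toy illustration of Cantor's 1891 diagonal argument (my own example):
# given ANY enumeration of infinite 0/1 sequences, build one that is
# missing from the enumeration by flipping the n-th bit of the n-th sequence.

def diagonal(enumeration):
    """enumeration(n) returns the n-th listed sequence as a function of position."""
    def missing_sequence(k):
        return 1 - enumeration(k)(k)   # differ from sequence k at position k
    return missing_sequence

# A hypothetical enumeration: sequence n is 1 at position n, 0 elsewhere.
listed = lambda n: (lambda k: 1 if k == n else 0)

d = diagonal(listed)
for n in range(5):
    assert d(n) != listed(n)(n)        # d disagrees with every listed sequence
print([d(k) for k in range(5)])        # [0, 0, 0, 0, 0] for this particular enumeration
```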

Leibniz's "Calculemus!" was a defining motto of the Enlightenment: "If controversies were to arise between philosophers, they would no longer need to dispute any more than two accountants would. It would suffice for them to sit down with their pencils and slates and say to each other [...]: Let us calculate!" In 1931, however, Gödel showed that there are fundamental limits to what can be determined by calculation.

In 1935, Alonzo Church derived a corollary of Gödel's result: the decision problem (Entscheidungsproblem) has no general solution. To do this, he used his own alternative universal language, the untyped lambda calculus, which forms the basis of the influential programming language LISP. In 1936, Alan Turing presented yet another universal model: the Turing machine. With it, he rederived the result mentioned above. In the same year, 1936, Emil Post published a further independent universal model of computing. Today we know many such models. Reportedly, however, it was Turing's work that convinced Gödel of the universality of his own approach.
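As a small taste of what "universal language" means here, the following sketch renders Church numerals, a standard construction of the untyped lambda calculus, in Python syntax; the names and the use of Python are my own illustrative choices, not Church's notation.

```python
# Church numerals: whole numbers represented purely as functions, so that
# arithmetic becomes function application (a standard lambda-calculus idea,
# written here in Python syntax for readability).

zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # apply f one more time
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church):
    """Convert a Church numeral back to an ordinary integer for inspection."""
    return church(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
```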

Many of Gödel's instruction sequences were series of multiplications of coded memory contents by whole numbers. Gödel was not concerned that the computational complexity of such multiplications tends to grow with the size of the memory. Similarly, Church ignored the context-dependent space and time complexity of the basic operations of his algorithms: the computational effort required to carry out a particular operation is not bounded in advance.
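The following toy measurement, a sketch of my own using Python's arbitrary-precision integers, illustrates that point: a single "multiply the coded memory by a constant" step gets slower as the coded memory grows, so such a basic operation has no fixed cost.

```python
# Toy demonstration (my own sketch) that multiplying a coded "memory" by a
# small constant is not a constant-cost operation: with arbitrary-precision
# integers, the cost of one multiplication grows with the size of the memory.

import time

def time_one_multiplication(memory_bits, repeats=100):
    memory = (1 << memory_bits) - 1    # a coded "memory" occupying memory_bits bits
    start = time.perf_counter()
    for _ in range(repeats):
        _ = memory * 12345             # one "basic operation" on the whole memory
    return (time.perf_counter() - start) / repeats

for bits in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{bits:>10} bits: {time_one_multiplication(bits):.8f} s per multiplication")
```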