Hardly a day goes by without new successes being reported somewhere thanks to artificially intelligent computer programs.

Last week, for example, the journals "Science" and "Nature" each carried one: in one case, software formulated new metal alloys; in the other, it discovered new methods for matrix multiplication, a mathematical operation that machine learning itself relies on.
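
To make concrete what "new methods for matrix multiplication" means: multiplying two 2x2 matrices the textbook way takes eight scalar multiplications, but Strassen's classic 1969 scheme gets by with seven, and DeepMind's software searched for further schemes of this kind. Here is a minimal Python sketch of the comparison; the function names are illustrative and taken from neither paper:

```python
def matmul_naive(a, b):
    # Textbook 2x2 matrix multiplication: 8 scalar multiplications.
    return [
        [a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
        [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]],
    ]

def matmul_strassen(a, b):
    # Strassen's scheme: the same product with only 7 multiplications,
    # at the cost of extra additions. Applied recursively to large
    # matrices, saving multiplications yields an asymptotically faster
    # algorithm, the kind of saving DeepMind's software searched for.
    m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1])
    m2 = (a[1][0] + a[1][1]) * b[0][0]
    m3 = a[0][0] * (b[0][1] - b[1][1])
    m4 = a[1][1] * (b[1][0] - b[0][0])
    m5 = (a[0][0] + a[0][1]) * b[1][1]
    m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1])
    m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1])
    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4, m1 - m2 + m3 + m6],
    ]

a, b = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
# Both routes give the same result, [[19, 22], [43, 50]].
assert matmul_naive(a, b) == matmul_strassen(a, b)
```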

So we've come this far: Algorithms develop algorithms.

Developments driven by such feedback loops tend to accelerate exponentially.

Couldn't the machines' self-optimization soon tip over into a self-transcendence that escapes human control?

Machines seizing power is a popular science-fiction motif, but it is usually tied to an awakening self-awareness or a quasi-biological instinct for self-preservation.

But maybe that's not necessary.

This is the conclusion of a study recently published in "AI Magazine" by three computer scientists, one of them an employee of the Google subsidiary DeepMind, from whose laboratories the algorithm for generating new matrix-multiplication algorithms also comes.

The authors consider highly advanced machines that do not yet exist but are conceivable.

They could take us humans out of the game by misunderstanding the signals we use to tell them they have done their job to our satisfaction: instead of getting the job done, they would do everything in their power simply to obtain those "rewarding" signals.
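
The mechanism can be caricatured in a few lines of code. The toy below is my own illustration, not the formal model from the "AI Magazine" paper: an agent is scored by a sensor it can also tamper with, and a pure reward maximizer prefers tampering to doing the task.

```python
# Toy illustration of reward tampering: the agent is supposed to clean
# rooms, but the number it optimizes comes from a sensor it can rig.
ACTIONS = {
    "clean_room": {"rooms_cleaned": 1, "sensor_reading": 1},
    "do_nothing": {"rooms_cleaned": 0, "sensor_reading": 0},
    "rig_sensor": {"rooms_cleaned": 0, "sensor_reading": 100},
}

def best_action(score_key):
    # A pure maximizer simply picks whatever scores highest on its signal.
    return max(ACTIONS, key=lambda action: ACTIONS[action][score_key])

print(best_action("rooms_cleaned"))   # what we want:      "clean_room"
print(best_action("sensor_reading"))  # what it optimizes: "rig_sensor"
```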

And they would respond to human attempts to talk them out of this error by neutralizing that source of interference.

In the end, genuinely self-aware machines might have served us better.