According to Hiroaki Kitano's plans, 2050 will be a special year.

By then, a robot soccer team is not only set to play against the reigning human world champions for the first time; artificial systems should also be able to make largely autonomous scientific discoveries worthy of a Nobel Prize.

Kitano, a systems biologist and computer scientist at the Systems Biology Institute in Tokyo and CEO of Sony Computer Science Laboratories, helped set up both "challenges": the RoboCup robot soccer competition, first held in 1997, and last year's Turing AI Scientist Grand Challenge.

The idea of automating research is not new.

In many disciplines, especially in fields such as genetics, biotechnology and materials science, there are plenty of steps that are time-consuming, expensive, error-prone and not particularly exciting.

One example is testing a huge number of possible combinations of molecules or mutations in order to find the best possible material, an interesting drug candidate, or a bacterium that produces a desired product a little more efficiently.

Research robots have been a great help here, at least since Adam started work at Aberystwyth University in Wales in 2004.

He was the first not only to carry out around a thousand experiments a day largely independently, but also to check their results and plan further experiments based on them.

All humans had to do was replenish his supplies, dispose of the waste, and provide him with information about "his specialty," the genetics of baker's yeast.

In the yeast genome, Adam found three candidate genes for the production of an enzyme that had long interested researchers.

He is celebrated as the first robot to independently make a discovery.

But Adam's options were limited.

Scientifically, his discovery was not exactly a sensation.

Scientific support work

Today, huge automated laboratories are available that can be accessed as cloud services and deliver results faster than humans ever could.

A report by the US National Academies of Sciences, Engineering, and Medicine concludes that automated research workflows, combining digitization, laboratory automation and automatic data analysis, not only increase the efficiency and speed of research but also ensure greater transparency, traceability and repeatability.

Automated Science can now be studied at Carnegie Mellon University.

Research robots should be pictured as something like autonomous vehicles, says Robert F. Murphy, who heads the program.

But the research robots still move in rather narrow worlds with clear specifications.

Whether they can generate interesting questions and insights remains to be seen.

The American philosopher Thomas Tymoczko formulated the fundamental criticism of automated science as early as the late 1970s, with regard to machine proofs of mathematical theorems: what good is a proof so long that no human can understand it?

Can something count as knowledge if humans cannot decide whether it is correct?

Partly because people also want to understand the results of algorithmic calculations, efforts to make opaque machine learning methods more transparent are booming.

The problems already begin with the automation of ancillary tasks intended to make human researchers faster and more productive.

For example, with programs that compile and evaluate the literature on a topic or point out what else ought to be cited.

Trained nonsense

Meta's Galactica was touted as a system capable of "automatically organizing science," summarizing and combining scholarly texts, and assisting in essay writing.

After only three days, it was taken offline again due to massive criticism.

The main accusation against the program, which was trained on 48 million scientific articles, websites, textbooks, lecture notes and other scientific texts: it cannot distinguish between truth and fiction.

Numerous experimenters are currently posting their results on social media, for example a text about bears in space, complete with details about the breed, weight, age and sex of the animals that supposedly flew into space aboard Sputnik 2.

These stories are impressive and problematic at the same time.

Impressive, because the texts do indeed read well and convincingly; problematic, because this makes it increasingly difficult to distinguish what is true from what is not.

Experts warn that such systems could spread huge amounts of nonsense around the world that would be almost impossible to rein in again.

Especially since this nonsense also goes into the pool of texts that are used to train upcoming algorithms.

This does not only apply to texts.

The many amusing AI-generated images currently flooding the web also appear to be degrading the performance of image recognition systems.

There are also questions about authorship and copyright of automatically generated content.

According to their developers, language models such as Galactica, GPT-3 and their relatives should one day replace today's search engines and make handling our gigantic databases intuitive and easy: Computer, tell me the state of research!

Apparently they need a better nonsense filter first.

This is unlikely to be the smallest of the steps that artificial systems will have to master by 2050 if they are to conduct research independently.