On social media, scientists were quick to name the reasons: excessive bureaucracy, high pressure to publish, risk-averse funding and a research enterprise marked by existential anxiety - all of this, they argued, is to blame for the massive decline in the proportion of disruptive scientific results between 1945 and 2010.

That decline is the finding of an analysis recently published in "Nature" by American researchers, who examined 45 million publications and 3.9 million patents for their innovative strength.

To do so, they used a quantitative metric under which a study counts as particularly innovative if later work cites it more often than the studies it itself built on.

The underlying assumption, then: disruptive science sets new directions and renders its predecessors obsolete, so that later work cites the disruptive study rather than the research it built on.
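
To make that logic concrete, here is a minimal sketch in Python of such a citation-based disruption score, assuming a simplified CD-index-style formula; the identifiers and the exact weighting are illustrative and need not match the procedure used in the Nature analysis.

```python
from typing import Dict, Set

def disruption_score(focal_id: str,
                     focal_refs: Set[str],
                     later_papers: Dict[str, Set[str]]) -> float:
    """Toy disruption score for one focal paper.

    later_papers maps each later paper's ID to the set of IDs it cites.
    The score is positive when later work tends to cite the focal paper
    without its predecessors, negative when it keeps citing both.
    """
    only_focal = both = only_refs = 0
    for cited in later_papers.values():
        cites_focal = focal_id in cited
        cites_predecessor = bool(cited & focal_refs)
        if cites_focal and not cites_predecessor:
            only_focal += 1      # later paper builds on the focal work alone
        elif cites_focal and cites_predecessor:
            both += 1            # later paper still leans on the predecessors too
        elif cites_predecessor:
            only_refs += 1       # later paper bypasses the focal work entirely
    total = only_focal + both + only_refs
    return (only_focal - both) / total if total else 0.0


# Toy data: three later papers, two of which cite the focal paper "P0"
# without citing any of its references.
print(disruption_score(
    focal_id="P0",
    focal_refs={"R1", "R2"},
    later_papers={"A": {"P0"}, "B": {"P0"}, "C": {"P0", "R1"}},
))  # (2 - 1) / 3 ≈ 0.33, i.e. leaning disruptive
```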

Measured by this index, the share of innovative work declined markedly in every publication field considered – life sciences, physical sciences, social sciences and technology. Remarkably, the absolute number of innovative studies remained almost constant.

The linguistic diversity of the publications also decreased over time, as did the use of verbs associated with innovation - for the authors, two further indications of waning innovative power.

Of course, the work also speculates about reasons.

According to the authors, given how uniform the results are across fields, the causes cannot be subject-specific.

Nor can a widespread decline in quality be the reason, because the effect appears even when only the most respected journals are considered.

The researchers attempt to rule out changes in publishing and citation practices by varying their analysis.

The explanation the study favors instead: the exponential growth of scientific knowledge means that researchers often draw on only a narrowly bounded slice of that knowledge for their own work - and thereby limit their own potential for innovation.

"Relying on small slices of existing knowledge benefits individual careers but not scientific progress in general," the study states.

One can now ask to what extent such a finding calls into question the meaningfulness of the metric used in the study itself.

For if a study's citations depend not necessarily on its quality but on whether it happens to lie within the small, socially shaped slice of knowledge a researcher is familiar with, then a citation metric cannot measure innovation well.

The reactions to the study, on the other hand, are comparatively easy to interpret: dissatisfaction within the research community with its own enterprise, at least, appears to have grown significantly.