It almost feels silly to keep emphasizing this difference, but it is simply not the same thing: the forward-looking curves of the infection process that pandemic modelers produce are not predictions. They do not say what will be. They are scenarios. They say what could be, under certain assumptions and with all the uncertainties, ideally quantified, that those assumptions entail. Pointing out this apparent subtlety is especially necessary at the moment, now that the third wave has been broken and the numbers have been falling again since the end of April. As recently as March, modelers had warned that the third wave could be the worst. Did they fool the public with their pessimism? Are models of the pandemic ultimately useless?

This objection has been countered, first, with reference to the well-known prevention paradox: if a warning is heeded, the thing warned about does not occur, but its failure to occur does not prove that the warning was unjustified. Second, with an explanation of the role models play in this pandemic. Given the unavoidable uncertainties in any attempt to predict future infection dynamics, that role can only be to lay out a spectrum of possible courses between worst-case and best-case scenarios. A risk assessment must then be carried out on the basis of these scenarios, and it lies in the nature of such an assessment that special attention falls on the worst case.
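The idea of a scenario corridor can be sketched with a toy exponential-growth model. All numbers here (starting incidence, reproduction numbers, generation time) are invented for illustration and come from no published model:

```python
# Toy scenario corridor: project daily new infections forward under a
# range of plausible reproduction numbers R. All values are illustrative.

def project(cases_now, r_value, days, generation_time=4):
    """Project daily case counts, assuming R acts over one generation interval."""
    growth = r_value ** (1 / generation_time)  # implied daily growth factor
    return [cases_now * growth ** d for d in range(days + 1)]

best_case = project(10_000, r_value=0.8, days=28)   # e.g. restrictions work
worst_case = project(10_000, r_value=1.3, days=28)  # e.g. variant spreads

print(f"after 4 weeks: {best_case[-1]:.0f} to {worst_case[-1]:.0f} new cases/day")
```

The point of the sketch is how wide the corridor becomes after only four weeks: a modest-looking uncertainty in R fans out into an order-of-magnitude spread in outcomes, which is exactly why a single "prediction" would be misleading.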

The question of why the worst-case scenarios the models produced in March did not materialize is therefore perhaps less interesting than the question of why the uncertainties in modeling the infection process are so large. Shouldn't the models slowly be getting better? This question leads to at least two fascinating aspects: first, the interaction between the models and the very thing they study; and second, the extremely non-linear behavior of the modeled system.

The first point is already illustrated by the prevention paradox. The publication of a pessimistic scenario leads to a change in behavior in the population, which in turn invalidates some of the assumptions of the model calculation itself. A similar feedback mechanism hampered the study of the virus's seasonality last year: after scientists publicly speculated that higher temperatures, more humid air and stronger UV radiation would put the virus at a disadvantage in summer, politicians such as Donald Trump claimed that measures would no longer be necessary in summer. The careless behavior of large parts of the population then fueled the spread of the virus and, in turn, distorted the data available to the scientists.
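This feedback loop between published numbers and behavior can itself be sketched in a toy model: assume, purely hypothetically, that the population reduces contacts once reported cases approach some alarm threshold. A forecast that holds behavior fixed then overshoots the actual course:

```python
# Toy illustration of model-behavior feedback. The feedback strength and
# the alarm threshold are invented parameters, not empirical estimates.

def simulate(days, r0=1.4, feedback=0.0, threshold=20_000):
    cases = 10_000.0
    history = [cases]
    for _ in range(days):
        # Hypothetical behavioral feedback: effective R shrinks as case
        # numbers approach the threshold the public reacts to.
        r_eff = r0 * (1 - feedback * min(1.0, cases / threshold))
        cases *= r_eff ** 0.25  # assume a 4-day generation time
        history.append(cases)
    return history

no_reaction = simulate(28, feedback=0.0)    # model with frozen behavior
with_reaction = simulate(28, feedback=0.5)  # population reacts to numbers

print(f"fixed behavior: {no_reaction[-1]:.0f}, with feedback: {with_reaction[-1]:.0f}")
```

With the feedback switched on, the curve flattens well below the naive projection, even though nothing about the virus itself changed: the "wrong" forecast is partly a consequence of its own publication.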

Even after a year, the question of how strong the seasonality of the virus really is has not been fully resolved, because the virus's intrinsic properties, behavioral adaptation and the seasonal variation of the human immune system are so difficult to disentangle. Modelers complain again and again: if the behavior of the population could be predicted more reliably, the quality of the model calculations would take a huge leap.

The related point of non-linearity stems from the complexity of the modeled process: sometimes minimal changes in the input parameters cause massive qualitative changes in the behavior of the system. Austrian scientists have published an example of this in the journal Nature Communications: as soon as the number of new infections grows so large that they can no longer be adequately tested and traced, infections suddenly grow explosively. Viola Priesemann's group had already pointed this out. Such tipping points make predictions systematically harder, because in certain situations the system tolerates hardly any uncertainty in the input parameters. That means there are uncertainties that cannot be eliminated. But it also means that anyone who wants to criticize models should first try to understand how they work.
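The tipping-point mechanism can be made concrete with a toy sketch (not the published Nature Communications model; all parameters are invented): suppose contact tracing can handle only a fixed number of new cases per day, traced chains have an effective R below 1, and untraced chains have an R above 1. Two almost identical starting loads then end up in qualitatively different regimes:

```python
# Toy tipping-point model: tracing capacity caps how many daily cases can
# be followed up. Below a critical load, tracing keeps the blended R under
# 1; above it, untraced chains dominate and growth becomes explosive.
# All parameter values are invented for illustration.

def run(start_cases, days=60, capacity=5_000, r_traced=0.9, r_untraced=1.3):
    cases = float(start_cases)
    for _ in range(days):
        traced_share = min(1.0, capacity / cases)
        r_eff = traced_share * r_traced + (1 - traced_share) * r_untraced
        cases *= r_eff ** 0.25  # assume a 4-day generation time
    return cases

# With these numbers the blended R crosses 1 at roughly 6,667 daily cases
# (capacity / 0.75). Two nearby starting points straddle that threshold:
print(f"start 6,500 -> {run(6_500):.0f}")  # shrinks back under control
print(f"start 7,000 -> {run(7_000):.0f}")  # tips into explosive growth
```

This is what "low tolerance for uncertain input parameters" means in practice: near the threshold, an input error of a few percent decides between a declining and an exploding curve, so no amount of model refinement can fully tame the forecast there.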