As an environmental economist, I build my work, in a sense, upon the work of others: its foundations are provided mainly by ecology and related (sub-)disciplines such as conservation biology. However, while diving into some aspects of these disciplines and interacting with biologists who actually work in the field, I have realised that in many cases, reality is much messier than a superficial look into the respective literature might suggest.
Before I come to the actual issue, let me start with a little digression: even though the philosophy of science has made some progress since the contributions of Karl Raimund Popper, his basic ideas are still the foundation of modern science. In essence, science is about formulating and testing hypotheses (or, more precisely, formulating theories from which testable hypotheses can be derived, which are then tested). To test a hypothesis, one conducts experiments or uses observational data for quasi-experiments. Based on the results, the hypothesis is either falsified or retains its status as “true”. Of course, many philosophers of science have pointed out that the Popperian perspective is too idealised and that in the real world things are not that simple (see, e.g., the contributions of Thomas Kuhn, Imre Lakatos or the Duhem-Quine critique). Still, in essence, science works according to the main principles outlined by Popper. Science conducted in this way is then the main basis of our view of the world, and many theoretical constructs (such as economic theory) draw upon the insights it generates.
The problem is, however, that real science is often much messier than it might seem from this rather “academic” perspective. In my daily work as an environmental economist, I draw heavily upon ecological and, more generally, biological literature. When one reads these publications superficially, one can get the impression that everything is neat and more or less clear. All too often, however, it is not. Take, for example, the much-debated diversity-stability hypothesis, which in essence goes back to Charles Darwin: the more diverse an ecosystem is, the higher its stability (or, more generally, the better the ecosystem’s “functioning”). When it comes to testing this hypothesis, the first problem is that there is a huge number of competing interpretations of both key terms, (bio)diversity and ecosystem functioning. For simplicity, diversity is in most cases identified with species richness (of vascular plants or vertebrates, as those can be counted relatively easily [but see below]), while productivity is the proxy of choice for functioning (I leave aside the question of the multiplicity of ways productivity can be measured). The most influential experiments conducted to test the diversity-stability hypothesis to date have been controlled experiments involving “ecosystems” of a few species of grasses. Alternatively, some larger-scale experiments have been conducted, many of them investigating the changes in productivity that result from increasing species richness from one to two species [sic!]. The results of these experiments, which support the diversity-stability hypothesis, are then generalised to forests and other complex ecosystems. Economists and other non-biologist researchers use these generalised statements for further work.
Another example of how messy ecology can be is a seemingly “simple” class of studies: counting species in different places to compare their richness. This is often done by means of transects, “paths” along which observations are made, sometimes involving traps, sometimes just a person walking along the transect and counting what there is. The problems start with the transects themselves: if they are to be compared, they should clearly be of equal length, shape etc. This, however, is not always possible, especially in human settlements, which are an important study area. Another problem is that of “counting what there is”. The use of traps has the advantage that already-counted individuals can be marked (so as not to be counted twice) and also observed for further characteristics. However, traps are not always feasible. Then a person has to look and listen for the animals of interest, which can clearly be difficult and highly imprecise, especially in biologically rich environments such as tropical rainforests. You might hope for statistics to solve those problems for you, but it is not always possible to make really large numbers of observations (so as to give the law of large numbers the opportunity to do its job), either because one lacks the resources or because there is not enough to observe. Nonetheless, insights from such “species censuses” are an important basis for testing other hypotheses and theories (see above).
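To see why imperfect detection matters, consider a minimal toy simulation (a sketch in Python; the species count, detection probability and number of walks are all invented for illustration): if each species present along a transect is only detected with some probability on any given walk, the observed richness systematically underestimates the true richness, and the size of the gap depends on survey effort.

```python
import random

random.seed(42)

def observed_richness(n_species, detection_prob, n_walks):
    """Simulate transect walks where each species present is detected
    on each walk only with probability `detection_prob`; return the
    number of distinct species recorded across all walks."""
    seen = set()
    for _ in range(n_walks):
        for species in range(n_species):
            if random.random() < detection_prob:
                seen.add(species)
    return len(seen)

true_richness = 50  # hypothetical "true" number of species present

# Few walks with a low detection probability miss many species;
# more walks close (but never exceed) the gap to the true richness.
print(observed_richness(true_richness, 0.1, 3))
print(observed_richness(true_richness, 0.1, 50))
```

Two transects surveyed with different effort or in habitats with different detectability can thus yield different observed richness even if the true richness is identical.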
Recently, I attended a lecture in which Jonathan Chase, a US-American ecologist, explained the perils of diversity statistics given that we always have only samples of populations to work with. Depending on the choice of sampling method, diversity statistic etc., one can arrive at significantly differing results. This is another area of potential problems that even many of the ecologists involved are not really aware of, not to mention economists such as me who use those ecologists’ results as input in their own work.
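The sample-size dependence behind this warning can be illustrated with another toy sketch (again Python, with an entirely invented community of 5 common and 20 rare species): the species richness one observes depends strongly on how many individuals one happens to sample, because small samples miss most of the rare species.

```python
import random

random.seed(1)

# A hypothetical community: 5 common species (100 individuals each)
# and 20 rare species (2 individuals each); true richness is 25.
community = ([f"common_{i}" for i in range(5)] * 100
             + [f"rare_{i}" for i in range(20)] * 2)

def sampled_richness(community, n_individuals):
    """Observed species richness in a random sample of individuals."""
    sample = random.sample(community, n_individuals)
    return len(set(sample))

# A small sample tends to record far fewer species than a large one,
# even though both are drawn from the very same community.
print(sampled_richness(community, 20))
print(sampled_richness(community, 400))
```

Comparing raw richness counts across studies with different sampling intensities is therefore misleading; this is exactly why ecologists use corrections such as rarefaction, which standardise samples to a common size before comparing them.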
The take-home message from all this is: even though laypeople might get such an impression, most science is far from exact. Actually, it can be quite messy. This is not to say that science cannot be useful or that scientific results should not be trusted. But both scientists from other disciplines and the general public should keep in mind that these results should not be blindly taken at face value and that there are limits and uncertainties, many of which are likely insurmountable.