The Economic Value of a Statistical Life

A human life is worth $4 million to $9 million. At least according to an authoritative meta-analysis of economic studies that estimate the so-called “value of a statistical life”. This is one of the most controversial issues in modern economics and has met with extensive criticism. In particular, it has been argued that one cannot attach a price tag to the life of a human being. In what follows, I would like to argue that a) this criticism is largely based on a misconception of the estimates; b) economists have only themselves to blame that the misconception arose in the first place; and c) the calculation of the value of a statistical life is nevertheless not sensible, albeit for reasons different from the ethical ones commonly invoked against it.

How can the economic value of a (statistical) human life be calculated at all? The method used for that is called the hedonic wage method and has two variants: the mortality risk premium and the injury risk premium calculations. The general idea is actually rather simple: we live in a world full of things that might kill or seriously injure us. Thus, people invest in measures that decrease the probability of getting killed or injured. They do that, according to standard neoclassical economic theory, up to the point where the marginal cost of such investments equals the marginal benefit of another reduction in the probability of being killed. To arrive at the economic value of a statistical life, one simply has to identify such “life-prolonging investments” and compare their cost with the effect they have on the probability of getting killed or injured. In most cases, this is done via the calculation of a mortality risk premium on wages–it is assumed that if there are two jobs that differ only in the probability of dying at the workplace, the one where this probability is higher will pay better. The difference in wages can then be related to the difference in the probability of death–et voilà! Of course, in principle other investments that decrease the probability of death could also serve as the basis of such an analysis–say, the use of seat belts or bike helmets where they are not mandatory, dietary choices, smoking and, more generally, drug use, etc.
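To make the arithmetic concrete, here is a minimal sketch of the wage-premium calculation. The figures and variable names are purely illustrative inventions of mine, not taken from any particular study:

```python
# Illustrative calculation of the "value of a statistical life" (VSL)
# from a hypothetical wage premium. All numbers are made up.

# Suppose the riskier of two otherwise identical jobs carries an extra
# annual fatality risk of 1 in 10,000 and pays $900 more per year.
extra_fatality_risk = 1 / 10_000   # additional probability of dying per year
wage_premium = 900.0               # additional annual wage in dollars

# The implied VSL is simply the wage premium per unit of mortality risk.
vsl = wage_premium / extra_fatality_risk
print(f"Implied value of a statistical life: ${vsl:,.0f}")  # $9,000,000
```

Note that the result is a rate–dollars per unit of a small change in risk–which is exactly the point taken up below.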

The calculation of a statistical life’s value as sketched above is binary in a sense–there is only the simplistic distinction between death and non-death. Economists recognised that and came up with the idea of factoring in quality of life (in a rather narrow sense restricted to health)–this is what QALYs, Quality-Adjusted Life-Years, are about. I will not dwell upon this extension, as it is immaterial to my arguments, which concern the general idea of calculating the value of a statistical human life on the basis of (labour) market data. I will now turn to my analysis of the problem of a human life’s economic value.

As announced above, I would like to respond first to the critique that the calculation of the economic value of a statistical life leads to the morally deeply problematic conclusion that if I pay $9 million, I may kill somebody. This is wrong: scaling up the mortality risk premium, which is only meaningful for small changes in the probability of dying, is not permitted by the very economic theory that is criticised on the basis of such up-scaling:

The economic valuation of morbidity and mortality does not refer to the life or death of one specific individual. Such attempts at assessment would hardly lead to usable results since a certain person’s willingness to pay would usually amount to their entire assets if their life were at stake. Instead, the economic assessment of health and life risks always addresses the willingness to pay for a change in the probability of being hospitalised or dying. Hence, the valuation of mortality refers to the value of statistical life. (Hansjürgens 2004; emphasis mine)

Economic values (including market prices) are generally about marginal changes in the state or supply of a good or service. The up-scaling of the results of marginal analysis to a whole is nonsensical from a theoretical point of view–which, by the way, is the reason why the most widely cited estimate of the value of Earth’s biosphere (Costanza et al. 1997) is actually worthless. Even when we talk about the aggregate societal level, we should keep in mind that “the value of a statistical life” only represents the value of a change in the probability of dying. It cannot be translated into the lives of individual persons.
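To see why the up-scaling fails, consider a toy example of my own invention: if willingness to pay for risk reductions is strongly concave (and bounded by one’s assets, as the quotation above suggests), the marginal rate observed for tiny risk changes says nothing about the value of the whole risk. The curve and all numbers below are hypothetical:

```python
import math

# Toy illustration of why marginal valuations cannot be scaled up linearly.
# The willingness-to-pay (WTP) curve below is entirely hypothetical.

def wtp_for_risk_reduction(delta_p: float, wealth: float = 500_000.0) -> float:
    """Hypothetical concave WTP for reducing one's annual death risk by delta_p.

    For tiny delta_p it behaves roughly linearly (a local 'price' of risk),
    but it is bounded by total wealth: nobody can pay more than they have,
    no matter how large the risk reduction.
    """
    marginal_rate = 9_000_000.0  # dollars per unit of risk, valid only locally
    return wealth * (1 - math.exp(-marginal_rate * delta_p / wealth))

small = wtp_for_risk_reduction(0.0001)   # roughly $900 for a 1-in-10,000 reduction
scaled = 10_000 * small                  # naive linear up-scaling to certainty
whole = wtp_for_risk_reduction(1.0)      # "buying off" certain death

print(f"WTP for a 0.0001 risk reduction: ${small:,.0f}")
print(f"Naively scaled up to certainty:  ${scaled:,.0f}")
print(f"Actual WTP for the whole risk:   ${whole:,.0f} (capped by total wealth)")
```

The naive up-scaling yields roughly the famous $9 million; the actual willingness to pay for the whole risk is capped by the person’s entire assets, as the quotation says.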

Which leads us to the second point I wish to make here–that economists have only themselves to blame for the misconception discussed above. The term “value of a statistical life”, while quite catchy, invites misinterpretation. It is a classic case of bad framing, especially as the phrase has little analytical merit: in the end it is the probabilities that matter. A serious scientist should be able to anticipate the problems lurking when such catchy phrases are used, and remain factual (this is not to be understood as a general critique of a bit of “literariness” in scientific publications–indeed, if anything I would encourage scientists to use less “dry” language and make their publications readable; but there are cases where matter-of-fact dryness is better). What is really being calculated is the value of a small change (say, 1 per cent) in the probability of dying or getting injured. No more, no less.

Well, having tried to defend the calculation of the “value of a statistical life”, I would now like to present some arguments why it is not a sensible undertaking after all. My critique, however, is of a “technical” rather than a “moral” nature. It concerns three main issues: knowledge limitations (the economic term would be bounded rationality), the way we deal with uncertainty, and the multiple determinants of everyday-life decisions.

The first problem appears to be the most dire: when calculating the “value of a statistical life”, economists assume that people are aware of the risks they take when accepting a job and, especially, of the differences in risk between particular jobs. This is a very bold assumption. Even where such information is available, estimates often vary considerably, not least because it is virtually impossible to find two jobs that differ only in the probability of getting killed at the workplace. There are many other factors involved, and disentangling them is anything but trivial–especially as the differences in death probability are minute, so even small distortions may radically change the end result. Furthermore, common sense suggests that the probability of death is not among the things people pay much attention to. Just recently, I finally bought a bicycle helmet, after years of riding without one. Even though I am an economist and a rather rational person (also in the economic sense), I have no idea by how much the probability of a fatal incident decreases thanks to my purchase. I didn’t check. To infer that the 60 euros I paid for the helmet can be directly used to calculate my willingness to pay for a reduction in the probability of death would be absurd. Of course, one may object that my choice to buy the helmet was based on a subjective (“felt”) reduction in that probability, and that this subjective probability may be used in the calculation. The problem is that I don’t know it either. Even if I could name a number, it would most likely be far too imprecise to allow for sensible calculations.
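To illustrate how fragile such an inference is, here is a small sketch of the helmet example; the helmet price is from the text, but every risk figure is a guess of mine, invented purely for illustration:

```python
# How sensitive is an implied "value of a statistical life" to the (unknown)
# subjective risk reduction? Helmet price from the text; risk numbers invented.
helmet_price = 60.0  # euros

# Plausible-sounding guesses for the perceived reduction in annual fatality risk.
perceived_risk_reductions = [1 / 100_000, 1 / 500_000, 1 / 1_000_000, 1 / 5_000_000]

for delta_p in perceived_risk_reductions:
    implied_vsl = helmet_price / delta_p
    print(f"Assumed risk reduction {delta_p:.7f} -> implied VSL: {implied_vsl:,.0f} euros")

# The implied value ranges from 6 million to 300 million euros, depending only
# on which guess one plugs in -- none of which the buyer actually knows.
```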

The second problem I see here is at least potentially tractable (with an emphasis on “potentially”). It concerns the way people deal with uncertainty, or probabilities. I draw here mainly on the illuminating work of the famous psychologists Daniel Kahneman and Amos Tversky, and more concretely on the former’s book Thinking, Fast and Slow. The two have shown in decades of experimental research that, when it comes to dealing with uncertainty, we are anything but homines oeconomici. For example, we have an inherent tendency to vastly overestimate the probability of low-probability, negative outcomes. Furthermore, our evaluation of probabilities is influenced by many actually irrelevant factors–one such effect goes under the name of availability bias. If I read in the local newspaper that a cyclist recently died in a car–bike accident, this significantly increases my subjective estimate of the probability of dying when riding a bicycle and thus my propensity to finally buy a helmet. If, on the other hand, I cannot recall that anyone recently died in an accident at a construction site, I may be more willing to accept a job in the construction industry than if I were aware of (and able to properly interpret) the real mortality statistics. Of course, these effects can theoretically be accounted for. Theoretically. In practice it is not so easy to assess by how much we over- and underestimate probabilities, and to generalise from that.
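The over- and underweighting of probabilities can in principle be modelled: a standard example is the probability-weighting function from Tversky and Kahneman’s cumulative prospect theory (1992). The sketch below uses their functional form with the commonly cited parameter of roughly 0.61 for gains; the choice of example probabilities is mine:

```python
import math

def probability_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability-weighting function.

    Small probabilities receive too much decision weight, moderate-to-large
    ones too little. gamma = 0.61 is the value they estimated for gains.
    """
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in [0.0001, 0.001, 0.01, 0.1, 0.5, 0.9]:
    print(f"objective p = {p:<7} decision weight = {probability_weight(p):.4f}")

# A 1-in-10,000 risk gets a decision weight several times larger than 0.0001,
# which is one way to formalise the overestimation described above.
```

But estimating such a function reliably for a given population and decision context is exactly the part that is easier said than done.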

Last but not least, money and the probability of dying are not the only determinants of our everyday-life decisions. They are not even necessarily the most important ones. In buying the helmet, I also considered such issues as my responsibility as a father of small children (who have to wear helmets themselves); aesthetics (no matter how vain that might be); the inconvenience of having to go to a bike store and invest some time in choosing the right helmet; etc. Simply ignoring these additional determinants of my decision is clearly invalid–but properly including them in an analysis that aims at calculating the “value of a statistical life” is anything but trivial and perhaps even impossible.

To conclude: while I do not think that the common critique of the calculation of the economic value of a statistical life is really valid, I see obstacles to a proper calculation that most likely make the whole effort futile. The disconcerting side-effect is that this has repercussions for other, policy-relevant areas of economics, including, e.g., the application of integrated assessment models of climate change.
