Is aggregation across persons – making a decision that some people should bear losses so that others might gain more – ever permissible?
On the one hand, studies have shown that people faced with trolley problems would often sacrifice one person in order to save several others. Ticking bomb arguments – in which a single terrorist is tortured so that millions can be saved from an impending explosion – can also count on the agreement of large majorities.
On the other hand, an exchange like this one leaves most of us shocked:
Ignatieff: What that comes down to is saying that had the radiant tomorrow actually been created [in the USSR], the loss of fifteen, twenty million people might have been justified?

Hobsbawm: Yes.
The thinking in both cases is plainly utilitarian – that it’s acceptable to trade off the lives of some so that others can benefit. And yet our reactions to the two cases are polar opposites.
What can account for this difference? In the former examples, the good is obvious and immediate: lives are saved. The good that could – perhaps – have come from communist tyranny is vaguer, more uncertain and more remote. Trolley problems and the ticking time bomb scenario also present a balance between the good and the harm that supposedly must be done to achieve it, and that balance tilts clearly toward the good: typically just one person has to be sacrificed or tortured, not millions as in the case of communism.
So there’s a clear, immediate and widespread good that is supposed to come from the trolley sacrifice and the ticking time bomb torture (I say “supposed” because such hypotheticals don’t tend to occur in reality), and the harm required to achieve that good is limited. This suggests that we favor threshold deontology: things we normally aren’t supposed to do become permissible when the consequences of doing them are overwhelmingly good (or the consequences of not doing them are overwhelmingly bad). This theory differs from plain Hobsbawmian utilitarianism in that it is not a simple aggregation of good and bad across persons resulting in a choice for the best balance, no matter how small the margin of the good relative to the bad. A crude utilitarianism such as this does not agree with most people’s moral intuitions. Neither does dogmatic deontology, which imposes rules that must be respected no matter the consequences.
However, threshold deontology creates problems of its own, not least the determination of the exact threshold and the Sorites paradox: if removing a single grain of sand never turns a heap into a non-heap, then at what point does a heap from which grains are removed one by one stop being a heap?
The moral problems described here are relevant to the topic of this blog because the harm or the good that needs to be balanced is often a harm done to or a good done for human rights.
Other posts in this series are here.