There’s this difficult contradiction between two moral intuitions about human rights. On the one hand, we tend to feel very strongly about the extreme importance of a particular subset of human rights. Especially the right to life, the right not to be tortured and the right not to be enslaved are among those human rights which are so fundamental that their abrogation or limitation seems outrageous. Other rights, such as the right to free speech or the right to privacy, are hardly ever considered to be absolute in this sense – which doesn’t mean they are unimportant (something can be a very important value without being a moral absolute). Some restrictions on those rights are commonly accepted.
On the other hand, the horror provoked by the mere thought of limiting the right to life or the right not to suffer torture and slavery doesn’t change the fact that consistent pacifists are few. Most of us would decide not to submit to a Nazi invasion but to fight back. Hence, the horrific thought of abrogating the right to life does not stop us from conceiving of and actively engaging in killing. There’s also the Trolley Problem: experiments have shown that most people would sacrifice one person to save many. The same is the case in ticking bomb scenarios.
True, there are some consistent pacifists, as well as some who oppose torture under all circumstances, whatever the consequences of failing to kill or torture. But I’m pretty sure they are a small minority (which doesn’t mean they are wrong). The more common response to the conflicting intuitions described here is what has been called threshold deontology: faced with the possibility of catastrophic moral harm as the consequence of sticking to certain rules and rights (“catastrophic” meaning beyond a certain threshold of harm), people decide that those rules and rights should give way as a means to avoid the catastrophe.
Threshold deontology means that there are very strong and near-absolute moral rules, which should nevertheless give way when the consequences of sticking to them bring too much harm. Threshold deontology can also be called limited consequentialism: rules may not be broken whenever there’s a supposedly good reason to do so or whenever doing so would maximize or increase overall wellbeing; but consequentialism is the only viable meta-ethical rule to follow when consequences are catastrophically bad or astronomically good.
If this is correct, then why not simply adopt a plain form of consequentialism? Do whatever brings the most benefit, and screw moral absolutes – or, better, screw all moral rules apart from the rule that tells us to maximize good consequences. This solution, however, is just as unpopular as strict absolutism. We don’t torture someone in order to save two other people from torture; and we certainly don’t torture someone if doing so could bring a very small benefit to an extremely large number of people (so that the aggregate benefit from torture outweighs the harm done to the tortured individual).
Unfortunately, threshold deontology is not as easy an answer to the conflict of intuitions as the preceding outline may have suggested. The main problem, of course, is: where do we put the threshold? How many people should be saved in order to allow torture or killing? It turns out that there’s no non-arbitrary way of setting a threshold of bad consequences that unequivocally renders absolute rights non-absolute. At any point on the continuum of harm, there’s always a way to say that one point further on the continuum is also not enough to render absolute rights non-absolute. If we agree that killing or torturing one person for the sake of saving 5 is not allowed, then it’s hard to claim that 6 is a better number. And so on until infinity.
We can also think of the threshold in threshold deontology not in terms of the harm that would result from sticking to absolute principles, but in terms of the harm done by not sticking to them. The threshold then decides when we can no longer use bad actions in order to stop even worse consequences. For instance, we may verbally abuse the ticking bomb terrorist. Perhaps we can make him stand up for a certain time, or deprive him of sleep. At what moment should our near-absolute rules or rights against torture kick in? At the moment of waterboarding?
However, the same problem occurs here: a small increase in the harm done to the terrorist can always be seen as justifiable, as long as it is very small. Again, there’s no way of setting a threshold, because of the infinite regress that this provokes. Also, how should we evaluate the following case, imagined by Derek Parfit: a large number of people each inflict a small amount of harm on the terrorist, who is in immense pain as a result, and it’s impossible to tell whose infliction of harm has resulted in the pain threshold being passed. His absolute right not to be tortured is violated, but no one is responsible. This also sucks the power out of our moral absolutes.
Still, the problem of setting the threshold in marginal cases doesn’t mean that there are no clear-cut cases in which harmful consequences have clearly passed a catastrophic threshold. Nuclear annihilation caused by a ticking bomb is such a case, I guess. That’s a catastrophe that may be important enough to abrogate the near-absolute rights of one individual terrorist.
However, this means that threshold deontology is useful only in a handful of extreme cases, most of which will fortunately never occur. In the real world, beyond the philosophical hypothetical, most cases of harmful consequences don’t reach the “catastrophe” level. Hence, in the case of a number of rights, simple deontology is often the best system, at least compared to threshold deontology – which is most often irrelevant – and plain consequentialism – which would make a mockery of all rights and sacrifice them for the tiniest increment in wellbeing (see here). If we want to protect the right to life and the freedom from torture and slavery in day-to-day life, we might just as well pretend that they are absolute rights and forget about the catastrophic hypotheticals.
I should also note that although I rely here in part on the ticking bomb case in order to make some points which I believe to be important, the case in question is a very dangerous one: it has been abused as a justification for all sorts of torture, with or without a “ticking bomb”. (After all, once you can establish that torture is not an absolute prohibition in catastrophic cases, why would it remain a prohibition in less than catastrophic cases? See the difficulties described above related to the threshold in threshold deontology.) And not only has it been abused: one can question the practical relevance of the extremely unrealistic assumptions required to make the case work theoretically.
More about threshold deontology here.