The Ethics of Human Rights (89): Anti-Consequentialist Consequentialism

There are two words in “human rights”. “Rights” are claims that override the claims, wishes or welfare of a government, a majority, or even the totality of a population minus one. In other words, they are claims that need to be respected whatever other claims are present, such as the claims of law, morality, welfare, religion etc. Rights should be respected irrespective of the law of the land, of someone’s legal status, of someone’s religion, race, gender, citizenship, country of residence or moral conduct. That’s where the other word comes in: all “human” beings have rights and these rights should be respected simply because human beings are human. No other reason is required. No law, no conduct, no welfare consequences. These two words – “rights” and “human” – are connected: both are about priority, overriding importance and lack of conditionality.

This would seem to imply that human rights are the ultimate anti-consequentialist morality. We are not to enslave, torture or murder one person even if that would increase total welfare. Forcibly removing one eye each from a number of two-eyed people in order to give blind people one eye would clearly increase overall welfare, since the gains for the blind are greater than the losses for the others. And yet human rights prohibit coercive organ transplantation. However, it’s not entirely correct to view human rights as anti-consequentialist. Human rights are also, and somewhat paradoxically, consequentialist. In two ways:

  • First, the welfare of the majority or of the “society” can to some extent be defined as respect for human rights. Torturing one terrorist in order to discover and defuse a ticking time bomb would allow us to safeguard the right to life and bodily integrity of a large number of other people or even of society as a whole. Rights need to be balanced against each other, and when more rights or more important rights for a large number of people can be safeguarded by way of a violation of the rights of one, then that’s the result or the “consequence” we should favor over the alternative, which is protecting the rights of one to the detriment of the rights of many. The balance is clearly in favor of the many, and that’s a consequentialist calculus. (I have to say here that these are not, in practice, the only alternatives and ticking time bomb arguments are often very misleading. But as a theoretical example it will do. I have a separate discussion of the limits of this kind of calculus here).
  • Second, human rights are means to achieve some goods or values. We don’t have rights because it’s good to have rights. We have them because they have good consequences. I need a right to free speech because having free speech results in certain things that are good for me: knowledge, self-development etc.

There’s considerable tension between the consequentialist and anti-consequentialist strains in human rights. It’s a tough problem. I’ve tried to come up with ways to relieve this tension in some older posts.

More posts in this series are here.

The Ethics of Human Rights (84): Aggregation Across Persons

Is aggregation across persons – making a decision that some people should bear losses so that others might gain more – ever permissible?

On the one hand, studies have shown that people faced with trolley problems would often sacrifice one person in order to save several others. Ticking bomb arguments – in which a single terrorist is tortured so that millions can be saved from an impending explosion – can also count on the agreement of large majorities.

On the other hand, an exchange like this one leaves most of us shocked:

Ignatieff: What that comes down to is saying that had the radiant tomorrow actually been created [in the USSR], the loss of fifteen, twenty million people might have been justified?
Hobsbawm: Yes.

The thinking in both cases is plainly utilitarian – that it’s acceptable to trade off the lives of some so that others can benefit. And yet our reactions to the two cases are polar opposites.

What can account for this difference? In the former examples, the good is more obvious, namely the saving of lives. The good that could – perhaps – have come from communist tyranny is rather more vague, more uncertain and less immediate. Trolley problems and the ticking bomb scenario also present a balance between the good to be achieved and the harm that supposedly has to be done in order to achieve it, and that balance is more clearly in favor of the good: it’s typically just one person who has to be sacrificed or tortured, not millions as in the case of communism.

So there’s a clear, immediate and widespread good that is supposed to come from the trolley sacrifice and the ticking time bomb torture (I say “supposed” because the hypotheticals don’t tend to occur in reality), and the harm that needs to be done to achieve the good is limited. This suggests that we favor threshold deontology: things which we normally aren’t supposed to do are allowed when the consequences of doing them are overwhelmingly good (or the consequences of not doing them are overwhelmingly bad). This theory is different from plain Hobsbawmian utilitarianism in the sense that it’s not a simple aggregation of good and bad across persons resulting in a choice for the best balance, no matter how small the margin of the good relative to the bad. A crude utilitarianism such as this does not agree with most people’s moral intuitions. Neither does dogmatic deontologism, which imposes rules that have to be respected no matter the consequences.

However, threshold deontology creates its own problems, not the least of which are the determination of the exact threshold level and the Sorites paradox (suppose you have a heap of sand from which you remove grains one by one: when is it no longer a heap, given that removing a single grain never turns a heap into a non-heap?).
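To make the threshold worry concrete, here is a minimal sketch in Python (my own illustration, not part of the original argument), with a purely hypothetical threshold value. Read as a crude decision rule, threshold deontology flips its verdict between some number n and n+1, yet nothing in the theory explains why that one extra unit of harm should make the moral difference:

```python
def may_override_rule(lives_at_stake: int, threshold: int = 1000) -> bool:
    """Threshold deontology as a crude decision rule: a near-absolute
    prohibition may be set aside only when the harm avoided passes some
    catastrophic threshold. The threshold value here is arbitrary."""
    return lives_at_stake >= threshold

# The Sorites-style worry: the verdict flips between 999 and 1000,
# but no argument tells us why that single extra life matters.
for n in (998, 999, 1000, 1001):
    print(n, may_override_rule(n))
```

Whatever value we pick plays the role of the single grain of sand: adding or removing one never seems to make a difference, and yet at some point the verdict has changed.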

The moral problems described here are relevant to the topic of this blog because the harm or the good that needs to be balanced is often a harm done to or a good done for human rights.

Other posts in this series are here.

The Ethics of Human Rights (62): Human Rights Consequentialism

A few additional remarks following this previous post.

A really crude simplification would divide moral theories into two groups: deontological and consequentialist theories; or, in other words, theories that focus on duties and rights and theories that focus on good consequences. At first glance, human rights activists should adopt deontology. We have rights independently of the consequences of upholding them or not. Rights are strong claims by individuals against society and the state, claims that can’t just be put aside if doing so would yield better overall consequences. You can’t torture one individual even if this torture would cure millions of people of their chronic headaches.

Consequentialist theories, as opposed to deontological ones, usually do accept the sacrifice of a few – including their rights – for the benefit of many, or they accept a small sacrifice for a larger good even if only one individual profits from this larger good. A larger good can justify a smaller harm. And indeed, there are many circumstances in which violating the rights of some would deliver greater goods for many.

For example, closing down the Westboro Baptist Church would give many people some or even a lot of satisfaction while imposing serious harm on only a few (it’s a small band of crazies). The consequentialist calculus is likely to show that in this case the sum of satisfactions outweighs the sum of harms. The fact that the harm we’re talking about here is a violation of rights (free speech, freedom of association and freedom of religion for the church members) doesn’t count in the consequentialist calculus. A harm is a harm, and it’s the intensity, not the nature, of the harm that is important.

It’s not surprising that proponents of human rights have problems with this: human rights are important for everyone, but especially for minorities who risk being crushed by the interests of the majority.

It seems, therefore, that consequentialist reasoning is inimical to human rights. And yet, almost all if not all theories about human rights allow for some consequentialism. For example, there’s the case of catastrophic consequences. When faced with the possibility of catastrophic consequences it seems stupid and contrary to moral intuition to hold on to rights, no matter how dear these rights are to you in normal circumstances. The archetypical case is the ticking bomb.

Some proponents of human rights – and I’m one of them – go even further and justify rights on a consequentialist basis: rights are necessary because we need them to realize certain fundamental human values. And, in order to limit the consequentialist logic that would allow violations of rights for every tiny marginal good, we do four things:

  1. We claim that it is an empirically verifiable fact that human rights are among the best, if not the best, means to realize the values in question. This is true on average and, especially, in the long run. Hence, sacrificing rights in order to realize those values isn’t the best short-term or long-term strategy.
  2. Even if there are isolated cases in which the values in question are better served by other means – means other than human rights, and means that require setting aside or violating human rights – it’s still better to ignore those other means. If not, we will leave human rights with less authority and less force to produce good consequences in the future. Part of the force of human rights lies in their imperative and rule-like character. Setting them aside, even occasionally, because we think that’s necessary for certain goals, destroys their future power. They are not like antibiotics, whose power depends on their limited use. On the contrary: the more we use human rights, the more power they have, and hence the more effective they are in doing what they usually do best.
  3. We claim that the values protected and realized by human rights are among the most fundamental human values, if not the most fundamental. Hence, consequentialist reasoning will have a hard time coming up with more fundamental values that justify sacrificing human rights or the values protected by human rights.
  4. We claim that consequentialist reasoning has some theoretical limitations: for example, we may know in general what consequences tend to follow from certain principles such as human rights, but it’s much more difficult if not impossible to know the precise consequences of specific actions (especially the long term consequences). This is also true for actions that imply human rights violations. Hence, even if there are, in theory, better ways to realize the values normally realized by human rights, and even if there are, in theory, more fundamental values than those realized by human rights, we don’t know if our specific actions aimed at the realization of values do in fact produce those values. Hence, we have reasons not to engage in consequentialist calculations that imply violations of human rights.

More posts in this series are here.

The Ethics of Human Rights (60): Absolute Human Rights and Threshold Deontology

There’s this difficult contradiction between two moral intuitions about human rights. On the one hand, we tend to feel very strongly about the extreme importance of a particular subset of human rights. The right to life, the right not to be tortured and the right not to be enslaved, especially, are among those human rights which are so fundamental that their abrogation or limitation seems outrageous. Other rights, such as the right to free speech or the right to privacy, are hardly ever considered to be absolute in this sense – which doesn’t mean they are unimportant (something can be a very important value without being a moral absolute). Some restrictions on those rights are commonly accepted.

On the other hand, the horror provoked by the mere thought of limiting the right to life or the right not to suffer torture and slavery doesn’t change the fact that there are few consistent pacifists. Most of us would refuse to submit to a Nazi invasion and would fight back. Hence, the horrific thought of abrogating the right to life does not stop us from conceiving of and actively engaging in killing. There’s also the Trolley Problem: experiments have shown that most people would sacrifice one person to save many. The same is the case in ticking bomb scenarios.

True, there are some consistent pacifists, as well as some who oppose torture under all circumstances, whatever the consequences of failing to kill or torture. But I’m pretty sure they are a small minority (which doesn’t mean they are wrong). The more common response to the conflicting intuitions described here is what has been called threshold deontology: faced with the possibility of catastrophic moral harm that would result from sticking to certain rules and rights (catastrophic meaning beyond a certain threshold of harm), people decide that those rules and rights should give way as a means to avoid the catastrophe.

Threshold deontology means that there are very strong and near-absolute moral rules, which should nevertheless give way when the consequences of sticking to them bring too much harm. Threshold deontology can also be called limited consequentialism: rules may not be broken whenever there’s a supposedly good reason to do so or whenever doing so would maximize or increase overall wellbeing; but consequentialism is the only viable meta-ethical rule to follow when consequences are catastrophically bad or astronomically good.

If this is correct, then why not simply adopt a plain form of consequentialism? Do whatever brings the most benefit, and screw moral absolutes – or, better, screw all moral rules apart from the rule that tells us to maximize good consequences. This solution, however, is just as unpopular as strict absolutism. We don’t torture someone in order to save two other people from torture; and we certainly don’t torture someone if doing so could bring a very small benefit to an extremely large number of people (so that the aggregate benefit from torture outweighs the harm done to the tortured individual).

Unfortunately, threshold deontology is not as easy an answer to the conflict of intuitions as the preceding outline may have suggested. The main problem of course is: where do we put the threshold? How many people should be saved in order to allow torture or killing? It turns out that there’s no non-arbitrary way of setting a threshold of bad consequences that unequivocally renders absolute rights non-absolute. At any point in the continuum of harm, there’s always a way to say that one point further on the continuum is also not enough to render absolute rights non-absolute. If we agree that killing or torturing one person for the sake of saving 5 is not allowed, then it’s hard to claim that 6 is a better number. And so on until infinity.

We can also think of the threshold in threshold deontology not in terms of the harm that would result from sticking to absolute principles, but in terms of the harm done by not sticking to them. The threshold then decides when we can no longer use bad actions in order to stop even worse consequences. For instance, we may verbally abuse the ticking bomb terrorist. Perhaps we can make him stand up for a certain time, or deprive him of sleep. At what moment should our near-absolute rules or rights against torture kick in? At the moment of waterboarding?

However, the same problem occurs here: a small increase in the harm done to the terrorist can always be seen as justifiable, as long as it is very small. Again, there’s no way of setting a threshold because of the infinite regress this provokes. Also, how should we evaluate the following case, imagined by Derek Parfit: a large number of people each inflict a small amount of harm on the terrorist, who is in immense pain as a result, and it’s impossible to tell whose infliction of harm has pushed the pain past the threshold. His absolute right not to be tortured is violated, but no one is responsible. This also sucks the power out of our moral absolutes.

Still, the problem of setting the threshold in marginal cases doesn’t mean that there are no clear-cut cases in which harmful consequences have clearly passed a catastrophic threshold. Nuclear annihilation caused by a ticking bomb is such a case I guess. That’s a catastrophe that may be important enough to abrogate the near-absolute rights of one individual terrorist.

However, this means that threshold deontology is useful only in a handful of extreme cases, most of which will fortunately never occur. In the real world, beyond the philosophical hypothetical, most cases of harmful consequences don’t reach the “catastrophe” level. Hence, in the case of a number of rights simple deontology is often the best system, at least compared to threshold deontology – which is most often irrelevant – and plain consequentialism – which would make a mockery of all rights and sacrifice them for the tiniest increment in wellbeing (see here). If we want to protect the right to life and the freedom from torture and slavery in day to day life, we might just as well pretend that they are absolute rights and forget about the catastrophic hypotheticals.

I should also note that although I rely here in part on the ticking bomb case in order to make some points which I believe to be important, the case in question is a very dangerous one: it has been abused as a justification for all sorts of torture with or without a “ticking bomb”. (After all, once you can establish that torture is not an absolute prohibition in catastrophic cases, why would it then be a prohibition in less than catastrophic cases? See the difficulties described above related to the threshold in threshold deontology). And not only has it been abused: one can question the practical relevance of the extremely unrealistic assumptions required to make the case work theoretically.

More about threshold deontology here.

Terrorism and Human Rights (37): Torture is Social and Political Suicide

When democratic governments consider the option of torturing someone, the stakes are usually high. They won’t consider it just for some marginal benefit. The paradigmatic case is the ticking time bomb that’s about to kill thousands or even millions. Torture is supposed to be justified because the benefits are huge, or – stated negatively – because the possible harm resulting from a failure to torture is huge. Combining the size of what is at stake with the urgency of the threat makes the case for torture even stronger.

However, this justification of torture has some unsettling side effects. Given the urgency, and given the fact that terrorists are probably trained to withstand torture, a free society would have to

maintain a professional class of torturers, and to equip them with continuously-updated torture techniques and equipment. Grave dangers to democracy and to individual freedoms would be posed by an institutionalized professional “torture squad”. (source)

Such a highly trained and continuously available torture squad would be necessary to inflict torture that is likely to succeed in extracting the information reliably and within an extremely short time frame. It would also be necessary to inflict levels of pain sufficient to procure the victim’s compliance but insufficient to kill him or render him incapable of communication. Amateur thugs will not suffice. You really need professionals.

This is the institutionalization of torture. It’s difficult to see how a free society could survive the presence of such a torture squad. It would infect our entire society to know that there are people among us who torture for a living. The squad members themselves will most likely fail to remain well-intentioned, and the mere existence of such a squad corrupts morality in a society. It’s naive to think that the members of the torture squad will return to normality once their job is done and function like normal law-abiding and non-violent citizens in between emergency sessions. Torture destroys the democracy and the free society that decides to go down this road.

The Ethics of Human Rights (54): Torture, Consequentialism and Tainted Goods

Those who defend torture normally do so on consequentialist grounds. They posit cases such as the “ticking time bomb” in which the harm done by torture is insignificant compared to the good it does. The consequences of torture are clearly beneficial, overall: OK, it does some harm to an individual terrorist who has hidden the bomb but at the same time it saves thousands or millions of lives. When so many lives are at stake, a utilitarian calculus will clearly show that the good that will follow from torture outweighs the good that will follow from the refusal to torture.

Usually, we see a kind of threshold consequentialism rather than a pure consequentialism at work in such arguments: if torture would produce just one more unit of “utility” (wellbeing, life, etc.) than the refusal to torture, most of these consequentialists wouldn’t allow it. The good consequences of torture must far outweigh rather than marginally outweigh the harm it clearly does. Hence the hypotheticals in examples such as the ticking bomb, in which it’s posited that very many lives are at stake. We are allowed to supersede the deontological rule against torture only beyond a certain threshold of harmful consequences that would result from sticking to the rule. As someone has said, lost lives hurt a lot more than bent principles. Strict moral absolutism, whatever the possible consequences, can indeed land you in all sorts of problems.

However, let’s look a bit closer at this seemingly convincing argument. We can overlook some of the possible difficulties and still conclude that the argument is unsatisfactory. Let’s not dwell on the likelihood that in real cases the number of possible terrorist victims is rather small, while the number of people who have to be tortured is probably higher than one: you may need to torture several people before you find the one who has the necessary information about the location of the bomb; then you may need to torture his friends and family because he’s trained to resist torture and because he knows that if he resists for a short time, the bomb will go off. So let’s forget that the utilitarian calculus will most likely be less unequivocal than assumed in the argument above: we’ll never or only very rarely have cases in which torture produces a very small harm and at the same time a very large benefit. The harm and benefit will be much closer to each other.
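As a purely illustrative sketch in Python (my own addition; every number is a hypothetical assumption, not data), the point is simply that the margin shrinks once the calculus uses more realistic inputs:

```python
# All values are made up for illustration; only the narrowing margin matters.
def net_benefit(lives_saved: int, people_tortured: int,
                harm_per_person_tortured: float) -> float:
    """Crude utilitarian balance: value of lives saved (one unit per life)
    minus the aggregate harm inflicted by torture."""
    return lives_saved - people_tortured * harm_per_person_tortured

# Textbook ticking bomb: one person tortured, a million lives saved.
print(net_benefit(lives_saved=1_000_000, people_tortured=1,
                  harm_per_person_tortured=10))  # overwhelmingly positive

# A more realistic case: several suspects and relatives tortured, dozens saved.
print(net_benefit(lives_saved=50, people_tortured=4,
                  harm_per_person_tortured=10))  # a much thinner margin
```

Nothing in this sketch settles how to weigh torture against lives; it only shows that the “no-brainer” arithmetic of the hypothetical depends entirely on its inflated inputs.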

Let’s also not dwell on the fact that the greater-good thinking of the argument puts the torturer on the same footing as the terrorist. The latter also assumes that he fights for a greater good and that the harm he does is small compared to the benefits this harm will produce. The similarity between torturer and terrorist is all the more striking if the torturer has convinced himself that it’s necessary to torture the innocent (when the terrorist himself doesn’t speak fast enough). Putting ourselves on the same level as terrorists means giving up our identity to save ourselves, which really is pointless. If that is correct, then we have to remodel the utilitarian calculus: the harm done by self-destruction is probably greater than the suffering caused by exceptional terrorist attacks. So even the utilitarianism of the greater good doesn’t justify torture.

But let’s assume that none of this speaks against the standard consequentialist justification of torture and that we manage to use torture in a way that saves many, many lives, that doesn’t impose a high cost, and that doesn’t put us on the same level as the terrorists. So we can save ourselves, our identity and the lives of many of our fellow citizens. Still, the “good” that we achieve through torture is tainted by the methods necessary to achieve it. The notion, inherent in the consequentialist justification of torture, that certain goods can be attained by problematic means, is itself problematic. We can save ourselves, but once we are saved we believe that our success has been tainted by the immoral methods used to achieve it. We may not be willing to enjoy this success and the goods we have if they have been secured by way of torture.

Jeremy Waldron has interesting things to say about tainted goods. Read this for example.

Terrorism and Human Rights (36): There Are No Ticking Bomb Cases

The so-called ticking time bomb case is supposed to prove that there shouldn’t be an absolute ban on torture, and that torture is in some cases justified if it can help to prevent catastrophic harm. Maybe there shouldn’t be an absolute ban, but the ticking bomb case is the wrong way to prove it.

Just a brief reminder of what the ticking bomb case is about. Suppose a ticking bomb has been hidden in a densely populated area and will soon kill thousands or millions if not disarmed. The authorities have managed to capture a terrorist who has either hidden the bomb himself or knows where it has been hidden. (One can replace the “ticking bomb” with another, similar type of deadly device without changing the nature of the argument. The “ticking bomb” is in fact a “pars pro toto”, encompassing cases which do not necessarily involve an actual ticking bomb but which are nevertheless similar with respect to their circumstances and consequences).

The authorities are sure the captured person knows where the bomb is and how to disarm it, but the problem is that he obviously doesn’t want to reveal this information. However, the authorities are also pretty sure that he will do so under torture. There is no alternative way to extract the information, and simply evacuating people isn’t an option given the urgency and the lack of knowledge about the exact location of the bomb. Are we therefore not morally allowed to use torture in order to get the information and save numerous lives? Or, a somewhat stronger claim: are we not morally obliged to torture, given the enormous benefits for large numbers of people compared to the limited costs for the tortured individual?

Given the choice between inflicting a relatively small level of harm on a wrongdoer and saving an innocent person, it is verging on moral indecency to prefer the interests of the wrongdoer. (source)

The problem – if it is a problem – is that this thought experiment can’t justify torture. It can’t because it’s loaded with so many hypotheticals that the chances of a case like it occurring in real life are close to zero. People simply have to know too much and yet just – just – not enough. That state of affairs is very unlikely, as is the application of torture that is so effective that it delivers accurate information in a very short time frame (remember, the bomb is ticking…).

Hence, if we won’t see a case like it in real life, the thought experiment can’t justify real-life torture. At most it may be able to justify torture in theory. The purely theoretical nature of the whole affair is supported by the absence of ticking bomb cases in history. Some cases that are claimed to have been ticking bomb cases – such as the torture of Abdul Hakim Murad – were in fact, after closer examination, nothing of the kind. Murad only gave away his information after a month of torture, and even then it came as a surprise. He was not tortured because of an imminent threat: there was no such threat, and the torturers did not act on the assumption that there was.

In 1995, the police in the Philippines tortured Abdul Hakim Murad after finding a bomb-making factory in his apartment in Manila. They broke his ribs, burned him with cigarettes, forced water down his throat, then threatened to turn him over to the Israelis. Finally, from this withered and broken man came secrets of a terror plot to blow up 11 airliners, crash another into the headquarters of the Central Intelligence Agency and to assassinate the pope. … it took more than a month to break Mr. Murad and extract information – a delay that would have made it impossible to head off an imminent threat. (source)

I assume that all those who come up with the ticking bomb to justify torture want to use the case not to justify ticking bomb torture but other, more mundane forms of torture. After all, when you think you’ve managed to crack open the door a little bit – even theoretically – maybe it will swing wide open.

More about the ticking bomb case. More about torture.

The Ethics of Human Rights (50): Human Rights and Deontological Ethics

Compared to utilitarianism (see this previous post), deontological ethics – or deontology/deontologism for short – seems to be a moral theory that is much more amenable and receptive to human rights. Deontology, after all, focuses not on the consequences of actions but on the duties (“deon”) we have; and one man’s rights are another man’s duties. Deontology does not accept that good consequences override our duties; for example, we have a duty not to torture, even if torturing would yield certain beneficial consequences. The right thing to do is more important than increasing the good in society. And the right thing to do is to act according to a moral duty translated into a moral norm. So the right thing to do is typically encapsulated in rules, as is the case for human rights.

However, the choice of moral theory for a human rights activist isn’t as clear-cut as that. Deontology, and especially those forms of deontology that look a lot like moral absolutism – a big temptation for deontology in general – has been accused of accepting catastrophic outcomes for the sake of respect for rules and duties. The infamous Kantian rule not to lie to a murderer asking about the whereabouts of his intended victim comes to mind. And we’re not just talking about philosophers’ mind games. The oath taken by German soldiers to be loyal to Hitler was a case of an uncontroversial deontological rule – keep your promises – that arguably cost millions of lives. The power of duties and rules often borders on the absolute.

Of course, deontology doesn’t have to be moral absolutism. The existence of rules and duties doesn’t mean we must always respect them. So-called threshold deontology allows rules to be overridden once a certain level of bad consequences is reached. Beyond the threshold, deontology is replaced by consequentialism. Granted, the threshold of exception must be high, otherwise it would be futile to speak about rules and duties at all.

Such a deontological theory is not absolutist, and it is more in line with human rights. The system of human rights isn’t absolutist either. Rights can be limited, for example when different rights clash with each other. There are very few if any absolute human rights, i.e. rights which can never be violated. (The right to life is a possible candidate).

However, threshold deontological theories are not without problems either. First, even if we use thresholds, some duties and rules will still be strong enough to produce, in some cases, violations of human rights. And second, there’s the problem of the exact level of the threshold. It has to be high, but how high? Take the case of torture: threshold deontology would argue that torturing a “ticking time bomb terrorist” is an acceptable deviation from deontological rules and duties if the consequentialist gains are high enough, for example when this torture will allow us to save a large number of lives. But what is a large number? 100? 1000? And won’t we create perverse effects? (E.g. having a torturer put some more lives at stake as a means to legitimize his torture?). Deontology seems to collapse into consequentialism if it adopts a system of thresholds, because reducing the threshold value by 1 unit (one life in this case) never seems to invalidate the choice of suspending rules or duties. And yet, after a certain number of reductions by 1 unit, there’s no rule or duty left.

More posts in this series are here.

The Causes of Human Rights Violations (26): Are False Beliefs Useful For Human Rights?

I would say yes, but only some. For example, if we go around and successfully propagate the theory that wrongdoers will burn in hell, then this may have a beneficial effect because fear may inculcate morality (as all deterrence theories about crime have to assume). Similarly, false beliefs that overestimate the efficacy of law enforcement and the honesty of law enforcement officials also help.

Many false beliefs about high levels of risk can produce risk-averse behavior which in fact lowers the risk and makes it more likely that human rights are protected. For example, if people wrongly believe that their privacy is threatened in certain circumstances, they will take action to secure it and make it more secure than it already was. (More about human rights and risk here).

Human equality – “all men are created equal” – is obviously a false belief when taken as a fact, and in the quote it is taken as such. People are born with different abilities, talents, endowments, advantages etc. And yet we act as if the phrase is more than just a moral imperative. It seems like it’s easier to convince people to treat each other as equals when we say that they are equals.

Certain forms of self-deception also seem to be beneficial from the point of view of human rights:

Self-deception … may be psychologically or biologically programmed. The psychological evidence indicates that self-deceived individuals are happier than individuals who are not self-deceived. … Lack of self-deception, in fact, is a strong sign of depression. (The depressed are typically not self-deceived, except about their likelihood of escaping depression, which they underestimate.) Individuals who feel good about themselves, whether or not the facts merit this feeling, also tend to achieve more. They have more self-confidence, are more willing to take risks, and have an easier time commanding the loyalty of others. Self-deception also may protect against a tendency towards distraction. If individuals are geared towards a few major goals (such as food, status, and sex), self-deception may be an evolved defense mechanism against worries and distractions that might cause a loss of focus. Tyler Cowen (source)

We can claim that, to some extent, happiness, self-confidence, achievement and risk taking are indicators of and/or conditions for the use of human rights. Happy and confident people who are willing to take risks are more likely to engage in public discourse, to vote, to associate and to exercise their human rights in other ways. If that’s true, and if there’s a link between happiness, confidence and self-deception, then self-deception is another example of a falsehood that is beneficial to human rights.

I could go on, and I could also, very easily, list several counter-examples of falsehoods that are detrimental to human rights (take the 72 virgins for instance, or communism). The point I want to make is a different one: should we actively promote certain false beliefs because of their beneficial outcomes?

Most of us believe that there is something like a benevolent lie and that lying is the right thing to do in certain circumstances. A strict rule-based morality is hard to find these days. Few would go along with Kant, who said that we shouldn’t lie even when a murderer asks us about the whereabouts of his intended victim (“fiat justitia et pereat mundus”). People tend to think that the expected consequences of actions should to some extent influence actions and determine, again to some extent, the morality of actions (“to some extent” because another common moral intuition tells us that good consequences don’t excuse all types of actions; most of us wouldn’t accept the horrible torture of a terrorist’s baby in order to find the location of his bomb).

On the other hand, we should ask ourselves if such an enterprise, even if we deem it morally sound, is practically stable. Some false beliefs have proven to be vulnerable to scientific inquiry and public reasoning (hell could be one example). It’s not a good idea to build the system of human rights on such a weak and uncertain basis. But perhaps we should do whatever we can to promote respect for human rights, even if it’s not certain that our tactic is sustainable.

And yet, actively promoting falsehoods is in direct opposition to one of the main justifications of human rights, namely epistemological advances (I stated here what I mean by that). We would therefore be introducing a dangerous inconsistency in the system of human rights. We can’t at the same time promote the use of falsehoods and argue that we need human rights to improve thinking and knowledge. So we are then forced to promote the use of falsehoods in secret – which is necessary anyway because people will not believe falsehoods if we tell them that they are falsehoods – but thereby we introduce another inconsistency: human rights are, after all, about publicity and openness.

What Are Human Rights? (24): Absolute Rights?

One of the great puzzles in human rights theory is the possible existence of absolute rights. It’s commonly accepted that most if not all human rights are “relative” in the sense that they can be limited if their exercise results in harm done to other rights or to the rights of others. Freedom of speech for example doesn’t offer “absolute” protection for all kinds or instances of speech (see here).

If there are any human rights that do offer absolute protection without exception, the right to life, the right not to be tortured and the right not to suffer slavery would be good candidates. Whereas it seems quite reasonable to silence someone when he or she incites violence or hatred, it’s much harder to imagine cases in which it’s reasonable to kill, torture or enslave someone. I’ll focus here on the right to life.

How would you go about justifying the absolute nature of that right? First, you could claim that life is the supreme value. Life is indeed supreme in one sense of the word: it’s lexically prior as they say. It comes first. You can have life without freedom or equality, but not vice versa. (Of course, there are also other more or less promising ways to argue for life’s supremacy in the universe of moral values. I won’t go there now, and neither will I point to the fact that people often sacrifice their lives for a higher purpose. Let’s just assume for the sake of argument that the lexical priority of life suffices, in general, to ground life’s supremacy in the system of values).

If life is the supreme value, that means that no life can be sacrificed for an inferior value. You can’t go about killing poor or handicapped people for the sake of aggregate wellbeing. And neither can you execute criminals in an effort to deter future attacks on people’s security rights.

So life is then the supreme value in the sense that it can’t simply be traded against an inferior value. That already makes a lot of potential limitations of the right to life unacceptable, and the right to life therefore moves a significant distance towards absoluteness. However, if life is the supreme value, it’s still theoretically possible to trade the lives of a few for the lives of many others. So it is not life as such, as an aggregate or abstract concept, that needs to be the supreme value, but individual life. If individual life is the supreme value, the lives of some can’t be put on a scale to see if their sacrifice could protect a higher number of other lives. Robert Nozick gives the following example to make this point salient:

A mob rampaging through a part of town killing and burning will violate the rights of those living there. Therefore, someone might try to justify his punishing [i.e. killing] another he knows to be innocent of a crime that enraged a mob, on the grounds that punishing this innocent person would help to avoid even greater violations of rights by others, and so would lead to a minimum weighted score for rights violations in the society. Robert Nozick

So, if you accept the argument made so far, does this mean that you have established the absolute nature of the right to life and that this right therefore can never be limited? It would seem so. If life is the supreme value, it’s hard to find a reason to limit it, since this reason would then have to be a superior value. And if individual life is the supreme value, you can’t play a numbers game to conclude that the sacrifice of some is necessary in order to save a higher number of other lives.

However, categorical claims like this always seem to me to make things too easy. Something else is necessary. Take four cases in which lives are commonly sacrificed without universal or often even widespread condemnation:

  • individual self-defense
  • war as national self-defense
  • capital punishment and
  • the murder of a terrorist (and perhaps his hostages) about to kill many others (e.g. the shooting down of a commercial plane hijacked by terrorists and about to be used as a weapon).

In all these cases, the lives of some are sacrificed for the lives of others (assuming that capital punishment has a deterrent effect, which is probably not the case). If the right to life is really absolute, none of these actions would be morally or legally acceptable. In order to make them acceptable, there has to be something more than a mere quantitative benefit in terms of numbers of lives saved. I believe the sacrifice of life is acceptable if in doing so one doesn’t violate these three rules:

  • we should only sacrifice life in order to save life, and not in order to promote other values, and
  • we shouldn’t treat other people as means, and
  • we shouldn’t diminish the value of life.

In the case of one of the four actions cited above, namely capital punishment, we do treat other people as means and we diminish the value of life. Murderers are used as instruments to frighten future murderers. Capital punishment is supposedly intended to further respect for life, but in fact normalizes murder. (See here for a more detailed treatment of this issue). In the three other cases, we don’t necessarily use people as means or diminish the value of life. Hence these cases can be acceptable limitations of the right to life.

So the right to life is only quasi-absolute: limitations are possible but extremely rare because a number of very demanding conditions have to be met:

  • you can’t kill for the promotion of values different from life
  • you can’t generally count lives and kill people if thereby you can save more lives
  • and if you do want to kill in order to save lives, you have to do it in a manner that doesn’t instrumentalize human beings or diminish the value of life.

Terrorism and Human Rights (28): Torture and the Ticking Bomb

In 1987, a judicial commission of inquiry headed by former [Israeli] Supreme Court Justice Moshe Landau had reported that “moderate physical pressure” [by the Israeli General Security Services G.S.S.] was defensible in cases in which an interrogator “committed an act that was immediately necessary” to save lives from grave harm. Israeli human rights organizations had monitored G.S.S. interrogations and concluded that some eighty-five percent of Palestinians interrogated had been tortured – subjected to methods almost identical to those currently being used in American military detention – and questioned whether such an enormous percentage of detainees were indeed “ticking bombs”. If those being tortured were all “ticking bombs”, why, asked an Israeli human rights organization shortly before the Supreme Court hearing, did interrogators take weekends off? “The lethal bomb ticks away during the week, ceases, miraculously, on the weekend, and begins to tick again when the interrogators return from their day of rest.” (source)

Whatever you think about the persuasiveness of the ticking bomb argument in favor of torture, or even its relevance to actual cases of torture, it’s difficult not to see the risk of a slippery slope, especially given evidence like this.

Terrorism and Human Rights (25): A Theory of No Resort

In just war theory, the concept of “last resort” means that force, violence and other violations of human rights are allowed only after all peaceful and viable alternatives have been seriously tried and exhausted or are clearly impractical, and when force is clearly the only option. In the current “war on terror”, the use of torture is often justified as a last resort, as the only option available in certain circumstances such as the “ticking bomb”, to avoid an outcome that is worse than the use of the last resort.

There are many possible and convincing arguments against the use of torture, but one which isn’t mentioned a lot is the fact that justifications for torture emanate from a philosophy that sees risk as something to be completely overcome. Torture is justified as an extreme measure to overcome a last remaining and very small risk. That is evident from the ticking bomb case: the case itself is by definition rare, so the risk that it occurs is very small. Even smaller is the risk that we have to resort to the use of torture as a means to avoid the risk of the bomb going off (if, exceptionally, we find ourselves in a ticking bomb situation, other means short of torture may well allow us to avoid the risk).

This philosophy of using extreme measures to avoid or eliminate as much risk as possible is, I think, mistaken. If I’m right, the justification of torture as one such extreme measure is void. And don’t say I’m tilting at windmills here: this philosophy is omnipresent. Look at the swine flu hysteria for example, or the recent and silly airport and air travel security measures after the “Christmas Day Attack” (e.g. forcing passengers to remain seated during the last hour of flight). Maybe we need a theory of no resort rather than a theory of last resort. Maybe we should learn to live with the fact that bad things happen and that often we can’t do a thing about them.

Human Rights and Risk

Obviously, we all run the risk of having our rights violated. Depending on where you live in the world, this risk may be big or small. For some, the risk always remains a risk, and their rights are always respected. But that’s the exception. Many people live with a more or less permanent fear that their rights will be violated. This fear is based on their previous experiences with rights violations, and/or on what they see happening around them.

I see at least two interesting questions regarding this kind of risk:

  • Is, as Nozick argued, the risk or probability of a rights violation a rights violation in itself? Do people have a right not to fear possible rights violations?
  • And, to what extent does this risk of rights violations lead to rights violations?

The first question is the hardest one, I think. It seems that the risk of suffering rights violations is there all of the time, although it may be very small for some of us. If there is a right not to live with this risk, then this right would be violated all of the time. What good is a right that is perpetually violated?

However, it would seem that in some circumstances, where the probability that rights are violated is very high, people do indeed suffer. Imagine that you live in a society in which there is a high probability that you are arbitrarily arrested by the police. Even if you are not actually arrested – and your rights are therefore not violated – you are living in fear. It would seem that a right not to live in fear of rights violations does have some use in these high-risk environments.

But if we limit the right not to risk rights violations to situations in which there is a high probability of rights violations, we will have to decide on a threshold: at what level of probability does the right not to risk rights violations become effective? This means introducing arbitrariness.

And another problem: what if you don’t know about the risk? There may be at certain moments a high probability that your rights will be violated, but you don’t have to be aware of this. In that case, you don’t fear the rights violations, and hence there is no harm done to you. It’s difficult to conceive of a right when its violation doesn’t (always) cause harm of some kind, and hence the right not to risk rights violations seems impossible in this case.

The second question is more straightforward. Every day we see how the risk of rights violations leads to actual rights violations. The perception of risk, and people’s counter-strategies designed to limit the risk of rights violations, make them violate other rights. The war on terror is a classic example. Ticking bomb torture is another.

The objective of avoiding risk creates risks of its own, namely the risk that our actions designed to avoid risk will themselves cause harm. We may have to learn (again) how to live with risk.

Terrorism and Human Rights (8): Torture and the Ticking Bomb

If torture is the only means of obtaining the information necessary to prevent the detonation of a nuclear bomb in Times Square, torture should be used – and will be used – to obtain the information. … no one who doubts that this is the case should be in a position of responsibility. Richard Posner

During numerous public appearances since September 11, 2001, I have asked audiences for a show of hands as to how many would support the use of nonlethal torture in a ticking-bomb case. Virtually every hand is raised. Alan Dershowitz

People have come up with many arguments to justify torture, but the most famous one is the “ticking bomb argument”: suppose we capture a terrorist, and we know that he or she knows where the ticking bomb is hidden that will soon kill thousands or millions, or where and how another type of terrorist attack will take place. However, this person will only reveal the information under torture. Are we not allowed to use torture in order to get the information and save numerous lives? Are we not morally forced to torture, given the enormous benefits for large numbers of people compared to the limited costs for the tortured individual?

This argument is flawed, because it is based on a number of untenable assumptions:

Assumption 1: A real-life case

This seems to be a thought experiment rather than a real-life dilemma. The example of the captured terrorist with information about a ticking bomb is unlikely to happen in real life. Law enforcement officers or military and intelligence personnel usually do not arrest terrorists or accomplices before the terrorist act takes place (usually they make the arrests afterwards, and sometimes they don’t even manage to do that). We all know that most real cases of torture have absolutely nothing to do with the example given in the ticking bomb argument.

Assumption 2: Knowledge and knowledge about knowledge

But let’s assume that it does happen, and that one is, in exceptional cases, able to arrest someone before the terrorist act takes place. For the ticking bomb argument to be valid, we have to be positively sure that the terrorist or accomplice has the information required to stop the attack or explosion from taking place. How can we be sure about this? And if we’re not sure, can we start torturing this person in order to find out whether he or she has the information?

The latter would mean that we don’t just torture in order to get life saving information. We torture in order to know whether this person has or doesn’t have such information. It’s obvious that in this case we will torture many people who don’t have information. And if they don’t have information, we may be torturing innocent people, or at least people who, although accomplices, are not justifiable objects of torture since the argument is that torture is justified because it is necessary to obtain life saving information. These people don’t have such information, and hence their torture isn’t justified. Some other justification is required in order to be able to use torture on people who do not obviously and undoubtedly possess life saving information. This seems to fall outside the ticking bomb argument, an argument which is therefore incomplete.

And, by the way, torturing people in order to find out if they have information is the worst kind of torture: since many of them don’t know anything, they will be subjected to the longest and harshest forms of torture.

Assumption 3: It works

Again, let’s assume that all of the above is irrelevant, that we do hold someone who has vital information, that we know for certain that he or she has this information, and that we didn’t have to use torture to be certain. These are already a lot of assumptions, but a further assumption of the ticking bomb argument is that torture is an efficient tool to extract reliable information. We all know that it isn’t (see here). People who are tortured will say anything in order to make it stop.

And what if torturing the terrorist doesn’t make him or her speak? In that case, the ticking bomb argument also justifies torturing the terrorist’s family and children (a kind of indirect torture aimed at “convincing” the terrorist to give information). If torturing him or her is insufficient, then further options are equally justifiable. The cost-benefit analysis on which the ticking bomb argument is based justifies torturing the family. The guilt or innocence of the family, or of anybody else who is tortured, is irrelevant. What counts is that the cost of torture doesn’t outweigh the good it does, i.e. the number of lives it saves.

But this raises the question: how many lives have to be saved if the cost of torture is to be acceptable? A million? 10,000? 10? … Difficult to tell in borderline cases, but then the answer would be that at least it’s clear once we get into the really big numbers. Torturing even a few dozen people in order to save a million is a “no-brainer” (in the words of former Vice-President Cheney). The reality, however, is that most terrorist attacks do not kill millions or even thousands.

Assumption 4: No alternative

Again, let’s accept all the above assumptions, for the sake of argument. One of the supposedly strong points of the ticking bomb argument is the lack of an alternative to torture. There seems to be nothing else one can do. But there is something wrong with the timing in the argument:

On the one hand, to represent some type of ticking bomb scenario, the timing of attack must be far enough in the future that there is a realistic chance of doing something to stop it. On the other hand, if it is so far off in the future that the loss of life can be prevented in some other way (evacuation, for instance) then the supposed “need” for torture simply disappears. (source)

Assumption 5: Exceptional

Given the urgency in the example of the ticking bomb, and given the fact that terrorists are often trained to withstand torture, a free society would have to

maintain a professional class of torturers, and to equip them with continuously-updated torture techniques and equipment. Grave dangers to democracy and to individual freedoms would be posed by an institutionalized professional “torture squad”. (source)

Torture corrupts people, and it is not farfetched to assume that a “torture squad” would infect an entire society. The squad members themselves will not remain well-intentioned, and the mere existence of such a squad corrupts morality in a society. This shows that torture in the ticking bomb argument starts as an exception but tends toward institutionalization.

Assumption 6: The Greater Good

It’s not obvious that the rights of one person can be sacrificed for the benefit and rights of others. Once you start this kind of trade-off, you will quickly find yourself in a world in which it is allowed to “break some eggs if you want to make an omelet”. Terrorists also assume that they fight for a greater good and that they are allowed to sacrifice some in order to save others. Torture then puts the torturers on the same level as the terrorists.

What motivates the ticking bomb argument?

It’s not difficult to see some of the underlying motives of those using the argument. It seems to me that the dramatic force, moral clarity and simplicity of the example, even if it is very unrealistic and far removed from the much murkier and more complex cases that confront us in reality, can be used by those who are in favor of torture in order to open the door and make some cracks in what is still, for many, a moral absolute (similar to the prohibitions of slavery and genocide).

The United Nations Convention Against Torture, which took on the force of federal law in the U.S. when it was ratified by the Senate in 1994, specifies that

No exceptional circumstances whatsoever, whether a state of war or a threat of war, internal political instability or any other public emergency, may be invoked as a justification of torture.

The ticking bomb argument is intended to show that an absolute ban on torture is unwise and ultimately detrimental to the survival of a free society. Opponents of torture are labeled moral absolutists, unwilling to confront the darker sides of reality and isolated from the tough problems that people in the field have to deal with. By making it impossible to “deal” with these tough problems, absolutists endanger the nation.

Once the absolute is broken, and some forms of torture are allowed in some circumstances – and even deemed necessary if we want to protect freedom – then those who fight for democracy, and even for the right of people to express their opposition to torture, are able to do their jobs and get their hands dirty.

The torturer becomes the patriot; those defending the moral values of a nation are ivory tower intellectuals unaware of the realities of life and de facto allies of the terrorists. It’s not the example of the ticking bomb that is simplistic; it’s the moral absolutism that obscures the complex choices of real-life anti-terrorism.

The obvious objection to breaking the absolute is of course the slippery slope. I mentioned above that the ticking bomb argument would allow torturing many more people than just the captured terrorist holding vital information.

Terrorism and Human Rights (3): Torture

A few words on the infamous “ticking bomb argument” in favor of torture: suppose we capture a terrorist, and we know that he knows where the ticking bomb is hidden that will soon kill thousands or millions. Are we not allowed to torture him in order to get the information which can save these people? Are we not morally forced to torture him?

I don’t believe this simple cost-benefit analysis (a low cost in torture for a few individuals compared to enormous gains for large, threatened groups) is a realistic description of torture, given the facts that

  • the example of the captured terrorist with information about a ticking bomb is unlikely to happen in real life, and
  • most actual torture cases are very different.

One should also consider the consequences of allowing torture, even in extreme cases:

  • instilling a certain mentality in the minds of the torturers, sometimes destroying their mental health
  • the damage to judicial and democratic institutions
  • the reciprocity of our enemies (they will also use torture if torture is inflicted on them) …