The Causes of Human Rights Violations (46): Justificational Reasoning

People act in all sorts of dubious ways, but they often justify their behavior after the event using imaginary motives or reasons, and then they come to believe those motives and reasons themselves. It’s a kind of cognitive failure and self-deception which, when it occurs in the field of human rights, makes it hard to do something about rights violations. If people can’t even admit to themselves what the real reasons are for their bad behavior – rights violations in this case – then it becomes very difficult – for others and for themselves – to do something about those reasons and to prevent future occurrences of the behavior.

As long as we – and, as a result, others as well – believe that we were motivated by ethical justifications which we in fact constructed and invented after the fact, rather than by the often more suspicious justifications that really drove our actions, we have fewer reasons to avoid those actions in the future. True, well-intentioned rights violations do exist, and good intentions don’t make remedies more difficult – often it’s enough that we become aware of the possible dangers of good intentions. But when bad intentions masquerade as good ones, even to the person having the intentions, things become more difficult. It’s always good to know the exact and true causes of something if you want to avoid it in the future. When people really know what motivates them but choose to present themselves in another way – perhaps because of shame – we can still try to pierce their cover. But when people fool even themselves, there’s very little we can do. And it seems that people are indeed frequently unaware of the real causes of their own behavior.

An innocent example of justificational reasoning to begin with:

[M]ale students [were asked] to choose between two specially created sports magazines. One had more articles, but the other featured more sports. When a participant was asked to rate a magazine, one of two magazines happened to be a special swimsuit issue, featuring beautiful women in bikinis.

When the swimsuit issue was the magazine with more articles, the guys said they valued having more articles to read and chose that one. When the bikini babes appeared in the publication with more sports, they said wider coverage was more important and chose that issue. (source, source)

A more harmful case from another experiment:

Managers … have been found to favour male applicants at hypothetical job interviews by claiming that they were searching for a candidate with either greater education or greater experience, depending on the attribute with which the man could trump the woman. (source)

And it’s not hard to imagine the same thing going on in even more harmful actions.

There’s a kind of cognitive dissonance behind justificational reasoning: we want others to think that we are good people, and we want to think of ourselves this way. When the facts contradict this belief, we change the facts.

Other posts in this series are here.

The Causes of Human Rights Violations (32): The Just World Fallacy

Here’s another psychological bias that causes human rights violations to persist: the just world fallacy.

It seems that we want to believe that the world is fundamentally just. This strong desire causes us to rationalize injustices that we can’t otherwise explain: for example, we look for things that the victim might have done to deserve the injustice. The culture of poverty is a prime example, as is the “she asked for it” explanation of rape. This fallacy or bias is obviously detrimental to the struggle against human rights violations, since it obscures the real causes of those violations. The belief in a just world makes it difficult to make the world more just.

And even if its effect on human rights was neutral or positive, the fallacy would be detrimental in other ways: it doesn’t help our understanding of the world to deny that many of those who are lucky and who are treated justly haven’t done anything to deserve it, or that many of those who inflict injustices get away with it. The prevalence of the fallacy can be observed in popular culture, in which the villain always gets what he or she deserves; the implication is that those who “get” something, also deserve it.

Psychologists have come up with different possible explanations of the just world fallacy. It may be a way of protecting ourselves: if injustices are generally the responsibility of the victims themselves, then we may be safe as long as we avoid making the mistakes they made. The bias lessens our vulnerability, or rather our feeling of vulnerability, and therefore makes us feel better. Another explanation focuses on the anxiety and alienation that come with the realization that we live in a world rife with unexplained, unexplainable and unsolvable injustices. The fallacy is then akin to religious teachings about the afterlife, which are sometimes viewed as mechanisms for coping with the anxiety and alienation caused by mortality. Melvin Lerner explains the just world fallacy as a form of cognitive dissonance:

the sight of an innocent person suffering without possibility of reward or compensation motivated people to devalue the attractiveness of the victim in order to bring about a more appropriate fit between her fate and her character. (source)

All this argues against making desert central to our theories of justice: if desert is difficult to determine because of the biases involved, then surely it can’t be a good basis for a theory of justice.

An interesting aside: it seems that the opposite bias also exists. The so-called “mean world syndrome” is a term coined by George Gerbner to describe a phenomenon whereby violent content in mass media makes viewers believe that the world is more dangerous than it actually is. Indeed, perceptions of violence and criminality often do not correspond to real levels. People who consume a large amount of violent media or who often read the crime sections of sensationalist newspapers tend to overestimate the prevalence of violence and crime.

More on the possible causes of rights violations here.

The Causes of Human Rights Violations (27): Harmful Moral Judgments

Human rights violations have many possible causes, but it’s reasonable to assume that a lot of them are caused by some of the moral convictions of the violators. For example:

  • One of the reasons why people engage in female genital mutilation (FGM) is the fear that women who are left uncut won’t be able to restrain their sexuality.
  • Discrimination against homosexuals is often based on the belief that homosexuality is immoral.
  • The death penalty is believed to limit the occurrence of violent crime.
  • Etc. etc.

The rational approach

It follows that if we want to stop rights violations, we’ll have to change people’s moral convictions. How do we do that? The standard answer is moral persuasion based on moral theory (in most cases, this will be some kind of intercultural dialogue). This is basically a philosophical enterprise. We argue that some things which people believe to be moral are in fact immoral. For example, we could use the Golden Rule to argue with men who support FGM that FGM is wrong (and the Golden Rule is present in all major traditions: Confucianism, Islam, Hinduism, Buddhism, Taoism, etc.). We could argue that the consequentialism used in the defense of capital punishment is in fact an instrumentalization of people and doesn’t take seriously the separateness of individuals.

You can already see the obvious difficulty here: this approach appeals to concepts that are strange and unfamiliar to many, perhaps a bit too esoteric, and therefore also unconvincing. They may appeal to people who regularly engage in philosophical and moral discussions, but those people tend not to be practitioners of FGM, oppressors of homosexuals, etc.

That is why another approach, which you could call the internal approach, is perhaps more successful: instead of using abstract philosophical reasoning, we can try to clarify people’s traditions to them. FGM is often believed to be a practice required by Islam, whereas in reality this is not the case. There’s nothing in the Koran about it. Authority figures within each culture can play a key role here. One limit of this approach is that many cultures don’t have the resources necessary for this kind of exegesis or reinterpretation, at least not in all cases of morality-based rights violations.

One way to overcome this limitation is to dig for the “deep resources”. We can point to some very basic moral convictions that are globally shared but not translated in the same way into precise moral rules across different cultures. For example, killing is universally believed to be wrong, but different cultures provide different exceptions: some cultures still accept capital punishment, others still accept honor killings etc. One could argue that some of these exceptions aren’t really exceptions to the ground rule but are in reality unacceptable violations of it.

The emotional approach

The problem with all these approaches is that they are invariably based on a belief in rationality: it’s assumed that if you argue with people and explain things to them, they will change their harmful moral judgments. In practice, however, we see that many ingrained moral beliefs are very resistant to rational debate, even to internal debate within a tradition. One of the reasons for this resistance, according to moral psychology, is that moral judgment is not the result of reasoning but rather a “gut reaction” based on emotions such as empathy or disgust (which have perhaps biologically evolved). (This theory goes back to David Hume, who believed that moral reasons are “the slave of the passions”, and is compatible with the discovery that very young children and even primates have a sense of morality – see the work of Frans de Waal for instance).

Indeed, tests have shown that moral judgments are simply too fast to be reasoned judgments of specific cases based on sets of basic principles, rules of logic and facts, and that they take place in the emotional parts of the brain. This emotional take on morality also corresponds to the phenomenon of “moral dumbfounding” (Jonathan Haidt‘s phrase): when people are asked to explain why they believe something is wrong, they usually can’t come up with anything more than “I just know it’s wrong!”.

If all this is true, then reasoned arguments about morality are mostly post-hoc justifications for gut reactions and therefore not something that can change gut reactions. The rational approach described above is then a non-starter. However, I don’t think it has to be true, or at least not always. I believe moral psychology underestimates the role of debate and internal reflection, but I also think that in many cases and for many people it is true, unfortunately. And that fact limits the importance of enhanced debate as a tool to modify harmful moral judgments. But the same fact opens up another avenue for change. If moral judgments are reactions based on emotions, we can change judgments by changing emotions. And the claim that our moral emotions have evolved biologically doesn’t imply that they can’t change. The fact is that they change all the time. Slavery was believed to be moral, some centuries ago, and did not generally evoke emotions like disgust. If the moral approval of slavery was a gut reaction based on biologically evolved emotions, then either these emotions or the gut reaction to them has changed.

The most famous example of the emotional approach is Richard Rorty’s insistence on the importance of telling sentimental stories like “Uncle Tom’s Cabin” or “Roots”. Such stories, but also non-narrative political art, make the audience sympathize with persons whose rights are violated, because they invite the audience to imagine what it is like to be in the victim’s position.

The problem with the emotional approach is that it can just as easily be used to instill and fortify harmful moral judgments, or even immoral judgments.

Both emotional and rational processes are relevant to moral change, and when the rational processes turn out to be insufficient, as they undoubtedly are in many cases (especially the cases in which change is most urgent), we’ll have to turn to the emotional ones. (The emotional approach can be very useful in early internalization. Early childhood is probably the best time to try to change a society’s “gut reactions”).

The diversity approach

Apart from the rational or emotional approach, there’s also the diversity approach: put people in situations of moral or cultural diversity, and harmful moral judgments will, to some extent, disappear automatically. People’s morality does indeed change through widened contact with groups who have other moral opinions. And widened contact is typical of our age in which travel, migration, trade and political and economic interdependence are more common than ever. This automatic change can happen in several ways:

  • In a setting of social diversity, people see that a certain practice which they believe is immoral doesn’t really have the disastrous consequences they feared it would have. For example, when you see that people who haven’t endured FGM usually don’t live sexually depraved lives, you may modify your moral judgment about FGM. Some moral beliefs are based on factual mistakes. If we point to the facts, or better let people experience the facts, they may adapt their mistaken moral judgments in light of those facts.
  • When people live among other people who have radically different moral beliefs or practices, they can learn to accept these other people because they see that they are decent people, notwithstanding their erroneous moral beliefs or practices. This kind of experience doesn’t necessarily change people’s harmful moral judgments, but at least makes these people more tolerant and less inclined to persecute or oppress others.
  • Tolerance is generally a wise option in diverse societies, even from a selfish perspective: intolerance in a diverse society in which no single group is an outright majority can lead to strife and conflict, and even violence. So all groups in such a society have an interest in being tolerant. Tolerance in itself does not cause people to reconsider their harmful moral judgments, but it at least removes the sharp edges from those judgments. However, tolerance can, ultimately, produce change: if you treat others with respect, they are more likely to think that you have a point. Hence, they’re more likely to be convinced by your arguments that their moral judgments are harmful.
  • People can get used to things. Being exposed to different and seemingly immoral beliefs or practices can render people’s moral judgments less pronounced and therefore less dangerous.
  • Also,

When we are required to confront things that bother us we sometimes (often?) reduce cognitive dissonance by changing our preferences so that we are no longer bothered. Thus [we should] encourag[e] the intolerable to come forward, thereby forcing the intolerant to reduce cognitive dissonance by accepting what was formerly intolerable. (source)

Of course, this “contact-hypothesis” or “diversity-hypothesis” doesn’t explain all moral change. For example, it’s hard to argue that the abolition of slavery in the U.S. came about through increased social diversity.

Perhaps there are cases when we shouldn’t do anything. People can become more attached to harmful moral convictions when their group is faced with outsiders telling them how awful their convictions and practices are, especially when the group is colonized or when its members are a (recent) minority (e.g. immigrants). In order to avoid such a counter-reaction, it’s often best to leave people alone and hope for the automatic transformations brought about by life in diversity. However, that’s likely to be very risky in some cases. A lot of people can suffer while we wait for change. Also, one might as well argue that the use of force to change certain practices based on harmful moral judgments will, in time, also change those moral judgments: if people are forced to abandon FGM, maybe they’ll come to understand, over time, why FGM is wrong.

Income Inequality (23): U.S. Public Opinion on Income Inequality

Despite what foreigners usually believe about the U.S., and despite the confused ramblings of a tiny group of anti-“socialist” loudmouths high on tea, U.S. public opinion is actually very egalitarian:

Americans are in broad agreement on the need for a more equal distribution of wealth. … that’s what a forthcoming study by two psychologists, Dan Ariely of Duke University and Michael I. Norton of Harvard Business School, has concluded. First, Ariely and Norton asked thousands of Americans what they thought the nation’s actual wealth distribution looks like: how much is owned by the wealthiest 20 percent of the population, the next-wealthiest 20 percent, and on down. The researchers then asked people what, in an ideal world, they would like the nation’s wealth distribution to be.

Ariely and Norton found that Americans think they live in a far more equal country than they in fact do. On average, those surveyed estimated that the wealthiest 20 percent of Americans own 59 percent of the nation’s wealth; in reality the top quintile owns around 84 percent. The respondents further estimated that the poorest 20 percent own 3.7 percent, when in reality they own 0.1 percent.

And when asked to give their ideal distribution, they described, on average, a nation where the wealth distribution looks not like the U.S. but like Sweden, only more so—the wealthiest quintile would control just 32 percent of the wealth, the poorest just over 10 percent. “People dramatically underestimated the extent of wealth inequality in the U.S.,” says Ariely. “And they wanted it to be even more equal.” (source)