Human rights violations can themselves make it difficult to measure human rights violations, and can distort international comparisons of the levels of respect for human rights. Country A, which is generally open and accessible and on average respects basic rights such as speech, movement and press fairly well, may be more in the spotlight of human rights groups than country B, which is borderline totalitarian. And not just more in the spotlight: attempts to quantify or measure respect for human rights may in fact yield a score that is worse for A than for B, or at least a score that isn’t much better for A than for B. The reason is of course the openness of A:
- Human rights groups, researchers and statisticians can move and speak relatively freely in A.
- The citizens of A aren’t scared shitless by their government and will speak to outsiders.
- Country A may even have fostered a culture of public discourse, to some extent. Perhaps its citizens are also better educated and better able to analyze political conditions.
- As Tocqueville famously argued, the more a society liberates itself from inequalities, the harder the remaining inequalities become to bear. Conversely, people in country B may not know any better, or may have adapted their ambitions to the rule of oppression. So citizens of A have better access to human rights groups to voice their complaints, aren’t afraid to do so, can do so because they are relatively well educated, and will do so because their circumstances seem more outrageous to them even if they really aren’t. This is another reason to overestimate rights violations in A and underestimate them in B.
- The government administration of A may also be more developed, which often means better data on living conditions. And better data allow for better human rights measurement. Data in country B may be secret or non-existent.
I called all this the catch-22 of human rights measurement: in order to measure whether countries respect human rights, you already need respect for human rights. Investigators or monitors must have some freedom to monitor, to engage in fact-finding, to enter countries and move around, to investigate “in situ”, to denounce abuses etc., and victims must have the freedom to speak out and to organize themselves in pressure groups. So we assume what we want to establish. (A side-effect of this is that authoritarian leaders themselves may be unaware of the extent of suffering among their citizens.)
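The catch-22 can be put in arithmetic terms. Suppose, purely for illustration, that the number of *reported* violations is the number of *true* violations scaled by how much access monitors have. All numbers below are hypothetical, chosen only to show how the ranking of reported scores can invert the ranking of the true ones:

```python
# Toy illustration of the catch-22 (all numbers are hypothetical):
# reported violations = true violations x monitor access, so an open
# country can look worse on paper than a sealed, far more abusive one.

countries = {
    #                   (true violations, monitor access: 0 = sealed, 1 = open)
    "A (open)":          (100, 0.9),
    "B (totalitarian)":  (1000, 0.05),
}

reported = {name: true * access for name, (true, access) in countries.items()}

for name, count in reported.items():
    print(f"{name}: reported violations = {count:.0f}")

# A reports 90 violations, B only 50: the reported scores rank A as
# worse, even though B's true violation count is ten times higher.
```

The point is not the specific numbers but the structure: as long as access is itself suppressed by repression, reported scores conflate the level of abuse with the ability to observe it.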
You can see the same problem in the common complaints that countries such as the U.S. and Israel get a raw deal from human rights groups:
[W]hy would the watchdogs neglect authoritarians? We asked both Human Rights Watch and Amnesty, and received similar replies. In some cases, staffers said, access to human rights victims in authoritarian countries was impossible, since the country’s borders were sealed or the repression was too harsh (think North Korea or Uzbekistan). In other instances, neglected countries were simply too small, poor, or unnewsworthy to inspire much media interest. With few journalists urgently demanding information about Niger, it made little sense to invest substantial reporting and advocacy resources there. … The watchdogs can and do seek to stimulate demand for information on the forgotten crises, but this is an expensive and high risk endeavor. (source)
So there may also be a supply-and-demand problem in the media: human rights groups want to influence public opinion, but can only do so with the help of the media. If the media neglect certain countries or problems because they are deemed “unnewsworthy”, then human rights groups have no incentive to monitor those countries or problems: they know that whatever they find will fall on deaf ears anyway. Better, then, to focus on the issues and the countries that can more easily be channeled through the media.
Both the catch-22 problem and the problems caused by media supply and demand can be tested empirically, by comparing the intensity of the attention that human rights monitoring organizations give to certain countries or problems with the intensity of human rights violations there (the latter data are assumed to be available, which is a big assumption, but one could use very general measures such as these). It seems that both effects are present, but only weakly:
[W]e subjected the 1986-2000 Amnesty [International] data to a barrage of statistical tests. (Since Human Rights Watch’s early archival procedures seemed spotty, we did not include their data in our models.) Amnesty’s coverage, we found, was driven by multiple factors, but contrary to the dark rumors swirling through the blogosphere, we discovered no master variable at work. Most importantly, we found that the level of actual violations mattered. Statistically speaking, Amnesty reported more heavily on countries with greater levels of abuse. Size also mattered, but not as expected. Although population didn’t impact reporting much, bigger economies did receive more coverage, either because they carried more weight in global politics and economic affairs, or because their abundant social infrastructure produced more accounts of abuse. Finally, we found that countries already covered by the media also received more Amnesty attention. (source)
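The kind of test the quoted study describes — checking whether watchdog coverage tracks actual abuse levels rather than, say, country size — can be sketched in a few lines. The data below are entirely synthetic (none of it comes from the Amnesty dataset); the simulation simply builds in the pattern the study reports, with coverage responding to abuse but not to population, and then checks that a correlation analysis recovers it:

```python
# Sketch of a coverage-vs-violations test on synthetic data.
# Hypothetical throughout: the variables and numbers are invented,
# not taken from the 1986-2000 Amnesty dataset quoted above.
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n = 200  # hypothetical country-year observations
abuse = [random.uniform(0, 10) for _ in range(n)]       # stand-in abuse score
population = [random.uniform(1, 100) for _ in range(n)]  # country size

# Simulate watchdog coverage that responds to abuse (plus noise)
# but not to population, mirroring the study's reported finding.
coverage = [2 * a + random.gauss(0, 2) for a in abuse]

corr_abuse = pearson(coverage, abuse)
corr_pop = pearson(coverage, population)
print(f"corr(coverage, abuse)      = {corr_abuse:.2f}")
print(f"corr(coverage, population) = {corr_pop:.2f}")
```

A real test would, as in the study, use multivariate regression with controls for GDP, population and prior media coverage; the sketch only shows the logic of comparing the two intensities.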
More posts in this series are here.