African Americans in the U.S. are more likely to die of cancer than whites. A similar disparity appears to exist for strokes and lead poisoning.
Many ethnic groups have a higher death rate from stroke than non-Hispanic whites. Although the death rate from stroke nationwide dropped 70% between 1950 and 1996, the decline was smallest among minorities and greatest among non-Hispanic whites. The largest excess of stroke deaths relative to whites occurred among African Americans and Asians/Pacific Islanders. Excess deaths among racial/ethnic groups could result from a greater frequency of stroke risk factors, including obesity, hypertension, physical inactivity, poor nutrition, diabetes and smoking, as well as socioeconomic factors such as lack of health insurance. (source)
Lead poisoning causes, among other things, cognitive delay, hyperactivity and antisocial behavior.
Here is some more data to support the claims expressed in this post, and this one. There’s a paper here presenting the results of a survey of leading criminologists on the deterrent effect of capital punishment in the U.S.
The findings demonstrate an overwhelming consensus among these criminologists that the empirical research conducted on the deterrence question strongly supports the conclusion that the death penalty does not add deterrent effects to those already achieved by long imprisonment.
Of course, the fact that experts believe something doesn’t make it true, but at least it’s ammunition against those proponents of the death penalty who like to claim that there is a “scientific consensus” in favor of the deterrent effect. There is no such thing. On the contrary, if there’s a consensus, it’s for the opposing view.
Another point: this kind of statistic on expert opinion, together with the data offered in the posts I linked to above, is much more convincing than the data comparing murder rates in capital punishment states and abolitionist states.
At first sight, this graph also undermines the deterrence argument, but it’s not as solid as it appears. It’s always important to control your data for other variables that could explain a difference. Maybe there are other reasons why states without the death penalty have lower murder rates, e.g. less poverty or stricter gun control. And maybe the murder rate in states with capital punishment would be even higher without capital punishment.
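The confounding worry can be made concrete with a toy calculation. All numbers below are invented for illustration (they are not real murder statistics): a confounder such as poverty can make death-penalty states look worse in the raw comparison even when, within each poverty stratum, they do no worse at all.

```python
# Hypothetical illustration -- every figure here is invented.
# Fictional states with murder rates per 100,000, split by a
# confounding variable (high vs. low poverty).
states = [
    # (has_death_penalty, high_poverty, murder_rate)
    *[(True,  True,  8.0)] * 10,   # 10 poor states with the death penalty
    *[(False, True,  8.5)] * 2,    # 2 poor states without it
    *[(True,  False, 3.0)] * 2,    # 2 rich states with the death penalty
    *[(False, False, 3.5)] * 10,   # 10 rich states without it
]

def mean_rate(rows):
    """Unweighted mean murder rate over a list of states."""
    return sum(rate for _, _, rate in rows) / len(rows)

dp    = [s for s in states if s[0]]
no_dp = [s for s in states if not s[0]]

# Raw comparison: death-penalty states look far worse ...
print(round(mean_rate(dp), 2), round(mean_rate(no_dp), 2))  # 7.17 4.33

# ... yet within each poverty stratum they do slightly better,
# because most death-penalty states happen to be poor ones.
for poverty in (True, False):
    print("high poverty" if poverty else "low poverty",
          mean_rate([s for s in dp if s[1] == poverty]),
          mean_rate([s for s in no_dp if s[1] == poverty]))
```

The raw averages point one way, the stratified averages the other; this is why the simple two-column comparison of capital-punishment and abolitionist states settles nothing by itself.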
Suppose we want to know how many forced disappearances there are in Chechnya. Assuming we have good data, this isn’t hard to do. The number of disappearances that have been registered, by the government or some NGO, is x on a total Chechen population of y, giving z%. The Russian government may decide that the better measurement is for Russia as a whole. Given that there are almost no forced disappearances in other parts of Russia, the z% goes down dramatically, perhaps close to, or even below, the level of other comparable countries.
Good points for Russia! But that doesn’t mean the situation in Chechnya is OK. The data for Chechnya are simply “drowned” in those for Russia, giving the impression that “overall”, Russia isn’t doing all that badly. This, however, is misleading. The proper unit of measurement should be limited to the area where the problem occurs. The important thing here isn’t a comparison of Russia with other countries; it’s an evaluation of a local problem.
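The dilution effect is pure arithmetic. The figures below are invented placeholders (the post deliberately leaves x, y and z abstract), but they show how the same absolute number of cases shrinks dramatically when the denominator is widened from the region to the whole country.

```python
# Hypothetical illustration -- all figures are invented, not real data.
chechnya_pop   = 1_400_000    # assumed regional population
chechnya_cases = 3_000        # assumed registered disappearances
russia_pop     = 144_000_000  # assumed national population
other_cases    = 200          # assumed near-zero cases elsewhere in Russia

def per_100k(cases, population):
    """Cases per 100,000 people for a chosen denominator."""
    return 100_000 * cases / population

local_rate    = per_100k(chechnya_cases, chechnya_pop)
national_rate = per_100k(chechnya_cases + other_cases, russia_pop)

print(round(local_rate, 1))     # 214.3 -- the local problem
print(round(national_rate, 1))  # 2.2   -- "drowned" in the national figure
```

Nothing about the underlying problem changes between the two lines; only the choice of denominator does, which is exactly why that choice shouldn’t be left to the party being measured.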
Something similar happens in the evaluation of the Indian economy:
Madhya Pradesh, for example, is comparable in population and incidence of poverty to the war-torn Democratic Republic of Congo. But the misery of the DRC is much better known than the misery of Madhya Pradesh, because sub-national regions do not appear on “poorest country” lists. If Madhya Pradesh were to seek independence from India, its dire situation would become more visible immediately. …
But because it’s home to 1.1 billion people, India is more able than most to conceal the bad news behind the good, making its impressive growth rates the lead story rather than the fact that it is home to more of the world’s poor than any other country. …
A 10-year-old living in the slums of Calcutta, raising her 5-year-old brother on garbage and scraps, and dealing with tapeworms and the threat of cholera, suffers neither more nor less than a 10-year-old living in the same conditions in the slums of Lilongwe, the capital of Malawi. But because the Indian girl lives in an “emerging economy,” slated to battle it out with China for the position of global economic superpower, and her counterpart in Lilongwe lives in a country with few resources and a bleak future, the Indian child’s predicament is perceived with relatively less urgency. (source)
In the case of dictatorial governments, or other governments that are widely implicated in the violation of the rights of their citizens, it’s obvious that the task of measuring respect for human rights should be – where possible – carried out by independent non-governmental organizations, possibly even international or foreign ones (if local ones are not allowed to operate). Counting on the criminal to report on his crimes isn’t a good idea. Of course, sometimes there’s no other way. It’s often impossible to compile census data, for example, or data on mortality, healthcare provision etc. without using official government information.
All this is rather trivial. The more interesting point, I hope, is that the same is true, to some extent, of governments that generally have a positive attitude towards human rights. Obviously, the human rights performance of these governments also has to be measured, because there are rights violations everywhere, and a positive attitude doesn’t guarantee positive results. However, even in such cases, it’s not always wise to trust governments with the task of measuring their own performance in the field of human rights. An example from a paper by Marilyn Strathern (source, gated):
In 1993, new regulations [required] local authorities in the UK … to publish indicators of output, no fewer than 152 of them, covering a variety of issues of local concern. The idea was … to make councils’ performance transparent and thus give them an incentive to improve their services. As a result, however,… even though elderly people might want a deep freeze and microwave rather than food delivered by home helps, the number of home helps [was] the indicator for helping the elderly with their meals and an authority could only improve its recognised performance of help by providing the elderly with the very service they wanted less of, namely, more home helps.
Even benevolent governments can make crucial mistakes like these. This isn’t even a measurement error; it’s a case of measuring the wrong thing. And the mistake wasn’t caused by the government’s will to manipulate, but by a genuine misunderstanding of what the measurement should be about.
I think the general point I’m trying to make is that human rights measurement should take place in a free market of competing measurements – and shouldn’t be a (government) monopoly. Measurement errors are more likely to be identified when competing measurements of the same thing can be compared.