Measuring Poverty (5): The Mystery of the Feminization of Poverty

Some have called it a baseless claim: 70% of the world’s poor are women. It’s a number that seems to have come from nowhere, yet it has taken on a life of its own. The reason is probably that it has some intuitive appeal. Theoretically, the claim that being female places someone at a greater risk of being poor is convincing. Gender discrimination – a deceptively neutral term which in practice means discrimination against one gender only – is a widespread problem, and it’s highly probable that women who suffer discrimination are more likely to become poor and to remain poor. They

  • receive less education
  • receive lower wages
  • cannot freely choose their jobs in some countries
  • have fewer inheritance rights than men in some countries
  • perform the bulk of the household tasks, making it relatively hard to accumulate income
  • are responsible for managing the household income (men are often culturally allowed to escape into leisure and away from the burdens of poverty)
  • suffer disproportionately from some types of violence
  • and face very specific health risks related to procreation.

All these problems faced by women who suffer discrimination make it more likely that they are relatively more burdened by poverty, compared to men who usually don’t suffer these types of discrimination.

Moreover, when young women begin to enjoy better education and employment, we often see that the discriminatory family structures and patriarchal systems in which they live lay fresh claims to their newfound human capital. As a result, their improved capabilities may only serve to push them further into poverty. Poverty, after all, isn’t merely a question of sufficient income and capabilities; it’s also determined by the availability of choice, opportunities and leisure.

So it’s at least plausible that men and women are poor for different reasons, and that some of the reasons that make women poor make them relatively poorer than men.

This is the intuitive case, but it appears that it’s very difficult to back it up with hard numbers. The “70%” claim is unlikely to be correct, but if we agree that discrimination skews the distribution, then by how much? And how much of it is compensated by factors that skew the distribution towards male poverty (e.g. male participation in wars)? The problem is that poverty data are usually available only for households in aggregate and aren’t broken down by individual, sex, age etc. So it’s currently impossible to say: “this household is poor, and the female parent/child is poorer than the male parent/child”. In addition, even in households that aren’t poor according to standard measures, the women inside those households may well be poor. Women in non-poor households may be unable to access the household’s income or wealth because of discrimination.
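The household-aggregation problem can be made concrete with a minimal sketch. All the numbers below are invented for illustration; the point is only that a per-capita, household-level poverty measure is blind to how income is actually distributed inside the household.

```python
# Illustrative only: invented numbers showing how household-level poverty
# measurement can hide individual (intra-household) poverty.

POVERTY_LINE = 2.0  # assumed poverty line: income per person per day

# One hypothetical 4-person household with an unequal internal split.
members = {"man": 6.0, "woman": 1.5, "boy": 1.5, "girl": 1.0}
household_income = sum(members.values())  # 10.0 per day in total

# Standard household-level measure: divide total income equally per head.
per_capita = household_income / len(members)
household_counted_poor = per_capita < POVERTY_LINE
print(f"per-capita income: {per_capita:.2f} -> household poor? {household_counted_poor}")

# Individual-level measure: look at each member's actual share.
individually_poor = [name for name, share in members.items() if share < POVERTY_LINE]
print(f"individually poor members: {individually_poor}")
```

On these made-up numbers the household is counted as non-poor (per-capita income of 2.5 is above the line), while three of its four members live below the line. Aggregate household data simply cannot register this.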

Given the problems of the current poverty measurement system, I think it’s utopian to expect improvements in the system that will allow us to adequately measure female poverty and test the hypothesis of the feminization of poverty.

Measuring Human Rights (9): When “Worse” Doesn’t Necessarily Mean “Worse”

I discussed in this older post some of the problems related to the measurement of human rights violations, and to the assessment of progress or deterioration. One of the problems I mentioned is caused by improvements in measurement methods. Such improvements can result in a statistic showing increasing numbers of rights violations, whereas in reality the numbers may be stable, and perhaps even decreasing. Better measurement means that you are now comparing current data that are more complete and better measured with older numbers of rights violations that were simply incomplete.

The example I gave was about rape statistics: better statistical and reporting methods used by the police, combined with diminishing social stigma etc., result in statistics showing a rising number of rapes, but this increase is due to the measurement methods (and other reporting effects), not to what happened in real life.

I’ve now come across another example. Collateral damage – the unintentional killing of civilians during wars – seems to be higher now than a century ago (source). This may also be the result of better monitoring hiding a totally different trend. We all know that civilian deaths are much less acceptable now than they used to be, and that journalism and war reporting are probably much better (given better communication technology). Hence, people may now believe that it’s more important to count civilian deaths, and have better means to do so. As a result, the numbers of civilian deaths showing up in statistics will rise compared to older periods, but perhaps the real numbers don’t rise at all.

Of course, the increase in collateral damage may be the result of something other than better measurement: perhaps the lower acceptability of civilian deaths forces armies to classify some of those deaths as unintentional even when they’re not (and then we have worse rather than better measurement). Or perhaps the relatively recent development of precision-guided munitions has made the use of munitions more widespread, so that there are more victims: many precise bombs can make more victims than a few imprecise ones. Or perhaps the current form of warfare, with guerrilla troops hiding among populations, does indeed produce more civilian deaths.

Still, I think my point stands: better measurement of human rights violations can give the wrong impression. Things may look as if they’re getting worse, but they’re not.

Lies, Damned Lies, and Statistics (10): How (Not) to Frame Survey Questions

I’ve mentioned before that information on human rights depends heavily on opinion surveys. Unfortunately, surveys can be wrong and misleading for so many different reasons that we have to be very careful when designing surveys and when using and interpreting survey data. One reason I haven’t mentioned before is the framing of the questions.

Even very small differences in framing can produce widely divergent answers. And there is a wide variety of problems linked to the framing of questions:

  • Questions can be leading questions – questions that suggest the answer. For example: “It’s wrong to discriminate against people of another race, isn’t it?” Or: “Don’t you agree that discrimination is wrong?”
  • Questions can be put in such a way that they put pressure on people to give a certain answer. For example: “Most reasonable people think racism is wrong. Are you one of them?” This is also a leading question of course, but it’s more than simply “leading”.
  • Questions can be confusing or easily misinterpreted. Such questions often include a negative, or, worse, a double negative. For example: “Do you agree that it isn’t wrong to discriminate under no circumstances?” Needless to say, your survey results will be contaminated by answers that are the opposite of what they should have been.
  • Questions can be wordy. For example: “What do you think about discrimination (a term that refers to treatment toward or against a person of a certain group that is based on class or category rather than individual merit) as a type of behavior that promotes a certain group at the expense of another?” This is obviously a subtype of the confusing variety.
  • Questions can also be confusing because they use jargon, abbreviations or difficult terms. For example: “Do you believe that UNESCO and ECOSOC should administer peer-to-peer expertise regarding discrimination in an ad hoc or a systemic way?”
  • Questions can in fact be double or even triple questions, but there is only one answer required and allowed. Hence people who may have opposing answers to the two or three sub-questions will find it difficult to provide a clear answer. For example: “Do you agree that racism is a problem and that the government should do something about it?”
  • Open questions should be avoided in a survey. For example: “What do you think about discrimination?” Such questions do not yield answers that can be quantified and aggregated.
  • You also shouldn’t frame questions in ways that exclude some possible answers, or provide a multiple-choice list that omits some possible answers. For example: “How much did the government improve its anti-discrimination efforts relative to last year? Somewhat? Average? A lot?” Notice that this framing doesn’t allow people to respond that the effort hasn’t improved or has worsened. Another example: failure to include “don’t know” as a possible answer.

Here’s a real-life example:

In one of the most infamous examples of flawed polling, a 1992 poll conducted by the Roper organization for the American Jewish Committee found that 1 in 5 Americans doubted that the Holocaust occurred. How could 22 percent of Americans report being Holocaust deniers? The answer became clear when the original question was re-examined: “Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?” This awkwardly-phrased question contains a confusing double-negative which led many to report the opposite of what they believed. Embarrassed Roper officials apologized, and later polls, asking clear, unambiguous questions, found that only about 2 percent of Americans doubt the Holocaust. (source)

Measuring Human Rights (7): Don’t Let Governments Make it Easy on Themselves

In many cases, the task of measuring respect for human rights in a country falls on the government of that country. It’s obvious that this isn’t a good idea in dictatorships: governments there will not present correct statistics on their own misbehavior. But if not the government, who else? Dictatorships aren’t known for their thriving and free civil societies, or for granting access to outside monitors. As a result, human rights protection can’t be measured.

The problem of depending on governments for human rights measurement isn’t, however, limited to dictatorships. I’ve also given examples of democratic governments not doing a good job in this respect. Governments, including democratic ones, tend to choose indicators they already have – for example, the number of people benefiting from government food programs (they have numbers for that) – while neglecting private food programs for which information isn’t readily available. In this case, and in many others, governments choose indicators that are easy to measure rather than indicators that measure what needs to be measured but require a lot of effort and money.

Human rights measurement also fails to measure what needs to be measured when the people whose rights we want to measure don’t have a say in which indicators are best. And that happens a lot, even in democracies. Citizen participation is a messy thing and governments tend to avoid it, but the result may be that we’re measuring the wrong thing. For example, we think we are measuring poverty when we count the number of internet connections among disadvantaged groups, but these groups may consider the lack of cable TV or public transportation a much more serious deprivation. The reason we’re not measuring what we think we are measuring, or what we really need to measure, is not – as in the previous case – complacency or a lack of budgets. The reason is a lack of consultation. Because there hasn’t been any consultation, the definition of “poverty” used by those measuring human rights is completely different from the one used by those whose rights are being measured. As a result, the chosen indicators aren’t the correct ones, or they don’t show the whole picture. Many indicators chosen by governments are also too specific, measuring only part of a human right (e.g. free meals for the elderly instead of poverty levels among the elderly).

However, even if the indicators that are chosen are the correct ones – i.e. indicators that measure what needs to be measured, completely and not partially – it’s still the case that human rights measurement is extremely difficult, not only conceptually, but also and primarily on the level of execution. Not only are there many indicators to measure, but the data sources are scarce and often unreliable, even in developed countries. For example, let’s assume that we want to measure the human right not to suffer poverty, and that we agree that the best and only indicator of respect for this right is the level of income.* So assume the conceptual difficulties have been cleared up. The problem now is data sources. Do we use tax data (taxable income)? We all know that there is tax fraud: a low income declared in a tax return may not reflect real poverty. Tax returns also don’t include welfare benefits etc.

Even if you manage to produce neat tables and graphs you always have to stop and think about the messy ways in which they have been produced, about the flaws and lack of completeness of the chosen indicators themselves, and about the problems encountered while gathering the data. Human rights measurement will always be a difficult thing to do, even under the best circumstances.

* This isn’t obvious. Other indicators could be level of consumption, income inequality etc. But let’s assume, for the sake of simplicity, that level of income is the best and only indicator for this right.

Lies, Damned Lies, and Statistics (7): “Drowning” Data

Suppose we want to know how many forced disappearances there are in Chechnya. Assuming we have good data, this isn’t hard to do. The number of disappearances that have been registered, by the government or some NGO, is x on a total Chechen population of y, giving z%. The Russian government may decide that the better measurement is for Russia as a whole. Given that there are almost no forced disappearances in other parts of Russia, the z% goes down dramatically, perhaps close to or even below the level of other comparable countries.

Good points for Russia! But that doesn’t mean that the situation in Chechnya is OK. The data for Chechnya are simply “drowned” in those for Russia, giving the impression that, “overall”, Russia isn’t doing all that badly. This, however, is misleading. The proper unit of measurement should be limited to the area where the problem occurs. The important thing here isn’t a comparison of Russia with other countries; it’s the evaluation of a local problem.
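The arithmetic of the drowning effect is easy to sketch. The figures below are invented purely for illustration (they are not real statistics for Chechnya or Russia); the point is how quickly a concentrated local rate vanishes when diluted into a large national denominator.

```python
# Illustrative only: invented case counts and populations showing how
# aggregating a local problem into a larger unit "drowns" it statistically.

chechnya_cases, chechnya_pop = 2_000, 1_400_000     # hypothetical figures
rest_cases, rest_pop = 100, 142_000_000             # hypothetical figures

# Rate measured at the level where the problem actually occurs.
local_rate = chechnya_cases / chechnya_pop

# Rate measured at the national level: same cases, huge denominator.
national_rate = (chechnya_cases + rest_cases) / (chechnya_pop + rest_pop)

print(f"local rate:    {local_rate:.4%}")
print(f"national rate: {national_rate:.4%}")
```

On these made-up numbers the national rate is roughly a hundred times smaller than the local one, even though every single case is still in the data. Nothing was hidden; the cases were only averaged away.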

Something similar happens to the evaluation of the Indian economy:

Madhya Pradesh, for example, is comparable in population and incidence of poverty to the war-torn Democratic Republic of Congo. But the misery of the DRC is much better known than the misery of Madhya Pradesh, because sub-national regions do not appear on “poorest country” lists. If Madhya Pradesh were to seek independence from India, its dire situation would become more visible immediately. …

But because it’s home to 1.1 billion people, India is more able than most to conceal the bad news behind the good, making its impressive growth rates the lead story rather than the fact that it is home to more of the world’s poor than any other country. …

A 10-year-old living in the slums of Calcutta, raising her 5-year-old brother on garbage and scraps, and dealing with tapeworms and the threat of cholera, suffers neither more nor less than a 10-year-old living in the same conditions in the slums of Lilongwe, the capital of Malawi. But because the Indian girl lives in an “emerging economy,” slated to battle it out with China for the position of global economic superpower, and her counterpart in Lilongwe lives in a country with few resources and a bleak future, the Indian child’s predicament is perceived with relatively less urgency. (source)

Measuring Human Rights (5): Some (Insurmountable?) Problems

If you care about human rights, it’s extremely important to measure the level of protection of human rights in different countries, as well as the level of progress or deterioration. Measurement in the social sciences is always tricky; we’re dealing with human behavior and not with sizes, volumes, speeds etc. However, measuring human rights is especially difficult.

Some examples. I talked before about the so-called catch-22 of human rights measurement: in order to measure whether countries respect human rights, one already needs respect for human rights. Organizations, whether international organizations or private ones (NGOs), must have some freedom to monitor, to engage in fact finding, to enter countries and move around, to investigate “in situ”, to denounce etc. Victims must have the freedom to speak out and to organize themselves in pressure groups. So we assume what we want to establish.

The more violations of human rights there are, the more difficult it is to monitor respect for human rights. The more oppressive the regime, the harder it is to establish the nature and severity of its crimes, and the harder it is to correct the situation.

So a country which does a very bad job protecting human rights may not get a low score, because the act of giving it a correct score is made impossible by its government. Conversely, a low score for human rights (or for certain human rights) may not be as bad as it seems, because at least it was possible to determine a score.

Another example: suppose a country shows a large increase in the number of rapes. At first sight, this is a bad thing, and would mean giving the country a lower score on certain human rights (such as violence against women, gender discrimination etc.). But perhaps the increase in the number of rapes is simply the result of a larger number of rapes being reported to the police. And better reporting of rape may be the result of a more deeply and widely ingrained human rights culture, or, in other words, it may be the reflection of a growing consciousness of women’s rights and gender equality.

So, a deteriorating score may actually hide progress.
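This measurement effect is simple enough to simulate. The series below are invented for illustration: actual incidents are assumed to fall while the share of incidents that get reported rises, which is enough to make the recorded numbers climb.

```python
# Illustrative only: invented series showing how a rising reporting rate
# can make recorded incidents go up even while actual incidents go down.

years = [2000, 2005, 2010]
actual_incidents = [1000, 900, 800]    # assumed true numbers (falling)
reporting_rate = [0.20, 0.40, 0.70]    # assumed share reported (rising)

# What shows up in the statistics is only the reported fraction.
recorded = [round(a * r) for a, r in zip(actual_incidents, reporting_rate)]

for year, actual, rec in zip(years, actual_incidents, recorded):
    print(f"{year}: actual={actual}, recorded={rec}")
```

On these assumptions the recorded series rises from 200 to 360 to 560 while the actual series falls by a fifth: a “deteriorating” statistic produced entirely by better reporting.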

The same can be said of corruption or police brutality. A deteriorating score may simply be a matter of perception, a perception created by more freedom of the press.

I don’t know how to solve these problems, but I think it’s worth mentioning them. They are probably the reason why there is so little good measurement in the field of human rights, and so much anecdotal reporting.