I have not written for a while, so this post is me getting back to business. Those who follow the blog will know that a while back I discussed some research showing that only half the countries pursuing externally supported governance reforms see better government effectiveness scores over ten years. I thought I would revisit this commentary today, as I am working on why this is the case and what could be done to address it. So, let me backtrack a bit.
The 50:50 problem
The argument rests on some work I did last year, where I started looking at whether countries with World Bank-sponsored reforms in the Public Administration, Law and Justice sector (PAL&J reforms, which typically target institutional improvements) had better Worldwide Governance Indicators (WGI) scores on Government Effectiveness.
The WGI scores have obvious problems (and I myself am a critic of leaning on them too heavily), but they are commonly used to reflect on the quality of governments in developing countries and were the ‘internal’ measures that development organizations like the World Bank used to think about the issue in the 2000s. One would therefore expect World Bank reforms to impact these WGI scores directly, given that such interventions and the WGIs are both products of the same external development community. In this way, they are the ‘test’ around which reforms may have been shaped and, to use a term from the education literature, one would expect external organizations to ‘teach to the test’.
Interestingly, WGI data suggest that reform results are mixed. Some of the 145 countries I looked at (all of which adopted institutional reforms with the World Bank) have seen improved WGI government effectiveness scores in the past decade. Many have seen scores decline, even though they adopted reforms through costly projects. Afghanistan and Rwanda recorded the most improvement between 1998 and 2008, at +0.96 (with Afghanistan going from -2.27 to -1.31), while the Maldives dropped the most, at -1.39 (from 0.95 to -0.35). Although none fell as far as the Maldives, 72 countries had declining scores in the period. An almost equal number, 73, saw improved scores.
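For those who want to see how simple the underlying tally is, here is a minimal sketch in Python. The file and column names (wgi_ge.csv, ge_1998, ge_2008) are placeholders of my own for illustration, not the actual dataset I used:

    # Hypothetical sketch: count improvers and decliners on WGI Government
    # Effectiveness between 1998 and 2008, assuming a file "wgi_ge.csv" with
    # columns: country, ge_1998, ge_2008 (all names are placeholders).
    import pandas as pd

    df = pd.read_csv("wgi_ge.csv")
    df["change"] = df["ge_2008"] - df["ge_1998"]

    improved = (df["change"] > 0).sum()
    declined = (df["change"] < 0).sum()

    print(f"{improved} improved and {declined} declined out of {len(df)} countries")
    print("Biggest gain:", df.loc[df["change"].idxmax(), ["country", "change"]].tolist())
    print("Biggest drop:", df.loc[df["change"].idxmin(), ["country", "change"]].tolist())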
I find the story startling: half of the 145 countries that have had donor-sponsored government reforms in place saw declines in donor-devised indicators of government effectiveness over a recent ten-year period. Given such evidence, organizations like the World Bank cannot be too confident in their reform agendas. Imagine a new president asking her World Bank country representative, “What are the odds that proposed reforms will help us get better government effectiveness scores in a decade?” Based on this kind of data, the answer should surely be, “No more than 50-50, Madame President … as good as getting heads on a coin toss.”
These are also the odds of reforms yielding improvements in a smaller, 40-country sample I worked with at a higher level of detail. Twenty-one of the 40 countries went backwards on WGI government effectiveness scores between 1998 and 2008, while a smaller set of 19 countries saw improved indicators. Cape Verde dropped from 0.35 to 0.05 even though the country had pursued reforms in 28 World Bank projects that started in 1992 and cost $122 million. Senegal’s scores dropped by a quarter of a point even though it had undertaken 75 projects with PAL&J content, which cost over $1 billion in total. Senegal’s reform activities were consistently pursued as well, with 16 projects in the 1980s, 26 in the 1990s and 21 between 2000 and 2008; $526 million was committed to projects between 1998 and 2008 alone, the period in which government effectiveness scores fell. Nicaragua engaged in World Bank-sponsored institutional reforms costing $355 million between 1998 and 2008, building on about $290 million worth of prior engagements, but saw government effectiveness scores dip by more than half a point.
It’s worse than 50:50 on the Quality of Governance measures
As noted, I expect some to question this source of data, arguing that the WGI measures lack validity or reliability. While I count myself among the critics, many people use these indicators, so they cannot simply be dismissed as a source of data. Even academics use the data to capture hard-to-measure concepts like government effectiveness, leading to thousands of Google Scholar citations to the ‘Governance Matters’ paper series.
Even given the multiple WGI citations, it is reasonable to ask whether other measures paint a different picture of government quality or effectiveness between 1998 and 2008. One alternative is the Quality of Governance (QoG) measure produced as part of the International Country Risk Guide (ICRG). It is generated by the Political Risk Services (PRS) Group, a rating agency that has provided data on developed and developing countries over the last few decades. Its indicators are used by international firms, donors and international financial institutions. The QoG measure is a subjective index created by summing three sets of data: on corruption in government, bureaucratic quality and the rule of law. These three dimensions are actively targeted by external institutional reforms. When summed, they are presented as an indicator ranging from 0 to 1. This indicator is widely used in academic research, with over 500 references on Google Scholar between 2009 and 2011 alone.
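To make the construction concrete, here is a rough sketch of how a 0-1 composite like this can be assembled. The component scales and the equal weighting are my own assumptions for the sake of the example, not the official ICRG recipe:

    # Rough illustration of building a 0-1 composite from three governance
    # components. The maximum scores and equal weights below are assumptions
    # for illustration, not the official ICRG methodology.
    def qog_index(corruption, bureaucratic_quality, rule_of_law,
                  corruption_max=6, bureau_max=4, law_max=6):
        """Rescale each component to 0-1 and average them into one score."""
        parts = [
            corruption / corruption_max,
            bureaucratic_quality / bureau_max,
            rule_of_law / law_max,
        ]
        return sum(parts) / len(parts)

    # A country scoring 3/6 on corruption, 2/4 on bureaucratic quality and
    # 3/6 on rule of law ends up in the middle of the 0-1 range.
    print(round(qog_index(3, 2, 3), 2))  # prints 0.5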
I looked at QoG data for 107 countries to illustrate the story about governance quality and ‘reform’ between 1998 and 2008. All 107 countries had World Bank-sponsored institutional reforms in place during this period. Most also had reforms predating 1998, and most are likely to have worked with other agencies as well on reforms to improve the quality or effectiveness of government.
Even with such reforms, however, over 70 percent of these countries saw QoG scores decline between 1998 and 2008. This is a larger share of decliners than was evident in the WGI government effectiveness data. The evidence suggests that the odds of a country pursuing externally influenced reform and seeing improved quality of governance were less than 50-50 for the period. Given this, most reforming countries should expect government quality to decline over time, regardless of the reforms.
This finding is reinforced when looking at the smaller sample of 40 countries. QoG data covering the entire period are available for only 29 of these countries. QoG scores declined for 19 of them, stayed the same for one, and improved for the other nine. The biggest gain was in Serbia, which increased its score from 0.34 to 0.47. At the other extreme, Argentina fell from 0.69 to 0.52. Serbia had adopted 22 World Bank projects with institutional reform content in the 1998-2008 period, costing $217 million. Argentina engaged in 46 such projects in the period, costing over $4 billion.
Looking at per capita income as well
Mixed results like these are evident when looking at other measures of government effectiveness as well. Economic growth could be considered such a measure, not because governments drive all growth but because it is a bottom-line indicator that many believe citizens care about. Ronald Reagan’s 1980 election campaign famously suggested that an administration’s success was reflected in whether people felt better off because of the government’s presence. Bill Clinton’s 1992 campaign reinforced the idea with its slogan ‘It’s the economy, stupid’, describing what citizens expect of their governments.
With this in mind, it is important to note that most countries’ economies grew over the 1998-2008 period, and most of the world’s citizens could access more value in 2008 than they could in 1998. However, many countries grew at rates slower than their comparators, suggesting variation in factors like the quality of institutions. GDP per capita, measured in constant terms, increased by about 20 percent in Bolivia between 1998 and 2008, for instance, less than the mean growth rate for lower-middle-income countries. Latin American lower-middle-income neighbors like Brazil grew by significantly larger amounts. One could also look at Malawi, where citizens saw personal incomes grow by about 10 percent over the entire period, well below the average for low-income countries. Incomes increased by about 50 percent in comparable-income countries like Laos and Mozambique, the latter sharing Malawi’s southern border. Countries like Bolivia and Malawi were performing less effectively than comparators. Using a sports metaphor, they were boxing below their weight: looking like middleweights but punching like flyweights.
It seems that about 60 percent of 132 developing and transitional economies underperformed in this way, growing at rates lower than their comparators showed was possible in the period. One cannot attribute this directly or completely to low-quality governance or ineffective government, but the poor performance does reflect partly on such factors, at least subjectively. As with many other countries, Malawi’s government did not ensure the same growth as neighboring Mozambique between 1998 and 2008, a lackluster performance that many would surely call ‘ineffective’. This weak record came even though the country had embarked on 23 World Bank projects with PAL&J content amounting to about $550 million in the period, building on over 40 operations prior to 1998.
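The comparison behind this claim is straightforward to reproduce in outline. Here is a minimal sketch, assuming a hypothetical file gdp_pc.csv with each country’s income group and constant-price GDP per capita in 1998 and 2008 (all names are placeholders, not my actual data):

    # Hypothetical sketch: flag countries whose 1998-2008 growth in real GDP
    # per capita fell below the mean for their income group ("boxing below
    # their weight"). File and column names are placeholders.
    import pandas as pd

    df = pd.read_csv("gdp_pc.csv")  # columns: country, income_group, gdp_1998, gdp_2008
    df["growth"] = df["gdp_2008"] / df["gdp_1998"] - 1  # growth over the decade

    group_mean = df.groupby("income_group")["growth"].transform("mean")
    df["underperformer"] = df["growth"] < group_mean

    share = df["underperformer"].mean()
    print(f"{share:.0%} of {len(df)} countries grew more slowly than their income-group mean")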
Twenty-five of the 40 randomly selected countries in the smaller sample had economic growth rates lower than those evident in comparable countries. This is a similar proportion of underperformers to that seen in the full set of 132 countries. It suggests that government is comparatively ineffective in about 60 percent of the countries where externally influenced reform was meant to make government more effective.