Led by the late Samuel Paul, the Public Affairs Centre (PAC), Bangalore, has done a lot of good work, such as citizen monitoring of the delivery of public goods and services. Governance is a widely used and abused expression. Whatever the definition of governance, this earlier work has led to a natural extension. The PAC has just (in March) produced a public affairs index (PAI), an attempt to rank states on governance. A large chunk of governance has always been at the state level and even lower down, at the local-body level. With increased emphasis on decentralisation, not just fiscal devolution, more work should indeed focus on states and local bodies.
Let’s first understand what the PAC has done in constructing the PAI. There are 10 themes: One, essential infrastructure; two, support to human development; three, social protection; four, children and women; five, crime, law and order; six, delivery of justice; seven, environment; eight, transparency and accountability; nine, fiscal management; and ten, economic freedom.
My intention is not to critique the PAC/PAI. Instead, I wish to highlight problems anyone who undertakes such a ranking confronts. This isn’t a performance ranking of states. It’s a public affairs index, a governance ranking. As soon as it’s projected as a governance ranking and not a performance ranking, there’s an implicit value judgement about what we expect a government to do and in what form.
This will become clearer once we zero in on “focus subjects”, the sub-categories under each of those 10 themes. Moreover, governance is about a process, about what goes into the delivery of public goods and services. What we end up measuring is invariably outcomes. Finally, to get an index, we need weights and an aggregation formula that obtains an overall index using those weights. To go from the 10 themes to the PAI, the PAC’s choice is simple: equal weights and the arithmetic mean. Weighting is inherently subjective, whatever one does. Equal weights have the virtue of being simple to comprehend. Hence, no complaints on that score.

As mentioned, the sub-categories under the 10 themes are the “focus subjects”, 25 in all: One, power; two, water; three, roads and communication; four, housing; five, education; six, health; seven, public distribution system; eight, social justice and empowerment; nine, minority welfare; ten, employment; eleven, children; twelve, women; thirteen, violent crimes; fourteen, atrocities; fifteen, policing; sixteen, pendency of cases; seventeen, vacancies of presiding officers; eighteen, pollution and environmental violations; nineteen, forest cover; twenty, renewable energy; twenty-one, transparency; twenty-two, public accountability; twenty-three, FRBM indicators; twenty-four, resource generation and development expenditure; and twenty-five, economic freedom. I don’t need to specify which focus subject falls under which theme; that’s obvious enough. Under each theme, equal weights are assigned to the relevant focus subjects and the arithmetic mean is used once again.
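The two-level, equal-weight aggregation described above can be sketched as follows. The scores below are invented for illustration and only two themes are shown; they are not PAC's data:

```python
# Illustrative sketch of the PAI's equal-weight, two-level aggregation.
# The normalised scores (0 to 1) are hypothetical, not PAC's actual data.

def mean(values):
    return sum(values) / len(values)

# Hypothetical focus-subject scores for one state, grouped under
# two of the 10 themes (the real index uses all 10).
themes = {
    "essential infrastructure": {
        "power": 0.7, "water": 0.6, "roads": 0.8, "housing": 0.5,
    },
    "support to human development": {
        "education": 0.65, "health": 0.7,
    },
}

# Step 1: equal weights within a theme -- the arithmetic mean of its focus subjects.
theme_scores = {name: mean(list(subjects.values()))
                for name, subjects in themes.items()}

# Step 2: equal weights across themes -- the arithmetic mean of theme scores.
pai = mean(list(theme_scores.values()))
```

The state's index is simply a mean of means; with all 10 themes and 25 focus subjects the arithmetic is the same, only longer.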
To illustrate the problem, we need to go one layer further down, to the indicators under each focus subject. There are 68 of these, the choice often driven by the availability of data, or the lack of it. Let’s take the focus subject “education”, under the theme “support to human development”, as an example. There are four associated indicators: the educational development index, ASER learning levels, the number of higher educational colleges per 1 lakh population and educational expenditure as a percentage of GSDP. (All data are suitably normalised.) First, other indicators are also possible in the area of education. Why only these? Second, if these indicators are used, are they only a function of what government does? Isn’t there an implicit value judgement about the role of government? If these indicators aren’t only influenced by what government does, should this be called a governance ranking or a performance ranking? Third, if one has used equal weights for aggregating across themes and focus subjects, why deviate from that principle and use unequal weights here?
The educational development index has a weight of 30 per cent, ASER learning levels of 40 per cent, the number of higher educational colleges of 15 per cent, and educational expenditure of 15 per cent. As I said, the intention isn’t to critique the PAC/PAI, but to point out the unavoidable subjectivity in any such exercise.
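To see how much the weighting choice matters, here is a small sketch using the 30/40/15/15 education weights cited above. The two states and their indicator scores are entirely hypothetical:

```python
# Education sub-index with the unequal weights cited in the column.
# The two states and their normalised indicator scores are hypothetical.
weights = {
    "educational development index": 0.30,
    "ASER learning levels": 0.40,
    "colleges per lakh population": 0.15,
    "education spend as % of GSDP": 0.15,
}

state_a = {"educational development index": 0.50,
           "ASER learning levels": 0.90,
           "colleges per lakh population": 0.50,
           "education spend as % of GSDP": 0.50}

state_b = {"educational development index": 0.70,
           "ASER learning levels": 0.55,
           "colleges per lakh population": 0.70,
           "education spend as % of GSDP": 0.70}

def weighted_score(scores, w):
    return sum(w[k] * scores[k] for k in w)

# With ASER at 40 per cent, state A's strong learning levels put it ahead.
a_pac = weighted_score(state_a, weights)
b_pac = weighted_score(state_b, weights)

# With equal weights (25 per cent each), state B comes out on top instead.
equal = {k: 0.25 for k in weights}
a_eq = weighted_score(state_a, equal)
b_eq = weighted_score(state_b, equal)
```

With the 30/40/15/15 weights, state A scores 0.66 against state B's 0.64; with equal weights, the ordering flips (0.60 against 0.6625). The point is not these made-up numbers but that a ranking can turn entirely on a weighting decision.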
With these qualifications, in the overall ranking among large states, the best states are Kerala, Tamil Nadu and Karnataka, in that order. The worst are Bihar, Jharkhand and Odisha, in that order. People love rankings. Often, even if the authors don’t intend it, reportage suggests great precision in such rankings when there is no such precision. A little bit of tinkering with the options might place Tamil Nadu ahead of Kerala, or Jharkhand below Bihar. If there’s little difference in scores, there’s little to choose between one state and another. But sometimes, differences in scores are robust and immune to tinkering. Whatever one does, and however one measures it, governance in Kerala will be superior to that in Bihar.

In using such rankings, one tends to focus on inter-state comparisons. An alternative, if such rankings are done periodically by the same organisation, is to benchmark a state’s improvement over time. That is, one doesn’t use the absolute level of the index, but its increment. In attempting to influence change, such temporal tracking of a state seems more acceptable than comparing states with different contexts and backgrounds. The latter rankles; the former doesn’t. When a colleague and I did such rankings earlier, in the first version, we said that half the states are below average. Our methodology was roundly criticised. In subsequent versions, we said that half the states are above average. The methodology was accepted.