This week saw the release of the first government-backed ranking of Indian institutions in higher education. ‘India Rankings 2016’ judged the participating institutions (both public and private) under four categories — engineering, management, pharmacy and universities.
So what prompted the government to take such keen interest in higher education rankings? It all started about five years ago with the abysmal performance of prestigious institutes such as the IITs in the global league tables. In 2012, for instance, India was the only BRICS nation not listed in the top 200 of the QS Rankings. The then HRD minister Kapil Sibal and the IITs had dismissed the tables on the grounds that their assessment parameters were irrelevant to the Indian context.
Some of these concerns were not entirely misplaced. In world rankings, for instance, institutions are appraised on their number of international students and faculty members. In India, where restrictions are imposed on hiring foreign faculty, the IITs or any other university could not have fared well on such a parameter.
The initial criticism slowly made way for reluctant acceptance as President Pranab Mukherjee repeatedly lamented the falling standards of higher education in India and the complete absence of Indian institutions in international rankings. That's when the idea of an indigenous ranking framework, with parameters that fit the Indian context, was first mooted.
In a way, this was similar to the approach adopted by China. The ‘Shanghai Ranking’, which started in 2003 with Chinese government backing, was designed to provide a global benchmark against which its universities could assess their progress.
The framework finalised by the Indian government identified nearly 22 parameters under five major heads. Several of these, such as excellence in teaching, learning and research, are similar to those employed globally. A few, however, are India-centric: regional and international diversity, outreach, gender equity and the inclusion of disadvantaged sections of society. Participation in the India Rankings 2016 was completely voluntary.
Since there isn't any reliable database to supply all the information under these heads, the government had to depend on data supplied by the participating institutions themselves to compute the ranks.
A look at the tables released on April 4 reveals that it's a work in progress. For starters, the number of categories under which institutions were ranked isn't exhaustive. The Indian Institute of Science, Bangalore, was ranked as the top university when, technically, it isn't a university. And several law institutes didn't find a place on the tables as there wasn't a separate category dedicated to them.
Even though over 3,500 institutions participated in the first edition of 'India Rankings', the quality of the data they provided is a concern. How can one be sure that the data is not inflated or fudged? If the government does not have the resources to verify it, then perhaps the institutions' claims should be made public so that people can point out exaggerations and infirmities.
Although the launch of the indigenous rankings is a step forward, the government should not celebrate until problems like those mentioned above are fixed.