
A league table by any means will smell just as rank

1 October 2015
[Image: Ladder (Wikipedia)]

The University Rankings season is upon us, with the QS league table released a few weeks back to much hand-wringing here in the UK as many science-focussed institutions tumbled downwards. The fact that this was due to a changed emphasis in counting humanities and social sciences, rather than any change at the universities themselves, was at least noted, although how much this was to excuse the drop rather than engage with the issue is unclear.

At around the same time particle physicists and other “big science” communities were up in arms as the Times Higher Education ranking, released this week, announced that it would not count articles with huge numbers of authors. Similar to the change in the QS rankings, this would tend to disadvantage institutions heavily invested in big science projects, although here the effect would probably be more about the signals being sent to communities than any substantial effect on scores or rankings. In the context of these shifts, the decision of the Japanese government to apparently shut a large proportion of Humanities and Social Sciences departments so as to focus on “areas for which society has strong needs” is…interesting.

Also interesting was the response of Phil Baty, the editor of the THES Rankings, to John Butterworth’s objections on Twitter.

The response is interesting because it suggests there is a “right way” to manage the “problem”. The issue of course is rather the other way around. There can be no right way to solve the problem independent of an assessment of what it is you are trying to assess. Is it the contribution of the university to the work? Is it some sense of the influence that accrues to the institution for being associated with the article? Is it the degree to which being involved will assist in gaining additional funding?

This, alongside the shifts up and down the QS rankings, illustrates the fundamental problem of rankings. They assume that what is being ranked is obvious, when it is anything but. No linear ranking can ever capture the multifaceted qualities of thousands of institutions, but worse than that, the very idea of a ranking is built on the assumption that we know what we’re measuring.

Now you might ask why this matters. Surely these are just indicators, mere inputs into decision making, even just a bit of amusing fun that allows Cambridge to tweak the folks at Imperial this year? But there is a problem. And that problem is that these rankings really show a vacuum at the centre of our planning and decision-making processes.

What is clear from the discussion above, and the hand-wringing over how the rankings shift, is that the question of what matters is not being addressed. Rather it is swept under the carpet by assuming there is some conception of “quality” or “excellence” that is universally shared. I’ve often said that, for me, the word “quality” is a red flag: it means someone wants to avoid talking about values.

What matters in the production of High Energy Physics papers? What do we care about? Is HEP something that all institutions should do, or something that should be concentrated in a small number of places? And not just HEP, but genomics, history, sociology…or perhaps chemistry. To ask the question “how do we count physics the same as history?” is to make a basic category error. Just as it is to assume that one authorship is the same as another.

If the question were which articles in a year have the most influence, and which institutions contributed to them, the answer would be very different to the question of which institutions made the most contribution in aggregate to global research outputs. Rankings ignore these questions and try to muddle through with forced compromises like the ones we’re seeing in the THES case.
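To make the difference between those questions concrete, here is a toy sketch. The institutions, papers and author counts are entirely invented, and this is not the methodology of any real ranking; it simply contrasts full counting, where every institution on a paper gets credit for the whole paper, with fractional counting, where credit is split by share of authors.

```python
# Toy illustration of how the counting method changes an aggregate "ranking".
# Institutions, papers and author numbers are entirely invented.
from collections import defaultdict

# Each paper is represented as {institution: number of its authors on the paper}.
papers = [
    {"Uni A": 5, "Uni B": 300},  # large collaboration paper
    {"Uni A": 5, "Uni B": 300},  # another large collaboration paper
    {"Uni A": 5, "Uni B": 300},
    {"Uni C": 4},                # small-team papers
    {"Uni C": 3},
]

full = defaultdict(float)        # full counting: one whole paper per participating institution
fractional = defaultdict(float)  # fractional counting: each paper split by share of authors

for paper in papers:
    total_authors = sum(paper.values())
    for institution, n_authors in paper.items():
        full[institution] += 1
        fractional[institution] += n_authors / total_authors

def ranked(scores):
    """Return institutions ordered from highest to lowest score."""
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print("Full counting:      ", ranked(full))
# Uni A ties with Uni B (3 papers each) and sits above Uni C (2 papers).
print("Fractional counting:", ranked(fractional))
# Uni B and Uni C stay well ahead, while Uni A's credit collapses to roughly 0.05 papers.
```

Neither scheme is the “right way”; they answer different questions, which is exactly the point.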

All that these rankings show is that the way you choose to value things depends on how you would (arbitrarily) rank them. Far more interesting is the question of what the rankings tell us about what we really value, and how hard that is in fact to measure.


One Comment »

  • Christopher Gutteridge said:

    The goal of the EPrints project was never to enable this kind of beancounting. We wanted to help researchers find and read other research. Sigh.