The League of Extraordinary Gibberish

After a very busy few days I thought I’d relax yesterday by catching up with a bit of reading. In last week’s Times Higher I found there was a supplement giving this year’s World University Rankings.

I don’t really approve of league tables but somehow can’t resist looking in them to see where my current employer, Cardiff University, lies. There we are at number 135 in the list of the top 200 universities. That’s actually not bad for an institution that’s struggling with a Welsh funding system that seriously disadvantages it compared to our English colleagues. We’re a long way down compared to Cambridge (2nd), UCL (4th), and Imperial and Oxford (5th=). Compared to places I’ve worked at previously we’re significantly below Nottingham (91st) but still above Queen Mary (164th) and Sussex (166th). Number 1 in the world is Harvard, which is apparently somewhere near Boston (the American one).

Relieved that we’re in the top 200 at all, I decided to have a look at how the tables were drawn up. I wish I hadn’t bothered, because I was horrified at the methodological garbage that lies behind them. You can find a full account of the travesty here. In essence, however, the ranking is arrived at by adding together six distinct indicators, each weighted differently but with the weights assigned for no obvious reason, each obtained by dubious means, and each highly unlikely to measure what it purports to. Every indicator is magically turned into a score out of 100 before being added to all the others (with the appropriate weighting factors); I’ve sketched the arithmetic below the list of indicators.

The indicators are:

  1. Academic Peer Review. This is weighted at 40% of the overall score for each institution and is obtained by asking a sample of academics (selected in a way that is not explained) to name the institutions they regard as the best in their field. This year 9,386 people were involved. This sample is a tiny fraction of the global academic population and it would amaze me if it were representative of anything at all!
  2. Employer Survey. The pollsters asked 3281 graduate employers for their opinions of the different universities. This was weighted 10%.
  3. Staff-Student Ratio. Counting for 20%, this is supposed to be a measure of “teaching quality”! Good teaching = large numbers of staff? Not if most of them don’t teach, as is the case at many research universities. A large staff-to-student ratio could even mean the place is really unpopular!
  4. International Faculty. This measures the proportion of overseas staff on the books. Apparently a large number of foreign lecturers makes for a good university and shows “how attractive an institution is around the world”. Or perhaps it shows that the place finds it difficult to recruit its own nationals. This one counts for only 5%.
  5. International Students. Another 5% goes to the fraction of the student body that comes from overseas.
  6. Research Excellence. This is measured solely on the basis of citations – I’ve discussed some of the issues with that before – and counts for 20%. They choose to use an unreliable database called SCOPUS, run by the profiteering academic publisher Elsevier. The total number of citations is divided by the number of faculty to “give a sense of the density of research excellence” at the institution.
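
To make the arithmetic explicit, here is a minimal sketch (in Python) of how the six scores appear to be combined, using the weights quoted above. The variable names are mine, and I’m assuming each indicator has already been turned into a number out of 100, since the compilers don’t explain their normalisation in any detail.

```python
# Weights for the six indicators as quoted above (they sum to 1.0).
WEIGHTS = {
    "academic_peer_review":   0.40,
    "employer_survey":        0.10,
    "staff_student_ratio":    0.20,
    "international_faculty":  0.05,
    "international_students": 0.05,
    "citations_per_faculty":  0.20,
}

def overall_score(indicator_scores):
    """Weighted sum of the six indicator scores, each assumed to be out of 100."""
    return sum(WEIGHTS[name] * indicator_scores[name] for name in WEIGHTS)
```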

Well, I hope by now you’ve got a sense of the density of the idiots who compiled this farrago. Even if you set aside the issue of the accuracy of the input data, there is still the issue of how on Earth anyone could have thought it sensible to pick such silly ways of measuring what makes a good university, assign random weights to them, and then claim to have achieved something useful. They probably got paid a lot for doing it too. Talk about money for old rope. I’m in the wrong business.

What gives the game away entirely is the enormous variation from one indicator to another. This means that changing the weights even slightly would produce a drastically different list. And who is to say that the variables should be added linearly anyway? Is a score of 100 really worth precisely twice as much as a score of 50? What do the distributions look like? How significant are the differences in score from one institution to another? And what are we actually trying to measure anyway?
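
To illustrate just how fragile the ordering is, here is a toy sketch with scores I have simply made up for two imaginary institutions whose profiles differ but whose weighted totals come out almost equal; nudging each weight by a few per cent and renormalising can be enough to swap their order.

```python
import random

# Made-up indicator scores (out of 100) for two imaginary institutions, in the order:
# peer review, employer survey, staff/student, intl faculty, intl students, citations.
scores = {
    "Univ A": [90, 70, 32, 60, 60, 70],   # strong peer review, weak staff/student ratio
    "Univ B": [60, 70, 90, 60, 60, 70],   # the reverse profile
}
base_weights = [0.40, 0.10, 0.20, 0.05, 0.05, 0.20]

def ranking(weights):
    totals = {u: sum(w * s for w, s in zip(weights, vals)) for u, vals in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print("Base weights:", ranking(base_weights))

# Nudge each weight by up to +/-5%, renormalise so they still sum to one, and recompute.
random.seed(1)
for _ in range(5):
    nudged = [w * random.uniform(0.95, 1.05) for w in base_weights]
    total = sum(nudged)
    print("Perturbed:   ", ranking([w / total for w in nudged]))
```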

Here’s an example. The University of California at Berkeley scores 100/100 for indicators 1, 2 and 4, and 86 for 5. However, for Staff-Student Ratio (3) it gets a lowly 25/100, and for (6) it gets only 34, which combine to take it down to 39th in the table. Exclude this curiously-chosen proxy for teaching quality and Berkeley would rocket up the table.
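
Plugging the quoted sub-scores into the stated weights makes the point numerically. This is just back-of-the-envelope arithmetic on the figures above; the published table presumably applies some further rescaling it doesn’t explain.

```python
weights = {"peer_review": 0.40, "employer": 0.10, "staff_student": 0.20,
           "intl_faculty": 0.05, "intl_students": 0.05, "citations": 0.20}

# Berkeley's sub-scores as quoted above.
berkeley = {"peer_review": 100, "employer": 100, "staff_student": 25,
            "intl_faculty": 100, "intl_students": 86, "citations": 34}

full = sum(weights[k] * berkeley[k] for k in weights)
print(full)  # 71.1 with the stated weights

# Drop the staff/student "teaching quality" proxy and renormalise the remaining weights.
rest = {k: w for k, w in weights.items() if k != "staff_student"}
norm = sum(rest.values())
without_teaching = sum((w / norm) * berkeley[k] for k, w in rest.items())
print(without_teaching)  # about 82.6 - a jump of more than ten points
```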

Of course you can laugh these things off as unimportant trivia to be looked at with mild amusement over a glass of wine, but such tables have increasingly found their way into the minds of managers and politicians. The fact that they are based on flawed assumptions, use a daft methodology, and produce utterly meaningless results seems to be irrelevant. Because they are based on numbers they must represent some kind of absolute truth.

There’s nothing at all wrong with collating and publishing information about schools and universities. Such facts should be available to the public. What is wrong is the manic obsession with condensing disparate sets of conflicting data into a single number just so things can be ordered in lists that politicians can understand.

You can see the same thing going on in the national newspapers’ lists of university rankings. Each one uses a different weighting and different data, and the resulting lists are drastically different. They give different answers because nobody has even bothered to think about what the question is.

6 Responses to “The League of Extraordinary Gibberish”

  1. Mr Physicist Says:

    The trouble is that universities themselves play this stupid game. If they have done well, look out for the press release and VC quotation of their single ranking number….

    This gives the whole system more false credit than the politicians do.

  2. Andrew Liddle Says:

    Earlier this year, the head of one of the UK universities’ umbrella groups carried out a survey of UK university WWW pages, and found that over fifty of them claimed that their university was in the UK’s top twenty. This is of course a natural outcome of having many differently compiled tables of dubious numbers; there’s bound to be one list where a perfectly credible ‘top fifty’ university chances to land in the top twenty.

    He didn’t mention an interesting corollary though, which is that if a university really was in the top twenty (suspending disbelief for a minute so that such a concept actually exists), then no doubt by chance such a university would land in the top ten of at least one of the myriad tables, and immediately claim that status in a banner WWW headline. So, in fact, if you ever see a university claiming to be ‘in the top twenty’, you can be more or less sure that it isn’t.

    Andrew

  3. Is there anywhere that all this information, plus other indicators, is publicly available for the public/academics to sort through, with the rankings to be decided by us? I know the Times website sort-of facilitates this search, but only by subject rankings and teaching scores, another thing I’m unsure how they arrive at. That would be a lot more useful to people deciding which university to apply to.

    There is the obvious drawback with this system, in that we would undoubtedly have most of the top 200 universities in the top ten, or five, or even at number one, even if the rankings were arrived at by the quality of biscuits in the foyer (which in my experience certainly does positively correlate with research excellence).
    🙂

  4. Bryn Jones Says:

    I too share the scepticism about university league tables and the arbitrary decisions that affect the final placings.

    A fear I have is that the people compiling the tables select their rather arbitrary methods to achieve an expected result. For U.K. tables, they expect to have Oxford or Cambridge at the very top, and will therefore choose weightings for the various indices to achieve this outcome.

    The trouble is that these university league tables matter. Prospective university students – applicants for places on courses – usually want to study at a university as high in the league as they can. Positions in the tables are used by universities to market themselves. Many employers use the tables in their recruitment, naively preferring graduates from the top-placed institutions, wrongly thinking that they are somehow better than those from lower-ranked universities.

  5. For more amusingly-flawed fun, see the THES list of “Top Countries for Space Science”, topped by Scotland, with the USA a distant fifth. Wales not in the top twenty, sadly.

    http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=408577&c=1

  6. […] I have deep reservations about the usefulness of league tables, I’m not at all averse to using them as an excuse for a celebration and to help raise the profile […]
