I see that the Times Higher has now released this year’s outcomes of its annual exercise in numerology called the World University Rankings. The announcement is accompanied by narratives such as this:
China is edging closer to the top 10 and now has two institutions in the top 15 for the first time. Tsinghua and Peking universities both overtake the University of Pennsylvania, Johns Hopkins University and Columbia University to rank 12th and 14th respectively in this year’s 20th edition of the ranking. Meanwhile, Japan’s University of Tokyo now outperforms the University of Edinburgh, King’s College London and the London School of Economics and Political Science, after rising 10 places to 29th.
Blah Blah Blah.
Such statements are rendered even more meaningless than usual by the fact that this year’s (2024) rankings are constructed using a very different methodology from that of previous years. How do we know, then, that the changes in rank described above are not simply artefacts of the change in methodology?
The answer is that we don’t.
As I pointed out last week, it would be a very simple task to answer that question. All the Rankers need to do is run this year’s methodology with last year’s data and/or last year’s methodology with this year’s data, and compare the results with this year’s published rankings. Any differences so produced would demonstrate whether the methodology is more important than the data. This is the sort of test that anyone with a basic knowledge of the scientific method would perform. Over the years I’ve asked the Times Higher many times to do this exercise, and they have always refused.
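The sanity check described above is trivial to carry out. A minimal sketch, with entirely invented institutions, indicator scores, and weighting schemes (the real THE weights are not used here): apply an “old” and a “new” weighting to the *same* data, and compare the resulting rank orders with a Spearman rank correlation. If the correlation is well below 1 even though the data never changed, the movement is coming from the methodology.

```python
# Hypothetical sketch: compare rankings produced by two weighting schemes
# applied to the SAME underlying data. All names, scores and weights are
# invented for illustration; none are the actual THE indicators.

def rank(scores):
    """Return rank position (1 = best) for each institution by score."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {inst: i + 1 for i, inst in enumerate(order)}

def spearman(ranks_a, ranks_b):
    """Spearman rank correlation between two rankings of the same items."""
    n = len(ranks_a)
    d2 = sum((ranks_a[k] - ranks_b[k]) ** 2 for k in ranks_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Invented indicator scores for four institutions (identical in both runs).
data = {
    "A": {"teaching": 90, "research": 85, "citations": 70},
    "B": {"teaching": 80, "research": 90, "citations": 95},
    "C": {"teaching": 85, "research": 70, "citations": 60},
    "D": {"teaching": 65, "research": 80, "citations": 90},
}

# Two made-up weighting schemes standing in for the "old" and "new" methodology.
old_weights = {"teaching": 0.4, "research": 0.4, "citations": 0.2}
new_weights = {"teaching": 0.2, "research": 0.3, "citations": 0.5}

def composite(weights):
    """Weighted overall score per institution."""
    return {inst: sum(w * v[k] for k, w in weights.items())
            for inst, v in data.items()}

old_ranks = rank(composite(old_weights))
new_ranks = rank(composite(new_weights))

print("old:", old_ranks)
print("new:", new_ranks)
print("Spearman correlation:", round(spearman(old_ranks, new_ranks), 3))
```

Here the data are held fixed, yet institutions still swap places purely because the weights changed, which is exactly the confound the Rankers decline to measure.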
My theory is that actual institutional changes take place much more slowly, over a timescale much longer than a year. These rankings would be much harder to sell, and to construct commentaries that sell, if nothing much happened between, say, 2023 and 2024. That’s why the rankers shake up the methodology every few years: to make it look like something is happening, with which their “journalists” can fill column inches.
It seems to me a reasonable inference that the Times Higher doesn’t really care whether the results of its annual Rankfest actually mean anything or not. They just want to peddle the results to the world’s bureaucrats and politicians who lap it all up unquestioningly. Unfortunately the people in and around universities who make strategic decisions often believe this sort of nonsense and hold quite a lot of power, so the league tables aren’t merely irrelevant garbage – they’re potentially dangerous garbage.
Hats off, then, to Utrecht University in the Netherlands, which has refused to participate in this year’s rankings. I quote:
UU has chosen not to submit data. A conscious choice:
- Rankings put too much stress on scoring and competition, while we want to focus on collaboration and open science.
- In addition, it is almost impossible to capture the quality of an entire university with all the different courses and disciplines in one number.
- Also, the makers of the rankings use data and methods that are highly questionable, research shows.
Quite so. More of this please!