Archive for League Tables

Introducing the Clartiverse™

Posted in Education, mathematics with tags , , , , on February 16, 2025 by telescoper

The recent decision by Maynooth University to appoint a Ranking Strategy and Insights Officer in an attempt to raise the University’s position in university league tables has inspired me to create a new spinout company to provide a service for higher education institutions who want to improve their standing in rankings while avoiding the expense and complication of actually improving the institution or indeed while continuing to pursue policies that drive performance in the opposite direction.

I have decided to name my new company CLARTIVERT™ and the extensive suite of services we will provide is called the Clartiverse™.

The idea of CLARTIVERT™ is to produce, in return for a modest payment equivalent to the salary cost of a Ranking Strategy and Insights Officer, a bespoke league table that guarantees a specified position for any given institution. This can be either your own institution whose position you would like to raise or some competitor institution that you wish to lower. We then promote the league table thus constructed in the world’s media (who seem to like this sort of thing).

The idea behind this company is that the existing purveyors of rankings deliberately manufacture artificial “churn” in the league tables by changing their weighting model every year. Why not take this process to its logical conclusion? Our not-at-all dodgy software works by including so many metrics that an appropriate combination can always be chosen to propel any institution to the top (or bottom). We achieve all this by deploying a highly sophisticated branch of mathematics called Linear Algebra, which we dress up with the fancy terms “Machine Learning” and “Artificial Intelligence” to impress potential buyers.

To begin with, we will concentrate on research assessment. This is, of course, covered by existing league tables but our approach is radically different. We will deploy a vastly expanded set of metrics, many of which are currently unused. For example, on top of the usual bibliometric indicators like citation counts and numbers of published papers, we add number of authors, number of authors whose names start with a given letter of the alphabet, letter frequencies occurring in published texts, etc. We adopt a similar approach to other indicators, such as number of academic staff, number of PhD students, number of research managers, initial letters of names of people in these different categories, distribution of salaries for each, and so on.

As well as these quantities themselves we calculate mathematical functions of them, including polynomials, exponentials, logarithms and trigonometric functions; sine and cosine have proved very useful in early testing. All these indicators are combined in various ways: not only added, but also subtracted, multiplied, and/or divided, until a weighted combination can be found that places your institution ahead of all the others.
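For the technically curious, here is a minimal sketch (in Python, with entirely invented scores) of how this sort of weight-hunting might work in practice: given a table of metric values, a small linear program searches for non-negative weights, summing to one, that put a chosen institution at the top of the resulting table. It is a toy, not a description of any real ranking software, least of all ours.

```python
# Toy "weight-hunting": find non-negative weights, summing to one, that put a
# chosen institution at the top of a league table. All data here are random
# numbers invented for illustration.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(42)
n_institutions, n_metrics = 20, 15      # lots of metrics makes the job easier
scores = rng.uniform(0, 100, size=(n_institutions, n_metrics))
target = 7                              # the institution we want to put on top

# Differences between the target's metrics and everyone else's.
diffs = scores[target] - np.delete(scores, target, axis=0)

# Variables are the weights w plus a margin t. Maximise t subject to
#   w . (x_target - x_rival) >= t  for every rival,  sum(w) = 1,  w >= 0.
c = np.r_[np.zeros(n_metrics), -1.0]                  # minimise -t
A_ub = np.hstack([-diffs, np.ones((len(diffs), 1))])  # t - diffs.w <= 0
b_ub = np.zeros(len(diffs))
A_eq = np.r_[np.ones(n_metrics), 0.0].reshape(1, -1)  # weights sum to one
b_eq = [1.0]
bounds = [(0, None)] * n_metrics + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

if res.success and res.x[-1] > 0:
    w = res.x[:n_metrics]
    ranking = np.argsort(scores @ w)[::-1]
    print("Chosen institution now sits at position", list(ranking).index(target) + 1)
else:
    print("No weighting puts this institution on top of this particular toy table.")
```

With enough metrics in play, a winning combination can almost always be found, which is rather the point.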

In future we will roll out additional elements of the Clartiverse™ to cover other aspects of higher education including not only teaching and student satisfaction but also more important things such as commercialisation and financial impropriety.

P.S. The name Clartivert™ is derived from the word clart and is not to be confused with that of any other company providing similar but less impressive services.

An Open Letter to the Times Higher World University Rankers

Posted in Bad Statistics, Education with tags , , , , , , on September 20, 2023 by telescoper

Dear Rankers,

I note with interest that you have announced significant changes to the methodology deployed in the construction of this year’s forthcoming league tables. I would like to ask what steps you will take to make it clear that any changes in institutional “performance” (whatever that is supposed to mean) could well be explained simply by changes in the metrics and how they are combined?

I assume, as intelligent and responsible people, that you did the obvious test for this effect, i.e. to construct and publish a parallel set of league tables, with this year’s input data but last year’s methodology, which would make it easy to isolate changes in methodology from changes in the performance indicators.  This is a simple test that anyone with any scientific training would perform.

You have not done this on any of the previous occasions on which you have introduced changes in methodology. Perhaps this lamentable failure of process was the result of multiple oversights. Had you deliberately withheld evidence of the unreliability of your conclusions you would have left yourselves open to an accusation of gross dishonesty, which I am sure would be unfair.

Happily, however, there is a very easy way to allay the fears of the global university community that the world rankings are being manipulated. All you need to do is publish a set of league tables using the 2022 methodology and the 2023 data. Any difference between this table and the one you publish using the new methodology would then simply be an artefact of the change in method, and could safely be ignored.

I’m sure you are as anxious as anyone else to prove that the changes this year are not simply artificially-induced “churn”, and I look forward to seeing the results of this straightforward calculation published in the Times Higher as soon as possible, preferably next week when you announce this year’s league tables.

I look forward to seeing your response to the above through the comments box, or elsewhere. As long as you fail to provide a calibration of the sort I have described, this year’s league tables will be even more meaningless than usual. Still, at least the Times Higher provides you with a platform from which you can apologize to the global academic community for wasting their time and that of others.

How Reliable Are University Rankings?

Posted in Bad Statistics, Education with tags , on April 21, 2020 by telescoper

I think most of you probably know the answer to this question already, but now there’s a detailed study on this topic. Here is the abstract of a paper on the arXiv on the subject:

University or college rankings have almost become an industry of their own, published by US News & World Report (USNWR) and similar organizations. Most of the rankings use a similar scheme: Rank universities in decreasing score order, where each score is computed using a set of attributes and their weights; the attributes can be objective or subjective while the weights are always subjective. This scheme is general enough to be applied to ranking objects other than universities. As shown in the related work, these rankings have important implications and also many issues. In this paper, we take a fresh look at this ranking scheme using the public College dataset; we both formally and experimentally show in multiple ways that this ranking scheme is not reliable and cannot be trusted as authoritative because it is too sensitive to weight changes and can easily be gamed. For example, we show how to derive reasonable weights programmatically to move multiple universities in our dataset to the top rank; moreover, this task takes a few seconds for over 600 universities on a personal laptop. Our mathematical formulation, methods, and results are applicable to ranking objects other than universities too. We conclude by making the case that all the data and methods used for rankings should be made open for validation and repeatability.

The italics are mine.

I have written many times about the worthlessness of University league tables (e.g. here).

Among the serious objections I have raised is that the way they are presented is fundamentally unscientific because they do not separate changes in data (assuming these are measurements of something interesting) from changes in methodology (e.g. weightings). There is an obvious and easy way to test for the size of the weighting effect, which is to construct a parallel set of league tables each year, with the current year’s input data but the previous year’s methodology, which would make it easy to isolate changes in methodology from changes in the performance indicators. No scientifically literate person would accept the result of this kind of study unless the systematic effects can be shown to be under control.
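To make the point concrete, here is a minimal sketch (in Python, with invented data and invented weights) of the calibration I have in mind: score the same input data under last year’s weighting scheme and this year’s, then compare the two rankings. Since the data are identical, any differences between the two tables are pure methodological artefacts.

```python
# Score identical input data under two weighting schemes and compare the
# resulting rankings. The metrics and weights are invented for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_institutions, n_metrics = 200, 5
data = rng.uniform(0, 100, size=(n_institutions, n_metrics))  # "this year's" data

weights_old = np.array([0.30, 0.30, 0.30, 0.075, 0.025])  # "last year's" recipe
weights_new = np.array([0.29, 0.29, 0.30, 0.08, 0.04])    # "this year's" tweak

def rank_table(overall_scores):
    """Return rank positions (1 = top) for a vector of overall scores."""
    order = np.argsort(overall_scores)[::-1]
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(overall_scores) + 1)
    return ranks

ranks_old = rank_table(data @ weights_old)
ranks_new = rank_table(data @ weights_new)

rho, _ = spearmanr(ranks_old, ranks_new)
moved = int(np.sum(ranks_old != ranks_new))
print(f"Spearman correlation between the two tables: {rho:.3f}")
print(f"Institutions whose position changed despite identical data: {moved}")
```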

Yet purveyors of league table twaddle all refuse to perform this simple exercise. I myself asked the Times Higher to do this a few years ago and they categorically refused, thus proving that they are not at all interested in the reliability of the product they’re peddling.

Snake oil, anyone?

Institutes, Acronyms and the Letter H

Posted in Education, Maynooth with tags , , , , , on June 25, 2019 by telescoper

Here’s a rambling and inconsequential post emanating from a coffee-room discussion yesterday.

The latest round of guff about University Rankings, in which the Massachusetts Institute of Technology (MIT) came top and Irish universities didn’t, prompted a strange letter to the Irish Times about the status of the Irish Institutes of Technology, some of which have merged, or are planning to merge, to form Technological Universities.

Looking through the list of Irish Institutes of Technology, I found that sadly there isn’t an MIT in Ireland (Mullingar would be a good place for it!), but there are, for example:

Cork Institute of Technology (CIT)

Waterford Institute of Technology (WIT)

Limerick Institute of Technology (LIT)

Athlone Institute of Technology (AIT)

and so on, as well as..

Institute of Technology Tralee….(:-)

I wondered whether there might be some other potentially unfortunate acronyms to be had. I hoped, for example, for a South Howth Institute of Technology, but sadly there isn’t one; nor is there a Sligo Higher Institute of Technology. There’s no Galway Institute of Technology either.

In the course of that exercise in silliness I discovered how few towns and villages there are in Ireland whose names begin with the letter H. Moreover, all of those listed on the Wikipedia page are English-language (Sacs-Bhéarla) forms rather than genuinely Irish names.

I’m sure Irish speakers will correct me on this, but I guess this lack of Irish proper names beginning with H may be connected with the use of h in denoting lenition. When used in this way the `h’ always appears after the consonant being modified and so never forms the initial letter. There are plenty of words in Irish beginning with H, though, so this is either a red herring or something specific to place names.

Comments and corrections are welcome through the box below!


UPDATE: I’m reliably informed (via Twitter) that all words in modern Irish beginning with H are borrowings from other languages, and the h was only introduced into Irish words for the reason mentioned above.

The One True Ranking Narrative

Posted in Education, Maynooth with tags , , , on September 27, 2018 by telescoper

Yesterday saw the release of the 2019 Times Higher World University Rankings. The main table of rankings can be found here and the methodology used to concoct it here.

There seems little point in doing so, but I’ll once again reiterate the objections I made last year, and the year before that, and the year before that, to the completely unscientific nature of these tables. My main point is that when changes to the methodology used to compile these tables are introduced, no attempt is ever made to distinguish their effect from changes in the input data. This would be quite easy to do, simply by running the old methodology alongside the new on the same input data. The compilers of these tables steadfastly refuse to do this straightforward exercise. I suspect this is because they know what the result would be: that many of the changes in these tables year-on-year are the result of artificially introduced `churn'.

And then there are the questions of whether you think the metrics used are meaningful anyway, and whether any changes not due to methodological tweaks are simply statistical noise, but I have neither the time nor the energy to go into them now…

Notwithstanding the reasonable objections to these tables, the newspapers are full of stories constructed to explain why some universities went up, some went down and others stayed roughly the same. Most of these articles were obviously written by that well-known journalist, Phil Space.

However, not all these narratives are meaningless. The latest Times Higher World University Rankings have revealed that here in Ireland, while more famous higher education establishments such as Trinity College Dublin have fallen three places due to *insert spurious narrative here*, my own institution (Maynooth University) is one of only two to have risen in the tables. It simply cannot be a coincidence that I moved here this year. Clearly my arrival from Cardiff has had an immediate and positive impact. There is no other credible explanation.

More Worthless University Rankings

Posted in Bad Statistics, Education with tags , , , on September 6, 2017 by telescoper

The Times Higher World University Rankings were released this week. The main table can be found here and the methodology used to concoct them here.

Here I wish to reiterate the objection I made last year and the year before that to the way these tables are manipulated year on year to create an artificial “churn” that renders them unreliable and impossible to interpret in any objective way. In other words, they’re worthless. This year the narrative text includes:

This year’s list of the best universities in the world is led by two UK universities for the first time. The University of Oxford has held on to the number one spot for the second year in a row, while the University of Cambridge has jumped from fourth to second place.

Overall, European institutions occupy half of the top 200 places, with the Netherlands and Germany joining the UK as the most-represented countries. Italy, Spain and the Netherlands each have new number ones.

Another notable trend is the continued rise of China. The Asian giant is now home to two universities in the top 30: Peking and Tsinghua. The Beijing duo now outrank several prestigious institutions in Europe and the US. Meanwhile, almost all Chinese universities have improved, signalling that the country’s commitments to investment has bolstered results year-on-year.

In contrast, two-fifths of the US institutions in the top 200 (29 out of 62) have dropped places. In total, 77 countries feature in the table.

These comments are all predicated on the assumption that any changes since the last tables represent changes in data (which in turn are assumed to be relevant to how good a university is) rather than changes in the methodology used to analyse that data. Unfortunately, every single year the Times Higher changes its methodology. This time we are told:

This year, we have made a slight improvement to how we handle our papers per academic staff calculation, and expanded the number of broad subject areas that we use.

What has been the effect of these changes? We are not told. The question that must be asked is this: how can we be sure that any change in league table position for an institution from year to year represents a change in “performance”, rather than a change in the way metrics are constructed and/or combined? Would you trust the outcome of a medical trial in which the responses of two groups of patients (e.g. one given medication and the other placebo) were assessed with two different measurement techniques?

There is an obvious and easy way to test for the size of this effect, which is to construct a parallel set of league tables, with this year’s input data but last year’s methodology, which would make it easy to isolate changes in methodology from changes in the performance indicators. The Times Higher – along with other purveyors of similar statistical twaddle – refuses to do this. No scientifically literate person would accept the result of this kind of study unless the systematic effects can be shown to be under control. There is a very easy way for the Times Higher to address this question: all they need to do is publish a set of league tables using, say, the 2016/17 methodology and the 2017/18 data, for comparison with those constructed using this year’s methodology on the 2017/18 data. Any differences between these two tables will give a clear indication of the reliability (or otherwise) of the rankings.

I challenged the Times Higher to do this last year, and they refused. You can draw your own conclusions about why.

P.S. For the record, Cardiff University is 162nd in this year’s table, a rise of 20 places on last year. My former institution, the University of Sussex, is up two places to joint 147th. Whether these changes are anything other than artefacts of the data analysis I very much doubt.

Why Universities should ignore League Tables

Posted in Bad Statistics, Education with tags , , , , , on January 12, 2017 by telescoper

Very busy day today but I couldn’t resist a quick post to draw attention to a new report by an independent think tank called the Higher Education Policy Institute (PDF available here; high-level summary there). It says a lot of things that I’ve discussed on this blog already and I agree strongly with most of its conclusions. The report is focused on the international league tables, but much of what it says (in terms of methodological criticism) also applies to the national tables. Unfortunately, I doubt if this will make much difference to the behaviour of the bean-counters who have now taken control of higher education, for whom strategies intended to 'game' position in these largely bogus tables seem to be the main focus of policy, rather than the pursuit of teaching and scholarship, which is what universities should actually be for.

Here is the introduction to the high-level summary:

Rankings of global universities, such as the THE World University Rankings, the QS World University Rankings and the Academic Ranking of World Universities claim to identify the ‘best’ universities in the world and then list them in rank order. They are enormously influential, as universities and even governments alter their policies to improve their position.

The new research shows the league tables are based almost exclusively on research-related criteria and the data they use are unreliable and sometimes worse. As a result, it is unwise and undesirable to give the league tables so much weight.

Later on we find some recommendations:

The report considers the inputs for the various international league tables and discusses their overall weaknesses before considering some improvements that could be made. These include:

  • ranking bodies should audit and validate data provided by universities;
  • league table criteria should move beyond research-related measures;
  • surveys of reputation should be dropped, given their methodological flaws;
  • league table results should be published in more complex ways than simple numerical rankings; and
  • universities and governments should not exaggerate the importance of rankings when determining priorities.

No doubt the purveyors of these rankings – I’ll refrain from calling them “rankers” – will mount a spirited defence of their business, but I agree with the view expressed in this report that, as they stand, these league tables are at best meaningless and at worst damaging.

The Worthless University Rankings

Posted in Bad Statistics, Education with tags , , , on September 23, 2016 by telescoper

The Times Higher World University Rankings were released this week. The main table can be found here and the methodology used to concoct them here.

Here I wish to reiterate the objection I made last year to the way these tables are manipulated year on year to create an artificial “churn” that renders them unreliable and impossible to interpret in an objective way. In other words, they’re worthless. This year, editor Phil Baty has written an article entitled Standing still is not an option, in which he states that “the overall rankings methodology is the same as last year”. Actually it isn’t. On the methodology page you will find this:

In 2015-16, we excluded papers with more than 1,000 authors because they were having a disproportionate impact on the citation scores of a small number of universities. This year, we have designed a method for reincorporating these papers. Working with Elsevier, we have developed a new fractional counting approach that ensures that all universities where academics are authors of these papers will receive at least 5 per cent of the value of the paper, and where those that provide the most contributors to the paper receive a proportionately larger contribution.

So the methodology just isn’t “the same as last year”. In fact every year that I’ve seen these rankings there’s been some change in methodology. The change above at least attempts to improve on the absurd decision taken last year to eliminate from the citation count any papers arising from large collaborations. In my view, membership of large world-wide collaborations is in itself an indicator of international research excellence, and such papers should if anything be given greater not lesser weight. But whether you agree with the motivation for the change or not is beside the point.

The real question is how we can be sure that any change in league table position for an institution from year to year is caused by changes in “performance” rather than by methodological tweaks, i.e. by changes in the metrics themselves rather than by changes in the way they are combined. Would you trust the outcome of a medical trial in which the responses of two groups of patients (e.g. one given medication and the other placebo) were assessed with two different measurement techniques?

There is an obvious and easy way to test for the size of this effect, which is to construct a parallel set of league tables, with this year’s input data but last year’s methodology, which would make it easy to isolate changes in methodology from changes in the performance indicators. The Times Higher – along with other purveyors of similar statistical twaddle – refuses to do this. No scientifically literate person would accept the result of this kind of study unless the systematic effects can be shown to be under control. There is a very easy way for the Times Higher to address this question: all they need to do is publish a set of league tables using, say, the 2015/16 methodology and the 2016/17 data, for comparison with those constructed using this year’s methodology on the 2016/17 data. Any differences between these two tables will give a clear indication of the reliability (or otherwise) of the rankings.

I challenged the Times Higher to do this last year, and they refused. You can draw your own conclusions about why.

Rank Nonsense

Posted in Bad Statistics, Education, Politics with tags , , , , , on September 8, 2016 by telescoper

It’s that time of year when international league tables (also known as “World Rankings”)  appear. We’ve already had the QS World University Rankings and the Shanghai (ARWU) World University Rankings. These will soon be joined by the Times Higher World Rankings, due out on 21st September.

A lot of people who should know a lot better give these league tables far too much attention. As far as I’m concerned they are all constructed using extremely suspect methodologies whose main function is to amplify small statistical variations into something that looks significant enough to justify constructing  a narrative about it. The resulting press coverage usually better reflects a preconceived idea in a journalist’s head than any sensible reading of the tables themselves.

A particularly egregious example of this kind of nonsense can be found in this week’s Guardian. The offending article is entitled “UK universities tumble in world rankings amid Brexit concerns”. Now I make no secret of the fact that I voted “Remain” and that I do think BrExit (if it actually happens) will damage UK universities (as well as everything else in the UK). However, linking the changes in the QS rankings to BrExit is evidently ridiculous: all the data were collected before the referendum on 23rd June anyway! In my opinion there are enough good arguments against BrExit without trying to concoct daft ones.

In any case these tables do not come with any estimate of the likely statistical variation from year to year in the metrics used to construct them, which makes changes impossible to interpret. If only the compilers of these tables would put error bars on the results! Interestingly, my former employer, the University of Sussex, has held its place exactly in the QS rankings between 2015 and 2016: it was ranked 187th in the world in both years. However, the actual score corresponding to these two years was 55.6 in 2015 and 48.4 in 2016. Moreover, Cambridge University fell from 3rd to 4th place this year but its score only changed from 98.6 to 97.2. I very much doubt that is significant at all, but it’s mentioned prominently in the subheading of the Guardian piece:

Uncertainty over research funding and immigration rules blamed for decline, as Cambridge slips out of top three for first time.

Actually, looking closer, I find that Cambridge was joint 3rd in 2015 and is 4th this year. Over-interpretation, or what?
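If the compilers will not publish error bars, one can at least get a feel for how fragile such changes are with a toy Monte Carlo: take two overall scores separated by the kind of gap quoted above, assume a purely illustrative uncertainty on each, and see how often their order flips on re-measurement. The uncertainty below is my own assumption, not anything QS has published.

```python
# Toy Monte Carlo: how often would two scores of 98.6 and 97.2 swap places if
# each carried an assumed (purely illustrative) uncertainty of +/- 1 point?
import numpy as np

rng = np.random.default_rng(1)
score_a, score_b = 98.6, 97.2   # the kind of gap quoted above
sigma = 1.0                     # assumed score uncertainty, for illustration only
n_trials = 100_000

a = score_a + rng.normal(0, sigma, n_trials)
b = score_b + rng.normal(0, sigma, n_trials)
flip_fraction = np.mean(b > a)

print(f"Fraction of trials in which the lower-scored institution ends up ahead: {flip_fraction:.2f}")
```

Even a modest score uncertainty makes a one-place change in rank look more like noise than news.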

To end with, I can’t resist mentioning that the University of Sussex is in the top 150 in the Shanghai Rankings for Natural and Mathematical Sciences this year, having not been in the top 200 last year. This stunning improvement happened while I was Head of School for Mathematical and Physical Sciences so it clearly can not be any kind of statistical fluke but is entirely attributable to excellent leadership. Thank you for your applause.


The Rising Stars of Sussex Physics

Posted in Bad Statistics, Biographical, Education with tags , , , , on July 28, 2016 by telescoper

This is my penultimate day in the office in the School of Mathematical and Physical Sciences at the University of Sussex, and a bit of news has arrived that seems a nice way to round off my stint as Head of School.

It seems that Physics & Astronomy research at the University of Sussex has been ranked 13th in western Europe and 7th in the UK by the leading academic publisher Nature Research, and has been profiled as one of its top-25 “rising stars” worldwide.

I was tempted to describe this rise as ‘meteoric’ but in my experience meteors generally fall down rather than rise up.

Anyway, as regular readers of this blog will know, I’m generally very sceptical of the value of league tables and there’s no reason to treat this one as qualitatively any different. Here is an explanation of the (rather curious) methodology from the University of Sussex news item:

The Nature Index 2016 Rising Stars supplement identifies the countries and institutions showing the most significant growth in high-quality research publications, using the Nature Index, which tracks the research of more than 8,000 global institutions – described as “players to watch”.

The top 100 most improved institutions in the index between 2012 and 2015 are ranked by the increase in their contribution to 68 high-quality journals. From this top 100, the supplement profiles 25 rising stars – one of which is Sussex – that are already making their mark, and have the potential to shine in coming decades.

The institutions and countries examined have increased their contribution to a selection of top natural science journals — a metric known as weighted fractional count (WFC) — from 2012 to 2015.

Mainly thanks to a quadrupling of its physical sciences score, Sussex reached 351 in the Global 500 in 2015. That represents an 83.9% rise in its contribution to index papers since 2012 — the biggest jump of any UK research organisation in the top 100 most improved institutions.

It’s certainly a strange choice of metric, as it only involves publications in “high quality” journals, presumably selected by Journal Impact Factor or some other arbitrary statistical abomination, and then takes the difference in this measure between 2012 and 2015 and expresses the change as a percentage. I noticed that one institution in the list has improved by over 4600%, which makes Sussex’s change of 83.9% seem rather insignificant…
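For anyone who doubts how much a percentage-change metric flatters institutions that start from a tiny base, here is a back-of-envelope illustration. The starting values are invented, chosen only to reproduce the sort of figures quoted above.

```python
# Percentage growth rewards a small absolute gain from a tiny base far more
# than a large absolute gain from a substantial one. Values are invented.
def percent_change(before: float, after: float) -> float:
    return 100.0 * (after - before) / before

print(percent_change(0.5, 23.5))    # tiny base: +4600%
print(percent_change(50.0, 91.95))  # substantial base: +83.9%
```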

But at least this table provides some sort of evidence that the investment made in Physics & Astronomy over the last few years has made a significant (and positive) difference. The number of research faculty in Physics & Astronomy has increased by more than 60%  since 2012 so one would have been surprised not to have seen an increase in publication output over the same period. On the other hand, it seems likely that many of the high-impact papers published since 2012 were written by researchers who arrived well before then because Physics research is often a slow burner. The full impact of the most recent investments has probably not yet been felt. I’m therefore confident that Physics at Sussex has a very exciting future in store as its rising stars look set to rise still further! It’s nice to be going out on a high note!