The Transparent Dishonesty of the Research Excellence Framework
Some of my colleagues in the School of Physics & Astronomy recently attended a briefing session about the forthcoming Research Excellence Framework. This, together with the post I reblogged earlier this morning, suggested that I should re-hash an article I wrote some time ago about the arithmetic of the REF, and how it will clearly not do what it says on the tin.
The first thing to note is the scale of the task facing the members of the panel undertaking the assessment. Every research-active member of staff in every university in the UK is asked to submit four research publications (“outputs”) to the panel, and we are told that each of these will be read by at least two panel members. The Physics panel comprises 20 members.
As a rough guess I’d say that the UK has about 40 Physics departments, and the average number of research-active staff in each is probably about 40. That gives about 1600 individuals for the REF. Actually the number of category A staff submitted to the 2008 RAE was 1,685.57 FTE (Full-Time Equivalent), pretty close to this figure. At 4 outputs per person that gives 6400 papers to be read. We’re told that each will be read by at least two members of the panel, so that gives an overall job size of 12800 paper-readings. There are 20 members of the panel, so that means that between 29th November 2013 (the deadline for submissions) and the announcement of the results in December 2014 each member of the panel will have to have read 640 research papers. That’s an average of about two a day. Every day. Weekends included.
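For the arithmetically inclined, the back-of-envelope sums can be written out as a short Python snippet; the department and staff counts are just the rough guesses above, not official figures:

```python
# Back-of-envelope REF workload arithmetic (all inputs are rough estimates).
departments = 40          # rough guess at the number of UK physics departments
staff_per_dept = 40       # rough average of research-active staff per department
outputs_per_person = 4    # outputs submitted per person
readings_per_paper = 2    # each output read by at least two panel members
panel_size = 20

staff = departments * staff_per_dept           # ~1600 (2008 RAE: 1,685.57 FTE)
papers = staff * outputs_per_person            # 6400 outputs to assess
readings = papers * readings_per_paper         # 12800 paper-readings in total
per_member = readings // panel_size            # 640 papers per panel member

days = 380  # roughly 29 November 2013 to mid-December 2014
print(per_member, round(per_member / days, 1))  # 640 papers, about 1.7 a day
```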
Now we are told the panel will use their expert judgment to decide which outputs belong to the following categories:
- 4* World Leading
- 3* Internationally Excellent
- 2* Internationally Recognized
- 1* Nationally Recognized
- U Unclassified
There is an expectation that the so-called QR funding allocated as a result of the 2014 REF will be heavily weighted towards 4*, with perhaps a small allocation to 3* and probably nothing at all for lower grades. In other words “Internationally recognized” research will probably be deemed completely worthless by HEFCE. Will the papers belonging to the category “Not really understood by the panel member” suffer the same fate?
The panel members will apparently know enough about every single one of the papers they are going to read in order to place them into one of the above categories, especially the crucial ones “world-leading” or “internationally excellent”, both of which are obviously defined in a completely transparent and objective manner. Not.
We are told that after forming this judgement based on their expertise the panel members will “check” the citation information for the papers. This will be done using the SCOPUS service provided (no doubt at considerable cost) by Elsevier, which by sheer coincidence also happens to be a purveyor of ridiculously overpriced academic journals. No doubt Elsevier are on a nice little earner peddling meaningless data for the HEFCE bean-counters, but I haven’t any confidence that it will add much value to the assessment process.
There have been high-profile statements to the effect that the REF will take no account of where the relevant “outputs” are published, including a recent pronouncement by David Willetts. On the face of it, that would suggest that a paper published in the spirit of Open Access in a free archive would not be disadvantaged. However, I very much doubt that will be the case.
I think if you look at the volume of work facing the REF panel members it’s pretty clear that citation statistics will be much more important for the Physics panel than we’ve been led to believe. The panel simply won’t have the time or the breadth of understanding to do an in-depth assessment of every paper, so will inevitably in many cases be led by bibliometric information. The fact that SCOPUS doesn’t cover the arXiv means that citation information will be entirely missing from papers just published there.
The involvement of a company like Elsevier in this system just demonstrates the extent to which the machinery of research assessment is driven by the academic publishing industry. The REF is now pretty much the only reason why we have to use traditional journals. It would be better for research, better for public accountability and better economically if we all published our research free of charge in open archives. It wouldn’t be good for academic publishing houses, however, so they’re naturally very keen to keep things just the way they are. The saddest thing is that we’re all so cowed by the system that we see no alternative but to participate in this scam.
Incidentally, we were told before the 2008 Research Assessment Exercise that citation data would emphatically not be used; we were also told afterwards that citation data had been used by the Physics panel. That’s just one of the reasons why I’m very sceptical about the veracity of some of the pronouncements coming out of the REF establishment. Who knows what they actually do behind closed doors? All the documentation is shredded after the results are published. Who can trust such a system?
To put it bluntly, the apparatus of research assessment has done what most bureaucracies eventually do; it has become entirely self-serving. It is imposing increasingly ridiculous administrative burdens on researchers, inventing increasingly arbitrary assessment criteria and wasting increasing amounts of money on red tape which should actually be going to fund research.
May 30, 2012 at 10:09 am
What is it that leads us to publish in these expensive and exasperatingly slow journals, when any number of open-access options are available? I’m not convinced it is the REF. More likely it’s a combination of snobbery, resistance to change, plus simple things like the widespread availability of bibtex and style files, and a reluctance to ditch years spent learning where to place hyphens to appease MNRAS typesetters (bar Smail, who still prefers the random approach).
As an STFC employee, I am now unable to hold an STFC postdoc grant in my own right. I’m therefore hopeful that they’ll introduce a new category for me to aspire to: “world-leading, with hand tied behind back”. Thankfully, I’m not obliged to sell my soul to the REF, though my points are available for ten bob and a pint of heavy, should anyone want them.
Keep fighting the good fight, Peter, regarding both REF and open access.
May 30, 2012 at 1:20 pm
MNRAS-are the only-journal willing to put-up with some-of-your more-flowery prose…
May 30, 2012 at 7:09 pm
Forsooth! Yet those bounders censored my most recent acknowledgements, which were entirely factual, if slightly off-colour 🙂
May 30, 2012 at 1:58 pm
There is a paper on arXiv about declining Impact Factors for journals since the 1990s: http://arxiv.org/abs/1205.4328.
May 31, 2012 at 8:18 am
Refereeing is not the only decent function of journals that needs to be replicated in a post-parasitical system. There is also the fact that researchers are frequently asked to make changes to the logical flow of their write-up in order to make it comprehensible. These changes might be small or large, but they make a huge difference to the reader. And I don’t just mean the standard of prose.
May 30, 2012 at 4:46 pm
Agreed with all you say here – strongly – and a very good suggestion too. May it come about soon.
May 30, 2012 at 9:10 pm
When discussing the value or otherwise of the Research Excellence Framework we have to ask how government research funding could be allocated to universities without something like the REF. It might be nice to see it scrapped, but can we think of any better method of distributing funding on the basis of research quality?
I’m not convinced that allocating the funding according to grant income from other sources, or according to numbers of PhD students, would work because of the differences in funding from one academic discipline to another.
May 31, 2012 at 12:33 am
Take a census of all academics. Make the radical, and totally unjustified, assumption that the higher up the greasy pole you are, the better your research. Assign 1 to a junior lecturer, 2 to a senior lecturer, 3 to a prof. Add up all the points. Take the total pot of money, divide by number of points, and allocate according to the weighting, so a prof gets three times as much as a junior lecturer. Think of all the time saved in not having to read, write or review grant proposals…
M
x
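The rank-weighted scheme proposed above can be sketched in a few lines of Python; the weights come from the comment, but the pot size and headcounts are purely illustrative:

```python
# Sketch of the rank-weighted allocation scheme: 1 point for a junior
# lecturer, 2 for a senior lecturer, 3 for a professor.
weights = {"junior lecturer": 1, "senior lecturer": 2, "professor": 3}

def allocate(pot, census):
    """Split `pot` in proportion to rank points; census maps rank -> headcount."""
    total_points = sum(weights[rank] * n for rank, n in census.items())
    per_point = pot / total_points
    return {rank: weights[rank] * per_point for rank in census}

# Illustrative numbers only: a 600-unit pot over a made-up census.
census = {"junior lecturer": 100, "senior lecturer": 50, "professor": 50}
shares = allocate(pot=600.0, census=census)
# Each professor receives three times a junior lecturer's share.
```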
May 31, 2012 at 6:41 am
Think of the incentives to promote people! (Better still, you could do it the Oxford way, where you can be called ‘Professor’ without actually being paid a prof’s salary…)
May 31, 2012 at 8:15 am
Oxford? Just go to the USA where most academics are ‘professors’ of some rank, and come back here on sabbatical.
May 31, 2012 at 10:27 am
To me, a solution is to use metrics. The main issue with metrics is that it’s difficult to calibrate (or normalise) them. Using metrics to assess individual researchers is a poor way to determine quality. Using metrics to compare two universities that both cover a broad range of research areas may, on the other hand, give quite a good indication of their relative qualities.

I would also argue that we should not attempt to determine which university is best, which is second, etc, but rather have broad bands (top 5, next 5, … for example). In a sense I’m suggesting that REF should become an assessment of the whole university, rather than of individual departments.

One could argue that universities would then try to expand those areas that are typically highly cited and close down those that are not. However, there is a finite amount of funding coming through other sources (the research councils for example) and so there is a limit to how big the highly cited areas can get before it’s no longer financially viable. As long as there are multiple funding sources (REF, Research Council Grants, Wellcome Trust, Leverhulme, ERC…) it should be possible to set the different levels of funding in such a way that no single funding source determines how universities behave.
May 31, 2012 at 9:06 am
Distributing funding according to the numbers and ranks of academic staff would give as much research funds to teaching-only universities as to research-intensive institutions.
I’m not sure I’d like to see as much money going to the Newport Pagnell Metropolitan University as to the research-intensive Open University up the road.
May 31, 2012 at 8:30 am
You mean a boycott as in this:
http://www.timeshighereducation.co.uk/story.asp?storycode=418510
And letters from Nobel prize winners like these:
http://www.timeshighereducation.co.uk/story.asp?storycode=408774
Just browse the blog comments to get some flavour of what reaction from outside your core audience would be.
As far as MNRAS goes, it would maybe be good if at the end of its current publishing contract it moved to the same publisher as ApJ and AJ use. Then at least the profits would go back into science.
And as far as arXiv goes, the panel chair and the HEFCE REF manager were fairly clear at a briefing I attended (and other readers here would also have done) that papers on arXiv were eligible and would be treated the same way as papers in printed journals. The only anomaly, since cleared up in the revised working methods, is whether papers which were on arXiv before 31/12/2008 but not published in the final journal until after 01/01/2009 were eligible.
And as far as journals are concerned, I wouldn’t think that publishing in New Astronomy, on the grounds that it’s the only way of ensuring the details are correct in Scopus, was a very good idea.
May 31, 2012 at 4:00 pm
No, because Scopus is only used as a source of citation data, and even then only as a secondary indicator. The main indicator is the judgement of the hard-working panel, based upon reading the papers.
May 31, 2012 at 4:23 pm
They have a year; that’s around 200 working days. Are you saying they couldn’t read 3 papers per day? I would think that their universities would give them the year off teaching.
May 31, 2012 at 11:16 am
Peter, as a member of the physics panel, perhaps I’m not permitted to criticise the process – but I guess the worst that could happen is that I get sacked and don’t have to read my 640 papers…
I think REF is a missed opportunity to do something sensible at minimal overhead. As Ken Rice points out, use of citation data to assess quality always founders on the difficulty of cross-calibrating different fields. But we *had* the necessary calibration in the form of the RAE2008 results. These were flawed for the reasons REF will be flawed, but it’s probably the best that can be done. It should have been possible to design an algorithm that eats 2008 metrics data and attempts to replicate RAE2008; once the algorithm is optimised, you feed it the 2014 metrics, and you’re done.
Apart from being faster, such an approach could actually be more accurate than REF, because it can include everything we know about researchers – which is a lot more than 4 papers. Quite a few people may well produce four 4* papers, but some rare individuals could probably produce dozens. The REF algorithm has no way of rewarding these top levels of productivity. So clearly we should look at more papers – but direct reading just isn’t possible in the time available. So in summary we will do a much poorer job at discriminating different levels of research capability than we ought to, and it will take us far longer than doing a better job. How did we let this happen?
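The calibration idea sketched above — fit weights that map 2008-era metrics onto the RAE2008 outcomes, then apply those weights to 2014 metrics — might look something like the following toy sketch. Every number here is made up purely for illustration; real metrics and panel scores would be far messier:

```python
import numpy as np

# Toy calibration sketch: learn how metrics predicted RAE2008 scores,
# then reuse the fitted weights on 2014 metrics. All data is synthetic.
rng = np.random.default_rng(0)

metrics_2008 = rng.random((30, 3))      # 30 departments x 3 metrics (invented)
true_w = np.array([0.6, 0.3, 0.1])      # pretend "panel behaviour" weights
rae2008_scores = metrics_2008 @ true_w  # stand-in for the RAE2008 results

# Least-squares fit: find weights that best reproduce the 2008 outcomes.
w, *_ = np.linalg.lstsq(metrics_2008, rae2008_scores, rcond=None)

# Feed the fitted algorithm the 2014 metrics and you're done.
metrics_2014 = rng.random((30, 3))
predicted_2014 = metrics_2014 @ w
```

With noiseless synthetic data the fit recovers the weights exactly; the interesting (and hard) part in reality would be whether any such fit generalises across fields, which is exactly the cross-calibration problem mentioned above.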
May 31, 2012 at 11:31 am
Too bad that universities don’t play each other at physics as they do in sports. Then it would be a lot easier to run a fantasy league…
March 7, 2013 at 9:55 pm
[…] or citations to make their assessment. It has, however, already been pointed out that this claim is unlikely to be credible. In Physics, there will probably be something like 6500 papers each of which will supposedly be […]
November 23, 2018 at 8:52 am
[…] in the United States despite having an entirely different structure. The REF in the UK, to quote one critic, has imposed “increasingly ridiculous administrative burdens on researchers, inventing […]