Archive for Research Excellence Framework

REF moves the goalposts (again)

Posted in Bad Statistics, Education, Science Politics on January 18, 2013 by telescoper

The topic of the dreaded 2014 Research Excellence Framework came up quite a few times in quite a few different contexts over the last few days, which reminded me that I should comment on a news item that appeared a week or so ago.

As you may or may not be aware, the REF is meant to assess the excellence of university departments in various disciplines and distribute its “QR” research funding accordingly. Institutions complete submissions, which include details of relevant publications and so on, and then a panel sits in judgement. I’ve already blogged about all this: the panels clearly won’t have time to read every paper submitted in any detail at all, so the outcome is likely to be highly subjective. Moreover, HEFCE’s insane policy of awarding the bulk of its research funds only to the very highest grade (4* – “world leading”) means that small variations in judged quality will turn into enormous discrepancies in the level of research funding. The whole thing is madness, but there seems to be no way to inject sanity into the process as the deadline for submissions remorselessly approaches.

Now another wrinkle has appeared on the already furrowed brows of those preparing REF submissions. The system allows departments to select which staff to enter; it’s not necessary for everyone to go in. Indeed, if only the very best researchers are entered then the typical score for the department will be high, so it will appear higher up in the league tables, and since the cash goes primarily to the top dogs this might produce almost as much money as including a few less highly rated researchers.

On the other hand, this is a slightly dangerous strategy because it presupposes that one can predict which researchers and what research will be awarded the highest grade. A department will come a cropper if all its high fliers are deemed by the REF panels to be turkeys.

In Wales there’s something that makes this whole system even more absurd, which is that it’s almost certain that there will be no QR funding at all. Welsh universities are spending millions preparing for the REF despite the fact that they’ll get no money even if they do stunningly well. The incentive in Wales is therefore even stronger than it is in England to submit only the high-fliers, as it’s only the position in the league tables that will count.

The problem with a department adopting a very selective strategy is that it could have a very negative effect on the career development of younger researchers who are not included in their department’s REF submission. There is also the risk that people who manage to convince their Head of School that they are bound to get four stars in the REF may not have the same success with the various grey eminences who make the decision that really matters.

Previous incarnations of the REF (namely the Research Assessment Exercises of 2008 and 2001) did not publish explicit information about exactly how many eligible staff were omitted from the submissions, largely because departments were extremely creative in finding ways of hiding staff they didn’t want to include.

Now, however, it appears there are plans for the Higher Education Statistics Agency (HESA) to publish its own figures on how many staff it thinks are eligible for inclusion in each department. I’m not sure how accurate these figures will be, but they will change the game, in that they will allow compilers of league tables to draw up lists of the departments that prefer playing games to simply allowing the REF panels to judge the quality of their research.

I wonder how many universities are hastily revising their submission plans in the light of this new twist?

Reffing Madness

Posted in Science Politics on June 30, 2012 by telescoper

I’m motivated to make a quick post in order to direct you to a blog post by David Colquhoun that describes the horrendous behaviour of the management at Queen Mary, University of London in response to the Research Excellence Framework. It seems that wholesale sackings are in the pipeline there as a result of a management strategy to improve the institution’s standing in the league tables by “restructuring” some departments.

To call this strategy “flawed” would be the understatement of the year. Idiotic is a far better word. The main problem is that the criteria being applied to retain or dismiss staff bear no obvious relation to those adopted by the REF panels. To make matters worse, Queen Mary has charged two of its own academics with “gross misconduct” for having the temerity to point out the stupidity of its management’s behaviour. Read on here for more details.

With the deadline for REF submissions fast approaching, it’s probably the case that many UK universities are going into panic mode, attempting to boost their REF score by shedding staff perceived to be insufficiently excellent in research and/or  luring  in research “stars” from elsewhere. Draconian though the QMUL approach may seem, I fear it will be repeated across the sector.  Clueless university managers are trying to guess what the REF panels will think of their submissions by staging mock assessments involving external experts. The problem is that nobody knows what the actual REF panels will do, except that if the last Research Assessment Exercise is anything to go by, what they do will be nothing like what they said they would do.

Nowhere is the situation more absurd than here in Wales. The purported aim of the REF is to allocate the so-called “QR” research funding to universities. However, it is an open secret that in Wales there simply isn’t going to be any QR money at all. Leighton Andrews has stripped the Higher Education budget bare in order to pay for his policy of encouraging Welsh students to study in England by paying their fees there.

So here we have to enter the game, do the mock assessments, write our meaningless “impact” cases, and jump through all manner of pointless hoops, with the inevitable result that even if we do well we’ll get absolutely no QR money at the end of it. The only strategy that makes sense for Welsh HEIs such as Cardiff University, where I work, is to submit only those researchers guaranteed to score highly. That way at least we’ll do better in the league tables. It won’t matter how many staff actually get submitted, as the multiplier is zero.

There’s no logical argument why Welsh universities should be in the REF at all, given that there’s no reward at the end. But we’re told we have to by the powers that be. Everyone’s playing games in which nobody knows the rules but in which the stakes are people’s careers. It’s madness.

I can’t put it better than this quote:

These managers worry me. Too many are modest achievers, retired from their own studies, intoxicated with jargon, delusional about corporate status and forever banging the metrics gong. Crucially, they don’t lead by example.

Any reader of this blog who works in a university will recognize the sentiments expressed there. But let’s not blame it all on the managers. They’re doing stupid things because the government has set up a stupid framework. There isn’t a single politician in either England or Wales with the courage to do the right thing, i.e. to admit the error and call the whole thing off.

The Transparent Dishonesty of the Research Excellence Framework

Posted in Open Access, Science Politics on May 30, 2012 by telescoper

Some of my colleagues in the School of Physics & Astronomy recently attended a briefing session about the  forthcoming Research Excellence Framework. This, together with the post I reblogged earlier this morning, suggested that I should re-hash an article I wrote some time ago about the arithmetic of the REF, and how it will clearly not do what it says on the tin.

The first thing is the scale of the task facing members of the panel undertaking the assessment. Every research-active member of staff in every university in the UK is requested to submit four research publications (“outputs”) to the panel, and we are told that each of these will be read by at least two panel members. The Physics panel comprises 20 members.

As a rough guess I’d say that the UK has about 40 Physics departments, and the average number of research-active staff in each is probably about 40. That gives about 1600 individuals for the REF. Actually the number of category A staff submitted to the 2008 RAE was 1,685.57 FTE (Full-Time Equivalent), pretty close to this figure. At 4 outputs per person that gives 6400 papers to be read. We’re told that each will be read by at least two members of the panel, so that gives an overall job size of 12800 paper-readings. There are 20 members of the panel, so that means that between 29th November 2013 (the deadline for submissions) and the announcement of the results in December 2014 each member of the panel will have to have read 640 research papers. That’s an average of about two a day. Every day. Weekends included.
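The arithmetic above is easy to lay out explicitly. A minimal sketch follows; note that the 40 departments and 40 staff per department are the rough guesses quoted in the text, not official figures:

```python
# Back-of-envelope REF reading load, using the figures quoted above:
# ~40 departments x ~40 research-active staff (rough guesses),
# 4 outputs each, each output read by 2 of the 20 panel members.
from datetime import date

departments = 40
staff_per_department = 40
outputs_per_person = 4
readings_per_output = 2
panel_size = 20

staff = departments * staff_per_department            # 1600
papers = staff * outputs_per_person                   # 6400
readings = papers * readings_per_output               # 12800
readings_per_member = readings // panel_size          # 640

# Roughly 13 months between the submission deadline (29 Nov 2013)
# and the announcement of results (taken here as 1 Dec 2014):
days = (date(2014, 12, 1) - date(2013, 11, 29)).days  # 367

print(readings_per_member, round(readings_per_member / days, 1))  # -> 640 1.7
```

Six hundred and forty papers in 367 days is about 1.7 papers per panel member per day, every day, weekends included.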

Now we are told the panel will use their expert judgment to decide which outputs belong to the following categories:

  • 4*  World Leading
  • 3* Internationally Excellent
  • 2* Internationally Recognized
  • 1* Nationally Recognized
  • U   Unclassified

There is an expectation that the so-called QR funding allocated as a result of the 2014 REF will be heavily weighted towards 4*, with perhaps a small allocation to 3* and probably nothing at all for lower grades. In other words, “internationally recognized” research will probably be deemed completely worthless by HEFCE. Will the papers belonging to the category “not really understood by the panel member” suffer the same fate?

The panel members will apparently know enough about every single one of the papers they are going to read in order to place them  into one of the above categories, especially the crucial ones “world-leading” or “internationally excellent”, both of which are obviously defined in a completely transparent and objective manner. Not.

We are told that after forming this judgement based on their expertise the panel members will “check” the citation information for the papers. This will be done using the SCOPUS service provided (no doubt at considerable cost) by Elsevier, which by sheer coincidence also happens to be a purveyor of ridiculously overpriced academic journals. No doubt Elsevier are on a nice little earner peddling meaningless data for the HEFCE bean-counters, but I haven’t any confidence that it will add much value to the assessment process.

There have been high-profile statements to the effect that the REF will take no account of where the relevant “outputs”  are published, including a recent pronouncement by David Willetts. On the face of it, that would suggest that a paper published in the spirit of Open Access in a free archive would not be disadvantaged. However, I very much doubt that will be the case.

I think if you look at the volume of work facing the REF panel members it’s pretty clear that citation statistics will be much more important for the Physics panel than we’ve been led to believe. The panel simply won’t have the time or the breadth of understanding to do an in-depth assessment of every paper, so it will inevitably in many cases be led by bibliometric information. The fact that SCOPUS doesn’t cover the arXiv means that citation information will be entirely missing for papers published only there.

The involvement of  a company like Elsevier in this system just demonstrates the extent to which the machinery of research assessment is driven by the academic publishing industry. The REF is now pretty much the only reason why we have to use traditional journals. It would be better for research, better for public accountability and better economically if we all published our research free of charge in open archives. It wouldn’t be good for academic publishing houses, however, so they’re naturally very keen to keep things just the way they are. The saddest thing is that we’re all so cowed by the system that we see no alternative but to participate in this scam.

Incidentally we were told before the 2008 Research Assessment Exercise that citation data would emphatically not be used;  we were also told afterwards that citation data had been used by the Physics panel. That’s just one of the reasons why I’m very sceptical about the veracity of some of the pronouncements coming out from the REF establishment. Who knows what they actually do behind closed doors?  All the documentation is shredded after the results are published. Who can trust such a system?

To put it bluntly, the apparatus of research assessment has done what most bureaucracies eventually do; it has become  entirely self-serving. It is imposing increasingly  ridiculous administrative burdens on researchers, inventing increasingly  arbitrary assessment criteria and wasting increasing amounts of money on red tape which should actually be going to fund research.

Academic Spring Time

Posted in Open Access on April 11, 2012 by telescoper

Catching up on the last few days’ activity on the Twittersphere I realise that at last the Academic Journal Racket has made it into the mainstream media. The Guardian ran an article on Monday reporting that the Wellcome Trust had weighed in on the side of open access to academic journals, and followed this up with an editorial this morning. Here are the first two paragraphs.

Some very clever people have put up with a very silly system for far too long. That is the upshot of our reporting on scholarly journals this week. Academics not only provide the raw material, but also do the graft of the editing. What’s more, they typically do so without extra pay or even recognition – thanks to blind peer review. The publishers then bill the universities, to the tune of 10% of their block grants, for the privilege of accessing the fruits of their researchers’ toil. The individual academic is denied any hope of reaching an audience beyond university walls, and can even be barred from looking over their own published paper if their university does not stump up for the particular subscription in question.

This extraordinary racket is, at root, about the bewitching power of high-brow brands. Journals that published great research in the past are assumed to publish it still, and – to an extent – this expectation fulfils itself. To climb the career ladder academics must get into big-name publications, where their work will get cited more and be deemed to have more value in the philistine research evaluations which determine the flow of public funds. Thus they keep submitting to these pricey but mightily glorified magazines, and the system rolls on.

These are the points many academics, including myself, have been making for several years, apparently with little success. What seems to be giving the campaign against the racketeers some focus is the boycott of the rapacious publishing giant Elsevier that I blogged about earlier this year, which was kicked off by the mathematician and blogger Tim Gowers; the petition now has over 9300 signatures. That Elsevier is one of the worst of the racketeers is deeply ironic: when Galileo, having been forced to recant by the Inquisition, wrote the Dialogues concerning Two New Sciences, he had them published in non-Catholic Leiden – by Elsevier…

Elsevier has since withdrawn its support for the infamous Research Works Act, but I hope that doesn’t mean the campaign will dissipate. For the sake of the future of science, the whole system needs to be systematically dismantled and rebuilt free of parasites.

Today I see there’s a related piece in the Financial Times (although it’s blocked by a paywall) and I gather there has also been coverage on BBC Radio over the last few days, although I didn’t hear any of it because of my current location.

The fact that this issue  has garnered coverage  from the mainstream media is a very good thing. Academics have put up with being ripped off for far too long, and it’s to our shame that we haven’t done anything about it until now. Now I think the public will be asking how we could possibly have accepted the status quo and sheer embarrassment might force a change.

Another thing that we need to realise is the extent to which the Academic Journal Racket is feeding off the monster that is Research Assessment, specifically the upcoming Research Excellence Framework. The main beneficiaries of such exercises are not the researchers, but  the academic publishers who rake in the profits generated by the mountains of papers submitted to them in the hope that they’ll be judged “internationally leading” (whatever that means).  If the government is serious about Open Access then only papers that are freely available should be accepted by the REF. If that doesn’t shake up the system, nothing will!

Admissions Latest

Posted in Education, Politics on November 28, 2011 by telescoper

Only time for a short post today, so I thought I’d just pass on a link to the latest  Higher Education application  statistics, as reported by the Universities and Colleges Admissions Service (UCAS).

It’s still several weeks before the UCAS deadline closes in January so it’s too early to see exactly what is happening, but the figures do nevertheless make interesting reading.

The total number of applications nationally  is down by 12.9% on last year, but the number of  applications from UK domiciled students has fallen by 15.1%; an increase in applications from non-EU students is responsible for the difference in these figures.

Non-science subjects seem to be suffering the biggest falls in application numbers; physical sciences are doing better than average, but still face a drop of 7% in numbers. Anecdotal evidence I’ve gleaned from chatting to Physics & Astronomy colleagues is that some departments are doing very well, even increasing on last year, while others are significantly down. It is, however, far too early to tell how these numbers will translate into bums on seats in lecture theatres.

A particular concern for us here in Wales is the statistics of applications to Welsh universities. The number of English-domiciled applicants to Welsh universities is down by 17.4%, while the number of Welsh applicants to Welsh universities is down by 15.2%. On the other hand, the number of Welsh applicants to English universities is down by just 5.3%.

The pattern of cross-border applications is particularly important for Welsh Higher Education  because of the Welsh Assembly Government’s policy of subsidizing Welsh-domiciled students wherever they study in the United Kingdom, a policy which is generous to students but which is paid for by large cuts in direct university funding.  The more students take the WAG subsidy out of Wales, the larger will be the cuts in grants to Welsh HEIs.

Moreover, in the past about 40% of the students at Welsh universities have come from England. If the fee income from incoming English students is significantly reduced relative to the subsidy paid to outgoing Welsh students then the consequences for the financial health of Welsh universities are even more dire.

Although it is early days the figures as they stand certainly suggest the possibility that the  number of Welsh students  studying in England will increase both relative to the number staying in Wales and relative to the number of English students coming to study in Wales. Both these factors  will lead to a net transfer of funds from Welsh Higher Education Institutions to their English counterparts.   I think the policy behind this is simply idiotic, but by the time the WAG works this out it may be too late.

Another interesting wrinkle on the WAG’s policy can be found in a piece in last week’s Times Higher. We’re used to the idea that people might relocate to areas where schools or local services are better or cheaper, but consider the incentives on an English family thinking about the cost of sending their offspring to university. The obvious thing for them to do is to relocate to Wales in order to collect the WAG subsidy, which they can then spend sending their little dears to university in England. That will save them tens of thousands of pounds per student, all taken directly from the Welsh Higher Education budget and paid into the coffers of an English university.

There are already dark rumours circulating that the WAG subsidy will turn out to be so expensive that the Higher Education Funding Council for Wales is thinking of cancelling all its research funding. That means that Welsh universities face the prospect of having to take part in the burdensome Research Excellence Framework, in competition with much better funded English and Scottish rivals, but getting precisely no QR funding at the end of it.

And all this is because the Welsh Assembly Government wants to hand a huge chunk of its budget back to England. Is this how devolution is supposed to work? Madness.

Google Citations

Posted in Uncategorized on November 18, 2011 by telescoper

Just time for a quick post this morning to pass on the news that Google Citations is now openly available. I just had a quick look at my own bibliometric data and, as far as I can tell, it’s pretty accurate. As well as total citations, Google Scholar also produces an h-index and something called the i10-index (which is just the number of papers with at least 10 citations). It also gives the corresponding figures for the past 5 years as well as for the entire career of a given researcher.
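Both indices are simple to compute from a list of per-paper citation counts. A sketch, using a made-up citation record and taking the i10-index to be the number of papers with at least 10 citations (Google Scholar’s definition):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# A made-up citation record, purely for illustration:
record = [287, 120, 45, 33, 12, 10, 9, 4, 0]
print(h_index(record), i10_index(record))  # -> 7 6
```

The h-index is found by sorting the counts in decreasing order and taking the last rank at which the count still matches or exceeds the rank.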

I’ve blogged already about my most popular paper citation-wise, which has 287 citations on Google Scholar. That doesn’t exactly make it a world-beater, but I’m still quite pleased with its impact. What I find particularly interesting about that paper is its longevity. It was published in 1991, i.e. 20 years ago, but I recently looked up its citation history on the ADS system.

Curiously, it’s getting more citations now than it did when it was first published. I’ve got quite a few “slow burners” like this, in fact, and many of the citations listed for me in the last 5 years actually stem from papers written much earlier. Unfortunately, although I think this steady rate of citation is some sort of indicator of something or other, this is exactly the wrong sort of paper for the Research Excellence Framework, as it is only papers that are published within the roughly 5-year REF window that are taken into account. It would be more useful for the REF panels if the “5-year” window listed citations only to those papers actually published within the last five years. I wonder how the panel will try to use this limited information in assessing the true quality of  a paper?

I should also say that although this paper is, by a large margin, the nearest I’ve got to the citation hit parade, I don’t think it’s by any means the best paper I’ve ever written.

Another weakness is that Google Scholar doesn’t give a normalized h-index (i.e. one based on citations shared out amongst the authors of multi-author papers).
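Google Scholar doesn’t define such a normalized index, but one common convention (and it is only one of several) is to divide each paper’s citations equally among its authors before computing the h-index. A sketch, with invented (citations, number-of-authors) pairs:

```python
def fractional_h_index(papers):
    """h-index computed after dividing each paper's citations equally
    among its authors; one convention for a 'normalized' h-index."""
    shared = sorted((c / n for c, n in papers), reverse=True)
    h = 0
    for i, c in enumerate(shared, start=1):
        if c >= i:
            h = i
    return h

# (citations, number_of_authors) pairs, made up for illustration:
papers = [(287, 2), (120, 4), (45, 1), (33, 11), (12, 3)]
print(fractional_h_index(papers))  # -> 4
```

A many-author paper contributes far less under this scheme: 33 citations shared among 11 authors counts as 3, which is why the fractional index here is lower than the raw one would be.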

Still, you can’t have everything. Now that this extremely useful tool is available (for free) to all scientists and other denizens of the interwebs, I reiterate my point that the panels involved in assessing research for the Research Excellence Framework should use it rather than the inferior commercial versions, which are much less accurate.

 

Advice for the REF Panels

Posted in Finance, Science Politics on October 30, 2011 by telescoper

I thought I’d post a quick follow-up to last week’s item about the Research Excellence Framework (REF). You will recall that in that post I expressed serious doubts about the ability of the REF panel members to carry out a reliable assessment of the “outputs” being submitted to the exercise, primarily because of the scale of the task in front of them. Each will have to read hundreds of papers, many of them far outside their own area of expertise. In the hope that it’s not too late to influence their approach, I thought I’d offer a few concrete suggestions as to how things might be improved. Most of my comments refer specifically to the Physics panel, but I have a feeling the themes I’ve addressed may apply in other disciplines.

The first area of concern relates to citations, which we are told will be used during the assessment, although we’re not told precisely how this will be done. I’ve spent a few hours over the last few days looking at the accuracy and reliability of various bibliometric databases and have come to the firm conclusion that Google Scholar is by far the best, certainly better than SCOPUS or Web of Knowledge. It’s also completely free. NASA/ADS is also free, and good for astronomy, but probably less complete for the rest of physics. I therefore urge the panel to ditch its commitment to use SCOPUS and adopt Google Scholar instead.

But choosing a sensible database is only part of the solution. Can citations be used sensibly at all for recently published papers? REF submissions must have been published no earlier than 2008 and the deadline is in 2013, so the longest time any paper can have had to garner citations will be five years. I think that’s OK for papers published early in the REF window, but obviously citations for those published in 2012 or 2013 won’t be as numerous.

However, the good thing about Google Scholar (and ADS) is that they include citations from the arXiv as well as from papers already published. Important papers get cited pretty much as soon as they appear on the arXiv, so including these citations will improve the process. That’s another strong argument for using Google Scholar.

The big problem with citation information is that citation rates vary significantly from field to field, so it will be very difficult to use bibliometric data in a formulaic sense. Frankly, though, it’s the only way the panel has to assess papers that lie far from their own expertise. Unless anyone else has a suggestion?

I suspect that what some panel members will do is to look beyond the four publications to guide their assessment. They might, for example, be tempted to look up the H-index of the author if they don’t know the area very well. “I don’t really understand the paper by Professor Poindexter but he has an H-index of 95 so is obviously a good chap and his work is probably therefore world-leading”. That sort of thing.

I think this approach would be very wrong indeed. For a start, it seriously disadvantages early career researchers who haven’t had time to build up a back catalogue of high-impact papers. Secondly, and more fundamentally still, it is contrary to the stated aim of the REF, which is to assess only the research carried out in the assessment period, i.e. 2008 to 2013. The H-index would include papers going back far further than 2008.

But as I pointed out in my previous post, it’s going to be impossible for the panel to perform accurate assessments of all the papers they are given: there will just be far too many and too diverse in content. They will obviously therefore have to do something other than what the rest of the community has been told they will do. It’s a sorry state of affairs that dishonesty is built into the system, but there you go. Given that the panel will be forced to cheat, let me suggest that they at least do so fairly. Better than using the H-index of each individual, use the H-index calculated over the REF period only. That will at least ensure that only research done in the REF period will count towards the REF assessment.
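The suggestion above, an h-index calculated over the REF period only, is straightforward to implement given (year, citations) pairs for each paper: filter by publication year before computing the index. The data below are invented for illustration:

```python
def h_index(citations):
    """Largest h such that at least h values are >= h."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

def ref_period_h_index(papers, start=2008, end=2013):
    """h-index using only papers published inside the REF window."""
    return h_index([c for year, c in papers if start <= year <= end])

# (publication_year, citations) pairs, invented for illustration:
papers = [(1991, 287), (2005, 60), (2008, 15), (2010, 9), (2012, 3)]
print(h_index([c for _, c in papers]), ref_period_h_index(papers))  # -> 4 3
```

The heavily cited 1991 and 2005 papers inflate the career h-index but contribute nothing to the windowed one, which is exactly the point: only research done in the REF period would count.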

Another bone of contention is the assessment of the level of contribution authors have made to each paper, in other words the question of attribution. In astronomy and particle physics, many important papers have very long author lists and may be submitted to the REF by many different authors in different institutions. We are told that what the panel will do is judge whether a given individual has made a “significant” contribution to the paper. If so, that author will be accredited with the score given to the paper. If not, the grade assigned will be the lowest and that author will get no credit at all. Under this scheme one could be an author on a 4* paper but be graded “U”.

This is fair enough, in that it will penalise the “lurkers” who have made a career by attaching their names to papers on which they have made negligible contributions. We know that such people exist. But how will the panel decide what contribution is significant and what isn’t? What is the criterion?

Take the following example. Suppose the Higgs boson is discovered at the LHC during the REF period. Just about every particle physics group in the UK will have authors on the ensuing paper, but the list is likely to be immensely long and include people who performed many different roles. Who decides where to draw the line on “significance”? I really don’t know the answer to this one, but a possibility might be found in the use of the textual commentary that accompanies the submission of a research output. At present we are told that this should be used to explain what the author’s contribution to the paper was, but as far as I’m aware there is no mechanism to stop individuals hyping up their involvement. What I mean is that I don’t think the panel will check for consistency between commentaries submitted by different people for the same institution.

I’d suggest that consortia  should be required to produce a standard form of words for the textual commentary, which will be used by every individual submitting the given paper and which lists all the other individuals in the UK submitting that paper as one of their four outputs. This will require co-authors to come to an agreement about their relative contributions in advance, which will no doubt lead to a lot of argument, but it seems to me the fairest way to do it. If the collaboration does not produce such an agreement then I suggest that paper be graded “U” throughout the exercise. This idea doesn’t answer the question “what does significant mean?”, but will at least put a stop to the worst of the game-playing that plagued the previous Research Assessment Exercise.

Another aspect of this relates to a question I asked several members of the Physics panel for the 2008 Research Assessment Exercise. Suppose Professor A at Oxbridge University and Dr B from The University of Neasden are co-authors on a paper and both choose to submit it as part of the REF return. Is there a mechanism to check that the grade given to the same piece of work is the same for both institutions? I never got a satisfactory answer in advance of the RAE but afterwards it became clear that the answer was “no”. I think that’s indefensible. I’d advise the panel to identify cases where the same paper is submitted by more than one institution and ensure that the grades they give are consistent.
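The consistency check suggested here would be mechanically trivial: group submissions by paper (for instance by DOI) and flag any paper given different grades at different institutions. A sketch, with invented field names and data:

```python
from collections import defaultdict

def inconsistent_grades(submissions):
    """submissions: iterable of (paper_id, institution, grade) tuples.
    Returns the set of paper ids graded differently at different
    institutions."""
    grades_by_paper = defaultdict(set)
    for paper_id, _institution, grade in submissions:
        grades_by_paper[paper_id].add(grade)
    return {p for p, grades in grades_by_paper.items() if len(grades) > 1}

# Invented submissions, purely for illustration:
subs = [
    ("doi:10.1000/xyz", "Oxbridge", "4*"),
    ("doi:10.1000/xyz", "Neasden", "2*"),   # same paper, different grade
    ("doi:10.1000/abc", "Cardiff", "3*"),
]
print(inconsistent_grades(subs))  # -> {'doi:10.1000/xyz'}
```

That such a check apparently wasn’t run in 2008 is, as argued above, indefensible.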

Finally there’s the biggest problem. What on Earth does a grade like “4* (World Leading)” mean in the first place? This is clearly crucial because almost all the QR funding (in England at any rate) will be allocated to this grade. The percentage of outputs placed in this category varied enormously from field to field in the 2008 RAE and there is very strong evidence that the Physics panel judged much more harshly than the others. I don’t know what went on behind closed doors last time but whatever it was, it turned out to be very detrimental to the health of Physics as a discipline and the low fraction of 4* grades certainly did not present a fair reflection of the UK’s international standing in this area.

Ideally the REF panel could look at papers that were awarded 4* grades last time to see how the scoring went. Unfortunately, however, the previous panel shredded all this information, in order, one suspects, to avoid legal challenges. This act, more than any other, has led to deep suspicions amongst the Physics and Astronomy community about how the exercise was run. If I were in a position of influence I would urge the panel not to destroy the evidence. Most of us are mature enough to take disappointments in good grace as long as we trust the system. After all, we’re used to unsuccessful grant applications nowadays.

That’s about twice as much as I was planning to write so I’ll end on that, but if anyone else has concrete suggestions on how to repair the REF, please file them through the comments box. They’ll probably be ignored, but you never know. Some members of the panel might take them on board.

Come off it, REF!

Posted in Science Politics on October 27, 2011 by telescoper

Yesterday we all trooped off to the Millennium Stadium in Cardiff for a Staff Away Day. We didn’t actually get to play on the pitch of course, which wasn’t even there, as it had been removed to reveal a vast expanse of soil. Instead we were installed in the “Dragon Suite” for a discussion about our preparation for the forthcoming Research Excellence Framework.

Obviously I can’t post anything about our internal deliberations, but I’m sure departments up and down the United Kingdom are doing similar things so I thought I’d mention a few things which are already in the public domain and my personal reactions to them. I should also say that the opinions I express below are my own and not necessarily those of anyone else at Cardiff.

The first thing is the scale of the task facing members of the panel undertaking this assessment. Each research-active member of staff is requested to submit four research publications (“outputs”) to the panel, and we are told that each of these will be read by at least two panel members. The panel comprises 20 members.

As a rough guess I’d say that the UK has about 40 Physics departments, and the average number of research-active staff in each is probably about 40. That gives about 1600 individuals for the REF. Actually the number of category A staff submitted to the 2008 RAE was 1,685.57 FTE (Full-Time Equivalent), pretty close to this figure. At 4 outputs per person that gives 6400 papers to be read. We’re told that each will be read by at least two members of the panel, so that gives an overall job size of 12800 paper-readings. There are 20 members of the panel, so that means that between 29th November 2013 (the deadline for submissions) and the announcement of the results in December 2014 each member of the panel will have to have read 640 research papers. That’s an average of about two a day…
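The workload arithmetic above can be sketched in a few lines. The department and staff counts are the rough estimates quoted in the text, not official figures:

```python
# Back-of-the-envelope check of the REF panel's workload,
# using the rough estimates quoted above.
departments = 40          # UK physics departments (estimate)
staff_per_dept = 40       # research-active staff per department (estimate)
outputs_per_person = 4    # outputs submitted per person
readings_per_output = 2   # each output read by at least two panel members
panel_size = 20

staff = departments * staff_per_dept          # cf. 1,685.57 FTE in the 2008 RAE
papers = staff * outputs_per_person
readings = papers * readings_per_output
per_member = readings / panel_size

print(staff, papers, readings, per_member)    # 1600 6400 12800 640.0
```

Spread over the thirteen months or so between the submission deadline and the results, 640 papers per panel member is indeed roughly two per working day.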

Incidentally, as I’ve mentioned before, the Physics REF panel includes representatives from institutions in England, Scotland and Northern Ireland, but not Wales. The decision to exclude representation from Welsh physics departments was a disgrace, in my view.

Now we are told the panel will use their expert judgment to decide which outputs belong to the following categories:

  • 4*  World Leading
  • 3* Internationally Excellent
  • 2* Internationally Recognized
  • 1* Nationally Recognized
  • U   Unclassified

There is an expectation that the so-called QR funding allocated as a result of the 2014 REF will be heavily weighted towards 4*, with perhaps a small allocation to 3* and probably nothing at all for lower grades. “Internationally recognized” research is probably worthless in the view of HEFCE, in other words. Will the papers belonging to the category “Not really understood by the panel member” suffer the same fate?

The panel members will apparently know enough about every single one of the papers they are going to read in order to place them into one of the above categories, especially the crucial ones “world-leading” or “internationally excellent”, both of which are obviously defined in a completely transparent and objective manner. Not.

We are told that after forming this judgement based on their expertise the panel members will “check” the citation information for the papers. This will be done using the SCOPUS service provided (no doubt at considerable cost) by Elsevier, which by sheer coincidence also happens to be a purveyor of ridiculously overpriced academic journals. I’ve just checked the citation information for some of my papers on SCOPUS, and found an alarming number of errors. No doubt Elsevier are on a nice little earner peddling meaningless data for the HEFCE bean-counters, but I haven’t any confidence that it will add much value to the assessment process.

There have been high-profile statements to the effect that the REF will take no account of where the relevant “outputs”  are published, including a recent pronouncement by David Willetts. On the face of it, that would suggest that a paper published in the spirit of Open Access in a free archive would not be disadvantaged. However, I very much doubt that will be the case.

I think if you look at the volume of work facing the REF panel members it’s pretty clear that citation statistics will be much more important for the Physics panel than we’ve been led to believe. The panel simply won’t have the time or the breadth of understanding to do an in-depth assessment of every paper, so will inevitably in many cases be led by bibliometric information. The fact that SCOPUS doesn’t cover the arXiv means that citation information will be entirely missing from papers just published there.

The involvement of a company like Elsevier in this system just demonstrates the extent to which the machinery of research assessment is driven by the academic publishing industry. The REF is now pretty much the only reason why we have to use traditional journals. It would be better for research, better for public accountability and better economically if we all published our research free of charge in open archives. It wouldn’t be good for academic publishing houses, however, so they’re naturally very keen to keep things just the way they are. The saddest thing is that we’re all so cowed by the system that we see no alternative but to participate in this scam.

Incidentally we were told before the 2008 Research Assessment Exercise that citation data would emphatically not be used;  we were also told afterwards that citation data had been used by the Physics panel. That’s just one of the reasons why I’m very sceptical about the veracity of some of the pronouncements coming out from the REF establishment. Who knows what they actually do behind closed doors?  All the documentation is shredded after the results are published. Who can trust such a system?

To put it bluntly, the apparatus of research assessment has done what most bureaucracies eventually do; it has become entirely self-serving. It is imposing increasingly ridiculous administrative burdens on researchers, inventing increasingly arbitrary assessment criteria and wasting increasing amounts of money on red tape which should actually be going to fund research.

And that’s all just about “outputs”. I haven’t even started on “impact”….

Commodification, the Academic Journal Racket and the Digital Commons (via The Disorder Of Things)

Posted in Open Access, Uncategorized on September 15, 2011 by telescoper

Here’s another reasoned rant regarding the rapacity of the research racketeers. I think it makes some really good points.

The video clip is worth watching too, it being very funny.

David, my erstwhile ‘parasitic overlord’ from when I was co-editing Millennium, points me to some posts by Kent Anderson of the Society for Scholarly Publishing, who defends the industry on a number of grounds from Monbiot’s polemic against the journal racket. The comments threads on both pieces are populated by academics who agree with Monbiot and by publishing industry colleagues who agree with Anderson (and who alternate between dismissing and …

via The Disorder Of Things

Science Publishing: What is to be done?

Posted in Science Politics on September 10, 2011 by telescoper

The argument about academic publishing has been bubbling away nicely in the mainstream media and elsewhere in the blogosphere; see my recent post for links to some of the discussion elsewhere.

I’m not going to pretend that there’s a consensus amongst all scientists about this, but everything I’ve read has confirmed my rather hardline view, which is that in my field, astrophysics, academic journals are both unnecessary and unhealthy. I can certainly accept that in days gone by, perhaps up to around 1990, scientific journals provided the only means of disseminating research to the wider world. With the rise of the internet, that is no longer the case. Year after year we have been told that digital technologies would make scientific publishing cheaper. That has not happened. Journal subscriptions have risen faster than inflation for over a decade. Why is this happening? The answer is that we’re being ripped off. What began as a useful service has now become simply a parasite and, like most parasites, it is endangering the health of its host.

The scale of the racket is revealed in an article I came across in Research Fortnight. Before I give you the figures let me explain that the UK Higher Education funding councils, such as HEFCE in England and HEFCW in Wales, award funding in a manner determined by the quality of research going on in each department as judged by various research assessment exercises; this funding is called QR funding. Now listen to this. It is estimated that around 10 per cent of all QR funding in the UK goes into journal subscriptions. There is little enough money in science research these days for us to be paying a tithe of such proportions. This has to stop.

You might ask why such an obviously unsustainable situation carries on. I think there are two answers to this. One is the rise of the machinery of research assessment, which plays into the hands of the publishing industry. For submitted work to count in the Research Assessment Exercise (or its new incarnation, the Research Excellence Framework) it must be published in a refereed journal. Scientists who want to break the mould by publishing their papers some other way will be stamped on by our lords and masters who hold the purse strings. The whole system is invidious.

The second answer is even more discomforting. It is that many scientists actually like the current system. Each paper in a “prestigious” journal is another feather in your cap, another source of pride. It doesn’t matter if nobody reads any of them, one’s published output is a measure of status. For far too many researchers gathering esteem by publishing in academic journals has become an end in itself. The system corrupts and has become corrupted. You can find similar comments in a piece in last week’s Guardian.

So what can be done? Well, I think that physics and astronomy can show the way forward. There is already a rudimentary yet highly effective prototype in place, called the arXiv. In many fields, including astronomy, all new papers are put on the arXiv, and these can be downloaded by anyone for free. Particle physics led the way towards the World Wide Web, an invention that has revolutionised so many things. It’s no coincidence that physicists are also ahead of the game on academic publishing too.

Of course it takes money to run the arXiv and that money is at the moment paid by contributions from universities that use it extensively. You might then argue that means the arXiv is just another journal, just one where the subscription cost is less obvious.

Perhaps that’s true, but then just take a look at the figures. The total running costs of the arXiv amount to just $400,000 per annum. That’s not just for astronomy but for a whole range of other branches of physics too, and not only new papers but a back catalogue going back at least 15 years.

There are about 40 UK universities doing physics research. If UK Physics had to sustain the costs of the arXiv on its own the cost would be an average of just $10,000 per department per annum. Spread the cost around the rest of the world, especially the USA, and the cost would be peanuts. Even $10,000 is less than most single physics journal subscriptions; indeed it’s not even 10 per cent of my department’s annual budget for physics journals!
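The arithmetic here is trivial but worth making explicit, using the figures quoted above:

```python
# The arXiv's running costs spread across UK physics departments,
# using the figures quoted in the text.
annual_cost_usd = 400_000   # total arXiv running costs per annum
uk_physics_depts = 40       # UK universities doing physics research

cost_per_dept = annual_cost_usd / uk_physics_depts
print(cost_per_dept)        # 10000.0 — less than a single journal subscription
```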

Whenever I’ve mentioned the arXiv to publishers they’ve generally dismissed it, arguing that it doesn’t have a “sustainable business plan”. Maybe not. But it is not the job of scientific researchers to support pointless commercial enterprises. We do the research. We write the papers. We assess their quality. Now we can publish them ourselves. Our research is funded by the taxpayer, so it should not be used to line the pockets of third parties.

I’m not saying the arXiv is perfect but, unlike traditional journals, it is, in my field anyway, indispensable. A little more investment, adding a comment facility or a rating system along the lines of, e.g., reddit, and it would be better than anything we get from academic publishers at a fraction of the cost. Reddit, in case you don’t know the site, allows readers to vote articles up or down according to their reaction to them. Restrict voting to registered users only and you have the core of a peer review system that involves an entire community rather than relying on the whim of one or two referees. Citations provide another measure in the longer term. Nowadays astronomical papers attract citations on the arXiv even before they appear in journals, but it still takes time for new research to incorporate older ideas.
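The core of the community rating scheme imagined above fits in a few lines. This is a minimal sketch only; the class and method names are hypothetical illustrations, not any real arXiv or reddit API:

```python
# Minimal sketch of community rating restricted to registered users:
# one up/down vote per user per paper, re-voting simply overwrites.
class PaperRatings:
    def __init__(self, registered_users):
        self.registered = set(registered_users)
        self.votes = {}  # paper_id -> {user: +1 or -1}

    def vote(self, user, paper_id, up=True):
        if user not in self.registered:
            raise PermissionError("only registered users may vote")
        self.votes.setdefault(paper_id, {})[user] = 1 if up else -1

    def score(self, paper_id):
        return sum(self.votes.get(paper_id, {}).values())

ratings = PaperRatings({"alice", "bob", "carol"})
ratings.vote("alice", "arXiv:1109.0001", up=True)
ratings.vote("bob", "arXiv:1109.0001", up=True)
ratings.vote("carol", "arXiv:1109.0001", up=False)
print(ratings.score("arXiv:1109.0001"))  # 1
```

Storing one vote per registered user, rather than a raw counter, is what makes this a crude form of peer review rather than a popularity contest: each vote is attributable, and duplicate voting is impossible by construction.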

Apparently, Research Libraries UK, a network of libraries of the Russell Group universities and national libraries, has already warned journal publishers Wiley and Elsevier that they will not renew subscriptions at current prices. If it were up to me I wouldn’t bother with a warning…