Archive for Research Excellence Framework

Notes on Eurovision

Posted in Biographical, Music, Politics on May 15, 2022 by telescoper

To nobody’s surprise Ukraine won last night’s Eurovision song contest after collecting a huge dollop of the televotes. After the jury votes, the United Kingdom’s entry was in the lead which surprised me because I thought it wasn’t much of a song at all. I’ve never been very good at picking the tunes that do well though. I didn’t like Ukraine’s entry – Stefania by the Kalush Orchestra – much either, but obviously there are special circumstances this year and I’m not at all sorry that they won.

In fact I thought the best song – and the best singer – by a long way was the Lithuanian entry sung by Monika Liu, who held the stage brilliantly by standing there and singing, without any fancy staging. She finished a disappointing 14th.

Monika Liu

Other entries I enjoyed were: Spain, catchy dance number with excellent choreography that finished 3rd; Moldova, an energetic performance full of humour (7th); and Norway, whose entry Give that Wolf a Banana was enjoyably deranged (10th). The less said about the other entries the better. I’m still as baffled by how Sam Ryder’s entry for the UK, Space Man, did so well in the jury votes as I am that Lithuania did so badly there, but there you go. What do I know?

I’ll state without comment that the Ukrainian jury gave a maximum douze points to the United Kingdom, but in return the UK jury gave Ukraine nil points.

Anyway, three things struck me as I sipped my wine and watched the show:

  1. Ironically the Opera on the radio last night was Wagner’s Die Meistersinger von Nürnberg, which is about a sixteenth-century song contest that resembles the Eurovision version only in the length of time it goes on for. Perhaps someone should write a modern music drama called Die Meistersinger von Eurovision?
  2. I think the Research Excellence Framework would be much more fun if it were done like the Eurovision Song Contest. Each University regardless of size could be given the same distribution of scores to allocate to the others (but not itself). I can see interesting patterns emerging during that!
  3. When I was formally presented with my DPhil in the summer of 1989, the graduation ceremony took place on the same stage (at the Brighton Dome) on which Abba won the Eurovision Song Contest in 1974 with their song Waterloo.

REF 2021 Results and Ranking

Posted in Biographical, Cardiff on May 12, 2022 by telescoper

The results of the UK’s Research Excellence Framework (REF) 2021 have now been published. You can find them all here at the REF’s own website because they are presented there in a much more informative way than the half-baked “rankings” favoured by, e.g., The Times Higher.

To give some background: the overall REF score for a Unit of Assessment (UoA; usually a Department or School) is obtained by combining three components: outputs (the quality of research papers); impact (referring to the impact beyond academia); and environment (which measures such things as grant income, numbers of PhD students and general infrastructure). Scores are assigned to these categories, e.g. for submitted outputs, on a scale of 4* (world-leading), 3* (internationally excellent), 2* (internationally recognized), 1* (nationally recognized) and unclassified. Similar star ratings are applied to impact and environment. The three components are weighted at 60%, 25% and 15% respectively in the current incarnation of the REF.

You can find further discussion of the REF submission rules, especially concerning changes with respect to 2014, here.

The way the star ratings are often reported is via a Grade Point Average reflecting the percentage of research in each band. A hypothetical UoA that scored 100% in the top category would have a GPA of 4.0, for example. One that had 50% 4* and 50% 3* would be 3.5, and so on.
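In code, the calculation just described is only a few lines. Here is a minimal sketch (the function names are mine, and the profiles are the hypothetical ones from the text; the weights are the 60/25/15 split mentioned above):

```python
def gpa(profile):
    """Grade Point Average of a star-rating profile.

    `profile` maps star level (4, 3, 2, 1; 0 for unclassified) to the
    percentage of research rated at that level.
    """
    return sum(stars * pct for stars, pct in profile.items()) / 100.0


def overall_profile(outputs, impact, environment, weights=(0.60, 0.25, 0.15)):
    """Combine the three sub-profiles using the REF 2021 weights."""
    combined = {}
    for w, sub in zip(weights, (outputs, impact, environment)):
        for stars, pct in sub.items():
            combined[stars] = combined.get(stars, 0.0) + w * pct
    return combined


# The hypothetical examples from the text:
print(gpa({4: 100}))        # 4.0
print(gpa({4: 50, 3: 50}))  # 3.5
```

A unit rated 100% 4* on outputs, 100% 3* on impact and 100% 2* on environment would come out with an overall GPA of 3.45, which shows how quickly the weighting pulls even a strong outputs score down.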

In the 2014 REF institutions were allowed to be selective in the number of staff submitted so the GPA wasn’t really a very appropriate measure: some institutions chose to submit only their very best research in order to get a high GPA. The funding allocated as a result of REF turned out to be highly weighted towards 4* so this was a sensible strategy for them, but it made the simple GPA-based rankings even more meaningless than usual. That didn’t stop e.g. The Times Higher making such rankings though.

This time the rules on selection are stricter so the GPA is arguably more relevant, though many institutions have achieved selectivity anyway by moving certain staff onto teaching-only contracts. Staff on such contracts do not have to be submitted. I note that the main REF website does not use the GPA at all but instead gives profiles like this:

I show the example of Sussex because of my bad memories of the last REF (the 2014 exercise). I had moved to Sussex in 2013, at which point preparations were not well advanced, and although everyone concerned worked very hard to put together the best submission for Physics & Astronomy, we had to face the problem that our staff numbers had grown significantly in 2013 in response to an increase in student numbers. While new staff could bring publications with them, they couldn’t bring impact or environment; the outputs scored well but the latter two categories didn’t, so the Department of Physics & Astronomy did poorly in the ensuing rankings.

It must be said, however, that the primary purpose of the REF (allegedly) is to allocate blocks of funding (the so-called QR funding) to support research in the UoA concerned, and while the GPA at Sussex was disappointing, the fact that the money depends on the number of staff submitted meant that we got a substantial increase in QR dosh. Note further that the formula for allocating funds to 4*, 3*, etc. is not even specified in advance of the exercise: it is likely to be highly concentrated on research graded 4*, and the funding formula will probably differ between England, Wales, Scotland and Northern Ireland. A ranking in terms of money earned is likely to look rather different from one based on GPA.

Another, even more fundamental, problem with the GPA is that the scores are so close together that the differences are of doubtful significance. In the Physics UoA, for example, the gap between the top GPA (Sheffield) and 5th place (Bristol) is just 0.05 (3.65 versus 3.60). I see also that Cardiff is ranked equal 18th (with Imperial College) on a GPA of 3.45.

I say these things just to illustrate how much more subtle the criteria for success are than a simple GPA. It’s even hard to tell, on any objective basis, whom to congratulate and whom to commiserate with.

Anyway, back to Sussex: I see that Physics & Astronomy has done far better on environment and impact than last time round, and the outputs (95% of which are either 4* or 3*) are comparable to last time (96%), so by those measures they have done well, although this might not be reflected in a GPA-based ranking. Sussex is 26th in the rankings with a GPA of 3.35, if you’re interested, which is better than last time, though they will probably be disappointed by the presence of 2* elements in their profile.

Indeed, looking through the Physics list I can’t see any UoA that has a lower GPA this time than last time. The pot of money to be allocated for QR funding is fixed so if every UoA does better that doesn’t mean every UoA gets more money; some institutions will no doubt find that their improved GPA is accompanied by a cut in QR funding.
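A toy calculation (with entirely invented numbers) shows how a fixed pot makes this zero-sum:

```python
# Invented numbers: two units, a fixed pot, funding shared in
# proportion to a quality-weighted volume (GPA x staff FTE, say).
pot = 100.0
volume = {
    "Unit A": {"2014": 3.0 * 10, "2021": 3.4 * 10},  # GPA up from 3.0 to 3.4
    "Unit B": {"2014": 3.0 * 10, "2021": 3.7 * 10},  # GPA up from 3.0 to 3.7
}
for year in ("2014", "2021"):
    total = sum(v[year] for v in volume.values())
    for unit, v in volume.items():
        print(year, unit, round(pot * v[year] / total, 1))
# Unit A's GPA improves, yet its share of the pot falls (from 50.0 to
# about 47.9) because Unit B improved by more.
```

Both units do better on GPA, but only one gets more money.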

I’ll end by re-iterating that, having moved to Ireland in 2017, I’m very glad to be out of the path of the bureaucratic juggernaut that is the REF. In its first incarnations (as the Research Assessment Exercise) it did fulfil a useful purpose and did, I believe, improve the quality of UK research. Since then, however, it has become an industry that is largely self-serving. I quote from an article in the Times Higher itself:

The allocation of QR funding could be done in a much simpler and fairer way but the REF is now such a huge edifice it will resist being replaced by something smaller. No doubt before long the staff who spent so much time preparing for REF 2021 will start work on the next exercise. And so it goes on.

The changes in ranking that now occur from exercise to exercise are generally small in magnitude and in number. In other words, huge effort and cost are being invested to discover less and less information.

P.S. For completeness I should say that while I am glad we don’t have an equivalent of the REF here in Ireland, we don’t have an equivalent of the QR funding either. The latter is a serious problem for the sustainability of research in third-level institutions, and it is not addressed at all in the recent proposals for reform.

The REF goes on

Posted in Biographical, Cardiff, Maynooth on March 27, 2021 by telescoper

A few communications with former colleagues from the United Kingdom last week reminded me that, despite the Covid-19 pandemic, the deadline for submissions to the 2021 Research Excellence Framework is next week. It seems very strange to me to push ahead with this despite the Coronavirus disruption, but it’s yet another sign that academics have to serve the bureaucrats rather than the other way round.

I know quite a few people at quite a few institutions that are completely exhausted by the workload required to deal with the enormous exercise in paperwork that is intended to assess the quality and impact of research at UK universities.

With apologies for adding to the stack of memes based on recent events in the Suez Canal, it made me think of this:

One of the major plusses of being in Ireland is that there is no REF, so I’m able to avoid the enormous workload and stress generated by this exercise in bean-counting. That’s good because there are more than enough things on my plate right now, and more are being added every day.

My memories of the last REF in 2014 when I was Head of School at Sussex are quite painful, as it went badly for us then. I hope that the long-term investments we made then will pay off, though, and I hope things turn out better for Sussex this time especially for the Department of Physics & Astronomy for which the impact and environment components of the assessment dragged the overall score down.

Not being involved personally in the REF this time round I haven’t really paid much attention to the changes that have been adopted since 2014. One I knew about is that the rules make it harder for institutions to leave staff out of their REF return. Some universities played the system in 2014 by being very selective about whom they put in. Only staff with papers considered likely to be rated top-notch were submitted.

Having a quick glance at the documents I see two other significant differences.

One is that in 2014, with very few exceptions, all staff had to submit four research outputs (i.e. papers) to be graded. In 2021 the system is more flexible: the total number of outputs must equal 2.5 times the summed FTE (full-time equivalent) of the unit’s submitted staff, with no individual submitting more than 5 and none fewer than 1 (except in special cases related to Covid-19). Overall, then, there will be fewer outputs than before, the multiplier of FTE being 2.5 (2021) instead of 4 (2014). There will still be a lot of papers, of course, not least because many Departments have grown since 2014, so the panels will have a great deal of reading to do. If that’s what they do with the papers. They’ll probably just look up citations…
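As a sketch, the output-counting rule paraphrased above could be checked like this (the function, the data layout and the rounding convention are my assumptions, not the official ones, and the Covid-19 exceptions are ignored):

```python
def valid_submission(fte, outputs):
    """Rough check of the REF 2021 output rule: total outputs equal
    2.5 x the summed FTE of submitted staff (rounded here, as an
    assumption), with each person submitting between 1 and 5.

    `fte` and `outputs` map staff names to FTE and output counts.
    """
    required_total = round(2.5 * sum(fte.values()))
    within_bounds = all(1 <= n <= 5 for n in outputs.values())
    return sum(outputs.values()) == required_total and within_bounds


# Four full-time staff must return 10 outputs between them:
print(valid_submission({"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0},
                       {"a": 3, "b": 3, "c": 2, "d": 2}))  # True
```

The flexibility is in how the fixed total is distributed among the staff, not in the total itself.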

The other difference relates to staff who have left an institution during the census period. In 2014 the institution to which a researcher moved got all the credit for the publications, while the institution they left got nothing. In 2021, institutions “may return the outputs of staff previously employed as eligible where the output was first made publicly available during the period of eligible employment, within the set number of outputs required.” I suppose this is to prevent the departure of a staff member causing too much damage to the institution they left and also to credit the institution where the work was done rather than specifically the individual who did it.

Thinking about the REF, an amusing thought occurred to me about Research Assessment. My idea was to set up a sort of anti-REF (perhaps the Research Inferiority Framework) based not on the best outputs produced by an institution’s researchers, but on the worst. The institutions producing the highest number of inferior papers could receive financial penalties and get relegated in the league tables for encouraging staff to write too many papers that nobody ever reads or that are just plain wrong. My guess is that papers published in Nature might figure even more prominently in this…

Anyway, let me just take this opportunity to wish former colleagues at Cardiff and Sussex all the best for their REF submission on Wednesday 31st March. I hope it turns out well.

Out of the REF

Posted in Biographical, Cardiff, Maynooth on November 25, 2020 by telescoper

I was talking over Zoom with some former colleagues from the United Kingdom last week, and was surprised to learn that, despite the Covid-19 pandemic, the 2021 Research Excellence Framework is ploughing ahead next year, only slightly delayed. There’s no stopping bureaucratic juggernauts once they get going…

One of the major plusses of being in Ireland is that, outside the UK academic system, there is no REF. One can avoid the enormous workload and stress generated by this exercise in bean-counting. My memories of the last REF in 2014, when I was Head of School at Sussex, are quite painful, as it went badly for us then. I hope that the long-term investments we made then will pay off, though, and I hope things turn out better for Sussex this time, especially for the Department of Physics & Astronomy, for which the impact and environment components of the assessment dragged the overall score down.

The census period for the new REF is 1st August 2013 to 31st July 2020. Not being involved personally in the REF this time round I haven’t really paid much attention to the changes that have been adopted since 2014. One I knew about is that the rules make it harder for institutions to leave staff out of their REF return. Some universities played the system in 2014 by being very selective about whom they put in. Only staff with papers considered likely to be rated top-notch were submitted.

Having a quick glance at the documents I see two other significant differences.

One is that in 2014, with very few exceptions, all staff had to submit four research outputs (i.e. papers) to be graded. In 2021 the system is more flexible: the total number of outputs must equal 2.5 times the summed FTE (full-time equivalent) of the unit’s submitted staff, with no individual submitting more than 5 and none fewer than 1 (except in special cases related to Covid-19). Overall, then, there will be fewer outputs than before, the multiplier of FTE being 2.5 (2021) instead of 4 (2014). There will still be a lot, of course, so the panels will have a great deal of reading to do. If that’s what they do with the papers. They’ll probably just look up citations…

The other difference relates to staff who have left an institution during the census period. In 2014 the institution to which a researcher moved got all the credit for the publications, while the institution they left got nothing. In 2021, institutions “may return the outputs of staff previously employed as eligible where the output was first made publicly available during the period of eligible employment, within the set number of outputs required.” I suppose this is to prevent the departure of a staff member causing too much damage to the institution they left.

I was wondering about this last point when chatting with friends the other day. I moved institutions twice during the relevant census period, from Sussex to Cardiff and then from Cardiff to Maynooth. In principle, therefore, both of my former employers could submit outputs I published while I was there to the 2021 REF. I only published a dozen or so papers while I was at Sussex – the impact of being Head of School on my research productivity was considerable – and none of them are particularly highly cited, so I don’t think that Sussex will want to submit any of them, but they could if they wanted to. They don’t have to ask my permission!

I doubt if Cardiff will be worried about my papers. Among other things they have a stack of gravitational wave papers that should all be 4*.

Anyway, thinking about the REF, an amusing thought occurred to me about Research Assessment. My idea was to set up a sort of anti-REF (perhaps the Research Inferiority Framework) based not on the best outputs produced by an institution’s researchers but on the worst. The institutions producing the highest number of inferior papers could receive financial penalties and get relegated in the league tables for encouraging staff to write too many papers that nobody ever reads or that are just plain wrong. My guess is that papers published in Nature might figure even more prominently in this…

The Anomaly of Research England

Posted in Politics, Science Politics on August 16, 2017 by telescoper

The other day I was surprised to see this tweet announcing the impending formation of a new council under the umbrella of the new organisation UK Research & Innovation (UKRI):

These changes are consequences of the Higher Education and Research Act (2017) which was passed at the end of the last Parliament before the Prime Minister decided to reduce the Government’s majority by calling a General Election.

It seems to me that it’s very strange indeed to have a new council called Research England sitting inside an organisation that purports to be a UK-wide outfit without having a corresponding Research Wales, Research Scotland and Research Northern Ireland. The seven existing research councils which will henceforth sit alongside Research England within UKRI are all UK-wide.

This anomaly stems from the fact that Higher Education policy is ostensibly a devolved matter, meaning that England, Wales, Scotland and Northern Ireland each have separate bodies to oversee their universities. Included in the functions of these bodies is the allocation of the so-called QR funding, which is made on the basis of the Research Excellence Framework. This used to be administered by the Higher Education Funding Council for England (HEFCE), but each devolved council distributed its own funds in its own way. The new Higher Education and Research Act, however, abolishes HEFCE and transfers some of its functions to an organisation called the Office for Students, but not those connected with research. Hence the creation of the new ‘Research England’. This will not only distribute QR funding among English universities but also administer a number of interdisciplinary research programmes.

The dual support system of government funding consists of block grants of QR funding allocated as above, alongside grants targeted at specific projects awarded by the Research Councils (such as the Science and Technology Facilities Council, which is responsible for astronomy, particle physics and nuclear physics research). There is nervousness in England that the new structure will put both elements of the dual support system inside the same organisation, but my greatest concern is that by excluding Wales, Scotland and Northern Ireland, English universities will be given an unfair advantage when it comes to interdisciplinary research. Surely there should be representation within UKRI for Wales, Scotland and Northern Ireland too?

Incidentally, the Science and Technology Facilities Council (STFC) has started the process of recruiting a new Executive Chair. If you’re interested in this position you can find the advertisement here. Ominously, the only thing mentioned under ‘Skills Required’ is ‘Change Management’.

Stern Response

Posted in Science Politics on July 28, 2016 by telescoper

The results of the Stern Review of the process for assessing university research and allocating public funding have been published today. This is intended to inform the way the next Research Excellence Framework (REF) will be run, probably in 2020, so it’s important for all researchers in UK universities.

Here are the main recommendations, together with brief comments from me (in italics):

  1. All research active staff should be returned in the REF. Good in principle, but what is to stop institutions moving large numbers of staff onto teaching-only contracts (which is what happened in New Zealand when such a move was made)?
  2. Outputs should be submitted at Unit of Assessment level with a set average number per FTE but with flexibility for some faculty members to submit more and others less than the average. Outputs are countable and therefore “fewer” rather than “less”. Other than that, having some flexibility seems fair to me as long as it’s not easy to game the system. Looking in more detail at the report, it suggests that some could submit up to six and others potentially none, with an average of perhaps two across the UoA. I’m not sure precise numbers make sense, but the idea seems reasonable.
  3. Outputs should not be portable. Presumably this doesn’t mean that only huge books can be submitted, but that outputs do not transfer when staff transfer. I don’t think this is workable, but that what should happen is that credit for research should be shared between institutions when a researcher moves from one to another.
  4. Panels should continue to assess on the basis of peer review. However, metrics should be provided to support panel members in their assessment, and panels should be transparent about their use. Good. Metrics only tell part of the story.
  5. Institutions should be given more flexibility to showcase their interdisciplinary and collaborative impacts by submitting ‘institutional’ level impact case studies, part of a new institutional level assessment. It’s a good idea to promote interdisciplinarity, but it’s not easy to make it happen…
  6. Impact should be based on research of demonstrable quality. However, case studies could be linked to a research activity and a body of work as well as to a broad range of research outputs. This would be a good move. The existing rules for Impact seem unnecessarily muddled.
  7. Guidance on the REF should make it clear that impact case studies should not be narrowly interpreted, need not solely focus on socio-economic impacts but should also include impact on government policy, on public engagement and understanding, on cultural life, on academic impacts outside the field, and impacts on teaching. Also good.
  8. A new, institutional level Environment assessment should include an account of the institution’s future research environment strategy, a statement of how it supports high quality research and research-related activities, including its support for interdisciplinary and cross-institutional initiatives and impact. It should form part of the institutional assessment and should be assessed by a specialist, cross-disciplinary panel. Seems like a reasonable idea, but a “specialist, cross-disciplinary” panel might be hard to assemble…
  9. That individual Unit of Assessment environment statements are condensed, made complementary to the institutional level environment statement and include those key metrics on research intensity specific to the Unit of Assessment. Seems like a reasonable idea.
  10. Where possible, REF data and metrics should be open, standardised and combinable with other research funders’ data collection processes in order to streamline data collection requirements and reduce the cost of compiling and submitting information. Reasonable, but a bit vague.
  11. That Government, and UKRI, could make more strategic and imaginative use of REF, to better understand the health of the UK research base, our research resources and areas of high potential for future development, and to build the case for strong investment in research in the UK. This sounds like it means more political interference in the allocation of research funding…
  12. Government should ensure that there is no increased administrative burden to Higher Education Institutions from interactions between the TEF and REF, and that they together strengthen the vital relationship between teaching and research in HEIs. I believe that when I see it.

Any further responses (stern or otherwise) are welcome through the comments box!

 

Lognormality Revisited (Again)

Posted in Biographical, Science Politics, The Universe and Stuff on May 10, 2016 by telescoper

Today provided me with a (sadly rare) opportunity to join in our weekly Cosmology Journal Club at the University of Sussex. I don’t often get to go because of meetings and other commitments. Anyway, one of the papers we looked at (by Clerkin et al.) was entitled Testing the Lognormality of the Galaxy Distribution and weak lensing convergence distributions from Dark Energy Survey maps. This provides yet more examples of the unreasonable effectiveness of the lognormal distribution in cosmology. Here’s one of the diagrams, just to illustrate the point:

The points here are from MICE simulations. Not simulations of mice, of course, but simulations of MICE (Marenostrum Institut de Ciencies de l’Espai). Note how well the curves from a simple lognormal model fit the calculations that need a supercomputer to perform them!

The lognormal model used in the paper is basically the same as the one I developed in 1990 with Bernard Jones in what has turned out to be my most-cited paper. In fact the whole project was conceived, work done, written up and submitted in the space of a couple of months during a lovely visit to the fine city of Copenhagen. I’ve never been very good at grabbing citations – I’m more likely to fall off bandwagons rather than jump onto them – but this little paper seems to keep getting citations. It hasn’t got that many by the standards of some papers, but it’s carried on being referred to for over twenty-five years, which I’m quite proud of; you can see the citations-per-year statistics even seem to have increased recently. The model we proposed turned out to be extremely useful in a range of situations, which I suppose accounts for the citation longevity:

Citations die away for most papers, but this one is actually attracting more interest as time goes on! I don’t think this is my best paper, but it’s definitely the one I had most fun working on. I remember we had the idea of doing something with lognormal distributions over coffee one day, and just a few weeks later the paper was finished. In some ways it’s the most simple-minded paper I’ve ever written – and that’s up against some pretty stiff competition – but there you go.


The lognormal seemed an interesting idea to explore because it applies to non-linear processes in much the same way as the normal distribution does to linear ones. What I mean is that if you have a quantity Y which is the sum of n independent effects, Y = X1 + X2 + … + Xn, then the distribution of Y tends to be normal by virtue of the Central Limit Theorem, regardless of the distribution of the Xi. If, however, the process is multiplicative, so that Y = X1 × X2 × … × Xn, then log Y = log X1 + log X2 + … + log Xn, and the Central Limit Theorem tends to make log Y normal; a variable whose logarithm is normally distributed is, by definition, lognormal.
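A quick simulation makes the point concrete (the choice of factor distribution and the sample sizes here are arbitrary, my own for illustration): multiply together many independent positive factors and the logarithm of the product looks normal.

```python
import math
import random

random.seed(42)

# Y is a product of many independent positive factors, so log Y is a
# sum of independent terms and the Central Limit Theorem applies.
n_factors, n_samples = 50, 10_000
log_y = []
for _ in range(n_samples):
    y = 1.0
    for _ in range(n_factors):
        y *= random.uniform(0.5, 1.5)
    log_y.append(math.log(y))

mean = sum(log_y) / n_samples
var = sum((x - mean) ** 2 for x in log_y) / n_samples
skew = sum((x - mean) ** 3 for x in log_y) / (n_samples * var ** 1.5)
print(f"log Y: mean = {mean:.2f}, skewness = {skew:.2f}")
# A sample skewness near zero is consistent with log Y being
# approximately normal, i.e. Y approximately lognormal.
```

Note that the mean of log Y comes out negative even though each factor averages 1: multiplicative noise drags the typical product down, which is itself a characteristically lognormal effect.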

The lognormal is a good distribution for things produced by multiplicative processes, such as hierarchical fragmentation or coagulation processes: the distribution of sizes of the pebbles on Brighton beach is quite a good example. It also crops up quite often in the theory of turbulence.

I’ll mention one other thing about this distribution, just because it’s fun. The lognormal distribution is an example of a distribution that’s not completely determined by knowledge of its moments. Most people assume that if you know all the moments of a distribution then they must specify it uniquely, but it ain’t necessarily so.
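The classical counterexample (usually attributed to Heyde; I sketch it here from memory, so treat the details accordingly) in fact uses the lognormal itself:

```latex
% Standard lognormal density:
f(x) = \frac{1}{x\sqrt{2\pi}}\,\exp\!\left(-\frac{(\ln x)^2}{2}\right),
\qquad x > 0.

% Perturbed family, a valid probability density for every |\varepsilon| \le 1:
f_\varepsilon(x) = f(x)\left[\,1 + \varepsilon \sin(2\pi \ln x)\,\right].

% Every member has exactly the same moments as the lognormal, because
% for each integer n \ge 0 the substitution u = \ln x - n leaves an
% odd integrand:
\int_0^\infty x^n f(x)\,\sin(2\pi \ln x)\,dx = 0 .
```

So an entire one-parameter family of visibly different distributions shares every single moment with the lognormal.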

If you’re wondering why I mentioned citations, it’s because they’re playing an increasing role in attempts to measure the quality of research done in UK universities. Citations definitely contain some information, but interpreting them isn’t at all straightforward. Different disciplines have hugely different citation rates, for one thing. Should one count self-citations? And how do you apportion citations to multi-author papers? Suppose a paper with a thousand citations has 25 authors. Does each of them get the thousand citations, or should each get 1000/25? Or, to put it another way, how does a single-author paper with 100 citations compare to a 50-author paper with 101?
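The apportionment question can be made concrete with a toy calculation (the equal-split convention used here is just one possibility among several, and the numbers are the ones posed above):

```python
def per_author_credit(citations, n_authors):
    """Apportion a paper's citations equally among its authors.
    This is one possible convention, not an established standard."""
    return citations / n_authors


# The comparison posed in the text:
print(per_author_credit(100, 1))   # 100.0 per author
print(per_author_credit(101, 50))  # 2.02 per author
```

Under equal splitting the single-author paper wins by a factor of about fifty; under whole-count crediting the two papers are essentially tied. The choice of convention dominates the answer.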

Or perhaps a better metric would be the logarithm of the number of citations?

Research Funding – A Modest Proposal

Posted in Education, Science Politics on September 9, 2015 by telescoper

This morning, the Minister for Universities, Jo Johnson, made a speech in which, among other things, he called for research funding to be made simpler. Under the current “dual funding” system, university researchers receive money through two main routes: one is the Research Excellence Framework (REF) which leads to so-called “QR” funding allocations made via the Higher Education Funding Council for England (HEFCE); and the other is through research grants which have to be applied for competitively from various sources, including the Seven Research Councils.

Part of the argument why this system needs to be simplified is the enormous expense and administrative burden of the Research Excellence Framework. Many people have commented to me that although they hate the REF and accept that it’s ridiculously expensive and time-consuming, they didn’t see any alternative. I’ve been thinking about it and thought I’d make a suggestion. Feel free to shoot it down in flames through the box at the end, but I’ll begin with a short introduction.

Those of you old enough to remember will know that before 1992 (when the old ‘polytechnics’ were given the go-ahead to call themselves ‘universities’) the University Funding Council – the forerunner of HEFCE – allocated research funding to universities by a simple formula related to the number of undergraduate students. When the number of universities suddenly increased this was no longer sustainable, so the funding agency began a series of Research Assessment Exercises to assign research funds (now called QR funding) based on the outcome. This prevented research money going to departments that weren’t active in research, most (but not all) of which were in the ex-Polytechnics. Over the years the apparatus of research assessment has become larger, more burdensome, and incomprehensibly obsessed with the short-term impact of research. Like most bureaucracies it has lost sight of its original purpose and has now become something that exists purely for its own sake.

It is especially indefensible at this time of deep cuts to university core funding that we are being forced to waste an increasingly large fraction of our decreasing budgets on staff-time that accomplishes nothing useful except pandering to the bean counters.

My proposal is to abandon the latest manifestation of research assessment mania, i.e. the REF, and return to a simple formula, much like the pre-1992 system, except that QR funding should be based on research student (i.e. PhD student) rather than undergraduate numbers. There’s an obvious risk of game-playing, and this idea would only stand a chance of working at all if the formula involved the number of successfully completed research degrees over a given period.

I can also see an argument that four-year undergraduate students (e.g. MPhys or MSci students) should also be included in the formula, as most of these degrees involve a project that requires a strong research environment.
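The proposed formula really is simple enough to state in a few lines of code (the department names and numbers below are entirely hypothetical, and sharing the pot pro rata is my reading of the proposal):

```python
def qr_allocation(completions, pot):
    """Share a fixed QR pot in proportion to successfully completed
    research degrees over the assessment period."""
    total = sum(completions.values())
    return {dept: pot * n / total for dept, n in completions.items()}


# Two hypothetical departments sharing a pot of 100 units:
print(qr_allocation({"Dept A": 30, "Dept B": 10}, 100.0))
```

Compare the one-line formula with the multi-year, multi-panel apparatus of the REF: that contrast is the whole point of the proposal.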

Among the advantages of this scheme are that it’s simple, easy to administer, would not spread QR funding to non-research departments, and would not waste hundreds of millions of pounds on bureaucracy that would be better spent actually doing research. It would also maintain the current “dual support” system for research, if that’s a benefit.

I’m sure you’ll point out disadvantages through the comments box!


The Impact of Impact

Posted in Science Politics on February 18, 2015 by telescoper

Interesting analysis of the 2014 REF results by my colleague Seb Oliver. Among other things, it shows that Physics was the subject in which “Impact had the greatest impact”.

Seb Boyd

 The Impact of Impact

I wrote the following article to explore how Impact in the Research Excellence Framework 2014 (REF2014) affected the average scores of departments (and hence rankings). This produced a “league table” of how strongly impact affected different subjects. Some of the information in this article was used in a THE article by Paul Jump due to come out 00:00 on 19th Feb 2015. I’ve now also produced ranking tables for each UoA using the standardised weighting I advocate below (see Standardised Rankings).

Effective weight of each sub-profile in the GPA ranking, as %:

UoA  Unit of Assessment                              Outputs  Impact  Envir.
  9  Physics                                            37.9    38.6    23.5
 23  Sociology                                          34.1    38.6    27.3
 10  Mathematical Sciences                              37.6    37.5    24.9
 24  Anthropology and Development Studies               40.2    35.0    24.8
  6  Agriculture, Veterinary and Food Science           42.0    33.0    25.0
 31  Classics                                           43.3    32.6    24.0
 16  Architecture, Built Environment and Planning       48.6    31.1    20.3
View original post 1,558 more words

A whole lotta cheatin’ going on? REF stats revisited

Posted in Education, Science Politics on January 28, 2015 by telescoper

Here’s a scathing analysis of the Research Excellence Framework. I don’t agree with many of the points raised and will explain why in a subsequent post (if and when I get the time), but I’m reblogging it here in the hope that it will provoke some comments, either here or on the original post (also a WordPress site).

coasts of bohemia

 

1.

The rankings produced by Times Higher Education and others on the basis of the UK’s Research Assessment Exercises (RAEs) have always been contentious, but accusations of universities’ gaming submissions and spinning results have been more widespread in REF2014 than any earlier RAE. Laurie Taylor’s jibe in The Poppletonian that “a grand total of 32 vice-chancellors have reportedly boasted in internal emails that their university has become a top 10 UK university based on the recent results of the REF”[1] rings true in a world in which Cardiff University can truthfully[2] claim that it “has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise” from 22nd in RAE2008. Cardiff ranks 5th among universities in the REF2014 “Table of Excellence,” which is based on the GPA of the scores assigned by the REF’s “expert panels” to the three…

View original post 2,992 more words