Archive for the Science Politics Category

Executive Roast

Posted in Science Politics on February 6, 2009 by telescoper

The Chief Executive of the Science and Technology Facilities Council (Keith Mason) was recently summoned to the House of Commons Select Committee on Innovation, Universities and Skills. The video of his inquisition is now available for your enjoyment (but not his) here.

(I tried embedding this using vodpod but it didn’t work, so you’ll just have to click the link…)

Notice how in traditional fashion the light was shining in his eyes throughout. I suppose I should really feel sorry for him, but somehow I don’t. He may not be entirely responsible for the budgetary crisis currently engulfing STFC, but he handled the aftermath so badly that the damage done to relations between STFC and the community of physics researchers that rely on it for funding will take a long time to fix.

Anyway, if you can’t be bothered to watch the whole show here are some of the salient points in a summary that was passed to me by an anonymous source; I was too busy laughing to make my own notes, but I’ve added a few comments in italics. For those of you not up with acronyms, DIUS is the Department for Innovation, Universities and Skills and CSR stands for the Comprehensive Spending Review.

KM insisted that STFC had been successful in giving the UK unprecedented opportunities for doing world-class science, and by the end (though by that stage his most aggressive interlocutor, Ian Gibson, had left) he appeared to have earned the committee's grudging respect, though I suspect that was as much for the way he played a tricky wicket as because he had persuaded them out of their deep concerns about his management of the STFC.

Among the many issues raised were the following:

  • KM agreed to hand over the letter detailing the Science and Technology Facilities Council’s 2007 spending review allocation to MPs for scrutiny.
  • He denied that the external review of STFC had been a "total whitewash", a charge made on the grounds that the review team had not been given sufficient time to interview a thorough cross-section of staff, or to do other than take at "face value" the STFC's self-assessment document upon which its work was based, without being able to find out whether the majority of STFC staff actually agreed with its content. On the contrary, he insisted, staff had made their views known 'vociferously'.
  • Challenged about the perceived overrepresentation of the executive on the STFC Council, KM said that, while it had affected the perception held in the community, it made "no difference" to the outcomes (a point which the committee repeatedly contested). He added that STFC takes full account of community input via the advisory panels and science board. It's simply not true, he insisted, that the executive dominates the Council; rather, its presence ensures the Council is properly informed so that decisions are well founded. However, he acknowledged that communications had not been good – hence the new arrangements (the appointment of a Director of Communications); Great, another spin doctor – PC.
  • An extra £9M had been freed up by DIUS reducing STFC's liability to exchange-rate variations from £6M to £3M per annum over the triennium. Of this, £6M would go to exploitation grants and £3M to HEIs to promote knowledge transfer. So £6M will be used properly and the rest wasted – PC.
  • He stated that Jodrell Bank had no long-term future in radio astronomy since its location exposed it to too much 'noise' – but that was for Manchester University (which STFC would continue to support via E-MERLIN and SKA) to determine. It will take a silver bullet to kill that particular zombie – PC.
  • KM also voiced the opinion that there was no tension between being simultaneously responsible for developing STFC labs/campuses and funding HEIs through grants; on the contrary, it enabled better utilisation of resources, bearing in mind that the role of STFC is BOTH to promote science AND its societal/economic benefits. In other words he wants the flexibility to continue robbing Peter to pay Paul – PC.
  • For this reason (as well as reasons of administrative complexity) STFC had rejected Wakeham's recommendation to ring-fence the ex-PPARC budget line in the forthcoming CSR. Ditto.
  • KM argued that Daresbury was not being treated unfairly in relation to Harwell (there was a good deal of probing about this by North West MPs).

My own view having watched most of the video is that Professor Mason must have an incredibly thick skin to shrug off such a sustained level of antipathy. Some of it is crude and abusive, but it’s quite impressive how well informed some of the members are.

Physics Funding by Numbers

Posted in Science Politics on January 29, 2009 by telescoper

I just read today that HEFCE has decided on the way funds will be allocated for research following the 2008 Research Assessment Exercise. I have blogged about this previously (here, there and elsewhere), but to give you a quick reminder, the exercise basically graded all research in UK universities on a scale from 4* (world-leading) to 1* (nationally recognized), producing for each department a profile giving the fraction of research in each category.

HEFCE has decided that English universities will be funded according to a formula that includes everything from 2* up to 4*, but with weightings of 1:3:7. Those graded 1* and unclassified get no funding at all. How they arrived at this formula is anyone's guess. Personally I think it's a bit harsh on 2*, which is supposed to be internationally recognized research, but there you go.

Assuming there is also a multiplier for volume (i.e. the number of people submitted) we can now easily produce another version of the physics research league table which reveals the relative amount of money each will get. I don’t know the overall normalisation, of course.

The table shows the number of staff submitted (second column) and the overall fundability factor based on a 7:3:1 weighting of the published profile multiplied by the figure in column 2. This is like the “research power” table I showed here, only with a different and much steeper weighting (7,3,1,0) versus (4,3,2,1).

1. University of Cambridge 141.25 459.1
2. University of Oxford 140.10 392.3
3. Imperial College London 126.80 380.4
4. University College London 101.03 298.0
5. University of Manchester 82.80 227.7
6. University of Durham 69.50 205.0
7. University of Edinburgh 60.50 184.5
8. University of Nottingham 44.45 144.5
9. University of Glasgow 45.75 135.0
10. University of Warwick 51.00 130.1
11. University of Bristol 46.00 128.8
12. University of Birmingham 43.60 126.4
13. University of Southampton 45.30 120.0
14. Queen’s University Belfast 50.00 115.0
15. University of Leicester 45.00 114.8
16. University of St Andrews 32.20 104.7
17. University of Liverpool 34.60 96.9
18. University of Sheffield 31.50 92.9
19. University of Leeds 35.50 88.8
20. Lancaster University 26.40 88.4
21. Queen Mary, University of London 34.98 85.7
22. University of Exeter 28.00 77.0
23. University of Hertfordshire 28.00 72.8
24. University of York 26.00 67.6
25. Royal Holloway, University of London 27.96 67.1
26. University of Surrey 27.20 65.3
27. Cardiff University 32.30 64.6
28. University of Bath 20.20 63.6
29. University of Strathclyde 31.67 60.2
30. University of Sussex 20.00 55.0
31. Heriot-Watt University 19.50 51.7
32. Swansea University 20.75 48.8
33. Loughborough University 17.10 41.9
34. University of Central Lancashire 22.20 41.1
35. King’s College London 16.40 38.5
36. Liverpool John Moores University 16.50 35.5
37. Aberystwyth University 18.33 23.8
38. Keele University 10.00 18.0
39. Armagh Observatory 7.50 13.1
40. University of Kent 3.00 4.5
41. University of the West of Scotland 3.70 4.1
42. University of Brighton 1.00 1.8

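For those who like to check the arithmetic, here is a minimal sketch of the calculation (the function name and the assumption of a simple linear volume multiplier are mine; the profiles are the ones quoted in the RAE results post further down this page, and since the overall normalisation is unknown these are relative numbers only):

```python
# Fundability factor: staff submitted times the weighted profile,
# with 4*, 3* and 2* weighted 7:3:1 and 1*/unclassified scoring nothing.
def fundability(staff, p4, p3, p2, p1=0, unclassified=0):
    """Profile entries are percentages; 1* and unclassified carry zero weight."""
    return staff * (7 * p4 + 3 * p3 + 1 * p2) / 100.0

# Cambridge: 141.25 staff, profile 25/40/30/5 at 4*,3*,2*,1*
print(round(fundability(141.25, 25, 40, 30, 5), 1))  # -> 459.1
# Cardiff: 32.30 staff, profile 5/45/30/20
print(round(fundability(32.30, 5, 45, 30, 20), 1))   # -> 64.6
```

These reproduce the first and 27th entries of the table above, which is reassuring.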
It looks to me as though the fraction of funds going to the big three at the top will probably be reduced quite significantly, although apparently funds have been set aside to smooth over any catastrophic changes. I'd hazard a guess that things won't change much for those in the middle.

I’ve left the Welsh and Scottish universities in the list for comparison, but there is no guarantee that HEFCW and SFC will use the same formula for Wales and Scotland as HEFCE did for England. I have no idea what is going to happen to Cardiff University’s funding at the moment.

Another bit of news worth putting in here is that HEFCE has protected funding for STEM subjects (Science, Technology, Engineering and Mathematics), so that the apparently poor showing of some science subjects (especially physics) compared to, e.g., Economics will not necessarily mean that physics as a whole will suffer. How this works out in practice remains to be seen.

Apparently the detailed breakdowns of how the final profiles were reached will also go public soon. That will make for some interesting reading, although apparently everything relating to individual researchers will be shredded to prevent problems with the Data Protection Act.

What’s all the Noise?

Posted in Science Politics, The Universe and Stuff on January 18, 2009 by telescoper

Now there’s a funny thing…

I’ve just come across a news item from last week which I followed up by looking at the official NASA press release. I’m very slow to pick up on things these days, but I thought I’d mention it anyway.

The experiment concerned is called ARCADE 2, a somewhat contrived acronym derived from Absolute Radiometer for Cosmology, Astrophysics and Diffuse Emission. It is essentially a balloon-borne detector designed to analyse radio waves with frequencies in the range 3 to 90 GHz. The experiment actually flew in 2006, so it has clearly taken considerable time to analyse the resulting data.

Being on a balloon that flies for a relatively short time (2.5 hours in this case) means that only a part of the sky was mapped, amounting to about 7% of the whole celestial sphere, but that is enough to map a sizeable piece of the Galaxy as well as a fairly representative chunk of deep space.

There are four science papers on the arXiv about this mission: one describes the instrument itself; another discusses radio emission from our own galaxy, the Milky Way; the third discusses the overall contribution of extragalactic origin in the frequency range covered by the instrument; the last discusses the implications about extragalactic sources of radio emission.

The thing that jumps out from this collection of very interesting science papers is that there is an unexplained, roughly isotropic, background of radio noise, consistent with a power-law spectrum. Of course, isolating this component requires removing known radio emission from our Galaxy and from identified extragalactic sources, as well as understanding the systematics of the radiometer during its flight. But after a careful analysis of these the authors present strong evidence of excess emission over and above known sources. The spectrum of this radio buzz falls quite steeply with frequency, so it shows up mainly in the two long-wavelength channels at 3 and 8 GHz.

So where does this come from? Well, we just don’t know.

The problem is that no sensible extrapolation of known radio sources to high redshift appears to be able to generate an integrated flux equivalent to that observed. Here is a bit of the discussion from the paper:

It is possible to imagine that an unknown population of discrete sources exist below the flux limit of existing surveys. We argue earlier that these cannot be a simple extension of the source counts of star-forming galaxies. As a toy model, we consider a population of sources distributed with a delta function in flux a factor of 10 fainter than the 8.4 GHz survey limit of Fomalont et al. (2002). At a flux of 0.75 μJy, it would take over 1100 such sources per square arcmin to produce the unexplained emission we see at 3.20 GHz, assuming a frequency index of −2.56. This source density is more than two orders of magnitude higher than expected from extrapolation to the same flux limit of the known source population. It is, however, only modestly greater than the surface density of objects revealed in the faintest optical surveys, e.g., the Hubble Ultra Deep Field (Beckwith et al. 2006).  The unexplained emission might result from an early population of non thermal emission from low-luminosity AGN; such a source would evade the constraint implied by the far-IR measurements.

The point is that ordinary galaxies produce a broad spectrum of radiation, and it is difficult to boost the flux at one frequency without violating limits imposed at others. It might be possible to invoke Active Galactic Nuclei (AGN) to do the trick, but I'm not sure. I am sure there'll be a lot of work going on trying to see how this might fit in with all the other things we know about galaxy formation and evolution, but for the time being it's a mystery.
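To see just how steep a spectral index of −2.56 really is, here is a toy illustration (my own, not taken from the papers) of how such a power-law excess dies away across the ARCADE 2 band:

```python
# Toy illustration: a power-law excess S(nu) ~ nu**index, with the
# index of -2.56 quoted in the paper, normalised to the 3.2 GHz channel.
def excess_ratio(nu_ghz, nu0_ghz=3.2, index=-2.56):
    """Strength of the excess at nu_ghz relative to its 3.2 GHz value."""
    return (nu_ghz / nu0_ghz) ** index

# By 8 GHz the excess is already down to about a tenth of its 3.2 GHz
# level, and at 90 GHz it is utterly negligible.
for nu in (3.2, 8.0, 30.0, 90.0):
    print(f"{nu:5.1f} GHz: {excess_ratio(nu):.2e}")
```

That is why the buzz is confined to the two long-wavelength channels even if it is perfectly real.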

I’m equally sure that these results will spawn a plethora of more esoteric theoretical explanations, inevitably including the ridiculous as well as perhaps the sublime. Charged dark matter springs to mind.

Or maybe it’s not even extragalactic. Could it be from an unknown source inside the Milky Way? If so, it might shed some light on the curiosities we find in the cosmic microwave background that I’ve mentioned here and there, but it seems to peak at too low a frequency to account for much of the overall microwave sky temperature.

But it does have a lesson for astronomy funders. ARCADE 2 is a very cheap experiment (by NASA standards). Moreover, the science goals of the experiment did not include "discovering a new cosmic background". It just goes to show that even in these times of big, expensive and narrowly targeted missions there is still space for serendipity.

The Physics Overview

Posted in Science Politics on January 17, 2009 by telescoper

I found out by accident the other day that the Panels conducting the 2008 Research Assessment Exercise have now published their subject overviews, in which they comment on trends within each discipline.

Heading straight for the overview produced by the panel for Physics (which is available together with two other panels here), I found some interesting points, some of which relate to comments posted on my previous items about the RAE results (here and here) until I terminated the discussion.

One issue that concerns many physicists is how the research profiles produced by the RAE panel will translate into funding. I’ve taken the liberty of extracting a couple of paragraphs from the report to show what they think. (For those of you not up with the jargon, UoA19 is the Unit of Assessment 19, which is Physics).

The sub-panel is pleased with how much of the research fell into the 4* category and that this excellence is widely spread so that many smaller departments have their share of work assessed at the highest grade. Every submitted department to UoA19 had at least 70% of their overall quality profile at 2* or above, i.e. internationally recognised or above.

Sub-panel 19 takes the view that the research agenda of any group, or of any individual for that matter, is interspersed with fallow periods during which the next phase of the research is planned and during which outputs may be relatively incremental, even if of high scientific quality. In the normal course of events successful departments with a long term view will have a number of outputs at the 3* and 2* level indicating that the groundwork is being laid for the next set of 4* work. This is most obviously true for those teams involved with very major experiments in the big sciences, but also applies to some degree in small science. Thus the quality profile is a dynamic entity and even among groups of very high international standing there is likely to be cyclic variation in the relative amounts of 3* and 4* work according to the rhythm of their research programmes. Most departments have what we would consider a healthy balance between the perceived quality levels. The subpanel strongly believes that the entire overall profile should be considered when measuring the quality of a department, rather than focussing on the 4* component only.

I think this is very sensible, but for more reasons than are stated. For a start the judgement of what is 4* or 3* must be to some extent subjective and it would be crazy to allocate funding entirely according to the fraction of 4* work. I’ve heard informally that the error in any of the percentages for any assessment is plus or minus 10%, which also argues for a conservative formula. However one might argue about the outcome, the panels clearly spent a lot of time and effort determining the profiles so it would seem to make sense to use all the information they provide rather than just a part.

Curiously, though, the panel made no comment about why it is that physics came out so much worse than chemistry in the 2008 exercise (about one-third of the chemistry departments in the country had a profile-weighted quality mark higher than or equal to the highest-rated physics department). Perhaps they just think UK chemistry is a lot better than UK physics.

Anyway, as I said, the issue most of us are worrying about is how this will translate into cash. I suspect HEFCE hasn’t worked this out at all yet either. The panel clearly thinks that money shouldn’t just follow the 4* research, but the HEFCE managers might differ. If they do wish to follow a drastically selective policy they’ve got a very big problem: most physics departments are rated very close together in score. Any attempt to separate them using the entire profile would be hard to achieve and even harder to justify.

The panel also made a specific comment about Wales and Scotland, which is particularly interesting for me (being here in Cardiff):

Sub-panel 19 regards the Scottish Universities Physics Alliance collaboration between Scottish departments as a highly positive development enhancing the quality of research in Scotland. South of the border other collaborations have also been formed with similar objectives. On the other hand we note with concern the performance of three Welsh departments where strategic management did not seem to have been as effective as elsewhere.

I’m not sure whether the dig about Welsh physics departments is aimed at the Welsh funding agency HEFCW or at the individual university groups; SUPA was set up with the strong involvement of the SFC, and various other physics groupings in England (such as the Midlands Physics Alliance) were actively encouraged by HEFCE. It is true, though, that the three active physics departments in Wales (Cardiff, Swansea and Aberystwyth) all did quite poorly in the RAE. In the last RAE, HEFCW did not apply as selective a funding formula as its English counterpart HEFCE, with the result that Cardiff didn’t get as much research funding as it would have done had it been in England. One might argue that this affected the performance this time around, but I’m not sure about that, as it’s not clear how any extra funding coming into Cardiff would have been spent. I doubt HEFCW will do anything different this time either. Welsh politics has a strong North-South issue going on, so HEFCW will probably feel it has to maintain a department in the North. It therefore can’t penalise Aberystwyth too badly for its poor RAE showing. The other two departments are larger and had very similar profiles (Swansea better than Cardiff, in fact) so there’s very little justification for being too selective there either.

The panel remarked on the success of SUPA which received a substantial injection of cash from the Scottish Funding Council (SFC) and which has led to new appointments in strategic areas in several Scottish universities. I’m a little bit skeptical about the long-term benefits of this because the universities themselves will have to pick up the tab for these positions when the initial funding dries up. Although it will have bought them extra points on the RAE score the continuing financial viability of physics departments is far from guaranteed because nobody yet knows whether they will gain as much cash from the outcome as they spent to achieve it. The same goes for other universities, particularly Nottingham, who have massively increased their research activity with cash from various sources and consequently done very well in the RAE. But will they get back as much as they have put in? It remains to be seen.

What I would say about SUPA is that it has definitely given Scottish physics a higher profile, largely from the appointment of Ian Halliday to front it. He is an astute political strategist and respected scientist who performed impressively as Chief Executive of the now-defunct Particle Physics and Astronomy Research Council and is also President of the European Science Foundation. Having such a prominent figurehead gives the alliance more muscle than a group of departmental heads would ever hope to have.

So should there be a Welsh version of SUPA? Perhaps WUPA?

Well, Swansea and Cardiff certainly share some research interests in the area of condensed-matter physics, but their largest activities (Astronomy in Cardiff, Particle Physics in Swansea) are pretty independent. It seems to me to be well worth thinking of some sort of initiative to pool resources and try to make Welsh physics a bit less parochial, but the question is how to do it. At coffee the other day, I suggested that an initiative in the area of astroparticle physics could bring in genuinely high-quality researchers as well as establishing synergy between Swansea and Cardiff, which are only an hour apart by train. The idea went down like a lead balloon, but I still think it’s a good one. Whether HEFCW has either the resources or the inclination to do something like it is another matter, even if the departments themselves were to come round.

Anyway, I’m sure there will be quite a lot more discussion about our post-RAE strategy if and when we learn more about the funding implications. I personally think we could do with a radical re-think of the way physics in Wales is organized and could do with a champion who has the clout of Scotland’s SUPA-man.

The mystery as far as I am concerned remains why Cardiff did so badly in the ratings. I think the first quote may offer part of the explanation because we have large groups in Astronomical Instrumentation and Gravitational Physics, both of which have very long lead periods. However, I am surprised and saddened by the fact that the fraction rated at 4* is so very low. We need to find out why. Urgently.

Silver Linings

Posted in Science Politics, The Universe and Stuff on December 19, 2008 by telescoper

They say that bad news sells newspapers, so I shouldn’t be surprised by the large number of hits my previous post, and the one before that, about the Research Assessment Exercise have generated.

However, I heard some news today which has at least provided a bit of a silver lining and put me in a better mood for the Christmas break. My recent application for a grant from the Science and Technology Facilities Council, to fund research over the next three years into departures from the concordance cosmological model, has actually been successful.

Owing to a budgetary crisis, STFC grant rounds have been very competitive in recent years, so I’m quite relieved to have been successful in the present dire financial context. Obviously, somebody out there seems to like what I do. Being a theorist I’m also quite cheap, which probably helped. Or maybe it was just an administrative error…

Anyway, thanks to this grant I will be able to employ a postdoctoral research assistant and spend a bit more of my time on research. It also helps fund a bit of infrastructure within the department. Overall it amounts to about £350K which sounds a lot, but is actually quite small by the standards of particle physics and astronomy grants. STFC isn’t actually Tesco but every little helps.

All I have to do now is convince a potential postdoc to come and work with me in the 35th, er, 22nd best Physics department in the country. What could be simpler?

The Authorized Version

Posted in Science Politics on December 18, 2008 by telescoper

Following on from my previous post about the 2008 Research Assessment Exercise, I’ve been told that Cardiff University’s preferred measure of research activity is not the simple grade point average that I computed there, but an index of research power, which is the average multiplied by the number of staff submitted.

Partly out of interest and partly so as not to incur the wrath of the University Thought Police I recalculated the list sorted by the official measure. So here is the authorized version, as sanctioned by the powers that be:

1. University of Cambridge 402.6
2. University of Oxford 371.3
3. Imperial College London 348.7
4. University College London 277.8
5. University of Manchester 215.3
6. University of Durham 191.1
7. University of Edinburgh 169.4
8. University of Warwick 132.6
9. University of Nottingham 126.7
10. University of Glasgow 125.8
11. Queen’s University Belfast 125.0
12. University of Bristol 121.9
13. University of Southampton 120.0
14. University of Birmingham 117.7
15. University of Leicester 114.8
16. University of St Andrews 91.8
17. University of Liverpool 91.7
18. University of Leeds 90.5
19. Queen Mary, University of London 87.5
20. University of Sheffield 86.6
21. Lancaster University 76.6
22. Cardiff University 75.9
23. University of Exeter 75.6
24. University of Strathclyde 74.4
25. University of Hertfordshire 72.8
26. Royal Holloway, University of London 71.3
27. University of Surrey 69.4
28. University of York 67.6
29. University of Bath 57.6
30. University of Sussex 54.0
31. Swansea University 52.9
32. Heriot-Watt University 51.7
33. University of Central Lancashire 51.1
34. Loughborough University 41.9
35. King’s College London 41.8
36. Liverpool John Moores University 39.6
37. Aberystwyth University 35.7
38. Keele University 22.5
39. Armagh Observatory 16.9
40. University of the West of Scotland 6.7
41. University of Kent 6.6
42. University of Brighton 2.3

Well, it’s actually quite surprising how much things change. I don’t think it means very much, but 22nd certainly sounds much better than 35th.
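In case anyone wants to reproduce the numbers, the calculation is just the grade point average times the volume. A quick sketch (the function names are mine), using the Cardiff figures quoted in the post below:

```python
# Research power = grade point average (4*,3*,2*,1* weighted 4:3:2:1,
# unclassified scoring zero) multiplied by the number of staff submitted.
def gpa(p4, p3, p2, p1):
    """Grade point average from profile percentages."""
    return (4 * p4 + 3 * p3 + 2 * p2 + 1 * p1) / 100.0

def research_power(staff, p4, p3, p2, p1):
    return staff * gpa(p4, p3, p2, p1)

# Cardiff: 32.30 staff, profile 5/45/30/20 at 4*,3*,2*,1*
print(round(gpa(5, 45, 30, 20), 2))                    # -> 2.35
print(round(research_power(32.30, 5, 45, 30, 20), 1))  # -> 75.9
```

Both numbers match the tables, so at least the powers that be can do their sums.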

But, being a Newcastle United supporter, I’ve never been a great fan of league tables.

Res Judicata

Posted in Science Politics on December 18, 2008 by telescoper

Today is the day people working in British universities have waited for, in a mixture of hope and apprehension, for several years. The results of the 2008 Research Assessment Exercise (RAE) were published at 00:01 GMT today (18th December).

I had a look just after midnight and the webserver crashed, but only for a few minutes, and I soon got back in and found the bad news. The relevant table for me as an astrophysicist is the one for Unit of Assessment 19, which is Physics & Astronomy. Results are given as a list of numbers, consisting of the number of staff entered (not necessarily an integer, for accounting reasons) followed by the percentage of work judged by the panel to be in each of four categories, explained in the following excerpt from the RAE website:

The quality profiles displayed on this website are the results of the 2008 Research Assessment Exercise (RAE2008), the sixth assessment in this current format of the quality of research conducted in UK Higher Education Institutions (HEIs). The UK funding bodies for England, Northern Ireland, Scotland and Wales will use the RAE2008 results to distribute funding for research from 2009-10.

The results follow an expert review process conducted by assessment panels throughout 2008. Research in all subjects was assessed against agreed quality standards within a common framework that recognised appropriate variations between subjects in terms of both the research submitted and the assessment criteria.

Submissions were made in a standard form that included both quantitative and descriptive elements. Full details of the contents of, and arrangements for making, submissions were published in ‘Guidance on submissions‘ (RAE 03/2005).

The RAE quality profiles present in blocks of 5% the proportion of each submission judged by the panels to have met each of the quality levels defined below. Work that fell below national quality or was not recognised as research was unclassified.

4* Quality that is world-leading in terms of originality, significance and rigour.
3* Quality that is internationally excellent in terms of originality, significance and rigour but which nonetheless falls short of the highest standards of excellence.
2* Quality that is recognised internationally in terms of originality, significance and rigour.
1* Quality that is recognised nationally in terms of originality, significance and rigour.
Unclassified Quality that falls below the standard of nationally recognised work. Or work which does not meet the published definition of research for the purposes of this assessment.

The ‘international’ criterion equates to a level of excellence that it was reasonable to expect for the UOA, even though there may be no current examples of such a level in the UK or elsewhere. It should be noted that ‘national’ and ‘international’ refer to standards, not to the nature or geographical scope of particular subjects.

For my own department, the School of Physics & Astronomy, at Cardiff University, I found the following

Cardiff University (32.30) 5 45 30 20

which means that we entered 32.30 people, but only 5% of the work was judged to be at the top level (4*), 45% at 3*, 30% at 2* and 20% at 1*. On their own these figures don’t mean very much but one can do a quick comparison with the rest of the table to see that for us this is an enormous disappointment. We have a much lower fraction of 4* than the majority of departments, and also a significantly higher fraction of 1*. These findings are very worrying.

If I were working at an English university with these results I would be very concerned about their financial implications, but it’s a bit more complicated with us being here in Wales. The numbers given in the table are translated into money by the funding councils, and Wales has its own one of these (HEFCW, as distinct from the English HEFCE). There are many fewer physics departments in Wales and we’re not competing with the bigger English ones for funding. We don’t yet know how much our research funds will be cut. It might not be as bad as if we were in England, but it’s clearly not good. We won’t know how much dosh will be involved until March 2009. It’s not just a matter of funding, though; it’s also about the national and international perception of the department in the physics community.

I can see there will be a post mortem to find out what went wrong, as most of us were confident of a much better outcome. Perhaps the format of the RAE (focussing on research papers as the measure of output) is not favourable to a department with so many instrument builders in it?

But with the economy in deep recession making further cuts in research funding likely in the future, and our major external funder (STFC) already struggling to make ends meet, this poor showing in the RAE has cast a gloomy shadow over Christmas.

Of course many places did much better, including my old department at Nottingham which has

University of Nottingham (44.45) 25 40 30 5

which can be interestingly compared with Cambridge, who have

University of Cambridge (141.25) 25 40 30 5

You can see that apart from the different numbers of staff the profile is exactly the same. I’m sure their publicity machine will pick up on this so I won’t be the last to mention it! Well done, Nottingham!

It will be interesting to see what the newspapers make of the new RAE results. They are significantly more complicated than previous versions, which just gave a single number. The scope for flexibility in generating league tables is clearly greatly enhanced by this complexity, so we can bet the hacks will have a field day. I thought I’d get a headstart by doing a straightforward ranking based on a simple weighted average (4 points for 4*, 3 for 3*, and so on) and then sorting departments by the average thus obtained:

1. Lancaster University 2.9
2. University of Bath 2.85
3. University of Cambridge 2.85
4. University of Nottingham 2.85
5. University of St Andrews 2.85
6. University of Edinburgh 2.8
7. University of Durham 2.75
8. Imperial College London 2.75
9. University of Sheffield 2.75
10. University College London 2.75
11. University of Glasgow 2.75
12. University of Birmingham 2.7
13. University of Exeter 2.7
14. University of Sussex 2.7
15. University of Bristol 2.65
16. University of Liverpool 2.65
17. University of Oxford 2.65
18. University of Southampton 2.65
19. Heriot-Watt University 2.65
20. University of Hertfordshire 2.6
21. University of Manchester 2.6
22. University of Warwick 2.6
23. University of York 2.6
24. King’s College London 2.55
25. University of Leeds 2.55
26. University of Leicester 2.55
27. Royal Holloway, University of London 2.55
28. University of Surrey 2.55
29. Swansea University 2.55
30. Queen Mary, University of London 2.5
31. Queen’s University Belfast 2.5
32. Loughborough University 2.45
33. Liverpool John Moores University 2.4
34. University of Strathclyde 2.35
35. Cardiff University 2.35
36. University of Brighton 2.3
37. University of Central Lancashire 2.3
38. Keele University 2.25
39. Armagh Observatory 2.25
40. University of Kent 2.2
41. Aberystwyth University 1.95
42. University of the West of Scotland 1.8

So you can see we are languishing at 35th place out of 42.
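Since the method is just arithmetic, the ranking above is easy to reproduce. Here is a minimal sketch in Python (the profile used is Nottingham's, as quoted earlier; the function and variable names are my own):

```python
# Sketch of the simple weighted-average ranking described above:
# a profile gives the percentage of research judged 4*, 3*, 2* and 1*,
# and each grade is weighted by its star rating (4 = 4*, 3 = 3*, etc).

def weighted_average(profile):
    """Turn a percentage profile (p4, p3, p2, p1) into a grade-point average."""
    weights = (4, 3, 2, 1)
    return sum(w * p for w, p in zip(weights, profile)) / 100.0

# Nottingham's profile as quoted above (25% 4*, 40% 3*, 30% 2*, 5% 1*).
nottingham = (25, 40, 30, 5)
print(weighted_average(nottingham))  # 2.85, matching the table
```

Sorting the departments by this number gives the league table above; of course a different choice of weights would shuffle the order, which is exactly why the hacks will have a field day.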

This is supposed to be the last RAE, and we don't know what will replace it. I don't at all object to the principle that research funding should be peer-assessed, but this particular exercise was enormously expensive in terms of the effort universities spent preparing for it, not to mention the ridiculous burden placed on the panels of having to read all those papers.

Popularisation or Propaganda?

Posted in Science Politics, The Universe and Stuff with tags , , , on November 25, 2008 by telescoper

I was just reading a piece by Jim Al-Khalili in today’s Guardian online science section. Jim is Professor of Physics and of Public Engagement in Science at the University of Surrey. His piece seems to have been inspired by the new appointment of Marcus du Sautoy to a similar position at Oxford University recently vacated by Richard Dawkins. His message is essentially that scientists should not only be more active in popularising science but also do more to “defend our rational, secular society against the rising tide of irrationalism”.

The legitimate interface between science and society has many levels to it. One aspect is the simple need to explain what science tells us about the world in order that people can play an informed part in our increasingly technological society. Another is that there needs to be encouragement for (especially young) people to study science seriously and to make it their career, in order to maintain the supply of scientists for the future. And then there is the issue of the wider cultural implications of science: its impact on other belief-systems (such as religions), on other forms of endeavour (such as art and literature), and even on government.

I think virtually all scientists would agree with the need for engagement in at least the first two of these. In fact, I’m sure most scientists would love to have the chance to explain their work to a lay audience, but not all subjects are as accessible or inspirational as, say, astronomy. Unfortunately also, not all scientists are very good at this sort of thing. Some might even be counterproductive if inflicted on the public in this way. So it seems relatively natural that some people have had more success than others, and have thus become identified as “science communicators”. Although some scientists are a bit snobby about those who write popular books and give popular talks, most of us agree that this kind of work is vital.

Vital, yes, but there are dangers. The number of scientists involved in this sort of work is probably more limited than it should be owing to the laziness of the popular media, who generally can’t be bothered to look outside London and the South-East for friendly scientists. The broadsheet newspapers employ very few qualified specialists among their staff even on the science pages so it’s a battle to get meaningful scientific content into print in the mass media. Much that does appear is slavishly regurgitated from one of the press agencies who are kept well fed by the public relations experts employed by research laboratories and other science institutes.

These factors mean that what comes out in the media can be a distorted representation of the real scientific process. Heads of research groups and laboratories are engaged in the increasingly difficult business of securing enough money to continue their work in these uncertain financial times. Producing lots of glossy press releases seems to be one way of raising the profile and gaining the attention of funding bodies. Most scientists do this with care, but sometimes the results are ludicrously exaggerated or simply wrong. Some of the claims circulating around the time the Large Hadron Collider was switched on definitely fell into one or more of those categories. I realise that there's a difficult balance to be struck between simplicity and accuracy, and that errors can result from overenthusiasm rather than anything more sinister, but even so we should tread carefully if we want the public to engage with what science really is.

Most worrying is the perceived need to demonstrate black-and-white certainty over issues which are considerably more complicated than that. This is another situation where science popularisation becomes science propaganda. I'm not sure whether the public actually wants its scientists to make pronouncements as if they were infallible oracles, but the media definitely do. Scientists are sometimes cast in the role of priests, which is dangerous, especially when a result is later shown to be false. Then the public don't just lose faith with one particular scientist, but with the whole of science.

Science is not about certainty. It is a method for dealing rationally with uncertainty: a pragmatic system primarily intended for making testable inferences about the world using measurable, quantitative data. Scientists look their most arrogant and dogmatic when they try to push science beyond the (relatively limited) boundaries of its applicability and to ride roughshod over alternative ways of dealing with wider issues including, yes, religion.

I don’t have any religious beliefs that anyone other than me would recognize as such. I am also a scientist. But I don’t see any reason why being a scientist or not being a scientist should have any implications for my (lack of) religious faith. God (whatever that means) is, by construction, orthogonal to science. I’m not at all opposed to scientists talking about their religion or their atheism in the public domain, but I don’t see why their opinions are of any more interest than anyone else’s in these matters.

This brings us to the third of Jim’s suggestions: that more scientists should follow Richard Dawkins’ lead and be champions of atheism in the public domain. As a matter of fact, I agree with some of Dawkins’ agenda, such as his argument for the separation of church and state, although I don’t feel his heavy-handed use of the vitriol in The God Delusion achieved anything particularly positive (except for his bank balance, perhaps). But I don’t think it’s right to assume that all scientists should follow his example. Their beliefs are their business. I don’t think we will be much better off if we simply replace one set of priests with another.

So there you have my plea for scientists to accept that science will never have all the answers. There will always be “aspects of human experience that, even in an age of astonishing scientific advance, remain beyond the reach of scientific explanation”.

Can I have the Templeton Prize now please?

Cerebral Asymmetry: is it all in the Mind?

Posted in Bad Statistics, Science Politics with tags , , on November 12, 2008 by telescoper

After blogging a few days ago about the possibility that our entire Universe might be asymmetric, I found out today that a short comment of mine about a completely different form of asymmetry has been published in the Proceedings of the National Academy of Sciences of the United States of America.

Earlier this summer a paper by Ivanka Savic & Per Lindstrom concerning gender and sexuality differences in brain structure received widespread press coverage and the odd blog comment. They had analysed a group of 90 volunteers divided into four classes based on gender and sexual orientation: male heterosexual, male homosexual, female heterosexual and female homosexual.

They studied the brain structure of these volunteers using Magnetic Resonance Imaging and used their data to look for differences between the different classes. In particular they measured the asymmetry between left and right hemispheres for their samples. The right side of the brain for heterosexual men was found to be typically about 2% larger than the left; homosexual women also had an asymmetry, but slightly smaller than this at about 1%. Gay men and heterosexual women showed no discernible cerebral asymmetry. These claims are obviously very interesting and potentially important if they turn out to be true. It is in the nature of the scientific method that such results should be subjected to rigorous scrutiny in order to check their credibility.

As someone who knows nothing about neurobiology but one or two things about statistics, I dug out the research paper by Savic & Lindstrom and looked at the analysis it presents. I very quickly began to suspect there might be a problem. For each volunteer, the authors obtain measurements of the left and right cerebral volumes (call these L and R respectively). Each pair of measurements is then combined to form an asymmetry index (AI) as (L-R)/(L+R). There is then a set of values for AI, one for each volunteer. The claim is that these are systematically different for the different gender and orientation groups, based on a battery of tests including Analysis of Variance (ANOVA) and t-tests based on sample means.

Of course, it would be better to do this using a consistent Bayesian approach, because this would make explicit the dependence of the results on an underlying model of the data. Sadly, the off-the-shelf statistical methodology is of the inferior frequentist type, and that is what researchers tend to use when they don't really know what they're doing. They also don't bother to read the health warnings that state the assumptions behind the results.

The problem in this case is that the tests done by Savic & Lindstrom all depend on the quantity being analysed (AI) having a normal (Gaussian) distribution. This is very often a reasonable hypothesis for biometric data, but unfortunately the construction of the asymmetry index is such that it is expected to have a very non-Gaussian shape, as is commonly the case for distributions of variables formed as ratios. In fact, the ratio of two normal variates has a peculiar distribution with very long tails. Many statistical analyses appeal to the Central Limit Theorem to justify the assumption of normality, but distributions with very long tails (such as the Cauchy distribution) violate the conditions of this theorem, namely that the distribution must have finite variance. The asymmetry index is therefore probably an inappropriate choice of variable for the tests that Savic & Lindstrom perform. In particular, the significance levels (p-values) quoted in their paper are very low (of order 0.0008, for example, in the ANOVA test), which is surprising for such small samples. These probabilities are obtained by assuming the observations have Gaussian statistics; for a distribution with longer tails the true p-values would be much higher, and the results correspondingly less significant.
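The long-tail problem is easy to demonstrate with a quick simulation of my own (not taken from the paper): the ratio of two zero-mean normal variates follows a standard Cauchy distribution, and its extremes dwarf anything a Gaussian sample of the same size can produce.

```python
# Compare the extremes of a Gaussian sample with those of a ratio of
# normals (a standard Cauchy variate). The Cauchy tail never thins out.
import random

random.seed(42)
n = 100_000

# Ratio of two independent standard normals: a standard Cauchy variate.
ratios = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]

# A plain Gaussian sample of the same size, for comparison.
normals = [random.gauss(0, 1) for _ in range(n)]

print(max(abs(x) for x in normals))  # a few standard deviations at most
print(max(abs(x) for x in ratios))   # orders of magnitude larger
```

The asymmetry index has a denominator (L+R) that is presumably bounded away from zero, so its tails will be milder than the pure Cauchy case, but the general point stands: a ratio's distribution needs to be checked, not assumed Gaussian.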

Being a friendly chap, I emailed Dr Savic drawing this problem to her attention and asking whether she was aware of it and of the possible implications it might have for the analysis she had presented. If not, I offered to do an independent (private) check on the data to see how reliable the claimed statistical results actually were. I never received a reply.

Worried that the world might be jumping to all kinds of far-reaching conclusions about gay genes based on these questionable statistics, I wrote instead to Randy Schekman, the editor of the journal, who suggested I submit a written comment. I did, it was accepted by the editorial committee, and it came out in the 11 November issue. What I didn't realise was that Savic & Lindstrom had prepared a reply, and that this was published alongside my comment. I find it strange that I wasn't told about this before publication, but that aside, it is in principle quite reasonable to let the authors respond to criticisms like mine. Their response reveals that they completely missed the point about the danger of long-tailed distributions I mentioned above. They state that “when the sample size n is big the sampling distribution of the mean becomes approximately normal regardless of the distribution of the original variable“. Not if the distribution of the original variable has such a long tail it doesn't! In fact, if the observations have a Cauchy distribution then so does the sampling distribution of the mean, whatever the size of the sample. You can find this caveat spelled out in many places, including here. Savic & Lindstrom seem oblivious to this pitfall, even after I specifically pointed it out to them.
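The claim that averaging rescues normality can be checked numerically. In this sketch of my own, the sample mean of standard Cauchy variates is simulated for increasing n; unlike the Gaussian case, its spread never shrinks:

```python
# The sample mean of n standard Cauchy variates is itself standard Cauchy,
# so averaging more observations does not concentrate it around any value.
import random
import statistics

random.seed(1)

def cauchy_sample_mean(n):
    """Mean of n standard Cauchy draws (each a ratio of two standard normals)."""
    return statistics.fmean(random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n))

for n in (10, 1000, 10_000):
    means = [cauchy_sample_mean(n) for _ in range(100)]
    q1, q2, q3 = statistics.quantiles(means, n=4)
    print(n, round(q3 - q1, 2))  # interquartile range of the means: does not shrink
```

For Gaussian data the same interquartile range would fall like 1/sqrt(n); for Cauchy data it stays put however many observations you average.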

They also claim that a group size of n=30 is sufficient to be confident that the central limit theorem holds. A pity, then, that none of their groups is of that size. The overall sample is 90, but it is broken down into two groups of 20 and two of 25.

[Figure: histogram of the measured asymmetry-index values, reproduced from Savic & Lindstrom's reply. © 2008 National Academy of Sciences]

They also say that the measured AI distribution is actually normal anyway and give a plot (above). This shows all the AI values binned into one histogram. Since they don't give any quantitative measure of goodness of fit, it's hard to tell whether this has a normal distribution or not. One can, however, easily identify a group of five or six individuals that seem to form a separate group with larger AI values (the small peak to the right of the large peak). Since they don't give histograms broken down by group it is impossible to be sure, but I would hazard a guess that these few individuals might be responsible for the entire result; remember that the whole sample is only n = 90.

More alarmingly, Savic & Lindstrom state in their reply that “one outlier” is omitted from this graph. Really? On what basis was the outlier rejected? The existence of outliers could be evidence of exactly the sort of problem I am worried about! Unless there was a known mistake in the measurement, this outlier should never have been omitted. They claim that the “recalculation of the data excluding this outlier does not change the results”. I find it difficult to believe that the removal of an outlier from such a small sample could not change the p-values!

In my note I made a few constructive suggestions as to how the difficulty might be circumvented, but Savic & Lindstrom have not followed any of them. Instead they report (without details of the p-values) having done some alternative, non-parametric, tests. These are all very well, but they don't add very much if their p-values also assume Gaussian statistics. A better way to do this sort of thing robustly would be using Monte Carlo simulations.
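To be concrete about the sort of Monte Carlo approach I mean: below is a sketch of a simple permutation test, which needs no distributional assumption at all. The data, group sizes and function names are invented purely for illustration and have nothing to do with the actual measurements.

```python
# Sketch of a permutation (Monte Carlo) test: the p-value comes from
# shuffling the group labels, not from any Gaussian assumption.
import random

random.seed(0)

def median(values):
    """Upper median of a list (adequate for this sketch)."""
    return sorted(values)[len(values) // 2]

def permutation_p_value(group_a, group_b, n_shuffles=10_000):
    """Two-sided p-value for a difference in group medians, by label shuffling."""
    observed = abs(median(group_a) - median(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_shuffles):
        random.shuffle(pooled)
        left, right = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(median(left) - median(right)) >= observed:
            count += 1
    return (count + 1) / (n_shuffles + 1)  # add-one rule avoids p = 0

# Invented asymmetry-index-like data: two groups of 25, one slightly shifted.
group_1 = [random.gauss(0.02, 0.01) for _ in range(25)]
group_2 = [random.gauss(0.00, 0.01) for _ in range(25)]
p = permutation_p_value(group_1, group_2)
print(p)  # small here, since the invented groups genuinely differ
```

Bootstrap resampling, or simulating the whole asymmetry-index construction from plausible distributions of L and R, would work along similar lines; the point is that the null distribution is generated rather than assumed.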

The bottom line is that after this exchange of comments we haven't really got anywhere and I still don't know if the result is significant. I don't really think it's useful to go backwards and forwards through the journal, so I've emailed Dr Savic again asking for access to the numbers so I can check the statistics privately. In astronomy it is quite normal for people to make their data sets publicly available, but that doesn't seem to be the case in neurobiology. I'm not hopeful that they will reply, especially since they branded my comments “harsh” and “inappropriate”. Scientists should know how to take constructive criticism.

Their conclusion may eventually turn out to be right, but the analysis done so far is certainly not robust and it needs further checking. In the meantime, my doubts are not confined to the claimed significance of this specific result, which merely serves to illustrate the extremely poor level of statistical understanding displayed by large numbers of professional researchers. This was one of the things I wrote about in my book From Cosmos to Chaos. I'm very confident that a large fraction of claimed results in the biosciences are based on bogus analyses.

I’ve long thought that scientific journals that deal with subjects like this should employ panels of statisticians to do the analysis independently of the authors and also that publication of the paper should require publication of the raw data. Science advances when results are subject to open criticism and independent analysis. I sincerely hope that Savic & Lindstrom will release their data in order for their conclusions to be checked in this way.

It’s no wonder that there is so much public distrust of science, when such important claims are rushed into the public domain without proper scrutiny.

The New Inflationary Universe

Posted in Finance, Science Politics on October 14, 2008 by telescoper

Among the bits of economic information released by the Office for National Statistics today is one item that academics in all disciplines wanted to hear about: the value of the Retail Prices Index (RPI) in the UK for September 2008, which turned out to be 5.0%.

The reason for the fascination with this number is that, in an unusual spasm of farsightedness, the University and College Union stipulated that the final stage of the pay deal it negotiated in 2006 would be applied in October 2008 and would amount to 2.5% or the RPI, whichever was the greater. Two years ago it seemed a very different world, and 2.5% seemed much the likelier eventuality, but energy and commodity prices surged last year and the RPI now stands at double that figure. So we're all set for a 5% pay rise this month, although we probably won't actually get any more money until the November pay packet arrives.

It would have been even better had UCU chosen the Consumer Prices Index (CPI), which has now overtaken the RPI and stands at 5.2%. The CPI is the government's preferred measure of inflation; it is based on the prices of consumer goods and services, while the RPI also includes housing costs such as mortgage interest payments and council tax.

At least in the short-term, this seems good news for all academics in UK universities.

But even in paradise there was a serpent, and there is a significant danger that some departments' balance sheets will suffer very badly from these extra salary costs. Many already operate on very tight margins. In the longer term there may be mergers and closures followed by redundancies. Also, since the research councils' cash allocations for the next few years are already fixed, an increase in salaries over that already accounted for will mean a corresponding reduction in the number of positions that can be funded, which is bad news for younger people looking for PDRA positions. Given that the Science and Technology Facilities Council's budget wasn't very generous in the first place, causing a crisis in funding for astronomy and particle physics research, the extra wage demands are likely to cause further strain.

Still, a 5% pay rise just before Xmas will be good while it lasts.