Archive for the Science Politics Category

Index Rerum

Posted in Biographical, Science Politics with tags , , , , , , , , , on September 29, 2009 by telescoper

Following on from yesterday’s post about the forthcoming Research Excellence Framework that plans to use citations as a measure of research quality, I thought I would have a little rant on the subject of bibliometrics.

Recently one particular measure of scientific productivity has established itself as the norm for assessing job applications, grant proposals and other related tasks. This is called the h-index, named after the physicist Jorge Hirsch, who introduced it in a paper in 2005. It is quite a simple index to define and to calculate (given an appropriately accurate bibliographic database). The definition is that an individual has an h-index of h if that individual has published h papers with at least h citations each; if the author has published N papers in total, then the other N-h must each have no more than h citations. This is a bit like the Eddington number. A citation, as if you didn't know, is basically an occurrence of that paper in the reference list of another paper.

To calculate it is easy. You just go to the appropriate database – such as the NASA ADS system – search for all papers with a given author and request the results to be returned sorted by decreasing citation count. You scan down the list until the number of citations falls below the position in the ordered list.
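That scan down the ordered list is easy to automate. Here's a minimal sketch in Python (the function name and the sample citation counts are my own, for illustration):

```python
def h_index(citations):
    # Sort citation counts in decreasing order, then find the last
    # position (rank) at which the count is still at least the rank.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break  # counts only decrease from here on
    return h

print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have at least 4 citations
```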

Incidentally, one of the issues here is whether to count only refereed journal publications or all articles (including books and conference proceedings). The argument in favour of the former is that the latter are often of lower quality. I think that is an illogical argument because good papers will get cited wherever they are published. Related to this is the fact that some people would like to count “high-impact” journals only, but if you’ve chosen citations as your measure of quality the choice of journal is irrelevant. Indeed a paper that is highly cited despite being in a lesser journal should if anything be given a higher weight than one with the same number of citations published in, e.g., Nature. Of course it’s just a matter of time before the hideously overpriced academic journals run by the publishing mafia go out of business anyway so before long this question will simply vanish.

The h-index has some advantages over more obvious measures, such as the average number of citations, as it is not skewed by one or two publications with enormous numbers of hits. It also, at least to some extent, represents both quantity and quality in a single number. For whatever reasons in recent times h has undoubtedly become common currency (at least in physics and astronomy) as being a quick and easy measure of a person’s scientific oomph.

Incidentally, it has been claimed that this index is fitted well by the formula h ~ sqrt(T)/2, where T is the total number of citations. This works in my case. If it works for everyone, doesn’t it mean that h is actually of no more use than T in assessing research productivity?
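If you want to try the rule of thumb against your own record, it amounts to no more than this (the figure of ~2900 total citations is an invented example chosen to give h = 27):

```python
import math

def h_estimate(total_citations):
    # Hirsch's empirical rule of thumb: h is roughly sqrt(T)/2,
    # where T is the total number of citations.
    return math.sqrt(total_citations) / 2

print(round(h_estimate(2916)))  # prints 27
```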

Typical values of h vary enormously from field to field – even within each discipline – and vary a lot between observational and theoretical researchers. In extragalactic astronomy, for example, you might expect a good established observer to have an h-index around 40 or more whereas some other branches of astronomy have much lower citation rates. The top dogs in the field of cosmology are all theorists, though. People like Carlos Frenk, George Efstathiou, and Martin Rees all have very high h-indices.  At the extreme end of the scale, string theorist Ed Witten is in the citation stratosphere with an h-index well over a hundred.

I was tempted to put up examples of individuals’ h-numbers but decided instead just to illustrate things with my own. That way the only person to get embarrassed is me. My own index value is modest – to say the least – at a meagre 27 (according to ADS). Does that mean Ed Witten is four times the scientist I am? Of course not. He’s much better than that. So how exactly should one use h as an actual metric, for allocating funds or prioritising job applications, and what are the likely pitfalls? I don’t know the answer to the first one, but I have some suggestions for other metrics that avoid some of its shortcomings.

One of these addresses an obvious deficiency of h. Suppose we have an individual who writes one brilliant paper that gets 100 citations and another who is one author amongst 100 on another paper that has the same impact. In terms of total citations, both papers register the same value, but there’s no question in my mind that the first case deserves more credit. One remedy is to normalise the citations of each paper by the number of authors, essentially sharing citations equally between all those that contributed to the paper. This is quite easy to do on ADS also, and in my case it gives  a value of 19. Trying the same thing on various other astronomers, astrophysicists and cosmologists reveals that the h index of an observer is likely to reduce by a factor of 3-4 when calculated in this way – whereas theorists (who generally work in smaller groups) suffer less. I imagine Ed Witten’s index doesn’t change much when calculated on a normalized basis, although I haven’t calculated it myself.
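The normalized calculation just divides each paper's citation count by its number of authors before the usual scan. A sketch, with invented sample data:

```python
def normalized_h_index(papers):
    # papers: list of (citations, n_authors) pairs. Each paper's
    # citations are shared equally among its authors, and h is then
    # computed on the shared counts in the usual way.
    shares = sorted((c / n for c, n in papers), reverse=True)
    h = 0
    for rank, share in enumerate(shares, start=1):
        if share >= rank:
            h = rank
        else:
            break
    return h

# A sole-author paper with 100 citations counts in full, but a
# 100-author paper with 100 citations contributes only 1 per author:
print(normalized_h_index([(100, 1), (50, 2), (30, 10), (100, 100)]))  # prints 3
```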

Observers  complain that this normalized measure is unfair to them, but I’ve yet to hear a reasoned argument as to why this is so. I don’t see why 100 people should get the same credit for a single piece of work:  it seems  like obvious overcounting to me.

Another possibility – if you want to measure leadership too – is to calculate the h index using only those papers on which the individual concerned is the first author. This is a bit more of a fiddle to do, but mine comes out as 20 when done this way. This is considerably higher than most of my professorial colleagues even though my raw h value is smaller. Using first author papers only is also probably a good way of identifying lurkers: people who add themselves to any paper they can get their hands on but never take the lead. Mentioning no names of course. I propose using the ratio of unnormalized to normalized h-indices as an appropriate lurker detector…

Finally in this list of bibliometrica is the so-called g-index. This is defined in a slightly more complicated way than h: given a set of articles ranked in decreasing order of citation numbers, g is defined to be the largest number such that the top g articles altogether received at least g² citations. This is a bit like h but takes extra account of the average citations of the top papers. My own g-index is about 47. Obviously I like this one because my number looks bigger, but I’m pretty confident others go up even more than mine!
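For completeness, the g-index in the same style (again a minimal sketch with invented data; note that g is always at least h, since the cumulative total grows faster than the individual counts, and this version caps g at the number of papers):

```python
def g_index(citations):
    # g is the largest rank g such that the top g papers together
    # have at least g*g citations.
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(counts, start=1):
        total += count
        if total >= rank * rank:
            g = rank
    return g

print(g_index([10, 8, 5, 4, 3]))  # prints 5: 30 citations in total >= 25
```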

Of course you can play with these things to your heart’s content, combining ideas from each definition: the normalized g-factor, for example. The message is, though, that although h definitely contains some information, any attempt to condense such complicated information into a single number is never going to be entirely successful.

Comments, particularly with suggestions of alternative metrics are welcome via the box. Even from lurkers.

Reffing

Posted in Science Politics with tags , , , on September 28, 2009 by telescoper

No sooner has the dust settled on the 2008 Research Assessment Exercise (RAE) than the Higher Education Funding Council for England (HEFCE) has tabled its proposals for a new system called the Research Excellence Framework (REF) in a 56-page consultation document that you can download and peruse at your leisure.

I won’t try to give a complete account of the new system except to say that apart from the change of acronym there won’t be much different. Many of us hoped that the new framework would involve a lighter touch than the RAE, so we could actually get on with research instead of filling in forms all our lives. Fat chance. You can call me cynical if you like, but I think it’s obvious that once you set up a monstrous bureaucratic nightmare like the RAE it is almost impossible to kill it off. Things like this gather their own momentum and become completely self-serving. The apparatus of research assessment no longer exists to fulfil a particular purpose. It exists because it exists.

It might be useful however to summarise the main changes:

  1. The number of Units of Assessment and sub-panels is to be reduced from 67 to 30 and the number of main assessment panels from 15 to 4. This move is bound to prove controversial as it will clearly reduce the number of specialists involved in the quality appraisal side of things. However, the last RAE produced clear anomalies in the assessment carried out by different panels: physics overall did very poorly compared to other disciplines, for example. Having fewer panels might make it easier to calibrate different subjects. Might.
  2. In REF the overall assessments are going to be based on three elements: research output (60%); impact (25%); and environment (15%). In the last RAE each panel was free to vary the relative contribution of different components to the overall score. Although the “research output” category is similar to the last RAE, it is now proposed to include citation measures in the overall assessment. Officially, that is. It’s an open secret that panel members did look at citations last time anyway.  Citation impact will however be used only for certain science and engineering subjects.  “Impact” is a new element and its introduction is  in line with the government’s agenda to pump research funds into things which will generate wealth, so this measure will probably shaft fundamental physics. “Environment” includes things like postgraduate numbers, research funding and the like; this is also similar to the RAE.
  3. A roughly similar number of experts will be involved as in RAE 2008 – so it will be similarly expensive to run.
  4. The consultation document asks whether the number of outputs submitted per person should be reduced from four to three, and also whether “substantive outputs” (whatever they are) should be “double-weighted”.
  5. The results will be presented in terms of “profiles” as in 2008, with the percentage of activity at each level being given.
  6. The consultation also suggests honing the description of “world-leading” (4*) and “internationally excellent” (3*) to achieve greater discrimination at the top end of the scale. This is deeply worrying, as well as completely absurd. The last RAE applied a steeply rising funding formula to the scores so that 4*:3*:2*:1* was weighted 7:3:1:0. However the fraction of  work in each category is subject to considerable uncertainty, amplified by the strong weighting.  If the categories are divided further then I can see an even steeper weighting emerging, with the likely outcome that small variations in the (subjective) assessment will lead to drastic variations in funding. Among the inevitable consequences of this will be that  some excellent research will lose out.
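To see just how touchy the funding formula is, consider two hypothetical departments with almost identical quality profiles under the 7:3:1:0 weighting (the profile numbers are invented purely for illustration):

```python
WEIGHTS = {4: 7, 3: 3, 2: 1, 1: 0}  # RAE funding weight per star level

def funding_score(profile):
    # profile maps star level -> fraction of the department's activity
    return sum(WEIGHTS[star] * fraction for star, fraction in profile.items())

dept_a = {4: 0.20, 3: 0.40, 2: 0.30, 1: 0.10}
dept_b = {4: 0.25, 3: 0.35, 2: 0.30, 1: 0.10}  # 5% of activity nudged from 3* to 4*

print(round(funding_score(dept_a), 2))  # 2.9
print(round(funding_score(dept_b), 2))  # 3.1 -- about 7% more money for a marginal shift
```

A five-percentage-point change in the (subjective) split between 4* and 3* moves the funding score by about 7%; a steeper weighting would amplify this further.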

No doubt university administrators across the United Kingdom will already be plotting how best to play the new system. I think we need to remember, though, that deep cuts in public spending have been promised by both major political parties and there is a general election due next year. I can see the overall  budget for university research being slashed so we’ll be fighting for shares of a shrinking pot. Killing off the bureaucracy would save money, but somehow I doubt that will be on the agenda.

Future Fees

Posted in Science Politics with tags , , , on September 21, 2009 by telescoper

There’s been a lot of news coverage today arising from a new report by the Confederation of British Industry (CBI) which argues that students should in future pay higher tuition fees to go to British universities. As you can probably imagine this has generated quite a lot of comment, but since some of the remarks I’ve heard are based on misunderstandings I thought I’d give my angle on what is happening and what the implications are.

For a start, the tuition fees paid by students at present are not the sole component (or even the largest part) of the income paid to universities for undergraduate education. The way the funding councils work is to pay each university directly an amount for teaching each student (called the recurrent grant). This amount depends on the course. There is a basic level (which for 2009/10 is £3,947), but this is increased for subjects which require experimental work. The result is that there are four funding bands: A (which is clinical medicine, the most expensive); B (which includes science subjects such as physics); C (which includes subjects with a laboratory or fieldwork element); and D (everything else).

The level of funding for an individual student in each price band in 2009/10 is

  • band A – £15,788
  • band B – £6,710
  • band C – £5,131
  • band D – £3,947

Physics (and Astronomy) is in band B, so the department receives £6,710 directly from the government for each student doing a course in these subjects.

Introduced in 2006, the “top-up” fee (currently £3225) is in addition to this, although it does not have to be paid immediately by the students. They can borrow the money at an advantageous interest rate and only have to pay it back when they have left their University and started to earn money at a level sufficient to trigger the repayment. Here in Wales the situation is a little bit more complicated because the students don’t pay the full “top-up” fee payable in England. Instead they pay a lower rate (currently £1285) and the Welsh Assembly Government makes good the shortfall to the University. In Scotland there are no tuition fees payable by the students.

Anyway, for Physics at least, the tuition fee is only about one-third the total income for each student. It looks, then, like the government does actually pay the lion’s share of the cost of higher education, especially in science and medicine. However, it is worth remarking that if the UK devoted the same share of its GDP as the OECD mean (1.1%) then students would not have to pay top-up fees at all in order to fund the entire University system at an adequate level. Clearly a political decision was made that funding Trident, ID cards,  and wars in Iraq and Afghanistan was a much better use for taxpayers’ money than providing universal free higher education.
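Putting the figures above together, the fee's share of per-student teaching income for each band works out as follows (2009/10 figures from the text; the function itself is just my own illustration, ignoring other income streams):

```python
RECURRENT_GRANT = {"A": 15788, "B": 6710, "C": 5131, "D": 3947}  # per student, 2009/10
TOP_UP_FEE = 3225  # full fee payable in England, 2009/10

def fee_share(band):
    # Fraction of the per-student teaching income (recurrent grant
    # plus top-up fee) that the fee itself represents.
    return TOP_UP_FEE / (RECURRENT_GRANT[band] + TOP_UP_FEE)

print(f"Physics (band B): {fee_share('B'):.0%}")  # roughly a third
```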

I don’t actually object to the principle that students should make a contribution to the cost of their university education but I think the fairest way to do that is via the taxation system. There are many problems with the system we have, which is an attempt at a British compromise that actually gives us the worst of all worlds. The Labour party was scared to allow fees to be set too high for fear of alienating its traditionalists by discouraging those from poorer background from going to university. On the other hand, it didn’t want to set them too low because that wouldn’t bring in sufficient extra money. In the end they settled at an in-between level, i.e. one that achieved very little and alienated people anyway.

For a start the level of top-up income is not really high enough to pay for the investment that is needed. Many leading universities are in fact making redundancies because the additional revenue  realised by top-up fees was not enough to meet the rising pay bill resulting from a generous salary settlement last year. Moreover, the idea that top-up fees would satisfy the right-wingers by introducing some kind of “market” was a complete delusion. All universities (big and small, old and new, good and less good) charged the same level of fee.

I went to university in the 1980s when the system was very different. There were no top-up fees and, because I wasn’t from a wealthy family, I received a full maintenance grant to cover the cost of living and studying during the three years of my degree. That’s the big difference nowadays: nobody gets a full maintenance grant. Universities do use some of their tuition fee money to provide contributions to poorer students but they generally amount to a few thousand pounds a year. That’s not enough to live on, so most students either rely on their parents to help them or have to work during term-time. I never had to do either of those.

Anyway the CBI report says that the level of tuition fees should increase to around £5000, the student loan interest rate should increase and there should be fewer bursaries. Even within its own terms I don’t think this makes much sense. In fact, I could understand them better if they had argued to remove the cap altogether. The posh places – Oxbridge and perhaps a few others – could probably fill their places charging whatever they liked, and could afford a fairly generous bursary scheme that might encourage a few talented working-class kids to go there, to ease these institutions’ consciences. Other universities would be forced to set their own fee levels according to the demands of income and recruitment. The system would become increasingly differentiated by cost and quality, but students from poorer backgrounds would be excluded to an even greater extent than they are now. I wouldn’t like a university system built along those lines, but it seems to me that it would suit the mentality of the CBI.

The big issue about today’s debate, however, is that neither the Labour nor the Conservative Party is going to say what they’re going to do about university funding until after the general election next year. Certainly  neither of them will say whether the fee will go up to £5000. For once, I agree with Sally Hunt  (general secretary of the Universities and Colleges Union) who has urged them both to come clean. Keeping silent about this when other public sector cuts are clearly on the table is both spineless and dishonest. Just what you’d expect from politicians, in fact.

For what it’s worth I predict that after the next election higher education will suffer a classic double-whammy. Whichever party takes power, the resulting government will be forced to make large-scale cuts in public spending to keep the country’s finances under control. I think what they’ll do is cut the unit of resource (probably by a large amount, say 25%) at the same time as increasing the tuition fee element. They can then claim that University funding has been protected while at the same time cutting the cost of the system to the public purse. Students will end up paying more for less. But, hey, at least it will keep the bankers happy and that’s what we’re here for after all.

Atlantes

Posted in Science Politics, The Universe and Stuff with tags , , , , , , on September 10, 2009 by telescoper

I’ve just noticed a  post on another blog about the  meeting of the Herschel ATLAS consortium that’s  going on in Cardiff at the moment, so I thought I’d do a quickie here too. Actually I’ve only just been accepted into the Consortium so quite a lot of the goings-on are quite new to me.

The Herschel ATLAS (or H-ATLAS for short) is the largest open-time key project involving Herschel. It has been awarded 600 hours of observing time to survey 550 square degrees of sky in 5 wavelength bands: 110, 170, 250, 350, & 500 microns. It is hoped to detect approximately 250,000 galaxies, most of them in the nearby Universe, but some will undoubtedly turn out to be very distant, with redshifts of 3 to 4; these are likely to be very interesting for studies of galaxy evolution.

Herschel is currently in its performance verification (PV) phase, following which there will be a period of science validation (SV). During the latter the ATLAS team will have access to some observational data to have a quick look to see that it’s  behaving as anticipated. It is planned to publish a special issue of the journal Astronomy & Astrophysics next year that will contain key results from the SV phase, although in the case of ATLAS many of these will probably be quite preliminary because only a small part of the survey area will be sampled during the SV time.

Herschel seems to be doing fine, with the possible exception of the HIFI instrument which is currently switched off owing to a fault in its power supply. There is a backup, but the ESA boffins don’t want to switch it back on and risk further complications until they know why it failed in the first place. The problem with HIFI has led to some rejigging of the schedule for calibrating and testing the other two instruments (SPIRE and PACS) but both of these are otherwise doing well.

The data for H-ATLAS proper hasn’t started arriving yet so the meeting here in Cardiff was intended to sort out the preparations, plan who’s going to do what, and sort out some organisational issues. With well over a hundred members, this project has to think seriously about quite a lot of administrative and logistical matters.

One of the things that struck me as particular difficult is the issue of authorship of science papers. In observational astronomy and cosmology we’re now getting used to the situation that has prevailed in experimental particle physics for some time, namely that even short papers have author lists running into the hundreds. Theorists like me usually work in teams too, but our author lists are, generally speaking, much shorter. In fact I don’t have any publications  yet with more than six or seven authors; mine are often just by me and a PhD student or postdoc.

In a big consortium, the big issue is not so much who to include, but how to give appropriate credit to the different levels of contribution. Those senior scientists who organized and managed the survey are clearly key to its success, but so also are those who work at the coalface and are probably much more junior. In between there are individuals who supply bits and pieces of specialist software or extra comparison data. Nobody can pretend that everyone in a list of 100 authors has made an identical contribution, but how can you measure the differences and how can you indicate them on a publication? Or  shouldn’t you try?

Some suggest that author lists should always be alphabetical, which is fine if you’re “Aarseth” but not if you’re “Zel’dovich”. This policy would, however, benefit “al”, a prolific collaborator who never seems to make it as first author…

When astronomers write grant applications for STFC one of the pieces of information they have to include is a table summarising their publication statistics. The total number of papers written has  to be given, as well as the number in which the applicant  is  the first author on the list,  the implicit assumption being that first authors did more work than the others or that first authors were “leading” the work in some sense.

Since I have a permanent job and  students and postdocs don’t, I always make junior collaborators  first author by default and only vary that policy if there is a specific reason not to. In most cases they have done the lion’s share of the actual work anyway, but even if this is not the case it is  important for them to have first author papers given the widespread presumption that this is a good thing to have on a CV.

With more than 100 authors, and a large number of  collaborators vying for position, the chances are that junior people will just get buried somewhere down the author list unless there is an active policy to protect their interests.

Of course everyone making a significant contribution to a discovery has to be credited, and the metric that has been used for many years to measure scientific productivity is the number of authored publications, but it does seem to me that this system must have reached breaking point when author lists run to several pages!

It was all a lot easier in the good old days when there was no data…

PS. Atlas was a titan who was forced to hold the sky on his shoulders for all eternity. I hope this isn’t expected of members of the ATLAS consortium, none of whom are titans anyway (as far as I can tell). The plural of Atlas is Atlantes, by the way.

Much Ado About a Null Result

Posted in Science Politics, The Universe and Stuff with tags , , , on August 20, 2009 by telescoper

In today’s Nature there’s an article outlining the current upper limits on the existence of a stochastic cosmological background of gravitational waves. The basis of the analysis presented in the paper is a combination of data from two larger international collaborations, called VIRGO and LIGO. Cardiff University is a member of the latter, so I suppose I should be careful about what I say…

These experiments have achieved incredible sensitivity – they can measure distortions that are a tiny fraction of an atomic nucleus in scale – but because gravity is such a very weak force they still haven’t managed to find direct evidence of gravitational waves. The next generation of these laser interferometers – Advanced LIGO – should get within hailing distance of a detection but in the meantime we have to make do with upper limits. Since the sensitivity of the instruments is so well calibrated, the lack of a signal can yield interesting information. The Nature paper is quite interesting in that it summarizes the constraints that can be placed in such a way on some models of the early Universe. Mostly, though, these are “exotic” models that have already been excluded by other means. If I’ve got my sums right the stochastic gravitational wave background expected to be produced within the standard “concordance” cosmology, in which gravitational wave modes are excited by cosmic inflation, is at least three orders of magnitude lower than current experimental sensitivity.

I can’t resist including the following excerpts from a press release, produced by the Media Relations Department at Caltech whose spin doctors have apparently been hard at work.

Pasadena, Calif.—An investigation by the LIGO (Laser Interferometer Gravitational-Wave Observatory) Scientific Collaboration and the Virgo Collaboration has significantly advanced our understanding of the early evolution of the universe.

Analysis of data taken over a two-year period, from 2005 to 2007, has set the most stringent limits yet on the amount of gravitational waves that could have come from the Big Bang in the gravitational wave frequency band where LIGO can observe. In doing so, the gravitational-wave scientists have put new constraints on the details of how the universe looked in its earliest moments.

Much like it produced the cosmic microwave background, the Big Bang is believed to have created a flood of gravitational waves—ripples in the fabric of space and time—that still fill the universe and carry information about the universe as it was immediately after the Big Bang. These waves would be observed as the “stochastic background,” analogous to a superposition of many waves of different sizes and directions on the surface of a pond. The amplitude of this background is directly related to the parameters that govern the behavior of the universe during the first minute after the Big Bang.

and

“Since we have not observed the stochastic background, some of these early-universe models that predict a relatively large stochastic background have been ruled out,” says Vuk Mandic, assistant professor at the University of Minnesota.

“We now know a bit more about parameters that describe the evolution of the universe when it was less than one minute old,” Mandic adds. “We also know that if cosmic strings or superstrings exist, their properties must conform with the measurements we made—that is, their properties, such as string tension, are more constrained than before.”

This is interesting, he says, “because such strings could also be so-called fundamental strings, appearing in string-theory models. So our measurement also offers a way of probing string-theory models, which is very rare today.”

“This result was one of the long-lasting milestones that LIGO was designed to achieve,” Mandic says. Once it goes online in 2014, Advanced LIGO, which will utilize the infrastructure of the LIGO observatories and be 10 times more sensitive than the current instrument, will allow scientists to detect cataclysmic events such as black-hole and neutron-star collisions at 10-times-greater distances.

“Advanced LIGO will go a long way in probing early universe models, cosmic-string models, and other models of the stochastic background. We can think of the current result as a hint of what is to come,” he adds.

“With Advanced LIGO, a major upgrade to our instruments, we will be sensitive to sources of extragalactic gravitational waves in a volume of the universe 1,000 times larger than we can see at the present time. This will mean that our sensitivity to gravitational waves from the Big Bang will be improved by orders of magnitude,” says Jay Marx of the California Institute of Technology, LIGO’s executive director.

“Gravitational waves are the only way to directly probe the universe at the moment of its birth; they’re absolutely unique in that regard. We simply can’t get this information from any other type of astronomy. This is what makes this result in particular, and gravitational-wave astronomy in general, so exciting,” says David Reitze, a professor of physics at the University of Florida and spokesperson for the LIGO Scientific Collaboration.

If hyperbole is what you’re looking for, go no further. There’s nothing wrong with presenting even null results in a positive light, but I don’t think this paints a very balanced picture of the field. For example, early Universe models involving cosmic strings were already severely constrained before these results, so we know that they don’t have a significant effect on the evolution of cosmic structure anyway.

Clearly the political intention was to flag the importance of Advanced LIGO, although even that will probably be unable to detect the cosmological gravitational-wave background.  Overstatements contained in press releases of this type usually prove counterproductive in the long run.

Critical Theory

Posted in Art, Music, Science Politics with tags , , , , , on August 18, 2009 by telescoper

Critics say the strangest things.

How about this, from James William Davidson, music critic of The Times from 1846:

He has certainly written a few good songs, but what then? Has not every composer that ever composed written a few good songs? And out of the thousand and one with which he deluged the musical world, it would, indeed, be hard if some half-dozen were not tolerable. And when that is said, all is said that can justly be said of Schubert.

Or this, by Louis Spohr, written in 1860 about Beethoven’s Ninth (“Choral”) Symphony

The fourth movement is, in my opinion, so monstrous and tasteless and, in its grasp of Schiller’s Ode, so trivial that I cannot understand how a genius like Beethoven could have written it.

No less an authority than  Grove’s Dictionary of Music and Musicians (Fifth Edition) had this to say about Rachmaninov

Technically he was highly gifted, but also severely limited. His music is well constructed and effective, but monotonous in texture, which consists in essence mainly of artificial and gushing tunes…The enormous popular success some few of Rachmaninov’s works had in his lifetime is not likely to last and musicians never regarded it with much favour.

And finally, Lawrence Gillman wrote this in the New York Tribune of February 13 1924 concerning George Gershwin’s Rhapsody in Blue:

How trite and feeble and conventional the tunes are; how sentimental and vapid the harmonic treatment, under its disguise of fussy and futile counterpoint! Weep over the lifelessness of the melody and harmony, so derivative, so stale, so inexpressive.

I think I’ve made my point. We all make errors of judgement and music critics are certainly no exception. The same no doubt goes for literary and art critics too. In fact, I’m sure it would be quite easy to dig up laughably inappropriate comments made by reviewers across the entire spectrum of artistic endeavour. Who’s to say these comments are wrong anyway? They’re just opinions. I can’t understand anyone who thinks so little of Schubert, but then an awful lot of people like to listen to what sounds to me like complete dross. There even appear to be some people who disagree with the opinions I expressed yesterday!

What puzzles me most about the critics is not that they make “mistakes” like these – they’re only human after all – but why they exist in the first place. It seems extraordinary to me that there is a class of people who don’t do anything creative themselves but devote their working lives to criticising what is done by others. Who should care what they think? Everyone is entitled to an opinion, of course, but what is it about a critic that implies we should listen to their opinion more than anyone else’s?

(Actually, to be precise, Louis Spohr was also a composer but I defy you to recall any of his works…)

Part of the idea is that by reading the notices produced by a critic the paying public can decide whether to go to the performance, read the book or listen to the record. However, the correlation between what is critically acclaimed and what is actually good (or even popular) is tenuous at best. It seems to me that, especially nowadays with so much opinion available on the internet, word of mouth (or web) is a much better guide than what some geezer writes in The Times. Indeed, the Opera reviews published in the papers are so frustratingly contrary to my own opinion that I don’t bother to read them until after the performance, perhaps even after I’ve written my own little review on here. Not that I would mind being a newspaper critic myself. The chance not only to get into the Opera for free but also to get paid for spouting on about it afterwards sounds like a cushy number to me. Not that I’m likely to be asked.

In science, we don’t have legions of professional critics, but reviews of various kinds are nevertheless essential to the way science moves forward. Applications for funding are usually reviewed by others working in the field and only those graded at the very highest level are awarded money. The powers-that-be are increasingly trying to impose political criteria on this process, but it remains a fact that peer review is the crucial part of the process. It’s not just the input that is assessed either. Papers submitted to learned journals are reviewed by (usually anonymous) referees, who often require substantial changes to be made before the work can be accepted for publication.

We have no choice but to react to these critics if we want to function as scientists. Indeed, we probably pay much more attention to them than artists do to critics in their particular fields. That’s not to say that these referees don’t make mistakes either. I’ve certainly made bad decisions myself in that role, although they were all made in good faith. I’ve also received comments that I thought were unfair or unjustifiable, but at least I knew they were coming from someone who was a working scientist.

I suspect that the use of peer review in assessing grant applications will remain in place for some considerable time. I can’t think of an alternative, anyway. I’d much rather have a rich patron so I didn’t have to bother writing proposals all the time, but that’s not the way it works in either art or science these days.

However, it does seem to me that the role of referees in the publication process is bound to become redundant in the very near future. Technology now makes it easy to place electronic publications on an archive where they can be accessed freely. Good papers will attract attention anyway, just as they would if they were in refereed journals. Errors will be found. Results will be debated. Papers will be revised. The quality mark of a journal’s endorsement is no longer needed if the scientific community can form its own judgement, and neither are the monstrously expensive fees charged to institutes for journal subscriptions.

A Degree of Value

Posted in Science Politics with tags , , , on August 7, 2009 by telescoper

Many column-inches have been devoted in the newspapers this week to the issue of University education, after provocative remarks by Phil Willis to the effect that the uncertainty over the “value” of degrees meant the system was descending into farce. Willis is the Chair of the Parliamentary Committee on Innovation, Universities, Science and Skills, which has just produced a highly critical report about the (lack of) regulation of teaching standards in UK Universities.

The Times Higher responded yesterday with an editorial accusing Universities of complacency over the issue of standards, and also ran a piece in which the Chief of the Quality Assurance Agency (QAA) tried to answer some of the criticisms of his outfit contained in the report.

There’s been a great deal of discussion over on the e-astronomer about this issue, and much of what I would say has already been said over there, so I won’t say it all here as well. However, there are a few points that I’d like to note.

First, most of the press coverage of this story has focussed on the fact that Universities are now awarding more first-class degrees than they used to. Actually, the number has almost doubled within a decade. Degrees must be getting easier in order for this to be the case, the argument goes. The government strenuously denies charges of dumbing down when A-level results get better every year but has a go at Universities when the same thing happens. So there’s a charge of hypocrisy for a start. However, I think the real reason for grade creep at both A-level and degree stages is that the current education system places a ridiculously high emphasis on compartmentalised learning and assessment methods that allow the students to succeed by cramming and question-spotting without any real knowledge. This has happened at Maths and Physics A-level with a particularly negative effect, and is beginning to happen in Universities too through the enforced modularisation of the curriculum that happened in the 1990s. The way to maintain and improve standards, at least in science education, is to reduce the amount of examination and make the examinations less predictable. The answer is not to entangle Universities in the clutches of a beefed-up QAA.

I don’t know if the “standard” of a degree in Physics is lower now than it was ten years ago, nor even what it means to say that is the case. I certainly do think, however, that some of the papers I’m involved with now as a setter or a marker are harder in some ways than the ones I sat when I was a student about 25 years ago. I’m also conscious that I didn’t have to work to support myself most of the time when I was studying. What has changed a lot – and I hope the current generation of students believe this, because I really believe it’s true – is that Universities now put far more effort into teaching than they did when I was a student.

I want to make it clear that I certainly do not think that present-day students are less clever or less industrious than previous generations, or that they are just playing the system. One piece of evidence refutes that view very easily. In the questionnaires we give to students, they very often express the strongest appreciation for courses they consider hard rather than for those they consider easy. Students don’t like dumbing down any more than staff do. They just want things to be done fairly.

I should add that I also think, within Physics, that academic standards are roughly comparable at the present time from University to University in the UK. I mean, in Physics at any rate, I honestly do believe that a First from Cardiff is worth the same as a First from Cambridge. I’ve been an external (or internal) examiner at several institutes over the last decade (including Cambridge) and, although their curricula vary a bit, I’m convinced that the academics try very hard to maintain the level of difficulty while at the same time being fair to the students by providing much more help than they used to. Many physicists, however, accept that forcing their syllabus into little modular boxes has made this circle very difficult to square.

I can’t speak for other subjects, of course. Is a first class degree in Media Studies from Nottingham Trent University worth as much (or indeed as little) as one from the University of Glamorgan? Perhaps. Perhaps not. Who knows?

However, it’s not really the issue of grades in itself that worries me most. Contained in the report is a scary section claiming that the link between “teaching quality” and research is “weak at best”. If, it says, it is essential for undergraduate teaching to be delivered within a strong research environment, then research funding should be spread around. If not, then it should be concentrated.

The argument contained in the report is a masterpiece of non sequitur. Where is the evidence that research benefits from being carried out in a smaller number of departments? And if you deny a connection between teaching and research, why should the higher education funding agencies be involved in funding research anyway? And the evidence is always going to be “weak” when you talk about such ill-defined concepts. What does “teaching quality” mean? How do you measure it? The QAA doesn’t know and neither do I.

The problem underpinning this issue is that, in 1992, the (Conservative) government allowed the polytechnics to become universities. The various research assessment exercises were introduced because, prior to 1992, all Universities received research funding in proportion to their undergraduate numbers. It was assumed, you see, that a University did teaching and research. However, the new Universities (or old Polytechnics) didn’t always have research activities in the areas they were teaching, and there wasn’t enough money to fund all 120+ new Universities on the pre-1992 basis. Thus the idea was conceived to concentrate this element of research funding (called QR) in those departments that were actually doing research. That’s not unreasonable, but as bureaucracies always do, the system of research assessment has become self-serving. Sufficient concentration was actually achieved a decade ago, but we still have to endure pointless reshuffling exercises every few years.

The big changes of 1992 left Physics in a special position. The number of Physics (or Physics & Astronomy) departments in the UK entered into the last Research Assessment Exercise was only 42. About two-thirds of UK universities do not have research activity in this area. Very few Polytechnics either taught Physics to undergraduates or did research in Physics, and very few started such programmes when they became Universities. Why? Because there is absolutely no way you can teach a modern Physics degree outside a research department. It would be impossible to keep up to date, impossible to provide appropriate projects, and impossible to retain quality staff to do the teaching because they would clearly want to be doing physics as well as teaching it. In Physics the link between teaching and research is not “weak”. The pre-1992 situation demonstrates how crucial it really is.

I can’t speak for other subjects, but I suspect much of this applies across all disciplines. That’s why I think a University in which students are taught by people who are not doing research in the field they are teaching just shouldn’t be called a University. By definition.

The Polytechnics had much to offer this country, but their contribution was largely lost when they became second-rate Universities. But of course you’ll never find a politician who will admit that it was a mistake.

Singh Along

Posted in Science Politics with tags , , , on August 4, 2009 by telescoper

One of the nice things about the blog interface at WordPress is the way it flags up posts from other blogs that might be related to those on your own site. A good example is an item at a site which is quite new to me called Cubik’s Rube. This particular one alerted me to an update about the Simon Singh libel action which I’ve blogged about before, in a post that generated a great deal of debate and discussion.

If you recall, Singh is being sued for libel by the British Chiropractic Association (BCA)  for damages after he labelled some of their treatments bogus in an article written in The Guardian. The newspaper settled and withdrew the piece from its website but Singh decided to fight the action. At a pre-trial hearing the judge ruled that his use of the word bogus would be interpreted as meaning that the therapies being offered by the BCA were not only worthless, but that the BCA  knew they were worthless. To win his case Singh would have to prove both these claims were true. Simon Singh claimed he never intended that meaning and vowed to appeal. That was the situation in June 2009, at the time of my previous post.

Things moved on a bit while I was away last week. In an order sealed on 30 July 2009 the Court of Appeal refused Singh leave to appeal, piling further pressure on him to settle the action and restricting his options still further. For a clearer explanation of the legal issues involved than I could ever manage, see the article by famous legal blogger Jack of Kent.

One side issue is worth mentioning, however, which is that it is apparently unclear from a legal point of view whether the BCA has standing to sue for defamation at all since it is a corporation without shareholders. It seems strange that such a basic issue would be unresolved. Surely there must be relevant precedents?

Meanwhile the BCA has issued a conciliatory statement, implying that it would prefer for the case to be settled out of court. This seems a bit surprising given that they would appear to hold all the cards, but the answer probably lies in the appalling public relations gaffe it has made over its presentation of alleged evidence for its therapies.

Challenged (largely by bloggers) to present evidence for the effectiveness of its therapies for certain paediatric conditions (such as asthma, infantile colic and even bed-wetting), the BCA produced a report containing a “plethora” of evidence, dated 17th June 2009. This dossier – cobbled together from 19 research papers, most of which don’t really support their case at all – turns out to have been the epitome of dodginess and over the last few weeks it has been comprehensively dissected, discredited, debunked and demolished all over the blogosphere. A recent editorial in the British Medical Journal described its own refutation of the BCA’s claims to be “complete”.

I doubt if the BCA wants to see its credibility further undermined by having its so-called evidence savaged again in open court, which probably explains why they might prefer to settle than carry on the case. Nothing said in court can be subject to the libel laws.

But it’s an amazing blunder by the BCA to have presented such a shaky collection of evidence in the first place. All it has achieved is to make them look like fools.

Anyway, it’s now a peculiar situation. It still looks like Singh can’t win the case unless he can prove the BCA are dishonest rather than merely inept. And the BCA stands to fall even lower in public esteem if it goes to trial. If Singh can afford it he could fight on regardless and hope that if he loses the damages will be bearable. Morally, though, he will have won.

But the really impressive thing to me is the way that expert bloggers have forced the BCA into a corner. I think this is probably a sign of the way science is changing through use of the internet’s ability to communicate complex things so rapidly.

Advanced Fellowships

Posted in Science Politics with tags , , , on July 11, 2009 by telescoper

This is just a quick Newsflash that UK Astronomers will be interested in (and depressed by). My attention was drawn to it yesterday by Frazer Pearce of Nottingham.

The Science and Technology Facilities Council (STFC) has decided in its finite wisdom to cut in half the number of Advanced Fellowships (AFs) it awards each year, that is from 12 to 6, that number to cover all of Astronomy and Particle Physics.

These fellowships are awarded to researchers who do not have a permanent position but wish to pursue research, and are designed to further the careers of individuals with outstanding potential. They last 5 years – longer than the usual 2-3 year postdoctoral positions – and have been for many a scientist an important stepping-stone to an academic career. A very large fraction of my colleagues who have permanent positions were awarded one of these fellowships when they were run by PPARC (including Frazer), as was I, although, being an Oldie, mine was even pre-PPARC and so was in fact awarded by SERC. Of course the fact that they gave me one doesn’t itself serve as much of a recommendation for continuing them, but it is worth drawing attention to the huge amount of high-quality research done in the UK by holders of these Fellowships.

A number of people have expressed to me their shock at this decision but it doesn’t surprise me at all. For one thing, it’s an open secret that STFC considers the academic community in these areas to be too large so the last thing it wants is more people getting permanent jobs through the AF route.  In any case, STFC’s prime concern is with facilities, not with scientific research.

Who needs half a dozen top class scientists when you can have Moonlite instead?

Slippage and Slideage

Posted in Science Politics with tags , on July 3, 2009 by telescoper

Back from the week’s exertions I’ve just realised that I missed the announcement from the Science and Technology Facilities Council (STFC) of the changes to their programme as a result of the 2009 budget settlement.

You can find the full statement here, but of immediate concern to astronomers is the plan to cut funding for the Cambridge Astronomical Survey Unit (CASU) and the Wide-Field Astronomy Unit (WFAU) at Edinburgh. I’m not sure how much their support is to be reduced and what the long-term implications of the cuts will be.

Expenditure on the outrageously useless space gizmo Moonlite will be delayed until next year, thus saving another bit of money. In my opinion, it would have been better simply to have cancelled this one altogether and diverted the funding into research grants which are instead to be held at the levels they were cut to last year.

Other savings will be made by “rephasing” (i.e. delaying) other projects in particle and nuclear physics, while some others have in any case started late for other reasons.

Any optimism there might have been about a better settlement at the next Comprehensive Spending Review has now totally evaporated, however, and I wouldn’t bet against STFC having to cope with further large cuts (in cash terms) a few years down the line. There are several ongoing consultation exercises (see Andy’s discussion and my earlier post for details) which will no doubt be used to draw up hit lists for further cuts if and when needed.

The immediate impact of this review exercise on the astronomy programme seems considerably less brutal than I feared, but it may be that this is simply a holding operation, and that the really drastic decisions will happen later, after money has already been spent on projects that are really already doomed. Still, a stay of execution is better than immediate termination.