I’ve just noticed a post on another blog about the meeting of the Herschel ATLAS consortium that’s going on in Cardiff at the moment, so I thought I’d do a quickie here too. Actually, I’ve only just been accepted into the Consortium, so a lot of the goings-on are quite new to me.

The Herschel ATLAS (or H-ATLAS for short) is the largest open-time key project involving Herschel. It has been awarded 600 hours of observing time to survey 550 square degrees of sky in 5 wavelength bands: 110, 170, 250, 350, & 500 microns. It is expected to detect approximately 250,000 galaxies, most of them in the nearby Universe, but some will undoubtedly turn out to be very distant, with redshifts of 3 to 4; these are likely to be very interesting for studies of galaxy evolution.

Herschel is currently in its performance verification (PV) phase, following which there will be a period of science validation (SV). During the latter the ATLAS team will have access to some observational data to have a quick look to see that it’s behaving as anticipated. It is planned to publish a special issue of the journal Astronomy & Astrophysics next year that will contain key results from the SV phase, although in the case of ATLAS many of these will probably be quite preliminary because only a small part of the survey area will be sampled during the SV time.

Herschel seems to be doing fine, with the possible exception of the HIFI instrument which is currently switched off owing to a fault in its power supply. There is a backup, but the ESA boffins don’t want to switch it back on and risk further complications until they know why it failed in the first place. The problem with HIFI has led to some rejigging of the schedule for calibrating and testing the other two instruments (SPIRE and PACS) but both of these are otherwise doing well.

The data for H-ATLAS proper hasn’t started arriving yet, so the meeting here in Cardiff was intended to sort out the preparations, plan who’s going to do what, and settle some organisational issues. With well over a hundred members, this project has to think seriously about quite a lot of administrative and logistical matters.

One of the things that struck me as particularly difficult is the issue of authorship of science papers. In observational astronomy and cosmology we’re now getting used to the situation that has prevailed in experimental particle physics for some time, namely that even short papers have author lists running into the hundreds. Theorists like me usually work in teams too, but our author lists are, generally speaking, much shorter. In fact I don’t yet have any publications with more than six or seven authors; mine are often just by me and a PhD student or postdoc.

In a big consortium, the big issue is not so much whom to include, but how to give appropriate credit to the different levels of contribution. Those senior scientists who organized and managed the survey are clearly key to its success, but so also are those who work at the coalface and are probably much more junior. In between there are individuals who supply bits and pieces of specialist software or extra comparison data. Nobody can pretend that everyone in a list of 100 authors has made an identical contribution, but how can you measure the differences and how can you indicate them on a publication? Or shouldn’t you try?

Some suggest that author lists should always be alphabetical, which is fine if you’re “Aarseth” but not if you’re “Zel’dovich”. This policy would, however, benefit “al”, a prolific collaborator who never seems to make it as first author.

When astronomers write grant applications for STFC, one of the pieces of information they have to include is a table summarising their publication statistics. The total number of papers written has to be given, as well as the number in which the applicant is the first author on the list, the implicit assumption being that first authors did more work than the others or that they were “leading” the work in some sense.

Since I have a permanent job and students and postdocs don’t, I always make junior collaborators first author by default and only vary that policy if there is a specific reason not to. In most cases they have done the lion’s share of the actual work anyway, but even if this is not the case it is important for them to have first author papers given the widespread presumption that this is a good thing to have on a CV.

With more than 100 authors, and a large number of collaborators vying for position, the chances are that junior people will just get buried somewhere down the author list unless there is an active policy to protect their interests.

Of course everyone making a significant contribution to a discovery has to be credited, and the metric that has been used for many years to measure scientific productivity is the number of authored publications, but it does seem to me that this system must have reached breaking point when author lists run to several pages!

It was all a lot easier in the good old days when there was no data…

PS. Atlas was a titan who was forced to hold the sky on his shoulders for all eternity. I hope this isn’t expected of members of the ATLAS consortium, none of whom are titans anyway (as far as I can tell). The plural of Atlas is Atlantes, by the way.

10 Responses to “Atlantes”

  1. Rhodri Evans Says:

    Very interesting. What has been the policy of places like CERN or SLAC or Fermilab on the author issue? SDSS has produced some papers with pretty long author lists. I should ask the folks at Chicago what the policy on those has been….

  2. SDSS actually released a lot of its data into the public domain quite quickly, and papers written after that had, I think, a pretty open authorship policy. But the initial papers no doubt had author lists produced according to some pre-ordained policy.

    I’d be interested to hear what the precise policy was for SDSS, and also for particle physics experiments which typically have even longer lists of authors, running perhaps into the thousands!

  3. Some big consortia already have an “Aardvark et al” rule written into their constitutions, while others have rules on author ordering that reward effort. So what happens when the first sort of team and the second sort want to collaborate with each other on joint projects? An irresistible force meets an immovable object? Even with goodwill on all sides, we can’t always expect the working practices of different communities to be trivial obstacles.

    One other thing that strikes me is that the authorship conventions depend strongly on the ratio of the number of significant results a team expects to have to the team size. So one-shot projects like high-energy physics experiments tend to be Aardvarkers, but Herschel ATLAS has much more varied legacy value.

    • An even more difficult question to answer in these days of Research Assessment is how to apportion citations for large collaborations. If a paper with 100 authors generates 1000 citations, is it reasonable to credit each author with all of them? Or should the “impact” be weighted, giving each author a fraction 1/N? I rather think the latter is fairer, but observers will no doubt disagree as they tend to work in larger groups than theorists!
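The 1/N weighting suggested in this comment is trivial to state in code; here is a minimal Python sketch (the function name and interface are purely illustrative, not anyone's actual policy):

```python
def fractional_credit(n_citations, n_authors):
    """Citations credited to each author under a 1/N weighting."""
    return n_citations / n_authors

# The example above: a paper with 100 authors and 1000 citations
# credits each author with 10 citations rather than all 1000.
print(fractional_credit(1000, 100))   # 10.0
```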

  4. By the way, ATLAS (the Herschel one) does have mechanisms for protecting the interests of junior people, and you’re absolutely right about how important it is. It’s easier when there’s a lot of diverse science to be had – we made a reasonably realistic list of >40 papers in the Herschel science verification.

  5. Re citation counts: I think astronomers’ fascination with self-chosen statistics is sometimes a bit unhealthy, though in the case of RAE it’s a necessary evil. In your suggestion, how would you feel if you had made key contributions to a big project and were rewarded with one first-author paper and five papers as 2nd author, all with author lists of 100? A 1/N weight would negate the obvious reward you’ve had.

    I think the wider problem is not so much choosing the metrics, but that different communities have different working practices, implicitly needing different impact-measuring metrics.

    • Steve,

      All attempts to reduce research productivity and/or impact to a single number are fraught with problems and I don’t advocate going down that route at all. My point was just that if a paper with 100 authors gets 100 citations then the present tendency is to attribute those citations to all authors; the sole author of a paper with the same number of citations seems to me to deserve more credit than the 100th author of the previous example.

      Behind this particular bit of numerology lies the false assumption that citation response is somehow linear. In fact the really important papers probably generate an exponentially higher response in citation numbers. If, for example, you divide the WMAP papers’ impact by the number of authors, the team members will all still do well individually.


  6. Here’s an idea. For a paper with N authors and M citations, take log(M/N) as your measure of impact. A paper that generates fewer citations than authors gets a negative score….

  7. Lung-Yih Chiang Says:

    It would be even better to assess the paper’s impact by adding a time factor: log[M(t)/(N × t)]. Take 2 of the most cited papers in cosmology as an example:
    1. The WMAP 1-year cosmological parameters paper, by Spergel and 16 other authors: it was published 6 years ago and has now gathered 5747 citations, so log(5747/(6 × 17)) ≈ 1.75.

    2. Guth’s Inflationary Universe (single author), which so far has 3432 citations: log(3432/(1 × 28)) ≈ 2.088.

    Of course one may argue that Guth’s paper doesn’t score as high as, say, a paper by 2 authors receiving 250 citations in its first year: log(250/(2 × 1)) ≈ 2.097. Nevertheless the score indicates its long-lasting impact, because a paper gathering 250 citations in one year must be on a currently hotly-discussed topic. And for this new paper to compete with Guth’s in the following year (assuming Guth’s receives no more citations, so that log(3432/(1 × (28+1))) ≈ 2.073), it has to gather a further 10^(2.073) × 2 × (1+1) - 250 ≈ 223 citations.
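The scoring formula in this comment is easy to check numerically; a quick Python sketch (the function name `impact` is my own) reproduces the figures quoted above:

```python
import math

def impact(citations, n_authors, years):
    """Time-weighted impact score log10(M / (N * t)), as proposed above."""
    return math.log10(citations / (n_authors * years))

# WMAP 1-year parameters paper: 17 authors, 5747 citations after 6 years.
print(round(impact(5747, 17, 6), 2))    # 1.75

# Guth's inflation paper: 1 author, 3432 citations after 28 years.
print(round(impact(3432, 1, 28), 2))    # 2.09

# A hypothetical new paper: 2 authors, 250 citations in its first year.
print(round(impact(250, 2, 1), 2))      # 2.1

# Citations the new paper would need by its second year to match
# Guth's score at year 29 (assuming Guth gains no more citations).
needed = 10 ** impact(3432, 1, 29) * 2 * 2 - 250
print(round(needed))                    # 223
```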

    What is amazing about Guth’s paper is that its citation rate has been fairly steady over the last 28 years, compared with Spergel et al. (citation-history plots were embedded here).

  8. Lung-Yih,

    Certainly longevity is a measure of quality when it comes to academic publications. Perhaps one should measure the half-life, as most papers seem to have citations that fall off exponentially after a year or two.

