A Poll about Peer Review

Anxious not to let the momentum of the discussion about scientific publishing dissipate, I thought I’d try a quick poll to see what people think about the issue of peer review. In my earlier posts I’ve advanced the view that, at least in the subject I work in (astrophysics), peer review achieves very little. Given that it is also extremely expensive when done by traditional journals, I think it could be replaced by a kind of crowd-sourcing, in which papers are put on an open-access archive or repository of some sort, and can then be commented upon by the community and from where they can be cited by other researchers. If you like, a sort of “arXiv plus”. Good papers will attract attention, poor ones will disappear. Such a system also has the advantage of guaranteeing open public access to research papers (although not necessarily to submission, which would have to be restricted to registered users only).

However, this is all just my view and I have no real idea how strongly others rate the current system of peer review. The following poll is not very scientific, but I’ve tried to include a reasonably representative range of views, from “everything’s OK – let’s keep the current system” to the radical suggestion I make above.

Of course, if you have other views about peer review or academic publishing generally, please feel free to post them through the comments box.

31 Responses to “A Poll about Peer Review”

  1. Dave Carter Says:

    Peer review is in principle vital for quality control, and if we lose it we lose a key argument against the Delingpoles of this world. But I worry greatly at the way things are going. Senior scientists mentor junior ones in presentation skills, scientific writing, grant and telescope time applications, but they do not mentor in peer review. As a result I think that many people referee papers and applications without really understanding the purpose, and what it is they are supposed to assess.

    In my view the purpose of peer review is to assess methodology, significance and to a lesser extent background. It is not to assess how many of the reviewer’s publications are cited. And it is not really to assess conclusions, only the methodology by which those conclusions are reached.

    So my answer to the poll is that it is vital for quality control, but we must carry it out more effectively than we do at present.

    • I think that’s a good point. Referees don’t get much guidance from journals either, and there is no sanction for doing a bad job at it other than not getting asked to do it again…

      • > no sanction for doing a bad job at it other than not getting asked to do it again

        That’s not a sanction, that is a perverse incentive to do bad work.

        I think it would be appropriate for all promotion and tenure committees to ask the relevant journal editors to assess the candidate’s performance as a reviewer.

      • Ned,

        I did mean that remark sarcastically.

        Another problem with anonymous reviewing is that it’s just as hard to assign credit when it’s done well as it is to apportion blame when it’s done badly (or maliciously).

        Peter

    • I agree that having some more training on this at the graduate student level would be a really good idea, especially getting some personal tips from experienced/wizened referees (including what they would expect in a good report). It’s slightly daunting when you get that first request from a journal to referee a paper, especially when the author list contains names you recognise from the upper echelons of the field. Being a bit more prepared would be good.

      In the introductory course that was given at the start of my PhD we were given the task of refereeing some old (anonymised) telescope proposals. That was a very useful exercise, and some more practice like that would be handy.

      • We do this with our 4th-year MPhys students – they have to draw up a proposal of their own but also review those by other students.

        However, I really think we should introduce some sort of training about Peer Review for PhD students through the Graduate College. Since I’m on the Board I should have done that long before now!

      • …and i’ve just suggested we implement something like this in our grad course for next year (even if that’s little help to jim).

  2. Ian Douglas Says:

    I was going to bring up Delingpole myself. You really don’t want to go even a step down the road towards a place where non-scientists are able to dismiss real research carried out by people who know what they’re doing.

    Does peer review have to be expensive? Imagine a journal run within an academic department. The same people would work as editors and reviewers, the same people would submit work. They would work for free as it would count towards their publishing targets set by their universities, and it could be published cheaply online. Isn’t it the publishing houses that are the problem, rather than the review process?

    • My proposal is that joe public should be able to access scientific papers, but I agree that there has to be some restriction on access to the reviewing process. There would have to be some form of registration. This would also prevent cartels voting up each other’s work. This happens already, of course, through citation gerrymandering.

      I’d also add that, in my opinion, reviewing should never be anonymous. That would make game-playing easier to identify. Also, if you’ve got something to say about a bit of research you should be expected to stand up and be counted.

      • Ian Douglas Says:

        I agree absolutely that anonymity is far more trouble than it’s worth. A link to a reviewer’s work below their review of the paper would probably be enough to put their comments into perspective.

      • Anonymity is crucial for referees, who often depend on the paper’s authors for career advancement, to be able to give an honest assessment. The referee is anonymous only to the authors; their identity is known to the editor, so anonymity on the internet is not really analogous.

      • i agree – anonymity is essential – if only to ensure that early-stage researchers can express themselves freely.

        of course – editors are supposed to be the umpires in this game – and in some instances they are paid – so they at least are suitably rewarded and should put in the time and effort to ensure there is no “game-playing”. it would be interesting to know if such information is shared between editors or within journals.

        for the record – i voted for the “peer-review-isn’t-perfect-and-could-be-replaced” option… mostly because i (like others) have been on the receiving end of both good and poor referee reports (and no doubt i’ve provided both too in my time).

        in my experience poor reports typically come in three flavours: lack of engagement (why did they agree to do it in the first place?), dogmatic disagreement (editors need to handle this) and self-interested obstruction. i have only come across a few instances of obstruction, but where the editor is unwilling or unable to recognise this – it is very disappointing. i did wonder if using multiple referees is the solution to this problem, but looking at nature, it doesn’t look likely. so i can’t see crowd-sourcing as the solution either… moreover if you think crowd-sourcing works then surely so do citations – as they track this sort of information (modulo the effects of both +ve and -ve citations) – and we are all aware of the shortcomings of citations.

        as an aside: i did wonder whether these threads were motivated by your recent reading of many referee “reports” for the grants panel? certainly some of them have made me want to break things… but in the case of the grants we are dealing with a zero-sum game – any positive outcome for one grant (“paper”) has to be balanced by a negative outcome for others – which is not the case for publications.

      • I’ve been thinking about these things for quite a while and it wasn’t the grant referees in particular that made me think this, although I can think of a few that were worse than useless….

        Citations are of course flawed in many ways, but the main problem is assigning them to people rather than to papers. In principle I think citations given to a paper are less problematic.

  3. Dave Carter Says:

    I don’t think peer review is expensive. Reviewers are not paid, nor are editors; the only people who are paid are the editorial assistants. For MNRAS, for instance, it’s not the RAS office that costs the money, it’s Blackwell, and they have nothing to do with peer review. How much would MNRAS cost if there were no paper copy and only an in-house managed web interface?

  4. My (admittedly limited) experience of peer review has been very mixed – sometimes reviewers have made useful and sensible comments, but on other occasions they seem not to have understood or even carefully read the document they are reviewing.

    I’ve also heard of a colleague being mistakenly asked to review a paper of which he was himself one of the authors!

  5. Dave Carter Says:

    I am also concerned by the various “citation clubs” which have appeared. Some of these repeatedly cite each other’s papers even when those papers are derivative of work that was done, and done better, years or even decades before, and which is not cited (yes, SDSS people, I am looking at you). I am absolutely apoplectic when asked by a referee to cite the papers of their particular citation club instead of earlier and superior work.

    However I was pleased to hear the chair of the REF panel in his presentation make it clear that citation data will be at most a secondary indicator when assessing REF outputs (secondary to reading the papers and assessing their merits independently that is).

    • “(secondary to reading the papers and assessing their merits independently that is).”
      Given the volume of papers that will be submitted one has to wonder, even with the best of intentions, how this will be managed. I suspect that citations will end up becoming more important, even if that was not intended. Anyway, I’m leading us off-topic.

  6. Interesting to see the results, my vote came second from the top, FWIW.

  7. The idea of the community commenting or voting on arXiv submissions worries me a bit. In an idealised world it sounds lovely, but all my experience with the internet suggests that it will devolve into an ugly scene on a depressingly short timescale. I imagine something like flurries of angry ‘A comment on…’ articles, but without the cooling-off period that typing something up and submitting it provides. Requiring real names would do something I suppose, but that doesn’t stop someone who just doesn’t care about irritating the whole room from dominating the conversation.

    • Such behaviour is possible, of course. Maybe even likely. But I’d prefer the wreckers to be out in the open rather than doing their dirty work behind closed doors in committees and via anonymous referee reports.

    • “… all my experience with the internet suggests that it will devolve into an ugly scene on a depressingly short timescale.”

      Have you ever spent much time on MathOverflow? It has done a great deal to restore my faith in the ability of people to have useful discussions on the internet.

  8. Also wanted to say that I do like the suggestions you make for community assessment, just not as a replacement for someone being assigned the job of carefully looking at the research. That someone should have enough incentive to do a good and thorough job (through money and prestige), and I’d hope the community input is helpful for them. There’s always going to be some administrative overhead to run this kind of system; the problem is that the current rates are completely out of control.

    • Even something like a “Like” button on the arXiv which showed *who* liked the paper would be interesting. Not having a “dislike” vote might suppress game-playing.

  9. Adrian Burd Says:

    The geosciences community – largely through the European Geophysical Union – has been experimenting with public peer review and public discussion of scientific papers since at least 2004. For example:

    http://www.biogeosciences.net/

    My feeling is that the jury is still out, but it is now well accepted within the geosciences community.

    Adrian

  10. It turns out there is a site that does the sort of thing I’m advocating – it’s

    http://paperrater.org/

    Although it hasn’t exactly set the world on fire. I may give it a go once this grant business is finished, but in the meantime feel free to give it a whirl.

  11. I reluctantly voted for ‘vital but doesn’t justify the expense’. I don’t think I have ever had a referee that has substantially improved a paper except for fairly superficial things, and I’ve had plenty of referees that have delayed good papers for no obvious reason. However, the groups I have worked in have been fairly self-policing in terms of quality, and I have a feeling that often isn’t the case. Therefore, reluctantly, I think peer review is necessary, like the police, to keep the poor scientists under control. Not sure that alternative ideas would work.

  12. Perhaps one approach to private companies making substantial profits out of the academic community would be for researchers to charge a fee to the publishers for refereeing when the journals are run for profit. Researchers could agree to referee for free only for journals whose profits are put back into the community, for example the journals published by the R.A.S., the Institute of Physics and the American Astronomical Society. They could insist on a commercial fee for refereeing for journals published commercially.

  13. Quietly Anonymous Says:

    http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2966.2011.19242.x/abstract

    Passed through peer review, in what most of us consider a highly respectable journal.

  14. […] A long time ago I posted a poll to see what people think about the issue of peer review. In previous posts (e.g. this one) I had advanced the view that, at least in the subject I work in (astrophysics), while in its usual form peer review does achieve some degree of quality control, it is by no means perfect. Some good papers get rejected and some poor papers get accepted. Any system operated by humans is bound to be flawed to some extent, but the question is whether there might be a way to improve the system so that it is fairer and more transparent. […]
