Archive for Peer Review

An analysis of the effects of sharing research data, code, and preprints on citations

Posted in OJAp Papers, Open Access on May 27, 2024 by telescoper

Whenever researchers ask me why I am an advocate of open science, the response that first occurs to me is somewhat altruistic: sharing results and data is good for the whole community, as it enables the proper progress of research through independent scrutiny. There is, however, also a selfish reason for open science, demonstrated rather well by a recent preprint on arXiv. The abstract is here:

Calls to make scientific research more open have gained traction with a range of societal stakeholders. Open Science practices include but are not limited to the early sharing of results via preprints and openly sharing outputs such as data and code to make research more reproducible and extensible. Existing evidence shows that adopting Open Science practices has effects in several domains. In this study, we investigate whether adopting one or more Open Science practices leads to significantly higher citations for an associated publication, which is one form of academic impact. We use a novel dataset known as Open Science Indicators, produced by PLOS and DataSeer, which includes all PLOS publications from 2018 to 2023 as well as a comparison group sampled from the PMC Open Access Subset. In total, we analyze circa 122’000 publications. We calculate publication and author-level citation indicators and use a broad set of control variables to isolate the effect of Open Science Indicators on received citations. We show that Open Science practices are adopted to different degrees across scientific disciplines. We find that the early release of a publication as a preprint correlates with a significant positive citation advantage of about 20.2% on average. We also find that sharing data in an online repository correlates with a smaller yet still positive citation advantage of 4.3% on average. However, we do not find a significant citation advantage for sharing code. Further research is needed on additional or alternative measures of impact beyond citations. Our results are likely to be of interest to researchers, as well as publishers, research funders, and policymakers.

Colavizza et al., arXiv:2404.16171

This analysis isn’t based on astrophysics, but I think the relatively high citation rates of papers in the Open Journal of Astrophysics are at least in part due to the fact that virtually all our papers are available as preprints on arXiv prior to publication. Citations aren’t everything, of course, but the positive effect of preprinting is an important factor in communicating the science you are doing.

The Gates Foundation and Open Access

Posted in Open Access on April 9, 2024 by telescoper

There has been quite a lot of reaction (e.g. here) to the recent announcement of a new Open Access Policy by the Bill & Melinda Gates Foundation, which is one of the world’s top funders of biomedical research. The policy mandates the distribution of research it funds as preprints and also states that the Foundation will not pay Article Processing Charges (APCs). The essentials of the policy, which comes into effect on 1st January 2025, are these:

  1. Funded Manuscripts Will Be Available. As soon as possible and to the extent feasible, Funded Manuscripts shall be published as a preprint in a preprint server recognized by the foundation or preapproved preprint server which applies a sufficient level of scrutiny to submissions. Accepted articles shall be deposited immediately upon publication in PubMed Central (PMC), or in another openly accessible repository, with proper metadata tagging identifying Gates funding. In addition, grantees shall disseminate Funded Manuscripts as described in their funding agreements with the foundation, including as described in any proposal or Global Access commitments.
  2. Dissemination of Funded Manuscripts Will Be On “Open Access” Terms. All Funded Manuscripts, including any subsequent updates to key conclusions, shall be available immediately, without any embargo, under the Creative Commons Attribution 4.0 International License (CC BY 4.0) or an equivalent license. This will permit all users to copy, redistribute, transform, and build on the material in any medium or format for any purpose (including commercial) without further permission or fees being required.
  3. Gates Grantees Will Retain Copyright. Grantees shall retain sufficient copyright in Funded Manuscripts to ensure such Funded Manuscripts are deposited into an open-access repository and published under the CC-BY 4.0 or equivalent license.
  4. Underlying Data Will Be Accessible Immediately. The Foundation requires that underlying data supporting the Funded Manuscripts shall be made accessible immediately and as open as possible upon availability of the Funded Manuscripts, subject to any applicable ethical, legal, or regulatory requirements or restrictions. All Funded Manuscripts must be accompanied by an Underlying Data Availability Statement that describes where any primary data, associated metadata, original software, and any additional relevant materials or information necessary to understand, assess, and replicate the Funded Manuscripts findings in totality can be found. Grantees are encouraged to adhere to the FAIR principles to improve the findability, accessibility, interoperability, and reuse of digital assets.
  5. The Foundation Will Not Pay Article Processing Charges (APC). Any publication fees are the responsibility of the grantees and their co-authors.
  6. Compliance Is A Requirement of Funding. This Open Access policy applies to all Funded Manuscripts, whether the funding is in whole or in part. Compliance will be continuously reviewed, and grantees and authors will be contacted when they are non-compliant.
    • As appropriate, Grantees should include the following acknowledgment and notice in Funded Manuscripts:
    • “This work was supported, in whole or in part, by the Bill & Melinda Gates Foundation [Grant number]. The conclusions and opinions expressed in this work are those of the author(s) alone and shall not be attributed to the Foundation. Under the grant conditions of the Foundation, a Creative Commons Attribution 4.0 License has already been assigned to the Author Accepted Manuscript version that might arise from this submission. Please note works submitted as a preprint have not undergone a peer review process.”

Reactions to this new policy are generally positive, except (unsurprisingly) for the academic publishing industry.

For what it’s worth, my view is that it is a good policy, and I wish more funders went along this route, but it falls short of being truly excellent. As it stands, the policy seems to encourage authors to put the “final” version of their articles in traditional journals, without these articles being freely available through Open Access. That falls short of the goal of establishing a global network of institutional and/or subject-based repositories, linked to peer review mechanisms such as overlays, that share research literature freely for the common good. To help achieve that aim, the Gates Foundation should encourage overlays rather than traditional journals as the way to carry out peer review. Perhaps this will be the next step?

Retractions and Resignations

Posted in Open Access on December 16, 2023 by telescoper

I saw an article this week in Nature revealing that more than 10,000 research papers have been retracted so far in 2023. The actual number is probably much higher than that, as this counts only the fraudulent papers that have been found out. Over 80% of the papers mentioned in the article were published by Hindawi, a known predatory publisher that specializes in Gold Open Access journals charging Article Processing Charges. Hindawi is owned by Wiley, but the brand has become so toxic that Wiley no longer wants to use the name. Presumably it still wants the profits.

(Another bit of news this week makes me think that Hindawi might be the academic publishing equivalent of Tesla…)

Here’s a figure showing how the number of retracted research articles has increased over time:

It has always seemed to me that the shift to “Gold” Open Access, in which authors pay to have their work published, would lead to a decrease in editorial standards. Since the publisher’s income comes from APCs, the more papers they publish the more money they get. This is another reason why Diamond Open Access, run on a not-for-profit basis with no fees for either authors or readers, is a much better model.

At least some academics are taking a stand. Retraction Watch maintains a list of journals whose editors have resigned – sometimes en masse from the same journal – in response to the imposition of dodgy practices by their publishers. Take the Journal of Geometric Mechanics, for example. The entire Editorial Board of this journal resigned because of pressure from above to increase “output” (i.e. profits) by lowering academic standards.

This is just a start, of course, but I don’t think it will take long for the academic community to accept that this publishing model is rotten to the core and to embrace the only really viable and sustainable alternative.

Open Peer Review Analytics

Posted in OJAp Papers, Open Access on October 18, 2023 by telescoper
A Peer, Reviewing. Photo from Pexels.com

Quite a few people have contacted me to ask about the Peer Review Process at the Open Journal of Astrophysics, so I thought I’d do a quick post to explain a bit about how it works.

When a paper is submitted it is up to the Editor-in-Chief – that’s me! – to assign it to a member of the Editorial Board. Who that is depends on the topic of the paper and on current availability due to workload. I of course take on some papers myself. I also reject some papers without further Peer Review if they clearly don’t meet the journal’s criteria of scientific quality, originality, relevance and comprehensibility. I usually run such papers past the Editorial Board before doing such a ‘Desk Reject’.

Once the paper has been assigned, the Editor takes control of the process, inviting referees (usually two) to comment and make recommendations. This is the rate-determining step, as potential referees are often busy. It can take as many as ten declined invitations before we get a referee to agree. Once accepted, a referee is asked to provide a report within three weeks. Sometimes they are quicker than that, sometimes they take longer. It depends on many factors, including the length of the manuscript.

Once all the referee reports are in the Editor can make a decision. Some papers are rejected upon refereeing, and some are accepted with only tiny changes. The most frequent decision is “Revise and Resubmit” – authors are requested to make changes in response to the referee comments. Sometimes these are minor, sometimes they are substantial. We never give a deadline for resubmission.

A resubmitted paper is usually sent to the same referee(s) who reviewed the original. The referees may be satisfied and recommend acceptance, or we go around again.

Once a paper is accepted, the authors are instructed to upload the final, accepted, version to arXiv. It normally takes a day or two to be announced. The article is then passed over from the Peer Review process to the Publication process. As Managing Editor, I make the overlay and prepare the metadata for the final version. This is usually done the same day as the final version appears on arXiv, but sometimes it takes a bit longer to put everything in order. It’s never more than a few days though.

Anyway, here are some “analytics” – it’s weird how anything that includes any quantitative information is called analytics these days to make it sound more sophisticated than it actually is – provided by the Scholastica platform:

These numbers need a little explanation.

The “average days to a decision” figure includes desk rejects as well as all submissions and resubmissions. Suppose a paper is submitted and it then takes 4 weeks to get referee reports and for the Editor to make a “Revise and Resubmit” request. That would count as 28 days. It might take the authors three months to make their revisions and resubmit the paper, but that does not count in the calculation of the “average days to decision”, as during that period the manuscript is deemed to be inactive. If the revised version is accepted almost immediately, say after 2 days, then the average days to decision would be (28+2)/2 = 15 days. Also, this being an average, some decisions take fewer than 14 days and some much longer.

The acceptance rate is the percentage of papers eventually accepted (even after revision). The figure for ‘total submissions’ includes resubmissions, so the hypothetical paper in the preceding paragraph would add 2 to this total. That accounts for why the total number of papers accepted is not 50% of 388, which is 194; the actual figure is lower, at 105.
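Putting the two explanations together, here is a minimal sketch in Python of the arithmetic. The decision records are the hypothetical ones from the worked example above, and the 388/105 figures are those quoted; nothing here comes from the actual Scholastica code.

```python
# Hypothetical decision records for the worked example above: each
# (re)submission that receives a decision counts as one round, while time
# spent with the authors between rounds is excluded as "inactive".
rounds_in_days = [28, 2]   # "Revise and Resubmit" after 28 days, "Accept" after 2

average_days_to_decision = sum(rounds_in_days) / len(rounds_in_days)
print(average_days_to_decision)                   # 15.0, as in the example

# The same double-counting affects the acceptance rate: a paper accepted
# after one revision contributes 2 to "total submissions" but 1 acceptance.
total_submissions = 388    # includes resubmissions
papers_accepted = 105      # unique papers eventually accepted
print(total_submissions // 2)                     # 194, the naive halving
print(100 * papers_accepted / total_submissions)  # ~27.1% per submission
```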

Finally, the number of manuscripts “in progress” is currently 23. That includes papers currently going through the peer review process. It does not include papers which are back with the authors for revisions (although it would be reasonable to count those as in progress in some sense).

There we are. I hope this clarifies the situation.

Arguing the Case for Preprints

Posted in Open Access on September 23, 2020 by telescoper

This is Peer Review Week 2020, as part of which I am participating tomorrow afternoon (Irish Time) in a live panel discussion/webinar called Increasing transparency and trust in preprints: Steps journals can take.

Working in a field like astrophysics, where the use of preprints as a means of disseminating information and ideas is well established, I’m always surprised that some people working in other disciplines don’t really approve of them at all. See for example, this Twitter thread. Still, even in the biosciences, preprints have their advocates and there are signs that attitudes may be changing.

That is not to say that things aren’t changing in astrophysics too. One of the interesting astronomical curiosities I’ve acquired over the years is a preprint of the classic work of Burbidge, Burbidge, Fowler and Hoyle in 1957 (a paper usually referred to as B2FH after the initials of its authors). It’s such an important contribution, in fact, that it has its own Wikipedia page.

Younger readers will probably not realize that preprints were not always produced in the electronic form they are today. We all used to make large numbers of these and post them at great expense to (potentially) interested colleagues before publication in order to get comments. That was extremely useful because a paper could take over a year to be published after being refereed for a journal: that’s too long a timescale when a PhD or PDRA position is only a few years in duration. The first papers I was given to read as a new graduate student in 1985 were all preprints that were not published until well into the following year. In some cases I had more or less figured out what they were about by the time they appeared in a journal!

The B2FH paper was published in 1957 but the practice of circulating preprints persisted well into the 1990s. Usually these were produced by institutions with a distinctive design, logo, etc., which gave them a professional look and made it easier to distinguish ‘serious’ papers from crank material (which was also in circulation). It also suggested that some internal refereeing inside an institution had taken place before an “official” preprint was produced, lending it an air of trustworthiness. Smaller institutions couldn’t afford all this, so were somewhat excluded from the preprint business.

With the arrival of the arXiv the practice of circulating hard copies of preprints in astrophysics gradually died out, to be replaced by ever-increasing numbers of electronic articles. The arXiv does have some gatekeeping – in the sense that there are some controls on who can deposit a preprint there – but it is far easier to circulate a preprint now than it was.

It is still the case that big institutions and collaborations insist on quite strict internal refereeing before publishing a preprint – and some even insist on waiting for a paper to be accepted by a journal before adding it to the arXiv – but there’s no denying that among the wheat there is quite a lot of chaff, some of which attracts media coverage that it does not deserve. It must be admitted, however, that the same can be said of some papers that have passed peer review and appeared in high-profile journals! No system that is operated by human beings will ever be flawless, and peer review is no different.

Nowadays, in astrophysics, the single most important point of access to scientific literature is through the arXiv, which is why the Open Journal of Astrophysics was set up as an overlay journal to provide a level of rigorous peer review for preprints, not only to provide a sort of quality mark but also to improve the paper through the editorial process.

As for increasing transparency and trust in preprints, I think I’ll save some suggestions for tomorrow’s webinar. A good start, however, would be for journals to admit their own limitations and start helping rather than hindering the dissemination of information and ideas.

Who needs critics? Or peer review for that matter…

Posted in Art, Literature, Music, Science Politics on August 9, 2015 by telescoper

No time for a proper post today so I’m going to rehash an old piece from about six years ago. In particular I direct your attention to the final paragraph in which I predict that peer review for academic publications will soon be made redundant. There has been quite a lot of discussion about that recently; see here for an example.

Critics say the strangest things.

How about this, from James William Davison, music critic of The Times from 1846:

He has certainly written a few good songs, but what then? Has not every composer that ever composed written a few good songs? And out of the thousand and one with which he deluged the musical world, it would, indeed, be hard if some half-dozen were not tolerable. And when that is said, all is said that can justly be said of Schubert.

Or this, by Louis Spohr, written in 1860 about Beethoven’s Ninth (“Choral”) Symphony:

The fourth movement is, in my opinion, so monstrous and tasteless and, in its grasp of Schiller’s Ode, so trivial that I cannot understand how a genius like Beethoven could have written it.

No less an authority than Grove’s Dictionary of Music and Musicians (Fifth Edition) had this to say about Rachmaninov:

Technically he was highly gifted, but also severely limited. His music is well constructed and effective, but monotonous in texture, which consists in essence mainly of artificial and gushing tunes…The enormous popular success some few of Rachmaninov’s works had in his lifetime is not likely to last, and musicians never regarded it with much favour.

And finally, Lawrence Gilman wrote this in the New York Tribune of February 13, 1924, concerning George Gershwin’s Rhapsody in Blue:

How trite and feeble and conventional the tunes are; how sentimental and vapid the harmonic treatment, under its disguise of fussy and futile counterpoint! Weep over the lifelessness of the melody and harmony, so derivative, so stale, so inexpressive.

I think I’ve made my point. We all make errors of judgement and music critics are certainly no exception. The same no doubt goes for literary and art critics too. In fact, I’m sure it would be quite easy to dig up laughably inappropriate comments made by reviewers across the entire spectrum of artistic endeavour. Who’s to say these comments are wrong anyway? They’re just opinions. I can’t understand anyone who thinks so little of Schubert, but then an awful lot of people like to listen to what sounds to me to be complete dross.

What puzzles me most about the critics is not that they make “mistakes” like these – they’re only human after all – but why they exist in the first place. It seems extraordinary to me that there is a class of people who don’t do anything creative themselves but devote their working lives to criticising what is done by others. Who should care what they think? Everyone is entitled to an opinion, of course, but what is it about a critic that implies we should listen to their opinion more than anyone else’s?

(Actually, to be precise, Louis Spohr was also a composer but I defy you to recall any of his works…)

Part of the idea is that by reading the notices produced by a critic the paying public can decide whether to go to the performance, read the book or listen to the record. However, the correlation between what is critically acclaimed and what is actually good (or even popular) is tenuous at best. It seems to me that, especially nowadays with so much opinion available on the internet, word of mouth (or web) is a much better guide than what some geezer writes in The Times. Indeed, the Opera reviews published in the papers are so frustratingly contrary to my own opinion that I don’t bother to read them until after the performance, perhaps even after I’ve written my own little review on here. Not that I would mind being a newspaper critic myself. The chance not only to get into the Opera for free but also to get paid for spouting on about it afterwards sounds like a cushy number to me. Not that I’m likely to be asked.

In science, we don’t have legions of professional critics, but reviews of various kinds are nevertheless essential to the way science moves forward. Applications for funding are usually reviewed by others working in the field and only those graded at the very highest level are awarded money. The powers-that-be are increasingly trying to impose political criteria on this process, but it remains a fact that peer review is the crucial part of the process. It’s not just the input that is assessed either. Papers submitted to learned journals are reviewed by (usually anonymous) referees, who often require substantial changes to be made to the work before it can be accepted for publication.

We have no choice but to react to these critics if we want to function as scientists. Indeed, we probably pay much more attention to them than artists do to critics in their particular fields. That’s not to say that these referees don’t make mistakes either. I’ve certainly made bad decisions myself in that role, although they were all made in good faith. I’ve also received comments that I thought were unfair or unjustifiable, but at least I knew they were coming from someone who was a working scientist.

I suspect that the use of peer review in assessing grant applications will remain in place for some considerable time. I can’t think of an alternative, anyway. I’d much rather have a rich patron so I didn’t have to bother writing proposals all the time, but that’s not the way it works in either art or science these days.

However, it does seem to me that the role of referees in the publication process is bound to become redundant in the very near future. Technology now makes it easy to place electronic publications on an archive where they can be accessed freely. Good papers will attract attention anyway, just as they would if they were in refereed journals. Errors will be found. Results will be debated. Papers will be revised. The quality mark of a journal’s endorsement is no longer needed if the scientific community can form its own judgement, and neither are the monstrously expensive fees charged to institutes for journal subscriptions.

Research Hive on Open Access

Posted in Open Access, The Universe and Stuff on March 21, 2014 by telescoper

Near the end of a week that has been both exciting and exhausting, I had the opportunity to take part in a seminar on Open Access publishing. I agreed to do this last year sometime, and only remembered that it was today because I got an email reminder a couple of days ago! Anyway it was nice to have an excuse to visit the iconic Library of the University of Sussex for this event.

Fortunately, as things turned out, I had plenty of topical material to draw on for inspiration and spent some time discussing the possibilities of community peer review with reference to what’s been happening with BICEP2. Here’s me in the middle of the talk on that very subject showing the Live Discussion Facebook page:


I shared the bill with Rupert Gatti from Open Book Publishers, which publishes mainly in the Arts and Humanities area; generally speaking these disciplines are a long way behind astrophysics in terms of their readiness for the age of Open Access, but I think change across all academia is inevitable.

For those of you interested, I realize that an update on the Open Journal of Astrophysics is long overdue. I’ve just been too busy with other things to devote much time to it. I do hope to have further news very soon…

Elsevierballs

Posted in Open Access on December 16, 2012 by telescoper

Have you heard all the stories about the carefully-managed system of peer review that justifies the exorbitant cost of Elsevier journals? Then read this…

Retraction Watch

For several months now, we’ve been reporting on variations on a theme: authors submitting fake email addresses for potential peer reviewers, to ensure positive reviews. In August, for example, we broke the story of Hyung-In Moon, who has now retracted 24 papers published by Informa because he managed to do his own peer review.

Now, Retraction Watch has learned that the Elsevier Editorial System (EES) was hacked sometime last month, leading to faked peer reviews and retractions — although the submitting authors don’t seem to have been at fault. As of now, eleven papers by authors in China, India, Iran, and Turkey have been retracted from three journals.

Here’s one of two identical notices that have just run in Optics & Laser Technology, for two unconnected papers:


Clusters, Splines and Peer Review

Posted in Bad Statistics, Open Access, The Universe and Stuff on June 26, 2012 by telescoper

Time for a grumpy early morning post while I drink my tea.

There’s an interesting post on the New Scientist blog site by that young chap Andrew Pontzen who works at Oxford University (in the Midlands). It’s on a topic that’s very pertinent to the ongoing debate about Open Access. One of the points the academic publishing lobby always makes is that Peer Review is essential to assure the quality of research. The publishers also often try to claim that they actually do Peer Review, which they don’t. That’s usually done, for free, by academics.

But the point Andrew makes is that we should also think about whether the form of Peer Review that journals undertake is any good anyway.  Currently we submit our paper to a journal, the editors of which select one (or perhaps two or three) referees to decide whether it merits publication. We then wait – often many months – for a report and a decision by the Editorial Board.

But there’s also a free online repository called the arXiv which all astrophysics papers eventually appear on. Some researchers like to wait for the paper to be refereed and accepted before putting it on the arXiv, while others, myself included, just put it on the arXiv straight away when we submit it to the journal. In most cases one gets prompter and more helpful comments by email from people who read the paper on arXiv than from the referee(s).

Andrew questions why we trust the reviewing of a paper to one or two individuals chosen by the journal when the whole community could do the job quicker and better. I made essentially the same point in a post a few years ago:

I’m not saying the arXiv is perfect but, unlike traditional journals, it is, in my field anyway, indispensable. A little more investment, adding a comment facility or a rating system along the lines of, e.g., reddit, and it would be better than anything we get from academic publishers, at a fraction of the cost. Reddit, in case you don’t know the site, allows readers to vote articles up or down according to their reaction to them. Restrict voting to registered users only and you have the core of a peer review system that involves an entire community rather than relying on the whim of one or two referees. Citations provide another measure in the longer term. Nowadays astronomical papers attract citations on the arXiv even before they appear in journals, but it still takes time for new research to incorporate older ideas.
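To make that “arXiv plus” idea a little more concrete, here is a minimal sketch in Python of the registered-user voting core described in the quote above; all the class names, fields, and the paper identifier are hypothetical, not a description of any existing system:

```python
# Sketch of an "arXiv plus" rating record: registered users vote papers up
# or down, one vote per user; re-voting simply replaces the earlier vote.
class PaperRating:
    def __init__(self, arxiv_id: str):
        self.arxiv_id = arxiv_id
        self.votes = {}  # maps user_id -> +1 or -1

    def vote(self, user_id: str, value: int, registered_users: set) -> None:
        if user_id not in registered_users:
            raise PermissionError("voting is restricted to registered users")
        if value not in (+1, -1):
            raise ValueError("a vote must be +1 (up) or -1 (down)")
        self.votes[user_id] = value

    @property
    def score(self) -> int:
        # Net community rating; citations would complement this longer term.
        return sum(self.votes.values())


# Example usage with two registered users:
registered = {"alice", "bob"}
paper = PaperRating("1206.xxxx")  # placeholder identifier
paper.vote("alice", +1, registered)
paper.vote("bob", +1, registered)
print(paper.score)                # 2
```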

In any case I don’t think the current system of Peer Review provides the Gold Standard that publishers claim it does. It’s probably a bit harsh to single out one example, but then I said I was feeling grumpy, so here’s something from a paper that we’ve been discussing recently in the cosmology group at Cardiff. The paper is by Gonzalez et al. and is called IDCS J1426.5+3508: Cosmological implications of a massive, strong lensing cluster at z = 1.75. The abstract reads

The galaxy cluster IDCS J1426.5+3508 at z = 1.75 is the most massive galaxy cluster yet discovered at z > 1.4 and the first cluster at this epoch for which the Sunyaev-Zel’dovich effect has been observed. In this paper we report on the discovery with HST imaging of a giant arc associated with this cluster. The curvature of the arc suggests that the lensing mass is nearly coincident with the brightest cluster galaxy, and the color is consistent with the arc being a star-forming galaxy. We compare the constraint on M200 based upon strong lensing with Sunyaev-Zel’dovich results, finding that the two are consistent if the redshift of the arc is z > 3. Finally, we explore the cosmological implications of this system, considering the likelihood of the existence of a strongly lensing galaxy cluster at this epoch in an LCDM universe. While the existence of the cluster itself can potentially be accommodated if one considers the entire volume covered at this redshift by all current high-redshift cluster surveys, the existence of this strongly lensed galaxy greatly exacerbates the long-standing giant arc problem. For standard LCDM structure formation and observed background field galaxy counts this lens system should not exist. Specifically, there should be no giant arcs in the entire sky as bright in F814W as the observed arc for clusters at z ≥ 1.75, and only ∼0.3 as bright in F160W as the observed arc. If we relax the redshift constraint to consider all clusters at z ≥ 1.5, the expected number of giant arcs rises to ∼15 in F160W, but the number of giant arcs of this brightness in F814W remains zero. These arc statistic results are independent of the mass of IDCS J1426.5+3508. We consider possible explanations for this discrepancy.

Interesting stuff indeed. The paper has been accepted for publication by the Astrophysical Journal too.

Now look at the key result, Figure 3:

I’ll leave aside the fact that there aren’t any error bars on the points, and instead draw your attention to the phrase “The curves are spline interpolations between the data points”. For the red curve only two “data points” are shown; actually the points are from simulations, so aren’t strictly data, but that’s not the point. I would have expected an alert referee to ask for all the points needed to form the curve to be shown, and it takes more than two points to make a spline.  Without the other point(s) – hopefully there is at least one more! – the reader can’t reproduce the analysis, which is what the scientific method requires, especially when a paper makes such a strong claim as this.

I’m guessing that the third point is at zero (which is at – ∞ on the log scale shown in the graph), but surely that must have an error bar on it, deriving from the limited simulation size?
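To illustrate why the missing point(s) matter, here is a minimal sketch in Python; the numbers are invented for illustration, since the paper doesn’t supply them, but it shows how strongly a spline drawn through the two visible points depends on whatever third point is assumed:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Two visible "data points" in the style of the red curve, plus two
# alternative guesses for the unshown third point. All values invented.
z_visible = [1.5, 1.75]
n_visible = [15.0, 0.3]        # hypothetical expected numbers of arcs

for z3, n3 in [(2.0, 0.03), (2.0, 0.001)]:
    z = np.array(z_visible + [z3])
    n = np.array(n_visible + [n3])
    spline = CubicSpline(z, np.log10(n))  # interpolate on the log scale
    value = float(10 ** spline(1.6))      # read off between the visible points
    print(f"assumed third point ({z3}, {n3}): interpolated N(1.6) = {value:.2f}")
```

The interpolated value between the two published points changes substantially with the assumed third point, which is exactly why a reader needs all the points in order to reproduce the curve.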

If this paper had been put on a system like the one I discussed above, I think this would have been raised…

A Poll about Peer Review

Posted in Science Politics on September 13, 2011 by telescoper

Anxious not to let the momentum dissipate from the discussion of scientific publishing, I thought I’d try a quick poll to see what people think about the issue of peer review. In my earlier posts I’ve advanced the view that, at least in the subject I work in (astrophysics), peer review achieves very little. Given that it is also extremely expensive when done by traditional journals, I think it could be replaced by a kind of crowd-sourcing, in which papers are put on an open-access archive or repository of some sort, where they can be commented upon by the community and cited by other researchers. If you like, a sort of “arXiv plus”. Good papers will attract attention, poor ones will disappear. Such a system also has the advantage of guaranteeing open public access to research papers (although not necessarily to submission, which would have to be restricted to registered users only).

However, this is all just my view and I have no idea really how strongly others rate the current system of peer review. The following poll is not very scientific, but I’ve tried to include a reasonably representative range of views, from “everything’s OK – let’s keep the current system” to the radical suggestion I make above.

Of course, if you have other views about peer review or academic publishing generally, please feel free to post them through the comments box.