
Peer review: What is it good for?

5 February 2010
[Image: Peer Review Monster, by Gideon Burton via Flickr]

It hasn’t been a good week for peer review. In the same week that the Lancet fully retracted the original Wakefield MMR article (while keeping the retraction behind a login screen – way to go there on public understanding of science), the mainstream media went to town on the open letter from 14 stem cell scientists claiming that peer review in their field was being dominated by a small group of people who were blocking the publication of innovative work. I don’t have the information to comment on the substance of either issue, but I do want to reflect on what this tells us about the state of peer review.

There remains much reverence for the traditional process of peer review. I may be over-interpreting the tenor of Andrew Morrison’s editorial in BioEssays, but it seems to me that he is saying, as many others have over the years, “if we could just have the rigour of traditional peer review with the ease of publication of the web then all our problems would be solved”. Scientists worship at the altar of peer review, and I use that metaphor deliberately because it is rarely if ever questioned. Somehow the process of peer review is supposed to sprinkle some sort of magical dust over a text which makes it “scientific” or “worthy”, yet while we quibble over details of managing the process, or complain that we don’t get paid for it, the fundamental basis on which we decide whether science is formally published is rarely examined in detail.

There is a good reason for this. THE EMPEROR HAS NO CLOTHES! [sorry, had to get that off my chest]. The evidence that peer review as traditionally practiced is of any value at all is equivocal at best (Science 214, 881 (1981); J Clinical Epidemiology 50, 1189 (1998); Brain 123, 1954 (2000); Learned Publishing 22, 117 (2009)). It’s not even really negative – that would at least be useful. There are a few studies that suggest peer review is somewhat better than rolling dice, and a bunch that say it is much the same. Perhaps the best we can say is that it is at its best when dealing with narrow technical questions, and at its worst when determining “importance”. Which, for anyone who has tried to get published in a top journal or written a grant proposal, ought to be deeply troubling. Professional editorial decisions may in fact be more reliable, something that Philip Campbell hints at in his response to questions about the open letter [BBC article]:

Our editors […] have always used their own judgement in what we publish. We have not infrequently overruled two or even three sceptical referees and published a paper.

But there is perhaps an even more important procedural issue around peer review: whatever value it might have, we largely throw away. Few journals make referees’ reports available, and virtually none track the changes made in response to referees’ comments, which would enable readers to make their own judgement as to whether a paper was improved or made worse. Referees get no public credit for good work, and no public opprobrium for poor or even malicious work. And in most cases a paper rejected from one journal starts completely afresh when submitted to a new journal, the work of the previous referees simply thrown out of the window.

Much of the commentary around the open letter has suggested that the peer review process should be made public – but only for published papers. This goes nowhere near far enough. One of the key points where we lose value is in the transfer from one journal to another. The authors lose out because they’ve lost their priority date (in the worst case giving malicious referees the chance to get their own paper in first). The referees miss out because their work is rendered worthless. Even the journals lose an opportunity to demonstrate the high standards they apply in terms of quality and rigour – and indeed the high expectations they have of their referees.

We never ask what the cost of not publishing a paper is, or what the cost of delaying publication could be. Eric Weinstein provides the most sophisticated view of this that I have come across, and I recommend watching his talk at Science in the 21st Century from a few years back. There is a direct cost to rejecting papers, both in the time of referees and editors and in the time required for authors to reformat and resubmit. But the bigger problem is the opportunity cost: how much that might have been useful, or even important, is never published? How much is research held back by delays in publication? How many follow-up studies are not done, how many leads not pursued, and, perhaps most importantly, how many projects are not re-funded, or only funded once the carefully built-up expertise in the form of research workers is lost?

Rejecting a paper is like gambling in a game where you can only win. There are no real downside risks for either editors or referees in rejecting papers. There are downsides, as described above, and those carry real costs, but they are never borne by the people who make or contribute to the decision. It’s as though it were a futures market where you can only lose if you go long, never if you go short on a stock. In Eric’s terminology, those costs need to be carried: we need to require that referees and editors who “go short” on a paper or grant unwind their position if they get it wrong. This is the only way we can price the downside risks into the process. If we want open peer review, indeed if we want peer review in its traditional form, along with its caveats, costs and problems, then the most important advance would be to have it for unpublished papers.

Journals need to acknowledge the papers they’ve rejected, along with their dates of submission. Ideally all referees’ reports should be made public, or at least re-usable by the authors. If full publication of either the submitted form of the paper or the referees’ reports is not acceptable, then journals could publish a hash of the submitted document and reports against a local key, enabling the authors to demonstrate the submission date and the provenance of referees’ reports as they take them to another journal.
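
To make the hashing suggestion concrete, here is a minimal sketch in Python of how such a scheme might work. It is an illustration under stated assumptions, not a description of any existing journal system; the key handling, function names and date format are all hypothetical. The journal publishes only a keyed hash (an HMAC) of the manuscript and its submission date, so it can later confirm that a given document really was submitted on that date without the document itself ever having been made public.

```python
import hmac
import hashlib

# Hypothetical sketch only: the key and all names here are assumptions.
JOURNAL_KEY = b"journal-local-secret"  # held privately by the journal

def commitment(document: bytes, submission_date: str) -> str:
    """Public fingerprint binding a document to its submission date."""
    message = submission_date.encode("utf-8") + b"\x00" + document
    return hmac.new(JOURNAL_KEY, message, hashlib.sha256).hexdigest()

def verify(document: bytes, submission_date: str, published: str) -> bool:
    """The journal re-derives the fingerprint to confirm provenance."""
    return hmac.compare_digest(commitment(document, submission_date), published)

manuscript = b"... full text of the submitted manuscript ..."
public_record = commitment(manuscript, "2010-02-05")  # journal publishes this

# Later, at another journal, the author presents the manuscript and date,
# and the first journal (the key holder) confirms the match.
assert verify(manuscript, "2010-02-05", public_record)
```

The same construction would apply to referees’ reports: hash each report as delivered, and a subsequent journal can check that the report an author hands over is the one the referee actually wrote.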

In my view referees need to be held accountable for the quality of their work. If we value this work we should also value and publicly laud good examples, and conversely poor work should be criticised. Every scientist has received reviews that are, if not malicious, then incompetent. And even if we struggle to admit it to others, we can usually tell the difference between criticism that is brutal but constructive and criticism that is nonsense. Most of us would even admit that we don’t always do as good a job as we would like. After all, why should we work hard at it? No credit, no consequences – why would you bother? It might be argued that if you put poor work in you can’t expect good work back when your own papers and grants are refereed. That may be true, but only in the long run, and only if there are active and public pressures to raise quality – none of which I have seen.

Traditional peer review is hideously expensive, and currently there is little or no pressure on its contributors or managers to provide good value for money. It is also unsustainable at its current level. My solution is to radically cut the number of peer-reviewed papers, probably by 90–95%, leaving the rest to be published as either pure data or preprints. But the whole industry is addicted to traditional peer-reviewed publications: from the funders, who can’t quite figure out how else to measure research outputs, to the researchers and their institutions, who need them for promotion, to the publishers (both OA and toll access) and metrics providers, who both feed the addiction and feed off it.

So that leaves those who hold the purse strings – the funders – with a responsibility to pursue a value-for-money agenda. A good place to start would be a serious critical analysis of the costs and benefits of peer review.

Addition after the fact: It was pointed out in the comments that there are other posts and papers I should have referred to where people have raised similar ideas and issues. In particular, Martin Fenner’s post at Nature Network, whose comments are particularly good as an expert analysis of the usefulness of the kind of “value for money” critique I have made, and a paper on the arXiv from Stefano Allesina. Feel free to mention others and I will add them here.


Comments

  • On the run-04Feb10 « faculty of 1000 said:

    […] of researchers with the peer review process is still making waves this week. Cameron Neylon gives his own take on the matter at his blog. I’m not at all sure that I agree with his analysis, having had my own […]

  • Steven Salzberg said:

    You make many good points, but the solution isn't to throw out the baby with the bath water. There are many reasons for rejecting a paper, and peer review doesn't distinguish between them, but if a result is not valid, it shouldn't appear anywhere (as with the Wakefield MMR paper). However, many (most?) rejections, especially from “top” journals, are because the paper isn't “good enough” by some subjective standard. The authors then have to waste time, as you say, shopping the paper around. This could be remedied by a system where the paper is published in a lesser journal without delay. The PLoS people have a partial solution to this: papers rejected by PLoS Biology are passed down – if the authors agree – to the next tier of journals, and if those journals don't want them, to PLoS ONE, which is the bottom tier.

    Biology Direct is trying another model, the open peer review you propose, where all the reviews appear along with the paper. But many authors and reviewers don't want to do this.

    With open access, the cream will rise to the top in many cases, so the journal itself matters less – many papers in PLoS ONE are getting loads of attention because they're good papers. But we still need a reviewer system to eliminate bogus results, and to provide feedback on how to fix not-quite-ready results.

  • csurridge said:

    There are good arguments here, and certainly more coherent proposals than those produced by 'disgruntled of Oxbridge'. I don't completely agree, but I follow the logic. I did want to say that it isn't true that there is no downside for editors in rejecting papers. It isn't very immediate, but I for one have always had my editorial work judged both on the papers I accepted and on those I rejected. When papers that I have rejected have appeared in other prominent journals and/or become influential, I have had some explaining to do to my bosses. Editors live in as much fear of missing something good as of publishing stinkers.

  • Cameron Neylon said:

    Chris, fair point, and I will accept that I overstated the case somewhat in that respect; there are certainly some consequences for editors. But to stretch the financial analogy, they are not fully unwound. There are internal consequences, and no doubt these lead to some sort of external consequences in the long term, but they are hidden. In a functioning market all players need reasonably good information, including e.g. “Journal X has a history of rejecting papers similar to mine which then go on to have a big impact in Journal Y – to cut my costs I should submit to Journal Y”.

  • Cameron Neylon said:

    Steven, I agree that peer review is all we have, and in the long term it seems to work, or at least science works. But I still feel we need to ask the fundamental question. You say “we still need a reviewer system”. I say, “show me the evidence that this provides any useful information at all”. More precisely, let's look at the situations where we can show peer review does work and try to make them more efficient, then toss out the rest and try to find better solutions. I like the PLoS ONE approach (and I am an academic editor) because it narrows the criteria, but even here it is tough.

    In principle I do like the Biology Direct approach, but in practice, being in the middle of trying to get it to work for me, I find it confusing and even more inefficient from my perspective. And as I understand it, it still doesn't own up to rejecting papers or the reasons for that decision.

    I think there is a more effective alternative to the trickle-down approach you describe: publish everything as preprints and then select, or charge a stinging fee, for peer review. People will only put forward what they see as their best work, reviewers will have more time and prestige for the work they do, and the whole process can be fed into a commentary system for a paper that already exists, solving both the priority problem and the retention of value in the comments.

  • Uli Pöschl said:

    Dear Cameron and All:

    Following up on a suggestion of Daniel Mietchen's, I encountered your ongoing discussion, which I find very interesting.

    I agree with many of the arguments put forward, and I would like to draw your attention to a relatively new form of scientific publishing and quality assurance that solves or reduces most of the problems you addressed: interactive open access publishing and peer review as practiced by the journal Atmospheric Chemistry and Physics (ACP, http://www.atmos-chem-phys.net) and a rapidly growing number of sister journals of the European Geosciences Union (EGU, http://www.egu.eu).

    Please find attached the abstract of a recent article explaining the concept, achievements and perspectives of interactive publishing, which effectively resolves the dilemma between free speech, rapid communication and the thorough quality assurance required in scientific discourse. For more information, please visit the web pages of ACP and EGU (all freely available through open access and Creative Commons licensing):

    http://www.atmospheric-chemistry-and-physics.ne

    http://www.atmospheric-chemistry-and-physics.ne

    http://www.atmospheric-chemistry-and-physics.ne

    With best regards,
    Uli Pöschl

    Interactive Open Access Publishing and Peer Review: The Effectiveness and Perspectives of Transparency and Self-Regulation in Scientific Communication and Evaluation

    Ulrich Pöschl
    Max Planck Institute for Chemistry, Mainz, Germany, u.poschl@mpic.de
    Manuscript version of 26 October 2009, Submitted to LIBER Quarterly

    Abstract

    The traditional forms of scientific publishing and peer review do not live up to the demands of efficient communication and quality assurance in today’s highly diverse and rapidly evolving world of science. They need to be advanced and complemented by interactive and transparent forms of review, publication, and discussion that are open to the scientific community and to the public.

    The advantages of open access, public peer review and interactive discussion can be efficiently and flexibly combined with the strengths of traditional publishing and peer review. Since 2001, the benefits and viability of this approach have been demonstrated by the highly successful interactive open access journal Atmospheric Chemistry and Physics (ACP, http://www.atmos-chem-phys.net) and a growing number of sister journals of the European Geosciences Union (EGU, http://www.egu.eu) and Copernicus Publications (http://www.copernicus.org).

    These journals are practicing a two-stage process of publication and peer review combined with interactive public discussion, which effectively resolves the dilemma between rapid scientific exchange and thorough quality assurance. The same or similar concepts have also been adopted in other disciplines, including the life sciences and economics. Note, however, that alternative approaches where interactive commenting and public discussion are not fully integrated with formal peer review by designated referees tend to be less successful. So far, the interactive open access peer review of ACP is arguably the most successful alternative to the closed peer review of traditional scientific journals.

    The principles, key features and results of interactive open access publishing and peer review are presented and discussed in this manuscript. The achievements and statistics of ACP and its sister journals clearly prove both the scientific benefits and the financial sustainability of open access. Future perspectives and a vision of improved communication and evaluation in the global information commons are outlined with regard to the principles of critical rationalism and open societies.

  • Ulrich Poschl said:

    P.S.: ACP and its EGU interactive open access sister journals currently publish about 2000 papers per year with a turnover of 2 MEUR, which the authors or their institutions are ready to cover. Moreover, they are top-ranked in the citation statistics of their field (see ISI-SCIE, SCOPUS, Google Scholar, etc.). In other words, interactive open access publishing and peer review are already well established and continue to spread throughout the geosciences and beyond (see the links in the preceding post).

  • Ulrich Poschl said:

    Cameron, your proposal is very close to what the interactive open access journal Atmospheric Chemistry and Physics (ACP, http://www.atmos-chem-phys.net) and a growing number of sister journals of the European Geosciences Union (EGU, http://www.egu.eu) have been practicing since 2001, with great success and at fairly large scale. The results are high and steeply increasing rates of submission and publication (currently 1000 papers per year for ACP), top quality and visibility (impact factors) at low rejection rates (only 10%, as opposed to 50% in traditional journals with lower impact factors), and financial sustainability at low cost (approx. 1000 EUR per paper). I am confident that interactive open access publishing is suitable for most if not all scientific disciplines, and I can only recommend this approach to all scientific publishers.

  • Bee said:

    Cameron: You know my take on the issue, but let me briefly summarize it. The problem isn't peer review. The problem is (a) that reviewers have little incentive to do a good job and/or (b) that they are not aware of what is required of them for their review to be beneficial for progress in science.

    Concretely, I mean that reviewers have little time and, in the vast majority of cases, very insecure jobs and future options, so they'll fight for their own opinions in any possible way, even if they know it's unscientific. They know that, unfortunately, their colleagues' appreciation as well as their funding depends on how many people work in their own field (where there's flies, there must be shit). They rarely understand the necessity of taking risks. But maybe worst of all, the time it takes to offer thoughtful comments and constructive criticism is, the way things look now, completely wasted. We simply have no culture in which criticism is sufficiently appreciated.

    These are social problems, caused by insufficient education and external pressures (time, financial, peer pressure). The problems with peer review are symptoms, not the disease.

  • Brian Whitworth said:

    Good article! See also our recent First Monday paper and the following month's suggested solution to achieve what Andrew Morrison suggests:
    http://firstmonday.org/htbin/cgiwrap/bin/ojs/in
    http://www.uic.edu/htbin/cgiwrap/bin/ojs/index….
    It is about time this discussion was engaged.
    Brian Whitworth

  • Cameron Neylon said:

    Bee, I agree with everything you say here (as you know) and I think we're coming from the same perspective. How do we align incentives so that important things get done? The market is clearly broken, but how do we fix it? Still, I think there is a more fundamental problem.

    As far as I am aware, there is no evidence that traditional peer review (prior to publication/decision, with a limited number of people) would be effective even if people had all the incentives in the world to do it properly. It doesn't matter how much the incentives are fixed if the wheels are still square.

  • Examining Peer Review « said:

    […] 9, 2010 · Leave a Comment Cameron Neylon has written a post on the problems of peer review. From the post: Whatever value it might have we […]

  • The Third Bit » Blog Archive » Peer Review Is Broken said:

    […] Neylon’s recent post about peer review is pretty damning. It’s interesting to compare his description of peer review’s faults […]

  • Peplluis de la Rosa said:

    I think peer review should not be used to select papers for publication, but to improve them: to enrich them with further points of view, further contrast and verification.

    Furthermore, I agree that the burden of peer reviewing can be reduced by 90–95%, as you claim. I proposed a mechanism in IEEE Intelligent Systems, Nov/Dec 2007, Vol. 22, no. 6, http://www.computer.org/portal/web/csdl/doi/10….. I named it “citation auctions”, and the good thing is that peer review is applied to improving papers before they are submitted for publication, not as the method for selecting out the 90% of papers that cannot reach the threshold and are automatically rejected. The remaining 10% are still reviewed to verify other editorial requirements.

  • Cameron Neylon said:

    Brian, interesting papers, though I've only had a chance to skim them so far. Have you looked at the Frontiers series of journals, or the EGU journals mentioned elsewhere in the comments? How do they map onto your thinking?

  • Cameron Neylon said:

    I think I've seen some similar ideas mooted, but not as an explicit auction – more a pay-and-return scheme, where you can only be peer reviewed once you've put in a certain number of peer reviews. I think I first saw something like that suggested by Jeremiah Faith, but I'd have to dig deep to find the reference now. Is there a full-text version of your paper online somewhere? I'd be interested to read more.

  • elearnspace › Peer Review said:

    […] review and have offered a developmental model of scholarship. Which means I’m predisposed to finding articles like this very satisfying: Scientists worship at the altar of peer review, and I use that metaphor […]

  • wrpearson49 said:

    I agree with Steven; while it is easy to identify flaws in the peer review process, and exciting and spectacular examples of when it fails, I believe it plays a critical role in the larger process of scientific discovery. It may be useful to distinguish between “peer” and “review” – whether one gives up on the “review” process altogether, because it has a random component and may not be reliably reproducible, or whether one focuses on how to make it better.

    It is not surprising that there is a random component to success in peer review; there is a random component in every review process (the New Yorker recently had a compelling story on how professional sports teams spend zillions of dollars deciding whom to recruit, with tons of statistics and performance information, and yet still do a terrible job of predicting success). Similar problems are encountered in college admissions. Reviewing with perfect consistency and accuracy is impossible, even in fields that do not try to create new knowledge.

    But I would argue that giving up on review is even worse. In some fields I am familiar with, many published papers are, I think, misleading in an important way. Yes, the experiments were done as described and the results collected properly, but the explanation for the results was not unique, and the conclusions drawn were overstated. It is hard for me to see the benefit of more misleading papers; I think reducing review barriers will reduce the signal-to-noise ratio dramatically, because it is much easier to produce a “novel” mistaken result than a correct one.

  • Cameron Neylon said:

    Bill, thanks for the comments. I don't think anyone is giving up on review altogether. Peer review in its general form is what makes science work in the long term. But this kind of review appears to work over a longer timescale, and to require more diverse input, than the traditional pre-publication peer review process. My opinion/feeling is that if we could harness this kind of post-publication peer review effectively and make it more efficient, then we could do much better than we ever can by trying to improve the traditional pre-publication approach.

    There are risks here – potentially uncertified research being picked up by a credulous media outlet and amplified or taken out of context – but there are risks, damages and expenses with the current system too. What I'm really arguing for is a serious cost-benefit analysis of the different approaches we could take. I still feel we need much less journal-formal publication, which I think lines up with what you are saying. Another problem here is that the published article becomes the finished article, rather than constantly evolving, or being rejected.

    All these papers you mention that are misleading – why can't you mark them up with your comments? Would that not be more useful peer review? Might not another reviewer raise a point that you hadn't thought of? We used to do all this before publication because the act of changing the printed copy was far too expensive to even consider. Now it costs almost nothing (to make the change, that is – publishing still costs money, obviously).

    I guess at core I'm asking for evidence and clarification to support your first statement. Does traditional pre-publication peer review play a critical role in the process, or does review in general? The second I will agree with wholeheartedly; the first I am much more sceptical about.

  • Bill Pearson said:

    For me, the evidence that peer review serves a central role in publication is two-fold. First, several of my best papers improved dramatically in the review process, either because I was forced to provide more data (and often do more analyses) or to present it in a clearer or more convincing way. Second, papers I review often contain serious mis-statements that need correction, or reflect misunderstandings of the literature or of the resources upon which the studies were based. Investigators, particularly in a young and rapidly changing field, often make mistakes, and the literature is more useful when there are fewer of them.

    Indeed, I would argue that the papers demonstrating the “random” nature of peer review have some issues of their own, as those papers freely admit. We cannot know the “true” distribution of “good” and “bad” papers, and it is not clear to me that consistency is the appropriate surrogate. And then, of course, there is the irony that these papers questioning the value of peer review were themselves peer-reviewed.

    As has been pointed out elsewhere, selection of papers for publication balances competing priorities: validity, significance, “sexiness”, politics, controversy. Scientists have been complaining about literature overload for decades, which suggests to me that we need more review, not less. And it's hard for me to imagine an alternative to peer review, despite its shortcomings.

  • Howard said:

    Well said, as are the comments! My take: we've depended too much in the past on a lone-scientist model that ignores the social nature of knowledge. Your ideas and many of the comments reference the social nature of knowledge production, but peer review operates without acknowledging it. There would be benefits in reducing the cascading wall of publications while at the same time increasing the ability of more people to contribute and to understand the process. The web has created a new publication world, but the old infrastructure is insufficient for taking advantage of the potential that exists.

  • How Does a University Create Value for Their Students: Does Current Practice Do This? | A Chronicle of a Learning Journey said:

    […] blogging regarding university tenure processes and journal peer review processes are a reminder of how contestable knowledge production can be; especially if knowledge is not used […]

  • Journals as Filters and Active Agents | Virtual Canuck said:

    […] Siemens sent me a link  to a post by Cameron Neylon that attempts to pound yet another nail in the coffin of peer review. As an editor of a peer […]

  • Publish or Perish: The plague of academia | effectivedesign.org said:

    […] 5, 2010 (my 41st birthday!): Cameron Neylon posts “Peer Review: What is it good for?” The basic premise of the article is that the process needs to be more open.  Journals […]

  • Thoughts after Textual Echoes, part 2: Kristina Busse's keynote, gendered science, money in fandom | Fanfic Forensics said:

    […] present data seemingly supported by others' findings. That may not always be wise of me, given the faults of peer review and other supposedly rational and effective academic processes. Regardless, many tenets of science […]

  • New science journalism ecosystem: new inter-species interactions, new niches « Science in the Triangle said:

    […] First, it is important to remind everyone that peer-review is a very new thing. Only one minor paper by Einstein went through peer-review. Nature only started experimenting with it in the late 1960s. Yet lots and lots of great science was published before this was instituted. There is no data supporting the view that peer-review actually does much good. […]

  • Bee said:

    Hi Cameron,

    I’m somewhat late here, I suppose. Weirdly enough, I printed this blog post (I think because I was about to take a flight somewhere), and just found the printout in my bag.

    I like the point you’re making about the problem with peer review after a change of journals. It has another aspect that you haven’t mentioned, though: it duplicates effort. It has actually happened to me several times that a paper I rejected for publication in one journal was sent to me again by another journal. In all of these cases the authors evidently hadn’t thought even for a second about my report – it was exactly the same paper. (In these cases I usually tell the editor I have already written a report for another journal, attach it, and add that I think it would be fair if the authors got a different referee.)

    In any case, I’m not sure whether I’ve mentioned this before here, but I’ve suggested several times already that a solution to these problems would be to decouple peer review from the journals. I’m thinking that the review process could be done by independent bodies – let’s call them agencies for now – that provide authors with a report (or several) on their paper. This report could then be submitted with the paper to a journal, and the editors could decide in light of the reports whether they want to publish it. The idea is roughly that the value of such a report depends to some extent on the reputation of the agency used.

    There could be several of these agencies. I’m not sure which procedure would be best, so probably it is best to just try some. Some might do the reports anonymously, some not. Some might charge for the reports, some not. It might depend on the field which procedure works best. In some fields, double-blind review might even be feasible. (In fields where people tend to know each other’s work well, or where most papers are pre-published anyway, this doesn’t make much sense though.) There are also some technical hurdles, but I’m sure they can be overcome. For example, if the report goes to the author for later use, you need some way to tie the report to a specific version of the manuscript, as in the sketch below. And I think it would make the refereeing process much easier if such an agency provided a, possibly anonymous, chat interface so that one could ask quick clarifying questions about the manuscript.
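
    As a minimal sketch of that version-binding hurdle, assuming simple content-addressing by hash – the agency name, field names and functions below are all illustrative, not an existing service – the agency records the fingerprint of the exact manuscript it reviewed, and a journal can later check that the report matches the file it actually received:

    ```python
    import hashlib
    import json

    # Illustrative sketch: an independent agency binds a report to the
    # exact manuscript version it reviewed. All names are hypothetical.

    def manuscript_id(document: bytes) -> str:
        """Content address: any edit to the manuscript changes this ID."""
        return hashlib.sha256(document).hexdigest()

    def issue_report(document: bytes, report_text: str, agency: str) -> dict:
        """Report carrying the fingerprint of the version actually reviewed."""
        return {
            "agency": agency,
            "manuscript_sha256": manuscript_id(document),
            "report": report_text,
        }

    version_1 = b"... manuscript text, version 1 ..."
    report = issue_report(version_1, "Methods sound; conclusions overstated.",
                          "SomeReviewAgency")

    # A journal receiving manuscript + report checks they belong together;
    # a silently revised manuscript would fail this check.
    received = version_1
    assert report["manuscript_sha256"] == manuscript_id(received)
    print(json.dumps(report, indent=2))
    ```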

    Besides this, it isn’t true that writing reports isn’t acknowledged at all. There are several journals where you find “best referee of the year” rankings (or something similar). I’ve seen a couple of these; I think it’s a nice touch. It does miss out those people, though, whose refereeing is dispersed over many journals. Best,

    B.