
What is it with researchers and peer review? or; Why misquoting Churchill does not an argument make

25 January 2011
Image: Winston Churchill in Downing Street (via Wikipedia)

I’ve been meaning for a while to write something about peer review, pre- and post-publication, and the attachment of the research community to traditional approaches. A news article in Nature, though, in which I am quoted, seems to have really struck a nerve with many people and has prompted me to actually write something. The context in which the quote is presented doesn’t really capture what I meant, but I stand by the statement in isolation:

“It makes much more sense in fact to publish everything and filter after the fact” – quoted in Mandavilli (2011) “Trial by Twitter” Nature 469, 286-287

I think there are two important things to tease out here: firstly, a critical analysis of the problems and merits of peer review; and secondly, a close look at how it could be improved, modified, or replaced. These merit separate posts, so I’ll start here with the problems in our traditional approach.

One thing that has really started to puzzle me is how un-scientific scientists are about the practice of science. In their own domain researchers will tear arguments to pieces, critically analyse each piece for flaws, and argue incessantly over the data, the methodology, the analysis, and the conclusions that are being put forward, usually with an open mind and a positive attitude.

But shift their attention onto the process of research and all that goes out the window. Personal anecdote, gut feelings, half-baked calculations and sweeping statements suddenly become de rigueur.

Let me pick a toy example. Whenever an article appears about peer review it seems inevitably to begin or end with someone invoking Churchill, with something along the lines of:

“It’s exactly like what’s said about democracy,” he adds. “The peer-review process isn’t very good — but there really isn’t anything that’s better.” ibid

Now let’s examine this through the lens of scientific argument. Firstly, it’s an appeal to authority, not something we’re supposed to respect in science, and in any case it’s a kind of transplanted authority: Churchill never said anything about peer review, but even if he had, why should we care? Secondly, it is a misquotation. In science we expect accurate citation. If we actually look at the Churchill quote we see:

“Many forms of Government have been tried and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.” – sourced from Wikiquotes, which cites: The Official Report, House of Commons (5th Series), 11 November 1947, vol. 444, cc. 206–07

The key here is “…except all those other forms that have been tried from time to time…”. Churchill was arguing from historical evidence. The trouble is that when it comes to peer review we a) have never really tried any other system, so the quote really isn’t applicable (actually it’s worse than that: other systems have been used, mostly on a small scale, and they actually seem to work pretty well, but that’s for the next post) and b) what evidence we do have shows almost universally that peer review is a waste of time and resources and that it really doesn’t achieve very much at all. It doesn’t effectively guarantee accuracy, it fails dismally at predicting importance, and it’s not really supporting any effective filtering.

If I’m going to appeal to authority I’ll go for one with some domain credibility, let’s say the Cochrane Reviews, which conclude the summary of a study of peer review with “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research.” Or perhaps Richard Smith, a former editor of the British Medical Journal, who describes the quite terrifying ineffectiveness of referees in finding errors deliberately inserted into a paper. Smith’s article is a good entry point into the relevant literature, as is a Research Information Network study that notably doesn’t address the issue of whether peer review of papers helps to maintain accuracy, despite being broadly supportive of the use of peer review to award grants.

Now does this matter? I mean in some ways people seem to feel we’re bumbling along well enough. Why change things? Well consider the following scenario.

The UK government gives £3B to a company, no real strings attached, except the expectation of them reporting back. At the end of the year the company says “we’ve done a lot of work but we know you’re worried about us telling you more than you can cope with, and you won’t understand most of it so we’ve filtered it for you.”

A reporter digs a bit into this and is interested in these filters. The interview proceeds as follows:

“So you’ll be making the whole record available as well as the stuff that you’ve said is most important presumably? I mean that’s easy to do?”

“No, we’d be worried about people getting the wrong idea so we’ve kept all of that hidden from them.”

“OK, but you’ll be transparent about the filtering at least?”

“No, we’ll decide behind closed doors with three of our employees and someone to coordinate the decision. We can’t really provide any information on who is making the decisions on what has been filtered out. Our employees are worried that their colleagues might get upset about their opinions so we have to keep it secret who looked at what.”

“Aaaalright so how much does this filtering cost?”

“We’re not too sure, but we think between £180M and £270M a year.”

“And that comes out of the £3B?”

“No, we bill that separately to another government department.”

“And these filters, you are sure that they work?”

“Well, we’ve done a bit of work on that, but no-one in the company is especially interested in the results.”

“But what are the results?”

“Well, we can’t show any evidence that the filtering is any good for deciding what is important or whether it’s accurate, but our employees are very attached to it. I can get some of them in; they’ll tell you lots of stories about how it’s worked for them…”

I mean, seriously? They’d be ripped to shreds in moments. What if this happened within government? The media would have a field day. What makes us as a research community any different? And how are you going to explain that difference to taxpayers? Let’s look at the evidence, see where the problems are, see where the good things are, and let’s start taking our responsibility to the public purse seriously. Let’s abandon the gut feelings and anecdotes and actually start applying some scientific thinking to the processes we use to do and communicate science. After all, if science works, then we can’t lose, can we?

Now simply abandoning the current system tomorrow is untenable and impractical. And there are a range of perfectly valid concerns that can be raised about moving to different systems. These are worth looking at closely and we need to consider carefully what kinds of systems and what kinds of transition might work. But that is a job for a second post.



16 Comments

  • Software Carpentry » The Case Against Peer Review said:

    […] Neylon recently made the case against peer review once again; the dialog near his posting’s end is too accurate to be funny.  In this light, […]

  • Relax « Notes on Disordered Matter said:

    […] on publishing and peer review (see overview of posts here), an insightful post by Cameron Neylon: What is it with researchers and peer review? or; Why misquoting Churchill does not an argument make. If you are researcher, peer review is (and will remain) important – therefore keeping up on […]

  • rpg said:

    (Have discovered that the Churchill quote in the context of peer review is coming from a Nature stable article in 2005… same as the ‘critics’ mentioned in the Nature news piece. No wonder it wasn’t cited.)

  • David Crotty said:

    Other systems certainly have been, and continue to be, tried. As examples, what about the court of public opinion as a filtering/review system (see the immunization/autism controversy)? Or, as rpg, who comments below, notes in a recent blog posting (http://blog.the-scientist.com/2011/01/20/politik/), The Third Reviewer, which seems to have withered on the vine as a review mechanism. PLoS’ numbers for post-publication review offered in their article-level metrics suggest a generally unreviewed literature if that was the sole method employed.

    Looking forward to your suggestions for better filtering methods.

  • More peer review. Zzzzzz. | Not ranting – honestly said:

    […] in the end, I am back with the analogy that makes Cameron Neylon so tetchy. Yes, that Churchill one, about peer review being imperfect, but less imperfect than the […]

  • Chris Surridge said:

    I pretty much agree with everything said here, most especially about the use of the Churchill misquotation; shoot me if I ever use it myself. Very briefly, my view is that Peer Review doesn’t stand a chance because it is being asked to do too many things. People want it to:

    1. Stop incorrect science being disseminated.
    2. Detect fraud.
    3. Assess what people will find interesting now.
    4. Decide what will have been important when we look back in 20 years.
    5. Pass judgement on a researcher’s quality.
    6. Work with authors to improve their publications.
    7. Act as a proxy for grant-awarding panels.

    That’s too much to ask of any system. I could probably devise a very efficient way to achieve one or maybe two of the above, but all of them at the same time is an impossibility.

    As an editor I use peer review and have always used peer review for one thing only, to provide me with advice from people more knowledgeable than I about a particular area of science. And I need that advice so that I (and my editorial colleagues) can make a decision about which reports of research are most appropriate to be presented to the readership of my journal. It also gives me advice to help the authors make the paper more appropriate for my readership than it currently is.

    Peer review is brilliant at doing that.

  • Peer review – a bad example « Girl, Interrupting said:

    […] Neyland has published one of 2 posts defending his opinion of peer-review – standing by the quote ‘it makes more sense in fact to […]

  • Peer-review – I am beatin’ that horse until it dies! | The Occam's Typewriter Irregulars said:

    […] published another post on peer review in response to Cameron Neylon’s post about peer-review here . Specifically I didn’t like his example in the post as I don’t think its a good […]

  • Cameron Neylon (author) said:

    Well, some quick, easy answers on alternatives to traditional peer review: arXiv (the sky doesn’t seem to have fallen in for the physicists); F1000, which seems to be doing OK, though I think the model is somewhat flawed; and Atmospheric Chemistry and Physics, which has a publish-first, peer-review-later approach that seems to be working OK for them as well.

    I think the Wakefield paper is a peculiar example for you to choose, though, as it was an abject failure of both peer review and editorial review, followed up with extensive post-publication peer review, tests and a lot of hard work. This is exactly the kind of review that does work: public, principled, and closely argued. The behind-closed-doors approach failed here. The fact that that review happened in the peer-reviewed literature for the most part doesn’t actually damage my argument from my perspective. The question is whether we could have done that post-publication review more effectively and efficiently. I think the answer is yes: had there been a massive response quickly to that paper, with good evidence and good argument, then maybe the court of public opinion wouldn’t have gone off the rails.

  • Cameron Neylon (author) said:

    Ok, so you seem to be using it for two things here :-) On the second, the “improving papers” argument, I can’t currently lay my hands on it but I think there has been one study on this that failed to show any improvement. It’s a hard study to do, obviously, but it would be worth pinning this down. The trouble is that you’re falling back on personal anecdote again and I’m suspicious of that. Yes, you get some different opinions, but how often are they the right opinions? How often are they accurate, and if you are by definition not in a position to tell, how does that help us? These are tough questions, but the only objective evidence out there suggests no effect (actually my reading of the evidence is that professional editors are much better at these jobs than peer reviewers). Against this we have “but I know it works”, which sounds rather like the argument for homeopathy to me.

    But bottom line, yes, we should be designing systems explicitly and then checking that they are doing what we intend them to do. That’s the key. Applying scientific approaches to science. I’ve got to say, I’d much rather be wrong here than right. It would make our lives a lot easier. I still think there are options for improving efficiency but if we knew what peer review really was good for we’d at least have a starting point.

  • David Crotty said:

    Chris, which of the factors in your list of 7 go into your decision about which reports of research are appropriate to be presented to the readership of your journal? I’m thinking it’s likely 1, 2, 3 and 6. How would Nature Protocols handle a manuscript that came back with a review stating that the paper was incorrect, fraudulent, of little interest to the community and poorly written? While not perfect, as an article published by your employers notes (http://www.nature.com/emboj/about/emboj_rejects_2007.html), peer review is pretty effective at doing what it’s supposed to do.

    I don’t really know of any journal that asks reviewers to assess question 4, given the speed of scientific progress. Seriously, what was the last paper from 1991 that you read?

    5 and 7 are not failures of peer review, but instead are failures of the tenure and grant review processes, although a candidate’s track record of producing results is, and should be, a factor in such decisions, and their publication record is one way of assessing it.

    I do agree with much of what you’ve written, that assessing research is much more complicated than a simple and deeply flawed metric like the Impact Factor. But given the “brilliance” you see in peer review, are you really in complete agreement with Cameron that it should be eliminated, that it has no measurable value, and that everything should be published and sorted out after the fact? It seems like you’re making a very different argument than he is.

  • Mark Ware said:

    Nicely put, Cameron, and entertaining as ever. I think you’re attacking a straw man by picking on the Churchill quote, though: surely most people just include it as a way of adding a little colour to their writing and to sum up the general feeling among the majority of researchers, rather than seriously citing it in support of the present system?

    I’ve just written an overview of the current state of peer review, hopefully (subject to their peer review) to appear in a forthcoming issue of New Review of Information Networking (http://mrkwr.wordpress.com/2011/01/28/is-peer-review-in-crisis/). For what it’s worth, I managed to avoid quoting Churchill. By the way, I’m sure it’s easy to find examples of people misquoting Churchill, but I checked the various sources that I cite and in fact they all correctly quote him.

    A lot depends on what you think peer review is for. You quote the Cochrane review, which says there is no evidence that peer review improves the quality of biomedical research (though you omit the bit where they say that the absence of evidence isn’t the same thing as evidence of absence, I note). But is improving research the purpose of (journal) peer review? I’m more inclined to Chris Surridge’s position: that for publishing (peer review for grant applications is a different matter) peer review helps improve the published paper. In the PRC report you mention, we found that authors overwhelmingly said that review had improved their last published paper, so perhaps researchers are not being as irrational as you suggest in supporting peer review?

  • Cameron Neylon said:

    Mark, yes, the Churchill quote is a kind of trivial example but I have to admit it drives me up the wall because it is so intellectually lazy.

    I agree that we only really have an absence of evidence at the moment, and also that positive effects would be hard to observe, but nonetheless given the amount of resource involved I think it is incumbent on us to demonstrate that we are getting value for money. That’s what really worries me: the somewhat lazy attitude to even seeing what the evidence is.

    As you say, there are a lot of surveys that show that authors support peer review. What there isn’t is very much objective evidence to back up authors’ views. The strong positive support, and the general failure to back this up where it has been tested, is beginning to lead me to wonder whether the support for peer review is actually some form of Stockholm Syndrome. Clay Shirky, in his most recent book, Cognitive Surplus, notes that you get exactly this kind of positive emotional attachment to unpleasant and non-productive work because the alternative is to admit that you’re spending a lot of time on a worthless activity. I’d at least submit this as a hypothesis worthy of being disproved, and at the moment we don’t have the evidence to do that.

    But I’d love to see more evidence. As I’ve said elsewhere, it would be nice to be proven wrong, at least in some aspects of this!

  • Cameron Neylon said:

    I would note that even I’m not saying peer review should be eliminated, just that it should be repositioned so as to get the most potential value out of it, and reduce the risk of opportunity costs as far as possible. Maybe I should stop spending time replying to comments and just write the second post…

    However, I’d also say that I think the report you link to actually supports my position. There’s no evidence here about peer review per se, or at least nothing that can’t be explained simply by a Matthew effect. But in particular, if I do a back-of-the-envelope calculation, we’ve got 2849 rejected manuscripts of which about 2400 are ultimately published. If I assume three additional reviewers each spend three hours on each of these (round up to ten hours per paper to make the calculation easy) with a notional hourly rate of, say, US$400, then we’ve got $4000 per paper, for about $10M in additional and arguably unnecessary peer review costs. In addition, EMBO J will be a premium journal, so we are also paying extra to the journal for the privilege of paying the extra in peer review. So you need to demonstrate at least $10M in added value from this process to justify the additional peer review. Note that I’m not including the additional costs of reformatting papers (I’d guess around another $10M, based on one person-day per paper), nor the opportunity costs due to the delay in publication – this is near impossible to calculate, although it would be interesting to consider ways in which it might be estimated. These might in part be offset by the risks of early publication, although I’m sceptical that those are really significant, to be honest – I imagine you’d disagree on that point, but again it would be interesting to estimate those potential costs.
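
    To make the back-of-the-envelope arithmetic explicit, here is a rough sketch of the same sums in Python. All the figures are the notional assumptions above (reviewer numbers, hours, hourly rate), not measured costs, and the reformatting line is my own hypothetical valuation of a person-day at roughly the same notional $4000 per paper.

        # Notional figures only: assumptions, not measured costs.
        rejected = 2849                  # manuscripts rejected (figure from the linked report; context only)
        republished = 2400               # roughly how many of those are ultimately published elsewhere
        hours_per_paper = 10             # three extra reviewers x three hours each, rounded up
        hourly_rate_usd = 400            # notional hourly rate

        cost_per_paper = hours_per_paper * hourly_rate_usd     # $4,000 per paper
        extra_review_cost = republished * cost_per_paper       # $9,600,000, i.e. "about $10M"

        # Hypothetical extra: reformatting at one person-day per paper, valued at
        # roughly the same notional $4,000 per paper, adds about another $10M.
        reformat_cost = republished * cost_per_paper

        print(f"Additional peer review: ${extra_review_cost:,}")   # -> $9,600,000
        print(f"Reformatting (guess):   ${reformat_cost:,}")       # -> $9,600,000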

    Two obvious counter-arguments. One is that this is largely editorial rejection, in which case peer review doesn’t come into it and we’re running around in a side issue. If we’re going to have a trickle-down system I’d rather it was editorial rejection, because that is less costly and arguably more accurate. Although I note again that NPG often justifies its existence and non-OA status with the high costs of (largely editorial) rejection. So we get charged coming and going.

    The second counter-argument is that there is a value-add in the system, through improving the papers, keeping the record “clean”, and improving the searchability of “important” papers in EMBO J. So your side of the argument is to provide estimates of that value-add. Again hard to estimate, but I’d be interested in seeing what your view on that is.

  • Chris Surridge said:

    Maybe I should have said earlier, but I’ll say now that of course my views are mine and not supposed to represent those of my employer (Nature Publishing Group at the moment).

    As you say, I chiefly look for advice on 1, 3 and 6. Fraud I tend to leave to one side as, from what I get to work with as a journal editor, it doesn’t look much different from studies that have been honestly messed up: the data looks ‘wrong’ in some way.

    As to looking into the future, I think that peer reviewers are always being asked to spot Nobel-prize work or studies that will open up a wholly new field. The question might only be implicit but it is always there.

    I don’t disagree with you, David. I think what I was trying to say was that Peer Review has been developed by scientific journals to help them decide what to publish in a time of increasing demand (more and more research wanting to be published) and restricted supply (a conventional journal can only publish so much). It is pretty good at this. The problems come when too much reliance is placed on journal publication to shape people’s careers and guide decisions about the assigning of large quantities of public money for research.

    I’m drifting off topic. I agree with Cameron that post-publication review of research has always played an important part in the culture of science; however, traditional channels for this are too slow (discussion/citation in subsequent published papers) and too exclusive (individual conversations within labs/at meetings). To me it isn’t a question of replacing Peer Review with post-publication review, but of how we can make fast and inclusive (Open, even) review an integral part of modern science.
