Articles tagged with: peer review
One of the things we want the Open Research Computation journal to do is bring more of the transparency and open critique that characterises the best Open Source Software development processes into the scholarly peer review process. But you can talk about changing the way peer review works, or you can actively do something about it. Michael Barton and Hazel Barton have taken matters into their own hands and thrown the doors completely open. They have submitted a paper to ORC and, in parallel, asked the community on the BioStar site how the paper and software could be improved.
So my previous post on peer review hit a nerve. Actually, all of my posts on peer review hit a nerve and create massive traffic spikes, and I’m still really unsure why. The strength of feeling around peer review seems out of all proportion both to its importance and to the extent to which people understand how it works in practice across different disciplines. Nonetheless it is an important and serious issue, and one that deserves serious consideration, both blue-skies thinking and applied work, as it were. And it is the latter I will try to do here.
I’ve been meaning for a while to write something about peer review, pre and post publication, and the somewhat bizarre attachment of the research community to the traditional approaches. A news article in Nature, though, in which I am quoted, seems to have really struck a nerve for many people. The context in which the quote is presented doesn’t really capture what I meant, but I stand by the statement in isolation. I think there are two important things to tease out here: firstly, a critical analysis of the problems and merits of peer review, and secondly, a close look at how it could be improved, modified, or replaced. These merit separate posts, so I’ll start here with the problems in our traditional approach.
Nature Publishing Group yesterday announced a new venture, very closely modelled on the success of PLoS ONE, titled Scientific Reports. Others have started to cover the details and some implications, so I won’t do that here. I think there are three big issues: What does this tell us about the state of Open Access? What are the risks and possibilities for NPG? And why oh why does NPG keep insisting on a non-commercial licence? Those merit separate posts, so here I’m just going to deal with the big issue. And I think this is really big.
I spend a lot of my time arguing that many of the problems in the research community are caused by journals. We have too many, they are an ineffective means of communicating the important bits of research, and as a filter they are inefficient and misleading. Today I am very happy to be publicly launching the call for papers for a new journal. How do I reconcile these two statements?
Last Friday I spoke at the STM Innovation Seminar in London, taking up in general terms the theme I’ve been developing recently of focussing on enabling user discovery rather than providing central filtering, enabling people to act as their own gatekeepers rather than publishers taking that role on themselves. The example I used, the JACS hydride oxidation paper, wasn’t such a good example because, as a retraction, there is more detailed information available in the published retraction. Had it come a few days earlier, the arsenic microbes paper, and the subsequent detailed critique, might well have made a better example. But both of these examples seem to be pointing towards a world in which post-publication peer review is not just happening, but expected. How can publishers work to make the best of this new information and treat papers as the beginning of a story rather than its end?
I recently made the most difficult decision I’ve had to take thus far as a journal editor. That decision was ultimately to accept the paper; that probably doesn’t sound like a difficult decision until I explain that I made it despite a referee saying I should reject the paper with no opportunity for resubmission, not once, but twice. One of the real problems I have with traditional pre-publication peer review is the way it takes a very nuanced problem, around a work which has many different parts, and demands a hard yes/no decision.
A talk given in two slightly different forms at the NFAIS annual meeting 2010 (where I followed Clay Shirky, hence the title) and at the Society for General Microbiology in Edinburgh in March. In the first case the talk was part of a panel of presentations intended to give the view of “scholars” to the information professionals. In the second it was part of a session looking at the application of web-based tools to research and education.
Now, about that filter..
Abstract (NFAIS meeting): There was …
The online maths community has lit up with excitement as a document claiming to prove one of the major outstanding theorems in maths has been circulated. In response, an online peer review process has swung into action that is very similar to the kind of post-publication peer review that many of us have advocated. Is this a one-off, a special case? Or does it point the way towards successfully using the web to find a way of doing peer review effectively and efficiently?
The idea that “it’s not information overload, it’s filter failure”, combined with the traditional process of filtering scholarly communication by peer review prior to publication, seems to be leading towards the idea that we need to build better filters by beefing up the curation of research output before it is published. Here I argue that this is backwards, and that the ‘filter failure’ soundbite is perhaps unfortunate in the context of scholarly communications. The web won’t reduce the cost of curation, but it has reduced the cost of publication. This means that instead of building filters to prevent stuff getting on the web, it is more productive to focus on enhancing discovery. A focus on enabling discovery can both deliver for researchers and provide business models that are more aligned with the way the web works.