Why good intentions are not enough to get negative results published

There is a set of memes that seems to have been popping up with increasing regularity in the last few weeks. The first is that more of the outputs of scientific research need to be published. Sometimes this means the publication of negative results; other times it might mean that a community doesn’t feel it has an outlet for its particular research field. The traditional response is “we need a journal for this”. Over the years there have been many attempts to create a “Journal of Negative Results”. There is a Journal of Negative Results – Ecology and Evolutionary Biology (two papers in 2008), a Journal of Negative Results in Biomedicine (four papers in 2009; actually looks pretty active), a Journal of Interesting Negative Results in Natural Language (one paper), and a Journal of Negative Results in Speech and Audio Sciences, which appears to be defunct.

The idea is that there is a huge backlog of papers detailing negative results that people are gagging to get out, if only there were somewhere to publish them. Unfortunately there are several problems with this. The first is that actually writing a paper is hard work. Most academics I know do not have the problem of having nothing to publish; they have the problem of getting around to writing the papers, sorting out the details, and making sure that everything is in good shape. This leads to the second problem: getting a negative result to a standard worthy of publication is much harder than for a positive result. You only need to make that compound, get that crystal, clone that gene, or get the microarray to work once, and you’ve got the data to analyse for publication. To show that something doesn’t work, you need to repeat it several times, make sure your statistics are in order, and establish the bounds of your working conditions. Partly this is a problem with the standards we apply to recording our research; designing experiments so that negative results are well established is not high on many scientists’ priorities. But partly it is the nature of the beast: negative results need to be much more tightly bounded to be useful.

Finally, even if you can get the papers, who is going to read them? And more importantly, who is going to cite them? Because if no-one cites them, the standing of your journal is not going to be very high. Will people pay to have papers published there? Will you be able to get editors? Will people referee for you? Will people pay for subscriptions? Clearly this journal will be difficult to fund and keep running. And this is where the second meme comes in, one which still gets surprising traction: that “publishing on the web is free”. Now we know this isn’t the case, but there is a slightly more sophisticated version, which is “we will be able to manage with volunteers”. After all, with a couple of dedicated editors donating their time, peer review being done for free, and authors taking on the formatting role, surely the costs can be kept manageable? Some journals do survive on this business model, but it requires real dedication and drive, usually on the part of one person. The unfortunate truth is that putting a lot of your spare time into supporting a journal which is not regarded as high impact (however that is measured) is not very attractive.

For this reason, in my view, these types of journals need much more work put into the business model than a conventional specialist journal does. To have any credibility in the long term you need a business model that works for the long term. I am afraid that “I think this is really important” is not a business model, no matter how good your intentions. A lot of the standing of a journal is tied up with authors’ views of whether it will still be there in ten years’ time. If that isn’t convincing, they won’t submit; if they don’t submit, you have no impact; and in the long term you have a downward spiral until you have no journal.

The fundamental problem is that the “we need a journal” approach is stuck in the printed-page paradigm. To get negative results published we need to lower the barriers to publication well below where they currently are, while at the same time applying either a pre- or post-publication filter. Rich Apodaca, writing on Zusammen last week, talked about micropublication in chemistry: the idea of reducing the smallest publishable unit by providing routes to submit smaller packages of knowledge or data to some sort of archive. This is technically possible today; services like ChemSpider, NMRShiftDB, and others make it possible to submit small pieces of information to a central archive. More generally, the web makes it possible to publish whatever we want, in whatever form we want, but hopefully semantic web tools will enable us to do this in an increasingly useful form in the near future.
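
To make the micropublication idea concrete, here is a minimal sketch, in Python, of what depositing one small package of results might look like. Everything here is hypothetical: the archive URL, the payload schema, and the field names are invented purely for illustration, and this is not the API of ChemSpider, NMRShiftDB, or any other real service.

```python
# A minimal sketch of a "micropublication" deposit: a small, self-contained
# package of results (here, a negative result) sent to a central archive.
# The endpoint and schema are hypothetical, for illustration only.
import json
import urllib.request

ARCHIVE_URL = "https://archive.example.org/deposits"  # hypothetical archive service

deposit = {
    "title": "Attempted coupling of X with Y: no product observed",
    "result_type": "negative",
    "conditions": {"catalyst": "Pd(PPh3)4", "solvent": "THF", "temp_C": 65},
    "repeats": 3,  # negative results need repetition to be well bounded
    "evidence": ["nmr_spectrum_001.jdx"],  # pointers to the raw data files
    "license": "CC0",
}

request = urllib.request.Request(
    ARCHIVE_URL,
    data=json.dumps(deposit).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    record = json.load(response)
    print("Deposited as:", record.get("id"))
```

The particular schema is not the point; the size of the unit is. A deposit like this could be assembled in minutes from a good lab record, where a formal paper takes days or weeks.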

Fundamentally, my personal belief is that the vast majority of “negative results” and other journals that are trying to expand the set of publishable work will not succeed, precisely because they are pushing the limits of the “publish through a journal” approach by setting up a journal. To succeed, these efforts need to embrace the nature of the web, to act as web-native resources, and not as printed journals that happen to be viewed in a browser. This does two things: it reduces the barrier to authors submitting work, making the project more likely to be successful, and it can also reduce costs. It doesn’t in itself provide a business model, nor does it provide quality assurance, but it can provide a much richer set of options for developing both of these in ways appropriate to the web. Routes towards quality assurance are well established, but suffer from the ongoing problem of getting researchers involved in the process, a subject for another post. As for business models, micropublication might work through micropayments, the whole lab book might be hosted for a fee with a number of “publications” bundled in, research funders might pay for services directly, or, more interestingly, the archive might be able to sell services built on top of the data, truly adding value to it.

But the key is low barriers for authors and a robust business model that can operate even if the service is perceived as being low impact. Without these you are creating a lot of work for yourself, and probably a lot of grief. Nothing comes free, and if there isn’t income, that cost will be your time.

6 Replies to “Why good intentions are not enough to get negative results published”

  1. I believe there was a journal of negative results in psychology that is also now defunct. The Journal of Spurious Correlations (http://www.jspurc.org/) seems to exist but not to have published anything.
    However, PLoS ONE (plosone.org) is a good place to publish negative results, I think (disclosure: I am a board member). The criteria for publication (http://www.plosone.org/static/guidelines.action), by omitting subjective importance and emphasizing just that the study was done with sound methodology, reduce some of the difficulties in publishing negative results with traditional journals.

    However, as you point out, most will not put in the time to write up a negative result, and I am no different: I usually just abandon the project. So I agree that doing open science on the web helps, even in the limited way I do it (http://openwetware.org/wiki/Holcombe), because it at least signals to a web-searcher that there is a possibly-abandoned project out there. With an email they can find out more.

  2. Alex, thanks for the comment. Yes, I meant to mention that PLoS ONE is something quite different, as well as answering some of the criticisms I was making. Firstly, it has a business model, which as far as we know is actually proving rather successful, and this business model was built in from the beginning, not added in haste when things started to go wrong. It also reduces barriers by providing clarity on what is accepted and what is not, again different to pretty much any other journal, where there is always the issue of “importance” and of how important is important enough. But the other key thing PLoS ONE aims to provide is speed, again reducing barriers to entry.

    So I am a great supporter of PLoS ONE (and similar well-thought-out efforts) precisely because I think it deals with the criticisms that I make here. I think it is great when negative results do get published, and I am very happy for people to make the effort to create places where this can be done – but they need to be backed up with good resources, a serious business model for long-term credibility, and a recognition of the importance of brand (which, again, PLoS does well on).

    Also, I think there are probably two different types of negative experiment which I have, not very helpfully, combined here. There are experiments designed to test “does x have an effect on y?”, for which the answer can be a well-supported “no”. As others have commented elsewhere, as long as the paper writing is going on in parallel with the experiment, or at least a good record is being kept, the barrier to publishing these types of experiments should be low. I think common practice is a long way from best practice here, but we should keep pushing for higher standards.

    The other type of negative result is “this didn’t work”. I work a lot in methods development, and we often try things that on the face of it look sensible but don’t work out (at least in our hands, in the way we try them), and then you find out that lots of other people have tried it and failed too (or, in some cases, that there is a secret trick you need to know). Here it is close to impossible to publish (referees will always ask “did you try x, have you thought about y, did you check the gubbins on the widget every five minutes?”). This is where micropublication, along with a good record of the experiment, can be valuable. If you see that 10 people have tried something and it doesn’t work, then you can either decide it ain’t going to happen or, conversely, look into the detail to see what people have missed. But I can’t see very many people ever going to the effort of actually writing this kind of thing up as formal papers.

  3. Wise words; I fully agree about reducing the barrier to writing up (I’m a founder of JNR-EEB, so we’ve seen these issues first-hand). But the solution you offer is to change the whole culture. This might be good (for other reasons as well) if we can get there, but it’s difficult to see how.

    Could an online “journal” be set up for negative results that would act as a half-way house? I.e. it acts like a journal (it has peer review and issues), but the content is closer to online open science.

  4. Hi Bob, thanks for dropping by. If I’d realised before that you were directly involved in JNR-EEB, I would have asked you some questions in advance! I don’t think I’m advocating radical change here (makes a change, you might say!). More that, if we believe more of these results should be available (which I am assuming we agree on), we either need very good business models in place to support them if they are to be conventional journals, or we need to think outside the box.

    But given that this stuff is currently not being published anyway – I don’t think we need to radically change the existing system to get it out there – we need new, low-barrier mechanisms that make it easy. To beat another dead horse, it needs to be built into the existing workflows of scientists rather than creating new burdens over and above what already exists. I think that’s closer to what you’re suggesting, but I suspect the peer review bit may be a) a barrier to entry and b) really difficult to find people for in the long term.

  5. I suspect the barrier is that a paper still has to be put together, with introduction, discussion, etc. If we can persuade people that that stuff’s not important, we might do better. I suspect that would be easier if we give them a different model to start from.

  6. Agreed – but even I would say that there needs to be a little bit of context, so I think there does need to be some barrier. The first step in the filtering process ought to be “I definitely think this package is of interest to some people”. I don’t know what a reasonable amount of effort would be, though – one person-day?
