The software code that is written to support and manage research sits at a critical intersection of our developing practice of shared, reproducible, and re-usable research in the 21st century. Code is amongst the easiest things to usefully share: it is made up of easily transferable bits and bytes, but it also, critically, carries its context with it in a way that digital data doesn't. Code at its best is highly reproducible: it comes with the tools to determine what is required to run it (makefiles, documentation of dependencies) and, when run, should (ideally) generate the same results from the same data. Where there is a risk that it might not, good code will provide tests of one sort or another that you can run to make sure things are ok before proceeding. Testing, along with good documentation, is what ensures that code is re-usable, that others can take it and efficiently build on it to create new tools and new research.
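To make that a little more concrete, here is a minimal sketch (in Python, with a hypothetical `normalise` function; the file name and behaviour are illustrative, not a prescription) of what "carrying its context" can look like in practice: the code, a note on where dependencies would be declared, and a test that anyone can run to check they get the expected answer before building further.

```python
# analysis.py -- a hypothetical analysis function with its test alongside it.
# External dependencies (if any) would be declared in a requirements.txt or a
# Makefile so that others can recreate the environment the code expects.

def normalise(values):
    """Scale a list of numbers so that they sum to 1.0."""
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalise values that sum to zero")
    return [v / total for v in values]


# A test a new user can run (e.g. with `python -m unittest analysis`) to
# confirm the code behaves as documented before they start building on it.
import unittest

class TestNormalise(unittest.TestCase):
    def test_sums_to_one(self):
        result = normalise([2.0, 3.0, 5.0])
        self.assertAlmostEqual(sum(result), 1.0)

    def test_rejects_all_zero_input(self):
        with self.assertRaises(ValueError):
            normalise([0.0, 0.0])

if __name__ == "__main__":
    unittest.main()
```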
The outside perspective, as I have written before, is that software does all of this better than experimental research. In practice, there are frameworks that make it possible for software to do a very good job on these things, but doing that good job takes work; work that is generally not done. Most software for research is not shared, is not well documented, generates results that are not easily reproducible, and does not support re-use and repurposing through testing and documentation. Indeed, much like most experimental research. So how do we realise the potential of software to act as an exemplar for the rest of our research practice?
Nick Barnes of the Climate Code Foundation developed the Science Code Manifesto, a statement of how things ought to be (I was very happy to contribute and be a founding signatory). While for many this may not go far enough (it doesn't explicitly require open source licensing), it is intended as a practical set of steps that communities could adopt today. It has already garnered hundreds of endorsers and I'd encourage you to sign up if you want to show your support. The Science Code Manifesto builds on many years of work by Victoria Stodden in identifying the key issues and bringing them to wider awareness among researchers and funders, as well as the work of John Cook, Jon Claerbout, and Patrick Vanderwalle at ReproducibleResearch.net.
If the manifesto and the others' work aim (broadly) to set out the principles and map where we need to go, then Open Research Computation is intended as a practical step embedded in today's practice. Researchers need the credit provided by conventional papers, so if we can link papers in a journal that garners significant prestige with high standards in the design and application of the software they describe, we can tie the existing incentives to our desired practice. This is a high wire act. How far do we push those standards out in front of where most of the community is? We explicitly want ORC to be a high profile journal featuring high quality software, for acceptance to be a mark of quality that the community will respect. At the same time we can't ask for the impossible. If we set standards so high that no-one can meet them, then we won't have any papers. And with no papers we can't start the process of changing practice. Equally, allow too much in and we won't create a journal with a buzz about it. That quality mark has to be respected as meaning something by the community.
I'll be blunt. We haven't had the number of submissions I'd hoped for. Lots of support, lots of enquiries, but relatively few of them turning into actual submissions. The submissions we do have I'm very happy with. When we launched the call for submissions I took a pretty hard line on the issue of testing. I said that, as a default, we'd expect 100% test coverage. In retrospect that sent a message that many people felt they couldn't deliver on. What I meant was that when testing fell below that standard (as it would in almost all cases), there would need to be an explanation of the testing strategy: how it was tackled and how it could support people re-using the code. The language in the author submission guidelines has been softened a bit to try and make that clearer.
What I've been doing in practice is asking reviewers and editors to comment on how the testing framework provided can support others re-using the code. Are the tests adequate to help someone get started: taking the code, making sure they've got it working, and then, as they build on it, giving them confidence they haven't broken anything? For me this is the critical question: does the testing and documentation make the code re-usable by others, either directly in its current form or as they build on it? Along the way we've been asking whether submissions provide documentation and testing consistent with best practice. But that always raises the question of what best practice is. Am I asking the right questions? And where should we ultimately set that bar?
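As an illustration of the kind of thing reviewers are being asked to look for (not ORC's formal requirement, just a sketch, with a hypothetical `fit_line` function and made-up values), a regression-style test like the one below gives someone re-using the code a concrete way to confirm it works on their machine and to notice immediately when a later change breaks existing behaviour.

```python
import unittest

# Hypothetical function under test: fit a straight line y = a*x + b by least squares.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept


class TestFitLine(unittest.TestCase):
    def test_recovers_known_line(self):
        # Regression check: data generated from y = 2x + 1 should give those
        # parameters back. If a later change alters the result, this fails loudly.
        xs = [0.0, 1.0, 2.0, 3.0]
        ys = [1.0, 3.0, 5.0, 7.0]
        slope, intercept = fit_line(xs, ys)
        self.assertAlmostEqual(slope, 2.0)
        self.assertAlmostEqual(intercept, 1.0)

if __name__ == "__main__":
    unittest.main()
```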
Changing practice is tough, and getting the balance right is hard. But the key question for me is: how do we set that balance right? And how do we turn the aims of ORC, to act as a lever to change the way that research is done, into practice?