Best practice in Science and Coding. Holding up a mirror.

The following is the text from which I spoke today at the .Astronomy conference. I think there is some video available on the .Astronomy UStream account and I also have audio which I will put up somewhere soon.

There’s a funny thing about the science and coding communities. Each seems to think that the other has all the answers. Maybe the grass is just greener… For many years as an experimental scientist I looked jealously at both computational scientists and coders in general. Wouldn’t it be so much less stressful, I naively thought, to have systems that would do what they were told, to be able to re-run experiments easily, and to be able to rely on getting the same answer. Above all, I thought, imagine the convenience of being able to take someone else’s work and apply it quickly and easily to my own problems.

There is something of a mythology around code, and perhaps more so around open source, that it can be relied on, that there is a toolkit out there already for every problem. That there is a Ruby gem or an R library for every problem, or most memorably that I can sit at a Python command line and just demand antigravity by importing it. Sometimes these things are true, but I’m guessing that everyone has experience of it not being true. Of the Python library that looks as though it is using dictionaries but is actually using some bizarre custom data type, the badly documented Ruby gem, or the Perl… well, just the Perl really. The mythology doesn’t quite live up to the hype. Or at least not as often as we might like.

But if we experimental scientists have an overoptimistic view of how repeatable and reliable computational tools are, then computer scientists have an equally unrealistic view of how experimental science works. Greg Wilson, one of the great innovators in computer science education, once said, while criticizing the documentation and testing standards of scientific code, “An experimental scientist would never get away with not providing their data, not providing their working. Experimental science is expected to be reproducible from the detailed methodology given…” Data provided… detailed methodology… reproducible… this doesn’t really sound like the experimental science literature that I know.

Ted Pedersen in an article with the wonderful title “Empiricism is not a matter of faith” excoriates computational linguistics by holding it up to what he sees as the much higher standards of reproducibility and detailed description of methodology in experimental science. Yet I’ve never been able to reproduce an experiment based only on a paper in my life.

What is interesting about both of these viewpoints is that we are projecting our very real desire to raise standards onto a mythology of someone else’s practice. There seems to be a need to view some other community’s practice as the example rather than finding examples within our own. This is odd because it is precisely the best examples, within each community, that inspire the other. There are experimental scientists who give detailed step-by-step instructions to enable others to repeat their work, who make the details of their protocols available online, and who work within their groups to the highest standards of reproducibility that are possible in the physical world. Equally there are open source libraries and programs with documentation that is both succinct and detailed, code that just works when you import the library, that is fully tested and comes with everything you need to make sure it will work with your systems. Or that breaks in an informative way, making it clear what you need to do with your own code to get it working.

If we think about what makes science work (effective communication, continual testing and refinement, public criticism of claims and ideas), these are the things that make up good science. They are the reason I had a laptop to write this talk on this morning, the reason the train and taxi I caught actually ran, and, more seriously, the reason a significant portion of the people in this room did not in fact die in childhood. If we look at these things then we see a very strong correspondence with good practice in software development. High quality and useful documentation is key to good software libraries. You can be as open source as you like, but if no-one can understand your code they’re not going to use it. Controls, positive and negative, statistical and analytical, are basically unit tests. Critique of any experimental result comes down to asking whether each aspect of the experiment is behaving the way it should, whether each process has been tested to show that a standard input gives the expected output. In a very real sense experiment is an API layer we use to interact with the underlying principles of nature.
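
To make the analogy concrete, here is a minimal sketch in Python. The assay, its inputs, and its expected responses are all invented for illustration; the point is simply that positive and negative controls map directly onto unit tests.

```python
import unittest

def run_assay(sample):
    """Hypothetical stand-in for an experimental measurement or analysis step."""
    # Invented responses: the assay recognises a reference standard and a blank.
    known_responses = {"reference_standard": 1.0, "blank": 0.0}
    return known_responses.get(sample, 0.5)

class ControlTests(unittest.TestCase):
    def test_positive_control(self):
        # Positive control: the standard input must give the expected signal.
        self.assertAlmostEqual(run_assay("reference_standard"), 1.0, places=2)

    def test_negative_control(self):
        # Negative control: the blank must give no signal.
        self.assertAlmostEqual(run_assay("blank"), 0.0, places=2)

if __name__ == "__main__":
    unittest.main()
```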

So this is a nice analogy, but I think we can take it further; in fact I think that code and experiment are linked at a deeper level. Both are instantiations of a process that takes inputs and generates outputs. These are (to a first approximation, good enough for this discussion) deterministic in any given instance. But they are meaningless without context, useless without the meaning that documentation and testing provide.

Let me give you an example. Ana Nelson has written a lovely documentation tool called Dexy. It builds on the concepts of literate programming in a beautifully elegant and simple way. Take a look for the details, but in essence it enables you to incorporate the results of running arbitrary code directly into your documentation. As you document what your code does you provide examples, parts of the process that actually run, so you are testing the code as you go. If you break the method you break your documentation. It is also no accident that thinking about documentation as you build your code helps you create good modular structures that are easy to understand and therefore both easy to use and easy to communicate. They may be a little more work to write, but the value you create by thinking about the documentation up front means you are motivated to capture it at that point. Design by contract and test-driven development are tough; documentation-driven development can really help drive good process.
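
Dexy has its own pipeline and syntax, so rather than guess at it, here is the same principle in a minimal form using Python’s built-in doctest: the example embedded in the documentation is executed whenever the tests run, so if the code’s behaviour changes the documentation breaks with it. (The function itself is just an invented example.)

```python
def normalise(values):
    """Scale a list of numbers so that they sum to 1.

    The example below is executed when the documentation is tested,
    so a change in behaviour breaks the documentation as well as the code.

    >>> normalise([2, 2, 4])
    [0.25, 0.25, 0.5]
    """
    total = sum(values)
    return [v / total for v in values]

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```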

Too often when we write a scientific paper it’s the last part of the process. We fabricate a story that makes sense so that we can fit in the bits we want to. Now there’s nothing wrong with this. Humans are narrative processing systems; we need stories to make sense of the world. But it’s not the whole story. What if, as we collect and capture the events that we ultimately use to tell our story, we also collected and structured the story of what actually happened? Of the experiments that didn’t work, of the statistical spread of good and bad results. There’s a sarcastic term in synthetic organic chemistry, the “American Yield”, in which we imagine that 20 PhD students have been tasked with making a compound and the one who manages to get the highest overall yield gets to be first author. This isn’t actually a particularly useful number. Much more useful to the chemist who wants to use this prep is the spread of values, information that is generally thrown away. It is the difference between actually incorporating the running of the code into the documentation, and just showing one log file, cut and pasted from when it worked well. You lose the information about when it doesn’t work.
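
A toy sketch makes the point about spread; all of the numbers here are invented.

```python
import statistics

# Imagined yields (percent) from repeated attempts at the same prep.
reported_yields = [12, 18, 22, 25, 31, 64]

best = max(reported_yields)                   # the single number that gets published
typical = statistics.median(reported_yields)  # what a chemist should actually expect
spread = statistics.stdev(reported_yields)    # how variable the prep really is

print(f"Published yield: {best}%")
print(f"Typical yield:   {typical}% (spread of roughly ±{spread:.0f}%)")
```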

Other tools from coding can also provide inspiration. Tools like Hudson for continuous integration: every time the code base is changed everything gets re-built, dependencies are tested, unit tests are run, and a record is kept of what gets broken. If you want to do X, you know you need to use this version of that library. This isn’t a problem. In any large codebase things are going to get broken as changes are made; you change something, see what is broken, then go back and gradually fix those things until you’re ready to commit to the main branch (at which point someone else has broken something…)
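
As a rough sketch of that loop (the commands and project layout are assumptions, not Hudson’s actual configuration), the cycle a continuous integration server automates on every commit looks something like this:

```python
import subprocess

def run(step, command):
    """Run one build step and report whether it broke."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    print(f"{step}: {'ok' if result.returncode == 0 else 'BROKEN'}")
    return result.returncode == 0

def integrate():
    steps = [
        ("checkout", "git pull --ff-only"),
        ("build", "python -m compileall ."),
        ("unit tests", "python -m pytest"),
    ]
    # Run every step even if an earlier one fails, so the record shows
    # exactly what a change broke rather than just that it broke something.
    results = [run(name, cmd) for name, cmd in steps]
    return all(results)

if __name__ == "__main__":
    integrate()
```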

Science is continuous integration. This is what we do: we make changes, we check what they break, see if the dependencies still hold, and if necessary go back and fix them. This is, after all, where the interesting science is. Or it would be if we did it properly. David Shotton and others have spoken about the problem of “citation creep” or “hedging erosion” [see for example this presentation by Anita de Waard]. This is where something initially reported in one paper as a possibility, or even just a speculation, gets converted into fact by a process of citation. What starts as “…it seems possible that…” can get turned into “…as we know that X causes Y (Bloggs et al, 2009)…” within 18 months or a couple of citations. Scientists are actually not very good at checking their dependencies. And those unchecked dependencies have a tendency to come back and bite us in exactly the same way as a quick patch that wasn’t properly tested can.

Just imagine if we could do this. If every time a new paper was added to the literature we could run a test against the rest. Check all the dependencies… if this isn’t true then all of these other papers are in doubt as well… indeed, if we could unit test papers, would it be worth peer reviewing them? There is good evidence that pair programming works, and little evidence that traditional peer review does. What can we learn from this to make the QA processes in science and software development better?
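
Here is a toy sketch of the idea; the papers and citation links are entirely invented. If the claim at the root turns out not to hold, everything that depends on it gets flagged for re-checking.

```python
# Hypothetical citation graph: each paper lists the papers it relies on.
citations = {
    "Bloggs2009": [],                 # the original, hedged observation
    "Smith2010": ["Bloggs2009"],      # cites it as established fact
    "Jones2011": ["Smith2010"],
    "Lee2011": ["Bloggs2009", "Jones2011"],
}

def dependents(paper, graph):
    """Return every paper that directly or indirectly relies on `paper`."""
    affected = set()
    frontier = [paper]
    while frontier:
        current = frontier.pop()
        for candidate, refs in graph.items():
            if current in refs and candidate not in affected:
                affected.add(candidate)
                frontier.append(candidate)
    return affected

if __name__ == "__main__":
    print("If Bloggs2009 falls, re-check:", sorted(dependents("Bloggs2009", citations)))
```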

I could multiply examples. What would an agile lab look like? What would be needed to make it work? What can successful library development communities tell us about sharing samples, and what can the best data repositories tell us about building sites for sharing code? How can we apply the lessons of StackOverflow to a new generation of textbooks, and how can we best package up descriptions of experimental protocols in a way that provides the same functionality as sharing an Amazon Machine Image?

Best practice in coding mirrors best practice in science. Documentation, testing, and integration are at the core. Best practice is also a long way ahead of common practice in both science and coding. Both, perhaps, are increasingly driven by a celebrity culture that depends more on what your outputs look like (and where they get published) than on whether anyone uses them. Testing and documentation are hardly glamorous activities.

So what can we do about it? Improving practice is an arduous task. Many people are doing good work here with training programmes, tools, standards development, and calls to action online and in the scientific literature. There are too many people and organizations for me to call out individually, and none of them get the credit they deserve.

One of the things I have been involved with is trying to provide a venue, a prestigious venue, where people can present code that has been developed to the highest standards. Open Research Computation, a new Open Access journal from BioMedCentral, will publish papers that describe software for research. Our selection criteria don’t depend on how important the research problem is, but on the availability, documentation, and testing of the code. We expect the examples given in these papers to be reproducible, by which we mean that the software, the source code, the data, and the methodology are provided and described well enough that it is possible to reproduce those examples. By applying high standards, and by working with authors to help them reach those standards, we aim to provide a venue which is both useful and prestigious. Think about it: a journal that contains papers describing the most useful and usable tools and libraries is going to get a few citations and (whisper it) ought to get a pretty good impact factor. I don’t care about impact factors, but I know the reality on the ground is that those of you looking for jobs, or trying to keep them, do need to worry about them.

In the end, the problem with a journal, or with code, or with science is that we want everyone else to provide the best documentation, the best tested code and procedures, but it’s hard to justify doing it ourselves. I mean, I just need something that works, yesterday. I don’t have time to write the tests in advance, think about the architecture, re-read all of that literature to check what it really said. Tools that make this easier will help: tools like Dexy and Hudson, or lab notebooks that capture what we’re doing and what we are creating, rather than what we said we would do, or what we imagine we did in retrospect.

But it’s motivation that is key here. How do you motivate people to do the work up front? You can tell them that they have to, of course, but really these things work best when people want to make the effort. The rewards for making your work re-usable can be enormous, but they usually come further down the road than the moment when you make the choice not to bother. And those rewards are less important to most people than getting the Nature paper, or getting mentioned in Tim O’Reilly’s tweet stream.

It has to be clear that making things re-usable is the highest contribution you can make, and it has to be rewarded accordingly. I don’t even really care what forms of re-use are counted: re-use in research, re-use in education, in commerce, in industry, in policy development. ORC is deliberately, very deliberately, intended to hack the impact factor system by featuring highly re-usable tools that will gain lots of citations. We need more of these hacks.

I think this shift is occurring. It’s not widely known just how close UK science funding came to being slashed in the comprehensive spending review. That it wasn’t was due to a highly coherent and well organized campaign that convinced ministers and the Treasury that the re-use of UK research outputs generated enormous value, economic, social, and educational, for the country and indeed globally. That the Sloan Digital Sky Survey was available in a form that could be re-used to support the development of something like Galaxy Zoo played a part in this. The headlong rush of governments worldwide to release their data is a massive effort to realize the potential value of the re-use of that data.

This change in focus is coming. It will no longer be enough in science just to publish. As David Willetts said in [answer to a question] in his first policy speech, “I’m very much in favour of peer review, but I worry when the only form of review is for journals”. Government wants evidence of wider use. They call it impact, but it’s basically re-use. The policy changes are coming: the data sharing policies, the public engagement policies, the impact assessments. Just showing outputs will not be enough; showing that you’ve configured those outputs so that the potential for re-use is maximized will be an assumption of receiving funding.

William Gibson said the future is already here, it’s just unevenly distributed. They Might Be Giants asked, not quite in response, “but where’s my jetpack?” The jetpacks, the tools, are around us and being developed, if you know where to look. Best practice is unevenly distributed both in science and in software development, but it’s out there if you want to go looking. The motivation to adopt it? The world around us is changing. The expectations of the people who fund us are changing. Best practice in code and best practice in science have an awful lot in common. If you can master one you will have the tools to help you with the other. And if you have both then you’ll be well positioned to ride the wave of change as it sweeps by.



Open Research Computation: An ordinary journal with extraordinary aims.

I spend a lot of my time arguing that many of the problems in the research community are caused by journals. We have too many, they are an ineffective means of communicating the important bits of research, and as a filter they are inefficient and misleading. Today I am very happy to be publicly launching the call for papers for a new journal. How do I reconcile these two statements?

Computation lies at the heart of all modern research, whether it is the massive scale of LHC data analysis or the use of Excel to graph a small data set. From the hundreds of thousands of web users who contribute to Galaxy Zoo to the solitary chemist reprocessing an NMR spectrum, we rely absolutely on billions of lines of code that we never think to look at. Some of this code is in massive commercial applications used by hundreds of millions of people, well beyond the research community. Sometimes it is a few lines of shell script or Perl that will only ever be used by the one person who wrote it. At both extremes we rely on the code.

We also rely on the people who write, develop, design, test, and deploy this code. In the context of many research communities the rewards for focusing on software development, of becoming the domain expert, are limited. And the cost in terms of time and resource to build software of the highest quality, using the best of modern development techniques, is not repaid in ways that advance a researcher’s career. The bottom line is that researchers need papers to advance, and they need papers in journals that are highly regarded, and (say it softly) have respectable impact factors. I don’t like it. Many others don’t like it. But that is the reality on the ground today, and we do younger researchers in particular a disservice if we pretend it is not the case.

Open Research Computation is a journal that seeks to directly address the issues that computational researchers have. It is, at its heart, a conventional peer reviewed journal dedicated to papers that discuss specific pieces of software or services. A few journals now exist in this space that either publish software articles or have a focus on software. Where ORC will differ is in its intense focus on the standards to which software is developed, the reproducibility of the results it generates, and the accessibility of the software to analysis, critique and re-use.

The submission criteria for ORC Software Articles are stringent. The source code must be available on an appropriate public repository under an OSI-compliant license. Running code, in the form of executables or an instance of a service, must be made available. Documentation of the code will be expected to a very high standard, consistent with best practice in the language and research domain, and it must cover all public methods and classes. Similarly, code testing must be in place, covering, by default, 100% of the code. Finally, all the claims, use cases, and figures in the paper must have test data associated with them, with examples of both the input data and the expected outputs.
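
As a minimal, invented illustration of what those criteria mean in practice (the function, the figure, and the data are all hypothetical), a documented public function and a test backing one of the paper’s claims with example input and expected output might look like this:

```python
def smooth(signal, window=3):
    """Return a simple moving average of `signal` with the given window.

    Parameters
    ----------
    signal : list of float
        The raw data series shown in the paper's (hypothetical) Figure 1.
    window : int
        Number of consecutive points to average over.
    """
    return [
        sum(signal[i:i + window]) / window
        for i in range(len(signal) - window + 1)
    ]

def test_figure_1_example():
    # Example input data and expected output shipped with the paper.
    assert smooth([1.0, 2.0, 3.0, 4.0], window=2) == [1.5, 2.5, 3.5]
```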

The primary consideration for publication in ORC is that your code must be capable of being used, re-purposed, understood, and efficiently built on. Your work must be reproducible. In short, we expect the computational work published in ORC to deliver at the level that is expected in experimental research.

In research we build on the work of those who have gone before. Computational research has always had the potential to deliver on these goals to a level that experimental work will always struggle to match, yet to date it has not reliably delivered on that promise. The aim of ORC is to make this promise a reality by providing a venue where computational development work of the highest quality can be shared and celebrated; a venue that will stand for the highest standards in research computation, and where developers, whether they see themselves more as software engineers or as researchers who code, will be proud to publish descriptions of their work.

These are ambitious goals, and getting the technical details right will be challenging. We have assembled an outstanding editorial board, but we are all human, and we don’t expect to get it all right first time. We will be doing our testing and development out in the open as we develop the journal, and will welcome comments, ideas, and criticisms sent to editorial@openresearchcomputation.com. If you feel your work doesn’t quite fit the guidelines as I’ve described them above, get in touch and we will work with you to get it there. Our aim, at the end of the day, is to help the research developer build better software and apply better development practice. We can also learn from your experiences, so wider-ranging review and proposal papers are welcome too.

In the end I was persuaded to start yet another journal only because there was an opportunity to do something extraordinary within that framework. An opportunity to make a real difference to the recognition and quality of research computation. In the way it conducts peer review, manages papers, and makes them available Open Research Computation will be a very ordinary journal. We aim for its impact to be anything but.

Other related posts:

Jan Aerts: Open Research Computation: A new journal from BioMedCentral
