Fork, merge and crowd-sourcing data curation

I like to call this one "Fork"

Over the past few weeks there has been a sudden increase in the amount of financial data on scholarly communications in the public domain. This was triggered in large part by the Wellcome Trust releasing data on the prices paid for Article Processing Charges by the institutions it funds. The release of this pretty messy dataset was followed by a substantial effort to clean that data up. This crowd-sourced data curation process has been described by Michelle Brook. Here I want to reflect on the tools that were available to us and how they made some aspects of this collective data curation easy, but also made some other aspects quite hard.

The data started its life as a csv file on Figshare. This is a very frequent starting point. I pulled that dataset and did some cleanup using OpenRefine, a tool I highly recommend as a starting point for any moderate to large dataset, particularly one that has been put together manually. I could use OpenRefine to quickly identify and correct variant publisher and journal name spellings, clean up some of the entries, and also find issues that looked like mistakes. It's a great tool for doing that initial cleanup, but it's a tool for a single user, so once I'd done that work I pushed my cleaned-up csv file to github so that others could work with it.
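
For anyone curious what that initial pass looks like outside of OpenRefine, the sketch below is purely illustrative (the file name and the "Publisher" column are assumptions); it mimics the kind of "key collision" clustering OpenRefine uses to surface variant spellings:

```python
# A minimal sketch (not the actual cleanup script) of the kind of
# normalisation that OpenRefine's clustering automates.
import csv
import re
from collections import defaultdict

def fingerprint(name):
    """Lower-case, strip punctuation, sort tokens, so that variants like
    'Wiley-Blackwell' and 'Blackwell, Wiley' collapse to the same key."""
    tokens = re.sub(r"[^\w\s]", " ", name.lower()).split()
    return " ".join(sorted(set(tokens)))

clusters = defaultdict(set)
with open("apc_data.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        clusters[fingerprint(row["Publisher"])].add(row["Publisher"])

# Any fingerprint with more than one spelling is a candidate for merging.
for key, variants in clusters.items():
    if len(variants) > 1:
        print(key, "->", variants)
```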

After pushing to github a number of people did exactly what I'd intended and forked the dataset. That is, they took a copy and added it to their own repository. In the case of code people will fork a repository, add to or improve the code, and then make a pull request that notifies the original repository owner that there is new code they might want to merge into their version of the codebase. The success of github has been built on making this process easy, even fun. For data the merge process can get a bit messy, but the potential was there for others to do some work and for us to be able to combine it back together.

But github is really only used by people comfortable with command line tools – my thinking was that people would use computational tools to enhance the data. But Theo Andrews had the idea of bringing in many more people to manually look at and add to the data. Here an online spreadsheet, such as those provided by GoogleDocs, that many people can work on at once is a powerful tool, and it was through the adoption of a GDoc that somewhere over 50 people were able to add to and annotate the spreadsheet, creating a high-value dataset that allowed the Wellcome Trust to do a much deeper analysis than had previously been the case. The dataset had been forked again, now to a new platform, and this tool enabled what you might call a "social merge", collecting the individual efforts of many people through an easy-to-use tool.

The interesting thing was that exactly the facilities that made the GDoc attractive for manual crowdsourcing efforts made it very difficult for those of us working with automated tools to contribute effectively. We could take the data and manipulate it, forking again, but if we then pushed that re-worked data back we ran the risk of overwriting what anyone else had done in the meantime. That live online multi-person interaction, which works well for people, was actually a problem for computational processing. The interface that makes working with the data easy for people actually created a barrier to automation and a barrier to merging back what others of us were trying to do. [As an aside, yes, we could in principle work through the GDocs API, but that's just not the way most of us work when doing this kind of data processing.]
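
To make the problem concrete, here is a rough sketch of the kind of conflict-aware comparison an automated contributor would want before pushing re-worked data back over a live spreadsheet. The file names and the key column ("Article title") are invented for illustration, and this is not the workflow we actually used:

```python
# Compare a re-worked fork of the data against a fresh export of the live
# version and flag cells that differ, instead of blindly overwriting.
# Assumes the key column uniquely identifies each row.
import pandas as pd

ours = pd.read_csv("our_fork.csv").set_index("Article title")
theirs = pd.read_csv("gdoc_export.csv").set_index("Article title")

shared_cols = ours.columns.intersection(theirs.columns)
shared_rows = ours.index.intersection(theirs.index)

conflicts = []
for col in shared_cols:
    a = ours.loc[shared_rows, col]
    b = theirs.loc[shared_rows, col]
    differs = (a != b) & ~(a.isna() & b.isna())   # treat NaN vs NaN as equal
    for row in shared_rows[differs.to_numpy()]:
        conflicts.append((row, col, a[row], b[row]))

# Conflicting cells need a human decision; everything else can be merged.
for row, col, mine, yours in conflicts:
    print(f"{row} / {col}: {mine!r} (ours) vs {yours!r} (theirs)")
```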

Crowdsourcing of data collection and curation tends to follow one of two paths. Collection of data is usually done into some form of structured data store, supported by a form that helps the contributor provide the right kind of structure. Tools like EpiCollect provide a means of rapidly building these kinds of projects. At the other end, large-scale data curation efforts, such as GalaxyZoo, tend to create purpose-built interfaces to guide the users through the curation process, again creating structured data. Where there has been less tool building, and fewer big successes, is the space in the middle, where messy or incomplete data has been collected and a community wants to enhance it and clean it up. OpenRefine is a great tool, but isn't collaborative. GDocs is a great collaborative platform but creates barriers to using automated cleanup tools. Github and code repositories are great for supporting the fork, work, and merge-back pattern but don't support direct human interaction with the data.

These issues are part of a broader pattern of issues with Open Access, Open Data, and Open Educational Resources more generally. With the right formats, licensing, and distribution mechanisms we've become very, very good at supporting the fork part of the cycle. People can easily take that content and re-purpose it for their own local needs. What we're not so good at is providing the mechanisms, both social and technical, to make it easy to contribute those variations, enhancements, and new ideas back to the original resources. This is both a harder technical problem and challenging from a social perspective. Giving stuff away and letting people use it is easy because it requires little additional work. Working with people to accept their contributions back in takes time and effort, both often in short supply.

The challenge may be even greater because the means for making one type of contribution easier may make others harder. That certainly felt like the case here. But if we are to reap the benefits of open approaches then we need to do more than just throw things over the fence. We need to find the ways to gather back and integrate all the value that downstream users can add.


Open is a state of mind

William Henry Fox Talbot's 'The Open Door' (Photo credit: Wikipedia)

“Open source” is not a verb

Nathan Yergler via John Wilbanks

I often return to the question of what “Open” means and why it matters. Indeed the very first blog post I wrote focussed on questions of definition. Sometimes I return to it because people disagree with my perspective. Sometimes because someone approaches similar questions in a new or interesting way. But mostly I return to it because of the constant struggle to get across the mindset that it encompasses.

Most recently I addressed the question of what "Open" is about in an online talk I gave for the Futurium Program of the European Commission (video is available). In this I tried to get beyond the definitions of Open Source, Open Data, Open Knowledge, and Open Access to the motivation behind them, something which is both non-obvious and conceptually difficult. All of these various definitions focus on mechanisms – on the means by which you make things open – but not on the motivations behind them. As a result they can often seem arbitrary and rules-focussed, and do become subject to the kind of religious wars that result from disagreements over the application of rules.

In the talk I tried to move beyond that, to describe the motivation and the mindset behind taking an open approach, and to explain why this is so tightly coupled to the rise of the internet in general and the web in particular. Being open, as opposed to making open resources (or making resources open), is about embracing a particular form of humility. For the creator it is about embracing the idea that – despite knowing more about what you have done than any other person – the use and application of your work is something that you cannot predict. Similarly, for someone working on a project, being open is understanding that – despite the fact you know more about the project than anyone else – crucial contributions and insights could come from unknown sources. At one level this is just a numbers game: given enough people, it is likely that someone, somewhere, can use your work, or contribute to it in unexpected ways. As a numbers game it is rather depressing on two fronts. First, it feels as though someone out there must be cleverer than you. Second, it doesn't help, because you'll never find them.

Most of our social behaviour and thinking feels as though it is built around small communities. People prefer to be a (relatively) big fish in a small pond; scholars even take pride in knowing the "six people who care about and understand my work"; the "not invented here" syndrome arises from the assumption that no-one outside the immediate group could possibly understand the intricacies of the local context enough to contribute. It is better to build up tools that work locally than to put effort into building a shared community toolset. Above all, the effort involved in listening for, and working to understand, outside contributions is assumed to be wasted. There is no point "listening to the public" because they will "just waste my precious time". We work on the assumption that, even if we accept the idea that there are people out there who could use our work or could help, we can never reach them. That there is no value in expending effort to even try. And we do this for a very good reason: because for the majority of people, for the majority of history, it was true.

For most people, for most of history, it was only possible to reach and communicate with small numbers of people. That means in turn that for most kinds of work, those networks were simply not big enough to connect the creator with the unexpected user, or the unexpected helper with the project. The rise of the printing press, and then telegraph, radio, and television changed the odds, but only the very small number of people who had access to these broadcast technologies could ever reach larger numbers. And even they didn't really have the tools that would let them listen back. What is different today is the scale of the communication network that binds us together. By connecting millions and then billions together, the probability that people who can help each other can be connected has risen to the point that, for many types of problem, they actually are.

That gap between "can" and "are" – the gap between the idea that there is a connection with someone, somewhere, that could be valuable, and actually making the connection – is the practical question that underlies the idea of "open". How do we make resources discoverable and re-usable so that they can find those unexpected applications? How do we design projects so that outside experts can both discover them and contribute? Many of these movements have focussed on the mechanisms of maximising access, the legal and technical means to maximise re-usability. These are important; they are a necessary but not sufficient condition for making those connections. Making resources open enables re-use, enhances discoverability, and, by making things more discoverable and more usable, has the potential to enhance both discovery and usability further. But beyond merely making resources open we also need to be open.

Being open goes in two directions. First we need to be open to unexpected uses. The Open Source community was the first to this principle, rejecting the idea that it is appropriate to limit who can use a resource. The principle here is that by being open to any use you maximise the potential for use. Placing limitations always has the potential to block unexpected uses. But the broader open source community has also gone further by exploring and developing mechanisms that support the ability of anyone to contribute to projects. This is why Yergler says "open source" is not a verb. You can license code, you can make it "open", but that does not create an Open Source Project. You may have a project to create open source code, an "Open-source project", but that is not necessarily a project that is open, an "Open source-project". Open Source is not about licensing alone, but about public repositories, version control, documentation, and the creation of viable communities. You don't just throw the code over the fence and expect a project to magically form around it; you invest in and support community creation with the aim of creating a sustainable project. Successful open source projects put community building and outreach – both reaching contributors and encouraging them – at their centre. The licensing is just an enabler.

In the world of Open Scholarship, and I would include both Open Access and Open Educational Resources in this, we are a long way behind. There are technical and historical reasons for this, but I want to suggest that a big part of the issue is one of community. It is in large part about a certain level of arrogance. An assumption that others, outside our small circle of professional peers, cannot possibly either use our work or contribute to it. There is a comfort in this arrogance, because it means we are special, that we uniquely deserve the largesse of the public purse to support our work because others cannot contribute. It means we do not need to worry about access, because the small group of people who understand our work "already have access". Perhaps more importantly, it encourages the consideration of fears about what might go wrong with sharing over a balanced assessment of the risks of sharing versus the risks of not sharing: the risks of not finding contributors, of wasting time, of repeating what others already know will fail, or of simply never reaching the audience who can use our work.

It also leads to religious debates about licenses, as though a license were the point or copyright was really a core issue. Licenses are just tools, a way of enabling people to use and re-use content. But the license isn't what matters. What matters is embracing the idea that someone, somewhere can use your work, that someone, somewhere can contribute back, and adopting the practices and tools that make it as easy as possible for that to happen. And that if we do this collectively the common resource will benefit us all. This isn't just true of code, or data, or literature, or science. But the potential for creating critical mass, for achieving these benefits, is vastly greater with digital objects on a global network.

All the core definitions of "open", from the Open Source Definition, to the Budapest (and Berlin and Bethesda) Declarations on Open Access, to the Open Knowledge Definition, have a common element at their heart – that an open resource is one that any person can use for any purpose. This might be good in itself, but that's not the real point; the point is that it embraces the humility of not knowing. It says, I will not restrict uses because that damages the potential of my work to reach others who might use it. And in doing this I provide the opportunity for unexpected contributions. With Open Access we've only really started to address the first part, but if we embrace the mindset of being open then both follow naturally.


Good practice in research coding: What are the targets and how do we get there…?

EN{code}D Exhibition, The Building Centre (Image by olliepalmer.com via Flickr)

The software code that is written to support and manage research sits at a critical intersection of our developing practice of shared, reproducible, and re-usable research in the 21st century. Code is amongst the easiest things to usefully share, being both made up of easily transferable bits and bytes and, critically, carrying its context with it in a way that digital data doesn't. Code at its best is highly reproducible: it comes with the tools to determine what is required to run it (make files, documentation of dependencies) and when run should (ideally) generate the same results from the same data. Where there is a risk that it might not, good code will provide tests of one sort or another that you can run to make sure that things are ok before proceeding. Testing, along with good documentation, is what ensures that code is re-usable, that others can take it and efficiently build on it to create new tools, and new research.
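
As a minimal illustration of the kind of test that does this work – not drawn from any particular project, and with an invented function – consider:

```python
# test_normalise.py - a small, invented example of a test that makes research
# code safely re-usable: anyone building on normalise() can run the tests and
# get a loud failure if they break its behaviour.
import pytest

def normalise(values):
    """Scale a sequence of numbers so that they sum to 1.0."""
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalise an all-zero series")
    return [v / total for v in values]

def test_normalise_sums_to_one():
    assert sum(normalise([2.0, 3.0, 5.0])) == pytest.approx(1.0)

def test_normalise_rejects_all_zero_input():
    with pytest.raises(ValueError):
        normalise([0.0, 0.0])
```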

The outside perspective, as I have written before, is that software does all of this better than experimental research. In practice the truth is that there are frameworks that make it possible for software to do a very good job on these things, but that in reality doing a good job takes work; work that is generally not done. Most software for research is not shared, is not well documented, generates results that are not easily reproducible, and does not support re-use and repurposing through testing and documentation. Indeed much like most experimental research. So how do we realise the potential of software to act as an exemplar for the rest of our research practice?

Nick Barnes of the Climate Code Foundation developed the Science Code Manifesto, a statement of how things ought to be (I was very happy to contribute and be a founding signatory), and while for many this may not go far enough (it doesn't explicitly require open source licensing) it is intended as a practical set of steps that might be adopted by communities today. This has already garnered hundreds of endorsers and I'd encourage you to sign up if you want to show your support. The Science Code Manifesto builds on many years of work by Victoria Stodden in identifying the key issues and bringing them to wider awareness with both researchers and funders, as well as the work of John Cook, Jon Claerbout, and Patrick Vanderwalle at ReproducibleResearch.net.

If the manifesto and the others' work are actions that aim (broadly) to set out the principles and to understand where we need to go, then Open Research Computation is intended as a practical step embedded in today's practice. Researchers need the credit provided by conventional papers, so if we can link papers in a journal that garners significant prestige with high standards in the design and application of the software that is described, we can link the existing incentives to our desired practice. This is a high wire act. How far do we push those standards out in front of where most of the community is? We explicitly want ORC to be a high profile journal featuring high quality software, for acceptance to be a mark of quality that the community will respect. At the same time we can't ask for the impossible. If we set standards so high that no-one can meet them then we won't have any papers. And with no papers we can't start the process of changing practice. Equally, allow too much in and we won't create a journal with a buzz about it. That quality mark has to be respected as meaning something by the community.

I’ll be blunt. We haven’t had the number of submissions I’d hoped for. Lots of support, lots of enquiries, but relatively few of them turning into actual submissions. The submissions we do have I’m very happy with. When we launched the call for submissions I took a pretty hard line on the issue of testing. I said that, as a default, we’d expect 100% test coverage. In retrospect that sent a message that many people felt they couldn’t deliver on. Now what I meant by that was that when testing fell below that standard (as it would in almost all cases) there would need to be an explanation of what the strategy for testing was, how it was tackled, and how it could support people re-using the code. The language in the author submission guidelines has been softened a bit to try and make that clearer.

What I've been doing in practice is asking reviewers and editors to comment on how the testing framework provided can support others re-using the code. Are the tests provided adequate to help someone get started on the process of taking the code, making sure they've got it working, and then, as they build on it, giving them confidence they haven't broken anything? For me this is the critical question: does the testing and documentation make the code re-usable by others, either directly in its current form, or as they build on it? Along the way we've been asking whether submissions provide documentation and testing consistent with best practice. But that always raises the question of what best practice is. Am I asking the right questions? And where should we ultimately set that bar?

Changing practice is tough, getting the balance right is hard. But the key question for me is how do we set that balance right? And how do we turn the aims of ORC, to act as a lever to change the way that research is done, into practice?

 


A return to “bursty work”

Parris Island, S.C., barrage balloon (Image by The Library of Congress via Flickr)

What seems like an age ago a group of us discussed a different way of doing scientific research. One partly inspired by the modular building blocks approach of some of the best open source software projects but also by a view that there were tremendous efficiency gains to be found in enabling specialisation of researchers, groups, even institutes, while encouraging a shared technical and social infrastructure that would help people identify the right partners for the very specific tasks that they needed doing today.

"Bursty work" is a term first used by Chris Messina but introduced to the online community of scientists by Deepak Singh. At the time it seemed obvious that, with enough human and financial capital, a loose network of specialist groups could do much better science, and arguably much more effective exploitation of that science, than isolated groups perpetually re-inventing the wheel.

The problem of course is that science funding is not configured that way, a problem that is the bane of any core-facility manager's existence. Maintaining a permanent expert staff via a hand-to-mouth existence of short-term grants is tough. Some succeed but more probably fail, and there is very little glory in this approach. Once again it is prestige that gets promotion, not effective and efficient use of resources.

But the world is changing. A few weeks ago I got a query from a commercial partner interested in whether I could solve a specific problem. This is a small "virtual company" that aims to target the small scale, but potentially high value, innovations that larger players don't have the flexibility to handle. Everything is outsourced, samples prepared and passed from contractor to contractor. Turns out I think we can solve their problem and it will be exciting to see this work applied. What is even more gratifying is that the company came across this work in an Open Access journal which made it easier both to assess how useful it was and whether to get in touch. In the words of my contact:

“The fact that your work was in an open access journal certainly made it easier for me to access. I guess the same google search would have found it in a different journal, but it might have required a subscription for access. In that case I would have used the free info available (corresponding authors, university addresses etc) to try and get in touch based on the abstract.”

The same problems of course remain. How do I reasonably cost this work? What is the value of being involved vs just being a contractor? And of course, where will I find the time, or the pair of hands, to get the work done? People with the right expertise don't grow on trees, and it's virtually impossible to get people on short contracts at the moment. Again, in the words of our collaborator:

“Bursty work” sounds a little like how [our company] is trying to operate. One problem is moving from an investment environment where investors invest in companies to one where they invest in projects. Has any work been done to identify investors who like the idea of bursty work?

Nonetheless, it's exciting to me that some elements of what was beginning to seem like a pipe dream are coming to pass. It takes time for the world to catch up, but where there is a demand for innovation, and an effective market, the opportunities are there for the people who can make them work.

[It won’t escape anyone’s notice that I’ve given no details of either the project or the company. We are doing this under an NDA and as this is someone else’s project I’m not going to be difficult about it. We make progress one small step at a time]


Open Source, Open Research and Open Review

Logo of the Open Source Initiative (Image via Wikipedia)

The submissions for Open Research Computation (which I blogged about a month or so back) are starting to come in and we hope to be moving towards getting those initial papers out soon. One of the things we want the journal to do is bring more of the transparency and open critique that characterises the best Open Source Software development processes into the scholarly peer review process. The journal will have an open review process in which reviews and the versions of the manuscript they refer to will be available.

One paper's authors, however, have taken matters into their own hands and thrown the doors completely open. With agreement from the editorial board, Michael Barton and Hazel Barton have asked the community on the BioStar site, a bioinformatics-focussed member of the StackExchange family of Q&A websites, how the paper and software could be improved. They have published a preprint of the paper and the source code was obviously already available on Github. You can see more at Michael's blog post. We will run a conventional peer review process in parallel and the final decision on whether the paper is ready to publish will rest with the ORC editors, but we will take into account the comments on BioStar and of course the authors will be free to use those comments to improve on their software and documentation.

 

This kind of approach goes a long way towards dealing with the criticisms I often level at conventional peer review processes. By making the process open there is the opportunity for any interested party to offer constructive critique and help to improve the code and the paper. By not restricting commentary to a small number of people we stand a better chance of getting all the appropriate points of view represented. And by (hopefully, we may have some niggling licence issues with copying content from BioStar’s CC-BY-SA to BioMedCentral’s CC-BY) presenting all of that commentary and critique along with the authors responses we can offer a clear view of how effective the review process was and what the final decisions were based on. I’ve talked about what we can do to improve peer review. Michael and Hazel have taken action to make it happen. You can be a part of it.


Free…as in the British Museum

Great Court - Quadrangle and Sydney Smirke's 1... (Image via Wikipedia)

Richard Stallman and Richard Grant, two people who I wouldn’t ever have expected to group together except based on their first name, have recently published articles that have made me think about what we mean when we talk about “Open” stuff. In many ways this is a return right to the beginning of this blog, which started with a post in which I tried to define my terms as I understood them at the time.

In Stallman's piece he argues that "open" as in "open source" is misleading because it sounds limiting. It makes it sound as though the only thing that matters is having access to the source code. He dismisses the various careful definitions of open as specialist pleading: definitions that only the few are aware of, and whose use will confuse most others. He is of course right; no matter how carefully we define open, it is such a commonly used word, and so open to interpretation itself, that there will always be ambiguity.

Many efforts have been made in various communities to find new and more precise terms, “gratis” and “libre”, “green” vs “gold”, but these never stick, largely because the word “open” captures the imagination in a way more precise terms do not, and largely because these terms capture the issues that divide us, rather than those that unite us.

So Stallman has a point but he then goes on to argue that “free” does not suffer from the same issues because it does capture an important aspect of Free Software. I can’t agree here because it seems clear to me we have exactly the same confusions. “Free as in beer”, “free as in free speech” capture exactly the same types of confusion, and indeed exactly the same kind of issues as all the various subdefinitions of open. But worse than that it implies these things are in fact free, that they don’t actually cost anything to produce.

In Richard Grant's post he argues against the idea that the Faculty of 1000, a site that provides expert assessment of research papers by a hand-picked group of academics, "should be open access". His argument is largely pragmatic: running the service costs money. That money needs to be recovered in some way or there would be no service. Now we can argue that there might be more efficient and cheaper ways of providing that service, but it is never going to be free. The production of the scholarly literature is likewise never going to be free. Archival, storage, people keeping the system running, just the electricity – these all cost money and that has to come from somewhere.

It may surprise overseas readers, but access to many British museums is free to anyone. The British Museum, National Portrait Gallery and others are all free to enter. That they are not "free" in terms of cost is obvious. This access is subsidised by the taxpayer. The original collection of the British Museum was in fact donated to the British people, but in taking that collection on the government was accepting a liability, one that continues to run into millions of pounds a year just to stop the collection from falling apart, let alone enhancing it, displaying it, or researching it.

The decision to make these museums openly accessible is in part ideological, but it can also be framed as a pragmatic decision. Given the enormous monetary investment there is a large value in subsidising free access to maximise the social benefits that universal access can provide. Charging for access would almost certainly increase income, or at least decrease costs, but there would be significant opportunity cost in terms of social return on investment by barring access.

Those of us who argue for Open Access to the scholarly literature, or for Open Data, Process, Materials or whatever, need to be careful that we don't pretend this comes free. We also need to educate ourselves more about the costs. Writing costs money, peer review costs money, and editing the formats, running the web servers, and providing archival services all cost money. And it costs money whether it is done by publishers operating subscription or author-pays business models, or by institutional or domain repositories. We can argue for Open Access approaches on economic efficiency grounds, and we can argue for them based on maximising social return on investment: essentially that for a small additional investment, over and above the very large existing investment in research, significant potential social benefits will arise.

Open Access scholarly literature is free like the British Museum or a national monument like the Lincoln Memorial is free. We should strive to bring costs down as far as we can. We should defend the added value of investing in providing free access to view and use content. But we should never pretend that those costs don’t exist.


The Panton Principles: Finding agreement on the public domain for published scientific data

Drafters of the Panton principles

I had the great pleasure and privilege of announcing the launch of the Panton Principles at the Science Commons Symposium – Pacific Northwest on Saturday. The launch of the Panton Principles, many months after they were first suggested, is really largely down to the work of Jonathan Gray. This was one of several projects that I haven't been able to follow through properly on, and I want to acknowledge the effort that Jonathan has put into making it happen. I thought it might be helpful to describe where they came from, what they are intended to do, and perhaps just as importantly what they are not intended to do.

The Panton Principles aim to articulate a view of what best practice should be with respect to data publication for science. They arose out of an ongoing conversation between myself, Peter Murray-Rust, and Rufus Pollock. Rufus founded the Open Knowledge Foundation, an organisation that seeks to promote and support open culture, open source, and open science, with the emphasis on the open. The OKF position on licences has always been that share-alike provisions are an acceptable limitation on complete freedom to re-use content. I have always taken the Science Commons position that share-alike provisions, particularly on data, have the potential to make it difficult or impossible to get multiple datasets or systems to interoperate. In another post I will explore this disagreement, which really amounts to a different perspective on the balance of the risks and consequences of theft versus things not being used or useful. Peter in turn is particularly concerned about the practicalities – really wanting a straightforward set of rules to be baked right into publication mechanisms.

The Principles came out of a discussion in the Panton Arms, a pub near the Chemistry Department of Cambridge University, after I had given a talk in the Unilever Centre for Molecular Informatics. We were having our usual argument, each trying to win the others over, when we actually turned to what we could agree on: what sort of statement could we make that would capture the best parts of both positions, with a focus on science and data? We focussed further by trying to draw out one specific issue. Not the issue of when people should share results, or the details of how, but the mechanisms that should be used for re-use. The principles are intended to focus on what happens when a decision has been made to publish data and where we assume that the wish is for that data to be effectively re-used.

Where we found agreement was that for science, and for scientific data, and particularly science funded by public investment, the public domain was the best approach and that we would all recommend it. We brought John Wilbanks in both to bring the views of Creative Commons and to help craft the words. It also made a good excuse to return to the pub. We couldn't agree on everything – we will never agree on everything – but the form of words chosen – that placing data explicitly, irrevocably, and legally in the public domain satisfies both the Open Knowledge Definition and the Science Commons Principles for Open Data – was something that we could all personally sign up to.

The end result is something that I have no doubt is imperfect. We have borrowed inspiration from the Budapest Declaration, but there are three B’s. Perhaps it will take three P’s to capture all the aspects that we need. I’m certainly up for some meetings in Pisa or Portland, Pittsburgh or Prague (less convinced about Perth but if it works for anyone else it would make my mother happy). For me it captures something that we agree on – a way forwards towards making the best possible practice a common and practical reality. It is something I can sign up to and I hope you will consider doing so as well.

Above all, it is a start.


Open Data, Open Source, Open Process: Open Research

There has been a lot of recent discussion about the relative importance of Open Source and Open Data (Friendfeed, Egon Willighagen, Ian Davis). I don’t fancy recapitulating the whole argument but following a discussion on Twitter with Glyn Moody this morning [1, 2, 3, 4, 5, 6, 7, 8] I think there is a way of looking at this with a slightly different perspective. But first a short digression.

I attended a workshop late last year on Open Science run by the Open Knowledge Foundation. I spent a significant part of the time arguing with Rufus Pollock about data licences, an argument that is still going on. One of Rufus' challenges to me was to commit to working towards using only Open Source software. His argument was that there weren't really any excuses any more. Open Office could do the job of MS Office, Python with SciPy was up to the same level as MatLab, and anything specialist needed to be written anyway, so should be open source from the off.

I took this to heart and I have tried, I really have tried. I needed a new computer and, although I got a Mac (not really ready for Linux yet), I loaded it up with Open Office, I haven't yet put my favourite data analysis package on the computer (Igor if you must know), and I have been working in Python to try to get some stuff up to speed. But I have to ask whether this is the best use of my time. As is often the case with my arguments, this is a return on investment question. I am paid by the taxpayer to do a job. At what point does the extra effort I am putting into learning to use, or in some cases fight with, new tools cost more than the benefit that is gained by making my outputs freely available?

Sometimes the problems are imposed from outside. I spent a good part of yesterday battling with an appalling, password-protected, macroed-to-the-eyeballs Excel document that was the required format for me to fill in a form for an application. The file crashed Open Office and only barely functioned in Mac Excel at all. Yet it was required, in that format, before I could complete the application. Sometimes the software is just not up to scratch. Open Office Writer is fine, but the presentation and spreadsheet modules are, to be honest, a bit ropey compared to the commercial competitors. And with a Mac I now have Keynote, which is just so vastly superior that I have now transferred wholesale to that. And sometimes it is just a question of time. Is it really worth me learning Python to do data analysis that I could knock out in Igor in a tenth of the time?

In this case the answer is probably yes, because it means I can do more with it. There is the potential to build something that logs the process the way I want it to, and the potential to convert it to run as a web service. I could do these things with other OSS projects as well, in a way that I can't with a closed product. And even better, because there is a big open community, I can ask for help when I run into problems.
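
As a small, purely illustrative example of the kind of analysis in question – a straight-line fit of the sort one might otherwise knock out in Igor or Excel – the open Python stack makes this a few lines. The data file and its layout here are assumptions, not a real dataset:

```python
# A minimal line fit using numpy and scipy; "measurements.csv" is assumed to
# hold two comma-separated columns (x, y) with a single header row.
import numpy as np
from scipy import stats

x, y = np.loadtxt("measurements.csv", delimiter=",", skiprows=1, unpack=True)

fit = stats.linregress(x, y)
print(f"slope     = {fit.slope:.4g} +/- {fit.stderr:.2g}")
print(f"intercept = {fit.intercept:.4g}")
print(f"r-squared = {fit.rvalue ** 2:.3f}")
```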

It is easy to lose sight of the fact that for most researchers software is a means to an end. For the Open Researcher what is important is the ability to reproduce results, to criticize and to examine. Ideally this would include every step of the process, including the software. But for most issues you don't need, or even want, to be replicating the work right down to the metal. You wouldn't, after all, expect a researcher to be forced to run their software on an open source computer, with an open source chipset. You aren't necessarily worried about what operating system they are running. What you are worried about is whether it is possible to read their data files and reproduce their analysis. If I take this just one step further, it doesn't matter if the analysis is done in MatLab or Excel, as long as the files are readable in Open Office and the analysis is described in sufficient detail that it can be reproduced or re-implemented.

Let's be clear about this: it would be better if the analysis were done in an OSS environment. If you have the option to work in an OSS environment you can also save yourself time and effort in describing the process, and others have a much better chance of identifying the sources of problems. It is not good enough to just generate an Excel file; you have to generate an Excel file that is readable by other software (and here I am looking at the increasing number of instrument manufacturers providing software that generates so-called Excel files that often aren't even readable in Excel). In many cases it might be easier to work with OSS so as to make it easier to generate an appropriate file. But there is another important point: if OSS generates a file type that is undocumented or, worse, obfuscated, then that is also unacceptable.
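
Often the path of least resistance is to skip the spreadsheet format entirely and write a plain, documented text file that anything can read. A minimal sketch, with invented column names and values:

```python
# Write results as a plain CSV that any spreadsheet or script can open,
# rather than a vendor-specific "Excel" file. Values are placeholders.
import csv

results = [
    {"sample": "A1", "slope": 0.913, "intercept": 0.02},
    {"sample": "A2", "slope": 0.877, "intercept": 0.05},
]

with open("fit_results.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["sample", "slope", "intercept"])
    writer.writeheader()       # the column names travel with the data
    writer.writerows(results)
```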

Open Data is crucial to Open Research. If we don’t have the data we have nothing to discuss. Open Process is crucial to Open Research. If we don’t understand how something has been produced, or we can’t reproduce it, then it is worthless. Open Source is not necessary, but, if it is done properly, it can come close to being sufficient to satisfy the other two requirements. However it can’t do that without Open Standards supporting it for documenting both file types and the software that uses them.

The point that came out of the conversation with Glyn Moody for me was that it may be more productive to focus on our ability to re-implement rather than to simply replicate. Re-implementability, while an awful word, is closer to what we mean by replication in the experimental world anyway. Open Source is probably the best way to do this in the long term, and in a perfect world the software and support would be there to make this possible, but until we get there, for many researchers, it is a better use of their time, and the taxpayer’s money that pays for that time, to do that line fitting in Excel. And the damage is minimal as long as source data and parameters for the fit are made public. If we push forward on all three fronts, Open Data, Open Process, and Open Source then I think we will get there eventually because it is a more effective way of doing research, but in the meantime, sometimes, in the bigger picture, I think a shortcut should be acceptable.