Articles tagged with: open-source
Over the past few weeks there has been a sudden increase in the amount of financial data on scholarly communications in the public domain. This was triggered in large part by the Wellcome Trust releasing data on the prices paid for Article Processing Charges by the institutions it funds. The release of this pretty messy dataset was followed by a substantial effort to clean that data up. This crowd-sourced data curation process has been described by Michelle Brook. Here I want to reflect on the tools that were available to …
“Open source” is not a verb
Nathan Yergler via John Wilbanks
I often return to the question of what “Open” means and why it matters. Indeed the very first blog post I wrote focussed on questions of definition. Sometimes I return to it because people disagree with my perspective. Sometimes because someone approaches similar questions in a new or interesting way. But mostly I return to it because of the constant struggle to get across the mindset that it encompasses.
Most recently I addressed the question of what “Open” is about in an online talk …
The software code that is written to support and manage research sits at a critical intersection of our developing practice of shared, reproducible, and re-usable research in the 21st century. Code is amongst the easiest things to usefully share, being made up of easily transferable bits and bytes while also, critically, carrying its context with it in a way that digital data doesn’t. Code at its best is highly reproducible, but how do we get from “at its best” to its best being common practice? How hard should we be pushing on standards?
What seems like an age ago a group of us discussed a different way of doing scientific research. One partly inspired by the modular building blocks approach of some of the best open source software projects, but also by a view that there were tremendous efficiency gains to be found in enabling specialisation of researchers, groups, even institutes, while encouraging a shared technical and social infrastructure that would help people identify the right partners for the very specific tasks that they needed doing today. The problem of course is that science funding is not configured that way, a problem that is the bane of any core-facility manager’s existence. Maintaining a permanent expert staff via a hand-to-mouth existence of short-term grants is tough. But the world is changing. A few weeks ago I got a query from a commercial partner interested in whether I could solve a specific problem. This is a small “virtual company” that aims to target the small-scale, but potentially high-value, innovations that larger players don’t have the flexibility to handle. Everything is outsourced, samples prepared and passed from contractor to contractor. This is the first real contact I’ve had with this kind of approach in the research space, but maybe these ideas are starting to take hold.
One of the things we want the Open Research Computation journal to do is bring more of the transparency and open critique that characterises the best Open Source Software development processes into the scholarly peer review process. But you can talk about changing the way peer review works, or you can actively do something about it. Michael Barton and Hazel Barton have taken matters into their own hands and thrown the doors completely open. They have submitted a paper to ORC and in parallel asked the community on the BioStar site how the paper and software could be improved.
Richard Stallman and Richard Grant, two people who I wouldn’t ever have expected to group together except on the basis of their first name, have recently published articles that have made me think about what we mean when we talk about “Open” stuff. Stallman argues that the word “open” is limiting and misleading. But I feel much the same way about “free”. Richard Grant’s piece probes the problems of making services open access, and makes precisely the point that they are not free. Clearly they are not, and pretending they are is a dangerous way to justify access and accessibility. For me, it is a question of how best to invest to maximise your return.
I had the great pleasure and privilege of announcing the launch of the Panton Principles at the Science Commons Symposium – Pacific Northwest on Saturday. The Panton Principles aim to articulate a view of what best practice should be with respect to data publication for science. Where we found agreement was that for science, and for scientific data, particularly science funded by public investment, the public domain was the best approach and the one we would all recommend.
There has been a lot of recent discussion about the relative importance of Open Source and Open Data (Friendfeed, Egon Willighagen, Ian Davis). I don’t fancy recapitulating the whole argument but following a discussion on Twitter with Glyn Moody this morning [1, 2, 3, 4, 5, 6, 7, 8] I think there is a way of looking at this with a slightly different perspective. But first a short digression.
I attended a workshop late last year on Open Science run by the Open Knowledge Foundation. I spent a significant part of …