A personal view of Open Science – Part IV – Policies and standards

This is the fourth and final part of the serialisation of a draft paper on Open Science. The other parts are here – Part I, Part II, Part III.

A question that needs to be asked when contemplating any major change in practice is the balance and timing of ‘bottom-up’ versus ‘top-down’ approaches for achieving that change. Scientists are notoriously unresponsive to decrees and policy initiatives, but, as has been discussed, they are also inherently conservative and generally resistant to change led from within the community. For those advocating the widespread, and ideally rapid, adoption of more open practice in science it will be important to strike the right balance between calling for mandates and conditions on funding or journal submission, and simply adopting these practices in their own work. While the motivation behind the adoption of data sharing policies by funders such as the UK research councils is to be applauded, such initiatives can be counterproductive if the policies are not supported by infrastructure development, appropriate funding, and appropriate enforcement. Equally, standards and policy statements can send a powerful message about the aspirations of funders to make the research they fund more widely available and, for the most part, when funders speak, scientists listen.

One Approach for Mainstream Adoption – The fully supported paper

There are two broad approaches to standards that are currently being discussed. The first of these is aimed at mainstream acceptance and uptake and can be described as ‘The fully supported paper’. This is a concept that is simple on the surface but very complex to implement in practice. In essence it is the idea that the claims made in a peer reviewed paper in the conventional literature should be fully supported by a publicly accessible record of all the background data, methodology, and data analysis procedures that contribute to those claims. On one level this is only a slight increase in requirements over the Brussels Declaration made by the International Association of Scientific, Technical, and Medical Publishers in 2007, which states:

Raw research data should be made freely available to all researchers. Publishers encourage the public posting of the raw data outputs of research. Sets or sub-sets of data that are submitted with a paper to a journal should wherever possible be made freely accessible to other scholars

http://www.stm-assoc.org/brussels-declaration/

The degree to which this declaration is supported by publishers, and the level to which different journals require their authors to adhere to it, is a matter for debate, but the principle of availability of background data has been accepted by a broad range of publishers. It is therefore reasonable to consider the possibility of making the public posting of data a requirement for submission. At a simple level this is already possible. For specific types of data, repositories already exist, and in many cases journals require submission of these data types to recognised repositories. More generally it is possible to host data sets in some institutional repositories, and with the expected announcement of a large scale data hosting service from Google the argument that this is not practicable is becoming unsustainable. While such datasets may have limited discoverability and limited metadata, they will at least be discoverable from the papers that reference them. It is reasonable to expect sufficient context to be provided in the published paper to make the data usable.

However the data itself, except in specific cases, is not enough to be useful to other researchers. The details of how that data was collected and how it was processed are critical if the claims made in a paper are to be properly judged. Once again we come to the problem of recording the process of research and then presenting it in a form which is detailed enough to be widely useful but not so dense as to be impenetrable. The technical challenges of delivering a fully supported paper are substantial. However it is difficult to argue that this shouldn’t be available. If claims made in the scientific literature cannot be fully verified, can they be regarded as scientific? Once again – while the target is challenging – it is simply a proposal to do good science, properly communicated.

Aspirational Standards – celebrating best practice in open science

While the fully supported paper would be a massive social and technical step forward, it is in many ways no more open than the current system. It does not deal with the problem of unpublished or unsuccessful studies that may never find a home in a traditional peer reviewed paper. As discussed above, the ‘fully supported paper’ is not really ‘open science’; it is just good science. What then are the requirements, or standards, for ‘open science’? Does there need to be a certificate, or a set of requirements that must be met, before a project, individual, or institution can claim they are doing Open Science? Or is Open Science simply too generic a term and too prone to misinterpretation?

I would argue that while ‘Open Science’ is a very generic term it has real value as a rallying point or banner. It is a term which generates a significant positive reaction amongst the general public, the mainstream media, and large sections of the research community. Its very vagueness also allows some flexibility, making it possible to welcome contributions from publishers, scientists, and funders which, while not 100% open, are nonetheless positive and helpful. Within this broad umbrella it is then possible to look at defining or recommending practices and standards and giving these specific labels for identification.

The main work in the area of defining relevant practices and standards has been carried out by Science Commons and the Open Knowledge Foundation. Science Commons have published four ‘Principles for Open Science‘ which focus on the availability and accessibility of published literature, research tools, and data, and the development of cyberinfrastructure to make this possible. These four principles do not currently include the availability of process explicitly, which has been covered in detail above, but they provide a clear set of criteria which could form the basis of standards. Broadly speaking, research projects, individuals, or institutions that deliver on these principles could be said to be doing Open Science. The Open Knowledge Definition, developed by the Open Knowledge Foundation, is another useful touchstone here. Another possible defining criterion for Open Science is that all the relevant material is made available under licenses that adhere to the definition.

The devil, naturally, lies in the details. Are embargoes on data and methodology appropriate, and if so, in what fields and how should they be constructed? For data that cannot be released, should specific exceptions be made, or special arrangements made to hold data in secure repositories? Where the same group is doing both open and commercial research, how should the divisions between these projects be defined and declared? These details are important, and will take time to work out. In the short term it is therefore probably more effective to identify and celebrate examples of open science, define best practice, and observe how it works (and does not work) in the real world. This will raise the profile of Open Science without immediately making it the exclusive preserve of those with the luxury of radically changing their practice. It enables examples of best practice to be held up as aspirational standards, providing goals for others to work towards, and the impetus for the tool and infrastructure development that will support them. Many government funders are starting to introduce data sharing mandates, generally with very weak wording, but in most cases these refer to the expectation that funded research will adhere to the standard of ‘best practice’ in the relevant field. At this stage of development it may be more productive to drive adoption through the strategic support of improving best practice in a wide range of fields than to attempt to define strict standards.

Summary

The community advocating more open practice in scientific research is growing in size and influence. The major progress made in the past 12-18 months by the Open Access movement, and the development of deposition and data sharing mandates by a range of research funders, show that real progress is being made in increasing access to both the finished products of research and the materials that support them. While there have been significant successes, this remains a delicate moment. There is a risk of over-enthusiasm driving expectations which cannot be met, and of alienating the mainstream community that we wish to draw in. The fears and concerns of researchers about widening access to their work need to be addressed sensitively and seriously, pointing out the benefits but also acknowledging the risks involved in adopting these practices.

It will not be enough to develop tools and infrastructure that, if adopted, would revolutionise science communication. Those tools must be built with an understanding of how scientists work today, and with the explicit aim of embedding them in existing workflows. The need for, and the benefits of, adopting controlled vocabularies need to be sold much more effectively to the mainstream scientific community. The ontologies community also needs to recognise that there are cases and areas where the use of strict controlled vocabularies is not appropriate. Web 2.0 and Semantic Web technologies are not competitors but complementary approaches that are appropriate in different contexts. Again, the right question to ask is ‘what do scientists do, and what can we do to make that work better?’, not ‘how can we make scientists see that they need to do things the right way?’

Finally, it is my belief that now is not the time to set out specific and strict standards of what qualifies as Open Science. It is the right time to discuss the details of what these standards might look like. It is the right time to look at examples of best practice, to celebrate these, and to see what can be learnt from them; but with our current lack of experience, and our lack of knowledge of what the unintended consequences of specific standards might be, it is too early to pin down the details of those standards. It is a good time to be clearly articulating the specific aspirations of the movement, and to provide goals that communities can aggregate around; the fully supported paper, the Science Commons principles, and the Open Knowledge Definition are all useful starting points. Open Science is gathering momentum, and that is a good thing. But equally it is a good time to take stock, identify the best course forward, and make sure that we are carrying as many people forward with us as we can.

A personal view of open science – Part III – Social issues

The third installment of the paper (first part, second part) where I discuss social issues around practicing more Open Science.

Scientists are inherently rather conservative in their adoption of new approaches and tools. A conservative approach has served the community well in the process of sifting ideas and claims; this approach is well summarised by the aphorism ‘extraordinary claims require extraordinary evidence’. New methodologies and tools often struggle to be accepted until the evidence of their superiority is overwhelming. It is therefore unreasonable to expect the rapid adoption of new web based tools, and even more unreasonable to expect scientists to change their overall approach to their research en masse. The experience of adoption of new Open Access journals is a good example of this.

Recent studies have shown that scientists are, in principle, in favour of publishing in Open Access journals yet show a marked reluctance to publish in such journals in practice [1]. The most obvious reason for this is the perceived cost. Because most operating Open Access publishers charge a publication fee, and until recently such charges were not allowable costs for many research funders, it can be challenging for researchers to obtain the necessary funds. Although most OA publishers will waive these charges, there is anecdotally a marked reluctance to ask for such a waiver. Other reasons for not submitting papers to OA journals include the perception that most OA journals are low impact, and a lack of OA journals in specific fields. Finally, simple inertia can be a factor where the traditional publication outlets for a specific field are well defined and publishing outside the set of ‘standard’ journals runs the risk of the work simply not being seen by peers. As there is no perceived reward for publishing in open access journals, and a perception of significant risk, uptake remains relatively small.

Making data available faces similar challenges, but here they are more profound. At least a paper published in an open access journal can still be counted as a paper. Because there is no culture of citing primary data, but rather of citing the papers in which they are reported, there is no reward for making data available. If careers are measured in papers published then making data available does not contribute to career development. Data availability to date has generally been driven by strong community norms, usually backed up by journal submission requirements. Again this links data publication to paper publication without necessarily encouraging the release of data that is not explicitly linked to a peer reviewed paper. The large scale DNA sequencing and astronomy facilities stand out as cases where data is automatically made available as it is taken [2]. In both cases this policy is driven largely by the funders, or facility providers, who are in a position to make release a condition of funding the data collection. This is not, however, a policy that has been adopted by other facilities such as synchrotrons, neutron sources, or high power photon sources.

In other fields, where data is more heterogeneous and particularly where competition to publish is fierce, the idea of data availability raises many fears. The primary one is of being ‘scooped’, or of data theft, where others publish a paper before the data collector has had the chance to fully analyse the data. This is again partly answered by robust data citation standards, but these do not prevent another group publishing an analysis more quickly, potentially damaging the career or graduation prospects of the data collector. A principle of ‘first right to publish’ is often suggested. Other approaches include timed embargoes for re-use or release. All of these have advantages and disadvantages which depend to a large extent on how well behaved the members of a specific field are. Another significant concern is that the release of substandard, non peer-reviewed, or simply inaccurate data into the public domain will lead to further problems of media hype and public misunderstanding. This must be balanced against the potential public good of having relevant research data available.

The community, or more accurately communities, are in general waiting for evidence of benefits before adopting either open access publication or open data policies. This actually provides the opportunity for individuals and groups to take a first-mover advantage. While it remains controversial [4, 5], there is some evidence that publication in open access journals leads to higher citation counts for papers [6, 7] and that papers for which the supporting data is available receive more citations [3]. This advantage is likely to be at its greatest early in the adoption curve and will clearly disappear if these approaches become widespread. There are therefore clear advantages to be had in rapidly adopting more open approaches to research, which can be balanced against the risks described above.

Measuring success in the application of open approaches, and particularly quantifying success relative to traditional approaches, is a challenge, as is demonstrated by the continuing controversy over the citation advantage of open access articles. However pointing to examples of success is relatively straightforward. In fact Open Science has a clear public relations advantage as the examples are out in the open for anyone to see. This exposure can be both good and bad, but it makes publicising best practice easy. In many ways the biggest successes of open practice are the ones that we miss because they are right in front of us: the freely accessible biological databases such as the Protein Data Bank, NCBI, and many others that have driven the massive advances in the biological sciences over the past 20 years. The ability to analyse and consider the implications of genome scale DNA sequence data, as it is being generated, is now a reality.

In the physical sciences, the arXiv has long stood as an example to other disciplines of how the research literature can be made available in an effective and rapid manner, and the availability of astronomical data from efforts such as the Sloan Digital Sky Survey makes efforts such as Galaxy Zoo, which combine public outreach with the crowdsourcing of data analysis, possible. There is likely to be a massive expansion in the availability of environmental and ecological data globally as the potential to combine millions of data gatherers holding mobile phones with sophisticated data aggregation and manipulation tools is realised.

Closer to the bleeding edge of radical sharing there have been fewer high profile successes, a reflection both of the limited amount of time these approaches have been pursued and of the limited financial and personnel resources that have been available. Nonetheless there are examples. Garrett Lisi’s high profile preprint on the arXiv, An exceptionally simple theory of everything [8], is supported by a comprehensive online notebook at http://deferentialgeometry.org that contains all the arguments as well as the background detail and definitions that support the paper. The announcement by Jean-Claude Bradley of the successful identification of several compounds with activity against malaria [9] is an example where the whole research process was carried out in the open, from the decision on what the research target should be, through the design and in silico testing of a library of chemicals, to the synthesis and testing of those compounds. For every step of this process the data is available online, and several of the collaborators that made the study possible made contact after finding that material online. The potential for a coordinated global synthesis and screening effort is currently being investigated.

There are both benefits and risks associated with open practice in research. Often the discussion with researchers is focussed on the disadvantages and risks. In an inherently conservative pursuit it is perfectly valid to ask whether changes of this type and magnitude offer benefits that justify the potential risks they pose. These are not concerns that should be dismissed or ridiculed, but ones that should be taken seriously and considered. Radical change never comes without casualties, and while some concerns may be misplaced, or overblown, there are many that have real potential consequences. In a competitive field people will necessarily make diverse decisions on the best way forward for them. What is important is providing them with the best information possible to help them balance the risks and benefits of whichever approach they choose to take.

The fourth and final part of this paper can be found here.

  1. Warlick S E, Vaughan K T. Factors influencing publication choice: why faculty choose open access. Biomedical Digital Libraries. 2007;4:1-12.
  2. Bentley D R. Genomic Sequence Information Should Be Released Immediately and Freely. Science. 1996;274(October):533-534.
  3. Piwowar H A, Day R S, Fridsma D B. Sharing Detailed Research Data Is Associated with Increased Citation Rate. PLoS ONE. 2007;1(3):e308.
  4. Davis P M, Lewenstein B V, Simon D H, Booth J G, Connolly M J. Open access publishing, article downloads, and citations: randomised controlled trial. BMJ. 2008;337(October):a568.
  5. Rapid responses to Davis et al., http://www.bmj.com/cgi/eletters/337/jul31_1/a568
  6. Eysenbach G. Citation Advantage of Open Access Articles. PLoS Biology. 2006;4(5):e157.
  7. Hajjem C, Harnad S, Gingras Y. Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact. IEEE Data Engineering Bulletin. 2005;28(4):39-47. http://eprints.ecs.soton.ac.uk/12906/
  8. Lisi G, An exceptionally simple theory of everything, arXiv:0711.0770v1 [hep-th], November 2007.
  9. Bradley J C, We have antimalarial activity!, UsefulChem Blog, http://usefulchem.blogspot.com/2008/01/we-have-anti-malarial-activity.html, January 25 2008.

Where does Open Access stop and ‘just doing good science’ begin?

I had been getting puzzled for a while as to why I was being characterised as an ‘Open Access’ advocate. I mean, I do advocate Open Access publication and I have opinions on the Green versus Gold debate. I am trying to get more of my publications into Open Access journals. But I’m no expert, and I’ve certainly been around this community for a much shorter time and know a lot less about the detail than many other people. The giants of the Open Access movement have been fighting the good fight for many years. Really I’m just a latecomer cheering from the sidelines.

This came to a head recently when I was being interviewed for a piece on Open Access. We kept coming round to the question of what it was that motivated me to be ‘such a strong’ advocate of open access publication. Surely I must have a very strong motivation to hold such strong views? And I found myself thinking that I didn’t. I wasn’t that motivated about open access per se. It took some thinking, and going back over my own path, to realise that this was because of where I was coming from.

I guess most people come to the Open Science movement firstly through an interest in Open Access. First comes the frustration of not being able to access papers, followed by the realisation that for many other scientists it must be much worse. Often this is followed by the sense that even when you’ve got the papers they don’t have the information you want or need; that it would be better if they were more complete, with the data or software tools available and the methodology online. There is a logical progression from ‘better access to the literature helps’ to ‘access to all the information would be so much better’.

I came at the whole thing from a different angle. My Damascus moment came when I realised the potential power of making everything available; the lab book, the data, the tools, the materials, and the ideas. Once you connect the idea of the read-write web to science communication, it is clear that the underlying platform has to be open, accessible, and re-usable to get the benefits. Science is perhaps the ultimate open platform available to build on. From this perspective it is immediately self-evident that the current publishing paradigm, and subscription access publication in particular, is broken. But it is just one part of the puzzle, one of the barriers to communication that need to be attacked, broken down, and re-built. It is difficult, for these reasons, for me to separate out a bit of my motivation that relates just to Open Access.

Indeed in some respects Open Access, at least in the form in which it is funded by author charges, can be a hindrance to effective science communication. Many of the people I would like to see more involved in the general scientific community, who would be empowered by more effective communication, cannot afford author charges. Indeed many of my colleagues in what appear to be well funded western institutions can’t afford them either. Sure, you can ask for a fee waiver, but no-one likes to ask for charity.

But I think papers are important. Some people believe that the scientific paper as it exists today is inevitably doomed. I disagree. I think it has an important place as a static document, a marker of what a particular group thought at a particular time, based on the evidence they had assembled. If we accept that the paper has a place, then we need to ask how it is funded, particularly the costs of peer and editorial review and the costs of maintaining that record into the future. If you believe, as I do, that in an ideal world this communication would be immediately available to all, then there are relatively few viable business models available. What has been exciting about the past few months, and indeed the past week, has been the evidence that these business models are starting to work through and make sense. The purchase of BioMed Central by Springer may raise concerns for the future, but it also demonstrates that a publishing behemoth has faith in the future of OA as a publishing business model.

For me, this means that in many ways the discussion has moved on. Open Access, and Open Access publication in particular, has proved its viability. The challenges now lie in widening the argument to include data, to include materials, to include process; to develop the tools that will allow us to capture all of this in a meaningful way and to make sense of other people’s records. None of which should in any way belittle the achievement of those who have brought the Open Access movement to its current point. Immense amounts of blood, sweat, and tears from thousands of people have brought what was once a fringe movement to the centre of the debate on science communication. The establishment of viable publishers and repositories for pre-prints, the bringing of funders and governments to the table with mandates, and the placing of the option of OA publication at the fore of people’s minds are huge achievements, especially given the relatively short time it has taken. The debate on value for money, on quality of communication, and on business models and the best practical approaches will continue, but the debate about the value of, indeed the need for, Open Access has essentially been won.

And this is at the core of what Open Access means for me. The debate has placed, or perhaps re-placed, right at the centre of the discussion of how we should do science, the importance of the quality of communication. It has re-stated the principle of placing the claims that you make, and the evidence that supports them, in the open for criticism by anyone with the expertise to judge, regardless of where they are based or who is funding them. And it has made crystal clear where the deficiencies in that communication process lie, and exposed the creeping tendency of publication over the past few decades to become more an exercise in point scoring than in communication. There remains much work to be done across a wide range of areas, but the fact that we can now look at taking those challenges on is due in no small part to the work of those who have advocated Open Access from its difficult beginnings to today’s success. Open Access Day is a great achievement in its own right and it should be a celebration of the efforts of all those people who have contributed to making it possible, as well as an opportunity to build for the future.

High quality communication, as I and others have said, and will continue to say, is Just Good Science. The success of Open Access has shown how one aspect of that communication process can be radically improved. The message to me is a simple one. Without open communication you simply can’t do the best science. Open Access to the published literature is simply one necessary condition of doing the best possible science.

A use case scenario for Mark…a description of the first experiment on the ISIS LaBLog

Two rather exciting things are happening at the moment. Firstly we have finally got the LaBLog system up and running at RAL (http://biolab.isis.rl.ac.uk). Not a lot is happening there yet but we are gradually working up to a full Open Notebook status, starting by introducing people to the system bit by bit. My first experiment went up there late last week. It isn’t finished yet, but I’d better get some of the data analysis done as rpg, if no-one else, is interested in the results.

The other area of development is that back down in Southampton, Blog MkIII is being specced out and design is going forward. This is being worked on now by both Mark Borkum and Andrew Milsted. Last time I was down in Southampton Mark asked me for some use cases – so I thought I might use the experiment I’ve just recorded to try and explain both the good and bad points of the current system, and also my continuing belief that anything but a very simple data model is likely to be fatally flawed when recording an experiment. This will also hopefully mark the beginning of more actual science content on this blog as I start to describe some of what we are doing and why. As we get more of the record of what we are doing onto the web we will be trying to generate a useful resource for people looking to use our kind of facilities.

So, very briefly, the point of the experiment we started last week is to look at the use of GFP as a concentration and scattering standard in Small Angle Scattering. Small angle x-ray and neutron scattering provide an effective way of determining low resolution (say 5-10 Å) structures of proteins in solution. However they suffer from serious potential artefacts that must be rigorously excluded before the data analysis can be trusted. One of the most crucial of these is aggregation, whether random conversion of protein into visible crud, or specific protein-protein interactions. Either of these, along with poor background subtraction or any one of a number of other problems, can very easily render data and the analysis that depends on it meaningless.

So what to do? Well one approach is to use a very well characterised standard for which concentration, size, and shape are well established. There are plenty of proteins that are well behaved, pretty cheap, and for which the structure is known. However, as any biophysicist will tell you, measuring protein concentration accurately and precisely is tough; colorimetric assays are next to useless and measuring the UV absorbance of aromatic residues is pretty insensitive, prone to interference with other biological molecules (particularly DNA), and a lot harder to do right than most people think.

Our approach is to look at whether GFP is a good potential standard (specifically an eGFP engineered to prevent the tetramerisation that is common with the natural proteins). It has a strong absorption at 490 nm, well clear of most other biological molecules, it is dead easy to produce in large quantities (in our hands; I know other people have had trouble with this but we routinely pump out hundreds of milligrams and currently have a little over one gramme in the freezer), is stable in solution at high concentrations, and freeze-dries nicely. Sounds great! In principle we can do our scattering, then take the same sample cells, put them directly in a spectrophotometer, and measure the concentration. Last week was about doing some initial tests on a lab SAXS instrument to see whether the concept held up.
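To make that last step concrete, here is a rough sketch of the Beer-Lambert arithmetic involved; the extinction coefficient and molecular weight are approximate literature values for eGFP, and the path length and absorbance are made-up numbers for illustration, not measurements from this experiment.

```python
# Illustrative numbers only: how an absorbance reading at 490 nm would be
# converted to a concentration via the Beer-Lambert law, c = A / (eps * l).
# The extinction coefficient and molecular weight are approximate literature
# values for eGFP; the path length and absorbance are made up for the example.
EPSILON_490 = 56000.0    # M^-1 cm^-1, approximate molar extinction coefficient
MW_EGFP = 27000.0        # g/mol, approximate molecular weight
PATH_LENGTH = 0.1        # cm, assumed cell path length

absorbance = 0.85                                   # hypothetical reading
molar = absorbance / (EPSILON_490 * PATH_LENGTH)    # concentration in mol/L
mg_per_ml = molar * MW_EGFP                         # 1 g/L == 1 mg/mL
print(f"{mg_per_ml:.2f} mg/mL")                     # ~4.1 mg/mL for these numbers
```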

So – to our use case.

Maria, a student from Southampton, met me in Bath holding samples of GFP made up to 1, 2, 5, and 10 mg/mL in buffer. I quizzed Maria as to exactly how the samples had been made up and then recorded that in the LaBLog (post here). I then created the four posts representing each of the samples (1, 2, 3, 4). I also created a template for doing SAXS, and then, using that template I started filling in the first couple of planned samples (but I didn’t actually submit the post until sometime later).

At this point, as the buffer background was running, I realised that the 10mg/mL sample actually had visible aggregate in it. As the 5 mg/mL sample didn’t have any aggregate we changed the planned order of SAXS samples, starting with the 5 mg/mL sample. At the same time, we centrifuged the 10 mg/mL sample, which appeared to work quite nicely, generating a new, cleared 10 mg/mL sample, and prepared fresh 5 mg/mL and fresh 2 mg/mL samples.

Due to a lack of confidence in how we had got the image plate into its reader we actually ended up running the original 5 mg/mL sample three times. The second time we really did muck up the transfer but comparisons of the first and third time made us confident the first one was ok. At this point we were late for lunch and decided we would put the lowest concentration (1 mg/mL) sample on for an hour and grab something to eat. Note that by this time we had changed the expected order of samples about three or four times but none of this is actually recorded because I didn’t actually commit the record of data collection until the end of the day.

By this stage the running of samples was humming along quite nicely. It was time to deal with the data. The raw data comes off the instrument in the form of an image. I haven’t actually got these off the original computer as yet because they are rather large. However they are then immediately processed into relatively small two column data. It seems clear that each data file requires its own identity, so those were all created (using another template). Currently several of these do not even have the two column text data: the big tiff files broke the system on upload, and I got fed up with uploading the reduced data by hand into each file.

As a result of running out of time, and the patience to upload multiple files, the description of the data reduction is a bit terse. Although there are links to all the data, most of you will get a 404 if you try to follow them, so I need to bring all of that back down and put it into the LaBLog proper where it is accessible. If you look closely here, you will see I made a mistake with some of the data analysis that needs fixing. I’m not sure I can be bothered systematically uploading all the incorrect files. If the system were acting naturally as a file repository and I was acting directly on those files, then it would be part of the main workflow that everything would be made available automatically. The problem here was that I was forced by the instrument software to do the analysis on a specific computer (that wasn’t networked), and that our LaBLog system has no means of multiple file upload.

So to summarise the use case.

  1. Maria created four samples
  2. Original plan created to run the four samples plus backgrounds
  3. Realised 10 mg/mL sample was aggregating and centrifuged to clear it (new unexpected procedure)
  4. Ran one of the pre-made samples three times, first time wasn’t confident, second time was a failure, third time confirmed first time was ok
  5. Cleared 10 mg/mL sample treated as a new, unplanned sample
  6. Prepared two new samples (5 mg/mL and 2 mg/mL) from the cleared 10 mg/mL sample
  7. Re-worked plan for running samples based on time available
  8. Ran 1 mg/mL sample for a different amount of time than previous samples
  9. Ran remaining samples for various amounts of time
  10. Data was collected from each sample after it was run and converted to a two column text format
  11. Two column data was rebinned and background subtracted (this is where I went wrong with some of them, forgetting that in some cases I had two lots of electronic background; see the sketch after this list)
  12. Subtracted data was rebinned again and then desmeared (the instrument has a slit geometry rather than a pinhole) to generate a new two column data file.
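To give a flavour of what steps 10-12 involve (this is not the instrument’s proprietary software, just a minimal sketch on made-up file names), the rebinning and background subtraction stage on two-column (q, I) data might look something like this:

```python
# Minimal sketch only (not the instrument's proprietary software): load
# two-column (q, I) text data, rebin onto a coarser grid, and subtract a
# buffer background. File names and the fixed-width rebinning scheme are
# illustrative assumptions.
import numpy as np

def rebin(q, i, n_merge=4):
    """Average every n_merge consecutive points of a (q, I) curve."""
    n = (len(q) // n_merge) * n_merge            # drop any ragged tail
    q_binned = q[:n].reshape(-1, n_merge).mean(axis=1)
    i_binned = i[:n].reshape(-1, n_merge).mean(axis=1)
    return q_binned, i_binned

def subtract_background(q_s, i_s, q_b, i_b):
    """Interpolate the background onto the sample q grid and subtract."""
    return q_s, i_s - np.interp(q_s, q_b, i_b)

# Hypothetical file names for one sample and its buffer background
q_s, i_s = np.loadtxt("gfp_5mgml.dat", unpack=True)
q_b, i_b = np.loadtxt("buffer_background.dat", unpack=True)

q_s, i_s = rebin(q_s, i_s)
q_b, i_b = rebin(q_b, i_b)
q_sub, i_sub = subtract_background(q_s, i_s, q_b, i_b)

np.savetxt("gfp_5mgml_subtracted.dat", np.column_stack([q_sub, i_sub]))
```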

So, four original samples, and three unexpected ones were created. One set of data collection led to nine raw data files which were then recombined in a range of different ways depending on collection times. Ultimately this generates four finalised reduced datasets, plus a number of files along the way. Two people were involved. And all of this was done under reasonable time pressure. If you look at the commit times on the posts you will realise that a lot of these were written (or at least submitted) rather late in the day, particularly the data analysis. This is because the data analysis was offline, out of the notebook in proprietary software. Not a lot that can be done about this. The other things that were late were the posts associated with the ‘raw’ datafiles. In both cases a major help would be a ‘directory watcher’ that automatically uploads files and queues them up somewhere so they are available to link to.
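As an illustration of what such a ‘directory watcher’ might look like, here is a minimal polling sketch; the directory name, polling interval, and the post_to_lablog() hook are hypothetical placeholders, since the current system has no upload API of this kind.

```python
# Minimal polling sketch of a 'directory watcher'. The watched directory,
# polling interval, and post_to_lablog() hook are hypothetical placeholders;
# a real version would upload each file and keep the returned URL so that
# notebook posts could link to it.
import time
from pathlib import Path

WATCH_DIR = Path("instrument_output")   # assumed location of new data files
POLL_SECONDS = 30

def post_to_lablog(path: Path) -> None:
    # Placeholder for the actual upload/queueing step.
    print(f"queued for upload: {path}")

def watch(directory: Path) -> None:
    """Poll a directory and hand any newly appearing file to the upload hook."""
    seen = {p.name for p in directory.iterdir()}
    while True:
        time.sleep(POLL_SECONDS)
        for p in sorted(directory.iterdir()):
            if p.is_file() and p.name not in seen:
                seen.add(p.name)
                post_to_lablog(p)

if __name__ == "__main__":
    watch(WATCH_DIR)
```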

This was not an overly complicated or unusual experiment but one that illustrates the pretty common  changes of direction mid-stream and reassessments of priorities as we went. What it does demonstrate is the essential messiness of the process. There is no single workflow that traces through the experiment that can be applied across the whole experiment, either in the practical or the data analysis parts. There is no straightforward parallel process applied to a single set of samples but multiple, related samples, that require slightly different tacks to be taken with data analysis.  What there are, are objects that have relationships. The critical thing in any laboratory recording system is making the recording of both the objects, and the relationships between them, as simple and as natural as possible. Anything else and the record simply won’t get made.

A personal view of Open Science – Part II – Tools

The second installment of the paper (first part here) where I discuss building tools for Open (or indeed any) Science.

Tools for open science – building around the needs of scientists

It is the rapid expansion and development of tools that are loosely categorised under the banner of ‘Web 2.0’ or the ‘read-write web’ that makes the sharing of research material possible. Many of the generic tools, particularly those that provide general document authoring capabilities, have been adopted and used by a wide range of researchers. Online office tools can enable the collaborative development of papers and proposals without the need to email documents to multiple recipients and the resultant headaches over which version is which. Storing spreadsheets, databases, or data online means that collaborators have easy access to the most recent versions and can see how these are changing. More generally, the use of RSS feed readers and bookmarking sites to share papers of interest and, to some extent, to distribute the task of triaging the literature is catching on in some communities. Microblogging platforms such as Twitter, and aggregation and conversational tools such as FriendFeed, have recently been used very effectively to provide coverage of conferences in progress, including collaborative note-taking. In combination with streamed or recorded video, screencasts, and the sharing of presentations online, the idea of a distributed conference, while not an everyday reality, is becoming feasible.

However it is often the case that, while useful, generic web based services do not provide the desired functionality or do not fit well into the existing workflows of researchers. Here there is the opportunity, and sometimes the necessity, to build specialised or adapted tools. Collaborative preparation of papers is a good example of this. Conventional web bookmarking services, such as del.icio.us, provide a great way of sharing the literature or resources that a paper builds on with other authors, but they do not automatically capture and recognise the metadata associated with published papers (journal, date, author, volume, page numbers). Specialised services such as CiteULike and Connotea have been developed to enable one-click bookmarking of published literature, and these have been used effectively, for example by using a specific tag for references associated with a specific paper in progress. The problem with these services as they exist at the moment is that they do not provide the crucial element in the workflow for which scientists want to aggregate the references in the first place: the formatting of the references in the finalised paper. Indeed the lack of formatting functionality in GoogleDocs, the most widely used collaborative writing tool, means that in practice the finalised document is usually cut and pasted into Word and the references formatted using proprietary software such as EndNote. The available tools do not provide the required functionality.

A number of groups and organisations have investigated the use of blogs and wikis as collaborative and shareable laboratory notebooks. However few of these systems offer good functionality ‘out of the box’. While there are many electronic laboratory notebook systems sold commercially, most are actually designed around securing data rather than sharing it, so they are not of interest here. While the group of Jean-Claude Bradley has used the freely hosted Wikispaces as a laboratory notebook without further modification, much of the data and analysis is hosted on other services, including YouTube, Flickr, and GoogleDocs. The OpenWetWare group has made extensive modifications to the MediaWiki system to provide laboratory notebook functionality, whereas Garrett Lisi has adapted the TiddlyWiki framework as a way of presenting his notebook. The Chemtools collaboration at the University of Southampton has developed a specialised blog platform. Commercial offerings in the area of web based lab notebooks are also starting to appear. All of these different systems have developed because of the specialised needs of recording the laboratory work of the scientists they were designed for. The different systems make different assumptions about where they fit in the workflow of the research scientist, and what that workflow looks like. They are all, however, built around the idea that they need to satisfy the needs of the user.

This creates a tension in tool building. General tools, that can be used across a range of disciplines, are extremely challenging to design because workflows, and the perception of how they work, differ between disciplines. Specialist tools can be built for specific fields but often struggle to translate into new areas. Because the market in any one field is small, the natural desire of designers is to make tools as general as possible. However, in the process of trying to build for a sufficiently general workflow, applicability to specific workflows is often lost. There is a strong argument based on this for building interoperable modules, rather than complete systems, that will allow domain specialists to stitch together specific solutions for specific fields or even specific experiments. Interoperability of systems, and the standards that enable it, is a criterion that is sometimes lost in the development process, but it is absolutely essential to making tools and processes shareable. Workflow management tools such as Taverna, Kepler, and VisTrails have an important role to play here.

While not yet at a stage where they are widely configurable by end users, the vision behind them has the potential both to make data analysis much more straightforward for experimental scientists and to solve many of the problems involved in sharing process, as opposed to data. The idea of visually wiring up online or local analysis tools to build data processing pipelines is compelling. The reason most experimental scientists use spreadsheets for data analysis is that they do not wish to learn programming languages. Providing visual programming tools, along with services with clearly defined inputs and outputs, will make it possible for a much wider range of scientists to use more sophisticated and powerful analysis tools. What is more, the ability to share, version, and attribute workflows will go some significant distance towards solving the problem of sharing process. Services like MyExperiment, which provide an environment for sharing and versioning Taverna workflows, offer a natural way of sharing the details of exactly how a specific analysis is carried out. Along with an electronic notebook to record each specific use of a given workflow or analysis procedure (which can be achieved automatically through an API), the full details of the raw data, analysis procedure, and any specific parameters used can be recorded. This combination offers a potential route out of the serious problem of sharing research processes, if the appropriate support infrastructure can be built up.
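As a toy sketch of that last idea (not the actual API of MyExperiment, Taverna, or any particular notebook system), each use of an analysis procedure could be recorded automatically by wrapping it so that the procedure name, parameters, and input and output file hashes are appended to a local provenance log:

```python
# A toy sketch (assumed names throughout, not a real notebook or MyExperiment
# API): wrap an analysis step so that every time it runs, the procedure name,
# parameters, and input/output file hashes are appended to a local JSON-lines
# provenance log that a notebook system could later ingest.
import functools
import hashlib
import json
import time
from pathlib import Path

LOG = Path("analysis_provenance.jsonl")

def file_hash(path):
    """SHA-256 of a file, so the exact data used can be identified later."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def recorded(func):
    """Decorator that logs one provenance record per call of the wrapped step."""
    @functools.wraps(func)
    def wrapper(input_path, output_path, **params):
        result = func(input_path, output_path, **params)
        record = {
            "procedure": func.__name__,
            "parameters": params,
            "input": {"path": str(input_path), "sha256": file_hash(input_path)},
            "output": {"path": str(output_path), "sha256": file_hash(output_path)},
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }
        with LOG.open("a") as fh:
            fh.write(json.dumps(record) + "\n")
        return result
    return wrapper

@recorded
def passthrough(input_path, output_path):
    # Trivial stand-in for a real analysis step (e.g. rebinning or subtraction).
    Path(output_path).write_bytes(Path(input_path).read_bytes())
```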

Also critical to successful sharing is a shared language or vocabulary. The development of ontologies, controlled vocabularies, and design standards is important in sharing knowledge and crucial to achieving the ultimate goal of making this knowledge machine readable. While there are divisions in the technical development and user communities over the development and use of controlled vocabularies, there is little disagreement over the fact that good vocabularies combined with good tools are useful. The disagreements tend to lie in how they are best developed, when they should be applied, and whether they are superior or complementary to other approaches such as text mining and social tagging. An integrated and mixed approach to the use of controlled vocabularies and standards is the most likely to be successful. In particular it is important to match the degree of structure in the description to the natural degree of structure in the object or objects being described. Highly structured and consistent data types, such as crystal structures and DNA sequences, can benefit greatly from highly structured descriptions, which are relatively straightforward to create and in many cases are the standard outputs of an analysis process. For large scale experimental efforts the scale of the data and sample management problem makes an investment in detailed and structured descriptions worthwhile. In a small laboratory doing unique work, however, there may be a strong case for using local descriptions and vocabularies that are less rigorous but easier to apply and able to grow to fit the changing situation on the ground, ideally designed in such a way that mapping onto an external vocabulary is feasible if it is required or useful in the future.
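As a purely illustrative example of that ‘lightweight but mappable’ approach, a local description might use whatever fields a laboratory finds natural, with identifiers from an external vocabulary attached only where a useful term exists (the URI below is a placeholder, not a real vocabulary term):

```python
# A purely illustrative sketch: a loose, locally defined sample description,
# with an optional mapping onto an external vocabulary added only where a
# suitable term is known. The URI is a placeholder, not a real vocabulary.
local_description = {
    "sample": "eGFP, 5 mg/mL in buffer",
    "technique": "small angle x-ray scattering",
    "cell": "quartz capillary",
}

vocabulary_mapping = {
    "technique": "http://example.org/vocab/SAXS",  # placeholder identifier
}

def as_structured(description, mapping):
    """Emit field/value pairs, attaching an external identifier where mapped."""
    return [
        {"field": key, "value": value, "uri": mapping.get(key)}
        for key, value in description.items()
    ]

print(as_structured(local_description, vocabulary_mapping))
```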

Making all of this work requires that researchers adopt these tools and that a community develops that is big enough to provide the added value these tools might deliver. For a broad enough community to adopt these approaches, the tools must fit well into their existing workflows and help to deliver the things that researchers are already motivated to produce. For most researchers, published papers are the measure of their career success and the basis of their reward structures. Therefore tools that make it easier to write papers, or that help researchers to write better papers, are likely to get traction. As expectations of the quality and completeness of supporting data for published papers increase, tools that make it easier for the researcher to collate and curate the record of their research will become important. It is the process of linking the record of what happened in the laboratory, or study, to the first-pass interpretation and analysis of data, through further rounds of analysis, until a completed version is submitted for review, that is currently poorly supported by available tools, and it is this need that will drive the development of improved tools. These tools will enable the disparate elements of the record of research, currently scattered between paper notebooks, various data files on multiple hard drives, and unconnected electronic documents, to be chained together. Once this record is primarily electronic, and probably stored online in a web based system, the choice to make the record public at any stage, from the moment the record is made to the point of publication, will be available. The reason to link this to publication is to tie it into an existing workflow in the first instance. Once the idea is embedded, the steps involved in making the record even more open are easily taken.

Part III covers social issues around Open Science.

The Southampton Open Science Workshop – a brief report

On Monday 1 September we had a one day workshop in Southampton discussing the issues that surround ‘Open Science’. This was very free form and informal and I had the explicit aim of getting a range of people with different perspectives into the room to discuss a wide range of issues, including tool development, the social and career structure issues, as well as ideas about standards and finally, what concrete actions could actually be taken. You can find live blogging and other commentary in the associated Friendfeed room and information on who attended as well as links to many of the presentations on the conference wiki.

Broadly speaking the day was divided into three chunks, the first was focussed on tools and services and included presentations on MyExperiment, Mendeley, Chemtools, and Inkspot Science. Branwen Hide of Research Information Network has written more on this part. Given that the room contained more than the usual suspects the conversation focussed on usability and interfaces rather than technical aspects although there was a fair bit of that as well.

The second portion of the day revolved more around social challenges and issues. Richard Grant presented his experience of blogging on an official university sanctioned site and the value of that for both outreach and education. One point he made was that the ‘lack of adoption problem’ seen in science just doesn’t seem to exist in the humanities. Perhaps this is because scientists don’t generally see ‘writing’ as a valuable thing in its own right. Certainly there is a preponderance of scientists who happen also to see themselves as writers on Nature Network.

Jennifer Rohn followed on from Richard, and objected to my characterising her presentation as “the skeptic’s view”. A more accurate characterisation would have been “I’d love to be open but at the moment I can’t: this is what has to change to make it work”. She presented a great summary of the problem, particularly from the biological scientist’s point of view, as well as potential solutions. Essentially the problem is that of the ‘Minimum Publishable Unit’ or research quantum, as well as what ‘counts’ as publication. Her main point was that for people to be prepared to publish material that falls short of a full paper they need to get some proportional credit for it. This folds closely into the discussion of what can be cited and what should be cited in particular contexts. I have used the phrase ‘data sized peg into a paper shaped hole’ to describe this in the past.

After lunch Liz Lyon from UKOLN talked about curation and long term archival storage, which led into an interesting discussion about the archiving of blogs and other material. Is it worth keeping? One answer to this was to look at the real interest today in diaries from the Second World War and earlier written by ‘normal people’. You don’t necessarily need to be a great scientist, or even a great blogger, for the material to be of potential interest to historians in 50-100 years’ time. But doing this properly is hard – in the same way that maintaining and indexing data is hard. Disparate sites, file formats, places of storage, and in the end whose blog is it actually? Particularly if you are blogging for, or recording work done at, a research institution.

The final session was about standards or ‘brands’. Yaroslav Nikolaev talked about semantic representations of experiments. While important, it was probably a shame that we did this at the end of the day, because it would have been helpful to get more of the non-techie people into that discussion to iron out both the communication issues around the semantic web and the description of its real potential benefits. This remains a serious gap – the experimental scientists who could really use semantic tools don’t really get the point, and the people developing the tools don’t communicate well what the benefits are, or in some cases (not all, I hasten to add!) don’t actually build the tools the experimentalists want.

I talked about the possibility of a ‘certificate’ or standard for Open Science, and the idea of an organisation to police this. It would be safe to say that, while people agreed that clear definitions would be helpful, the enthusiasm level for a standards organisation was pretty much zero. There are more fundamental issues of actually building up enough examples of good practice, and working towards identifying best practice in open science, that need to be dealt with before we can really talk about standards.

On the other hand the idea of ‘the fully supported’ paper got immediate and enthusiastic support. The idea here is deceptively simple, and has been discussed elsewhere; simply that all the relevant supporting information for a paper (data, detailed methodology, software tools, parameters, database versions etc. as well as access to required materials at reasonable cost) should be available for any published paper. The challenge here lies in actually recording experiments in such a way that this information can be provided. But if all of the record is available in this form then it can be made available whenever the researcher chooses. Thus by providing the tools that enable the fully supported paper you are also providing tools that enable open science.

Finally we discussed what we could actually do: Jean-Claude Bradley discussed the idea of an Open Notebook Science challenge to raise the profile of ONS (this is now set up – more on this to follow). Essentially a competition type approach where individuals or groups can contribute to a larger scientific problem by collecting data, with the teams judged on how well they describe what they have done and how quickly they make it available.

The most specific action proposed was to draft a ‘Letter to Nature’ proposing the idea of the fully supported paper as a submission standard. The idea would be to get a large number of high profile signatories on a document which describes a concrete step-by-step plan to work towards the final goal, and to send that as correspondence to a high profile journal. I have been having some discussions about how to frame such a document and hope to get a draft up for discussion reasonably soon.

Overall there was much enthusiasm for things Open and a sense that many elements of the puzzle are falling into place. What is missing is effective coordinated action, communication across the whole community of interested and sympathetic scientists, and critically the high profile success stories that will start to shift opinion. These ought, in my opinion, to be the targets for the next 6-12 months.

A personal view of Open Science – Part I

For the Open Science workshop at the Pacific Symposium on Biocomputing I wrote a very long essay as an introductory paper. It turned out that this was far too long for the space available so an extremely shortened version was submitted for the symposium proceedings. I thought I would post the full length essay in installments here as a prelude to cleaning it up and submitting to an appropriate journal.

Introduction

Openness is arguably the great strength of the scientific method. At its core is the principle that claims and the data that support them are placed before the community for examination and critique. Through open examination and critical analysis, models can be refined, improved, or rejected. Conflicting data can be compared and the underlying experiments and methodology investigated to identify which, if any, is more reliable. While individuals may not always adhere to the highest standards, the community mechanisms of review, criticism, and integration have proved effective in developing coherent and useful models of the physical world around us. As Lee Smolin of the Perimeter Institute for Theoretical Physics recently put it, “we argue in good faith from shared evidence to shared conclusions” [1]. It is an open approach that drives science towards an understanding which, while never perfect, nevertheless enables the development of sophisticated technologies with practical applications.

The Internet and the World Wide Web provide the technical ability to share a much wider range of both the evidence and the argument and conclusions that drive modern research. Data, methodology, and interpretation can also be made available online at lower costs and with lower barriers to access than has traditionally been the case. Along with the ability to share and distribute traditional scientific literature, these new technologies also offer the potential for new approaches. Wikis and blogs enable geographically and temporally widespread collaborations, the traditional journal club can now span continents with online bookmarking tools such as Connotea and CiteULike, and the smallest details of what is happening in a laboratory (or on Mars [2]) can be shared via instant messaging applications such as Twitter.

The potential of online tools to revolutionise scientific communication, and their ability to open up the details of the scientific enterprise so that a wider range of people can participate, is clear. In practice, however, the reality has fallen far behind the potential. This is partly due to a need for tools that are specifically designed with scientific workflows in mind, and partly due to the inertia of infrastructure providers with pre-Internet business models, such as the traditional “subscriber pays” print literature, and, to some extent, of research funders. Predominantly, however, it is due to cultural and social barriers within the scientific community. The prevailing culture of academic scientific research is one of possession, in which control over data, methodological secrets, and the exploitation of results are paramount. The tradition of Mertonian Science has receded, in some cases so far that principled attempts to reframe an ethical view of modern science can seem charmingly naive.

It is in the context of these challenges that the movement advocating more openness in science must be seen. There will always be places where complete openness is not appropriate, such as where personal patient records may be identifiable, where research is likely to lead to patentable (and patent-worthy) results, or where the safety or privacy of environments, study subjects, or researchers might be compromised. These, however, are special instances for which exceptional cases can be made, and not the general case across the whole of the global research effort. Significant steps forward, such as funder and institutional pre-print deposition mandates and the adoption of data sharing policies by the UK Research Councils, must be balanced against legal and legislative attempts to overturn the NIH mandate and widespread confusion over what standards of data sharing are actually required and how they will be judged and enforced. Nonetheless there is a growing community interested in adopting more open practices in their research, and increasingly this community is developing a strong voice in discussions of science policy, funding, and publication. The aim of this workshop is to strengthen this voice by focusing the attention of the community on areas requiring technical development, on the development and implementation of standards, both technical and social, and on the identification and celebration of success.

Why we need open science – Open Access publication, Open Data, and Open Process

The case for taxpayer access to the taxpayer-funded peer reviewed literature was made personally and directly in Jonathan Eisen’s first editorial for PLoS Biology [3].

[…describing the submission of a paper to PLoS Biology as an ‘experiment’…] But then, while finalizing the paper, a two-month-long medical nightmare ensued that eventually ended in the stillbirth of my first child. While my wife and I struggled with medical mistakes and negligence, we felt the need to take charge and figure out for ourselves what the right medical care should be. And this is when I experienced the horror of closed-access publishing. For unlike my colleagues at major research universities that have subscriptions to all journals, I worked at a 300-person nonprofit research institute with a small library. So there I was—a scientist and a taxpayer—desperate to read the results of work that I helped pay for and work that might give me more knowledge than possessed by our doctors. And yet either I could not get the papers or I had to pay to read them without knowing if they would be helpful. After we lost our son, I vowed to never publish in non-OA journals if I was in control. […]

Eisen JA (2008) PLoS Biology 2.0. PLoS Biol 6(2): e48 doi:10.1371/journal.pbio.0060048

As a scientist in a small institution he was unable to access the general medical literature. More generally, as a US taxpayer he was unable to access the outputs of US government funded research, or indeed of research funded by the governments of other countries. The general case for enabling access for the general public, for scientists in less well funded institutions, and for researchers in the developing world has been accepted in principle by most. While US publishers continue to take actions aimed at limiting the NIH mandate, a wide range of research institutions have adopted deposition mandates. There remains much discussion about routes to open access, with the debate over ‘Green’ and ‘Gold’ routes continuing, as well as an energetic ongoing debate about the stability and viability of the business models of various open access journals. However, the gradual increase in the number and impact of open access journals seems unlikely to slow or stop soon. The principle that the scientific literature should be available to all has been won. The question of how best to achieve that remains a matter of debate.

A similar case to that for access to the published literature can also be made for research data. At the extremes, withholding data could lead to preventable deaths or severely reduced quality of life for patients. Andrew Vickers, in a hard-hitting New York Times essay [4], dissected the reasons that medical scientists give for not making data from clinical cancer trials available; data that could, in aggregate, provide valuable insights into enhancing patient survival time and quality of life. He quotes work by John Kirwan (Bristol University) showing that three quarters of researchers in one survey opposed sharing data from clinical trials. While there may be specific reasons for withholding specific types of data from clinical trials, particularly in small specialised cases where maintaining the privacy of participants is difficult or impossible, it seems unarguable that the interests of patients and the public demand that such data be available for re-use and analysis. The argument is strongest where the taxpayer has funded the trials, but there is a public interest case for making clinical trial data available whoever the funder, including industrial funders.

In other fields the case for data sharing may seem less clear cut. There is little obvious damage done to the general public by not making the details of research available. However, while the argument is more subtle, it is similar to that for clinical data. There the argument is that reanalysis and aggregation can lead to new insights with an impact on patient care. In non-clinical sciences this aggregation and re-analysis leads to new insights, more effective analysis, and indeed new types of analysis. The massive expansion in the scale and ambition of biological sciences over the past twenty years is largely due to the availability of biological sequence, structural, and functional data in international and freely available archives. Indeed the entire field of bioinformatics is predicated on the availability of this data. There is a strong argument to be made that the failure of the chemical sciences to achieve a similar revolution is due to the lack of such publicly available data. Bioinformatics is a highly active and widely practiced field of science. By comparison, chemoinformatics is marginalised, and, what is most galling to those who care for the future of chemistry, primarily driven by the needs and desires of biological scientists. Chemists for the most part haven’t grasped the need because the availability of data is not part of their culture.
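To make concrete what that availability enables, the sketch below shows how a single record can be pulled programmatically from a public sequence archive. It is purely illustrative and not part of the original argument: it assumes Biopython is installed, uses the NCBI Entrez service, and the accession number is just an arbitrary example.

```python
# Illustrative sketch only: retrieving one record from a public sequence archive.
# Assumes Biopython is installed; the accession number is an arbitrary example.
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.org"  # NCBI asks callers to identify themselves

# Fetch a single nucleotide record in GenBank format
handle = Entrez.efetch(db="nucleotide", id="NM_000546", rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, record.description)
print("Sequence length:", len(record.seq))
```

The same handful of lines scales to thousands of records, which is exactly the kind of aggregation and re-analysis that is routine in bioinformatics and largely impossible where data sits in supplementary files or is never released at all.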

High-energy particle physics, by contrast, is necessarily based on a community effort; without strong collaboration, communication, and formalised sharing of the details of what work is going on, the research simply would not happen. Astronomy, genome sequencing, and protein crystallography are other fields where there is a strong history, and in some cases formalised standards, of data sharing. While there are anecdotal cases of ‘cheating’ or bending the rules, usually to prevent or restrict the re-use of data, the overall impact of data sharing in these areas is generally seen as positive, leading to better science and to higher standards of data quality and description. Again, to paraphrase Smolin, where the discussion proceeds from a shared set of evidence we are more likely to reach a valid conclusion. This is simply about doing better science by improving the evidence base.

The final piece of the puzzle, and in many ways the most socially and technically challenging, is the sharing of research procedures. Data has no value in and of itself unless the process used to generate it is appropriate and reliable. Disputes over the validity of claims are rarely based on the data themselves but on the procedures used either to collect them or to process and analyse them. A widely reported recent case turned on the details of how a protein was purified: whether with a step or a gradual gradient elution. This detail of procedure led laboratories to differing results, a year of wasted time for one researcher, and ultimately the retraction of several high-profile papers [refs – nature feature, retractions, original paper etc]. Experimental scientists generally imagine that the computational sciences, where a much higher level of reproducibility is achievable and the ready availability of code and Subversion repositories makes sharing and documenting material relatively straightforward, would have much higher standards. However, a recent paper [5] by Ted Pedersen (University of Minnesota, Duluth) – with the wonderful title ‘Empiricism is not a matter of faith’ – criticised the standards of both code documentation and availability. He makes the case that working on the assumption that you will make your tools available to others not only leads to better tools and a good reputation in the community, but also improves the quality of your own work.

And this really is the crux of the matter. If the central principle of the scientific method is open analysis and criticism of claims, then making the data, processes, and conclusions available and accessible is just doing good science. While we may argue about the timing of release, about the details of ‘how raw’ available data needs to be, or about the file formats or ontologies used to describe it, there can be no argument that if the scientific record is to have value it must rest on an accessible body of relevant evidence. Scientists were doing mashups long before the term was invented: mixing data from more than one source and reprocessing it to provide a different view. The potential of online tools to help do this better is massive, but the utility of these tools depends on the sharing of data, workflows, ideas, and opinions.
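As a purely illustrative sketch of the kind of mashup meant here – the file names, column names, and derived quantity are all invented – two independently published datasets are joined on a shared identifier and reprocessed into a new view:

```python
# Illustrative only: combining two hypothetical, independently published
# datasets into a new view. File and column names are invented.
import pandas as pd

# Dataset 1: measured activities for a set of compounds (from one group)
activities = pd.read_csv("measured_activities.csv")    # columns: compound_id, activity

# Dataset 2: computed properties for the same compounds (from another group)
properties = pd.read_csv("computed_properties.csv")    # columns: compound_id, mass, logp

# The 'mashup': join on the shared identifier and derive a quantity
# that neither dataset contains on its own
merged = activities.merge(properties, on="compound_id", how="inner")
merged["activity_per_mass"] = merged["activity"] / merged["mass"]

print(merged.sort_values("activity_per_mass", ascending=False).head())
```

None of this is sophisticated; the hard part is not the few lines of reprocessing but getting the two datasets released in a usable form in the first place.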

There are broadly three areas of development required to enable the more widespread adoption of open practice by research scientists. The first is the development of tools that are designed for scientists. While many general purpose tools and services have been adopted by researchers, there are many cases where specialised design or adaptation is required for the specific needs of a research environment. In some cases the needs of research will push development in specific areas, such as controlled vocabularies, beyond what is being done in the mainstream. The second, and most important, area involves the social and cultural barriers within various research communities. These vary widely in type and importance across different fields, and understanding and overcoming the fears, as well as challenging entrenched interests, will be an important part of the open science programme. Finally, there is a value and a need to provide top-down guidance in the form of policies and standards. The vagueness of the term ‘Open Science’ means that, while it is a good banner, there is potential for confusion. Standards, policies, and brands can provide clarity for researchers, a clear articulation of aspirations (and a guide to the technical steps required to achieve them), and the support required to help people actually make this happen in their own research.

Part II will cover the issues around tools for Open Science

References

  1. Smolin L (2008), Science as an ethical community, PIRSA ID#08090035, http://pirsa.org/08090035/
  2. Mars Phoenix on Twitter, http://twitter.com/MarsPhoenix
  3. Eisen JA (2008) PLoS Biology 2.0. PLoS Biol 6(2): e48 doi:10.1371/journal.pbio.0060048
  4. Vickers A (2008), http://www.nytimes.com/2008/01/22/health/views/22essa.html?_r=1
  5. Pedersen T (2008), Empiricism is not a matter of faith, Computational Linguistics, Volume 34, Number 3, pp. 465-470, self-archived.

Convergent evolution of scientist behaviour on Web 2.0 sites?

A thought sparked off by a comment from Maxine Clarke at Nature Network, where she posted a link to a post by David Crotty. The thing that got me thinking was Maxine’s statement:

I would add that in my opinion Cameron’s points about FriendFeed apply also to Nature Network. I’ve seen lots of examples of highly specific questions being answered on NN in the way Cameron describes for FF…But NN and FF aren’t the same: they both have the same nice feature of discussion of a partiular question or “article at a URL somewhere”, but they differ in other ways,…[CN- my emphasis]

Alright, in isolation this doesn’t look like much (read through both David’s post and the comments, and then come back to Maxine’s), but what struck me was that on many of these sites different communities seem to be using very different functionality to do very similar things. In Maxine’s words, ‘…discussion of a…particular…URL somewhere…’. And that leads me to wonder to what extent all of these sites are failing to do what it is that we actually want them to do. And the obvious follow-on question: what is it we want them to do?

There seem to be two parts to this. One, as I wrote in my response to David, is that a lot of this is about the coffee room conversation, a process of building and maintaining a social network. It happens that this network is online, which makes it tough to drop into each other’s offices, but these conversational tools are the next best thing. In fact they can be better, because they let you choose when someone can drop into your office, a choice you often don’t have in the physical world. Many services – FriendFeed, Twitter, Nature Network, Facebook, or a combination of them – can do this quite well. Indeed the conversation spreads across many services, helping the social network (which, bear in mind, probably has fewer than 500 total members) to grow, form, and strengthen the connections between people.

Great. So the social bit, the bit we have in common with the general populace, is sorted. What about the science?

I think what we want as scientists is two things. Firstly, we want the right URL delivered at the right time to our inbox (I am assuming anything important is a resource on the web – this may not be true now, but give it 18 months and it will be). Secondly, we want a rapid and accurate assessment of that item – its validity, its relevance, and its importance to us – judged by people we trust and respect. Traditionally this was managed by going to the library and reading the journals, and then going to the appropriate conference and talking to people. We know that the volume of material is now far too large, and the speed at which we need to deal with it far too fast, for that to work. Nothing new there.

My current thinking is that we are failing to build the right tools because we keep thinking of these two steps as separate, when actually combining them into one integrated process would provide efficiency gains for both phases. I need to sleep on this to get it straight in my head; there are issues of resource discovery, timeframes, and social network maintenance that are not falling into place for me at the moment, so that will be the subject of another post.
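To make the idea of an integrated process slightly less abstract, here is a toy sketch – every name and number in it is invented – in which each incoming item gets a single score combining how relevant it looks to my interests and how highly it has been rated by people I trust, rather than treating discovery and assessment as two separate passes:

```python
# Toy sketch: one combined score per item, mixing content relevance with
# trust-weighted ratings from people I follow. All data here is invented.
from dataclasses import dataclass

@dataclass
class Item:
    url: str
    keywords: set
    ratings: dict  # person -> rating between 0 and 1

my_interests = {"open science", "data sharing", "repositories"}
trust = {"alice": 0.9, "bob": 0.6, "carol": 0.2}  # weight given to each person's opinion

def score(item: Item) -> float:
    # Discovery: crude keyword overlap with my interests
    relevance = len(item.keywords & my_interests) / len(my_interests)
    # Assessment: trust-weighted average of ratings from people I follow
    rated_by = [p for p in item.ratings if p in trust]
    if not rated_by:
        return 0.0
    assessment = sum(trust[p] * item.ratings[p] for p in rated_by) / sum(trust[p] for p in rated_by)
    return relevance * assessment  # one combined score, not two separate filters

items = [
    Item("http://example.org/paper1", {"open science", "repositories"}, {"alice": 0.8, "bob": 0.5}),
    Item("http://example.org/paper2", {"astronomy"}, {"carol": 0.9}),
]

for item in sorted(items, key=score, reverse=True):
    print(f"{score(item):.2f}  {item.url}")
```

Whether a real tool would weight things in anything like this way is beside the point; the point is that the trusted network and the filtering sit inside a single calculation rather than two separate tools.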

However, whether I am right or wrong in that particular line of thought, if it is true that we are reasonably consistent in what we want then it is not surprising that we try to bend the full range of available services towards achieving those goals. The interesting question is whether we can discern what the killer app would be by looking at the details of what people do with different services and where those services are failing. In a sense, if there is a single killer app for science, then what it would do should be discernible from what scientists try to do with different services…

Why a 25% risk of developing Parkinson’s really does matter

There has been a lot of commentary around the blogosphere about Sergey Brin’s blog post in which he announced that his SNP profile includes one variant which significantly increases his risk of developing Parkinson’s disease. The mainstream media seem mostly to be desperately concerned about the potential for the ignorant masses to be misinformed about what the results of such tests mean, and the potential for people to make unfortunate decisions based on them (based on that argument I’m not sure why we are allowed to have either credit cards or mortgages, but never mind). The other, related, stream of commentary, found more online, is that this is all a bit meaningless because the correlations between the SNPs 23andMe measure and any disease (or indeed any phenotype) are weak, so it’s not like he knows he’s going to get it – why not get a proper test for a proper genetic disease?

I think this is missing the point – and, in fact, I think his post could represent the beginning of a significant change to the landscape of medical funding – precisely because that correlation is weak.

In the western world we have been talking for decades about how ‘prevention is better than cure’, yet funding for disease prevention has always remained poor. It’s not sexy, it is often long term, and it is much harder to get people to donate money for it. Rich people donate money because relatives, or they themselves, are already ill. They are looking for a cure, or at least a legacy of helping people in the same situation.

The Health Commons project run by Science Commons aims to reduce the transaction costs involved in getting to a drug. If you can drop the cost of developing a cure for a disease from $1B to $1M, you have many orders of magnitude more people who can afford to simply ‘buy a cure’ for a loved one, for themselves, or to make themselves feel better. This vision, and what it does to global health, even if the success rates are relatively low, is immensely powerful. And it needn’t apply just to drugs, or to cures.

But Brin doesn’t necessarily want a cure. Actually that’s not true – I’m sure he does; his mother has Parkinson’s and his great-aunt suffered from it as well. Anyone who has seen someone suffer from a degenerative disease would want a cure. But equally, he’s a smart guy and knows just how tough that will be, and how much money is already going into such things. What he can do now is look at his genetic profile, see which diseases he is predisposed to, decide which of those he is really worried about, and then dig into his pocket and decide what the most efficient use of his $15B fortune is. Sure, he will put millions into developing treatments, but the smart money will go on research into prevention or slowing onset. He has the time before any likely onset to let prevention programmes run for 10-20 years before he needs to take a best guess on implementation.

And the point is that there will be a growing number of people making similar judgements about a range of diseases with only the probabilities derived from SNP data to work from. And don’t believe that full genome sequences will make those probabilities much better either. More precise, yes. Better linked to phenotype? Not for a while yet. People who get these tests done don’t know exactly what they will get, but they have an idea, and they might have that idea up to 50 years in advance. Now consider what happens if the cost of developing methods to prevent or delay onset drops to the point where millionaires can make an impact: a thousand-fold, or maybe a million-fold, more people with a deep interest in preventing the onset of specific diseases, an understanding of risk-based investment, and the money and the time to do something about it.
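For a feel of the arithmetic involved – the numbers below are invented purely to illustrate how a figure like the 25% in the title could arise, and are not Brin’s or 23andMe’s actual figures – a variant that multiplies a modest baseline lifetime risk still leaves a carrier with a substantial, but far from certain, probability of disease:

```python
# Purely illustrative arithmetic, not real epidemiology: how a variant's
# relative risk might translate into an absolute lifetime risk.
baseline_lifetime_risk = 0.02   # assumed population lifetime risk (illustrative)
relative_risk = 12.5            # assumed effect of carrying the variant (illustrative)

carrier_risk = baseline_lifetime_risk * relative_risk
print(f"Carrier lifetime risk: {carrier_risk:.0%}")                       # 25%
print(f"Chance of never developing the disease: {1 - carrier_risk:.0%}")  # 75%
```

A number like that is exactly the sort of long-horizon probability that rewards spending on prevention decades before any likely onset.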

Preventative medicine just became the biggest growth area in medical research.

p.s. Attilla gets it – he’s just thinking regenerative rather than preventative – maybe they are the same thing in the end?