Defining error rates in Illumina sequence data: A useful and feasible open project?

Panorama image of the EBI (left) and Sulston Laboratories (right) of the Sanger Institute on the Genome campus in Cambridgeshire, England.

Regular readers will know I am a great believer in the potential of Web 2.0 tools to enable the rapid aggregation of loose networks of collaborators to solve a particular problem, and in the possibility of using this approach to do science better, faster, and more efficiently. The reason we haven’t had great successes with this so far comes down fundamentally to the size of the network we have in place and the bias of that network’s expertise towards specific areas. There is a strong bioinformatics/IT bias among the people interested in these tools, and it plays out in a number of ways, from the people on FriendFeed to the relative frequency of commenting on PLoS Computational Biology versus PLoS ONE.

Putting these two observations together, one obvious approach is to find a problem that is well suited to the people who are around, that may be of interest to them, and that is also genuinely useful to solve. I think I may have found such a problem.

The Illumina next-generation sequencing platform, originally developed by Solexa, is the latest of the new systems to reach the market. I spent a good part of today talking about how the analysis pipeline for this system could be improved. One issue that came up is that no one seems to have published a detailed analysis of the types of errors the system generates experimentally. Illumina have probably done this analysis in some form but have better things to do than write it up.

The Solexa system is based on sequencing by synthesis. A population of DNA molecules, all amplified from the same single molecule, is immobilised on a surface. A new strand of DNA is then built up one base at a time. In the Solexa system each base carries a different fluorescent marker plus a blocking reagent. After the base is added and the colour read, the blocker is removed and the next base can be added. More details can be found on the genographia wiki. There are two major sources of error here. Firstly, for a proportion of each sample the base is not added successfully, which means that in the next round that part of the sample may generate a readout for the previous base. Secondly, the blocker may fail, leading to the addition of two bases and causing a similar problem but in reverse. As the cycles proceed, the ends of the DNA strands in the sample get increasingly out of phase, making it harder and harder to tell which is the correct signal.

These error rates probably depend both on the identity of the base being added and on the identity of the previous base. They may also depend on the number of cycles already carried out. There is also the possibility that the sample DNA contains errors introduced by the amplification process, though these are likely to be close to insignificant. However, there is no data on these error rates available. Simple, you might think: get hold of some of the raw data and do the analysis – fit the sequence of raw intensity data to a model whose parameters are the error rates for each base.
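To make the phasing argument a little more concrete, here is a deliberately crude sketch of the kind of model I have in mind: a population of strands that, at each cycle, either incorporates one base, fails to incorporate and lags behind, or loses its blocker and jumps ahead. The function names and the lag/lead probabilities are mine, picked purely for illustration; they are not measured values and not anything from the Illumina pipeline.

```python
# A deliberately crude sketch of the phasing problem, not a real error model.
# The lag/lead probabilities below are invented for illustration only.
import numpy as np

def phasing_matrix(n_cycles, p_lag=0.01, p_lead=0.005):
    """P[t, j] = fraction of strands that have incorporated j bases after t cycles."""
    width = 2 * n_cycles + 2                      # leading strands can run ahead
    P = np.zeros((n_cycles + 1, width))
    P[0, 0] = 1.0                                 # every strand starts with nothing added
    for t in range(n_cycles):
        for j in range(width - 2):
            frac = P[t, j]
            if frac == 0.0:
                continue
            P[t + 1, j]     += frac * p_lag                   # incorporation fails: strand lags
            P[t + 1, j + 1] += frac * (1.0 - p_lag - p_lead)  # normal single addition
            P[t + 1, j + 2] += frac * p_lead                  # blocker fails: two bases added
    return P

def expected_signal(template, P):
    """Mix the pure per-base signals according to where the strands actually sit.
    Columns of the result are A, C, G, T."""
    bases = "ACGT"
    n_cycles = P.shape[0] - 1
    signal = np.zeros((n_cycles, 4))
    for t in range(1, n_cycles + 1):
        for j, frac in enumerate(P[t]):
            if frac > 0.0 and 1 <= j <= len(template):
                signal[t - 1, bases.index(template[j - 1])] += frac
    return signal

if __name__ == "__main__":
    sig = expected_signal("ACGTACGTAC", phasing_matrix(10))
    print(np.round(sig, 3))   # the 'wrong base' weight grows with cycle number
```

Even with these made-up numbers the in-phase fraction decays roughly as (1 − p_lag − p_lead) per cycle, so the weight on the wrong bases grows steadily with cycle number, which is exactly the effect that makes late cycles hard to call.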

Well, we know that the availability of data makes re-processing possible, and we believe in the power of the social network. I also know that a lot of you are good at this kind of analysis and might be interested in having a play with some of the raw data. It could also be a good paper – Nature Biotech or Nature Methods perhaps – and I am prepared to bet it would get an interesting editorial write-up on the process as well. I don’t really have the skills to do the work myself, but if others out there are interested then I am happy to coordinate. This could all be done in the wild, out in the open, and I think that would be a brilliant demonstration of the possibilities.

Oh, the data? We’ve got access to the raw and corrected spot intensities and the base calls from a single ‘tile’ of the phiX174 control lane for a run from the 1000 Genomes Project, which can be found at http://sgenomics.org/phix174.tar.gz courtesy of Nava Whiteford from the Sanger Centre. If you’re interested in the final product you can see some of the final read data being produced here.

What I had in mind was taking the called sequence and aligning it onto phiX174, so that we know the ‘true’ sequence, then using that sequence plus an error model to parameterise the error rates. Perhaps there is a better way to approach the problem? There is a series of relatively simple error models that could be tried, and if the error rates can be pinned down it will enable a really significant increase in both the quality and quantity of data these machines can deliver. I figure we could split the job up into a few small groups working on different models, and put the whole thing up on Google Code with a wiki there to coordinate and capture other issues as we go forward. Anybody up for it (and got the time)?
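As a starting point, here is a rough sketch of the tallying step, assuming the called reads are available as plain text and the phiX174 reference as FASTA. The file names are placeholders, a naive ungapped scan stands in for a proper aligner, and reverse-strand reads are simply skipped, so treat this as a sketch of the idea rather than the analysis itself.

```python
# A rough first pass at the tallying step, not a finished pipeline. The file
# names are placeholders, a naive ungapped scan stands in for a real aligner,
# and reverse-strand reads are simply ignored for brevity.
from collections import defaultdict

def load_fasta(path):
    with open(path) as fh:
        return "".join(line.strip().upper() for line in fh if not line.startswith(">"))

def best_ungapped_offset(read, ref):
    """Place a short read on the reference by minimising mismatches over all offsets."""
    best_off, best_mism = None, len(read) + 1
    for off in range(len(ref) - len(read) + 1):
        mism = sum(1 for a, b in zip(read, ref[off:off + len(read)]) if a != b)
        if mism < best_mism:
            best_off, best_mism = off, mism
    return best_off, best_mism

def tally_errors(reads, ref, max_mismatch_frac=0.2):
    """Count miscalls per cycle and per (previous true base, true base) context."""
    by_cycle = defaultdict(lambda: [0, 0])      # cycle -> [errors, total]
    by_context = defaultdict(lambda: [0, 0])    # (prev_base, base) -> [errors, total]
    for read in reads:
        off, mism = best_ungapped_offset(read, ref)
        if off is None or mism > max_mismatch_frac * len(read):
            continue                            # unplaceable (or reverse-strand) read
        for cycle, called in enumerate(read):
            true = ref[off + cycle]
            prev = ref[off + cycle - 1] if off + cycle > 0 else "-"
            err = int(called != true)
            by_cycle[cycle][0] += err
            by_cycle[cycle][1] += 1
            by_context[(prev, true)][0] += err
            by_context[(prev, true)][1] += 1
    return by_cycle, by_context

if __name__ == "__main__":
    ref = load_fasta("phiX174.fa")                                       # placeholder path
    reads = [line.strip().upper() for line in open("called_reads.txt")]  # one read per line
    by_cycle, by_context = tally_errors(reads, ref)
    for cycle in sorted(by_cycle):
        errors, total = by_cycle[cycle]
        print(f"cycle {cycle + 1}: {errors / total:.4f} ({errors}/{total})")
```

Fitting a proper phasing model to the raw intensities would obviously go well beyond this, but even a simple tally of errors by cycle and by preceding base would be a useful first result.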


More on the Science Exchange – or building and capitalising a data commons

Banknotes from all around the world donated by visitors to the British Museum, London (image from Wikipedia via Zemanta).

Following on from the discussion a few weeks back kicked off by Shirley at One Big Lab, and continued here, I’ve been thinking about how to actually turn what was a throwaway comment into reality:

What is being generated here is new science, and science isn’t paid for per se. The resources that generate science are supported by governments, charities, and industry but the actual production of science is not supported. The truly radical approach to this would be to turn the system on its head. Don’t fund the universities to do science, fund the journals to buy science; then the system would reward increased efficiency.

There is a problem at the core of this. For someone to pay for access to the results, there has to be a monetary benefit to them. This may come through the increased efficiency of their research funding, but that’s a rather vague benefit. For a serious charitable or commercial funder there has to be the potential either to make money or at least to see that the enterprise could become self-sufficient. But surely this means monetising the data somehow? That would require restrictive licences, which is not, in the end, what we’re about.

The other story of the week has been the, in the end, very useful kerfuffle caused by ChemSpider moving to a CC-BY-SA licence, and the confusion it has revealed about data, licensing, and the public domain. John Wilbanks, whose comments on the ChemSpider licence sparked the discussion, has written two posts [1, 2] which I found illuminating and which have made things much clearer for me. His point is that data naturally belongs in the public domain, and that the public domain and the freedom of the data itself need to be protected from the erosion, both legal and conceptual, that could be caused by our obsession with licences. What does this mean for making an effective data commons, and the Science Exchange that could arise from it, financially viable? Continue reading “More on the Science Exchange – or building and capitalising a data commons”

Protocols for Open Science

Interior detail of the Stata Center, MIT, just outside the Science Commons offices.

One of the strong messages that came back from the workshop we held at the BioSysBio meeting was that protocols and standards of behaviour are something people would appreciate having available. There are many potential issues raised by the idea of a ‘charter’ or ‘protocol’ for open science, but these are definitely things worth talking about. I thought I would throw a few ideas out and see where they go. There are some potentially serious contradictions to be worked through. Continue reading “Protocols for Open Science”

The economic case for Open Science

I am thinking about how to present the case for Open Science, Open Notebook Science, and Open Data at Science in the 21st Century, the meeting being organised by Sabine Hossenfelder and Michael Nielsen at the Perimeter Institute for Theoretical Physics. I’ve put up a draft abstract, and as you might guess from it I want to make an economic case: that the waste of resources, both human and monetary, is not sustainable for the future. Here I want to rehearse that argument a bit further, as well as explore the business case that could be presented to Google or the Gates Foundation as a package that would include development of the Science Exchange ideas I blogged about last week. Continue reading “The economic case for Open Science”

Science in the 21st Century

Perimeter Institute, by hungryhungrypixels (picture found by Zemanta).

Sabine Hossenfelder and Michael Nielsen of the Perimeter Institute for Theoretical Physics are organising a conference called ‘Science in the 21st Century’, which was inspired in part by SciBarCamp. I am honoured, and not a little daunted, to have been asked to speak, considering the star-studded line-up of speakers including, well, lots of really interesting people; read the list. The meeting looks to be a really interesting mix of science, tools, and how these interact with people (and scientists). I’m looking forward to it. Continue reading “Science in the 21st Century”

Open Science in the Undergraduate Laboratory: Could this be the success story we’re looking for?

A whole series of things has converged for me in the last couple of days. First was Jean-Claude’s description of the work [1, 2] he and Brent Friesen of the Dominican University are doing to put the combi-Ugi project into an undergraduate laboratory setting. The students will make new compounds which will then be sent for testing as antimalarial agents by Phil Rosenthal at UCSF. This is a great story and a testament in particular to Brent’s work to make the laboratory practical more relevant and exciting for his students.

At the same time I get an email from Anna Croft, University of Bangor, Wales, after meeting up the previous day; Continue reading “Open Science in the Undergraduate Laboratory: Could this be the success story we’re looking for?”

Semantics in the real world? Part I – Why the triple needs to be a quint (or a sext, or…)

I’ve been mulling over this for a while, and seeing as I am home sick (can’t you tell from the rush of posts?) I’m going to give it a go. This definitely comes with a health warning, as it goes way beyond anything I know much about at a technical level. It is therefore handwaving of the highest order. But I haven’t come across anyone else floating the same ideas, so I will have a shot at explaining my thoughts.

The Semantic Web, RDF, and XML are all the product of computer scientists thinking about computers and information. You can tell this because they deal in straightforward declarations that are absolute: X has property Y. Putting aside the issues with the availability of tools and applications, and the fact that triple stores don’t scale well, a central issue with applying these kinds of strategy to the real world is that absolutes don’t exist. I may assert that X has property Y, but what happens when I change my mind, or when I realise I made a mistake, or when I find out that the underlying data wasn’t taken properly? How do we get this to work in the real world? Continue reading “Semantics in the real world? Part I – Why the triple needs to be a quint (or a sext, or…)”
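To hand-wave slightly more concretely, here is one way the extra slots might look in code: a ‘quint’ that carries who asserted a statement and when, plus a pointer to the claim it supersedes, so that changing my mind adds a new assertion rather than silently rewriting the old one. This is a minimal sketch; the field names and the example values are mine and are not taken from any RDF specification.

```python
# Handwaving in code: a 'triple' with extra slots for who said it, when, and
# what it replaces. The field names are mine, not from any RDF specification.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class Quint:
    subject: str
    predicate: str
    object: str
    asserted_by: str                        # who is making the claim
    asserted_at: datetime                   # when they made it
    supersedes: Optional["Quint"] = None    # the earlier claim this one corrects

store = []
first = Quint("sample42", "hasMeltingPoint", "55C",
              "cameron", datetime(2008, 3, 1, tzinfo=timezone.utc))
store.append(first)

# Later I realise the thermometer was miscalibrated, so I add a new assertion
# that points back at the old one instead of silently rewriting history.
store.append(Quint("sample42", "hasMeltingPoint", "57C",
                   "cameron", datetime(2008, 3, 8, tzinfo=timezone.utc),
                   supersedes=first))

current = [q for q in store if not any(r.supersedes is q for r in store)]
print(current)   # only the corrected value is 'current'; the original survives as history
```

The point is not the particular fields but the principle: the statement itself is never an absolute, it is always qualified by who said it, when, and whether it still stands.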

Workshop on Blog Based Notebooks

DUE TO SEVERE COMMENT SPAM ON THIS POST I HAVE CLOSED IT TO COMMENTS

On February 28/29 we held a workshop on our Blog Based notebook system at the Cosener’s House in Abingdon, Oxfordshire. It was a small workshop of 13 people, including biochemists (from Southampton, Cardiff, and RAL), social scientists (from the Oxford Internet Institute and the Computing Laboratory), developers from the MyGrid and MyExperiment family, and members of the Blog development team. The purpose of the workshop was to identify the key problems that need to be addressed in the next version of the Blog Based notebook system and to ask the question ‘Does this system deliver functionality that people want?’. Another aim was to identify specific ways in which we could interact with MyExperiment to deliver enhanced functionality for us as well as new members for MyExperiment.

Continue reading “Workshop on Blog Based Notebooks”

Connecting Open Notebook data to a peer reviewed paper

One thing we have been thinking about a bit recently is how best to link elements of a peer-reviewed paper back to an Open Notebook. This raises a number of issues, both technical and philosophical, about how and why we might do it. Our first motivation is to provide access to the raw data if people want it. The dream here is that by clicking on a graph you are taken straight through to the processed data, which you can then backtrack through to get to the raw data itself. This is clearly some way off.

Other simple solutions are to provide a hyperlink back to the open notebook, or to an index page that describes the experiment, how it was done, and how the data was processed. Publishers are always going to have an issue with this because they can’t rely on the stability of external material. So the other solution is to package up a version of the notebook and provide it as supplementary information. This could still provide links back to the ‘real’ notebook but provides additional stability and also protects the data against disaster by duplicating it.

The problem with this is that many journals will only accept PDF. While we can process a notebook into a package wrapped up as a PDF, this has a lot of limitations, particularly when it comes to data scraping, which after all we want to encourage. An encouraging development was recently described on the BioMed Central blog, where they describe the ability to upload a ‘mini website’ as supplementary information. This is great, as we can build a static version of our notebook with lots of lovely rich metadata built in. We can point out to the original notebook, and we can point in from the original notebook back to the paper. I am supposed to be working on a paper at the moment and was considering where to send it. I hope we can give BMC Biotechnology or perhaps BMC Research Notes a go to test this out.
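As a toy illustration of the ‘mini website’ idea, here is a sketch that snapshots notebook entries as static pages, each linking back to the live notebook and forward to the paper. The entry structure, URL, and DOI are all hypothetical placeholders rather than anything from our actual notebook system.

```python
# A toy sketch of the 'mini website' idea: snapshot each notebook entry as a
# static page that links back to the live notebook and forward to the paper.
# The entry structure, URL, and DOI below are all hypothetical placeholders.
import html
import os

PAPER_DOI = "10.xxxx/placeholder"
entries = [
    {"slug": "expt-001",
     "title": "Example experiment",
     "live_url": "http://example-notebook.org/expt-001",
     "body": "Raw data and processing notes would go here."},
]

os.makedirs("supp_site", exist_ok=True)
for entry in entries:
    page = f"""<html><head><title>{html.escape(entry['title'])}</title></head>
<body>
<h1>{html.escape(entry['title'])}</h1>
<p>{html.escape(entry['body'])}</p>
<p>Live notebook entry: <a href="{entry['live_url']}">{entry['live_url']}</a></p>
<p>Associated paper: doi:{PAPER_DOI}</p>
</body></html>"""
    with open(os.path.join("supp_site", entry["slug"] + ".html"), "w") as fh:
        fh.write(page)
print("Static snapshot written to supp_site/")
```

The real thing would of course pull the entries from the notebook itself and carry much richer metadata, but even something this simple gives the publisher a stable, self-contained package while keeping the links back to the living record.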