A use case scenario for Mark…a description of the first experiment on the ISIS LaBLog

13 October 2008

Two rather exciting things are happening at the moment. Firstly, we have finally got the LaBLog system up and running at RAL (http://biolab.isis.rl.ac.uk). Not a lot is happening there yet, but we are gradually working up to full Open Notebook status, starting by introducing people to the system bit by bit. My first experiment went up there late last week. It isn’t finished yet, but I had better get some of the data analysis done as rpg, if no-one else, is interested in the results.

The other area of development is that back down in Southampton, Blog MkIII is being specced out and design is going forward. This is being worked on now by both Mark Borkum and Andrew Milsted. Last time I was down in Southampton Mark asked me for some use cases – so I thought I might use the experiment I’ve just recorded to try and explain both the good and bad points of the current system, and also my continuing belief that anything but a very simple data model is likely to be fatally flawed when recording an experiment. This will also hopefully mark the beginning of more actual science content on this blog as I start to describe some of what we are doing and why. As we get more of the record of what we are doing onto the web we will be trying to generate a useful resource for people looking to use our kind of facilities.

So, very briefly, the point of the experiment we started last week is to look at the use of GFP as a concentration and scattering standard in Small Angle Scattering. Small angle x-ray and neutron scattering provide an effective way of determining low resolution (say 5-10 Å) structures of proteins in solution. However, they suffer from serious potential artefacts that must be rigorously excluded before the data analysis can be trusted. One of the most crucial of these is aggregation, whether random conversion of protein into visible crud, or specific protein-protein interactions. Either of these, along with poor background subtraction or any one of a number of other problems, can very easily render the data, and the analysis that depends on it, meaningless.

So what to do? Well, one approach is to use a very well characterised standard for which concentration, size, and shape are well established. There are plenty of proteins that are well behaved, pretty cheap, and for which the structure is known. However, as any biophysicist will tell you, measuring protein concentration accurately and precisely is tough; colorimetric assays are next to useless, and measuring the UV absorbance of aromatic residues is pretty insensitive, prone to interference from other biological molecules (particularly DNA), and a lot harder to do right than most people think.

Our approach is to look at whether GFP is a good potential standard (specifically an eGFP engineered to prevent the tetramerisation that is common with the natural proteins). It has a strong absorption at 490 nm, well clear of most other biological molecules; it is dead easy to produce in large quantities (in our hands; I know other people have had trouble with this, but we routinely pump out hundreds of milligrams and currently have a little over one gramme in the freezer); it is stable in solution at high concentrations; and it freeze-dries nicely. Sounds great! In principle we can do our scattering, then take the same sample cells, put them directly in a spectrophotometer, and measure the concentration. Last week was about doing some initial tests on a lab SAXS instrument to see whether the concept held up.
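
To make the spectrophotometer step concrete, the concentration calculation is just the Beer-Lambert law applied to the 490 nm peak, c = A / (εl), converted to mg/mL via the molecular weight. Below is a minimal sketch in Python; the extinction coefficient and molecular weight are placeholder values for illustration, not calibrated figures for our particular eGFP construct.

```python
# Beer-Lambert estimate of protein concentration from an A490 reading.
# The constants below are illustrative placeholders, not calibrated values.

EPSILON_490 = 55000.0   # assumed molar extinction coefficient at 490 nm (M^-1 cm^-1)
MW_EGFP = 27000.0       # approximate molecular weight of eGFP (g/mol)
PATH_LENGTH_CM = 1.0    # sample cell path length in cm

def concentration_mg_per_ml(absorbance_490, path_length_cm=PATH_LENGTH_CM):
    """Convert an A490 reading to a protein concentration in mg/mL."""
    molar = absorbance_490 / (EPSILON_490 * path_length_cm)  # mol/L
    return molar * MW_EGFP                                   # g/L, i.e. mg/mL

if __name__ == "__main__":
    for a490 in (0.2, 0.5, 1.0):
        print(f"A490 = {a490:.2f} -> {concentration_mg_per_ml(a490):.2f} mg/mL")
```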

So – to our use case.

Maria, a student from Southampton, met me in Bath holding samples of GFP made up to 1, 2, 5, and 10 mg/mL in buffer. I quizzed Maria as to exactly how the samples had been made up and then recorded that in the LaBLog (post here). I then created the four posts representing each of the samples (1, 2, 3, 4). I also created a template for doing SAXS, and then, using that template, I started filling in the first couple of planned samples (but I didn’t actually submit the post until some time later).

At this point, as the buffer background was running, I realised that the 10 mg/mL sample actually had visible aggregate in it. As the 5 mg/mL sample didn’t have any aggregate, we changed the planned order of SAXS samples, starting with the 5 mg/mL sample. At the same time, we centrifuged the 10 mg/mL sample, which appeared to work quite nicely, generating a new, cleared 10 mg/mL sample, and prepared fresh 5 mg/mL and fresh 2 mg/mL samples.

Due to a lack of confidence in how we had got the image plate into its reader, we actually ended up running the original 5 mg/mL sample three times. The second time we really did muck up the transfer, but comparisons of the first and third runs made us confident the first one was ok. At this point we were late for lunch and decided we would put the lowest concentration (1 mg/mL) sample on for an hour and grab something to eat. Note that by this time we had changed the expected order of samples about three or four times, but none of this is recorded because I didn’t commit the record of data collection until the end of the day.

By this stage the running of samples was humming along quite nicely, and it was time to deal with the data. The raw data comes off the instrument in the form of an image. I haven’t actually got these off the original computer as yet because they are rather large, but they are then immediately processed into relatively small two-column data. It seems clear that each data file requires its own identity, so those were all created (using another template). Currently, several of these do not even have the two-column text data: the big TIFF files broke the system on upload, and I got fed up with uploading the reduced data by hand into each file.

As a result of running out of time, and the patience to upload multiple files, the description of the data reduction is a bit terse. Although there are links to all the data, most of you will get a 404 if you try to follow them, so I need to bring all of that back down and put it into the LaBLog proper where it is accessible. If you look closely here, you will see I made a mistake with some of the data analysis that needs fixing, and I’m not sure I can be bothered systematically uploading all the incorrect files. If the system were acting naturally as a file repository and I were acting directly on those files, then it would be part of the main workflow that everything was made available automatically. The problem here was that I was forced by the instrument software to do the analysis on a specific computer (that wasn’t networked), and that our LaBLog system has no means of multiple file upload.

So, to summarise the use case:

  1. Maria created four samples
  2. Original plan created to run the four samples plus backgrounds
  3. Realised the 10 mg/mL sample was aggregating and centrifuged it to clear it (a new, unexpected procedure)
  4. Ran one of the pre-made samples three times: the first time we weren’t confident in the transfer, the second was a failure, the third confirmed the first was ok
  5. Prepared two new samples from the cleared 10 mg/mL sample
  6. Re-worked the plan for running samples based on the time available
  7. Ran the 1 mg/mL sample for a different amount of time than the previous samples
  8. Ran the remaining samples for various amounts of time
  9. Data was collected from each sample after it was run and converted to a two-column text format
  10. Two-column data was rebinned and background subtracted (this is where I went wrong with some of them, forgetting that in some cases I had two lots of electronic background); see the sketch after this list
  11. Subtracted data was rebinned again and then desmeared (the instrument has a slit geometry rather than a pinhole) to generate a new two-column data file.
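
As a rough illustration of step 10 above, this is what the reduction looks like in outline: load the two-column (q, intensity) text files, rebin onto a coarser grid, and subtract a scaled background curve. The sketch below is Python/NumPy with made-up file names; the real reduction was done in the instrument’s own proprietary software, and the desmearing step is instrument-specific so is not shown.

```python
# Sketch of the two-column data reduction: load, rebin, background-subtract.
# File names and bin settings are illustrative only.
import numpy as np

def load_two_column(path):
    """Read a whitespace-delimited two-column (q, intensity) text file."""
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1]

def rebin(q, i, n_bins=200):
    """Average intensities into n_bins equal-width q bins (empty bins are dropped)."""
    edges = np.linspace(q.min(), q.max(), n_bins + 1)
    idx = np.digitize(q, edges[1:-1])
    q_new, i_new = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            q_new.append(q[mask].mean())
            i_new.append(i[mask].mean())
    return np.array(q_new), np.array(i_new)

def subtract_background(i_sample, i_background, scale=1.0):
    """Point-by-point subtraction; assumes both curves are on the same q grid."""
    return i_sample - scale * i_background

# Hypothetical usage:
# q_s, i_s = rebin(*load_two_column("gfp_5mgml.dat"))
# q_b, i_b = rebin(*load_two_column("buffer_background.dat"))
# i_corrected = subtract_background(i_s, i_b)
```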

So, four original samples, and three unexpected ones, were created. One set of data collection led to nine raw data files, which were then recombined in a range of different ways depending on collection times. Ultimately this generates four finalised reduced datasets, plus a number of files along the way. Two people were involved, and all of this was done under reasonable time pressure. If you look at the commit times on the posts you will realise that a lot of these were written (or at least submitted) rather late in the day, particularly the data analysis. This is because the data analysis was done offline, out of the notebook, in proprietary software, and there is not a lot that can be done about that. The other things that were late were the posts associated with the ‘raw’ datafiles. In both cases a major help would be a ‘directory watcher’ that automatically uploads files and queues them up somewhere so they are available to link to.
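
To make the ‘directory watcher’ idea concrete, here is a minimal sketch: poll the instrument’s output directory and hand any new file to an upload routine. The upload function is a placeholder, since the notebook system doesn’t currently expose anything for it to call; a real version would push the file to wherever the LaBLog (or any other notebook) accepts uploads and record the resulting link.

```python
# Minimal polling 'directory watcher': queue up new files for upload.
# The watch directory and upload() call are placeholders for illustration.
import os
import time

WATCH_DIR = "/path/to/instrument/output"   # hypothetical data directory
POLL_SECONDS = 30

def upload(path):
    """Placeholder: push the file to the notebook system and return a link."""
    print(f"would upload {path}")

def watch(directory, poll_seconds=POLL_SECONDS):
    """Poll the directory and upload any files that appear."""
    seen = set(os.listdir(directory))
    while True:
        current = set(os.listdir(directory))
        for name in sorted(current - seen):
            upload(os.path.join(directory, name))
        seen = current
        time.sleep(poll_seconds)

# watch(WATCH_DIR)   # runs until interrupted
```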

This was not an overly complicated or unusual experiment, but one that illustrates the pretty common mid-stream changes of direction and reassessments of priorities as we went. What it does demonstrate is the essential messiness of the process. There is no single workflow that can be applied across the whole experiment, either in the practical or the data analysis parts. There is no straightforward parallel process applied to a single set of samples, but multiple, related samples that require slightly different tacks to be taken with the data analysis. What there are, are objects that have relationships. The critical thing in any laboratory recording system is making the recording of both the objects, and the relationships between them, as simple and as natural as possible. Anything else and the record simply won’t get made.


Comments »

  • Will Moore said:

    It’s good to hear about the practical challenges of keeping a digital lab notebook, although I think that many of them will also apply to keeping a well-organised paper notebook!

    “anything but a very simple data model is likely to be fatally flawed when recording an experiment”
    I totally agree. The question is what you want to do with the model of the experiment. Simply exchange a description of the experiment with other software? Or to get some machine to reason over it?
    Is anyone working on a simple data model for an experiment? The only ones I’m aware of are complex Functional Genomics ones like MAGE-TAB.

  • Yaroslav Nikolaev said:

    Very good point on the need for objects with relationships, instead of rigid workflows!

    On your challenges with precise protein concentration measurement, you might think of additionally evaluating it by NMR. It has limitations of its own and is more laborious than UV, but it gives very precise results, and due to intrinsic properties of the method the contribution of aggregates is naturally excluded from the obtained values (however, the same property creates an upper limit on the size of the measurable particles).
    Here the approximate procedure is illustrated for nucleic acids, but the same applies to other biomolecules as well:
    http://www.ncbi.nlm.nih.gov/pubmed/12419352
    http://www.ncbi.nlm.nih.gov/pubmed/14722228

  • Cameron Neylon said:

    Will, yes I think many of these issues are equally true of paper notebooks. Just that the same standards of legibility and comprehensibility are usually not applied. Also, to be frank, the interface remains better :-)

    Yaroslav, it’s a good point; I hadn’t considered NMR and I should look into it. We don’t actually have NMR onsite here, but I believe it is something we should have, and presumably it wouldn’t need incredibly high-field instruments, so it wouldn’t have to be very expensive. Will look into this.
