The long slow catchup…

I’m a little shell-shocked really. I’ve spent the last couple of weeks running around like a lunatic, being at meetings, organising meetings, and flying out to other meetings. And then flying back to try and catch up with all the things that need doing before the next flurry of activity strikes (which involves less travel and more experiments, you will be pleased to know). There are two things I desperately need to write up.

The Open Science workshop at Southampton on September 1 seemed to be well received and was certainly interesting for me.  Despite having a very diverse group of people we did seem to manage to have a sensible discussion that actually came to some conclusions. This was followed up by discussions with the web publishing group at Nature where some of these ideas were refined – more on this will follow!

Following on from this (and after a quick afternoon jaunt to Bristol for the Bristol Knowledge Unconference on the evening of September 5) I flew to Toronto en route to Waterloo for Science in the 21st Century, allowing for a brief stop for a Nature Network Toronto pub night panel session with Jen Dodd, Michael Nielsen, and Timo Hannay. The organisers of Science21, but in particular Sabine Hossenfelder, deserve huge congratulations for putting together one of the most diverse and exciting conferences I have ever been to. With speakers ranging from historians to sociologists, hedge fund managers to writers, and even the odd academic scientist, the sheer breadth of material covered was quite breathtaking.

You can see most of the talks and associated material on the Perimeter Institute Seminar Archive page here. The friendfeed commentary is also available in the science21 room. Once again it was a great pleasure to meet people I kind of knew but hadn’t ever actually met such as Greg Wilson and John Dupuis as well as to meet new people including (but by no means limited to) Harry Collins, Paul Guinnessy, and David Kaiser. We have yet to establish whether I knew Jen Dodd in a previous life…

Very many ideas will come out of this meeting I think – and I have no doubt you will see some interesting blog posts from others with the science21 tag coming out over the next few weeks and months. A couple of particular things I will try to follow up on:

  • Harry Collins spoke about categorisations of tacit (i.e. non-communicated) knowledge and how these relate to different categories of expertise. This has obvious implications for our mission to describe our experiments to a level where there is ‘no insider information’. The idea that we may be able to rationally describe what we can and cannot expect to be able to communicate and that we can therefore concentrate on the things that we can is compelling.
  • Greg Wilson made a strong case for the fully supported experiment that echoed my own thoughts about the recording of data analysis procedures. He was focussed on computational science but I think his point goes much wider than that. This requires some thought and processing, but for me the big challenge in communicating the details of our experiments now clearly lies in communicating process rather than data.

Each of these deserves its own post and will hopefully get it. And I am also aware that I owe many of you comments, replies, or other things – some more urgent than others. I’ll be getting to them as soon as I can dig myself out from under this pile of……

The trouble with semantics…

…is knowing what you mean…

I posted last week about the spontaneous CMLReact hackfest held around Peter Murray-Rust’s dining room table the day after Science Blogging in London. There were a number of interesting things that came out of the exercise for me. The first was that it would be relatively easy to design a moderately strict, but pretty standard, description format for a synthetic chemistry lab notebook that could be automatically scraped into CMLReact.

Automatic conversions from lab book to machine readable XML

CMLReact files have (roughly) three sections. In the first, all the molecules that are relevant to the description are described, or, in the ideal semantic web world, pointed to in an external authority such as Chemspider, PubChem, or another source. In the second section the relationships between input materials, solvents, products, and samples are described. In general all of these will be molecules which are referred to in the first section, but this is not absolutely required (and this will be important later). The final section describes observables, procedures, yields, and other descriptions of what happened or what was measured.
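
For readers who like to see the shape of the thing, here is a rough sketch of that three-part structure built with Python’s standard library. The element and attribute names (moleculeList, reactantList, dictRef, and so on), the placeholder InChIKey, and the numbers are illustrative only and are not guaranteed to match the real CMLReact schema.

```python
# A rough sketch of the three-part shape described above, using only the
# standard library. Names, attributes, and values are illustrative -- they
# are NOT guaranteed to match the real CMLReact schema.
import xml.etree.ElementTree as ET

reaction = ET.Element("reaction", id="ugi-example")

# Section 1: the molecules, ideally pointed to in an external authority
molecules = ET.SubElement(reaction, "moleculeList")
ET.SubElement(molecules, "molecule", id="m1",
              inchikey="XXXXXXXXXXXXXX-XXXXXXXXXX-X")  # placeholder key

# Section 2: the roles the molecules play (reactant, solvent, product...)
reactants = ET.SubElement(reaction, "reactantList")
ET.SubElement(reactants, "reactant", ref="m1", amount="1.0", units="mmol")
products = ET.SubElement(reaction, "productList")
ET.SubElement(products, "product", ref="m2")  # need not appear in section 1

# Section 3: observables -- what actually happened or was measured
conditions = ET.SubElement(reaction, "conditionList")
ET.SubElement(conditions, "observable", dictRef="yield").text = "64%"
ET.SubElement(conditions, "observable", dictRef="temperature").text = "25 C"

print(ET.tostring(reaction, encoding="unicode"))
```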

If we take a look at the UsefulChem experiment that we converted to CMLReact you can see that most of this information is available in one form or another. The molecules are described via InChI/InChIKey at the bottom of the page. These could be used as they are to populate the molecules section. A little additional markup to distinguish between reactants, solvents, reagents, and products would make it possible to start populating the second section describing the relationships between these molecules.

The third section is the most tricky, and this will always be an 80:20 game. The object is to abstract as much information as can be reasonably garnered without putting in the vast amount of work required to get close to 100% retrieval. At the end of the day, if someone wants the real detail they can go back to the lab book. Peter has demonstrated text scraping tools that do a pretty good job of extracting a lot of this information. In combination with a bit of markup it is reasonable to expect that some basic information (amounts of reagents, yield, temperature of reaction, some descriptive terms) could be extracted automatically. Again, getting 80-90% of a subset of regularly used terms would be very powerful.
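
To give a flavour of the kind of pattern matching involved, the sketch below pulls amounts, temperature, time, and yield out of a free-text procedure sentence. The regular expressions and the example sentence are made up for illustration (this is not Peter’s tool), and a real scraper would need far more care over units and phrasing.

```python
# A minimal sketch of the kind of pattern matching involved; the patterns
# and the example sentence are invented and would need much refinement
# (and unit handling) in practice.
import re

procedure = ("The amine (2.5 mmol) and aldehyde (2.5 mmol) were stirred "
             "in methanol at 25 C for 24 h to give the product in 64% yield.")

patterns = {
    "amount":      r"(\d+(?:\.\d+)?)\s*(mmol|mol|mg|g|mL)",
    "temperature": r"(\d+(?:\.\d+)?)\s*(?:°C|C)\b",
    "time":        r"(\d+(?:\.\d+)?)\s*(h|min|days?)\b",
    "yield":       r"(\d+(?:\.\d+)?)\s*%\s*yield",
}

for term, pattern in patterns.items():
    print(term, re.findall(pattern, procedure))
```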

But what are we describing?

There is a problem with grabbing this descriptive information from the lab notebook however, and it is a problem that is very general and something I believe we need to grapple with urgently. There is a fundamental question as to what it is that this file is describing. Does it describe the plan of the experiment? The record of carrying out a specific example of this experiment? An ‘averaged’ description of a set of equivalent experiments? A general description of the reaction? Or a description of a model of what we expect or think is happening?

If you look closely at the current version of the CMLReact file you will see that the yield is expressed as a percentage with a standard deviation. This is actually describing the average of three independent reactions but that is not actually made explicit anywhere in this file. Is this important? Well I think it is because it has an effect on what any outward links back to the lab book mean. There is a significant difference between – ‘this link points to an example of this kind of reaction’ (which might in fact be significantly different in the details) and ‘this link points to this exact experiment’ or indeed ‘this link points to an index of relevant experimental results’. Those distinctions need to be encoded in the links, or perhaps more likely made explicit in the abstracted file.

The CMLReact file is an abstraction of the experimental record. It is therefore important to make it clear what the level of abstraction is and what has been abstracted out of that description. This relates to the distinction I have made before between the flexibility required to record an experiment versus the ability to use a more structured vocabulary to describe the experiment after it has happened. My impression is that people who work on developing these controlled vocabularies are focussed on description rather than recording and don’t often make the distinction between the two. There is also often a lack of distinction between describing an experiment and describing a model of what happened in that experiment. This is important because the model may need to be modified in the future whereas the description of the experiment should be accurate.

Summary

My view remains that when recording an experiment the system used should be as flexible as possible. Structure can be added to this primary record when convenient to make the process of abstracting from this primary record to a controlled vocabulary easier. The primary goal for me, for the moment, remains making a human readable record available. The process of converting the primary record into a controlled vocabulary, such as CMLReact, FuGE, or workflow system such as Taverna, should be enabled via domain specific automated or semi-automated tools that help the user to structure their description of the experiment in a way that makes it more directly useful to them but maintains the links with the primary record. Where the same controlled vocabulary is used for more abstracted descriptions of studies, experiments, or the models that purport to describe them, this distinction must be made clear.

Semantics depends absolutely on being clear about what you are describing. There is absolutely no point in having absolute clarity about the description of an object if the nature of that object is fuzzy. Get it right and we could have a very sophisticated description of the scientific record. Get it wrong and that description could be at best unclear and at worst downright misleading.

The Open Science Endurance Event – Team JC-C

So far we’ve had a fun week. Jean-Claude arrived in the UK on Thursday last, and followed up with a talk at Bath University to people at UKOLN on Friday. The talk kicked off an extended conversation which meant we were very late to lunch, but it was great to follow up on the issues from a different perspective. Jean-Claude will be making a screencast of the talk available on his Drexel-CoAs Podcast blog.

On Friday afternoon we headed into London in preparation for Science Blogging 2008, which was a blast. A very entertaining keynote by Ben Goldacre of Bad Science was followed up by a fascinating series of sessions on everything from Open Notebooks to connecting up conversations separated in time and space to creativity, blogging boredom, and unicycling giraffes. The sessions were great fun and there was lots of backchat on friendfeed, but in many ways the best part of this for me was the chance to meet people (if in many cases very briefly) that I knew well but had never actually met in person. Too many to mention them all, but in particular it was a pleasure to finally meet Michael Barton, Richard Grant, Heather Etchevers, Matt Wood, Graham Steel, and many people from NPG, as well as to meet many old friends. There is pretty good coverage of the meeting itself so I will simply point at the technorati tag and be done with that.

I was one of the people on the final panel with Peter Murray-Rust, Richard Grant, and moderation from Timo Hannay. This was a fun conversation with lots of different perspectives. I think the overall conclusion was that the idea that ‘blogging is bad for your career’ is shifting towards ‘why do you put all this work in?’ There was a strong sense that some people had made real personal gains out of blogging and online activities and that many organisations are starting to see it as a valuable contribution. Nonetheless it is not an activity that is widely valued, or indeed even known about. To this end the panel offered a challenge – to persuade a senior scientist to start writing a blog. One prize will be to be featured in next year’s Open Lab 2008 – the best of science writing on the web. The other prize – which caused an extensive collective intake of breath – will be an all-expenses-paid trip to Scifoo next year for both the blogger and the encourager. The announcement will probably be made with details by the time I get to post this.

In the pub on the Saturday night, Peter M-R grabbed me, JC, and Egon Willighagen and said ‘why don’t you come up to Cambridge tomorrow?’ So we all did, and Egon and I have written briefly about that already. More work to do, and some interesting things to discuss, which I hope to follow up later.

Sunday afternoon – dash back to Southampton for the introductory dinner for the Open Science Workshop held at Southampton Uni on Monday. This was a really great meeting from my perspective, with a real mix of tools people and ‘practicing’ scientists, computer scientists, chemists, biologists, and people with business degrees. There is more at the Wiki and on Friendfeed – but this will need a write-up of its own. Hopefully slides will be made available for most of the talks and we will point to them from both the Wiki and Friendfeed.

Tuesday, more meetings and planning, including a great meeting with Dave De Roure of Southampton Electronics and Computer Science, the PI for the MyExperiment project. Some good stuff will come out of this – and the contact between Dave and Chemspider has been made. The MyExperiment team are keen on delivering more for chemistry so that link will be important. However I was particularly taken with a throwaway comment Dave made that workflows (and makefiles) have a direct equivalence with spreadsheets. This made me think immediately of that great divide between ‘those who use Excel for everything’ and ‘those who run screaming in the other direction and would rather hard code in Perl on a clay tablet’ for analysis. If we could actually leverage the vast number of analytical spreadsheets sitting on a million hard drives we might be able to do some very interesting stuff. Hopefully more on this in a future post.

Wednesday we did some experiments – it’s mostly up online now so you can go and see if you are interested. And today we are heading up to London to see the folks at Nature Publishing Group, which should be fun. More opportunity to talk in detail about ideas from Saturday and the role of the publisher and papers in the future. We had a lovely lunch with the NPG web publishing people, the talks seemed to go reasonably well, and we had a quick chat with the people from Nature Chemistry.

But it doesn’t stop there. Tomorrow JC goes to Manchester to give a talk, then heads to Edinburgh for the e-Science All Hands Meeting, including a workshop on ‘The Global Data Centric View’. Then he heads to Oxford for another meeting  and talk before finally heading back to Philadelphia. On Sunday I fly to Toronto, with a Nature Network pub session on the Sunday evening (wow I am going to be on scintillating form for that!) followed by Science in the 21st Century for the following week.

I think we are going to need a rest when we get to our respective homes again…

Linking up open science online

I am currently sitting at the dining table of Peter Murray-Rust with Egon Willighagen opposite me talking to Jean-Claude Bradley. We are pulling together sets of data from Jean-Claude’s UsefulChem project into CML to make it more semantically rich and do a bunch of cool stuff. Jean-Claude has a recently published preprint on Nature Precedings of a paper that has been submitted to JoVE. Egon was able to grab the InChIKeys from the relevant UsefulChem pages and, by passing those to the CDK via a script that he wrote on the spot (which he has also just blogged), generated CMLReact for those molecules.
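
Egon’s actual script used the CDK and is described on his blog; purely to illustrate the scraping half of the idea, here is a rough Python sketch that pulls InChIKeys out of a page. The URL is a placeholder, and the regular expression assumes the now-standard 14-10-1 letter-block key format, which may differ slightly from the keys in use at the time.

```python
# Rough sketch of the scraping half only -- this is not Egon's script.
# The URL is a placeholder and the key pattern is an assumption.
import re
import urllib.request

PAGE_URL = "http://example.org/usefulchem/experiment-page"  # placeholder

INCHIKEY_PATTERN = re.compile(r"\b[A-Z]{14}-[A-Z]{10}-[A-Z]\b")

with urllib.request.urlopen(PAGE_URL) as response:
    html = response.read().decode("utf-8", errors="ignore")

inchikeys = sorted(set(INCHIKEY_PATTERN.findall(html)))
for key in inchikeys:
    print(key)  # each key could then be resolved against PubChem or ChemSpider
```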

Peter at the same time has cut and pasted an existing example of a CMLReact XML document into a GoogleDoc, which we will then modify to represent one example of the Ugi reactions that Jean-Claude reported in the Precedings paper. You will be able to see all of these documents. The only way we would be able to do this is with four laptops all online – and for all the relevant documents and services to be available from where we are sitting. I’ve never been involved in a hackfest like this before but actually the ability to handle different aspects of the same document via GoogleDocs is a very powerful way of handling multiple processes at the same time.

Can post publication peer review work? The PLoS ONE report card

This post is an opinion piece and not a rigorous objective analysis. It is fair to say that I am on the record as an advocate of the principles behind PLoS ONE and am also in favour of post publication peer review, and this should be read in that light. [ed I’ve also modified this slightly from the original version because I got myself mixed up in an Excel spreadsheet]

To me, anonymous peer review is, and always has been, broken. The central principle of the scientific method is that claims and the data to support those claims are placed, publicly, in the view of expert peers. They are examined, and re-examined on the basis of new data, considered and modified as necessary, and ultimately discarded in favour of an improved, or more sophisticated, model. The strength of this process is that it is open, allowing for extended discussion on the validity of claims, theories, models, and data. It is a bearpit, but one in which actions are expected to take place in public (or at least community) view. To have as the first hurdle to placing new science in the view of the community a process which is confidential, anonymous, arbitrary, and closed, is an anachronism.

It is, to be fair, an anachronism that was necessary to cope with rising volumes of scientific material in the years after the Second World War as the community increased radically in size. A limited number of referees was required to make the system manageable and anonymity was seen as necessary to protect the integrity of this limited number of referees. This was a good solution given the technology of the day. Today, it is neither a good system nor an efficient one, and we have in principle the ability to do peer review differently, more effectively, and more efficiently. However, thus far most of the evidence suggests that the scientific community doesn’t want to change. There is, reasonably enough, a general attitude that if it isn’t broken it doesn’t need fixing. Nonetheless there is a constant stream of suggestions, complaints, and experimental projects looking at alternatives.

The last 12-24 months have seen some radical experiments in peer review. Nature Publishing Group trialled an open peer review process. PLoS ONE proposed a qualitatively different form of peer review, rejecting the idea of ‘importance’ as a criterion for publication. Frontiers have developed a tiered approach where a paper is submitted into the ‘system’ and will gradually rise to its level of importance based on multiple rounds of community review. Nature Precedings has expanded the role and discipline boundaries of pre-print archives, and a white paper has been presented to EMBO Council suggesting that the majority of EMBO journals be scrapped in favour of retaining one flagship journal for which papers would be handpicked from a generic repository where authors would submit, along with referees’ reports and authors’ responses, on payment of a submission charge. Of all of these experiments, none could be said to be a runaway success so far, with the possible exception of PLoS ONE. PLoS ONE, as I have written before, succeeded precisely because it managed to reposition the definition of ‘peer review’. The community have accepted this definition, primarily because it is indexed in PubMed. It will be interesting to see how this develops.

PLoS has also been aiming to develop ratings and comment systems for their papers as a way of moving towards some element of post publication peer review. I, along with some others (see full disclosure below), have been granted access to the full set of comments and some analytical data on these comments and ratings. This should be seen in the context of Euan Adie’s discussion of commenting frequency and practice in BioMedCentral journals, which broadly speaking showed that around 2% of papers had comments and that these comments were mostly substantive and dealt with the science. How does PLoS ONE compare and what does this tell us about the merits or demerits of post publication peer review?

PLoS ONE has a range of commenting features, including a simple rating system (on a scale of 1-5), the ability to leave freetext notes, comments, and questions, and, in keeping with a general Web 2.0 feel, the ability to add trackbacks, a mechanism for linking up citations from blogs. Broadly speaking, a little more than 13% (380 of 2773) of all papers have ratings and around 23% have comments, notes, or replies to either (647 of 2773, not including any from PLoS ONE staff). Probably unsurprisingly, most papers that have ratings also have comments. There is a very weak positive correlation between the number of citations a paper has received (as determined from Google Scholar) and the number of comments (R^2 = 0.02, which is probably dominated by papers with both no citations and no comments, which are mostly recent; none of this is controlled for publication date).
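
Purely to show the mechanics, here is a toy calculation of the quoted fractions and an R² of the kind described. The headline counts (380, 647, 2773) are the figures quoted above; the per-paper citation and comment arrays are entirely made up, since the underlying dataset is not mine to release.

```python
# Toy illustration only -- NOT the real PLoS ONE dataset.
import numpy as np

total_papers = 2773
papers_with_ratings = 380   # figures quoted in the text
papers_with_comments = 647

print(f"rated:     {papers_with_ratings / total_papers:.1%}")
print(f"commented: {papers_with_comments / total_papers:.1%}")

# Made-up per-paper counts purely to show how the correlation is computed;
# many zero/zero pairs (recent papers) will dominate a weak R^2 like this.
citations = np.array([0, 0, 0, 1, 2, 0, 5, 0, 3, 1])
comments  = np.array([0, 1, 0, 0, 2, 0, 1, 0, 0, 3])
r = np.corrcoef(citations, comments)[0, 1]
print(f"R^2 = {r ** 2:.3f}")
```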

Overall this is consistent with what we’d expect. The majority of papers don’t have either comments or ratings but a significant minority do. What is slightly surprising is that where there is arguably a higher barrier to adding something (writing a text comment versus simply clicking a button to rate) there is actually more activity. This suggests to me that people are actively uncomfortable with rating papers versus leaving substantive comments. These numbers compare very favourably to those reported by Euan on comments in BioMedCentral but they are not yet moving into the realms of the majority. It should also be noted that there has been a consistent programme at PLoS ONE with the aim of increasing the involvement of the community. Broadly speaking I would say that the data we have suggest that that programme has been a success in raising involvement.

So are these numbers ‘good’? In reality I don’t know. They seem to be an improvement on the BMC numbers, arguing that as systems improve and evolve there is more involvement. However, one graph I received seems to indicate that there isn’t an increase in the frequency of comments within PLoS ONE over the past year or so, which one would hope to see. Has this been a radical revision of how peer review works? Well not yet, certainly; not until the vast majority of papers have ratings, but more importantly not until we have evidence that people are using those ratings. We are not yet in a position where we are about to see a stampede towards radically changed methods of peer review, and this is not surprising. Tradition changes slowly – we are still only just becoming used to the idea of the ‘paper’ being something that goes beyond a PDF, and embedding that within a wider culture of online rating, and the use of those ratings, will take some years yet.

So I have spent a number of posts recently discussing the details of how to make web services better for scientists. Have I got anything useful to offer to PLoS ONE? Well I think some of the criteria I suggested last week might be usefully considered. The problem with rating is that it lies outside the existing workflow for most people. I would guess that many users don’t even see the rating panel on the way into the paper. Why would people log into the system to look at a paper? What about making the rating implicit when people bookmark a paper in external services? Why not actually use that as the rating mechanism?

I emphasised the need for a service to be useful to the user before there are any ‘social effects’ present. What can be offered to make the process of rating a paper useful to the single user in isolation? I can’t really see why anyone would find this useful unless they are dealing with a huge number of papers and can’t remember which one is which from day to day. It may be useful within groups or journal clubs but all of these require a group to sign up. It seems to me that if we can’t frame it as a useful activity for a single person then it will be difficult to get the numbers required to make this work effectively on a community scale.

In that context, I think getting the numbers to around the 10-20% level for either comments or ratings has to be seen as an immense success. I think it shows how difficult it is to get scientists to change their workflows and adopt new services. I also think there will be a lot to learn about how to improve these tools and get more community involvement. I believe strongly that we need to develop better mechanisms for handling peer review and that it will be a very difficult process getting there. But the results will be seen in more efficient dissemination of information and more effective communication of the details of the scientific process. For this, PLoS and the PLoS ONE team, as well as other publishers, including BioMedCentral and Nature Publishing Group, that are working on developing new means of communication and improving the ones we have, deserve applause. They may not hit on the right answer first off, but the current process of exploring the options is an important one, and not without its risks for any organisation.

Full disclosure: I was approached along with a number of other bloggers to look at the data provided by PLoS ONE and to coordinate the release of blog posts discussing that data. At the time of writing I am not aware of who the other bloggers are, nor have I read what they have written. The data that was provided included a list of all PLoS ONE papers up until 30 July 2008, the number of citations, citeulike bookmarks, trackbacks, comments, and ratings for each paper. I also received a table of all comments and a timeline with number of comments per month. I have been asked not to release the raw data and will honour that request as it is not my data to release. If you would like to see the underlying data please get in contact with Bora Zivkovic.

How I got into open science – a tale of opportunism and serendipity

So Michael Nielsen, one morning at breakfast at Scifoo, asked one of those questions which never has a short answer: ‘So how did you get into this open science thing?’ And I realised that although I have told the story to many people I haven’t ever written it down. Perhaps this is a meme worth exploring more generally, but I thought others might be interested in my story, partly because it illustrates how funding drives scientists, and partly because it shows how the combination of opportunism and serendipity can make for successful bedfellows.

In late 2004 I was spending a lot of my time on the management of a large collaborative research project and had had a run of my own grant proposals rejected. I had a student interested in doing a PhD but no direct access to funds to support the consumables cost of the proposed project. Jeremy Frey had been on at me for a while to look at implementing the electronic lab notebook system that he had led the development of, and at the critical moment he pointed out to me a special call from the BBSRC for small projects to prototype, develop, or implement e-science technologies in the biological sciences. It was a light touch review process and a relatively short application. More to the point, it was a way of funding some consumables.

So the grant was written. I wrote the majority of it, which makes somewhat interesting reading in retrospect. I didn’t really know what I was talking about at the time (which seems to be a theme with my successful grants). The original plan was to use the existing, fully semantic, rdf backed electronic lab notebook and develop models for use in a standard biochemistry lab. We would then develop systems to enable a relational database to be extracted from the rdf representation and present this on the web.

The grant was successful but the start was delayed due to shenanigans over the studentship that was going to support the grant and the movement of some of the large project to another institution with one of the investigators. Partly due to the resulting mess I applied for the job I ultimately accepted at RAL and after some negotiation organised an 80:20 split between RAL and Southampton.

By the time we had a student in place and had got the grant started it was clear that the existing semantic ELN was not in a state that would enable us to implement new models for our experiments. However at this stage there was a blog system that had been developed in Jeremy’s group and it was thought it would be an interesting experiment to use this as a notebook. This would be almost the precise opposite of the rdf backed ELN. Looking back at it now I would describe it as taking the opportunity to look at a Web 2.0 approach to the notebook as compared to a Web 3.0 approach but bear in mind that at the time I had little or no idea of what these terms meant, let alone the care with which they need to be used.

The blog based system was great for me as it meant I could follow the student’s work online, and doing this I gradually became aware of blogs in general and the use of feed readers. The RSS feed of the LaBLog was a great help as it made following the details of experiments remotely straightforward. This was important as by now I was spending three or four days a week at RAL while the student was based in Southampton. As we started to use the blog, at first in a very naïve way, we found problems and issues which ultimately led to us thinking about and designing the organisational approach I have written about elsewhere [1, 2]. By this stage I had started to look at other services online and was playing around with OpenWetWare and a few other services, becoming vaguely aware of Creative Commons licenses and getting a grip on the Web 2.0 versus Web 3.0 debate.

To implement our newly designed approach to organising the LaBLog we decided the student would start afresh with a clean slate in a new blog. By this stage I was playing with using the blog for other things and had started to discover that there were issues that meant the ID authentication we were using didn’t always work through the RAL firewall. I ended up having complicated VPN setups, particularly working from home, where I couldn’t log on to the blog and have my email online at the same time. This, obviously, was a pain, and as we were moving to a new blog which could have new security settings I said, ‘stuff it, let’s just make it completely visible and be done with it’.

So there you go. The critical decision to move to an Open Notebook status was taken as the result of a firewall. So serendipity, or at least the effect of outside pressures, was what made it happen. I would like to say it was a carefully thought out philosophical decision but, although the fact that I was aware of the open access movement, Creative Commons, OpenWetWare, and others no doubt prepared the background that led me to think down that route, it was essentially the result of frustration.

So, so far, opportunism and serendipity, which brings us back to opportunism again, or at least seizing an opportunity. Having made the decision to ‘go open’ two things clicked in my mind. Firstly, the fact that this was rather radical. Secondly, the fact that all of these Web 2.0 tools combined with an open approach could lead to a marked improvement in the efficiency of collaborative science, a kind of ‘Science 2.0’ [yes, I know, don’t laugh, this would have been around March 2007]. Here was an opportunity to get my name on a really novel and revolutionary concept! A quick Google search revealed that, funnily enough, I wasn’t the first person to think of this (yes! I’d been scooped!), but more importantly it led to what I think ought to be three of the Standard Works of Open Science: Bill Hooker’s three part series on Open Science at 3 Quarks Daily [1, 2, 3], Jean-Claude Bradley’s presentation on Open Notebook Science at Nature Precedings (and the associated original blog post coining the term), and Deepak Singh’s talk on Open Science at Ignite Seattle. From there I was inspired to seize the opportunity, get a blog of my own, and get involved. The rest of my story, so far, is more or less available online here and via the usual sources.

Which leads me to ask: what got you involved in the ‘open’ movement? What, for you, were the ‘primary texts’ of open science and open research? There is a value in recording this, or at least our memories of it, for ourselves, to learn from our mistakes and perhaps discern the direction going forward. Perhaps it isn’t even too self-serving to think of it as history in the making. Or perhaps, more in line with our own aims as ‘open scientists’, we would be doing a poor job if we didn’t record what brought us to where we are and what is influencing our thinking going forward. I think the blogosphere does a pretty good job of the latter, but perhaps a little more recording of the former would be helpful.

How to make Connotea a killer app for scientists

So Ian Mulvaney asked, and as my solution did not fit into the margin I thought I would post here. Following on from the two rants of a few weeks back and many discussions at Scifoo I have been thinking about how scientists might be persuaded to make more use of social web based tools. What does it take to get enough people involved so that the network effects become apparent? I had a discussion with Jamie Heywood of Patients Like Me at Scifoo because I was interested as to why people with chronic diseases were willing to share detailed and very personal information in a forum that is essentially public. His response was that these people had an ongoing and extremely pressing need to optimise as far as is possible their treatment regime and lifestyle, and that by correlating their experiences with others they got to the required answers quicker. Essentially, successful management of their life required rapid access to high quality information sliced and diced in a way that made sense to them and was presented in as efficient and timely a manner as possible. Which obviously left me none the wiser as to why scientists don’t get it….

Nonetheless there are some clear themes that emerge from that conversation and others looking at uptake and use of web based tools. So here are my 5 thoughts. These are framed around the idea of reference management but the principles I think are sufficiently general to apply to most web services.

  1. Any tool must fit within my existing workflows. Once adopted I may be persuaded to modify or improve my workflow, but to be adopted it has to fit to start with. For citation management this means that it must have one-click filing (ideally from any place I might find an interesting paper) but should also monitor other means of marking papers, e.g. shared items from Google Reader, ‘liked’ items on Friendfeed, or scraped tags in del.icio.us (see the feed-watching sketch after this list).
  2. Any new tool must clearly outperform all the existing tools that it will replace in the relevant workflows without the requirement for network or social effects. It’s got to be absolutely clear on first use that I am going to want to use this instead of e.g. Endnote. That means I absolutely have to be able to format and manage references in a word processor or publication document. Technically a nightmare I am sure (you’ve got to worry about integration with Word, Open Office, GoogleDocs, TeX) but an absolute necessity to get widespread uptake. And this has to be absolutely clear the first time I use the system, before I have created any local social network and before you have a large enough user base for these to be effective.
  3. It must be near 100% reliable with near 100% uptime. Web services have a bad reputation for going down. People don’t trust their network connection and are much happier with local applications still. Don’t give them an excuse to go back to a local app because the service goes down. Addendum – make sure people can easily back up and download their stuff in a form that will be useful even if your service disappears. Obviously they’ll never need to, but it will make them feel better (and don’t scrimp on this, because they will check whether it works).
  4. Provide at least one (but not too many) really exciting new feature that makes people’s lives better. This is related to #2 but takes it a step further. Beyond just doing what I already do better, I need a quick fix of something new and exciting. My wishlist for Connotea is below.
  5. Prepopulate. Build in publicly available information before the users arrive. For a publications database this is easy and it is something that BioMedExperts got right. You have a pre-existing social network and pre-existing library information. Populate ‘ghost’ accounts with a library that includes people’s papers (it doesn’t matter if it’s not 100% accurate) and connections based on co-authorships. This will give people an idea of what the social aspect can bring and encourage them to bring more people on board.
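
As promised in point 1, here is a minimal sketch of the feed-watching idea: anything I share or ‘like’ elsewhere simply gets filed into my library without me changing how I work. The feed URL is a placeholder, the ‘library’ is just a local JSON file, and none of this is Connotea’s actual API; it assumes the third-party feedparser package.

```python
# Sketch of the feed-watching idea from point 1. The feed URL is a
# placeholder and the "library" is a local JSON file -- this is not
# Connotea's actual API.
import json
import feedparser  # pip install feedparser

SHARED_ITEMS_FEED = "http://example.org/my-shared-items.rss"  # placeholder
LIBRARY_FILE = "library.json"

def load_library(path):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def file_shared_items(feed_url, path):
    library = load_library(path)
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        if entry.link not in library:  # avoid filing the same item twice
            library[entry.link] = {"title": entry.title,
                                   "source": "shared-items-feed"}
    with open(path, "w") as f:
        json.dump(library, f, indent=2)
    return library

if __name__ == "__main__":
    print(f"{len(file_shared_items(SHARED_ITEMS_FEED, LIBRARY_FILE))} items filed")
```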

So that is so much motherhood and apple pie. And nothing that Ian didn’t already know (unlike some other developers who I shan’t mention). But what about those cool features? Again I would take a back-to-basics approach. What do I actually want?

Well, what I want is a service that will do three quite different things. I want it to hold a library of relevant references in a way I can search and use, and I want to use this to format and reference documents when I write them. I want it to help me manage the day to day process of dealing with the flood of literature that is coming in (real time search). And I want it to help me be more effective when I am researching a new area or trying to get to grips with something (offline search). Real time search I think is a big problem that isn’t going to be solved soon. The library and document writing aspects I think are a given and need to be the first priority. The third problem is the one that I think is amenable to some new thinking.

What I would really like to see here is a way of pivoting my view of the literature around a specific item. This might be a paper, a dataset, or a blog post. I want to be able to click once and see everything that item cites, click again and see everything that cites it. Pivot away from that to look at what GoPubmed thinks the paper is about, see what related material it has, and then pivot back and see how much those two sets have in common. What are the papers in this area that this review isn’t citing? Is there a set of authors this paper isn’t citing? Have they looked at all the datasets that they should have? Are there general news media items in this area, books on Amazon, books in my nearest library, books on my bookshelf? Are they any good? Have any of my trusted friends published or bookmarked items in this area? Do they use the same tags or different ones for this subject? What exactly is Neil Saunders doing looking at that gene? Can I map all of my friends’ tags onto a controlled vocabulary?

Essentially what I am asking for is to be able to traverse the graph of how all these things are interconnected. Most of these connections are already explicit somewhere, but nowhere are they all brought together in a way that lets the user slice and dice them the way they want. My belief is that if you can start to understand how people use that graph effectively to find what they want, then you can start to automate the process, and that that will be the route towards real time search that actually works.
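
To make the ‘pivot’ idea concrete, here is a toy sketch over a hand-built graph. In practice the edges would be assembled from citation databases, bookmarking services, and tag feeds; every item, tag, and edge below is invented purely for illustration.

```python
# Toy sketch of the 'pivot' idea on a hand-built graph; all items, tags,
# and edges are invented purely for illustration.
ITEMS = {
    "paper-A":     {"cites": ["paper-B", "dataset-X"], "tags": ["ugi", "synthesis"]},
    "paper-B":     {"cites": [], "tags": ["ugi"]},
    "dataset-X":   {"cites": [], "tags": ["crystallography"]},
    "blog-post-1": {"cites": ["paper-A"], "tags": ["ugi", "open-science"]},
}

def cited_by(item):
    """Everything in the graph that cites this item (the reverse edges)."""
    return [name for name, data in ITEMS.items() if item in data["cites"]]

def pivot(item):
    """One 'click': what this item cites, what cites it, and what shares tags."""
    data = ITEMS[item]
    related_by_tag = [name for name, other in ITEMS.items()
                      if name != item and set(other["tags"]) & set(data["tags"])]
    return {"cites": data["cites"],
            "cited_by": cited_by(item),
            "related_by_tag": related_by_tag}

print(pivot("paper-A"))
```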

…but you’ll struggle with uptake…

The problem of academic credit and the value of diversity in the research community

This is the second in a series of posts (first one here) in which I am trying to process and collect ideas that came out of Scifoo. This post arises out of a discussion I had with Michael Eisen (UC Berkeley) and Sean Eddy (HHMI Janelia Farm) at lunch on the Saturday. We had drifted from a discussion of the problem of attribution stacking and citing datasets (and datasets made up of datasets) into the problem of academic credit. I had trotted out the usual spiel about the need for giving credit for datasets and for tool development.

Michael made two interesting points. The first was that he felt people got too much credit for datasets already and that making them more widely citeable would actually devalue the contribution. The example he cited was genome sequences. This is a case where, for historical reasons, the publication of a dataset as a paper in a high ranking journal is considered appropriate.

In a sense I agree with this case. The problem here is that for this specific case it is allowable to push a dataset-sized peg into a paper-sized hole. This has arguably led to an overvaluing of the sequence data itself and an undervaluing of the science it enables. Small molecule crystallography is similar in some regards, with the publication of crystal structures in paper form bulking out the publication lists of many scientists. There is a real sense in which having a publication stream for data, making the data itself directly citeable, would lead to a devaluation of these contributions. On the other hand it would lead to a situation where you would cite what you used, rather than the paper in which it was, perhaps peripherally, described. I think more broadly that the publication of data will lead to greater efficiency in research generally and more diversity in the streams to which people can contribute.

Michael’s comment on tool development was more telling though. As people at the bottom of the research tree (and I count myself amongst this group) it is easy to say ‘if only I got credit for developing this tool’, or ‘I ought to get more credit for writing my blog’, or any one of a thousand other things we feel ‘ought to count’. The problem is that there is no such thing as ‘credit’. Hiring decisions and promotion decisions are made on the basis of perceived need. And the primary needs of any academic department are income and prestige. If we believe that people who develop tools should be more highly valued then there is little point in giving them ‘credit’ unless that ‘credit’ will be taken seriously in hiring decisions. We have this almost precisely backwards. If a department wanted tool developers then it would say so, and would look at CVs for evidence of this kind of work. If we believe that tool developers should get more support then we should be saying that at a higher, strategic level, not just trying to get it added as a standard section in academic CVs.

More widely there is a question as to why we might think that blogs, or public lectures, or code development, or more open sharing of protocols are something for which people should be given credit. There is often a case to be made for the contribution of a specific person in a non-traditional medium, but that doesn’t mean that every blog written by a scientist is a valuable contribution. In my view it isn’t the medium that is important, but the diversity of media and the concomitant diversity of contributions that they enable. In arguing for these contributions being significant, what we are actually arguing for is diversity in the academic community.

So is diversity a good thing? The tightening and concentration of funding has, in my view, led to a decrease in diversity, both geographical and social, in the academy. In particular there is a tendency towards large groups clustered together in major institutions, generally led by very smart people. There is a strong argument that these groups can be more productive, more effective, and crucially offer better value for money. Scifoo is a place where those of us who are less successful come face to face with the fact that there are many people a lot smarter than us and that these people are probably more successful for a reason. And you have to question whether your own small contribution with a small research group is worth the taxpayer’s money. In my view this is something you should question anyway as an academic researcher – there is far too much comfortable complacency and sense of entitlement, but that’s a story for another post.

So the question is: do I make a valid contribution? And does that provide value for money? And again, for me, Scifoo provides something of an answer. I don’t think I spoke to any person over the weekend without at least giving them something new to think about, a slightly different view on a situation, or just an introduction to something that they hadn’t heard of before. These contributions were in very narrow areas, ones small enough for me to be expert in, but my background and experience provided a different view. What does this mean for me? Probably that I should focus more on what makes my background and experience unique – that I should build out from that in the directions most likely to provide a complementary view.

But what does it mean more generally? I think it means that a diverse set of experiences, contributions, and abilities will improve the quality of the research effort. At one session of Scifoo, on how to support ground-breaking science, I made the tongue-in-cheek comment that I thought we needed more incremental science, more filling in of tables, more laying of the foundations properly. The more I think about this the more I think it is important. If we don’t have proper foundations, filled out with good data and thought through in detail, then there are real risks in building new skyscrapers. Diversity adds reinforcement by providing better tools, better datasets, and different views from which to examine the current state of opinion and knowledge. There is an obvious tension between delivering radical new technologies and knowledge and the incremental process of filling in, backing up, and checking over the details. But too often the discussion is purely about how to achieve the first, with no attention given to the importance of the second. This is about balance, not absolutes.

So to come back around to the original point, the value of different forms of contribution is not due to the fact that they are non-traditional or because of the medium per se; it is because they are different. If we value diversity at hiring committees, and I think we should, then by looking at a diverse set of contributions, and at the contribution a given person is likely to make in the future based on their CV, we can assess more effectively how they will differ from the people we already have. The tendency of ‘the academy’ to hire people in its own image is well established. No monoculture can ever be healthy, certainly not in a rapidly changing environment. So diversity is something we should value for its own sake, something we should try to encourage, and something that we should search CVs for evidence of. Then the credit for these activities will flow of its own accord.

Re-inventing the wheel (again) – what the open science movement can learn from the history of the PDB

One of the many great pleasures of SciFoo was to meet with people who had a different, and in many cases much more comprehensive, view of managing data and making it available. One of the long term champions of data availability is Professor Helen Berman, the head of the Protein Data Bank (the international repository for biomacromolecular structures), and I had the opportunity to speak with her for some time on the Friday afternoon before Scifoo kicked off in earnest (in fact this was one of many somewhat embarrassing situations where I would carefully explain my background in my very best ‘speaking to non-experts’ voice only to find they knew far more about it than I did – however, Jim Hardy of Gahaga Biosciences takes the gold medal for this event, for turning to the guy called Larry next to him while having dinner at Google Headquarters and asking what line of work he was in).

I have written before about how the world might look if the PDB and other biological databases had never existed, but as I said then I didn’t know much of the real history. One of the things I hadn’t realised was how long it was after the PDB was founded before deposition of structures became expected for all atomic resolution biomacromolecular structures. The road from a repository of seven structures with a handful of new submissions a year to the standards that mean today that any structure published in a reputable journal must be deposited was a long and rocky one. The requirement to deposit structures on publication only became general in the early 1990s, nearly twenty years after the PDB was founded, and there was a long, extended process during which the case for making the data available was only gradually accepted by the community.

Helen made the point strongly that it had taken 37 years to get the PDB to where it is today: a gold standard international and publicly available repository of a specific form of research data, supported by a strong set of community accepted, and enforced, rules and conventions. We don’t want to take another 37 years to achieve the widespread adoption of high standards in data availability and open practice in research more generally. So it is imperative that we learn the lessons and benefit from the experience of those who built up the existing repositories. We need to understand where things went wrong and attempt to avoid repeating mistakes. We need to understand what worked well and use this to our advantage. We also need to recognise where the technological, social, and political environment that we find ourselves in today means that things have changed, and perhaps to recognise that in many ways, particularly in the way people behave, things haven’t changed at all.

I’ve written this in a hurry and therefore not searched as thoroughly as I might, but I was unable to find any obvious ‘history of the PDB’ online. I imagine there must be some out there – but they are not immediately accessible. The Open Science movement could benefit from such documents being made available – indeed we could benefit from making them required reading. While at Scifoo Michael Nielsen suggested the idea of a panel of the great and the good – those who would back the principles of data availability, open access publication, and the free transfer of materials. Such a panel would be great from the perspective of publicity, but as an advisory group it could be even more valuable by providing the opportunity to draw on the experience many of these people have in actually doing what we talk about.