The growth of linked up data in chemistry – and good community projects

It’s been an interesting week or so in the Chemistry online world. Following on from my musings about data services and the preparation I was doing for a talk the week before last, I asked Tony Williams whether it was possible to embed spectra from ChemSpider on a generic web page in the same way that you would embed a YouTube video, Flickr picture, or Slideshare presentation. The idea is that if there are services out on the cloud that make it easier to put some rich material in your own online presence by hosting it somewhere that understands about your data type, then we have a chance of pulling all of these disparate online presences together.

Tony went on to release two features. The first enables you to embed a molecule, which Jean-Claude has demonstrated over on the ONS Challenge Wiki. Essentially, by cutting and pasting a little bit of text from ChemSpider into Wikispaces you get a nicely drawn image of the molecule, and the machinery is in place to enable good machine readability of the displayed page (by embedding chemical identifiers within the code) as well as enabling the aggregation of web-based information about the molecule back at ChemSpider.
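To make the machine-readability point concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of fragment such an embed might paste into a page. The attribute names, the image URL, and the exact form of the real ChemSpider embed code are my assumptions, not the actual service output; the point is simply that the pasted snippet can carry an identifier like an InChI alongside the picture.

```python
# Illustrative only: the real ChemSpider embed code will differ.
# The idea is that the cut-and-paste fragment carries machine-readable
# identifiers (here an InChI) alongside the human-visible structure image.

def molecule_embed(csid: int, inchi: str, image_url: str) -> str:
    """Build a hypothetical embed fragment for a molecule."""
    return (
        f'<div class="molecule" data-chemspider-id="{csid}" data-inchi="{inchi}">\n'
        f'  <img src="{image_url}" alt="Structure for ChemSpider record {csid}"/>\n'
        f'</div>'
    )

# Benzene as an example (image URL is a made-up placeholder).
print(molecule_embed(
    236,
    "InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H",
    "https://images.example.org/236.png",
))
```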

The second feature was the one I had asked about, the embedding of spectra. Again this is really useful because it means that as an experimentalist you can host spectra on a service that gets what they are, but you can also incorporate them in a nice way back into your lab book, online report, or whatever it is you are doing. This has already enabled Andy Lang and Jean-Claude to build a very cool game, initially in Second Life but now also on the web. Using the spectral and chemical information from ChemSpider the player is presented with the spectrum and three molecules; if they select the correct molecule they get some points, and if they get it wrong they lose some. As Tony has pointed out, this is also a way of crowdsourcing the curation process – if the majority of people disagree with the “correct” assignment then maybe the spectrum needs a second look. Chemistry Captchas anyone?

The other event this week has been the efforts by Mitch over at the Chemistry Blog to set up an online resource for named reactions by crowdsourcing contributions and ultimately turning it into a book. Mitch deserves plaudits for this because he’s gone on and done something rather than just talked about it, and we need more people like that. Some of us have criticised the details of how he is going about it (also see comments at the original post), but from my perspective this is definitely criticism motivated by the hope that he will succeed, and that by making some changes early on there is the chance to get much more out of the contributions that he gets.

In particular Egon asked whether it would be better to use Wikipedia as the platform for aggregating the named reactions, a point with which I agree. The problem that people see with Wikipedia is largely one of image. People are concerned about inaccurate editing, and about the sometimes combative approach of senior editors who are not necessarily expert in the area. Part of the answer is to just get in there and do it – particularly in chemistry there are a range of people working hard to try and get stuff cleaned up. Lots of work has gone into the chemical boxes, and named reactions would be an obvious thing to move on to. Nonetheless it may not work for some people, and to a certain extent, as long as the material that is generated can be aggregated back to Wikipedia, I’m not really fussed.

The bigger concern for us “chemoinformatics jocks” (I can’t help but feel that categorising me as a foo-informatics anything is a little off beam but never mind (-;) was the example pages Mitch put up where there was very little linking back of data to other resources. So there was no way, for instance, to know that this page was even about a specific class of chemicals. The schemes were shown as plain images, making it very hard for any information aggregation service to do anything useful. Essentially the pages didn’t make full use of the power of the web to connect information.

Mitch in turn has taken the criticism offered in a positive fashion and has thrown down the gauntlet, effectively asking, “well, if you want this marked up, where are the tools to make it easy, and the instructions in plain English to show how to do it?” He also asks, if named reactions aren’t the best place to start, then what would be a good collaborative project? Fair questions, and I would hope the ChemSpider services start to point in the right direction. Instead of drawing an image of a molecule and pasting it on a web page, use the service and connect to the molecule itself; this connects the data up, and gives you a nice picture for free. It’s not perfect. The ideal situation would be a little chemical drawing palette: you draw the molecule you want, it goes to the data service of choice, finds the correct molecule (meaning the user doesn’t need to know what SMILES, InChIs, or whatever are), and then brings back whatever you want – image, data, vendors, price. This would be a really powerful demonstration of the power of linked data, and it probably could be pulled together from existing services.
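As a rough illustration of that round trip, here is a sketch of how a client might hand a drawn structure to a resolver and get back something embeddable. The resolver URL, endpoint, and parameters below are placeholders I have invented for the example, not a real service API.

```python
# Sketch of the "drawing palette to data service" round trip.
# The resolver endpoint and its parameters are hypothetical placeholders.
import json
import urllib.parse
import urllib.request


def resolve_structure(molfile: str, base: str = "https://resolver.example.org"):
    """Send a drawn structure (as a molfile) to a hypothetical resolver
    and return whatever it knows: identifiers, an image URL, vendor links."""
    query = urllib.parse.urlencode({"structure": molfile, "format": "json"})
    with urllib.request.urlopen(f"{base}/resolve?{query}") as response:
        return json.loads(response.read().decode("utf-8"))


# The user only draws; the service worries about SMILES/InChI and hands
# back an image, data, and supplier information ready to embed.
```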

But what about the second question? What would be a good project? Well, this is where that second ChemSpider functionality comes in. What about flooding ChemSpider with high quality electronic copies of NMR spectra? Not the molecules that your supervisor will kill you for releasing, but all those molecules you’ve published, all those hiding in undergraduate chemistry practicals. Grab the electronic files, let’s find a way of converting them all to JCAMP online, and get ’em up on ChemSpider as Open Data.
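To be clear about how little machinery this would need, here is a sketch of the batch job I have in mind. Both functions are placeholders: the converter stands in for whatever tool does the instrument-format-to-JCAMP conversion, and the publish step stands in for ChemSpider’s actual deposition route, which isn’t shown here.

```python
# Sketch only: both steps below are placeholders for real tools/services.
from pathlib import Path


def convert_to_jcamp(raw_file: Path) -> Path:
    """Placeholder: convert an instrument data file to JCAMP-DX."""
    # A real implementation would call out to a conversion tool here.
    return raw_file.with_suffix(".jdx")


def publish_spectrum(jcamp_file: Path) -> None:
    """Placeholder: deposit the file somewhere open, clearly marked as Open Data."""
    print(f"would deposit {jcamp_file} as Open Data")


for raw in sorted(Path("practical_spectra").glob("*.fid")):
    publish_spectrum(convert_to_jcamp(raw))
```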

What are the services we really want for recording science online?

There is an interesting meta-discussion going on in a variety of places at the moment which touches very strongly on my post and talk (slides, screencast) from last week about “web native” lab notebooks. Over at Depth First, Rich Apodaca has a post with the following little gem of a soundbite:

Could it be that both Open Access and Electronic Laboratory Notebooks are examples of telephone-like capabilities being used to make a better telegraph?

Web-Centric Science offers a set of features orthogonal to those of paper-centric science. Creating the new system in the image of the old one misses the point, and the opportunity, entirely.

Meanwhile a discussion on The Realm of Organic Synthesis blog was sparked off by a post about the need for a Wikipedia-inspired chemistry resource (thanks again to Rich and Tony Williams for pointing out the post and discussion respectively). The initial idea here was something along the lines of:

I envision a hybrid of Doug Taber’s Organic Chemistry Portal, Wikipedia and a condensed version of SciFinder.  I’ll gladly contribute!  How do we get the ball rolling?

This in turn has led to a discussion of whether ChemSpider and ChemMantis partly fill this role already. The key point being made here is the problem of actually finding and aggregating the relevant information. Tony Williams makes the point in the comments that ChemSpider is not about being a central repository in the way that J proposes in the original TROS blog post, but that if there are resources out there they can be aggregated into ChemSpider. There are two problems here: capturing the knowledge in one “place” and then aggregating it.

Finally there is an ongoing discussion in the margins of a post at RealClimate. The argument here is over the level of “working” that should be shown when doing analyses of data, something very close to my heart. In this case both the data and the MATLAB routines used to process the data have been made available. What I believe is missing is the detailed record, or log, of how those routines were used to process the data. The argument rages over the value of providing this, the amount of work involved, and whether it could actually have a chilling effect on people doing independent validation of the results. In this case there is also the political issue of providing more material for antagonistic climate change skeptics to pore over and look for minor mistakes that they will then trumpet as holes. It will come as no surprise that I think the benefits of making the log available outweigh the problems, but that we need the tools to do it. This is beautifully summed up in one comment by Tobias Martin at number 64:

So there are really two main questions: if this [making the full record available – CN] is hard work for the scientist, for heaven’s sake, why is it hard work? (And the corollary: how are you confident that your results are correct?)

And the answer – it is hard work because we still think in terms of a paper notebook paradigm, which isn’t well matched to the data analysis being done within MATLAB. When people actually do data analysis using computational systems they very rarely keep a systematic log of the process. It is actually a rather difficult thing to do – even though in principle the system could (and in some cases does) keep that entire record for you.
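For what it’s worth, the record itself doesn’t have to be heavyweight. Here is a minimal sketch (in Python rather than MATLAB, and with invented names) of the sort of automatic, append-only log of processing steps that an analysis environment could keep for you.

```python
# A minimal sketch of an automatic analysis log: every processing step is
# recorded with its name, arguments, and a timestamp. Names are illustrative.
import functools
import json
import time

LOG_PATH = "analysis_log.jsonl"


def logged_step(func):
    """Wrap a processing routine so that each call is appended to a log."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        entry = {
            "step": func.__name__,
            "args": [repr(a) for a in args],
            "kwargs": {k: repr(v) for k, v in kwargs.items()},
            "time": time.strftime("%Y-%m-%dT%H:%M:%S"),
        }
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return func(*args, **kwargs)
    return wrapper


@logged_step
def smooth(series, window=5):
    """Stand-in for a real data-processing routine."""
    return series
```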

My point is that if we completely re-imagine the shape and functionality of the laboratory record, in the way Rich and I, and others, have suggested; if the tools are built to capture what happens and then provide that to the outside world in a useful form (when the researcher chooses), then not only will this record exist but it will provide the detailed “clickstream” records that Richard Akerman refers to in answer to a Twitter proposal from Michael Barton:

Michael Barton: Website idea: Rank scientific articles by relevance to your research; get uptakes on them via citations and pubmed “related articles”

Richard Akerman: This is a problem of data, not of technology. Amazon has millions of people with a clear clickstream through a website. We’ve got people with PDFs on their desktops.

Exchange “data files” and “MATLAB scripts” for PDFs and you have a statement of the same problem that the guys at RealClimate face. Yes, it is there somewhere, but it is a pain in the backside to get it out and give it to someone.

If that “clickstream”, “file use stream”, and “relationship stream” were automatically captured then we would get closer to the thing that I think many of us are yearning for (and have been for some time): the Amazon recommendation tool for science.

Licenses and protocols for Open Science – the debate continues

This is an important discussion that has been going on in disparate places, but primarily at the moment is on the Open Science mailing list maintained by the OKF (see here for an archive of the relevant thread). To try and keep things together and because Yishay Mor asked, I thought I would try to summarize the current state of the debate.

The key aim here is to find a form of practice that will enhance data availability, and protect it into the future.

There is general agreement that there is a need for some sort of declaration associated with making data available. Clarity is important and the minimum here would be a clear statement of intention. Where there is disagreement is over what form this should take. Rufus Pollock started by giving the reasons why this should be a formal license. Rufus believes that a license provides certainty and clarity in a way that a protocol, statement of principles, or expression of community standards cannot. I, along with Bill Hooker and John Wilbanks [links are to posts on mailing list], expressed a concern that the use of legal language, and the notion of “ownership” of this by lawyers rather than scientists, would have profoundly negative results. Andy Powell points out that this did not seem to occur either in the Open Source movement or with much of the open content community. But I believe he also hits the nail on the head with the possible reason:

I suppose the difference is that software space was already burdened with heavily protective licences and that the introduction of open licences was perceived as a step in the right direction, at least by those who like that kind of thing.         

Scientific data has a history of being assumed to be in the public domain (see the lack of any license at the PDB or GenBank or most other databases), so there isn’t the same sense of pushing back against an existing strong IP or licensing regime. However I think there is broad agreement that this protocol or statement would look a lot like a license and would aim to have the legal effect of at least providing clarity over the rights of users to copy, re-purpose, and fork the objects in question.

Michael Nielsen and John Wilbanks have expressed a concern about the potential for license proliferation and incompatibility. Michael cites the example of the Apache, Mozilla, and GPL2 licenses. This feeds into the issue of the acceptability, or desirability, of share-alike provisions, which is an area of significant division. Heather Morrison raises the issue of dealing with commercial entities who may take data and use technical means to effectively take it out of the public domain, citing the takeover of OAIster by OCLC as a potential example.

This is a real area of contention I think because some of us (including me) would see this in quite a positive light (data being used effectively in a commercial setting is better than it not being used at all) as long as the data is still both legally and technically in the public domain. Indeed this is at the core of the power of a public domain declaration. The issue of finding the resources that support the preservation of research objects in the (accessible) public domain is a separate one but in my view if we don’t embrace the idea that money can and should be made off data placed in the public domain then we are going to be in big trouble sooner or later because the money will simply run out.

On the flip side of the argument is a strong tradition of arguing that viral licensing and share-alike provisions protect the rights and personal investment of individuals and small players against larger commercial entities. Many of the people who support open data belong to this tradition, often for very good historical reasons. I personally don’t disagree with the argument on a logical level, but I think for scientific data we need to provide clear paths for commercial exploitation, because using science to do useful things costs a lot of money. If you want people to invest in using the outputs of publicly funded research you need to provide them with the certainty that they can legitimately use that data within their current business practice. I think it is also clear that those of us who take this line need to come up with a clear and convincing way of expressing this argument, because it is at the centre of the objection to “protection” via licenses and share-alike provisions.

Finally Yishay brings us back to the main point. Something to keep focussed on:

I may be off the mark, but I would argue that there’s a general principle to consider here. I hold that any data collected by public money should be made freely available to the public, for any use that contributes to the public good. Strikes me as a no-brainer, but of course – we have a long way to go. If we accept this principle, the licensing follows.         

Obviously I don’t agree with the last sentence – I would say that dedication to the public domain follows – but the principle I think is something we can agree that we are aiming for.

Best practice for data availability – the debate starts…well over there really

The issue of licensing arrangements and best practice for making data available has been brewing for some time but has just recently come to a head. John Wilbanks and Science Commons have a reasonably well established line that they have been developing for some time. Michael Nielsen has a recent blog post and Rufus Pollock, of the Open Knowledge Foundation, has also just synthesised his thoughts in response into a blog essay. I highly recommend reading John’s article on licensing at Nature Precedings, Michael’s blog post, and Rufus’ essay before proceeding. Another important document is the discussion of the license that Victoria Stodden is working to develop. Actually if you’ve read them go and read them again anyway – it will refresh the argument.

To crudely summarize, Rufus makes a cogent argument for the use of explicit licenses applied to collections of data, and feels that share-alike provisions, in licenses or otherwise, do not cause major problems and that the benefit that arises from enforcing re-use outweighs any problems they do cause. John’s position is that it is far better for standards to be applied through social pressure (“community norms”) rather than licensing arrangements. He also believes that share-alike provisions are bad because they break interoperability between different types of objects and domains. One point that I think is very important, and (I think) is a point of agreement, is that some form of license or dedication to the public domain will be crucial to developing best practice. Even if the final outcome of the debate is that everything will go in the public domain, it should be part of best practice to make that explicit.

Broadly speaking I belong to John’s camp but I don’t want to argue that case with this post. What is important in my view is that the debate takes place and that we are clear about what the aims of that debate are. What is it we are trying to achieve in the process of coming to (hopefully) some consensus on what best practice should look like?
It is important to remember that anyone can assert a license (or lack thereof) on any object that they (assert they) own or have rights over. We will never be able to impose a specific protocol on all researchers, all funders. Therefore what we are looking for is not the perfect arrangement but a balance between what is desired, what can be practically achieved, and what is politically feasible. We do need a coherent consensus view that can be presented to research communities and research funders. That is why the debate is important. We also need something that works, and is extensible into the future, where it will stand up to the development of new types of research, new types of data, new ways of making that data available, and perhaps new types of researchers altogether.

I think we agree that the minimal aim is to enable, encourage, and protect into the future the ability to re-use and re-purpose the publicly published products of publicly funded research. Arguments about personal or commercial work are much harder and much more subtle. Restricting the argument to publicly funded researchers makes it possible to open a discussion with a defined number of funders who have a public service and public engagement agenda. It also makes the moral arguments much clearer.

In focussing on research that is being made public we short-circuit the contentious issue of timing. The right, or the responsibility, to commercially exploit research outputs, and the limitations this can place on data availability, is a complex and difficult area and one in which agreement is unlikely any time soon. I would also avoid the word “Open”. This is becoming a badly overloaded term with both political and emotional overtones, positive and negative. Focussing on what should happen after the decision has been made to go public reduces the argument to “what is best practice for making research outputs available”. The question of when to make them available can then be kept separate. The key question for the current debate is not when but how.

So what I believe the debate should be about is the establishment, if possible, of a consensus protocol or standard or license for enabling and ensuring the availability of the research outputs associated with publicly published, publicly funded research. Alongside this is the question of establishing mechanisms for researchers to implement, and be supported in observing, these standards, as well as mechanisms for “enforcement”. These might be trademarks, community standards, or legal or contractual approaches, as well as systems and software to make all of this work, including trackbacks, citation aggregators, and effective data repositories. In addition we need to consider the public relations issue of selling such standards to disparate research funders and research communities.

Perhaps a good starting point would be to pinpoint the issues where there is general agreement and map around those. If we agree on some central principles then we can take an empirical approach to the mechanisms. We’re scientists, after all, aren’t we?

Euan Adie asks for help characterising PLoS comments

Euan Adie has asked for some help to do further analysis of the comments made on PLoS ONE articles. He is doing this via crowdsourcing, through a specially written app at appspot, to get people to characterize all the comments in PLoS ONE. Euan is very good at putting these kinds of things together, and again this shows the power of FriendFeed as a way of getting the message out: dividing the job up into bite-sized chunks so people can help even with a little bit of time, providing the right tools, and getting them into the hands of people who care enough to dedicate that time. If anything counts as Science 2.0 then this must be pretty close.

Third party data repositories – can we/should we trust them?

This is a case of a comment that got so long (and so late) that it probably merited its own post. David Crotty and Paul (Ling-Fung Tang) note some important caveats in comments on my last post about the idea of the “web native” lab notebook. I probably went a bit strong in that post with the idea of pushing content onto outside specialist services in my effort to explain the logic of the lab notebook as a feed. David notes an important point about any third party service (do read the whole comment at the post):

Wouldn’t such an approach either:
1) require a lab to make a heavy investment in online infrastructure and support personnel, or
2) rely very heavily on outside service providers for access and retention of one’s own data? […]

Any system that is going to see mass acceptance is going to have to give the user a great deal of control, and also provide complete and redundant levels of back-up of all content. If you’ve got data scattered all over a variety of services, and one goes down or out of business, does that mean having to revise all of those other services when/if the files are recovered?

This is a very wide problem that I’ve also seen in the context of the UK web community that supports higher education (see for example Brian Kelly‘s risk assessment for use of third party web services). Is it smart, or even safe, to use third party services? The general question divides into two sections: is the service more or less reliable than your own hard drive or locally provided server capacity (technical reliability, or uptime); and what is the long term likelihood of the service remaining viable (business/social model reliability)? Flickr probably has higher availability than your local institutional IT services, but there is no guarantee that it will still be there tomorrow. This is why data portability is very important. If you can’t get your data out, don’t put it in there in the first place.

In the context of my previous post these data services could be local, they could be provided by the local institution, or by a local funder, or they could even be a hard disk in the lab. People are free to make those choices and to find the best balance of reliability, cost, and maintenance that suits them. My suspicion is that after a degree of consolidation we will start to see institutions offering local data repositories as well as specialised services on the cloud that can provide more specialised and exciting functionality. Ideally these could all talk to each other so that multiple copies are held in these various services.

David says:

I would worry about putting something as valuable as my own data into the “cloud” […]

I’d rather rely on an internally controlled system and not have to worry about the business model of Flickr or whether Google was going to pull the plug on a tool I regularly use. Perhaps the level to think on is that of a university, or company–could you set up a system for all labs within an institution that’s controlled (and heavily backed up) by that institution? Preferably something standardized to allow interaction between institutions.

Then again, given the experiences I’ve had with university IT departments, this might not be such a good approach after all.

Which I think encapsulates a lot of the debate. I actually have greater faith in Flickr keeping my pictures safe than in my own hard disk. And more faith in both than in institutional repository systems that don’t currently provide good data functionality and that I don’t understand. But I wouldn’t trust any of them in isolation. The best situation is to have everything everywhere, using interchange standards to keep copies in different places: specialised services out on the cloud to provide functionality (not every institution will want to provide a visualisation service for XAFS data), IRs providing backup, archival, and server space for anything that doesn’t fit elsewhere, and ultimately still probably local hard disks for a lot of the short to medium term storage. My view is that the institution has the responsibility of aggregating, making available, and archiving the work of its staff, but I personally see this role as more harvester than service provider.

All of which will turn on the question of business models. If the data stores are local, what is the business model for archival? If they are institutional, how much faith do you have that the institution won’t close them down? And if they are commercial or non-profit third parties, or even directly government funded services, do the economics make sense in the long term? We need a shift in science funding if we want to archive and manage data in the longer term, and with any market some services will rise and some will die. The money has to come from somewhere, and ultimately that will always be the research funders. Until there is a stronger call from them for data preservation, and the resources to back it up, I don’t think we will see much interesting development. Some funders are pushing fairly hard in this direction so it will be interesting to see what develops. A lot will turn on who has the responsibility for ensuring data availability and sharing. The researcher? The institution? The funder?

In the end you get what you pay for. Always worth remembering that sometimes even things that are free at point of use aren’t worth the price you pay for them.

The integrated lab record – or the web native lab notebook

At Science Online 09 and at the Smi Electronic Laboratory Notebook meeting in London later in January I talked about how laboratory notebooks might evolve. At Science Online 09 the session was about Open Notebook Science, and here I wanted to take the idea of what a “web native” lab record could look like and show that if you go down this road you will get the most out of it if you are open. At the ELN meeting, which was aimed mainly at traditional database-backed ELN systems for industry, I wanted to show the potential of a web native way of looking at the laboratory record, and in passing to show that these approaches work best when they are open, before beating a retreat back to the position of “but if you’re really paranoid you can implement all of this behind your firewall” so as not to scare them off too much. The talks are quite similar in outline and content and I wanted to work through some of the ideas here.

The central premise is one that is similar to that of many web-service start ups: “The traditional paper notebook is to the fully integrated web based lab record as a card index is to Google”. Or to put it another way, if you think in a “web-native” way then you can look to leverage the power of interlinked networks, tagging, social network effects, and other things that don’t exist on a piece of paper, or indeed in most databases. This means stripping back the lab record to basics and re-imagining it as though it were built around web based functionality.

So what is a lab notebook? At core it is a journal of events, a record of what has happened. Very similar to a blog in many ways: an episodic record containing dates, times, and bits and pieces of often disparate material, cut and pasted into a paper notebook. It is interesting that in fact most people who use online notebooks based on existing services use wikis rather than blogs. This is for a number of reasons: better user interfaces, sometimes better services and functionality, proper versioning, or just personal preference. But there is one thing that wikis tend to do very badly that I feel is crucial to thinking about the lab record in a web native way: they generate at best very ropey RSS feeds. Wikis are well suited to report writing and to formalising and sharing procedures, but they don’t make very good diaries. At the end of the day it ought to be possible to do clever things with a single back end database being presented as both blog and wiki, but I’ve yet to see anything really impressive in this space, so for the moment I am going to stick with the idea of blog as lab notebook, because I want to focus on feeds.

So we have the idea of a blog as the record – a minute to minute and day to day record. We will assume we have a wonderful backend and API and a wide range of clients that suit different approaches to writing things down and different situations where this is being done. Witness the plethora of clients for Twittering in every circumstance and mode of interaction, for instance. We’ll assume tagging functionality as well as key-value pairs that are exposed as microformats and RDF as appropriate; widgets for ontology look-up and autocompletion if they are desired; and the ability to automatically generate input forms from any formal description of what an experiment should look like. But above all, this will be exposed in a rich machine readable format in an RSS/Atom feed.

What we don’t need is the ability to upload data. Why not? Because we’re thinking web native. On a blog you don’t generally upload images and video directly; you host them on an appropriate service and embed them on the blog page. All of the issues are handled for you and a nice viewer is put in place. The hosting service is optimised for handling the kind of content you need: Flickr for photos, YouTube (Viddler, Bioscreencast) for video, Slideshare for presentations, etc. In a properly built ecosystem there would be a data hosting service, ideally one optimised for your type of data, that would provide cut and paste embed codes providing the appropriate visualisations. The lab notebook only needs to point at the data; it doesn’t need to know anything much about that data beyond the fact that it is related to the stuff going on around it and that it comes with some HTML code to embed a visualisation of some sort.
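To make that concrete, here is a sketch of what a single machine-readable entry in such a feed might look like, built with nothing more than the Python standard library. The category and link conventions beyond plain Atom are choices I have made up for the example.

```python
# A sketch of a single Atom entry for a notebook post: the data itself
# lives on an external service and is only pointed at. The category term
# and link relation conventions here are illustrative, not a standard.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "HPLC run, sample S-42"
ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2009-02-15T10:30:00Z"

# Tags / key-value pairs exposed as Atom categories.
ET.SubElement(entry, f"{{{ATOM}}}category", term="procedure:HPLC")

# Link out to the externally hosted data file and its visualisation.
ET.SubElement(entry, f"{{{ATOM}}}link", rel="related",
              href="https://data.example.org/spectra/S-42",
              type="chemical/x-jcamp-dx")

print(ET.tostring(entry, encoding="unicode"))
```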

That pointing is the next thing we need to think about. On the Chemtools LaBLog I use a one post, one item system. This means that every object gets its own post: each sample, each bottle of material, has its own post and its own identity. This creates a network of posts that I have written about before. What it also means is that it is possible to apply PageRank-style algorithms, and link analysis more generally, when looking at large quantities of posts. Most importantly it encodes the relationship between objects, samples, procedures, data, and analysis in the way the web is tooled up to understand: the relationships are encoded in links. This is a lightweight way of starting to build up a web of data – it doesn’t matter so much to start with whether this is in hardcore RDF, as long as there is enough contextual data to make it useful; some tagging or key-value pairs would be a good start. Most importantly it means that it doesn’t matter at all where our data files are, as long as we can point at them with sufficient precision.
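As a toy example of the kind of link analysis this enables, here is a sketch that treats posts as nodes and the links between them as edges, then runs PageRank over the result. The post names are made up.

```python
# A sketch of link analysis over a one-item-one-post record: posts are
# nodes, links between them are edges, and standard graph measures
# fall out for free. Post identifiers are invented for the example.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("sample/S-42", "material/buffer-batch-7"),   # sample made from a stock
    ("procedure/hplc-run-12", "sample/S-42"),     # procedure used the sample
    ("data/chromatogram-12", "procedure/hplc-run-12"),  # data from the run
])

ranks = nx.pagerank(g)
for post, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {post}")
```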

But if we’re moving the data files off the main record, then what about the information about samples? Wouldn’t it be better to use the existing Laboratory Information Management System, or sample management system, or database? Well again, as long as you can point at each sample independently with the precision you need, then it doesn’t matter. You can use a Google Spreadsheet if you want to – you can give a URL for each cell, and there is a powerful API that would let you build services to make putting the links in easy. We use the LaBLog to keep information on our samples because we have such a wide variety of different materials put to different uses that the flexibility of using that system, rather than a database with a defined schema, is important for our way of working. But for other people this may not be the case. It might even be better to use multiple different systems: a database for oligonucleotides, a spreadsheet for environmental samples, and a full blown LIMS for barcoding and following the samples through preparation for sequencing. As long as it can be pointed at, it can be used. As with the data case, it is best to use the system that is best suited to the specific samples. These systems are better developed than they are for data – but many of the existing systems don’t provide a good way of pointing at specific samples from an external document, and very few make it possible to do this via a simple http-compliant URL.

So we’ve passed off the data, we’ve passed off the sample management. What we’re left with is the procedures, which, after all, are the core of the record, right? Well, no. Procedures are also just documents. Maybe they are text documents, but perhaps they are better expressed as spreadsheets or workflows (or rather the record of running a workflow). Again these may well be better handled by external services, be they word processors, spreadsheets, or specialist services. They just need to be somewhere where we can point at them.

What we are left with is the links themselves, arranged along a timeline. The laboratory record is reduced to a feed which describes the relationships between samples, procedures, and data. This could be a simple feed containing links, or a sophisticated and rich XML feed which points out in turn to one or more formal vocabularies that describe the semantic relationships between items. It can all be wired together, some parts less tightly coupled than others, but in principle it can at least be connected. And that takes us one significant step towards wiring up the data web that many of us dream of.

The beauty of this approach is that it doesn’t require users to shift from the applications and services that they already use, like, and understand. What it does require is intelligent and specific repositories for the objects they generate, repositories that know enough about the object type to provide useful information and context. What it also requires is good plugins, applications, and services to help people generate the lab record feed. It also requires a minimal and arbitrarily extensible way of describing the relationships. This could be as simple as HTML links with tagging of the objects (once you know an object is a sample and it is linked to a procedure you know a lot about what is going on), but there is a logic in having a minimal vocabulary that describes relationships (what you don’t know explicitly in the tagging version is whether the sample is an input or an output). But it can also be fully semantic if that is what people want. And while the loosely tagged material won’t be easily and tightly coupled to the fully semantic material, the connections will at least be there. A combination of both is not perfect, but it’s a step on the way towards the global data graph.
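Here is a sketch of what that minimal vocabulary might look like in practice, expressed as RDF with rdflib. The namespace and the term names (hasInput, hasOutput) are invented for the example; the point is only that a couple of relationship terms are enough to distinguish inputs from outputs.

```python
# A sketch of a minimal relationship vocabulary: a handful of terms in a
# made-up namespace, enough to say whether a sample was an input or an
# output of a procedure. The namespace and URIs are hypothetical.
from rdflib import Graph, Namespace, URIRef

LAB = Namespace("http://example.org/labrecord/terms#")
g = Graph()

sample = URIRef("http://example.org/lablog/sample/S-42")
run = URIRef("http://example.org/lablog/procedure/hplc-run-12")
data = URIRef("http://example.org/lablog/data/chromatogram-12")

g.add((run, LAB.hasInput, sample))   # the sample went into the procedure
g.add((run, LAB.hasOutput, data))    # the data file came out of it

print(g.serialize(format="turtle"))
```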

A funny thing happened on the (way to the) forum

I love Stephen Sondheim musicals. In particular I love the way he can build an ensemble piece in which there can be 10-20 people onstage, apparently singing, shouting, and speaking completely disconnected lines, which nonetheless build into a coherent whole. Into the Woods (1987) contains many brilliant examples of the thoughts, fears, and hopes of a whole group of people building into a coherent view and message (see the opening for a taste and links to other clips). Those who believe in the wisdom of crowds in its widest sense see a similar possibility in aggregating the chatter found on the web into coherent and accurate assessments of problems. Those who despair of the ignorance of the lowest common denominator see most Web 2.0 projects as a waste of time. I sit somewhere in the middle – believing that with the right tools, a community of people who care about a problem and have some form of agreed standards of behavior and disputation can rapidly aggregate a well informed and considered view of a problem and what its solution might be.

Yesterday and today, I saw one of the most compelling examples of this that I’ve yet seen. Yesterday I posted a brain dump of what I had been thinking about, following discussions in Hawaii and in North Carolina, about the possibilities of using OpenID to build a system for unique researcher IDs. The discussion on FriendFeed almost immediately aggregated a whole set of material, some of which I had not previously seen, and proceeded through a coherent discussion of many points, with a wide range of disparate views, towards some emerging conclusions. I’m not going to pre-judge those conclusions, except to note there are some positions clearly developing that are contrary to my own view (e.g. on CrossRef being the preferred organisation to run such a service). This to me suggests the power of this approach for consensus building, even when that consensus is opposite to the position of the person kicking off the discussion.

What struck me with this was the powerful way in which FriendFeed rapidly enabled the conversation – and also the potential negative effect it had on widening the conversation beyond that community. FriendFeed is a very powerful tool for very rapidly widening the reach of a discussion like this one. It would be interesting to know how many people saw the item in their feeds. I could calculate it I suppose, but for now I will just guess it was probably in the low to mid thousands – many, many more than subscribe to the blog anyway. What will be interesting to see is whether the slower process of blogospheric diffusion is informed by the FriendFeed discussion or runs completely independent of it (incidentally a FriendFeed widget will hopefully be coming soon on the blog as well to try and tie things together). Andy Powell of the Eduserv Foundation comments in his post of today that:

There’s a good deal of discussion about the post in Cameron’s FriendFeed. (It’s slightly annoying that the discussion is somewhat divorced from the original blog post but I guess that is one of the, err…, features of using FriendFeed?) [Andy also goes on to make some good point about delegation – CN]

The speed with which FriendFeed works, and the way in which it helps you build an interested community, and separated communities where appropriate, is indeed a feature. Equally, that speed, and the fact that you need an account to comment, if not to watch, can be exclusionary. It is also somewhat closed off from the rest of the world. While I am greatly excited by what happened yesterday and today – indeed possibly just as excited as I am about yesterday’s other important news – it is important to make sure that the watering and care of the community doesn’t turn into the building of a walled garden.

A specialist OpenID service to provide unique researcher IDs?

Following on from Science Online 09, and particularly the discussions on impact factors and researcher incentives (also on FriendFeed, with some video available at Mogulus via video on demand), as well as the article in PLoS Computational Biology by Phil Bourne and Lynn Fink, the issue of unique researcher identifiers has really emerged as absolutely central to making traditional publication work better, effectively building a real data web that works, and making it possible to automatically aggregate the full list of ways people contribute to the community.

Good citation practice lies at the core of good science. The value of research data is not so much in the data itself but in its context, its connection with other data and ideas. How then is it that we have no way of citing a person? We need a single, unique way of identifying researchers. This will help traditional publishers and the existing ecosystem of services by making it possible to uniquely identify authors and referees. It will make it easier for researchers to be clear about who they are and what they have done. And finally it is a critical step in making it possible to automatically track all the contributions that people make. We’ve all seen CVs where people say they have refereed for Nature or the NIH or served on this or that panel. We can talk about micro credits, but until there are validated ways of pulling that information together and linking it to an identity that follows the person, not who they work for, we won’t make much progress.

On the other hand most of us do not want to be locked into one system, particularly if it is controlled by one commercial organization.  Thomson ISI’s ResearcherID is positioned as a solution to this problem, but I for one am not happy with being tied into using one particular service, regardless of who runs it.

In the PLoS Comp Biol article Bourne and Fink argue that one solution to this is OpenID. OpenID isn’t a service; it is a standard. This means that an identity can be hosted by a range of services and people can choose between them based on the service provided, personal philosophy, or any other reason. The central idea is that you have a single identity which you can use to sign on to a wide range of sites. In principle you sign into your OpenID and then you never see another login screen. In practice you often end up typing in your ID, but at least it reduces the pain of setting up new accounts. It also provides, in most cases, a “home page”. If you go to http://cameron.neylon.myopenid.com you will see a (pretty limited) page with some basic information.

OpenID is becoming more popular, with a wide range of web services providing it as a login option, including Dopplr, Blogger, and research sites such as MyExperiment. Enabling OpenID is also on the list for a wide range of other services, although not always high up the priority list. As a starting point it could be very easy for researchers with an OpenID simply to add it to their address when publishing papers, thus providing a unique and easily trackable identifier that is carried through the journal, abstracting services, and the whole ecosystem of services built around them.

There are two major problems with OpenID. The first is that it is poorly supported by big players such as Google and Yahoo. Google and Yahoo will let you use your account with them as an OpenID but they don’t accept other OpenID providers. More importantly, people just don’t seem to get OpenID. It seems unnatural for some reason for a person’s identity marker to be a URL rather than a number, a name, or an email address. Compounded with the limited options provided by OpenID service providers this makes the practical use of such identifiers for researchers very much a minority activity.

So what about building an OpenID service specifically for researchers? Imagine a setup screen that asks sensible questions about where you work and what field you are in. Imagine that on the second screen, having done a search through literature databases, it presents you with a list of publications to check through, letting you remove any mistakes and add any that have been missed. And then imagine that the default homepage format is similar to an academic CV.

Problem 1: People already have multiple IDs and sometimes multiple OpenIDs. So we make at least part of the back end file format, and much of what is exposed on the homepage, FOAF, making it possible to at least assert that you are the same person as, say, cameronneylon@yahoo.com.
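A sketch of what that FOAF assertion might look like, using rdflib: the profile ties the OpenID page to a name, a mailbox, and (optionally) another profile describing the same person. The second profile URL is a made-up placeholder.

```python
# A sketch of the FOAF idea: assert that the holder of this OpenID is the
# same person who uses another account, via foaf:mbox (an inverse
# functional property in FOAF) and an owl:sameAs link to another profile.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, OWL

g = Graph()
me = URIRef("http://cameron.neylon.myopenid.com/#me")
other_profile = URIRef("http://example.org/profiles/cameronneylon#me")  # placeholder

g.add((me, FOAF.name, Literal("Cameron Neylon")))
g.add((me, FOAF.mbox, URIRef("mailto:cameronneylon@yahoo.com")))
g.add((me, OWL.sameAs, other_profile))

print(g.serialize(format="turtle"))
```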

Problem 2: Aren’t we just locking people into a specific service again? Well, no: if people don’t want to use it they can use any OpenID provider, or even set one up themselves. It is an open standard.

Problem 3: What is there to make people sign up? This is the tough one really. It falls into two parts. Firstly, for those of us who already have OpenIDs or other accounts on other systems, isn’t this just (yet) another “me too” service? So, in accordance with the five rules I have proposed for successful researcher web services, there has to be a compelling case for using it.

For me the answer to this comes in part from the question. One of the things that comes up again and again as a complaint from researchers is the need to re-format their CV (see Schleyer et al., 2008 for a study of this). Remember that the aim here is to automatically aggregate most of the information you would put in a CV. Papers should be (relatively) easy; grants might be possible. Because we are doing this for researchers we know what the main categories are and what they look like. That is, we have semantically structured data.

Ok, so great, I can re-format my CV more easily and I don’t need to worry about whether it is up to date with all my papers, but what about all these other sites where I need to put the same information? For this we need to provide functionality that lets all of this be carried easily to other services: simple embed functionality like that you see on YouTube, and most other good file hosting services, which generates a little fragment of code that can easily be put in place on other services (obviously this requires other services to allow that, which could be a problem in some cases). But imagine the relief if all the poor people who try to manage university department websites could just throw in some embed codes to automatically keep their staff pages up to date. Anyone seeing a business model here yet?

But for this to work the real problem to be solved is the vast majority of researchers for whom this concept is totally alien. How do we get them to be bothered to sign up for this thing which apparently solves a problem they don’t have? The best approach would be if journals and grant awarding bodies used OpenIDs as identifiers. This would be a dream result, but doesn’t seem likely. It would require significant work on changing many existing systems, and frankly what is in it for them? Well, one answer is that it would provide a mechanism for journals and grant bodies to publicly acknowledge the people who referee for them. An authenticated RSS feed from each journal or funder could be parsed and displayed on each researcher’s home page. The feed would expose a record of how many grants or papers each person has reviewed (probably with some delay to prevent people linking that to the publication of specific papers). Of course such a feed could be used for a lot of other interesting things as well, but none of them will work without a unique person identifier.
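The consuming side could be very simple. Here is a sketch, using feedparser, of how a researcher’s homepage might read such a feed and show a count of acknowledged reviews; the feed URL and the way entries are labelled are assumptions made for the example, not an existing service.

```python
# A sketch of consuming a (hypothetical) referee-acknowledgement feed
# from a journal or funder and displaying a simple count on a homepage.
import feedparser

FEED_URL = "https://journal.example.org/referee-activity/researcher-12345.atom"

feed = feedparser.parse(FEED_URL)
# Assume, for the sketch, that review acknowledgements are identifiable
# from the entry title.
reviews = [e for e in feed.entries if "review" in e.get("title", "").lower()]
print(f"Reviews acknowledged by {feed.feed.get('title', 'unknown source')}: "
      f"{len(reviews)}")
```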

I don’t think this is compelling enough in itself, for the moment, but a simpler answer is what was proposed above – just encouraging people to include an OpenID as part of their address. Researchers will bend over backwards to make people happy if they believe those people have an impact on their chances of being published or getting a grant. A little thing could provide a lot of impetus, and that might bring into play the kind of effects that could result from acknowledgement, and ultimately make the case that shifting to OpenID as the login system is worth the effort. This would be particularly the case for funders, who really want to be able to aggregate information about the people they fund effectively.

There are many details to think about here. Can I use my own domain name? (Yes, redirects should be possible.) Will people who use another service be at a disadvantage? (Probably; otherwise any business model won’t really work.) Is there a business model that holds water? (I think there is, but the devil is in the details.) Should it be non-profit, for profit, or run by a respected body? (I would argue that for-profit is possible and should be pursued to make sure the service keeps improving – but then we’re back with a commercial provider.)

There are many good questions that need to be thought through but I think the principle of this could work, and if such an approach is to be successful it needs to get off the ground soon and fast.

Note: I am aware that a number of people are working behind the scenes on components of this and on similar ideas. Some of what is written above is derived from private conversations with these people and as soon as I know that their work has gone public I will add references and citations as appropriate at the bottom of this post. 

Very final countdown to Science Online 09

I should be putting something together for the actual sessions I am notionally involved in helping to run, but this being a very interactive meeting perhaps it is better to leave things to the very last minute. Currently I am at a hotel at LAX awaiting an early flight tomorrow morning. Daily temperatures in the LA area have been running around 25-30 °C for the past few days, but we’ve been threatened with the potential for well below zero in Chapel Hill. Nonetheless the programme and the people will more than make up for it, I have no doubt. I got to participate in a bit of the meeting last year via streaming video and that was pretty good but a little limited – not least because I couldn’t really afford to stay up all night, unlike some people who were far more dedicated.

This year I am involved in three sessions (one on Blog Networks, one on Open Notebook Science, and one on Social Networks for Scientists – yes, those three are back to back…) and we will be aiming to be videocasting, live blogging, and posting slides, images, and comments – the whole deal. If you’ve got opinions then leave them at the various wiki pages (via the programme) or bring them along to the sessions. We are definitely looking for lively discussion. Two of these are being organised with the inimitable Deepak Singh, who I am very much looking forward to finally meeting in person – along with many others I feel I know quite well but have never met – and others I have met and look forward to catching up with, including Jean-Claude, who instigated the Open Notebook session.

With luck I will get to the dinner tomorrow night, so I hope to see some people there. Otherwise I hope to see many of you in person or online over the weekend. Thanks to Bora and Anton and David for superb organisation (and not a little pestering to make sure I decided to come!)