The science exchange
How do we actually create the service that will deliver on the promise of the internet to enable collaborations to form as and where needed, to increase the speed at which we do science by enabling us to make the right contacts at the right times; and, critically, how do we create the critical mass needed to actually make it happen? In another example of blog-based morphic resonance, a discussion at Nature Network on how to enable collaboration occurred almost at the same time as Pawel Szczesny was blogging about freelance science. I then hooked up with Pawel to solve a problem in my research; as far as we know, the first example of a scientific collaboration that started on FriendFeed. And Shirley Wu has now wrapped all of this up in a blog post on how a service to enable collaborations to be identified might actually work, which has provoked further discussion.
Shirley’s proposal is essentially an exchange: people make requests and people offer to help. I have my concerns about involving a currency element in this, but that is really a separate issue in many ways. The key problem with such a system is two-fold. First, discovery: very often people do not know how to phrase their problem, and in many cases they are not aware that they have a problem at all. Equally, most will be unlikely to realise they have the solution to a specific problem. Second, critical mass: for such a system to work it needs enough participants. My suspicion is that the critical mass required will be so large that, combined with the fact that people won’t know what they want, it will be almost impossible for people to find each other.
Remember that what we are trying to enable here is additional collaborations, above and beyond those we already know we need. If people have a specific problem today, they can pick up the phone and call someone who might have the answer. We can make that easier, or faster, but to me the key benefit, the potential for a step change in the efficiency of how we do science, lies in all the small things that could be done better: the collaborations that are not currently sought out and, in particular, the data that languishes somewhere because there is never quite enough for a paper. As in any ‘long tail’ exploitation, the benefits come from putting all those small pieces together into useful and usable pieces of science.
So how do we make it happen? Again (sorry to go on about this) a lot relies on critical mass. We don’t have it, so we’ll have to create it. Even with critical mass, discovery is a serious problem. One solution is ‘better search’, perhaps semantic search, but if this is about finding problems that people haven’t realised they have then that’s a very deep semantic problem. Actually this is exactly the kind of thing, like tagging photos or checking Wikipedia articles, that humans still beat machines at time and time again. I think the thing that actually makes the connection has to be a person. Indeed, this is the way this kind of thing already works: I see colleague A struggling with a problem and I think ‘she should talk to colleague B’, because B has solved a very similar problem in a different system.
People are connected in a social network, and within the scientific network it is possible to identify ‘supernodes’ that provide the connectivity that drives innovation and new collaborations. These supernodes are not always wildly successful scientists; actually they tend to be scatterbrains who struggle to focus on one thing long enough to get it finished, but they have the broad knowledge to make connections. Many of them leave academic science because the restrictions chafe (you know who you are). We can enable these people to be more effective by providing them with feeds from various sources: blogs, online literature libraries, journal articles, and in some cases open or partially open lab books. These feeds can provide the raw data that would drive such a system. And the good bit is that we don’t actually need researchers to opt in. Obviously the more data someone is generating, the better the chance of spotting their problem or opportunity, but it is possible to generate a feed without the researcher in question generating it themselves. Quality (of the feed, not the content) will be an issue, but that would be something to work on going forward.

I think a ‘launch’ is precisely the wrong thing to do. This is a classic case for a closed alpha (FriendFeed?) moving to a closed beta, with a gradual shift to an open beta. The point to ‘launch’ such a thing is when you’ve already got a set of success stories to tell, in my view.
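The ‘supernode’ idea can be made concrete in network terms: these are the people who sit on many of the shortest paths between otherwise disconnected researchers, which is what betweenness centrality measures. A minimal sketch of how you might spot them in a co-working graph, using Brandes’ algorithm (the names and the toy network are entirely hypothetical):

```python
from collections import deque, defaultdict

def betweenness(graph):
    """Brandes' algorithm: betweenness centrality on an unweighted graph.
    graph maps each node to a list of its neighbours."""
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        # Breadth-first search from s, counting shortest paths
        stack, pred = [], defaultdict(list)
        sigma = dict.fromkeys(graph, 0)   # number of shortest paths to each node
        dist = dict.fromkeys(graph, -1)
        sigma[s], dist[s] = 1, 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Accumulate each node's dependency, working back from the leaves
        delta = dict.fromkeys(graph, 0.0)
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Hypothetical collaboration network: 'connie' bridges two separate groups
collab = {
    "alice":  ["bob", "connie"],
    "bob":    ["alice", "connie"],
    "connie": ["alice", "bob", "dan", "erin"],
    "dan":    ["connie", "erin"],
    "erin":   ["connie", "dan"],
}
scores = betweenness(collab)
supernode = max(scores, key=scores.get)
print(supernode)  # connie: every cross-group shortest path runs through her
```

The point of the sketch is only that the connector need not be the most prolific node; connie has few collaborations of her own but the highest betweenness, which is exactly the profile described above.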
You want to follow people who make good collaborators, so there is clearly a place for a rating system within the process. By providing people with opportunities, you make the case to them that they should be generating a richer feed, so that we can provide them with more opportunities or solve more of their problems. Shirley suggested the notion of a currency, but my belief is that people will be happy to contribute, within their available resources, as long as the work is properly attributed in peer-reviewed publications. Authorship is enough of a currency to drive collaborations.
But what about those resources? And in particular, what about the humans in the middle driving this system? Are they doing it for the love of it (the Wikipedia model)? Will they get credit on the papers for making the connection? And if they do, will a string of mid-author-list papers do them any good in their career progression, particularly if that career is outside academic science? Let’s turn that on its head. If this were a startup, with the ‘connectors’ being paid, where would the money come from? No-one will pay to subscribe to such a system, and they won’t pay a tax or fee to undertake collaborations either. First right of refusal on IP might be an answer, but it’s a long-term, high-risk route, and it means the system would focus on exploitable results, which is arguably exactly the place where markets are already reasonably effective at driving the formation of these collaborations.
What is being generated here is new science, and science isn’t paid for per se. The resources that generate science are supported by governments, charities, and industry, but the actual production of science is not. The truly radical approach would be to turn the system on its head: don’t fund the universities to do science, fund the journals to buy science; then the system would reward increased efficiency. As it exists at the moment, the funding system does nothing to support increased efficiency.
In stock exchanges and money markets, people are paid an awful lot of money to make what are fundamentally rather simple connections between buyers and sellers. This is still, for the most part, handled by humans, although there is a move towards fully automatic position-taking. The connections we are talking about are much more complex to understand. To make this work we need to figure out how to reward the people who can make those connections. We also need to find a way to put money into the system, to provide the additional resources required to actually make things happen.
Turning the funding system on its head is probably not viable, and while it makes a nice thought experiment I’m sure there are many reasons why it’s a terrible idea. What we need to do is find research funders who are serious about increasing their return on investment: not in terms of money, but in terms of results, in terms of science. I think if we can do that, and convince someone of the case for a return on their investment, the rest of the technical problems will be pretty straightforward to crack.