Loss, time and money

May – Oct 2006 Calendar (Photo credit: Wikipedia)

For my holiday project I’m reading through my old blog posts and trying to track the conversations that they were part of. What is shocking, but not surprising with a little thought, is how many of my current ideas seem to spring into being almost whole in single posts. And just how old some of those posts are. At the same time there is plenty of misunderstanding and rank naivety in there as well.

The period from 2007-10 was clearly productive and febrile. The links out from my posts point to a distributed conversation that is, to be honest, still a lot more sophisticated than much current online discussion on scholarly communications. Yet at the same time that fabric is wearing thin. Broken links abound, both internal, from when I moved my own hosting, and external. Neil Saunders’ posts are all still accessible, but Deepak Singh’s seem to require a trip to the Internet Archive. The biggest single loss, though, occurs through the adoption of Friendfeed by our small community in mid-2008. Some links to discussions resolve, and some discussions of discussions survive as posts, but whole chunks of the record of those conversations – about researcher IDs, peer review, and incentives and credit – appear to have disappeared.

As I dig deeper through those conversations it looks like much of the material can be extracted from the Internet Archive, but it takes time. Time is a theme that runs through posts starting in 2009, as the “real time web” started becoming a mainstream thing, resurfaces in 2011 and continues to bother. Time also surfaces as a cycle. Comments on peer review from 2011 still seem apposite, and themes of feeds, aggregations and social data continue to emerge over time. On the other hand, while much of my recounting of conversations about Researcher IDs in 2009 will look familiar to those who struggled with getting ORCID up and running, a lot of the technology ideas were…well, probably best left in the same place as my enthusiasm for Google Wave. And my concerns about the involvement of Crossref in Researcher IDs are ironic given that I now sit on their board as the second person representing PLOS.
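Pulling those dead links out of the Internet Archive is the sort of job that can be partly mechanised. Below is a minimal sketch, in Python with the requests library, against the Archive’s public Wayback availability API (https://archive.org/wayback/available); the example URLs and the helper name are illustrative assumptions, not links or code recovered from the posts themselves.

```python
# Sketch: find the nearest Wayback Machine snapshot for a list of dead links.
# Uses the public availability API; the URLs below are hypothetical examples.
import requests

DEAD_LINKS = [
    "http://mndoci.com/blog/an-old-post",          # hypothetical example URL
    "http://friendfeed.com/the-life-scientists",   # hypothetical example URL
]

def closest_snapshot(url, timestamp="20080601"):
    """Return the URL of the archived copy nearest to `timestamp`, or None."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=30,
    )
    resp.raise_for_status()
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

for link in DEAD_LINKS:
    print(link, "->", closest_snapshot(link) or "no snapshot found")
```

The timestamp parameter nudges the API towards snapshots from the relevant period (here mid-2008), which matters for pages that changed a great deal before they finally vanished.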

The theme that travels throughout the whole seven-ish years is that of incentives. Technical incentives (the idea that recording research should be a byproduct of what the researcher is doing anyway) and ease of use (often as rants about institutional repositories) appear frequently. But the core is the question of incentives for researchers to adopt open practice: issues of “credit”, how it might be given and the challenges that involves, but also exchange systems that might turn “credit” into something real and meaningful. Whether that was to be real money wasn’t clear at the time. The concerns with real money come later, as this open letter to David Willetts, written a year before the Finch review, suggests. Posts from 2010 on frequently mention the UK’s research funding crisis, and in retrospect that crisis is the crucible that formed my views on impact and re-use, as well as how new metrics might support incentives that encourage re-use.

The themes are the same, the needs have not changed so much, and many of the possibilities remain unproven and unrealised. At the same time the technology has marched on, making much of what was hard easy, or even trivial. What remains true is that the real value was created in conversations, arguments and disagreements, reconciliations and consensus. The value remains where it has always been – in a well-crafted network of constructive critics and in a commitment to engage in the construction and care of those networks.

Incentives: Definitely a case of rolling your own

Lakhovsky: The Conversation; oil on panel (Бесе...
Image via Wikipedia

Science Online London ran late last week and into the weekend, and I was very pleased to be asked to run a panel focused, broadly speaking, on evaluation and incentives. Now I had thought that the panel went pretty well, but I’d be fibbing if I said I wasn’t a bit disappointed. Not disappointed with the panel members or what they said. Yes, it was a bit more general than I had hoped and there were things that I wished we’d covered, but the substance was good from my perspective. My disappointment was with the response from the audience, really on two slightly different points.

The first was the lack of response to what I thought were some of the most exciting things I’ve heard in a long time from major stakeholders. I’ll come back to that later. But a bigger disappointment was that people didn’t seem to connect the dots to their own needs and experiences.

Science Online, in both its London and North Carolina forms, has for me always been a meeting where the conversation proceeds at a more sophisticated level than usual. So I pitched the plan of the session at where I thought the level should be. Yes, we needed to talk about the challenges and surface the usual problems: non-traditional research outputs, and online outputs in particular, don’t get the kind of credit that papers do; institutions struggle to give credit for work that doesn’t fit in a pigeonhole; funders seem to reward only the conventional and traditional; and people outside the ivory tower struggle to get either recognition or funding. These are known challenges; the question is how to tackle them.

The step beyond this is the hard one. It is easy to say that incentives need to change. But incentives don’t drop from heaven. Incentives are created within communities and they become meaningful when they are linked to the interests of stakeholders with resources. So the discussion wasn’t really about impact, or funding, or to show that nothing can be done by amateurs. The discussion was about the needs of institutions and funders and how they can be served by what is being generated by the online community. It was also about the constraints they face in acting. But fundamentally you had major players on the stage saying “this is the kind of thing we need to get the ball rolling”.

Make no mistake, this is tough. Everyone is constrained and resources are tight but at the centre of the discussion were the key pointers to how to cut through the knot. The head of strategy at a major research university stated that universities want to play a more diverse role, want to create more diverse scholarly outputs, and want to engage with the wider community in new ways. That smart institutions will be looking to diversify. The head of evaluation at a major UK funder said that funders really want to know about non-traditional outputs and how they were having a wider impact. That these outputs are amongst the best things they can talk about to government. That they will be crucial to make the case to sustain science funding.

Those statements are amongst the most direct and exciting I have heard in some years of advocacy in this space. The opportunity is there, if you’re willing to put the effort in to communicate and to shape what you are doing to match their needs. As Michael Nielsen said in his morning keynote, this is a collective action problem. That means finding what unites the needs of those doing the work with the needs of those holding the resources. It means compromise, and it means focusing on the achievable, but the point of the discussion was to identify what might be achievable.

So mostly I was disappointed that the excitement I felt wasn’t mirrored in the audience. The discussion about incentives has to move on. Saying that “institutions should do X” or “funders should do Y” gets us nowhere. Understanding what we can do together with funders, institutions and other communities to take the online agenda forward, and understanding what the constraints are, is where we need to go. The discussion showed that both institutions and funders know that they need what the community of online scientists can do. They don’t know how to go about it, and they don’t even know very much about what we are doing, but they want to know. And when they do know they can advise and help, and they can bring resources to bear. Maybe not all the resources you would like, and maybe not for all the things you would like, but resources nonetheless.

With a lot of these issues it is easy to get too immersed in the detail and to forget that people are looking in from the outside without the same context. I guess the fact that I pulled out as the main message what might have seemed to the audience to be just asides is indicative of that. But I really want to get that message out, because I think it is critical if the community of online scientists wants to be the mainstream. And I think it should be.

The bottom line is that smart funders and smart institutions value what is going on online. They want to support it, they want to be seen to support it, but they’re not always sure how to go about it and how to judge its quality. But they want to know more. That’s where you come in and that’s why the session was relevant. Lars Fischer had it absolutely right: “I think the biggest and most consequential incentive for scientists is (informal) recognition by peers.” You know, we know, who is doing the good stuff and what is valuable. Take that conversation to the funders and the institutions, explain to them what’s good and why, and tell the story of what the value is. Put it in your CV, demand that promotion panels take account of it, whichever side of the table you are on. Show that you make an impact in language that they understand. They want to know. They may not always be able to act – funding is an issue – but they want to and they need your help. In many ways they need our help more than we need theirs. And if that isn’t an incentive then I don’t know what is.
