Tracking research into practice: Are nurses on Twitter a good case study?

The holy grail of research assessment is a means of automatically tracking how research changes the way practitioners act in the real world. How does new research influence policy? Where has research been applied by start-ups? And have new findings changed the way medical practitioners treat patients? Tracking this kind of research impact is hard for a variety of reasons: practitioners don’t (generally) write new research papers citing the work they’ve used; even if they did, their work is often several steps removed from the original research, making the links harder to identify; and finally, researchers themselves are often too removed from the application of the research to be aware of it. Where studies of downstream impact have been done they are generally carefully selected case studies that generate a narrative description. These case studies can be incredibly expensive and, by their nature, are unlikely to uncover unexpected applications of research.

In recent talks I have used a specific example of a research article reaching a practitioner community. This is a paper that I discovered while searching through the output of the University of Cape Town on Euan Adie‘s Altmetric.com service. The paper deals with domestic violence, HIV status and rape. These are critical social issues, and new insights have a real potential to improve people’s lives, particularly in the area of the study. The paper was tweeted by a number of accounts, but in particular by @Shukumisa and @SonkeTogether, two support and advocacy organisations in South Africa. Shukumisa in particular tweeted in response to another account: “@lizieloots a really important study, we have linked to it on our site”. This is a single example, but it illustrates how it is possible to at least identify where research is being discussed within practitioner and community spaces.

But can we go further? More recently I’ve shown some other examples of heavily tweeted papers that relate to work funded by cancer charities. In one of those talks I made the throwaway comment “You’ve always struggled to see whether practitioners actually use your research…and there are a lot of nurses on Twitter”. I hadn’t really followed that up until yesterday, when I asked on Twitter about research into the use of social media by nurses and was rapidly put in touch with a range of experts on the subject (remind me, how did we ask speculative research questions before Twitter?). So the question I’m interested in probing is whether the application of research by nurses is something that can be tracked using links shared on Twitter as a proxy.

This is interesting from a range of perspectives. To what extent do practicing nurses who use social media share links to web content that informs their professional practice? How does this mirror the parallel link-sharing activity of academic researchers? Are nurses referring to primary research content, or is this information mediated through other sources? Do such other sources link back to the primary research? Can those links be traced automatically? And there is a host of other questions around how professional practice is changing with the greater availability of these primary and secondary resources.

My hypothesis is as follows: links shared by nurse practitioners and their online community are a viable proxy for (some portion of) the impact that research has in clinical practice. The extent to which links are shared by nurses on Twitter, perhaps combined with sentiment analysis, could serve as a measure of the impact of research targeted at the professional practice of nurses.
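As a very rough sketch of how such a proxy might be computed, the snippet below pulls DOI-style links out of tweet text and applies a crude keyword-based sentiment score. Everything here is illustrative: the tweet structure, the account name, the keyword lists, and the placeholder DOI are assumptions, and a real study would collect tweets via the Twitter API, resolve shortened URLs, and use a proper sentiment classifier.

```python
import re

# DOIs follow a well-known pattern; real link tracking would also need
# to resolve URL shorteners (bit.ly etc.) before matching.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+")

# Hypothetical keyword lists standing in for a trained sentiment model.
POSITIVE = {"important", "useful", "great", "recommend"}
NEGATIVE = {"flawed", "wrong", "misleading", "poor"}

def research_links(tweets):
    """Yield (account, doi, sentiment) for each DOI found in a tweet.

    `tweets` is assumed to be an iterable of dicts with 'user' and
    'text' keys, as might be collected from the Twitter API.
    """
    for tweet in tweets:
        words = set(tweet["text"].lower().split())
        sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
        for doi in DOI_PATTERN.findall(tweet["text"]):
            yield tweet["user"], doi.rstrip(".,;"), sentiment

# Made-up example data (hypothetical account, placeholder DOI):
sample = [{"user": "@SomeNurse",
           "text": "a really important study 10.1234/example.5678"}]
for user, doi, score in research_links(sample):
    print(user, doi, score)  # @SomeNurse 10.1234/example.5678 1
```

Aggregating these tuples per paper, across a defined community of nursing accounts, would give the kind of crude impact signal the hypothesis describes.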

Thoughts? Criticisms?

Incentives: Definitely a case of rolling your own

Image: Lakhovsky, The Conversation; oil on panel (via Wikipedia)

Science Online London ran late last week and into the weekend, and I was very pleased to be asked to run a panel, broadly speaking focused on evaluation and incentives. Now, I had thought that the panel went pretty well, but I’d be fibbing if I said I wasn’t a bit disappointed. Not disappointed with the panel members or what they said. Yes, it was a bit more general than I had hoped, and there were things I wished we’d covered, but the substance was good from my perspective. My disappointment was with the response from the audience, really on two slightly different points.

The first was the lack of response to what I thought were some of the most exciting things I’ve heard in a long time from major stakeholders. I’ll come back to that later. But a bigger disappointment was that people didn’t seem to connect the dots to their own needs and experiences.

Science Online, in both its London and North Carolina forms, has for me always been a meeting where the conversation proceeds at a more sophisticated level than usual. So I pitched the plan of the session at where I thought the level should be. Yes, we needed to talk about the challenges and surface the usual problems: non-traditional research outputs, and online outputs in particular, don’t get the kind of credit that papers do; institutions struggle to give credit for work that doesn’t fit in a pigeonhole; funders seem to reward only the conventional and traditional; and people outside the ivory tower struggle to get either recognition or funding. These are known challenges; the question is how to tackle them.

The step beyond this is the hard one. It is easy to say that incentives need to change. But incentives don’t drop from heaven. Incentives are created within communities, and they become meaningful when they are linked to the interests of stakeholders with resources. So the discussion wasn’t really about impact, or funding, or showing that nothing can be done by amateurs. The discussion was about the needs of institutions and funders and how they can be served by what is being generated by the online community. It was also about the constraints they face in acting. But fundamentally you had major players on the stage saying “this is the kind of thing we need to get the ball rolling”.

Make no mistake, this is tough. Everyone is constrained and resources are tight but at the centre of the discussion were the key pointers to how to cut through the knot. The head of strategy at a major research university stated that universities want to play a more diverse role, want to create more diverse scholarly outputs, and want to engage with the wider community in new ways. That smart institutions will be looking to diversify. The head of evaluation at a major UK funder said that funders really want to know about non-traditional outputs and how they were having a wider impact. That these outputs are amongst the best things they can talk about to government. That they will be crucial to make the case to sustain science funding.

Those statements are amongst the most direct and exciting I have heard in some years of advocacy in this space. The opportunity is there, if you’re willing to put the effort in to communicate and to shape what you are doing to match their needs. As Michael Nielsen said in his morning keynote, this is a collective action problem. That means finding what unites the needs of those doing the work with the needs of those who hold the resources. It means compromise, and it means focusing on the achievable, but the point of the discussion was to identify what might be achievable.

So mostly I was disappointed that the excitement I felt wasn’t mirrored in the audience. The discussion about incentives has to move on. Saying that “institutions should do X” or “funders should do Y” gets us nowhere. Understanding what we can do together with funders and institutions and other communities to take the online agenda forward and understanding what the constraints are is where we need to go. The discussion showed that both institutions and funders know that they need what the community of online scientists can do. They don’t know how to go about it, and they don’t even know very much what we are doing, but they want to know. And when they do know they can advise and help and they can bring resources to bear. Maybe not all the resources you would like, and maybe not for all the things you would like, but resources nonetheless.

With a lot of things it is easy to get too immersed in the detail of these issues and to forget that people are looking in from the outside without the same context. I guess the fact that I pulled out as the main message what might have seemed to the audience to be just asides is indicative of that. But I really want to get that message out, because I think it is critical if the community of online scientists wants to be the mainstream. And I think it should be.

The bottom line is that smart funders and smart institutions value what is going on online. They want to support it, they want to be seen to support it, but they’re not always sure how to go about it and how to judge its quality. But they want to know more. That’s where you come in and that’s why the session was relevant. Lars Fischer had it absolutely right: “I think the biggest and most consequential incentive for scientists is (informal) recognition by peers.” You know, we know, who is doing the good stuff and what is valuable. Take that conversation to the funders and the institutions, explain to them what’s good and why, and tell the story of what the value is. Put it in your CV, demand that promotion panels take account of it, whichever side of the table you are on. Show that you make an impact in language that they understand. They want to know. They may not always be able to act – funding is an issue – but they want to and they need your help. In many ways they need our help more than we need theirs. And if that isn’t an incentive then I don’t know what is.


(S)low impact research and the importance of open in maximising re-use

Image: Open, by tribalicious (via Flickr)

This is an edited version of the text that I spoke from at the Altmetrics Workshop in Koblenz in June. There is also an audio recording of the talk available, as well as the submitted abstract for the workshop.

I developed an interest in research evaluation as an advocate of open research process. It is clear that researchers are not going to change themselves, so someone is going to have to change them, and it is funders who wield the biggest stick. The only question, I thought, was how to persuade them to use it.

Of course it’s not that simple. It turns out that funders are highly constrained as well. They can lead from the front, but not too far out in front if they want to retain the confidence of their community. And the actual decision-making processes remain dominated by senior researchers: successful senior researchers with little interest in rocking the boat too much.

The thing you realize as you dig deeper into this is that the key lies in finding motivations that work across the interests of different stakeholders. The challenge lies in finding the shared objectives: what it is that unites both researchers and funders, as well as government and the wider community. So what can we find that is shared?

I’d like to suggest that one answer to that is Impact. The research community as a whole has a stake in convincing government that research funding is well invested. Government also has a stake in understanding how to maximize the return on its investment. Researchers do want to make a difference, even if that difference is a long way off. You need a scattergun approach to get the big results, but that means supporting a diverse range of research in the knowledge that some of it will go nowhere but some of it will pay off.

Impact has a bad name, but if we step aside from the gut reactions and look at what we actually want out of research, then we start to see a need to raise some challenging questions. What is research for? What is its role in our society, really? What outcomes would we like to see from it, and over what timeframes? What would we want to evaluate those outcomes against? Economic impact, yes, as well as social, health, policy, and environmental impact. This is called the ‘triple bottom line’ in Australia. But alongside these there is also research impact.

All these have something in common. Re-use. What we mean by impact is re-use. Re-use in industry, re-use in public health and education, re-use in policy development and enactment, and re-use in research.

And this frame brings some interesting possibilities. We can measure some types of re-use: citation, retweets, re-use of data or materials, of methods or software. We can think about gathering evidence of other types of re-use, and about improving the systems that acknowledge re-use. If we can expand the culture of citation and linking to new objects and new forms of re-use, particularly for objects on the web, where there is some good low-hanging fruit, then we can gather a much stronger and more comprehensive evidence base to support all sorts of decision making.
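To make the idea of gathering this evidence concrete, here is a minimal sketch against the Altmetric.com service mentioned above, which exposes a public API keyed by DOI. The endpoint and field names reflect my best understanding of the v1 API and should be checked against the current documentation before being relied on; the DOI is a placeholder.

```python
import json
import urllib.error
import urllib.request

def altmetric_counts(doi):
    """Fetch aggregated attention data for a DOI from the Altmetric API.

    Assumes the public v1 endpoint, which returns a JSON record of
    sharing counts, or a 404 for papers with no recorded attention.
    """
    url = "https://api.altmetric.com/v1/doi/" + doi
    try:
        with urllib.request.urlopen(url) as response:
            return json.load(response)
    except urllib.error.HTTPError:
        return None  # no attention recorded (or the service refused us)

record = altmetric_counts("10.1234/example.5678")  # placeholder DOI
if record:
    # Field names assumed from the v1 API; verify before relying on them.
    print(record.get("cited_by_tweeters_count"),
          record.get("cited_by_posts_count"),
          record.get("score"))
```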

There are also problems and challenges, the same ones that any social metrics bring: concentration and community effects, the Matthew effect of the rich getting richer. We need to understand these feedback effects much better, and I am very glad there are significant projects addressing this.

But there is also something more compelling for me in this view. It lets us reframe the debate around basic research. The argument goes that we need basic research to support future breakthroughs. We know neither what we will need nor where it will come from. But we know that it’s very hard to predict; that’s why we support curiosity-driven research as an important part of the portfolio of projects. Yet the dissemination of this investment in the future is amongst the weakest in our research portfolio. At best a few papers are released, then hidden in journals that most of the world has no access to, in many cases without the data or other products being indexed or even made available. And this lack of effective dissemination is often because the work is perceived as low (or, perhaps better, slow) impact.

We may not be able to demonstrate or to measure significant re-use of the outputs of this research for many years. But what we can do is focus on optimizing the capacity, the potential, for future exploitation. Where we can’t demonstrate re-use and impact we should demand that researchers demonstrate that they have optimized their outputs to enable future re-use and impact.

And this brings me full circle. My belief is that the way to ensure the best opportunities for downstream re-use, over all timeframes, is for the research outputs to be open, in the Budapest Declaration sense. But we don’t have to take my word for it; we can gather evidence. Making everything naively open will not always be the best answer, but we need to understand where that is the case and how best to deal with it. We need to gather evidence of re-use over time to understand how to optimize our outputs to maximize their impact.

But if we choose to value re-use, to value the downstream impact that our research has, or could have, then we can make this debate not about politics or ideology but about how best to take the public investment in research and invest it for the outcomes that we need as a society.


Hoist by my own petard: How to reduce your impact with restrictive licences

Image: No Television (via Wikipedia)

I was greatly honoured to be asked to speak at the symposium held on Monday to recognize Peter Murray-Rust’s contribution to scholarly communication. The lineup was spectacular, the talks insightful and probing, and the discussion serious, but also no longer trapped in the naive yes/no discussions of openness and machine readability, instead moving on into detail, edge cases, problems and issues.

For my own talk I wanted to do something different to what I’ve been doing in recent talks. Following the example of Deepak Singh, John Wilbanks and others, I’ve developed what seems to be a pretty effective way of doing an advocacy talk, involving lots of slides and big images, with few words, going by at a fast rate. Recently I did 118 slides in 20 minutes. The talk for Peter’s symposium required something different, so I eschewed slides and just spoke for 40 minutes, wanting to explore the issues deeply rather than skate over the surface in the way the rapid-fire approach tends to do.

The talk was, I think, reasonably well received and provoked some interesting (and heated) discussion. I’ve put the draft text I was working from up on an Etherpad. However, due to my own stupidity, the talk was neither livestreamed nor recorded. In a discussion leading up to the talk I was asked whether I wanted to put up a pretty picture as a backdrop, and I thought it would be good to put up the licensing slide that I use in all of my talks to show that livestreaming, tweeting, etc. are fine, and to encourage people to do them. The trouble is that I navigated to the SlideShare deck that has that slide and just hit full screen without thinking. What the audience therefore saw was the first slide, which looks like this.

A restrictive talk licence prohibiting live streaming, tweeting, etc.

I simply didn’t notice, as I was looking the other way. The response to this was both instructive and interesting. As soon as the people running the (amazingly effective, given the resources they had) livestream and recording saw the slide, they shut down everything. In a sense this is really positive: it shows that people respect the requests of the speaker by default.

Across the audience people didn’t tweet, and indeed in a couple of cases deleted photographs that they had taken. Again, the respect for the request people thought I was making was solid. Even in an audience full of radicals and open geeks, no-one questioned the request. I’m slightly gobsmacked, in fact, that no-one shouted at me to ask what the hell I thought I was doing. Some thought I was being ironic, which I have to say would have been too clever by half. But again it shows that, if you ask, people do for the most part respect the request.

Given that the talk was about research impact, and how open approaches will enable it, it is rather ironic that by inadvertently using the wrong slide I probably significantly reduced the impact of the talk. There is no video that I can upload, no opportunity for others to see the talk. Several people whose opinion I value were watching online and didn’t get to see it, and the tweetstream that I might have hoped would be full of discussion, disagreement, and alternative perspectives was basically dead. I effectively made my own point, reducing what I’d hoped might kick off a wider discussion to a dead talk that exists only in a static document and the memories of the limited number of people who were in the room.

The message is pretty clear. If you want to reduce the effectiveness and impact of the work you’re doing, if you want to limit the people you can reach, then use restrictive terms. If you want your work to reach people and to maximise the chance it has to make a difference, make it clear and easy for people to understand that they are encouraged to copy, share, and cite your work. Be open. Make a difference.


Beyond the Impact Factor: Building a community for more diverse measurement of research

Image: An old measuring tape (via Wikipedia)

I know I’ve been a bit quiet for a few weeks. Mainly I’ve been away for work and taking a brief holiday, so it is good to be plunging back into things with some good news. I am very happy to report that the Open Society Institute has agreed to fund the proposal that was built up in response to my initial suggestion a month or so ago.

OSI, which many will know as one of the major players in bringing the Open Access movement to its current position, will fund a workshop to identify both potential areas where the measurement and aggregation of research outputs can be improved and the barriers to achieving these improvements. This will be immediately followed by a concentrated development workshop (or hackfest) that will aim to deliver prototype examples that show what is possible. The funding also includes further development effort to take one or two of these prototypes and develop them to proof-of-principle stage, ideally with the aim of deploying them into real working environments where they might be useful.

The workshop structure will be developed by the participants over the six weeks leading up to the date itself. I aim to set that date in the next week or so, but the likelihood is early to mid-March. The workshop will be in southern England, with the venue again to be worked out over the next week or so.

There is a lot to pull together here and I will be aiming to contact everyone who has expressed an interest over the next few weeks to start talking about the details. In the meantime I’d like to thank everyone who has contributed to the effort thus far. In particular I’d like to thank Melissa Hagemann and Janet Haven at OSI and Gunner from Aspiration, who have been a great help in focusing and optimizing the proposal. Too many people contributed to the proposal itself to name them all (and you can check out the Google Doc history if you want to pull apart their precise contributions), but I do want to thank Heather Piwowar and David Shotton in particular for their contributions.

Finally, the success of the proposal, and in particular the community response around it, has made me much more confident that some of the dreams we have for using the web to support research are becoming a reality. The details I will leave for another post, but what I found fascinating is how far the network of people who could be contacted spread, essentially through a single blog post. I’ve contacted a few people directly, but most have become involved through the network of contacts that spread from the original post. The network, and the tools, are effective enough that a community can be built up rapidly around an idea from a much larger and more diffuse collection of people. The challenge of this workshop and the wider project is to see how we can make that aggregated community into a self-sustaining conversation that produces useful outputs over the longer term.

It’s a complete coincidence that Michael Nielsen posted a piece in the past few hours that forms a great document for framing the discussion. I’ll be aiming to write something in response soon, but in the meantime follow the top link below.
