David Crotty, over at Scholarly Kitchen, has an interesting piece on metrics, arguing that many of these have not been thought through because they don’t provide concrete motivation for researchers to care about them. Really he’s focused mainly on exchange mechanisms, means of persuading people that doing high-quality review is worth their while by giving them something in exchange, but the argument extends to all sorts of metrics. Why would you care about any given measure if achieving on it doesn’t translate into more resources, time, or glory?
You might expect me to disagree with a lot of this but for the most part I don’t. Any credible metric has to be real, it has to mean something. It has to matter. This is why connecting funders, technologists, data holders, and yes, even publishers, is at the core of the proposal I’m working with at the moment. We need funders to want to have access to data and to want to reward performance on those measures. If there’s money involved then researchers will follow.
Any time someone talks about a "system" using the language of currency there is a key question you have to ask: can this "value" be translated into real money? If it can’t then it is unlikely people will take it seriously. Currency has to be credible, it has to be taken seriously or it doesn’t work. How much is the cash in your pocket actually worth? Cash has to embody transferable value, and many of these schemes don’t provide anything more than basic barter.
But equally the measures of value, or of cost, have to be real. Confidence in the reality of community measures is crucial, and this is where I part company with David, because at the centre of his argument is what seems to me a massive hole.
“The Impact Factor, flawed though it may be, at least tries to measure something that directly affects career advancement–the quality and impact of one’s research results. It’s relevant because it has direct meaning toward determining the two keys to building a scientific career, jobs and funding.â€
The second half of this I agree with (but resent). But it depends absolutely on the first part being widely believed. And the first part simply isn’t true. The Thomson Reuters Journal Impact Factor does not try to measure the quality and impact of individual research results. TR are shouting this from the treetops at the moment. We know that it is at best an extremely poor measure of individual performance in practice. In economic terms, our dependence on the JIF is a bubble. And bubbles burst.
The reason people are working on metrics is that they figure replacing one rotten measure at the heart of the system with ones that are self-evidently technically superior should be easy. Of course this isn’t true. Changing culture, particularly reward culture, is very difficult. You have to tackle the self-reinforcement that these measures thrive on – and you need to work very carefully to allow the bubble to deflate in a controlled fashion.
There is one further point where I disagree with David. He asks a rhetorical question:
“Should a university deny tenure to a researcher who is a poor peer reviewer, even if he brings in millions of dollars in grants each year and does groundbreaking research? Should the NIH offer to fund poorly designed research proposals simply because the applicant is well-liked and does a good job interpreting the work of others?â€
It’s interesting that David even asks these questions, because the answers seem obvious, self-evident even. The answer, at least to the underlying question, is in both cases yes. The ultimate funders should fund people who excel at review even if they are poor at other parts of the enterprise. The work of review must be valued or it simply won’t be done. I have heard heads of university departments tell researchers to do less reviewing and write more grants. And I can tell you that in the current UK research funding climate review is well off the top of my priority list. If there is no support for reviewing then in the current economic climate we will see less of it done; if there is no space in our community for people who excel at reviewing then who will teach it? Or do we continue the pernicious myth that the successful scientist is a master of all of their trades? Aside from anything else, basic economics tells us that specialisation leads to efficiency gains, even when one of the specialists is a superior practitioner in both areas, as the sketch below illustrates. Shouldn’t we be seeking those efficiency gains?
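To make that last point concrete, here is a minimal sketch of the comparative-advantage argument. The numbers are entirely made up for illustration (the two researchers, their capacities, and the time splits are assumptions, not data): even when one researcher is better at both writing fundable proposals and reviewing, total output rises when each tilts towards the task they are relatively better at.

```python
# Toy comparative-advantage illustration with made-up numbers.
# Researcher A is better than B at *both* tasks, yet specialisation
# still raises the community's total output.

# (papers per year, reviews per year) if a researcher did only that task
capacity = {
    "A": {"papers": 4.0, "reviews": 8.0},   # strong at both
    "B": {"papers": 1.0, "reviews": 4.0},   # weaker at both
}

def output(allocation):
    """allocation[name] = fraction of time on papers (the rest on reviews)."""
    papers = sum(capacity[n]["papers"] * f for n, f in allocation.items())
    reviews = sum(capacity[n]["reviews"] * (1 - f) for n, f in allocation.items())
    return papers, reviews

# Everyone a generalist: half their time on each activity.
print("generalists:", output({"A": 0.5, "B": 0.5}))    # (2.5 papers, 6.0 reviews)

# Partial specialisation: A tilts towards papers, B reviews full time.
print("specialised:", output({"A": 0.75, "B": 0.0}))   # (3.0 papers, 6.0 reviews)
```

The specific numbers don’t matter; the shape of the result does. We don’t have to choose between funding the best grant-writers and valuing the best reviewers: a sensible division of labour can deliver more of both.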
Because the real question is not whether reviewers should be funded, by someone in some form, but what the relative size of that investment should be in a balanced portfolio that covers all the contributions needed across the scientific enterprise. The question is how we balance these activities. How do we tension them? And the answer to that is that we need a market. And to have a functioning market we need a functioning currency. That currency may just be money, but reputation can be converted, albeit not directly, into funds. Successfully hacking research reputation will make a big difference to more effective tensioning between different important and valid scientific research roles, and that’s why people are so interested in trying to do it.