
Metrics and Money

20 October 2010

David Crotty, over at the Scholarly Kitchen, has an interesting piece on metrics, arguing that many of them haven’t been thought through because they don’t give researchers any concrete motivation to care about them. Really he’s focused mainly on exchange mechanisms, means of persuading people that doing high quality review is worth their while by giving them something in exchange, but the argument extends to all sorts of metrics. Why would you care about any given measure if doing well on it doesn’t translate into more resources, time, or glory?

You might expect me to disagree with a lot of this but for the most part I don’t. Any credible metric has to be real; it has to mean something. It has to matter. This is why connecting funders, technologists, data holders, and yes, even publishers, is at the core of the proposal I’m working on at the moment. We need funders to want access to that data and to want to reward performance on the measures built from it. If there’s money involved then researchers will follow.

Any time someone talks about a “system” using the language of currency there is a key question you have to ask: can this “value” be translated into real money? If it can’t then it is unlikely people will take it seriously. Currency has to be credible, it has to be taken seriously, or it doesn’t work. How much is the cash in your pocket actually worth? Cash has to embody transferable value, and many of these schemes don’t provide anything more than basic barter.

But equally the measures of value, or of cost, have to be real. Confidence in the reality of community measures is crucial, and this is where I part company with David, because at the centre of his argument is what seems to me a massive hole.

“The Impact Factor, flawed though it may be, at least tries to measure something that directly affects career advancement–the quality and impact of one’s research results.  It’s relevant because it has direct meaning toward determining the two keys to building a scientific career, jobs and funding.”

The second half of this I agree with (but resent). But it depends absolutely on the first part being widely believed. And the first part simply isn’t true. The Thomson Reuters Journal Impact Factor does not try to measure the quality and impact of individual research results. TR are shouting this from the treetops at the moment. We know that in practice it is at best an extremely poor measure of individual performance. In economic terms, our dependence on the JIF is a bubble. And bubbles burst.

The reason people are working on metrics is that they figure replacing one rotten measure at the heart of the system with ones that are self-evidently technically superior should be easy. Of course this isn’t true. Changing culture, particularly reward culture, is very difficult. You have to tackle the self-reinforcement that these measures thrive on – and you need to work very carefully to allow the bubble to deflate in a controlled fashion.

There is one further point where I disagree with David. He asks a pair of rhetorical questions:

“Should a university deny tenure to a researcher who is a poor peer reviewer, even if he brings in millions of dollars in grants each year and does groundbreaking research?  Should the NIH offer to fund poorly designed research proposals simply because the applicant is well-liked and does a good job interpreting the work of others?”

It’s interesting that David even asks these questions, because the answers seem obvious, self-evident even. The answer, at least the answer to the underlying question, is in both cases yes. The ultimate funders should fund people who excel at review even if they are poor at other parts of the enterprise. The work of review must be valued or it simply won’t be done. I have heard heads of university departments tell researchers to do less reviewing and write more grants. And I can tell you that in the current UK research funding climate review is well off the top of my priority list. If there is no support for reviewing then in the current economic climate we will see less of it done; if there is no space in our community for people who excel at reviewing then who will teach it? Or do we continue the pernicious myth that the successful scientist is a master of all of their trades? Aside from anything else, basic economics tells us that specialisation leads to efficiency gains, even when one of the specialists is the superior practitioner in both areas. Shouldn’t we be seeking those efficiency gains?
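
To make that comparative advantage point concrete, with some purely illustrative numbers: suppose researcher A can produce four fundable results or review eight papers in a given period, while researcher B can manage two results or six reviews. A is better at both activities, but each review costs A half a result and costs B only a third of one. If the community needs six reviews done, having B specialise in review leaves A free to produce four results; splitting the reviewing evenly between them yields only about three and a half. The numbers are invented, but the logic is just the standard argument for specialisation.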

Because the real question is not whether reviewers should be funded, by someone in some form, but what the relative size of that investment should be in a balanced portfolio that covers all the contributions needed across the scientific enterprise. The question is how we balance these activities. How do we tension them? And the answer to that is that we need a market. And to have a functioning market we need a functioning currency. That currency may just be money, but reputation can be converted, albeit not directly, into funds. Successfully hacking research reputation would make a real difference to how effectively we tension the different important and valid roles in scientific research, and that’s why people are so interested in trying to do it.



6 Comments »

  • Bill said:

    “And the answer to that is that we need a market.”

    If better reviewing results in better papers, then perhaps an efficient market in article quality could work as a proxy (so long as reviewers could be connected to the papers they reviewed over time). Interestingly, the only way I know of to establish an efficient market in article quality is to use article level metrics in an overwhelmingly OA publishing environment…

  • Cameron Neylon said:

    Using article quality (or more generally reviewed research output object quality – or re-use) as a proxy market makes sense, but this goes back to needing reviewers to be openly connected to what they have reviewed, which seems a tough one to win. It also goes back to the question of whether you should require reviewers to “short” things they reject (or downgrade in a post-publication peer review world) or just to go long on objects they promote.

    There’s a good argument that says one of the big problems in the current system is that you can go short with close to no risk as a reviewer (and contained risk as an editor/publishing channel), yet the downsides of getting a positive review wrong can be more embarrassing. Again it’s a question of appropriate tensioning. Everyone makes mistakes, but the question is how much risk you should take – and what the balancing upsides are of getting it right. Having this tension would actually encourage more reviewing in and of itself, because you’d need to hold enough reviewed objects to keep a balanced portfolio.

  • David Crotty said:

    Cameron,
    Glad to see you and I are in agreement, at least on the main point of my post. Not surprising though, as your analysis and proposals are often some of the most well thought out ones, and in particular, you present ideas in the context of the real world, rather than relying on wishful thinking about how the world “should be” for implementation.

    A couple quick responses:
    On the Impact Factor: please don’t take my statement as any sort of endorsement of the Impact Factor. I think we both agree that it is terribly flawed in a wide variety of ways. And perhaps I phrased it poorly, but the point of that statement was that people care about IF because it has a direct impact on their career. That’s different from any made-up metric with no real world grounding.

    On those rhetorical questions:
    Sorry, can’t agree at all with you here. Do you really believe that a university should kick out a productive scientist, someone doing innovative and groundbreaking work, because they are a poor reviewer? Does that really improve the state of science or our body of knowledge? Sorry Dr. Einstein, your relativity theory was nice and all, but your peer reviewing stinks. Back to the patent office with you. And do you really think a funding body should pay for a poorly planned project that’s likely to fail just because the proposer is a good reviewer? Sorry we at the Damon Runyon Cancer Foundation didn’t cure cancer this year, but we had to give our limited funds to good peer reviewers rather than choosing the best research projects with the highest likelihood of adding to our knowledge of the disease.

    I do agree that review is important, and should be rewarded, but it must be done in context. As I’ve stated in the past, doing science is more important than talking about science. It doesn’t mean that the talking part is worthless, but they’re not on the same level. Take away the doing part and you’ve got nothing to talk about. The absolute priority for institutions and for funding agencies has to be the actual research, the actual discovery, the actual knowledge gained. The review of that knowledge is secondary, and any rewards or penalties must be on an appropriate level.

    As you note, spending time reviewing is often counterproductive to the goals of universities or funding agencies. Time spent reviewing is time spent not doing the things they want done. So does it make sense to ask them to pay for something that takes away from their achievement of their own goals? If I’m running a fund that’s trying to cure a disease, I’m looking to give money to scientists who might help cure that disease, not paying them to criticize the work of others. As you know, funding is incredibly tight these days, so asking agencies to be less efficient, to get less bang for their buck, may be a hard sell.

    I would like to see review performance rewarded, but I’m not sure it will be accepted on the same level as achievement. We’ve already seen something of a split between the reward system for research and the reward system for teaching, something likely as valuable as review (see here, here). Can you force a system that’s been built around rewarding research to accept these other activities as equally important to their goals?

  • Quick Links | A Blog Around The Clock said:

    […] Metrics and Money […]

  • Cameron Neylon said:

    Hi David, thanks for replying.

    I will admit to being surprised by the way you phrased the sentence about the impact factor, as I didn’t really believe you meant it the way that I read it – but it gave me the opportunity to make my point. I agree that the reality is that it does make a difference today, although this is changing. I stand by the point, though, that the only thing that makes the IF potent is the perception that it is used (possibly even an over-perception), and this both indicates that the situation is unstable and explains why people feel that just coming up with something new and better will change things.

    In terms of whether universities should sack poor reviewers I think we’re slightly at cross purposes. You asked a rhetorical question to which I agree the answer is obvious. Most university administrators are going to look at where their bread is buttered and are going to go for the high-earning researcher. My point was slightly different: that the _ultimate_ funders, who need to care for the health of the community, whether they be direct research funders, government, charities or industry, need to look to the longer term to ensure that all the activities required for a healthy research operation are being supported.

    You ask whether agencies should be asked to be less efficient, but actually the economics says that if they found people who were relatively better at (and presumably enjoyed) reviewing, and funded that activity directly, then their spending overall would be more efficient. This assumes that they are currently paying for that reviewing indirectly, which is true in some cases but not in others. Indeed my central point is that if we don’t tension these activities against each other, i.e. have a functioning market, then we have no way of knowing whether the balance of that spending is anywhere near right.

    So in your example of a cancer foundation, yes, they should be asking whether the research they are generating is getting proper review, because if it isn’t then they are wasting their money. They should also ask a lot of questions about whether their outputs are reproducible, useful, and actually generating clinical outcomes for their target beneficiaries. Generating papers and newspaper headlines doesn’t really cut it any more. Your second-to-last paragraph reads as though the review process isn’t important in curing that disease. But I’m pretty sure that neither of us believes that is true. The review process is critical to curing the disease – therefore it is part of a balanced portfolio for a research funder.

    In the end I think we’re just talking about degrees and balance here. Review has to be paid for in some form, and by someone. All I’m really arguing is that those costs need to be properly surfaced and acknowledged. It’s not that review is “equally important” but that its relative importance needs to be tested. Then we can look at what the right balance is and whether there are places where we can find efficiencies to bring the cost down.

    Your last point really goes to the heart of this. Can we force a system that is built around rewarding research (and I would say built around rewarding the prestige of research outputs rather than their actual usefulness) to accept other parts of the enterprise that are critical to its success as important? Well, if we don’t then we may as well pack up and go home, because a system that doesn’t recognise and value crucial inputs is a system heading for destruction.

  • Stephanie Cosgrove said:

    Mmmm. Reducing the dross would be a good thing. Probably impossible – but desirable none the less.