Creating a research community monoculture – just when we need diversity

This post is a follow-on from a random tweet that I sent a few weeks back in response to a query on Twitter from Lord Drayson, the UK’s Minister of State for Science and Innovation. I thought it might be worth expanding on the 140 characters I had to play with at the time, but it’s taken me a while to get to it. It builds on the ideas of a post from last year but is given a degree of urgency by the current changes in policy proposed by EPSRC.

Government money for research is limited, and comes from the pockets of taxpayers. It is incumbent on those of us who spend it to ensure that this investment generates maximum impact. Impact, for me, comes in two forms. First there is straightforward (although not straightforward to measure) economic impact: increases in competitiveness, standard of living, business opportunities, and social mobility, along with reductions in the burden of ill health and, hopefully at some point in the future, in environmental burden. The problem with economic impact is that it is almost impossible to measure in any meaningful way. The second area of impact is, at least on the surface, a little easier to track: the research outputs delivered. How efficiently do we turn money into science? Scratch beneath the surface and you rapidly realise that measurement is a nightmare, but we can at least look at where there are inefficiencies, where money is being wasted and lost from the pipeline before it can be spent on research effort.

The approach that is being explicitly adopted in the UK is to concentrate research in “centres of excellence” and to “focus research on areas where the UK leads” and where “they are relevant to the UK’s needs”. At one level this sounds like motherhood and apple pie. It makes sense in terms of infrastructure investment to focus research funding both geographically and in specific subject areas. But at another level it has the potential to completely undermine the UK’s history of research excellence.

There is a fundamental problem with trying to maximise the economic impact of research, and it is one that any commercial expert, or indeed politician, should find obvious. Markets are good at picking winners; committees are very bad at it. Using committees of scientists, with little or no experience of commercialising research outputs, is likely to be an unmitigated disaster. There is no question that some research leads to commercial outcomes, but to the best of my knowledge there is no evidence that anyone has ever had any success in picking the right projects in advance. The simple fact is that the biggest economic impact of research is in providing and supporting the diverse and skilled workforce that underpins a commercially responsive, high technology economy. To a very large extent it doesn’t actually matter what specific research you support as long as it is diverse. And you will probably generate exactly the same number of commercial outcomes by picking at random as you will by trying to pick winners.

The world, and the UK in particular, is facing severe challenges, both economic and environmental, for which there may be technological solutions. Indeed there is a real opportunity in the current economic climate to reboot the economy with low carbon technologies and at the same time to rebuild the information economy in a way that takes advantage of the tools the web provides, and in turn to use this to improve outcomes in health and social welfare and to develop new environmentally friendly processes and materials. The UK has great potential to lead these developments precisely because it has a diverse research community and a diverse, highly trained research and technology workforce. We are well placed to solve today’s problems with tomorrow’s technology.

Now let us return to the current UK policy proposals. These are to concentrate research, to reduce diversity, and to focus on areas of UK strength. How will those strengths be identified? No doubt by committee. Will they be forward-looking strengths? No, they will be whatever a bunch of old men, already selected by their conformance to a particular stereotype, i.e. the ones doing fundable research in fundable places, identify in a closed room. It is easy to identify the big challenges. It is not easy, perhaps not even possible, to identify the technological solutions that will eventually solve them. Not the currently most promising solutions, but the ones that will actually solve the problem five or ten years down the track.

As a thought experiment think back to what the UK’s research strengths and challenges were 20 years ago and imagine a world in which they were exclusively funded. It would be easy to argue that many of the UK’s current strengths simply wouldn’t even exist (web technology? biotechnology? polymer materials?). And that disciplines that have subsequently reduced in size or entirely disappeared would have been maintained at the cost of new innovation. Concentrating research in a few places, on a few subjects, will reduce diversity, leading to the loss of skills, and probably the loss of skilled people as researchers realise there is no future career for them in the UK. It will not provide the diverse and skilled workforce required to solve the problems we face today. Concentrating on current strengths, no matter how worthy, will lead to ossification and conservatism making UK research ultimately irrelevant on a world stage.

What we need now, more than ever, is a diverse and vibrant research community working on a wide range of problems, together with better communication tools that efficiently connect unexpected solutions to problems in different areas. This is not the usual argument for “blue skies research”, whatever that may be. It is an argument for using market forces to do what they are best at (picking the winners from a range of possible technologies) and for using the smart people currently employed in research positions at government expense to do what they are good at: research, and training new researchers. It is an argument for looking critically at the expenditure of government money in a holistic way and seriously considering radical change where money is being wasted. I have estimated in the past that the annual cost of failed grant proposals to the UK government is somewhere between £100M and £500M, a large sum of money in anybody’s books. More rigorous economic analysis of a Canadian government funding scheme has shown that the cost of preparing and refereeing the proposals (around $CAN40k per proposal) is more than the cost of giving every eligible applicant a support grant of $CAN30k. This is not just farcical, it is an offensive waste of taxpayers’ money.
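
To make the arithmetic concrete, here is a minimal sketch of that comparison. The two per-proposal figures come from the paragraph above; the scheme size and the break-even framing are my own illustrative assumptions, not details of the cited Canadian analysis.

```
# Sketch of the per-applicant arithmetic behind the Canadian example above.
# The two cost figures are from the post; the scheme size is purely illustrative.

prep_and_review_cost = 40_000   # CAD per proposal: applicant's time plus refereeing
baseline_grant = 30_000         # CAD: flat grant to every eligible applicant

n_applicants = 1_000            # hypothetical scheme size, for illustration only

competition_overhead = n_applicants * prep_and_review_cost   # CAD 40,000,000
fund_everyone = n_applicants * baseline_grant                # CAD 30,000,000

print(f"Overhead of running the competition: CAD {competition_overhead:,}")
print(f"Cost of a flat grant to everyone:    CAD {fund_everyone:,}")
# Whatever the scheme size, the overhead per applicant (40k) already exceeds
# the grant per applicant (30k) -- which is the point being made above.
```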

The funding and distribution of research money requires radical overhaul. I do not believe that simply providing more money is the solution. Frankly, we have had a lot more money; it makes life a little more comfortable if you are in the right places, but it has reduced the pressure to solve the underlying problems. We need responsive funding at a wide range of levels, funding that enables both bursts of research, the kind of instant collaboration that we know can work with little or no review, and large scale data gathering projects of strategic importance that need extensive and careful critical review before being approved. And we need mechanisms to tension these against each other. We need baseline funding to just let people get on with research, and we need access to larger sums where appropriate.

We need less bureaucracy, less direction from the top, and more direction from the sides, from the community, and not just the community of researchers. What we have at the moment are strategic initiatives announced by research councils that are around five years behind the leading edge, and which distort and constrain real innovation. Now we have ministers proposing to identify the UK’s research strengths. No doubt these will be five to ten years out of date, and they will almost certainly stifle those pockets of excellence that would grow in strength over the next decade. No one will ever agree on what tomorrow’s strengths will be. Much better would be to get on and find out.

Fantasy Science Funding: How do we get peer review of grant proposals to scale?

This post is both a follow-up to last week’s post on the costs of peer review and a response to Duncan Hull’s post of nine or so months ago proposing a game of “Fantasy Science Funding”. The game requires you to describe how you would distribute the funding of the BBSRC if you were a benign (or not so benign) dictator. The post and the discussion should be read bearing in mind my standard disclaimer.

Peer review is in crisis. Anyone who tells you otherwise either has their head in the sand or is trying to sell you something. Volumes are increasing and the quality of review is decreasing. Persuading scientists to take on refereeing at all is increasingly the major problem for those who commission it. This is a problem for peer reviewed publication, but the problems for the reviewing of funding applications are far worse.

For grant review, the problems that are already evident in scholarly publishing, fundamentally the increasing volume, are exacerbated by the fact that success rates for grants are falling and that successful grants are increasingly concentrated in the hands of a smaller number of people in a smaller number of places. Regardless of whether you agree with this process of concentrating grant funding, it creates a very significant perception problem. If your referees perceive that they have no chance of getting funding themselves, why on earth should they referee?

Is this really happening? Well, in the UK chemistry community last year there was an outcry when two EPSRC grant rounds in a row had success rates of 10% or lower. Bear in mind this was the success rate of grants that made it to panel, i.e. it is an upper bound, assuming no grants were removed at an earlier stage. As you can imagine there was significant hand wringing and a lot of jumping up and down, but what struck me were two statements I heard made. The first, from someone who had sat on one of the panels, was that it “raises the question of whether it is worth our time to attend panel meetings”. The second was the suggestion that the chemistry community could threaten to unilaterally withdraw from EPSRC peer review. These sentiments are now being repeated on UK mailing lists in response to EPSRC’s most recent changes to grant submission guidelines. Whether serious or not, credible or not, this shows that the compact of community contribution to the review process is perilously close to breaking down.

The research council response to this is to attempt to reduce the number of grant proposals, generally by threatening to block those who have a record of serial rejection. This will fail. With success rates as low as they are, and with successful grants concentrated in the hands of the few, most academics are serial failures. The only way departments can increase income is by increasing the volume and quality of grant applications. With little effective control over quality the focus will necessarily be on increasing volume. The only way research councils will control this is either by making applications a direct cost to departments, or by reducing the need of academics to apply.

The cost of refereeing is enormous and largely hidden, but it pales into insignificance compared to the cost of applying for grants. Low success rates make the application process an immense waste of departmental resources. The approximate average cost of running a UK academic for a year is £100,000. If you assume that each academic writes one grant per year and that this takes around two weeks of full time work, that amounts to roughly £4k per academic per year. If there are 100,000 academics in the UK, this is £400M, which with a 20% success rate means that £320M is spent on failed proposals each year. Even if those assumptions are generous, £100M is a reasonable ballpark figure.
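
A back-of-envelope version of that estimate is sketched below. Every input is one of the assumptions stated above, not official data; the text rounds the per-proposal cost to £4k and the total to £400M, while the sketch keeps the unrounded figures. Change the inputs to see how sensitive the headline number is.

```
# Back-of-envelope estimate of money spent writing failed UK grant proposals.
# All inputs are the assumptions from the surrounding text, not official data.

annual_cost_per_academic = 100_000     # GBP: rough full cost of one UK academic per year
weeks_per_proposal = 2                 # full-time weeks to write one proposal
proposals_per_academic_per_year = 1
n_academics = 100_000                  # assumed number of grant-writing UK academics
success_rate = 0.20

cost_per_proposal = annual_cost_per_academic * weeks_per_proposal / 52
total_writing_cost = cost_per_proposal * proposals_per_academic_per_year * n_academics
spent_on_failures = total_writing_cost * (1 - success_rate)

print(f"Cost per proposal:         £{cost_per_proposal:,.0f}")          # ~£3,800
print(f"Total spent writing:       £{total_writing_cost / 1e6:.0f}M")   # ~£385M
print(f"Spent on failed proposals: £{spent_on_failures / 1e6:.0f}M")    # ~£308M
```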

In more direct terms this means that academics who are selected for their ability to do research, are being taken away from what they are good at to play a game which they will on average lose four times out of five. It would be a much more effective use of government funding to have those people actually doing research.

So, this being a game of Fantasy Funding, how would I spend the money? Well, rather than discuss my biases about what science is important, which are probably not very interesting, it is perhaps more useful to think about how the system might be changed to reduce this wastage. And there is a simple, if somewhat radical, way of doing this.

Cut the budget in two and distribute half of it directly to academics on a pro-rata basis.

By letting researchers focus on getting on with research, you reduce their need for funding and reduce the burden on the system. The bar for competitive funding would naturally be higher, but everyone remains in with a chance, which reduces the risk of referees dropping out through disenchantment with the process. More importantly, you enable innovative research by allowing it to keep ticking over, and in particular you enable a new type of peer review.

If you look at the amounts of money involved, say a few hundred million pounds for BBSRC, and divide that up amongst all bioscience academics, you end up with figures of £5-20K per academic per year. Not enough to hire a postdoc, but just about enough to run a PhD student (at least at UK rates). But what if you put that together with the money from a few other academics? If you can convince your peers that you have an interesting and fun idea, then you can pool funds together. Or perhaps share a technician between two groups so that you don’t lose the entire group memory every time a student leaves. Effective collaboration will lead to a win on all sides.
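
As a rough illustration of how the numbers could work out, here is a sketch of the pro-rata split. The budget figure and the size of the eligible bioscience community are assumed round numbers of mine, not BBSRC data; the halving follows the proposal above.

```
# Illustrative pro-rata split of half a research council budget.
# Both input figures are assumptions for the sake of the example.

assumed_bbsrc_budget = 400_000_000       # GBP per year: "a few hundred million pounds"
half_for_baseline_awards = assumed_bbsrc_budget / 2
assumed_n_bioscience_academics = 15_000  # assumed eligible pool

baseline_award = half_for_baseline_awards / assumed_n_bioscience_academics
print(f"Baseline award per academic:  £{baseline_award:,.0f} per year")   # ~£13,000

# Pooling across a hypothetical group of four collaborators
pooled = 4 * baseline_award
print(f"Pooled across four academics: £{pooled:,.0f} per year")           # ~£53,000
```

Under these assumptions the individual award sits within the £5-20K range mentioned above, and a small pool of collaborators gets to roughly the cost of a studentship or a shared technician.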

If these arguments sound familiar it is because they are not so different to the notion of 20% time, best known as a Google policy of having all staff spend some time on personal projects. By supporting low level innovation and enabling small scale judging of ideas and pooling of resources it is possible to enable bottom up innovation of precisely the kind that is stifled by top down peer review.

No doubt there would be many unintended consequences, and probably a lot of wastage, but in amongst that I wouldn’t bet against the occasional brilliant innovation of the kind that is virtually impossible in the current climate.

What is clear is that doing nothing is not an option. Look at that EPSRC statement again. People with a long term success rate below 25% will be blocked… I just checked my success rate over the past ten years: about 15% by number of grants, 70% by value, but that is dominated by one large grant. The current success rate at the chemistry panel is around 15%. And that is skewed towards a limited number of people and places.

The system of peer review relies absolutely on the community’s agreement to contribute and on some level of faith in the outcome. It relies absolutely on trust. That trust is perilously close to breaking down.