A tale of two analysts

Understanding how a process looks from outside our own echo chamber can be useful. It helps to calibrate and sanity check our own responses. It adds an external perspective and at its best can save us from our own overly fixed ideas. In the case of the ongoing Elsevier Boycott we even have a perspective that comes from two opposed directions. The two analyst/brokerage firms Bernstein and Exane BNP Paribas have recently published reports on their view of how recent events should affect the view of those investing in Reed Elsevier. In the weeks following the start of the boycott Elsevier’s stock price dropped – was this an indication of serious structural problems in the business revealed by the boycott (the Bernstein view) or just a short-term overreaction that provides an opportunity for a quick profit (the Exane view)?

Claudio Aspesi from Bernstein has been negative on Elsevier stock for some time [see Stephen Curry’s post for links and the most recent report], citing the structural problem that the company is stuck in a cycle of publishing more, losing subscriptions, charging more, and managing to squeeze out a little more profit for shareholders in each cycle. Aspesi has been saying for some time that this simply can’t go on. He also makes the link between the boycott and a potentially increased willingness of libraries to drop subscriptions or abandon big deals altogether. He is particularly scathing about the response to the boycott, arguing that Elsevier is continuing to estrange the researcher community and that this must ultimately be disastrous. In particular the report focuses on the claims that management have made about their ability to shift the cost base away from libraries and onto researchers, based on “excellent relations with researchers”.

The Exane view on the other hand is that this is a storm in a teacup [summary at John Baez’s G+]. They point to the relatively small number of researchers signing up to the boycott, particularly in the context of the much larger numbers involved in similar pledges in 2001 and 2007. In doing this I feel they are missing the point – the environment of those earlier boycotts was entirely different, both in terms of disciplines and targeting – but an objective observer might well view me as biased.

I do, however, find this report complacent on the details – claiming as it does that the “low take-up of this petition is a sign of the scientific community’s improving perception of Elsevier”, an indication of a lack of real data on researcher sentiment. They appear to have bought the Elsevier line on “excellent relations” uncritically – and what I see on the ground is barely suppressed fury that is increasingly boiling over. The report also focuses on OA as a threat – not an opportunity – for Elsevier, a view which would certainly lead me to discount their long-term views on the company’s stock price. For me their judgement is brought even further into question by the following:

“In our DCF terminal value, we capture the Open Access risk by assuming the pricing models flip to Gold Open Access with average revenue per article of USD3,000. Even on that assumption, we find value in the shares.”

Pricing the risk at this level is risible. The notion that Elsevier could flip to an author-pays model by charging US$3,000 an article is absurd. The poor take-up of Elsevier’s current options and the massive growth of PLoS ONE and its clones at half this price set a clear price point, and one that is likely a high-water mark for journal APCs. If there is value in the shares at $3,000 then I can’t help but feel there won’t be very much at a likely end-point price well below $1,000.

However both reports appear to me to fail to recognize one very important aspect of the situation – its volatility. As I understand it these firms make their names by being right when they take positions away from the consensus, so they have a tendency to report their views as certainties. In this case I think the situation could swing either way very suddenly. As the Bernstein report notes, the defection of editorial staff from Elsevier journals is the most significant risk. A single board defection from a middle- to high-ranking journal – or a signal from a major society journal that it will not renew an Elsevier contract – could very easily start a landslide that ends Elsevier’s dominance as the largest research publisher. Equally, nothing much could happen, which would likely lead to a short-term rally in stock prices. But no-one is in a position to guess how this is going to play out.

In the long term I side with Aspesi – I see nothing in the overall tenor of Elsevier’s position statements that suggests to me that they really understand either the research community, the environment, or how it is changing. Their pricing model for hybrid options seems almost designed to fail. As mandates strengthen it appears the company is likely to continue to fight them rather than adapt. But to accept my analysis you need to believe my view that the subscription business model is no longer fit for purpose.

What this shows, more than anything else, is that the place where the battle for change will ultimately be fought out is the stock market. While Elsevier continues to tell its shareholders that it can deliver continuing profit growth from scholarly publishing with a subscription business model, it will be trapped into defending that business model against all threats. The Research Works Act is a part of that fight – as will be attempts to block simple and global mandates by funders on researchers in other places. While the shareholders believe that the status quo can continue, the senior management of the company is trapped by a legacy mindset. Until shareholders accept that the company needs to take a short-term haircut, the real investment required for change seems unlikely. And I don’t mean a few million here or there. I mean a full year’s profits ploughed back into the company over a few years to allow for root-and-branch change.

The irony is that large-scale change requires the investors to get spooked. For that to happen something has to go very publicly wrong. The uproar over the support of SOPA and the RWA is not, yet, enough to convince the analysts beyond Aspesi that something is seriously wrong. It is an interesting question what would be. My sense is that nothing big enough will come along soon enough and that those structural issues will gradually come into play, leading to a long-term decline. It may be that we are very near “Peak Elsevier”. Your mileage, of course, may vary.

In case it is not obvious, I am not competent to offer financial or investment advice and no-one should view the preceding as any form of such.


The parable of the garage: Why the business model shift is so hard

[Image via Wikipedia: an auto mechanic works on a rally car]

Mike Taylor has a parable on the Guardian Blog about research communication and I thought it might be useful to share one that I have been using in talks recently. For me it illustrates just how silly the situation is, and how hard it is to break out of the mindset of renting access to content for the incumbent publishers. It also, perhaps, has a happier ending.

Imagine a world very similar to our own. People buy cars, they fill them with fuel, they pay road tax, and these things largely work as well as they do in our own world. There is just one difference: when a car needs its annual service it is taken to a garage – just as we do – for its mechanical checkup and maintenance, but in return for the service the car is then gifted to the mechanic, who in turn provides it back to the owner for a rental fee.

Some choose to do their own servicing, or form clubs where they can work together to help service each other’s cars, but this is both hard work and, to be frank, a little obsessive and odd. Most people are perfectly happy to hand over the keys and then rent their car back. It works just fine. The trouble is that society is changing: there is an increase in public transport, the mechanics are worried about their future, and the users seem keen to do new and strange things with the cars. They want to use them for work purposes, they want to loan them to friends, and in some cases they even want to use them to teach others to drive – possibly even for money.

Now for the mechanic this is a concern on two levels. First, they are uncertain about their future as the world seems to be changing pretty fast. How can they provide certainty for themselves? Second, all these new uses seem to have the potential to make money for other people. That hardly seems fair, and the mechanics want a slice of that income, derived as it is from their cars. So looking closely at their existing contracts they identify that the existing agreements only provide for personal use. No mention is made of work use, certainly not of lending to others, and absolutely not of teaching.

For the garage, in this uncertain world, this is a godsend. Here is a whole set of new income streams. They can provide for the users to do all these new things, they have a diversified income stream, and everyone is happy! They could call it “Universal Uses” – a menu of options that car users can select from according to their needs and resources. Everyone will understand that this is a fair exchange. The cars are potentially generating more money and everyone gets a share of it, both the users and the real owners, the mechanics.

Unfortunately the car users aren’t so happy. They object to paying extra. After all, they feel that the garage is already recouping the costs of doing the service and making a healthy profit, so why does it need more? Having to negotiate each new use is a real pain in the backside, and the fine print seems to be so fine that every slight variation requires a new negotiation and a new payment. Given the revolution in the possible uses they might want to be putting their cars to, isn’t this just slowing down progress? Many of them even threaten to do their own servicing.

The problem for the garages is that they face a need for new equipment and staff training. Each time they see a new use that they don’t charge for, they see a lost sales opportunity. They spend money on getting the best lawyers to draw up new agreements, and make concessions on one use to try and shore up the market for another. At every stage there’s a need to pin everything down, lock down the cars, and ensure they can’t be used for unlicensed purposes, all of which costs more money, leading to a greater need to focus on different possibilities for charging. And every time they do this it puts them more and more at odds with their customers. But they’re so focussed on a world view in which they need to charge for every possible different use of “their” cars that they can’t see a way out beyond identifying each new possible use as it comes up and pinning it to the wall with a new contract, a new charge, and new limitations to prevent any unexpected new opportunities for income being lost.

But things are changing. There’s a couple of radical new businesses down the road, BMC Motors and PLoS Garages. They do things differently. They charge up front for the maintenance and service but then allow the cars to be used for any purpose whatsoever. There’s a lot of scepticism – will people really pay for a service up front? How can people be sure that the service is any good? After all, if the garage already has the money by the time you get your car back, what incentive does it have to make sure the car keeps working? But there’s enough aggravation for a few people to start using them.

And gradually the view starts to shift. Where there is good service people want to come back with their new cars – they discover entirely new possibilities of use because they are free to experiment, earn more money, buy more cars. The idea spreads and there is a slow but distinct shift – the whole economy gets a boost as all of the licensing costs simply drop out of the system. But the thing that actually drives the change? It’s all those people who just got sick of having to go back to the garage every time they wanted to do something new. In the end the irritation and waste of time in negotiating every new use just isn’t worth their time and effort. Paying up front is clean, clear, and simple. And it lets everyone get on with the things they really want to do.

 


Network Enabled Research: Maximise scale and connectivity, minimise friction

[Image via Wikipedia: BBN Technologies TCP/IP internet map, early 1986]

Prior to all the nonsense with the Research Works Act, I had been having a discussion with Heather Morrison about licenses and Open Access, and peripherally about the principle of requiring specific licenses of authors. I realized then that I needed to lay out the background thinking that leads me to where I am. The path that leads me here is one built on a technical understanding of how networks function and what their capacity can be. This builds heavily on ideas I have taken from (in no particular order) Jon Udell, Jonathan Zittrain, Michael Nielsen, Clay Shirky, Tim O’Reilly, Danah Boyd, and John Wilbanks, among many others. Nothing much here is new but it remains something that very few people really get. Ironically the debate over the Research Works Act is what helped this narrative crystallise. This should be read as a contribution to Heather’s suggested “Articulating the Commons” series.

A pragmatic perspective

I am at heart a pragmatist. I want to see outcomes, I want to see evidence to support the decisions we make about how to get outcomes. I am happy to compromise, even to take tactical steps in the wrong direction if they ultimately help us to get where we need to be. In the case of publicly funded research we need to ensure that the public investment in research is made in such a way that it maximizes those outcomes. We may not agree currently on how to prioritize those outcomes, or the timeframe they occur on. We may not even agree that we can know how best to invest. But we can agree on the principle that public money should be effectively invested.

Ultimately the wider global public is for the most part convinced that research is something worth investing in, but in turn they expect to see outcomes of that research: jobs, economic activity, excitement, prestige, better public health, improved standards of living. The wider public are remarkably sophisticated when it comes to understanding that research may take a long time to bear fruit. But they are not particularly interested in papers. And when they become aware of academia’s obsession with papers they tend to be deeply unimpressed. We ignore that at our peril.

So it is important that when we think about the way we do research, we understand the mechanisms and the processes that lead to outcomes. Even if we can’t predict exactly where outcomes will spring from (and I firmly believe that we cannot), that does not mean that we can avoid the responsibility of thoughtfully designing our systems so as to maximize the potential for innovation. The fact that we cannot, literally cannot under our current understanding of physics, follow the path of an electron through a circuit does not mean that we cannot build circuits with predictable overall behaviour. You simply design the system at a different level.

The assumptions underlying research communication have changed

So why are we having this conversation? And why now? What is it about today’s world that is so different? The answer, of course, is the internet. Our underlying communications and information infrastructure is arguably undergoing its biggest change since the development of Gutenberg’s press. Like all new communication networks – SMS, fixed telephones, the telegraph, the railways, and writing itself – the internet doesn’t just change how well we can do things, it qualitatively changes what we can do. To give a seemingly trivial example, the expectations and possibilities of a society with mobile telephones are qualitatively different, and their introduction has changed the way we behave and expect others to behave. The internet is a network on a scale, and with connectivity, that we have never had before. The potential change in our capacity as individuals, communities, and societies is therefore immense.

Why do networks change things? Before a network technology spreads you can imagine people, largely separated from each other, unable to communicate in this new way. As you start to make connections nothing much really happens: a few small groups can start to communicate in this new way, but that just means they can do a few things a bit better. But as more connections form, suddenly something profound happens. There comes a point where there is a transition – where suddenly nearly everyone is connected. For the physical scientists this is in fact a phase transition and can display extreme cooperativity – a sudden break where the whole system crystallizes into a new state.
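This kind of threshold behaviour is easy to see in a toy model. The sketch below is my own illustration, not anything from the sources discussed here: it builds random networks of 20,000 nodes at increasing average connectivity and measures the size of the largest connected cluster. The jump once the average number of connections per node passes one is the phase transition described above.

```python
import random

def largest_component_fraction(n, avg_degree, seed=0):
    """Fraction of n nodes in the largest cluster of a random
    network where each node has avg_degree connections on average."""
    rng = random.Random(seed)
    parent = list(range(n))  # union-find structure for tracking clusters

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    # add avg_degree * n / 2 random connections, merging clusters as we go
    for _ in range(int(avg_degree * n / 2)):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:
            parent[a] = b

    # count how many nodes end up in each cluster
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

for k in (0.5, 1.0, 1.5, 3.0):
    frac = largest_component_fraction(20000, k)
    print(f"average connections per node {k}: largest cluster {frac:.1%}")
```

Below one connection per node the largest cluster is a vanishing fraction of the whole; above it, a single giant cluster rapidly absorbs nearly everyone – the “sudden break” where the system crystallizes into a new state.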

At this point the whole is suddenly greater than the sum of its parts. Suddenly there is the possibility of coordination, of distribution of tasks, that was simply not possible before. The internet simply does this better than any other network we have ever had. It is better for a range of reasons but the key ones are: its immense scale – connecting more people, and now machines, than any previous network; its connectivity – the internet is incredibly densely connected, essentially enabling any computer to speak to any other computer globally; and its lack of friction – transfer of information is very low cost, essentially zero compared to previous technologies, and is very, very easy. Anyone with a web browser can point and click and be a part of that transfer.

What does this mean for research?

So if the internet and the web bring new capacity, where is the evidence that this is making a difference? If we have fundamentally new capacity where are the examples of that being exploited? I will give two examples, both very familiar to many people now, but ones that illustrate what can be achieved.

In late January 2009 Tim Gowers, a Fields medalist and arguably one of the world’s greatest living mathematicians, posed a question: could a group of mathematicians working together be better at solving a problem than one working alone? He suggested a problem, one that he had an idea of how to solve but felt was too challenging to tackle on his own. He then started to hedge his bets, stating:

“It is not the case that the aim of the project is [to solve the problem but rather it is to see whether the proposed approach was viable] I think that the chances of success even for this more modest aim are substantially less than 100%.”

A loose collection of interested parties, some world-leading mathematicians, others interested but less expert, started to work on the problem. Six weeks later Gowers announced that he believed the problem solved:

“I hereby state that I am basically sure that the problem is solved (though not in the way originally envisaged).”

In six weeks an unplanned assortment of contributors had solved a problem that a world-leading mathematician had thought both interesting and too hard. And they had solved it by a route other than the one he had originally proposed. Gowers commented:

“It feels as though this is to normal research as driving is to pushing a car.”

For one of the world’s great mathematicians, there was a qualitative difference in what was possible when a group of people with the appropriate expertise were connected via a network through which they could easily and effectively transmit ideas, comments, and proposals. Three key messages emerge: the scale of the network was sufficient to bring the required resources to bear; the connectivity of the network was sufficient that work could be divided effectively and rapidly; and there was little friction in transferring ideas.

The Galaxy Zoo project arose out of a different kind of problem at a different kind of scale. One means of testing theories of the history and structure of the universe is to look at the numbers and types of different categories of galaxy in the sky. Images of the sky are collected and made freely available to the community. Researchers then categorize galaxies by hand to build up data sets that allow them to test theories. An experienced researcher could perhaps classify a hundred galaxies in a day. A paper might require a statistical sample of around 10,000 galaxy classifications to get past peer review. One truly heroic student classified 50,000 galaxies within their PhD, declaring at the end that they would never classify another again.

However problems were emerging. It was becoming clear that the statistical power offered by even 10,000 galaxies was not enough. One group would get different results to another. More classifications were required. Data wasn’t the problem: the Sloan Digital Sky Survey had a million galaxy images. But computer-based image categorization wasn’t up to the job. The solution? Build a network – in this case a network of human participants willing to contribute by categorizing the galaxies. Several hundred thousand people classified the million images several times over in a matter of months. Again the key messages apply: the scale of the network – both the number of images and the number of participants; the connectivity of the network – the internet made it easy for people to connect and participate; and a lack of friction – sending images one way and a simple classification back the other was easy. Making the website easy, even fun, to use was a critical part of the success.
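The back-of-envelope arithmetic makes the change of scale stark. In the sketch below only the hundred-classifications-a-day expert rate and the million images come from the account above; the redundancy, volunteer numbers, and volunteer rate are illustrative assumptions of mine:

```python
# Numbers from the account above
images = 1_000_000            # Sloan Digital Sky Survey galaxy images
expert_rate = 100             # classifications per researcher per day

# Illustrative assumptions (not figures from the project)
redundancy = 5                # assumed classifications needed per image
volunteers = 200_000          # "several hundred thousand" participants
volunteer_rate = 30           # assumed casual classifications per day

total = images * redundancy
solo_years = total / expert_rate / 365
crowd_days = total / (volunteers * volunteer_rate)

print(f"one researcher working alone: about {solo_years:.0f} years")
print(f"the volunteer network:        about {crowd_days:.1f} days")
```

Even with generous assumptions about an individual’s stamina, a task measured in careers for one researcher becomes a matter of days of aggregate effort – spread, in practice, over a few months – for the network.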

Galaxy Zoo changed the scale of this kind of research. It provided a statistical power that was unheard of and made it possible to ask fundamentally new types of questions. It also enabled fundamentally new types of people to play an effective role in the research: schoolchildren, teachers, full-time parents. It enabled qualitatively different research to take place.

So why hasn’t the future arrived then?

These are exciting stories, but they remain just that. Sure, I can multiply examples, but they are still limited. We haven’t yet taken real advantage of the possibilities. There are lots of reasons for this but the fundamental one is inertia. People within the system are, for the most part, pretty happy with how it works. They don’t want to rock the boat too much.

But there is a group of people who are starting to be interested in rocking the boat: the funders, the patient groups, that global public who want to see outcomes. The thought process hasn’t worked through yet, but when it does they will all be asking one question: “How are you building networks to enable research?” The question may come in many forms – “How are you maximizing your research impact?” – “What are you doing to ensure the commercialization of your research?” – “Where is your research being used?” – but they all really mean the same thing. How are you working to make sure that the outputs of your research are going into the biggest, most connected, lowest-friction network that they possibly can?

As service providers, all of those who work in this industry – and I mean all, from the researchers to the administrators, to the publishers, to the librarians – will need to have an answer. The surprising thing is that it’s actually very easy. The web makes building and exploiting networks easier than it has ever been because it is a network infrastructure. It has scale: billions of people, billions of computers, exabytes of information resources, exaflops of computational resources. It has connectivity on a scale that is literally unimaginable – the human mind simply can’t conceive of the number of connections the web has. It is incredibly low in friction – the cost of information transfer is in most cases so close to zero as to make no difference.

Service requirements

To exploit the potential of the network all we need to do is get as much material online as fast as we can. We need to connect it up, to make it discoverable, to make sure that people can find and understand and use it. And we need to ensure that once found those resources can be easily transferred, shared, and used. And used in any way – at network scale the system is designed to ensure that resources get used in unexpected ways. At scale you can have serendipity by design, not by blind luck.

The problem arises with the systems we have in place to get material online. The raw material of science is often not in a state where putting it online is immediately useful. It needs checking, formatting, testing, indexing. All of this requires real work, and real money. So we need services to do this, and we need to be prepared to pay for those services. The trouble is our current system has this backwards. We don’t pay directly for those services, so those costs have to be recouped somehow. And the current set of service providers do that by producing the product that we really need and want and then crippling it.

Currently we take raw science and, through a collaborative process between researchers and publishers, generate a communication product, generally a research paper, which is what most of the community holds as the standard means by which they wish to receive information. Because the publishers receive no direct recompense for their contribution they need to recover those costs by other means. They do this by artificially introducing friction and then charging to remove it.

This is a bad idea on several levels. First, it means the product we get doesn’t have the maximum impact it could, because it’s not embedded in the largest possible network. From a business perspective it creates risks: publishers have to invest up front and then recoup money later, rather than being confident that expenditure and cash flow are coupled. This means, for instance, that if there is a sudden rise (or fall) in the number of submissions there is no guarantee that cash flows or costs will scale with that change. But the real problem is that it distorts the market. Because on the researcher side we don’t pay for the product of effective communication, we don’t pay much attention to what we’re getting. On the publisher side it drives a focus on surface and presentation, because that enhances the product in the current purchaser’s eyes, rather than a ruthless focus on production costs and shareability.

Network Ready Research Communication

If we care about taking advantage of the web and internet for research then we must tackle the building of scholarly communication networks. These networks will have the critical characteristics described above: scale and a lack of friction. The question is how we go about building them. In practice we actually already have a network at huge scale – the web and the internet do that job for us, connecting essentially all professional researchers and a large proportion of the interested public. There is work to be done on expanding the reach of the network but this is a global development goal, not something specific to research.

So if we already have the network then what is the problem? The issue lies in the second characteristic – friction. Our current systems are actually designed to create friction. Before the internet was in place our network was formed of a distribution system involving trucks and paper – reducing costs to reasonable levels meant charging for that distribution process. Today those distribution costs have fallen to as near zero as makes no difference, yet we retain systems that add friction unnecessarily: slow review processes, charging for access, formats and discovery tools that are no longer fit for purpose.

What we need to do is focus on the process of taking the research that we do and converting it into a Network Ready form. That is, we need access to services that take our research and make it ready to exploit our network infrastructure – or we need to do it ourselves. What does “Network Ready” mean? A piece of Network Ready Research will be modular and easily discoverable; it will present different facets that allow people and systems to use it in a wide variety of ways; it will be compatible with the widest range of systems; and above all it will be easily shareable. Not just copyable or pasteable, but easily shared through multiple systems while carrying with it all the context required to make use of it, all the connections that will allow a user to dive deeper into its component parts.

Network Ready Research will be interoperable, socially, technically, and legally with the rest of the network. The network is more than just technical infrastructure. It is also built up from the social connections, a shared understanding of the parameters of re-use, and a compatible system of checks and balances. The network is the shared set of technical and social connections that together enable new connections to be made. Network Ready Research will move freely across that, building new connections as it goes, able to act as both connecting edge and connected node in different contexts.

Building and strengthening the network

If you believe the above, as I do, then you see a potential for us to qualitatively change our capacity as a society to innovate, understand our world, and help to make it a better place. That potential will be best realized by building the largest possible, most effective, and lowest friction network possible. A networked commons in which ideas and data, concepts and expertise can be most easily shared, and can most easily find the place where they can do the most good.

Therefore the highest priority is building this network, making its parts and components interoperable, and making it as easy as possible to connect up networks that already exist. For an agency that funds research and seeks to ensure that research makes a difference, the only course of action is to place the outputs of that research where they are most accessible on the network. In blunt terms that means three things: free at the point of access, technically interoperable with as many systems as possible, and free to use for any purpose. The key point is that at network scale the most important uses are statistically likely to be unexpected ones. We know we can’t predict the uses, or even the success, of much research. That means we must position it so it can be used in unexpected ways.

Ultimately, the bigger the commons, the bigger the network, the better. And the more interoperable it is, and the wider the range of permitted uses, the better. That ultimately is why I argue for liberal licences and for the exclusion of non-commercial terms. It is why I use ccZero on this blog and for software that I write where I can. For me, the risk of commercial enclosure is so much smaller than the risk of not building the right networks, of creating fragmented incompatible networks, of ultimately not being able to solve the crises we face today in time to do any good, that the course of action is clear. At the same time we need to build up the social interoperability of the network, to call out bad behavior and perhaps in some cases to isolate its perpetrators, but we need to find ways of doing this that don’t damage the connectivity and freedom of movement on the network. Legal tools are useful for assuring users of interoperability and of their rights; beyond that they just become a source of friction. Social tools are a more viable route for encouraging desirable behaviour.

The priority has to be achieving scale and lowering friction. If we can do this then we have the potential to create a qualitative jump in our research capacity on a scale not seen since the 18th century and perhaps never. And it certainly feels like we need it.


Response to the RFI on Public Access to Research Communications

Have you written your response to the OSTP RFIs yet? If not, why not? This is amongst the best opportunities in years to tell the U.S. government directly how important Open Access to scientific publications is and how to start moving to a much more data-centric research process. You’d better believe that the forces of stasis, inertia, and vested interests are getting their responses in. They need to be answered.

I’ve written mine on public access and you can read and comment on it here. I will submit it tomorrow, just ahead of the deadline, but in the meantime any comments are welcome. It expands on many of the same issues, specifically reconfiguring the debate on access away from IP and towards services, that I have discussed in my recent posts on the Research Works Act.


IP Contributions to Scientific Papers by Publishers: An open letter to Reps Maloney and Issa

Dear Representatives Maloney and Issa,

I am writing to commend your strong commitment to the recognition of intellectual property contributions to research communication. As we move to a modern knowledge economy, supported by the technical capacity of the internet, it is crucial that we have clarity on the ownership of intellectual property arising from the federal investment in research. For the knowledge economy to work effectively it is crucial that all players receive fair recompense for the contribution of intellectual property that they make and the services that they provide.

As a researcher I like to base my work on solid data, so I thought it might interest you to have some quantification of the level of IP contribution that publishers make to the substance of scientific papers. I have focussed on the final submitted version of papers after peer review, as this is the version around which the discussion of mandates for deposition in repositories revolves. This also has the advantage of separating the typesetting and the copyright in layout, clearly the property of the publishers, from the intellectual substance of the research.

Contribution of IP to the final (post peer review) submitted versions of papers

Methodology: I examined the final submitted version (i.e. the version accepted for publication) of the ten most recent research papers on which I was an author, along with the referee and editorial comments received from the publisher. For each paper I examined the text of the final submitted version and the diagrams and figures. As the only IP of significance in this case is copyright, the specific contributions searched for were text or elements of figures contributed by the publisher that satisfied the requirements for obtaining copyright. Figures re-used from other publications (where the copyright had been transferred to the other publisher and permission had been obtained to republish) were not included, as these were considered “old IP” that did not relate to new IP embodied in the specific paper under consideration. The text and figures were searched for specific creative contributions from the publisher and these were quantified for each paper.

Results: The contribution of IP by publishers to the final submitted versions of these ten papers, after peer review had been completed, was zero. Zip. Nada. Zilch. Not one single word, line, or graphical element was contributed by the publisher or by the editor acting as their agent. A small number of single words, or forms of expression, were found that were contributed by external peer reviewers. However, as these peer reviewers do not sign over copyright to the publisher and are not paid, this contribution cannot be considered work for hire and any copyright resides with the original reviewers.

Limitations: This is a small and arguably biased study based on the publications I have to hand. I recommend that other researchers examine their own oeuvre and publish similar analyses so that the effects of discipline, age, and venue of publication can be examined. Following such analysis I ask that researchers provide the data via Twitter using the hashtag #publisheripcontrib, where I will aggregate it and republish.

Data availability: I regret that the original submissions cannot be provided, as the copyright in these articles was transferred to the publishers after acceptance for publication. I cannot provide the editorial reports, as these contain material from the publishers for which I do not have re-distribution rights.

The IP argument is sterile and unproductive. We need to discuss services.

The analysis above shows, at its core, how unhelpful it is to frame this argument around IP. The fact that publishers do not contribute IP is really not relevant. Publishers do contribute services that are crucial to the current system of research dissemination via peer-reviewed papers: the provision of infrastructure, the management of the peer review process, dissemination, and indexing. Without these services papers would not be published, and it is therefore clear that these services have to be paid for. What we should be discussing is how best to pay for those services, how to create a sustainable marketplace in which they can be offered, and what level of service the federal government expects for the money it spends.

There is a problem with this. We currently pay for these services in a convoluted fashion that is the result of historical developments. Rather than paying up front for publication services, we give away the intellectual property in our papers in exchange for publication. The U.S. federal and state governments then pay for these publication services indirectly by funding libraries to buy access back to our own work. This model made sense when the papers were physically on paper; distribution, aggregation, and printing were major components of the cost. In that world a demand-side business model worked well and was appropriate.

In the current world the costs of dissemination and provision of access are as near to zero as makes no difference. The major costs are in the peer review process and in preparing a version of the paper that can be made accessible online. That is, we have moved from a world where the incremental cost of disseminating each copy was dominant to one where the first copy costs dominate and the incremental costs of dissemination thereafter are negligible. We must therefore be clear that we are paying for the important costs of the services required to generate that first web-accessible copy, not supporting unnecessary incremental costs. A functioning market requires, as discussed above, clarity on what is being paid for.

In a service-based model the whole issue of IP simply goes away. It is clear that the service we would wish to pay for is one that generates a research communication product providing appropriate levels of quality assurance while being as widely accessible, and available for as wide a range of uses, as possible. This ensures that the outputs of the most recent research are available to other researchers, to members of the public, to patients, to doctors, to entrepreneurs and technical innovators, and not least to elected representatives, to support informed policy making and legislation. In a service-based world there is no logic in artificially reducing access, because we pay for the service of publication and the full first copy costs are covered by the purchase of that service.

Thus, when we abandon the limited and sterile argument about intellectual property and focus instead on service provision, we move from an argument no one can win to a framework in which all players are suitably recompensed for their efforts and contributions, whether or not those contributions generate IP in the legal sense, while optimising the potential for the public investment in research to be fully exploited.

HR3699 prohibits federal agencies from supporting publishers in moving to a transparent service-based model

The most effective means of moving to a service-based business model would be for U.S. federal agencies, as the major funders of global research, to work with publishers to assure them that money will be available to support publication services for federally funded researchers. This will require some money to be put aside. The UK’s Wellcome Trust estimates that it expects to spend approximately 1.5% of total research funding on publication services. This is a significant sum, but not an overly large proportion of the whole. It should also be remembered that governments, federal and state, are already paying these costs indirectly through overhead charges and direct support to research institutions via educational and regional grants. While there will be additional centralised expenditure over the transitional period, in the longer term this is at worst a zero-sum game. Publishers are currently viable, indeed highly profitable. In the first instance, service prices can be set so that the same total sum of money flows to them.

The challenge is the transitional period. The best way to manage this would be for federal agencies to guarantee to publishers that their funded researchers will move to the new system over a defined time frame. The most straightforward way to do this would be for the agencies to publish a programme, spanning a number of years, through which the purchase of appropriate publication services for research outputs would be made mandatory. This could also give publishers confidence by defining the service level agreements that the federal agencies would require, and by guaranteeing a predictable income stream over the course of the transition.

This would require agencies to work with publishers and their research communities to define the timeframes, guarantees, and service level agreements to be put in place. It would require mandates from the federal agencies as the main guarantor of that process. The Research Works Act prohibits any such process. In doing so it actively prevents publishers from moving towards business models that are appropriate for today’s world. It will stifle innovation and deter new entrants to the market by creating uncertainty and continuing the current conflation of first copy costs with dissemination costs. In doing so it will damage the very publishers that support it, by legislatively sustaining an out-of-date business model that is no longer fit for purpose.

Like General Motors, or perhaps more analogously Lehman Brothers, the incumbent publishers are trapped in a business model that cannot be sustained in the long term. The problem for publishers is that their business model is predicated on charging for the dissemination and access costs that are disappearing, while not explicitly charging for the costs that really matter. Hiding the cost of one thing in a charge for another is never a good long-term business strategy. HR3699 will simply prop publishers up for a little longer, ultimately leading to a bigger crash when it comes. The alternative is a managed transition to a better set of business models, ones that can keep publication services viable while providing a better return on investment for the taxpayer.

We recognise the importance of the services that scholarly publishers provide. We want to pay publishers for the services they provide because we want those services to continue to be available and to improve over time. Help us to help them make that change. Drop the Research Works Act.

Yours sincerely

Cameron Neylon
