Debt, Pensions and Capitalisation: Funding schol comms innovation


One of the things that has been bothering me for some time is the question of finding the right governance and finance models for supporting both a core set of scholarly communications infrastructures and shared innovation spaces. I wrote about those issues in the wake of the Elsevier purchase of Mendeley, and really my view hasn’t changed very much since then. While I realise it will just reinforce the views of some that I am too interested in financial instrumentalism, in this post I want to think about how we bridge the funding gap from promising pilot to community infrastructure.

Some of the most exciting services and systems being developed in the scholarly communications space are venture- or commercially-funded for-profits. This model allows large-scale experimentation and rapid scale-up when things appear to be working. But the end game for investors is to sell, and there are only three or four likely purchasers in our space – and the community as a whole does not trust these players, because our interests do not align. There’s nothing wrong with that mis-alignment, but we shouldn’t expect an Elsevier or a Thomson-Reuters or a Nature or an EBSCO to have the same interests as each other, let alone as an institution or an individual researcher.

Some of the most important services and systems in our space are constructed as not-for-profits precisely to ensure that this doesn’t happen and that control remains with the community. Governance structures are very important here; the challenges of setting up ORCID as a trusted community organisation demonstrated that. But equally challenging for an organisation like ORCID is getting from establishment to financial sustainability. For organisations founded as not-for-profits in particular, success can be a major financial challenge. Without the opportunity to raise capital investment, funding the growth in systems, services and, above all, staff required to reach break-even can distract an organisation from exactly the focus on those systems and services that is crucial to success.

At the London Book Fair I ran into Lucy Montgomery from Knowledge Unlatched, which is facing just this kind of issue. They have just run a successful pilot in which library consortia pledged to cover the costs of making 28 books freely accessible in electronic form. Making the platform sustainable requires a significant scale-up and will take time and money. KU is a UK Community Interest Company, roughly the equivalent of the US 501(c)(3) familiar in this space, which means it can’t take on private equity investment.

This isn’t a revenue problem, it’s a capital problem. As an organisation like this grows it needs investment to grow its resources, its capital, until it can generate the revenue to cover its costs. For a for-profit this can come from outside investors who expect (sometimes) to make a return. But again, the problem in our space is that for an investor to realise that return, the equity they buy into has to be sold. In the wider start-up world an IPO can release that money, but in our space the options are limited in practice to selling to a small number of companies.

So we are left with the question: can we structure the governance of a for-profit so as to ensure alignment with community values and needs, while not scaring off investors? Or can we find other forms of financing that work in the not-for-profit space? The first question still seems very hard to answer positively – the structure and culture of start-up investing is driven by the big pay-off, and control for investors is a crucial part of maximising that pay-off. But there are some potential options for the second.

In 2009 a large for-profit network of child care centres in Australia collapsed. The company was wound up and the assets purchased by a new not-for-profit called Goodstart Early Learning. The purchase and further development were funded largely through debt issued by the not-for-profit: an interest-bearing loan that enabled social investors to take an interest and gain a financial upside. The risks in the investment are relatively low, at least compared to start-up investing, and consequently the returns are not as stellar, but as part of a portfolio this can be a good investment. It can also be a particularly attractive form of investment for those looking to make socially responsible use of their capital.

Could debt financing or similar tools work in scholarly communications innovation? This would require us to understand the relationship between the capital requirements, the growth potential, and where any subsequent value is created. For a service or innovation in our space the capital requirement can vary wildly, but with a pilot or proof of concept in hand, scaling up in many cases could cost from $1M to $15M. These are not huge sums.

What returns would an investor want? And who would those investors be? The claim for many new services is that they will generate efficiencies, and therefore savings in the longer term. Academic institutions, mostly through libraries, fund these activities in part to realise future savings. But libraries often can’t make investments. It would seem obvious that we should structure any debt offering in a way that does allow libraries and institutions to buy in, perhaps through memberships. At the same time we don’t want to raise all the money from within our community, and that means packaging up some of the potential financial upside, and yes, selling it in effect. A package that offers future discounts for early members and interest for investors could perhaps work – the details are beyond me but on the surface it looks plausible.
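To get a feel for the numbers, here is a deliberately crude sketch of what such an instrument might look like. Every figure is invented for illustration – a mid-range scale-up cost from the range above, and a made-up interest rate and term – so treat this as back-of-envelope arithmetic, not a financial model.

```python
# Back-of-envelope model of a community debt offering.
# All numbers are illustrative assumptions, not real terms.

def annual_payment(principal, rate, years):
    """Fixed annual payment on a standard amortised loan (annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 5_000_000  # hypothetical scale-up cost, mid-range of $1M-$15M
rate = 0.05            # hypothetical interest offered to social investors
years = 10             # hypothetical repayment term

payment = annual_payment(principal, rate, years)
print(f"Annual repayment: ${payment:,.0f}")                    # ~$648,000
print(f"Total interest: ${payment * years - principal:,.0f}")  # ~$1.5M over the term
```

On those invented terms the service would need to find roughly $650k a year in revenue above its operating costs – which is exactly the question a membership-plus-discount package would have to answer.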

But are there investors out there? There are social entrepreneurs and investors interested in this space, but the risks may be too high and the returns too low, or at least too long-term, for many to be interested. There is, however, a class of investors looking for long-term investments that has concerns about social responsibility and actually has an interest in academic institutions. Lucy’s stroke of genius was to identify that university pension funds could be a source of investment for academic innovation. Many are looking right now to reconsider their stance on ethical investing, but perhaps most important, this tends to be our money.

If we believe in the potential for change – if we are invested in the view that efficiencies are possible and will make the institutions that many of us work in a better place – then perhaps we can and should invest in making that happen.

 


It’s funny…

…one of the motivations I had to get writing again was a request from someone at a traditional publisher to write more, because it “was so useful to have a moderate voice to point to”. Seems I didn’t do so well on that front with my first post back.

When you get a criticism about tone it is easy to get defensive. It’s particularly easy when there has been a history of condescension, personal attacks and attacks on the fundamental logic of what you’re doing from “the other side”. But of course many times, perhaps mostly, those who are concerned about tone and civility are not the same ones who made those attacks – often there is no “other side”, just a gradation of views. It’s also easy to feel that comments about tone or “reasonableness” are a strategy to discredit an argument by attacking the person. Again, this is a strategy that has been used against OA advocates, including myself, but that doesn’t mean it’s necessarily the motivation behind any specific expression of concern. Equally it can be seductive to view criticism of tone as success, a sign that the “opposition” can’t deal with the argument itself. That way, however, lies the madness of far too many internet pundits and sterile argumentative discussion forums focussed on scoring points. I use many strategies for persuasion, including ridicule, but I try to attack ideas, not people. I wouldn’t claim to be perfect at that – and I lose my temper as much as the next person – but I try to own my mis-steps.

But, and it’s a big but, the sense I get is that what has upset people is a feeling that the Access to Research program is a positive, if small, step, and that it is unreasonable for myself and others to criticise it as being “too small”. I want to be clear about this. My view is not that it’s too small, but that it is a step in entirely the wrong direction. The reason is that it couples a very small increase in access to a contractual decrease in rights. This is part of a broader strategy of the traditional publishing industry to couple any increase in access to placing more contractual obligations on users. Licenses for Europe, in which agreements to allow text mining would be coupled to new licensing conditions, CHORUS, where access to read is to be controlled through publishers, and Access to Research are all about building systems that enable contractual control over the use of content, rather than actively seeking to create a space where content can be freely re-used. In the case of Access to Research most people would be better off getting a membership at their local university library – where the restrictions on their use would be much less. A much more positive (and potentially easier and cheaper) approach would have been to strike from library contracts the terms that make it difficult for libraries to offer memberships to community members outside their institutions, or a program to support those libraries in creating membership schemes. These efforts to retain control, and the fear of losing control, are also corrosive to the long term future value of traditional publishers, but that’s a topic for another post.

So I’m not going to applaud Access to Research, but I would like to think that I do applaud positive steps, even small ones, regardless of who makes them. I do have to accept a criticism that has been made to me: that I’m not as good at this as I should be. There are positive steps. The traditional publisher support of ORCID has been exemplary, and the efforts by Springer and Wiley to keep licensing offerings simple, the interesting experiments in executable papers by Elsevier, and data publication from NPG are all valuable – even where I disagree with the details of the strategy. So here and now I will make a commitment to calling out positive steps in what I see as the right direction, even if small. I’m also more than happy to talk to anyone, in complete confidence, about what I see as positive and practical steps they might take, and to discuss how they can find easy wins that work within the limitations they face. I’ve done this in the past with many organisations and I think people have found those discussions useful.

Discussion is always more useful than shouting matches. And sometimes that discussion will be robust, and sometimes people will get angry. It’s always worth trying to understand why someone has a strong response. Of course a strong response will always be better received if it focuses on issues. And that goes regardless of which side of any particular fence we might be standing on.

 

Improving on “Access to Research”

Access to Research is an initiative from a 20th Century industry attempting to stave off progress towards the 21st Century by applying a 19th Century infrastructure. Depending on how generous you are feeling, it can be described either as a misguided waste of effort or as a cynical attempt to divert the community from tackling the real issues of implementing full Open Access. As is obvious, I’m not a neutral observer here, so I recommend reading the description at the website. Indeed I would also recommend anyone who is interested to take a look at the service itself.

Building a map of sites

I was interested in possibly using the service myself. In many ways, as a sometime researcher who no longer has access to a research library, I’m exactly the target audience. Unfortunately the Access to Research website isn’t really very helpful. The public library in Bath, where I live, isn’t a site, nor is Bristol. So which site is closest? Aylesbury perhaps? Or somewhere near the places I visit in London? Unfortunately there is no map provided to help find your closest site. For an initiative that is supposed to be focused on user needs this might have been a fairly obvious thing to provide. But no problem, it is easy enough to create one myself, so here it is (click through for a link to the live map).

Access to Research for the UK…or at least certain corners of England anyway.

What I have done is to write some Python code that screen-scrapes the Access to Research website to obtain the list of participating libraries and their URLs. Then my little robot visits each of those library websites and looks for something that matches a UK postcode. I’ve then uploaded that to Google Maps to create the map itself. You can also see a version of the code via the IPython Notebook Viewer. Of course the data and code are also available. All of this could easily be improved upon. Surrey County Council don’t actually provide postcodes, or even addresses, for their libraries on their web pages [ed: Actually Richard Smith and Gary offer sources for this data in the comments – another benefit of an open approach]. I’m sure someone could either fix the data or improve the code to create better data. It would also be nice to use an open source map visualisation rather than Google Maps to enable further re-use, but I didn’t want to spend too long on this.
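The full notebook is linked above, but the core of the approach fits in a few lines. This is a simplified sketch rather than the actual code: the page handling is schematic and the postcode pattern is a rough approximation, not a full implementation of the official specification.

```python
# Sketch of the scraping step: fetch each participating library's page
# and pull out the first thing that looks like a UK postcode.
import re
import requests

# Rough UK postcode pattern - good enough for screen scraping,
# not a complete validation of the official format.
POSTCODE = re.compile(r"\b[A-Z]{1,2}[0-9][0-9A-Z]?\s*[0-9][A-Z]{2}\b")

def find_postcode(library_url):
    """Return the first postcode-like string on a library's web page, or None."""
    html = requests.get(library_url, timeout=10).text
    match = POSTCODE.search(html)
    return match.group(0) if match else None

# The real workflow then geocodes each postcode and exports a CSV
# for upload to Google Maps (or an open alternative).
```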

The irony

You might well ask why I would spend a Saturday afternoon making it easier to use an initiative which I feel is a cynical political ploy. The answer is to prove a point. The knowledge and skills I used to create this map are not rare – nor is the desire to contribute to making resources better and more useful for others. But in gathering this data and generating the map I’ve violated pretty much every restriction which traditional publishers want to apply to anyone using “their” work.

What I have done here is Text Mining – something these publishers claim to support, but only under their conditions and licenses, conditions that make it effectively impossible to do anything useful. However, I’ve done this without permission, without registration, and without getting a specific license to do so. All of this would be impossible if this were research that I had accessed through the scheme, or if I had agreed to the conditions that legacy publishers would like to lay down for us to carry out Content Mining.

Let’s take a look at the restrictions you agree to in order to use the Access to Research service.

I can only use accessed information for non-commercial research and private study.

Well I work for a non-profit, but this is the weekend. Is that private? Not sure, but there are no ads on this website at least. On the other hand I’ve used a Google service. Does that make it commercial? Google are arguably benefiting from me adding data to their services, and the free service I used is a taster of the more powerful version they charge for.

I will only gain access through the password protected secure service

oh well, not really, although I guess you might argue that there wasn’t an access restricted system in this case. But is access via a robot bypassing the ‘approved’ route?

I will not build any repository or other archive

…well there would hardly be any point if I hadn’t.

I will not download

…well that was the polite thing to do, grab a copy and process it to create the dataset. Otherwise I’d have to keep hitting the website over and over again. And that would be rude.

I will not forward, distribute, sell

well I’m not selling it at least…

I will not adapt, modify

…ooops.

I will not make more than one copy…and I will not remove any copyright notices

well there’s one copy on my machine, one on github, one in however many forks of the repo there are, one on Google…oh, and there weren’t any copyright notices on the website. That probably makes it All Rights Reserved with an implied license to view and process the web page – but the publisher argument that says I need a license for text mining would, I guess, mean that I’m in violation of that implied license. I would argue that as I have only made as many copies as required for processing, and as I have extracted facts, to which copyright doesn’t apply, I’m fine…but I’m not a lawyer, and this is not legal advice.

I will not modify…any digital rights information. 

Well there’s one that I didn’t violate! Thank heavens for that. But only because there wasn’t any statement of usage conditions for the site data.

I will not allow the making of any derivative works

oh dear…

I will not copy otherwise retain [sic] any [copies] onto my personal systems

…sigh

I agree not to rely on the publications as a substitute for specific medical, professional or expert advice. 

What is ‘expert advice’ I wonder? In any case, don’t rely on the map if you’re in a mad rush.

I could easily go on with the conditions required to sign up for the Elsevier Text Mining program. Again I would either be in clear violation of most of them or it would be difficult to tell. Peter Murray-Rust has a more research-oriented dissection of the problems with the Elsevier conditions in several posts at his blog. It’s also quite difficult to tell because Elsevier don’t make those conditions publicly available; the only version I have seen is on Peter’s blog.

Conclusions

You may feel that I’m making an unfair comparison: that the research content whose use publishers want to control is different, and that the analysis I have done, and the value I have added, are different from those involved in using, reading and analysing research. That is both incorrect and missing the point. The web has brought us a rich set of tools and made it easy for those skilled with them to connect with interesting problems that they can be applied to. The absolutely core question for effective 21st Century research communication is how to enable those tools, skillsets, and human energy to be applied to the outputs of research.

I did this on a rainy Saturday afternoon because I could, because it helped me learn a few things, and because it was fun. I’m one of tens or hundreds of thousands who could have done this, who might apply those skills to cleaning up the geocoding of species in research articles, or extracting chemical names, or phylogenetic trees, or finding new ways to understand the networks of influence in the research literature. I’m not going to ask for permission, I’m not going to go out of my way to get access, and I’m not going to build something I’m not allowed to share. A few dedicated individuals will tackle the permissions issues and the politics. The rest will just move on to the next interesting, and more accessible, puzzle.

Traditional publishers’ actions, whether this access initiative, CHORUS, or their grudging approach to Open Access implementation, consistently focus on retaining absolute control over any potential use of content that might hypothetically be a future revenue source. This means each new means of access, each new form of use, needs to be regulated, controlled and licensed. This is perfectly understandable: it is the logical approach for a business model focussed on monetising monopoly control over pieces of content. It’s just a really bad way of serving the interests of authors in having their work used, enhanced, and integrated into the wider information commons that the rest of the world uses.

 

Open is a state of mind


“Open source” is not a verb

Nathan Yergler via John Wilbanks

I often return to the question of what “Open” means and why it matters. Indeed the very first blog post I wrote focussed on questions of definition. Sometimes I return to it because people disagree with my perspective. Sometimes because someone approaches similar questions in a new or interesting way. But mostly I return to it because of the constant struggle to get across the mindset that it encompasses.

Most recently I addressed the question of what “Open” is about in an online talk I gave for the Futurium Program of the European Commission (video is available). In it I tried to get beyond the definitions of Open Source, Open Data, Open Knowledge, and Open Access to the motivation behind them, something which is both non-obvious and conceptually difficult. All of these definitions focus on mechanisms – on the means by which you make things open – but not on the motivations behind that. As a result they can often seem arbitrary and rules-focussed, and they do become subject to the kind of religious wars that result from disagreements over the application of rules.

In the talk I tried to move beyond that, to describe the motivation and the mindset behind taking an open approach, and to explain why this is so tightly coupled to the rise of the internet in general and the web in particular. Being open, as opposed to making open resources (or making resources open), is about embracing a particular form of humility. For the creator it is about embracing the idea that, despite knowing more about what you have done than any other person, the use and application of your work is something you cannot predict. Similarly, for someone working on a project, being open means understanding that, despite the fact you know more about the project than anyone else, crucial contributions and insights could come from unknown sources. At one level this is just a numbers game: given enough people, it is likely that someone, somewhere, can use your work, or contribute to it in unexpected ways. As a numbers game it is rather depressing on two fronts. First, it feels as though someone out there must be cleverer than you. Second, it doesn’t help, because you’ll never find them.

Most of our social behaviour and thinking feels as though it is built around small communities. People prefer to be a (relatively) big fish in a small pond; scholars even take pride in knowing the “six people who care about and understand my work”; the “not invented here” syndrome arises from the assumption that no-one outside the immediate group could possibly understand the intricacies of the local context enough to contribute. Better to build up tools that work locally than to put effort into building a shared community toolset. Above all, the effort involved in listening for, and working to understand, outside contributions is assumed to be wasted. There is no point “listening to the public” because they will “just waste my precious time”. We work on the assumption that, even if we accept the idea that there are people out there who could use our work or could help, we can never reach them. That there is no value in expending effort to even try. And we do this for a very good reason: because for the majority of people, for the majority of history, it was true.

For most people, for most of history, it was only possible to reach and communicate with small numbers of people. That means in turn that for most kinds of work, those networks were simply not big enough to connect the creator with the unexpected user, or the unexpected helper with the project. The rise of the printing press, and then telegraph, radio, and television changed the odds, but only the very small number of people who had access to these broadcast technologies could ever reach larger numbers. And even they didn’t really have the tools that would let them listen back. What is different today is the scale of the communication network that binds us together. By connecting millions and then billions together, the probability that people who can help each other can be connected has risen to the point that, for many types of problem, they actually are.

That gap between “can” and “are” – between the idea that there is a connection with someone, somewhere, that could be valuable, and actually making the connection – is the practical question that underlies the idea of “open”. How do we make resources discoverable and re-usable so that they can find those unexpected applications? How do we design projects so that outside experts can both discover them and contribute? Many of these movements have focussed on the mechanisms of maximising access, the legal and technical means to maximise re-usability. These are important; they are a necessary but not sufficient condition for making those connections. Making resources open enables re-use and enhances discoverability, and by making things more discoverable and more usable it has the potential to enhance both discovery and usability further. But beyond merely making resources open, we also need to be open.

Being open goes in two directions. First, we need to be open to unexpected uses. The Open Source community got to this principle first, rejecting the idea that it is appropriate to limit who can use a resource. The principle here is that by being open to any use you maximise the potential for use; placing limitations always has the potential to block unexpected uses. But the broader open source community has also gone further, exploring and developing mechanisms that support the ability of anyone to contribute to projects. This is why Yergler says “open source” is not a verb. You can license code, you can make it “open”, but that does not create an Open Source Project. You may have a project to create open source code, an “Open-source project“, but that is not necessarily a project that is open, an “Open source-project“. Open Source is not about licensing alone, but about public repositories, version control, documentation, and the creation of viable communities. You don’t just throw the code over the fence and expect a project to magically form around it; you invest in and support community creation with the aim of creating a sustainable project. Successful open source projects put community building and outreach, both reaching contributors and encouraging them, at their centre. The licensing is just an enabler.

In the world of Open Scholarship, and I would include both Open Access and Open Educational Resources in this, we are a long way behind. There are technical and historical reasons for this, but I want to suggest that a big part of the issue is one of community. It is in large part about a certain level of arrogance: an assumption that others, outside our small circle of professional peers, cannot possibly either use our work or contribute to it. There is a comfort in this arrogance, because it means we are special, that we uniquely deserve the largesse of the public purse to support our work because others cannot contribute. It means we do not need to worry about access, because the small group of people who understand our work “already have access”. Perhaps more importantly, it encourages dwelling on fears about what might go wrong with sharing over a balanced assessment of the risks of sharing versus the risks of not sharing: the risks of not finding contributors, of wasting time, of repeating what others already know will fail, or of simply never reaching the audience who can use our work.

It also leads to religious debates about licenses, as though a license were the point or copyright were really a core issue. Licenses are just tools, a way of enabling people to use and re-use content. But the license isn’t what matters. What matters is embracing the idea that someone, somewhere can use your work, that someone, somewhere can contribute back, and adopting the practices and tools that make it as easy as possible for that to happen – and the idea that if we do this collectively, the common resource will benefit us all. This isn’t just true of code, or data, or literature, or science. But the potential for creating critical mass, for achieving these benefits, is vastly greater with digital objects on a global network.

All the core definitions of “open”, from the Open Source Definition, to the Budapest (and Berlin and Bethesda) Declarations on Open Access, to the Open Knowledge Definition, have a common element at their heart: an open resource is one that any person can use for any purpose. This might be good in itself, but that’s not the real point. The point is that it embraces the humility of not knowing. It says: I will not restrict uses, because that damages the potential of my work to reach others who might use it. And in doing this I provide the opportunity for unexpected contributions. With Open Access we’ve only really started to address the first part, but if we embrace the mindset of being open then both follow naturally.


Guest Post – The Open Access Button


This is a guest post from Joseph McArthur and David Carroll. They have an idea and they’re looking for your help to make it happen.

Bio: David and Joe are full time health advocates who do their degrees and jobs in their spare time. They can be found in the twitterverse at @Mcarthur_Joe and @davidecarroll

For the past few months, like chickens on eggs, we have been sitting on what we think is a game-changing idea. We’ve been sitting on it because, despite trying as two student activists, we just haven’t found the help we need to make it a reality. So to preface what you’re about to read: we need your help.

It almost goes without saying that the current model of scientific publishing needs a rethink. Every day, academics, students and the public are denied access to the vital research they both need and have paid for. Open Access is a solution to this problem: the practice of providing unrestricted access via the Internet to peer-reviewed scholarly journal articles. If Open Access is new to you, we’d recommend you watch this video on Open Access before continuing. You only need look to PLOS’ recent award program, or the story of Jack Andraka, the 16-year-old who used Open Access papers to invent a diagnostic test for pancreatic cancer, to understand the positive impact of open access to research.

Despite the potential of Open Access to speed innovation, save lives and empower all, we’ve got a long way to go until it’s the norm. In fact, in some respects we’re actually moving away from a more open world. You only need to look at decisions made by Research Councils UK to see this. Research Councils UK is one of the largest funders of public research, with a budget of £3 billion, and it has recently withdrawn three policies that once made its open access policy exemplary.

If we want to bring about a more open community we’ll need more tools, more information and more engagement around the issue. That’s where our idea comes in.

Imagine a browser-based tool which allowed you to track every time someone was denied access to a paper. Better yet, imagine if that tool gave you basic information about where in the world they were, their profession, and why they were looking. Integrating this into one place would create a real-time, worldwide, interactive picture of the problem. The integration of social media would allow us to make this problem visible to the world. Lastly, imagine if the tool actually helped the person gain access to the paper they’d been denied access to in the first place. Incentivising use and opening the barriers to knowledge combined can make this really powerful.

That’s what we’re imagining. We’re calling it the Open Access button. Every paywall met is an isolated incident; it’s time we capture those individual moments of injustice and frustration to turn them into positive change.
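To make the idea concrete, here is a hypothetical sketch of the kind of record such a button might capture each time someone hits a paywall. None of this is the actual design – the field names and structure are invented purely for illustration.

```python
# Hypothetical sketch of a single 'denied access' event.
# Field names are invented for illustration, not a real schema.
import json
from datetime import datetime, timezone

def paywall_event(doi, url, country, profession, reason):
    """Bundle one paywall encounter into a record for aggregation and mapping."""
    return {
        "doi": doi,                # identifier of the blocked paper, if known
        "url": url,                # where the paywall was met
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "country": country,        # coarse location only, to respect privacy
        "profession": profession,  # self-reported and optional
        "reason": reason,          # why the reader wanted the paper
    }

event = paywall_event(
    doi="10.1234/example.5678",   # placeholder DOI
    url="https://publisher.example/article",
    country="GB",
    profession="student",
    reason="literature review for a public health project",
)
print(json.dumps(event, indent=2))
```

Aggregating records like these is what would turn isolated incidents into the real-time, worldwide picture described above.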

We’ve figured out how all this and more can be done. If you’re a programmer, we’d love your help creating a basic prototype to prove this is a viable idea. After that our dreams include having a slick website to provide a home for the button and an accompanying campaign. If you have anything you feel can help us, and you want to help create change towards a more open world, join us by contacting oabutton@gmail.com.


Chapter, Verse, and CHORUS: A first pass critique

And this is the chorus
This is the chorus
It goes round and around and gets into your brain
This is the chorus
A fabulous chorus
And thirty seconds from now you’re gonna hear it again

This is the Chorus – Morris Minor and the Majors

The Association of American Publishers have launched a response to the OSTP White House Executive Order on public access to publicly funded research. In it they offer to set up a registry or system called CHORUS, which they suggest can provide the same levels of access to research funded by Federal Agencies as would the widespread adoption of existing infrastructure like PubMedCentral. It is necessary to bear in mind that this is substantially the same group that put together the Research Works Act, a group with a long-standing, and in some cases personal, antipathy to the success of PubMedCentral. There are therefore some grounds for scepticism about the motivations behind the proposal.

However here I want to dig a bit more into the details of whether the proposal can deliver. I will admit to being sceptical from the beginning, but the more I think about this, the more it seems that either there is nothing there at all – just a restatement of already announced initiatives – or alternatively the publishers involved are setting themselves up for a potentially hugely expensive failure. Let’s dig a little deeper to see where the problems lie.

First the good bits. The proposal is to leverage FundRef to identify federally funded research papers that will be subject to the Executive Order. FundRef is a newly announced initiative from CrossRef which will include funder and grant information within the core metadata that CrossRef collects and can provide to users, and which will start to address the issues of data quality and completeness. To the extent that this is a commitment from a large group of publishers to support FundRef, it is a very useful step forward. Based on the available funding information, the publishers would then signal that these papers are accessible, and this information would be used to populate a registry. Papers in the registry would be made available via the publisher websites in some manner.
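As a rough illustration of what “leveraging FundRef” looks like in practice, here is a minimal sketch against CrossRef’s public REST API, which today exposes funder information as a filter on works. The endpoint and filter are real features of the current API, but the completeness of the returned award data for any given paper is exactly the open question discussed below.

```python
# Sketch: ask CrossRef for works that declare a given funder.
# Whether the funder/award metadata is present and accurate for any
# given paper is the data-quality problem at the heart of CHORUS.
import requests

FUNDER_NIH = "10.13039/100000002"  # registry identifier for the NIH

resp = requests.get(
    "https://api.crossref.org/works",
    params={"filter": f"funder:{FUNDER_NIH}", "rows": 5},
    timeout=30,
)
for item in resp.json()["message"]["items"]:
    title = (item.get("title") or ["(no title)"])[0]
    awards = [a for f in item.get("funder", []) for a in f.get("award", [])]
    print(title[:60], "| grants:", awards or "none recorded")
```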

Now the difficulties. You will note two sets of weasel words in the previous paragraph: “…the available funding information…” and “…made available via the publisher websites in some manner”. The second is really a problem for the publishers, but I think a much bigger one than they realise. Simply making the version of record available without restrictions is “easy”, but ensuring that access works properly in the context of a largely paywalled corpus is not as easy as people tend to think. Nature Publishing Group have spent years sorting out the fact that every time they do a system update they remove access to the genome papers that are supposed to be freely accessible. If publishers decide they just want to make the final author manuscripts available, then they will have to build a whole parallel infrastructure to provide them – an infrastructure that will look quite a lot like PubMedCentral, leading to potential duplication of effort and potential costs. This is probably less of an issue for the big publishers, but for small publishers it could become a real issue.

Bad for the agencies

But it’s the first set of weasel words that is the most problematic. The whole of CHORUS seems to be based on the assumption that the FundRef information will be both accurate and complete. Anyone who has dealt with funding information inside publication workflows knows this is far from true. Comparison of funder information pulled from different sources can give nearly disjoint sets. And we know that authors are terrible at giving the correct grant codes, when they can be bothered to include them at all. The Executive Order and FASTR put the agencies on the hook to report on success, compliance, and the re-use of published content. It is the agencies who get good information in the long term on the outputs of projects they fund – information that is often at odds with what is reported in the acknowledgement sections of papers.

Put this issue of data quality alongside the fact that the agencies will be relying on precisely those organisations that have worked to prevent, limit, and, where that failed, slow down the widening of public access, and we have a serious problem of mismatched incentives. For the publishers there is a direct incentive to fail to solve the data quality issue at the front end – it lets them make fewer papers available. The agencies are not in a position to force this issue at paper submission, because their data isn’t complete until the grant finally reports. The NIH already has high compliance and an operating system, precisely because it couples grant reports to deposition. Other agencies will struggle to catch up using CHORUS and will deliver very poor compliance based on their own data. This is not a criticism of FundRef, incidentally. FundRef is a necessary and well designed part of the effort to solve this problem in the longer term – but it is going to take years for the necessary systems changes to work their way through, and there are big changes required to submission and editorial management systems to make this work well. And this brings us to the problems for publishers.

Bad for the publishers

If the agencies agree to adopt CHORUS they will do so with these issues very clear in their minds. The Office of Management and Budget oversight means that agencies have to report very closely on cost-benefit analyses for new projects. This, alongside the issues of incentive misalignment and just plain lack of trust, means that the agencies will do two things: they will insist that the costs are firewalled onto the publisher side, and they will put strong requirements on compliance levels and completeness. If I were an agency negotiator I would place a compliance requirement of 60% on CHORUS in year one, rising to 75% and 90% in years two and three, and stipulate that compliance be measured against final grant reports on an ongoing basis. Where compliance didn’t meet the requirements, the penalty would be for all the relevant papers from that publisher to be placed in PubMedCentral at the publisher’s expense. Even if they’re not this tough, they are certainly going to demand that the registry be updated, at the publisher’s expense, to include all the papers that were missed, necessitating an ongoing manual grind of metadata updates, paper corrections, and index notifications. Bear in mind that if we generously assume that 50% of submitted papers have good grant metadata, and that US agencies contribute to around 25% of all global publications, then around 10% of the entire corpus (50% of 25% is 12.5%) will need to be updated year on year, probably through a process of semi-automated and manual reconciliation. If you’ve worked with agency data then you know it’s generally messy and difficult to manage – the agencies are addressing this by building shared repositories and data systems that leverage a lot of the tooling provided by PubMed and PubMedCentral.

Alternatively this could be a “triggering event”, meaning that content would become available in archives like CLOCKSS and PORTICO because access wasn’t properly provided. Putting aside the potential damage to the publisher brand if this happens, and the fact that it destroys the central aim of CHORUS – to control the dissemination path – this will also cost money. These archives are not well set up to provide differential access to triggered content; they release whole journals when a publisher goes bust. It’s likely that a partial trigger would require specialist repository sites to be set up to serve the content – again, sites that would look an awful lot like PubMedCentral. The process is likely to lead to significantly more trigger events, requiring these dark repositories to function more actively as publishers, raising costs, and requiring them to build up repositories to serve content that would look an awful lot like…well, you get the idea.

Finally there is the big issue: this puts the costs of improving funding data collection firmly in the hands of the CHORUS publishers, and means it needs to be done extremely rapidly. This work needs to be done, but it would be much better done through effective global collaboration between all funders, institutions and publishers. What CHORUS has effectively done is offer to absorb the full cost of this transition. As noted above, the agencies will firewall their contributions. You can bet that institutions – whose efforts to ensure the collection of research outputs CHORUS will not assist, and might hamper – will not pay for it through increased subscriptions. And publishers who don’t want to engage with CHORUS will be unlikely to contribute. It’s also almost certain that this development process will be rushed and ham-fisted, and will irritate authors even more than current submission systems already do.

Finally, of course, a very large proportion of federal money moves through the NIH. The NIH has a system in place, it works, and they’re not about to adopt something new and unproven, especially given the popularity of PubMedCentral demonstrated by the public response to the Research Works Act. So publishers will have to maintain dual systems anyway – indeed the most likely outcome of CHORUS will be to make it easier for authors to deposit works into PubMedCentral, and easier for the NIH to prod them into doing so, raising the compliance rates for the NIH policy and making the NIH look even better in the annual reports to the White House, leading ultimately to some sharp questions about why agencies didn’t adopt PMC in the first place.

Bad for the user

From the perspective of an Open Access advocate, putting access into the hands of publishers who have actively worked to limit access, and who have invested vast sums of money in systems to limit and control access, seems a bad idea. But that’s a personal perspective – the publishers in question will say they are guiding these audiences to the “right” version of papers in the best place for them to consume it. But let’s look at the incentives for the different players. The agencies are on the hook to report on the usage and impact of the work they fund. They have the incentive to ensure that whatever systems are in place work well and provide access well. Subscription publishers? They have a vested interest in trying to show there is a lack of public interest, in tweaking embargoes so as to only make things available after interest has waned, in providing systems that are poorly resourced so page loads are slow, and in general in making the experience as poor as possible. After all, if you need to show you’re adding value with your full-cost version, then it’s really helpful to be in complete control of the free version so as to cripple it. On the plus side, it would mean that these publishers would almost certainly be forced to provide detailed usage information, which would be immensely valuable.

…which is bad for the publishers…

The more I think about this, the less it seems to have been thought through in detail. Is it just a commitment to use FundRef? That would be a great step, but it goes nowhere near even beginning to satisfy the White House requirements. If it’s more than that, what is it? A registry? But that requires a crucial piece of metadata, which appears as “Licence Reference” in the diagram, that is needed to assert things are available. This hasn’t been agreed yet (I should know, I’ve been involved in drafting the description). And even when it is, no piece of metadata can make sure access actually happens. Is it a repository that would guarantee access? No – that’s what the CHORUS members hate above all other things. Is it a firm contractual commitment to making those articles with agency grant numbers attached available? Not that I’ve seen, but even if it were, it wouldn’t address the requirements of either the Executive Order or FASTR. As noted above, the mandate applies to all agency-funded research, not just the papers where the authors remembered to put in all the correct grant numbers.

Is it a commitment to ensuring the global collection of comprehensive grant information at manuscript submission? With the funding to make it happen – and the funding to ensure the papers become available – and real penalties if it doesn’t happen? With provision of comprehensive usage data for both subscription and freely available content? This is the only level at which the agencies will bite. And this is a horrendous and expensive can of worms.

In the UK we have a Victorian infrastructure for delivering water. It just about works, but a huge proportion of the total just leaks out of the pipes – it’s not as if we have a shortage of rain, but when we have a “drought” we quickly run into serious problems. The cost of fixing the pipes? Vastly more than we can afford. What I think happened with CHORUS is what happens with a lot of industry-wide tech projects. Someone had a bright idea and went to each player asking whether they could deliver their part of the pipeline. Each player has slightly overplayed the ease of delivery, and slightly underplayed the leakage and problems. A few percent here and a few percent there isn’t a problem for each step in isolation – but along the whole pipeline it adds up to the point where the whole system simply can’t deliver. And delivering means replacing the whole set of pipes.

 


The bravery of librarians

Two things caught my attention over the past few days. The first was the text of a graduation address from Dorothea Salo to the graduating students of the Library and Information Sciences Program at the University of Wisconsin-Madison. The second was a keynote that Chris Bourg, whose blog is entitled “Feral Librarian”, gave at The Acquisitions Institute.

Both focus on how the value of libraries, and of those who defend the needs of all to access information, is impossible to completely measure. Both offer a prescription of action and courage: in Dorothea Salo’s case the twin messages that librarians “aim to misbehave” and that “we’ve got each other’s back”; in Chris Bourg’s text, quoting Henry Rollins, also speaking to librarians, “What you do is the definition of good. It’s very noble and you are very brave.”

What struck me was the question of how well we are helping these people. We seek to make scientific information free, to flow easily to those who need it. What can we do to create a world where we need to rely less on the bravery of librarians, and therefore benefit so much more from it?


What’s the right model for shared scholarly communications infrastructure?


There have been a lot of electrons spilled over the Elsevier acquisition of Mendeley. I don’t intend to add too much to that discussion, but it has provoked an interesting train of thought which seems worth thinking through. For what it’s worth, my views of the acquisition are not too dissimilar to those of Jason Hoyt and John Wilbanks, and I recommend their posts. I have no doubt that the Mendeley team remain focussed on their vision and I hope they do well with it. And even with the cash reserves of Elsevier, you don’t spend somewhere in the vicinity of $100M on something you intend to break.

But the question is not the intentions of individuals, or even the intentions of the two organisations, but whether the culture and promise of Mendeley can survive, or perhaps even thrive, within the culture and organisation of Elsevier. No-one can know whether that will work; we will simply have to wait and see. But it raises a broader question for me. A for-profit startup, particularly one funded by VCs, has a limited number of exit strategies: IPO, sale, or, more rarely, a gradual move to a revenue-positive independent company. This means startups behave in certain ways, and it means that interacting with them, particularly depending on them, carries certain risks – primarily that a big competitor could buy your important partner out from under you. It’s not just the community who are wondering what Elsevier will do with the data and community that Mendeley brings them; it’s also the other big publishers who were seeing valuable traffic and data coming to them from Mendeley, and the whole ecology of organisations that came to rely on the API.

It can be tempting to think that the world would be a better place if this kind of innovation were done by non-profits rather than startups. Non-profits have their strengths: a statutory requirement to focus on mission, and the assurance that the promise of a big buy-out won’t change management behaviour. But non-profits have their weaknesses as well. That focus on mission can prevent the pivot that can make a startup. It can be much harder to raise capital. And where a non-profit is governed by a board made up of a diverse community, conflicts of interest can make decision making glacial.

The third model is that of academic projects, and many useful tools have come from this route, but again there are problems. The peculiar nature of academic projects means that the financial imperatives that characterise the early stages of both for-profits and not-for-profits never really seem to bite. This can lead in turn to a lack of focus on user requirements and from there to a lack of adoption that condemns new tools to the category of interesting, even exciting, but not viable.

Of course all weaknesses are strengths in a different context. The freedom to explore in an academic context can enable exceptional leaps that would never be possible when you are focussed on finding next month’s rent. The promise of equity can bring in people whose salary you could never afford. The requirement for consensus can be painful, but it means that where consensus can be found it is so much more powerful.

Geoff Bilder at the Rigour and Openness meeting in Oxford last week commented that the board of Crossref was made up of serious commercial competitors who could struggle to reach agreement because of their different interests. The process of building ORCID was painfully and frustratingly slow for many of us because of the different and sometimes conflicting needs of the various stakeholder groups. But when agreement is reached it is so much more powerful because it is clear that there is strong shared need. And agreement is the sign that something really needs to be done.

What has struck me in the conversation of the last week or so is how the interests of a very diverse range of stakeholders – researchers, altmetrics advocates, publishers both radical and traditional – seem to be coming into alignment, at least on some issues. We need a way to build up shared infrastructure that can be utilised by all of us. Community-run not-for-profits seem a good model for that, yet the innovation that builds new elements of infrastructure often comes from commercial startups. A for-profit can raise development capital to support a new tool, but this may engender a lack of the trust that an academic project might enjoy with a potential userbase.

What our sector lacks, and this might well be a more general problem, is a deep understanding of how these different development and governance models can be combined and applied in different places. We need incubators for non-profits, but we also need models where a community non-profit might be set up to buy out a startup. Various publishers have labs groups, and technology will continue to be a key point of competition, but is there space to do what pharmaceutical companies are increasingly doing with parts of the drug development process – taking them pre-competitive so that everyone benefits from a shared infrastructure?

I don’t have any answers, nor do I have experience of running either for-profit or non-profit startups. But it feels like we are at a moment in time where we are starting to see shared infrastructure needs for the whole sector. It isn’t in anyone’s long term interest for us to have to build it more than once – and that means we need to find the right way to both support innovative developments but also ensure that they end up in hands that everyone feels they can trust.

 


OA and the UK Humanities & Social Sciences: Wrong risks and missed opportunities

Someone once said to me that the best way to get researchers to be serious about the issue of modernising scholarly communications was to let the scholarly monograph business go to the wall as an object lesson to everyone else. After the last couple of weeks I’m beginning to think the same might be said of the UK Humanities and Social Sciences literature. I get that people are worried, even scared. I can also see some are stirring up mud behind the scenes to get academics and editors angry. But the problem is that people are focussing on the wrong problems and missing the significant opportunities to rejuvenate H&SS in the UK.

Thesis: The problem of money

The core of the issue is money. H&SS are chronically underfunded for the number of scholars in the UK. It’s easy to say that H&SS are cheap, but they are also labour-intensive, and people are the most expensive academic resource of all. This means there is very little spare cash around, and when what looks to be another demand lands on a non-existent budget, people are going to get upset. And reasonably so. But of course there is money in the system, being used to purchase journal subscriptions and monographs. In the UK this money comes down a different budget line, largely through grant overheads and direct funding from HEFCE, with some coming from teaching budgets (or rather, these days, the fees that students are paying or will be paying back in the future). So there is money in the system, but it’s not accessible to scholars, and if it were, they might quite like to spend it on something else (a whiteboard, a new computer, a functioning filing cabinet).

Antithesis: The challenge of “impact” for H&SS

It is a hobby of a certain kind of mass media outlet to pick out and ridicule H&SS projects. Let’s be honest, it’s also a hobby of some quantitative (and not so quantitative) scientists as well. At the same time there is much hand-wringing from within the H&SS community that their work is not appreciated by the public, or by government, for the wider impact that it has. There is a seeming paradox here. The ridicule arises from the apparent ease of understanding of the topic at hand; the hand-wringing from a view that the wider public doesn’t understand. There is of course no paradox, only a communication failure. On one side the intricacies and context are lost, and on the other the context and importance.

I believe that research in the humanities and social sciences makes a huge contribution to our culture and our society. In many disciplines the societal importance, whether to policy development or through cultural enrichment, is of far greater value than anything I have done as a scientist. In my current job I’m an amateur social scientist. I (try to) read sociology, history, anthropology and even the odd bit of literary theory to guide me. I can’t of course read most of it; I don’t have access. And I wonder how many other people who could benefit from access to history, literary criticism, economics, or sociology don’t have it. How many are interested amateurs, and how many policy makers, entrepreneurs, or creatives? And how many are voters?

Synthesis: A great future in an accessible world, but who will pay?

It seems to me that the opportunity for H&SS to reach much wider audiences who appreciate the value of their work generally, and to reach those specific people who will make important use of it, is enormous. But most of this work is locked up in books with print runs in the hundreds and journals with similar numbers of subscribers. The existing system is covering its first-copy costs – or at least not losing too much money – so further distribution isn’t a problem as long as it’s cheap, and electronic distribution fits that bill.

So let’s start with the minimal approach. Change nothing of the process; simply make electronic copies freely available and retain the charges for print. In the short term libraries are unlikely to cancel subscriptions because, frankly, the amounts of money are pretty small and libraries do have an interest in supporting scholarly communications. Monographs are still worth buying in book form, so charge for that but make the electronic version freely available. I’ll bet the first publisher willing to try it a beer that sales go up. In the longer term there would need to be consortium agreements put in place to support the ongoing costs of the journals, but that’s probably do-able because the current subscription lists are small and charges are relatively low. A model for making this work on a much larger scale already exists in particle physics in the form of SCOAP3. If even this is too scary, look at the repository route. The evidence from particle physics suggests that decades of access through repositories make no difference to journal viability.

A more daring solution is to go for scale. What happens if the level of interest in a journal or monograph goes up by an order of magnitude? Or two? What does that mean for costs? Are there economies of scale that aren’t currently accessible? Given that H&SS readers do seem to like print, there are possibilities here. Grow the print customer base from a few hundred to several thousand, use the e-version to drive sales, and give people a premium experience that makes enough of them want the upgrade. One argument that is not going to go down well is “our publishing is really expensive so we have to keep it exclusive”. It’s just not going to wash – which means consolidation and finding efficiencies are going to be necessary anyway, so getting more readers while finding those efficiencies is a win-win. If you don’t find those efficiencies, someone else will.
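
To make the economies-of-scale point concrete, here is a minimal back-of-envelope sketch. All of the numbers (a £10,000 first copy cost, a £5 marginal print cost, the run sizes) are invented for illustration rather than drawn from real publishing accounts; the shape of the curve is the point, not the figures.

```python
# Illustrative only: average cost per copy under a simple first-copy-cost
# model. The figures below are invented, not real publishing costs.

FIRST_COPY_COST = 10_000  # fixed cost of producing the publishable version (GBP)
MARGINAL_COST = 5         # additional cost of each print copy (GBP)

def cost_per_copy(print_run: int) -> float:
    """Average cost per copy once fixed costs are spread over the run."""
    return FIRST_COPY_COST / print_run + MARGINAL_COST

# A monograph-scale run, then the same title scaled up by 10x and 100x.
for run in (200, 2_000, 20_000):
    print(f"{run:>6} copies: GBP {cost_per_copy(run):.2f} per copy")
```

At a run of 200 the fixed costs dominate; by 20,000 copies the average cost is close to the marginal cost of printing, which is the whole argument for growing the readership.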

Clearly, though, that approach will work better for some people and some disciplines than others. More imaginative approaches will involve finding ways to utilise the characteristics of the H&SS communities and the technologies that might support them. Some have argued that a PLOS ONE approach (scale up, keep costs down, simple base criteria for publication) can’t work because there is no simple criterion for “publication-worthy” in H&SS. I’m not convinced this is true – STEM folks said it wouldn’t work for PLOS ONE either – but let’s take it at face value. That means thinking the other way: what are the benefits of small, community-based scale for publication infrastructures?

One benefit is that small communities tend to know each other, and are therefore willing to contribute effort to a common pool. In fact I’d bet that H&SS journals are largely run by small editorial boards of unpaid academics who mostly know the authors submitting and mostly know the referees they are approaching. These are ideal conditions for a community to take over control of the means of production and then take a ruthlessly capitalistic approach to reducing the costs outside of what they value – the review process itself. The technology isn’t quite there yet for a journal system to be run easily by non-technical people, but it’s not far off, and could be built if the community as a whole demanded it. Some communities have done this, and some very prestigious journals are run for practically no money at all.

There are many more potential routes that H&SS could take to engage effectively with an open access future while also engaging with the communities of interest that would appreciate and use their work. It’s not really my place to tell the community what to do, but as a (potential) consumer of this scholarship I’m keen to see something happen.

It comes down to brass tacks

This will, however, cost money, and the community will argue that it’s money they don’t have. And this is really the key point, and why the whole current strategy is wrong-headed. The British Academy, the Institute of Historical Research [correction: the statement is on the IHR site, but comes from a collection of journal editors, not from the IHR itself], and others seem to believe that the right route is to make a stand, presumably in the hope that this will tone down the HEFCE requirements for REF2020 (the RCUK policy is a given and won’t shift). What the community is failing to grasp is that this is the biggest opportunity in 20 years to re-assess the funding base for H&SS in the UK. HEFCE and RCUK are serious about the move to open access, and serious about doing it in a way that maximises the overall return on investment. They are prepared, indeed demanding, to fund that process.

UK funding for H&SS research is structurally different from that for STEM subjects. The government, and its funding agencies, have taken to heart the idea that the costs of dissemination are part of the costs of research. The H&SS community needs to be developing a coherent plan for how those costs could be effectively funded, and for the mechanisms that will be put in place to make sure they are constrained. Go to HEFCE and RCUK with a plan that speaks to their agenda and is well informed about the core issues, and you have an opportunity to rejuvenate H&SS in the UK.

The alternative, to be blunt, is oblivion. On one side you will have STEM researchers, most of them less inclined than me to keep subsidising your communications costs through “our” overheads, teaching budgets, and QR income. On the other will be government, asking blunt questions about why your research isn’t being used and spread, while not using it to inform policy development or cultural programmes because they don’t even know it exists (pro-tip: Google some terms around your area of expertise; is any of your work visible?). And in the middle will be funders, increasingly losing patience with your intransigence while trying to defend the value and special characteristics of H&SS to increasingly unimpressed researchers, institutions, and government, as other subject areas streak ahead and take advantage of new opportunities. At best this approach will deliver a managed decline.

It doesn’t have to be this way. The more I look at it, the more I think H&SS, and in particular UK H&SS, are amongst the best placed to take advantage of both the technological possibilities and the policy landscape. Get informed, and look at and discuss the options to find the right approach for your discipline and domain. Once you accept that the status quo isn’t an option, you will see a whole range of new possibilities. This is a generational opportunity to reset the thinking, and critically the funding mechanisms, for humanities and social sciences in this country. Use it or lose it.

Note: for any irritated philosophers amongst the readers, I am aware that I’ve mangled the “Hegelian” dialectic I used as a structure. Think of it as illustrating the choice you have. If I know enough to be dangerous/intriguing, but not enough of your methodology to contribute effectively, are you better off ignoring me, or engaging with me as both a potential ally and someone who might even contribute back to your thinking? Bear in mind that there are a lot of us out here.

The challenge for scholarly societies

Cemetery Society (Photo credit: Aunt Owwee)

With major governments signalling a shift to Open Access, it seems like a good time to ask which organisations in the scholarly communications space will survive the transition. The major current publishers are likely to survive, although relative market share and focus are likely to change. But the biggest challenges are faced by small to medium scholarly societies that depend on journal income for their current viability. What changes are necessary for them to navigate this transition, and can they survive?

The fate of scholarly societies is one of the most contentious, even emotional, issues in the open access landscape. Many researchers have strong emotional ties to their disciplinary societies, and these societies often play a crucial role in supporting meetings, providing travel stipends to young researchers, awarding prizes, and representing the community. At the same time they face a peculiar bind. The money that supports these efforts often comes from journal subscriptions. Researchers are very attached to the benefits but seem disinclined to countenance membership fees that would support them. This problem is seen across many parts of the research enterprise – researchers, or at least their institutions, are paying for services through subscriptions but are unwilling to pay for them directly.

What options do societies have? Those with a large publication program could do worse in the short term than look very closely at last week’s announcement from the UK Royal Society of Chemistry. The RSC is offering an institutional mechanism whereby institutions with a particular level of subscription receive an equivalent amount of publication services, set at a price of £1,600 per paper. This is very clever for the RSC: it helps institutions prepare effectively for changes in UK policy, it costs the RSC nothing, and it lets them experiment with a route to full open access at relatively low risk. Because the contribution of UK institutions on this particular subscription plan is relatively small, it is unlikely to reduce subscriptions significantly in the short term; but if and when it does, it positions the RSC to offer package deals on publication services with very similar terms. Tactically, moving early also allows the RSC to hold a higher price point than later movers will, and will help to increase its market share in the UK over that of the ACS.
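
The arithmetic of an offset like this is easy to sketch. The per-paper figure of £1,600 comes from the RSC announcement; the subscription spend and annual output below are hypothetical numbers, chosen only to show how the mechanism converts subscription money into publication services.

```python
# Sketch of RSC-style offsetting: subscription spend credited back as
# publication services. Only the per-paper price is from the announcement;
# the spend and output figures are hypothetical.

APC_EQUIVALENT = 1_600  # publication services credited per paper (GBP)

def papers_covered(subscription_spend: float) -> float:
    """Papers whose publication costs the subscription would offset."""
    return subscription_spend / APC_EQUIVALENT

spend = 40_000      # hypothetical institutional subscription (GBP/year)
annual_output = 30  # hypothetical papers published with the RSC per year

covered = papers_covered(spend)
print(f"GBP {spend:,} covers {covered:.0f} papers, "
      f"{covered / annual_output:.0%} of the assumed annual output")
```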

Another route is for societies to explore the “indy band model”. Similar to bands trying to break through by giving away their recorded material while charging for live gigs, societies could focus on raising money through meetings rather than publications. Some societies already do this, having historically focussed on running large-scale international or national meetings. The “in person” experience is something that cannot yet be delivered cheaply over the internet, and “must attend” meetings offer significant income and sponsorship opportunities. There are challenges to be navigated here – ensuring commercial contributions don’t damage the brand or reduce the quality of meetings being a big one – but expect conference fees to rise as subscription incomes drop. Societies that currently run lavish meetings off the back of journal income will face a particular struggle over the next two to five years.

But even meetings are unlikely to offer a long-term solution. It’s some way off yet, but rising travel costs and the increasing quality of videoconferencing will start to eat into this market as well. If all the big speakers are dialling it in, is it still worth attending the meeting? So what are the real value offerings that societies can provide? What are the things, unique to that community’s collection of expertise, that no one else can provide?

Peer review (pre-, post-, or peri-publication) is one of them. Publication services are not. Publication, in the narrow sense of “making public”, will be commoditised, if it hasn’t been already. With new players like PeerJ and F1000 Research alongside the now fairly familiar landscape of the wide-ranging megajournal, the space for publication services to make fat profits is narrowing rapidly. This will, sooner or later, be a low-margin business with a range of options to choose from when someone, whether a society or a single researcher, is looking for a platform to publish their work. The rest of us may argue about whether this will happen next year or in a decade, but for societies it is the long term that matters, and in the long term commoditisation will happen.

The unique offering that a society brings is the aggregation and organisation of expert attention. In a given space a scholarly society has a unique capacity to coordinate and organise assessment by domain experts. I can certainly imagine a society offering peer review as a core member service, independent of whether the thing being reviewed is already “published”. This might be a case where there are real benefits to operating at a small scale – both because of the peer pressure on each member of the community to pull their weight, and because the scale of the community lends itself to being understood and managed as a small set of partly connected small-world networks. The question is really whether the sums add up. Will members pay $100 or $500 per year for peer review services? Would that provide enough income? What about younger members without grants? And perhaps crucially, how cheap would a separated publication platform have to be to make the sums look good?
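
As a back-of-envelope check on whether those sums might add up: the fee levels ($100 and $500) come from the questions above, while the membership size, submission volume, and per-review handling cost are hypothetical figures plugged in purely for illustration.

```python
# Would member fees cover a society-run peer review service? The fee levels
# are from the text; everything else here is a hypothetical assumption.

MEMBERS = 1_000             # hypothetical society membership
SUBMISSIONS_PER_YEAR = 300  # hypothetical volume of items needing review
COST_PER_REVIEW = 400       # hypothetical editorial/admin cost per item (USD)

for fee in (100, 500):
    income = MEMBERS * fee
    cost = SUBMISSIONS_PER_YEAR * COST_PER_REVIEW
    surplus = income - cost
    label = "surplus" if surplus >= 0 else "shortfall"
    print(f"fee ${fee}: income ${income:,}, review costs ${cost:,}, "
          f"{label} ${abs(surplus):,}")
```

On these made-up numbers the $100 fee falls short while $500 leaves ample headroom; the real answer turns on membership size and on how much editorial labour the community continues to donate.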

Societies are all about community. Arguably most completely missed the boat on the potential of the social web, when they could have built community hubs of real value – and those that didn’t miss it entirely largely created badly built, ill-thought-through community forums well after the first flush of failed generic “Facebook for Science” clones had faded. But another chance is coming. As the ratchet of funder and government open access policies moves on, society journals stuck in a subscription model will become increasingly unattractive places to publish. The slow rate of progress and disciplinary differences will allow some to hold on past the point of no return, and those societies will wither and die. Some societies will investigate transitional pricing models; I commend the example of the RSC to small societies as something to look at closely. Some may choose to move to publishing collections in larger journals where they retain editorial control. My bet is that the survivors will be the ones that find a way to make the combined expertise of their community pay – and I think the place to look will be those societies that find ways to decouple the value they offer through peer review from the costs of publication services.

This post was inspired by a twitter conversation with Alan Cann and builds on many conversations I’ve had with people including Heather Joseph, Richard Kidd, David Smith, and others. Full Disclosure: I’m interested, in my role as Advocacy Director for PLOS, in the question of how scholarly societies can manage a transition to an open access world. However, this post is entirely my own reflections on these issues.
