A citizen of the network

Image via Wikipedia: passport entry stamp.

A few weeks ago I attended a workshop run by the ESRC Genomics Forum in Edinburgh, which brought together humanists, social scientists, and science-focused folks with an interest in how open approaches can and should be applied to genomic science. This was interesting on a number of levels, but I was especially interested in the comments of Marina Levina on citizenship. In particular she asked the question “what are the civic responsibilities of a network citizen?”

Actually she asked me this question several times, and it took me until quite late in the day to really understand what she meant. I initially answered with reference to Clay Shirky on the rise of creative contribution on the web, as if just making stuff was all that a citizen needed to do, but what Marina was getting at was a deeper question about a shared sense of responsibilities.

Citizenship as a concept is a vexed question and there are a range of somewhat incompatible philosophical approaches to describing and understanding it. For my purposes here I want to focus on citizenship as a sense of belonging to a group with shared values and resources, and rights to access those resources. Traditionally these allegiances lie with the nation state but, while nationalism is undeniably on the rise, there seems to be a growing group of us who have a patchwork of citizenships with different groups and communities.

Many of these communities live on the web and benefit from the use of the internet as a sort of commons. At the same time there has been a growing sense of behavioural norms and responsibilities in some parts of the social web: a sophisticated sense of identity, the responsibility to mark spam for takedown, a dedication to broad freedom of expression, perhaps even a growing understanding of the tensions between that freedom and “civility”.

In the context of research on the web we have often talked about the value of “norms” of behaviour as a far better mechanism for regulation than licences and legal documents. A sense of belonging to a community, of being a citizen, and the consequent risk of exclusion for bad behaviour, is a powerful encouragement to adhere to those norms, even if that exclusion is just being shunned. Of course such enforcement can lead to negative consequences as well as positive ones, but I would argue that in our day-to-day activities an element of social pressure has, in most cases, a largely positive effect.

A citizen has a responsibility to contribute to the shared resources that support the community. In a nation state we pay taxes, undertake jury duty, vote in elections. What are the contributions expected of a network citizen? Taking one step back, what are those shared resources? The internet and the underlying framework of the web are one set of resources. Of course these are resources that lie at the intersection of our traditional states, as physical and commercial resources, and our network society. In this context the protests against SOPA, PIPA, and ACTA might be seen as the citizens of the network attending a rally, perhaps even mobilizing their “military”, if only to demonstrate its capacity.

But the core resources of the network are the nodes on the network and the connections between them. The people, information resources, and tools make up the nodes, and the links connecting them are what actually makes them usable. As citizens of the network our contribution is to make these links, to tend the garden of resources, to build tools. Above all our civic duty is to share.

It is a commonly made point that with digital resources being infinitely copyable there is no need for a tragedy of the commons. But there is a flip side to this – when we think of physical commons we often think of resources that don’t need active maintenance. As long as they are properly managed, not over-grazed or polluted, there is a sense that these physical commons will be ok. The digital commons requires constant maintenance. As an information resource it needs to be brought up to date. And with these constant updates the tools and resources need to be constantly checked for interoperability.

Maintaining these resources requires work. It requires money and it requires time. The active network citizen contributes to these resources, modifying content, adding links, removing vandalism. In exchange for this the active network citizen obtains influence – not dissimilar to getting to vote in elections – in those discussions about norms and behaviour. But the core civic duty is to share, with the expectation that other citizens, in their turn, will share back; that working together as a community the citizenry will build, maintain, and strengthen the civic institutions of the network.

This analysis scales beyond individual people to organizations. Wikipedia is an important civic institution of the network, one that accepts a tithe from the active citizen in the form of time and eyeballs but which gives much back to the community in the form of links and high-quality resources. Google accepts the links we make and gives back search results, but isn’t always quite such a good citizen, breaking standards and removing the RSS feeds that could be used by others. Facebook? Well, the less said the better. But good citizens will both take what they need from the pool of resources and contribute effectively back to the common institutions, those aggregation points for resources and tools that make the network an attractive place to live and work.

And I use “work” advisedly, because a core piece of the value of the network is the ability for citizens to use it to do their jobs, for it to be a source of resources, tools, and expertise that can be used by people to make a living. And the quid pro quo is that the good citizen contributes back resources that others might use to make money. In a viable community with a viable commons there will be money, or its equivalent, being generated and spent. A networked community will encourage its citizens to generate value because this lifts all boats. In return for taking value out of the system the good citizen will contribute it back. But they will do this as a matter of principle, as part of their social contract, not because a legal document tells them to. Indeed, requiring someone to do something actually reduces the sense of community, the valuing of good practice, that makes a healthy society.

When I first applied the CC0 waiver to this blog I didn’t really think deeply about what I was doing. I wanted to make a point. I wanted my work to be widely shared and I wanted to make it as easily shareable as I could. In retrospect I can see I was making a statement about the networked world I wanted to work in, one in which people actively participate in building a better network. I was making the point that I didn’t just want to consume and benefit from the content, links, and resources that other people had created; I wanted to give back. And I have benefited, commercially, in the form of consultancies and grants, and in the opportunities that have opened up for me as a result of reading and conversing about the work of other people.

My current life and work would be unthinkable without the network and the value I have extracted from it. In return it is clear to me that I need to give back in the form of resources that others are free to use, and to exploit, even to make money off them. There may be a risk of enclosure, although I think it small, but my choice as a citizen is to be clear about what I expect of other citizens, not to attempt to enforce my beliefs about good behaviour through legal documents but through acting to build up and support the community of good citizens.

Dave White has talked and written about the distinction between visitors and residents in social networks, the experience they bring and the experience they have. I think there is a space, indeed a need, to recognize another group beyond those who simply inhabit online spaces. Those of us who want to build a sustainable networked society should identify ourselves, our values, and our expectations of others. Our networked world needs citizens as well.


A tale of two analysts

Understanding how a process looks from outside our own echo chamber can be useful. It helps to calibrate and sanity-check our own responses. It adds an external perspective and at its best can save us from our own overly fixed ideas. In the case of the ongoing Elsevier boycott we even have perspectives that come from two opposed directions. The two analyst/brokerage firms Bernstein and Exane BNP Paribas have recently published reports on their view of how recent events should affect those investing in Reed Elsevier. In the weeks following the start of the boycott Elsevier’s stock price dropped – was this an indication of serious structural problems in the business revealed by the boycott (the Bernstein view), or just a short-term overreaction that provides an opportunity for a quick profit (the Exane view)?

Claudio Aspesi from Bernstein has been negative on Elsevier stock for some time [see Stephen Curry’s post for links and the most recent report], citing the structural problem that the company is stuck in a cycle of publishing more, losing subscriptions, charging more, and managing to squeeze out a little more profit for shareholders in each cycle. Aspesi has been stating for some time that this simply can’t go on. He also makes the link between the boycott and a potentially increased willingness of libraries to drop subscriptions or abandon big deals altogether. He is particularly scathing about the response to the boycott, arguing that Elsevier is continuing to estrange the researcher community and that this must ultimately be disastrous. In particular the report focuses on the claims management has made about its ability to shift the cost base away from libraries and onto researchers, based on “excellent relations with researchers”.

The Exane view, on the other hand, is that this is a storm in a teacup [summary at John Baez’s G+]. They point to the relatively small number of researchers signing up to the boycott, particularly in the context of the much larger numbers involved in similar pledges in 2001 and 2007. In doing this I feel they are missing the point – the environment of those boycotts was entirely different, both in terms of disciplines and targeting – but an objective observer might well view me as biased.

I do, however, find this report complacent on the details – claiming as it does that the “low take-up of this petition is a sign of the scientific community’s improving perception of Elsevier”, an indication of a lack of real data on researcher sentiment. They appear to have bought the Elsevier line on “excellent relations” uncritically – what I see on the ground is barely suppressed fury that is increasingly boiling over. The report also focuses on OA as a threat, not an opportunity, for Elsevier, a view which would certainly lead me to discount their long-term views on the company’s stock price. Their judgement is for me brought even further into question by the following:

“In our DCF terminal value, we capture the Open Access risk by assuming the pricing models flip to Gold Open Access with average revenue per article of USD3,000. Even on that assumption, we find value in the shares.”

Pricing the risk at this level is risible. The notion that Elsevier could flip to an author-pays model by charging US$3,000 an article is absurd. The poor take-up of the current Elsevier options and the massive growth of PLoS ONE and its clones at half this price set a clear price point, and one that is likely a high-water mark for journal APCs. If there is value in the shares at $3,000 then I can’t help but feel there won’t be very much at a likely end-point price well below $1,000.

However, both reports appear to me to fail to recognize one very important aspect of the situation – its volatility. As I understand it, these firms make their names by being right when they take positions away from the consensus, so they have a tendency to report their views as certainties. In this case I think the situation could swing either way very suddenly. As the Bernstein report notes, the defection of editorial staff from Elsevier journals is the most significant risk. A single board defection from a middle- to high-ranking journal – or a signal from a major society journal that it will not renew an Elsevier contract – could very easily start a landslide that ends Elsevier’s dominance as the largest research publisher. Equally, nothing much could happen, which would likely lead to a short-term rally in stock prices. But no-one is in a position to guess how this is going to play out.

In the long term I side with Aspesi – I see nothing in the overall tenor of Elsevier’s position statements that suggests to me that they really understand the research community, the environment, or how it is changing. Their pricing model for hybrid options seems almost designed to fail. As mandates strengthen, the company appears likely to continue to fight them rather than adapt. But to accept my analysis you need to believe my view that the subscription business model is no longer fit for purpose.

What this shows, more than anything else, is that the place where the battle for change will ultimately be fought out is the stock market. While Elsevier continues to tell its shareholders that it can deliver continuing profit growth from scholarly publishing with a subscription business model, it will be trapped into defending that business model against all threats. The Research Works Act is a part of that fight, as will be attempts elsewhere to block simple and global mandates by funders on researchers. While the shareholders believe that the status quo can continue, the senior management of the company is trapped by a legacy mindset. Until shareholders accept that the company needs to take a short-term haircut, the real investment required for change seems unlikely. And I don’t mean a few million here or there. I mean a full year’s profits ploughed back into the company over a few years to allow for root-and-branch change.

The irony is that large-scale change requires that the investors get spooked, and for that to happen something has to go very publicly wrong. The uproar over the support of SOPA and the RWA is not, yet, enough to convince the analysts beyond Aspesi that something is seriously wrong. What would be enough is an interesting question. My sense is that nothing big enough will come along soon enough, and that those structural issues will gradually come into play, leading to a long-term decline. It may be that we are very near “Peak Elsevier”. Your mileage, of course, may vary.

In case it is not obvious: I am not competent to offer financial or investment advice, and no-one should view the preceding as any form of such.


On the 10th Anniversary of the Budapest Declaration

Budapest: Image from Wikipedia, by Christian Mehlführer

Ten years ago today, the Budapest Declaration was published. The declaration was the output of a meeting held some months earlier, largely through the efforts of Melissa Hagemann, that brought together key players from the then-nascent Open Access movement. BioMed Central had been publishing for a year or so, PLoS existed as an open letter, and Creative Commons was still focussed on building a commons and hadn’t yet released its first licences. The dotcom bubble had burst, deflating many of the exuberant expectations of the first generation of web technologies, and it was to be another year before Tim O’Reilly popularised the term “Web 2.0”, arguably marking the real emergence of the social web.

In that context the text of the declaration is strikingly prescient. It focusses largely on the public good of access to research, a strong strand of the OA argument that remains highly relevant today.

“An old tradition and a new technology have converged to make possible an unprecedented public good. The old tradition is the willingness of scientists and scholars to publish the fruits of their research in scholarly journals without payment, for the sake of inquiry and knowledge. The new technology is the internet. The public good they make possible is the world-wide electronic distribution of the peer-reviewed journal literature and completely free and unrestricted access to it by all scientists, scholars, teachers, students, and other curious minds. Removing access barriers to this literature will accelerate research, enrich education, share the learning of the rich with the poor and the poor with the rich, make this literature as useful as it can be, and lay the foundation for uniting humanity in a common intellectual conversation and quest for knowledge.”

But at the same time, and again remember this is at the very beginning of the development of the user-generated web, the argument is laid out to support a networked research and discovery environment.

“…many different initiatives have shown that open access […] gives readers extraordinary power to find and make use of relevant literature, and that it gives authors and their works vast and measurable new visibility, readership, and impact.”

But for me, the core of the declaration lies in its definition. At one level it seems remarkable to have felt a need to define Open Access, and yet this is something we still struggle with today. The definition in the Budapest Declaration is clear, direct, and precise:

“By ‘open access’ to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.”

Core to this definition are three things: access to the text, understood as necessary to achieve the other aims; a limitation on restrictions; and a limitation on the use of copyright to supporting only the integrity and attribution of the work – which I interpret in retrospect to mean that the only acceptable licences are those that require attribution only. But the core forward-looking element lies in the middle of the definition, focussing as it does on specific uses – crawling, passing to software as data – that would have seemed outlandish, if not incomprehensible, to most researchers at the time.

In limiting the scope of acceptable restrictions, and in focussing on the power of automated systems, the authors of the Budapest declaration recognised precisely the requirements of information resources that we have more recently come to understand as requirements for effective networked information. Ten years ago, before Facebook existed, let alone before anyone was talking about frictionless sharing, the declaration identified the core characteristics that would enable research outputs to be accessed and read, but above all integrated, mined, aggregated, and used in ways that their creators did not, could not, expect. The core characteristics of networked information that enable research outputs to become research outcomes. The characteristics that will maximise the impact of that research.

I am writing this in a hotel room in Budapest. I am honoured to have been invited to attend a meeting to mark the 10th anniversary of the declaration and excited to be discussing what we have learnt over the past ten years and how we can navigate the next ten. The declaration itself remains as clear and relevant today as it was ten years ago. Its core message is one of enabling the use and re-use of research to make a difference. Its prescience in identifying exactly those issues that best support that aim in a networked world is remarkable.

In looking both backwards, over the achievements of the past ten years, and forwards, towards the challenges and opportunities that await us when true Open Access is achieved, the Budapest Declaration is, for me, the core set of principles that can guide us along the path to realising the potential of the web for supporting research and its wider place in society.


The parable of the garage: Why the business model shift is so hard

Image via Wikipedia: an auto mechanic works on a rally car.

Mike Taylor has a parable on the Guardian Blog about research communication and I thought it might be useful to share one that I have been using in talks recently. For me it illustrates just how silly the situation is, and how hard it is to break out of the mindset of renting access to content for the incumbent publishers. It also, perhaps, has a happier ending.

Imagine a world very similar to our own. People buy cars, they fill them with fuel, they pay road tax, and these things work largely as well as they do in our own world. There is just one difference: when a car needs its annual service it is taken to a garage – just as ours are – for its mechanical checkup and maintenance. In return for the service, the car is then gifted to the mechanic, who in turn provides it back to the owner for a rental fee.

Some choose to do their own servicing, or form clubs where they can work together to help service each other’s cars, but this is both hard work and, to be frank, a little obsessive and odd. Most people are perfectly happy to hand over the keys and then rent them back. It works just fine. The trouble is that society is changing: there is an increase in public transport, the mechanics are worried about their future, and the users seem keen to do new and strange things with the cars. They want to use them for work purposes, they want to loan them to friends, and in some cases they even want to use them to teach others to drive – possibly even for money.

Now for the mechanics this is a concern on two levels. First, they are uncertain about their future, as the world seems to be changing pretty fast. How can they provide certainty for themselves? Secondly, all these new uses seem to have the potential to make money for other people. That hardly seems fair, and the mechanics want a slice of that income, derived as it is from their cars. So, looking closely at their existing contracts, they identify that the existing agreements only provide for personal use. No mention is made of work use, certainly not of lending, and absolutely not of teaching.

For the garages, in this uncertain world, this is a godsend. Here is a whole set of new income streams. They can provide for the users to do all these new things, they have a diversified income stream, and everyone is happy! They could call it “Universal Uses” – a menu of options that car users can select from according to their needs and resources. Everyone will understand that this is a fair exchange. The cars are potentially generating more money and everyone gets a share of it, both the users and the real owners, the mechanics.

Unfortunately the car users aren’t so happy. They object to paying extra. After all, they feel that the garage is already recouping the costs of doing the service and making a healthy profit, so why does it need more? Having to negotiate each new use is a real pain in the backside, and the fine print seems to be so fine that every slight variation requires a new negotiation and a new payment. Given the revolution in the possible uses they might want to put their cars to, isn’t this just slowing down progress? Many of them even threaten to do their own servicing.

The problem for the garages is that they face a need for new equipment and staff training. Each time they see a new use that they don’t charge for, they see a lost sales opportunity. They spend money on getting the best lawyers to draw up new agreements, and make concessions on one use to try and shore up the market for another. At every stage there is a need to pin everything down, to lock down the cars, to ensure they can’t be used for unlicensed purposes, all of which costs more money, leading to a greater need to focus on different possibilities for charging. And every time they do this it puts them more and more at odds with their customers. But they’re so focussed on a world view in which they need to charge for every possible use of “their” cars that they can’t see a way out, beyond identifying each new possible use as it comes up and pinning it to the wall with a new contract, a new charge, and new limitations to prevent any unexpected new opportunities for income being lost.

But things are changing. There are a couple of radical new businesses down the road, BMC Motors and PLoS Garages. They do things differently. They charge up front for the maintenance and service, but then allow the cars to be used for any purpose whatsoever. There is a lot of scepticism – will people really pay for a service up front? How can people be sure that the service is any good? After all, if the garage has already been paid by the time you get your car back, what incentive does it have to make sure the car keeps working? But there is enough aggravation for a few people to start using them.

And gradually the view starts to shift. Where there is good service, people want to come back with their new cars – they discover entirely new possibilities of use because they are free to experiment, earn more money, buy more cars. The idea spreads and there is a slow but distinct shift – the whole economy gets a boost as all of the licensing costs simply drop out of the system. But the thing that actually drives the change? It’s all those people who just got sick of having to go back to the garage every time they wanted to do something new. In the end the irritation and waste of time in negotiating every new use just isn’t worth their time and effort. Paying up front is clean, clear, and simple. And it lets everyone get on with the things they really want to do.

 


Network Enabled Research: Maximise scale and connectivity, minimise friction

BBN Technologies TCP/IP internet map early 1986
Image via Wikipedia

Prior to all the nonsense with the Research Works Act, I had been having a discussion with Heather Morrison about licenses and Open Access, and peripherally the principle of requiring specific licenses of authors. I realized then that I needed to lay out the background thinking that leads me to where I am. The path that leads me here is built on a technical understanding of how networks function and what their capacity can be. This builds heavily on ideas I have taken from (in no particular order) Jon Udell, Jonathan Zittrain, Michael Nielsen, Clay Shirky, Tim O’Reilly, Danah Boyd, and John Wilbanks, among many others. Nothing much here is new, but it remains something that very few people really get. Ironically, the debate over the Research Works Act is what helped this narrative crystallise. This should be read as a contribution to Heather’s suggested “Articulating the Commons” series.

A pragmatic perspective

I am at heart a pragmatist. I want to see outcomes, I want to see evidence to support the decisions we make about how to get outcomes. I am happy to compromise, even to take tactical steps in the wrong direction if they ultimately help us to get where we need to be. In the case of publicly funded research we need to ensure that the public investment in research is made in such a way that it maximizes those outcomes. We may not agree currently on how to prioritize those outcomes, or the timeframe they occur on. We may not even agree that we can know how best to invest. But we can agree on the principle that public money should be effectively invested.

Ultimately the wider global public is for the most part convinced that research is something worth investing in, but in turn they expect to see outcomes of that research: jobs, economic activity, excitement, prestige, better public health, improved standards of living. The wider public is remarkably sophisticated when it comes to understanding that research may take a long time to bear fruit. But they are not particularly interested in papers. And when they become aware of academia’s obsession with papers they tend to be deeply unimpressed. We ignore that at our peril.

So it is important, when we think about the way we do research, that we understand the mechanisms and the processes that lead to outcomes. Even if we can’t predict exactly where outcomes will spring from (and I firmly believe that we cannot), that does not mean that we can avoid the responsibility of thoughtfully designing our systems so as to maximize the potential for innovation. The fact that we cannot, literally cannot under our current understanding of physics, follow the path of an electron through a circuit does not mean that we cannot build circuits with predictable overall behaviour. You simply design the system at a different level.

The assumptions underlying research communication have changed

So why are we having this conversation? And why now? What is it about today’s world that is so different? The answer, of course, is the internet. Our underlying communications and information infrastructure is arguably undergoing its biggest change since the development of Gutenberg’s press. Like all new communication networks – SMS, fixed telephones, the telegraph, the railways, and writing itself – the internet doesn’t just change how well we can do things, it qualitatively changes what we can do. To give a seemingly trivial example, the expectations and possibilities of a society with mobile telephones are qualitatively different, and their introduction has changed the way we behave and expect others to behave. The internet is a network on a scale, and with connectivity, that we have never had before. The potential change in our capacity as individuals, communities, and societies is therefore immense.

Why do networks change things? Before a network technology spreads you can imagine people, largely separated from each other, unable to communicate in this new way. As you start to make connections nothing much really happens, a few small groups can start to communicate in this new way, but that just means that they can do a few things a bit better. But as more connections form suddenly something profound happens. There comes a point where there is a transition – where suddenly nearly everyone is connected. For the physical scientists this is in fact a phase transition and can display extreme cooperativity – a sudden break where the whole system crystallizes into a new state.

At this point the whole is suddenly greater than the sum of its parts. Suddenly there is the possibility of coordination, of distribution of tasks, that was simply not possible before. The internet simply does this better than any other network we have ever had. It is better for a range of reasons, but the key ones are: its immense scale – connecting more people, and now machines, than any previous network; its connectivity – the internet is incredibly densely connected, essentially enabling any computer to speak to any other computer globally; and its lack of friction – transfer of information is very low cost, essentially zero compared to previous technologies, and is very, very easy. Anyone with a web browser can point and click and be a part of that transfer.
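The phase transition described above is what network scientists call the emergence of a giant component, and it is easy to see in a toy simulation. The sketch below is my own illustration, not part of the original argument (the function name and parameters are mine): it builds a random graph and measures the largest connected cluster. Below an average of one connection per node almost nothing is joined up; just above it, most of the network suddenly is.

```python
import random

def largest_component_fraction(n, avg_degree, seed=0):
    """Fraction of nodes in the largest connected cluster of a random
    (Erdos-Renyi style) graph with n nodes and the given mean degree."""
    rng = random.Random(seed)
    parent = list(range(n))  # union-find forest over the nodes

    def find(x):
        # Path-halving find: walk to the root, shortcutting as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Sample m = n * avg_degree / 2 random edges (duplicates and self-loops
    # are possible but rare, and only nudge the effective degree down a bit).
    for _ in range(int(n * avg_degree / 2)):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:
            parent[a] = b

    # Count component sizes by root and return the biggest as a fraction of n.
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

# Below a mean degree of one connection per node the largest cluster stays
# tiny; just above it, a giant component abruptly spans most of the network.
for c in (0.5, 1.0, 1.5, 3.0):
    print(f"mean degree {c}: largest cluster {largest_component_fraction(20000, c):.3f}")
```

The sharpness of the jump around a mean degree of one is the cooperative "crystallisation" referred to above: nothing gradual, just a sudden shift from isolated clusters to one connected whole.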

What does this mean for research?

So if the internet and the web bring new capacity, where is the evidence that this is making a difference? If we have fundamentally new capacity where are the examples of that being exploited? I will give two examples, both very familiar to many people now, but ones that illustrate what can be achieved.

In late January 2009 Tim Gowers, a Fields medalist and arguably one of the world's greatest living mathematicians, posed a question: could a group of mathematicians working together be better at solving a problem than one working alone? He suggested a problem, one that he had an idea how to solve but felt was too challenging to tackle on his own. He then hedged his bets, stating:

“It is not the case that the aim of the project is [to solve the problem but rather it is to see whether the proposed approach was viable] I think that the chances of success even for this more modest aim are substantially less than 100%.”

A loose collection of interested parties, some world-leading mathematicians, others interested but less expert, started to work on the problem. Six weeks later Gowers announced that he believed the problem solved:

“I hereby state that I am basically sure that the problem is solved (though not in the way originally envisaged).”

In six weeks an unplanned assortment of contributors had solved a problem that a world-leading mathematician had thought both interesting and too hard. And they had solved it by a route other than the one he had originally proposed. Gowers commented:

“It feels as though this is to normal research as driving is to pushing a car.”

For one of the world's great mathematicians, there was a qualitative difference in what was possible when a group of people with the appropriate expertise were connected via a network through which they could easily and effectively transmit ideas, comments, and proposals. Three key messages emerge: the scale of the network was sufficient to bring the required resources to bear; the connectivity of the network was sufficient that work could be divided effectively and rapidly; and there was little friction in transferring ideas.

The Galaxy Zoo project arose out of a different kind of problem at a different kind of scale. One means of testing theories of the history and structure of the universe is to look at the numbers and types of different categories of galaxy in the sky. Images of the sky are collected and made freely available to the community. Researchers then categorize galaxies by hand to build up data sets that allow them to test theories. An experienced researcher could perhaps classify a hundred galaxies in a day. A paper might require a statistical sample of around 10,000 galaxy classifications to get past peer review. One truly heroic student classified 50,000 galaxies during their PhD, declaring at the end that they would never classify another.

However, problems were emerging. It was becoming clear that the statistical power offered by even 10,000 galaxies was not enough. One group would get different results to another. More classifications were required. Data wasn't the problem. The Sloan Digital Sky Survey had a million galaxy images. But computer-based image categorization wasn't up to the job. The solution? Build a network. In this case a network of human participants willing to contribute by categorizing the galaxies. Several hundred thousand people classified the millions of images several times over in a matter of months. Again the key messages: the scale of the network – both the number of images and the number of participants; the connectivity of the network – the internet made it easy for people to connect and participate; and a lack of friction – sending images one way, and a simple classification back the other, was easy. Making the website easy, even fun, for people to use was a critical part of the success.
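The key reason many volunteer classifications can substitute for expert ones is that independent votes on the same image can be reduced to a consensus label. A minimal sketch of that aggregation step (the function, thresholds, and data are invented for illustration, not Galaxy Zoo's actual pipeline):

```python
from collections import Counter

def consensus(classifications, min_votes=3, min_agreement=0.6):
    """Reduce many volunteer labels for one galaxy to a single
    consensus label, or None if the crowd is too small or too split."""
    if len(classifications) < min_votes:
        return None
    label, count = Counter(classifications).most_common(1)[0]
    return label if count / len(classifications) >= min_agreement else None

# Each galaxy is classified independently by many volunteers.
votes = {
    "galaxy_001": ["spiral", "spiral", "spiral", "elliptical"],
    "galaxy_002": ["spiral", "elliptical"],            # too few votes
    "galaxy_003": ["merger", "spiral", "elliptical"],  # no consensus
}
results = {galaxy: consensus(labels) for galaxy, labels in votes.items()}
print(results)
```

Because each image is classified several times over, disagreements and unreliable volunteers wash out statistically, which is exactly what makes the scale of the network usable as data.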

Galaxy Zoo changed the scale of this kind of research. It provided a statistical power that was unheard of and made it possible to ask fundamentally new types of questions. It also enabled fundamentally new types of people to play an effective role in the research: school children, teachers, full-time parents. It enabled qualitatively different research to take place.

So why hasn’t the future arrived then?

These are exciting stories, but they remain just that. Sure, I can multiply examples, but they are still limited. We haven't yet taken real advantage of the possibilities. There are lots of reasons for this, but the fundamental one is inertia. People within the system are, for the most part, pretty happy with how it works. They don't want to rock the boat too much.

But there is a group of people who are starting to be interested in rocking the boat: the funders, the patient groups, the global public who want to see outcomes. The thought process hasn't worked through yet, but when it does they will all be asking one question: "How are you building networks to enable research?" The question may come in many forms – "How are you maximizing your research impact?" – "What are you doing to ensure the commercialization of your research?" – "Where is your research being used?" – but they all really mean the same thing: how are you working to make sure that the outputs of your research go into the biggest, most connected, lowest friction network that they possibly can?

As service providers, all of those who work in this industry – and I mean all, from the researchers to the administrators, to the publishers and the librarians – will need to have an answer. The surprising thing is that it's actually very easy. The web makes building and exploiting networks easier than it has ever been because it is a network infrastructure. It has scale: billions of people, billions of computers, exabytes of information resources, exaflops of computational resources. It has connectivity on a scale that is literally unimaginable – the human mind can't conceive of that number of connections because the web has more. It is incredibly low in friction – the cost of information transfer is in most cases so close to zero as to make no difference.

Service requirements

To exploit the potential of the network, all we need to do is get as much material online as fast as we can. We need to connect it up, to make it discoverable, to make sure that people can find and understand and use it. And we need to ensure that once found those resources can be easily transferred, shared, and used – and used in any way. At network scale resources get used in unexpected ways. At scale you can have serendipity by design, not by blind luck.

The problem arises with the systems we have in place to get material online. The raw material of science is not often in a state where putting it online is immediately useful. It needs checking, formatting, testing, indexing. All of this does require real work, and real money. So we need services to do this, and we need to be prepared to pay for those services. The trouble is our current system has this backwards. We don’t pay directly for those services so those costs have to be recouped somehow. And the current set of service providers do that by producing the product that we really need and want and then crippling it.

Currently we take raw science and through a collaborative process between researchers and publishers we generate a communication product, generally a research paper, that is what most of the community holds as the standard means by which they wish to receive information. Because the publishers receive no direct recompense for their contribution they need to recover those costs by other means. They do this by artificially introducing friction and then charging to remove it.

This is a bad idea on several levels. Firstly, it means the product we get doesn't have the maximum impact it could, because it's not embedded in the largest possible network. From a business perspective it creates risks: publishers have to invest up front and then recoup money later, rather than being confident that expenditure and cash flow are coupled. This means, for instance, that if there is a sudden rise (or fall) in the number of submissions there is no guarantee that cash flows or costs will scale with that change. But the real problem is that it distorts the market. Because on the researcher side we don't pay for the product of effective communication, we don't pay much attention to what we're getting. On the publisher side it drives a focus on surface and presentation, because that enhances the product in the current purchaser's eyes, rather than a ruthless focus on production costs and shareability.

Network Ready Research Communication

If we care about taking advantage of the web and internet for research then we must tackle the building of scholarly communication networks. These networks will have those critical characteristics described above, scale and a lack of friction. The question is how do we go about building them. In practice we actually already have a network at huge scale – the web and the internet do that job for us, connecting essentially all professional researchers and a large proportion of the interested public. There is work to be done on expanding the reach of the network but this is a global development goal, not something specific to research.

So if we already have the network then what is the problem? The issue lies in the second characteristic – friction. Our current systems are actually designed to create friction. Before the internet was in place our network was formed of a distribution system involving trucks and paper – reducing costs to reasonable levels meant charging for that distribution process. Today those distribution costs have fallen to as near zero as makes no difference, yet we retain the systems that add friction unnecessarily. Slow review processes, charging for access, formats and discovery tools that are no longer fit for purpose.

What we need to do is focus on the process of taking the research that we do and converting it into a Network Ready form. That is, we need access to services that take our research and make it ready to exploit our network infrastructure – or we need to do it ourselves. What does "Network Ready" mean? A piece of Network Ready Research will be modular and easily discoverable; it will present different facets that allow people and systems to use it in a wide variety of ways; it will be compatible with the widest range of systems; and above all it will be easily shareable. Not just copyable or pasteable, but easily shared through multiple systems while carrying with it all the context required to make use of it, all the connections that will allow a user to dive deeper into its component parts.

Network Ready Research will be interoperable, socially, technically, and legally with the rest of the network. The network is more than just technical infrastructure. It is also built up from the social connections, a shared understanding of the parameters of re-use, and a compatible system of checks and balances. The network is the shared set of technical and social connections that together enable new connections to be made. Network Ready Research will move freely across that, building new connections as it goes, able to act as both connecting edge and connected node in different contexts.
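One way to make these characteristics concrete is to imagine the machine-readable record that would travel with a Network Ready output. The sketch below is purely illustrative: every field name and URL is invented for the example, and it does not correspond to any particular metadata standard.

```python
import json

# A hypothetical "network ready" record. All identifiers and field
# names here are made up for illustration only.
network_ready_record = {
    "id": "https://example.org/output/42",        # globally resolvable identifier
    "title": "Example dataset",
    "license": "CC0",                             # legally interoperable: any use, by anyone
    "formats": ["text/csv", "application/json"],  # technical facets for different consumers
    "parts": [                                    # modular: components addressable on their own
        {"id": "https://example.org/output/42/raw", "role": "raw data"},
        {"id": "https://example.org/output/42/code", "role": "analysis code"},
    ],
    "links": [                                    # context travels with the object
        {"rel": "derivedFrom", "id": "https://example.org/output/17"},
    ],
}
print(json.dumps(network_ready_record, indent=2))
```

The point of the sketch is the shape, not the fields: an identifier so the object can be a node, a clear license so re-use is legally frictionless, declared formats so machines can consume it, and typed links so it can act as a connecting edge to other work.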

Building and strengthening the network

If you believe the above, as I do, then you see a potential for us to qualitatively change our capacity as a society to innovate, understand our world, and help to make it a better place. That potential will be best realized by building the largest, most effective, and lowest friction network possible. A networked commons in which ideas and data, concepts and expertise can be most easily shared, and can most easily find the place where they can do the most good.

Therefore the highest priority is building this network, making its parts and components interoperable, and making it as easy as possible to connect up networks that already exist. For an agency that funds research and seeks to ensure that research makes a difference the only course of action is to place the outputs of that research where they are most accessible on the network. In blunt terms that means three things: free at the point of access, technically interoperable with as many systems as possible, and free to use for any purpose. The key point is that at network scale the most important uses are statistically likely to be unexpected uses. We know we can’t predict the uses, or even success, of much research. That means we must position it so it can be used in unexpected ways.

Ultimately, the bigger the commons, the bigger the network, the better. And the more interoperable it is, and the wider the range of uses it supports, the better. That ultimately is why I argue for liberal licences, for the exclusion of non-commercial terms. It is why I use ccZero on this blog and for software that I write where I can. For me, the risk of commercial enclosure is so much smaller than the risk of not building the right networks, of creating fragmented incompatible networks, of ultimately not being able to solve the crises we face today in time to do any good, that the course of action is clear. At the same time we need to build up the social interoperability of the network, to call out bad behaviour and perhaps in some cases to isolate its perpetrators, but we need to find ways of doing this that don't damage the connectivity and freedom of movement on the network. Legal tools are useful to assure users of interoperability and their rights; beyond that they just become a source of friction. Social tools are a more viable route for encouraging desirable behaviour.

The priority has to be achieving scale and lowering friction. If we can do this then we have the potential to create a qualitative jump in our research capacity on a scale not seen since the 18th century, and perhaps ever. And it certainly feels like we need it.


The Research Works Act and the breakdown of mutual incomprehension


When the history of the Research Works Act, and the reaction against it, is written, it will point at the factors that allowed smart people with significant marketing experience to walk with their eyes wide open into the teeth of a storm that thousands of people could have predicted with complete confidence. That story will detail two utterly incompatible world views of scholarly communication. The interesting thing is that with the benefit of hindsight both will be totally incomprehensible to an observer from five or ten years in the future. It seems worthwhile therefore to try to detail those world views as I understand them.

The scholarly publisher

The publisher world view places them as the owner and guardian of scholarly communications. While publishers recognise that researchers provide the majority of the intellectual property in scholarly communication, their view is that researchers willingly and knowingly gift that property to the publishers in exchange for a set of services that they appreciate and value. In this view everyone is happy as a trade is carried out in which everyone gets what they want. The publisher is free to invest in the service they provide and has the necessary rights to look after and curate the content. The authors are happy because they can obtain the services they require without having to pay cash up front.

Crucial to this world view is a belief that research communication, the process of writing and publishing papers, is separate from the research itself. This is important because otherwise it would be clear, at least in an ethical sense, that the writing of papers is work for hire for the funders – part and parcel of the contract of research. For publishers, the fact that no funding contract specifies that "papers must be published" is the primary evidence of this separation.

The researcher

The researcher's perspective is entirely different. Researchers view their outputs as their own property: the ideas, the physical outputs, and the communications. Within institutions you see this in the uneasy relationship between researchers and research translation and IP exploitation offices. Institutions try to avoid inflaming the issue by ensuring that economic returns on IP go largely to the researcher, at least until there is real money involved. At that stage the issue is usually fudged, as the extra investment required dilutes ownership. But scratch a researcher who has gone down the exploitation path and then been gently pushed aside and you'll get a feel for the sense of personal ownership involved.

Researchers have a love-hate relationship with papers. Some people enjoy writing them, although I suspect this is rare. I’ve never met any researcher who did anything but hate the process of shepherding a paper through the review process. The service, as provided by the publisher, is viewed with deep suspicion. The resentment that is often expressed by researchers for professional editors is primarily a result of a loss of control over the process for the researcher and a sense of powerlessness at the hands of people they don’t trust. The truth is that researchers actually feel exactly the same resentment for academic editors and reviewers. They just don’t often admit it in public.

So from a researcher’s perspective, they have spent an inordinate amount of effort on a great paper. This is their work, their property. They are now obliged to hand over control of this to people they don’t trust to run a process they are unconvinced by. Somewhere along the line they sign something. Mostly they’re not too sure what that means, but they don’t give it much thought, let alone read it. But the idea that they are making a gift of that property to the publisher is absolute anathema to most researchers.

To be honest, researchers don't care that much about a paper once it's out. It caused enough pain and they don't ever want to see it again. This may change over time if people start to cite it and refer to it in supportive terms, but most people won't really look at a paper again. It's a line on a CV, a notch on the bedpost. What they do notice is the cost of, or lack of access to, other people's papers. Library budgets are shrinking, subscriptions are being chopped, and personal subscriptions don't seem to be affordable any more.

The first response when researchers meet is "why can't we afford access to our work?" The second, given the general lack of respect for the work that publishers do, is to start down the path of claiming that they could do it better. Much of the rhetoric around eLife as a journal "led by scientists" is built on this view. And a lot of it is pure arrogance. Researchers neither understand nor, for the most part, appreciate the work of copyediting and curation, layout and presentation. While there are tools today that can do many of these things more cheaply, there are very few researchers who could use them effectively.

The result…kaboom!

So the environment that set the scene for the Research Works Act revolt was a combination of simmering resentment amongst researchers at the cost of accessing the literature and a lack of understanding of what it is publishers actually do. The spark that set it off was the publisher rhetoric about ownership of the work. This was always going to happen one day. The mutually incompatible world views could co-exist while there was still enough money to go around. While librarians felt trapped between researchers who demanded access to everything and publishers offering deals that just about let them scrape by, things could continue.

Fundamentally, once publishers started publicly using the term "appropriation of our property" the spark had flown. From the publisher perspective this makes perfect sense: the NIH mandate is a unilateral appropriation of their property. From the researcher perspective it is a system that essentially adds a bit of pressure to do something they know is right, promote access, without causing them too much additional pain. Researchers feel they ought to be doing something to improve access to research outputs, but for the most part they're not too sure what, because they sure as hell aren't in a position to change the journals they publish in. That would be (perceived to be) career suicide.

The elephant in the room

But it is of course the funder perspective that we haven't yet discussed, and looking forward, in my view it is the action of funders that will render both the publisher and researcher perspectives incomprehensible in ten years' time. The NIH view, similar to that of the Wellcome Trust, and indeed every funder I have spoken to, is that research communication is an intrinsic part of the research they fund. Funders take a close interest in the outputs that their research generates. One might say a proprietorial interest, because again there is a strong sense of ownership. The NIH mandate expresses this through the grant contract: researchers are required to grant the NIH a license to hold a copy of their research work.

In my view it is through research communication that research has outcomes and impact. From the perspective of a funder, the main interest is that the research they fund generates those outcomes and impacts. For a mission-driven funder the current situation signals one thing, and it signals it very strongly: neither publishers nor researchers can be trusted to do this properly. Funders will move to stronger mandates, more along the Wellcome Trust lines than the NIH lines, and this will expand. At the end of the day, the funders hold all the cards. Publishers never really did have a business model; they had a public subsidy. The holders of those subsidies can only really draw one conclusion from current events: that they are going to have to be much more active in where they spend them to successfully perform their mission.

The smart funders will work with the pre-existing prejudice of researchers, probably granting copyright and IP rights to the researchers, but placing tighter constraints on the terms of forward licensing. That funders don’t really need the publishers has been made clear by HHMI, Wellcome Trust, and the MPI. Publishing costs are a small proportion of their total expenditure. If necessary they have the resources and will to take that in house. The NIH has taken a similar route though technically implemented in a different way. Other funders will allow these experiments to run, but ultimately they will adopt the approaches that appear to work.

Bottom line: within ten years all major funders will mandate CC-BY Open Access for publications arising from work they fund, effective immediately on publication. Several major publishers will not survive the transition. A few will, and a whole set of new players will spring up to fill the spaces. The next ten years look to be very interesting.


Response to the OSTP Request for Information on Public Access to Research Data

Response to Request for Information – FR Doc. 2011-28621

Dr Cameron Neylon – U.K. based research scientist writing in a personal capacity

Introduction

Thank you for the opportunity to respond to this request for information and to the parallel RFI on access to scientific publications. Many of the higher-level policy issues relating to data are covered in my response to the other RFI, and I refer to that response where appropriate here. Specifically, I reiterate my point that a focus on IP in the publication is a non-productive approach. It is more productive to identify the outcomes that are desired as a result of the federal investment in generating data, and from those outcomes to identify the services required to convert the raw material of the research process into accessible outputs that can support those outcomes.

Response

(1) What specific Federal policies would encourage public access to and the preservation of broadly valuable digital data resulting from federally funded scientific research, to grow the U.S. economy and improve the productivity of the American scientific enterprise?

Where the Federal government has funded the generation of digital data, either through generic research funding or through focussed programs that directly target data generation, the purpose of this investment is to generate outcomes. Some data has clearly defined applications, and much data is obtained to further very specific research goals. However, while it is possible to identify likely applications, it is not possible, and indeed is foolhardy, to attempt to define and limit the full range of uses which data may find.

Thus to ensure that data created through federal investment is optimally exploited, it is crucial that data be a) accessible, b) discoverable, c) interpretable, and d) legally re-usable by any person for any purpose. Achieving this requires investment in infrastructure, markup, and curation. This investment is not currently seen as either a core activity for researchers themselves or a desirable service for them to purchase. It is rare, therefore, for such services or resource needs to be thoughtfully costed in grant applications.

The policy challenge is therefore to create incentives, both symbolic and contractual, that are directly meaningful to researchers' careers and progression, and that encourage researchers either to undertake these necessary activities themselves or to purchase, and appropriately cost, third-party services to carry them out.

Policy intervention in this area will be complex and will need to be thoughtful. Three simple policy moves, however, are highly tractable and productive without requiring significant process adjustments in the short term:

a) Require researchers to provide a data management or data accessibility plan within grant requests. The focus of these plans should be showing how the project will enable third party groups to discover and re-use data outputs from the project.

b) As part of project reporting, require measures of how data outputs have been used. These might include download counts, citations, comments, or new collaborations generated through the data. In the short term this assessment need not be directly used, but it sends a message that agencies consider this important.

c) Explicitly measure performance on data re-use. Require it to be reported as part of biographical sketches and provide data on previous performance to grant panels. In the longer term it may be appropriate to provide guidance to panels on the assessment of previous performance on data re-use, but in the first instance simply providing the information will affect behaviour and raise general awareness of issues of data accessibility, discoverability, and usability.

(2) What specific steps can be taken to protect the intellectual property interests of publishers, scientists, Federal agencies, and other stakeholders, with respect to any existing or proposed policies for encouraging public access to and preservation of digital data resulting from federally funded scientific research?

As noted in my response to the other RFI, the focus on intellectual property is not helpful. Private contributors of data, such as commercial collaborators, should be free to exploit their own contribution of IP to projects as they see fit. Federally funded research should seek to maximise the exploitation and re-use of data generated through public investment.

It has been consistently and repeatedly demonstrated across a wide range of domains that the most effective way of exploiting the outputs of research innovation, be they physical samples or digital data, to support further research, to drive innovation, or to support economic activity globally, is to make those outputs freely available with no restrictive terms. That is, the most effective way to use research data to drive economic activity and innovation at a national level is to give the data away.

The current IP environment means that in specific cases, such as where there is very strong evidence of a patentable result with demonstrated potential, the optimisation of outcomes does require protection of the IP. There are also situations where privacy and other legal considerations mean that data cannot be released, or not fully released. These should, however, be seen as the exception rather than the rule.

(3) How could Federal agencies take into account inherent differences between scientific disciplines and different types of digital data when developing policies on the management of data?

At the Federal level only very high-level policy decisions should be taken. These should provide direction and strategy but allow tactics and the details of implementation to be handled at agency or community levels. What both the Federal agencies and coordination bodies such as OSTP can effectively provide is an oversight function and, where appropriate, funding support to maintain, develop, and expand interoperability between developing standards in different communities.

Local custom, dialects, and community practice will always differ, and it is generally unproductive to enforce standardisation of implementation details. The policy objective should be to set the expectations and frameworks within which local implementations can be developed, and to develop criteria against which those local implementations can be assessed.

(4) How could agency policies consider differences in the relative costs and benefits of long-term stewardship and dissemination of different types of data resulting from federally funded research?

Prior to assessing differences in performance and return on investment it will be necessary to provide data-gathering frameworks and to develop significant expertise in the detailed assessment of the data gathered. A general principle that should be considered is that the administrative and performance data relating to the accessibility and re-use of research data should itself provide an outstanding exemplar of best practice in accessibility, curation, discoverability, and re-usability.

The first step in cost-benefit analysis must be to develop an information and data base that supports that analysis. This will mean tracking and aggregating the forms of data use that are measurable today (download counts, citations) as well as developing mechanisms for tracking uses and impacts that are challenging or impossible to track today (data use in policy development, the impact of data on clinical practice guidelines).

Only once this assessment framework is in place can a detailed process of cost-benefit analysis be seriously considered. Differences will exist in the measurable and imponderable returns on investment in data availability, and also in the timeframes over which those returns are realised. We have only a very limited understanding of these issues today.

(5) How can stakeholders (e.g., research communities, universities, research institutions, libraries, scientific publishers) best contribute to the implementation of data management plans?

If stakeholders have serious incentives to optimise the use and re-use of data then all players will seek to gain competitive advantage through making the highest quality contributions. An appropriate incentives framework obviates the need to attempt to design in or pre-suppose how different stakeholders can, will, or should best contribute going forward.

(6) How could funding mechanisms be improved to better address the real costs of preserving and making digital data accessible?

As with all research outputs, there should be a clear obligation on researchers to plan, on a best-efforts basis, to publish these (as in, make public) in a form that most effectively supports access and re-use, tensioned against the resources available. Funding agencies should make clear that they expect communication of research outputs to be a core activity of their funded research, and that researchers and their institutions will be judged on their performance in selecting the appropriate modes of communication.

Further, funding agencies should explicitly set guidance levels on the proportion of a research grant that, under normal circumstances, is expected to be used to support the communication of outputs. Based on calculations from the Wellcome Trust, where projected expenditure on the publication of traditional research papers was around 1-1.5% of total grant costs, it would be reasonable to project total communication costs of 2-4% of total grant costs once data and other research communications are considered. This guidance, and the details of best practice, should clearly be adjusted as data is collected on both costs and performance.
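The projection above is simple arithmetic, and can be made concrete with a worked example. Only the 2-4% guidance band comes from the text; the grant value below is entirely hypothetical.

```python
# Hypothetical worked example of the communication-cost guidance above.
# Only the 2-4% band comes from the text; the grant value is illustrative.

def communication_budget(total_grant, low=0.02, high=0.04):
    """Return the (low, high) range of funds expected to be set aside
    for communicating outputs, using the 2-4% guidance band."""
    return total_grant * low, total_grant * high

low, high = communication_budget(500_000)  # a notional $500k award
print(f"Set aside ${low:,.0f}-${high:,.0f} for communication of outputs")
```

On a notional $500,000 award this gives $10,000-$20,000, roughly double the 1-1.5% historically spent on traditional papers alone.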

(7) What approaches could agencies take to measure, verify, and improve compliance with Federal data stewardship and access policies for scientific research? How can the burden of compliance and verification be minimized?

Ideally, compliance and performance will be trackable through automated systems that are triggered as a side effect of the activities required to enable data access. Thus references for new data should be registered with appropriate services to enable discovery by third parties; these same services can then be used to track those outputs automatically. Frameworks and infrastructure for sharing should have tracking mechanisms built in. Much of the aggregation of data at scale can build on existing work in the STAR METRICS program and draw inspiration from that experience.
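The "tracking as a side effect" idea can be sketched as follows. The registry and its API here are entirely hypothetical, not any real service: the point is only that the act of registering a dataset for discovery can simultaneously create the compliance record, so no separate report is ever filed.

```python
# Minimal sketch of compliance tracking as a side effect of registration.
# The registry and its methods are hypothetical, not any real service.

class DataRegistry:
    def __init__(self):
        self._records = {}

    def register(self, dataset_id, award_id):
        """Registering a dataset for third-party discovery simultaneously
        creates the compliance record linking it to its funding award."""
        self._records[dataset_id] = {"award": award_id, "uses": 0}

    def record_use(self, dataset_id):
        """Third-party access events increment a use counter, giving
        funders impact data with no extra burden on the researcher."""
        self._records[dataset_id]["uses"] += 1

    def compliance_report(self, award_id):
        """A funder's compliance report becomes a query, not a form."""
        return {d: r["uses"] for d, r in self._records.items()
                if r["award"] == award_id}

registry = DataRegistry()
registry.register("doi:10.9999/example.1", award_id="FED-0001")
registry.record_use("doi:10.9999/example.1")
print(registry.compliance_report("FED-0001"))  # {'doi:10.9999/example.1': 1}
```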

Overall it should be possible to reduce the burden of compliance from its current level while gathering vastly more data and information of much higher quality than is currently collected.

(8) What additional steps could agencies take to stimulate innovative use of publicly accessible research data in new and existing markets and industries to create jobs and grow the economy?

There are a variety of proven methods for stimulating innovative use of data at both large and small scale. The first is simply to make the data available: if data is made available at scale, it is highly likely that some of it will be used somewhere. More direct encouragement of specific uses can be achieved through directed “hack events” that bring together data handling and data production expertise from specific domains. There is significant U.S. expertise in successfully managing these events and generating exciting outcomes, which in turn lead to new startups and new innovation.

There is also significant growth in the number of data-focussed entrepreneurs who are now veterans of the early development of the consumer web. Many of these have a strong interest in research as well as substantial resources, and there is great potential for leveraging their experience to stimulate further growth. However this interface does need to be carefully managed, as the cultures of research data curation and of web-scale data mining and exploitation are very different.

(9) What mechanisms could be developed to assure that those who produced the data are given appropriate attribution and credit when secondary results are reported?

The existing norms of the research community that recognise and attribute contributions to further work should be strengthened and supported. While it is tempting to use legal instruments to enforce a need for attribution there is growing evidence that this can lead to inflexible systems that cannot adapt to changing needs. Thus it is better to utilise social enforcement than legal enforcement.

The current good work on data citation and mechanisms for tracking the re-use of data should be supported and expanded. Funders should explicitly require that service providers add capacity for tracking data citation to the products that are purchased for assessment purposes. Where possible the culture of citation should be expanded into the wider world in the form of clinical guidelines, government reports, and policy development papers.

(10) What digital data standards would enable interoperability, reuse, and repurposing of digital scientific data? For example, MIAME (minimum information about a microarray experiment; see Brazma et al., 2001, Nature Genetics 29, 371) is an example of a community-driven data standards effort.

At the highest level there is a growing range of interoperable information transfer formats, including RDF, XML, OWL, and JSON, that can provide machine-readable, integrable data transfer. My own experience is that attempting to impose global interchange standards is an enterprise doomed to failure; it is more productive to support these standards within existing communities of practice.

Thus the appropriate policy action is to recommend that communities adopt the most widely used set of standards possible, and to support the transitions of practice and infrastructure required for that adoption. Selecting standards at the highest level is likely to be counterproductive. Identifying and disseminating best practice in the development and adoption of standards is, however, an appropriate remit of federal agencies.
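As a concrete illustration of the machine-readable interchange formats mentioned above, the same minimal dataset description can be serialised to JSON with nothing beyond a standard library, and recovered losslessly by any third party. The field names here are purely illustrative, not a proposed standard.

```python
import json

# Illustrative only: these field names are not a proposed standard,
# just an example of a machine-readable dataset description that any
# community of practice could exchange and parse.
record = {
    "identifier": "doi:10.9999/example.1",
    "title": "Example microarray dataset",
    "creators": ["A. Researcher"],
    "format": "MIAME-compliant table",
}

serialised = json.dumps(record, sort_keys=True)
roundtrip = json.loads(serialised)
assert roundtrip == record  # lossless machine-readable interchange
```

An equivalent record could be expressed in RDF or XML; the choice between them is exactly the kind of decision best left to each community of practice.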

(11) What are other examples of standards development processes that were successful in producing effective standards and what characteristics of the process made these efforts successful?

There is now a significant literature on community development and practice and this should be referred to. Many lessons can also be drawn from the development of effective and successful open source software projects.

(12) How could Federal agencies promote effective coordination on digital data standards with other nations and international communities?

There are a range of global initiatives with which communities should engage. The most effective means of practical engagement will be to identify communities that have a desire to standardise or integrate systems, and to support the technical and practical transitions that enable this. For instance, there is a widespread desire for interoperable data formats from analytical instrumentation, but few examples of this transition being achieved. Funding could be directed to supporting a specific analytical community, and the vendors that serve it, in applying an existing standard to their work.

(13) What policies, practices, and standards are needed to support linking between publications and associated data?

Development in this area is at an early stage. There is a need to reconsider the form of publication in its widest sense and this will have a significant impact on the forms and mechanisms of linking. This is a time for experimentation and exploration rather than standards development.

 


Response to the OSTP Request for Information on Public Access to Scientific Publications

Response to Request for Information – FR Doc. 2011-28623

Dr Cameron Neylon – U.K.-based research scientist writing in a personal capacity

Introduction

Thank you for the opportunity to respond to this request for information. As a researcher based in the United Kingdom, it might be argued that I have a conflict of interest: in some ways it is in my interest for U.S. federally funded research to be uncompetitive. Evolving technology has brought many opportunities with the potential to increase the efficiency of research itself, as well as its exploitation and conversion into improved health outcomes, economic activity, a highly trained workforce, and technical innovation. Globally this potential has not been fully realised. In arguing for steps that work towards realising that potential in the U.S., it might be expected that I am aiding a competitor, and perhaps in the longer term reducing the opportunity for Europe to overtake the U.S. as a global research contributor.

However I do not believe this to be the case. First, the potential efficiency gains, and the extent to which they would increase the rate of innovation and economic development, are so great that their adoption in any part of the world will increase the effectiveness and capacity of research globally. Secondly, the competition provided by a resurgent U.S. research base will galvanise action in Europe and more widely, leading to a “race to the top” in which, while those at the lead will benefit the most, there will be significant opportunities for the entire research base. My contribution is made in that light.

Preamble

The RFI and the America COMPETES Act are welcome developments in the area of public information, taking forward the discussion of how best to improve the effectiveness of publicly funded research. Nonetheless I must respectfully state that I believe the framing of the RFI is flawed. The concentration on the disposition of intellectual property risks obscuring the real issues and preventing the resolution of current tensions between researchers, the public that funds research, federal agencies, and service providers, including scholarly publishers.

The intellectual property that is generated through publicly funded research takes many forms. It includes patents and the scholarly communications of researchers (including peer reviewed papers), as well as trade secrets and expertise. The funder of this IP is the taxpayer, through the action of government. Federal funders pay the direct costs of research, as well as indirect costs including, but not limited to, investigator salaries, subscriptions to scholarly journals, and the provision of infrastructure. That the original ownership of this IP is vested in the government is recognised in the Bayh-Dole Act, which explicitly transfers those rights to research institutions and in return places an obligation on the institutions to maximise the benefits arising from that research.

The government chooses to invest in the generation of this intellectual property for a variety of reasons, including wealth generation, the support of innovation, the creation of a skilled workforce, evidence to support policy making, and improved health outcomes. That is, the government invests in research to support outcomes, not to generate IP per se. Thus the appropriate debate is not over the final disposition of the IP itself, but over how best to support the services that take that IP and generate the outcomes desired by government and the wider community.

A focus on services greatly clarifies the debate and offers a promise of resolution that can support the interests of all stakeholders. It will allow us to identify what the required services are, as well as how they differ across different disciplines and for different forms of IP. It will provide a framework in which we can discuss how to provide a sustainable market in which service providers are paid a fair price for their contribution.

If we focus on the final disposition of IP, it will be easy to create a situation in which we argue about who made what contribution, and the IP is either divided to the point where it is useless, or concentrated in places where it never actually gets exploited. If instead we focus on the delivery of services that support the generation of outcomes, we will have a framework that recognises the full range of contributions to the scholarly communications process, allows us to optimise that process on a case by case basis, and ultimately forces us to focus on ensuring that the public investment in research is optimally directed to what it is intended to achieve: making the U.S. more economically successful and a better place to live.

Response

(1) Are there steps that agencies could take to grow existing and new markets related to the access and analysis of peer-reviewed publications that result from federally funded scientific research? How can policies for archiving publications and making them publically accessible be used to grow the economy and improve the productivity of the scientific enterprise? What are the relative costs and benefits of such policies? What type of access to these publications is required to maximize U.S. economic growth and improve the productivity of the American scientific enterprise? 

1 a) New markets for traditional peer reviewed publications

There are two broad forms of new market that can be identified for peer reviewed publications resulting from federally funded scientific research. The first of these is “new” markets for the traditionally published paper. There is massive and demonstrated demand from the general public for access to peer reviewed papers, particularly for access to medical research. A second crucial market for traditional papers is small and medium enterprise. The U.S. has a grand tradition of the small scale technical entrepreneur. In the modern world these entrepreneurs require up to date information on the latest research to be competitive. Estimates of the loss to the U.S. economy from the current lack of comprehensive access to peer reviewed papers by SMEs are around US$16 B (http://osc.hul.harvard.edu/stprfiresponsejanuary-2012).

Education at all levels, from primary through postgraduate, can also benefit from access to current research, and effective training of a modern skilled workforce depends on that training being up to date. I am not aware of any estimates of the national costs of deficiencies in education that result from a lack of access to current research, but an investigation of these costs would be worthwhile.

The incremental cost of providing immediate access to peer reviewed research communications upon publication is zero: once the sunk costs involved in preparation and peer review have been covered, making a publication more widely available costs nothing more. The infrastructure to serve this content already exists, both in the form of journal websites and of other repositories. The question is therefore how to create a sustainable market in which the services required to produce peer reviewed papers can be supported.

Open Access publishers, such as the Public Library of Science (PLoS) and BioMed Central (BMC), have demonstrated that it is financially viable to make peer reviewed research freely available by charging for the service of publication up front. The charges levied by PLoS and BMC are in fact less than those charged by subscription based publishers for vastly inferior “public access” services. For instance, the American Chemical Society charges up to $3500 for authors to obtain the right to place a copy of a paper in an institutional or disciplinary repository, but limits the rights to commercial use (including, for instance, use in research by a biotechnology startup, or for teaching in an institution which charges fees). By contrast, the charge made by PLoS for publication in PLoS ONE is $1350. This covers the services of peer review, publication, and archival, and places the final peer reviewed and typeset version of the paper on the web for use by any person or organisation for any purpose, thus maximising the potential for that research to reach the people who can use it to generate specific outcomes.

Again, the debate over where the IP is finally located, in which a publicly funded author has to purchase a limited right to use their own work, having donated their copyright to the publisher, is ultimately sterile. The debate should be focussed on the provision of publication services, the best mechanisms for paying for those services and ensuring a competitive market, and the value for money that is provided for the public investment. It is noteworthy in this context that a number of new entrants to this market, who have essentially copied the PLoS ONE model, are charging exactly the same fee, suggesting that there is still not a fully functional market and that there is a significant margin for costs to be reduced further.

1b) New service based markets for the generation of new forms of research outputs

A second set of markets is opened up when the focus shifts from IP to services. The current debate has been largely limited to a single form of output: the peer reviewed paper. However, when we consider the problem from the angle of what services are required to ensure that the public investment in research generates the maximum possible outcomes, we can see that new forms of service will be required. These include, but are not limited to, data publication and archival, summarisation and current awareness services, integration and aggregation services, and translation and secondary publication services.

The current focus on the ownership of IP for a narrow subset of possible forms of research communication is actively preventing experimentation and development of entirely new services and markets. Given the technical expertise contained within the U.S. these are markets where U.S. companies could be expected to take a lead. However the cost of entry to these markets, and the cost of development and experimentation, are made artificially high by uncertainty around the rights to re-use scholarly material. It is instructive that almost all innovation in this space is based on publicly accessible and re-usable resources such as PubMed, articles from Open Access journals, and freely available research data archives online. The federal government could support a flowering of commercial innovation in this space by signalling that it was concerned with creating markets for services that would support the effective, appropriate, and cost effective dissemination and accessibility of the full range of research outputs.

(2) What specific steps can be taken to protect the intellectual property interests of publishers, scientists, Federal agencies, and other stakeholders involved with the publication and dissemination of peer-reviewed scholarly publications resulting from federally funded scientific research? Conversely, are there policies that should not be adopted with respect to public access to peer-reviewed scholarly publications so as not to undermine any intellectual property rights of publishers, scientists, Federal agencies, and other stakeholders. 

Again, I wish to emphasise that the focus on intellectual property is not helpful here. It is crucial that all service providers, including publishers, research institutions, and researchers themselves, receive appropriate recompense for their contributions, intellectual and otherwise, and that we create markets that support sustainable business models for the provision of these services, as well as providing competition that ensures a fair price is being paid by the taxpayer and encourages innovation. This is actually entirely separate from the issue of intellectual property, as many of the critical contributions to the process do not generate any intellectual property in the legal sense. Let me illustrate this with an example.

I have gone through the final submitted version, after peer review, of the ten most recent peer reviewed papers on which I was an author. I have examined the text and diagrams of these, which were subsequently accepted for publication in this form, for any intellectual property that was contributed by the publishers during the peer review process. I have found none.

I am not a lawyer, so this does not constitute a legal opinion but in my view the only relevant intellectual property here is copyright. No single word of text, or any element of a diagram was contributed to these documents by the publishers. In some cases small amounts of text were suggested by external peer reviewers and incorporated. However in the fifteen years I have been carrying out peer review I have never signed over the copyright in my comments to a publisher, nor have I been paid for the review of papers, so there is no sense in which the publisher has any rights to text or comments provided by external peer reviewers. The final published versions of these papers do have a small contribution of intellectual property from the publishers, the typesetting and layout in some cases, but these are not relevant to the substance of the research itself.

But my main point is that this argument is ultimately not helpful. The publishers for each of these papers have provided a range of critical services, without which the paper would not have been published, including the infrastructure, management of the peer review process, archival, and deposition with appropriate indexing services. These important services are clearly ones for which a fair price should be paid to the service provider. It is therefore the services that we require to purchase and the most effective and appropriate mechanism by which to purchase them, that should be the point of discussion, not the disposition of intellectual property.

Our focus should therefore be on identifying for the full range of research outputs:

  1. How to ensure that they are accessible to the widest possible range of potential users. This might include maximising rights of re-use, ensuring that the outputs are discoverable by appropriate means, translation, interpretation, and publication in alternative media.
  2. What services are available, or where unavailable are required, to achieve the maximum level of accessibility.
  3. How to work with service providers to identify appropriate business models that will support the provision of the required services, and the development of markets that ensure a fair price is received for those services.
  4. How to tension the desired accessibility against the resources available to purchase the services that provide that access. With limited resources it may be necessary and appropriate to choose, for instance, between paying for peer reviewed publication and generating material targeted at the specific audience most likely to benefit from the research output.

The optimal solution for most of these issues is currently unclear. There is one exception to this rule. Once the costs of preparing and reviewing a research output and making that output available online have been met there is no economic benefit or reduced cost achieved by reducing access to that output. There is no gain in paying the full costs for a service that places an output online but then limits access to that output.

(3) What are the pros and cons of centralized and decentralized approaches to managing public access to peer-reviewed scholarly publications that result from federally funded research in terms of interoperability, search, development of analytic tools, and other scientific and commercial opportunities? Are there reasons why a Federal agency (or agencies) should maintain custody of all published content, and are there ways that the government can ensure long-term stewardship if content is distributed across multiple private sources?

Again, I feel this frames the question the wrong way, focusing on control and ownership of resources rather than on the provision of services that enable discovery and use of research outputs. The question is not whether a distributed or a centralized approach is globally the best; this is likely to differ between disciplines, types of research output, and indeed across national borders. The question is how best to ensure that the outputs of federally funded research are accessible and re-usable for those who could effectively exploit them. This will require a wide range of services focusing on different disciplines and different forms of research, but also, crucially, on different user groups.

The question for government and federal agencies is how best to provide the infrastructure that can support the fullest range of publication, discovery, archival, and integration services. This will inevitably be a mix of services, and of technical and human infrastructure, provided by government, commercial entities, and not-for-profits, some of it centralised and some distributed. Economies of scale mean that it will be more cost effective for some elements to be centralised and done up-front by federal agencies (e.g. long term preservation and archival as undertaken by the Library of Congress), whereas in other cases a patchwork of private service providers will be appropriate (e.g. specialist discovery services for specific communities or interest groups).

Once again, if a service based model is adopted in which a fair price for the costs of providing review and publication services is paid up front, guaranteeing that any interested party can access and re-use the published research output, then government will be free to archive and manage such outputs where appropriate while not interfering with the freedom to act of any other interested public or private stakeholder. This model can provide the greatest flexibility for all stakeholders in the system.

(4) Are there models or new ideas for public-private partnerships that take advantage of existing publisher archives and encourage innovation in accessibility and interoperability, while ensuring long-term stewardship of the results of federally funded research?

There are a range of such models, from ArXiv through relatively traditional publishers like PLoS and BMC to new and emerging forms of low cost publication that disaggregate the traditional role of the scholarly publisher into a menu of services which can be selected from as desired. It is not the place of government, federal agencies, or even scholarly communities to attempt to pick winners at this very early stage of development. Rather, the role of government and federal funding agencies is to make a clear statement of expectations as to the service level expected of the researcher and their institution as a condition of funding, and to provide an appropriate level of resourcing to support the purchase of such services as are required for effective communication of research outputs.

The role of the researcher is to select, on a best efforts basis, the appropriate services required for the effective communication of their research, consistent with the resources available. The role of the funder is to help provide a stable and viable market in the provision of such services that encourages competition, innovation, and the development of new services in response to the needs of an evolving research agenda.

(5) What steps can be taken by Federal agencies, publishers, and/or scholarly and professional societies to encourage interoperable search, discovery, and analysis capacity across disciplines and archives? What are the minimum core metadata for scholarly publications that must be made available to the public to allow such capabilities? How should Federal agencies make certain that such minimum core metadata associated with peer-reviewed publications resulting from federally funded scientific research are publicly available to ensure that these publications can be easily found and linked to Federal science funding?

Standardisation and interoperability remain challenging problems, both technically and politically. Federal agencies should take advice on the adoption of standards when and where those standards have widespread adoption and traction; it is in general unwise for government to select or impose standards that are not already widely adopted. Federal agencies are well placed to provide an overview and, where appropriate, to make “mid-course corrections” that help align the development of otherwise disconnected communities. Funding specific targeted developments to support standards and interoperability is appropriate. Consideration should be given at all times to aligning research standards with standards of wider relevance (e.g. consumer web standards) where appropriate and possible, as these are likely to be better funded; there are, however, risks that the development of such standards can take directions not well suited to the research community.

Standards adopted by federal agencies should be open in the sense of having:

  1. Clear documentation that enables third parties to adhere to and interoperate with the standard.
  2. Working implementations of the standard that can be examined and reverse engineered by interested parties.
  3. Defined and accessible processes for the development and ongoing support of the standard.
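A minimum core metadata record of the kind question (5) asks about might look like the following. The fields shown are an illustrative guess at a plausible minimum (enough to discover the work and link it to its federal funding), not a recommendation or an existing standard.

```python
import json

# An illustrative guess at "minimum core metadata" for a publication:
# enough to discover the work and link it to its federal funding.
# These field names are hypothetical, not a proposed standard.
core_metadata = {
    "title": "An example peer reviewed paper",
    "authors": ["A. Researcher", "B. Colleague"],
    "identifier": "doi:10.9999/example.2",
    "date": "2012-01-01",
    "funder": "Example Federal Agency",
    "award": "FED-0001",
}

# Any aggregator can mechanically check that a record carries the
# fields needed to link the publication back to its funding.
REQUIRED = {"title", "authors", "identifier", "funder", "award"}
assert REQUIRED <= core_metadata.keys()

print(json.dumps(core_metadata, indent=2))
```

Because such a record is trivially machine-checkable, compliance with a minimum-metadata requirement need impose almost no burden on researchers or agencies.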

(6) How can Federal agencies that fund science maximize the benefit of public access policies to U.S. taxpayers, and their investment in the peer-reviewed literature, while minimizing burden and costs for stakeholders, including awardee institutions, scientists, publishers, Federal agencies, and libraries?

Federal agencies, consistent with the Paperwork Reduction Act and guidance from the Office of Management and Budget, should adopt a “write once, use many” approach. That is, where possible, the reporting burden for federally funded research should be discharged once, by the researcher, through the communication of each research output. This means in turn that the services purchased for the communication of that research should be sufficient to provide for any downstream use of that communication that does not involve a marginal cost.

Thus, for instance, researchers should not be expected to write two independent documents, the peer reviewed paper and a further public report, to support public access policies. Reporting on the outcomes of federally funded research should depend, as far as possible, on existing communications. The providers of publication services should be encouraged to remove or modify existing restrictions that limit the accessibility of published research outputs, including, for instance, length limitations, limitations on the use of links to background information, and unnecessary use of highly technical language. Service providers should be explicitly judged on the accessibility of the products generated through their services to a wide range of potential audiences and users.

(7) Besides scholarly journal articles, should other types of peer-reviewed publications resulting from federally funded research, such as book chapters and conference proceedings, be covered by these public access policies?

Yes. All research outputs should be covered by coherent federal policies that focus on ensuring that global outcomes of the public investment in research are maximised. The focus purely on research articles is damaging and limiting to the development of effective communication and thus exploitation. 

(8) What is the appropriate embargo period after publication before the public is granted free access to the full content of peer-reviewed scholarly publications resulting from federally funded research? Please describe the empirical basis for the recommended embargo period. Analyses that weigh public and private benefits and account for external market factors, such as competition, price changes, library budgets, and other factors, will be particularly useful. Are there evidence-based arguments that can be made that the delay period should be different for specific disciplines or types of publications?

Once the misleading focus on intellectual property is discarded in favour of a service based analysis, it is clear that there is no justification for an embargo of any length. Embargoes seek to ensure a private gain by creating artificial scarcity, reducing access for a limited period of time. If a fair price is paid for the service of publication, then the publisher has received full recompense in advance of publication, and no further artificial monopoly rights are required. As noted above, the costs of providing such services are no higher than those currently paid through subscriptions; with appropriate competition they might indeed become lower.

From the perspective of exploiting the public investment in research embargoes are also not justifiable. Technical exploitation, commercial development, and the saving of lives all depend on having the best and most up to date information to hand. Once a decision has been taken to publish a specific research result it is crucial that all of those who could benefit have access, whether they are private citizens with sick family members, small business owners and entrepreneurs, not-for-profit community support organisations, or major businesses.

Given the current environment of intellectual property law it may be appropriate under some circumstances for the researcher or their institution to delay publication to ensure that the research will be fully exploited. However there is no benefit to either the researcher, their institution, or the federal funding agency in reducing access once the research is published. Further it is clear that reducing access, whether to specific domains, communities, or for specific times, cannot improve the opportunities for exploitation of the research. It can only reduce them.

Conclusion

To conclude, to focus on the final disposition of intellectual property arising from the authoring of research outputs relating to federally funded research is to continue a sterile and non-productive discussion. Given that the federal government funds research, and provides its agencies with a mandate to support research through direct funding to research institutions, it is incumbent upon government, federal agencies, and the recipients of that funding to ensure that research communication is carried out in such a way that it optimally supports the exploitation and the generation of outcomes from that research.

To achieve this it is necessary to purchase services that support effective communication. These services have traditionally been provided by scholarly publishers and it is right and proper that they continue to receive a fair price for those services. The productive discussion is therefore how to develop markets in these services so that service providers are viable and sustainable, and so that there is sufficient competition to prevent price inflation and encourage innovation. That such services can be economically provided through a direct publication service model, where the full costs of review and publication are charged at the point of publication, has been demonstrated by the success of PLoS and BioMedCentral.

However this is just a starting point. A fully functional market will encourage the development of a wide range of competitive services that will enable researchers to select the most cost effective way of communicating and disseminating their research, ensuring that it reaches the widest possible audience and is fully exploited. This in turn will enable federal agencies to support research, and its communication, in a way that ensures the public investment delivers the greatest benefit to the U.S., its citizens, and its economy.


Response to the RFI on Public Access to Research Communications

Have you written your response to the OSTP RFIs yet? If not why not? This is amongst the best opportunities in years to directly tell the U.S. government how important Open Access to scientific publications is and how to start moving to a much more data centric research process. You’d better believe that the forces of stasis, inertia, and vested interests are getting their responses in. They need to be answered.

I’ve written mine on public access and you can read and comment on it here. I will submit it tomorrow, just ahead of the deadline, but in the meantime any comments are welcome. It expands on and discusses many of the same issues, specifically the re-configuring of the debate on access away from IP and towards services, that have run through my recent posts on the Research Works Act.


IP Contributions to Scientific Papers by Publishers: An open letter to Reps. Maloney and Issa

Dear Representatives Maloney and Issa,

I am writing to commend your strong commitment to the recognition of intellectual property contributions to research communication. As we move to a modern knowledge economy, supported by the technical capacity of the internet, it is crucial that we have clarity on the ownership of intellectual property arising from the federal investment in research. For the knowledge economy to work effectively it is crucial that all players receive fair recompense for the contribution of intellectual property that they make and the services that they provide.

As a researcher I like to base my work on solid data, so I thought it might interest you to have some quantitation of the level of contribution of IP that publishers make to the substance of scientific papers. In this I have focussed on the final submitted version of papers, after peer review, as this is the version around which the discussion of mandates for deposition in repositories revolves. This also has the advantage of separating the typesetting and copyright in layout, clearly the property of the publishers, from the intellectual substance of the research.

Contribution of IP to the final (post peer review) submitted versions of papers

Methodology: I examined the final submitted version (i.e. the version accepted for publication) of the ten most recent research papers on which I was an author, along with the referee and editorial comments received from the publisher. For each paper I examined the text of the final submitted version and the diagrams and figures. As the only IP of significance in this case is copyright, the specific contributions searched for were text or elements of figures contributed by the publisher that satisfied the requirements for obtaining copyright. Figures that were re-used from other publications (where the copyright had been transferred to the other publisher and permission had been obtained to republish) were not included, as these were considered “old IP” that did not relate to new IP embodied in the specific paper under consideration. The text and figures were searched for specific creative contributions from the publisher and these were quantified for each paper.

Results: The contribution of IP by publishers to the final submitted versions of these ten papers, after peer review had been completed, was zero. Zip. Nada. Zilch. Not one single word, line, or graphical element was contributed by the publisher or the editor acting as their agent. A small number of single words, or forms of expression, were found that were contributed by external peer reviewers. However as these peer reviewers do not sign over copyright to the publisher and are not paid this contribution cannot be considered work for hire and any copyright resides with the original reviewers.

Limitations: This is a small and arguably biased study based on the publications I have to hand. I recommend that other researchers examine their own oeuvre and publish similar analyses so that effects of discipline, age, and venue of publication can be examined. Following such analysis I ask that researchers provide the data via twitter using the hashtag #publisheripcontrib where I will aggregate it and republish.

Data availability: I regret that the original submissions cannot be provided as the copyright in these articles was transferred after acceptance for publication to the publishers. I cannot provide the editorial reports as these contain material from the publishers for which I do not have re-distribution rights.

The IP argument is sterile and unproductive. We need to discuss services.

The analysis above at its core shows how unhelpful framing this argument around IP is. The fact that publishers do not contribute IP is really not relevant. Publishers do contribute services that are crucial for the current system of research dissemination via peer reviewed papers: the provision of infrastructure, the management of the peer review process, and dissemination and indexing. Without these services papers would not be published, and it is therefore clear that these services have to be paid for. What we should be discussing is how best to pay for those services, how to create a sustainable market place in which they can be offered, and what level of service the federal government expects in exchange for the services it is buying.

There is a problem with this. We currently pay for these services in a convoluted fashion which is the result of historical developments. Rather than pay up front for publication services, we currently give away the intellectual property in our papers in exchange for publication. The U.S. federal and state governments then pay for these publication services indirectly, by funding libraries to rent back access to our own work. This model made sense when the papers were physically on paper; distribution, aggregation, and printing were major components of the cost. In that world a demand side business model worked well and was appropriate.

In the current world the costs of dissemination and provision of access are as near to zero as makes no difference. The major costs are in the peer review process and in preparing the paper in a version that can be made accessible online. That is, we have moved from a world where the incremental cost of dissemination of each copy was dominant, to a world where the first copy costs are dominant and the incremental costs of dissemination after those first copy costs are negligible. We must therefore be clear that we are paying for the important services required to generate that first web accessible copy, not subsidising unnecessary incremental costs. A functioning market requires, as discussed above, that we have clarity on what is being paid for.

In a service based model the whole issue of IP simply goes away. It is clear that the service we would wish to pay for is one in which we generate a research communication product which provides appropriate levels of quality assurance and is as widely accessible and available for any form of use as possible. This ensures that the outputs of the most recent research are available to other researchers, to members of the public, to patients, to doctors, to entrepreneurs and technical innovators, and not least to elected representatives to support informed policy making and legislation. In a service based world there is no logic in artificially reducing access because we pay for the service of publication and the full first copy costs are covered by the purchase of that service.

Thus when we abandon the limited and sterile argument about intellectual property and move to a discussion around service provision we can move from an argument where no-one can win to a framework in which all players are suitably recompensed for their efforts and contributions, whether or not those contributions generate IP in the legal sense, and at the same time we can optimise the potential for the public investment in research to be fully exploited.

HR3699 prohibits federal agencies from supporting publishers to move to a transparent service based model

The most effective means of moving to a service based business model would be for U.S. federal agencies, as the major funders of global research, to work with publishers to assure them that money will be available for the support of publication services for federally funded researchers. This will require some money to be put aside. The UK’s Wellcome Trust estimates that it expects to spend approximately 1.5% of total research funding on publication services. This is a significant sum, but not an overly large proportion of the whole. It should also be remembered that governments, federal and state, are already paying these costs indirectly through overheads charges and direct support to research institutions via educational and regional grants. While there will be additional centralised expenditure over the transitional period, in the longer term this is at worst a zero-sum game. Publishers are currently viable, indeed highly profitable. In the first instance service prices can be set so that the same total sum of money flows to them.

The challenge is the transitional period. The best way to manage this would be for federal agencies to be able to guarantee to publishers that their funded researchers would be moving to the new system over a defined time frame. The most straightforward way to do this would be for the agencies to have a published program over a number of years through which the publication of research outputs via the purchase of appropriate services would be made mandatory. This could also provide confidence to the publishers by defining the service level agreements that the federal agencies would require, and guarantee a predictable income stream over the course of the transition.

This would require agencies working with publishers and their research communities to define the timeframes, guarantees, and service level agreements that would be put in place. It would require mandates from the federal agencies as the main guarantor of that process. The Research Works Act prohibits any such process. In doing so it actively prevents publishers from moving towards business models that are appropriate for today’s world. It will stifle innovation and deter new entrants to the market by creating uncertainty and continuing the current obfuscation of first copy costs with dissemination costs. In doing so it will damage the very publishers that support it, by legislatively sustaining an out of date business model that is no longer fit for purpose.

Like General Motors, or perhaps more analogously Lehman Brothers, the incumbent publishers are trapped in a business model that cannot be sustained in the long term. The problem for publishers is that their business model is predicated on charging for the dissemination and access costs that are disappearing, while not explicitly charging for the costs that really matter. Hiding the cost of one thing in a charge for another is never a good long term business strategy. HR3699 will simply prop them up for a little longer, ultimately leading to a bigger crash when it comes. The alternative is a managed transition to a better set of business models which can simultaneously provide a better return on investment for the taxpayer.

We recognise the importance of the services that scholarly publishers provide. We want to pay publishers for the services they provide because we want those services to continue to be available and to improve over time. Help us to help them make that change. Drop the Research Works Act.

Yours sincerely

Cameron Neylon
