Science Commons Symposium – Redmond 20th February


One of the great things about being invited to speak that people don’t often emphasise is that it gives you space and time to hear other people speak. And sometimes someone puts together a programme that means you just have to shift the rest of the world around to make sure you can get there. Lisa Green and Hope Leman have put together the biggest concentration of speakers in the Open Science space that I think I have ever seen for the Science Commons Symposium – Pacific Northwest to be held on the Microsoft Campus in Redmond on 20 February. If you are in the Seattle area and have an interest in the future of science, whether pro- or anti- the “open” movement, or just want to hear some great talks you should be there. If you can’t be there then watch out for the video stream.

Along with me you’ll get Jean-Claude Bradley, Antony Williams, Peter Murray-Rust, Heather Joseph, Stephen Friend, Peter Binfield, and John Wilbanks. Everything from policy to publication, software development to bench work, and from capturing the work of a single researcher to the challenges of placing several hundred million dollars’ worth of drug discovery data into the public domain. All with a focus on how we make more science available and generate more, and more innovative, science. Not to be missed, in person or online – and if that sounds too much like self promotion then feel free to miss the first talk… ;-)


Peer review: What is it good for?


It hasn’t been a really good week for peer review. In the same week that the Lancet fully retracted the original Wakefield MMR article (while keeping the retraction behind a login screen – way to go there on public understanding of science), the mainstream media went to town on the report of an open letter from 14 stem cell scientists claiming that peer review in that area was being dominated by a small group of people blocking the publication of innovative work. I don’t have the information to actually comment on the substance of either issue but I do want to reflect on what this tells us about the state of peer review.

There remains much reverence for the traditional process of peer review. I may be over-interpreting the tenor of Andrew Morrison’s editorial in BioEssays but it seems to me that he is saying, as many others have over the years, “if we could just have the rigour of traditional peer review with the ease of publication of the web then all our problems would be solved”. Scientists worship at the altar of peer review, and I use that metaphor deliberately because it is rarely if ever questioned. Somehow the process of peer review is supposed to sprinkle some sort of magical dust over a text which makes it “scientific” or “worthy”, yet while we quibble over details of managing the process, or complain that we don’t get paid for it, rarely is the fundamental basis on which we decide whether science is formally published examined in detail.

There is a good reason for this. THE EMPEROR HAS NO CLOTHES! [sorry, had to get that off my chest]. The evidence that peer review as traditionally practiced is of any value at all is equivocal at best (Science 214, 881; 1981, J Clinical Epidemiology 50, 1189; 1998, Brain 123, 1954; 2000, Learned Publishing 22, 117; 2009). It’s not even really negative. That would at least be useful. There are a few studies that suggest peer review is somewhat better than rolling dice and a bunch that say it is much the same. Perhaps the best we might say is that it is at its best when dealing with narrow technical questions and at its worst when determining “importance”. Which, for anyone who has tried to get published in a top journal or written a grant proposal, ought to be deeply troubling. Professional editorial decisions may in fact be more reliable, something that Philip Campbell hints at in his response to questions about the open letter [BBC article]:

Our editors […] have always used their own judgement in what we publish. We have not infrequently overruled two or even three sceptical referees and published a paper.

But there is perhaps an even more important procedural issue around peer review. Whatever value it might have we largely throw away. Few journals make referees’ reports available, and virtually none track the changes made in response to referees’ comments in a way that would enable a reader to make their own judgement as to whether a paper was improved or made worse. Referees get no public credit for good work, and no public opprobrium for poor or even malicious work. And in most cases a paper rejected from one journal starts completely afresh when submitted to a new journal, the work of the previous referees simply thrown out of the window.

Much of the commentary around the open letter has suggested that the peer review process should be made public. But only for published papers. This goes nowhere near far enough. One of the key points where we lose value is in the transfer from one journal to another. The authors lose out because they’ve lost their priority date (in the worst case giving malicious referees the chance to get their paper in first). The referees miss out because their work is rendered worthless. Even the journals are losing an opportunity to demonstrate the high standards they apply in terms of quality and rigour – and indeed the high expectations they have of their referees.

We never ask what the cost of not publishing a paper is or what the cost of delaying publication could be. Eric Weinstein provides the most sophisticated view of this that I have come across and I recommend watching his talk at Science in the 21st Century from a few years back. There is a direct cost to rejecting papers, both in the time of referees and the time of editors, as well as the time required for authors to reformat and resubmit. But the bigger problem is the opportunity cost – how much work that might have been useful, or even important, is never published? And how much is research held back by delays in publication? How many follow-up studies not done, how many leads not followed up, and perhaps most importantly how many projects not re-funded, or only funded once the carefully built up expertise in the form of research workers is lost?

Rejecting a paper is like gambling in a game where you can only win. There are no real downside risks for either editors or referees in rejecting papers. There are downsides, as described above, and those carry real costs, but those are never borne by the people who make or contribute to the decision. It’s as though it were a futures market where you can only lose if you go long, never if you go short on a stock. In Eric’s terminology those costs need to be carried; we need to require that referees and editors who “go short” on a paper or grant are required to unwind their position if they get it wrong. This is the only way we can price the downside risks into the process. If we want open peer review, indeed if we want peer review in its traditional form, along with the caveats, costs and problems, then the most important advance would be to have it for unpublished papers.

Journals need to acknowledge the papers they’ve rejected, along with dates of submission. Ideally all referees’ reports should be made public, or at least re-usable by the authors. If full publication of either the submitted form of the paper or the referees’ reports is not acceptable, then journals could publish a hash of the submitted document and reports against a local key, enabling the authors to demonstrate the submission date and the provenance of referees’ reports as they take them to another journal.
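
To make the hashing idea a little more concrete, here is a minimal sketch in Python of how such a scheme might work. It is not a proposal for any particular journal system; the key name and record format are purely illustrative, and the standard library hmac and hashlib modules do the work.

    import datetime
    import hashlib
    import hmac

    JOURNAL_KEY = b"journal-local-secret"  # hypothetical key held privately by the journal

    def submission_receipt(manuscript_bytes, report_bytes):
        # Combine the submitted manuscript and the referee reports into one record
        record = manuscript_bytes + b"\n---\n" + report_bytes
        digest = hmac.new(JOURNAL_KEY, record, hashlib.sha256).hexdigest()
        # The journal publishes only the digest and the date, never the documents themselves
        return {"date": datetime.date.today().isoformat(), "digest": digest}

    def verify(manuscript_bytes, report_bytes, published_digest):
        # Later, the journal (or anyone to whom it discloses the key) can confirm that
        # these documents are the ones lodged on the published date
        record = manuscript_bytes + b"\n---\n" + report_bytes
        candidate = hmac.new(JOURNAL_KEY, record, hashlib.sha256).hexdigest()
        return hmac.compare_digest(candidate, published_digest)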

In my view referees need to be held accountable for the quality of their work. If we value this work we should also value and publicly laud good examples. And conversely poor work should be criticised. Any scientist has received reviews that are, if not malicious, then incompetent. And even if we struggle to admit it to others, we can usually tell the difference between criticism that is constructive (if sometimes brutal) and criticism that is nonsense. Most of us would even admit that we don’t always do as good a job as we would like. After all, why should we work hard at it? No credit, no consequences, why would you bother? It might be argued that if you put poor work in you can’t expect good work back out when your own papers and grants get refereed. This again may be true, but only in the long run, and only if there are active and public pressures to raise quality. None of which I have seen.

Traditional peer review is hideously expensive. And currently there is little or no pressure on its contributors or managers to provide good value for money. It is also unsustainable at its current level. My solution to this is to radically cut the number of peer-reviewed papers, probably by 90-95%, leaving the rest to be published as either pure data or pre-prints. But the whole industry is addicted to traditional peer-reviewed publications, from the funders who can’t quite figure out how else to measure research outputs, to the researchers and their institutions who need them for promotion, to the publishers (both OA and toll access) and metrics providers who both feed the addiction and feed off it.

So that leaves those who hold the purse strings, the funders, with a responsibility to pursue a value for money agenda. A good place to start would be a serious critical analysis of the costs and benefits of peer review.

Addition after the fact: It was pointed out in the comments that there are other posts/papers I should have referred to where people have raised similar ideas and issues. In particular Martin Fenner’s post at Nature Network. The comments are particularly good as an expert analysis of the usefulness of the kind of “value for money” critique I have made. Also a paper on the arXiv from Stefano Allesina. Feel free to mention others and I will add them here.


Everything I know about software design I learned from Greg Wilson – and so should your students


Which is not to say that I am any good at software engineering, good practice, or writing decent code. And you shouldn’t take Greg to task for some of the dodgy demos I’ve done over the past few months either. What he does need to take the credit for is enabling me to go from someone who knew nothing at all about software design, the management of software development, or testing, to being able to talk about these things, ask some of the right questions, and even begin to make some of my own judgements about code quality in an amazingly short period of time. From someone who didn’t know how to execute a Python script to someone who feels uncomfortable working with services where I can’t use a testing framework before deploying software.

This was possible through the online component of the training programme, called Software Carpentry, that Greg has been building, delivering and developing over the past decade. This isn’t a course in software engineering and it isn’t built for computer science undergraduates. It is a course focussed on taking scientists who have done a little bit of tinkering or scripting and giving them the tools, the literacy, and the knowledge to apply the best of the software engineering knowledge base to building useful, high quality code that solves their problems.

Code and computational quality has never been a priority in science and there is a strong argument that we are currently paying, and will continue to pay, a heavy price for that unless we sort out the fundamentals of computational literacy and practices as these tools become ubiquitous across the whole spread of scientific disciplines. We teach people how to write up an experiment, but we don’t teach them how to document code. We teach people the importance of significant figures but many computational scientists have never even heard of version control. And we teach the importance of proper experimental controls but never provide the basic training in testing and validating software.

Greg is seeking support to enable him to update Software Carpentry to provide an online resource for the effective training of scientists in basic computational literacy. It won’t cost very much money; we’re talking a few hundred thousand dollars here. And the impact is potentially both important and large. If you care about the training of computational scientists (not computer scientists, but the people who need, or could benefit from, some coding, data management, or processing in their day to day scientific work) and you have money, then I encourage you to contribute. If you know people or organizations with money please encourage them to contribute. Like everything important, especially anything to do with education and preparing for the future, these things are tough to fund.

You can find Greg at his blog: http://pyre.third-bit.com

His description of what he wants to do and what he needs to do it is at: http://pyre.third-bit.com/blog/archives/3400.html


Why I am disappointed with Nature Communications

Towards the end of last year I wrote up some initial reactions to the announcement of Nature Communications and the communications team at NPG were kind enough to do a Q&A to look at some of the issues and concerns I raised. Specifically I was concerned about two things. The licence that would be used for the “Open Access” option and the way that journal would be positioned in terms of “quality”, particularly as it related to the other NPG journals and the approach to peer review.

Unfortunately I have to say that I feel these have been fudged, and this is unfortunate because there was a real opportunity here to do something different and quite exciting.  I get the impression that that may even have been the original intention. But from my perspective what has resulted is a poor compromise between my hopes and commercial concerns.

At the centre of my problem is the use of a Creative Commons Attribution Non-commercial licence for the “Open Access” option. This doesn’t qualify under the BBB declarations on Open Access publication and it doesn’t qualify for the SPARC seal for Open Access. But does this really matter or is it just a side issue for a bunch of hard core zealots? After all, if people can see it that’s a good start isn’t it? Well yes, it is a good start, but non-commercial terms raise serious problems. Putting aside the fact that there is an argument that universities are commercial entities and therefore can’t legitimately use content with non-commercial licences, the problem is that NC terms limit the ability of people to create new business models that re-use content and are capable of scaling.

We need these business models because the current model of scholarly publication is simply unaffordable. The argument is often made that if you are unsure whether you are allowed to use content then you can just ask, but this simply doesn’t scale. And let’s be clear about some of the things that NC means you’re not licensed for: using a paper for commercially funded research even within a university, using the content of a paper to support a grant application, using the paper to judge a patent application, using a paper to assess the viability of a business idea…the list goes on and on. Yes you can ask if you’re not sure, but asking each and every time does not scale. This is the central point of the BBB declarations. For scientific communication to scale it must allow the free movement and re-use of content.

Now if this were coming from any old toll access publisher I would just roll my eyes and move on, but NPG sets itself up to be judged by a higher standard. NPG is a privately held company, not beholden to shareholders. It is a company that states that it is committed to advancing scientific communication, not simply traditional publication. Non-commercial licences do not do this. From the Q&A:

Q: Would you accept that a CC-BY-NC(ND) licence does not qualify as Open Access under the terms of the Budapest and Bethesda Declarations because it limits the fields and types of re-use?

A: Yes, we do accept that. But we believe that we are offering authors and their funders the choices they require. Our licensing terms enable authors to comply with, or exceed, the public access mandates of all major funders.

NPG is offering the minimum that allows compliance. Not what will most effectively advance scientific communication. Again, I would expect this of a shareholder-controlled profit-driven toll access dead tree publisher but I am holding NPG to a higher standard. Even so there is a legitimate argument to be made that non-commercial licences are needed to make sure that NPG can continue to support these and other activities. This is why I asked in the Q&A whether NPG made significant money off re-licensing of content for commercial purposes. This is a discussion we could have on the substance – the balance between a commercial entity providing a valuable service and the necessary limitations we might accept as the price of ensuring the continued provision of that service. It is a value for money judgement. But not one we can make without a clear view of the costs and benefits.

So I’m calling NPG on this one. Make a case for why non-commercial licences are necessary or even beneficial, not why they are acceptable. They damage scientific communication, they create unnecessary confusion about rights, and more importantly they damage the development of new business models to support scientific communication. Explain why it is commercially necessary for the development of these new activities, or roll it back, and take a lead on driving the development of science communication forward. Don’t take the kind of small steps we expect from other, more traditional, publishers. Above all, let’s have that discussion. What is the price we would have to pay to change the licence terms?

Because I think it goes deeper. I think that NPG are actually limiting their potential income by focussing on the protection of their income from legacy forms of commercial re-use. They could make more money off this content by growing the pie than by protecting their piece of a specific income stream. It goes to the heart of a misunderstanding about how to effectively exploit content on the web. There is money to be made through re-packaging content for new purposes. The content is obviously key but the real value offering is the Nature brand. Which is much better protected as a trademark than through licensing. Others could re-package and sell on the content but they can never put the Nature brand on it.

By making the material available for commercial re-use NPG would help to expand a high value market for re-packaged content which they would be poised to dominate. Sure, if you’re a business you could print off your OA Nature articles and put them on the coffee table, but if you want to present them to investors you want that Nature logo and Nature packaging that you can only get from one place. And that NPG does damn well. NPG often makes the case that it adds value through selection, presentation, and aggregation. It is the editorial brand that is of value. Let’s see that demonstrated through monetization of the brand, rather than through unnecessarily restricting the re-use of the content, especially where authors are being charged $5,000 to cover the editorial costs.


New Year – New me

Apologies for any weirdness in your feed readers. The following is the reason why, as I try to get things working properly again.

For the past two years on this blog I have made some New Year’s resolutions, and last year I assessed my performance against the previous year’s aims. This year I will admit to simply being a bit depressed about how much I achieved in real terms and how effective I’ve been at getting ideas out and projects off the ground. This year I want to do more in terms of walking the walk, creating examples, or at least lash-ups, of the things I think are important.

One thing that has been going around in my head for at least 12 months is the question of identity. How I control what I present, who I depend on, and, in a semantic web where I am represented by a URL, what should actually be there when someone goes to that address. So the positive thing I did over the holiday break, rather than write a new set of resolutions, was to start setting up my own presence on the web, to think about what I might want to put there and what it might look like.

This process is not as far along as I would like but it’s far enough along that this will be the last post at this address. OpenWetWare has been an amazing resource for me over the past several years and we will continue to use the wiki for laboratory information and I hope to work with the team in whatever way I can as the next generation of tools develops. OpenWetWare was also a safe place where I could learn about blogging without worrying about the mechanics, confident in the knowledge that Bill Flanagan was covering the backstops. Bill is the person who has kept things running through the various technical ups and downs and I’d particularly like to thank him for all his help.

However I have now learnt enough to be dangerous and want to try some more things out on my own. More than can be conveniently managed on a website that someone else has to look after. I will write a bit more about the idea and choices I’ve made in setting up the site soon but for the moment I just want to point you to the new site and offer you some choices about subscribing to different feeds.

If you are on the feedburner feed for the blog you should be automatically transferred over to the feed on the new site. If you’re reading in a feed reader you can check this by just clicking through to the item on my site. If you end up at a URL starting https://cameronneylon.net/ then you are in the right place. If not, just change your reader to point at http://feeds.feedburner.com/ScienceInTheOpen.

This feed will include posts on things like papers and presentations as well as blog posts, so if you are already getting that content in another stream and prefer to just get the blog posts via RSS you should point your reader at http://feeds.feedburner.com/ScienceInTheOpen_blog. I can’t test this until I actually post something so just hold tight if it doesn’t work and I will try to get it working as soon as I can. The comments feed, for all seven of you subscribed to it, should keep working. All the posts are mirrored on the new site and will continue to be available at OpenWetWare.

Once again I’d like to thank all the people at OpenWetWare that got me going in the blogging game and hope to see you over at the new site as I figure out what it means to present yourself as a scientist on the web.


What should social software for science look like?

Nat Torkington, picking up on my post over the weekend about the CRU emails, takes a slant which has helped me figure out how to write this post, which I was struggling with. He says:

[from my post...my concern is that in a kneejerk response to suddenly make things available no-one will think to put in place the social and technical infrastructure that we need to support positive engagement, and to protect active researchers, both professional and amateur from time-wasters.] Sounds like an open science call for social software, though I’m not convinced it’s that easy. Humans can’t distinguish revolutionaries from terrorists, it’s unclear why we think computers should be able to.

As I responded over at Radar, yes I am absolutely calling for social software for scientists, but I didn’t mean to say that we could expect it to help us find the visionaries amongst the simply wrong. But this raises a very helpful question. What is it that we would hope Social Software for Science would do? And is that realistic?

Over the past twelve months I seem to have got something of a reputation for being a grumpy old man about these things, because I am deeply sceptical of most of the offerings out there. Partly because most of these services don’t actually know what it is they are trying to do, or how it maps on to the success stories of the social web. So prompted by Nat I would like to propose a list of what effective Social Software for Science (SS4S) will do and what it can’t.

  1. SS4S will promote engagement with online scientific objects and through this encourage, and provide paths for, those with enthusiasm but insufficient expertise to gain the expertise needed to contribute effectively (see e.g. Galaxy Zoo). This includes but is certainly not limited to collaborations between professional scientists. These are merely a special case of the general.
  2. SS4S will measure and reward positive contributions, including constructive criticism and disagreement (Stack Overflow vs YouTube comments). Ideally such measures will value quality of contribution rather than opinion, allowing disagreement to be both supported when required and resolved when appropriate.
  3. SS4S will provide single click through access to available online scientific objects and make it easy to bring references to those objects into the user’s personal space or stream (see e.g. Friendfeed “Like” button)
  4. SS4S should provide zero effort upload paths to make scientific objects available online while simultaneously assuring users that this upload and the objects are always under their control. This will mean in many cases that what is being pushed to the SS4S system is a reference not the object itself, but will sometimes be the object to provide ease of use. The distinction will ideally be invisible to the user in practice barring some initial setup (see e.g. use of Posterous as a marshalling yard).
  5. SS4S will make it easy for users to connect with other users and build networks based on a shared interest in specific research objects (Friendfeed again).
  6. SS4S will help the user exploit that network to collaboratively filter objects of interest to them and of importance to their work. These objects might be results, datasets, ideas, or people.
  7. SS4S will integrate with the user’s existing tools and workflow and enable them to gradually adopt more effective or efficient tools without requiring any severe breaks (see Mendeley/Citeulike/Zotero/Papers and DropBox)
  8. SS4S will work reliably and stably with high performance and low latency.
  9. SS4S will come to where the researcher is working both with respect to new software and also unusual locations and situations requiring mobile, location sensitive, and overlay technologies (Layar, Greasemonkey, voice/gesture recognition – the latter largely prompted by a conversation I had with Peter Murray-Rust some months ago).
  10. SS4S will be trusted and reliable with a strong community belief in its long term stability. No single organization holds or probably even can hold this trust so solutions will almost certainly need to be federated, open source, and supported by an active development community.

What SS4S won’t do is recognize geniuses when they are out in the wilderness amongst a population of the just plain wrong. It won’t solve the cost problems of scientific publication and it won’t turn researchers into agreeable, supportive, and collaborative human beings. Some things are beyond even the power of Web 2.0.

I was originally intending to write this post from a largely negative perspective, ranting as I have in the past about how current services won’t work. I think now there is a much more positive approach. Let’s go out there and look at what has been done, what is being done, and how well it is working in this space. I’ve set up a project on my new wiki (don’t look too closely, I haven’t finished the decorating) and if you are interested in helping out with a survey of what’s out there I would appreciate the help. You should be able to log in with an OpenID as long as you provide an email address. Check out this Friendfeed thread for some context.

My belief is that we are near to a position where we could build a useful requirements document for such a beast, with references to what has worked and what hasn’t. We may not have the resources to build it and maybe the NIH projects currently funded will head in that direction. But what is valuable is to pull the knowledge together to figure out the most effective path forward.

It wasn’t supposed to be this way…

I’ve avoided writing about the Climate Research Unit emails leak for a number of reasons. Firstly it is clearly a sensitive issue with personal ramifications for some and for many others just a very highly charged issue. Probably more importantly I simply haven’t had the time or energy to look into the documents myself. I haven’t, as it were, examined the raw data for myself, only other people’s interpretations. So I’ll try to stick to a very general issue here.

There appear to be broadly two responses from the research community to this saga. One is to close ranks and to a certain extent say “nothing was done wrong here”. This is, at some level, the tack taken by the Nature Editorial of 3 December, which was headed up with “Stolen e-mails have revealed no scientific conspiracy…”. The other response is that the scandal has exposed the shambolic way that we deal with collecting, archiving, and making available both data and analysis in science, as well as the endemic issues around the hoarding of data by those who have collected it.

At one level I belong strongly in the latter camp, but I also appreciate the dismay that must be felt by those who have looked at, and understand, what the emails actually contain, and their complete inability to communicate this into the howling winds of what seems to a large extent a media beat-up. I have long felt that the research community would one day be shocked by the public response when, for whatever reason, the media decided to make a story about the appalling data sharing practices of publicly funded academic researchers like myself. If I’d thought about it more deeply I should have realised that this would most likely be around climate data.

Today the Times reports on its front page that the UK Meteorological Office is to review 160 years of climate data and has asked a range of contributing organisations to allow it to make the data public. The details of this are hazy but if the UK Met Office is really going to make the data public this is a massive shift. I might be expected to be happy about this but I’m actually profoundly depressed. While it might in the longer term lead to more strongly worded and enforced policies it will also lead to data sharing being forever associated with “making the public happy”. My hope has always been that the sharing of the research record would come about because people started to see the benefits, because they could see the possibilities in partnership with the wider community, and that it made their research more effective. Not because the tabloids told us we should.

Collecting the best climate data and doing the best possible analysis on it is not optional. If we get this wrong and don’t act effectively then with some probability that is significantly above zero our world ends. The opportunity is there to make this the biggest, most important, and most effective research project ever undertaken. To actively involve the wider community in measurement. To get an army of open source coders to re-write, audit, and re-factor the analysis software. Even to involve the (positively engaged) sceptics, to use their interest and ability to look for holes and issues. Whether politicians will act on data is not the issue that the research community can or should address; what we need to be clear on is that we provide the best data, the best analysis, and an honest view of the uncertainties. Along with the ability of anyone to critically analyse the basis for those conclusions.

There is a clear and obvious problem with this path. One of the very few credible objections to open research that I have come across is that by making material available you open your inbox to a vast community of people who will just waste your time. The people who can’t be bothered to read the background literature or learn to use the tools; the ones who just want the right answer. This is nowhere more the case than it is with climate research and it forms the basis for the most reasonable explanation of why the CRU (and every other repository of climate data as far as I am aware) have not made more data or analysis software directly available.

There are no simple answers here, and my concern is that in a kneejerk response to suddenly make things available no-one will think to put in place the social and technical infrastructure that we need to support positive engagement, and to protect active researchers, both professional and amateur, from time-wasters. Interestingly I think this infrastructure might look very similar to that which we need to build to effectively share the research we do, and effectively discover the relevant work of others. Infrastructure is never sexy, particularly in the middle of a crisis. But there is one thing in the practice of research that we forget at our peril. Any given researcher needs to earn the right to be taken seriously. No-one ever earns the right to shut people up. Picking out the objection that happens to be important is something we have to at least attempt to build into our systems.

Nature Communications Q&A

A few weeks ago I wrote a post looking at the announcement of Nature Communications, a new journal from Nature Publishing Group that will be online only and have an open access option. Grace Baynes, from the NPG communications team, kindly offered to get some of the questions raised in that piece answered and I am presenting my questions and the answers from NPG here in their complete form. I will leave any thoughts and comments on the answers for another post. There has also been more information from NPG available at the journal website since my original post, some of which is also dealt with below. Below this point, aside from formatting, I have left the responses in their original form.

Q: What is the motivation behind Nature Communications? Where did the impetus to develop this new journal come from?

NPG has always looked to ensure it is serving the scientific community and providing services which address researchers’ changing needs. The motivation behind Nature Communications is to provide authors with more choice; both in terms of where they publish, and what access model they want for their papers. At present NPG does not provide a rapid publishing opportunity for authors with high-quality specialist work within the Nature branded titles. The launch of Nature Communications aims to address that editorial need. Further, Nature Communications provides authors with a publication choice for high quality work, which may not have the reach or breadth of work published in Nature and the Nature research journals, or which may not have a home within the existing suite of Nature branded journals. At the same time authors and readers have begun to embrace online only titles – hence we decided to launch Nature Communications as a digital-first journal in order to provide a rapid publication forum which embraces the use of keyword searching and personalisation. Developments in publishing technology, including keyword archiving and personalization options for readers, make a broad scope, online-only journal like Nature Communications truly useful for researchers.

Over the past few years there has also been increasing support by funders for open access, including commitments to cover the costs of open access publication. Therefore, we decided to provide an open access option within Nature Communications for authors who wish to make their articles open access.

Q: What opportunities does NPG see from Open Access? What are the most important threats?

Opportunities: Funder policies shifting towards supporting gold open access, and making funds available to cover the costs of open access APCs. These developments are creating a market for journals that offer an open access option. Threats: That the level of APCs that funders will be prepared to pay will be too low to be sustainable for journals with high quality editorial and high rejection rates.

Q: Would you characterise the Open Access aspects of NC as a central part of the journal strategy

Yes. We see the launch of Nature Communications as a strategic development. Nature Communications will provide a rapid publication venue for authors with high quality work which will be of interest to specialists in their fields. The title will also allow authors to adhere to funding agency requirements by making their papers freely available at point of publication if they wish to do so.

or as an experiment that is made possible by choosing to develop a Nature branded online only journal?

NPG doesn’t view Nature Communications as experimental. We’ve been offering open access options on a number of NPG journals in recent years, and monitoring take-up on these journals. We’ve also been watching developments in the wider industry.

Q: What would you give as the definition of Open Access within NPG?

It’s not really NPG’s focus to define open access. We’re just trying to offer choice to authors and their funders.

Q: NPG has a number of “Open Access” offerings that provide articles free to the user as well as specific articles within Nature itself under a Creative Commons Non-commercial Share-alike licence with the option to authors to add a “no derivative works” clause. Can you explain the rationale behind this choice of licence?

Again, it’s about providing authors with choice within a framework of commercial viability. On all our journals with an open access option, authors can choose between the Creative Commons Attribution Noncommercial Share Alike 3.0 Unported Licence and the Creative Commons Attribution-Non-commercial-No Derivs 3.0 Unported Licence. The only instance where authors are not given a choice at present is genome sequence articles published in Nature and other Nature branded titles, which are published under the Creative Commons Attribution Noncommercial Share Alike 3.0 Unported Licence. No APC is charged for these articles, as NPG considers making these freely available an important service to the research community.

Q: Does NPG recover significant income by charging for access or use of these articles for commercial purposes? What are the costs (if any) of enforcing the non-commercial terms of licences? Does NPG actively seek to enforce those terms?

We’re not trying to prevent derivative works or reuse for academic research purposes (as evidenced by our recent announcement that NPG author manuscripts would be included in UK PMC’s open access subset). What we are trying to keep a cap on is illegal e-prints and reprints where companies may be using our brands or our content to their benefit. Yes we do enforce these terms, and we have commercial licensing and reprints services available.

Q: What will the licence be for NC?

Authors who opt for the open access option can choose either the Creative Commons Attribution Noncommercial Share Alike 3.0 Unported Licence or the Creative Commons Attribution-Non-commercial-No Derivs 3.0 Unported Licence. Subscription access articles will be published under NPG’s standard License to Publish.

Q: Would you accept that a CC-BY-NC(ND) licence does not qualify as Open Access under the terms of the Budapest and Bethesda Declarations because it limits the fields and types of re-use?

Yes, we do accept that. But we believe that we are offering authors and their funders the choices they require. Our licensing terms enable authors to comply with, or exceed, the public access mandates of all major funders.

Q: The title “Nature Communications” implies rapid publication. The figure of 28 days from submission to publication has been mentioned as a minimum. Do you have a target maximum or indicative average time in mind?

We are aiming to publish manuscripts within 28 days of acceptance, contrary to an earlier report which was in error. In addition, Nature Communications will have a streamlined peer review system which limits presubmission enquiries, appeals and the number of rounds of review – all of which will speed up the decision making process on submitted manuscripts.

Q: In the press release an external editorial board is described. This is unusual for a Nature branded journal. Can you describe the makeup and selection of this editorial board in more detail?

In deciding whether to peer review manuscripts, editors may, on occasion, seek advice from a member of the Editorial Advisory Panel. However, the final decision rests entirely with the in-house editorial team. This is unusual for a Nature-branded journal, but in fact, Nature Communications is simply formalising a well-established system in place at other Nature journals. The Editorial Advisory Panel will be announced shortly and will consist of recognized experts from all areas of science. Their collective expertise will support the editorial team in ensuring that every field is represented in the journal.

Q: Peer review is central to the Nature brand, but rapid publication will require streamlining somewhere in the production pipeline. Can you describe the peer review process that will be used at NC?

The peer review process will be as rigorous as any Nature branded title – Nature Communications will only publish papers that represent a convincing piece of work. Instead, the journal will achieve efficiencies by discouraging presubmission enquiries, capping the number of rounds of review, and limiting appeals on decisions. This will enable the editors to make fast decisions at every step in the process.

Q: What changes to your normal process will you implement to speed up production?

The production process will involve a streamlined manuscript tracking system and maximise the use of metadata to ensure manuscripts move swiftly through the production process. All manuscripts will undergo rigorous editorial checks before acceptance in order to identify, and eliminate, hurdles for the production process. Alongside using both internal and external production staff we will work to ensure all manuscripts are published within 28 days of acceptance – however some manuscripts may well take longer due to unforeseen circumstances. We also hope the majority of papers will take less!

Q: What volume of papers do you aim to publish each year in NC?

As Nature Communications is an online only title the journal is not limited by page-budget. As long as we are seeing good quality manuscripts suitable for publication following peer review we will continue to expand. We aim to launch publishing 10 manuscripts per month and would be happy remaining with 10-20 published manuscripts per month but would equally be pleased to see the title expand as long as manuscripts were of suitable quality.

Q: The Scientist article says there would be an 11 page limit. Can you explain the reasoning behind a page limit on an online only journal?

Articles submitted to Nature Communications can be up to 10 pages in length. Any journal, online or not, will consider setting limits to the ‘printed paper’ size (in PDF format) primarily for the benefit of the reader. Setting a limit encourages authors to edit their text accurately and succinctly to maximise impact and readability.

Q: The press release description of papers for NC sounds very similar to papers found in the other “Nature Baby” journals, such as Nature Physics, Chemistry, Biotechnology, Methods etc. Can you describe what would be distinctive about a paper to make it appropriate for NC? Is there a concern that it will compete with other Nature titles?

Nature Communications will publish research of very high quality, but where the scientific reach and public interest is perhaps not that required for publication in Nature and the Nature research journals. We expect the articles published in Nature Communications to be of interest and importance to specialists in their fields. This scope of Nature Communications also includes areas like high-energy physics, astronomy, palaeontology and developmental biology, that aren’t represented by a dedicated Nature research journal.

Q: To be a commercial net gain NC must publish papers that would otherwise have not appeared in other Nature journals. Clearly NPG receives many such papers that are not published, but is it not the case that these papers are, at least as NPG measures them, by definition not of the highest quality? How can you publish more while retaining the bar at its present level?

Nature journals have very high rejection rates, in many cases well over 90% of what is submitted. A proportion of these articles are very high quality research and of importance for a specialist audience, but lack the scientific reach and public interest associated with high impact journals like Nature and the Nature research journals. The best of these manuscripts could find a home in Nature Communications. In addition, we expect to attract new authors to Nature Communications, who perhaps have never submitted to the Nature family of journals, but are looking for a high quality journal with rapid publication, a wide readership and an open access option.

Q: What do you expect the headline subscription fee for NC to be? Can you give an approximate idea of what an average academic library might pay to subscribe over and above their current NPG subscription?

We haven’t set prices for subscription access for Nature Communications yet, because we want to base them on the number of manuscripts the journal may potentially publish and the proportion of open access content. This will ensure the site licence price is based on absolute numbers of manuscripts available through subscription access. We’ll announce these in 2010, well before readers or librarians will be asked to pay for content.

Q: Do personal subscriptions figure significantly in your financial plan for the journal?

No, there will be no personal subscriptions for Nature Communications. Nature Communications will publish no news or other ‘front half content’, and we expect many of the articles to be available to individuals via the open access option or an institutional site license. If researchers require access to a subscribed-access article that is not available through their institution or via the open-access option, they have the option of buying the article through traditional pay-per-view and document-delivery options. For a journal with such a broad scope, we expect individuals will want to pick and choose the articles they pay for.

Q: What do you expect author charges to be for articles licensed for free re-use?

$5,000 (The Americas); €3,570 (Europe); ¥637,350 (Japan); £3,035 (UK and Rest of World). Manuscripts accepted before April 2010 will receive a 20% discount off the quoted APC.

Q: Does this figure cover the expected costs of article production?

This is a flat fee with no additional production charges (such as page or colour figure charges). The article processing charges have been set to cover our costs, including article production.

Q: The press release states that subscription costs will be adjusted to reflect the take up of the author-pays option. Can you commit to a mechanistic adjustment to subscription charges based on the percentage of author-pays articles?

We are working towards a clear pricing principle for Nature Communications, using input from NESLi and others. Because the amount of subscription content may vary substantially from year to year, an entirely mechanistic approach may not give libraries the ability they need to forecast with confidence.

Q: Does the strategic plan for the journal include targets for take-up of the author-pays option? If so can you disclose what those are?

We have modelled Nature Communications as an entirely subscription access journal, a totally open access journal, and continuing the hybrid model on an ongoing basis. The business model works at all these levels.

Q: If the author-pays option is a success at NC will NPG consider opening up such options on other journals?

We already have open access options on more than 10 journals, and we have recently announced the launch in 2010 of a completely open access journal, Cell Death & Disease. In addition, we publish the successful open access journal Molecular Systems Biology, in association with the European Molecular Biology Organization. We’re open to new and evolving business models where it is sustainable. The rejection rates on Nature and the Nature research journals are so high that we expect the APC for these journals would be substantially higher than that for Nature Communications.

Q: Do you expect NC to make a profit? If so over what timeframe?

As with all new launches we would expect Nature Communications to be financially viable during a reasonable timeframe following launch.

Q: In five years time what are the possible outcomes that would be seen at NPG as the journal being a success? What might a failure look like?

We would like to see Nature Communications publish high quality manuscripts covering all of the natural sciences and work to serve the research community. The rationale for launching this title is to ensure NPG continues to serve the community with new publishing opportunities. A successful outcome would be a journal with an excellent reputation for quality and service, a good impact factor, a substantial archive of published papers that span the entire editorial scope and significant market share.

Reflections on Science 2.0 from a distance – Part II

This is the second of two posts discussing the talk I gave at the Science 2.0 Symposium organized by Greg Wilson in Toronto in July. As I described in the last post, Jon Udell pulled out the two key points from my talk and tweeted them. The first suggested some ideas about what the limiting unit of science, or rather science communication, might be. The second takes me into rather more controversial areas:

@cameronneylon uses tags to classify records in a bio lab wiki. When emergent ontology doesn’t match the standard, it’s useful info. #osci20

It may surprise many to know that I am a great believer in ontologies and controlled vocabularies. This is because I am a great believer in effectively communicating science, and without agreed language effective communication isn’t possible. Where I differ with many is in the assumption that because an ontology exists it provides the best means of recording my research. This is borne out of my experiences trying to figure out how to apply existing data models and structured vocabularies to my own research work. Very often the fit isn’t very good, and more seriously, it is rarely clear why or how to go about adapting or choosing the right ontology or vocabulary.

What I was talking about in Toronto was the use of key-value pairs within the Chemtools LaBLog system and the way we use them in templates. To recap briefly, the templates were initially developed so that users can avoid having to manually mark up posts, particularly ones with tables, for common procedures. The fact that we were using a one item-one post system meant that we knew that important inputs into that table would have their own post and that the entry in the table could link to that post. This in turn meant that we could provide the user of a template with a drop down menu populated with post titles. We filter those on the basis of tags, in the form of key-value pairs, so as to provide the right set of possible items to the user. This creates a remarkably flexible, user-driven system that has a strong positive reinforcement cycle. To make the templates work well, and to make your life easier, you need to have the metadata properly recorded for research objects, but in turn you can create templates for your objects that make sure that the metadata is recorded correctly.
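
As a rough illustration of the filtering idea (a toy sketch in Python, not the actual LaBLog code; the post titles and tags are invented), each post carries key-value tags and a template drop-down is simply the set of posts whose tags match the filter for that field:

    # Toy illustration of tag-based filtering for template drop-down menus.
    # Posts carry key-value tags; a template field declares which tags it needs.
    posts = [
        {"title": "Plasmid prep pET-28a #3", "tags": {"DNA": "plasmid", "Material": "solution"}},
        {"title": "Oligo T7-forward",        "tags": {"DNA": "oligonucleotide"}},
        {"title": "Lysozyme stock 10 mg/ml", "tags": {"Material": "solution"}},
    ]

    def dropdown_options(posts, **required_tags):
        """Return titles of posts whose tags include all the required key-value pairs."""
        return [p["title"] for p in posts
                if all(p["tags"].get(k) == v for k, v in required_tags.items())]

    # A PCR template field might filter on DNA-ness, a buffer field on solutions:
    print(dropdown_options(posts, DNA="plasmid"))        # ['Plasmid prep pET-28a #3']
    print(dropdown_options(posts, Material="solution"))  # ['Plasmid prep pET-28a #3', 'Lysozyme stock 10 mg/ml']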

The effectiveness of the templates clearly depends very strongly on the organization of the metadata. The more the pattern of organization maps onto the realities of how substances and data files are used in the local research process, and the more the templates reflect the details of that process, the more effective they are. We went through a number of cycles of template and metadata re-organization. We would re-organize, thinking we had things settled, and then we would come across another instance of a template breaking, or not working effectively. The motivation to re-organize was to make the templates work well, and save effort. The system aided us in this by allowing us to make organizational changes without breaking any of the previous schemes.

Through repeated cycles of modification and adaptation we identified an organizational scheme that worked effectively. Essentially this is a scheme that categorizes objects based on what they can be used for. A sample may be in the material form of a solution, but it may also be some form of DNA. Some procedures can usefully be applied to any solution, some are only usefully applied to DNA. If it is a form of DNA then we can ask whether it is a specific form, such as an oligonucleotide, that can be used in specific types of procedure, such as a PCR. So we ended up with a classification of DNA types based on what they might be used for (any DNA can be a PCR template, only a relatively short single stranded DNA can be used as a – conventional – PCR primer). However in my work I also had to allow for the fact that something that was DNA might also be protein; I have done work on protein-DNA conjugates and I might want to run these on both a protein gel and a DNA gel.

We had, in fact, built our own, small scale laboratory ontology that maps onto what we actually do in our laboratory. There was little or no design that went into this, only thinking of how to make our templates work. What was interesting was the process of then mapping our terms and metadata onto designed vocabularies. The example I used in the talk was the Sequence Ontology terms relating to categories of DNA. We could map the SO term plasmid on to our key value pair DNA:plasmid, meaning a double stranded circular DNA capable in principle of transforming bacteria. SO:ss_oligo maps onto DNA:oligonucleotide (kind of, I’ve just noticed that synthetic oligo is another term in SO).

But we ran into problems with our type DNA:double_stranded_linear. In SO there is more than one term, including restriction fragments and PCR products. This distinction was not useful to us. In fact it would create a problem. For our purposes restriction fragments and PCR products were equivalent in terms of what we could do with them. The distinction the SO makes is in where they come from, not what they can do. Our schema is driven by what we can do with them. Where they came from and how they were generated is also implicit in our schema but it is separated from what an object can be used for.
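
The kind of post-hoc mapping this implies might look something like the sketch below (Python again, with SO term names taken from the discussion above rather than formal identifiers; it is illustrative only). The point is that the mapping from our use-driven categories to the provenance-driven SO terms is sometimes one-to-many, and deliberately so:

    # Sketch of a post-hoc mapping from local key-value tags to SO term names.
    # Term names only; no stable SO identifiers are given here.
    local_to_so = {
        ("DNA", "plasmid"):                ["plasmid"],
        ("DNA", "oligonucleotide"):        ["ss_oligo"],  # 'synthetic_oligo' may also apply
        ("DNA", "double_stranded_linear"): ["restriction_fragment", "PCR_product"],
    }

    def so_terms(key, value):
        """Translate a local key-value tag into candidate SO terms, if any."""
        return local_to_so.get((key, value), [])

    print(so_terms("DNA", "double_stranded_linear"))
    # ['restriction_fragment', 'PCR_product'] -- SO distinguishes these by origin,
    # a distinction our use-driven schema does not need, so the mapping stays loose.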

There is another distinction here. The drop down menus in our templates do not have an “or” logic in the current implementation. This drives us to classify the possible use of objects in as general a way as possible. We might wish to distinguish between “flat ended” linear double stranded DNA (most PCR products) and “sticky ended” or overhanging linear ds DNA (many restriction fragments) but we are currently obliged to have at least one key-value pair that places these together, as many standard procedures can be applied to both. In ontology construction there is a desire to describe as much detail as possible. Our framework drives us towards being as general as possible. Both approaches have their uses and neither is correct. They are built for different purposes.

The bottom line is that for a structured vocabulary to be useful and used it has to map well onto two things: the processes that the user is operating, and the inputs and outputs of those processes. That is, it must match the mental model of the user. Secondly it must map well onto the tools that the user has to work with. Most existing biological ontologies do not map well onto our LaBLog system, although we can usually map to them relatively easily for specific purposes in a post-hoc fashion. However I think our system is mapped quite well by some upper ontologies.

I’m currently very intrigued by an idea that I heard from Allyson Lister, which matches well onto some other work I’ve recently heard about that involves “just in time” and “per-use” data integration. It also maps onto the argument I made in my recent paper that we need to separate the issues of capturing research from those involved in describing and communicating research. The idea is that for any given document or piece of work, rather than trying to fit it into a detailed existing ontology, you build a single-use local ontology describing what is happening in this specific case, based on a more general ontology, perhaps OBO, perhaps something even more general. Then this local description can be mapped onto more widely used and more detailed ontologies for specific purposes.

At the end of the day the key is effective communication. We don’t all speak the same language and we’re not going to. But if we had the tools to help us capture our research in an appropriate local dialect in a way that makes it easy for us, and others, to translate into whatever lingua franca is best for a given purpose, then we will make progress.

Reflections on Science 2.0 from a distance – Part I

Some months ago now I gave a talk at a very exciting symposium organized by Greg Wilson as a closer for the Software Carpentry course he was running at Toronto University. It was exciting because of the lineup but also because it represented a real coming together of views on how developments in computer science and infrastructure, as well as new social capabilities brought about by computer networks, are changing scientific research. I talked, as I have several times recently, about the idea of a web-native laboratory record, thinking about what the paper notebook would look like if it were re-invented with today’s technology. Jon Udell gave a two-tweet summary of my talk which I think captured the two key aspects of my viewpoint perfectly. In this post I want to explore the first of these.

@cameronneylon: “The minimal publishable unit of science — the paper — is too big, too monolithic. The useful unit: a blog post.”#osci20

The key to the semantic web, linked open data, and indeed the web and the internet in general, is the ability to be able to address objects. URLs in and of themselves provide an amazing resource making it possible to identify and relate digital objects and resources. The “web of things” expands this idea to include addresses that identify physical objects. In science we aim to connect physical objects in the real world (samples, instruments) to data (digital objects) via concepts and models. All of these can be made addressable at any level of granularity we choose. But the level of detail is important. From a practical perspective too much detail means that the researcher won’t, or even can’t, record it properly. Too little detail and the objects aren’t flexible enough to allow re-wiring when we discover we’ve got something wrong.

A single sample deserves an identity. A single data file requires an identity, although it may be wrapped up within a larger object. The challenge comes when we look at process, descriptions of methodology, and claims. A traditionally published paper is too big an object, something shown clearly by the ambiguity of citations to papers. A paper will generally contain multiple claims, and multiple processes. A citation could refer to any of these. At the other end I have argued that a tweet, 140 characters, is too small, because while you can make a statement it is difficult to provide context in the space available. To be a unit of science a tweet really needs to contain a statement and two references or citations, providing the relationship between two objects. It can be done but it’s just a bit too tight in my view.

So I proposed that the natural unit of science research is the blog post. There are many reasons for this. Firstly the length is elastic, accommodating something (nearly) as short as a tweet, to thousands of lines of data, code, or script. But equally there is a broad convention of approximate length, ranging from a few hundred to a few thousand words, about the length, in fact, of a single lab notebook page, and about the length of a simple procedure. The second key aspect of a blog post is that it natively comes with a unique URL. The blog post is a first class object on the web, something that can be pointed at, scraped, and indexed. And crucially the blog post comes with a feed, and a feed that can contain rich and flexible metadata, again in agreed and accessible formats.
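
To make that last point a little more concrete, here is a minimal sketch (in Python, using the standard library; the URLs, dates, and category terms are invented) of the kind of machine-readable entry a post’s feed carries, following the Atom syndication format. The post’s stable URL, links to related objects, and key-value metadata all travel with it:

    # Minimal sketch of an Atom feed entry for a single lab-notebook post.
    # Element names follow the Atom format; all values below are illustrative.
    import xml.etree.ElementTree as ET

    ATOM = "http://www.w3.org/2005/Atom"
    entry = ET.Element("{%s}entry" % ATOM)
    ET.SubElement(entry, "{%s}title" % ATOM).text = "PCR of construct X, attempt 2"
    ET.SubElement(entry, "{%s}id" % ATOM).text = "https://example.org/lablog/2009/11/pcr-construct-x-2"  # the post's stable URL
    ET.SubElement(entry, "{%s}updated" % ATOM).text = "2009-11-30T10:15:00Z"
    link = ET.SubElement(entry, "{%s}link" % ATOM)
    link.set("rel", "related")
    link.set("href", "https://example.org/lablog/samples/construct-x")  # the input sample this procedure used
    ET.SubElement(entry, "{%s}category" % ATOM).set("term", "DNA:plasmid")  # key-value metadata rides along in the feed

    print(ET.tostring(entry, encoding="unicode"))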

If we are to embrace the power of the web to transform the laboratory and scientific record then we need to think carefully about what the atomic components of that record are. Get this wrong and we make a record which is inaccessible, and which doesn’t take advantage of the advanced tooling that the consumer web now provides. Get it right and the ability to Google for scientific facts will come for free. And that would just be the beginning.

If you would like to read more about these ideas I have a paper just out in the BMC Journal Automated Experimentation.