Articles tagged with: Academic publishing
Understanding how a process looks from outside our own echo chamber can be useful. It helps to calibrate and sanity-check our own responses. It adds an external perspective and, at its best, can save us from our own overly fixed ideas. In the case of the ongoing Elsevier Boycott we even have a perspective that comes from two opposed directions. The two analyst/brokerage firms Bernstein and Exane Paribas have recently published reports on their view of how recent events should affect the view of those investing in Reed Elsevier. …
When the history of the Research Works Act, and the reaction against it, is written, that history will point to the factors that allowed smart people with significant marketing experience to walk with their eyes wide open into the teeth of a storm that thousands of people would have predicted with complete confidence. That story will detail two utterly incompatible world views of scholarly communication.
Last Friday I spoke at the STM Innovation Seminar in London, talking in general terms about the theme I’ve been developing recently of focussing on enabling user discovery rather than providing central filtering, enabling people to act as their own gatekeepers rather than publishers taking that role on themselves. The example I used, the JACS hydride oxidation paper, wasn’t such a good example because, as a retraction, there is more detailed information available in the published retraction. Had it come a few days earlier, the arsenic microbes paper, and the subsequent detailed critique, might well have made a better example. But both of these examples seem to be pointing towards a world in which post-publication peer review is not just happening, but expected. How can publishers work to make the best of this new information and treat papers as the beginning of a story rather than its end?
I recently made the most difficult decision I’ve had to take thus far as a journal editor. That decision was ultimately to accept the paper. That probably doesn’t sound like a difficult decision until I explain that I made it despite a referee recommending, not once but twice, that I reject the paper with no opportunity for resubmission. One of the real problems I have with traditional pre-publication peer review is the way it takes a very nuanced problem around a work with many different parts and demands a hard yes/no decision.
The idea that “it’s not information overload, it’s filter failure” combined with the traditional process of filtering scholarly communication by peer review prior to publication seems to be leading towards the idea that we need to build better filters by beefing up the curation of research output before it is published. Here I argue that this is backwards and that the ‘filter failure’ soundbite is maybe unfortunate in the context of scholarly communications. The web won’t reduce the cost of curation, but it has reduced the cost of publication. This means that instead of building filters to prevent stuff getting on the web it is more productive to focus on enhancing discovery. A focus on enabling discovery can both deliver for researchers and provide business models that are more aligned with the way the web works.
There has been an awful lot written and said recently about author-pays business models for scholarly publishing, and a lot of it has focussed on PLoS ONE. Most recently Kent Anderson has written a piece on Scholarly Kitchen that contains a number of fairly serious misconceptions about the processes of PLoS ONE. This is a shame because I feel it has muddled the much more interesting question that was intended to be the focus of his piece. Nonetheless, here I want to give a robust defence of author-pays models and of PLoS ONE in particular.
Suddenly it seems everyone wants to re-imagine scientific communication. From the ACS symposium a few weeks back to a PLoS Forum, via interesting conversations with a range of publishers, funders and scientists, it seems a lot of people are thinking much more seriously about how to make scientific communication more effective, more appropriate to the 21st century and above all, to take more advantage of the power of the web. For me, the “paper” of the future has to encompass much more than just the narrative descriptions of processed results we have today. Here I discuss the idea of the research communication as an aggregation of objects that are linked together into a story by an “editor”. This has the potential both to encompass what papers look like today and prepare us for a much more diverse future. At the same time if we built our research communications this way we get the semantic web for research data more or less as a free extra.
A story of two major retractions from a well known research group has been getting a lot of play over the last few days, with a News Feature (1) and Editorial (2) in the 15 May edition of Nature. The story turns on the claim that Homme Hellinga’s group was able to convert the E. coli ribose binding protein into a triose phosphate isomerase (TIM) using a computational design strategy. Two papers on the work appeared, one in Science (3) and one in J Mol Biol (4). However another group, having …
Once again a range of conversations in different places have collided in my feed reader. Over on Nature Network, Martin Fenner posted on ResearcherID, which led to a discussion about attribution and in particular Martin’s comment that there was a need to be able to link to comments and the necessity of timestamps. Then DrugMonkey posted a thoughtful piece about the issue of funding body staff introducing ideas from unsuccessful grant proposals they have handled into projects which they have a responsibility for guiding.
Once …