The end of the journal? What has changed, what stayed the same?

This is an approximate rendering of my comments as part of the closing panel of “The End of Scientific Journal? Transformations in Publishing”, held at the Royal Society, London, on 27 November 2015. It should be read as a reconstruction of what I might have said rather than an accurate record. The day had focussed on historical accounts of “journals” as mediators of both professional and popular research communications. A note of the meeting will be published. Our panel was set the question of “will the journal still exist in 2035?”.

Over the course of 2015 I’ve greatly enjoyed being part of the series of meetings looking at the history of research communications and scientific journals. In many cases we’ve discovered that our modern concerns – the engagement of the wider public, the challenge of expertise – are not at all new, and that many of the same issues were discussed at length in the 17th, 18th and 19th centuries. And then there are moments of whiplash as something incomprehensible streaks past: Pietro Corsi telling us that dictionaries were published as periodicals; Aileen Fyfe explaining that while papers given at Royal Society meetings were then refereed, the authors could only make “verbal”, not intellectual, changes to the text in response; Jon Topham telling us that chemistry and physics were classified under literature in the journals of the early 19th century.

So if we are to answer the exam question we need to address the charge that Vanessa Heggie gave us in the first panel discussion. What has remained the same? And what has changed? If we are to learn from history then we need to hold ourselves to a high standard in trying to understand what it is (not) telling us. Prediction is always difficult, especially about the future…but it wasn’t Niels Bohr who first said that. A Dane would likely tell us that “det er svært at spå, især om fremtiden” is a quote from Storm Petersen or perhaps Piet Hein, but it probably has deeper roots. It’s easy to tell ourselves compelling stories, whether they say that “everything has changed” or that “it’s always been that way”, but actually checking and understanding the history matters.

So what has stayed the same? We’ve heard throughout today about the importance of groups and communities. Of authors, of those who were (or were trying to be) amongst the small group of paid professors at UK universities. Of the distinctions between communities of amateurs and of professionals. We’ve heard about language communities, about the importance of who you know in being read at the Royal Society, and about the development of journals as a means of creating research disciplines. I think this centrality of communities, of groups, of clubs is a strand that links us to the 19th century. And I think that’s true because of the nature of knowledge itself.

Knowledge is a slippery concept, and I’ve made the argument elsewhere, so for now I’ll just assert that it belongs in the bottom right quadrant of Ostrom’s categorisation of goods. Knowledge is non-rivalrous – if I give it to you I still have it – but also excludable – I can easily prevent you from having it, by not telling you, or by locking it up behind a paywall, or simply behind impenetrable jargon. This is interesting because Buchanan’s work on the economics of clubs shows us that it is precisely the goods in this quadrant that are used to sustain clubs and make them viable.
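To make the quadrants concrete, here is a minimal sketch of the standard classification. The quadrant names are textbook economics; the examples in the comments are my own illustrations, not anything from the talks:

```python
# A minimal sketch of the two-axis classification of goods referred to
# above. The quadrant names are the standard economics terms; the
# examples in the comments are illustrative assumptions.

def classify_good(rivalrous: bool, excludable: bool) -> str:
    """Return the textbook name for a good with these two properties."""
    if rivalrous and excludable:
        return "private good"          # e.g. a lab consumable
    if rivalrous and not excludable:
        return "common-pool resource"  # e.g. an open fishery
    if not rivalrous and excludable:
        return "club good"             # e.g. knowledge behind a paywall or jargon
    return "public good"               # e.g. knowledge freely shared

# Knowledge as described above: non-rivalrous but excludable.
print(classify_good(rivalrous=False, excludable=True))  # -> club good
```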

The survival of journals, or of scholarly societies, disciplines or communities, therefore depends on how they deploy knowledge as a club good. To achieve this deployment it is necessary to make that knowledge, the club good, less exclusive and more public. What is nice about this view is that it allows us, to borrow Aileen Fyfe’s language, to talk about “public-making” (Jan Velterop has used the old term “publicate” in a similar way) as a broad activity in which public engagement, translation, and – to Rebekah Higgitt’s point – education, as well as scholarly publishing as we traditionally understand it, sit as overlapping subsets.

But what has changed? I would argue that the largest change in the 20th century was one of scale. The massive increase in the scale and globalisation of the research enterprise, as well as the rise of literacy, meant that traditional modes of coordination – within scholarly societies and communities, and beyond to interested publics – were breaking down. To address this coordination problem we took knowledge as a club good and privatised it, introducing copyright and intellectual property as a means of engaging corporate interests to manage the coordination problem for us. It is not an accident that the scale-up, the introduction of copyright and IP to scholarly publishing, and scholarly publishing becoming (for the first time) profitable all coincide. The irony is that by creating larger, clearly defined markets we solved the problem of market scale that troubled early journals – which needed to find both popular and expert audiences – by locking wider publics out.

The internet and the web also changed everything, but it’s not the cost of reproduction that matters most. The critical change for our purpose here is the change in the economics of discovery. As part of our privatisation of knowledge we parcelled it up into journals, an industrial broadcast mechanism in which one aims with as much precision as possible to reach the right, expert, audience. The web shifts the economics of discovering expertise in a way that makes it viable to discover, not the expert who knows everything about a subject, but the person who just happens to have the right piece of knowledge to solve a specific problem.

These two trends are pulling us in opposite directions. The industrial model means creating specialisation and labelling: the creation of communities and niches that are, for publishers, markets that can be addressed individually. These communities are defined by credentialling and the validation of deep expertise in a given subject. The idea of micro-expertise – of a person with no credentials having the key information or insight – radically undermines the traditional dynamics of scholarly group formation. I don’t think it is an accident that those scholarly communities that Michèle Lamont identifies as having the most stable self-conception tend to be the most traditional in terms of their communication and public engagement. Lamont identifies history (but not, as Berris Charnley reminded me, the radicals from the history of science here today!) and (North American analytical) philosophy in this group. I might add synthetic chemistry from my own experience.

It is perhaps indicative of the degree of siloing that I’m a trained biochemist at a history conference, telling you about economics – two things I can’t claim any deep expertise in – and last week I gave a talk from a cultural theory perspective. I am merrily skipping across the surface of these disciplines, dipping in a little to pull out interesting connections, and no-one has called me on it*. You are being forced, both by the format of this panel and by the information environment we inhabit, to assess my claims not based on my PhD thesis topic or my status or position, but on how productively my claims and ideas are clashing with yours. We discover each other, not through the silos of our disciplinary clubs and journals, but through the networked affordances that connect me to you – a path that in this case we could trace explicitly via Berris Charnley and Sally Shuttleworth. That sounds to me rather more like the 19th century world we’ve been hearing about today than the 20th century one that our present disciplinary cultures evolved in.

This restructuring of the economics of discovery has profound implications for our understanding of expertise. And it is our cultures of expertise that form the boundaries of our groups – our knowledge clubs – whether they be research groups, disciplines, journals, discussion meetings or scholarly societies. The web shifts our understanding of public-making: from the need to define and target the expert audience through broadcast – a one-to-audience interaction – to a many-to-many environment in which we aim to connect with the right person to discover the right contribution. The importance of the groups remains. The means by which they can, and should want to, communicate has changed radically.

The challenge lies not in giving up on our ideas of expertise, but in identifying how we can create groups that develop the shared understanding that enables effective and efficient communication internally while remaining open to external contributions. It is not that defining group boundaries doesn’t matter – it is crucial – but the shape and porosity of those boundaries needs to change. Journals have played a role throughout their history in creating groups, defining boundaries, and validating membership. That role remains important; it is just that the groups, and their cultures, will need to change to compete and survive.

We started 2015 with the idea that the journal was invented in 1665. This morning we heard from Jon Topham that the name was first used in the early 19th century, and then for something that doesn’t look much like what we would call a journal today. I believe in 20 years we will still have things called journals, and they will be the means of mediating communications between groups, including professional scholars and interested publics. They’ll look very different from what we have today, but their central function, of mediating and expressing identity for groups, will remain.

* This is not quite true. Martin Eve has called me on skipping too lightly across the language of a set of theoretical frameworks from the humanities without doing sufficient work to completely understand them. I don’t think it is coincidental that Martin is a cultural and literary scholar who also happens to be a technologist, computer programmer and deeply interested in policy design and implementation, as well as in the intersection of symbolic and financial economies.

Added Value: I do not think those words mean what you think they mean

There are two major strands to the position traditional publishers have taken in justifying the process by which they will make the, now inevitable, transition to a system supporting Open Access. The first of these is that the transition will cost “more money”. The exact costs are not clear but the, broadly reasonable, assumption is that there needs to be transitional funding available to support what will clearly be a mixed system over some transitional period. The argument of course is over how much money and where it will come from, as well as an issue that hasn’t yet been publicly broached: how long will it last? Expect lots of positioning on this over the coming months with statements about “average paper costs” and “reasonable time frames”, with incumbent subscription publishers targeting figures of around $2,500-5,000 and ten years respectively, and those on my side of the fence suggesting figures of around $1,500 and two years. This will be fun to watch, but the key will be to see where this money comes from (and what subsequently gets cut), the mechanisms put in place to release this “extra” money, and the way in which they are set up so as to wind down and provide downward price pressure.
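To get a feel for what is at stake in those two positions, here is a rough back-of-envelope sketch. The per-paper figures and time frames are the positioning numbers quoted above; the global annual paper count is purely my assumption for illustration:

```python
# Back-of-envelope comparison of the two transition scenarios described
# above. papers_per_year is an assumed figure for illustration only.

papers_per_year = 2_000_000  # assumption, not a figure from the post

scenarios = {
    "incumbent subscription publishers": {"apc_range": (2_500, 5_000), "years": 10},
    "open access advocates":             {"apc_range": (1_500, 1_500), "years": 2},
}

for name, s in scenarios.items():
    low, high = s["apc_range"]
    total_low = low * papers_per_year * s["years"]
    total_high = high * papers_per_year * s["years"]
    print(f"{name}: ${total_low / 1e9:.0f}B-${total_high / 1e9:.0f}B "
          f"of transitional funding over {s['years']} years")
```

The absolute numbers matter less than the order-of-magnitude gap between the two positions.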

The second arm of the publisher argument has been that they provide “added value” over what the scholarly community puts into the publication process. It has become a common call of the incumbent subscription publishers that they are not doing enough to explain this added value. Most recently David Crotty has posted at Scholarly Kitchen saying that this was a core theme of the recent SSP meeting. This value exists, but clearly we disagree on its magnitude. The problem is we never see any actual figures given. But I think there are some recent numbers that can help us put some bounds on what this added value really is, and ironically they have been provided by the publisher associations in their efforts to head off six month embargo periods.

When we talk about added value we can posit some imaginary “real” value, but this is not a useful number – there is no way we can determine it. What we can do is talk about realisable value, i.e. the amount that the market is prepared to pay for the additional functionality that is being provided. I don’t think we are in a position to pin that number down precisely, and clearly it will differ between publishers, disciplines, and workflows, but what I want to do is attempt to pin down some points which I think help to bound it, both from the provider and the consumer side. In doing this I will use a few figures and reports, as well as placing an explicit interpretation on the actions of various parties. The key data points I want to use are as follows:

  1. All publisher associations and most incumbent publishers have actively campaigned against open access mandates that would make the final refereed version of a scholarly article – prior to typesetting, publication, indexing, and archival – available online in any form, either immediately or within six months after publication. The Publishers Association (UK) and ALPSP are both on record as stating that such a mandate would be “unsustainable” and most recently that it would bankrupt publishers.
  2. In a survey run by ALPSP of research libraries (although there are a series of concerns that have to be raised about the methodology) a significant proportion of libraries stated that they would cut some subscriptions if the majority of research articles were available online six months after formal publication. The survey states that it appeared that most respondents assumed that the freely available version would be the original author version, i.e. not the one that was peer reviewed.
  3. There are multiple examples of financially viable publishing houses running a pure Open Access programme with average author charges of around $1500. These are concentrated in the life and medical sciences where there is both significant funding and no existing culture of pre-print archives.
  4. The SCOAP3 project has created a formal journal publication framework which will provide open access to peer reviewed papers for a community that does have a strong pre-print culture utilising the ArXiv.

Let us start at the top. Publishers actively campaign against a reduction of embargo periods. This makes it clear that they do not believe that the product they provide, in transforming the refereed version of a paper into the published version, has sufficient value that their existing customers will pay for it at the existing price. That is remarkable, and a frightening hole at the centre of our current model. The service providers can only provide sufficient added value to justify the current price if they additionally restrict access to the “non-added-value” version. A supplier that was confident about the value they add would have no such issues; indeed they would be proud to compete with this prior version, confident that the additional price they were charging was clearly justified. That they do not should be a concern to all of us, not least the publishers.

Many publishers also seek to restrict access to any prior version, including the author’s original version prior to peer review. These publishers don’t even believe that their management of the peer review process adds sufficient value to justify the price they are charging. This is shocking. The ACS, for instance, has so little faith in the value it adds that it seeks to control all prior versions of any paper it publishes.

But what of the customer? Well, the ALPSP survey, if we take the summary at face value as I have suggested above, shows that libraries also doubt the value added by publishers. This is more of a quantitative argument, but that some libraries would cancel some subscriptions shows that the community doesn’t believe the current price is worth paying, even allowing for a six month delay in access. So broadly speaking we can see that both the current service providers and the current customers do not believe that the costs of the pure service element of subscription-based scholarly publication are justified by the value added through this service. In combination this means we can place some upper bounds on the value added by publishers.

Take the approximately $10B currently paid as cash costs to recompense publishers for their work in facilitating scholarly communications: neither the incumbent subscription publishers nor their current library customers believe that the value added by publishers justifies that cost, absent artificial restrictions on access to the non-value-added version.

This tells us not very much about what the realisable value of this work actually is, but it does provide an upper bound. But what about a lower bound? One approach would be to turn to the services provided to authors by Open Access publishers. These costs are willingly incurred by a paying customer, so it is tempting to use them directly as a lower bound. This is probably reasonable in the life and medical sciences, but as we move into other disciplinary areas, such as mathematics, it is clear that this cost level is not seen as attractive. In addition the life and medical sciences have no tradition of wide availability of pre-publication versions of papers. That means for these disciplines the willingness to pay the approximately $1500 average cost of APCs is in part bound up with the wish to make the paper effectively available through recognised outlets. We have not yet separated the value of the original copy from the added value provided by this publishing service. The $1000-1500 mark is however a touchstone worth bearing in mind for these disciplines.

To do a fair comparison we would need to find a space where there is a thriving pre-print culture and a demonstrated willingness to pay a defined price for added value, in the form of formal publication, over and above this existing availability. The Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP3) is an example of precisely this. The particle physics community has essentially decided unilaterally to assume control of the journals for its area and has placed its service requirements out for tender. Unfortunately this means we don’t have the final prices yet, but we will soon, and the executive summary of the working party report suggests a reasonable price range of €1000-2000. If we assume the successful tender comes in at or slightly below the lower end of this range, we see an accepted price for added value, over that already provided by the ArXiv for this disciplinary area, that is not a million miles away from that figure of $1500.

Of course this is before real price competition in this space is factored in. The realisable value is a function of the market and as prices inevitably drop there will be downward pressure on what people are willing to pay. There will also be increasing competition from archives, repositories, and other services that are currently free or near free to use, as they inevitably increase the quality and range of the services they offer. Some of these will mirror the services provided by incumbent publishers.

A reasonable current lower bound for realisable added value by publication service providers is ~$1000 per paper. This is likely to drop as market pressures come to bear and existing archives and repositories seek to provide a wider range of low cost services.
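Putting the two ends together gives a crude bracket. A minimal sketch, assuming a global output of around two million papers a year (my assumption; the $10B total and the ~$1000 floor are the figures discussed above):

```python
# Crude bracketing of the realisable added value per paper using the
# figures discussed above. papers_per_year is an assumption for
# illustration only.

total_spend = 10e9           # approximate current cash cost of scholarly publishing
papers_per_year = 2_000_000  # assumed global annual output

upper_bound = total_spend / papers_per_year  # implied average price paid today
lower_bound = 1_000                          # APC / SCOAP3-derived floor

print(f"realisable added value per paper: "
      f"${lower_bound:,.0f} to ${upper_bound:,.0f}")
# The argument above says the true figure sits below the upper bound,
# which is only sustained by restricting access to the non-added-value version.
```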

Where does this leave us? Not with a clear numerical value we can ascribe to this added value, but that’s always going to be a moving target. But we can get some sense of the bottom end of the range. It’s currently $1000 or greater, at least in some disciplines, but is likely to go down. It’s also likely to diversify as new providers offer subsets of the services currently offered as one indivisible lump. At the top end, the actions of both customers and service providers suggest they believe that the added value is less than what we currently pay, and that only artificial controls over access to the non-value-added versions sustain the current price. What we need is a better articulation of the real value that publishers add, and an honest conversation about what we are prepared to pay for it.


P ≠ NP and the future of peer review

“We demonstrate the separation of the complexity class NP from its subclass P. Throughout our proof, we observe that the ability to compute a property on structures in polynomial time is intimately related to the statistical notions of conditional independence and sufficient statistics. The presence of conditional independencies manifests in the form of economical parametrizations of the joint distribution of covariates. In order to apply this analysis to the space of solutions of random constraint satisfaction problems, we utilize and expand upon ideas from several fields spanning logic, statistics, graphical models, random ensembles, and statistical physics.”

Vinay Deolalikar [pdf]

No. I have no idea either, and the rest of the document just gets more confusing for a non-mathematician. Nonetheless the online maths community has lit up with excitement as this document, claiming to settle one of the major outstanding problems in maths, has circulated. And in the process we are seeing online collaborative post-publication peer review take off.

It has become easy to say that review of research after it has been published doesn’t work. Many examples have failed, or been only partially successful. Most journals with commenting systems still get relatively few comments on the average paper. Open peer review trials have generally been judged a failure. And so we stick with traditional pre-publication peer review despite the lack of any credible evidence that it does anything except cost a few billion pounds a year.

Yesterday, Bill Hooker, not exactly a nay-sayer when it comes to using the social web to make research more effective, wrote:

“…when you get into “likes” etc, to me that’s post-publication review — in other words, a filter. I love the idea, but a glance at PLoS journals (and other experiments) will show that it hasn’t taken off: people just don’t interact with the research literature (yet?) in a way that makes social filtering effective.”

But actually the picture isn’t so negative. We are starting to see examples of post-publication peer review, and to see it radically outperform traditional pre-publication peer review. The rapid demolition [1, 2, 3] of the JACS hydride oxidation paper last year (not least pointing out that the result wasn’t even novel) demonstrated that the chemical blogosphere was more effective than the peer review of one of the premier chemistry journals. More recently 23andMe issued a detailed, and at least from an outside perspective devastating, peer review (with an attempt at replication!) of a widely reported Science paper describing the identification of genes associated with longevity. This followed detailed critiques from a number of online writers.

These, though, were reviews of published papers, demonstrating that a post-publication approach can work, but not showing it working for an “informally published” piece of research such as a blog post or other online posting. In the case of this new mathematical proof, the author, Vinay Deolalikar, apparently took the standard approach in maths and sent a pre-print to a number of experts in the field for comments and criticisms. The paper is not in the ArXiv and was in fact made public by one of the email correspondents. The rumours then spread like wildfire, with widespread media reporting and widespread online commentary.

Some of that commentary was expert and well informed. First, a series of posts appeared stating that the proof is “credible” – that is, that it was worth deeper consideration and the time of experts to look for holes. There appears to be widespread skepticism that the proof will be correct, including a $200,000 bet from Scott Aaronson, but also a widespread view that it is nonetheless useful, that it will progress the field in a helpful way even if it is wrong.

After this first round, there have been summaries of the proof, and now the identification of potential issues is occurring (see RJLipton for a great summary). As far as I can tell these issues are potentially extremely subtle and will require the attention of the best domain experts to resolve. In a couple of cases these experts have already potentially “patched” the problem, adding their own expertise to contribute to the proof. And in the last couple of hours, as Michael Nielsen pointed out to me, there is the beginning of a more organised collaboration to check through the paper.

This is collaborative, positive peer review, and it is happening at web scale. I suspect that there are relatively few experts in the area who aren’t spending some of their time on this problem this week. In the market for expert attention this proof is buying big, as it should. An important problem is getting a thorough going-over and being tested, possibly to destruction, in a much more efficient manner than could possibly be achieved by traditional peer review.

There are a number of objections to seeing this as generalizable to other research problems and fields. Firstly, maths has a strong pre-publication communication and review structure which has been strengthened over the years by the success of the ArXiv. Moreover there is a culture of much higher standards of peer review in maths, review which can take years to complete. Both of these encourage circulation of drafts to a wider community than in most other disciplines, priming the community for distributed review to take place.

The other argument is that only high profile work will get this attention; less prominent work may not get reviewed at this level, possibly not at all. Actually I think this is a good thing. Most papers are never cited, so why should they suck up the resources required to review them? Whether a paper, published or not, is useful to someone, somewhere, is not something that can be determined by one or two reviewers. Whether it is useful to you is something that only you can decide. The only person competent to review which papers you should look at in detail is you. Sorry.

Many of us have argued for some time that post-publication peer review with little or no pre-publication review is the way forward. Many have argued against this on the practical grounds that we simply can’t get it to happen: there is no motivation for people to review work that has already been published. What I think this proof, and the other stories of online review, tells us is that these forms of review will grow of their own accord, particularly around work that is high profile. My hope is that this will start to create an ecosystem where this type of commenting and review is seen as valuable. That would be a more positive route than the alternative, which seems to be a wholesale breakdown of the current system as workloads rise too high and the willingness of people to contribute drops.

The argument always brought forward for peer review is that it improves papers. What interests me about the online activity around Deolalikar’s paper is that there is a positive attitude. By finding the problems, the proof can be improved, and new insights found, even if the overall claim is wrong. If we bring a positive attitude to making peer review work more effectively and efficiently then perhaps we can find a good route to improving the system for everyone.


Show us the data now damnit! Excuses are running out.

A very interesting paper from Caroline Savage and Andrew Vickers was published in PLoS ONE last week detailing an empirical study of data sharing by PLoS journal authors. The results themselves – that one out of ten corresponding authors provided data – are not particularly surprising, mirroring as they do previous studies, both formal [pdf] and informal (also from Vickers; I assume this is a different data set), of data sharing.

Nor are the reasons why data was not shared particularly new. Two authors couldn’t be tracked down at all. Several did not reply, and the remainder came up with the usual excuses: “too hard”, “need more information”, “university policy forbids it”. The numbers in the study are small, and it is a shame it wasn’t possible to do a wider study that might have teased out discipline, gender, and age differences in attitude. Such a study really ought to be done, but it isn’t clear to me how to do it effectively, properly, or indeed ethically. The reason why small numbers were chosen was both to focus on PLoS authors, who might be expected to have more open attitudes, and to make the request to the authors – that the data was to be used in a Masters educational project – plausible.

So while helpful, the paper itself doesn’t provide much that is new. What will be interesting will be to see how PLoS responds. These authors are clearly violating stated PLoS policy on data sharing (see e.g. the PLoS ONE policy). The papers should arguably be publicly pulled from the journals. Most journals have similar policies on data sharing, and most have no corporate interest in actually enforcing them. I am unaware of any cases where a paper has been retracted due to the authors’ unwillingness to share. (If there are examples I’d love to know about them! [Ed. Hilary Spencer from NPG pointed us in the direction of some case studies in a presentation from Philip Campbell.])

Is it fair that a small group be used as a scapegoat? Is it really necessary to go for the nuclear option and pull the papers? As was said in a Friendfeed discussion thread on the paper: “IME [In my experience] researchers are reeeeeeeally good at calling bluffs. I think there’s no other way”. I can’t see any other way of raising the profile of this issue. Should PLoS take the risk of being seen as hardline on this, risking the consequences of people not sending papers there because of the need to reveal data?

The PLoS offering has always been about quality: high profile journals delivering important papers and, at PLoS ONE, critical analysis of the quality of the methodology. The perceived value of that quality is compromised by authors who do not make data available. My personal view is that PLoS would win by taking a hard line and the moral high ground. Your paper might be important enough to get into Journal X, but is the data of sufficient quality to make it into PLoS ONE? Other journals would be forced to follow – at least those that take quality seriously.

There will always be cases where data cannot or should not be made available. But these should be carefully delineated exceptions, not the rule. If you can’t be bothered putting your data into a shape worthy of publication then the conclusions you have based on that data are worthless, and you should not be allowed to publish. End of. We are running out of excuses. The time to make the data available is now. If it isn’t backed by the data then it shouldn’t be published.

Update: It is clear from this editorial blog post from the PLoS Medicine editors that PLoS do not in fact know which papers are involved. As was pointed out by Steve Koch in the Friendfeed discussion, there is an irony that Savage and Vickers have not, in a sense, provided their own raw data, i.e. the emails and names of correspondents. However I would accept that to do so would be an unethical breach of presumed privacy, as the correspondents might reasonably have expected these were private emails, and to publish names would effectively be entrapment. Life is never straightforward, and this is precisely the kind of grey area we need more explicit guidance on.

Savage CJ, Vickers AJ (2009) Empirical Study of Data Sharing by Authors Publishing in PLoS Journals. PLoS ONE 4(9): e7078. doi:10.1371/journal.pone.0007078

Full disclosure: I am an academic editor for PLoS ONE and have raised the issue of insisting on supporting data for all charts and graphs in PLoS ONE papers in the editors’ forum. There is also a recent paper with my name on it in which the words “data not shown” appear. If anyone wants that data I will make sure they get it, and as soon as Nature enables article commenting we’ll try to get something up there. The usual excuses apply, and don’t really cut the mustard.

The Future of the Paper…does it have one? (and the answer is yes!)

A session entitled “The Future of the Paper” at Science Online London 2009 featured a panel made up of an interesting set of people: Lee-Ann Coleman from the British Library, Katharine Barnes, the editor of Nature Protocols, Theo Bloom from PLoS, and Enrico Balli of SISSA Medialab.

The panelists rehearsed many of the issues and problems that have been discussed before, and I won’t re-hash them here. My feeling was that the panelists didn’t offer a radical enough view of the possibilities, but there was an interesting discussion around what a paper is for and where it is going. My own thinking on this has recently been revolving around the importance of a narrative as a human route into the data. It might be argued that if the whole scientific enterprise could be made machine readable then we wouldn’t need papers. Lee-Ann argued, and I agree, that the paper, as the human readable version, will retain an important place. Our scientific model building exploits our particular skill as storytellers, something computers remain extremely poor at.

But this is becoming an increasingly small part of the overall record itself. For a growing band of scientists the paper is only a means of citing a dataset or an idea. We need to widen the idea of what the literature is and what it is made up of. To do this we need to make all of these objects stable and citeable. As Phil Lord pointed out, this isn’t enough, because you also have to make those objects and their citations “count” for career credit. My personal view is that the market in talent will drive the adoption of wider metrics that are essentially variations of PageRank, because other metrics will become increasingly useless, and the market will become increasingly efficient as geographical location becomes gradually less important. But I’m almost certainly over-optimistic about how effective this will be.
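For anyone who hasn’t met it, this is the kind of calculation I have in mind: a minimal sketch of the power iteration at the heart of PageRank-style metrics, run over a toy citation graph invented purely for illustration:

```python
# A minimal power-iteration sketch of a PageRank-style metric over a
# toy citation graph. The graph and the damping factor are illustrative
# assumptions, not data.

citations = {  # paper -> papers it cites
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

damping = 0.85
rank = {p: 1 / len(citations) for p in citations}

for _ in range(50):  # iterate until the ranks stabilise
    new_rank = {}
    for p in citations:
        # Rank flows to p from every paper that cites it, split evenly
        # among each citing paper's references.
        incoming = sum(rank[q] / len(citations[q])
                       for q in citations if p in citations[q])
        new_rank[p] = (1 - damping) / len(citations) + damping * incoming
    rank = new_rank

print(rank)  # "C" scores highest: it is cited by both "A" and "B"
```

The point is not the algorithm itself but that influence propagates through the network, so an object’s weight reflects who links to it rather than which journal it sits in.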

Where I thought the panel didn’t go far enough was in questioning the form of the paper as an object within a journal. Essentially each presentation became “and because there wasn’t a journal for this kind of thing we created/will create a new one”. To me the problem isn’t the paper. As I said above the idea of a narrative document is a useful and important one. The problem is that we keep thinking in terms of journals, as though a pair of covers around a set of paper documents has any relevance in the modern world.

The journal used to play an important role in publication. The publisher still has an important role, but we need to step outside the notion of the journal and present different types of content and objects in the best way for that set of objects. The journal as brand may still have a role to play, although I think that is increasingly going to matter only at the very top of the market. The idea of the journal is both constraining our thinking about how best to publish different types of research object and distorting the way we do and communicate science. Data publication should be optimized for access to and discoverability of data; software publication should make the software available and usable. Neither is particularly helped by putting “papers” in “journals”. They are helped by creating stable, appropriate publication mechanisms, with appropriate review mechanisms, making them citeable and making them valued. When our response to needing to publish things stops being “well, we’d better create a journal for that”, we might just have made it into the 21st century.

But the paper remains the way we tell stories about and around our science. And if we dumb humans are going to keep doing science then it will continue to be an important part of the way we go about that.