Best practice in Science and Coding. Holding up a mirror.

The following is the text from which I spoke today at the .Astronomy conference. I think there is some video available on the .Astronomy UStream account and I also have audio which I will put up somewhere soon.

There’s a funny thing about the science and coding communities. Each seems to think that the other has all the answers. Maybe the grass is just greener… For many years as an experimental scientist I looked jealously at both computational scientists and coders in general. Wouldn’t it be so much less stressful, I naively thought, to have systems that would do what they were told, to be able to easily re-run experiments, and to be able to rely on getting the same answer. Above all, I thought, imagine the convenience of being able to take someone else’s work and easily and quickly apply it to my own problems.

There is something of a mythology around code, and perhaps more so around open source, that it can be relied on, that there is a toolkit out there already for every problem. That there is a Ruby Gem, or an R library for every problem, or most memorably that I can sit on a python command line and just demand antigravity by importing it. Sometimes these things are true, but I’m guessing that everyone has experience of it not being true. Of the python library that looks as though it is using dictionaries but is actually using some bizarre custom data type, the badly documented ruby gem, or the perl…well, just the perl really. The mythology doesn’t quite live up to the hype. Or at least not as often as we might like.

But if we experimental scientists have an over-optimistic view of how repeatable and reliable computational tools are, then computer scientists have an equally unrealistic view of how experimental science works. Greg Wilson, one of the great innovators in computer science education, once said while criticizing the documentation and testing standards of scientific code: “An experimental scientist would never get away with not providing their data, not providing their working. Experimental science is expected to be reproducible from the detailed methodology given…” Data provided, detailed methodology, reproducible: this doesn’t really sound like the experimental science literature that I know.

Ted Pedersen in an article with the wonderful title “Empiricism is not a matter of faith” excoriates computational linguistics by holding it up to what he sees as the much higher standards of reproducibility and detailed description of methodology in experimental science. Yet I’ve never been able to reproduce an experiment based only on a paper in my life.

What is interesting about both of these viewpoints is that we are projecting our very real desire to raise standards against a mythology of someone else’s practice. There seems to be a need to view some other community’s practice as the example rather than finding examples within our own. This is odd because it is precisely the best examples, within each community, that inspire the other. There are experimental scientists who give detailed step-by-step instructions to enable others to repeat their work, who make the details of their protocols available online, and who work within their groups to the highest standards of reproducibility that are possible in the physical world. Equally there are open source libraries and programs with documentation that is both succinct and detailed, that just work when you import them, that are fully tested and come with everything you need to make sure they will work with your systems. Or that break in an informative way, making it clear what you need to do with your own code to get things working.

If we think about what makes science work (effective communication, continual testing and refinement, public criticism of claims and ideas), the things that make up good science, and that mean I had a laptop to write this talk on this morning, that the train and taxi I caught actually ran, and, more seriously, that a significant portion of the people in this room did not in fact die in childhood, then we see a very strong correspondence with good practice in software development. High quality and useful documentation is key to good software libraries. You can be as open source as you like, but if no-one can understand your code they’re not going to use it. Controls, positive and negative, statistical and analytical, are basically unit tests. Critique of any experimental result comes down to asking whether each aspect of the experiment is behaving the way it should, whether each process has been tested to show that a standard input gives the expected output. In a very real sense experiment is an API layer we use to interact with the underlying principles of nature.
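To make the controls-as-unit-tests analogy concrete, here is a minimal sketch in Python (the assay, the sample names, and the expected readings are all invented for illustration, not drawn from any real experiment): a control is simply a known input with a known expected output, checked automatically.

```python
import unittest

def measure_absorbance(sample):
    """Stand-in for an assay: return a reading for a named sample.

    In a real lab this would be an instrument or analysis pipeline;
    here it is a toy lookup table so the example runs on its own.
    """
    known_readings = {"blank": 0.02, "standard_1mM": 0.98, "unknown_A": 0.55}
    return known_readings[sample]

class ControlTests(unittest.TestCase):
    """Positive and negative experimental controls expressed as unit tests."""

    def test_negative_control(self):
        # The blank should give essentially no signal: a negative control.
        self.assertLess(measure_absorbance("blank"), 0.05)

    def test_positive_control(self):
        # A known standard should give the expected signal: a positive control.
        self.assertAlmostEqual(measure_absorbance("standard_1mM"), 1.0, delta=0.05)

if __name__ == "__main__":
    unittest.main()
```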

So this is a nice analogy, but I think we can take this further, in fact I think that code and experiment are actually linked at a deeper level. Both are an instantiation of process that take inputs and generate outputs. These are (to a first approximation – good enough for this discussion) deterministic in any given instance. But they are meaningless without context. Useless without the meaning that documentation and testing provide.

Let me give you an example. Ana Nelson has written a lovely documentation tool called Dexy. This builds on concepts of literate programming in a beautifully elegant and simple way. Take a look for the details, but in essence it enables you to directly incorporate the results of arbitrary running code into your documentation. As you document what your code does you provide examples, parts of the process that are actively running, testing the code as you go. If you break the method you break your documentation. It is also not an accident that thinking about documentation as you build your code helps you to create good modular structures that are easy to understand and therefore both easy to use and easy to communicate. They may be a little more work to write, but the value you are creating by thinking about the documentation means you are motivated to capture it up front. Design by contract and test driven development are tough; documentation driven development can really help drive good process.
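Dexy has its own syntax and pipeline which I won’t try to reproduce from memory here, but a rough sketch of the same idea in plain Python is the doctest: the worked example lives inside the documentation, so changing the method’s behaviour breaks the documented example (the function and the numbers are invented for illustration).

```python
def dilution_volume(stock_conc, final_conc, final_volume):
    """Return the volume of stock solution needed for a dilution.

    The example below is executable documentation: if the function's
    behaviour changes, running the doctest flags the stale docs.

    >>> dilution_volume(stock_conc=10.0, final_conc=1.0, final_volume=50.0)
    5.0
    """
    return final_conc * final_volume / stock_conc

if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=True)
```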

Too often when we write a scientific paper it’s the last part of the process. We fabricate a story that makes sense so that we can fit in the bits we want to. Now there’s nothing wrong with this. Humans are narrative processing systems; we need stories to make sense of the world. But it’s not the whole story. What if, as we collect and capture the events that we ultimately use to tell our story, we also collected and structured the story of what actually happened? Of the experiments that didn’t work, of the statistical spread of good and bad results. There’s a sarcastic term in synthetic organic chemistry, the “American Yield”, in which we imagine that 20 PhD students have been tasked with making a compound and the one who manages to get the highest overall yield gets to be first author. This isn’t actually a particularly useful number. Much more useful to the chemist who wants to use this prep is the spread of values, information that is generally thrown away. It is the difference between actually incorporating the running of the code into the documentation and just showing one log file, cut and pasted from the time it worked well. You lose the information about when it doesn’t work.

Other tools from coding can also provide inspiration. Tools like Hudson for continuous integration. Every time the code base is changed everything gets re-built, dependencies are tested, unit tests run, and a record is kept of what gets broken. If you want to do X you want to use this version of that library. This isn’t a problem. In any large codebase things are going to get broken as changes are made: you change something, see what is broken, then go back and gradually fix those things until you’re ready to commit to the main branch (at which point someone else has broken something…)
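As a very rough sketch of what a server like Hudson automates (the build and test commands below are placeholders, not Hudson’s actual configuration): on every change, rebuild, run the tests, and append the outcome to a running record of what broke.

```python
import datetime
import json
import subprocess
import sys

def run_step(name, command):
    """Run one build step and report whether it succeeded."""
    result = subprocess.run(command, capture_output=True, text=True)
    return {"step": name, "ok": result.returncode == 0,
            "output": (result.stdout + result.stderr)[-500:]}

def ci_run():
    """One toy CI cycle: build, test, and record the outcome with a timestamp."""
    steps = [
        ("build", [sys.executable, "-m", "compileall", "."]),    # placeholder build step
        ("unit tests", [sys.executable, "-m", "pytest", "-q"]),  # placeholder test step
    ]
    report = {"when": datetime.datetime.now().isoformat(),
              "results": [run_step(name, cmd) for name, cmd in steps]}
    with open("ci_report.json", "a") as log:   # the running record of what got broken
        log.write(json.dumps(report) + "\n")
    return report

if __name__ == "__main__":
    print(ci_run())
```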

Science is continuous integration. This is what we do: we make changes, we check what they break, see if the dependencies still hold, and if necessary go back and fix them. This is after all where the interesting science is. Or it would be if we did it properly. David Shotton and others have spoken about the question of “citation creep” or “hedging erosion” [see for example this presentation by Anita de Waard]. This is where something initially reported in one paper as a possibility, or even just a speculation, gets converted into fact by a process of citation. What starts as “…it seems possible that…” can get turned into “…as we know that X causes Y (Bloggs et al, 2009)…” within 18 months or a couple of citations. Scientists are actually not very good at checking their dependencies. And those unchecked dependencies have a tendency to come back and bite us in exactly the same way that a quick patch that wasn’t properly tested can.

Just imagine if we could do this. If every time a new paper was added to the literature we could run a test against the rest. Check all the dependencies… if this isn’t true then all of these other papers are in doubt as well… indeed, if we could unit test papers would it be worth peer reviewing them? There is good evidence that pair programming works, and little evidence that traditional peer review does. What can we learn from this to make the QA processes in science and software development better?
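Nothing like this exists for the literature yet, but a toy sketch of the idea (the papers and citation links are invented) would treat citations as a dependency graph and flag everything downstream when a claim turns out to be weaker than its citers assumed:

```python
# A toy dependency check over the literature: if a claim is downgraded or
# retracted, everything that builds on it is flagged for re-examination.
# The paper identifiers and links are invented for illustration.
citations = {
    "Bloggs2009": [],             # reports "X may cause Y" as a possibility
    "Smith2010": ["Bloggs2009"],  # cites it as "X causes Y"
    "Jones2011": ["Smith2010"],   # builds a model on top of that claim
}

def flag_downstream(paper, graph):
    """Return every paper that directly or indirectly depends on `paper`."""
    flagged = set()
    frontier = [paper]
    while frontier:
        current = frontier.pop()
        for citing, cited in graph.items():
            if current in cited and citing not in flagged:
                flagged.add(citing)
                frontier.append(citing)
    return flagged

# If Bloggs2009 turns out to be only speculation, what needs re-testing?
print(flag_downstream("Bloggs2009", citations))  # {'Smith2010', 'Jones2011'}
```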

I could multiply examples. What would an agile lab look like? What would be needed to make it work? What can successful library development communities tell us about sharing samples, and what can the best data repositories tell us about building the sites for sharing code? How can we apply the lessons of StackOverflow to a new generation of textbooks, and how can we best package up descriptions of experimental protocols in a way that provides the same functionality as sharing an Amazon Machine Image?

Best practice in coding mirrors best practice in science. Documentation, testing, integration are at the core. Best practice is also a long way ahead of common practice in both science and coding. Both, perhaps, are driven increasingly by a celebrity culture that is more dependent on what your outputs look like (and where they get published) than whether anyone uses them. Testing and documentation are hardly glamorous activities.

So what can we do about it? Improving practice is an arduous task. Many people are doing good work here with training programmes, tools, standards development and calls to action online and in the scientific literature. Too many people and organizations for me to call out and none of them getting the credit they deserve.

One of the things I have been involved with is to try and provide a venue, a prestigious venue, where people can present code that has been developed to the highest standards. Open Research Computation, a new Open Access journal from BioMedCentral, will publish papers that describe software for research. Our selection criteria don’t depend on how important the research problem is, but on the availability, documentation, and testing of the code. We expect the examples given in these papers to be reproducible, by which we mean that the software, the source code, the data, and the methodology are provided and described well enough that it is possible to reproduce those examples. By applying high standards, and by working with authors to help them reach those standards, we aim to provide a venue which is both useful and prestigious. Think about it: a journal that contains papers describing the most useful and useable tools and libraries is going to get a few citations and (whisper it) ought to get a pretty good impact factor. I don’t care about impact factors but I know the reality on the ground is that those of you looking for jobs or trying to keep them do need to worry about them.

In the end, the problem with a journal, or with code, or with science is that we want everyone else to provide the best documentation, the best tested code and procedures, but it’s hard to justify doing it ourselves. I mean I just need something that works; yesterday. I don’t have time to write the tests in advance, think about the architecture, re-read all of that literature to check what it really said. Tools that make this easier will help, tools like Dexy and Hudson, or lab notebooks that capture what we’re doing and what we are creating, rather than what we said we would do, or what we imagine we did in retrospect.

But it’s motivation that is key here. How do you motivate people to do the work up front? You can tell them that they have to, of course, but really these things work best when people want to make the effort. The rewards for making your work re-usable can be enormous, but they are usually further down the road than the moment where you make the choice not to bother. And those rewards are less important to most people than getting the Nature paper, or getting mentioned in Tim O’Reilly’s tweet stream.

It has to be clear that making things re-usable is the highest contribution that you can make, and it has to be rewarded accordingly. I don’t even really care what forms of re-use are counted: re-use in research, re-use in education, in commerce, in industry, in policy development. ORC is deliberately – very deliberately – intended to hack the impact factor system by featuring highly re-usable tools that will gain lots of citations. We need more of these hacks.

I think this shift is occurring. It’s not widely known just how close UK science funding came to being slashed in the comprehensive spending review. That it wasn’t was due to a highly coherent and well organized campaign that convinced ministers and treasury that the re-use of UK research outputs generated enormous economic, social, and educational value for the country and indeed globally. That the Sloan Digital Sky Survey was available in a form that could be re-used to support the development of something like Galaxy Zoo played a part in this. The headlong rush of governments worldwide to release their data is a massive effort to realize the potential value of the re-use of that data.

This change in focus is coming. It will no longer be enough in science to just publish. As David Willetts said in [answer to a question] in his first policy speech, “I’m very much in favour of peer review, but I worry when the only form of review is for journals”. Government wants evidence of wider use. They call it impact, but it’s basically re-use. The policy changes are coming: the data sharing policies, the public engagement policies, the impact assessments. Just showing outputs will no longer be enough; showing that you’ve configured those outputs so that the potential for re-use is maximized will be an assumption of receiving funding.

William Gibson said the future is already here, it’s just unevenly distributed. They Might Be Giants asked, not quite in response, “but where’s my jetpack?” The jetpacks, the tools, are around us and being developed if you know where to look. Best practice is unevenly distributed both in science and in software development, but it’s out there if you want to go looking. The motivation to adopt it? The world around us is changing. The expectations of the people who fund us are changing. Best practices in code and in science have an awful lot in common. If you can master one you will have the tools to help you with the other. And if you have both then you’ll be well positioned to ride the wave of change as it sweeps by.



A return to “bursty work”


What seems like an age ago a group of us discussed a different way of doing scientific research. One partly inspired by the modular building blocks approach of some of the best open source software projects but also by a view that there were tremendous efficiency gains to be found in enabling specialisation of researchers, groups, even institutes, while encouraging a shared technical and social infrastructure that would help people identify the right partners for the very specific tasks that they needed doing today.

“Bursty work” is a term first used by Chris Messina but introduced to the online community of scientists by Deepak Singh. At the time it seemed obvious that, with enough human and financial capital, a loose network of specialist groups could do much better science, and arguably achieve much more effective exploitation of that science, than isolated groups perpetually re-inventing the wheel.

The problem of course is that science funding is not configured that way, a problem that is the bane of any core-facility manager’s existence. Maintaining a permanent expert staff via a hand-to-mouth existence of short-term grants is tough. Some succeed but more probably fail, and there is very little glory in this approach. Once again it is prestige that gets promotion, not effective and efficient use of resources.

But the world is changing. A few weeks ago I got a query from a commercial partner interested in whether I could solve a specific problem. This is a small “virtual company” that aims to target the small scale, but potentially high value, innovations that larger players don’t have the flexibility to handle. Everything is outsourced, samples prepared and passed from contractor to contractor. It turns out I think we can solve their problem, and it will be exciting to see this work applied. What is even more gratifying is that the company came across this work in an Open Access journal, which made it easier both to assess how useful it was and to decide whether to get in touch. In the words of my contact:

“The fact that your work was in an open access journal certainly made it easier for me to access. I guess the same google search would have found it in a different journal, but it might have required a subscription for access. In that case I would have used the free info available (corresponding authors, university addresses etc) to try and get in touch based on the abstract.”

The same problems of course remain. How do I reasonably cost this work? What is the value of being involved versus just being a contractor? And of course, where will I find the time, or the pair of hands, to get the work done? People with the right expertise don’t grow on trees, and it’s virtually impossible to get people on short contracts at the moment. Again, in the words of our collaborator:

“Bursty work” sounds a little like how [our company] is trying to operate. One problem is moving from an investment environment where investors invest in companies to one where they invest in projects. Has any work been done to identify investors who like the idea of bursty work?

Nonetheless, it’s exciting to me that some elements of what was beginning to seem like a pipe dream are coming to pass. It takes time for the world to catch up, but where there is a demand for innovation, and an effective market, the opportunities are there for the people who can make them work.

[It won’t escape anyone’s notice that I’ve given no details of either the project or the company. We are doing this under an NDA and as this is someone else’s project I’m not going to be difficult about it. We make progress one small step at a time]


Open Source, Open Research and Open Review


The submissions for Open Research Computation (which I blogged about a month or so back) are starting to come in and we hope to be moving towards getting those initial papers out soon. One of the things we want the journal to do is bring more of the transparency and open critique that characterises the best Open Source Software development processes into the scholarly peer review process. The journal will have an open review process in which reviews and the versions of the manuscript they refer to will be available.

One paper’s authors however have taken matters into their own hands and thrown the doors completely open. With agreement from the editorial board, Michael Barton and Hazel Barton have asked the community on the BioStar site, a bioinformatics-focussed member of the StackExchange family of Q&A websites, how the paper and software could be improved. They have published a preprint of the paper and the source code was obviously already available on Github. You can see more at Michael’s blog post. We will run a conventional peer review process in parallel and the final decision on whether the paper is ready to publish will rest with the ORC editors, but we will take into account the comments on BioStar and of course the authors will be free to use those comments to improve their software and documentation.

 

This kind of approach goes a long way towards dealing with the criticisms I often level at conventional peer review processes. By making the process open there is the opportunity for any interested party to offer constructive critique and help to improve the code and the paper. By not restricting commentary to a small number of people we stand a better chance of getting all the appropriate points of view represented. And by presenting all of that commentary and critique along with the authors’ responses (hopefully; we may have some niggling licence issues with copying content from BioStar’s CC-BY-SA to BioMedCentral’s CC-BY) we can offer a clear view of how effective the review process was and what the final decisions were based on. I’ve talked about what we can do to improve peer review. Michael and Hazel have taken action to make it happen. You can be a part of it.


Reforming Peer Review. What are the practical steps?


The text of this was written before I saw Richard Poynder’s recent piece on PLoS ONE and the responses to that. Nothing in those really changes the views I express here but this text is not a direct response to those pieces.

So my previous post on peer review hit a nerve. Actually all of my posts on peer review hit a nerve and create massive traffic spikes and I’m still really unsure why. The strength of feeling around peer review seems out of all proportion to both its importance and indeed the extent to which people understand how it works in practice across different disciplines. Nonetheless it is an important and serious issue and one that deserves serious consideration, both blue skies thinking and applied as it were. And it is the latter I will try to do here.

Let me start with a statement. Peer review at its core is what makes science work. There are essentially two philosophical approaches that can be used to explain why the laptop I’m using works, why I didn’t die as a child of infection, and how we are capable of communication across the globe. One of these is the testing of our working models of the universe against the universe itself. If your theory of engines produces an engine that doesn’t work then it is probable there is something wrong with your theory.

The second is that by exposing our models and ideas to the harshest possible criticism of our peers we can stress them to see what holds up to the best logical analysis available. The motto of the Royal Society, “Nullius in verba”, is generally loosely translated as “take no-one’s word for it”. The central idea of the Invisible College, the group that became the Royal Society, was that they would present their experiments and their explanations to each other, relying on the criticism of their peers to avoid the risk of fooling themselves. This combined both philosophical approaches: seeing the apparatus for yourself, testing the machinery against the world, and then testing the possible explanations for its behaviour against the evidence and theory available. The community was small but this was in a real sense post-publication peer review; testing and critique was done in the presence of the whole community.

The systems employed by a few tens of wealthy men do not scale to today’s global scientific enterprise and the community has developed different systems to manage this. I won’t re-hash my objections to those systems except to note what I hope should be three fairly uncontroversial issues. Firstly, pre-publication peer review as the only formal process of review runs a severe risk of not finding the correct diversity and expertise of reviewers to identify technical issues. The degree of that risk is more contentious but I don’t see any need to multiply recent examples that illustrate that it is real. Second, because we have no system of formalising or tracking post-publication peer review there is no means either to encourage high quality review after publication or to track the current status or context of published work beyond the binary possibility of retraction. Third, peer review has a significant financial cost (again the actual level is somewhat contentious but significant seems fair) and we should address whether this money is being used as efficiently as it could be.

It is entirely possible to imagine utopian schemes in which these problems, and all the other problems I have raised are solved. I have been guilty of proposing a few myself in my time. These will generally involve taking a successful system from some other community or process and imagining that it can be dropped wholesale on the research community. These approaches don’t work and I don’t propose to explore them here in detail, except as ways to provoke and raise ideas.

Arguments and fears

The prospect of radical change to our current process of peer review provokes very strong and largely negative responses. Most of these are based on fears of what would happen if the protection that our current pre-publication peer review system offers us is ripped away. My personal view is that these protections are largely illusory but a) I could well be wrong and b) that doesn’t mean we shouldn’t treat these fears seriously. They are, after all, a barrier to effective change, and if we can neutralize the fears with evidence then we are also making a case for change, and in most cases that evidence will also offer us guidance on the best specific routes for change.

These fears broadly fall into two classes. The first is the classic information overload problem. Researchers already have too much to track and read. How can they be expected to deal with the apparent flood of additional information? One answer to this is to ask how much more information would be released. This is difficult to answer. Probably somewhere between 50 and 95% of all papers that are submitted somewhere do eventually get published [1, 2 (pdf), 3, 4], suggesting that the total volume would not increase radically. However it is certainly arguable that reducing barriers would increase this. Different barriers, such as cost, could be introduced, but since my position is that we need to reduce these barriers to minimise the opportunity cost inherent in not making research outputs public I wouldn’t argue for that. However we could imagine a world in which small pieces of research output get published for near zero cost but turning those pieces into an argument, something that would look a lot like the current formally published paper, would cost more either in terms of commitment or financial costs.

An alternative argument, and one I have made in the past is that our discovery tools are already broken and part of the reason for that is there is not enough of an information substrate to build better ones. This argument holds that by publishing more we can make discovery tools better and actually solve the overload problem by bringing the right information to each user as and when they need it. But while I make this argument and believe it, it is conceptually very difficult for most researchers to grasp. I hesitate to suggest that this has something to do with the best data scientists, the people who could solve this problem, eschewing science for the more interesting and financially rewarding worlds of Amazon, Google, and Facebook.

The second broad class of argument against change is that the currently validated and recognized literature will be flooded with rubbish. In particular a common, and strongly held, view is that the wider community will no longer be able to rely on the quality mark that the peer reviewed literature provides in making important health, environmental, and policy decisions. Putting aside the question of whether peer review does in fact achieve an increase in accuracy or reliability, there is a serious issue here to be dealt with in how the ongoing results of scientific research are presented to the public.

There are real and serious risks in making public the results of research into medicine, public health, and the environment. Equally, treating the wider community as idiots is also dangerous. The responsible media and other interested members of the community, who can’t always be expected to delve into, or be equipped to critique, all of the detail of any specific claim, need some clear mark or statement of the level of confidence the research community has in a finding or claim. Regardless of what we do the irresponsible media will just make stuff up anyway, so it’s not clear to me that there is much that can be done there, but responsible reporters on science benefit from being able to reference and rely on the quality mark that peer review brings. It gives them an (at least from their perspective) objective criterion on which to base the value of a story.

It isn’t of course just the great unwashed that appreciate a quality control process. For any researcher moving out of their central area of expertise to look at a new area there is a bewildering quantity of contradictory statements to parse. How much worse would this be without the validation of peer review? How would the researcher know who to trust?

It is my belief that the emotional response to criticism of traditional pre-publication peer review is tightly connected to this question of quality, and its relation to the mainstream media. Peer review is what makes us different. It is why we have a special relationship with reporters, and by proxy the wider community, who can trust us because of their reliance on the rigour of our quality marks. Attacks on peer review are perceived as an attack at the centre of what makes the research community special.

The problem of course is that the trust has all but evaporated. Scandals, brought on in part by a reliance on the meaning and value of peer review, have taken away a large proportion of the credibility that was there. Nonetheless, there remains a clear need for systems that provide some measure of the reliability of scientific findings. At one level, this is simple. We just wait ten years or so to see how it pans out. However, there is a real tension between the needs of reporters to get there first and be timely and the impossibility of providing absolute certainty around research findings.

Equally applying findings in the real world will often mean moving before things are settled. Delays in applying the results of medical research can kill people just as much as rushing in ahead of the evidence can. There is always a choice to be made as to when the evidence is strong enough and the downside risks low enough for research results to be applied. These are not easy decisions and my own view is that we do the wider community and ourselves a disservice by pretending that a single binary criterion with a single, largely hidden, process is good enough to universally make that decision.

Confidence is always a moving target and will continue to be. That is the nature of science. However an effective science communication system will provide some guide to the current level of confidence in specific claims.  In the longer term there is a need to re-negotiate the understanding around confidence between the responsible media and the research community. In the shorter term we need to be clearer in communicating levels of confidence and risk, something which is in any case a broader issue for the whole community.

Charting a way forward

So in practical terms what are the routes forward? There is a rhetorical technique of persuasion that uses a three-part structure in arguing for change. Essentially this is to lay out the argument in three parts: firstly that nothing (important) will change, second that there are opportunities for improvement that we can take, and third that everything will change. This approach is supposed to appeal to three types of person: those who are worried about the risks of change, those in the middle who can see some value in change but are not excited by it, and finally those who are excited by the possibilities of radical change. However, beyond being a device, this structure suits the issues here: there are significant risks in change, there are widely accepted problems with the current system, and there is the possibility for small scale structural changes to allow an evolution to a situation where radical change can occur if momentum builds behind it.

Nothing need change

At the core of concerns around changing peer review is the issue of validation. “Peer reviewed” is a strong brand that has good currency. It stands for a process that is widely respected and, at least broadly speaking, held to be understood by government and the media. In an environment where mis-reporting of medical or environmental research can easily lead to lost lives this element of validation and certification is critical. There is no need in any of the systems I will propose for this function to go away. Indeed we aim to strengthen it. Nor is there a need to abandon the situation where specific publication venues are marked as having been peer reviewed and only contain material that has been through a defined review process. They will continue to stand or fall on their quality and the value for money that they offer.

The key to managing the changes imposed on science communication by the rise of the web, while maintaining the trust and value of traditional review systems, is to strengthen and clarify the certification and validation provided by peer review and to retain a set of specific publication venues that guarantee those standards and procedures of review. These venues, speaking as they will to both domain-specific and more general scientific audiences, as well as to the wider community, will focus on stories and ideas. They will, in fact, look very like our current journals and have contents that look the same as our current papers.

These journals will have a defined and transparent review process with objective standards and reasonable timeframes. This will necessarily involve obtaining opinions from a relatively small number of people and a final decision made by a central editor, who might be a practising researcher or a professional editor. In short, all the value that is created by the current system should and can be retained.

Room for improvement

If we are to strengthen the validation process of peer review we need to address a number of issues. The first of these is transparency. A core problem with peer review is that it is in many cases not clear what process was followed. How many external referees were used? Did they have substantive criticisms, and did disagreements remain? Did the editors over-rule the referees or follow their recommendation? Is this section of the journal peer reviewed at all?

Transparency is key. Along with providing confidence to readers such transparency could support quantitative quality control and would provide the data that would help us to identify where peer review succeeds and where it is failing. Data that we desperately need so we can move beyond assertions and anecdote that characterise the current debate.

A number of publishers have experimented with open peer review processes. While these remain largely experiments a number of journals, particularly those in medical fields, will publish all the revisions of a paper along with the review reports at each stage. For those who wish to know whether their concerns were covered in the peer review process this is a great help.

Transparency can also support an effective post publication review process. Post-publication review has occurred at ArXiv for many years where a pre-print will often be the subject of informal discussion and comment before it is submitted for formal review at a peer reviewed journal. However it could be argued that the lack of transparency that results from this review happening informally makes it harder to identify the quality papers in the ArXiv.

A more formal process of publication, then validation and certification has been adopted by Atmospheric Chemistry and Physics and other Copernicus publications. Here the submitted manuscript is published in ACP Discussions (after a “sanity check” review), and then subject to peer review, both traditional by selected referees and in an open forum. If the paper is accepted it is published, along with links to the original submission and commentary in the main journal. The validation provided by review is retained while providing enhanced transparency.

In addition this approach addresses the concerns of delays in publication, whether due to malicious referees or simply the mechanics of the process, and the opportunity costs for further research that they incur. By publishing first, in a clearly non-certificated form, the material is available for those who might find it of value, but in a form that is clearly marked as non-validated: use at your own risk. This is made clear by retaining the traditional journal, but adding to it at the front end. This kind of approach can even support the traditional system of tiered journals, with the papers and reviews trickling down from the top forming a complete record of which journal rejected which papers in which form.

The objection to this style of approach is that it doesn’t support the validation needs of biomedical and chemical scientists: to be “first to publish in a peer reviewed journal”. There is a significant cultural distinction between the physical sciences that use the ArXiv and the biosciences in particular, best illustrated by a story that I think I first heard from Michael Nielsen.

A biologist is talking to a physicist and says, “I don’t understand how you can put your work in the ArXiv as a preprint. What if someone comes along and takes your results and then publishes them before you get your work to a peer reviewed journal?”

The physicist thinks a little about this before responding, “I don’t understand how you can not put your work in the ArXiv as a preprint. What if someone comes along and takes your results and then publishes them before you get your work to a peer reviewed journal?”

There is a cultural gulf here that can not be easily jumped. However this is happening by stealth anyway with a variety of journals that have subtle differences in the peer review process that are not always clearly and explicitly surfaced. It is interesting in this context that PLoS ONE and now its clones are rapidly moving to dominate the publishing landscape despite a storm of criticism around the (often misunderstood) peer review model. Even in the top tier it can be unclear whether particular classes of article are peer reviewed (see for example these comments [1, 2, 3] on this blog post from Neil Saunders). The two orthogonal concepts of “peer reviewed” and “formally published” appear to be drifting apart from what was an easy (if always somewhat lazy) assumption that they are equivalent. Priority will continue to be established by publication. The question of what kind of publication will “count” is likely to continue to shift but how fast and in what disciplines remains a big question.

This shift can already be seen in the application of DOIs to an increasingly diverse set of research outputs. The apparent desire to apply DOIs stems from the idea that a published object is “real” if it has a DOI. This sense of solidness seems to arise from the confidence that having a DOI makes an object citeable. The same confidence does not apparently apply to URLs or other identifiers, even when those URLs come from stable entities such as Institutional Repositories or recognised Data Services.

This largely unremarked shift may lead to a situation where a significant proportion of the reference list of a peer reviewed paper includes non-peer reviewed work. Again the issue of transparency arises: how should this be marked? But equally there will be some elements that are not worthy of peer review, or perhaps only merit automated validation, such as some types of dataset. Is every PDB or Genbank entry “peer reviewed”? Not in the commonly meant sense, but is it validated? Yes. Is an audit trail required? Yes.

A system of transparent publication mechanisms for the wide range of research objects we generate today, along with clear but orthogonal marking of whether and how each of those objects has been reviewed, provides real opportunities to encourage rapid publication, to enable transparent and fair review, and to provide a framework for communicating effectively the level of confidence the wider community has in a particular claim.

These new publication mechanisms and the increasing diversity of published research outputs are occurring anyway. All I am really arguing for is a recognition and acceptance that this is happening at different rates and in different fields. The evidence from ArXiv, ACP, and to a lesser extent conferences and online notebooks is that the sky will not fall in as long as there is clarity as to how and whether review has been carried out. The key therefore is much more transparent systems for marking what is reviewed, and what is not, and how review has been carried out.

Radical Changes

A system that accepts that there is more than one version of a particular communication opens the world up to radical change. Re-publication following (further) review becomes possible, as do updates and much more sophisticated retractions. Papers where particular parts are questioned become possible as review becomes more flexible, and disagreement, and the process of reaching agreement, no longer need to be binary issues.

Reviewing different aspects of a communication leads in turn to the feasibility of publishing different parts for review at different times. Re-aggregating different sets of evidence and analysis to provide a dissenting view becomes feasible. The possibilities of publishing and validating portions of a whole story offer great opportunities for increased efficiency and for much more public engagement with, and information about, the current version of the story. Much is made of poor media reporting of “X cures/causes cancer” style stories, but how credible would these be if the communication in question was updated to make it clear that the media coverage was overblown or just plain wrong? Maybe this wouldn’t make a huge difference, but at some level what more can we be asked to do?

Above all, the blurring of the lines between what is published and what is just available, and an increasing need to be transparent about what has been reviewed and how, will create a market for these services. That market is ultimately what will help both to drive down the costs of scholarly communication and to identify where and how review actually does add value. Whole classes of publication will cease to be reviewed at all as the (lack of) value of this becomes clear. Equally, high quality review can be re-focussed where it is needed, including the retrospective or even continuous review of important published material. Smaller ecosystems will naturally grow up where networks of researchers have an understanding of how much they trust each other’s results.

The cultural chasm between the pre-review publication culture of the ArXiv’s users and the chemical and biomedical sciences will not be closed tomorrow, but as the pressures of government demands for rapid exploitation rise, and the possibilities of losing opportunities by failing to communicate grow, there will be a gradual move towards more rapid publication mechanisms. In parallel, as the pressure to quantitatively demonstrate efficient and effective use of government funding rises, opportunities will arise for services that create low-barrier publication mechanisms. If the case can be made for measurement of re-use then this pressure has the potential to lead to effective communication, rather than just dumping of the research record.

Conclusion

Above all other things the major trend I see is the breakage of the direct link between publication and peer review. Formal publication in the print based world required a filtering mechanism to be financially viable. The web removes that requirement, but not the requirement of quality marking and control. The ArXiv, PLoS ONE and other experiments with simplifying peer review processes, Institutional Repositories, and other data repositories, the expanding use of DOIs, and the explosion of freely available research content and commentary on the web are all signs of a move towards lower barriers in publishing a much more diverse range of research outputs.

None of this removes the need for quality assurance. Indeed it is precisely this lowering of barriers that has brought such a strong focus on the weaknesses of our current review processes. We need to take the best of both the branding and the practice of these processes and adapt them, or we will lose the confidence of both our own community and the wider public. Close examination of the strengths and weaknesses and serious evidence gathering is required to adapt and evolve the current systems for the future. Transparency, even radical transparency, of review processes may well be something that is no longer a choice for us to make. But if we move in this direction now, seriously and with real intent, then we may as a research community be able to retain control.

The status quo is not an option unless we choose to abandon the web entirely as a place for research communication and leave it for the fringe elements and the loons. This to me is a deeply retrograde step. Rather, we should take our standards and our discourse, and the best quality control we can bring to bear out into the wider world. Science benefits from a diversity of views and backgrounds. That is the whole point of peer review. The members of the Invisible College knew that they might mislead themselves and took the then radical approach of seeking out dissenting and critical views. We need to acknowledge our weaknesses, celebrate our strengths and above all state clearly where we are unsure. It might be bad politics, but it’s good science.


Tweeting the lab


I’ve been interested for some time in capturing information and the context in which that information is created in the lab. The question of how to build an efficient and useable laboratory recording system is fundamentally one of how much information is necessary to record and how much of that can be recorded while bothering the researcher themselves as little as possible.

The Beyond the PDF mailing list has, since the meeting a few weeks ago, been partly focused on attempts to analyse human-written text and to annotate it with structured assertions, or nanopublications. This is also the approach that many Electronic Lab Notebook systems attempt to take, capturing an electronic version of the paper notebook and in some cases trying to capture all the information in it in a structured form. I can’t help but feel that, while this is important, it’s almost precisely backwards. By definition any summary of a written text will throw away information; the only question is how much. Rather than trying to capture arbitrary and complex assertions in written text, it seems better to me to ask what simple vocabulary can be provided that can express enough of what people want to say to be useful.

In classic 80/20 style we ask what is useful enough to interest researchers, how much would we lose, and what would that be? This neatly sidesteps the questions of truth (though not of likelihood) and context that are the real challenge of structuring human authored text via annotation because the limited vocabulary and the collection of structured statements made provides an explicit context.

This kind of approach turns out to work quite well in the lab. In our blog-based notebook we use a one item, one post approach where every research artifact gets its own URL. Both the verbs, the procedures, and the nouns, the data and materials, all have a unique identifier. The relationships between verbs and nouns are provided by simple links. Thus the structured vocabulary of the lab notebook is [Material] was input to [Process] which generated [Data] (where Material and Data can be interchanged depending on the process). This is not so much 80/20 as 30/70, but even in this very basic form it can be quite useful. Along with records of who did something and when, and some basic tagging, this actually makes quite an effective lab notebook system.
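In our notebook this is all done with blog posts and ordinary links; a minimal sketch of the same data model in code (the URLs and names below are placeholders, not our actual system) looks something like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    """One item, one post: every research artifact gets its own identifier (URL)."""
    url: str
    kind: str                                       # "material", "process", or "data"
    title: str
    links: List[str] = field(default_factory=list)  # relationships are just links

# The structured vocabulary: [Material] was input to [Process] which generated [Data].
sample = Item("http://notebook.example/post/101", "material", "Protein sample A")
assay = Item("http://notebook.example/post/102", "process", "SDS-PAGE run",
             links=[sample.url])       # material was input to process
gel_image = Item("http://notebook.example/post/103", "data", "Gel image",
                 links=[assay.url])    # process generated data

for item in (sample, assay, gel_image):
    print(item.kind, item.url, "->", item.links)
```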

The question is, how can we move beyond this to create a record which is rich enough to provide a real step up, but doesn’t bother the user any more than is necessary and justified by the extra functionality that they’re getting? In fact, ideally we’d capture a richer and more useful record while bothering the user less. A part of the solution lies in the work that Jeremy Frey’s group have done with blogging instruments. By having an instrument create a record of its state, inputs and outputs, the user is freed to focus on what they’re doing, and only needs to link into that record when they start to do their analysis.

Another route is the approach that Peter Murray-Rust’s group are exploring with interactive lab equipment, particularly a fume cupboard that can record spoken instructions and comments and track where objects are, monitoring an entire process in detail. The challenge in this approach lies in translating that information into something that is easy to use downstream. Audio and video remain difficult to search and work with. Speech recognition isn’t great for formatting and clear presentation.

In the spirit of a limited vocabulary, another approach is to use a lightweight infrastructure to record short comments, either structured or free text. A bakery in London has a switch on its wall which can be turned to one of a small number of baked goods as a batch goes into the oven. This is connected to a very basic twitter client that then tells the world that there are fresh-baked baguettes coming in about twenty minutes. Because this output data is structured it would in principle be possible to track the different baking times and preferences for muffins vs doughnuts over the day and over the year.

The lab is slightly more complex than a bakery. Different processes would take different inputs. Our hypothetical structured vocabulary would need to enable the construction of sentences with subjects, predicates, and objects, but as we’ve learnt with the lab notebook, even the simple predicates “is input to” and “is output of” can be very useful. “I am doing X”, where X is one of a relatively small set of options, provides real-time bounds on when important events happened. A little more sophistication could go a long way. A very simple twitter client that provided a relatively small range of structured statements could be very useful. These statements could be processed downstream into a more directly useable record.
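A minimal sketch of such a client (the vocabulary is invented and it prints rather than posts; a real client would authenticate against the Twitter API) might look like:

```python
import datetime

# A deliberately small controlled vocabulary of things worth saying from the bench.
VERBS = {"started", "finished", "is input to", "is output of"}

def lab_statement(subject, verb, obj=None):
    """Build one timestamped, structured statement; reject verbs outside the vocabulary."""
    if verb not in VERBS:
        raise ValueError(f"unknown verb: {verb}")
    text = f"{subject} {verb}" + (f" {obj}" if obj else "")
    return {"when": datetime.datetime.now().isoformat(),
            "subject": subject, "verb": verb, "object": obj,
            "tweet": f"{text} #tweetthelab"}

# A real client would post these; here we just print the structured records.
print(lab_statement("sample-A23", "is input to", "PCR-run-7"))
print(lab_statement("PCR-run-7", "finished"))
```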

Last week I recorded the steps that I carried out in the lab via the hashtag #tweetthelab. These free-text tweets make a serviceable, if not perfect, record of the day’s work. What is missing is a URI for each sample and output data file, and links between the inputs, the processes, and the outputs. But this wouldn’t be too hard to generate, particularly if instruments themselves were actually blogging or tweeting their outputs. A simple client on a tablet, phone, or locally placed computer would make it easy both to capture and to structure the lab record. There is still a need for free-text comments, and any structured description will not be able to capture everything, but the potential for capturing a lot of the detail of what is happening in a lab, as it happens, is significant. And it’s the detail that often isn’t recorded terribly well: the little bits and pieces of exactly when something was done, what the balance really read, which particular bottle of chemical was picked up.

Twitter is often derided as trivial, as lowering the barrier to shouting banal fragments to the world, but in the lab we need tools that will help us collect, aggregate and structure exactly those banal pieces so that we have them when we need them. Add a little bit of structure to that, but not too much, and we could have a winner. Starting from human discourse always seemed too hard for me, but starting with identifying the simplest things we can say that are also useful to the scientist on the ground seems like a viable route forward.


What is it with researchers and peer review? or; Why misquoting Churchill does not an argument make


I’ve been meaning for a while to write something about peer review, pre and post publication, and the attachment of the research community to traditional approaches. A news article in Nature, though, in which I am quoted, seems to have really struck a nerve with many people and has prompted me to actually write something. The context in which the quote is presented doesn’t really capture what I meant, but I stand by the statement in isolation:

“It makes much more sense in fact to publish everything and filter after the fact” – quoted in Mandavilli (2011) “Trial by Twitter” Nature 469, 286-287

I think there are two important things to tease out here, firstly a critical analysis of the problems and merits of peer review, and secondly a close look at how it could be improved, modified, or replaced. I think these merit separate posts so I’ll start here with the problems in our traditional approach.

One thing that has really started to puzzle me is how un-scientific scientists are about the practice of science. In their own domain researchers will tear arguments to pieces, critically analyse each piece for flaws, and argue incessantly over the data, the methodology, the analysis, and the conclusions that are being put forward, usually with an open mind and a positive attitude.

But shift their attention onto the process of research and all that goes out the window. Personal anecdote, gut feelings, half-baked calculations and sweeping statements suddenly become de rigueur.

Let me pick a toy example. Whenever an article appears about peer review it seems inevitably to begin or end with someone raising Churchill; something along the lines of:

“It’s exactly like what’s said about democracy,” he adds. “The peer-review process isn’t very good — but there really isn’t anything that’s better.” ibid

Now let’s examine this through the lens of scientific argument. Firstly it’s an appeal to authority, not something we’re supposed to respect in science, and in any case it’s a kind of transplanted authority. Churchill never said anything about peer review, but even if he did, why should we care? Secondly it is a misquotation. In science we expect accurate citation. If we actually look at the Churchill quote we see:

“Many forms of Government have been tried and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.” – sourced from Wikiquotes, which cites: The Official Report, House of Commons (5th Series), 11 November 1947, vol. 444, cc. 206–07

The key here is “…apart from all those other[s…] tried from time to time…”. Churchill was arguing from historical evidence. The trouble is that when it comes to peer review we a) have never really tried any other system, so the quote really isn’t applicable (actually it’s worse than that: other systems have been used, mostly on a small scale, and they actually seem to work pretty well, but that’s for the next post) and b) what evidence we do have shows almost universally that peer review is a waste of time and resources and that it really doesn’t achieve very much at all. It doesn’t effectively guarantee accuracy, it fails dismally at predicting importance, and it’s not really supporting any effective filtering. If I appeal to authority I’ll go for one with some domain credibility, let’s say the Cochrane Reviews, which conclude the summary of a study of peer review with “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research.” Or perhaps Richard Smith, a previous editor of the British Medical Journal, who describes the quite terrifying ineffectiveness of referees in finding errors deliberately inserted into a paper. Smith’s article is a good entry point into the relevant literature, as is a Research Information Network study that notably doesn’t address the issue of whether peer review of papers helps to maintain accuracy, despite being broadly supportive of the use of peer review to award grants.

Now does this matter? I mean in some ways people seem to feel we’re bumbling along well enough. Why change things? Well consider the following scenario.

The UK government gives £3B to a company with no real strings attached, except the expectation that it will report back. At the end of the year the company says “we’ve done a lot of work, but we know you’re worried about us telling you more than you can cope with, and you won’t understand most of it, so we’ve filtered it for you.”

A reporter digs a bit into this and is interested in these filters. The interview proceeds as follows:

“So you’ll be making the whole record available as well as the stuff that you’ve said is most important presumably? I mean that’s easy to do?”

“No we’d be worried about people getting the wrong idea so we’ve kept all of that hidden from them.”

“OK, but you’ll be transparent about the filtering at least?”

“No, we’ll decide behind closed doors with three of our employees and someone to coordinate the decision. We can’t really provide any information on who is making the decisions about what has been filtered out. Our employees are worried that their colleagues might get upset about their opinions, so we have to keep it secret who looked at what.”

“Aaaalright so how much does this filtering cost?”

“We’re not too sure, but we think between £180M and £270M a year.”

“And that comes out of the £3B?”

“No, we bill that separately to another government department.”

“And these filters, you are sure that they work?”

“Well we’ve done a bit of work on that, but no-one in the company is especially interested in the results.”

“But what are the results?”

“Well, we can’t show any evidence that the filtering is any good for deciding what is important or whether it’s accurate, but our employees are very attached to it. I can get some of them in, they’ll tell you lots of stories about how it’s worked for them…”

I mean seriously? They’d be ripped to shreds in moments. What if this happened within government? The media would have a field day. What makes us as a research community any different? And how are you going to explain that difference to taxpayers? Let’s look at the evidence, see where the problems are, see where the good things are, and let’s start taking our responsibility to the public purse seriously. Let’s abandon the gut feelings and anecdotes and actually start applying some scientific thinking to the processes we use to do and communicate science. After all, if science works, then we can’t lose, can we?

Now simply abandoning the current system tomorrow is untenable and impractical. And there are a range of perfectly valid concerns that can be raised about moving to different systems. These are worth looking at closely and we need to consider carefully what kinds of systems and what kinds of transition might work. But that is a job for a second post.



Hoist by my own petard: How to reduce your impact with restrictive licences

[Image: No Television (via Wikipedia)]

I was greatly honoured to be asked to speak at the symposium held on Monday to recognize Peter Murray-Rust’s contribution to scholarly communication. The lineup was spectacular, the talks insightful and probing, and the discussion serious, no longer trapped in the naive yes/no discussions of openness and machine readability but moving on into detail, edge cases, problems and issues.

For my own talk I wanted to do something different to what I’ve been doing in recent talks. Following the example of Deepak Singh, John Wilbanks and others I’ve developed what seems to be a pretty effective way of doing an advocacy talk, involving lots of slides and big images, with few words, going by at a fast rate. Recently I did 118 slides in 20 minutes. The talk for Peter’s symposium required something different, so I eschewed slides and just spoke for 40 minutes, wanting to explore the issues deeply rather than skate over the surface in the way the rapid-fire approach tends to do.

The talk was, I think, reasonably well received and provoked some interesting (and heated) discussion. I’ve put the draft text I was working from up on an Etherpad. However, due to my own stupidity, the talk was neither livestreamed nor recorded. In a discussion leading up to the talk I was asked whether I wanted to put up a pretty picture as a backdrop, and I thought it would be good to put up the licensing slide that I use in all of my talks to show that livestreaming, twittering, etc. are fine and to encourage people to do them. The trouble is that I navigated to the Slideshare deck that has that slide and just hit full screen without thinking. What the audience therefore saw was the first slide, which looks like this.

[Image: A restrictive talk licence prohibiting live streaming, tweeting, etc.]

I simply didn’t notice as I was looking the other way. The response to this was both instructive and interesting. The first thing that happened was that, as soon as the people running the (amazingly effective, given the resources they had) livestream and recording saw the slide, they shut everything down. In a sense this is really positive; it shows that people respect the requests of the speaker by default.

Across the audience people didn’t tweet, and in a couple of cases they even deleted photographs that they had taken. Again, the respect for the request people thought I was making was solid. Even in an audience full of radicals and open geeks no-one questioned it. I’m slightly gobsmacked, in fact, that no-one shouted at me to ask what the hell I thought I was doing. Some thought I was being ironic, which I have to say would have been too clever by half. But again it shows that if you ask, people do for the most part respect the request.

Given that the talk was about research impact, and how open approaches will enable it, it is rather ironic that by inadvertently using the wrong slide I probably significantly reduced the impact of the talk. There is no video that I can upload, no opportunity for others to see the talk. Several people whose opinion I value, and who I know were watching online, didn’t get to see it, and the tweetstream that I might have hoped would be full of discussion, disagreement, and alternative perspectives was basically dead. I effectively made my own point, reducing what I’d hoped might kick off a wider discussion to a dead talk that only exists in a static document and in the memories of the limited number of people who were in the room.

The message is pretty clear. If you want to reduce the effectiveness and impact of the work you’re doing, if you want to limit the people you can reach, then use restrictive terms. If you want your work to reach people and to maximise the chance it has to make a difference, make it clear and easy for people to understand that they are encouraged to copy, share, and cite your work. Be open. Make a difference.


PLoS (and NPG) redefine the scholarly publishing landscape

[Image: Open Access logo (via Wikipedia)]

Nature Publishing Group yesterday announced a new venture, very closely modelled on the success of PLoS ONE, titled Scientific Reports. Others have started to cover the details and some implications so I won’t do that here. I think there are three big issues: what does this tell us about the state of Open Access? What are the risks and possibilities for NPG? And why oh why does NPG keep insisting on a non-commercial licence? Those merit separate posts, so here I’m just going to deal with the big issue. And I think this is really big.

[I know it bores people, hell it bores me, but the non-commercial licence is a big issue. It is an even bigger issue here because this launch may define the ground rules for future scholarly communication. Open Access with a non-commercial licence actually achieves very little either for the community, or indeed for NPG, except perhaps as a cynical gesture. The following discussion really assumes that we can win the argument with NPG to change those terms. If we can the future is very interesting indeed.]

The Open Access movement has really been defined by two strands of approach. The “Green Road” involves self-archiving of pre-prints, or of articles published in subscription journals, as a means of providing access. It has had its successes, perhaps more so in the humanities, with deposition mandates becoming increasingly common both at the institutional level and at the level of funders. The other approach, the “Gold Road”, is for most intents and purposes defined by commercial and non-profit publishers with a business model of article processing charges (APCs) to authors and making the published articles freely available at a publisher website. There is a thriving community of “shoe-string business model” journals publishing small numbers of articles without processing charges, but in terms of articles published, OA publishing is dominated by BioMedCentral (the pioneers in this area, now owned by Springer), the Public Library of Science, and, on a smaller scale, Hindawi. This approach has gained more traction in the sciences, particularly the biological sciences.

From my perspective, yesterday’s announcement means that for the sciences the argument for Gold Open Access as the default publication mechanism has effectively been settled. Furthermore, the future of most scholarly publishing will be in publication venues that place no value on a subjective assessment of “importance”. Those are big claims, but NPG have made a bold and possibly decisive move, in an environment where PLoS ONE was already starting to dominate some fields of science.

PLoS ONE was already becoming a default publication venue. A standard path for getting a paper published is: have a punt at Cell/Nature/Science, maybe a go at one of the “nearly top tier” journals, and then head straight for PLoS ONE, in some cases with the technical assessments already in hand. However, in some fields, particularly chemistry, the PLoS brand wasn’t enough to be attractive against the strong traditional pull of American Chemical Society or Royal Society of Chemistry journals and Angewandte Chemie. Scientific Reports changes this because of the association with the Nature brand. If I were the ACS I’d be very worried this morning.

The announcement will also be scaring the hell out of those publishers who have a lot of separate, lower-tier journals. The problem for publication business models has never been with the top tier; that can be made to work because people want to pay for prestige (whether we can afford it in the long term is a separate question). The problem has been the volume end of the market. I back Dorothea Salo’s prediction [and again] that 2011/12 would see the big publishers look very closely at their catalogues of hundreds or thousands of low-yield, low-volume, low-prestige journals, and see the beginning of mass closures, simply to keep down subscription increases that academic libraries can no longer pay for. Aggregated large-scale journals with streamlined operating and peer review procedures, simplified and more objective selection criteria, and APC-supported business models make a lot of sense in this market. Elsevier, Wiley, Springer (and to a certain extent BMC) have just lost the start in the race to dominate what may become the only viable market in the medium term.

With two big players now in this market there will be real competition. Others have suggested [see Jason Priem’s comment] this will be on the basis of services and information. This might be true in the longer term, but in the short to medium term it will be on two issues: brand and price. The choice of name is a risk for NPG: the Nature brand is crucial to the success of the venture, but there’s a risk of diluting the brand, which is NPG’s major asset. That the APC for Scientific Reports has been set identically to PLoS ONE’s is instructive. I have previously argued that APC-driven business models will be the most effective way of forcing down publication costs, and I would expect to see competition develop here. I hope we might soon see a third player in this space to drive effective competition.

At the end of the day what this means is that there are now seriously credible options for publishing in Open Access venues (assuming we win the licensing argument) across the sciences, that funders now support Article Processing Charges, and that there is really no longer any reason to publish in that obscure subscription journal that no-one actually reads anyway. The dream of a universal database of freely accessible research outputs is that much closer to being within our reach.

Above all, this means that PLoS in particular has succeeded in its aim of making Gold Open Access publication a credible default option. The founders and team at PLoS set out with the aim of changing the publication landscape. PLoS ONE was a radical and daring step at the time, and they pulled it off. The other people who experimented in this space also deserve credit, but it was PLoS ONE in particular that found the sweet spot between credibility and pushing the envelope. I hope that those in the office are cracking open some bubbly today. But not too much. For the first time there is now some serious competition and it’s going to be tough to keep up. There remains a lot more work to be done (assuming we can sort out the licence).

Full disclosure: I am an academic editor for PLoS ONE, editor in chief of the BioMedCentral journal Open Research Computation, and have advised PLoS, BMC, and NPG in a non-paid capacity on a variety of issues that relate closely to this post.


Finding the time…

[Image: My melting time (by Aníbal Pées Labory, via Flickr)]

Long term readers of this blog will know that I occasionally write an incomprehensible post that no-one understands about the nature of time on the web. This is my latest attempt to sort my thinking out on the issue. I don’t hold out that much hope but it seemed appropriate for the New Year…

2010 was the year that real time came to the mainstream web. From the new Twitter interface to live updates and the flash crash, any number of developments and stories focussed on how everything is getting faster, better, more responsive. All of this is good, but I don’t think it’s the end game. Real time is fun, but it is merely a technical achievement: it is just faster. It demonstrates our technical ability to overcome observable latency, but beyond that not a lot.

Real time also seems to have narrowed the diversity of our communities, paradoxically by speeding up the conversation. As conversations have moved from relatively slow media (such as blog comments), through non-real-time services, to places like Twitter, I have noticed that the geographical spread of my conversations has narrowed. I am more limited because the timeframe of the conversations limits them to people near enough to my own timezone. As I move to different timezones, the people, the subjects, and the tone of the conversation change. I become trapped by the native timeframe of the conversation, which on Twitter is just slightly slower than a spoken conversation.

A different perspective: someone last year (I’m embarrassed to say I can’t remember who) talked to me about the idea of using the live Twitter stream generated during a talk to subtitle the video of that talk (thanks to @ambrouk for the link) and so enable searching; essentially using the native timestamps of both Twitter and the video to synchronise a textual record of what the speaker was talking about. Now this is interesting from a search perspective, but I found it even more interesting from a conversational perspective. Imagine that you are watching a video of a talk and you see, embedded, a tweeted comment that you want to follow up. Well, you can just reply, but the original commenter won’t have any context for your pithy remark. But what if it were possible to use the video to recreate the context? The context is at least partly shared, and if the original commenter was viewing the talk remotely then it is almost completely shared, so can we (partially) recreate enough of the setting, efficiently enough, to enable that conversation to continue?
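The synchronisation itself is simple. Below is a minimal sketch, assuming the tweets have already been collected as (timestamp, author, text) tuples; the function names and the choice of SRT-style subtitle output are my own illustration rather than any real Twitter or video-player API.

```python
# A minimal sketch of aligning tweet timestamps with a talk video so the tweets
# become time-coded, searchable "subtitles". Assumes the tweets are already fetched.
from datetime import datetime, timedelta

def to_srt_time(offset: timedelta) -> str:
    """Format an offset from the start of the video as an SRT timestamp."""
    total_ms = int(offset.total_seconds() * 1000)
    hours, rest = divmod(total_ms, 3_600_000)
    minutes, rest = divmod(rest, 60_000)
    seconds, ms = divmod(rest, 1000)
    return f"{hours:02}:{minutes:02}:{seconds:02},{ms:03}"

def tweets_to_srt(tweets, video_start: datetime, display_secs: int = 10) -> str:
    """Turn (timestamp, author, text) tuples into numbered SRT subtitle blocks."""
    during_talk = [t for t in sorted(tweets) if t[0] >= video_start]
    blocks = []
    for i, (sent_at, author, text) in enumerate(during_talk, start=1):
        start = sent_at - video_start            # tweet time relative to the video
        end = start + timedelta(seconds=display_secs)
        blocks.append(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n@{author}: {text}\n")
    return "\n".join(blocks)

# Example with made-up data: a talk recorded from 14:00 UTC.
video_start = datetime(2010, 12, 3, 14, 0, 0)
tweets = [(datetime(2010, 12, 3, 14, 5, 12), "someone", "Good question about the EXAFS data")]
print(tweets_to_srt(tweets, video_start))
```

The same relative timestamps would let a viewer jump from a tweet back to the moment in the video it was commenting on, which is the context-recreation part of the idea.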

This is now a timeshifted conversation. Time shifting, or more precisely controlling the intrinsic timescale of a conversation, is for me the big challenge. Partly I was prompted to write this post by the natural use of “timeshifting” in a blog post by Andrew Walkingshaw in reference to using Instapaper. Instapaper lets you delay a very simple “conversation” into a timeframe under your control but it is very crude. The context is only re-created in as much as the content that you selected is saved for a later time. To really enable conversations to be timeshifted requires much more sophisticated recreation of context as well as very sensitive notification. When is the right moment to re-engage?

One of the things I love about FriendFeed (and interestingly one of the things Robert Scoble hates, but that’s another blog post) is the way that, as a conversation proceeds, the whole thread is promoted back to the top of the stream each time a new comment or interaction comes in. This both provides notification that the conversation is continuing and, critically, recreates the context of the ongoing discussion. I think this is part of what originally tipped me into thinking about time and context.
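In data terms the mechanism is tiny, which is part of why it works so well. Here is a rough sketch of the behaviour as I understand it from the outside; the class names are mine, not FriendFeed’s.

```python
# A sketch of FriendFeed-style resurfacing: any new comment bumps its whole thread,
# with the accumulated discussion attached, back to the top of the stream.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Thread:
    title: str
    comments: list = field(default_factory=list)
    last_activity: datetime = field(default_factory=datetime.utcnow)

    def add_comment(self, author: str, text: str) -> None:
        self.comments.append((datetime.utcnow(), author, text))
        self.last_activity = datetime.utcnow()   # this is the "bump"

@dataclass
class Stream:
    threads: list = field(default_factory=list)

    def view(self) -> list:
        # Most recently active first: an old thread resurfaces carrying its
        # full comment history, i.e. the context, along with it.
        return sorted(self.threads, key=lambda t: t.last_activity, reverse=True)
```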

The point is that, technically, we need to regain control of our time. Currently the value of our conversations is diminished by our inability to control their intrinsic timescale. For people like Scoble, who actually live in the continuous flow, this is fine. But it is not a feasible mode of interaction for many of us, nor a productive one for many of us much of the time, and we are losing potential value that is in the stream. We need mechanisms that re-surface the conversation at the right time and on the right timescale, we need tools that enable us to timeshift conversations both with people and with technology, but above all we need effective and efficient ways to recover the context in which those conversations are taking place.

If these problems can be solved then we can move away from the current situation, where social media tools are built, used, and critiqued largely by the people who can spend the most time interacting with them. We don’t get a large proportion of the potential value out of these tools because they don’t support occasional and timeshifted modes of interaction, which in turn means that most people don’t get much value out of them, which in turn means that most people don’t use them. Facebook is so dominant precisely because the most common conversation is effectively saying “hello, I’m still here!”, something that requires very little context to make sense. That lack of a need for context makes it possible for everyone from the occasional user to the addict to get value from the service. It doesn’t matter how long it takes for someone to reply “hello, I’m still here as well”; the lack of required context means it still makes sense. Unless you’ve forgotten entirely who the person is…

To extract the larger potential value from social media, particularly in professional settings, we need to make this work on a much more sophisticated scale. Notifications that come when they should based on content and importance, capturing and recreating context that makes it possible to continue conversations over hours, days, years or even decades. If this can be made to work, then a much wider range of people will gain real value from their interactions. If a larger proportion of people are interacting there is more value that can be realised. The real time web is an important step along the road in this direction but it is really only first base. Time to move on.


Forward linking and keeping context in the scholarly literature

[Image: Alchemical symbol for arsenic (via Wikipedia)]

Last Friday I spoke at the STM Innovation Seminar in London, taking up in general terms the theme I’ve been developing recently: focussing on enabling user discovery rather than providing central filtering, and on enabling people to act as their own gatekeepers rather than publishers taking that role on for themselves.

An example I used, one I’ve used before, was the hydride oxidation paper that was published in JACS, comprehensively demolished online, and subsequently retracted. The point I wanted to make was that the detailed information, the comprehensive view of what had happened, was only available by searching Google. In retrospect, as has been pointed out to me in private communication, this wasn’t such a good example because there is often more detailed information available in the published retraction. It isn’t always as visible as I might like, particularly to automated systems, but the ACS actually does a pretty good job overall with retractions.

Had it come a few days earlier, the arsenic microbes paper and the subsequent detailed critique might well have made a better example. Here again, the detailed criticism is not visible from the paper but only through a general search on the web, or via specialist indexes like researchblogging.org. The external reader, arriving at the paper, would have no idea that this conversation was even occurring. The best-case scenario is that, if and when a formal critique is published, it will be visible from the page, but even then it can easily be buried among other citations from the formal literature.

The arsenic story is still unfolding and deserves close observation, as does the critique of the P/NP paper from a few months ago. However, a broader trend does appear to be evident: if a high-profile paper is formally published, it will receive detailed, public critique. This in itself is remarkable. Open peer review is happening, even becoming commonplace, an expected consequence of the release of big papers. What is perhaps even more encouraging is that when that critique starts it seems capable of aggregating sufficient expertise to make the review comprehensive. When Rosie Redfield first posted her critique of the arsenic paper I noted that she skipped over the EXAFS data, which I felt could be decisive. Soon after, people with EXAFS expertise were in the comments section of the blog post, pulling it apart [1, 2, 3, 4].

Two or three things jump out at me here. First, the complaint that people “won’t comment on papers” now seems outdated. Sufficiently high-profile papers will receive criticism, and woe betide those journals who aren’t able to summon a very comprehensive peer review panel for them. Secondly, this review is not happening on journal websites, even when journals provide commenting fora. The reasons for this are, in my opinion, reasonably clear. The first is that journal websites are walled gardens, often requiring sign-in, often with irritating submission or review policies: people simply can’t be arsed. The second is that people are much more comfortable commenting in their own spaces, on their own blogs, within their community on Twitter or Facebook. These may not be private, but they feel safer, less wide open.

This leads on to the third point. I’ve been asked recently to try to identify what publishers (widely drawn) can do to take advantage of social media in general terms. Forums and comments haven’t really worked, not on the journal websites. Other ventures have had some successes and some failures, but nothing has taken the world by storm.

So what to do? For me the answer is starting to form, and it might be one that seems obvious. The conversation will always take place externally. Conversations happen where people come together, and people fundamentally don’t come together on journal websites. The challenge is to capture this conversation and use it to keep the static paper in context. I’d like to ditch the concept of the version of record, but it’s not going to happen. What we can do, what publishers could do to add value and, drawing on the theme of my talk, to build new discovery paths that lead to the paper, is to keep annotating, keep linking, keep building the story around the paper as it develops.
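As a rough sketch of what “keep annotating, keep linking” might look like in practice: an article record that periodically accumulates external mentions of the paper’s DOI. The mention source is deliberately left as a stub, since which feeds a publisher can actually draw on (trackbacks, Twitter searches, specialist indexes) will vary; none of the names below are a real publisher API.

```python
# A sketch of forward linking: keep attaching the external conversation to the
# article record so the static paper stays in context as the story develops.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Mention:
    url: str
    source: str           # e.g. "blog", "twitter", "formal comment"
    seen_at: datetime
    excerpt: str = ""

@dataclass
class ArticleRecord:
    doi: str
    mentions: list = field(default_factory=list)

    def add_mentions(self, found: list) -> None:
        known = {m.url for m in self.mentions}
        self.mentions.extend(m for m in found if m.url not in known)

def fetch_mentions(doi: str) -> list:
    """Stub: query whatever external sources are available for links to this DOI."""
    return []

def update_record(record: ArticleRecord) -> None:
    # Run periodically; each pass adds any new pieces of the conversation,
    # building the discovery paths that lead back to the paper.
    record.add_mentions(fetch_mentions(record.doi))
```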

This is technically achievable, and it would add value that doesn’t really exist today. It’s something that publishers with curation and editorial experience and the right technical infrastructure could do well. And above all, it is something that people might find of sufficient value to pay for.
