
Forward linking and keeping context in the scholarly literature

6 December 2010
[Image: alchemical symbol for arsenic, via Wikipedia]

Last Friday I spoke at the STM Innovation Seminar in London, taking up in general terms the theme I’ve been developing recently: focussing on enabling user discovery rather than providing central filtering, and on enabling people to act as their own gatekeepers rather than publishers taking that role on for themselves.

An example I used, one I’ve used before, was the hydride oxidation paper that was published in JACS, comprehensively demolished online and subsequently retracted. The point I wanted to make was that the detailed information, the comprehensive view of what had happened, was only available by searching Google. In retrospect, as has been pointed out to me in private communication, this wasn’t such a good example, because there is often more detailed information available in the published retraction. It isn’t always as visible as I might like, particularly to automated systems, but the ACS actually does a pretty good job overall with retractions.

Had it come a few days earlier, the arsenic microbes paper, and the subsequent detailed critique, might well have made a better example. Here again, the detailed criticism is not visible from the paper itself but only through a general search on the web, or via specialist indexes like researchblogging.org. The external reader, arriving at the paper, would have no idea that this conversation was even occurring. The best-case scenario is that, if and when a formal critique is published, it will be visible from the page; but even then it can easily be buried among other citations from the formal literature.

The arsenic story is still unfolding and deserves close observation, as does the critique of the P/NP paper from a few months ago. However, a broader trend does appear to be evident: if a high-profile paper is formally published, it will receive detailed, public critique. This in itself is remarkable. Open peer review is happening, even becoming commonplace, an expected consequence of the release of big papers. What is perhaps even more encouraging is that when that critique starts, it seems capable of aggregating sufficient expertise to make the review comprehensive. When Rosie Redfield first posted her critique of the arsenic paper, I noted that she skipped over the EXAFS data, which I felt could be decisive. Soon after, people with EXAFS expertise were in the comments section of the blog post, pulling it apart [1, 2, 3, 4].

Two or three things jump out at me here. First, the complaint that people “won’t comment on papers” now seems outdated. Sufficiently high-profile papers will receive criticism, and woe betide those journals that aren’t able to summon a very comprehensive peer review panel for such papers. Second, this review is not happening on journal websites, even when journals provide commenting fora. The reasons for this are, in my opinion, reasonably clear. The first is that journal websites are walled gardens, often requiring sign-in, often with irritating submission or review policies; people simply can’t be arsed. The second is that people are much more comfortable commenting in their own spaces: their own blogs, their community on Twitter or Facebook. These may not be private, but they feel safer, less wide open.

This leads on to the third point. I’ve been asked recently to try to identify what publishers (widely drawn) can do to take advantage of social media in general terms. Forums and comments haven’t really worked, not on journal websites. Other ventures have had some successes and some failures, but nothing has taken the world by storm.

So what to do? For me the answer is starting to form, and it might be one that seems obvious. The conversation will always take place externally. Conversations happen where people come together, and people fundamentally don’t come together on journal websites. The challenge is to capture this conversation and use it to keep the static paper in context. I’d like to ditch the concept of the version of record, but it’s not going to happen. What we can do, what publishers could do to add value and, drawing on the theme of my talk, to build new discovery paths that lead to the paper, is to keep annotating, keep linking, keep building the story around the paper as it develops.

This is both technically achievable and it would add value that doesn’t really exist today. It’s something that publishers with curation and editorial experience and the right technical infrastructure could do well. And above all it is something that people might find of sufficient value to pay for.
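As a sketch of what such forward linking might look like in practice: a paper’s record accumulates links to the conversation happening elsewhere on the web. Everything here is illustrative; the record structure, field names and example URLs are assumptions, not any real publisher’s system.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    source_url: str   # where the commentary lives (blog post, forum thread)
    kind: str         # e.g. "blog-critique", "formal-comment", "retraction-notice"
    summary: str      # curator-written one-line description

@dataclass
class PaperRecord:
    doi: str
    annotations: list = field(default_factory=list)

    def annotate(self, source_url, kind, summary):
        """Attach an external conversation link to the paper's record."""
        self.annotations.append(Annotation(source_url, kind, summary))

    def conversation(self, kind=None):
        """List the linked conversation, optionally filtered by kind."""
        return [a for a in self.annotations if kind is None or a.kind == kind]

# Illustrative usage with a made-up DOI and URL
paper = PaperRecord(doi="10.1234/example.5678")
paper.annotate("http://blogs.example.org/arsenic-critique",
               "blog-critique",
               "Detailed methodological critique of the paper")
```

The point of the sketch is that the paper page itself becomes the discovery junction: readers arriving at the record can follow the `conversation()` outward, instead of having to reconstruct it via a general web search.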



  • Rosie Redfield

    Wow, wonderful post!

  • http://cameronneylon.net Cameron Neylon

    Thanks Rosie. Really just a brief follow-up to your incredibly detailed and
    comprehensive post though. That level of review seems to be becoming rare,
    either before or after publication.

  • anna

    Agree fully with the solution. We need information nodes in reliable places. The catch is going to be making them easy to follow, discovering obscure but relevant sources and, as you mention, curating to remove the dross (which needs to be done carefully so as not to create a biased picture).

    We’ve been wondering (here) about the obscure sources issue though; this is one thing Google does not do well, and it will require either humans (personal networks) or a totally different approach to accessing internet materials …

  • http://www.axiope.com Rory

    Cameron,

    I’m intrigued by your comment that, “People are much more comfortable commenting in their own spaces, their own blogs, their community on twitter or facebook. These may not be private, but they feel safer, less wide open.” There seems to be an implication that although people are already using these communities to comment on papers, the communities are lacking something as forums for commenting. Is that a correct inference? If so, what do you think it is that they are lacking — is it the ability to “build new discovery paths that lead to the paper, to keep annotating, keep linking, keep building the story around the paper as it develops”? Is it something else?

  • http://cameronneylon.net Cameron Neylon

    Good question. I’m not sure that there is a generic failure of all of these forums, although there are features I would like to see. I think the overall failing is that we are failing to federate the conversation, and in particular that publishers are failing to hook those conversations back into the original paper as an entry point, a discovery junction if you like, into that conversation. So if there is a general failing, it’s that the conversation isn’t easily aggregatable, because we don’t have open systems for bringing all of it together (there are social issues, which I allude to in the portion you quote, but I think that’s separate).

    There’s a broader question as to whether people understand the extent to which things are public or private or anywhere in between. A lot of the problems on the web seem to stem from people in the same space having different expectations of what is public and private about that space.

  • http://cameronneylon.net Cameron Neylon

    I think it ought to be possible, at least in the top tier, to manage that curation process manually. These journals are supposed to be adding much more value than the average, so this is a place where I think some of that effort ought to be applied. As you get to more general content, though, the automation problem arises and we get back to needing social annotation and reputation frameworks that work well for researchers. But it’s not a simple problem, and it would be very easy to give a skewed view.

    The obscure sources issue: is this about prioritisation of high-quality but obscure commentary, or simply about finding it? Would a trackback-style approach work here, at least for notification? I’m not sure how you can find and prioritise high-quality material from unknown contributors except by manual annotation…
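    For reference, a classic TrackBack ping is just a small form-encoded HTTP POST sent by the commenting blog to the endpoint the article advertises; a minimal sketch of building the ping body (the URLs and text below are made up for illustration, and the sketch constructs the body without actually sending it):

    ```python
    from urllib.parse import urlencode

    def trackback_ping_body(post_url, post_title, excerpt, blog_name):
        """Build the form-encoded body of a TrackBack ping: the
        commenting blog POSTs this to the article's trackback endpoint
        so the article page learns a discussion exists elsewhere."""
        return urlencode({
            "url": post_url,        # where the commentary lives
            "title": post_title,
            "excerpt": excerpt,     # short snippet shown on the article page
            "blog_name": blog_name,
        })

    # Illustrative usage
    body = trackback_ping_body(
        "http://example.org/critique-of-paper",
        "A closer look at the data",
        "Some questions about the methods used in the paper...",
        "Example Science Blog",
    )
    ```

    This only solves notification, of course; as the comment above says, prioritising high-quality material from unknown contributors is a separate problem.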

  • http://www.dcc.ac.uk/ Kevin Ashley

    I agree it’s necessary to try to associate the dialogue/commentary with the publication wherever that dialogue is taking place. Whether folks would be willing to pay for that, I’m not really in a position to say. There’s a hysteresis effect here: what keeps me paying for something I’m already paying for (but becoming disenchanted with), as opposed to what will induce me to start paying for something that I don’t currently subscribe to? The former case is a lower hurdle to jump.

    The problem isn’t new, by any means. Usenet news was certainly a common discussion mechanism for some areas of research – more so before the Eternal September, but still for some time afterwards.

  • http://cameronneylon.net Cameron Neylon

    Fair criticism, as I was a bit vague about why I thought people would pay
    for it. More precisely, people are currently paying for high-quality
    literature search environments (try prising a chemist away from their one
    institutional SciFinder licence), and it seems to me that leveraging your
    content as the central point of a discovery process would offer similar
    value. You could see subscription publishers moving away from charging for
    content and using freely available content as an entry point to a search
    and discovery service. This, in a sense, is what Elsevier are looking to do
    with SciVerse and the integration of apps into the ScienceDirect display of
    papers, I guess.

  • http://twitter.com/gthorisson Gudmundur Thorisson

    Great post, Cameron. Stating the obvious perhaps, but establishing the identity of authors and other individuals in the process needs to be a crucial consideration going forward, esp. with respect to reputation frameworks & social/community annotation (which you mentioned in a comment) and linking one’s contributions to one’s ‘digital scholar’ identity. I see ORCID (http://www.orcid.org) playing a key role in this setting. #orcid

  • http://cameronneylon.net Cameron Neylon

    Absolutely, the whole problem will turn on unique identifiers for both
    contributors and all other research objects. For papers we already have
    DOIs, and for some types of data we have IDs, but we need a lot more, and
    contributors are probably the next important class to capture.
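    A minimal illustration of how such identifiers become linkable: each one maps to a URL at a central resolver. The resolver hosts are real (dx.doi.org for DOIs, orcid.org for ORCID iDs), but the identifiers below are made-up examples.

    ```python
    def doi_url(doi):
        """A DOI becomes a stable link via the central DOI resolver."""
        return "http://dx.doi.org/" + doi

    def orcid_url(orcid_id):
        """An ORCID iD identifies a contributor in the same resolvable way."""
        return "http://orcid.org/" + orcid_id

    # Illustrative usage with example identifiers
    paper_link = doi_url("10.1234/example.5678")
    author_link = orcid_url("0000-0002-1825-0097")
    ```

    Once both papers and contributors resolve this way, annotations and comments can point at either end of the link unambiguously.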

  • Pingback: Quality Assurance « Research Communications Strategy