Giving credit, filtering, and blogs versus traditional research papers
Another post prompted by an exchange of comments on Neil Saunders' blog. The discussion there started with the somewhat arbitrary nature of what does and does not get counted as a 'worthy contribution' in the research community. Neil was commenting on an article in Nature Biotech that had similar subject matter to some blog posts, and he was reflecting on the fact that one would look convincing on a CV and the others wouldn't. The conversation in the comments drifted into a discussion of peer review with Maxine (I am presuming Maxine Clarke from Nature?). You should read her comment, the post, and the other comments in full, but I wanted to pick out one bit.
…I think that unfiltered content (wherever it is published, web or book or wherever) is on average less accurate, more error-prone, less interesting to read than content that has had some filter applied to it. Your flip characterisation of peer review ignores (1) the editor and (2) the fact that several referees (all independent) of a paper see each others’ comments at revision/resubmission stages. Your comment about the “web being about filtering….” etc seems to ignore, for example, scientific and medical “misinformation” sites…
Neil's comment about the web being about filtering was, I think, absolutely apposite here. This is a point that the traditional research press is generally missing and desperately needs to engage with. Maxine's point is that the peer-reviewed literature is trusted because each piece of work has been approved by an editor and independent referees, whereas the web is inherently untrustworthy because no filtering has been applied to it.
Neil's point, or at least my interpretation of it, is that on the web, filtering is under the user's control. This has the potential to be much more trustworthy and, crucially, much more transparent than the rather opaque process of peer review and editorial policy. My trust in any source of information relies on my knowledge of how it is filtered. It is entirely possible that I would place more trust in Neil's recommendation than in the fact that an article has been through the editorial board of Nature Biotech. Not because I dislike print, but because the recommendation comes from someone I know of and whose judgement I trust, not from anonymous referees and editors. I may trust them because they are, for instance, from or chosen by Nature Publishing Group, but if I don't know who they are then that trust does not have the personal dimension. A key question for publishers is how they can (or whether they should) work to get this personal dimension into their review process, and how high a priority that is in the short, medium, and longer term.
If I may put up a straw man of my own: the fact that something is in a journal that is, or claims to be, peer reviewed is no guarantee of either accuracy or quality. To begin with, there are many journals where editorial oversight of paper quality (by which I mean clarity and standard of writing) is minimal or non-existent. NPG stands out as one of relatively few publishers remaining that will actually edit a research paper for language and style (as opposed to pernickety issues of house style and format; should thirteen be written out, anyone?). Secondly, there are journals where refereeing is unreliable, inconsistent, or not properly used (and no, I'm not going to name names).
In fact, in many ways the peer-reviewed literature is not dissimilar to the web. Perhaps a somewhat lower proportion of it is rubbish, but a significant proportion is wrong, misleading, or just poorly carried out. The user still has to filter, but the tools for doing this are actually very poor: essentially the reputation of the journal and some sort of view of the reliability of the authors. There are much more effective tools for filtering blog and web material in a transparent fashion, although there is still much work to do to automate these filtering processes. Perhaps it is fairer to say that blog readers are more effective at utilising these tools than the general research community is. Many of these tools could be applied directly to the traditional peer-reviewed literature, and Nature Publishing Group have taken the lead on this amongst traditional publishers. But it is really hard to get scientists to comment on and rate papers after publication.
To put it another way, the general editorial standard of web material is certainly very poor, but the standard of material recommended by Neil or other specific people may be very high. We should be having a discussion about the relative editorial standards of NPG, the Journal of Dodgy Stuff Published by the Editor's Mates, and the Journal of Stuff that Neil Recommended. I would guess that the editorial quality of JSNR is higher than that of JDSPEM but lower than NPG's. On the other hand, the JSNR may contain material that is more relevant, more accessible, or even just more my style than NPG. So I think we can compare and contrast blogs and papers. They (may) have different roles, but we can (and should) compare quality metrics between them, assuming of course we can agree on some.
The problem seems to lie in actually persuading scientists to use these tools, and in giving people credit for the work they do in carrying this out. Social filtering relies on people doing it. And here we have a problem which seems to be echoed in another of Maxine's comments (full comment here):
…Anecdotally, many scientists are often not respected by their peers for writing popular journal articles such as News and Views, etc. And they certainly don’t do it for the money….
So actually we are all on the same team. How do we make all these different contributions 'count' in the academic world? I think we agree that these contributions are important, whether they are a well-written News and Views article or a great blog post discussing some aspect of a research problem or technique. I subscribe to Nature, but really I read the first half and then skim through the research articles. Those of us who assess other scientists (and that includes me) have to champion the added value that these things bring. Those of us who do these things and are preparing CVs for assessment (including me, with annual performance reviews coming up) should be including them as part of a balanced description of what we do, and defending their value. After all, if you don't think it's important, why are you doing it? And if you do think it's important, why aren't you defending it?
I don't know whether there is much that journals can do to encourage these activities. Editorials may say they are important, but that in and of itself won't have a lot of impact on tenure reviews and job applications. Perhaps selecting News and Views authors from among those who are writing good material on the web? Younger people, who are over-represented on the web, probably have more to gain from having a couple of N&V pieces or Primers on their CV than older authors do. And in the longer term, if it helps them get over the hurdle of getting a job or promotion, they may be more inclined to help out in the future. Perhaps this can be a win-win situation?