Giving credit, filtering, and blogs versus traditional research papers
Another post prompted by an exchange of comments on Neil Saunders’ blog. The discussion there started with the somewhat arbitrary nature of what does and does not get counted as a ‘worthy contribution’ in the research community. Neil was commenting on an article in Nature Biotech that had similar subject matter to some blog posts, and he was reflecting on the fact that one would look convincing on a CV and the others wouldn’t. The conversation in the comments drifted somewhat into a discussion of peer review with Maxine (I am presuming Maxine Clarke from Nature?). You should read her comment and the post and other comments in full, but I wanted to pick out one bit.
…I think that unfiltered content (wherever it is published, web or book or wherever) is on average less accurate, more error-prone, less interesting to read than content that has had some filter applied to it. Your flip characterisation of peer review ignores (1) the editor and (2) the fact that several referees (all independent) of a paper see each others’ comments at revision/resubmission stages. Your comment about the “web being about filtering….” etc seems to ignore, for example, scientific and medical “misinformation” sites…
Neil’s comment about the web being about filtering was, I think, absolutely apposite here. This is a point that I think the traditional research press is generally missing and desperately needs to engage with. Maxine’s point is that the peer reviewed literature is trusted because each piece of work has been approved by an editor and independent referees, whereas the web is inherently untrustworthy because it has not had any filtering.
Neil’s point, or at least my interpretation of it, is that with material on the web, filtering is under the user’s control. This has the potential to be much more trustworthy and, crucially, much more transparent than the rather opaque process of peer review and editorial policies. My trust in any source of information relies on my knowledge of how it is filtered. It is entirely possible that I would place more trust in Neil’s recommendation than in the fact that an article has been through the editorial board of Nature Biotech. Not because I dislike print, but because the recommendation comes from someone I know of and whose judgement I trust, not from anonymous referees and editors. I may trust them because they are, for instance, from or chosen by Nature Publishing Group, but if I don’t know who they are then that trust does not have the personal dimension. A key question for publishers is how they can (or whether they should) be working to get this personal dimension into their review process, and how high a priority it is in the short, medium, and longer term.
If I may put up a straw man of my own: the fact that something is in a journal that is, or claims to be, peer reviewed is no guarantee of either accuracy or quality. To begin with, there are many journals where the editorial oversight of paper quality (by which I mean clarity and standard of writing) is minimal or non-existent. NPG stands out as one of relatively few publishers remaining that will actually edit a research paper for language and style (as opposed to pernickety issues of house style and format; should thirteen be written out, anyone?). Secondly, there are journals where refereeing is unreliable, inconsistent, or not properly used (and no, I’m not going to name names).
In fact, in many ways the peer reviewed literature is not dissimilar to the web. Perhaps a somewhat lower proportion of it is rubbish, but a significant proportion is either wrong or misleading or just poorly carried out. The user still has to filter, but the tools for doing this are actually very poor: essentially the reputation of the journal and some sort of view of the reliability of the authors. There are much more effective tools for filtering blog and web material in a transparent fashion, although there is still much work to do to automate these filtering processes more effectively. Perhaps it is fairer to say that blog readers are more effective at utilising these tools than the general research community. Many of these tools could be directly applied to the traditional peer reviewed literature, and Nature Publishing Group have taken the lead on this amongst traditional publishers. But it is really hard to get scientists to comment on and rate papers after publication.
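To make the contrast concrete, here is a minimal sketch, in Python, of what transparent, user-controlled filtering might look like: the reader assigns their own trust weights to recommenders, and items are ranked by the aggregate trust behind them. The names and weights are hypothetical illustrations, not a description of any existing tool.

```python
# A toy, hypothetical sketch of trust-weighted filtering. The names and
# weights below are illustrative assumptions, not a real system.

# My personal trust in each recommender, on a 0-1 scale.
trust = {
    "neil": 0.9,          # someone whose judgement I know and trust
    "anon_referee": 0.4,  # anonymous, so trust rests on the journal brand
}

# Items (papers or blog posts) and who has recommended each of them.
recommendations = {
    "blog post on filtering": ["neil"],
    "Nature Biotech article": ["anon_referee", "anon_referee"],
}

def score(item):
    """Aggregate trust behind an item: sum of my trust in its recommenders."""
    return sum(trust.get(person, 0.0) for person in recommendations[item])

# Rank by aggregate trust. The key property is transparency: the weights
# are mine and visible, and changing them changes the ranking.
for item in sorted(recommendations, key=score, reverse=True):
    print(f"{item}: {score(item):.2f}")
```

The point of the toy is that the filter is under the reader’s control: swap in your own weights and the ranking changes, which is exactly the control an opaque editorial process does not offer.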
To put it another way: the general editorial standard of web material is certainly very poor, but the standard of material recommended by Neil or other specific people may be very high. We should be having a discussion about the relative editorial standards of NPG, the Journal of Dodgy Stuff Published by the Editor’s Mates (JDSPEM), and the Journal of Stuff that Neil Recommended (JSNR). I would guess that the editorial quality of JSNR is higher than that of JDSPEM but lower than NPG’s. On the other hand, JSNR may contain material that is more relevant, more accessible, or even just more my style than NPG. So I think we can compare and contrast blogs and papers. They (may) have different roles, but we can (and should) compare quality metrics between them, assuming of course we can agree on some.
The problem seems to lie in actually persuading scientists to use these tools, and in giving people credit for the work they do in carrying it out. Social filtering relies on people doing it. And here we have a problem which seems to be echoed in another of Maxine’s comments (full comment here):
…Anecdotally, many scientists are often not respected by their peers for writing popular journal articles such as News and Views, etc. And they certainly don’t do it for the money….
So actually we are all on the same team. How do we make all these different contributions ‘count’ in the academic world? I think we agree that these contributions are important, whether they are a well written News and Views article or a great blog post discussing some aspect of a research problem or technique. I subscribe to Nature, but really I read the first half and then skim through the research articles. Those of us who are assessing other scientists (and that includes me) have to champion the added value that these things bring. Those of us who do these things and are preparing CVs for assessment (including me, annual performance reviews coming up) should be including them as part of a balanced description of what it is we do, and defending their value. After all, if you don’t think it’s important, why are you doing it? If you do think it’s important, then why aren’t you defending it?
I don’t know whether there is that much that journals can do to encourage these activities. Editors may say they are important, but that in and of itself won’t have a lot of impact on tenure reviews and job applications. Perhaps they could select News and Views authors from among those who are writing good material on the web? Younger people, who are over-represented on the web, probably have more to gain from having a couple of News and Views pieces or Primers on their CV than older authors. And in the longer term, if it helps them get over the hurdle of getting a job or promotion, they may be more inclined to help out in the future. Perhaps this could be a win-win situation?
I don’t think these are particularly new ideas (see also in particular Jon Udell on Circles of Trust, via a BBGM post) but I felt compelled to try and phrase them myself. YMMV.
Couldn’t agree more with everything you said. As you mentioned, the thread following my post went places that I didn’t really intend – but hey, that’s blogging. I do admire Maxine’s role as “defender of NPG’s integrity on the web”, even when it’s entirely irrelevant ;)
Anyway – the question of how we evaluate web-based information is an important issue and one with which a lot of your more “traditional” academics seem to struggle. Let me tell you the story of the group meeting in which I introduced OpenWetWare to my colleagues. Let’s call the questioners Q1 and Q2.
Q1: So…how is this different to our lab wiki?
Me (confused): Well – our lab wiki is our own private wiki. OWW is an open community.
Q1 + Q2: (stony silence)
Me (a little flustered now): It’s got protocols…methods…useful information.
Q1: I don’t see how this can work.
Q2: But what about peer review?
Me (wishing I’d never started): Um…what about it?
Scientists are of course trained to be sceptical, but sometimes that manifests itself as downright ugly cynicism. I’ve lost count of the number of times I’ve heard people begin a sentence with “I’m sceptical that this can work” about a topic to which you’ve just introduced them and of which they have no prior experience.
I digress – the point is, I’d never even considered that something like traditional academic peer review by a small panel of experts would have the slightest relevance to web content. If I find a protocol at a trusted site like OWW, I tend to think that the author has placed it there in good faith. Their name and affiliation are attached, they’re approved by the community – I mean, who registers at a scientific research community website with the malevolent intention of depositing misleading information, for fun?
Furthermore – let’s say that I tried their protocol and it didn’t work. I have several courses of action available to me: leave a comment, contact the author – hey it’s a wiki! – edit the article myself. What can I do if the protocol in (insert journal name here) fails? Not a whole lot.
New rules for the information age.
Great posts (both this one and Neil’s linked post) and I agree with many of the sentiments expressed. However, I think there is a tendency for many researchers (myself included) to underestimate the importance of “soft credit” in science and engineering. I may not be able to put a line on my CV if I write an insightful blog post, a detailed protocol on OWW, or even a News and Views for an NPG journal, but I may enhance my reputation and standing with my colleagues, or it may mean that more people know who I am. Given that the reward system in research is largely based on evaluation by peers, whether it is papers, grants, invited talks, jobs or tenure, such soft credit can pay off in ways I may never fully realize. Soft credit can even ultimately translate into hard credit like research publications if reviewers, say, know who I am and respect me as a researcher.
I agree with your points about the two publication methods being complementary. I think Google can be the filter for these: if you need to find a technique, your first option is usually a web search, and you’ll look at the first two or three results. I guess there will be problems, depending on how well Google can index journal articles, but usually the most meritocratic result will be first. Often journal articles will be the top hits for research theory, if you like, but web pages are often the top hits for research techniques.
I have also run into what Neil calls ugly cynicism, but I think this can be a good thing. If some people start to feel uncomfortable about what you’re doing, then you’re probably doing something right.
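As an aside on the Google-as-filter idea in the comment above, here is a tiny hypothetical sketch of the kind of re-ranking it implies: mixed search results weighted so that web pages surface first for technique queries and journal articles for theory queries. The results and weights are made up for illustration; no real search API is involved.

```python
# Hypothetical illustration of "Google as the filter": re-rank mixed results
# using made-up weights that favour web pages for technique queries and
# journal articles for theory queries. No real search API is involved.

results = [
    {"title": "PCR troubleshooting guide (OpenWetWare)", "source": "web"},
    {"title": "PCR optimisation study", "source": "journal"},
    {"title": "Review of polymerase kinetics", "source": "journal"},
]

# Weights encoding the commenter's claim about which sources surface first.
WEIGHTS = {
    "technique": {"web": 2.0, "journal": 1.0},
    "theory": {"web": 1.0, "journal": 2.0},
}

def rank(results, query_kind):
    """Order results by the (hypothetical) weight of their source type."""
    weight = WEIGHTS[query_kind]
    return sorted(results, key=lambda r: weight[r["source"]], reverse=True)

# A searcher looking for a technique reads only the first two or three hits.
for hit in rank(results, "technique")[:3]:
    print(hit["title"])
```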
As we have discussed, a lot of these problems disappear if authors support their statements with links to the relevant raw data, whether in a peer reviewed article or a blog post.