
Blacklists are technically infeasible, practically unreliable and unethical. Period.

29 January 2017

It’s been a big weekend for poorly designed blacklists. But prior to this, another blacklist was also the subject of significant discussion: Beall’s list of so-called “Predatory” journals and publishers vanished from the web around a week ago. There is still no explanation for why, but the most obvious candidate is that legal action, threatened or real, was the cause of its removal. Since it disappeared, many listservs and groups have been asking what should be done. My answer is pretty simple. Absolutely nothing.

It won’t surprise anyone that I’ve never been a supporter of the list. Early on I held the common view that Beall was providing a useful service, albeit one that over-stated the problem. But as things progressed my concerns grew. The criticisms have been rehearsed many times so I won’t delve into the detail. Suffice to say Beall had a strongly anti-OA stance, was clearly partisan on specific issues, and was antagonistic – often without being constructively critical – towards publishers experimenting with new models of review. But most importantly, his work didn’t meet minimum scholarly standards of consistency and validation. Walt Crawford is the go-to source on this, having done the painstaking work of actually documenting the state of many of the “publishers” on the list, but it seems only a small percentage of the blacklisted publishers were ever properly documented by Beall.

Does that mean that it’s a good thing the lists are gone? That really depends on your view of the scale of the problem. The usual test case of the limitations of free speech is whether it is ok to shout “FIRE” in a crowded theatre when there is none. Depending on your perspective you might feel that our particular theatre has anything from a candle onstage to a raging inferno under the stalls. From my perspective there is a serious problem, although the problem is not what most people worry about, and is certainly not limited to Open Access publishers. And the list didn’t help.

But the real reason the list doesn’t help isn’t its motivations or its quality. It’s a fundamental structural problem with blacklists. They don’t work, they can’t work, and they never will work. Even when they’re put together by “the good guys” they are politically motivated. They have to be, because a complete blacklist is technically impossible to build, so any choice of what to include is necessarily selective, and selection is a political act.

Blacklists are technically infeasible

Blacklists are never complete. Listing is an action that has to occur after any given agent has acted in a way that merits listing. Whether that listing involves being called before the House Committee on Un-American Activities or being added to an online list, it can only happen after the fact. Even if it seems to happen before the fact, that just means that the real criteria are a lie. The listing still happens after the real criteria were met, whether that is being a Jewish screenwriter or starting up a well-intentioned but inexpert journal in India.

Whitelists, by contrast, are by definition always complete. They are a list of all those agents that have been certified as meeting a certain level of quality assurance. There may be many agents that could meet the requirements, but if they are not on the list they have not yet been certified, because that is the definition of the certification. That may seem circular, but the logic is important. Whitelists are complete by definition. Blacklists are incomplete by definition. And that’s before we get to the issue of criteria to be met vs criteria to be failed.
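
To make the failure modes concrete, here is a minimal sketch in Python (the journal names are made up, purely illustrative). A whitelist check is default-deny, so an unlisted journal simply reads as “not yet certified”; a blacklist check is default-allow, so an unlisted journal passes silently even when it shouldn’t.

    # Default-deny: only certified journals pass; anything unknown fails safely.
    WHITELIST = {"Journal of Good Practice", "Annals of Sound Review"}

    def passes_whitelist(journal):
        return journal in WHITELIST  # unlisted just means "not yet certified"

    # Default-allow: only known offenders fail; anything unknown passes.
    BLACKLIST = {"Predatory Letters"}

    def passes_blacklist(journal):
        return journal not in BLACKLIST  # unlisted offenders slip through

    # A brand-new dubious journal is caught by the whitelist, missed by the blacklist.
    print(passes_whitelist("Brand New Dubious Journal"))  # False
    print(passes_blacklist("Brand New Dubious Journal"))  # True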

Blacklists are practically unreliable

A lot of people have been saying “we need a replacement for the list because we were relying on it”. This, to be blunt, was stupid. Blacklists are discriminatory in a way that makes them highly susceptible to legal challenge. All that is required is to show either that the criteria for inclusion are discriminatory (or libellous) or that they are being applied in a discriminatory fashion. The likely redress is destruction of the whole list. By contrast, with a Whitelist the redress for discrimination is inclusion: any litigant will want to ensure that the list is maintained so they get listed. Blacklists are at high risk of legal takedown and should never be relied on as part of a broader system. Use a Whitelist, or Whitelists (and always provide a mechanism for showing that something that isn’t yet certified should still be included in the broader system).

If your research evaluation system relies on a Blacklist it is fragile, as well as likely being discriminatory.

Blacklists are inherently unethical

Blacklists are designed to create and enforce collective guilt. Because they use negative criteria they will necessarily include agents that should never have been caught up. Blacklisting entire countries means that legal permanent residents, and it seems even airline staff, are being refused boarding onto flights to the US this weekend. Blacklisting publishers seeking to experiment with new forms of review or new business models both stifles innovation and discriminates against new entrants. Calling out bad practice is different. Pointing to one organisation and saying its business practices are dodgy is perfectly legitimate if done transparently, ethically and with due attention to evidence. Collectively blaming a whole list is not.

Quality assurance is hard work, and doing it transparently, consistently and ethically is even harder. Consigning an organisation to the darkness based on a misstep, or worse a failure to align with a personal bias, is actually quite easy, hard to audit effectively, and usually oversimplifies a complex situation. To give a concrete example, DOAJ maintains a list of publishers that claim to have DOAJ certification but do not. Here the ethics are clear: DOAJ is a Whitelist that is publicly available in a transparent form (whether or not you agree with the criteria). Publishers that claim membership they don’t have can be legitimately, and individually, called out. Such behaviour is cause for serious concern and appropriate to note. But DOAJ does not then propose that these journals should be cast into outer darkness, merely notes the infraction.

So what should we do? Absolutely nothing!

We already have plenty of perfectly good Whitelists: PubMed listing, WoS listing, Scopus listing, DOAJ listing. If you need to check whether a journal is running traditional peer review at an adequate level, use some combination of these according to your needs. Also ensure there is a mechanism for making a case for exceptions, but use Whitelists, not Blacklists, by default.
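
As a rough illustration of “some combination of Whitelists, plus a mechanism for exceptions”, here is a minimal sketch. The lookup data is a hypothetical placeholder, not a real API; an actual system would query the PubMed, WoS, Scopus or DOAJ services themselves.

    # Hypothetical lookup; a real implementation would query each service.
    def listed_in(source, journal):
        known = {
            "pubmed": {"Annals of Sound Review"},
            "doaj": {"Annals of Sound Review", "Journal of Good Practice"},
        }
        return journal in known.get(source, set())

    # Manually reviewed exceptions: not yet certified, but a case has been made.
    EXCEPTIONS = {"Promising New Journal"}

    def acceptable(journal):
        whitelisted = any(listed_in(s, journal) for s in ("pubmed", "doaj"))
        return whitelisted or journal in EXCEPTIONS

    print(acceptable("Journal of Good Practice"))  # True: on a Whitelist
    print(acceptable("Promising New Journal"))     # True: manual exception
    print(acceptable("Unknown Journal"))           # False: not yet certified

Note that the default outcome for an unknown journal is “not yet certified”, never “banned”; the exceptions set is the human escape hatch argued for above.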

Authors should check with services like ThinkCheckSubmit or Quality Open Access Market if they want data to help them decide whether a journal or publisher is legitimate. But above all, scholars should be capable of making that decision for themselves. If we aren’t able to make good decisions on the venue to communicate our work, then we do not deserve the label “scholar”.

Finally, if you want a source of data on the scale of the problem of dodgy business practices in scholarly publishing then delve into Walt Crawford’s meticulous, quantitative, comprehensive and above all documented work on the subject. It is by far the best data on both the number of publishers with questionable practices and the number of articles being published. If you’re serious about looking at the problem then start there.

But we don’t need another f###ing list.


Comments

  • Walt Crawford said:

Thanks for stating the case against blacklists – in general – more clearly than I’ve been able to. Also, of course, thanks for the mentions.

  • Marc Couture said:

    I don’t really follow your reasoning about white and black lists being complete and incomplete, respectively, or journals being listed “after the fact”.

    In my opinion, a list, black or white, of XXXX scientific / scholarly journals (XXXX being a suitable qualifier attributed through a criteria-based evaluation) would be complete if all existing journals had been evaluated, which is impossible. Mere estimates of the number of journals vary by a factor of at least two. There is also the issue of timeliness: some journals are too new to have been evaluated, others may have changed since they were included or rejected. This can become problematic if insufficient resources are allocated to evaluation or revision, and if there’s no built-in regular revision process.

    The effect of incomplete lists (according to my definition) is the same for both types of lists: honest journals may be ignored, while fraudulent (or highly questionable) ones may seep through the cracks.

By the way, DOAJ does maintain what I view as black lists: (1) a list of journals displaying false DOAJ inclusion, which you mention; (2) a list of journals removed after reapplication, along with the reason (http://bit.ly/2kOPZbE). That DOAJ doesn’t display them very prominently and “does not then propose that these journals should be cast into outer darkness, [but] merely notes the infraction” is irrelevant in this respect.

    A more appropriate criterion to distinguish lists, be they black or white, is how potential entries are identified / selected. This can be done by crowdsourcing (Beall used that a lot), by professionals (Cabell’s), or by an invitation to apply (DOAJ, OASPA). Both black and white lists may use these models, which are not mutually exclusive.

More important is the quality of a list. It depends on its criteria and its transparency, as you suggest, but also on the robustness and efficiency of its evaluation process. If I were to rate Beall’s and DOAJ’s lists on these dimensions, I’d say that Beall’s was strong as to criteria and efficiency (due to crowdsourcing), but quite low as to transparency and robustness. In comparison, DOAJ is high on criteria, transparency, and robustness (see https://doaj.org/publishers), but less so on efficiency (they carry a significant backlog, partly because they have to process 300 new applications each month on top of thousands of reapplications).

My judgment of the robustness of DOAJ’s evaluation process is also based on my experience as associate editor of a small OA journal that had to reapply. They did a really thorough and helpful job. For instance, when required information was erroneous or missing, instead of flatly rejecting the application (which they could have done), they wrote to ask questions and, more generally, to help us make the small adjustments needed to comply with their criteria.

Are white lists inherently superior? I don’t think so. But DOAJ’s is certainly far better than Beall’s was and, as I agree with your other arguments (susceptibility to legal challenge, stifling of innovation), I definitely prefer and recommend them.

  • Cameron Neylon said:

    Hi Marc

Let me try and re-state it, because I think I should have defined what I meant more precisely. A list is just a list; something becomes a blacklist depending not just on how it’s put together but on how it is used.

So a blacklist is a list of things you shouldn’t use or touch. If you use something as a blacklist, you are always at risk because something might not (yet) be listed. A blacklist is open conceptually but limited in practice. Imagine you work with an explosive material that combines badly with some other materials. If you rely on the blacklist, you are at risk that something has been missed.

A whitelist, by contrast, is a list of things that are guaranteed to be safe to use. It is closed conceptually as well as being limited in practice. You can rely on those materials that have been tested not to combine badly with your explosive material. The whitelist is safe in that sense. Of course it may be put together wrongly, but that’s a separate issue.

The ethical problem with whitelists is that they are conservative in practice and tend to become discriminatory against new practice. DOAJ has been criticised for this, for being too strict and excluding valid experimentation, for instance. The challenge with whitelists is that they only rarely match the precise thing you want to test for.

    So “blacklist” isn’t just the list itself but also the way it is relied on. If you are relying on an external agency to tell you what is dangerous you always risk that they’ve missed something new. If you rely on an external agency to tell you what is safe you will be safe, but possibly overly conservative compared to what you really want.

    Does that make more sense?

  • Marc Couture said:

    Hi Cameron,

    Your example of explosive materials was helpful. The explosion is “publishing in a questionable journal”, all the content of both lists is justified, and their only limitation is omissions (note that I still conclude that both are incomplete). So, relying exclusively on a white list is secure, though it could make one miss a suitable journal, while relying exclusively on a black one is risky, because one may choose a questionable journal not included in the list.

    So far, I agree with you, but you (and I) had to make a few major assumptions: “perfect” lists, and exclusive reliance on a single list, black or white. But in practice, lists are imperfect (some more than others), and one can use both black and white lists when making a choice. For instance, while relying on DOAJ white list, I would certainly avoid the journals that falsely pretend to be in DOAJ (I consider them as fraudulent), and I would consider with much suspicion those flagged by DOAJ as “suspected [of] editorial misconduct”, or “not adhering to Best practice”.

    The quality of DOAJ evaluation process, which I described briefly in my previous comment, justifies in my opinion the use of both its white and black lists.

    These two issues (list quality and respective “safeness” of black and white lists) are maybe separate from a theoretical point of view. However, they should not be considered separately when one discusses guidelines, one of these being, as we both agree, that choices should not be based solely upon lists.

We all know the limitations of these guidelines: busy researchers enjoy (need?) simple / simplistic solutions and answers. Considering that Beall’s list (missed by many, or so it seems) was often suggested as the definitive answer, and that rumour has it that it will soon be reincarnated, I think it’s important to assess the quality of existing lists, white or black.

  • Alexey Skalaban said:

    +1

  • Cameron Neylon said:

    Hi Marc

Yes, and I had a similar discussion with @gavialib as to the pragmatics in a world of imperfect lists. So my train of thought is that a) they don’t work in theory and b) I think blacklists are unethical.

    Automatic exclusion is always bad in my view. Whereas asking additional questions, requiring more evidence or simply declining to include until further info is available is ok (if not perfect as you note).

This is why I emphasise the *use* of the list. A list might technically be a “blacklist”, but if it is used to flag potential issues for further investigation, or to focus QA efforts, then I don’t really see it as a blacklist, but as part of a QA process for a whitelist.

That may seem like splitting hairs, but what is most important from my view is the act of exclusion. Bottom line: excluding something permanently on the basis of someone else’s assessment, particularly in an automated fashion, is to my mind unethical.