One of the aspects that people seem to disagree with most, when I write or talk about open access, is my claim that there is a problem with journal “prestige”. Ventriloquizing somewhat on behalf of the stereotypical conversant in this debate: he or she usually accepts (as do I) that, in an economy of journals dealing with niche subjects, some will be held in esteem while others will fare less well. He or she also usually thinks that this is a useful feature of the system; it is helpful to know that Journal X will feature high-quality material. Where we usually diverge is that I think there are huge problems with the current systems of prestige and that we should move to an article-level (or author-level) method of appraisal.
For easy reference, as this comes up time and time again, here are three core reasons why I think the system of journal prestige is problematic (there are nuances to each of these; they are presented up-front here to convey the core message):
- Economic reasons: journals that hold prestige are usually owned by for-profit commercial publishers, and there is pressure from assessment mechanisms to publish in high-prestige journals. This means that the monopoly of entities making astounding profits for their shareholders, through the sale back to academic libraries of material that academics signed over to those publishers without compensation, can continue unabated.
- Dissemination reasons: I do not believe that the highest-prestige journals offer the best dissemination. Despite arguments regarding discoverability (in which I am more versed than many), the logic that a pay-for-access version could always and intrinsically be better disseminated than an open-access equivalent does not seem plausible to me. Discoverability is about finding material; dissemination is about being able to access it once found, and the two should not be conflated.
- Quality reasons: using a broad proxy measure (a journal) to evaluate whether material is good (that is, to evaluate the peer-review practice) seems odd. Why not simply base your judgement on the editor (and, if he or she moves journal, "transfer" the prestige) and on the identity of the peer reviewers who endorsed the work, whose names could be made public? (Yes, I know there are problems with this too. See: Eve, Martin Paul, ‘Before the Law: Open Access, Quality Control and the Future of Peer Review’, in Debating Open Access, ed. by Nigel Vincent and Chris Wickham (London: British Academy, 2013), pp. 68–81.) Often, also, it is the same reviewers across multiple journals (especially in small sub-fields), so the choice of where a piece was submitted frequently will not impinge upon the quality control in any way.
There are many (sometimes tiresome) counter-arguments against Article-Level Metrics, mostly centring on the ways they can be gamed. One need only look at how people try to game the Impact Factor, though, to see that this is hardly exclusive to ALMs. Furthermore, in disciplines such as mine, where bibliometrics are rare, what I am actually talking about when I say ALMs is "how good is the author and who said they are good". This can, in my view, be achieved far better than through journal brand as a marker, and would most helpfully be conveyed through the systems in my third point above: which editor accepted the piece and who reviewed it (in the cases of pre-publication review that are the norm in most humanities disciplines).