One of the things we have to contend with at the Open Library of Humanities is the fact that libraries will evaluate our performance and decide whether or not to renew their subscriptions/memberships. This makes sense and is only to be expected.
A few thoughts struck me about this, though. One of the core questions that some librarians have been asking is: how many articles from our researchers are appearing in these journals?
This question makes sense in an age of open-access article processing charges (APCs). After all, if you had paid for a "big deal" with a publisher to cover all APCs for your university, you'd want to know that your researchers were using the service.
Our model, though, doesn't quite work like that. We cannot do our publishing without the sponsorship of libraries, but it is explicitly not an APC. It is more like a subscription. If this were an APC and every university that participated had multiple researchers publishing in our journals, there is no way that the model could work. Remember that our membership fee, which covered the 909 articles (including back content) that we published in the first year, was less than the cost of a single APC at some commercial publishers (for some smaller institutions, it was less than a quarter of a single APC elsewhere). If every institution evaluates us on an APC basis, though, this model will not work. Our model is designed to be a redistribution mechanism that undoes the cost concentration of article processing charges, and that redistribution is not equal across all participating institutions.
On the other hand, in the age of the subscription, usage was the measure by which librarians would decide where to cut, rather than publishing output; fair enough. Our usage figures are pretty good and the average cost per reader per institution is $0.008. So we're pretty efficient there. This strikes me as a far better way to appraise our particular model (although I would say that, as we do well by it). That's not to say that there aren't challenges, though. For example, it's very difficult for us to provide any meaningful per-institution usage figures, since we do not require any login (we're 100% OA). We could estimate this by IP address, but that would miss a lot of traffic.
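The cost-per-reader figure above is simply an institution's membership fee divided by the number of readers it gains access for. A minimal sketch of that arithmetic, using entirely hypothetical numbers (not actual OLH membership or usage figures):

```python
# Illustrative sketch of the cost-per-reader calculation described above.
# All figures here are hypothetical assumptions, NOT actual OLH numbers.

def cost_per_reader(annual_membership_fee: float, annual_readers: int) -> float:
    """Average cost to an institution per reader over one year."""
    return annual_membership_fee / annual_readers

# Suppose (hypothetically) an institution pays $800 a year and the
# platform serves 100,000 readers in that period:
print(cost_per_reader(800.0, 100_000))  # → 0.008, i.e. $0.008 per reader
```

The point of the division is that any fixed membership fee becomes very small per reader once a fully open-access platform has no paywall limiting its readership.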
In the age of the open-access APC, though, librarians seem to want something else as well: behavioural change from researchers. Some librarians, I feel, are willing to pay hybrid (and pure-OA) APCs so that researchers grow comfortable with publishing in an OA environment. After all, the more success stories we have, the more OA will become a reality. So, it strikes me, another reason that we are asked "how many of my university's researchers are publishing here?" is to ascertain whether OA is growing.
Again, in some ways we fare well by this measure. We believe that not having author-facing charges is the best way to advance open access in the humanities. By continuing this tradition while also making the work openly accessible, we hope to change researchers' minds about OA. By flipping journals and partnering with university presses, we also gradually change the culture. So I think we do end up with cultural change here. However, again, we struggle to demonstrate it within all of our supporting institutions in year one. That said, it is hardly surprising that, in the first year of a new publisher's life, we haven't yet changed the world. Most publishers with whom I speak say it takes approximately eight years to know whether a new journal will succeed. Librarians seem to know this too: we still have a 100% renewal rate.
I'm not sure where all this leads, except to two observations. The first is that there are high expectations on new, young OA publishers to demonstrate that they have changed the world immediately. This is tough. The second is the curious nature of our non-APC model and the ways in which it can be evaluated. I suspect it will fare poorly if compared to OA APC "big deals", but it doesn't do too badly if looked at from a usage perspective relative to price.
What I do know is that, as we come to renew our agreement with Jisc Collections in the UK next year, we will need to agree on a set of metrics by which progress can be benchmarked. I think this could be really useful for how we think about the relative returns of different models for gold open access. Of course, when we go through that process, as ever, I will write about it, openly, in public.