Martin Paul Eve

Professor of Literature, Technology and Publishing at Birkbeck, University of London

Yesterday, I attended my university’s official training course for Ph.D. examiners. It was an extremely useful day to familiarize myself with the regulations at the University of London and to hear about incoming procedures for independent viva chairs.

However, one thing did leap out at me that I’d forgotten but that, in light of much thinking about scholarly communications, struck me as interesting. One of the criteria for the award of a Ph.D. is that the work should “merit publication”.

I duly raised my hand and, in a gesture that others might have thought facetious, asked “where?”

This was not just me being a contrarian. The criteria for different journals in different fields can vary wildly. Should it merit publication in an ultra-selective journal, perhaps like Nature, Cell, or Science? (In my discipline, perhaps PMLA, Modern Fiction Studies, Textual Practice, etc.) Or should it merit publication in PLOS One, where the criterion is “technical soundness”? What about the swathes of low-quality journals that will publish work without any pre- or post-publication review process?

In short, the question that came out of this for me was what “meriting publication” was supposed to measure. Was it a measure of novelty? If so, then using, say, PLOS One as the benchmark venue would be insufficient. Or was it supposed to be a measure of soundness? If so, then it would be unfair to hold the work to the standards of Nature. Or was it supposed to be a measure of peer acceptance? If so, then questions arose for me as to “which peers?” A journal is, as Cameron Neylon has framed it, primarily a community. So whichever journal you choose as the frame for this question of “meriting publication” is likely to serve as a proxy for the ideologies, standards, and preferences of an imagined/envisioned sub-group of peers.

It also struck me as a fairly subjective criterion, but one that is double-edged. If a candidate has not published anything before arriving at a viva, then the examiners can choose the imagined peer group against which the work should be measured. If, however, the candidate can say that parts of the work have been published, then not only has the candidate pre-determined the peer group/venue against which it is valid to benchmark the work, but he or she has also pre-demonstrated the fulfillment of this criterion, at least for that part of the thesis.

In any case, as questions about peer-review practice continue to surface (and I think here of Kathleen Fitzpatrick’s notion of “peer-to-peer review”), this aspect of Ph.D. assessment will become increasingly tricky. I suggest that it is already too vague a criterion and that more specific guidance should be given.