
Martin Paul Eve

Professor of Literature, Technology and Publishing at Birkbeck, University of London



I’ve been sitting on the below piece for a while, though I have written about Academia.edu before. In recent days, Gary Hall and Kathleen Fitzpatrick have both written critiques of the site, so I thought it worth adding this specific angle rather than just letting it sit on my hard drive.

In recent days, Academia.edu, the well-known university social networking site, has begun asking senior academics to become “editors”. This involves, according to correspondence, “recommending” between four and ten articles per month to others in the field. In itself, this may serve as a useful discoverability system that creates a network of trust and circulation; a structure of trusted curation is important and worthwhile. I am concerned, however, by the way in which this is being implemented.

Certainly, it is true that there are many documented faults with peer review as it currently stands. The process is eminently subjective and may problematically exclude work that is innovative. However, eminent thinkers on the reform of this system, such as Kathleen Fitzpatrick, have been careful to point out that while a social system of recommendation and review (“peer-to-peer review”) might work better than existing structures, such measures must be carefully designed.

Specifically, I believe that in an academic environment:

1. Any such system should not specify a quantified level of engagement (if there are not four good papers to recommend, then the quota becomes a false measure).

2. Such a system should not be a binary “recommend or ignore” but should allow for qualitative discussion and signalling.

3. Any entity implementing such a system should be open and transparent about the ranking measures it uses and about how algorithmic processes will sort these recommendations.

4. Trust, reputation and recommendation metrics require the evaluation of reviewers and transparency about this process: “in using a human filtering system, the most important thing to have information about is less the data that is being filtered, than the human filter itself: who is making the decisions, and why. Thus, in a peer-to-peer review system, the critical activity is not the review of the texts being published, but the review of the reviewers” (Fitzpatrick).
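To make these criteria concrete, here is a minimal sketch of what a recommendation system honouring them might look like. All of the names and the trust formula are hypothetical illustrations of the four principles, not a description of any real site’s implementation.

```python
from dataclasses import dataclass

# A recommendation carries a qualitative comment (criterion 2),
# not just a bare binary "recommend" signal.
@dataclass
class Recommendation:
    reviewer: str
    article_id: str
    comment: str

# Reviewers are themselves evaluated (criterion 4): endorsements
# out of total meta-evaluations form a public trust score.
@dataclass
class Reviewer:
    name: str
    endorsements: int = 0
    evaluations: int = 0

    def trust(self) -> float:
        # The weighting is openly documented (criterion 3): a simple
        # endorsement ratio, with a neutral prior for the unevaluated.
        if self.evaluations == 0:
            return 0.5
        return self.endorsements / self.evaluations

def ranked(recs: list[Recommendation],
           reviewers: dict[str, Reviewer]) -> list[Recommendation]:
    # No quota is enforced anywhere (criterion 1): an empty list of
    # recommendations is a perfectly valid input.
    return sorted(recs, key=lambda r: reviewers[r.reviewer].trust(),
                  reverse=True)
```

The point of the sketch is that every sorting decision is traceable to a published formula and to named, evaluated reviewers, which is precisely the transparency Fitzpatrick’s “review of the reviewers” demands.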

I am unsure that the current approach of Academia.edu addresses these matters. I also worry that a careless implementation may cause an academic backlash against the potential benefits of transforming peer review, adding social discoverability, and building curatorial approaches.

Finally, Academia.edu is a useful site, but it is another private company operating in the academic space (funded by Khosla Ventures, True Ventures, Spark Ventures, Spark Capital and Rupert Pennant-Rea). As with other private networking sites, such as Facebook, or as with any of the private academic publisher platforms, the action of “liking” or “recommending” is part of a data-collection exercise that profiles user behaviour. While this does not have to be malign, recent reports have called for the use of “responsible metrics”. I do not yet know what Academia.edu wishes to do with this data and so cannot evaluate whether its use will be responsible or otherwise. It is also the case, here, that academics will be freely giving labour to another for-profit entity, a trend seen increasingly in the research-publication space. I would prefer the evolution of peer review to be in the hands of organizations whose motivations are not divided between the good of scholarship and venture capital exit strategies.