The views below are a personal addendum to other responses to the HEFCE metrics consultation that I have valued and endorsed. They are not necessarily the views of my institution.
In addition to contributing to the University of Lincoln's response and endorsing some of the concerns raised by Dr Meera Sabaratnam and Dr Paul Kirby in theirs, I wanted to write as an individual to flag some specific areas in which changing practice – particularly the rise of open access – should be considered with respect to research metrics. The shift to open access provides both new metrics and new challenges in their evaluation. With broader access, for example, comes the potential to use geo-location mechanisms to ascertain whether visitors are intra- or extra-institutional, and to evaluate the international reach of a paper or book, thereby contributing to the impact agenda. There is also the potential to track the discoverability route through which a paper or book was reached, including social media, which is far less feasible in a toll-access ecosystem.
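To make this concrete, the two measures mentioned above can be sketched from an ordinary web-server access log. This is a minimal illustration only, not a description of any existing analytics system: the institutional IP range, the list of social media hosts, and both function names are my own illustrative assumptions.

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical institutional address block -- a real deployment would use the
# institution's registered IP ranges, and a GeoIP database for country-level reach.
INSTITUTIONAL_NETWORKS = [ipaddress.ip_network("192.0.2.0/24")]

# Illustrative set of referrer hosts treated as social media.
SOCIAL_MEDIA_HOSTS = {"twitter.com", "t.co", "facebook.com", "reddit.com"}

def classify_visitor(ip: str) -> str:
    """Label a download as intra- or extra-institutional by its source address."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in INSTITUTIONAL_NETWORKS):
        return "intra-institutional"
    return "extra-institutional"

def discovery_route(referrer: str) -> str:
    """Infer how a reader reached the paper from the HTTP Referer header."""
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    if not host:
        return "direct"
    if host in SOCIAL_MEDIA_HOSTS:
        return "social media"
    return f"other ({host})"
```

Even this crude split shows why open access matters for such metrics: the Referer header is only informative when the link resolves to a freely readable copy rather than a paywall.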
However, these measures, as always, capture access and downloads rather than reading itself. Citation metrics could be of use but are also dangerous. Serious consideration should likewise be given to the lack of adequate DOI infrastructure at many scholar- and library-run OA journals, given the cost of membership and the contractual obligations imposed by the Crossref/PILA agreement; this is a serious hindrance to metrics in an OA environment. There is the potential, though, through text and data mining of open access articles and books – which, even after the recent amendments to copyright law following the Hargreaves review, remains an easier task in an OA world – to conduct better evaluation of positive and negative citations through semantic parsing of the surrounding text.
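The kind of semantic parsing gestured at above can be sketched, in deliberately toy form, as a classifier over the sentence surrounding a citation. The cue words and the function name are my own illustrative assumptions; real citation-polarity work would use trained models over far richer features than keyword matching.

```python
# Illustrative cue phrases only -- a serious system would learn these, not list them.
POSITIVE_CUES = {"builds on", "confirms", "demonstrates", "supports"}
NEGATIVE_CUES = {"fails", "overlooks", "contradicts", "disputes"}

def citation_polarity(sentence: str) -> str:
    """Crudely label the stance of the sentence in which a citation appears."""
    text = sentence.lower()
    pos = sum(cue in text for cue in POSITIVE_CUES)
    neg = sum(cue in text for cue in NEGATIVE_CUES)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

The point of the sketch is simply that such parsing requires the full text: it can only run at scale over a corpus that is legally and technically minable, which is the OA advantage noted above.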
Most crucially, however, the economic implications of the shift to open access have shown the extent to which proxy measures for value (journal brand or publisher name) are dangerous metrics that inhibit change and innovation and reinforce the status quo. They are substitutes for quality assessment, rationed according to the scarcity of academic labour: we trust “prestige” because it means we do not have to re-evaluate the article every time, and because it allows us to “outsource” our hiring processes to publishers, as a former director of Harvard University Press put it. The design of future metrics should ensure that measures of prestige are targeted at the author and editor level rather than the publisher level, so that commercial advantage cannot be hoarded by publications and publishers to distort the economics of scholarly communications. In other words, those who bestow metrical authority hold a form of social power that should be tied to academics rather than to commercial entities.