This post is part of an ongoing series where I intend to develop my full personal (not institutional) response to the HE Green Paper. Comments are welcome to refine this.
The Green Paper asks in Question 27:
How would you suggest the burden of REF exercises is reduced?
The burdens of REF, as set out in many critiques (although, as I noted above, many criticisms of REF are criticisms of any audit culture), can be summarised as:
- Senior staff time in evaluating and coordinating submissions (mostly due to the selectivity of outputs and of staff)
- Intra-institutional coordination of “strategy”
- The hiring in of “critical friends” (senior academic consultants) to appraise staff members in the name of selectivity
- Pressure on research staff (sometimes to extreme and unhealthy degrees)
- A distortion of the Haldane principle and academic freedom through the pressure put on researchers to pursue specific projects
- The peer-review process of REF panels
- HEFCE staff overheads in reporting (these are slight)
- The production of impact case studies
The true challenge in this space is that whatever is measured is incentivized. Measuring affects behaviour. Measuring impact, for instance, adds a burden, but it also incentivizes institutions to consider the broader public good of their research work. This works in reverse too; removing a measure removes its incentive and can lead to perverse behaviour. To remove the burdens of REF, future policy should think of researchers, not institutions. What will best allow researchers to get on with their jobs without their institutions placing them under unreasonable and unmanageable levels of pressure? Likewise, how can a future REF eliminate the institutional “gaming” that constitutes the majority of decentralized costs in the exercise?
Take, then, the most obvious “cost-saving” approach that could be considered for REF: removing or automating selectivity (both at the output and researcher level). At the output level, this would have the benefits of eliminating institutional gaming (and the expensive consultants that institutions hire in to supposedly evaluate work) and simplifying the submission process. However, it would also have damaging consequences (and there is a serious methodological challenge: there is no appropriate metricised approach for automated selection). Researchers would be encouraged (probably coercively, by their institutions) only to publish work that would be selected, so that there is no risk of random or automated sampling picking an output that might fare less well in front of a REF panel. This would be devastating for the progress of knowledge, which relies upon incremental advances; “standing on the shoulders of giants” is not so apt a metaphor as a series of ordinary-sized individuals all standing on each other’s shoulders.
Allowing researchers as individuals to select which outputs to submit could aid in this process, perhaps with a whistleblowing component introduced for when faculty feel that their institutions are influencing such decisions. A future REF could also specify that institutions may not hire in external consultants to evaluate work for the submission to REF, again with a whistleblowing procedure. Likewise, a ban on internal “mock” exercises could go some way. Of course, all of these components will make institutions feel unstable. These exercises are conducted in order to try to gain some certainty about the submission and its likely success. Since funding allocations depend upon this, such measures will make institutions deeply unhappy.
Under certain circumstances, I feel more optimistic about eliminating a burden through the removal of staff selectivity, although this is difficult. Removing selectivity at the researcher level could come with some benefits. Institutions spend a frustrating amount of time deciding which individuals should be submitted, carefully weighing this against the impact case study ratio and so on. This leads to much pain for individual staff members that detracts from their actual work. It also leads to a huge amount of stress; when institutions pressure individuals to produce research on demand – as though it were possible to know the results of science and scholarship in advance – there are individuals who are treated appallingly under the current regime. Furthermore, institutional hiring practices are utterly distorted by REF-ability. But the flip side is the question of what removing the (arbitrary) count of articles/books to be submitted would add. Is REF to be a measure of standardised productivity, of quality, or of both? What if a researcher produces a single, but brilliant and epoch-changing, study over a five-year period? If institutions are submitting all researchers, what of those who don’t meet the threshold of outputs? What will be incentivized through such an approach?
I wish also to reiterate here that the peer-review process cannot be replaced by metrics. Attempting this would be devastating for the quality evaluation of UK research. It would also incentivize institutions and researchers into new perverse behavioural patterns, such as citation rings, purchasing metrics packages (from Elsevier or Thomson Reuters), and encouraging researchers to publish only in venues (and to produce only material) that will fare well by the numbers. I am also wary of removing the impact component; this is a public good.
Again, to reiterate in closing, I would urge any future REF design to focus on the ways in which institutional gaming can be removed so that pressure is lessened at the individual staff level. Researchers go into their fields because they want to do good in the world. They are usually joyful at the prospect. Institutional bullying, in an attempt to second-guess REF, can suck this joy out of their work, which is not good for the production of research (which is best motivated by joyful obsession). This is the burden that should be removed.