Martin Paul Eve

Professor of Literature, Technology and Publishing at Birkbeck, University of London

This post is part of an ongoing series in which I intend to develop my full personal (not institutional) response to the HE Green Paper. Comments to refine this are welcome.

The Green Paper asks in Question 11:

Do you agree with the proposed approach to the evidence used to make TEF assessments - common metrics derived from the national databases supported by evidence from the provider? Please give reasons for your answer.

I agree with point 10 on page 33 of the Green Paper which specifies that any metrics used in a proposed TEF assessment must be:

  • valid
  • robust
  • comprehensive
  • credible
  • current

but I do not agree that the proposed approach will deliver this.

  1. “Employment/destination – from the Destination of Leavers from Higher Education Surveys (outcomes)”. This is an extremely dubious proxy measure for teaching excellence. It is surely not valid, robust or credible. There is no direct link between teaching and these outcomes; badly taught students at a prestigious institution, for example, are more likely to find employment than the best-taught students at a younger university. It is also the case that it will be difficult to measure students already in employment and part-time students under this type of measure, making it far from comprehensive.

  2. “Retention/continuation – from the UK Performance Indicators which are published by Higher Education Statistics Agency (HESA)”. As per my response to question 4, I believe that this may perversely encourage institutions to only recruit “safe” students who possess the likely background characteristics for successful continuation. This is likely to damage widening participation, diversity and access.

  3. “Student satisfaction indicators – from the National Student Survey (teaching quality and learning environment)”. These measures are extremely problematic and I refer you to my response to question 10 and in particular the studies of Carrell, Scott E., and James E. West; Braga, Michela, Marco Paccagnella, and Michele Pellizzari; and Bjork, Robert A., John Dunlosky, and Nate Kornell, which demonstrate that these indicators are far from valid, robust, comprehensive, or credible.

Given that the Green Paper specifically states that “there are issues around how robust [these metrics] are”, by its own admission these measures are inadequate according to the Paper’s own criteria for measuring teaching excellence.

The proposed extension metrics are extremely vague and also problematic in some cases:

  1. “Student commitment to learning – including appropriate pedagogical approaches”. This appears to be a non sequitur. Even the most appropriate pedagogical approaches cannot guarantee a student's commitment to learning.

  2. “Training and employment of staff – measures might include proportion of staff on permanent contracts”. This latter aspect is to be welcomed. There is too much labour in the academy that rests upon precarious contracts. That said, it is also important that Ph.D. candidates gain experience of teaching.

  3. “Teaching intensity – measures might include time spent studying, as measured in the UK Engagement Surveys, proportion of total staff time spent on teaching”. This is a very difficult thing to measure. TRAC and the Time Allocation Survey (TAS) will yield a crude proxy, but in some disciplines the time spent preparing for teaching differs greatly from the time spent actually teaching. Writing lectures, or reading four novels per week in preparation for a literature seminar, for example, are hard to quantify in ways adequate for this proxy measure to work well.

While, then, these metrics are deeply flawed, the call for additional evidence is also problematic. For one thing, as per my responses to other questions, it is expensive to write case studies and to present narrative, contextual evidence. For another, it is unclear how such evidence could be compared between institutions in a fair and transparent way. This portion of the proposed TEF will be very expensive to administer and run at the institutional level.