West Midlands local group meeting: The REF evaluation and journal reputation

The RSS West Midlands Local Group welcomed David Selby and Professor David Firth to speak on the Research Excellence Framework (REF) evaluation and its relationship to journal reputation. The event took place on Wednesday 30 October at the University of Warwick.

David Firth began the talk by explaining what the REF is and how it is assessed. There are three areas of assessment: quality of output, impact outside academia, and research environment; it was the quality of output that this talk investigated. In 2014 these were assessed across 36 ‘Units of Assessment’, and the talk focused on four: Mathematical Sciences, Economics & Econometrics, Physics, and Biological Sciences, selected for this study because their main unit of publication is the paper rather than the book. The quality-of-output measure was produced by assigning each submitted paper to one of five buckets: 4*, 3*, 2*, 1* or unclassified. The aggregate proportions in each bucket were published, along with the journals of the papers submitted, but the categorisation of individual papers was not.
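By way of illustration, a published output profile can be thought of as a simple mapping from quality level to percentage. The sketch below (Python, with hypothetical numbers) shows the form of the data that is released; the per-paper ratings behind it are not.

```python
# Hypothetical example of a published REF output profile for one
# institution in one Unit of Assessment: only these aggregate
# percentages (plus the list of journals submitted) are released;
# the star rating of each individual paper is not.
output_profile = {
    "4*": 25.0,            # world-leading
    "3*": 45.0,            # internationally excellent
    "2*": 25.0,            # recognised internationally
    "1*": 5.0,             # recognised nationally
    "unclassified": 0.0,
}
assert abs(sum(output_profile.values()) - 100.0) < 1e-9
```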

At this point, David Selby picked up the baton. He chose to focus on Economics and Econometrics because the field is relatively self-contained, has a widely recognised ‘top five’ of highly prestigious journals, and had been the subject of previous study. The goals of the study were twofold: to measure the extent to which REF2014 output profiles are associated with journals, and to infer a REF star rating for each journal. Because only aggregate results are published, this calls for ecological inference techniques, which date back to at least the 1950s; a simple version was explained using an election example, in which individual voting behaviour must be inferred from constituency-level totals. The model used in this case was a Poisson binomial, its parameters being the probabilities that a paper from a particular journal is rated 4*. An appropriate prior was placed on these probabilities and the model was fitted using Stan.
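To make the model concrete: if each submitted paper is rated 4* independently, with a probability determined by its journal, then an institution's count of 4* papers follows a Poisson binomial distribution. A minimal sketch of that likelihood, assuming hypothetical journal names and probabilities (the talk's actual model was fitted in Stan, with priors on the probabilities), is:

```python
def poisson_binomial_pmf(probs):
    """PMF of the number of successes among independent Bernoulli
    trials with (possibly unequal) success probabilities `probs`,
    computed by the standard dynamic-programming recursion."""
    pmf = [1.0]  # P(0 successes among 0 trials) = 1
    for p in probs:
        new = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            new[k] += mass * (1 - p)   # this trial fails
            new[k + 1] += mass * p     # this trial succeeds
        pmf = new
    return pmf

# Hypothetical per-journal probabilities of a paper being rated 4*.
p_4star = {"Econometrica": 0.9, "J. Applied Stats": 0.3}

# An institution submitting 3 papers to Econometrica and 2 to
# J. Applied Stats: one Bernoulli probability per paper.
papers = ["Econometrica"] * 3 + ["J. Applied Stats"] * 2
pmf = poisson_binomial_pmf([p_4star[j] for j in papers])

# Likelihood of a given aggregate count, e.g. exactly three 4* papers:
print(pmf[3])
```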

The resulting probability estimates had wide uncertainty intervals, but their means agreed reasonably well with the journals’ anecdotal reputations. To assess the fit of the model, it was then used to predict the REF2014 output and funding profiles of each institution from the journals of its submissions alone, and these predictions fitted the observed profiles well. In Mathematical Sciences the predicted output profile was less close, but the predicted funding profile was very accurate. Comparing results across the four Units of Assessment, Biological Sciences showed notably more uncertainty.
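In outline, such a check could work as follows: given estimated per-journal probabilities over the star levels, an institution's predicted output profile is the average over its submitted papers, and a predicted funding profile follows by weighting the quality levels. A minimal sketch, assuming hypothetical probability vectors and the roughly 4:1 weighting of 4* to 3* (with nothing for lower levels) used in the English QR funding formula after REF2014:

```python
# Hypothetical estimated probability vectors over star levels, per journal.
p_star = {
    "Econometrica":     {"4*": 0.80, "3*": 0.15, "2*": 0.05, "1*": 0.00},
    "J. Applied Stats": {"4*": 0.20, "3*": 0.50, "2*": 0.25, "1*": 0.05},
}

papers = ["Econometrica"] * 3 + ["J. Applied Stats"] * 2
levels = ["4*", "3*", "2*", "1*"]

# Predicted output profile: average the per-paper probability vectors.
predicted_profile = {
    lvl: sum(p_star[j][lvl] for j in papers) / len(papers) for lvl in levels
}

# Funding weights are an assumption here, for illustration only.
weights = {"4*": 4.0, "3*": 1.0, "2*": 0.0, "1*": 0.0}
predicted_funding = sum(weights[lvl] * predicted_profile[lvl] for lvl in levels)

print(predicted_profile)
print(predicted_funding)
```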

The talk provoked a lively discussion. The presenters noted that there was good reason to believe the quality-of-output measure could not be reproduced from journals alone, with journal reputations changing over time and the large uncertainty of the estimates posing particular challenges. Several audience members asked whether the preferences and backgrounds of the assessment panel could be discerned with related methods, perhaps using more data than this study considered.
