It is incumbent on the Higher Education Funding Council for England (HEFCE) to state how it plans to remove the well-documented biases that arise from its choice of performance indicators for the forthcoming research assessment exercise.
Research performance is to be judged by, among other means, (a) the amount of grant income a department is able to attract and (b) each academic's best four publications. The combined use of these flawed indicators will effectively result in triple counting, and will penalise departments whose research lies mainly in areas where grants are scarce.
An indicator based on grant income inevitably results in double counting that spuriously inflates the performance of grant-rich departments. The same item of work receives credit twice, first as anticipated output in a research proposal (measured by grant income) and second as actual output at a later date (measured by number and quality of publications).
As well as introducing bias, a measure based on grant income also reduces the accuracy of HEFCE's overall appraisal of departments. Because conjecture about the likely worth of future output is intrinsically less accurate than assessment of the same work on its completion, it follows that grant income contains less information about research achievements (and none that is new) than do final output measures such as the number and quality of publications.
The use of an indicator based on each academic's best four publications will repeat a sampling error present in the first University Grants Committee exercise. Other things being equal, an academic with a research assistant obtained through a grant may be expected to produce a greater volume of output than an unaided academic. The grant-aided academic's best four publications are therefore statistically likely to be of higher quality than the unaided academic's, even when the quality of both academics' output is described by exactly the same probability distribution (with the same mean and variance). The bias is large: it equals 0.73 standard deviation units when, for example, the grant-aided academic produces eight publications and the unaided academic produces four.
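The size of this selection effect can be checked with a minimal simulation sketch, assuming for illustration that publication quality for both academics is drawn from the same standard normal distribution; the sample sizes (eight papers against four) follow the example above, and the observed gap matches the quoted figure of about 0.73 standard deviation units.

```python
# Sketch: selection bias from judging the "best four" publications when one
# academic produces more papers than the other. Assumes publication quality
# ~ N(0, 1) for both academics (an illustrative assumption, not a claim
# from the letter).
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000

# Grant-aided academic: eight papers, of which the best four are assessed.
aided = rng.standard_normal((n_trials, 8))
aided_best4 = np.sort(aided, axis=1)[:, -4:].mean()

# Unaided academic: four papers, so the "best four" are simply all four.
unaided = rng.standard_normal((n_trials, 4))
unaided_best4 = unaided.mean()

print(f"grant-aided best-four mean quality: {aided_best4:.2f}")    # ~0.73
print(f"unaided best-four mean quality:     {unaided_best4:.2f}")  # ~0.00
print(f"bias in SD units:                   {aided_best4 - unaided_best4:.2f}")
```

Both academics draw from an identical quality distribution; the apparent advantage arises purely from selecting the top four of a larger sample.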
For valid inter-departmental comparisons to be made, it is important that all of a department's publications are considered. Comparisons based on subsets of publications are subject to a variety of biases. Grant income contains no new information about output quality; it contributes only error variance.
Raphael Gillett
Department of Psychology