As the higher education and research policy systems grind through yet another cycle of assessment, formal consultation and review have become de rigueur. But review fatigue should not detract from the fact that the Stern consultation and other public debates around research assessment are opportunities to take a fresh look at where we are and why.
Over the past few cycles of research assessment, several major changes have permeated the way in which we speak of research in the public domain, and the ways in which we frame expectations of value or benefit from higher education institutions. These changes included extending the purposes of the research assessment exercise and the research excellence framework from quality-related funding allocation to institutional benchmarking and the shaping of institutional and individual behaviour. At the same time, the funding formulae oscillated between spread and concentration of funding, and the system responded with similar moves between inclusiveness and selectivity in preparing submissions. The most recent major change was the introduction of impact as one of the key assessment criteria, accompanied by tectonic shifts across the system, both structural and discursive.
As the assessment frameworks changed, many of us engaged in increasingly technical debates about the best ways to capture and measure the quality and impact of research. We debated peer review, metrics, bureaucratic burden, cost-effectiveness and other, largely technical and specialist, aspects of the exercise. Worthwhile as this discussion has been, one of its subtler outcomes has been to deflect attention from issues of principle. In the background of assessment changes, what was being renegotiated was not so much the technical merits of, say, bibliometrics versus peer review, as the broader principles underpinning the relationships between universities and the state, including challenges to the Haldane principle and dual support. In many ways, the very public debates around technical issues, for example the use of metrics in assessing research and research impact, divert attention from the key debates around these principles.
The introduction of impact assessment to the REF is a clear example of the dynamic described above. A lot of the discussion around impact has focused on the relative merits and limitations of different measures and indicators. Much less attention has been paid to the conceptual and – dare I say it – philosophical challenges of measuring impact. My own theoretical and empirical work on research impact has identified several conceptual challenges to its assessment, including the challenges of relationships, texture and narratives.
First, research impact is relational. Its tracing needs to take into account the network of interactions and exchanges through which it is generated. There is no single path of linear causality through these networks, and this complicates the reporting of impact, when it comes both to giving an account of non-quantifiable impacts, and to assigning meaning to quantitative indicators. The networks through which impact arises create its horizon, temporal as well as relational. One of my interviewees commented on how misguided pressures to pursue and demonstrate short-term impact may shift the focus of attention to small-scale technological or methodological developments, thus reducing the “horizon of possible benefit” from, for example, theoretical research. A relational understanding of impact helps to make sense of the dynamic, rather than static, nature of potential benefits from research, and also of the different timescales for meaningful conversations about impact in different fields.
Second, impact is not a flat concept, simply to be chunked into practical domains (impact on health, on education, on the economy and so on), but a constellation of textured and relational practices. A model developed with colleagues at the University of Oxford is based on primary research that identified five layers of practice and meaning-making that constitute research impact:
- Connectivity: the partnerships built into research design and processes, including co-production of knowledge
- Visibility: the wider communication of research and its reception by relevant audiences
- Use and exploitation: the conversion of research into practical tools, products and services, and their uptake by relevant user groups
- Benefits: the wider outcomes of research engagement and uptake, for subgroups of the population
- Diffusion: the cultural and discursive percolation of research concepts and insights.
The strength (scope and depth) of impacts in each of these layers varies and may be judged on its own terms, rather than by comparing, for example, measures of visibility with measures of exploitation; there is no linear progression from one to the other. Such a textured approach to impact can help to clarify the issues of attribution and contribution, and the allocation of responsibilities for demonstrating impacts. Although the scope of impacts may increase as we go through the list of these domains, their traceability to a specific piece of underpinning research decreases. Trying to capture and trace benefits and diffusion is particularly problematic, and evidence from the 2014 REF shows that it was largely seen as a risk to REF success, however defined. The REF criteria were interpreted in many parts of the system as prompts to measure what we could (largely, connectivity and use), in order to demonstrate what counted in the exercise (use and benefit), while leaving out great parts of what we also cared about (benefit and diffusion), because we perceived them as too risky in the context of high-stakes assessment. We need to think hard about whether this is the discursive space we want to inhabit.
And third, impact is to be told rather than proved. The recounting of impact is a form of narrative construction. Rather than fear storytelling, it is important to understand how impact is constructed narratively and how even the most technically astute metrics lack meaning if taken out of context. There is plenty of evidence from the REF case studies that the retelling of impact follows particular narrative structures and patterns that are strongly shaped by contextual factors, including institution-level interpretations of the assessment framework. However difficult they may have been to construct, case studies are the best way we currently have of narrating impact in a contextualised and meaningful way.
As a consequence of the implicit renegotiation of principles for research governance noted above, a grudging consensus has been forged around the legitimacy of performance-based funding for research, of increased concentration of funding, and of short-term accountability for academic and non-academic impact, as conditions of professional autonomy and academic self-regulation. It is, thus, not only an intellectual and practical but also an ethical responsibility that at times of stock-taking we look again, long and hard, at the nature of this consensus and the ways in which it may frame the horizon of possibilities for the future of higher education-based research in the UK.
Alis Oancea is associate professor in the philosophy of education at the University of Oxford.