Like many countries, Italy is trying to reshape the way resources are allocated across its system of public universities - a task made all the more urgent by the economic crisis of the past few years.
A broad plan to reform the education system was approved two years ago. One of its key components was the creation - under the aegis of the Ministry of Education, Universities and Research - of the National Agency for the Evaluation of Universities and Research Institutes (known as Anvur).
With an annual budget of €7 million (£5.7 million) and led by the quantum physicist Stefano Fantoni, Anvur has been charged with setting up a system of research evaluation intended to make it possible to reward individual academics and research units on the basis of their research output. This is a process that will sound familiar to British readers and, in fact, Anvur has taken the UK's research assessment exercise/research excellence framework as its main model. But the Italian version looks rather different from its British counterpart: the qualitative peer-review component has been played down in favour of quantitative, objective parameters to an extent that seems unparalleled.
There is no doubt that the Italian university system is in dire need of reform. Research money, for example, is typically spread around rather than assigned competitively. All research units end up getting a small slice of the pie, but that slice has now become so thin that in most cases it is used for consumables rather than to support well-designed research projects.
The mechanisms that govern academic careers have also been extensively and convincingly criticised. Careers are shaped by the concorsi, opaque examinations whose often unpredictable outcomes are being challenged with increasing frequency. The format of these examinations has been changed from a national-level exercise to an essentially local one, but this seems to have produced little obvious improvement - so little, in fact, that the current reforms will see them run at a national level once again. Will this put an end to members of universities' examination committees hiring and promoting favoured candidates irrespective of the quality of their work? Don't hold your breath.
At the root of these problems is the fact that Italian academic units have had neither real autonomy nor economic responsibility for a very long time. Matters that are vital to academic life, such as hiring, promotion and grant allocation, are managed through institutional mechanisms - the concorsi being one of them - that fail to make decision-making committees responsible for the consequences of their choices. As a result, there is no clear incentive to reward research quality rather than, say, loyalty or political allegiance.
This state of affairs helps to explain why the introduction of a research evaluation system in Italy was widely seen - at least at first - as a long-awaited solution to endemic cronyism instead of an unwelcome top-down imposition. Initially, many researchers, especially those at the early stages of their careers, hailed the promise of a tenure-track system accompanied by a new evaluation system that would objectively weigh their scientific production. Merit would be restored by replacing untrustworthy colleagues with trustworthy numbers. And indeed, in the evaluation system that is currently being implemented, bibliometrics and quantitative parameters are deployed with gusto.
The limits of this approach, however, have quickly become apparent. Consider the following example. Over the summer, Anvur released lists of the scientific journals that will matter when it comes to choices about recruitment, promotions and funding. The plan was to devise lists of top-class journals that, together with a set of bibliometric indicators, would help to determine, for example, whether or not an academic should be promoted. Conditions for being named to a more senior post might include publishing a certain number of articles in leading journals and demonstrating scientific productivity above the median for that particular field of research. This sounds clear enough - if we agree on what counts as a scientific journal.
Such lists almost inevitably provoke discussion. But those published by Anvur - whose stated intention was to present titles that had an editorial policy that "explicitly refers to … the publication of original results" as well as a "scientific committee", an "editorial committee with a relevant academic component and/or a director of the magazine with an academic affiliation" - have been more controversial than anyone might have imagined. Indeed, this autumn they surfaced in Italy's mainstream media with the journalistic nickname of liste pazze ("crazy lists").
The nickname is well deserved when one considers that for some fields in the humanities, the lists include titles such as Suinicoltura (literally: Intensive Pig Farming) and Yacht Capital, a glossy monthly focusing on expensive boats (yes, really). Religious newsletters feature prominently, as do obscure local newspapers, commercial banks' glossy magazines and defunct online publications. The "crazy lists" are full of surprises, both in terms of those surreally inappropriate inclusions and, more worryingly, their exclusions. Should they remain a yardstick by which Italian academics' work is judged, they could virtually wipe out interdisciplinary fields such as the history of mathematics and science studies. Scholars in a number of fields have voiced their concern, and many proposals have already been made to amend the lists. That is certainly welcome, but it doesn't seem to get to the heart of the problem.
The "crazy lists" affair is only a symptom of a deeper malaise. For one thing, the design of Anvur's semi-mechanised system for the management of careers across public universities reveals a dramatic lack of confidence in the academic sector as a functioning entity. The agency seems to be working under the delusion that a perfect mix of quantitative parameters can be found that will settle, once and for all, the question of how to produce objective research evaluations, turning untrustworthy committee members into mere operators of the bureaucratic machine. That governmental agencies should aim for explicit and therefore mechanisable procedures to govern academic life is certainly not big news. In the Italian case, however, this mechanisation is pushed to the extreme.
How has this happened? Anvur needed to implement the government's plan to measure research output to make universities accountable; it had to do this with limited resources and time at its disposal; and it needed to respond to calls for more objective evaluation criteria. Most importantly, the agency has designed and is implementing its evaluation system in a context of low credibility and high fragmentation of the academic community.
In other words, the government and its agency have not found well-organised and self-confident interlocutors on the academic side. The disciplinary fields, especially those in the social sciences and the humanities, should have taken a leading role in designing meaningful parameters and, above all, carving out spaces to exercise their own expert discretion, which would typically take the form of peer-review procedures. Instead, the Italian academic community has approached this important process as a fragmented set of interest groups, whose response has varied from an uncritical endorsement of the new system to entrenched mistrust of any kind of research evaluation whatsoever. In these conditions, numbers soon become the only game in town.
The concerns that should have informed the making of the lists have now brought the entire system to an impasse. Each of the chosen criteria could potentially become the object of endless controversy as relevant groups weigh in and put forward arguments for or against specific bibliometric indicators. One way out would be through the action of a mutually recognised and fully legitimate authority. But Anvur cannot play that role because it is widely perceived, rightly or wrongly, as part of the political project of the government of former prime minister Silvio Berlusconi, and thus discredited. The academic community, on the other hand, seems unable to coordinate itself effectively and exert its expert authority.
Stories such as this one are typically framed in terms of neoliberal agendas driving processes of marketisation and shaping new technologies of governmentality. The Italian story, however, brings to the surface a further element, without which Anvur's extreme solution to the problem of accountability would be incomprehensible. The fact that so many of the relevant actors are convinced that quantitative methods and objective rules are the only way to fix Italian universities should give us pause for thought.
In the "crazy lists" affair, much of the controversy is about which journals should be in or out. The very idea that academic communities should know good research when they see it, irrespective of the journal in which it is published, is hardly part of the discussion. There is some irony in the fact that this should happen as we celebrate the 50th anniversary of the publication of Thomas Kuhn's The Structure of Scientific Revolutions, which is the most effective and influential brief for peer review ever written.
Leave aside the revolutions and focus instead on Kuhn's profound sociological insights into how scientific communities work. Thanks to those seminal passages, we now understand scientific life as a dense pattern of social interactions through which knowledge is produced and made credible. Scientific communities constitute themselves primarily as expert groups defined by reward systems, criteria of demarcation and the collective management of professional reputations. Even in the age of Big Science, these communities depend essentially on bonds of trust maintained through intense face-to-face interaction. Discretionary authority over what counts as scientific success or failure is key to their functioning as efficient knowledge-makers.
By agreeing that the only way to reform Italian universities is to give up entirely on such discretionary authority and rely instead on a set of allegedly objective parameters, the Italian academic community has revealed just how weak the trust bonds that permeate its scientific life have become. This is a problem that goes well beyond the question of resource allocation.
So what is to be done? The government should create the conditions for restoring universities' autonomy and fostering the academic freedom to teach and do research according to the dynamics of professional, knowledge-centred, disciplinary communities. This means reshaping the system around the mechanism of professional peer review that, however imperfect, is a better foundation for knowledge than anything else we have tried. Instead, the government is dissipating significant resources trying to design the perfect concorso or find the ultimate rules for promotion. What it should do is support the creation of an academic environment in which cronyism becomes if not impossible then certainly counterproductive, as it would entail a catastrophic reputational loss. If a research evaluation system then needs to be introduced, its form and modalities will have to be negotiated carefully with the disciplinary communities involved. This is a process that will cost time and money, but taking shortcuts will cost much more.
Unfortunately, the process in place seems to be going in the opposite direction: stripping away any remaining academic autonomy and discretion in the hopeless search for unquestionable evaluation criteria. If pursued, this plan will only accelerate the degenerative processes it intends to reverse. In fact, it will complete the removal of responsibility from committee members and academic units while making it possible for influential interest groups to work the system, as the "crazy lists" affair shows all too clearly. Young scholars, less established disciplines and pioneering research will lose out once again.
The "crazy lists" affair cannot be easily written off as the result of carelessness or malpractice. We should see it as a cautionary tale. It illustrates what can happen when responsibility and autonomy are squeezed out of academic life, and expert decision-making in matters of recruitment, promotion and funding is replaced by a system of explicit and apparently obvious rules. At this point, it is difficult to see when and how the Italian academy and the political elites that are reshaping it will be able to get themselves out of this quagmire. Expert communities and the trust bonds that constitute them are relatively easy to dismantle. To build them up from scratch, however, is an entirely different matter.