
U-MultiRank ambitious, but lacking critical mass to face up to World University Rankings



Times Higher Education rankings editor Phil Baty outlined his initial views on the European Commission's new project to rank the world's universities, U-MultiRank, in a letter published by EuroScientist magazine on 9 June 2014. The letter is reproduced in full here.

Back in 2012, the UK's universities and science minister David Willetts warned that the European Commission's project to develop a new approach to global university rankings, U-MultiRank, risked being dismissed as a self-serving exercise.

It could be viewed as "an attempt by the Commission to fix a set of rankings in which European universities do better than they appear to do in the conventional rankings", he told a House of Lords European Union Committee inquiry on the modernisation of higher education. Two years on, now that the first ranking is live and we can see which institutions have – and, more importantly, which have not – chosen to join the bold experiment, the minister's warning seems remarkably prescient.

Low participation level

Of those institutions actively choosing to take part in the project by providing institutional data, there are currently only nine from the US, four from China and Hong Kong combined, three from Brazil, two from Canada, one from South Korea and one from India. In contrast, there are 57 participants from France and 40 from Germany. Indeed, there are more active participants in U-MultiRank from Latvia (11) than from India, China, Hong Kong and South Korea combined.

In the US, the Franklin W. Olin College of Engineering has signed up, as has the American David Livingstone University of Florida, but no data have been supplied by Harvard, the Massachusetts Institute of Technology, Stanford, the University of California, Berkeley or the University of California, Los Angeles. Even within Europe, many very prominent institutions have declined to join the project. The UK's nine active participants include the universities of Chester and Hull, but not the universities of Oxford or Cambridge. There is no University of Edinburgh or St Andrews; indeed, not a single Scottish institution agreed to take part.

In the Netherlands, the University of Leiden chose not to submit data, even though its famous Centre for Science and Technology Studies is a partner in the U-MultiRank consortium. Indeed, the entire League of European Research Universities, representing Europe's most prestigious research-intensive institutions, declined to take part in the project, warning in 2013 that it is "at best an unjustifiable use of taxpayers' money". But instead of accepting their decision to opt out, the compilers of U-MultiRank simply scraped publicly available research and patent data for these institutions and included them in the ranking anyway, albeit with inevitable major gaps in the statistics.

So it is quite clear from the outset that U-MultiRank, after six years of development and €2 million of public funding, still faces a major challenge in convincing institutions of the merits of the system.

U-MultiRank concerns

The people at U-MultiRank say that it is early days, and that they expect more institutions to join future data collection rounds once the benefits of the system become clear, but there are some worrying signs.

One high-profile participant, Sweden's Royal Institute of Technology (KTH), has declared that supplying data to the project "was not really worth the effort". Per-Anders Östling, a senior administrative officer at KTH, said that the data U-MultiRank required were "very time-consuming to collect" and involved "a large number of staff". He notes: "considering the work that KTH put into this project, and given the results for the subject rankings, it was not really worth the effort. I do not think this ranking will be a great success."

Overall, he finds the end product disappointing. "The tool as presented on the U-MultiRank website – the finished product – is difficult to use and makes great demands on the user," he writes. "I don't believe the tool is especially useful for an 18-year-old student who is considering which university to apply for."

Comparison with THE ranking

I should declare an interest. As an education journalist with more than 20 years' experience, it is my concern for transparency and accountability, and for holding all university rankings up to public scrutiny – not least one funded with taxpayers' money – that brings me to write this critique. But it is true that, as the editor of a well-established traditional university ranking system, the Times Higher Education World University Rankings, I could see U-MultiRank as a threat. Indeed, the official U-MultiRank website actively attacks the existing global ranking systems as "simplistic" and "patronising", and claims that they give a "false impression". It is clear that U-MultiRank sees itself as a competitor to the existing global rankings.

But as a competitor, I believe that U-MultiRank's biggest criticism of the THE rankings – their narrowness – is actually their great strength. THE's current methodology was created in 2010, after six years of experience in global rankings and almost 10 months of open consultation with the sector, specifically to judge what one might call the "world-class universities". These are research-driven institutions that may have different cultures, governance systems, histories and funding regimes, but that share core common characteristics: they push the boundaries of knowledge with research published in the world's leading research journals, they recruit from global pools of student and academic talent, they have globally recognised names and they compete willingly on a global stage.

The THE rankings methodology uses 13 separate performance indicators to judge these world-class universities against their core missions: teaching (including five indicators of the teaching environment), knowledge transfer, international outlook and research. But it is quite rightly weighted towards research excellence. There is no one-size-fits-all model of excellence in higher education, and different types of university mission require different indicators, but THE's metrics are very carefully and deliberately calibrated for a particular global model.

Wider scope

The focus on world-class research universities also means that the overall THE World University Rankings list no more than 500 or so institutions, drawn from a master database, run by Thomson Reuters, of around 1,000 of the world's leading research universities. This ensures a comprehensive and complete picture of excellence among a global peer group, using a set of entirely appropriate and relevant metrics.

In contrast, the scope of U-MultiRank's ambition could be its Achilles heel. While the traditional global rankings focus on the relatively small pool of global research-intensive peers, U-MultiRank declares a far wider scope. It says: "We focus not only on comprehensive internationally oriented research universities but the full range: including specialist colleges, regionally oriented institutions and universities of applied sciences". U-MultiRank's literature says it includes "regionally oriented colleges… music academies and teacher training colleges".

While this admirable ambition gives U-MultiRank a huge headache over appropriate, relevant performance indicators, it also gives the project an unwieldy pool of about 20,000 higher education institutions worldwide from which to draw data.

On this basis, the current figure of around 500 active participants – with a further 360 or so institutions included in the system on the basis of their publicly available data alone – is a problem. The target of adding just 150–250 new institutions for 2015, giving a total pool of around 1,000 – roughly one-twentieth of the world's higher education institutions – also seems inadequate.

If this exciting and in many ways admirable experiment is going to realise its ambition to be a serious alternative to the existing global rankings, and not just an inward-looking, EU-funded European initiative, its project team is going to have to work much harder to convince universities, especially those in the world's largest higher education systems in the US, China and India, that it is an experiment worth joining.

Phil Baty is editor of the Times Higher Education World University Rankings.
