For dry statistical artefacts, league tables generate a lot of juicy adjectives: ludicrous, insidious, prejudicial, flawed, corrosive, pernicious, conservative, banal. On one level, league tables are inane. How can a university's achievements and complexity be reduced to a number? How can diverse qualities that define the learning experience be quantified? On this view, university league tables have as much validity as television programmes that remorselessly rank the top 100 films or albums, minus even their manufactured tension. Will it be Oxford or Cambridge at the top? Will the same six make it into the top ten as they have done for the past dozen years? And who would have thought it? The usual suspects are at the bottom! Groundhog Day had more surprises.
Even if the raison d'être for league tables is accepted, their compilation is not. Detractors cite several problems with what is measured and how it is measured. An obsession with reputation rather than educational experience is a common complaint, as are the exclusions - part-timers, postgraduates and specialist institutions. Such tendencies privilege the traditional and penalise the innovative. Allied to this is the realisation that any large upset in the natural order of rankings calls into question their validity. League tables are inherently conservative. An institution can improve or decline, but not by too much or too frequently, please.
There is also no agreement among compilers about which indices to use or what weighting to give them. To some extent this reflects the differing political views of the publishers, but it also mirrors the conflicting priorities of higher education. Is teaching excellence more important than research reputation? Are frequently cited papers better indicators than the views of one's peers? How old do data have to be before longevity subverts utility? The variety of yardsticks undermines claims that excellence can be tightly defined and quantified.
Those who remain unconvinced of the validity of league tables are well aware of their potency. In particular, they point to universities' propensity to adopt positions to maximise their scores, including targeting a ranking as a key performance indicator. Such game playing is, not unnaturally, seen as a poor substitute for genuine improvement.
Despite all these negatives, league tables add more than they subtract for three main reasons. The first is necessity: rankings came into being because there was a paucity of manageable information about universities and great demand for it. This desire for guidance has only increased as the price of higher education has risen and the number of institutions has multiplied. Prospective students, parents and staff have a right to know how universities compare and how they are regarded by others, even though there may be no consensus about that recognition.
The second reason is legitimacy: for all their faults, league tables are compiled using generally accepted indices, most of which are based on data validated by universities or responsible third parties. What else is there? They are certainly far superior to partisan prospectuses or scurrilous, prejudicial websites, and the latter would surely fill the information deficit if responsible rankings did not exist.
The final reason, and the least remarked, is deniability: a plethora of competing tables from a variety of compilers gives institutions a get-out-of-jail-free card. The dissatisfied institution can always take issue with the measurements or the measurers or insist that what is important isn't measurable. If nature abhors a vacuum, hell truly would be an officially sanctioned league table with undisputed criteria for ranking excellence and awfulness. In that unlikely situation, the adjectives would surely degrade from juicy to ripe.