
THE Europe Teaching Rankings 2018: our information expedition

Duncan Ross explains how we addressed the challenges of sourcing, compiling and presenting the data for a teaching-led league table for Europe

July 2, 2018

Browse the full Times Higher Education Europe Teaching Rankings 2018 results

In 2004, when Times Higher Education put together its first World University Rankings, the ranking was heavily biased towards research-based institutions. Indeed, its only concession to teaching was to include a single measure of staff per student.

When the rankings were redesigned in 2010, it was decided to increase the number of metrics that explored teaching to five – including two Humboldtian measures (doctorates per bachelor’s degrees awarded, and doctorates awarded per staff member), teaching reputation, income per staff and (as before) staff per student.

But the ranking was, and still is, very much focused on research institutions. The criteria for consideration are strict, with a minimum of 1,000 annual publications being the most obviously research-focused requirement.

Yet for most people who enrol at university, research is not the primary goal. Learning is. But none of the global rankings have really explored this.

That is why, in 2015, THE set out to create rankings for teaching and learning. We knew that it would not be straightforward, especially given the (repeated) failures of Ahelo (the Assessment of Higher Education Learning Outcomes) – the Organisation for Economic Co-operation and Development project that aimed to map educational outcomes.

We created a methodology based on four key concepts: that excellent teaching would need strong resources, would drive student engagement, would generate positive outcomes, and would be in the right environment.
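To make that structure concrete, here is a minimal sketch of how scores for the four pillars might be rolled up into one overall score. The equal weights and the example scores are hypothetical placeholders for illustration only; the article does not specify THE's actual weighting.

# Minimal sketch (not THE's published methodology) of combining four pillar
# scores into an overall score. Weights and example scores are hypothetical.

HYPOTHETICAL_WEIGHTS = {
    "resources": 0.25,
    "engagement": 0.25,
    "outcomes": 0.25,
    "environment": 0.25,
}

def overall_score(pillar_scores):
    """Weighted sum of pillar scores, each assumed to be on a 0-100 scale."""
    return sum(HYPOTHETICAL_WEIGHTS[pillar] * score
               for pillar, score in pillar_scores.items())

print(overall_score({"resources": 70.0, "engagement": 85.0,
                     "outcomes": 60.0, "environment": 75.0}))  # 72.5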

To populate these conceptual areas, we knew that we would have to go to a new source: students themselves. What, we needed to know, were their experiences?

Our initial teaching rankings were the Wall Street Journal/THE US College Rankings, now in their third year. They took data from the Integrated Postsecondary Education Data System and College Scorecard – two excellent US government data sources – alongside a student survey that now has more than 200,000 responses. We were able to rank in excess of 1,000 colleges across the US and to explore a wide range of institutions, from the huge state universities driving their local economies to the small religious institutions with a clear sense of purpose.

In doing this, we were also able to create a ranking that was far less open to manipulation by colleges than previous US rankings. As the author and data scientist Cathy O’Neil put it: “I think gaming this model would not be as bad for America…It would certainly be better than what we see now!”

After the US, we decided to look at Japan, a country that is dealing with a shrinking student-age population. The challenge in Japan was to deal with a very different set of cultural expectations, and – at least initially – to do so without a student survey (we now have a set of student data that will allow us to bring this ranking more into line with the US exercise). And while we looked to the same broad conceptual areas, we used a very different set of metrics to rank about 250 Japanese universities.

The third region that we decided to explore in our teaching rankings was Europe. Although, on the face of it, Europe’s university sector is quite diverse, at its heart is a similar tradition. This gave us some confidence that we would be able to consider the area as a single entity.

However, while running a student survey was relatively straightforward, finding the other data elements proved far more challenging.

It soon became obvious that evaluating the outcomes of teaching through a measure like the College Scorecard's 10-year post-matriculation median earnings was impossible. Universities, and indeed higher education systems, in Europe do not collect or publish equivalent data. This is a significant gap in openness about the value of a university degree.

Perhaps we could look to a simpler measure? But even this was a challenge. How do you compare graduation rates across very different systems? Some systems are open; others are closed. Some are based on three-year degrees; others on four. Some have open-ended programmes; others do not.

In the end, we decided to evaluate graduation rate differently in each country, based on the data that were available and relevant in each nation.
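One way to picture how country-specific graduation measures can still sit on a single scale is to standardise each university's figure against the other universities in its own country. The sketch below uses z-scores and made-up data purely for illustration; it is an assumption about the general approach, not THE's actual calculation.

from collections import defaultdict
from statistics import mean, stdev

# Illustrative records only: (country, university, country-specific graduation measure).
records = [
    ("UK", "University A", 0.88),
    ("UK", "University B", 0.79),
    ("DE", "University C", 0.64),
    ("DE", "University D", 0.71),
]

# Group the measures by country so each country is judged on its own definition.
by_country = defaultdict(list)
for country, _, value in records:
    by_country[country].append(value)

def standardised(country, value):
    """Z-score of a university's measure relative to its own country's universities."""
    values = by_country[country]
    return (value - mean(values)) / stdev(values)

for country, university, value in records:
    print(university, round(standardised(country, value), 2))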

The same challenges faced us in the resources area: who counts as faculty and who does not? Who counts as a student? How do you capture expenditure allocated solely to teaching and learning when financial reporting systems vary massively by country? Ultimately, we had to make some tough calls to bring the ranking together, and for this pilot we decided to discard the financial indicator.

Yet another decision had to be made around the use of international measures. In the World University Rankings, we use the proportions of international students and faculty. In the end, we decided against this, but we will keep an eye on it for future releases, or perhaps explore a way to measure the degree to which students at a university are able to study abroad. This would certainly make sense given the great success of the Erasmus exchange programme.


When it came to presentation, I had planned to rank each country’s universities separately and to allow an approximate mapping process to let students follow from one country’s institutions to another’s. This would have allowed us to choose different metrics for each country, provided we could build a mapping mechanism.

In practice, we decided against this approach for a number of reasons. The first is usability – for the ranking to be effective for students, it needs to be usable, and experience has shown that complex controls do not succeed. Second, and probably more importantly, if we believe that education truly is international, then having a single dataset for comparison is important in itself.

And that, to a degree, has been the most frustrating aspect of the exercise. In most European countries, the data on national higher education systems are fragmentary and certainly not readily available to the end users: the public. We hope that this ranking will be a first step in moving that story forward.

Examining the ranking results, we see some outcomes that we might expect. The financial position of UK universities serves them well, of course, but we see other contenders in the tables, too. If, as we hope, more universities across the continent join the ranking next year, then we might expect some additional changes.

I’m also pleased that when we sort by the conceptual areas rather than by the overall score, we see different universities showcasing their strengths – something that even the leaders can learn from.

Duncan Ross is data and analytics director, Times Higher Education.

The THE Europe Teaching Rankings 2018 will be launched on 11 July at the Teaching Excellence Summit at the University of Glasgow.
