
Rankings

fact sheet
24 November 2023

What role do rankings play in defining and comparing the performance of countries, higher education systems and universities? How are these rankings used and what are their limitations? In this fact sheet we discuss the various rankings that exist for the innovative performance of countries, rankings of higher education systems and university rankings. We give examples of commonly used rankings and share Dutch positions in various higher education system rankings and university rankings.

In short

  • Rankings give a general view of the position of countries, national science systems and individual universities
  • The Netherlands and its universities generally score relatively well in these rankings
  • Rankings are a simplification of a complex reality. They can function as a starting point for a conversation on the performance of a country or university, but a balanced judgment requires more detailed information on achievements and goals

Ranking lists or rankings (referred to simply as rankings in this fact sheet) are a popular way of making comparisons, but they also attract a lot of criticism. There are rankings of national higher education systems and innovation systems, as well as rankings of individual institutions. Some rankings measure several aspects, while others measure only one. They give users a simple and quick picture of the position of countries or (educational) institutions. Even though rankings are usually based on several underlying indicators, it is the resulting overall position that catches the eye and receives attention. This is also a pitfall of rankings: they paint a broad-brush picture and do (too) little justice to the complexity of what is being measured.

In this fact sheet, we will first discuss the various types of rankings:

  1. rankings of the innovative performance of countries;
  2. rankings of higher education systems; and
  3. university rankings.

Then, we will elaborate on the limitations of rankings and their use.

1. Rankings of the innovative performance of countries

Information on countries' innovation performance can be found in the following popular rankings:

  • The European Innovation Scoreboard. Launched in 2001, it is published annually by the European Commission. The Scoreboard compares countries on 32 quantitative indicators grouped into four categories: (1) 'framework conditions', (2) 'investments', (3) 'innovation activities' and (4) 'impacts'. The result is a composite innovation index score and a performance table based on that score, ranging from moderate innovators to innovation leaders. Eurostat provides most of the data.
  • The Global Innovation Index. Launched in 2007, it is compiled annually by the World Intellectual Property Organization. It uses 81 indicators divided into seven pillars to assess the innovation performance of 127 countries and economies around the world. Of these, 57 indicators are based on hard data, 19 are composite indicators, and five are based on the World Economic Forum’s Executive Opinion Survey.
  • The World Competitiveness Ranking, launched in 1989, is the annual ranking of the International Institute for Management Development. The competitiveness of 64 economies is compared on the basis of 336 criteria, divided into 20 pillars. A large part of the data comes from international, national and regional companies. This data is supplemented with a survey on how market participants experience competitiveness.

Although there is overlap between the pillars of the Global Innovation Index (GII) and the World Competitiveness Ranking (WCR), the indexes use different underlying indicators. The WCR depends more heavily than the GII on the results of the annual survey of market participants. Whereas the indicators of the European Innovation Scoreboard and the World Competitiveness Ranking have remained fairly constant over the years, the GII indicators and methodology are altered frequently. In 2017, the GII was adjusted in several respects (the number, composition, definitions and grouping of indicators).

The Global Innovation Index and World Competitiveness Ranking are wide-ranging rankings based on a large number of indicators covering many different social and economic areas, from financial markets and infrastructure to education and innovation. Sub-indices are available in each area. To gain an accurate picture of a country’s performance, users are advised to take a close look at the underlying sub-indices and indicators.
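The overall positions in these indexes are composite scores: each underlying indicator is first normalised and the normalised values are then combined into a single number, typically as a (weighted) average. The sketch below illustrates that aggregation step only, using hypothetical countries, indicators and equal weights; it is not the actual EIS, GII or WCR methodology.

```python
# Minimal sketch of a composite innovation index (hypothetical data).
# Each indicator is min-max normalised across countries and the country
# score is the unweighted mean of its normalised indicator values.

indicators = {
    "Country A": {"R&D intensity": 2.1, "Patents per capita": 150, "Tertiary graduates": 42},
    "Country B": {"R&D intensity": 3.0, "Patents per capita": 90, "Tertiary graduates": 51},
    "Country C": {"R&D intensity": 1.4, "Patents per capita": 220, "Tertiary graduates": 38},
}

countries = list(indicators)
indicator_names = list(next(iter(indicators.values())))

def min_max_normalise(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

normalised = {
    name: min_max_normalise([indicators[c][name] for c in countries])
    for name in indicator_names
}

scores = {
    country: sum(normalised[name][i] for name in indicator_names) / len(indicator_names)
    for i, country in enumerate(countries)
}

for rank, (country, score) in enumerate(sorted(scores.items(), key=lambda x: -x[1]), start=1):
    print(f"{rank}. {country}: composite score {score:.2f}")
```

Because every normalisation and weighting choice in such an aggregation is a design decision, two indexes built on largely the same data can still order countries differently, which is another reason to look at the underlying sub-indices.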

2. Rankings of higher education systems

  • Universitas21 Higher Education Systems Ranking (U21 HE) has been published annually since 2012 and compares the national higher education systems of fifty countries. The indicators cover four system-related aspects: resources, environment, connectivity and output. The edition from 2020 was the final iteration of this ranking. 
  • QS publishes a similar ranking of higher education systems, the QS Higher Education System Strength Rankings, based on the average performance of higher education institutions in the QS World Rankings, access to higher education and the performance of the country’s leading (‘flagship’) institution within the global rankings, along with a correction for the country’s economic context.

The table below shows that the Netherlands is in the top ten in each of the above rankings for the most recent year.

Position of the Netherlands in the most recent edition of each ranking
EU Innovation Scoreboard 4
World Competitiveness Ranking 5
Global Innovation Index 7
QS Higher Education System Strength Rankings 7
U21 Higher Education Systems Ranking 10


The European Innovation Scoreboard provides data covering multiple years.
 

3. University rankings

There are various international rankings of universities, each one focusing on a different perspective, as the underlying indicators and their weighting make clear. Some focus mainly on research, while others concentrate on performance in multiple areas, such as teaching and internationalisation. We review the most popular ones below. Some are based only on hard data, in some cases provided by the universities themselves; others also make use of ‘soft’ data in the form of survey results.
 

ARWU (‘Shanghai’ ranking)

The Academic Ranking of World Universities (ARWU) was the first worldwide ranking of universities. Initially intended to determine the global standing of top Chinese universities, it quickly came to international prominence. It has been published annually since 2003 by Shanghai Jiao Tong University (hence the epithet ‘Shanghai’ ranking). A university’s standing in the general ARWU ranking depends mainly on research-related indicators, weighted as listed below (a sketch of how such weights combine into a single score follows the list):

  • number of alumni winning Nobel Prizes and Fields Medals (10%)
  • number of staff winning Nobel Prizes and Fields Medals (20%)
  • number of highly cited researchers in 21 broad categories (20%)
  • number of articles published in Nature and Science (20%)
  • number of articles indexed in the Science Citation Index-Expanded and the Social Sciences Citation Index (20%)
  • per capita performance of a university (10%)
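In essence, a university’s overall ARWU score is a weighted sum of these indicator scores, after each indicator has been scaled so that the best-performing university receives 100. The sketch below illustrates only that aggregation step, with hypothetical indicator scores; it is not the official ARWU calculation.

```python
# Weighted aggregation of ARWU-style indicator scores (illustrative only;
# all indicator scores below are hypothetical, already scaled to 0-100).

weights = {
    "alumni_awards": 0.10,     # alumni winning Nobel Prizes and Fields Medals
    "staff_awards": 0.20,      # staff winning Nobel Prizes and Fields Medals
    "highly_cited": 0.20,      # highly cited researchers
    "nature_science": 0.20,    # articles published in Nature and Science
    "indexed_articles": 0.20,  # articles indexed in SCIE and SSCI
    "per_capita": 0.10,        # per capita performance
}

scores = {
    "alumni_awards": 25.0,
    "staff_awards": 18.0,
    "highly_cited": 40.0,
    "nature_science": 35.0,
    "indexed_articles": 70.0,
    "per_capita": 30.0,
}

overall = sum(weights[k] * scores[k] for k in weights)
print(f"Overall ARWU-style score: {overall:.1f}")  # 38.1 for these hypothetical values
```

Shifting even a few percentage points of weight from one indicator to another changes the overall score, which is one reason why a university can move in a ranking without any change in its actual performance.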

From 2007 to 2016, rankings by broad field (science, engineering, life sciences, medical sciences and social sciences) were added; since 2017, rankings by subject have been published as well.
 

QS World University Ranking

The QS World University Ranking compares 1,000 top universities on the following six indicators:

  • Academic reputation, measured using a global survey of academics (40%)
  • Employer reputation, measured using a global survey of employers (10%)
  • Citations per faculty score, based on Elsevier’s Scopus database (20%)
  • Student-to-faculty ratio (20%)
  • International student ratio (5%)
  • International faculty ratio (5%)

The QS ranking, which casts its net more broadly than the ARWU, also ranks universities by over forty different subjects. The order in which the QS and ARWU rankings place universities differs because each bases its measures on a different set of indicators. The QS ranking is based mainly on indicators of reputation, whereas the ARWU is based on factual scientific output. 


The Leiden Ranking

The Leiden Ranking is compiled by the Centre for Science and Technology Studies (CWTS) at Leiden University. It goes even further than the ARWU in focusing specifically on research, as it is based solely on bibliometric data (publications and citations) from the Web of Science scientific citation indexing service. That also explains why it focuses mainly on the natural and medical sciences, which traditionally publish in international science journals. The Leiden Ranking is an interactive index of over 1300 research universities world-wide. Users can set their own parameters:

  • Time period: the first available time period is 2006-2009, with each successive period being one year later in time
  • Field: one of the five fields of science or all sciences
  • Region/country
  • Minimum publication output
  • Type of indicator: 'impact', 'collaboration' (based on co-authored publications) and since 2019 also 'open access' and 'gender'
  • Citation-impact indicators (top 1%, top 10% and top 50%)
  • Collaboration indicators (total no. of publications co-authored with other organisations, no. of publications co-authored with other countries, no. of publications co-authored with industry, and no. of co-authored publications based on distance, either <100 km or >5000 km).  

A university’s standing may vary considerably depending on the indicator used, as shown by how the selected Leiden indicators in the table further below affect the positions of Dutch universities relative to the other rankings.
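As an illustration of how such a bibliometric indicator is constructed, the sketch below computes a simplified PP(top 10%): the share of a university’s publications that belong to the 10% most frequently cited publications of the same field and year. The publication records are hypothetical and the field/year baseline is deliberately simplified; the actual CWTS methodology is more refined, for example in how fields are defined and boundary cases are counted.

```python
# Simplified PP(top 10%) indicator on hypothetical publication data.
from collections import defaultdict

# (university, field, year, citations)
publications = [
    ("Univ X", "physics", 2020, 120), ("Univ X", "physics", 2020, 3),
    ("Univ X", "biology", 2021, 45),  ("Univ X", "biology", 2021, 5),
    ("Univ Y", "physics", 2020, 80),  ("Univ Y", "physics", 2020, 10),
    ("Univ Y", "biology", 2021, 200), ("Univ Y", "biology", 2021, 1),
]

# Citation threshold for the top 10% within each (field, year) group.
by_group = defaultdict(list)
for _, field, year, cites in publications:
    by_group[(field, year)].append(cites)

def top10_threshold(citation_counts):
    ranked = sorted(citation_counts, reverse=True)
    cutoff = max(1, round(0.10 * len(ranked)))  # at least one publication
    return ranked[cutoff - 1]

thresholds = {group: top10_threshold(cites) for group, cites in by_group.items()}

# PP(top 10%) per university: share of its output at or above the threshold.
totals, top = defaultdict(int), defaultdict(int)
for univ, field, year, cites in publications:
    totals[univ] += 1
    if cites >= thresholds[(field, year)]:
        top[univ] += 1

for univ in sorted(totals):
    print(f"{univ}: PP(top 10%) = {top[univ] / totals[univ]:.0%}")
```

The same publication set yields a different picture depending on which indicator is chosen (impact, collaboration, open access), which is why the Leiden Ranking lets users select the indicator themselves.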


Times Higher Education Ranking

Times Higher Education (THE) has published the THE World University Rankings since 2004. Like the QS World University Rankings, it covers a broad spectrum of university activities: teaching (30%), research (30%), citations (30%), industry income (2.5%) and international outlook (7.5%).

One third of the score is based on the Academic Reputation Survey. The THE methodology has been much criticised. Because of changes in the methodology, the makers strongly advise against direct comparisons with previous years’ World University Rankings.
 

U-Multirank

A new type of university performance table was launched in 2014 with the financial support of the European Commission. It was developed by a consortium led by the Centre for Higher Education (CHE) in Germany and the Center for Higher Education Policy Studies (CHEPS) and CWTS in the Netherlands. U-Multirank is not so much a ranking as a tool for comparing university performance on five dimensions and thirty aspects, based on information about more than 1300 institutions of higher education. Performance is rated on a scale ranging from very good (A) to weak (E). The dimensions are:

  • teaching and learning
  • regional engagement
  • knowledge transfer
  • international orientation
  • research

U-Multirank offers a number of ‘readymade’ rankings on its website. One of these focuses on research and research linkages. The results of the Dutch universities on the ‘readymade’ research and research collaboration ranking are presented in a separate file for the years 2017-2022. It shows that almost all Dutch universities receive the highest score (A: very good) on the indicators citation rate and research publications (size-normalised). All Dutch universities receive the highest score on top cited publications (the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited). All universities score an A (very good) on the indicator co-publications with industrial partners, except for Tilburg University. The indicators international joint publications and regional joint publications show a more diverse pattern; on the latter, we find many more C (average) and D (below average) scores.

In 2022, most universities scored very similarly to the previous year.
 

Results of various rankings

If we consider the general picture that emerges from the rankings, we see that US and British universities dominate the top of the performance tables. For example, the ARWU has 38 US and eight British universities in its top 100. On the other hand, there are very few British and US universities in the second tier (positions 100-500), even though both countries are home to a vast number of universities. Instead, this tier contains a relatively large share of universities from the Netherlands and a few other countries (including Switzerland, Sweden and Germany). The rankings show that the Dutch science system generally performs well. If we compare Utrecht University with the European universities in the ARWU top 50 on the underlying indicators, we see wide gaps on some indicators and very narrow ones on others; on some indicators, Utrecht even scores higher.

Besides their standing in a ranking, we can also compare universities by looking at size (based on total budget) and educational function (based on numbers of students enrolled). If we compare Utrecht University with the top 50 European universities in the ARWU, we see that most of the latter have much more money to spend and a much smaller student population than Utrecht (source: ETER database on European higher education).

The table below shows how Dutch universities perform in some of the rankings.

University | Shanghai 2023 | QS World 2023 | Leiden CWTS PP(top 10%) impact 2018/2021 | Leiden CWTS PP(collab) 2018/2021 | Leiden CWTS PP(industry) 2018/2021 | THE 2023
EUR | 88 | 176 | 90 | 237 | 205 | 99
LEI | 125 | 126 | 89 | 192 | 190 | 77
RU | 125 | 222 | 97 | 171 | 375 | 140
RUG | 76 | 139 | 112 | 391 | 268 | 79
TiU | 750 | 371 | 150 | 278 | 1166 | 225
TUD | 175 | 47 | 109 | 799 | 44 | 48
TU/e | 450 | 124 | 146 | 592 | 10 | 168
UM | 250 | 256 | 194 | 142 | 274 | 138
UT | 450 | 210 | 227 | 369 | 161 | 184
UU | 52 | 107 | 63 | 165 | 181 | -
UvA | 125 | 53 | 60 | 135 | 364 | 61
VU | 175 | 207 | 66 | 63 | 338 | 125
WUR | 175 | 151 | 75 | 271 | 39 | 64
Dutch universities in the top 100 | 3 | 2 | 7 | 1 | 3 | 6

International comparison

In the table below, we compare several countries by their number of universities in the top 100 of selected rankings. Universities in the United States are the most strongly represented, with 137 top-100 positions across the different rankings, followed by the United Kingdom (59 top-100 positions) and China (38 top-100 positions). Dutch universities also appear frequently in international rankings: the Netherlands holds 18 top-100 positions. That seems far fewer than the United States, the United Kingdom and China, but those countries have many more universities than the Netherlands: the United States has 16 times as many and China 21 times as many. The Netherlands has 13 universities; that it nevertheless holds 18 top-100 positions is because the same universities appear in multiple rankings. According to the Leiden Ranking, the country with the most universities in this comparison is China, with 273 universities: 67 more than the United States and 260 more than the Netherlands.

Number of top-100 positions in university rankings, per country
Country | Leiden Ranking | ARWU (Shanghai) | THE | QS World | Number of top-100 positions | Total number of universities
China | 15 | 11 | 7 | 5 | 38 | 273
United States | 36 | 38 | 36 | 27 | 137 | 206
United Kingdom | 23 | 8 | 11 | 17 | 59 | 64
Japan | 0 | 2 | 2 | 4 | 8 | 59
Germany | 0 | 4 | 8 | 4 | 16 | 57
South Korea | 0 | 1 | 3 | 5 | 9 | 51
Australia | 7 | 6 | 6 | 9 | 28 | 34
Canada | 0 | 5 | 3 | 3 | 11 | 31
France | 1 | 4 | 4 | 3 | 12 | 33
the Netherlands | 7 | 3 | 6 | 2 | 18 | 13
Sweden | 0 | 3 | 2 | 2 | 7 | 12
Austria | 0 | 0 | 0 | 0 | 0 | 12
Switzerland | 2 | 5 | 3 | 3 | 13 | 8
Belgium | 0 | 2 | 1 | 1 | 4 | 8
Finland | 0 | 0 | 0 | 0 | 0 | 9
Norway | 0 | 1 | 0 | 0 | 1 | 7
Ireland | 0 | 0 | 0 | 1 | 1 | 6
Denmark | 0 | 2 | 0 | 0 | 2 | 5

The table shows considerable variation in how Dutch universities perform in international rankings, which we can attribute to the wide range of methods used. Some rankings have only a few Dutch universities in the top 100, whereas in others more than half of them appear. The table also shows that most Dutch universities have a top-100 spot in one or more rankings. In the ARWU, Utrecht University has been the highest-ranking Dutch university for many years; since 2003 it has made the top 50 on eleven occasions. In the QS ranking, the Netherlands had six universities in the top 100 in 2015/2016. In the following years this number dropped to two, with the University of Amsterdam and Delft University of Technology in the top 100 in 2023. In 2018/2019 a third university (Eindhoven) entered the QS top 100, but it fell out again the following year. In the Leiden Ranking PP(collab), the number of Dutch universities in the top 100 dropped from four in 2021 to one in 2023, which may be due to the growing number of universities included in this ranking: from 963 in 2019 to 1,411 in 2023. In the THE World University Rankings, half of the Dutch universities are in the top 100. In the Leiden Ranking PP(industry), Eindhoven University of Technology places tenth.

Dutch universities also place in the top 100 in field and subject rankings. For example, in the Shanghai subject ranking (GRAS, 2022) there are one or more Dutch universities in the top 100 in 46 out of 54 subject rankings (the exceptions are 'Electrical & Electronic Engineering', 'Telecommunication Engineering', 'Computer Science & Engineering', 'Materials Science & Engineering', 'Nanoscience & Nanotechnology', 'Energy Science & Engineering', 'Mining & Mineral Engineering' and 'Mathematics'). In the GRAS ranking for Education, as many as ten Dutch universities are in the top 100. In the THE subject tables for 2023, there are at least three Dutch universities in the top 100 in all eleven broad fields. In the QS University Rankings by Subject 2023, there is at least one Dutch university in the top 50 for 47 out of 54 disciplines.
 

Limitations of rankings

Besides attracting a great deal of interest, university rankings have also been criticised. For example, they pay (too) little attention to the multifaceted performance of a university that is active in a large number of fields; it is, after all, difficult to sum up such performance in a single mark or figure. As a result, the rankings are comparing apples and oranges. Another criticism is that universities specialising in the social sciences or humanities are at a disadvantage when the indicators consist of publications and citations; these are relevant mainly in the hard sciences, with their tradition of publishing in scientific journals. The size of a university can also influence its standing, especially if the ranking does not correct for size.

In response to these criticisms, subject and field rankings have emerged that are restricted to only a single aspect, and makers have developed interactive rankings that allow users to make their own comparisons based on the underlying data.

Another criticism is that universities’ standings are influenced by the way in which a ranking is compiled (what data are used, and to what extent are they comparable?) and by the combination of indicators used. Changes in methodology also mean that a university may move up or down a ranking from year to year, even if nothing demonstrable has changed at the university itself. This means that comparisons over time bear little relation to the real-world situation (European University Association, 2013). If a university drops down a ranking from one year to the next, this does not necessarily mean that its performance has deteriorated. In addition, the scores within a group of universities may be very close, with only marginal differences between them; small changes in scores can then move a university up or down several places in the next edition. That is why the ARWU only works with groups of universities below the top 100.

These limitations were confirmed by a study carried out by researchers at the European Commission’s Joint Research Centre (Saisana et al., 2011). The study used simulations to examine two popular rankings, the ARWU and the THE World University Rankings, and how their methodologies influence the standings of individual institutions. The study concludes that rankings are highly susceptible to the underlying statistical methodology (and changes therein). Although the standing of the top 10 universities is robust in the face of such changes, that is much less so for the rest. The results are robust when comparing regions (e.g. North America, Europe and Asia), however.
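The sensitivity described in that study is easy to illustrate with a small simulation: when composite scores lie close together, modest perturbations of the indicator weights are enough to reshuffle the ranks, while clear leaders keep their positions. The sketch below uses hypothetical universities and random weight perturbations; it is loosely inspired by, but not a reproduction of, the Saisana et al. simulations.

```python
# Hypothetical illustration: small weight changes reorder closely scored
# universities, while the clear leader and laggard keep their positions.
import random

random.seed(42)

# Hypothetical normalised indicator scores (0-1) on three indicators.
universities = {
    "Univ A": [0.95, 0.90, 0.92],  # clear leader
    "Univ B": [0.61, 0.70, 0.66],
    "Univ C": [0.66, 0.62, 0.68],
    "Univ D": [0.64, 0.68, 0.63],
    "Univ E": [0.30, 0.35, 0.28],  # clear laggard
}
base_weights = [0.4, 0.3, 0.3]

def rank_order(weights):
    scores = {u: sum(w * s for w, s in zip(weights, vals))
              for u, vals in universities.items()}
    return [u for u, _ in sorted(scores.items(), key=lambda x: -x[1])]

print("Base ranking:", rank_order(base_weights))

# Perturb each weight by up to +/-20%, renormalise, and re-rank.
for trial in range(1, 6):
    raw = [w * random.uniform(0.8, 1.2) for w in base_weights]
    total = sum(raw)
    print(f"Trial {trial}:", rank_order([w / total for w in raw]))
```

With composite scores this close, Univ B, C and D can swap places from trial to trial, whereas Univ A and Univ E stay at the top and bottom, mirroring the study’s finding that top positions are robust while mid-table positions are much less so.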
 

How rankings are used

Rankings are used in different ways, but in each case they help users understand and make more informed choices and decisions. Users include:

  • international students who are selecting a university (Master’s /PhD)
  • researchers looking for a university where they can further their careers
  • companies exploring business locations
  • policymakers (institutional, government) who want information on the international standing of different countries, systems and institutions

Users are advised to take the information provided by rankings as just one of the sources on which they base their choices and decisions, and to consider other information as well. For example, publications on national innovation performance like the WCR and GII not only provide a number of different performance tables (overall and covering underlying aspects), but also a great deal of background information in the form of analyses and essays.

Because rankings mainly give users a general picture of how countries, national science systems and individual universities are performing, they should be used with the necessary circumspection. Even small changes in the methodology (which indicators are used, how data are collected) can lead to changes in rank.

University rankings in particular have come under fire over time. Despite such criticism, universities are happy to use a good standing in one of the rankings to market themselves.

In 2006, criticism of rankings led to the formulation of a number of principles known as the Berlin Principles on Ranking of Higher Education Institutions. The purpose of these principles is to foster the sound and responsible use and design of rankings, leading to an improvement in the quality of data collection, methodology, and dissemination of results. The principles focus on:

  • Purposes and goals of rankings: rankings should be only one of a number of approaches to assessing higher education, should be clear about their purpose and target groups, recognize the diversity of institutions, provide clarity about the sources of information, and specify the linguistic, cultural, economic and historical context of the university systems involved.
  • Design and weighting of indicators: rankings should be transparent about their methodology, choose valid and relevant indicators, measure outcomes instead of inputs whenever possible, and limit changes to indicators and the weights assigned to them.
  • Collection and processing of data: rankings should use audited and verifiable data, include data collected with proper scientific procedures, and apply measures of quality assurance to their own ranking process.
  • Presentation of ranking results: rankings should give consumers a clear understanding of the factors used to develop the ranking and should compile the ranking in a way that limits errors and corrects faults.

Despite all the criticism, rankings have become indispensable to any discussion about national or university performance. To ensure that the results of rankings – at least in the case of university performance tables – serve a meaningful purpose, we must use them to raise awareness of specific issues in higher education policy and practice. In other words, they should be used to spark a conversation about the mission of the university (what kind of university do we want to be and against whom do we wish to measure ourselves?). Rankings that allow users to ‘play around’ with the data are preferable in that regard. 
 

Finally

Despite all the criticism, rankings have become a permanent part of discussions on the performance of countries, higher education systems and universities. To use the results of rankings in a meaningful way, they should be a first step in raising awareness of specific subjects; in other words, the start of a conversation. It is important to consider what a country aims to achieve, what the mission of a university is, what kind of university it aims to be(come) and which universities it compares itself with. Rankings that allow users to customise their comparison based on the underlying data are therefore preferable.