Rankings

What role do rankings play in defining and comparing the performance of countries and universities? How are these rankings used, and what are their limitations? In this fact sheet, we discuss the various rankings of the innovative performance of countries and rankings of universities. We give examples of commonly used rankings and present the Dutch positions in various university rankings.

In short

  • Rankings give a general view of the position of countries and individual universities.
  • The Netherlands and its universities generally score relatively well in these rankings.
  • Rankings are a simplification of a complex reality. They can function as a starting point for a conversation on performance, but a balanced judgment requires more detailed information on achievements and goals.

Rankings are a popular way of making comparisons, but they also attract a lot of criticism. There are rankings of innovation systems and rankings of individual institutions. Some rankings measure several aspects, while others measure only one. They give users a simple and quick picture of the position of countries or institutions. Even though rankings are usually based on several underlying indicators, it is the resulting overall position that catches the eye and receives attention. This is also a pitfall of rankings: they paint a broad-brush picture and do (too) little justice to the complexity of what is being measured.

In this fact sheet, we will first discuss the various types of rankings:

  1. rankings of the innovative performance of countries; and
  2. university rankings.

Then, we will elaborate on the limitations of rankings and their use.

1. Rankings of the innovative performance of countries

Information on countries' innovation performance can be found in the following popular rankings:

  • The European Innovation Scoreboard. Launched in 2001, it is published annually by the European Commission. The Scoreboard compares countries on 32 quantitative indicators grouped into four categories: (1) framework conditions, (2) investments, (3) innovation activities, and (4) impacts. The result is a composite innovation index score and a performance table based on that score, ranging from moderate innovators to innovation leaders. Eurostat provides most of the data.
  • The Global Innovation Index. Launched in 2007, it is compiled annually by the World Intellectual Property Organization. It uses 78 indicators to assess the innovation performance of 139 countries and economies around the world. Of these, 63 indicators are based on hard data, ten are composite indicators, and five are based on the World Economic Forum’s Executive Opinion Survey.
  • The World Competitiveness Ranking, launched in 1989, is the annual ranking of the International Institute for Management Development. The competitiveness of 69 economies is compared on the basis of 341 criteria, divided into 20 pillars. A large part of the data comes from international, national and regional organisations. These data are supplemented with a survey on how market participants perceive competitiveness.

Although there is overlap between the pillars of the Global Innovation Index (GII) and the World Competitiveness Ranking (WCR), the two indexes use different underlying indicators. The WCR depends more heavily on the results of the annual survey of market participants than the GII does. The indicators of the European Innovation Scoreboard and the World Competitiveness Ranking have remained fairly constant over the years, but the GII indicators and methodology are altered more frequently. In 2017, for example, the GII was adjusted in several respects (the number, composition, definitions and grouping of indicators).

The Global Innovation Index and World Competitiveness Ranking are wide-ranging rankings based on a large number of indicators covering many different social and economic areas, from financial markets and infrastructure to education and innovation. Sub-indices are available in each area. To gain an accurate picture of a country’s performance, users are advised to take a close look at the underlying sub-indices and indicators.
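To make the idea of a composite index concrete, the sketch below builds a toy innovation index: each indicator is min-max normalised across countries, the normalised indicators are averaged within pillars, and the pillar scores are averaged into one overall score. It is a simplified illustration under invented data and an invented pillar structure, not the actual EIS, GII or WCR methodology.

```python
# Toy composite innovation index (illustration only, not the EIS/GII/WCR method):
# 1) min-max normalise each indicator across countries,
# 2) average the normalised indicators within each pillar,
# 3) average the pillar scores into a single composite score.
# Countries, indicators, pillars and values are all invented for this example.

pillars = {
    "investments": ["rd_spending_pct_gdp", "venture_capital_pct_gdp"],
    "innovation_activities": ["patents_per_million_inhabitants", "innovating_smes_pct"],
}

data = {
    "Country A": {"rd_spending_pct_gdp": 2.3, "venture_capital_pct_gdp": 0.05,
                  "patents_per_million_inhabitants": 310, "innovating_smes_pct": 48},
    "Country B": {"rd_spending_pct_gdp": 3.1, "venture_capital_pct_gdp": 0.09,
                  "patents_per_million_inhabitants": 150, "innovating_smes_pct": 55},
    "Country C": {"rd_spending_pct_gdp": 1.4, "venture_capital_pct_gdp": 0.02,
                  "patents_per_million_inhabitants": 90, "innovating_smes_pct": 30},
}

def normalise(indicator):
    """Min-max normalise one indicator across all countries to the 0-1 range."""
    values = [scores[indicator] for scores in data.values()]
    lo, hi = min(values), max(values)
    return {country: (scores[indicator] - lo) / (hi - lo) if hi > lo else 0.0
            for country, scores in data.items()}

def composite_index():
    """Average normalised indicators within pillars, then average the pillars."""
    normalised = {ind: normalise(ind) for inds in pillars.values() for ind in inds}
    index = {}
    for country in data:
        pillar_scores = [sum(normalised[ind][country] for ind in inds) / len(inds)
                         for inds in pillars.values()]
        index[country] = sum(pillar_scores) / len(pillar_scores)
    return index

for country, score in sorted(composite_index().items(), key=lambda item: -item[1]):
    print(f"{country}: {score:.2f}")
```

A real scoreboard does essentially this at a much larger scale, which is why the choice, grouping and weighting of indicators can matter as much as the underlying data.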

The table below shows that the Netherlands is in the top ten in each of the above rankings for the most recent year.

Position on rankings

| Ranking | Position of the Netherlands |
|---|---|
| European Innovation Scoreboard | 3 |
| Global Innovation Index | 8 |
| World Competitiveness Ranking | 10 |

2. University rankings

There are various international rankings of universities. Rankings focus on different perspectives, as seen in the underlying indicators and their weighting. Some focus mainly on research, while others concentrate on performance in multiple areas, such as teaching and internationalisation. Some are based on hard data only, while others also use soft data from survey results. We review the most popular ones below. 
 

ARWU (‘Shanghai’ ranking)

The Academic Ranking of World Universities (ARWU) was the first worldwide ranking of universities. Initially meant to determine the global standing of top Chinese universities, it quickly rose to international prominence. It has been published annually since 2003 by Shanghai Jiao Tong University (hence the epithet ‘Shanghai’ ranking). A university’s standing in the general ARWU ranking depends mainly on research-related indicators, weighted as shown below (a small worked sketch follows the list):

  • number of alumni winning Nobel Prizes and Fields Medals (10%)
  • number of staff winning Nobel Prizes and Fields Medals (20%)
  • number of highly cited researchers in 21 broad categories (20%)
  • number of articles published in Nature and Science (20%)
  • number of articles indexed in the Science Citation Index-Expanded and the Social Sciences Citation Index (20%)
  • per capita performance of a university (10%)
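
To make the weighting concrete, the sketch below combines hypothetical indicator values into an overall score using the percentages listed above. It is a simplified illustration, not the official ARWU computation: the indicator values are invented, and the "best institution scores 100" scaling is an assumption used here to put the indicators on a common scale.

```python
# Illustrative ARWU-style score (invented data, simplified scaling).
ARWU_WEIGHTS = {
    "alumni": 0.10,  # alumni winning Nobel Prizes and Fields Medals
    "award": 0.20,   # staff winning Nobel Prizes and Fields Medals
    "hici": 0.20,    # highly cited researchers
    "ns": 0.20,      # articles published in Nature and Science
    "pub": 0.20,     # articles indexed in SCIE and SSCI
    "pcp": 0.10,     # per capita performance
}

def arwu_style_score(raw: dict, best: dict) -> float:
    """Scale each indicator so the best-scoring university gets 100,
    then combine the scaled values with the weights above."""
    total = 0.0
    for indicator, weight in ARWU_WEIGHTS.items():
        scaled = 100 * raw[indicator] / best[indicator] if best[indicator] else 0.0
        total += weight * scaled
    return total

# Hypothetical raw values for one university, and the best value observed
# for each indicator across all ranked universities.
raw = {"alumni": 4, "award": 2, "hici": 12, "ns": 30, "pub": 4200, "pcp": 18}
best = {"alumni": 40, "award": 30, "hici": 90, "ns": 150, "pub": 9000, "pcp": 40}
print(round(arwu_style_score(raw, best), 1))  # overall score on a 0-100 scale
```

The same logic, a weighted sum of normalised indicator scores, underlies most composite university rankings, including the QS and THE rankings discussed below; what differs is which indicators are chosen and how heavily each one counts.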

Between 2007 and 2016, rankings by broad field (science, engineering, life sciences, medical sciences, social sciences) were added; in 2017, rankings by subject followed.
 

QS World University Ranking

The QS World University Ranking compares 1,000 top universities on the following indicators:

  • Academic reputation, measured using a global survey of academics (30%);
  • Citations per faculty score, based on Elsevier’s Scopus database (20%);
  • Employer reputation, measured using a global survey of employers (15%);
  • Employment outcomes, based on alumni impact metrics and a graduate employment index (5%);
  • Student-to-faculty ratio (10%);
  • International student ratio (5%);
  • International faculty ratio (5%);
  • International research network (5%);
  • Sustainability (5%).

The QS ranking, which casts its net more broadly than the ARWU, also ranks universities in 55 different subjects. The order in which the QS and ARWU rankings place universities differs because they are based on different sets of indicators. The ARWU ranking is based on factual scientific output, while the QS ranking also uses indicators on reputation, student outcomes and sustainability.

Times Higher Education Ranking

Times Higher Education (THE) has published the THE World University Rankings since 2004. Like the QS World University Rankings, it covers a broad spectrum of university activities: teaching (29.5%), research environment (29%), research quality (30%), industry (4%) and international outlook (7.5%). About one third of the score is based on the Academic Reputation Survey. The THE methodology has been much criticised. Because of changes in the methodology, the makers strongly advise against direct comparisons with previous years’ World University Rankings.

The Leiden Ranking

The Leiden Ranking is compiled by the Centre for Science and Technology Studies (CWTS) at Leiden University. It goes even further than the ARWU in focusing specifically on research, as it is based solely on bibliometric data (publications and citations) from the Web of Science citation index. That also explains why it focuses mainly on the natural and medical sciences, which traditionally publish in international science journals. The Leiden Ranking is an interactive index of over 1,300 research universities worldwide. Users can set their own parameters:

  • Time period: the first available period is 2006-2009, with each successive period shifted one year later;
  • Field: one of the five fields of science, or all sciences;
  • Region/country;
  • Minimum publication output;
  • Type of indicator: 'impact', 'collaboration' (based on co-authored publications) and, since 2019, also 'open access' and 'gender';
  • Citation-impact indicators (including top 1%, top 5%, top 10% and top 50%);
  • Collaboration indicators (total no. of publications co-authored with other organisations, no. of publications co-authored with other countries, no. of publications co-authored with industry, and no. of co-authored publications based on distance, either <100 km or >5,000 km).

Since 2024, the Centre for Science and Technology Studies has also offered the CWTS Leiden Ranking Open Edition, in which the bibliometric data come from the OpenAlex database. A university’s standing in the Leiden Ranking may vary considerably depending on the indicator used, as shown in the table below.
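
The citation-impact indicators can be made concrete with a small sketch. The code below computes a PP(top 10%)-style share: the proportion of a university's publications that belong to the 10% most frequently cited publications of the same field and publication year. The publication data, field labels and threshold rule are invented for illustration; the actual CWTS methodology (field classification, fractional counting, handling of ties) is considerably more refined.

```python
# Illustrative PP(top 10%) sketch: share of a university's publications that are
# among the 10% most cited publications of the same field and publication year.
# Toy data and a simplified threshold rule; not the official CWTS computation.
from collections import defaultdict

# (field, year, citations, university) for a toy publication set
publications = [
    ("physics", 2021, 50, "Uni A"), ("physics", 2021, 3, "Uni A"),
    ("physics", 2021, 12, "Uni B"), ("physics", 2021, 7, "Uni B"),
    ("biology", 2022, 90, "Uni A"), ("biology", 2022, 2, "Uni B"),
    # ... in practice this would cover all publications in the database
]

def top10_thresholds(pubs):
    """Citation count needed to belong to the top 10% per (field, year) group."""
    grouped = defaultdict(list)
    for field, year, cites, _ in pubs:
        grouped[(field, year)].append(cites)
    thresholds = {}
    for key, cites in grouped.items():
        ranked = sorted(cites, reverse=True)
        cutoff_index = max(0, int(len(ranked) * 0.10) - 1)  # at least the top publication
        thresholds[key] = ranked[cutoff_index]
    return thresholds

def pp_top10(pubs, university):
    """Proportion of a university's publications at or above the top-10% threshold."""
    thresholds = top10_thresholds(pubs)
    own = [(field, year, cites) for field, year, cites, uni in pubs if uni == university]
    if not own:
        return 0.0
    top = sum(1 for field, year, cites in own if cites >= thresholds[(field, year)])
    return top / len(own)

print(f"PP(top 10%) for Uni A: {pp_top10(publications, 'Uni A'):.0%}")
```

Because the threshold is set per field and year, a university is compared with the publication culture of its own disciplines rather than on raw citation counts; this is also the logic behind the 'top cited publications' indicator in U-Multirank described below.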

U-Multirank

A new type of university performance table was launched in 2014 with the financial support of the European Commission. It was developed by a consortium led by the Centre for Higher Education (CHE) in Germany and the Center for Higher Education Policy Studies (CHEPS) and CWTS in the Netherlands. U-Multirank is not so much a ranking as a tool for comparing university performance on five dimensions and thirty aspects, based on information about more than 1,300 institutions of higher education. Performance is rated on a scale ranging from very good (A) to weak (E). The dimensions are:

  • teaching and learning;
  • regional engagement;
  • knowledge transfer;
  • international orientation;
  • research.

U-Multirank offers a number of ‘readymade’ rankings on its website. One of these focuses on research and research linkages. The results of the Dutch universities in the ‘readymade’ ranking on research and research collaboration are presented in a separate file for the years 2017-2022. It shows that almost all Dutch universities received the highest score (A: very good) on the indicators citation rate and research publications (size-normalized). All Dutch universities scored the highest score on top cited publications (the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited). All universities scored an A (very good) on the indicator co-publications with industrial partners, except for Tilburg University. The indicators international joint publications and regional joint publications showed a more diverse pattern; on the latter, we found many more C (average) and D (below average) scores. Since January 2025, U-Multirank has been incorporated into the European Higher Education Sector Scoreboard of the European Commission. Currently, the scoreboard includes data up to the year 2022.
 

Results of various rankings

The table below shows how Dutch universities perform in some of these rankings.

| University | Shanghai 2025 | QS World 2026 | THE 2026 | Leiden CWTS PP(10%) impact 2020-2023 | Leiden CWTS PP(collab) 2020-2023 | Leiden CWTS PP(industry) 2020-2023 | Leiden CWTS Open Edition PP(10%) impact 2020-2023 |
|---|---|---|---|---|---|---|---|
| EUR | 125 | 140 | 107 | 108 | 309 | 422 | 103 |
| LEI | 125 | 119 | 70 | 64 | 216 | 250 | 93 |
| RU | 125 | 279 | 154 | 97 | 201 | 451 | 122 |
| RUG | 73 | 147 | 82 | 126 | 427 | 317 | 139 |
| TiU | 750 | 347 | 325 | 278 | 349 | 1236 | 342 |
| TUD | 175 | 47 | 57 | 141 | 913 | 59 | 166 |
| TU/E | 350 | 140 | 192 | 244 | 649 | 18 | 360 |
| UM | 350 | 239 | 131 | 198 | 160 | 341 | 255 |
| UT | 550 | 203 | 190 | 281 | 409 | 182 | 535 |
| UU | 56 | 103 | – | 66 | 219 | 221 | 85 |
| UVA | 125 | 53 | 62 | 52 | 137 | 470 | 67 |
| VU | 175 | 194 | 176 | 58 | 62 | 500 | 65 |
| WUR | 175 | 153 | 66 | 81 | 426 | 31 | 151 |
| Dutch universities in top-100 | 2 | 2 | 5 | 6 | 1 | 3 | 4 |

The table shows considerable variation in how Dutch universities perform in international rankings, which can be attributed to the wide range of methods used. Some rankings have only a few Dutch universities in the top-100, whereas in others almost half are in the top-100. The table also shows that most Dutch universities have a top-100 spot in one or more rankings. Utrecht University has been the highest-ranking Dutch university in the ARWU ranking for many years. In the QS ranking, the Netherlands had six universities in the top-100 in 2014/2015. In the following years, this number dropped to two, with the University of Amsterdam and Delft University of Technology in the top-100 in 2026. In the THE World University Rankings, almost half of the Dutch universities are in the top-100. On the Leiden Ranking's PP(industry) indicator, Eindhoven University of Technology even places eighteenth.

Dutch universities also place in the top-100 in field or subject rankings. For example, in the Shanghai subject ranking (Global Ranking of Academic Subjects, 2025) there are one or more Dutch universities in the top-100 in 47 out of 54 subject rankings (the exceptions are ‘Telecommunication Engineering’, ‘Materials Science & Engineering’, ‘Nanoscience & Nanotechnology’, ‘Textile Science & Engineering’, ‘Mining & Mineral Engineering’, ‘Biomedical Engineering’ and ‘Mathematics’). In ‘Psychology’ there are even nine Dutch universities in the top-100. In the THE subject tables for 2025, there are at least two Dutch universities in the top-100 in all eleven broad fields. In the QS University Rankings by Subject 2025, there is at least one Dutch university in the top-50 for 45 out of 55 disciplines.

Besides their standing in a ranking, we can also compare universities by looking at size (based on total budget) and educational function (based on numbers of students enrolled). The budget per student differs substantially per university (source: ETER database on European higher education).

International comparison

In the table below, we compare several countries on the number of universities in the top-100 of selected rankings. Universities in the United States are the most represented, with 125 top-100 positions across the different rankings. Next are China and the United Kingdom (54 top-100 positions each). Dutch universities also appear frequently in international rankings: the Netherlands has 15 top-100 positions. This may seem far fewer than the United States or China, but these countries have many more universities than the Netherlands, which has 13: the United States has roughly 16 times as many and China even 27 times as many. That the Netherlands nevertheless reaches 15 top-100 positions is because the same universities appear in the top-100 of several rankings. According to the CWTS Leiden Ranking (Traditional Edition), China has the most universities of any country in the ranking, with 356.

Number of top-100 positions in university rankings, per country
| Country | Shanghai 2025 | QS World 2026 | THE 2026 | Leiden CWTS PP(10%) impact 2020-2023 | Nr of top-100 positions | Total number of universities |
|---|---|---|---|---|---|---|
| United States | 37 | 25 | 35 | 28 | 125 | 204 |
| United Kingdom | 8 | 17 | 11 | 18 | 54 | 61 |
| China | 15 | 5 | 7 | 27 | 54 | 356 |
| Australia | 5 | 9 | 6 | 7 | 27 | 35 |
| Germany | 4 | 5 | 8 | 0 | 17 | 58 |
| The Netherlands | 2 | 2 | 5 | 6 | 15 | 13 |
| France | 4 | 4 | 4 | 0 | 12 | 34 |
| Switzerland | 5 | 1 | 2 | 2 | 10 | 8 |
| Canada | 3 | 4 | 3 | 0 | 10 | 32 |
| Sweden | 3 | 3 | 3 | 1 | 10 | 13 |
| South Korea | 1 | 3 | 4 | 0 | 8 | 52 |
| Japan | 2 | 4 | 2 | 0 | 8 | 59 |
| Belgium | 2 | 1 | 1 | 0 | 4 | 8 |
| Denmark | 2 | 0 | 1 | 0 | 3 | 5 |
| Norway | 1 | 0 | 0 | 0 | 1 | 7 |
| Ireland | 0 | 1 | 0 | 0 | 1 | 7 |
| Finland | 1 | 0 | 0 | 0 | 1 | 9 |
| Austria | 0 | 0 | 1 | 0 | 1 | 14 |

Limitations of rankings

Besides attracting a great deal of interest, university rankings have also been criticised. For example, they pay (too) little attention to the multifaceted performance of a university active in a large number of fields; it is, after all, difficult to sum up such performance in a single mark or figure, so rankings end up comparing apples and oranges. Another criticism is that universities specialising in the social sciences or humanities are at a disadvantage when the indicators consist of publications and citations; these are relevant mainly in the hard sciences, with their tradition of publishing in scientific journals. The size of a university can also influence its standing, especially if the ranking does not correct for size.

Another criticism is that the universities’ standings are influenced by the way in which the ranking is compiled (what data are used, and to what extent are they comparable?) and by the combination of indicators used. Changes in methodology also mean that a university may move up or down a ranking from year to year, even if nothing demonstrable has changed at the university itself. This means that comparisons over time bear little relation to the real-world situation (European University Association, 2013). If a university drops down a ranking from one year to the next, this does not necessarily mean that its performance has deteriorated. In addition, the scores within a group of universities may be very close, with only marginal differences between them; small changes in scores can then move those universities up or down several notches in the next ranking. That is why the ARWU only works with groups of universities below the top 100.

These limitations were confirmed by a study carried out by researchers at the European Commission’s Joint Research Centre (Saisana et al., 2011). The study used simulations to examine two popular rankings, the ARWU and the THE World University Rankings, and how their methodologies influence the standings of individual institutions. The study concludes that rankings are highly susceptible to the underlying statistical methodology (and changes therein). Although the standing of the top 10 universities is robust in the face of such changes, that is much less so for the rest. The results are robust when comparing regions (e.g. North America, Europe and Asia), however.
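
The kind of sensitivity analysis referred to above can be illustrated with a small simulation. The sketch below is not the JRC study's actual procedure: the universities, indicator scores, weights and noise level are invented. It repeatedly perturbs the indicator weights of a toy ranking and records how far each university moves from its baseline rank.

```python
# Illustrative sensitivity check (not the JRC study's actual procedure): repeatedly
# perturb the indicator weights of a toy ranking, recompute the weighted scores and
# record how far each university moves from its baseline rank. All universities,
# indicator scores and weights are invented for this example.
import random

random.seed(42)

indicators = ["teaching", "research", "citations", "international"]
base_weights = {"teaching": 0.30, "research": 0.30, "citations": 0.30, "international": 0.10}

# Hypothetical normalised indicator scores (0-100): one clear leader and a tightly
# packed middle group.
universities = {
    "Uni A": {"teaching": 95, "research": 96, "citations": 94, "international": 90},
    "Uni B": {"teaching": 80, "research": 78, "citations": 82, "international": 60},
    "Uni C": {"teaching": 78, "research": 81, "citations": 80, "international": 70},
    "Uni D": {"teaching": 82, "research": 77, "citations": 79, "international": 65},
    "Uni E": {"teaching": 79, "research": 80, "citations": 78, "international": 75},
}

def rank(weights):
    """Rank universities (1 = best) by their weighted indicator score."""
    scores = {uni: sum(weights[i] * vals[i] for i in indicators)
              for uni, vals in universities.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {uni: position + 1 for position, uni in enumerate(ordered)}

def perturbed(weights, noise=0.05):
    """Randomly nudge each weight and renormalise so the weights sum to one."""
    raw = {i: max(0.01, w + random.uniform(-noise, noise)) for i, w in weights.items()}
    total = sum(raw.values())
    return {i: w / total for i, w in raw.items()}

baseline = rank(base_weights)
shifts = {uni: [] for uni in universities}
for _ in range(1000):
    simulated = rank(perturbed(base_weights))
    for uni in universities:
        shifts[uni].append(abs(simulated[uni] - baseline[uni]))

for uni in universities:
    mean_shift = sum(shifts[uni]) / len(shifts[uni])
    print(f"{uni}: baseline rank {baseline[uni]}, mean rank shift {mean_shift:.2f}")
```

In this toy example, the clear leader keeps its position under every perturbation, while the closely spaced universities regularly swap places; the JRC study reports the same pattern at full scale, with top-10 positions being robust and positions further down far less so.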

In 2006, criticism of rankings led to the formulation of a number of principles known as the Berlin Principles on Ranking of Higher Education Institutions. The purpose of these principles is to foster the sound and responsible use and design of rankings, leading to an improvement in the quality of data collection, methodology, and dissemination of results. The principles focus on:

  • Purposes and goals of rankings: rankings should be only one of a number of approaches to assessing higher education, should be clear about their purpose and target groups, recognize the diversity of institutions, provide clarity about the sources of information, and specify the linguistic, cultural, economic and historical context of the university systems involved.
  • Design and weighting of indicators: rankings should be transparent about their methodology, choose valid and relevant indicators, measure outcomes instead of inputs whenever possible, and limit changes to indicators and the weights assigned to them.
  • Collection and processing of data: rankings should use audited and verifiable data, include data collected with proper scientific procedures, and apply measures of quality assurance to their own ranking process.
  • Presentation of ranking results: rankings should give consumers a clear understanding of the factors used to develop the ranking and should compile the ranking in a way that limits errors and corrects faults.

Since 2006, various supplementary principles and guidelines have been added, such as the San Francisco Declaration on Research Assessment (2013) and the Leiden Manifesto for Research Metrics (2015). In response to these criticisms, subject and field rankings have also emerged that are restricted to a single field or aspect, and ranking compilers have developed interactive rankings that allow users to make their own comparisons based on the underlying data.

How rankings are used

Rankings are used in different ways, but in each case they help users understand differences between countries and universities and make more informed choices and decisions. Users include:

  • international students who are selecting a university (Master’s /PhD);
  • researchers looking for a university where they can further their careers;
  • companies exploring business locations;
  • policymakers (institutional, government) who want information on the international standing of different countries and institutions.

Users are advised to take the information provided by rankings as just one of the sources on which they base their choices and decisions, and to consider other information as well. For example, publications on national innovation performance like the WCR and GII not only provide a number of different performance tables (overall and covering underlying aspects), but also a great deal of background information in the form of analyses and essays.

Because rankings mainly give users a broad, general picture of how countries and individual universities are performing, they should be used with the necessary caution. Small changes in the methodology (which indicators are used, how data are collected) can already lead to changes in rank.

Finally

Despite all the criticism, rankings are still used frequently in discussions about national or university performance. To ensure that the results of rankings – at least in the case of university performance tables – serve a meaningful purpose, we must use them to raise awareness of specific issues in higher education policy and practice. In other words, they should be used to spark a conversation about the mission of the university (what kind of university do we want to be and against whom do we wish to measure ourselves?). Rankings that allow users to ‘play around’ with the data are preferable in that regard. 

For an explanation of the definitions and abbreviations used, see the webpage Definitions for Science in Figures.
