Rankings

Rankings are a popular way of making comparisons, but they are also the subject of much criticism. We have rankings of national higher education systems, innovation systems and individual institutions. Some rankings consider multiple aspects, and some only one.

Rankings are popular because they show users at a glance how countries or institutions – usually universities – compare with others. Although rankings are usually based on multiple underlying indicators, what stands out and gets the most attention is the country’s or institution’s standing in the overall performance table. That is also one of the pitfalls: rankings provide only a general indication and do (too) little justice to the complexity of what is being measured. In this factsheet, we look at the various types of rankings and then discuss their limitations and uses.

International innovation rankings

Information on countries' innovation performance can be found in the following popular rankings:

  • The European Innovation Scoreboard. Launched in 2001, it is published annually by the European Commission. The Scoreboard compares countries on 27 quantitative indicators grouped into ten categories (previously 25 indicators in eight categories). The result is a composite innovation index score and a performance table based on that score, ranging from modest innovators to innovation leaders. Eurostat provides most of the data.
  • The Global Innovation Index. Launched in 2007, it is compiled annually by the World Intellectual Property Organization. It uses 81 indicators divided into seven pillars to assess the innovation performance of 127 countries and economies around the world. Of these, 57 indicators are based on hard data, 19 are composite indicators, and five are based on the World Economic Forum’s Executive Opinion Survey.
  • The Global Competitiveness Index. Launched in 1997, this is the World Economic Forum’s own annual index assessing the competitiveness of about 140 countries based on a broad spectrum of 114 indicators divided into 12 pillars. Two of the pillars are higher education and innovation. A large proportion of the results rests on the World Economic Forum’s Executive Opinion Survey. Although the pillars of the Global Innovation Index (GII) and the Global Competitiveness Index (GCI) overlap to some extent, the two indices use different underlying indicators. The GCI depends more heavily than the GII on the annual Executive Opinion Survey. While the European Innovation Scoreboard indicators have remained fairly constant over the years, the GII and GCI indicators and methodologies are altered regularly.

The Global Innovation and Global Competitiveness indices are wide-ranging rankings based on a large number of indicators covering many different social and economic areas, from financial markets and infrastructure to education and innovation. Sub-indices are available in each area. To gain an accurate picture of a country’s performance, users are advised to take a close look at the underlying sub-indices and indicators.

Rankings of higher education systems

  • Universitas21 Higher Education Systems Ranking (U21 HE) has been published annually since 2012 and compares the national higher education systems of fifty countries. The indicators cover four system-related aspects: resources, environment, connectivity and output.
  • QS publishes a similar ranking of higher education systems, the QS Higher Education System Strength Rankings, based on the average performance of higher education institutions in the QS World Rankings, access to higher education and the performance of the country’s leading (‘flagship’) institution within the global rankings, along with a correction for the country’s economic context.

The figure below shows that the Netherlands is in the top ten in each of the above rankings for the most recent year.

Position of the Netherlands in global/international rankings, 2016/2017

Source: European Commission: EU Innovation Scoreboard 2017; WIPO: Global Innovation Index 2017; WEF: Global Competitiveness Index 2016/2017; Universitas21: Higher Education System Rankings 2017; QS: HE System Strength Rankings 2016
Notes: The lower the figure, the better (higher) the standing in the ranking. Looking beyond the EU member states, the Netherlands holds fifth position in the European Innovation Scoreboard, with Switzerland in first place; within the EU, Sweden is first. In the Global Innovation Index, the Netherlands ranks third globally, within its region and after correction for income levels.


The GCI and the EIS provide data covering multiple years. 

University rankings

There are various international rankings of universities, each one focusing on a different perspective, as the underlying indicators and their weighting make clear. Some focus mainly on research, while others concentrate on performance in multiple areas, such as teaching and internationalisation. We review the most popular ones below. Some are based only on hard data, in some cases provided by the universities themselves; others also make use of ‘soft’ data in the form of survey results.

ARWU (‘Shanghai’ ranking)

The Academic Ranking of World Universities (ARWU) was the first worldwide ranking of universities. Initially meant to determine the global standing of top Chinese universities, it quickly came to international prominence. It has been published annually since 2003 by Shanghai Jiao Tong University (hence the epithet ‘Shanghai’ ranking). A university’s standing in the general ARWU ranking depends mainly on research-related indicators; a sketch of how such weights combine into a single score follows the list:

  • number of alumni winning Nobel Prizes and Fields Medals (10%)
  • number of staff winning Nobel Prizes and Fields Medals (20%)
  • number of highly cited researchers in 21 broad categories (20%)
  • number of articles published in Nature and Science (20%)
  • number of articles indexed in the Science Citation Index-Expanded and the Social Sciences Citation Index (20%)
  • per capita performance of a university (10%)
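
To illustrate how a weighting scheme of this kind works, the sketch below combines normalised indicator scores into a single composite score. The weights follow the ARWU list above, but the indicator names, the example scores and the 0-100 scaling are invented for the illustration; QS and THE combine their indicators into an overall score in a similar weighted fashion. This is a minimal sketch of the principle, not the ARWU’s actual calculation.

    # Sketch of a weighted composite score, using the ARWU weights listed above.
    # Indicator names and example scores are invented; each score is assumed to be
    # normalised to a 0-100 scale before weighting.

    ARWU_WEIGHTS = {
        "alumni_awards": 0.10,     # alumni winning Nobel Prizes and Fields Medals
        "staff_awards": 0.20,      # staff winning Nobel Prizes and Fields Medals
        "highly_cited": 0.20,      # highly cited researchers
        "nature_science": 0.20,    # articles in Nature and Science
        "indexed_articles": 0.20,  # articles indexed in SCIE and SSCI
        "per_capita": 0.10,        # per capita performance
    }

    def composite_score(indicators: dict[str, float]) -> float:
        """Weighted sum of normalised indicator scores (0-100)."""
        return sum(ARWU_WEIGHTS[name] * value for name, value in indicators.items())

    # Invented example: strong on publication output, weak on prizes.
    example = {"alumni_awards": 10, "staff_awards": 0, "highly_cited": 55,
               "nature_science": 60, "indexed_articles": 100, "per_capita": 40}
    print(composite_score(example))  # 48.0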

A few years ago, rankings in fields (science, engineering, life sciences, medical sciences, social sciences) and subjects (mathematics, physics, chemistry, computer science, economics/business) were added.

Twelve of the thirteen Dutch research universities are placed in the top 400 of the 2017 ranking. The sole exception is Tilburg University, which is explicable given its disciplinary background (note that in the subject ranking for economics and business, Tilburg is in the top 100). The ARWU covers an estimated 1,300 universities worldwide. Universities ranked 101st and lower are placed in groups of fifty or one hundred and are listed alphabetically within each group.

QS World University Ranking

The QS World University Ranking compares more than 900 top universities on the following six indicators:

  • Academic reputation, measured using a global survey of academics (40%)
  • Employer reputation, measured using a global survey of employers (10%)
  • Citations per faculty score, based on Elsevier’s Scopus database (20%)
  • Student-to-faculty ratio (20%)
  • International student ratio (5%)
  • International faculty ratio (5%)

The QS ranking, which casts its net more broadly than the ARWU, also ranks universities by over forty different subjects. The order in which the QS and ARWU rankings place universities differs because each bases its measures on a different set of indicators. The QS ranking is based mainly on indicators of reputation, whereas the ARWU is based on factual scientific output. 

The Leiden Ranking

The Leiden Ranking is compiled by the Centre for Science and Technology Studies (CWTS) at Leiden University. It goes even further than the ARWU in focusing specifically on research, as it is based solely on bibliometric data (publications and citations) from the Web of Science scientific citation indexing service. That also explains why it focuses mainly on the natural and medical sciences, which traditionally publish in international science journals. The Leiden Ranking is an interactive index of over 900 research universities worldwide. Users can set their own parameters (a sketch of how the impact and collaboration indicators are calculated follows the list):

  • Time period: the first available period is 2006-2009; each successive period shifts one year forward
  • Field: one of the five fields of science or all sciences
  • Region/country
  • Minimum publication output
  • Type of indicator: impact or collaboration (based on co-authored publications)
  • Impact indicators (top 1%, top 10% and top 50%)
  • Collaboration indicators (total no. of publications co-authored with other organisations, no. of publications co-authored with organisations in other countries, and no. of co-authored publications based on collaboration distance, either <100 km or >5000 km).
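
The impact and collaboration indicators are proportions. As the notes under the ranking table further on explain, PP(top 10%) is the share of a university’s publications that belong to the 10% most frequently cited in their field and year, and PP(collab) is the share co-authored with one or more other organisations; the UIRC’s UIC intensity discussed below has the same form, counting co-publications with industry. The sketch below computes such proportions for a small, invented set of publications. It assumes the top-10% flag and the organisation count are already known, whereas the real CWTS calculation derives them from field- and year-normalised Web of Science data.

    # Sketch of the Leiden-style proportion indicators, computed on invented data.
    # Whether a publication is among the 10% most cited of its field and year, and
    # how many distinct organisations its authors represent, are taken as given.

    from dataclasses import dataclass

    @dataclass
    class Publication:
        in_top10: bool        # among the 10% most cited in its field and year
        n_organisations: int  # distinct organisations in the author addresses

    def pp_top10(pubs: list[Publication]) -> float:
        """PP(top 10%): share of publications among the 10% most cited."""
        return sum(p.in_top10 for p in pubs) / len(pubs)

    def pp_collab(pubs: list[Publication]) -> float:
        """PP(collab): share of publications co-authored with another organisation."""
        return sum(p.n_organisations > 1 for p in pubs) / len(pubs)

    # Invented example: four publications, one highly cited, three collaborative.
    pubs = [Publication(True, 3), Publication(False, 1),
            Publication(False, 2), Publication(False, 4)]
    print(pp_top10(pubs), pp_collab(pubs))  # 0.25 0.75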

A university’s standing may vary considerably depending on the indicator used, as the two Leiden indicators included in the table further on – PP(top 10%) and PP(collab) – show for the Dutch universities.

The ‘University-Industry Research Connections’ (UIRC) ranking

In this ranking, the CWTS reports the percentage of publications co-authored by universities and industry (UIC intensity) for a set of 750 research universities. Besides UIC intensity, the UIRC ranking also indicates the percentage of publications co-authored with businesses in the relevant country (% domestic UICs). It does this for all fields of science collectively at the relevant university and for seven individual broad fields of science, i.e. mathematics, computer science and engineering; natural sciences; life sciences; earth and environmental sciences; medical sciences; social sciences; and cognitive sciences.

The most recent UIRC ranking dates from 2014. The CWTS is working on a new methodology and a more user-friendly design.

Times Higher Education Ranking

Times Higher Education (THE) has published the THE World University Rankings since 2004. Like the QS World University Rankings, it covers a broad spectrum of university activities: teaching (30%), research (30%), citations (30%), industry income (2.5%) and international outlook (7.5%).

The ranking has subject tables in eight fields. A third of the score is based on the Academic Reputation Survey. The THE methodology has been much criticised. Because of changes in the methodology, the makers strongly advise against direct comparisons with previous years’ World University Rankings.

U-Multirank

A new type of university performance table was launched in 2014 with the financial support of the European Commission. It was developed by a consortium led by the Centre for Higher Education (CHE) in Germany and the Center for Higher Education Policy Studies (CHEPS) and CWTS in the Netherlands. U-Multirank is not so much a ranking as a tool for comparing university performance on five dimensions and thirty aspects, based on information about more than 1,300 institutions of higher education. Performance is rated on a scale ranging from very good (A) to weak (E). The dimensions are:

  • teaching and learning
  • regional engagement
  • knowledge transfer
  • international orientation
  • research

One novelty is that U-Multirank offers a number of ‘readymade’ rankings on its website. One of these focuses on research and research linkages, another on teaching and learning (subdivided into 16 subjects). The results of the Dutch universities in the ‘readymade’ ranking for research and research linkages are presented in a separate file. They show that almost all Dutch universities receive the highest score (A: very good) on the indicators citation rate, research publications (size-normalised) and top cited publications (the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited). TU Delft and TU Eindhoven score an A (very good) on the indicator co-publications with industrial partners. The indicators international joint publications and regional joint publications show a more mixed pattern.

Results of various rankings

If we consider the general picture that emerges from the rankings, we see that US and British universities dominate the top of the performance tables. For example, the ARWU has 31 US and seven British universities in the top 50 spots. Relative to the vast number of universities in both countries, however, far fewer appear in the second tier (100-500); instead, that tier contains a comparatively large share of universities from the Netherlands and a few other countries (including Switzerland, Sweden and Germany). The rankings show that the Dutch science system generally performs well. If we compare Utrecht University with the European universities in the ARWU top 50 on the underlying indicators, we see wide gaps on some indicators and very narrow ones on others; on some indicators Utrecht even scores higher.

Besides their standing in a ranking, we can also compare universities by looking at size (based on total budget) and educational function (based on numbers of students enrolled). If we compare Utrecht University with the top 50 European universities in the ARWU, we see that most of the latter have much more money to spend and a much smaller student population than Utrecht (source: ETER database on European higher education).

The table below shows how Dutch universities perform in some of the rankings.

Position of Dutch universities in various international rankings

University | Shanghai 2017 | QS World 2017/2018 | Leiden CWTS PP(top 10%) 2017 | Leiden CWTS PP(collab) 2017 | THE 2017/2018 | UIRC 2014 (% co-publications)
EUR | 73 | 147 | 76 | 136 | 72 | 128
LEI | 88 | 109 | 91 | 88 | 67 | 164
RU | 125 | 204 | 89 | 87 | 122 | 326
RUG | 59 | 113 | 146 | 252 | 83 | 197
TiU | 650 | 357 | 297 | 70 | 195 | 641
TUD | 175 | 54 | 67 | 447 | 63 | 4
TU/e | 350 | 104 | 134 | 497 | 141 | 1
UM | 250 | 200 | 135 | 34 | 103 | 166
UT | 350 | 179 | 181 | 357 | 179 | 43
UU | 47 | 109 | 62 | 106 | 68 | 183
UvA | 125 | 58 | 65 | 74 | 59 | 308
VU | 125 | 218 | 94 | 31 | 165 | 294
WUR | 125 | 129 | 85 | 97 | 64 | 31
Dutch universities in top 100 | 4 | 2 | 8 | 7 | 7 | 4
Source: Various
Notes: The universities are listed in alphabetical order by their Dutch acronym. Leiden ranking PP(top 10%): the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited. Leiden ranking PP(collab): the proportion of a university’s publications that have been co-authored with one or more other organisations. UIRC 2014: rank based on the percentage of a university’s public-private co-authored publications.


The table shows considerable variation in how Dutch universities perform in international rankings, which we can attribute to the wide range of methods used. Some rankings have only a few Dutch universities in the top 100, whereas others include more than half of them. The table also shows that every Dutch university has a top-100 spot in one or more rankings. Utrecht University has been the highest-ranking Dutch university in the ARWU for many years: since 2003 it has made the top fifty on eight occasions, and after falling short by a slim margin from 2012 onwards, it returned to the top fifty in 2017. In 2014/2015 the Netherlands had six universities in the QS top 100, but since 2016/2017 that number has dropped to two; Delft University of Technology and the University of Amsterdam are the best-performing Dutch universities in this ranking. In the Leiden Ranking and the THE World University Rankings, more than half of the Dutch universities are in the top 100. In the CWTS’s UIRC ranking, two universities of technology – Eindhoven and Delft – even place first and fourth, respectively.

Dutch universities also place in the top 100 in field or subject rankings. For example, in the ARWU subject ranking there are one or more Dutch universities in the top 100 in its five broad fields (Table 1 in the linked Excel file). There are seven Dutch universities in the top 100 for the social sciences and six in the top 100 for the medical sciences. In the THE subject tables for 2015/2016, there are at least three Dutch universities in the top 100 in each of the six broad fields (Table 2 in the linked Excel file). Indeed, in the field of Health there are seven Dutch universities in the top 100 (seven of the eight universities with a university medical centre). In the QS University Rankings by Subject 2016, there is at least one Dutch university in the top 50 in virtually every one of the more than forty subjects covered (Table 3 in the linked Excel file). The exceptions are Performing Arts, Computer Science, Mathematics, Biological Sciences, Chemistry and Nursing.

Limitations of rankings

Besides attracting a great deal of interest, university rankings have also been criticised. For example, they pay (too) little attention to the multifaceted performance of a university that is active in a large number of fields; it is, after all, difficult to sum up such performance in a single mark or figure. As a result, the rankings are comparing apples and oranges. Another criticism is that universities specialising in the social sciences or humanities are at a disadvantage when the indicators consist of publications and citations; these are relevant mainly in the hard sciences, with their tradition of publishing in scientific journals. The size of a university can also influence its standing, especially if the ranking does not correct for size.

In response to these criticisms, subject and field rankings have emerged that are restricted to only a single aspect, and makers have developed interactive rankings that allow users to make their own comparisons based on the underlying data.

Another criticism is that universities’ standings are influenced by the way in which the ranking is compiled (what data are used and how comparable are they?) and by the combination of indicators used. Changes in methodology also mean that a university may move up or down a ranking from year to year, even if nothing demonstrable has changed at the university itself. This means that comparisons over time bear little relation to the real-world situation (European University Association, 2013): if a university drops down a ranking from one year to the next, this does not necessarily mean that its performance has deteriorated. In addition, the scores within a group of universities may be very close, with only marginal differences between them; small changes in scores can then move those universities up or down several places in the next edition. That is why the ARWU only works with groups of universities below the top 100.
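
A toy calculation, with invented scores, makes the last point concrete: when composite scores sit close together, a shift of a few tenths of a point is enough to move a university several places, even though nothing substantial has changed.

    # Sketch: closely clustered composite scores make rank positions volatile.
    # All scores are invented and serve only to illustrate the point.

    scores = {"Univ A": 62.4, "Univ B": 62.1, "Univ C": 61.9, "Univ D": 61.8, "Univ E": 55.0}

    def rank_order(scores: dict[str, float]) -> list[str]:
        """Universities ordered from highest to lowest score."""
        return sorted(scores, key=scores.get, reverse=True)

    print(rank_order(scores))   # ['Univ A', 'Univ B', 'Univ C', 'Univ D', 'Univ E']

    # A 0.4-point gain for Univ D, e.g. after a small methodological tweak,
    # lifts it two places in the next edition.
    scores["Univ D"] += 0.4     # now about 62.2, edging past Univ B and Univ C
    print(rank_order(scores))   # ['Univ A', 'Univ D', 'Univ B', 'Univ C', 'Univ E']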

These limitations were confirmed by a study carried out by researchers at the European Commission’s Joint Research Centre (Saisana et al., 2011). The study used simulations to examine two popular rankings, the ARWU and the THE World University Rankings, and how their methodologies influence the standings of individual institutions. It concludes that rankings are highly sensitive to the underlying statistical methodology (and changes therein). Although the standing of the top 10 universities is robust in the face of such changes, this is much less true for the rest. The results are robust when comparing regions (e.g. North America, Europe and Asia), however.

How rankings are used

Rankings are used in different ways, but in each case they help users understand and make more informed choices and decisions. Users include:

  • international students who are selecting a university (Master’s/PhD)
  • researchers looking for a university where they can further their careers
  • companies exploring business locations
  • policymakers (institutional, government) who want information on the international standing of different countries, systems and institutions

Users are advised to take the information provided by rankings as just one of the sources on which they base their choices and decisions, and to consider other information as well. For example, publications on national innovation performance like the GCI and GII not only provide a number of different performance tables (overall and covering underlying aspects), but also a great deal of background information in the form of analyses and essays.

Because rankings mainly give users a broad overall picture of how countries, national science systems and individual universities are performing, they should be used with the necessary circumspection. Small changes in methodology (which indicators are used, how data are collected) can in themselves lead to changes in rank.

University rankings in particular have come under fire over time. Despite such criticism, universities are happy to use a good standing in one of the rankings to market themselves.

In 2006, criticism of rankings led to the formulation of a number of principles known as the Berlin Principles on Ranking of Higher Education Institutions. The purpose of these principles is to foster the sound and responsible use and design of rankings, leading to an improvement in the quality of data collection, methodology, and dissemination of results. The principles focus on:

  • Purposes and goals of rankings: rankings should be only one of a number of approaches to assessing higher education, should be clear about their purpose and target groups, recognise the diversity of institutions, provide clarity about the sources of information, and specify the linguistic, cultural, economic and historical context of the university systems involved.
  • Design and weighting of indicators: rankings should be transparent about their methodology, choose valid and relevant indicators, measure outcomes instead of inputs whenever possible, and limit changes to indicators and the weights assigned to them.
  • Collection and processing of data: rankings should use audited and verifiable data, include data collected with proper scientific procedures, and apply measures of quality assurance to their own ranking process.
  • Presentation of ranking results: rankings should give consumers a clear understanding of the factors used to develop the ranking and should compile the ranking in a way that limits errors and corrects faults.

Despite all the criticism, rankings have become indispensable to any discussion about national or university performance. To ensure that the results of rankings – at least in the case of university performance tables – serve a meaningful purpose, we must use them to raise awareness of specific issues in higher education policy and practice. In other words, they should be used to spark a conversation about the mission of the university (what kind of university do we want to be and against whom do we wish to measure ourselves?). Rankings that allow users to ‘play around’ with the data are preferable in that regard. 

Acronyms
Acronyms related to the Dutch universities

For more information: