Sorbonne University, founded in Paris in 1253 and known globally as a symbol of education, science and culture, has announced that, starting in 2026, it will stop submitting data to the Times Higher Education (THE) rankings. It joins a growing movement of universities questioning the value and methodology of these controversial league tables.

Ranking companies add together various indices that purport to measure quality. These include research outputs, the results of reputation surveys, the amount of money a university receives in research grants and donations, and how many Nobel prize winners it has ever employed.

Nathalie Drach-Temam, president of the Sorbonne, stated that “the data used to assess each university’s performance is not open or transparent” and that “the reproducibility of the results produced cannot be guaranteed”.

This echoes wider concerns about the lack of scientific rigour of ranking systems that claim to measure complex institutional performance through simplified metrics.

The problem is that the general public believe that the rankings offer an indication of quality. As a result, rankings have enormous influence over the market, shaping where students choose to study and where funders choose to invest.

The university’s decision aligns with its commitment to the Agreement on Reforming Research Assessment, an agreement signed by over 700 research organisations, funders and professional societies, and the Barcelona Declaration, signed by about 200 universities and research institutes. Both advocate for open science practices to make scientific research, data, methods, and educational resources transparent, accessible and reusable by everyone without barriers. And both recommend “avoiding the use of rankings of research organisations in research assessment”.

The Sorbonne joins a growing list of high-profile institutions abandoning rankings. Columbia University, Utrecht University and several Indian institutes have opted out of major ranking systems. In the US, 17 medical and law schools, including Yale and Harvard, have withdrawn from discipline-specific rankings.

There are five major ranking companies and at least 20 smaller ones. On top of these are a similar number of discipline-specific and regional rankings. Together they make up a billion-dollar industry, even though the rankings themselves are accessible without charge.

The rankings industry has increasingly targeted African countries. It sees the continent as a new market at a time when it is losing traction among high profile institutions in the global north.

There has been a rapid increase in snazzy events run by rankings organisations on the continent. These events are very expensive and often quite luxurious – attended by vice-chancellors, academics, consultants and others.

As an academic involved in higher education teaching, I believe that chasing the rankings can harm Africa’s fragile higher education system. There are two main reasons for this.

Firstly, the rankings metrics largely focus on research output, rather than on the potential for that research to address local problems. Secondly, the rankings fail to consider higher education’s role in nurturing critical citizens, or contributing to the public good.

The Sorbonne’s decision reflects a growing body of opinion that the rankings industry is unscientific and a poor means of measuring quality.

Nevertheless, many vice-chancellors are not willing to risk the cost of withdrawing. Rankings might do a poor job of indicating quality, in all its nuanced forms, but they are very good at shaping public opinion. And even if a university chooses to stay out of the rankings by refusing to hand over its data, the industry continues to include it, based only on limited publicly available data.

The ranking industry

Rankings themselves are available for free. The ranking industry derives most of its revenue from reselling the data that universities provide. Universities submit detailed institutional data to ranking companies without charge. That information is then repackaged and sold back to institutions, governments and corporations.

This data includes institutional income. It often also includes contact details of staff and students, which are used for “reputation surveys”. In the case of the QS University Rankings, “reputation” makes up more than 40% of an institution’s overall score.

This business model has created what can be described as a sophisticated data harvesting operation disguised as academic assessment.

Mounting criticism

Academic research has extensively documented the problems with ranking methodologies. These include:

  • the use of proxy metrics that poorly represent institutional quality. For example, many university rankings do not measure teaching quality at all, and those that do rely on proxies such as income, staff-to-student ratio and international academic reputation.

  • composite indexing that combines unrelated measurements. The metrics that are collected are simply added together, even though they have no bearing on each other. Our students are repeatedly warned of the dangers of using composite measurement in research, and yet this is at the heart of the rankings industry.

  • subjective weighting systems that can dramatically alter results based on arbitrary decisions. If the system weights reputation at 20% and university income at 10%, we get one order of institutions. Switch these weightings to make the former 10% and the latter 20% and the list rearranges itself, even though the quality of the institutions is unchanged (a simple illustration follows this list).
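To make the weighting problem concrete, here is a minimal sketch with hypothetical institutions and made-up scores (not data from any real ranking), showing how swapping two arbitrary weights flips a composite ranking while the underlying data never changes.

```python
# Hypothetical institutions with made-up metric scores out of 100.
universities = {
    "University A": {"reputation": 90, "income": 60},
    "University B": {"reputation": 60, "income": 95},
}

def composite_rank(weights):
    """Order institutions by a weighted sum of their metrics."""
    scores = {
        name: sum(metrics[m] * weights[m] for m in weights)
        for name, metrics in universities.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Reputation weighted at 20%, income at 10%: University A comes first.
print(composite_rank({"reputation": 0.2, "income": 0.1}))

# Swap the weights (reputation 10%, income 20%): University B now comes first,
# although neither institution has changed in any way.
print(composite_rank({"reputation": 0.1, "income": 0.2}))
```

In this toy example the two institutions swap places purely because of the weighting choice, which is exactly the arbitrariness described in the list above.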

Rankings tend to favour research-intensive universities while ignoring teaching quality, community engagement and local relevance.

Most ranking systems emphasise English-language publications. This reinforces existing academic hierarchies rather than providing meaningful assessment of quality.

New rankings that are being introduced, such as the Sub-Saharan Africa rankings, the Emerging Economies rankings and even the Impact rankings, sadly still suffer from the same problems of proxy measures and composite, subjective weightings.

In addition, many of the ranking companies refuse to reveal precise methodological detail. This makes it impossible to verify their claims or understand on what basis institutions are actually assessed.

Researchers argue that rankings have thrived because they align with the idea of higher education as a marketplace where institutions compete for market share. This has led universities to prioritise metrics that improve their ranking positions rather than activities that best serve their students and communities.

The emphasis on quantifiable outputs has created what scholars call “coercive isomorphism” – pressure for all universities to adopt similar structures and priorities regardless of their specific missions or local contexts.

Research shows that striving for a spot in the rankings limelight affects resource allocation, strategic planning and even which students apply to institutions. Some universities have shifted focus from teaching quality to research output specifically to improve rankings. Others have engaged in “gaming” – manipulating data to boost their positions.

Looking forward

Participation in methodologically flawed ranking systems presents a contradiction: universities built on principles of scientific research continue to support an industry whose methods would fail basic peer review standards.

For universities still participating, the Sorbonne’s move raises an uncomfortable question: what are their institutional priorities and commitments to scientific integrity?

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Sioux McKenna, Rhodes University

Sioux McKenna does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.