Japan's education ministry (MEXT), its universities, the business world and the mass media all glibly claim that "raising the international outlook score will push up a university's world ranking," and they pour enormous sums into promoting short-term exchange programmes of a few months to a year. Yet more than 98 per cent of Japanese universities remain bottom-tier in the Times Higher Education world rankings. The weight given to the international outlook indicators is in fact extremely small.
Times Higher Education's documentation for the World University Rankings 2025 explains in detail how universities are assessed.
The evaluation rests on 18 indicators spread across five main pillars: teaching, research environment, research quality, international outlook and industry.
It also sets out the exact weight of each indicator and the processes for collecting and standardising the data, emphasising the transparency and reliability of the ranking.
The framework is intended to help a broad range of stakeholders, including students, researchers, university staff and governments, compare institutions.
Teaching (the learning environment): 29.5%
- Teaching reputation: 15%
- Staff-to-student ratio: 4.5%
- Doctorate-to-bachelor’s ratio: 2%
- Doctorates-awarded-to-academic-staff ratio: 5.5%
- Institutional income: 2.5%
Research environment: 29%
- Research reputation: 18%
- Research income: 5.5%
- Research productivity: 5.5%
Research quality: 30%
- Citation impact: 15%
- Research strength: 5%
- Research excellence: 5%
- Research influence: 5%
International outlook: 7.5%
- Proportion of international students: 2.5%
- Proportion of international staff: 2.5%
- International collaboration: 2.5%
Industry: 4%
- Industry income: 2%
- Patents: 2%
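To make the weighting concrete, here is a minimal Python sketch built directly on the proportions listed above and on the methodology's statement that standardised indicator scores are combined in those proportions. The indicator names are shorthand of our own, not THE field names, and the simple weighted sum is an illustration rather than THE's actual code.

```python
# Weights taken from the list above; keys are our own shorthand, not THE's.
WEIGHTS = {
    "teaching_reputation": 0.15,
    "staff_to_student_ratio": 0.045,
    "doctorate_to_bachelors_ratio": 0.02,
    "doctorates_to_staff_ratio": 0.055,
    "institutional_income": 0.025,
    "research_reputation": 0.18,
    "research_income": 0.055,
    "research_productivity": 0.055,
    "citation_impact": 0.15,
    "research_strength": 0.05,
    "research_excellence": 0.05,
    "research_influence": 0.05,
    "intl_students": 0.025,
    "intl_staff": 0.025,
    "intl_collaboration": 0.025,
    "industry_income": 0.02,
    "patents": 0.02,
}

def overall_score(indicator_scores: dict[str, float]) -> float:
    """Weighted sum of standardised (0-100) indicator scores."""
    return sum(WEIGHTS[name] * indicator_scores[name] for name in WEIGHTS)
```

The weights above sum to 1.0, matching the pillar totals of 29.5 + 29 + 30 + 7.5 + 4 = 100 per cent.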
Exclusions
Universities can be excluded from the World University Rankings if they do not teach undergraduates, or if their research output amounted to fewer than 1,000 relevant publications between 2019 and 2023 (with a minimum of 100 a year, down from 150 in previous years).
Universities can also be excluded if 80 per cent or more of their research output is exclusively in one of our 11 subject areas.
Universities at the bottom of the table that are listed as having “reporter” status provided data but did not meet our eligibility criteria to receive a rank.
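Read as rules, the exclusion criteria above can be checked mechanically, as in the sketch below. This is only an illustration of the published thresholds, not THE's actual pipeline, and the parameter names are invented for the example.

```python
def is_eligible(teaches_undergraduates: bool,
                publications_by_year: dict[int, int],
                share_in_largest_subject: float) -> bool:
    """Apply the exclusion rules quoted above for the 2019-2023 window."""
    years = range(2019, 2024)
    total_pubs = sum(publications_by_year.get(y, 0) for y in years)
    min_per_year = min(publications_by_year.get(y, 0) for y in years)
    if not teaches_undergraduates:
        return False                      # no undergraduate teaching
    if total_pubs < 1000 or min_per_year < 100:
        return False                      # too few relevant publications
    if share_in_largest_subject >= 0.80:
        return False                      # output concentrated in one subject area
    return True
```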
Data collection
Institutions provide and sign off their institutional data for use in the rankings. On the rare occasions when a particular data point at a subject level is not provided, we use an estimate calculated from the overall data point and any available subject-level data point. If a metric score cannot be calculated because of missing data points, it is imputed using a conservative estimate. By doing this, we avoid penalising an institution too harshly with a “zero” value for data that it overlooks or does not provide, but we do not reward it for withholding them.
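The methodology does not specify what the "conservative estimate" is. Purely to illustrate the idea of an imputation that neither zeroes an institution out nor rewards withheld data, the sketch below fills a missing metric score with a low percentile of the scores that were reported; the 25th-percentile choice is an assumption of ours, not THE's rule.

```python
import numpy as np

def impute_missing(scores: list[float | None], percentile: float = 25.0) -> list[float]:
    """Fill missing scores with a low percentile of the reported scores."""
    reported = np.array([s for s in scores if s is not None], dtype=float)
    fill_value = float(np.percentile(reported, percentile))  # assumed "conservative" choice
    return [s if s is not None else fill_value for s in scores]
```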
Getting to the final result
Moving from a series of specific data points to indicators, and finally to a total score for an institution, requires us to match values that represent fundamentally different data. To do this, we use a standardisation approach for each indicator, and then combine the indicators in the proportions we detail above.
The standardisation approach we use is based on the distribution of data within a particular indicator, where we calculate a cumulative probability function, and evaluate where a particular institution’s indicator sits within that function.
For most metrics, we calculate the cumulative probability function using a version of Z-scoring. The distribution of data in the metrics on teaching reputation, research reputation, research excellence, research influence and patents requires us to use an exponential component.
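As a sketch of the Z-scoring approach described here, the following converts each raw indicator value to a Z-score across all institutions and reads off its position in a cumulative distribution, assumed here to be the normal CDF, to give a comparable 0-100 score. The exponential component applied to the reputation, research excellence, research influence and patents metrics is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def standardise(values: np.ndarray) -> np.ndarray:
    """Z-score each institution's raw value, then map it through the normal CDF."""
    z = (values - values.mean()) / values.std(ddof=0)
    return 100.0 * norm.cdf(z)  # position within the cumulative distribution, scaled to 0-100
```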