
Institute of Higher Ranking

Published September 3, 2008

Photo by GAS

In 2005, Kristín Ingólfsdóttir succeeded Páll Skúlason as Rector of the
University of Iceland. After a year on the job, she revealed her
ambitious plan to take the University of Iceland to the top 100 of the
leading university ranking lists. Now, two years later, the institution
is already showing progress towards that goal, but what does it all
mean?

Since its foundation in 1911, and for most of the twentieth century, the University of Iceland was the only institution of higher learning in the country. With the addition of more schools at the university level, the University of Iceland now finds itself in the unusual position of having to compete with other national universities for both money and research funding.

For Rector Kristín Ingólfsdóttir this is a welcome challenge, and one that she has decided to tackle by making the University of Iceland one of the best in the world. “Our long-term goal is to be among the top 100 universities in the world within the next 10–15 years. We have identified several goals along the way that we need to reach in the next five years to obtain that goal. Already, we have noticed substantial progress,” Ingólfsdóttir says.

“Iceland is the only Nordic country that does not have a university in the top 100. We believe this is a realistic goal for us, and we think it is a necessary goal, both for the University and our society, because for a nation that is among the ten richest in the world, we think it is natural to have one of the best universities in the world.”

Martin Ince, editor of the Times Higher Education Supplement’s World University Rankings, seems to agree. When I asked him if he thought the University of Iceland was likely to make the list, he said: “If Reykjavik wants to be among them it will need to build its whole strategy on the need to do so, especially on becoming more visible on the world stage.” Asked if he had any idea how far outside the list the University of Iceland currently lies, he replied: “No. But other Nordic institutions do well in these rankings, as do other small countries and territories such as Singapore and Hong Kong. Iceland is richer than these nations and might do well in our rankings. We rank mainly large, general universities. To do well in our system, a university needs to be good in several of our five areas – medicine, science, arts and humanities, technology, and the social sciences.”

While the University of Iceland boasts outstanding departments in some fields, Ingólfsdóttir realises that making the University as a whole stand out is a more difficult task. “It surely is. But since we started to work towards this goal we have seen an increase in peer-reviewed publications in all fields. In the two years since we implemented our plan, we have witnessed the mentality change around the school. Productivity is up, publications are up, and people pay more attention to publishing in ranked publications.”

How Are the Rankings Compiled?
There are several lists that rank the world’s best universities, but in the world of academia, only two carry any weight. One is the World University Rankings list compiled by the British weekly Times Higher Education Supplement and the independent education service company Quacquarelli Symonds. The other is Shanghai Jiao Tong University’s Academic Ranking of World Universities, commonly referred to as the Shanghai list.

The Times list is more subjective in nature, as it places more weight on ‘peer assessment’ where scholars are asked to rate the best institutions in their field. In the 2007 edition of the list, editor Martin Ince explains: “The core of our methodology is the belief that expert opinion is a valid way to assess the standing of top universities. Our rankings contain two strands of peer review. The more important is academic opinion, worth 40% of the total score available in the rankings. […] A further 10% of the possible score in these rankings is derived from active recruiters of graduates. [We] ask major global and national employers across the public and private sectors which universities they like to hire from.”

So, a full 50% of the total score is derived from subjective reviews rather than empirical data. The rest of the score breaks down as follows: 20% is based on the staff-to-student ratio, another 20% on citations of an institution’s published papers, 5% on the proportion of international staff, and 5% on the proportion of international students.

The Shanghai list is based more on quantifiable measurements. The score is based on Nobel Prizes and Fields Medals won by alumni (10%), Nobel Prizes and Fields Medals won by faculty (20%), highly cited researchers in 21 broad subject categories (20%), articles published in Nature and Science (20%), articles indexed in the Science Citation Index, Social Sciences Citation Index, and Arts and Humanities Citation Index (20%), and finally the per capita academic performance of the institution, which adjusts for its size (10%).
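For the numerically inclined, both lists boil down to the same arithmetic: a weighted sum of indicator scores. The minimal Python sketch below uses the weights quoted above; the indicator names and the sample scores are my own invention for illustration, and the real rankings normalise each indicator against the top-scoring institution before weighting.

# Weighted-sum scoring as described in the article. Weights come from the
# published methodologies; indicator names and scores are hypothetical.

THES_WEIGHTS = {
    "academic_peer_review": 0.40,    # scholars rating institutions in their field
    "employer_review": 0.10,         # recruiters of graduates
    "staff_student_ratio": 0.20,
    "citations": 0.20,
    "international_staff": 0.05,
    "international_students": 0.05,
}

SHANGHAI_WEIGHTS = {
    "alumni_awards": 0.10,           # Nobels/Fields Medals won by alumni
    "faculty_awards": 0.20,          # ...and by faculty
    "highly_cited_researchers": 0.20,
    "nature_science_articles": 0.20,
    "citation_indices": 0.20,        # SCI, SSCI and A&HCI papers
    "per_capita_performance": 0.10,  # size adjustment
}

def composite_score(scores, weights):
    """Weighted sum of indicator scores, each assumed normalised to 0-100."""
    return sum(weights[name] * scores[name] for name in weights)

# A hypothetical school scoring 50 on every indicator except peer review:
school = {name: 50.0 for name in THES_WEIGHTS}
school["academic_peer_review"] = 80.0
print(composite_score(school, THES_WEIGHTS))  # 62.0

Because half of the Times weighting sits on survey opinion, a 30-point swing in peer reputation moves the composite by 12 points, while the same swing in, say, international staff would move it by only 1.5 – which is precisely why the critics cited below worry about who answers those surveys.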

The Validity of Ranking
Both lists have received substantial criticism for their rankings and methodology. The Times list is generally considered too subjective and lacking in quantifiable measures, while the Shanghai list is considered too focused on natural sciences and research excellence rather than educational indicators. In an interview with the Grapevine, Martin Ince stated: “We have now published the Rankings four times and we have enhanced their quality by gathering more data and improving our quality assurance.” But the evidence seems to indicate otherwise.

A study of both lists by academic researchers, published in the open-access journal BMC Medicine, found that only 133 institutions made the top 200 of both lists, and that four schools from the Shanghai list’s top 50 didn’t even make the Times list’s top 500. The authors stated that “the lack of better concordance [was] disquieting” and blamed the discrepancies on poor methodology and inappropriate indicators.

The research points out that the response rate for the peer review survey on which the Times rankings are heavily based was less than 1%, and that there are no safeguards against selection bias. The international character of an institution is said to be more an indicator of economic and legislative factors than of academic excellence, and other aspects of the ratings are questioned as non-transparent and unreliable.

The research also raises several questions regarding the Shanghai formula. Nobel and Fields awards “clearly measure research excellence, even if they don’t cover all fields […]”, but, the authors note, “it is unclear why universities with Nobel- or Fields-winning alumni are those that provide the best education. As for faculty, Nobel- and Fields-winners typically have performed their groundbreaking work elsewhere. We found that of 22 Nobel Prize winners in Medicine/Physiology in 1997–2006, only seven did their award-winning work at the institution they were affiliated with when they received the award. Therefore, this measurement addresses the ability of institutions to attract prestigious awardees rather than being the site where groundbreaking work is performed. Finally, the vast majority of institutions have no such awardees. Thus, such criteria can rank only a few institutions.”

The study goes on to question the way citations are applied to the rankings, arguing that the method used is inherently flawed. Among the corresponding authors of the 10 most-cited articles published as recently as 1996–1999, the researchers found, 50% had changed institutions or were deceased by 2006. Their final conclusion was that “naïve lists of international institutional rankings that do not address these fundamental challenges with transparent methods are misleading and should be abandoned.”

If the accuracy of the global university ranking lists is suspect at best, if not altogether illusory, the question remains: what purpose do they serve?

The Commodification of Education
The most obvious answer is that everybody loves rankings. We rank the best movies, the best songs, the best books, the best football players, the most handsome movie stars and the most desirable women. Why should universities be any different?

But, as much as I love to browse through Maxim’s Hot 100, I have to admit that there is considerably more research behind the leading university rankings than behind a panel of journalists measuring saliva reflexes over photos of scantily clad women. It is tempting to assume that there is something more behind these lists than our love of rankings.

The lists fill a certain gap in consumer information for prospective students, and they are known to affect students’ thinking when short-listing schools to attend. In today’s education system, students equal money – either in direct tuition or in public funding per student. And good students generate even more money. So there is a financial incentive for academic institutions to make the ranking lists, if doing so makes a school more attractive to students.

But student attraction only tells half the story. While a study by the Cornell Higher Education Research Institute focused only on national rankings, it is likely revealing for the rest of the world. The study “shows what educators have long suspected – where colleges and universities place in U.S. News and World Report’s annual rankings really makes a difference – affecting enrolment yield, student quality, financial aid packages and, as a result, even where institutions place in the rankings the following year.”

Furthermore, studies have suggested that higher-ranked universities receive more public and private funding for research. There is, in other words, considerable financial incentive in placing high in the rankings. THES editor Martin Ince admits as much:

“We are measuring large, general universities because they are increasingly global. Because we don’t have any subject-specific data, the Rankings are useful to students and academics but need to be supplemented by extra data. Their main users are managers of universities and people in education ministries, funding bodies, etc., who want to know about the strength of their universities and their university systems.”

What Does It Mean?
I don’t question whether the University of Iceland can make the top 100 in the not-so-distant future. I believe that with the right funding and good intentions, the University of Iceland could very well make an appearance among the 100 top-ranked universities in the world. But I wonder if it really means what you think it means.

There seems to be little to suggest that a top-ranked university offers better teaching for the average student than a non-ranked one. The rankings may be somewhat useful as a general indicator of an institution’s research power, but it is difficult to establish whether they manage to measure quality rather than quantity, and there seems to be little in the criteria to justify the difference between a top-25 and a top-50 placing.

In and of itself, the top 100 goal may be beneficial for the University of Iceland. It is a good opportunity for some navel-gazing and internal reviews. But when you look at it like that, top 100 just doesn’t have the same ring to it anymore.

Sources:
http://www.biomedcentral.com/1741-7015/5/30
http://www.news.cornell.edu/Chronicle/99/12.2.99/rankings-matter.html
http://www.topuniversities.com/worlduniversityrankings/
