How scientists are redefining success

University rankings, journal impact factor, citation counts: some argue that these methods of measuring the 'best' science are outdated and even a hindrance to scientific progress. Switzerland wants to help implement alternatives – a task easier said than done.


According to a recent report by the Swiss Science Council (SSC), scientific activities – whether it’s the employment of researchers, the publication of papers, or public and private investment – have grown significantly in recent decades. This growth, in turn, has resulted in unprecedented levels of competition for funding, prizes, academic positions, spots in top journals, and other trappings of scientific success.

With the increased pressure to “publish or perish”, more and more importance has been placed on quantitative measures of scientific success [see box] – often focused on publication citations and journal impact factor – which can boost researchers’ incentive to publish even more.

  • H-index: A standardised numerical rating calculated from the number of scientific papers a researcher has published and the number of times those papers have been cited. Although the h-index aims to express both a scientist’s productivity and impact in his or her field, critics say that it is overly simplistic and can’t be compared across areas of science (a short calculation sketch follows this box).
  • Journal impact factor (JIF): Used to rank scientific journals by their importance to their respective fields. A JIF is calculated as the average number of citations received in a given year by the papers a journal published in the preceding two years. The JIF is a simple comparison method, but it has flaws: it does not directly assess article quality, can’t always be compared across fields, and can be skewed by a few highly cited articles.
  • University rankings: Can be based on knowledge transfer and citations as well as teaching performance. As more universities and research institutions have started vying for access to funds, scientists and students, global rankings have become an increasingly useful tool for assessing quality and impact at these organisations. But some argue that rankings can create incentives for institutions to focus too much on high-impact research to boost their ranking, and too little on educational and social responsibilities.
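
To make the arithmetic behind the first two metrics concrete, here is a minimal sketch in Python. It is not taken from the SSC report; the citation and paper counts are invented purely for illustration.

```python
# Minimal, illustrative calculations of the two metrics defined above.
# All numbers are made up for the example; they are not real data.

def h_index(citations):
    """Largest h such that the researcher has h papers cited at least h times each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def journal_impact_factor(citations_in_year, citable_items_prev_two_years):
    """Two-year JIF: citations received in a given year to papers published in the
    preceding two years, divided by the number of citable items from those years."""
    return citations_in_year / citable_items_prev_two_years

# A researcher whose five papers were cited 10, 8, 5, 4 and 3 times has an h-index of 4:
print(h_index([10, 8, 5, 4, 3]))        # -> 4

# A journal whose 120 papers from the previous two years drew 300 citations this year
# has an impact factor of 2.5:
print(journal_impact_factor(300, 120))  # -> 2.5
```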

It is, as Swiss Academies of Arts and Sciences president Antonio Loprieno put it at a recent conference in Bern, a “paradox” of modern science.

“On the one hand, use of contemporary measures of scientific performance has increased exponentially together with the increase in science funding, which is for us a very good thing. On the other hand, we entertain reservations about their fairness, validity or applicability,” he said.

To address some of these reservations, the SSC this year produced a number of recommendations (PDF) for the Swiss science community to change the way it evaluates and funds research – notably by using quantitative indicators to support (but not replace) qualitative evaluation.

“In recent years [quantitative] indicators have been used increasingly as a substitute for qualitative evaluation, a practice that has given researchers wrong incentives and threatened scientific quality,” the report said. “A national strategy taking into account the diversity of disciplinary and institutional requirements for a differentiated evaluation should be promoted.”

More than a number

Overemphasis on quantitative metrics, critics say, leads to a deterioration of the scientific rigour that these metrics were intended to support in the first place – not to mention tremendous stress for scientists themselves.

“According to Google, my H-index is 48, which means that I have 48 papers that have at least 48 citations. I don’t like being reduced to a single number, but there are people on this planet who think of me in terms of a single number,” said Stephen Curry, Professor of Structural Biology at Imperial College London, who also spoke at the Bern conference.

“You can use numbers to characterise some aspects of it, but science is fundamentally a deeply human activity. One cannot simply measure the excellence of science; it has to be a matter of judgment, discussion and expert input.”

Curry leads the steering committee of the San Francisco Declaration on Research Assessment (DORA), one of several recent efforts to re-evaluate how science is, well, evaluated, and to identify new guidelines for assessing researchers and institutions in a more holistic way that is still efficient enough to keep science moving forward.

Quantifying quality

Critics also argue that quantitative indicators like the H-index are prone to distortion, and not transparent enough in how they’re calculated and used. These metrics discourage diverse and high-risk, high-return research in favour of conformism and uniformity, they say.

For Ellen Hazelkorn, head of the Higher Education Policy Research Unit at the Dublin Institute of Technology, these kinds of metrics are especially problematic at a time when science is for everyone, not just the elite few in their ivory towers.

“Once research is seen to have a value and impact beyond the academy, it’s no longer the pursuit of individual, intellectual curiosity, but balanced by social and national priorities,” Hazelkorn said.

She added that metrics like institutional rankings – for example the much-anticipated annual Times Higher Education World University Rankings – are “hugely unsuited to a high-participation society” because they focus on accountability only within the academic sphere, and not to society as a whole.

“Rankings are simple to understand, but their indicators of success increase the levels of inequality and stratification within our societies, which in turn have implications for access to public goods.”

Simplicity is also a big part of the appeal of individual and journal-level metrics like citation counts and impact factor, Curry said. “Metrics are easy to calculate; they have a pseudo-objectivity, which is appealing to us, and they make our lives easier.”

Pressure and prestige

But at what cost? As Curry explained, focusing too much on these tools can even slow down scientific progress by encouraging researchers to submit their work to the most prestigious journal possible, raising the likelihood of rejection or even retraction, the latter of which can also erode public trust in science.

Additionally, a narrowing of scientific focus to publications and prestige means that other important activities are undervalued and may suffer as a result – whether it’s teaching, communication and outreach, or mentoring of younger scientists.

Today, DORA has been signed by nearly 14,000 individuals and organisations. The 2015 Leiden Manifesto has also gained traction in the academic community. Both documents call for reducing reliance on quantitative metrics like citations and impact factor, or at least using them in combination with other, qualitative methods that focus on scientific content.

Emerging alternatives

Science is still growing fast, and everyone wants to be excellent. So, when it comes to the reality of doing science, how can more holistic – but possibly more time-consuming – evaluation methods overcome the allure of ‘quick and dirty’ metrics?

For Sarah de Rijcke, deputy director of the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands – the birthplace of the Leiden Manifesto – one answer lies in what she calls a “portfolio approach” to evaluating science, which can be adapted to a given situation, institution or researcher.

“Universal fixes are not very effective,” de Rijcke said, because the kinds of research that are most rewarded by traditional evaluation methods vary across disciplines. Instead, for a given evaluation context, she recommends generic principles for creating a “standardised narrative” that weaves together a scientist’s expertise, outputs – from publications and grants to teaching and even social media – and influences on science and society.

The DORA committee also collects examples of such approaches on its website; for example, asking researchers to summarise their best publications and contributions in their own words using a ‘biosketch’.

Stephen Curry noted that the global scientific community’s move toward open access – also seen in Switzerland – is likely to be a catalyst for change when it comes to evaluation metrics, since open-access journals and pre-print archives may encourage greater emphasis on content and transparency than traditional subscription-based journals.

Swiss responsibility

Loprieno told swissinfo.ch that because of its plentiful funding and flexible administrative requirements compared with other European countries, Switzerland has an international obligation to push for change.

“I think that since we have such a well-funded system, we also have a certain responsibility in the rest of the world. Switzerland might experiment or try solutions in a more proactive way to see how to overcome the difficulties of the current system,” he said.

One of the biggest challenges, he added, will be to ensure that young scientists who are early in their careers are adequately supported during the transition to more diverse science metrics.

“My solution would be slightly more tolerance for longer-term research projects. We tend now to support and finance on a short-term basis, which goes in line with this logic of competition. If we were ready to finance on a longer-term perspective, that might take some pressure off the earlier stages in the career and also create a more equal system.”

Such a big change in research culture is much more easily said than done, as Loprieno acknowledged, but that doesn’t mean it’s impossible.

“This is also a culture we want to revise, so we need somehow to start somewhere,” he said.

SWI swissinfo.ch - a branch of Swiss Broadcasting Corporation SRG SSR
