
Journal impact factor ban is bad for Dutch science

Raymond Poot and Willem Mulder say that abolishing measurable assessment criteria will make judgements more political and arbitrary

3 August 2021
[Image: divers' feet and a measuring stick visible above the water level, as a metaphor for the article's theme. Source: Alamy]




Utrecht University recently announced that it will ban the use of the journal impact factor when evaluating its scientists. This measurable performance indicator is to be discarded in favour of an "open science" system, one that centres the collective (the team) at the expense of the individual scientist.

However, we fear that Utrecht's new "Recognition and Rewards" scheme will introduce arbitrariness and lower scientific quality, with serious consequences for how Dutch scientists are recognised and evaluated. Young scientists in particular will be hit hard, losing their international competitiveness.

Utrecht's assertion that the journal impact factor plays a disproportionately large role in the evaluation of researchers is misleading. For a considerable number of research fields, impact factors are not that relevant. To account for field-specific cultures, researchers have developed field-weighted citation impact scores, which compare the total citations a scientist actually receives with the average for their discipline. Medical imaging research groups, for example, typically publish their work in technical journals with low impact factors. Their techniques may not be groundbreaking, but developing faster MRI methods matters. The Dutch Research Council (NWO) takes this into account in its funding policy: its career development programmes have awarded many personal grants to medical imaging researchers who have never published in a high-impact journal.
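
The field-weighted score mentioned above is, at heart, a simple ratio. As a minimal sketch (following the common Scopus-style definition of the field-weighted citation impact, FWCI; the exact normalisation used by any given funder or tool may differ), let $c$ be the citations a publication has actually received and $\bar{c}$ the average citations of comparable publications in the same field, of the same year and document type:

$$\mathrm{FWCI} = \frac{c}{\bar{c}}$$

A score above 1 means above the field average: a paper with 10 citations in a field where comparable papers average 4 scores $10/4 = 2.5$, however low the impact factor of the journal it appeared in.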

A second misconception is that a journal's impact factor bears no relation to the quality of what it publishes. An average paper in a top journal such as Nature, Science or Cell requires far more work than an average paper in a technical journal. Top journals draw on the help of world-class experts, which safeguards their high impact and quality. This does not mean that every paper in Nature is necessarily better than an article published in a technical journal, but in general the new techniques and concepts that overturn dogma appear in top journals.

The application format of NWO's "Veni, Vidi, Vici" ("I came, I saw, I conquered") talent schemes has changed fundamentally in recent years. CV content carrying objective information such as publications, citations and lectures has been replaced by a "narrative". Whatever reviewers think of an application, they no longer score it; instead they are asked to fill in lists of strengths and weaknesses. In some NWO competitions the CV has been scrapped altogether because of the emphasis on "team science".

The feedback from evaluation committees is disturbing. Members do not know how to compare candidates, and searching online for the numbers that would indicate their performance is forbidden. Reviewers, who are often recruited from outside the Netherlands, complain that the format is far too time-consuming, and some simply refuse to judge the "narratives".

In our view, it is NWO's responsibility to allocate public money in support of the best and most talented scientists, so that they can make new discoveries and innovate. We strongly support "recognition and rewards" for academics whose orientation is not purely scientific, but we consider that the responsibility of the universities, not of NWO. University human resources policies must offer distinct career paths for academics who excel at non-scientific competencies.

Quantitative problem analysis is an important feature of scientific practice, particularly in medicine, the life sciences and the exact sciences, disciplines in which the whole world searches for creative solutions. Scientific success in these fields is therefore easier to measure and compare. We can appreciate that more qualitative sciences may use other ways of assessing success. We strongly support assessing different scientific disciplines differently, and we suggest that the disciplines themselves decide how scientists within them are evaluated.

Utrecht University's policy places great emphasis on open science, the level of public engagement, public access to data, and the composition and leadership of research teams. These criteria are not scientific but political. Moreover, measuring these factors is extremely difficult, let alone comparing different scientists on them fairly. They should therefore not be the principal criteria for evaluating scientists. For research tracks in medicine, the life sciences and the exact sciences especially, internationally recognised and measurable criteria must remain paramount.

The US, the world's scientific superpower, is on a completely different trajectory from the Netherlands. Its large public funders, such as the National Institutes of Health (NIH) and the National Science Foundation (NSF), focus squarely on scientific achievement and have not signed the Declaration on Research Assessment (DORA, also known as the San Francisco declaration), which would exclude impact factors from evaluation.

We believe that NWO and the Dutch universities should retain objective and measurable criteria for academics who focus primarily on research. We give preference to the scientists who produce the best science. That is the best way to benefit society and to preserve the Netherlands' favourable position in the international rankings.

Raymond Poot is an associate professor at Erasmus University Medical Centre in Rotterdam. Willem Mulder is a professor at Radboud University Medical Centre and professor of precision medicine at Eindhoven University of Technology. This article was first published in the Dutch magazine Science Guide with the signatures of 172 academics; this is a translated and edited version of the original.

This article was translated for Times Higher Education by Lu Zihui.

Postscript

Print headline: Journal impact factor ban is bad for Dutch science


Readers' comments (6)

I hope you also publish the excellent reply https://www.scienceguide.nl/2021/07/we-moeten-af-van-telzucht-in-de-wetenschap/ by Annemijn Algra et al.
I found this article rather confusing, as it contains a lot of opinion but precious little evidence. It also starts with an illogical argument, saying "Utrecht's assertion that the journal impact factor plays a disproportionately large role in the evaluation of researchers is misleading. For a considerable number of research fields, impact factors are not that relevant." If that is so, why are the authors so worried about dropping the impact factor? They then go on to contrast the quality of papers in Cell/Nature/Science with that of papers in "technical journals" - I'm not sure what is meant by that - there are plenty of journals that fit neither category that report excellent work. Reviewing standards are at least as high in many society journals, which have editors who are familiar with the specific topic of a paper - more so than the journalist editors of the 'top' journals. The pressure to get publications in CNS is known to create perverse incentives (https://royalsocietypublishing.org/doi/10.1098/rsos.171511), and these journals have a relatively high rate of retractions: https://www.nature.com/articles/nature.2014.15951. It's also interesting that they see open science as a political rather than scientific matter. I could not disagree more: it's frustrating to read these 'high impact' papers in 'top' journals that make extraordinary claims but then just say 'data available on request' (it never is). If we cannot see the data and have a clear account of methods, then the research paper remains more like an advertisement than a scientific contribution. Finally, the authors' concern for early-career researchers is laudable, but have they surveyed early-career researchers to ask what they think about the new criteria?
And here is a response from some Dutch scientists who take a different perspective from the authors (including some early-career researchers). https://recognitionrewards.nl/2021/08/03/why-the-new-recognition-rewards-actually-boosts-excellent-science/
Sigh. I suppose that there will always be some dinosaurs who oppose progress. The journal impact factor has been totally debunked for decades now. None of that work is referred to. I'll cite one example from my own experience. In 1981 we published a preliminary account of results that we'd obtained with the (then new) method for recording the activity of single ion channels. It was brief and crude, but in the early 80's anything with single channels in it sailed into Nature. After four more years of work we published a much better account of the work, 57 printed pages in the Journal of Physiology. The idea that the short note is worth more than the real paper is beyond absurd. How about reading the applicant's (self-nominated) three best papers? It doesn't matter a damn where they are published (or even whether they're published yet).
If the venue does not matter, then every university should have an in-house journal and academics should publish in those. Why bother with other journals in the first place?
So Poot and Mulder, both from Dutch medical centres, want to retain academic evaluation by journal impact factor. Their logic is hard to follow and harder to swallow: top journals have high impact factors and publish the best papers because they get assistance from world experts who safeguard high impact and quality. What does this mean? In many research fields, they say, journal impact factors are not nearly as significant as they are in medicine. Oh really? Academic performance is measured almost entirely by metrics these days and by far the most important of these is the journal impact factor. As David Colquhoun says, this is not because the journal impact factor is a good measure, but because it has long been gamed, and manipulation has been most successful in medicine. For years, the editors of the Lancet and BMJ have bewailed the corruption that produces the high impact factors boasted by the top journals in medicine. Papers are written to order and to a formula that will generate citations and thereby contribute most to the journal impact factor. Dozens typically claim authorship of a paper in a medical journal; equally typically, none of them wrote it. As citations grow older and ever more positive, the research base becomes shakier. Some of the most prolific authors have never actually existed, nor have the papers they have written, or the journals in which they have published. The system is absurd and has long been recognized as quite daft, but a lot of capital has been sunk into working this system, and probably in no discipline more than medicine.