Managers who ‘fetishise’ certain journals warp incentive culture

UK research culture likened to 1970s car industry, with individuals ignoring quality concerns

November 11, 2019

Academics and universities that “fetishise” publication in certain journals are the root cause of poor incentives in science – not the design of assessments such as the UK’s research excellence framework, an event on reproducibility has been told.

The use of measures such as journal impact factors to make decisions on REF submission – even when the framework explicitly stated that it would not use them to assess articles – further demonstrated that they were still “part of a culture that exists in universities”.

Marcus Munafò, professor of biological psychology at the University of Bristol and chair of the UK Reproducibility Network, told an event in London that much of research culture “remained rooted…in the 19th century”, with ways of working “that are not necessarily fit for purpose any more”.

He compared it to the 1970s car industry, in which poor-quality vehicles were built without the constituent parts being checked along the way.

“We produce many of our scientific papers without necessarily having a concern about whether they are right”, operating in the belief “that someone will fix them later as somehow science will self-correct”, said Professor Munafò.

But asked whether journals seen as high impact, such as Nature, were a major part of the problem, he said “the problem arises…when we essentially fetishise publication in that kind of journal. It is the institutions in particular and the line managers that do that, not the journal itself.”

This fixation has become so ingrained in the research culture that even when funders and assessments such as the REF state that they do not use journal impact factors – a widely criticised metric that looks at the average citations received by articles in a particular journal – “we don’t believe them; it is still part of a culture that exists in universities when it comes to REF preparation and REF submission”, Professor Munafò said.

Earlier, the event – hosted by the British Neuroscience Association at the Sainsbury Wellcome Centre – heard from the head of REF policy at Research England, Helena Mills, who said that although the organisation had repeated “until we’re blue in the face” that journal impact factors were not used in assessments, people “can still turn around and say, ‘You do though, don’t you?’”

However, asked if the REF could do more to insist on open research practices such as data sharing, she accepted that “there is probably a lot more that [future exercises] could do” on reproducibility and open science.

Professor Munafò said that although he hoped the UKRN would help to influence the design of the next REF, there were “levers” in the current exercise that could be used to improve research practices. These included statements made by universities on their research environment, where they could declare what they were doing to support open science.

simon.baker@timeshighereducation.com

Reader's comments (3)

In the discipline of business and management research you have the ominous ABS (now AJG) journal list and the FT journal ranking in addition to this obsession with "top" journals. In business and management that usually means US-based and US-dominated journals (in terms of editors, focus and scope). This is particularly detrimental if you work in a new, emerging, niche or contentious field or sub-field (especially in areas that challenge the status quo of received wisdom and worldviews, methodological approaches, and vested interests in scholarly prestige and research income). A more serious problem is that the REF and the above metrics (in addition to a plethora of others, such as "grant capture") are used by most universities as crude tools for the performance management of individual academics, often with devastating effects on the morale, careers and well-being of academic staff; not to mention the negative selective effect on research (in terms of objects, foci and approach) and on the overall research culture (skewed towards competition and envy and away from collaboration and debate). The system is broken, skewed and, may I say, largely corrupt.
An interesting analogy, with the car industry's emissions scandal fresh in our memory!
The fetishisation of journals with high impact factors cements the status quo - the established hierarchy of knowledge. A high citation index indicates that a journal sits at the heart of an academic silo. The system works for the silos (and for those who have been propelled by it to the top), but it fails badly with transdisciplinary or interdisciplinary research and other new work on the cutting edge. Sometimes bespoke journals are better conduits for such knowledge. Think of The Journal of Mental Health and Urban Design, for instance. It probably has no impact factor worth mentioning, but the confluence of those disciplines is genuinely fruitful, and it is the journal being read by people doing the transdisciplinary work - not Nature Neuroscience.
