There was once a romantic notion that the halls of academe were uniquely collegial, with faculty supporting each other as much as they could to advance knowledge. Peer reviewing was, and is, a central part of that endeavour. But there are now too many free riders.
Some individuals, of course, still choose to do their fair share – and more – of peer reviewing. At the systemic level, however, claims that the peer-review system is broken are well rehearsed in academic articles and the pages of Times Higher Education, not least in Adrian Furnham’s recent call for peer reviewers to be incentivised with payments.
As we recently argued in a journal article, the fly in the ointment is the audit culture in academia, combined with hyper-competition for ranking positions. The result is that university policies are often geared towards the outputs (publications) that enable institutions to do well in rankings, while neglecting the inputs, such as peer review, needed to make those publications possible in the first place. Unsustainable expectations are imposed on individuals, often with a very narrow focus on published works. Incentives include publication bonuses and reduced teaching loads for those who publish in elite journals.
This approach has had the unintended consequence of limiting individuals’ willingness to engage in the collective enterprise of peer review, even as they publish ever more frequently. Peer-reviewed research suggests that in some disciplines, 20 per cent of researchers perform between 69 and 94 per cent of the reviews. Supporting this view are reports that some editors have to issue up to 12 invitations to sign up two reviewers.
Reviewer disengagement has two negative implications. First, those prosocial colleagues who still adopt the “give and take” attitude and provide feedback to authors lose time they could otherwise spend on research and writing. This, in turn, can leave them lagging behind in publication outputs and, therefore, career progression. The alternative is that they do their reviewing in their “spare time”, which can lead to burnout.
Second, when a limited pool of reviewers does most of the reviewing, editors are denied the full range of expertise that is theoretically available to them. This adversely affects the improvement and error detection that peer review is supposed to provide. Relatedly, a limited pool of reviewers can hold too much sway in deciding what counts as good and relevant research, thereby constricting the advancement of knowledge in their disciplines.
What could be done? We have reservations about whether payments are the answer. Classic psychological research suggests that financial incentives have only a limited effect on a comparatively well-paid group.
Instead, we have three proposals. First, we need a journal system that offers the right incentives for reviewers. We argue that recognition for reviewing is important. “Best reviewer” awards could be beefed up by offering those who win them a place on expanded editorial boards. Journals could also offer development awards for junior reviewers, coupled with mentoring schemes.
Second, we need the right exemplars. Scholars, especially senior ones, need to stop modelling what has been called the “heroic” publishing machine: the academic whose every waking hour is dedicated to their own research. Instead, we need role models who encourage students and junior colleagues to engage with the entire research process, including reviewing. They need to talk up the rewards of reviewing, such as keeping your knowledge of your field as current as possible, but also the inherent value of contributing to the discipline.
However, this is only likely to be possible if universities overhaul their management systems, exchanging the obsession with publications for a new focus on more collectively oriented goals. So, third, we need to ensure that peer review is part of universities’ quality conversations. Rankings may be part of the problem, but they can also be part of the solution by driving improvements. The key is to include consideration of the right factors in ranking and accreditation processes.
Engagement with peer review is one factor that could easily be included. The rise of reviewer-recognition platforms, such as Publons, could facilitate this. Hiring processes could, for instance, explicitly compare the frequencies with which candidates publish and review papers; a large preponderance of publications over reviews would count as a negative.
Clearly, these are only basic first steps towards encouraging a more engaged reviewing cohort, but we hope they start the conversation. It is an urgent one to have if we want the academic endeavour to yield the most reliable and useful results possible.
Dirk Lindebaum is a senior professor in management and organisation at the Grenoble School of Management. Peter J. Jordan is professor of organisational behaviour at Griffith University.