A young scientist is threatened with legal action for breaching copyright after she republishes a journal's graph on her website to illustrate its deficiencies. Meanwhile, another is too scared to Twitter the fascinating results from his Antarctic explorations for fear it could jeopardise his chances of being published in Nature.
Elsewhere, a researcher knows that the negative results of her experiment are essentially worthless - who is going to publish them? And a university, keen to bolster its standing, is deciding who to appoint based on where the applicants have been published.
All around the world, from grant funding to the peer review, publication, reproduction and public dissemination of research results, elite scientific journals wield huge influence and control over just about every facet of scientists' lives.
"The hegemony of the big journals has enormous effects on the kind of science people do, the way they present it and who gets funding," observes Peter Lawrence, a researcher in the department of zoology, University of Cambridge, and emeritus scientist at the Medical Research Council's Laboratory of Molecular Biology.
"Everybody is in thrall to them, and it is those who are not manipulating the system who feel their power most acutely."
"Academic credit goes with publishing in a handful of really prestigious journals and, as a result, they exert a horribly powerful influence," notes Richard Smith, former editor of the British Medical Journal, former chief executive of the BMJ Publishing Group, author of the book The Trouble with Medical Journals (2006) and visiting professor at Warwick Medical School, University of Warwick.
But have these gatekeepers for what counts as acceptable science become too powerful? Is the system of reward that has developed around them the best for science - and what does the future hold?
Unpicking the power of academic and scholarly journals, with their estimated global turnover of at least $5 billion (£3 billion) a year, is a complex business. There are an estimated 25,000 scholarly peer-reviewed journals in existence, about 15,000 of which cover the scientific, technical and medical communities.
It is these - particularly the elite titles, including Nature, Science, Cell and The Lancet - that are at the heart of the recognition-and-reward system for scientists. From career progression to grant income, "wealth" within the academy is determined by the production of scientific knowledge as recorded in peer-reviewed scholarly journals.
What matters is both the extent of production as measured by the number of publications and how brilliant the publications are held to be as measured by citations, the number of times academic work is cited by peers. Papers in top journals are more likely to be cited, and so scientific life becomes geared to chasing publication in elite journals with the highest impact factor and high performances in a complex array of journal metrics. The so-called Journal Impact Factor is calculated by dividing the number of citations that a journal's articles from the previous two years receive in a given year by the number of articles it published in those two years that are deemed citable (see boxes below).
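As a rough illustration - the figures here are invented purely to show the arithmetic, not drawn from any real journal - a title whose articles from 2007 and 2008 were cited 1,000 times during 2009, and which published 250 citable items across those two years, would have a 2009 impact factor of 1,000 divided by 250, or 4.0.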
The norm is for scientists to start at the top - submitting papers to the most prestigious journal that they think they might have the remotest chance of getting into - and then working their way down until they are eventually selected. Journals become a means to academics' ends.
Jon Copley, a lecturer in marine ecology at the School of Ocean and Earth Science at the National Oceanography Centre, University of Southampton, is an ambitious young scientist with no illusions about what he must do to get to the top. "Having a couple of papers in Nature pretty much makes your career. It has real currency ... this is how people keep score," he says.
The pressure to publish in top journals has increased even further with the recent announcement by the Higher Education Funding Council for England that citations will be available for use by panels to help them judge the quality of academics' output in the new research excellence framework. As academics strive to increase their citation counts, it seems likely that the new system will only serve to intensify the publish-or-perish mentality.
From one point of view, such an approach makes perfect sense, and Hefce is only the latest in a long line of career-making and grant-awarding bodies that rely on journal metrics. Funders need a way of dispensing precious resources to the best people, and academics are more likely to cite others' work if it advances the field. Why not tap into the evidence of strong scholarship offered by journal metrics to help make a more objective and less burdensome judgment of quality?
Yet to some observers, this is acutely disturbing, and further entrenches a skewed system of credit that is deeply flawed and damaging to the scientific enterprise.
"(Journal metrics) are the disease of our times," says Sir John Sulston, chairman of the Institute for Science, Ethics and Innovation at the University of Manchester, and Nobel prizewinner in the physiology or medicine category in 2002.
He is also a member of an International Council for Science committee that last year drafted a statement calling for collective action to halt the uncritical use of such metrics.
Sulston argues that journal metrics offer only a flimsy guarantee of the best work (his prize-winning discovery was never published in a top journal). He also believes that the system puts pressure on scientists to act in ways that adversely affect science - from claiming work is more novel than it actually is to over-hyping, over-interpreting and prematurely publishing it, splitting publications to get more credits and, in extreme situations, even committing fraud.
The system also creates what he characterises as an "inefficient treadmill" of resubmissions to the journal hierarchy. The whole process ropes in many more reviewers than necessary, reduces the time available for research, places a heavier burden on peer review and delays the communication of important results.
The sting in the tail, he says, is the long list of names that now appears on papers, when it is clear that few of the named contributors can have made more than a marginal contribution. This method provides citations for many, but does little for the scientific enterprise.
It is not only scientists but journal editors, too, who see the growing reliance on metrics as extremely damaging, with journals coming under increasing pressure to publish the kind of work that attracts citations.
Richard Horton, editor of The Lancet, describes the growth of the importance of citations and impact factors as "divisive" and says it is "outrageous" that citation counts will feature in the REF.
"If I could get rid of the impact factor tomorrow, I would. I hate it. I didn't invent it and I did not ask for it. It totally distorts decision-making and it is a very, very bad influence on science," he says.
Noting that the medical journal articles that get the most citations are studies of randomised trials from rich countries, he speculates that if The Lancet published more work from Africa, its impact factor would go down.
"The incentive for me is to cut off completely parts of the world that have the biggest health challenges ... citations create a racist culture in journals' decision-making and embody a system that is only about us (in the developed world)."
Peter Lawrence has written extensively, in Nature and elsewhere, about what he calls the "mismeasurement" of science. His view is that a reliance on journal metrics is damaging science and has created a new generation of scientists obsessed with how many publications they have to their credit.
"As with feminism in the 1970s, it is awareness that needs raising. I think many people realise the system is all wrong, but they can't see a way out of it," he says.
His argument is that although metrics make journals very powerful, journals are not to blame for the situation.
Instead, he points the finger at the system - including the scientific-information businesses that have exploited a market in supplying metrics, and "the bureaucrats that give you a grant if you get a paper published but not if you don't". Above all, he blames the scientists who have been complicit in giving journals substantial power.
Graham Taylor, director of educational, academic and professional publishing at the Publishers Association, concurs.
"Brands are being used to recognise quality, but it is the scientific community itself, not the journals, that is doing it."
Peter Murray-Rust, reader in the department of chemistry at Cambridge, campaigns for open scientific content and data. He accepts journal metrics as part of modern science, but argues that the real problem is that they are owned by private companies that produce metrics to make a profit rather than support the scientific domain.
"Higher education has to take control of academic metrics if it is to control its own destiny ... it should determine what is a metric and what isn't," he says.
Horton agrees. Why, he asks, should Thomson Reuters - which produces and owns the Journal Impact Factor, a proprietary metric - have such a massive influence over how public money is spent?
"You have basically got a system owned by the private sector that is deeply distorting public money. If there is a measure of quality, it should be owned by the public sector."
Although Horton does not blame private companies that have simply spotted a commercial opportunity, he believes it is up to editors to "resist the push towards a system that is so deeply divisive".
Unfortunately, say observers, there is no incentive for people on the inside to change things. The scientists who have learnt to play the "complicated game" of getting their papers into the top journals are reluctant to ditch it because they fear losing out, Smith notes.
Similarly, many scientific societies - which might be expected to speak out about the system's problems - run large revenue-raising scientific-publishing businesses, meaning that it is against their interests to lobby for improvements.
This leaves funding bodies such as the Wellcome Trust and the National Institutes of Health in the US to fly the flag for change.
"We have lost the distinction between a scientific society and a scientific publisher, and that, I think, is a very serious one," Murray-Rust notes. "We desperately need scientific societies that are independent of the problems of publication."
Yet beyond the question of how journals have acquired their power is the issue of how they use - or misuse - it. If Lawrence's experience is anything to go by, scientists are not happy. He accuses the elite journals of treating them like "supplicants in a medieval court".
"What do the big journals do? They reject 90 per cent of papers submitted without review. That in itself makes one feel a bit like one is in a lottery ... they frequently take a long time to make decisions; they can be very offhand with authors, making one feel one has no rights or proper respect," he says.
The problem, he explains, is that because so much hangs on whether a journal takes a particular paper, researchers live in constant fear of "irritating the editor". Scientific conferences become elaborate courtship rituals as scientists try to demonstrate that they are leaders in the field to ensure that their papers are looked at when they are submitted. "One's entire fate is in the journal's hands," Lawrence says.
Academics highlight other concerns over the control journals exert. Murray-Rust puts them in the dock over the copyright restrictions they impose, describing them as a "major impediment" to progress.
As an example, he points to the way so-called text or data mining - the use of technological tools to extract and tabulate data automatically from online papers - is becoming "increasingly expressly forbidden" by most major subscription-based publishers (although Nature has recently changed its policies to allow some).
"You are actually barred from using modern techniques to enhance your science ... it has taken us back ten years in the use of scientific information," Murray-Rust says.
Top universities, working together, could force the reform of copyright laws, Murray-Rust believes, but, given their inaction, he thinks that a better answer might be "civil disobedience on a mass scale".
He envisages scientists focusing on one or two areas, such as medicine and climate change, where there are strong moral grounds for allowing science in journals to be reproduced - and "sticking the whole bloody lot" on their websites.
"I think the publishers would find it very difficult to carry anybody with them" in complaining about such a move, he adds.
Horton's concern is the way journals control the media. He is highly critical of journals presenting scientific findings to journalists under embargo. Hyping science as an "event" misrepresents the real nature of the endeavour as a gradual accumulation of knowledge and understanding, he asserts.
Another complaint academics have made is that journals set the terms and conditions under which scientists may discuss work with each other, the media and the public - before it is submitted, after it is submitted and once it has been accepted. Jon Copley understands why journals feel such control is appropriate and is supportive of it, but says there are grey areas, for example Twittering, that need clarification.
According to Smith, one of the most unfortunate consequences of the elite journals' power is that they are holding back progress towards open access, the alternative publishing model where papers are free for the public to read, but publishing and peer-review costs (which all parties agree are substantial) are paid by funders.
Almost all the prestigious journals remain subscription-based, and thus publicly funded research is effectively locked away, he notes. Given their high status, and the fact that their publishers are doing well in the present system, there is little incentive for their owners to propose, let alone accept, change.
Maxine Clarke, publishing executive editor at Nature, says open access would not work for the journal, which she says puts "a lot of resources" into its processes.
But both Smith and Horton question whether journals truly add much to the scientific enterprise, although publishers argue that the orderly process of authentication, dissemination and archiving they provide is invaluable.
"Journals have become part of a vast industry where it is not always clear what we are giving back," Horton says. "What is the quality of the peer review? What is the quality of the editing? How fast and efficient are they in getting science out into the public domain? We need to ask some very tough questions."
Smith says: "Journals are getting rich off the back of science without, I would argue, adding much value." Publishers, he says, take scientists' work, get other scientists to peer review it for free, and then sell the product back to scientists' institutions through subscriptions.
Horton believes the real crisis facing journals, and in particular the top health titles, is in defining the "purpose" of their role. While journals founded 300-odd years ago had an explicit mission, which was to "use knowledge to change society for the better", today's journals have "lost their moral compass", he contends.
"We have an industry in which most journals exist to perpetuate an inward-looking academic-reward system, and there is no clear purpose that has anything to do with science."
He argues that if the situation continues, journals will "die - and deserve to. Quite honestly, if that is all journals are - a production line for the scientific community - you don't need journals in the internet age, you just need a big database."
So what do those advocating change recommend? Horton believes that to survive, journals need to rediscover their original purpose and ethos. He argues that they do have an important dual role - setting a strong public agenda around what matters in science and communicating the public's concerns back to the scientific community. "Right now they are doing very badly at both," he says.
Lawrence calls for an ombudsman to give scientists who feel unfairly treated at the hands of journals a route to recourse. In addition, he urges the scientific community to take a stand against journal metrics and fight for a return to decisions that value a paper for what it is worth, rather than where it is published.
"There is no substitute for reading papers and we need collective action by scientists to go for quality, not quantity," Sulston says.
Scientists should use the furrow that open access is slowly ploughing to take things further, Smith argues. As open-access journals fall prey to journal metrics in exactly the same way as subscription-based journals, the answer is not only to opt for open access but, in the process, choose an approach that is less driven by a paper's perceived importance, he says.
Smith sits on the board of the Public Library of Science (PLoS), the open-access journal publisher that produces PLoS One - a journal that is open access and has a philosophy of minimal peer review.
"We peer review to say the conclusions are supported by the methods and the data, but we won't spend a lot of time working out whether the work is original or important," he explains. This model is "beginning to reinvent the whole process", he argues.
"The philosophy is, let's get it out there and let the world decide ... That way, there is much less power in the hands of journals and it is a less distorted view of the world."
Others see a more threatening cloud on the horizon for the scientific-publishing industry: an internet revolution that sees scientists self-publishing via the web.
A small but growing number of scientists are simply ignoring journals and putting their work on web pages and blogs, where there is no limit on the length of articles, raw data can be published with ease and peer review can take shape through discussions and comments.
Rejecta Mathematica is a new online open-access journal that publishes only maths papers that have been rejected by peer-reviewed journals. Its editors argue that previously rejected papers can still be of value.
Michael Nielsen, a physicist, author and advocate of free content, articulates the threat from the web in a paper titled "Is scientific publishing about to be disrupted?", published on his website, michaelnielsen.org, earlier this year.
He argues that scientific publishing is in the early days of a major upset. Just as newspapers have been challenged by bloggers, journals can expect significant upheaval, too.
"Scientific publishers should be terrified that some of the world's best scientists ... are spending hundreds of hours each year creating original research content for their blogs, content that in many cases would be difficult or impossible to publish in a conventional journal. What we're seeing here is a spectacular expansion in the range of the blog medium. By comparison, the journals are standing still."
Yet to publishers, the notion that bloggers could replace journals borders on the abhorrent. Taylor is concerned by the lack of peer review. "Blogs and wikis are a sort of cloud of knowledge as opposed to the bricks in the big wall of knowledge that are journals," he says.
Certainly no one is betting on what the future will hold.
Clarke does not reject the notion of the web leading to the collapse of the present system, although she doesn't know precisely what will replace it. She says she is taking bets only on the likelihood that Nature, which has been experimenting with the web more than most journals of its stature, will be part of it.
Murray-Rust says it is "extraordinary" that, while scientists may have invented the web, science has yet to be transformed by it in the way that other areas, such as finance, travel and retail, have.
"I am a radicalist. I think the system will crash but I am not exactly sure how," he says. Nevertheless, "a few determined people in the right area with modern tools can completely change the way we do things", he concludes.
TRANSPARENT AND NON-NEGOTIABLE: THE JOURNAL IMPACT FACTOR
James Testa, vice-president (editorial development and publisher relations) at Thomson Reuters, explains how the Journal Impact Factor works and why it continues to be useful to authors and academics.
Since the Journal Impact Factor metric was created in the early 1960s by Eugene Garfield and the late Irving H. Sher, Thomson Reuters (then the Institute for Scientific Information) has delivered a clear and consistent message on its proper use and meaning.
Garfield and Sher developed the metric to help select journals for the new (at that time) Science Citation Index.
Properly used, the Journal Impact Factor indicates relevant information about a journal as a whole, namely the extent to which its recently published papers are cited in a given year. It does not reveal anything concrete about a specific paper or a specific author published in the journal.
In his address to the International Congress on Peer Review and Biomedical Publication in 2005, Garfield noted: "Most articles in most fields are not well cited, whereas some articles in small fields may have unusual impact, especially where they have cross-disciplinary impact. The well-known '80-20' rule applies, in that 20 per cent of articles may account for 80 per cent of citations."
Acceptance of a paper by a journal that ranks high in its category may be an indicator of potential importance to the community, but it is not a guarantee of citation. It only provides some evidence that the science presented has passed through a level of peer review that, for a high-impact journal, may be extremely rigorous. This is the extent to which the Journal Impact Factor may be construed as a reflection of the quality of an individual author or paper.
Further, an individual paper and its citations are included in the Journal Impact Factor calculation for only two years - the second and third year after publication.
This is a relatively short interval compared with the citation lifespan of a paper, and therefore does not consider the weight of the final contribution of the paper to its subject.
Given the statistical evidence of the actual occurrence of citation across all papers in a journal, and the time span of the citation of individual papers, the relationship between a single paper or author and the impact factor of the publishing journal is tenuous.
The calculation of the Journal Impact Factor is based on two elements: the numerator, which is the number of citations in the current year to any items published in the journal in the previous two years; and the denominator, the number of substantive, scholarly items (source items) published in the same two years.
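To make the arithmetic concrete, here is a minimal sketch of the calculation in Python; the figures and the function name are invented for illustration and have no connection to Thomson Reuters' actual systems.

    def journal_impact_factor(citations_this_year, citable_items_prev_two_years):
        # Citations in the current year to items published in the previous
        # two years, divided by the substantive, scholarly items (source
        # items) published in those same two years.
        return citations_this_year / citable_items_prev_two_years

    # Invented figures: 1,000 citations in 2009 to items published in
    # 2007-08, and 250 source items published across those two years.
    print(journal_impact_factor(1000, 250))  # 4.0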
Citations in the numerator are aggregated from the cited reference lists of more than 1.9 million items indexed by Thomson Reuters in 2008. Scholarly items counted in the denominator are determined based on expert judgment and data analysis and are decided solely by Thomson Reuters.
While we respect publishers' interest in understanding these determinations, the content of the denominator is never negotiated or altered to suit their requests.
Millions of subscribers to the Web of Science website can easily view the items included in the denominator. This information is thus widely available and fully transparent.
The application of review and analysis over the years has resulted in a very consistent situation, where the overwhelming majority of citations in the numerator of the Journal Impact Factor are attributable to items counted in the denominator.
This clear association of citations with the items identified by Thomson Reuters as citable reinforces the usefulness of the Journal Impact Factor for the assessment of a journal's use by the wider community of authors and scholars.
IMPERIAL METRICS
While scholarly journals date back more than 300 years, journal metrics came to prominence in the years following the Second World War, notes Rob Iliffe, professor of intellectual history and the history of science at the University of Sussex.
Multinational companies Thomson Reuters and Elsevier run large scientific-information businesses based on their citation datasets - Web of Science and Scopus respectively.
Thomson Reuters' Journal Impact Factor, which was devised by Eugene Garfield, measures the quality of scholarly journals based on the number of their citations (see box above).
But the measure is controversial, and not just because of the general debate about the usefulness of citation metrics. Critics say the criteria are unclear and that the item types other than papers (for example, front-end magazine content such as editorials and letters to the editor) that may be eligible for inclusion in the algorithm are largely a matter of negotiation between journals and the company.
"The metrics are secret, uncontrolled, highly influential and created for the commercial purposes of organisations that should have no part in the funding of science," says Peter Murray-Rust, reader in the department of chemistry at the University of Cambridge.
Other influential metrics include the H-index for an individual researcher, proposed by Jorge Hirsch, which is based on the number of published papers and how often they are cited: a researcher has an H-index of h if h of their papers have each been cited at least h times.
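As a rough sketch of how the H-index falls out of a citation record (the citation counts below are invented purely for illustration):

    def h_index(citation_counts):
        # Largest h such that h papers have each been cited at least h times.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # An invented record of six papers cited 25, 10, 7, 4, 3 and 1 times
    # gives an H-index of 4: four papers have at least four citations each.
    print(h_index([25, 10, 7, 4, 3, 1]))  # prints 4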
The United Kingdom Serials Group, which brings together libraries and publishers, is working on an alternative method to judge work based on web usage as measured by downloads.
JOURNALS BY NUMBERS
25,000: estimated number of scholarly peer-reviewed journals worldwide
£3 billion: their estimated global turnover
175: average number of papers Nature receives each week. About 5-10 per cent of those in the biological sciences and 20 per cent in the physical sciences are accepted for publication
10 per cent: proportion of journals that are open access
85-95 per cent: proportion of journals available electronically
20 per cent: weighting given to the number of articles published in Nature and Science in the calculation of the academic ranking of world universities compiled by Shanghai Jiao Tong University.