The fragile state of climate research is such that a small piece of gravel tossed into the pool causes major ripples. At the end of June, a concrete block was thrown in.
The prestigious journal Proceedings of the National Academy of Sciences published "Expert Credibility in Climate Change", a paper in which William R.L. Anderegg, James W. Prall, Jacob Harold and the late Stephen H. Schneider use citation and publication data to compare the academic credentials of those who agree that human activity is driving global warming with those of the sceptics who believe, for example, that the climate data offered in support of human influence on atmospheric temperature merely reflect natural cyclical variability. The paper concludes that, judged by the numerical metrics the authors adopt for citation analysis, those convinced of man's role in global warming have better academic credentials than the sceptics.
The reception of the paper has been predictable: proponents of anthropogenic global warming have hailed it as proof that critics do not know what they are talking about, while the critics have accused the authors of creating a blacklist of opponents and of employing a flawed methodology. However, even academics in the first camp have expressed serious reservations.
Almost certainly the most informed critique comes from Roger Pielke Jr, one of the world's top environmental policy researchers. He argues that the paper sits uneasily in a scientific journal because of its political nature. In support of this, he cites an article in the magazine Scientific American that says that one of the researchers, Schneider, a distinguished climate scientist, "admits that it is born of frustration with 'climate deniers', such as physicist Freeman Dyson or geologist Ian Plimer, being presented as 'equally credible' to his peers and granted 'equal weight' as science assessments from the IPCC (Intergovernmental Panel on Climate Change) or US National Academy of Sciences, both of which ascribe ongoing climate change to increasing concentrations of atmospheric greenhouse gases due to human activities."
Pielke also points out a major methodological flaw in the paper: the authors used, as their dividing line, agreement or disagreement with the 2007 report of the IPCC. The problem is that the majority of names on the paper's list of those sceptical of anthropogenic global warming were taken from a series of open letters and petitions circulated before 2007. The signatories of those documents could have had little if any idea of their views on a yet-to-be-written report.
So what are we to make of the article? A footnote in the paper indicates that Anderegg and Harold designed the research, Anderegg and Prall performed the research, Anderegg analysed the data, and all four collaborated on the writing.
Schneider was a climate scientist of international repute, but who are the others? Anderegg is a doctoral candidate in biology at Stanford University; Prall is a computer systems manager and programmer at the University of Toronto; and Harold is a manager at the charitable institution set up by the Hewlett family of Hewlett-Packard fame. If we were to judge the quality of the paper by the credentials of its authors (which is the criterion implicit in the article), the case for its import and relevance would be hard to make. None of the authors seems to have engaged heavily with the field of bibliometrics in terms of publications and consequent citations; two are not academics; and one is learning how to be an academic researcher by carrying out doctoral research.
But, of course, this is exactly how not to judge a publication, by the academic penis-waggling that is now codified by bibliometrics. The proper assessment involves a variety of processes including close reading of the content, examining results and evaluating them against theory or experience, correlating results with one's own research and weighing the relevance of the research methods.
There are a number of problems with "Expert Credibility in Climate Change". The first is that the authors - perhaps unwittingly - have subscribed to the increasingly prevalent Taylorist view of academic research as the creation of a commodity - and not just a commodity (that would be bad enough) but a measurable commodity.
This is an approach that is rapidly losing ground in areas such as health and social care, where the quality of healthcare and the interaction of social workers with clients were reduced to numerical measures that meant little in reality, were easily subverted and caused service to deteriorate. Opposition to such measures is gaining ground in academic management, with reports that citation statistics used to inform expert opinion are to be dropped by the Higher Education Funding Council for England in the forthcoming research excellence framework.
"Expert Credibility in Climate Change" has been published in a prestigious journal, not in one of the anorak publications that chronicle forms of academic accountancy. Bibliometrics is a controversial technique. At best it can be regarded as a minor supplemental activity that provides small support to the important activity of actually reading a paper; at worst it is regarded as only slightly removed from the inspection of entrails, the examination of the results of a blood sacrifice of a senior research Fellow and the counting and codifying of shooting stars.
A second problem with the paper is that it posits bipolarity within the climate research community. This community is diverse: it contains academic beliefs bounded at one end by a small minority who think that our only solution is to revert to an agrarian existence and on the other by a tiny band who believe that nothing needs to be done because capitalism and human ingenuity will prevail. Between these outliers are academics who, for example, believe that global warming is a serious threat but disagree with cap and trade policies or with specific measures such as the building of wind farms; academics who disagree on methodology such as using computer models that employ proxies for prediction; academics who point out that global warming might have a beneficial effect in the Third World; and academics who honestly don't know, but are working hard to establish their beliefs.
All academic research outcomes documented in publications should be carefully hedged with statements offered as preconditions to their conclusions, but the paper by Anderegg et al does not give me that impression. It takes a Bruce Willis/Die Hard approach that celebrates black and white rather than an Eric Rohmer/nouvelle vague approach that embraces shades of grey.
A third problem with "Expert Credibility in Climate Change" is that it implicitly promotes exclusivity. At a time when Lord Rees, president of the Royal Society and current Reith lecturer, is promoting greater engagement between scientists and the public, the paper erects barriers between those academics who believe in a specific tenet of climate science and those who do not. If academics wish to stratify their own research communities in such a way, what hope is there for the ideas propounded by Lord Rees about public engagement?
But it is not just within the community of mainstream climate science that Anderegg et al draw their exclusionary lines. These lines also extend to academics who have an interest in climate issues but do not publish in climate journals - people such as statisticians who analyse climate data, physicists and applied mathematicians who examine the equations that underlie the models used for climate prediction, and computer scientists who develop the models as computer programs.
Another negative effect of the article is the dampening of perversity. Academics should be perverse, they should question, they should take contrary views and they should, above all, be sceptical. A person publishing a paper that adds a collection of minutiae to an existing body of research is someone to cherish; but the scholar who questions the status quo and is prepared to put his or her career on the line should be valued at least as much - if not more so. If Galileo had lived in an age where citation metrics were a tool, I am sure that when he was condemned for being "vehemently suspect of heresy", his prosecutors would have pointed out that there were no citations to his heliocentric view of the world in what was then regarded as the authoritative work: the Bible.
Despite the heavy criticism it has received, "Expert Credibility in Climate Change" is a very important work. It marks the point where a large community of scientific researchers realised that just doing science was inadequate and that there was a need to engage much more with politics (even though I regard the paper as a poor way to do so).
The article is important for the climate science community as a wake-up call. It does nothing for inclusivity and presents a perverse picture of the community as fractured. The next few years will be tough for all academics - not just those in climate science. In the US, there is the prospect of a government receptive to arguments about anthropogenic climate change being weakened by Republican gains in this year's congressional elections. In Europe, there is the prospect of a major reduction in public spending on research, the result of myriad austerity budgets announced in the past few months. A climate-research community that is perceived to be split would be hit by a double whammy: the first blow being the harsh economic climate and the second a reluctance of funders to make grants available to a community that turns in on itself rather than looks outwards.
There are two communities of researchers unconnected with climate research that will benefit from the furore. The first is that of philosophers whose work is concerned with ethics. "Expert Credibility in Climate Change" provides data and internet links that enable the identification of academics in both camps: those who believe in human-caused global warming and those who do not. This has inevitably given rise to claims of blacklisting. However, one of the criticisms made of researchers in some areas of climate science is a reluctance to release data so that their work can be validated. If the authors of the paper had withheld their lists, they would have been criticised for just this. There is a tension here between engagement with Popperian scientific method and the ethics of making statements about academic competence.
The second group that is likely to gain from the controversy is that of sociologists of science. The article is a marvellous source document for them: it is an irresistible cocktail of sociology, statistics, politics, bean counting, science, cultural studies and intellectual augury that - with the resulting research articles, documents, blogs and emails that it will draw out - will enable such sociologists to publish, and to wine and dine off the results of their research, for decades.