Most published research findings are false. This was the startling conclusion of a paper by John Ioannidis, an epidemiologist at Stanford University, who has since become the poster boy for an uncomfortable fact: human fallibility undermines the pursuit of truth in research.
While science is still popularly seen as an unfaltering march towards truth, Professor Ioannidis demonstrated in his now-famous 2005 paper, "Why most published research findings are false", that scientists have a bad habit of getting in the way. They play around with their data until they spot something they want to see, engaging in what Professor Ioannidis called statistical "significance chasing". They find clever ways to confirm hypotheses, cherry-picking results and burying bad news in inaccessible, complex databases.
Selective reporting of findings may be the innocent result of a subconscious desire to be proved right, or it may be driven by more sinister motives. As Professor Ioannidis told The New Yorker last year: "It feels good to validate a hypothesis. It feels even better when you've got a financial interest in the idea or your career depends on it." Whatever its spark, the problem is real and serious.
Of course, researchers are not lone actors. Another group of all-too-fallible humans, journal editors, can muddy the waters further. Journals like to report eye-catching positive research findings, but they often pay less attention if a theory is later shot down. When editors are also reluctant to print retractions when things are simply wrong, the scientific literature can become messy and murky.
Such factors came into play in the cancer research scandal that has unfolded at Duke University over the past five years. As Darrel Ince, professor of computing at The Open University, details in our cover feature, what seemed to be a significant scientific breakthrough was leaped upon by several top journals, while subsequent evidence of major problems with the data and statistical analysis struggled to gain anything like the same public prominence. Some journals were reluctant to set the record straight, and there were suggestions that the issue was merely one of statistical interpretation, with no right or wrong answer.
In the end, it was clear that the case was built on flawed data. But had it not been for two dogged biostatisticians who spotted the problems and refused to let the matter go, much more than money and time could have been at stake: clinical trials based on the flawed findings were already under way.
So what lessons can be learned?
We may struggle to change human nature, but we ought to be able to ensure that journals, as Professor Ince says, "acknowledge that falsifiability lies at the heart of the scientific endeavour": they must be less quick to dismiss challenges to their published papers and more willing to admit mistakes.
Duke itself has acknowledged that in work involving complex statistical analyses, most scientists could benefit from a little help from the statistics department before they publish.
Professor Ince goes a step further, arguing that all elements of all the work (in the Duke case, the full raw data and relevant computer code) should be made publicly available so that others can replicate or repudiate the findings.
In this age of information and the internet, that can't be too difficult, can it?