Science is simple! At least, great minds have found simple formulas to describe it. For Paul Feyerabend it’s Anything Goes. For Karl Popper it’s Falsify! Falsify! Falsify! And Carl Sagan called science a self-correcting process that is perfect for finding out what’s true.
But how do these grand ideas hold up in practice?
Writing in the New York Times in June 2011, science writer Carl Zimmer sounds a bit disappointed:
“As a series of controversies over the past few months have demonstrated, science fixes its mistakes more slowly, more fitfully and with more difficulty than Sagan’s words would suggest. Science runs forward better than it does backward.” *
I’m not surprised. When you talk to editors of major journals, they tell you the three criteria for getting your paper in: (1) conceptual advance, (2) novelty of findings and (3) validation. You see, validation is in there, but only after two criteria that really ask: Is it new? Is it hot? Will it be good for our impact factor?
One of Carl Zimmer’s examples is a Science paper about A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus. Using arsenic to build DNA defies the known rules of biology, and many scientists objected to the paper’s claims, publishing a series of eight critiques a few months later, again in Science.
Isn’t that a perfect example of scientific self-correction? No, it isn’t!
Why not? Because none of the critics had actually tried to replicate the initial results!
For progress in science, it is not enough just to claim the paper is rubbish; if you’re a serious critic you need to do the experiments to prove it! That the whole scientific establishment seems to disagree with the paper is not as strong an argument as it might seem: we remember Galileo precisely because he held an unpopular (and at that time un-scientific) view.
Now, replication experiments take time and cost money – that’s part of the reason why nobody has tried. But there are other reasons, too, as Carl Zimmer found out when speaking to some of the critics:
“I’ve got my own science to do,” John Helmann, a microbiologist at Cornell and a critic of the Science paper, told Nature. The most persistent critic, Rosie Redfield, a microbiologist at the University of British Columbia, announced this month on her blog that she would try to replicate the original results — but only the most basic ones, and only for the sake of science’s public reputation.*
It’s not that I don’t understand these reasons. I wouldn’t change my own research agenda just because someone else is wrong (unless Duty Calls). But it gives the first author of the Science paper the opportunity to portray herself as an iconoclast who defends unconventional ideas that the scientific community can’t deal with. The paper is still not retracted. And without experimental evidence to the contrary, the possibly incorrect result stands.
So far, this is certainly not a good example of the self-correcting power of science.
What is needed here is de-discovery: a serious attempt to clean science of wrong results (if they are indeed wrong). And scientific forensics: to find out what went wrong in the first place. Was it fraud? Or an honest mistake? Only if we answer these questions can we avoid similar mistakes in the future.
This process might be easier in computational biology than it is in experimental biology. Maybe.
At least, here are three examples of computational de-discovery and forensics that I came across in the last few years:
- The first example is an approach for Predicting Gene Expression from Sequence. Achieving this would be great: if, given only the DNA sequence, you could predict which genes are active, you would be one step closer to explaining phenotypes. However, when the results got re-examined they turned out to be an artifact of poor model selection and bad cross-validation. Not that anyone bothered to retract the paper, though.
- The second example is the infamous Duke breast cancer meltdown. A biostatistics paper titled Deriving Chemosensitivity from Cell Lines: Forensic Bioinformatics and Reproducible Research in High-Throughput Biology debunked several high-impact publications because of serious technical flaws. So serious that two clinical trials had to be stopped. In this case statistics may in fact have saved human lives!
- My last example is a recent one on RNA editing: One paper found Widespread RNA and DNA Sequence Differences in the Human Transcriptome and was immediately contradicted by a re-analysis of the data finding Very Few RNA and DNA Sequence Differences in the Human Transcriptome. Who is right? The jury is out …
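The flaw behind the first example, selecting a model on the same data used to evaluate it, is easy to reproduce. Here is a minimal sketch (my own toy construction, not code from either paper): with pure noise and random labels, picking the most "predictive" features on the full dataset before cross-validation makes a classifier look far better than chance, while doing the selection inside each fold reveals the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 1000, 10            # samples, features, features kept
X = rng.standard_normal((n, p))   # pure noise: no real signal anywhere
y = rng.integers(0, 2, n)         # random class labels

def top_features(X, y, k):
    """Rank features by absolute correlation with the labels."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(r)[-k:]

def loo_accuracy(X, y, select_inside_cv):
    """Leave-one-out CV with a nearest-centroid classifier."""
    hits = 0
    for i in range(n):
        train = np.arange(n) != i
        if select_inside_cv:
            # correct: feature selection sees only the training fold
            feats = top_features(X[train], y[train], k)
        else:
            # wrong: selection already peeked at the held-out sample
            feats = top_features(X, y, k)
        c0 = X[train][y[train] == 0][:, feats].mean(axis=0)
        c1 = X[train][y[train] == 1][:, feats].mean(axis=0)
        d0 = np.linalg.norm(X[i, feats] - c0)
        d1 = np.linalg.norm(X[i, feats] - c1)
        hits += int((d1 < d0) == (y[i] == 1))
    return hits / n

leaky = loo_accuracy(X, y, select_inside_cv=False)
honest = loo_accuracy(X, y, select_inside_cv=True)
print(f"leaky CV accuracy:  {leaky:.2f}")   # typically far above chance
print(f"honest CV accuracy: {honest:.2f}")  # hovers around 0.5
```

On noise there is nothing to predict, so any accuracy well above 0.5 from the leaky protocol is pure artifact of the kind the re-analysis describes.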
If you follow the links and look at the papers you will find that the re-examinations always got published in lower-impact journals than the original (wrong) papers.
There is no fame in forensics.
Carl Sagan would be very disappointed by the speed of this self-correcting process. And Karl Popper certainly got it wrong: falsification is not the central principle of science; it doesn’t even lead to papers being retracted.
The only one I can see smiling at this situation is Paul Feyerabend. Anything goes! Indeed.