
(Spoiler alert: I don’t know the answer)

It will come as no surprise to regular readers that I’m fascinated by the process of science, as well as the science itself.  I was very late to the game of RSS feeds, which is how I now keep on top of about 70 blogs, message boards, and news sites.  But one of the first science blogs that I followed (after Dynamic Ecology, of course!) was Retraction Watch.

For those unfamiliar, Ivan Oransky and Adam Marcus keep tabs on the often opaque and ambiguous retractions that crop up, mostly in the psychological and medical literature (but also in other fields, including ecology).

Recently, Richard Primack, the editor of Biological Conservation, was quoted as saying that only 1–3 of every 10,000 papers published in his journal are retracted.  That’s a far cry from many other journals, and it got me thinking: why don’t we see retractions in ecology, evolution, and conservation biology?

When reviewing papers, I’ve come across fundamental errors in data analysis, like analysing count data with an ANOVA, or running multiple tests instead of a multivariate analysis (I don’t think this is a case of “statistical machismo”, since linear models are among the most basic of frequentist statistics).  More egregious, though, are some of the Methods sections of lab papers that don’t report quality assurance/control (QA/QC) procedures or results (never mind the complete lack of any QA/QC in field studies, but that’s a post for another time).  For example, if I want to measure the amount of mercury in a feather, I need to know how accurate my machine is.  This is accomplished by running a reference material of similar composition with a known mercury concentration.  I can then adjust the values I obtained for my unknowns based on how much mercury I measured in the standard.  It’s baffling how frequently this kind of QA/QC isn’t reported at all.
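To make the mercury example concrete, here’s a minimal sketch of the kind of recovery correction I mean (in Python, with made-up numbers; the certified value, readings, and units are purely illustrative, and every lab’s QA/QC protocol will differ):

```python
# Hypothetical QA/QC sketch: correct sample readings using a certified
# reference material (CRM) run alongside the unknowns. All numbers are invented.

CERTIFIED_HG = 4.64    # certified mercury concentration of the CRM (ug/g)
measured_crm = 4.25    # what the instrument actually reported for the CRM (ug/g)

recovery = measured_crm / CERTIFIED_HG    # ~0.92, i.e. this run read ~8% low

# Apply the same recovery correction to the unknown feather samples from that run
raw_feathers = [1.10, 2.37, 0.84]                   # raw instrument readings (ug/g)
corrected = [x / recovery for x in raw_feathers]    # recovery-corrected values

print(f"Recovery: {recovery:.1%}")
print("Corrected concentrations (ug/g):", [round(x, 2) for x in corrected])
```

Reporting the recovery, and whether it was used to adjust the data, is exactly the kind of detail that too often goes missing from Methods sections.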

I’ve also found that, after I review a paper and it’s eventually published, not all of my suggestions* have been incorporated by the authors.  I’d anecdotally estimate this at 50/50 for substantive changes.  And since I don’t get to see the authors’ final response to the editor, I don’t know what they said that made the editor agree with them over me, especially on some fundamental issues.  Heck, even the final decision of “accept”, “revise”, or “reject” is at the editor’s discretion, and may not reflect the reviewers’ recommendations.

The result is that there are some papers in the corpus of scientific literature that shouldn’t be there.  And once there, at least in ecology, it seems that it’s up to the reader to determine whether or not the paper is valid.  But these papers can inform future research, government policy, and species management.

To be clear, I’m not advocating the retraction of studies that are later shown to be incorrect by subsequent research, and I think the case for retraction is on somewhat shakier ground for the more conceptual flaws found in Zombie Ideas.  (Jeremy Fox at Dynamic Ecology has discussed, at great length I might add, the concept of Zombie Ideas: ideas that, even though proven false, refuse to die.  To his list I can add the concept of “fishing down the food chain”, thanks to Ray Hilborn’s lovely rebuttal.)

The medical and biochemical literature is replete with retractions because results could not be replicated due to contaminated reagents, mislabelling, and outright fraud.  In most cases, these flaws are fatal and can’t just be corrected, so the paper is retracted.  What I’m taking issue with in ecology (and the related fields of organismal biology and conservation biology) are problems that for the most part CAN be fixed, but AREN’T.

Is this a cultural phenomenon?  Are we, as ecologists, far above the flawed medical profession with its “Big Pharma” and rampant publication bias for clinical studies?  I don’t think so.  But the culture in ecology is to let these shite papers remain part of the canon and operate on the belief that “everyone knows that study X is flawed”.  Is this a case of ecologists trying to be collegial (rather than the cut-throat world of medical research)?  Or, like Edward Drinker Cope, do we just rush off to our favourite competing journal (or, in his case, keep detailed notes of our colleagues’ flaws), or write a rebuttal that will often go unnoticed?

I think it’s time to call a spade a spade.  Papers that are fundamentally flawed for technical reasons should be either corrected or retracted.  And it’s up to us as authors, reviewers, and editors to make sure that the manuscripts we submit, review, and accept are technically sound.  These seemingly minor issues (which some might call pedantic, though I disagree) can affect the results, and therefore the conclusions, of a paper, which is why I’m going to change my strategy when reviewing or editing a paper: if the analyses (lab, field, or statistical) aren’t “ship-shape and Bristol fashion”, back it goes.  Granted, this is easier for some papers than others, but why devote significant time to reviewing the Results or Discussion sections (often the meat of a manuscript) if they may well change anyway?  The reviewers’ and editors’ job isn’t to rewrite the authors’ paper.

I’ve been told that I’m “rather generous” with my time spent reviewing (which, from some people, is code for “you spend too much time reviewing”), to the point that I’ve almost debated asking for coauthorship!  But no more.  I’m tired of explaining fundamental statistics or lab procedures in each and every review.  And I’m tired of seeing shoddy analyses and false conclusions drawn from the data in otherwise good papers.  I don’t think it’s intentional on the part of most authors, but it perhaps speaks to a broader statistical illiteracy and a growing disconnect from lab analyses as point-and-click stats packages become the norm and per-sample lab costs go down.

So let’s keep the science rigorous, or else risk losing the good science in the bad, like a needle in a haystack.

*“Suggestion” is too soft a word; perhaps “requirement” is better?