
When I started my career in science more than a decade ago, I had no idea that I would find aspects of how and why we science so interesting. When I first came across scientific papers on these subjects (rather than on birds or mercury or migration, which I was studying at the time), I put them in a folder called “Thought Papers”, and even blogged about one of them here.

I think this folder of papers (which now stands at >100) has generated more “deep thought” about science for me than an equivalent number of ecological / marine / conservation papers would have. I’ve even written what I would call a “thought paper” myself, on the problems with unpaid work, which is prevalent in science. But lately I’ve wondered about the efficacy and impact of these contributions.

Back in 2012, Fields Medalist Tim Gowers initiated a campaign dubbed “The Cost of Knowledge” aimed at publishing giant Elsevier, wherein signatories pledged not to serve on editorial boards, review for journals, or submit their work to titles published by Elsevier in protest of its practices. This week, an analysis published in the journal Frontiers in Research Metrics and Analytics examined the publishing record of approximately 1000 signatories from psychology and chemistry (out of the roughly 16,000 signatories in total) who had pledged not to publish in an Elsevier title, and found that more than a third had done just that in the intervening four years. The authors outline a number of explanations and interpretations of the data, and I encourage you to check it out.

My point here isn’t to dive into the potentially questionable business practices of Elsevier, but to contemplate the laundry list of things that significant portions of the scientific community view as “bad”, and that have been pointed out in a variety of fora, yet continue. One could add to this list failing to properly cite computer packages/software or taxonomic authorities, reporting only p-values rather than effect sizes, not acknowledging reviewers, separating figures from their legends, vague statements about author contributions, irreproducible methods (or methods described in insufficient detail to be recreated), poor data management and archiving, over-reliance on the Impact Factor, and more. In fact, PLOS Computational Biology has a very successful series called “Ten Simple Rules”, which invites authors to propose, well, ten simple rules for their topic of choice.

The social scientists among you are probably all too familiar with the reasons why these practices, which as a scientific community we generally see as Good Things, aren’t adopted more widely, or are adopted only in a “flash in the pan” way and quickly die off. I certainly don’t expect ideas espoused <5 years ago to propagate across all of Science in such a short time, but I find myself exasperated when, for the nth time, I mention them and am met with a blank stare.

One reason for this is that very often these ideas are broadcast to those who are already likely to espouse them (the whole “preaching to the choir” syndrome). Most recently, as Morgan Jackson pointed out on Twitter, a paper that highlights the importance of citing taxonomic papers was published in a taxonomic journal.

The other is that science is a very distributed community – there’s no Head of World Science, and even influential organizations like the Royal Society, the National Academy of Sciences of the USA, or the Росси́йская акаде́мия нау́к (Russian Academy of Sciences) have little, if any, influence on whether their members follow what might be called “best practices”. Ultimately, it comes down to journal editors and reviewers (and even if I make some of these points as a reviewer, they can easily be ignored by authors or overruled by editors). And given that there are ever more suggestions for How To Science each year, it can be tough to keep up with them all.

When I was an MSc student, my supervisor mandated that we attach a checklist to the front of our manuscripts before he would review them (yes, we printed them off!). Were all tables necessary? All figures? Were any duplicative? Was every reference cited in the text listed, and vice versa? Had the manuscript been reviewed by another lab member? Were pages numbered? He would only read it once these were all checked off. Is it time to think about a broader checklist? True, many journals have something equivalent in their Author Guidelines, but these are often ignored or inconsistent.
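A checklist like that could even be mechanized, so it isn’t ignored the way buried Author Guidelines are. Here’s a minimal sketch in Python of what I mean; the particular checks and the `Manuscript` fields are hypothetical, just to illustrate the idea:

```python
# A minimal, hypothetical pre-submission checklist.
# The fields and checks below are invented for illustration; a real
# version would be tailored to a field's (or journal's) own list.

from dataclasses import dataclass, field


@dataclass
class Manuscript:
    citations_in_text: set = field(default_factory=set)
    reference_list: set = field(default_factory=set)
    reviewed_by_labmate: bool = False
    pages_numbered: bool = False


def checklist(ms: Manuscript) -> list:
    """Return the list of failed checks; an empty list means ready for review."""
    problems = []
    # Cross-check citations against the reference list, in both directions.
    if ms.citations_in_text - ms.reference_list:
        problems.append("citations in text missing from reference list")
    if ms.reference_list - ms.citations_in_text:
        problems.append("references listed but never cited")
    if not ms.reviewed_by_labmate:
        problems.append("not yet reviewed by another lab member")
    if not ms.pages_numbered:
        problems.append("pages not numbered")
    return problems
```

The point isn’t the code, of course; it’s that a checklist only works if something (a supervisor, a submission system) refuses to proceed until it passes.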

While I could come up with some things with which to populate such a list, it would likely be very field-specific. And even then, dear reader, I’m probably already preaching to the choir; adoption will, in any case, fall far short of 100% and likely decrease over time.