Ah, publishing. That oft-lamented but relentlessly pursued metric of academic “success”. So many problems, so many frustrations, so little time. For many in scientific research (particularly those in academia), publishing research papers is the currency of success. Papers get more grants that get more students that collect more data for more papers that get more grants … you get the idea. So why is there so much apathy towards a system that is largely regarded as broken (or at least severely malfunctioning)?
There’s a general perception that the world of scientific publishing is a buyer’s market, meaning that most of the power lies with the journals (“Oh, thank you, editor. May I have another?”). I think this is particularly true of grad students and other early-career researchers. I’m going to argue that we, as individual researchers, need to flip the power dynamic, even in our own little way, but that doing so is hard (impossible?), and one’s ability to do so will depend on one’s career stage and privilege.
Journals should want to publish our research. Right now, we want journals to publish our research. In fact, we want it so badly that numerous predatory journals have sprung up, charging “author processing fees” akin to page charges or open access fees in exchange for the promise of an easy review, rapid publication, and a quick line on a CV. But many of these have poor editorial control and may even lack peer review (see here for a list of questionable publishers, and here for stand-alone journals). We use largely meaningless (or easily-manipulated) metrics, like turn-around time, or Journal Impact Factors, to decide where we want to publish (and to evaluate the quality of our research, and that of our colleagues). See Terry McGlynn’s thoughts on this from a primarily undergraduate institution here, and my own previous thoughts here.
Now, let’s flip things around. Imagine a world where journals competed to publish authors’ best papers, and weren’t jerking authors around. Here are two examples:
Case study 1: A colleague and I had a relatively straightforward ecotoxicology paper that we submitted to the journal EcoHealth in December 2011. We received our first round of reviews in April 2012, along with the platitude “We feel your paper has potential as an important contribution to our journal”, and a decision of “major revisions”. It was resubmitted in July 2012, and a second decision of “minor revisions” rendered in November 2012. Two weeks later, we resubmitted revision #2, and in December, back to “major revisions”, though the editors “feel your paper has potential as an important contribution to our journal, providing valuable insights into the relationship between marine and human health”. Again, we resubmitted in December 2012, and received a further decision of “major revisions” in January 2013. After 13 months, and reviewers and editors just plain disagreeing, we withdrew the paper and submitted it to another journal, where it was published in 6 weeks. Now, one could argue that the 13 months and four rounds of revisions at EcoHealth helped improve the paper, making its review process at the next journal much faster, but 13 months’ worth? Really? No. And at least for the time being, I won’t be submitting or reviewing for this journal again.
Case study 2: I had a paper left over from my MSc, and as a group, my coauthors and I decided to submit it to PLOS One. The philosophy there is that if the science is sound, it should be published. Despite that, our paper was rejected outright. Quoth the editor: “your manuscript does not meet our criteria for publication and must therefore be rejected”. OK, no problem. Lick one’s wounds, vent on Twitter, and move on.
Wow – paper outright rejected from PLOS One (though it could be revised in my view). Fantastic way to start a Tuesday. #postdoclife
— The Lab & Field (@thelabandfield) December 10, 2013
But not so fast! PLOS One associate editor Matt Hodgkinson chimed in and explained to me that by “reject” the journal meant that I could still revise my paper and send it back. Wait wut? Scroll down the linked Twitter conversation for the whole thing. Why on earth would the editor say that my submission DID NOT MEET THE JOURNAL’S CRITERIA FOR PUBLICATION, yet could still be published? That’s poor communication, and if it reflects how different editors interpret “reject”, it represents unacceptable heterogeneity among associate editors (which is possibly a side effect of having >5000 editors). But it’s not the only case of editorial decisions obfuscating the actual process.
Services like Peerage of Science (with which I have experience) and Axios Review (with which I have no experience) offer an alternative. At Peerage, anyone signed up can review any paper, on a timeline set by the authors. The authors then revise the manuscript, resubmit it, and then the manuscript and reviews are all evaluated. Now for the best part: journals can offer to publish papers directly, and in one case, this resulted in three competing offers from various journals. How cool is that?
And to be fair, some journals are doing a better job at sharing reviews. Many journals published by the British Ecological Society funnel their rejected papers to Ecology and Evolution, and in the ornithological world, sharing manuscripts between The Auk and The Condor got easier this January when the two started sharing an editorial office, and adopting different foci.
Now the downsides. Not many journals have yet bought into the Peerage of Science model (particularly outside of ecology/zoology/behaviour). Time can be important for some, especially graduate students who are, at some institutions (and very foolishly, I believe), required to have one paper published and another accepted in order to obtain their PhD. And the looming effects of the Journal Impact Factor mean that, at least for many, the ball is still in the journals’ court.
Don’t review for shitty journals. I think that sort of goes without saying. But what defines a “shitty journal” doesn’t just depend on its content. Some journals published by the Royal Society appear to be gaming the system to drastically reduce (or rather, present the appearance of reducing) their submission-to-decision time. And some journals have downright terrible editorial practices. In December 2012, I submitted a paper that I thought was a good fit for the SETAC journal Environmental Toxicology and Chemistry (or ET&C). It went out for review, suffered from the “minor revisions, but reject” syndrome, but my coauthor and I appealed to the editor, informing them that the changes suggested by the reviewers would be easy to make. Three weeks later, we resubmitted, and it went out for review. The decision? Reject. But the reason for rejection was what really irked me. One of the reviewers didn’t think the paper fit the aims and scope of the journal, and on that basis, the editor rejected it. One would think that call is the editor’s to make, and that it should be made before the paper is sent out for review! So for the next while, I’ll give these folks a pass.
Many scientists have taken a stance against the publishing house Elsevier, signing a pledge to not publish, review, or edit with any of their journals. I think this is the best example of a case where scientists are trying to collectively say something about the state of academic publishing.
And now the sad part
Regardless of what we as individuals decide to do (and Margaret Mead quotes aside), it’s unlikely to make a difference. Perhaps over time, things will shift, but there will always be someone else to review that paper, and there are always many more manuscripts being submitted than can be published. I’m under no illusions that my own little acts of improving scientific publishing will bring about meaningful change. I also recognize that as someone who has permanent full-time employment, and is under less pressure to publish in Science, Nature, or PNAS, I’m privileged to be able to take certain positions that others can’t. But like most cultural shifts, if we each do what we can, the result will be (slow, gradual) change.