Looking back at the last year, I’ve had terrible luck with journals. Earlier in my (brief) career as a publishing scientist, I had a pretty good sense of where a paper I was working on would end up – it fit with the journal’s mandate, was similar in scope to previously published works, and my coauthors agreed with my choices.
But in the last year, I’m not so sure. I don’t think I’m aiming artificially “high” (whatever that means), and when I relay the editorial decisions to coauthors, they seem almost equally baffled. Here are three examples:
- A natural history paper, grounded in theory and previous research and built on a large sample size, was rejected from a lower-tier organismal journal because it wasn’t sufficiently broad.
- A paper was rejected from a toxicological journal despite two positive reviews. We appealed the decision, it went out for review again, and one reviewer (note: not the Editor) thought it didn’t fit with the scope of the journal. Reject.
- A methods paper was rejected from a methods journal in part because the maths were too technical.
Now, before everyone concludes this is just sour grapes, let me explain why it’s a problem. In total, these rejections consumed the time of 9 people, plus us authors. Even 4-5 years ago, I’m almost certain these papers would have been accepted after revision. When authors have a harder time predicting the outcome of peer review, it wastes the time of editors, subject editors, reviewers, and authors alike. I’m not talking about “Let’s try this at Ecology Letters & see if it sticks”, but considered thought about where a manuscript a) would be presented to the target audience, and b) is likely to be accepted based on the authors’ cumulative experience with the journal (both as readers and as authors).
The result is a tendency (especially among grad students, post-docs, and other early-career researchers) to play it safe, since they can’t afford to bounce a paper around 3-5 journals, each of which takes 1-4 months to review it. That’s almost two years in an extreme case, and something I’ve experienced. Trust me, it’s not pleasant.
And when even positive reviews are no guarantee of acceptance (and the justification isn’t one of space in the journal), my journal-selecting confidence takes a hit.
True, there are journals like PLoS One that evaluate submissions only on technical soundness, but as I’ve pointed out before, that starts to get pricey (and how many waivers will they grant before they catch on, or we reach a tragedy-of-the-commons scenario?). It also raises a question: is publication just about (peer-reviewed) publication, no matter where, or is it about reaching a certain group of people via a particular journal? Have journals become so cosmopolitan in this online age that it doesn’t matter where you publish, as long as it’s indexed by Web of Knowledge or Scopus or Biological Abstracts? Does that explain the rapid rise of predatory Open Access journals? And if publication without regard for the journal is the end goal, why are some journals viewed as “better” than others (by authors, readers, and, more importantly for early-career researchers, by search committees)?
Part of the solution lies in pre-print servers like PeerJ PrePrints and bioRxiv, or refereeing services like Peerage of Science or Axios Review. But in this age of the engaged academic who’s tuned into the ethics of scientific publishing, let’s take a hypothetical example.
I’m in environmental science, and let’s say that I have a paper on contaminants in birds that I want to publish in a “mid-range” journal (i.e., not Science, Nature, PNAS, etc.). I also want to deposit a pre-print in bioRxiv, and I don’t want to publish in an Elsevier journal.
Well, Elsevier publishes Environmental Pollution, Science of the Total Environment, Chemosphere, Marine Pollution Bulletin, Environmental Research, Ecological Indicators, and Ecotoxicology & Environmental Safety.
That leaves Springer journals (like Environmental Monitoring and Assessment, or Ecotoxicology), and some society journals like Environmental Science & Technology (published by the American Chemical Society, ACS), and Environmental Toxicology & Chemistry (published by the Society for Environmental Toxicology and Chemistry, SETAC).
ACS doesn’t currently allow preprints, and Environmental Toxicology & Chemistry is published by Wiley, which hasn’t set a formal preprint policy yet.
But let’s not forget – the only reason this matters is because where academics publish (not [just] what they publish) is considered important by hiring, tenure, and promotion committees (among others). This assertion isn’t just idle speculation by a grumpy postdoc, either – I will offer three examples:
- In 2013, I was long-listed for a faculty position at a well-respected UK university. After I was eliminated, one of the search committee members approached me about collaborating, and I asked if he could provide any feedback on my application. His response was that I hadn’t been short-listed because I lacked any papers in “high impact journals”, by which he meant PNAS, Proceedings of the Royal Society B, Science, Nature, and their ilk.
- A colleague in Australia was recently informed by the head of department that papers “published in journals with an impact factor < 4 don’t matter”.
- Lastly, one of my own colleagues (a senior academic who sits on hiring and promotion committees) authored or coauthored more than two papers in PLOS One in a given year, and quipped that they wouldn’t be sending anything there for a while because “you don’t want everything just in PLOS”.
In a perfect world, the where is secondary (nay, completely irrelevant) to the what of academic publishing. If no one cared where a paper was published, we could eliminate the “aiming high” mentality that wastes everyone’s time; I’d have everything on a preprint server and submitted exclusively to open-access outlets like PLOS or PeerJ.
But for someone looking for work in research, my experience tells me this isn’t a good idea. Change must come from those within who are making the decisions, even if they must be pressured to do so by those of us on the outside.