
The Lab and Field

~ Science, people, adventure


Tag Archives: peer review

Suggestions for responding to reviewer comments

04 Sunday Nov 2018

Posted by Alex Bond in how to

≈ 4 Comments

Tags

peer review, publishing

One of the often frustrating things about the scientific process is finally getting manuscripts published. This is true of reports, theses, journal articles, white papers, and more: anything that undergoes some mechanism of external review where a response is needed. Journal manuscripts are the most common in my line of work, so that’s where I’ll focus, though this applies elsewhere, too.

When scientists submit manuscripts to a journal, editors who think the submission is suitable for their journal (in terms of scope and quality) will send it out to other experts in the field to comment on and assess. But in a throwback to the pre-computer age, when carbon copies of typed manuscripts were mailed and returned, reviewers provide this feedback by referring to page or line numbers in a separate document rather than, say, using a tracked-changes function in a word processor.

If the manuscript isn’t rejected at this stage, authors are invited to respond to these comments, and either comply, rebut, or present new arguments to convince the editor that the work is publishable in that particular journal. Again, as a separate document, often called a “response to review”. And it’s this document that is the focus of this post because seldom is any guidance given, and how one approaches it can be one of those “unwritten rules” about science.

Steve Heard has this covered well on his blog as well, and in his excellent book on science writing (and I hasten to add, he was the first person who explained this process to me when I was a wee masters student, n years ago!).

Here’s an example reviewer comment:

L283 – while this may be true for chocolate cookies, what do the authors expect in their study system of apple pies?

As reviewer comments go, this one is pretty good. It’s specific, and makes a clear suggestion, highlighting what they see as a weakness (in this case, perhaps applying an incorrect interpretation from a different system).

So, how to respond?

If the journal system lets you upload a response to review as a separate document (my preferred method!), then my approach has 3 parts:

  1. Put the reviewer comment in boldface. Just copy & paste it. It’s then easier for you, your coauthors, the editor, and other reviewers to see which comment you’re replying to. I dislike colours because some folks print things out, and bold text is easily distinguished.
  2. Immediately below, explain what you did (or didn’t do) to address the comment in normal type.
  3. Quote ANY new or changed text in italics. Don’t refer to line numbers (which can get easily muddled); just put it right here for everyone to see.

So if we take our example above, it might look something like this:

L283 – while this may be true for chocolate cookies, what do the authors expect in their study system of apple pies?

We thank the reviewer for pointing out this comparison. Indeed, the approach for consuming chocolate cookies (i.e., using one’s hands) is less often applied in the case of filled pastries, including apple pies. We have changed the text to: “Desserts are easily consumed with hands (Monster, 2018) or can be eaten with assistance from cutlery (Garfield 2015)”

Garfield [The Cat]. 2015. Refined dining for modern felines. J. Arbuckle Press, Samoa.

With a quick look, the editor (or reviewer, as it often gets sent back for Round n+1) can see how the comment was addressed, and doesn’t have to wade through the entire manuscript, comparing it to an old version. And a happy editor/reviewer is often a kinder reviewer/editor.

For minor suggestions, like word choice, typos, or where the reviewer comment is obvious, it’s fine to respond with “Fixed” or “Changed as suggested”. But when in doubt add more information rather than less.

At the end of the day, though, the precise formatting doesn’t matter. What matters is that the information is presented clearly and can be easily assessed. Some journals (or some programs) use plain text for responses to review. In this case, I paste the reviewer comment, and below start my comment with “Response” or “R:”, and sadly the new/inserted text part gets left off.
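In fact, the three-part format is mechanical enough to sketch in a few lines of code. Here’s a minimal illustration using Markdown-style bold and italics to stand in for word-processor formatting; the helper function and its name are my own invention, not any journal’s tooling:

```python
# Minimal sketch of the three-part response format: the reviewer's
# comment in bold, the authors' reply in normal type, and any new or
# changed manuscript text quoted in italics. Markdown markup stands
# in for bold/italic formatting in a word processor.

def format_response(comment, response, new_text=None):
    """Return one reviewer comment formatted as: bold comment,
    blank line, plain-type response, and (optionally) the quoted
    new text in italics."""
    parts = [f"**{comment}**", "", response]
    if new_text:
        parts += ["", f"*{new_text}*"]
    return "\n".join(parts)

doc = format_response(
    comment="L283 - while this may be true for chocolate cookies, "
            "what do the authors expect for apple pies?",
    response="We thank the reviewer for pointing out this comparison. "
             "We have changed the text to:",
    new_text="Desserts are easily consumed with hands or with cutlery.",
)
print(doc)
```

Repeating that pattern for each comment, in order, produces the whole response document.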

Few things frustrate reviewers or editors more than a response of “Changed” without indicating where or how. I just stumbled on these formatting methods, and I’m sure there are others. The general advice of clearly indicating a response (to each and every comment) and marking any new or inserted text can be accomplished in many ways.

Happy responding!

 

It’s time to reclaim scientific publishing

15 Saturday Mar 2014

Posted by Alex Bond in opinion

≈ 11 Comments

Tags

journals, peer review, publishing

Ah, publishing.  That oft-lamented, but relentlessly-pursued metric of academic “success”.  So many problems, so many frustrations, so little time.  For many in scientific research (particularly those in academia), publishing research papers is the currency of success.  Papers get more grants that get more students that collect more data for more papers that get more grants …  you get the idea.  So why is there so much apathy towards a system that is largely regarded as broken (or at least severely malfunctioning)?

There’s a general perception that the world of scientific publishing is a buyer’s market, meaning that most of the power lies with the journals (“Oh, thank you, editor. May I have another?”).  I think this is particularly true of grad students and other early-career researchers.  I’m going to argue that we, as individual researchers, need to flip the power dynamic, even in our own little way, but that doing so is hard (impossible?), and one’s ability to do so will depend on one’s career stage, and privilege.

As authors

Journals should want to publish our research.  Right now, we want journals to publish our research.  In fact, we want it so badly that numerous predatory journals have sprung up, charging “author processing fees” akin to page charges or open-access fees, with the promise of an easy review, rapid publication, and a quick line on a CV.  But many of these have poor editorial control and may even lack peer review (see here for a list of questionable publishers, and here for stand-alone journals).  We use largely meaningless (or easily-manipulated) metrics, like turn-around time or Journal Impact Factors, to decide where we want to publish (and in evaluating the quality of our research, and that of our colleagues).  See Terry McGlynn’s thoughts on this from a primarily undergraduate institution here, and my own previous thoughts here.

Now, let’s flip things around.  Imagine a world where journals competed to publish authors’ best papers, and weren’t jerking authors around.  Here are two examples:

Case study 1: A colleague and I had a relatively straightforward ecotoxicology paper that we submitted to the journal EcoHealth in December 2011.  We received our first round of reviews in April 2012, along with the platitude “We feel your paper has potential as an important contribution to our journal”, and a decision of “major revisions”.  It was resubmitted in July 2012, and a second decision of “minor revisions” rendered in November 2012.  Two weeks later, we resubmitted revision #2, and in December, back to “major revisions”, though the editors “feel your paper has potential as an important contribution to our journal, providing valuable insights into the relationship between marine and human health”.  Again, we resubmitted in December 2012, and a further decision of “major revisions” came in January 2013.  After 13 months, and reviewers and editors just plain disagreeing, we withdrew the paper and submitted it to another journal, where it was published in 6 weeks.  Now, one could argue that the 13 months and four rounds of revisions at EcoHealth helped improve the paper, making its review process at the next journal much faster, but 13 months’ worth? Really?  No.  And at least for the time being, I won’t be submitting or reviewing for this journal again.

Case study 2: I had a paper left over from my MSc, and as a group, my coauthors and I decided to submit it to PLOS One.  The philosophy there is that if the science is sound, it should be published.  In fact, our paper was rejected outright.  Quoth the editor: “your manuscript does not meet our criteria for publication and must therefore be rejected”.  OK, no problem.  Lick one’s wounds, vent on Twitter, and move on.

Wow – paper outright rejected from PLOS One (though it could be revised in my view). Fantastic way to start a Tuesday. #postdoclife

— The Lab & Field (@thelabandfield) December 10, 2013

But not so fast! PLOS One associate editor Matt Hodgkinson chimed in and explained to me that by “reject” the journal meant that I could still revise my paper and send it back.  Wait wut?  Scroll down the linked Twitter conversation for the whole thing.  Why on earth would the editor say that my submission DID NOT MEET THE JOURNAL’S CRITERIA FOR PUBLICATION, yet could still be published?  That’s not transparent, and if it’s true, it represents unacceptable heterogeneity among associate editors (which is possibly a side effect of having >5000 editors).  But it’s not the only case of editorial decisions obfuscating the actual process.

Services like Peerage of Science (with which I have experience) and Axios Review (with which I have no experience) offer an alternative.  At Peerage, anyone signed up can review any paper, on the authors’ timeline.  The authors then revise the manuscript, resubmit it, and the manuscript and reviews are all evaluated.  Now for the best part: journals can offer to publish papers directly, and in one case, this resulted in three competing offers from various journals.  How cool is that?

And to be fair, some journals are doing a better job at sharing reviews.  Many journals published by the British Ecological Society funnel their rejected papers to Ecology and Evolution, and in the ornithological world, sharing manuscripts between The Auk and The Condor got easier this January when the two started sharing an editorial office, and adopting different foci.

Now the downsides.  Not many journals have yet bought into the Peerage of Science model (particularly outside of ecology/zoology/behaviour).  Time can be important for some, especially graduate students who are, at some institutions (and very foolishly, I believe), required to have one paper published and another accepted in order to obtain their PhD.  And the looming effects of the Journal Impact Factor mean that, at least for many, the ball is still in the journals’ court.

 

As reviewers

Don’t review for shitty journals.  I think that sort of goes without saying.  But what defines a “shitty journal” doesn’t just depend on its content.  Some journals published by the Royal Society appear to be gaming the system to drastically reduce (or rather, present the appearance of reducing) their submission-to-decision time.  And some journals have downright terrible editorial practices.  In December 2012, I submitted a paper that I thought was a good fit for the SETAC journal Environmental Toxicology and Chemistry (or ET&C).  It went out for review, suffered from the “minor revisions, but reject” syndrome, but my coauthor and I appealed to the editor, informing them that the changes suggested by the reviewers would be easy to make.  Three weeks later, we resubmitted, and it went out for review.  The decision? Reject.  But the reason for rejection was what really irked me.  One of the reviewers didn’t think the paper fit the aims and scope of the journal.  On that basis, the editor rejected it.  One would think that call (which should be done before being sent out for review!) should be the editor’s.  So for the next while, I’ll give these folks a pass.

Many scientists have taken a stance against the publishing house Elsevier, signing a pledge to not publish, review, or edit with any of their journals.  I think this is the best example of a case where scientists are trying to collectively say something about the state of academic publishing.

 

And now the sad part

Regardless of what we as individuals decide to do (and Margaret Mead quotes aside), it’s unlikely to make a difference.  Perhaps over time, things will shift, but there will always be someone else to review that paper, and there are always many more manuscripts being submitted than can be published.  I’m under no illusions that my own little acts of improving scientific publishing will bring about meaningful change.  I also recognize that as someone who has permanent full-time employment, and is under less pressure to publish in Science, Nature, or PNAS, I’m privileged to be able to take certain positions that others can’t.  But like most cultural shifts, if we each do what we can, the result will be (slow, gradual) change.

Does where academics publish matter? Yes (but it shouldn’t)

07 Saturday Dec 2013

Posted by Alex Bond in opinion

≈ 10 Comments

Tags

hiring, jobs, open access, peer review, publishing

Looking back at the last year, I’ve had terrible luck with journals.  Earlier in my (brief) career as a publishing scientist, I had a pretty good sense of where a paper I was working on would end up – it fit with the journal’s mandate, was similar in scope to previously published works, and my coauthors agreed with my choices.

But in the last year, I’m not so sure.  I don’t think I’m aiming artificially “high” (whatever that means), and when I relay the editorial decisions to coauthors, they seem almost equally baffled.  Here are three examples:

  • A natural history paper, but founded in theory, previous research, and high sample size, was rejected from a lower-tier organismal journal because it wasn’t sufficiently broad.
  • A paper was rejected from a toxicological journal despite two positive reviews. We appealed the decision, it went out for review again, and one reviewer (note: not the Editor) thought it didn’t fit with the scope of the journal. Reject.
  • A methods paper was rejected from a methods journal in part because the maths were too technical.

Now, before everyone thinks this is just sour grapes, let me explain why this is a problem. In total, these represent the time of 9 people, plus us authors.  Even 4-5 years ago, I’m almost certain that these papers would have been accepted after revision.  When authors have a harder time predicting the outcome of peer review, it wastes the time of the editors, subject editors, reviewers, and authors. I’m not talking about “Let’s try this at Ecology Letters & see if it sticks”, but considered thought about where a manuscript a) would be presented to the target audience, and b) is likely to be accepted based on the authors’ cumulative experience with the journal (both as readers and as authors).

The result is a tendency (especially of grad students, post-docs, and other early-career researchers) to play it safe, since they can’t be bouncing a paper around 3-5 journals, each of which takes 1-4 months to review it (that’s almost two years in an extreme case, and something I’ve experienced.  Trust me, it’s not pleasant).

And when even positive reviews are no guarantee of acceptance (and the justification isn’t one of space in the journal), my journal-selecting confidence takes a hit.

True, there are journals like PLoS One that evaluate only on technical soundness, but as I’ve pointed out before, that starts to get pricey (and how many waivers will they grant before they start to catch on, or we reach a tragedy of the commons scenario?), and gets to the question: is publication just about (peer-reviewed) publication no matter where, or is it about reaching a certain group of people via a particular journal?  Have journals become so cosmopolitan in this online age that it doesn’t matter where you publish, as long as it’s indexed by Web of Knowledge or Scopus or Biological Abstracts?  Does that explain the rapid rise in predatory Open Access journals?  If publication without regard for the journal is the end goal, why are some journals viewed as “better” than others (by authors, readers, and more importantly for early-career researchers, by search committees)?

Part of the solution lies in pre-print servers like PeerJ PrePrints and biorXiv, or refereeing services like Peerage of Science, or Axios Review.  But in this age of the engaged academic who’s tuned into the topic of ethics in scientific publishing, let’s take a hypothetical example.

I’m in environmental science, and let’s say that I have a paper on contaminants in birds that I want to publish in a “mid-range” journal (i.e., not Science, Nature, PNAS, etc).  I also want to deposit a pre-print in biorXiv, and don’t want to publish in an Elsevier journal.

Well, Elsevier publishes Environmental Pollution, Science of the Total Environment, Chemosphere, Marine Pollution Bulletin, Environmental Research, Ecological Indicators, and Ecotoxicology & Environmental Safety.

That leaves Springer journals (like Environmental Monitoring and Assessment, or Ecotoxicology), and some society journals like Environmental Science & Technology (published by the American Chemical Society, ACS), and Environmental Toxicology & Chemistry (published by the Society for Environmental Toxicology and Chemistry, SETAC).

ACS doesn’t currently allow preprints, and Environmental Toxicology & Chemistry is published by Wiley, which hasn’t set a formal preprint policy yet.

But let’s not forget – the only reason this matters is because where academics publish (not [just] what they publish) is considered important by hiring, tenure, and promotion committees (among others).  This assertion isn’t just idle speculation by a grumpy postdoc, either – I will offer three examples.  In 2013, I was long-listed for a faculty position at a well-respected UK university. After I was eliminated, one of the search committee members approached me about collaborating, and I asked if he could provide any feedback on my application.  His response was that I hadn’t been short-listed because I lacked any papers in “high impact journals”, by which he meant PNAS, Proceedings of the Royal Society B, Science, Nature, and their ilk.  A colleague in Australia was recently informed by the head of department that papers “published in journals with an impact factor < 4 don’t matter”.  Lastly, one of my own colleagues (a senior academic who sits on hiring and promotion committees) authored/coauthored >2 papers in PLOS One in a given year, and quipped that they wouldn’t be sending anything there for a while because “you don’t want everything just in PLOS”.

In a perfect world, the where is secondary (nay, completely irrelevant) to the what of academic publishing.  If no one cared where a paper was published,  we could eliminate the “aiming high” mentality that wastes everyone’s time, and I’d have everything on a preprint server, and submitted to exclusively open-access sources like PLOS, or PeerJ.

But for someone looking for work in research, my experience tells me this isn’t a good idea.  Change must come from those within who are making the decisions, even if they must be pressured to do so by those of us on the outside.

Why are there so few retractions in ecology & conservation biology?

06 Thursday Jun 2013

Posted by Alex Bond in opinion

≈ 8 Comments

Tags

peer review, publishing, retraction, writing

(Spoiler alert: I don’t know the answer)

It will come as no surprise to regular readers that I’m fascinated by the process of science, as well as the science itself.  I was very late to the game of RSS feeds, which is how I now keep on top of about 70 blogs, message boards, and news sites.  But one of the first science blogs that I followed (after Dynamic Ecology, of course!) was Retraction Watch.

For those unfamiliar, Ivan Oransky and Adam Marcus keep tabs on the often opaque and ambiguous retractions that crop up, mostly in the psychological and medical literature (and also other fields, including ecology).

Recently, Richard Primack, the editor of Biological Conservation, was quoted as saying that only 1-3 of every 10,000 papers published in his journal were retracted.  That’s a far cry from a lot of other journals, and got me thinking: why don’t we see retractions in ecology, evolution, and conservation biology?

When reviewing papers, I’ve come across fundamental errors with data analysis, like analysing count data with an ANOVA, or multiple testing instead of a multivariate analysis (I don’t think this is a case of “statistical machismo”, since linear models are among the most basic of frequentist statistics).  More egregious, though, are some of the Methods sections of lab papers that don’t report quality assurance/control (QA/QC) procedures or results (never mind the complete lack of any QA/QC in field studies, but that’s a post for another time).  For example, if I want to measure the amount of mercury in a feather, I need to know how accurate my machine is.  This is accomplished by running a material of similar composition with a known mercury concentration.  I can then adjust the values I obtained for my unknowns based on how much mercury I measured in the standard.  It’s amazingly bizarre how frequently this kind of QA/QC isn’t reported at all.
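That adjustment step can be made concrete with a toy calculation. This is a minimal sketch of a simple recovery correction with made-up numbers; real labs have more involved QA/QC protocols:

```python
# Toy QA/QC recovery correction: run a certified reference material
# (CRM) of known concentration alongside the unknowns, compute the
# percent recovery, and adjust the measured unknowns accordingly.
# All numbers are made up for illustration.

crm_certified = 2.00   # known Hg concentration in the CRM (ug/g)
crm_measured = 1.90    # what the instrument reported for the CRM

recovery = crm_measured / crm_certified  # 0.95, i.e. 95% recovery

# Correct each unknown by dividing by the recovery factor
unknowns_measured = [0.57, 1.14, 3.04]
unknowns_corrected = [x / recovery for x in unknowns_measured]

print(f"Recovery: {recovery:.0%}")
print([round(x, 2) for x in unknowns_corrected])
```

A Methods section that reports the CRM used and the recovery obtained lets a reader (or reviewer) judge whether the reported values can be trusted; omitting it leaves that unknowable.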

I’ve also found that, after I review a paper and it’s eventually published, not all of my suggestions* have been incorporated by the authors.  I’d anecdotally estimate this as 50/50 for substantive changes.  And since I don’t get to see the authors’ ultimate response to the editor, I don’t know what they said that made the editor agree with them over me, especially for some fundamental issues.  Heck, even recommendations of “accept”, “revise”, or “reject” are at the whim of the editor, and may not reflect the reviewers’ recommendations.

The result is that there are some papers in the corpus of scientific literature that shouldn’t be there.  And once there, at least in ecology, it seems that it’s up to the reader to determine whether or not the paper is valid.  But these papers can inform future research, government policy, and species management.

To be clear, I’m not advocating the retraction of studies that are subsequently found to be incorrect because of future research, and I think that the case for retraction is on somewhat shakier ground for the more conceptual flaws found in Zombie Ideas.  (Jeremy Fox at Dynamic Ecology has discussed (at great length, I might add) the concept of Zombie Ideas – those ideas that, even though proven false, refuse to die, to which I can add the concept of “fishing down the food chain” thanks to Ray Hilborn’s lovely rebuttal).

The medical and biochemical literature is replete with retractions because results could not be replicated due to contaminated reagents, mislabelling, and outright fraud.  In most cases, these flaws are fatal and can’t just be corrected, so the paper is retracted.  What I’m taking issue with in ecology (and the related fields of organismal biology and conservation biology) are problems that for the most part CAN be fixed, but AREN’T.

Is this a cultural phenomenon? Are we, as ecologists, far above the flawed medical profession with its “Big Pharma”, and rampant publication bias for clinical studies?  I don’t think so.  But the culture in ecology is to let these shite papers remain part of the canon and operate on the belief that “everyone knows that study X is flawed”.  Is this a case of ecologists trying to be collegial (rather than the cut-throat world of medical research)?  Or, like Edward Drinker Cope, do we just rush off to our favourite competing journal (or in his case, keep detailed notes of his colleagues’ flaws?), or write a rebuttal (that will often not get noticed)?

I think it’s time to call a spade a spade.  Papers that are fundamentally flawed for technical reasons should be either corrected or retracted.  And it’s up to us as authors, reviewers, and editors to make sure that the manuscripts we submit, review, and accept are technically sound.  These seemingly minor issues (which some might call pedantic, though I disagree) can affect the results, and therefore the conclusions of a paper, which is why I’m going to change my strategy when reviewing or editing a paper: if the analyses (lab, field, or statistical) aren’t “ship-shape and Bristol fashion” back it goes.  Granted, this can be easier for some papers than others, but why devote significant time to reviewing a Results or Discussion section (often the meat of a manuscript) if they’re possibly going to change anyway?  The reviewers’ and editors’ job isn’t to rewrite the authors’ paper.

I’ve been told that I’m “rather generous” in my time spent reviewing (which, from some people, is code for “You spend too much time reviewing”).  To the point that I’ve almost debated asking for coauthorship!  But no more. I’m tired of explaining fundamental statistics or lab procedures in each and every review.  And I’m tired of seeing shoddy analyses and false conclusions drawn from the data in otherwise good papers.  I don’t think it’s intentional on the part of most authors, but perhaps speaks to a broader statistical illiteracy and disconnect from lab analyses as point-and-click stats packages become the norm, and per-sample lab costs go down.

 So let’s keep the science rigorous, or else risk losing the good science in the bad, like a needle in a haystack.

*Suggestion is too soft a word; perhaps “requirement” is better?

Reviewer to author: acknowledge

13 Wednesday Feb 2013

Posted by Alex Bond in friday scribbles

≈ 6 Comments

Tags

acknowledgements, peer review, polls, writing

Though flawed, peer review has always been the cornerstone of scholarship.  The cut-and-dried version goes like this: academics research and write a manuscript, and then send it to a journal.  The editor first makes a decision about whether the manuscript is suitable for the journal and meets some standard of quality.  If it does, s/he will send it to anywhere from 1-4 other experts in the field, who will read it and provide their comments on the submission back to the editor, who will make a decision and inform the author(s).

Now, let’s look at it from the reviewers’ perspective.  Potential reviewers are contacted by the journal and asked to assess the manuscript.  They can either accept or decline.  If they accept, they have some period of time (generally 3-4 weeks) to provide their review back to the editor.

Obviously, the quality of reviews varies A LOT.  I’ve had reviews that were 5 lines, and others that were 8 pages.  But length alone shouldn’t be an indicator of the quality of a review.

In my experience, I’ve received more good reviews than bad reviews (in terms of their quality, not their decision about whether the journal should accept my manuscript).  And I try to pick out what I like from others’ reviews (techniques, format, etc) and incorporate it into my own style.  I tend to write lengthy reviews, but many of the comments are usually relatively minor, and would take 1-2 minutes to fix (e.g., suggestions to improve readability, grammar, etc).  But the bottom line is that I try to improve the manuscript by giving critical feedback.

Which is why it irks me when I see a manuscript I’ve reviewed (sometimes 2-3 times) finally appear in a journal, but fail to acknowledge the editor or reviewers.  Did they not contribute to the manuscript (though not enough to merit authorship*)?  It’s a simple one-liner in the acknowledgements:

We thank Person A, Person, B, and n anonymous reviewers for improving this manuscript.

Person A and Person B would be people that the authors asked to review the manuscript themselves before submitting (something everyone should do!), or the editor if they provided substantive feedback.

But perhaps this is just because most authors can’t stand my reviews, and purposely snub the reviewers in their acknowledgements by not mentioning them.  So I decided to look at a recent issue of a few journals I haven’t reviewed for in the last 2 years — Ecology, Journal of Animal Ecology, and Oecologia — and see what proportion of published articles acknowledged those involved in the review process.

I’ll reveal the results in this week’s Friday Scribbles, so I’d like to know what you estimate:

 

*though I can think of a couple of reviews I’ve done where I contemplated asking, given the amount of time and effort I put in to them!
