In praise of researching (and publishing) “local” conservation science



If you’ve published a scientific paper in a journal, you’ll know that part of the challenge is making it relevant to a broad audience. Why should a conservationist in Outer Mongolia, Zambia, Murmansk, or Baton Rouge care about your study? Chances are they study (or are concerned with/interested in) different species in different places. The pressure, therefore, is to wrap much of our conservation science in global policy and priority frameworks: the Aichi Targets, multilateral environmental agreements, globally threatened species, or highly imperilled habitats. Which is good and fine and has resulted in lots of policy-relevant science and conservation action.

But conservation also operates on a much smaller, more local scale, and with individuals on the ground in communities who can influence local, regional, and national policies and conservation actions. And this requires the science underpinning these actions to be, at least in part, local in nature. Sure, we all know that global warming is driving our planet further down the 6th Great Extinction, but most people will only take action when they see this manifest in their own backyards. Why have the doves returned a month early? Where did all the swifts go? Weren’t there fish in this lake?

And this is where “local” conservation science comes in. And it’s some of the most rewarding science with which I’ve been involved, even though it can be some of the most difficult science to publish. Providing the evidence base for local problems gives scientists and conservationists a better bargaining chip when holding governments to account, and when speaking with the public and the media. A local story is usually more relatable than one from a seemingly abstract land far away.

Local conservation needn’t be novel, ground-breaking, cutting-edge, or revolutionary. Its purpose is rather different, though from an implementation perspective just as important (if not more so). But this very nature makes it a more difficult problem for academic researchers to tackle, as it’s unlikely to be of global significance, gain copious citations, or end up in a journal with an impact factor >4. It therefore often falls to scientists in government agencies, independent researchers, and non-governmental organizations to contribute to this science.

I’ve been lucky enough to be involved in a couple of these kinds of studies, and have a few more in the pipeline. We showed the migratory patterns and geographic distribution of Flesh-footed Shearwaters in the northeast Pacific Ocean (Bond & Lavers 2015), and described the current status and threats facing Streaked Shearwaters on the Korean Peninsula (Hart et al. 2015). In these papers, we learned a heck of a lot about the species involved, and hope that these will become go-to papers when someone compiles details into whole-species assessments of status, distribution, and threats.

Overall, the key to success with local conservation science is the involvement of local people. The paper on shearwaters in Korea was only possible because of people in Korea. The same is true of the other (as yet unpublished) bits of work I’m involved with. These local connections make the work more likely to be well received (if received at all) by the people who matter (those who will enact policy or implement conservation interventions on the ground). The days of colonial science, where outsiders (often from the UK, US, or other countries with an advanced state of scientific inquiry) come in, do something, leave, and then issue what amount to scientific edicts (which are often promptly ignored) are over (or at least should be).

But, for me, the bottom line is that I find this kind of science fun. It’s adding a piece to a puzzle, and I find it very rewarding, especially when it’s highly driven by local collaborators (I usually just provide some stats, and editing… they do the real work of data collecting, and then working with the community to influence change). And at the end of the day, I like to think that it has some benefit for the species and sites we’re trying to look after.

2016 by the numbers


It’s time once again for my annual round-up of science, and science blogging by the numbers. You can also read the 2013, 2014, and 2015 editions.


The number of posts on The Lab and Field this year. It’s low, but I found that blogging took more energy/effort this year than I had to give.

The most popular posts this year were:

  1. Personal academic websites for faculty & grad students: the why, what, and how
  2. Landing an academic job is like an albatross
  3. Beware the academic hipster (or, use what works for you) UPDATED
  4. Volunteer field techs are bad for wildlife ecology: the response
  5. How did we learn that birds migrate (and not to the moon)? A stab in the dark
  6. The advantages of Google Scholar for early-career academics
  7. Languishing Projects
  8. Why the #LGBTSTEMinar succeeded & was needed
  9. I am not an academic (for now)
  10. Manuscript necromancy: challenges of raising the dead

I’m always amazed that a blog post about how to build a basic website is still, by a long shot, the most popular post year after year. It had >3x more visitors than the next most popular post. Go figure!

31,905 (give or take)

The number of page views this year. Good heavens you people, don’t you have anything better to do?


The number of countries/autonomous regions represented by those readers. Wow. About a third of visitors were from the US, with >4000 from each of Canada and the UK. Shout out to the one visitor this year from Bolivia, Barbados, British Virgin Islands, Dominica, Samoa, Honduras, Jersey, French Polynesia, Montenegro, New Caledonia, Solomon Islands, Senegal, Cambodia, U.S. Virgin Islands, Papua New Guinea, and Curaçao!


Days I spent in the field, the shortest of any year in which I’ve had field work. All done on Lord Howe Island, Australia, which I hope to return to in 2017.


The number of new papers published this year, up from 10 last year thanks to some exceptionally productive co-authors! A bunch of these were also from a Special Issue that I co-edited, and that took much longer than anyone expected (2.5 years).


The number of co-authors I had in 2016.


My Gender Gap for co-authors in 2016 (the ratio of female:male coauthors). A step up from last year (0.29), but far from parity.

Heaps (metric)

The number of brilliant people from Twitter that I’ve met in the last 12 months, mostly at conferences like the LGBT STEMinar, at ornithology meetings in Edinburgh and Barcelona, or because we both happened to be in the same place at the same time. Lots of connections strengthened, much laughter, and a few collaborations, too. And tea.

8 (maybe 9)

Number of graduate/honours students I’m co-supervising in 2017. Certainly wouldn’t be possible without the university-based supervisors spread across the UK, Canada, and Australia. This is largely a new adventure for me, and I’m sure there will be peaks and valleys. Or perhaps swings and roundabouts.


Number of staff I was involved in recruiting this year, from seasonal posts to 2-year positions. Let’s just say I’ve gotten to know our HR department rather well lately. But I’ve also had a chance to see what makes a good interview (from both sides), which has been rather instructive.


Number of emails I sent in 2016. That’s roughly 21/day (or 34/working day). Some were long, others much shorter. This is the first year I’ve kept track, officially. The volume of email is the thing I struggle with most in my day-to-day job, and I highly recommend this post by Meg Duffy over on Dynamic Ecology for strategies to cope. I will try to send fewer emails in 2017.
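Those two rates, taken together, roughly bound the annual total; here’s a quick back-of-envelope check (my own arithmetic, using only the quoted per-day figures and a 230-working-day year):

```python
# Back-of-envelope bound on the annual email total from the two quoted rates:
# ~21 per calendar day (365 days) and ~34 per working day (230 days).
low = 21 * 365    # lower edge of the range implied by the rounded rates
high = 34 * 230
print(low, high)  # 7665 7820

# A total around 7,750 is consistent with both rounded rates:
total = 7750
print(round(total / 365), round(total / 230))  # 21 34
```

So whatever the exact figure was, it sat somewhere in the mid-7,000s.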

3

New countries visited this year: Germany, Spain, Switzerland. Or 4 if you count Wales.


2016’s been a tough year for a lot of people, me included, for reasons that can’t be put into numbers. Let’s all look after each other in 2017.

Reading 365 (or, rather, 230) papers a year



A couple of years ago, a number of scientists on Twitter decided to try and read 365 scientific papers in a calendar year. Joshua Drew summarized his efforts and results quite well, and Jacquelyn Gill provides a great introduction to the motivation for “365 papers” (among other efforts) on the Contemplative Mammoth. And over the holiday break, as I was sorting out my “To Read” folder, I realized that it was getting rather full, and I needed a strategy to get that number down from about 972 to something more… manageable.

I think this is a neat idea, and joining together with others pursuing the same goal can act as encouragement (something we use with Shut Up and Write sessions), but it can also make those not participating feel guilty about not keeping on top of their own “To Read” lists. The Twitter hashtag #365papers also implicitly assumes that all 365 days of the year are available for work, which is far from the truth. At my organization, we budget for 230 working days in a year, which accounts for weekends, statutory/bank holidays, annual leave, and other non-work days (e.g., professional development). So I’m going to try for #230papers.
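For those curious how a 365-day year shrinks to 230 working days, here’s a minimal sketch; only the 230 total comes from the text above, and the individual components are my illustrative assumptions:

```python
# Reconstructing ~230 working days in a year. The 230 total is from the
# post; the component numbers below are illustrative assumptions only.
days_in_year = 365
weekends = 104        # 52 weeks x 2 days
bank_holidays = 8     # a typical UK allotment (assumed)
annual_leave = 18     # assumed
other_nonwork = 5     # professional development, etc. (assumed)

working_days = days_in_year - weekends - bank_holidays - annual_leave - other_nonwork
print(working_days)  # 230
```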

The astute among you will have noticed that 230 < 972, and there’s a very high probability that papers published in 2017 will be of interest to me, so it’s certain that my “To Read” folder won’t shrink by much.

972 > 230

It turns out that 972 is not less than 230.

So in addition, I’m adopting a ruthless culling approach: if I can’t remember why I downloaded the paper after reading the abstract, I’ll delete it (unless it appears to have taken some effort to obtain, or is from an obscure source).

I’m also hoping this will kick-start my blog posts this year, which have lagged of late. I certainly won’t write about every paper, but I’ll post a list of papers and monthly tallies below for those playing along at home, and I’ll try to tweet links periodically with the #230papers hashtag.

Here’s to a brain-expanding knowledge-assimilating 2017!

See the full list here.

Astrophotography as a gateway to science



Doctor PMS on Twitter pointed out a news release about a paper that used astrophotography as a “gateway to science” at the university level, which reminded me that as a wee lad in high school in the late 90s/early 2000s, our science club did quite a bit of astrophotography (which involved some creative arrangements of sitting in a car, or a basement, and not freezing to death in the Canadian winter).

What enthralled me at the time was that one could, relatively easily, see things like the rings of Saturn, or the Orion nebula, or the red spot on Jupiter. Recall that this was at a time when it would take hours to download a music album, and the print magazine was still the king in terms of photography.

So while I’m fairly sure I would have ended up in science regardless, those cold nights were the first time I can recall the spark of scientific discovery, even if what we had “discovered” had actually been done hundreds of years before, with much simpler equipment.

Here is a selection of images from the Riverview High School Science Club / Astrophotography@RHS / Saturn Labs [so named for the make of car we would hide in to prevent frostbite]. The images were taken in black & white, and we added some artificial colour.

Saturn and its moon Titan



The lunar surface

Whirlpool Galaxy (M51)



Orion Nebula (M42)

Jupiter and some of its moons

FAQ, and answers thereto (Christmas 2016 edition)


The latest in bizarre search terms, and slightly facetious answers. Whatever your motivations for visiting The Lab and Field, I hope the holiday season treats you well.


Labrador duck sightings

None, since at least ca. 1878.


birds migrate to the moon

We thought they did, but turns out they don’t. Also: early diet studies of birds clearly overestimated the contribution of cheeses.


scientistseessquirrel blog

That’s Stephen Heard’s blog you’ll be after then.


lab management is a diverse field. i am interested in learning what fields of lab work you have done or …

Googling interview questions is always prudent.


dead parakeets

It’s passed on. This parakeet is no more! It has ceased to be. It’s expired and gone to meet its maker. This is a late parakeet. It’s a stiff. Bereft of life, it rests in peace. If you hadn’t nailed it to the perch it would be pushing up the daisies. It’s rung down the curtain and joined the choir invisible. This is an ex-parakeet.


the job search never ends



post-doc never ends

It can sometimes feel that way. See also above.


oedsex m

You want to do what?


define lunar migration

When the moon follows the seasonal resource pulse across the celestial landscape.


i’m a science student but i am doll my solution

I first read this as “I’m a science student, but I am a doll, here is my solution”, which I sort of imagine is an advice article for mannequins into botany or quantum physics.


benefits for being a google schollar

Free spell check?


if i were a bird,i would fly to the fact

Doubtful, but keep reaching for the stars!


research seminar sucks life

They often do. But you can fix them!


nserc gives priority to women

Pretty sure you’ll find they don’t.


#917248324497 kiska no h

How did this bring you to my site?!




i have a gay colleague in the lab

How fabulous!


how to use magic to excel in academia

As in the card game? Or as in the Harry Potter-type of wizarding? Because the answers will be rather different.


laboratory #4 answers

A, C, B, D, A, Yellow, subdermal, and hoatzin.


single bird with name

Lauren. She enjoys long flights by the beach, sunflower seeds, and migrating like nobody’s watching.


fluffy backed tit babbler

You called?

Example interview questions in conservation



One of the things I tend to do a fair bit of is recruitment/hiring, usually for seasonal or short (<2 year) contracts. In the last two years, I’ve been involved in well over a dozen competitions as part of the interview or selection panel for what would be termed alt-ac or field tech science jobs.

Recently, several friends have asked for my advice on what to expect from an interview, so I thought it would be worth posting here. I’ve already written a bit geared towards applying for field jobs.

I’ll heavily caveat this, though – my take isn’t everyone’s take, and practices likely vary among (and even within!) organizations. Always remember to consider the albatross. And I’ll assume that you’ve already passed the hurdle of the paper application.

My general take is that if you’re being interviewed, chances are you tick all the basic boxes and the organization is at least considering hiring you. Basically, you meet all the technical requirements; the interview will be about how you approach problems, and other things that can’t be easily assessed on paper.

I always advise folks to think of the sorts of questions the interview panel is likely to ask, and how you might answer them.

Usually, there are some fact-based questions pertinent to the job. For a recent post about marine protected areas (MPAs), we asked which international agreements/treaties were important for MPA designation. The purpose here is to see that you know your stuff (or at least where to look for it). The interview panel will likely have various keywords (or their general gists) that they’re looking for here, so there can be a right/wrong answer.

There is often a problem-solving part of the interview: “How would you do X?” These are almost always technical in nature, and interview panels will probably be looking for a broad grasp of how you approach a problem. This could be analysis, data manipulation, supervision… the content depends on what the job specification includes. There are no right/wrong answers here; the panel is trying to understand how you might do in the work environment.

And there are usually a couple of more personal questions. Why did you apply for this job? What is your greatest strength? What is your greatest weakness? We ask these, or similar variations, almost every time. In these cases, there are no right or wrong answers; they are about seeing how your self-assessment matches up against what your referees might say about you (we often ask referees similar questions). Everyone has strengths and weaknesses, so the “I can’t think of any” cop-out isn’t advisable.

In all cases, look for “added value” – moving beyond the question to the next logical step. And if at all possible, use concrete examples of how you’ve done X (or something similar to X) successfully in the past, e.g., “We did something similar in a recent paper…”. Specific, tangible examples are always encouraged (yay evidence!).

Lastly, we always give candidates an opportunity to ask questions of the panel, so it’s always good to prepare a few queries in a couple of areas – technical specifications about the job like start/decision dates, questions about resources available, like computing, employment policies and benefits, scope of the job and possibility of branching out, details about the ultimate goals or deliverables, etc.

As I said, these are highly directed towards those looking for work in the non-academic conservation sector, but some of the themes are likely to be broadly applicable based on my experiences in academia as well. And feel free to add your 2¢ in the comments below!

On finding an error in my own published paper



Dan Bolnick (on Eco-Evo Evo-Eco) and Meg Duffy (on Dynamic Ecology) have both posted stories of how they were confronted with, and subsequently addressed, the need to retract or correct their published papers. This falls into the “scientists are humans; humans make mistakes; therefore scientists make mistakes” logical tenet, and they both addressed it wonderfully. Sadly, that’s not always the case.

So to further demonstrate that scientists, as all humans, make mistakes, here is my tale of finding a fairly significant error in my first ever paper.

In my undergrad, I spent a spring at the Point Lepreau Bird Observatory in southern New Brunswick. Yes, past the nuclear exclusion zone and next to a 19th-century lighthouse was a little hut with electricity, a portable heater, radio, and view of the majestic Bay of Fundy. As one of the more southerly points in the area, it was also a hotspot for migratory birds, mostly seaducks, on their way north to breed. My job: figuring out how many scoters (3 species of mostly-dark seaducks: the Black, Surf, and White-winged Scoters) passed the site in April and early May. I was also generously allowed to analyse the previous 8 years’ data (and have since heard through the grapevine that a student may be updating this work soon!).

The Point Lepreau Bird Observatory in 2004.

I was, at the time, terrible at data analysis and statistics. I had more pivot tables than I knew what to do with, graphs were made in Excel, and I think I used JMP for the various ANOVAs.

But I, and my supervisors, were able to churn out some basic stats on the timing of migration, the peaks, and come up with a crude estimate of how many birds passed the point each year. I would almost certainly analyse the data completely differently today (and I hope the aforementioned student does!). After some fairly minor revisions, it was published in 2007 in Waterbirds, and I was elated – my first publication!

One of the challenges was that the counts were done in 15-minute stints (15 on, 15 off), so in essence I had to double all the counts with the assumption that the number and composition of birds was identical in the counted and uncounted periods.

Except I forgot to do that.

Is that one Black Scoter?

Or two?

I got an email in 2009 from one of the naturalist club’s members (the observatory was run by the Saint John Naturalists Club at the time) pointing out that he thought my numbers were too low. I dug into the terribly formatted, awkward files, and realised what we had done (or rather, not done).

I was devastated.

I immediately wrote my supervisors, contrite and apologetic. We quickly prepared a correction (which essentially doubled the population estimate, so hardly insignificant), and emailed the editor, who agreed a correction was in order; we subsequently published it.

Unlike Dan or Meg’s stories, this wasn’t a high profile paper, but it was my first one, and one of the very few for which I have a printed issue of the journal on my shelf. But everyone understood it was an honest mistake, and we did what we could to fix it.

I’ve opined before about why there are so few retractions or corrections in conservation biology/ecology, and I don’t see this changing, or being any different. But in the meantime, if anyone finds an error in any of my other published papers (I’m sure there are some floating around), I will happily try to set the scientific record straight.

Best Practices



When I started my career in science more than a decade ago, I had no idea that I would find aspects of how and why we science so interesting. When I first came across scientific papers on these subjects (rather than on birds or mercury or migration, which I was studying at the time), I put them in a folder called “Thought Papers”, and even blogged about one of them here.

I think this folder of papers (which now stands at >100) has generated more “deep thought” about science than an equivalent number of ecological / marine / conservation papers. I’ve even written what I would call a “thought paper” on the problems with unpaid work, which is prevalent in science. But lately I’ve wondered about the efficacy and impact of these contributions.

Back in 2012, Fields Medalist Tim Gowers initiated a campaign dubbed “The Cost of Knowledge” aimed at publishing giant Elsevier, wherein signatories pledged not to serve on editorial boards, review for journals, or submit their work to titles published by Elsevier in protest of its practices. This week, an analysis published in the journal Frontiers in Research Metrics and Analytics examined the publishing record of approximately 1000 signatories from psychology and chemistry (out of the roughly 16,000 signatories in total) who had pledged not to publish in an Elsevier title, and found that more than a third had actually done just that in the intervening four years. They outline a number of explanations and interpretations of the data, and I encourage you to check it out.

My point here isn’t to dive into the potentially questionable business practices of Elsevier, but to contemplate the laundry list of things that significant portions of the scientific community view as “bad”, and that have been pointed out in a variety of fora, yet continue. One could add to this list the proper citation of computer packages/software or taxonomic authorities, reporting effect sizes rather than just p-values, acknowledging reviewers, putting figures & legends together, making meaningful statements about author contributions, using reproducible methods (or describing methods in sufficient detail that they could be recreated), managing and archiving data, the relative importance of the Impact Factor, and more. In fact, PLOS Computational Biology has a very successful series called “10 simple rules”, which invited authors to propose, well, 10 simple rules for their topic of choice.

The social scientists among you are probably all too familiar with the reasons why these practices, which as a scientific community we generally see as Good Things, aren’t adopted more widely, or are adopted in only a “flash in the pan” way, and quickly die off. I certainly don’t expect the ideas espoused <5 years ago to propagate across all of Science in such a short time, but I find myself exasperated when, for the nth time, I mention these ideas and am met with a blank stare.

One reason for this is that very often these ideas are broadcast to those who are already likely to espouse them (the whole “preaching to the choir” syndrome). Most recently, as Morgan Jackson pointed out on Twitter, a paper that highlights the importance of citing taxonomic papers was published in a taxonomic journal.

The other is that science is a very distributed community – there’s no Head of World Science, and even influential organizations like the Royal Society, or the National Academy of Science of the USA, or Росси́йская Aкаде́мия Hау́к (Russian Academy of Sciences) have little, if any, influence on their members to follow what might be called “best practices”. Ultimately, it comes down to journal editors and reviewers (and even if I make some of these points as a reviewer, they can easily be ignored by authors or over-ruled by editors). And given that there are ever more suggestions for How To Science each year, it can be tough to keep up with them all.

As a MSc student, my supervisor mandated that we attach a checklist to the front of our manuscripts for his review (yes, we printed them off!). Were all tables necessary? All figures? Were any duplicative? Were all references cited listed, and vice versa? Had it been reviewed by another lab member? Were pages numbered? He would only read it if these were all checked off. Is it time to think about a broader checklist? True, many journals have something equivalent in their Author’s Guidelines, but they’re often ignored or inconsistent.

While I could come up with some things with which to populate such a list, it’s likely to be very field specific. And even then, dear reader, I’m likely already preaching to the choir, and adoption, anyway, will be far less than 100% and likely decrease with time.

What’s in an affiliation?



Every scientific paper has a few key ingredients, but the one that may receive rather little attention is the authors’ affiliation(s). Absent until the early/mid 20th century, authors’ affiliations were probably added to facilitate correspondence with an ever-growing community of readers.

Nowadays, it can be a loaded, political, and/or much discussed part of putting a paper together for a variety of reasons. Organizations, departments, and schools use it as evidence of research output for publicity, or internal and external evaluations. Indeed, in some places, authors (or their departments/institutions) receive financial remuneration for published papers, which hinges on the affiliations. Sometimes this can be gamed (in obvious ways), as more and more researchers are offered honorary positions at institutions where they used to work*, or collaborate frequently. Seeing authors repeatedly list 3, or even 4, affiliations often causes me to raise my eyebrows. Most (some?) of these are legit, but how many simply list affiliations so the institution can add the paper to their list of “papers published by our department/school/institute”? Adding such affiliations has no material cost to the authors, is highly unlikely to be questioned, and so becomes a question of personal ethics. I should add, though, that I suspect this is an extreme minority of cases.

As a general rule, authors should list their affiliation as the place where they did most of the work. In my case, this is fairly straightforward: if I primarily use data collected during my MSc, my affiliation is the University of New Brunswick (and I list my current affiliation as “Present/current address”). In some cases, though, the distinction between data/ideas/projects started at Affiliation A and those at Affiliation B may be more opaque.

Author affiliations can also be a political tool. Some institutions (primarily those outside academia) require approvals to publish, or authors may want to publish on topics that are outside the scope of their work. In extreme cases, authors may wish to make particular points or conclusions that could be counter to those of their employer (e.g., government policy), or their employer may not wish to be affiliated with a particular piece of work. I discussed this last scenario with a friend of mine who was told he couldn’t list his institution’s affiliation on a manuscript. His solution was to basically invent an affiliation (we amusingly settled on the nonsensical “Giraffe & Sons, Ltd.”, though I don’t think he ended up using it in the end, sadly) for work that he did outside of his day job. Similarly, it’s fairly common in ecology/conservation for researchers to do small bits of independent consultancy, which could include publication based on work done while on one’s own dime, so to speak.

On a more annoying/foolish/sinister side, affiliations have likely been used by some to infer the quality of the output (“Check out this new paper from Cambridge” can sound more impressive than “Check out this new paper from North-central Podunk State University” because some use affiliations as a proxy for quality. Which is utter bollocks).

I think affiliations do matter, though perhaps more so outside academia.


*I’m an adjunct professor at the university where I did a postdoc, but this was a requirement to co-supervise a student, and this affiliation appears only on papers associated with that student’s work.

Scientific meetings and diversity



At the end of the day, science is people and their interactions, whether that’s face-to-face, through a journal submission system, or by email. And having a diverse array of people present a diverse array of views and doing science in a diverse set of ways is a Good Thing. And as the scientific community gradually comes to the realization that the diverse scientists in its midst put up with a heck of a lot of diverse crap in their day-to-day lives, especially those from minority groups, women, and the financially insecure, for example, there’s been what one might call an evolution towards considering people when thinking about science.

One of the major ways scientists interact is at conferences or meetings, united by a common research area or theme, and it’s at these meetings where some not-that-good stuff can happen, which has prompted many organizations (though still a small minority) to establish codes of conduct for attendees. Huzzah progress! There are also things you can do to be an ally at conferences.

It was with this floating around in my noggin that I was very interested to see the tweet below from the 2016 Animal Behavior Society:

HB2, or the Public Facilities Privacy & Security Act, is a large piece of rubbish that discriminates against trans folk by preventing them from using the toilet of their gender. Like I said, utter rubbish. So it was rather heartening to see a scientific society taking a stand on a social and legal issue that affects some of its members.

So it was with some sadness that, 2 days later, the 5th International Marine Conservation Congress announced its location:


Malaysia is rather unfriendly to LGBTQ folk (to put it mildly), and even depictions of them in film must show “good triumphing over evil.” Good grief. So needless to say, I won’t be attending. Which is sad personally, and professionally. And while I do understand the international nature of science, and the need to engage with a diverse range of scientists from across the world, I wonder if the topic of LGBTQ attendees even came up.

Is it the job or the purview of professional scientific societies to consider all these various factors when choosing their meeting locations? Or should their goal be to be as international as possible regardless of the social or legal conditions of some of their members? Societies are of course welcome to have their meetings wherever they wish, but I think they should also think about what message that sends (be it positive or negative) to the full diversity of their membership.