Over at Dynamic Ecology, guest blogger Allison Moody laid out the issue of recommendation vs. implementation in conservation biology, and mused that this could be, in part, due to the increasing complexity of the maths and modelling, and the resulting schism between academic researchers and conservation practitioners.

Well, it’s not just a problem for conservation biology.  Most of my research uses stable isotopes of carbon and nitrogen to study foraging, food webs, and niche partitioning / resource competition.  Here’s a bit of a primer:

The ratios 13C/12C (expressed as δ13C) and 15N/14N (δ15N) are incorporated from prey tissues into consumer tissues (“you are what you eat”), with an adjustment for predictable metabolic processes.  For example, 14N is excreted preferentially in nitrogenous waste, so the δ15N value goes up by about 3-5 ‰ (that’s permil, or parts per thousand).  The corresponding effect on δ13C is generally smaller, and together these changes between prey and predator are called “discrimination factors” (because the metabolic processes discriminate against one of the isotopes).
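That “plus a few permil” adjustment is just arithmetic.  A minimal sketch, using made-up numbers (an illustrative prey δ15N of 8.0 ‰ and a mid-range discrimination factor of 3.4 ‰):

```python
# Illustrative (made-up) values: a prey item with d15N = 8.0 permil
# and a trophic discrimination factor of +3.4 permil, mid-range of
# the 3-5 permil enrichment described above.
prey_d15n = 8.0   # permil
tdf_d15n = 3.4    # discrimination factor, permil

# "You are what you eat, plus a few permil":
consumer_d15n = prey_d15n + tdf_d15n
print(consumer_d15n)  # roughly 11.4 permil
```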

If you’re more mathematically inclined, you’ve probably already realized that we can use isotope values from predators and prey to estimate the proportional contribution of each prey source.  Don Phillips at the EPA did exactly this 10+ years ago.  In the last 5 years, though, the number and sheer complexity of models have increased.  Here are a couple of the other options now available:
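To make that concrete, here is a minimal sketch of the underlying algebra of a two-source, one-isotope linear mixing model (not Phillips’ actual software, and all numbers are hypothetical):

```python
def two_source_mixing(d_pred, d_prey1, d_prey2, tdf):
    """Estimate the dietary proportion of prey 1 from a single
    isotope.  All delta values in permil; tdf is the trophic
    discrimination factor between diet and predator tissue."""
    d_diet = d_pred - tdf  # correct the predator value back to its diet
    # Solve d_diet = p * d_prey1 + (1 - p) * d_prey2 for p
    return (d_diet - d_prey2) / (d_prey1 - d_prey2)

# Hypothetical d15N values: predator 14.0, fish 12.0, krill 6.0,
# with a discrimination factor of 3.4 permil
p_fish = two_source_mixing(14.0, 12.0, 6.0, 3.4)
print(round(p_fish, 2))  # 0.77, i.e. ~77% fish
```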

Models can account for different elemental concentrations (e.g., plants have low N and high C relative to animal protein).  Prey with drastically different elemental concentrations “behave” differently. A classic case is bears eating salmon, berries, and human garbage.  Now, this is fairly straightforward, and you get the elemental concentrations at the same time as δ13C and δ15N.
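A sketch of what concentration weighting does, under assumed numbers: each source’s isotopic contribution is weighted by its elemental concentration, so a low-N food like berries barely moves the mixture’s δ15N even at 50% of the diet by mass.

```python
def conc_weighted_delta(props, deltas, concs):
    """Delta value of a mixture whose sources differ in elemental
    concentration.  props: dietary proportions by mass; deltas:
    source delta values (permil); concs: elemental concentrations
    (e.g. %N by mass)."""
    num = sum(p * c * d for p, d, c in zip(props, deltas, concs))
    den = sum(p * c for p, _, c in zip(props, deltas, concs))
    return num / den

# Made-up bear diet: 50% salmon (d15N = 14.0, 12 %N) and
# 50% berries (d15N = 2.0, 1 %N) by mass
d_weighted = conc_weighted_delta([0.5, 0.5], [14.0, 2.0], [12.0, 1.0])
d_naive = 0.5 * 14.0 + 0.5 * 2.0  # ignores concentration entirely

print(round(d_weighted, 1), d_naive)  # ~13.1 vs 8.0: salmon dominates the signal
```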

Models can include variation in discrimination factors.  This is typically in the form of SD around a mean estimate.  More complex models allow different discrimination factors for different prey.
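One simple way to see what “an SD around a mean estimate” buys you is to propagate that uncertainty by Monte Carlo.  A hedged sketch with made-up numbers:

```python
import random
import statistics

def two_source_p(d_pred, d1, d2, tdf):
    """Dietary proportion of source 1 from one isotope (permil)."""
    return ((d_pred - tdf) - d2) / (d1 - d2)

random.seed(42)
tdf_mean, tdf_sd = 3.4, 0.5  # discrimination factor as mean +/- SD (invented)

# Hypothetical d15N values: predator 14.0, prey sources 12.0 and 6.0
draws = [two_source_p(14.0, 12.0, 6.0, random.gauss(tdf_mean, tdf_sd))
         for _ in range(10_000)]

p_mean = statistics.mean(draws)
p_sd = statistics.stdev(draws)
print(p_mean, p_sd)  # p is roughly 0.77, with an SD of about 0.08
```

The point estimate barely moves, but you now get an honest spread on the diet proportion instead of a single number.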

There is a bewildering array of ways to “partition the error”, or figure out what error is due to variation in the prey, the predators, the discrimination factors, the assimilation efficiency, the mass spectrometer used to do the measurements, etc.  There was even a series of three papers in Ecology Letters in 2008/2009 that discussed, at great length, a residual error term.

Bayesian models can include prior information (e.g., gut contents, diet observations).
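As a hedged sketch of that idea (all numbers invented): combine a prior on the diet proportion, say from gut contents, with the isotope likelihood on a simple grid.

```python
import math

# Grid over p, the dietary proportion of prey 1
grid = [i / 200 for i in range(1, 200)]  # open interval, avoids log(0)

def beta_pdf(p, a=6.0, b=4.0):
    """Prior from (hypothetical) gut contents suggesting ~60% prey 1."""
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(p) + (b - 1) * math.log(1 - p) - log_b)

def likelihood(p, d_obs=10.5, d1=12.0, d2=6.0, tdf=3.4, sd=0.8):
    """Gaussian likelihood of an observed predator d15N given p."""
    mu = p * d1 + (1 - p) * d2 + tdf
    return math.exp(-0.5 * ((d_obs - mu) / sd) ** 2)

weights = [beta_pdf(p) * likelihood(p) for p in grid]
total = sum(weights)
posterior = [w / total for w in weights]
p_post_mean = sum(p * w for p, w in zip(grid, posterior))
print(p_post_mean)
```

With these invented numbers the isotopes alone point to a low proportion of prey 1 (~0.18), the gut-content prior pulls toward 0.6, and the posterior mean lands in between.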

These are just a few examples, but regardless of the model, the following assumptions must be met:

  1. The predators must be adequately characterized isotopically.  In other words, the sample has to be large enough to be reflective of the population (or subpopulation) of interest (for the moment, I’ll leave out modelling individuals, though this is also possible).
  2. The prey must be adequately characterized.  This is one of the more challenging assumptions, since it’s predicated on us knowing the system.  Contrary to popular belief, mixing models can’t identify novel prey.  If there’s a missing endpoint (prey species), then the model can’t estimate its contribution (the old GIGO principle).  And as with predators, the sampling must be sufficient to capture the population variance.
  3. The discrimination factors must be known.  This is an area to which I’ve given much thought.  First, discrimination factors depend on the consumer.  In other words, the discrimination factors for a sockeye salmon are different from those of an Atlantic puffin.  But at the same time, those of Atlantic puffins are different from those of tufted puffins.  Second, discrimination factors depend on the tissue.  This is fairly intuitive, since the metabolic processes that produce muscle are different from those that produce liver, bone collagen, feathers, hair, scales, etc.  Most annoyingly, discrimination factors depend on the diet.  In birds, there’s been only one study in which captive birds were fed two different diets and the discrimination factors were measured for each.

OK, you say, I’ll just use discrimination factors from another closely related species.  It can’t matter that much, right?  Unfortunately, small changes in discrimination factors have massive effects on the mixing models.  They can be the difference between a prediction that a predator eats fish, and one that a predator eats krill.
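Here’s a hedged illustration of that sensitivity, with invented numbers: shifting the discrimination factor by less than 2 ‰, roughly the kind of spread you might see between related species, moves the same predator from “a quarter of the diet is fish” to “no fish at all”.

```python
def p_fish(d_pred, d_fish, d_krill, tdf):
    """Dietary proportion of fish from one isotope (d15N, permil)."""
    return ((d_pred - tdf) - d_krill) / (d_fish - d_krill)

# Same hypothetical predator (d15N = 10.0) and prey (fish 12.0, krill 6.0),
# run with discrimination factors "borrowed" from two related species:
p_low_tdf = p_fish(10.0, 12.0, 6.0, 2.4)   # ~0.27: about a quarter fish
p_high_tdf = p_fish(10.0, 12.0, 6.0, 4.0)  # 0.0: no fish at all
print(p_low_tdf, p_high_tdf)
```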

The long and the short of it is that, for most systems, we don’t know whether these assumptions hold.  We can sample predators and prey intensively, but until we have a method for estimating discrimination factors other than keeping a captive population, we can’t trust the model output.

As we pointed out in our 2011 paper in Ecological Applications, this can have real conservation implications.  Balearic shearwaters, endemic to the Mediterranean, rely on fisheries discards to some degree.  But we showed that the estimate could be anywhere from 2% to 56%.  Obviously, these two extremes imply drastically different conservation and management actions.

This brings me back to the main point I’m trying to make – stable isotope mixing models are progressing rapidly, but are increasingly inaccessible to those who wish to use them (except in a few cases).  And researchers who can’t critically assess the value of including or excluding a residual error term face an ever-larger selection of possible models and options for which their data are unsuitable.

I’m not arguing against refining models or theoretical frameworks.  But when the emphasis is always “look – we can tell what these animals are eating!”, and this is becoming increasingly untenable as we learn more about the models, their assumptions, and the reality of the systems in which we work, that signals an implementation gap.

It’s partly to do with the complexity of the mathematics behind the models, and partly black-box syndrome, where end-users don’t fully understand what a model does or how it does it.

I just think we should be investing more in making these models accessible to researchers (or pointing out how and when they might not be suitable) than in producing further refinements that benefit only a vanishingly small number of special cases.