Why We Disagree About Forest Carbon. II. Objectivity, Peer Review, Predictions and Funding Sources

In the quest to understand Why People Disagree About Forest Carbon (if last summer was the Summer of Fuel Treatments, this summer can be the Summer of Forest Carbon), I've been thinking that this would be a great topic to explore from two angles. The first is what scientists have to say about it, and why they disagree. The second, which I've found by exploring around the edges, is that it can also serve as a case study in the sociology of science. Looking at what's out there can help people understand how scientific research is produced and used in policy or management, including in the unimaginably complicated and preternaturally vitriolic field of climate science/policy.

Fergus McLean, in a previous comment here, raised four points that can add to this discussion. First, he mentioned that Law’s paper is objective and peer-reviewed.

1. Peer review.
Without reviewing the literature on peer review in detail, I'll just note that when society thinks something is important, we don't usually rely on volunteers to do it. Let's use an analogy: we think wildland firefighting is important, so we pay people to do it, and we even pay them overtime and hazard pay. Peer review, by contrast, is unpaid. Doing it well is time-consuming and carries its own hazards: falling behind in the work you are actually paid to do, and, when reviews are not completely blind, the risk of irritating colleagues, who can retaliate in a variety of unpleasant ways. People who are close enough to the topic are probably not objective (is Professor A objective when Professor C models the same phenomenon as she does, using different techniques and coming to different conclusions?). People who are far enough away to be objective usually don't know the topic as well. We are asking for a quality product with incentives that run against basic human nature. These problems are well known in the literature, and a variety of tweaks to the process have been proposed (e.g., this Lancet paper has quite a round-up of approaches).

2. Objectivity
But don't just take my word on objectivity: here's an article in the Stanford Encyclopedia of Philosophy. Through the dense fog of academic philosophy lingo, we can see the vague outlines of observations from our own experience.

3. Handling uncertainty
When Fergus said, “Choosing to regard existing export arrangements as permanent is a political, rather than a scientific judgement, and deserving of in-depth and critical analysis,” I agreed, but I would take a somewhat different approach. I would imagine that we have crossed a disciplinary threshold into the world of economics.
Economists have a long history of making projections about imports, exports, demand, supply, prices, and so on. Even now, I suspect there are cadres somewhere in the US and Canada working on today's installment of the Softwood Lumber Agreement. One of the tools economists use is “sensitivity analysis.” When you make many assumptions about things that are unpredictable, this tool helps you figure out which ones really matter to the outcome you are studying. Let me restate: rather than arguing about which predictions will be more accurate by whatever criteria (after all, how good have we really been at predicting the future?), they include a range of values for each unknown, which helps them understand which assumptions are most important to the outcome. I think this is worth considering, because we rely so heavily on models these days, and some disciplines use sensitivity analysis much more than others. A minimal sketch of the idea follows below.
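To make the pattern concrete, here is a minimal sketch in Python of one-at-a-time sensitivity analysis on a toy carbon projection. Everything in it is made up for illustration: the little forest-plus-wood-products model, the parameter names (growth_rate, harvest_rate, export_share, product_halflife), and the low/high ranges are my hypothetical stand-ins, not anyone's published assumptions. The point is only the mechanics: hold everything at a baseline, swing each unknown across its range one at a time, and see which one moves the answer most.

```python
# One-at-a-time sensitivity analysis on a toy forest-carbon projection.
# Every parameter name, value, and range below is a made-up illustration.

def projected_carbon(growth_rate, harvest_rate, export_share, product_halflife,
                     initial_forest=100.0, years=50):
    """Toy projection of forest plus wood-product carbon (illustrative only)."""
    forest, products = initial_forest, 0.0
    # Fraction of product carbon retained each year, from an assumed half-life.
    retained_per_year = 0.5 ** (1.0 / product_halflife)
    for _ in range(years):
        harvest = forest * harvest_rate
        forest = forest * (1.0 + growth_rate) - harvest
        # Assume (purely for illustration) exported wood goes to shorter-lived uses.
        into_products = harvest * ((1.0 - export_share) + export_share * 0.5)
        products = products * retained_per_year + into_products
    return forest + products

# Baseline assumptions and a plausible-looking low/high range for each unknown.
baseline = {"growth_rate": 0.03, "harvest_rate": 0.02,
            "export_share": 0.3, "product_halflife": 30.0}
ranges = {"growth_rate": (0.02, 0.04), "harvest_rate": (0.01, 0.04),
          "export_share": (0.0, 0.6), "product_halflife": (10.0, 50.0)}

print(f"baseline projected carbon: {projected_carbon(**baseline):.1f}")
for name, (low, high) in ranges.items():
    # Vary one assumption at a time, holding the others at baseline.
    lo_val = projected_carbon(**dict(baseline, **{name: low}))
    hi_val = projected_carbon(**dict(baseline, **{name: high}))
    print(f"{name:17s} swing across its range: {abs(hi_val - lo_val):8.1f}")
```

Run as-is, it prints the baseline projection and the swing attributable to each assumption; in a real analysis the model would of course be far richer, and economists often go further with probabilistic (Monte Carlo) versions of the same idea.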

4. How much research on carbon does forest industry fund? Does funding source matter?
“Another relevant point that can be drawn from the Atkins article is the much greater scale of resources backing research oriented to the industry point of view about forest carbon (including TreeSource itself) in contrast to Law who, since Harmon’s recent retirement, essentially works alone.”
The idea that industry backs more carbon research than NSF, the Forest Service, EPA, and so on is interesting, and possible to investigate. I'm not much of an industry person, but I believe there was corporate restructuring at some point that led to industry pretty much getting out of the research business; many of my industry colleagues lost their jobs. Has that changed?

4 thoughts on “Why We Disagree About Forest Carbon. II. Objectivity, Peer Review, Predictions and Funding Sources”

  1. Sharon, We really were non-identical twins separated at birth! Forest science would seem to be a prime territory for the sociology of protracted scientific disputes. (Incidentally, once, quite a long time ago, I took a shot at exactly this kind of sociology in another controversial domain science had struggled to colonize — see http://www.roizen.com/ron/cont-dri.htm .) Although it is not quite on-point, let me mention a forestry-related frustration from my own past. When I was dutifully working on the “Not Without a Fight” blog, I ran across an article claiming that thinning projects and forest management in general did not lessen the risk and extent of catastrophic wildfire. A little digging on my part revealed that the would-be canonical science behind this contention was a regression analysis. A little more digging suggested that the anti-thinning author of the article that first caught my eye simply did not have the requisite understanding of what this regression analysis did and did not mean. I toyed with the idea of sitting down and writing a careful explanation of what the source’s regression analysis could and could not mean, but then I gave up in the face of how difficult it would be to teach a whole course on regression in a single article. And, in any case, my jaded experience in the field by that point had suggested to me that no one on the anti-thinning side would pay any attention to what I was trying to get across anyhow. Such are the frustrating and seemingly impregnable barriers of discourse where science has taken over the talkfield.

    • At the risk of sounding like an Old Fuddy Duddy, in places I worked, like the Forest Service, it was required that a statistician review your experimental design and analysis at the proposal stage. That, IMHO, is the time to catch these things, before someone has a bunch of time invested in the analysis.

      There were also peers around with a vaguely competitive edge who would thoughtfully and carefully tell you if your conclusions went beyond what the data and analysis could support (you would give a seminar or present it at a meeting prior to publication). I don’t know what pressures and disciplinary trends have led to the current situation. Hypotheses from others would be appreciated.

      Yes, we could be twins, except you inherited all the quality writing genes!

