Mt. Hood (lack of) science loses in 9th Circuit

The way courts approach scientific controversy is a common thread on this blog.  Last week the Ninth Circuit Court of Appeals handed us a perfect example (link to the opinion included).  And it happens to involve the science of “variable density thinning” to reduce wildfire threats, another popular topic here.

The Project is the Crystal Clear Restoration Project on the Mt. Hood National Forest.  The stated primary purpose of the Project is to reduce the risk of wildfires and promote safe fire-suppression activities.  It would use “variable density thinning,” in which selected trees of all sizes are removed, to address wildfire concerns.  According to the plaintiffs, the Project would encompass nearly 12,000 acres and include almost 3,000 acres of logging of mature and old-growth forests, along with plans to build or re-open 36 miles of roads.  The court held that an EIS was required because of scientific controversy about the effects of variable density thinning on what plaintiffs characterized as “mature, moist forest.”  The court also found that the Forest failed to show that cumulative effects would not be significant.

In both cases, the court found that the Forest “did not engage” with the information provided by the plaintiffs.  “The plaintiffs, especially Bark, got people out into the landscape and spent thousands of hours collecting information about what was going on in the land and gave that information to the Forest Service,” said attorney Brenna Bell, who spent four years on the case.  Failing to engage is a common reason the Forest Service loses in court, especially when under pressure to meet “timber volume targets imposed by President Donald Trump’s administration.”

The EA asserted that the Project would make the treated areas “more resilient to perturbations such as . . . largescale high-intensity fire occurrence because of the reductions in total stand density.”  Plaintiffs had provided “substantial expert opinion” disputing that outcome.  As plaintiffs point out in their victory notice, here is how the court viewed it:

“Oregon Wild pointed out in its EA comments that “[f]uel treatments have a modest effect on fire behavior, and could even make fire worse instead of better.” It averred that removing mature trees is especially likely to have a net negative effect on fire suppression. Importantly, the organization pointed to expert studies and research reviews that support this assertion.

Oregon Wild also pointed out in its EA comments that fuel reduction does not necessarily suppress fire. Indeed, it asserted that “[s]ome fuel can actually help reduce fire, such as deciduous hardwoods that act as heat sinks (under some conditions), and dense canopy fuels that keep the forest cool and moist and help suppress the growth of surface and ladder fuels . . . .” Oregon Wild cited more than ten expert sources supporting this view.”

Even the fuels report by the Forest Service acknowledged the possibility of increased fire severity. The court held (emphasis added):

“In its responses to these comments and in its finding of no significant impact, the USFS reiterated its conclusions about vegetation management but did not engage with the substantial body of research cited by Appellants. Failing to meaningfully consider contrary sources in the EA weighs against a finding that the agency met NEPA’s “hard look” requirement as to the decision not to prepare an EIS. This dispute is of substantial consequence because variable density thinning is planned in the entire Project area, and fire management is a crucial issue that has wide-ranging ecological impacts and affects human life.”

The opinion is short and worth reading as a good example of how not to approach NEPA effects analysis (i.e., “let’s make this fit into an EA instead of an EIS”).  The court cited Ninth Circuit precedent for this requirement: “To demonstrate a substantial dispute, appellants must show that ‘evidence from numerous experts’ undermines the agency’s conclusions.”  The court is not choosing the science; it is only faulting the Forest Service for ignoring conflicting views that it found rose to the level of scientific controversy.  Under NEPA, evidence of scientific controversy requires an EIS to fully explore how the use of that science may be important to determining environmental impacts.

Science Friday: When is Research Useful and Who Decides? Uncertainty and Current Decisions

A pretty view of picnic grounds on Homestake Road (1890) Library of Congress https://www.loc.gov/pictures/item/2004665560/

Jon made a comment yesterday that I think is worth exploring in detail. He said:

If this is just a compilation of existing data, that’s one thing. If this will be their basis for future planning, and they are saying they are going to ignore future climate change, it might be hard to argue that’s the best available science.

I’m sure that no one wants to “ignore climate science.” On the other hand, how exactly should specific pieces of climate science be applied to a decision? Who makes that call? In the past, when the topic was less adversarial (say reforestation practices), the National Forests hired experts who would make that determination and decide what Forests should do. Now, I’m sure it won’t surprise you that even with these less-charged kinds of decisions, there was sometimes disagreement between practitioners and researchers (as well as within each community). But most of the time these did not boil over into public spats as it was taken for granted that the authority to decide lay with the local technical expert. Researchers were content to publish, and practitioners were content to pick the best approaches based on experience. Many times FS researchers and National Forests worked closely together in what we might call today “co-design and coproduction”. Today, though, we have broader questions, with more disciplines involved, so that there may not be one “expert,” and the public wants to understand the scientific questions where they have policy relevance. Both of those changes present challenges.

As I’ve argued before, we don’t have a clue how the microclimates perceived by trees will change due to climate change, and we also don’t know how those changes might affect living trees, nor how they might affect their offspring. Remember, climate models as used for projecting future conditions have economics as an input. I think, reading the views of experts right now, they have no clue how much the Coronavirus may set back worldwide economies and emissions. If you run out this string of cumulative cluelessness about the future, it becomes a decision question for stakeholders and decisionmakers: we don’t know, so how should that uncertainty affect our decisions? We also don’t have a clue whether trade policies will let invasive diseases or insects into the US that could decimate the ponderosa pines on the BH. The future is, indeed, unknown and uncertain. In fact, there are decision sciences that research how best to decide under conditions of uncertainty.

So, what to do about our cluelessness about future tree growth? I belong to what I might call the Pete Theisen school. He was the R-6 Regional Geneticist who said that additional growth due to tree improvement would come out in future measurements, so we shouldn’t try to model it in growth and yield models. Applying that to climate change, we would measure tree growth every ten years or so (or whatever the cycle is today) and incorporate that into future decisions: what we might call “monitoring the forest plan,” as we would with instances of insects, diseases, and fires.

But as Jon points out, we also have the question of what is the “best available science.” We’d have to ask: who decides what is best? Based on what criteria? Peter Williams has spoken of the concept of “research utility.” I like that approach because it involves practitioners and stakeholders in determining whether a study is useful or relevant to the decisions that, at least on public lands, are essentially public decisions.

For me, the “best science” of tree growth in the Black Hills is what people have recently measured, knowing that it could change due to climate change or a variety of other factors, unknown, uncertain or unknowable.

Science Friday on Saturday: Pandemic Science, Dan Sarewitz, Federalism and the “Politicization of Models”

From the Science article linked at the end of this post.

In this post, we’ll juxtapose two articles about the science of coronavirus: one from Dan Sarewitz, a scientist who studies the interface between science, technology, and policy, and one from a journalist at the WaPo. We’ll also link to an essay by a law professor about Federalism and pandemic responses, and finally go to an article in the journal Science about pandemic modeling. Apologies for the length of this post, but I’ve tried to point you to some interesting takes on the same issue and also relate it to science and our standard TWS policy disputes.

Here are excerpts from Dan’s essay (worth reading in its entirety):

The facts, that is, are being made authoritative not through scientists telling us what to believe about an invisible virus, but by occurrences in the real world, visible for all to see. If a researcher claims that a certain chemical in the environment, such as the glyphosate in Roundup, will cause a certain number of increased cancer deaths per year or that a particular economic policy will lead to a certain number of new jobs, in most cases no one will ever be able to confirm that prediction. Even if the mechanism by which the chemical causes some variety of cancer is clear in lab rats, it is likely to have many plausible causes in humans. Even if the new jobs do appear, the cause might be trade decisions made by other countries, or the expansion of new industries. In the years that might be necessary to test such claims (though usually they cannot be tested), other researchers may come up with entirely new explanations. No wonder scientific and political debates about such matters never seem to end. But for COVID-19, the basic scientific inferences quickly play out—through changing incidence of the disease and its consequences—in ways that allow both scientists and the public to assess the current level of scientific understanding and the facts on the ground.

…

For many problems at the intersection of science and policy, scientists use mathematical models to make inferences about the future, for time periods ranging from decades to centuries or more: How can new energy technologies best be deployed to reduce greenhouse gas emissions? How will nuclear waste behave in a geological repository over coming millennia? How much will economic productivity increase if more investments are made in research? But such questions always involve enormous uncertainties, and the models used to try to answer them are laden with assumptions about more basic questions that are themselves unanswerable: How will the price of solar panels change in the coming decades? How many centuries will it take for groundwater to corrode the nuclear waste storage vessels? How efficiently do universities create economically valuable knowledge? Different assumptions about these sorts of questions allow models to fuzz the boundary between science and politics by providing competing views of the future, in support of competing political agendas.

While epidemiological models used for predicting the future of COVID-19 are also assumption-laden and highly uncertain, they can be constantly tested and refined based on data that is emerging on a daily basis, to accomplish what everyone agrees must be done. For the most part models are being used to help put boundaries around the range of plausible futures that we face, and we can see different versions of these futures unfold as different countries implement different policies at different speeds. The models are valuable because they allow us to test our assumptions about both the behavior of the virus and the impacts of different policy approaches, in real time. They are not crystal balls deployed to make the case for one preferred future or another, but navigation charts that help us narrow the plausible pathways to the future that we all hope for.

…

But when it comes to fighting COVID itself, rather than fixing the economy, the combination of shared values and clear chains of causation makes it tough to import second-order political agendas into debates about what actions to take—despite the ongoing and acknowledged uncertainties. Politicians as ideologically distinct as New York’s Mayor Bill de Blasio, a liberal Democrat, and Ohio’s Governor Mike DeWine, a conservative Republican, are implementing essentially equivalent strategies for addressing the pandemic. While President Donald Trump is at the moment threatening to loosen up social distancing rules, his spasmodic approach to pandemic policies isn’t turning out to be significantly different from that of many other national political leaders. For this crisis, the things that unite us are outranking those that divide us; pandering and opportunism, while never absent from politics, are being brought to heel by the pincer combination of shared values and facts on the ground.

Now, let’s take a brief aside to Federalism and the coronavirus response from Dan Farber posted on Legal Planet (also worth reading in its entirety, the Constitution is only one part of the discussion):

These constitutional rules reinforce the statutory and practical reasons why states have been doing so much of the heavy lifting during this viral outbreak. The federal government could do a lot more than it has so far, but its powers are not unbounded. Don’t get me wrong, the role of the federal government in addressing the pandemic is vitally important. The Feds have resources and funding the states can’t match. But the way our system of government is designed, states and cities are inevitably going to be on the front lines.

Finally, let’s check back with the WaPo. It’s worth reading the whole thing while asking: what does the author mean by “politicization”? What evidence does the author use to support that claim?

This is why epidemiology exists. Its practitioners use math and scientific principles to understand disease, project its consequences, and figure out ways to survive and overcome it. Their models are not meant to be crystal balls predicting exact numbers or dates. They forecast how diseases will spread under different conditions. And their models allow policymakers to foresee challenges, understand trend lines and make the best decisions for the public good.

But one factor many modelers failed to predict was how politicized their work would become in the era of President Trump, and how that in turn could affect their models.

I don’t find the WaPo’s evidence that “what to do” has “become politicized” any more convincing than Sarewitz’s. Some people disagree about models (including modelers, I’m sure), and President Trump issues statements that don’t make sense (same old, same old). I guess having one’s models “politicized” is bad, but models being used in policy is necessary. Which goes back to our old forest discussion: what is the role of elected political leaders (legitimate?), or is “politics” really bad when making decisions? What is the bad part: the values of elected officials, or only those you happen to disagree with? Or is the bad part of politics only when the decision is based solely on tribal loyalties (party politics), or light or dark money, or ???

If you want to dive into the detail of some of the models without going too far into the weeds, this Science article “Mathematics of life and death: How disease models shape national shutdowns and other pandemic policies” seems to cover it.

Here’s a quote about models from that piece:

Policymakers have relied too heavily on COVID-19 models, says Devi Sridhar, a global health expert at the University of Edinburgh. “I’m not really sure whether the theoretical models will play out in real life.” And it’s dangerous for politicians to trust models that claim to show how a little-studied virus can be kept in check, says Harvard University epidemiologist William Hanage. “It’s like, you’ve decided you’ve got to ride a tiger,” he says, “except you don’t know where the tiger is, how big it is, or how many tigers there actually are.”

Models are at their most useful when they identify something that is not obvious, Kucharski says. One valuable function, he says, was to flag that temperature screening at airports will miss most coronavirus-infected people.

There’s also a lot that models don’t capture. They cannot anticipate, say, the development of a faster, easier test to identify and isolate infected people or an effective antiviral that reduces the need for hospital beds. “That’s the nature of modeling: We put in what we know,” says Ira Longini, a modeler at the University of Florida. Nor do most models factor in the anguish of social distancing, or whether the public obeys orders to stay home. Recent data from Hong Kong and Singapore suggest extreme social distancing is hard to keep up, says Gabriel Leung, a modeler at the University of Hong Kong. Both cities are seeing an uptick in cases that he thinks stem at least in part from “response fatigue.” “We were the poster children because we started early. And we went quite heavy,” Leung says. Now, “It’s 2 months already, and people are really getting very tired.” He thinks both cities may be on the brink of a “major sustained local outbreak”.

Modelling For Decisions III. Energy Modelers and Black and Dying Swans

Photo by Tom Waugh

Yesterday’s piece was by economists; today’s is about energy systems modeling, from some energy policy and analysis experts. Of course, the energy sector is key to reducing carbon emissions, so perhaps their experience and perspective is valuable.

My favorite quote is the last line.

“But perhaps a start is for decision-makers to adapt to an increasingly uncertain and dynamic world by creating a more imaginative discourse, one that welcomes nuance and doubt as spaces for opportunity and transformative change, and sees forecasts as the beginning of a policy or investment discussion rather than the end, and forecasters not as Delphic oracles of outcome, but as the people who know best why attempts at prediction must fall short.”

I would argue that we stakeholders and members of the public, and of course practitioners and scientists outside the modelling community, should participate in that imaginative discourse. Nuance and doubt are our friends, IMHO, and yield more robust paths forward.

“We explore two challenges to forecasting complex systems which can lead to large forecast errors. Though not an exhaustive list, these two challenges lead to a significant fraction of large forecast errors and are of central importance to energy system modeling. The first challenge is that in complex systems, there are more variables than can be considered. Often described as epistemic uncertainty, these un-modeled variables—the unknown unknowns—can lead to reality diverging dramatically from forecasts. The second challenge in forecasting complex systems is from the inherently nonlinear nature of many such systems. This results in a compounding of stochastic uncertainties—the known unknowns—which in turn can result in real-world outcomes that deviate significantly from forecasts.”

Remember from yesterday: Idea 1 was “weather-like vs. climate-like” forecasting, with weather-like tasks offering more chances to check in with the real world. Idea 2 was the concept of Big Surprises, which seem unpredictable. Today’s authors’ experience is that “the future is often directed by unlikely events.” They also relate epistemic uncertainty to black swans and stochastic uncertainties to dying swans. Perhaps the longer the projection, the greater the probability that unlikely events will overwhelm likely events? For that reason, some have suggested focusing projections on the short to medium term.
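To make the “compounding of stochastic uncertainties” idea concrete, here is a toy sketch of my own (not from the paper; all numbers are invented for illustration): small Gaussian shocks, the “known unknowns,” fed through a multiplicative growth process. The spread of plausible outcomes widens as the forecast horizon lengthens, which is one way of seeing why long projections are so easily overwhelmed.

```python
import random

def simulate(horizon, runs=2000, seed=42):
    """Run many trajectories of a noisy multiplicative growth process
    and return the mean and standard deviation of the final values."""
    rng = random.Random(seed)
    finals = []
    for _ in range(runs):
        x = 1.0
        for _ in range(horizon):
            shock = rng.gauss(0.0, 0.05)   # small stochastic uncertainty each step
            x = x * (1.03 + shock)         # multiplicative growth compounds the shocks
        finals.append(x)
    mean = sum(finals) / runs
    spread = (sum((f - mean) ** 2 for f in finals) / runs) ** 0.5
    return mean, spread

for years in (5, 20, 50):
    mean, spread = simulate(years)
    print(f"{years:>2} steps: mean {mean:.2f}, spread {spread:.2f}")
```

With the seed fixed, the spread at 50 steps comes out many times larger than at 5 steps; the point is the trend, not the particular numbers.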

In many cases, modeling apologetics are insightful and accurate, and meaningfully contribute to improved future forecasts. But there are reasons to question the universality of this narrative. Explaining away modeling errors as due to one-off unlikely events misses the prevalence of errors caused by such events, and may lure us (especially those of us who are non-modelers but rely upon model outputs) toward a heuristic of naturalistic equilibrium: a belief that “now things are normal,” or that they will soon be. The history of the energy system teaches us that the future is often directed by unlikely events, and that there is value in questioning whether naturalistic analogies of equilibrium are appropriate in many cases. Energy systems may experience multiple years, or even decades, of disequilibrium due to complex and shifting market rules, uncertainties of technological or economic feasibility at nonlinear scales of deployment, and extraordinary diversity in market structure, composition, and actors. The enormity of such extraneous uncertainties places any forecaster in very deep water.

The questions raised here, and the types of forecast errors described, should be expanded to other sectors. The rapid pace of advancement and interconnectedness of the world means that epistemic uncertainty is larger than ever [50]. For many newer technologies, the degree of uncertainty regarding future generation has increased in recent years. Cost declines in technologies such as wind, solar, and energy storage place them on competitive terms with conventional generation technologies. These technologies have shown even more stark learning-by-doing effects than shale gas production. Markets for electric cars and demand response, to name a few, similarly pose the possibility for dramatic shifts. Even moderate changes in cost and policies can lead to large changes in the future adoption of these technologies.

Decision-makers and investors would benefit from learning more about why they were caught unawares by the shale revolution, and how they can be better prepared the next time such a surprise occurs. The answer to that second question is not immediately apparent. Tautologically, if we were prepared for them, surprises would cease to be surprises. But perhaps a start is for decision-makers to adapt to an increasingly uncertain and dynamic world by creating a more imaginative discourse, one that welcomes nuance and doubt as spaces for opportunity and transformative change, and sees forecasts as the beginning of a policy or investment discussion rather than the end, and forecasters not as Delphic oracles of outcome, but as the people who know best why attempts at prediction must fall short.

Modelling For Decisions II: Escape From Model-Land: A Guide To Temptations and Pitfalls- Thompson and Smith

Figure 1: A map of Model-land. The black hole in the middle is a way out.

Here’s a link to a paper called “Escape From Model-Land”. It’s written by an economist/modeler at the London School of Economics and a mathematician/statistician at Pembroke College, Oxford and LSE.

From the abstract:

The authors present a short guide to some of the temptations and pitfalls of model-land, some directions towards the exit, and two ways to escape. Their aim is to improve decision support by providing relevant, adequate information regarding the real-world target of interest, or making it clear why today’s models are not up to that task for the particular target of interest.

I like how the authors (Idea 1) distinguish between “weather-like” tasks and “climate-like” tasks. In our world, a “weather-like task” might be a growth and yield or fire behavior model, for which more real-world data is always available to improve the model (that is, if there is a feedback process and someone whose job it is to care for the model). Model upkeep, however, is not as scientifically cool as model development, so there’s that.
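As a hypothetical sketch of why that feedback loop matters (the function name and all the numbers here are made up for illustration, not taken from any real growth and yield model): a “weather-like” task gets to compare each forecast against an observed outcome and nudge the model back toward reality.

```python
def update_estimate(estimate, observation, learning_rate=0.3):
    """One verify-and-correct cycle: move the model's parameter
    estimate part of the way toward what was actually observed."""
    return estimate + learning_rate * (observation - estimate)

# Suppose a model's annual growth-rate parameter starts off wrong (3%)
# while stands actually grow at 2%; repeated measurement cycles shrink
# the error. A "climate-like" task never gets these correction cycles.
rate = 0.03
observed_rate = 0.02
for cycle in range(10):
    rate = update_estimate(rate, observed_rate)
print(round(rate, 4))
```

After ten cycles the estimate has converged close to the observed rate; without the feedback loop, the initial error would simply persist.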

Our image of model land is intended to illustrate Whitehead’s (1925) “Fallacy of Misplaced Concreteness”. Whitehead (1925) reminds us that “it is of the utmost importance to be vigilant in critically revising your modes of abstraction”, since obviously the “disadvantage of exclusive attention to a group of abstractions, however well-founded, is that, by the nature of the case, you have abstracted from the remainder of things”. Model-land encompasses the group of abstractions that our model is made of; the real world includes the remainder of things.
Big Surprises, for example, arise when something our simulation models cannot mimic turns out to have important implications for us. Big Surprises invalidate (not update) model-based probability forecasts: the conditions I in any conditional probability P(x|I) changes. In “weather-like” tasks, where there are many opportunities to test the outcome of our model against a real observed outcome, we can see when/how our models become silly (though this does not eliminate every possibility of a Big Surprise). In “climate-like” tasks, where the forecasts are made truly out-of-sample, there is no such opportunity and we rely on judgements about the quality of the model given the degree to which it performs well under different conditions.

In economics, forecasting the closing value of an exchange rate or of Brent Crude is a weather-like task: the same mathematical forecasting system can be used for hundreds or thousands of forecasts, and thus a large forecast-outcome archive can be obtained. Weather forecasts fall into this category; a “weather model” forecast system produces forecasts every 6 hours for, say, 5 years. In climate-like tasks there may be only one forecast: will the explosion of a nuclear bomb ignite and burn off the Earth’s atmosphere (this calculation was actually made)? How will the euro respond if Greece leaves the Eurozone? The pound? Or the system may change so much before we again address the same question that the relevant models are very different, as in year-ahead GDP forecasting, or forecasting inflation, or the hottest (or wettest) day of the year 2099 in the Old Quad of Pembroke College, Oxford.

…
(Idea 2) The unpredictable, or Big Surprise. People working with “weather-like” tasks may develop a high level of humility regarding the many unknowns, small and Big, that can happen in the real world. It’s possible that people working with “climate-like” tasks have fewer opportunities to develop that appreciation, or perhaps they must trudge forward knowing about known unknowns and unknown unknowns. The authors’ discussion of the difference between how economists use model outputs compared to climate modelers (intervening expert judgement) is also interesting.

And the questions we asked in the last post:

“It is helpful to recognise a few critical distinctions regarding pathways out of model-land and back to reality. Is the model used simply the “best available” at the present time, or is it arguably adequate for the specific purpose of interest? How would adequacy for purpose be assessed, and what would it look like? Are you working with a weather-like task, where adequacy for purpose can more or less be quantified, or a climate-like task, where relevant forecasts cannot be evaluated fully? Mayo (1996) argues that severe testing is required to build confidence in a model (or theory); we agree this is an excellent method for developing confidence in weather-like tasks, but is it possible to construct severe tests for extrapolation (climate-like) tasks? Is the system reflexive; does it respond to the forecasts themselves? How do we evaluate models: against real-world variables, or against a contrived index, or against other models? Or are they primarily evaluated by means of their epistemic or physical foundations? Or, one step further, are they primarily explanatory models for insight and understanding rather than quantitative forecast machines? Does the model in fact assist with human understanding of the system, or is it so complex that it becomes a prosthesis of understanding in itself?”

“A prosthesis of understanding” (for prosthesis, I think “substitute”) reminds me of linear programming models (most notably, in our case, Forplan).

Modelling for Decisions I: What is Good Enough? For What? And Who Decides?

From a review of decision support models by Lo et al. Quality was determined by mention in peer-reviewed pubs

One of the challenges facing policy folks and politicians today is “how seriously should we take the outputs of models?” Countries spend enormous amounts of research funds attempting to predict changes from climate change. How far down the predictive ladder from GCMs to “the distribution of this plant species in 2050” or “how much carbon will be accumulated by these trees in 2100” can we go before we say “probably not worth calculating; let’s spend the bucks on CCS research or other methods of actually fixing the climate problem”?

There have been several recent papers and op-eds looking at modeling with regard to economics and climate. And because the models used by the IPCC include economic prediction to predict future climate, there is an obvious linkage. Before we get to those, though, I’d like to share some personal history of modeling efforts back in the relatively dark ages (the 1970s, that is, 50 years ago). Modeling came about because of the availability of computer programs. I actually took a graduate class in FORTRAN programming at Yale for credit towards my degree. My first experience with models was with Dr. Dave Smith, our silviculture professor at Yale, talking about JABOWA (this simulation model is still available here on Dan Botkin’s website). Back in the 70’s, the model was pretty primitive. Dr. Smith was skeptical. He told us students something like: “That model would grow trees to 200 feet, but when they know they only grow to 120 feet, they just put a card in that says ‘if height is greater than 120, height equals 120.’” The idea at the time was that models would be helpful for scientists to explore what parameters are important, and then to test their hypotheses derived from the models with real world data. I also remember Dr. Franklin commenting on Dr. Cannell’s physiological models (Cannell is Australian): “it’s modeling to you, but it sure sounds like muddling to me.” Back in those days, the emphasis was on collecting data, with modeling as an approach to help understand real world data.
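Dr. Smith’s “card in the deck” might look something like this in a hypothetical modern sketch (Python rather than the original FORTRAN, with an invented growth equation and parameters): an ad hoc cap bolted onto a model that would otherwise overshoot.

```python
def projected_height(age, growth_per_year=2.5, max_height=120.0):
    """Toy growth equation that overshoots for old trees,
    patched with the hard-coded 'if > 120, = 120' card."""
    height = age * growth_per_year   # naive linear growth keeps climbing forever
    return min(height, max_height)   # the ad hoc cap Dr. Smith described
```

For example, an 80-year-old tree would "grow" to 200 feet under the naive equation but gets clamped to 120, which papers over the model's structural flaw rather than fixing it.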

It is interesting, and perhaps understandable, that as models improved, the concept of “what models are good for” would change to “if we could predict the future using models, we would make better decisions.” Hence, the use of models in decision-making.

I’m not saying that they’re all unhelpful (say, fire models), but there is a danger in taking them too far in decision-making: not considering the disciplinary and cross-disciplinary controversies with each one, not linking them to real-world tests of their accuracy, and especially not having open discussions of accuracy and utility with stakeholders, decision-makers and experts. Utility, as we shall see in the papers in the next posts, is in the eye of the beholder. Even the least scientifically “cool” model (say, compared to GCMs), an herbicide drift model, or a growth and yield model, usually has a feedback mechanism by which you contact a human being somewhere whose job it is to make tweaks to the model to improve it. The more complex the model, though, the less likely there is a guru (such as a Forplan guru, remember them?) who understands all the moving parts and can make appropriate changes. The more complex, therefore, the more acceptance becomes “belief in its community of builders and their linkages” rather than “outputs we can test.” And for predictive models, empirical tests are conceptually difficult: predicting 2010 well doesn’t mean you will predict 2020 or 2100 well.

And at some level of complexity, it’s all about trust and not so much about data at all. But who decides where the model fits between “better than nothing, I guess” and “oracle”? And on what basis?

Please feel free to share your own experiences, successes and not so much, with models and their use in decision-making.

Trump Administration sage-grouse plans stopped

The district court for Idaho has enjoined the Trump Administration’s attempt to cut back protection of sage-grouse on BLM lands in Idaho, Wyoming, Colorado, Utah, Nevada/Northeastern California, and Oregon from that provided by plan amendments in 2015. (A similar decision has been pending for national forest plans.) The changes made in the 2019 amendments to BLM land management plans cannot be implemented, and the provisions in the 2015 amendments will apply (projects must be consistent with the 2015 amendments) until the case is decided on the merits.  (A link to the opinion is included with this news release.)

Moreover, the court telegraphed the merits pretty clearly:

“… the plaintiffs will likely succeed in showing that (1) the 2019 Plan Amendments contained substantial reductions in protections for the sage grouse (compared to the 2015 Plans) without justification; (2) The EISs failed to comply with NEPA’s requirement that reasonable alternatives be considered; (3) The EISs failed to contain a sufficient cumulative impacts analysis as required by NEPA; (4) The EISs failed to take the required “hard look” at the environmental consequences of the 2019 Plan Amendments; and (5) Supplemental Draft EISs should have been issued as required by NEPA when the BLM decided to eliminate mandatory compensatory mitigation.”

(1) “The stated purpose of the 2019 Plan Amendments was to enhance cooperation between the BLM and the States by modifying the BLM’s protections for sage grouse to better align with plans developed by the States. While this is a purpose well-within the agency’s discretion, the effect on the ground was to substantially reduce protections for sage grouse without any explanation that the reductions were justified by, say, changes in habitat, improvement in population numbers, or revisions to the best science contained in the NTT and CTO Reports.” The agencies did not fulfill their duty to explain why they are now making a different decision based on the same facts.

(2) The no-action alternative did not meet the purpose and need, and there was only one action alternative. “Common sense and this record demonstrate that mid-range alternatives were available that would contain more protections for sage grouse than this single proposal.”

(3) The BLM prepared six EISs based on state boundaries, but failed to provide the “robust” cumulative effects analysis this situation required. In particular, “connectivity of habitat – requires a large-scale analysis that transcends the boundaries of any single State.”

(4) “Certainly, the BLM is entitled to align its actions with the State plans, but when the BLM substantially reduces protections for sage grouse contrary to the best science and the concerns of other agencies, there must be some analysis and justification – a hard look – in the NEPA documents.” The court took particular note of the EPA comments that were ignored, and Fish and Wildlife Service endorsement of the 2015 amendments in deciding not to list the species under ESA because they adopted scientific recommendations (see below).

(5) Compensatory mitigation measures were eliminated after the draft EIS, which “appears to constitute both ‘substantial changes’ to its proposed action and ‘significant new circumstances’” requiring a supplemental EIS.

The case provides a good example of how science is considered by a court, which allowed declarations from outside experts to determine whether relevant environmental consequences had been ignored. The court relied heavily on earlier scientific reports that included normative “recommendations,” but it focused on their scientific conclusions, such as “surface-disturbing energy or mineral development within priority sage-grouse habitats is not consistent with the goal to maintain or increase populations or distribution,” and “protecting even 75 to >80% of nesting hens would require a 4-mile radius buffer.” The Final EISs stated that there would be no measurable effects or that effects would be beneficial to sage-grouse, but the BLM either had no analysis or ignored this contrary information.

 

The Case for Intellectual Hospitality- Dr. Roger Pielke, Jr.

In this piece, Roger Pielke, Jr. argues for the importance of “facilitating democratic discourse” and suggests that there is room for improvement in academia. My own experiences have been that we had more open discussions within the Forest Service (due to having different experts needing to agree on documents) than my recent experiences with academia, both as a student and as an alumna. As Roger says, it’s not for everyone, but I think it should be encouraged, especially in institutions whose mission involves education of young people who need to enter a diverse work world. Of course, it’s also one of the main missions of this blog.

More than a century ago, American pragmatist John Dewey emphasized the importance of “intellectual hospitality.” By this he meant “An attitude of mind which actively welcomes suggestions and relevant information from all sides.” Today academics and other experts face a crisis of intellectual hospitality, with implications not just for the art of science communication but also for the broader roles of experts in democracy.

….

A counter argument is that in an era where politics might matter more than ever, why should experts engage with their opponents? Maybe the stakes nowadays are just too high for the high-minded luxury of intellectual hospitality. In fact, perhaps we should actively be opposing delegitimized academics, the GMO industry, climate skeptics and even Republicans, lest we help their causes. I hear this argument a lot, as do others who seek to cross political lines in scientific engagement and communication.

Despite my experiences, I persist in believing that not just science but also democracy is well served by intellectual hospitality.

Experts reinforce democracy and work against authoritarianism when we adopt a stance of intellectual hospitality. A half century ago, American political scientist E. E. Schattschneider made a powerful case for the importance to democracy of a willingness to engage different points of view: “Democracy is based on a profound insight into human nature, the realization that all men are sinful, all are imperfect, all are prejudiced, and no one knows the whole truth.”

To paraphrase Walter Lippmann, democracy is not about getting everyone to think alike, but about getting people who think differently to act alike. Intellectual hospitality will not lead to uniform thinking, but it may facilitate collective action.

As experts we face important choices in how we deploy the authority that we have gained in society. We can use that power to delegitimize those we disagree with, to seek to dominate the intellectual arena. Alternatively, we can allow some in our ranks to try to serve as a corrective to the hyper-partisan politics of our era, to seek to facilitate democratic discourse. The choice is profound, not just for the politics of specific issues like climate change and GMOs, but for the practice of democracy itself.

Is Science on a Path to Irrelevance in Policy and Management? Keynote by Dr. Bob Lackey

This paper from Lackey’s plenary session at the recent SAF 2018 Convention in Portland is well worth reading in its entirety. For those not familiar with his work, Lackey has a variety of real-world experience in environmental science and policy, including Oregon forests and fish.

Lackey raises many points worth examining. Many people don’t trust scientists generally as voices of authority. Given the ideological proclivities of university folks (seen in studies), could we expect them to design and carry out unbiased studies? Can choices of research priorities be made in an unbiased way by biased people? And what about ways to mediate these impacts, like co-production, co-design and extended peer review (as noted by Sir Peter Gluckman here)?

Do these biases lead to an embedded assumption in research studies that “natural is better”? If so, shouldn’t we be discussing that openly in a forum of all the disciplines and people that may be involved or impacted? And if we really believe it, what does that say about human beings.. that we are pretty much cockroaches in the kitchen of the planet? Ultimately those are not scientific beliefs; they are beliefs about what humankind is about, philosophical or metaphysical depending on your worldview.

I’m not saying “natural” isn’t better, but there are different reasons it might or might not be, and those reasons could be important to policy. Further, with the climate changing, if we impute more than 50% of climate change to human forces (at the risk, if we don’t, of seeming Zinkean or Pruittesque), then nothing will really be “natural” in the original sense. Where does that leave us policy-wise? We can already see this playing out in some ESA discussions.

Here’s a quote related to that point:

“Is Our Science Biased Toward Natural?”
A simple question, but I’m working in academia these days, so a straightforward yes or no will not suffice.
To start, put on your science hat, and be honest here, imagine that the public owns a 5,000 acre stand of old growth fir. Is preserving this stand of old growth preferable to removing the trees and building a destination resort and golf course on the same 5,000 acres?
It is not! At least not without assuming, perhaps unwittingly, a policy preference, a value choice. The result? A classic example of normative science.
It may look like a scientific statement. It may sound like a scientific statement. It is often presented by people who we assume to be operating as scientists. But such statements in science are nothing more than policy advocacy masquerading as science.

Anyway, there’s plenty of discussion fodder in this paper, so have at it!

Validated Science versus Unproven Scientific Hypothesis – Which One Should We Choose?

In a 6/13/18 article, David Atkins critiques the assumptions behind the Law et al. article “Land use strategies to mitigate climate change in carbon dense temperate forests.” He shows how hypothetical science can be, and has been, used without any caveat to provide some groups with slogans that meet their messaging needs, instead of waiting for validation of the hypothesis and thereby considering the holistic needs of the world.

I) BACKGROUND

The noble goal of Law et al. is to determine the “effectiveness of forest strategies to mitigate climate change.” They state that their methodology “should integrate observations and mechanistic ecosystem process models with future climate, CO2, disturbances from fire, and management.”

A) The generally UNCONTESTED points (ignoring any debate over the size of the percentage increase) regarding locking up more carbon in the Law et al. article are as follows:
1) Reforestation on appropriate sites – ‘Potential 5% improvement in carbon storage by 2100’
2) Afforestation on appropriate sites – ‘Potential 1.4% improvement in carbon storage by 2100’

B) The CONTESTED points regarding locking up 17% more carbon by 2100 in the Law et al. article are as follows:
1) Lengthened harvest cycles on private lands
2) Restricting harvest on public lands

C) Atkins, at the 2018 International Mass Timber Conference (protested by Oregon Wild), notes that: “Oregon Wild (OW) is advocating that storing more carbon in forests is better than using wood in buildings as a strategy to mitigate climate change.” OW’s first reference from Law et al. states: “Increasing forest carbon on public lands reduced emissions compared with storage in wood products” (see the Law et al. abstract). Another reference quoted by OW from Law et al. goes so far as to claim: “Recent analysis suggests substitution benefits of using wood versus more fossil fuel-intensive materials have been overestimated by at least an order of magnitude.”

II) Law et al. CAVEATS ignored by OW

A) They clearly acknowledge that their conclusions are based on computer simulations (modeling various scenarios using a specific set of assumptions subject to debate by other scientists).

B) In some instances, they use words like “probably”, “likely” and “appears” when describing some assumptions and outcomes rather than blindly declaring certainty.

III) Atkins’ CRITIQUE

Knowing that the modeling in the Law et al. study involves significant assumptions about each of the extremely complex components and their interactions, Atkins investigates the assumptions used to integrate those models with the limited variables mentioned. He shows how they overestimate the carbon cost of using wood, underestimate the carbon cost of storing carbon on the stump, and underestimate the carbon cost of substituting non-renewable resources for wood. This allows Oregon Wild to tout unproven statements (quoted in item I-C above) as fact and as justification for policy changes, rather than as an interesting but unproven hypothesis that must be validated to complete the scientific process.

Quotes from Atkins Critique:

A) Wood Life Cycle Analysis (LCA) Versus Non-renewable substitutes.
1) “The calculation used to justify doubling forest rotations assumes no leakage. Leakage is a carbon accounting term referring to the potential that if you delay cutting trees in one area, others might be cut somewhere else to replace the gap in wood production, reducing the supposed carbon benefit.”
2) “It assumes a 50-year half-life for buildings instead of the minimum 75 years the ASTM standard calls for, which reduces the researchers’ estimate of the carbon stored in buildings.”
3) “It assumes a decline of substitution benefits, which other LCA scientists consider as permanent.”
4) “analysis chooses to account for a form of fossil fuel leakage, but chooses not to model any wood harvest leakage.”
5) “A report published by the Athena Institute in 2004 looked at actual building demolition over a three-plus-year period in St. Paul, Minn. It indicated 51 percent of the buildings were older than 75 years. Only 2 percent were demolished in the first 25 years and only 12 percent in the first 50 years.”
6) “The Law paper assumes that the life of buildings will get shorter in the future rather than longer. In reality, architects and engineers are advocating the principle of designing and building for longer time spans – with eventual deconstruction and reuse of materials rather than disposal. Mass timber buildings substantially enhance this capacity. There are Chinese pagoda temples made from wood that are 800 to 1,300 years old. Norwegian churches are over 800 years old. I visited a cathedral in Scotland with a roof truss system from the 1400s. Buildings made of wood can last for many centuries. If we follow the principle of designing and building for the long run, the carbon can be stored for hundreds of years.”
7) “The OSU scientists assumed wood energy production is for electricity production only. However, the most common energy systems in the wood products manufacturing sector are combined heat and power (CHP) or straight heat energy production (drying lumber or heat for processing energy) where the efficiency is often two to three times as great and thus provides much larger fossil fuel offsets than the modeling allows.”
8) “The peer reviewers did not include an LCA expert.”
9) The Dean of the OSU College of Forestry was asked how he reconciles the differences between two doctoral faculty members, one of whom is the LCA specialist (also the director of CORRIM, a non-profit that conducts and manages research on the environmental impacts of production, use, and disposal of forest products). The Dean’s answer was “It isn’t the role of the dean to resolve these differences, … Researchers often explore extremes of a subject on purpose, to help define the edges of our understanding … It is important to look at the whole array of research results around a subject rather than using those of a single study or publication as a conclusion to a field of study.”
10) Alan Organschi, a practicing architect and professor at Yale, stated his thought process as: “There is a huge net carbon benefit [from using wood] and enormous variability in the specific calculations of substitution benefits … a ton of wood (which is half carbon) goes a lot farther than a ton of concrete, which releases significant amounts of carbon during a building’s construction.” He then paraphrased a NASA climate scientist from the late 1980s who said, ‘Quit using high fossil fuel materials and start using materials that sink carbon; that should be the principle for our decisions.’
11) The European Union, in 2017, based on “current literature”, called “for changes to almost double the mitigation effects by EU forests through Climate Smart Forestry (CSF). … It is derived from a more holistic and effective approach than one based solely on the goals of storing carbon in forest ecosystems”
12) Various CORRIM members stated:
a) “Law et al. does not meet the minimum elements of a Life Cycle Assessment: system boundary, inventory analysis, impact assessment and interpretation. All four are required by the international standards (ISO 14040 and 14044); therefore, Law et al. does not qualify as an LCA.”
b) “What little is shared in the article regarding inputs to the simulation model ignores the latest developments in wood life cycle assessment and sustainable building design, rendering the results at best inaccurate and most likely incorrect.”
c) “The PNAS paper, which asserts that growing our PNW forests indefinitely would reduce the global carbon footprint, ignores that at best there would be 100 percent leakage to other areas with lower productivity … which will result in 2 to 3.5 times more acres harvested for the same amount of building materials. Alternatively, all those buildings will be built from materials with a higher carbon footprint, so the substitution impact of using fossil-intensive products in place of renewable low carbon would result in >100 percent leakage.”
d) More on leakage: In 2001, seven years after implementation, Jack Ward Thomas, one of the architects of the plan and former chief of the U.S. Forest Service, said: “The drop in the cut in the Pacific Northwest was essentially replaced by imports from Canada, Scandinavia and Chile … but we haven’t reduced our per-capita consumption of wood. We have only shifted the source.”
e) “Bruce Lippke, professor emeritus at the University of Washington and former executive director of CORRIM said, “The substitution benefits of wood in place of steel or concrete are immediate, permanent and cumulative.””
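Items A-2 and A-5 above turn on the assumed half-life of buildings. A back-of-the-envelope sketch, assuming simple exponential decay (my simplification; the paper’s actual product-decay accounting may differ), shows how much the 50- versus 75-year choice moves the estimate of carbon still stored:

```python
def fraction_remaining(years, half_life):
    """Share of wood-product carbon still in service after `years`,
    under simple exponential decay with the given half-life."""
    return 0.5 ** (years / half_life)

# Carbon still stored in buildings after 75 years under each assumption:
print(fraction_remaining(75, 50))  # ~0.35 with the paper's 50-year half-life
print(fraction_remaining(75, 75))  # exactly 0.50 with the 75-year ASTM figure
```

Under this toy decay model, the shorter half-life writes off roughly 15 percentage points more of the stored carbon at year 75; the Athena demolition data quoted in A-5 (51 percent of buildings older than 75 years) sit closer to the 75-year assumption.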

B) Risks Resulting from High Densities of Standing Timber
1) “The paper underestimates the amount of wildfire in the past and chose not to model increases in the amount of fire in the future driven by climate change.”
2) “The authors chose to treat the largest fire in their 25-year calibration period, the Biscuit Fire (2003), as an anomaly. Yet 2017 provided a similar number of acres burned. … the model also significantly underestimated five of the six other larger fire years.”
3) “The paper also assumed no increase in fires in the future.”
4) Atkins’ comments and quotes support what some of us here on the NCFP blog have been saying for years about storing more timber on the stump. A highly significant increase in carbon loss to fire, insects and disease will result from the increased stand densities that come with storing more carbon on the stump on federal lands; well-documented, validated and fundamental plant physiology and fire science can only lead to that conclusion. Increases in drought caused by global warming will only add stress to already stressed, overly dense forests, further decreasing their viability and health by reducing the availability of already limited resources such as minerals, moisture and sunlight, while the closer proximity between trees eases the spread of fire, insects and disease.

Footnote:
In their conclusion, Law et al. state that “GHG reduction must happen quickly to avoid surpassing a 2°C increase in temperature since preindustrial times.” This emphasis leads them to focus on strategies which, IMHO, will only exacerbate the long-term problem.
For perspective, consider the “Failed Prognostications of Climate Alarm.”