Featured lawsuit – Flathead National Forest plan revision (and the effect of no Chevron judicial deference)

Court decision in Swan View Coalition v. Haaland (D. Montana)

On June 28, the district judge adopted the magistrate judge’s findings that the Forest Service violated the Endangered Species Act because it failed to adequately consider the effects on grizzly bears and bull trout of closed roads and unauthorized use of roads when it adopted its revised forest plan (discussed in depth here).  This article has a link to the court order, which remands the Fish and Wildlife Service’s Biological Opinion, but does not disturb the forest plan, or enjoin any projects.

This is the second time the Forest Service has lost this case, and I thought it might be interesting to explore this example of the role the courts have in considering scientific issues related to land management, and whether such judicial review looks different with Chevron deference to agency expertise no longer the law.  The scientific question in this case concerns the effects of “closed” roads on grizzly bears and bull trout – more specifically the difference between the effects of roads closed by obliteration and restoration (as required by the prior forest plan) and roads closed by signs and barriers, without removing culverts (as required by the revised forest plan, and referred to as “impassable” roads).  There are documented violations of the latter kinds of closures, and the agencies agree that they are less than 100% effective.  However, the Fish and Wildlife Service declined to consider the effects of these closed roads or their use on the listed species in its biological opinion.

This court held that the FWS violated the ESA because the agency “offered an explanation for its decision to exclude impassable roads from (total road density) that runs counter to the evidence before the agency.”  With regard to grizzly bears, the agency “failed to address the exclusion of unauthorized motorized use from road density calculations and, to the extent the agency did address this issue, failed to articulate a satisfactory explanation regarding its decision.” Regarding bull trout, the court ruled that FWS violated the ESA because the agency “failed to address its decision to abandon the culvert removal requirement with respect to impassable roads.”

This court largely followed the reasoning in a prior case regarding deference to the agencies (referring to that prior opinion):  “The Court explained that, while it would defer to the agencies’ expertise on how to account for unauthorized motorized access going forward, ‘the agencies must actually exercise that expertise for their decision to stand.’ Id. at 1138. In summary, ‘[c]laiming a total inability to ascertain, or even estimate, effects of unauthorized motorized use on OMRD, TMRD, and Core—and, by extension, the effects on grizzly bears—despite the evidence in the record . . . does not suffice.’”

Courts can tell from the administrative record whether an agency actually considered the scientific facts in the record, and the agency will be found arbitrary if it did not.  This is an application of the requirements of the Administrative Procedure Act, not an interpretation of a subject matter statute.  Such cases wouldn’t have invoked Chevron deference in the past, and so nothing should change.

We might perceive a little wiggle room in the question of a “satisfactory” explanation of how the facts were considered.  The court walks through this question on pp. 29-33, where it refers to “logic” and whether the facts in the record support the statements made by the agencies.  It concludes that the explanation in this case “runs counter to the evidence before the agency.”  This still looks like an APA record review rather than one involving interpretations of law or fact that could have raised questions about Chevron deference.  I don’t see evidence that the judge is usurping agency expertise.

Four Questions Journalists Should Always Ask About Research Studies; and Saved From Disciplinary Encroachment by Western Watersheds

Thanks to Wyoming Public Media


These days, in several studies we’ve reviewed on The Smokey Wire, it appears that exploring mechanisms is an afterthought, perhaps left to unnamed parties.  A basic scientific principle - that correlation is not causation - seems to be on the verge of being thrown out.  A wide variety of variables.. social and others.. are not found or explored, and things which are unlike are lumped together.  In fact, new research has emerged without direct connection to the traditionally involved disciplines.  I think that this is definitely a science “Situation That Shouts Watch Out.”  And I think what we have to be very aware of.. since most journalists are not (this post will hopefully encourage them to be more aware).. is what I call “disciplinary encroachment.”

Anyway, I think this is a great column to illustrate some of my concerns.

1) Framing.  First, look at the study and how it’s framed.  Does it seem like it would inform any decisions?  If not, read no further.

2) Data.  Do the data collected (or used without collecting) relate to the question directly? Is this relationship explained clearly? Do the authors discuss the weaknesses of using these data?

3) Disciplines.  If the study is touching on, say, economics or birds, are economists or bird scientists involved? If wildfire, are wildfire scientists involved?

4) Correlation is Not Causation. This is as true as it has always been, for the same reasons.  Correlations can be tested by designed experiments, but those would tend to be carried out by experts in the field (see number 3).

Not to be an Old Person, but these were values upheld by fields in forest science at one time.  If we are changing those scientific community values, I think we should announce the fact, so the public can be clued in.

Sammy Roth of the LA Times, one of my favorite reporters, didn’t seem to notice that it might be odd for a post-doc economist at the University of Geneva to study birds in the US. Note that Sammy wrote this article as a “column,” which I think might mean “opinion piece” and not an ordinary article.  My view is that when reporters write both op-eds and news articles on the same subjects, it does not lead to confidence in the objectivity of the products.  See the James Bennet piece on how this has become murky in some outlets.

Erik Katovich, an environmental economist and postdoctoral researcher at the University of Geneva, had been following all the news coverage of wind power and bird deaths, and he feared it was being “weaponized by those opposed to renewable energy.” A longtime birder himself — he grew up in Minnesota bird-watching with his dad — he wanted to know if the harm to avian life from wind energy development in California, Iowa and other states was getting blown out of proportion.


As even those of us non-ornithologists who follow this know, different species respond differently.  Avoiding sites and getting killed are different things.. eagles and sage grouse are all different.

So like any good scholar, he ran the numbers.

Katovich turned to data from the National Audubon Society’s Christmas Bird Count, an annual effort dating back to 1900 during which tens of thousands of volunteers methodically record bird sightings at consistent locations around the world. Last winter’s count produced more than 36 million sightings of 671 bird species in the United States alone.

In a clever bit of science, Katovich compared the Christmas Bird Count numbers with data showing where wind turbines were built in America’s lower 48 states between 2000 and 2020. He did the same comparison for bird counts and new oil and gas extraction in shale fields — a process defined by the drilling technique known as hydraulic fracturing, or fracking.

His peer-reviewed study was published last month. The conclusions are fascinating.

Katovich found that wind energy development had no statistically significant effect on bird counts, or on the diversity of avian species within five kilometers of a Christmas Bird Count site. Fracking, on the other hand, did have an impact. The drilling of shale oil and gas wells “reduces the total number of birds counted in subsequent years by 15%,” Katovich wrote in the study.

But no one, as far as I know, was concerned about bird counts; they were concerned about critters like eagles and sage grouse.  See how that works when the framing is not clear at the outset? Sleight of science. And get this conclusion:

In other words: Oil and gas drilling is worse for birds than wind power.

That’s a pretty grandiose conclusion from one study.  So let’s back up and go back to framing, which was apparently “is oil and gas drilling ‘worse’ for birds than wind power?”  I don’t think anyone has this as a choice, and in fact I’ve seen both turbines and oil and gas facilities in the same general area.  I don’t recall a single BLM EIS that compares those two alternatives.  Then.. there are a lot of bird species.  I don’t know of any eagles that have been shredded by oil and gas facilities, so which species count more? And what if wind needs new transmission lines?

Can you imagine a study that amalgamates.. say.. mammals?  A more useful question perhaps would be “how can O&G or wind have fewer impacts on different bird species?” Which, of course, bird scientists are “avi”dly studying..

Here’s what Sammy has to say:

But to my mind, his results are a sign that we pay too much attention to bird-related criticisms of wind energy — probably in part because those criticisms are trumpeted by right-wing provocateurs, including some funded by fossil fuel industry money.

This is where it gets kind of funny.  Fortunately, Roth checked with various National Audubon people.

They told me Katovich probably underestimated the harm to birds from wind energy, in part because he included all turbines within five kilometers of Audubon bird count locations. Prior research has found that wind farms are much more likely to kill or injure birds that spend time right near the turbines.

I don’t think anyone can argue that birds need to be close to turbines to be injured or killed by them.

Fortunately, Sammy happened to also ask Erik Molvar of Western Watersheds. If he hadn’t met Erik.. I guess we wouldn’t have gotten this (more accurate) side of the story.

Molvar, who leads a conservation group called the Western Watersheds Project, offered several criticisms of the new study.

For one thing, the Audubon bird counts are done by volunteers, meaning the data aren’t perfect. Also, Katovich didn’t analyze the number of birds of each species recorded at each location — a crucial measure of biodiversity, Molvar said via email.

“A bird count that records one cardinal gets exactly the same weight as a count that records 250 cardinals,” he wrote.

Molvar also pointed out that wind farms and fossil fuel extraction can affect birds in different ways. Wind farms are more likely to kill birds than displace them. And some birds are more sensitive to wind farms than others. “Extreme habitat specialists” such as sage grouse — the focus of my first meeting with Molvar — could suffer greatly even as other species get by fine.


When I ran those criticisms past Katovich, he responded gracefully, describing them as “thoughtful and informed” and agreeing that several issues he examined could use further research. He also noted that in the same way his study could be missing some of the damage to bird populations from wind energy, it could be underestimating the harm from oil and gas, too.

But why wouldn’t you ask bird scientists.. about birds, how to measure their populations, and what we know about how they respond to O&G and wind installations?  Certainly wind folks collect monitoring data.
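Molvar’s weighting criticism is easy to make concrete. Here is a minimal sketch (with invented checklist numbers, not real Christmas Bird Count data) comparing presence-based species richness, which treats one cardinal the same as 250, with the abundance-weighted Shannon index, which does not:

```python
import math

def richness(counts):
    """Number of species seen at least once (presence/absence only)."""
    return sum(1 for n in counts.values() if n > 0)

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i), abundance-weighted."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in counts.values() if n > 0)

# Two hypothetical checklists with identical species lists but very
# different abundances (illustrative numbers only).
count_a = {"cardinal": 1, "robin": 1, "crow": 1}
count_b = {"cardinal": 250, "robin": 1, "crow": 1}

# Presence-based richness treats them identically...
print(richness(count_a), richness(count_b))  # 3 3
# ...but abundance-weighted diversity distinguishes them.
print(round(shannon(count_a), 2), round(shannon(count_b), 2))
```

A study built only on presence/absence would see no difference between these two sites, which is exactly the point of the “one cardinal versus 250 cardinals” objection.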

As we shall see with other studies, disciplinary encroachment is not unusual.

Oh, but the funny part. Circle back to “But to my mind, his results are a sign that we pay too much attention to bird-related criticisms of wind energy — probably in part because those criticisms are trumpeted by right-wing provocateurs, including some funded by fossil fuel industry money,” and we have to wonder whether anyone thinks Western Watersheds are closet “right-wing provocateurs” or closet right-wingers (just kidding!).

Hotshot Wakeup Interview with Jason Forthofer of the US Forest Service Missoula Fire Laboratory

Jason Forthofer, mechanical engineer, stands in an area burned by the Carr Fire, one of the devastating California wildfires in 2018. (Photo provided by Bret Butler, U.S. Forest Service)

I thought this was a terrific podcast, an interview with Jason Forthofer at the Missoula Fire Sciences Lab by the Hotshot Wakeup on his Substack.  He has a gift for talking about fire models in a way that is easy to understand, at least for TSW-ites and our ilk.  And the range of tools he’s working on, and their practical applications, are fascinating. At least to me.

He talks about AI and machine learning.. I’ve always been interested in these new-fangled analysis contraptions, so I asked Jason these questions.

When you say “AI” what do you mean exactly? Do you mean machine learning? I kind of thought that that was empirical also, based on loading data into it. But then you mentioned a combination of using your physical model with AI.  We have many older readers so if you could explain this a bit more (or anything else you wanted to say but did not get to, or links to key papers), that would be great!

Below are his answers.

Yes, when I was saying “AI” I was primarily talking about machine learning.  I often use these terms interchangeably, but I understand that there are some differences.  In the context of the spread model work we are doing with Google, we are using machine learning, and specifically a method called deep learning, which uses the idea of a neural network.

I would say that you are correct that AI and machine learning could be considered essentially empirical models.  And yes, often these models use learning data that comes from measurements of actual phenomena.  So in the case of fire spread for example, you could burn some fires in a laboratory and vary, say, the wind speed and measure the outcome (let’s say you measure the fire’s spread rate).  An empirical model would, in one way or another, correlate the input (wind speed) to the outcome (fire spread rate).  For simple cases like this you could do a curve fit to the data, just like you might learn in an elementary physics or math class (one common method is the “least squares” fit).  You could also use a more sophisticated and complex method like machine learning.  From my limited experience with machine learning, I would say that it really is like a kind of very sophisticated “curve fitting” method.  As the phenomena you are trying to model get more complex, for example many different inputs and outputs and also complex relations between the variables, more complex methods like machine learning may work better than the simpler methods.
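To make the “sophisticated curve fitting” analogy concrete, here is a minimal sketch of the simple end of that spectrum: a least-squares polynomial fit relating wind speed to spread rate, the kind of fit you might learn in an elementary class. The numbers are invented for illustration, not real fire measurements:

```python
import numpy as np

# Hypothetical lab burns: wind speed (m/s) vs. measured spread rate (m/min).
# These values are illustrative only, not real fire data.
wind_speed = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
spread_rate = np.array([0.5, 1.1, 2.2, 3.8, 6.1, 9.0])

# The classic empirical approach: a least-squares curve fit.
# Fit a quadratic, spread = a*w^2 + b*w + c, to the observations.
a, b, c = np.polyfit(wind_speed, spread_rate, deg=2)

def predict_spread(w):
    """Predict spread rate (m/min) at wind speed w (m/s) from the fitted curve."""
    return a * w**2 + b * w + c

# The fitted curve tracks the training data closely.
residuals = spread_rate - predict_spread(wind_speed)
print(round(float(np.max(np.abs(residuals))), 3))
```

Machine learning generalizes this same input-to-output fitting to many variables and much more complicated relationships, which is why “very sophisticated curve fitting” is a fair description.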

But machine learning can also use data that is output from another model instead of actual measurements of the real phenomena, which is what we are doing in collaboration with Google.  Instead of the machine learning algorithm using lab or field measurements to learn from, we are feeding it input/output from our relatively slow running physically-based model of fire spread.  The whole purpose of doing this is to have a predictive model that is fast running.  To give you an idea of the speed up in the computation, some preliminary investigations we have done show that the machine learning model (that learns from our physically based model) can predict fire spread somewhere around 100,000 times faster than the original physically-based model.  The huge benefit of this is that it essentially allows us to use our machine learning model to predict fire spread over large landscapes (where tens of thousands or more of these small fire behavior calculations must be done).  It would not be feasible to do such a simulation using the original physically-based model.
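The surrogate-model idea Forthofer describes can be sketched in a few lines. In this toy version, an invented formula stands in for the slow physically-based model and a least-squares fit stands in for the neural network; only the workflow (sample the slow model, fit a fast emulator to its input/output pairs, then evaluate the cheap emulator everywhere) mirrors the real project:

```python
import numpy as np

rng = np.random.default_rng(0)

def slow_physical_model(wind, slope):
    # Stand-in for a slow physically-based spread model (illustrative formula only).
    return 0.5 + 0.3 * wind**2 + 1.2 * np.maximum(slope, 0.0) + 0.1 * wind * slope

# 1) Sample the slow model to build a training set (the expensive step).
wind = rng.uniform(0.0, 10.0, size=500)
slope = rng.uniform(-0.5, 0.5, size=500)
y = slow_physical_model(wind, slope)

# 2) Fit a cheap surrogate by least squares on simple basis features.
#    (A neural network plays this role in the real project; the principle
#    is the same: learn input -> output from the simulator itself.)
X = np.column_stack([np.ones_like(wind), wind, slope, wind**2,
                     wind * slope, np.maximum(slope, 0.0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def surrogate(w, s):
    """Evaluate the fitted surrogate: a handful of multiplies per call."""
    feats = np.array([1.0, w, s, w**2, w * s, max(s, 0.0)])
    return float(feats @ coef)

# 3) The surrogate is cheap enough to run over tens of thousands of
#    landscape cells, where the slow model would be infeasible.
print(abs(surrogate(4.0, 0.2) - slow_physical_model(4.0, 0.2)) < 1e-6)
```

The real speedup comes from the per-call cost: once trained, the emulator never touches the expensive physics again, which is how a roughly 100,000x speedup becomes plausible.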


When I think about different research that uses machine learning, I think it’s safe in the hands of folks like Jason who are experienced with the real-world processes he models.  If you are trying to relate machine learning to older methods that you understand, like, say, linear regression, I think this Forbes article is helpful.  Let’s be careful out there!

Pattern verification is an especially powerful way of using machine learning models to both to confirm that they are picking up on theoretically suggested signals and, perhaps even more importantly, to understand the biases and nuances of the underlying data. Unrelated variables moving together can reveal a powerful and undiscovered new connection with strong predictive or explanatory power. On the other hand, they could just as easily represent spurious statistical noise or a previously undetected bias in the data.

Bias detection is all the more critical as we deploy machine learning systems in applications with real world impact using datasets we understand little about.

Perhaps the biggest issue with current machine learning trends, however, is our flawed tendency to interpret or describe the patterns captured in models as causative rather than correlations of unknown veracity, accuracy or impact.

One of the most basic tenets of statistics is that correlation does not imply causation. In turn, a signal’s predictive power does not necessarily imply in any way that that signal is actually related to or explains the phenomena being predicted.
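The quoted warning about spurious signals is easy to demonstrate: screen enough unrelated variables against a target and some will correlate impressively by chance alone. A small sketch with purely random data:

```python
import numpy as np

# 1,000 candidate "predictor" variables, all pure noise, screened against a
# target that is also pure noise: nothing here causes anything.
rng = np.random.default_rng(0)
target = rng.normal(size=50)
candidates = rng.normal(size=(1000, 50))

# Correlation of each candidate with the target.
corrs = np.array([np.corrcoef(c, target)[0, 1] for c in candidates])
best = float(np.max(np.abs(corrs)))

# With enough candidates, the "best" correlation looks meaningful by chance.
print(round(best, 2))
```

A model (or a researcher) that searches many variables and reports only the strongest association will reliably “discover” relationships like this one, which is why out-of-sample validation and a plausible mechanism matter.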


And Then There Is This – Globally Wildfires Decreasing Since 2001

Italics and bolding added by Gil

#1)  WSJ, by Bjorn Lomborg

Climate Change Hasn’t Set the World on Fire

a) It turns out the percentage of the globe that burns each year has been declining since 2001.

b) For more than two decades, satellites have recorded fires across the planet’s surface. The data are unequivocal: Since the early 2000s, when 3% of the world’s land caught fire, the area burned annually has trended downward.

c) In 2022, the last year for which there are complete data, the world hit a new record-low of 2.2% burned area. Yet you’ll struggle to find that reported anywhere.

d) Yet the latest report by the United Nations’ climate panel doesn’t attribute the area burned globally by wildfires to climate change. Instead, it vaguely suggests the weather conditions that promote wildfires are becoming more common in some places. Still, the report finds that the change in these weather conditions won’t be detectable above the natural noise even by the end of the century.

e) Take the Canadian wildfires this summer. While the complete data aren’t in for 2023, global tracking up to July 29 by the Global Wildfire Information System shows that more land has burned in the Americas than usual. But much of the rest of the world has seen lower burning—Africa and especially Europe. Globally, the GWIS shows that burned area is slightly below the average between 2012 and 2022, a period that already saw some of the lowest rates of burned area.
f) The thick smoke from the Canadian fires that blanketed New York City and elsewhere was serious but only part of the story. Across the world, fewer acres burning each year has led to overall lower levels of smoke, which today likely prevents almost 100,000 infant deaths annually, according to a recent study by researchers at Stanford and Stockholm University.
g)  Likewise, while Australia’s wildfires in 2019-20 earned media headlines such as “Apocalypse Now” and “Australia Burns,” the satellite data shows this was a selective narrative. The burning was extraordinary in two states but extraordinarily small in the rest of the country. Since the early 2000s, when 8% of Australia caught fire, the area of the country torched each year has declined. The 2019-20 fires scorched 4% of Australian land, and this year the burned area will likely be even less.
h) In the case of American fires, most of the problem is bad land management. A century of fire suppression has left more fuel for stronger fires. Even so, last year U.S. fires burned less than one-fifth of the average burn in the 1930s and likely only one-tenth of what caught fire in the early 20th century.


#2)  The Canadian Take by LifeSiteNews, Thu Aug 31, 2023

New research shows wildfires have decreased globally while media coverage has spiked 400%

Recreation effects on wildlife conference

The effect of recreation on wildlife is a topic that has come up a few times here.  It has apparently reached the visibility of a “conference theme,” at least in Canada: “Responsible Recreation: Pathways, Practices and Possibilities.”  This conference in May focused on the Columbia Mountains in southern B.C., but may be of broader interest.  You can still sign up to see the recorded conference until the 16th, but the written proceedings are available from this website.

From the conference description:

Recreation and adventure tourism opportunities and activities are expanding globally, with the Columbia Mountains region being no exception. From hiking, mountain biking, snowmobiling, dirt biking, cross-country skiing, to motorized and non-motorized watercraft use, all activities can have an impact on wildlife and ecosystems. However, empirical measures of impacts are often difficult to obtain, with unknown thresholds that ultimately affect the viability of wildlife populations and ecosystems. This limits policy development and impact management. Furthermore, the cumulative effect of multiple overlapping recreational and industrial activities on the landscape are seldom considered or addressed.


A Paean to Skepticism I: Where the “Wood-Wide Web” Narrative Went Wrong

This is a great story of the science-journalism interface and how things can go wrong, even when everyone has the best of intentions. What I like is that the scientists admit to their own biases and perhaps a bit of overenthusiasm.. Skepticism by all of us.. is so important! This is the theme of the next three posts.

A compelling story about how forest fungal networks communicate has garnered much public interest. Is any of it true?


Over the past few years, a fascinating narrative about forests and fungi has captured the public imagination. It holds that the roots of neighboring trees can be connected by fungal filaments, forming massive underground networks that can span entire forests — a so-called wood-wide web. Through this web, the story goes, trees share carbon, water, and other nutrients, and even send chemical warnings of dangers such as insect attacks. The narrative — recounted in books, podcasts, TV series, documentaries, and news articles — has prompted some experts to rethink not only forest management but the relationships between self-interest and altruism in human society.

But is any of it true?

The three of us have studied forest fungi for our whole careers, and even we were surprised by some of the more extraordinary claims surfacing in the media about the wood-wide web. Thinking we had missed something, we thoroughly reviewed 26 field studies, including several of our own, that looked at the role fungal networks play in resource transfer in forests. What we found shows how easily confirmation bias, unchecked claims, and credulous news reporting can, over time, distort research findings beyond recognition. It should serve as a cautionary tale for scientists and journalists alike.

First, let’s be clear: Fungi do grow inside and on tree roots, forming a symbiosis called a mycorrhiza, or fungus-root. Mycorrhizae are essential for the normal growth of trees. Among other things, the fungi can take up from the soil, and transfer to the tree, nutrients that roots could not otherwise access. In return, fungi receive from the roots sugars they need to grow.

As fungal filaments spread out through forest soil, they will often, at least temporarily, physically connect the roots of two neighboring trees. The resulting system of interconnected tree roots is called a common mycorrhizal network, or CMN.

When people speak of the wood-wide web, they are generally referring to CMNs. But there’s very little that scientists can say with certainty about how, and to what extent, trees interact via CMNs. Unfortunately, that hasn’t prevented the emergence of wildly speculative claims, often with little or no experimental evidence to back them up.

One common assertion is that seedlings benefit from being connected to mature trees via CMNs. However, across the 28 experiments that directly tackled that question, the answer varied depending on the trees’ species, and on when, where, and in what type of soil the seedling is planted. In other words, there is no consensus. Allowed to form CMNs with larger trees, some seedlings seem to perform better, others worse, and still others seem to behave no differently at all. Field experiments designed to allow roots of trees and seedlings to intermingle — as they would in natural forest conditions — cast still more doubt on the seedling hypothesis: In only 18 percent of those studies were the positive effects of CMNs strong enough to overcome the negative effects of root interactions. To say that seedlings generally grow or survive better when connected to CMNs is to make a generalization that simply isn’t supported by the published research.

Other widely reported claims — that trees use CMNs to signal danger, to recognize offspring, or to share nutrients with other trees — are based on similarly thin or misinterpreted evidence. How did such a weakly sourced narrative take such a strong grip on the public imagination?

We scientists shoulder some of the blame. We’re human. Years ago, when the early experiments were being done on forest fungi, some of us — the authors of this essay included — simply got caught up in the excitement of a new idea.

One of us (Jones) was involved in the first major field study on CMNs, published more than 25 years ago. That study found evidence of net carbon transfer between seedlings of two different species, and it posited that most of the carbon was transported through CMNs, while downplaying other possible explanations. This is what’s known as “confirmation bias,” and it is an easy trap to fall into. As hard as it is to admit, it was only due to our skepticism of the recent extraordinary claims about the wood-wide web that we looked back and saw the bias in our own work.

Over decades, these and other distortions have propagated in the academic literature on CMNs, steering the scientific discourse further and further away from reality, similar to a game of “telephone.” In our review, we found that the results of older, influential field studies of CMNs have been increasingly misrepresented by the newer papers that cite them. Among peer reviewed papers published in 2022, fewer than half the statements made about the original field studies could be considered accurate. A 2009 study that used genetic techniques to map the distribution of mycorrhizal fungi, for instance, is now frequently cited as evidence that trees transfer nutrients to one another through CMNs — even though that study did not actually investigate nutrient transfer. In addition, alternative hypotheses provided by the original authors were typically not mentioned in the newer studies.

As these biases have spilled over into the media, the narrative has caught fire. And no wonder: If scientists themselves could be seduced by potentially sensational findings, it is not surprising that the media could too.

Journalists told emotional, persuasive, and seductive stories about the wood-wide web, amplifying the speculations of a few scientists through powerful storytelling. Writers imbued trees with human qualities, portraying them as conscious actors using fungi to serve their needs. Fantasy moved to the foreground, facts to the back. In an odd kind of mutual reinforcement, the media blitz may have convinced experts in other subfields of ecology that the claims about CMNs were well-founded.

The episode underscores how important it is for journalists to seek out a broad range of expert opinions, and to challenge us scientists when our assertions aren’t clearly backed up by rigorous research. By directly asking scientists questions such as “What other phenomena could explain your results?” and “How many other studies support this hypothesis?” journalists may be able to better understand and convey some of the uncertainty around scientific conclusions. The best science writing can capture the hearts and minds of the public, but it must be true to the evidence and the scientific process. If not, the consequences can be far-reaching, affecting policy decisions that impact real people.

There are many captivating and scientifically well-grounded stories we can tell about fungi in forests — and we should. Mycorrhizal fungi underlie many of our favorite edible mushrooms, including truffles, chanterelles, and porcinis. And some herbs in the understories of forests, rather than photosynthesizing sugars like a normal plant, use CMNs to connect to trees and steal their sugars. Forests are fascinating places, marked by a rich diversity of interactions between plants, animals, and microbes. The stories are endless. We just have to tell them with care.

Melanie Jones is a professor in the Biology Department at the University of British Columbia’s Okanagan campus. She and her students have been studying mycorrhizal fungal communities in forests, clearcuts, and wildfire sites in British Columbia for 35 years.

Jason Hoeksema is a professor in the Department of Biology at the University of Mississippi. His research addresses a diversity of questions regarding the ecological and evolutionary consequences of species interactions on populations, communities, and ecosystems.

Justine Karst is an associate professor in the Department of Renewable Resources at the University of Alberta. She has been studying the mycorrhizal ecology of forests for 20 years.


A Framework for Federal Scientific Integrity Policy and Practice

The 2021 Presidential Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking charges the Office of Science and Technology Policy to (1) review agency scientific integrity policy effectiveness and (2) to develop a framework for regular assessment and iterative improvement of agency scientific integrity policies and practices (Framework). In January, the Biden Administration released the Framework. It includes a “first-ever Government-wide definition of scientific integrity,” a roadmap of activities and outcomes to achieve an ideal state of scientific integrity, a Model Scientific Integrity Policy, as well as critical policy features and metrics that OSTP will use to iteratively assess agency progress.  Here is that definition:

Scientific integrity is the adherence to professional practices, ethical behavior, and the principles of honesty and objectivity when conducting, managing, using the results of, and communicating about science and scientific activities. Inclusivity, transparency, and protection from inappropriate influence are hallmarks of scientific integrity.

The 2021 Presidential Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking also charges OSTP and NSTC to “review agency scientific integrity policies and consider whether they prevent political interference in the conduct, management, communication, and use of science …”  The “Model Scientific Integrity Policy for United States Federal Agencies” says this:

It is the policy of this agency to: 1. Prohibit political interference or inappropriate influence in the funding, design, proposal, conduct, review, management, evaluation, or reporting of scientific activities and the use of scientific information.

Ensure that agency scientists may communicate their scientific activities objectively without political interference or inappropriate influence, while at the same time complying with agency policies and procedures for planning and conducting scientific activities, reporting scientific findings, and reviewing and releasing scientific products. Scientific products (e.g., manuscripts for scientific journals, presentations for workshops, conferences, and symposia) shall adhere to agency review procedures.

It defines these terms:

Political interference refers to interference conducted by political officials and/or motivated by political considerations.

Inappropriate influence refers to the attempt to shape or interfere in scientific activities or the communication about or use of scientific activities or findings against well-accepted scientific methods and theories or without scientific justification.

I found it rather interesting, given the way these terms are used, that the 2021 Presidential Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking actually says this:

Improper political interference in the work of Federal scientists or other scientists who support the work of the Federal Government and in the communication of scientific facts undermines the welfare of the Nation, contributes to systemic inequities and injustices, and violates the trust that the public places in government to best serve its collective interests.

Executive departments and agencies (agencies) shall establish and enforce scientific-integrity policies that ban improper political interference in the conduct of scientific research and in the collection of scientific or technological data, and that prevent the suppression or distortion of scientific or technological findings, data, information, conclusions, or technical results.

Deliberate or careless?  Could there be “proper” political interference, especially given the distinction made about “inappropriate” influence (which is defined in terms of “interference”)?

Anyway, it’s good to know someone is working on this aspect of scientific integrity.  And it seems to be helping – compare these results of the Union of Concerned Scientists’ 2023 surveys of scientists at federal agencies with those from 2018.  (Unfortunately, while the 2023 survey includes USDA, it did not include the Forest Service.)

“Proforestation”: It Ain’t What It Claims To Be

‘Proforestation’ separates people from forests

AKA: Ignorance and Arrogance Still Reign Supreme at the Sierra Club.

I picked this up from Nick Smith’s newsletter (sign up here).
Emphasis added by me as follows:
1) Brown text for items NOT SUPPORTED by science with long-term and geographically extensive validation.
2) Bold green text for items SUPPORTED by science with long-term and geographically extensive validation.
3) >>>Bracketed italics for my added thoughts, based on 59 years of experience and review of a vast range of literature going back to well before the internet.<<<

“Proforestation” is a relatively new term in the environmental community. The Sierra Club defines it as: “extending protections so as to allow areas of previously-logged forest to mature, removing vast amounts of atmospheric carbon and recovering their ecological and carbon storage potential.” >>>Apparently, after 130 years of existence, the Sierra Club still doesn’t know much about plant physiology, the carbon cycle, or the increased risk of calamitous wildfire spread caused by the close proximity of stems and competition-driven mortality in unmanaged stands (i.e., the science of plant physiology regarding competition, limited resources, and fire-spread physics). Nor have they thought out the real risk of permanent destruction of the desired ecosystems, nor the resulting impact on climate change.<<<

Not only must we preserve untouched forests, proponents argue, but we must also walk away from previously-managed forests too. People should be entirely separate from forest ecology and succession. >>>More abject ignorance and arrogant woke policy based only on vacuous wishful thinking.<<<

Except humans have managed forests for millennia. In North America, Indigenous communities managed forests and sustained their resources for at least 8,000 years prior to European settlement. It is true people have not always managed forests sustainably. Forest practices of the late 19th century are a good example. >>>Yes, and the political solution pushed on us by the Sierra Club and other faux conservationists, beginning with false assumptions about the Northern Spotted Owl, was to throw out the continuously improving science (i.e., Continuous Process Improvement [CPI]).  The concept of using the science to create sustainable practices and laws that regulated the bad practices driven by greed and arrogance wasn’t even considered seriously.  As always, the politicians listened to the well-heeled squeaky voters.  Now, their arrogant ignorance has given us National Ashtrays, destruction of soils, and an ever-increasing probability that great acreages of forest ecosystems will be lost to the generations that follow, who will also have to cope with the exacerbated climate change.  So here we are: in 30+/- years the faux conservationists have made things worse than the greedy timber barons ever could have.  And the willfully blind can’t seem to see what they have done. Talk about arrogance.<<<

Forest management provides tools to correct past mistakes and restore ecosystems. But Proforestation even seems to reject forest restoration that helps return a forest to a healthy state, including controlling invasive species, maintaining tree diversity, and returning forest composition and structure to a more natural state.

Proforestation is not just a philosophical exercise. The goal is to ban active forest management on public lands. It has real policy implications for the future management (or non-management) of forests and how we deal with wildfires, climate change and other disturbances.

We’ve written before about how this concept applies to so-called “carbon reserves.” Now, powerful and well-funded anti-forestry groups are pressuring the Biden Administration to set aside national forests and other federally-owned lands under the guise of “protecting mature and old-growth” trees.

In its recent white paper on Proforestation (read more here), the Society of American Foresters writes that “preservation can be appropriate for unique protected areas, but it has not been demonstrated as a solution for carbon storage or climate change across all forested landscapes.”

Proforestation doesn’t work when forests convert from carbon sinks into carbon sources. A United Nations report pointed out that at least 10 World Heritage sites – the places with the highest formal environmental protections on the planet – are net sources of carbon pollution. This includes the iconic Yosemite National Park.

The Intergovernmental Panel on Climate Change (IPCC) recognizes active forest management will yield the highest carbon benefits over the long term because of its ability to mitigate carbon emitting disturbance events and store carbon in harvested wood products. Beyond carbon, forest management ensures forests continue to provide assets like clean water, wildlife habitat, recreation, and economic activity.

Forest management offers strategies to manage forests for carbon sequestration and long-term storage. Proforestation rejects active stewardship that can not only help cool the planet, but also help meet the needs of people, wildlife, and ecosystems. You can expect to see this debate intensify in 2023.

Effectiveness of Fuel Treatments at the Landscape Scale: Jain et al. 2022

I was looking for a “Scientist of the Week” to honor and ran across this JFSP report. I’d like to give a vote of appreciation to these authors, and to the folks at JFSP for funding a useful synthesis of the research.

What I think is interesting about this paper is that the authors used different forms of knowledge (empirical, simulation, and case studies) to look at the question.  I also like that they separated out (1) direct wildfire effects, (2) impacts to suppression strategies and tactics, and (3) opportunities for BWU. Some studies only talk about (1) or, in some cases, it’s not clear what exactly they are talking about. Here’s one chart.

I particularly liked the section “Identified management and policy considerations and research gaps” on page 25.

Here is an interesting section of that:

Our synthesis focused primarily on how fuel treatments performed in the event of large wildfires, rather than the effect of fuel treatments at keeping wildfires small. Treatments offer suppression opportunities and subsequently influence how many fires are being extinguished in fuel treatments. In the case studies, there were comments that the wildfires ignited outside the fuel treatments and therefore when fuel treatments were burned by wildfires, the wildfires were already large. If fuel treatments allow for effective wildfire management, including successful full suppression compared to untreated areas, our focus may have undervalued their suppression benefit.

Longevity of fuel treatments was mentioned in all three synthesis types. In most cases fuel treatments were short-lived, from 1 year to 20 years; however, in most cases the longevity of fuels was focused on surface fuels. Future studies should focus on the longevity of treatment effects in each relevant fuel stratum to test the following hypotheses: 1) surface fuels have the shortest fuel treatment longevity; 2) crown fuels have the longest fuel treatment longevity; 3) ladder fuel longevity decreases when crown fuels are separated, creating growing space for ladder fuels to flourish. Studies that focus on fuel strata longevity can inform managers when it is necessary to conduct maintenance treatments and choose a method of treatment that extends treatment longevity.

A discussion of research gaps in empirically based studies is premature given the current state of knowledge. Empirical approaches to understanding landscape-level fuel treatment effectiveness are in their infancy. Indeed, the field is at a point where clear and precise terms and concepts are not broadly recognized. The fundamental issue is the varied and imprecise use of the term ‘landscape.’ Wildfire is a landscape-level process. Fuel treatment effectiveness should be evaluated by how it affects that process, functionally, from a landscape perspective. The terms landscape scale and landscape size have little generalizable meaning. Large wildfires and/or large treatments may be called ‘landscape’, but our inference on treatment effectiveness will remain constrained to within-site (i.e., within treatment) effects if the sampling design and analysis are site-level and not also measuring effects outside the treatment footprint. Therefore, instead of identifying gaps in understanding, there should be 1) broad recognition of what is meant by landscape-level fuel treatment effectiveness and how the characteristics of fuel treatments affect wildfire activity outside of treatment boundaries, and 2) long-term commitment to designing and implementing research projects at the landscape level over large areas that can inform questions and test hypotheses about the type, size, density, and configuration of fuel treatments that best affect subsequent wildfire in desirable directions.

The authors say “Wildfire is a landscape-level process. Fuel treatment effectiveness should be evaluated by how it affects that process, functionally, from a landscape perspective.” It seems to me that effectiveness would be measured as “do these treatments make wildfires easier to manage?”, with management including protecting communities, water and other infrastructure, and protecting species and watersheds from excessively negative impacts. And I don’t really care about defining “landscape scale” except for the idea that, say, if you are planning PODs, you obviously have to think at the appropriate scale. But perhaps we all have different definitions. That could certainly make researchers’ lives difficult if we are all operating from different definitions while thinking we mean the same thing.

From Where Wildfires Start to the Forest Service 10-Year Strategy: Tracking the Logic Path of the Downing et al. Paper

Many thanks to Matthew for posting the Downing et al. paper from Nature. There are so many fascinating things about this paper that I thought it was worthy of looking at carefully: 1) what the paper says (results); 2) how those results relate to the methods (logic paths); and 3) how the conclusions made their way from the paper to OSU public affairs and other media reports.  It’s a terrific example, because you don’t have to understand the methodologies to understand the knowledge claims and logic.  I’m hoping this analysis will be helpful to students, and the paper is open access so anyone can view it. This is a longer post than usual.

The first step is always to look at results and discussion.

Our empirical assessment of CB fire activity can support the development of strategies designed to foster fire-adapted communities, successful wildfire response, and ecologically resilient landscapes.

Then let’s try to translate this from academic talk.  First, they talk about “cross-boundary” wildfire; in fact, the whole paper is about “cross-boundary” wildfire – that is, fires that move from private to public land or vice versa.  Why is this important?  They discuss this in a few paragraphs.

The tension between ecological processes (e.g., fire) and social processes (e.g., WUI development) in mixed ownership landscapes is brought into stark relief when fire ignites on one land tenure and spreads to other ownerships, especially when it results in severe damages to communities on private lands and/or highly valued natural resources on publicly managed wildlands. These cross-boundary (CB) wildfires present particularly acute management challenges because the responsibilities for preventing ignitions, stopping fire spread, and reducing the vulnerability of at-risk, high-value assets are often dispersed among disparate public and private actors with different objectives, values, capacity, and risk tolerances [23–25]. Some CB risk mitigation strategies exist, such as fire protection exchanges, which transfer suppression responsibility from one agency (e.g., state) to another (e.g., U.S. Forest Service), and CB fuel treatment agreements, which allow managers to influence components of wildfire risk beyond their jurisdictional boundaries [2,26]. Improving CB wildfire risk management has been identified as a top national priority [27], but effective, landscape-scale solutions are not readily apparent.

A common narrative used to describe CB fire is as follows: a wildfire ignites on remote public lands (e.g., US Forest Service), spreads to a community, showers homes with embers, and results in structure loss and fatalities [23,25,28]. In this framing, public land management agencies bear the primary responsibility for managing and mitigating CB fire risk, with effort focused on prevention, hazardous fuel reduction, and suppression—largely reinforcing the dominant management paradigm of fire exclusion [29,30]. An alternative risk management framing of this challenge has emerged, starting with the axiom that CB fire transmission is inevitable in fire-prone mixed ownership landscapes and that private landowners and homeowners are the actors best positioned to reduce fire risk to homes and other high-value assets regardless of where the fire starts [31]. In the absence of a broad-scale empirical assessment of CB fire transmission, it is difficult to determine which of these narratives more accurately reflects the nature of the problem, and whether CB fire risk management is best framed in terms of reducing fire transmission from public lands or decreasing the exposure and vulnerability of high-value developed assets on private lands.

So there’s a framing question here.  Where I live, people don’t much care where a fire starts, just if it’s burning close to them.  So the importance of addressing wildfire with the CB framing is based on a “common narrative” with cites 23, 25, and 28.  28 is an interesting paper by Ager et al., definitely worth taking a look at, about Central Oregon, where they found:

Among the land tenures examined, the area burned by incoming fires averaged 57% of the total burned area. Community exposure from incoming fires ignited on surrounding land tenures accounted for 67% of the total area burned.

I would call that an observation, rather than a narrative, but perhaps I’m being pedantic.  For those who don’t track this stuff, Ager’s group’s transmission and scenario analysis forms the basis for the prioritization scheme in the Forest Service 10-year action plan.

As to “Improving CB wildfire risk management has been identified as a top national priority, but effective, landscape-scale solutions are not readily apparent”: in a brief review of their cite to the Fire Plan Implementation Strategy, I didn’t actually see a reference to “cross boundary” – maybe others can find it.  The other thing I’d point out is that PODs look like they might be an effective landscape-scale solution, and they seem apparent to me.  So outside of a scientific paper, that would be an interesting conversation to have.

I found the conclusions interesting, as I have just spent several days working on the logic of “fires are increasingly difficult and unpredictable due to fuel accumulation, climate change, and increasing amounts of human infrastructure; therefore we need to keep all the tools in the toolkit, including prescribed fire and the Thing Formerly Known as WFU.”  This was rather well stated, IMHO, in the Wildfire Resolution Letter on TFKWFU:

As we have seen over the past few years, especially in California and Colorado, we are now experiencing conditions that are causing extreme fire behavior, which is in part due to past full suppression policy. The best management approach we have to combat this phenomenon is reducing the amount of fuel available to burn. Similar to how important thinning and prescribed burning are around our communities, the ability to manage wildland fires at appropriate times is equally important for reducing fuels in the wildland environment. We will never be able to reduce fire risk to communities with thinning or prescribed fire alone—we need all hands on deck, and all the tools in the toolbox.

So it looks like the authors of the Downing et al. paper ended up in a different place from many other practitioners and the usual fire science suspects.  To me this is a Science Situation That Shouts “Watch Out.”

So that’s why we need to dig deeper. Let’s go to the conclusions of their paper again.

Our empirical assessment of CB fire activity can support the development of strategies designed to foster fire-adapted communities, successful wildfire response, and ecologically resilient landscapes. Adapting to increasing CB wildfire in the western US will require viewing socio-ecological risk linkages between CB fire sources and recipients as management assets rather than liabilities. We believe that a shared understanding of CB fire dynamics, based on empirical data, can strengthen the social component of these linkages and promote effective governance. The current wildfire management system is highly fragmented [74], and increased social and ecological alignment between actors at multiple scales is necessary for effective wildfire risk governance [14,30].

Cross-boundary fire activity can contribute to multijurisdictional alignment when fire transmission incentivizes actors to collaboratively manage components of risk that manifest outside their respective ownerships [15]. A broader acknowledgement that CB is inevitable in some fire-prone landscapes will ideally shift the focus away from excluding fire in multijurisdictional settings towards improved cross-jurisdictional pre-fire planning and reducing the vulnerability of high-value assets in and around wildlands [30,31]. Federal agencies like the USFS can provide capacity, analytics, and funding, but given that private lands are where most high-value assets are located and where most CB fires originate, communities and private landowners may be best positioned to reduce losses from CB wildfire.


Now, first of all, it’s kind of hard to parse some of the academic-ese here: “will require viewing socio-ecological risk linkages between CB fire sources and recipients as management assets rather than liabilities.”  I hope it’s clear to others; they lost me.

“We believe that a shared understanding of CB fire dynamics, based on empirical data, can strengthen the social component of these linkages and promote effective governance.”  That’s nice that they believe that, but I’d be curious about the mechanics of how that works.  Communities have their own lived knowledge of fires, and it’s hard to tell them “some scientists ran some simulations and came up with …. “.  As for effective governance of HOAs, fire departments, counties, states, and the feds: there are many problems with coordination at all scales, but it’s not clear that modeling fire transmission with their model will be more helpful than all the other fire transmission models that have existed for some time.

“shift the focus away from excluding fire in multijurisdictional settings towards improved cross-jurisdictional pre-fire planning and reducing the vulnerability of high-value assets in and around wildlands.”  Hmm. We have a system that includes both: a very complicated process and alignment of CWPPs, federal lands, and suppression that takes into account all of the above.  Perhaps the authors have developed a straw person? Or are they saying “leave the federal lands alone and focus on communities”?  Which wouldn’t be “all hands, all tools.” Or perhaps a change in focus? But then you’d have to articulate what the current focus is and what needs to change.

An alternative risk management framing of this challenge has emerged, starting with the axiom that CB fire transmission is inevitable in fire-prone mixed ownership landscapes and that private landowners and homeowners are the actors best positioned to reduce fire risk to homes and other high-value assets regardless of where the fire starts

I agree that private landowners and homeowners, utilities, ski areas, and water providers are best positioned to reduce direct risks.  But there are also people (fire suppression folks) working assiduously to keep fires away from infrastructure – and it actually works most of the time (I don’t have a cite, but I can see information on InciWeb).  This is a both/and thing.  I’m not sure how the simulations in this paper support changes to the current system.  If I’d reviewed it, I would have asked them to draw a logical line between “the system as we see it,” “what our data show,” and “how we think this information should inform changes.”

And here’s a quote from the OSU piece:

“The Forest Service’s new strategy for the wildfire crisis leads with a focus on thinning public lands to prevent wildfire intrusion into communities, which is not fully supported by our work, or the work of many other scientists, as the best way to mitigate community risk,” Dunn said.

I think this doesn’t take into consideration that the strategy is the FS chunk of the overall work, with many other bucks going to states and ultimately into communities.  They did not say it’s the best way, only what they can contribute.  Plus, thinning and prescribed burning have other desirable attributes in making forests more climate-resilient, so that less destruction occurs within them in the case of a wildfire.  Suppression folks work with a great variety of values at risk that include, but are not restricted to, communities.

What would I have said based on the data?  More fires go into the NFs than come from the NFs.  So working on reducing ignitions and spread before fires get to the FS boundary would be a good idea.  But everyone gets to conclude what they want from the data. What do you conclude?