This will take a deep dive, but it’ll be an interesting one. In “Countering Omitted Evidence of Variable Historical Forests and Fire Regime in Western USA Dry Forests: The Low-Severity-Fire Model Rejected,” published in the journal Fire on April 3, 2023 (open access), William L. Baker, Chad T. Hanson, Mark A. Williams, and Dominick A. DellaSala critique a 2021 study, “Evidence for widespread changes in the structure, composition, and fire regimes of western North American forests,” by a number of noted scientists, including Paul F. Hessburg, Susan J. Prichard, Scott Stephens, and several others, published in Ecological Applications, August 2021 (open access).
Abstract of the first paper (emphasis added):
The structure and fire regime of pre-industrial (historical) dry forests over ~26 million ha of the western USA is of growing importance because wildfires are increasing and spilling over into communities. Management is guided by current conditions relative to the historical range of variability (HRV). Two models of HRV, with different implications, have been debated since the 1990s in a complex series of papers, replies, and rebuttals. The “low-severity” model is that dry forests were relatively uniform, low in tree density, and dominated by low- to moderate-severity fires; the “mixed-severity” model is that dry forests were heterogeneous, with both low and high tree densities and a mixture of fire severities. Here, we simply rebut evidence in the low-severity model’s latest review, including its 37 critiques of the mixed-severity model. A central finding of high-severity fire recently exceeding its historical rates was not supported by evidence in the review itself. A large body of published evidence supporting the mixed-severity model was omitted. These included numerous direct observations by early scientists, early forest atlases, early newspaper accounts, early oblique and aerial photographs, seven paleo-charcoal reconstructions, ≥18 tree-ring reconstructions, 15 land survey reconstructions, and analysis of forest inventory data. Our rebuttal shows that evidence omitted in the review left a falsification of the scientific record, with significant land management implications. The low-severity model is rejected and mixed-severity model is supported by the corrected body of scientific evidence.
I’d like to hear the response of Hessburg et al. to the charge of “falsification,” something some scientists have accused Hanson himself of.
Hanson, DellaSala, Baker, and Williams are criticizing “omitting” data? What about inappropriately lumping, analyzing, and statistically assessing data? Picking and choosing which remotely sensed product to use based on the desired outcome? Picking and choosing when to analyze every pixel versus doing nonsensical “random comparisons” of pixels?
Also, here is one of many recent fun reads about “MDPI”, and the changes they are acknowledging need to happen.
https://phys.org/news/2023-03-death-access-mega-journals.html
No deep dive required when it’s Baker, Hanson, or DellaSala. It’s amazing they continue to get published.
Worth a read. Same old tricks.
https://pubag.nal.usda.gov/download/32610/pdf
Also, isn’t Hanson the same person who for years said RAVG was of no use and overestimated tree mortality after wildfires?
https://www.mdpi.com/2673-6004/2/4/29
Guess when the data fits, you use it.
https://www.mdpi.com/2073-445X/11/7/995/html
https://www.mdpi.com/1999-4907/13/3/391
I believe both sides are way too certain about the historical fire regimes, but the Hanson crew does not allow room for nuance, and having a discussion with them is similar to talking about the merits of vaccines with an anti-vaxxer.
The historical data available for many areas is paltry. Please don’t use photos from the 1930s to suggest that is how the place looked pre-Euro settlement (Hessburg, I am looking at you). There are so many assumptions made by doing this. The idea that we’d manage an entire region to look like pictures taken 70 years after settlement doesn’t sit right. Shoot, we have entire national forests managed to reflect the results of a single small-sample study that does not randomize over study sites, because it looks only at sites defined as being representative of historical conditions. There are holes one can poke in each piece of evidence used to define HRV/NRV. To add to this, all these scientists develop models, and as the saying goes, junk in, junk out.
I think there are two facts everyone can agree on. 1) There were a lot more big trees historically. 2) There are a lot more small trees today.
Defining when a place has a low-severity regime vs. a mixed regime isn’t straightforward. I think of it as more of a spectrum, where changes in climate and ignition patterns can push the regime one way or the other.
Personally, for the frequent-fire dry and moist forests, I fall on the side of there having been a lot more low-severity fire with small high-severity patches scattered. I do believe land managers have pushed things too far and are all too willing to classify a forest as dry to justify logging. I also don’t see the agencies getting low-severity fire back into the system to maintain their treatments…and it’s that lack of low-severity fire that caused the mess we are in today. Without low-severity maintenance burning ramping up all over the place (how much money did the USFS get with the BIL?), it all seems to be an excuse to log rather than a desire to manage appropriately.
I like that reasoning. NRV is really a probability distribution, and that includes the range of variation of fire regimes for a location.
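To make that “probability distribution” framing concrete, here is a minimal sketch, assuming made-up numbers (the Beta parameters and the 20% cutoff are hypothetical, not values from Baker et al. or Hessburg et al.). It treats NRV for a site as a distribution over the fraction of burned area that burns at high severity, and shows how shifted climate/ignition assumptions move probability mass along the low-severity-to-mixed-severity spectrum.

```python
# Hypothetical illustration only: parameters and threshold are invented for
# demonstration, not drawn from either paper under discussion.
import numpy as np

rng = np.random.default_rng(42)

# NRV as a distribution over the high-severity fraction of burned area,
# rather than a single fixed number for a site.
historical = rng.beta(2, 18, size=100_000)   # mass centered near ~10% high severity
shifted    = rng.beta(4, 12, size=100_000)   # warmer/drier or more ignitions: ~25%

threshold = 0.20  # an arbitrary cutoff for labeling a regime "mixed-severity"

for name, draws in [("historical", historical), ("shifted", shifted)]:
    p_mixed = (draws > threshold).mean()
    print(f"{name}: median high-severity fraction = {np.median(draws):.2f}, "
          f"P(regime looks 'mixed') = {p_mixed:.2f}")
```

The point of the sketch is only that “low-severity” vs. “mixed-severity” becomes a question of where the probability mass sits for a given place and period, not a binary label.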