The following comment was written by Forrest Fleischman and posted here, but I believe it deserves its own post, so here it is. Here’s a link to the Fleischman et al. original paper, “US Forest Service Implementation of the National Environmental Policy Act: Fast, Variable, Rarely Litigated, and Declining.” Also, here’s all the raw data (which the U.S. Forest Service has notoriously—and intentionally?—made nearly impossible to locate, for many years). – mk
It’s an honor to have one’s work seen as important enough to merit a response. That said, as the lead author of the original paper, I found two things about this publication surprising.
First of all, in the scientific process, the norm is to inform the original authors of a paper that a critique of their paper is being published and give them an opportunity to respond in the same issue. We were not given such an opportunity, and this reflects poorly on JOF as a scientific publisher. We learned about the critique the same way you did; I only read this paper yesterday.
Second, we can’t figure out what Morgan et al. disagree with us about. For example, they highlight that the length of time for analysis varies between CEs, EAs, and EISs, which was something we highlighted in our original paper. They highlight some regional differences, and our original paper included an extensive discussion of regional and local differences.
You highlight that they take issue with our statement about Forest Service budgets (which they erroneously describe as “one of our conclusions”). We reported that the number of NEPA analyses was declining over the time period of study, and wrote “This decline is likely related to the combination of flat or declining real budget allocations, retirement of experienced staff without adequate replacements, and increasing fire impacts that divert agency resources away from routine land management (National Interagency Fire Center 2019)” – so we wrote a *clearly* speculative sentence, including declining budgets as one of several well-documented issues the agency faces, and cited an *agency* source that discussed these problems (elsewhere in the paper we cite a whole bunch of other sources that document flat or declining agency budgets in the face of rising fire costs). Although we presented this work in clearly speculative language, I haven’t found anyone who disagrees with it – including Morgan et al.! Morgan et al. look in more detail than we did at the budget, and what did they find? Flat or declining budgets. (They frame one budget measure as a slight increase, but the increase isn’t statistically significant, which is to say, it’s what a scientist would call no change. Looking at the graph presented in their paper, that budget declined through most of the study period and then had a modest increase in the last few years of data, so even if the increase had been statistically significant, most of our study data was produced in a period of declining budgets.)
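The significance point is easy to illustrate. Here is a minimal sketch in Python, with entirely made-up budget figures (not the actual BLI data): a noisy, mostly flat series with a late uptick can have a positive fitted slope whose t-statistic is nowhere near the threshold for significance.

```python
# Hypothetical illustration: a visually "increasing" budget series whose
# trend is statistically indistinguishable from zero. The numbers are
# invented for the example, not taken from the Morgan et al. data.
import math

def trend_slope_t(values):
    """OLS slope of values against year index 0..n-1, and its t-statistic."""
    n = len(values)
    xs = list(range(n))
    mx = sum(xs) / n
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, values))
    slope = sxy / sxx
    intercept = my - slope * mx
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, values)]
    sse = sum(r * r for r in residuals)
    se = math.sqrt(sse / (n - 2) / sxx)  # standard error of the slope
    return slope, slope / se

# Mostly flat, noisy "inflation-adjusted budget" with a modest late increase
budget = [100, 98, 97, 99, 96, 95, 97, 96, 99, 103]
slope, t = trend_slope_t(budget)
print(f"slope = {slope:.2f} per year, t = {t:.2f}")
```

The fitted slope is positive, but its t-statistic falls well below the roughly 2.3 critical value for 8 degrees of freedom, so a scientist would report no detectable change.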
As far as the data cleaning issues they highlight, give me a break. We spent hours on the phone with PALS people, working with them to try to understand the data (and the many data cleaning issues it contains). We spent months cleaning this data before publishing it, and like any good scientist, ran many versions of our analysis using different assumptions about which data were good. Most of the patterns we report are robust to just about any data cleaning assumption. We decided not to analyze the data for ongoing projects because, from speaking with the PALS people, we learned that we would not be able to clearly distinguish projects that were discontinued or dropped from those that were suspended from those that were ongoing. Put another way, since the incomplete projects are ambiguous, analyzing those data is not likely to be very meaningful. According to the people who manage the PALS database, many of the projects that Morgan et al. report as having long time frames and not being completed are likely projects that were dropped (but were not listed as such in PALS), hence the long time frames. And including the incomplete projects doesn’t actually lead to very different results – as Morgan et al. themselves show. In fact, they don’t point to any data cleaning decision we made that changes any actual result in a substantively meaningful way.
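For the curious, the kind of robustness check described here can be sketched in a few lines: compute the same summary statistic under different cleaning assumptions and compare. The records and field names below are invented for illustration; they are not the actual PALS schema.

```python
# Hypothetical sketch: re-run a summary statistic (median elapsed days)
# under two data-cleaning assumptions and compare the results.
# All records and field names are made up for illustration.
from statistics import median

records = [
    {"type": "CE",  "elapsed_days": 110,  "status": "complete"},
    {"type": "CE",  "elapsed_days": 95,   "status": "complete"},
    {"type": "EA",  "elapsed_days": 400,  "status": "complete"},
    {"type": "CE",  "elapsed_days": 2100, "status": "in progress"},  # possibly dropped
    {"type": "EIS", "elapsed_days": 900,  "status": "complete"},
]

def summary(rows):
    """One example statistic; a real check would re-run the full analysis."""
    return median(r["elapsed_days"] for r in rows)

# Assumption 1: completed projects only (ambiguous records excluded)
completed_only = [r for r in records if r["status"] == "complete"]
# Assumption 2: include ongoing/ambiguous records as well
everything = records

print("completed only:", summary(completed_only))
print("all records:  ", summary(everything))
```

If the two numbers tell the same substantive story, the result is robust to the cleaning decision; if they diverge sharply, the ambiguous records matter and deserve scrutiny.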
We are debating whether we should write a response to Morgan et al., but it’s hard to write a response when it’s apparent that your critics’ analysis almost entirely supports your original analysis. Or to put it another way, we think Morgan et al. could have written a better paper if they had written it not as a critique of our work (which they don’t really succeed in, because their findings aren’t substantively different from ours), but as an addition. I found some of their work interesting at first glance, although I haven’t really been through it in a lot of detail yet. For example, the detailed analysis of budgets they did seems valuable (and seems to be consistent with our original story), and it’s interesting to know that litigation is more common in Region 1 (again, this illustrates a point we made in our original paper, i.e., that there appears to be a lot that could be learned by studying regional and forest-level variation inside the agency). We ran numerous analyses, but I don’t recall looking at litigation broken down by region.
There’s lots that can – and probably should – be done with this kind of data. I strongly agree with you that it’s a shame the data isn’t collected in a cleaner manner – and it ought also to be openly, publicly available. There’s no reason, other than the modest investment needed to build an interactive public-facing web portal, that it can’t be shared openly and in real time with the public. We shouldn’t need a bunch of academics to spend a few months cleaning data when the data should be clean and publicly available in the first place.
6 thoughts on “Response to the Response to the Journal of Forestry Article: “US Forest Service NEPA Fast, Variable, Rarely Litigated, and Declining””
Hi Matthew, I responded on that post as well but thought I’d tie in to this one too.
As a current FS NEPA-adjacent person, and a graduate of the University of Montana, I want to point out that the original Fleischman paper and his response here comport closely with my relevant, albeit individual, experience in the field, and I find this response largely confusing, aside from its raising some methodological questions about the use of current FS databases for drawing conclusions about FS NEPA processes. Given what Fleischman points out above re: not being informed, in addition to some points of confusion about what is actually being argued, I feel the need to respond, as this paper does not strike me as the best work of the UMT school of forestry, nor the best reflection of their professionalism, frankly.
Bottom line up front:
– The respondents purport to argue against Fleischman et al.’s judgment of “fast,” but they don’t deal with defining and operationalizing “fast” in any manner comparable to the Fleischman paper; they simply assert “it is not fast” by fiat at multiple points.
– The respondents take issue with Fleischman et al. presenting NEPA analysis as an accomplishment on par with on-the-ground management, when the original paper does no such thing. There’s also a really confusing line drawn (or rather implied) between “accomplishment” and “legal mandate,” as if the FS, as a government agency, didn’t do literally most of what we do on the basis of legal mandates.
– The respondents frame budget rates as flat to increasing, but their own data (inflation-adjusted), by their own admission, simply shows flat budgets at best. So why frame it as flat to increasing?
– They state that the Fleischman et al. paper didn’t formally test hypotheses, but the response makes several conclusion-like statements, including key statements about whether or not NEPA is “fast” and whether or not NEPA analysis is an “accomplishment,” in a completely armchair-speculative fashion.
The only formally tested hypotheses, i.e., the quantitative ones, may nuance or quibble with the Fleischman paper, but they certainly don’t offer significant counter-conclusions. All of the larger counter-conclusions in this paper are at least as speculative as the statements they take issue with, while ironically being framed as the results of more rigorous hypothesis testing.
– The respondents seem to be unfamiliar with the actual functioning of FS units and planning: “Every dollar spent on NEPA analyses is a dollar not spent accomplishing on-the-ground activities; time and funding used for NEPA represent an opportunity cost to the FS.” Such a statement, presented as a conclusion, can only be true if a) NEPA/NFMA planning steps are solely an obstacle, not a tool, and b) fieldwork for project-level NEPA doesn’t result in accomplishments for other program areas.
One can have a philosophical/policy argument about the utility of NEPA/NFMA planning relative to fieldwork, but one cannot present this as a sober conclusion read off of the scientific data. Why? Because the legally mandated planning is part of getting to the on-the-ground fieldwork, and fieldwork during the planning process often itself counts as an accomplishment for certain program areas (or at least is recorded as one).
More detailed thoughts below
First, to the broad points of the paper: it seems to turn on the following points, which I’ll excerpt as key to their argument:
-“Fleischman used a single large and sparsely documented dataset to draw important conclusions about FS implementation of NEPA” – the Fleischman paper used a dataset appropriate to the conclusions drawn, shorn of specifics not material to the question their paper investigated.
-“In nominal terms, the eight resource program BLIs combined increased significantly at a rate of 2.4% per year,” i.e., the budget isn’t actively decreasing. – But their own inflation-adjusted numbers show a flat budget, not a “significant” increase.
– “However, we believe the success of a land-management agency is better measured by on-the-ground accomplishments, not the number of NEPA analyses it produces.” – This is a telling statement: the kind of silly rhetorical straw man that we saw turned against the planning process by the previous administration. Did Fleischman et al. say that NEPA is more important than on-the-ground work? Did anyone ever? Not that I know of; this is again nothing but a straw man used to frame those who might disagree with you as valuing paperwork over fieldwork.
-“The presence of 249 ongoing and in progress CEs and 187 ongoing and in progress EAs initiated from FY 2014–2018 does not indicate that NEPA is ‘fast’” and “There are more than 880 NEPA analyses ongoing and not marked complete that will have elapsed days greater than the five-month average the original authors portrayed as proof that NEPA is ‘fast.’” – Without reviewing it right now, I can’t recall exactly what Fleischman et al. framed as the “fast” part of their judgment, but based on my recollection, was it not a comparison to peer agencies? So this seems to be more blatant editorializing from the UMT (BBER) folks that doesn’t respond materially to the claims actually made by Fleischman et al. I often find NEPA subjectively slow, and there could be more operationalizing of what constitutes fast, but as it’s done in this paper (i.e., completely arbitrarily), it’s akin to my arguing that a tree reaching CMAI at 90 years isn’t fast because a different species in a different climate can do so at 50 years.
So, to an element of Fleischman’s preliminary response here: “We decided not to analyze the data for ongoing projects because, from speaking with the PALS people, we learned that we would not be able to clearly distinguish projects that were discontinued or dropped from those that were suspended from those that were ongoing. Put another way, since the incomplete projects are ambiguous, analyzing those data is not likely to be very meaningful. According to the people who manage the PALS database, many of the projects that Morgan et al. report as having long time frames and not being completed are likely projects that were dropped (but were not listed as such in PALS), hence the long time frames. And including the incomplete projects doesn’t actually lead to very different results – as Morgan et al. themselves show.”
Yes, yes, and yes – we strive to manage the PALS database as best as possible, but there are a number of problems, particularly at the field and forest level, with keeping it accurate enough to draw meaningful conclusions from. The greatest of these problems is simply the interface and workflow, which make it unnecessarily time-consuming to keep projects up to date (multiple approval layers, old software, counterintuitive interfaces). The inclusion of incomplete projects by Morgan et al. strikes me as, at best, a methodological quibble, given the substantially similar conclusions that flow from it.
Finally, the conclusion of the study implications: “Although conducting environmental analysis is a necessary step in federal land management, completing NEPA analyses is not a substitute for accomplishing on-the-ground management activities, and resources invested in NEPA analyses represent an opportunity cost to the FS.” What can this sentence possibly mean? This is nearly endless waffling for a single sentence. So, NEPA is necessary, but costs money, but isn’t the same thing as implementation? Really, this is banal to the point of embarrassing for a published conclusion. Another piece: “Fleischman portrayed the completion of NEPA analyses as an accomplishment measure rather than a necessary step in planning and analyzing potential outcomes of land management activities.” Fleischman et al. did so, I would guess, because completed NEPA is tracked as an accomplishment in the FS FACTS database, since it’s a legally mandated activity preceding any and all on-the-ground management. There’s an interesting implication, indicative of sloppy thinking in this paper, specifically that legally mandated steps for on-the-ground management are somehow *not* the same as program areas working towards relevant goals within the organization. The work we do in the FS all flows from legal mandates. There isn’t some binary discrepancy between planning and implementation, but that is a talking point often trotted out against the planning process.
Ultimately, I need to spend more time with the quantitative data in this paper, but the qualitative conclusions drawn, especially those framed as somehow going against the findings of Fleischman et al., are simply sloppy on my first read here.
Anonymous.. thanks for your thoughtful comment.
“I feel the need to respond as this paper does not strike me as the best work of the UMT school of forestry nor the best reflection on their professionalism, frankly.”
“are simply sloppy on my first read here.”
I honestly don’t get the vehemence here.
I agree with you that “The work we do in the FS all flows from legal mandates. There isn’t some binary discrepancy between planning and implementation, but that is a talking point often trotted out against the planning process.” At the same time, (as a veteran of Process Predicament) and as a former NEPA person in DC, it is legitimate to ask “is this the best way to go about meeting our legal mandates?” and “why do some districts’ processes go more quickly than others?”. And of course, “what influences the likelihood of litigation?”. These are the same questions we’ve all been asking for at least the last 20 years; in fact, the NEPA for the 21st Century research effort was all about that. I, for example, am not “against the planning process,” but I wonder whether some ways of planning are more effective/efficient than others. And I acknowledge that’s mostly in the eye of the beholder.
Apologies for letting my wording get a bit grandiose there; a few of the conclusions definitely struck me as a bit close to the stock “NEPA is slow, need to do less of it” thing that gets trotted out as a panacea at times, both internally and externally to the organization. Obviously I’m speaking strictly in terms of my own personal opinion here. Some of that heat can and should be ignored, but I don’t know how to edit comments!
As we’ve seen, the “let’s just do less” thing can lead to litigation about that, ultimately miring the whole effort anyway – e.g., the new FS CE categories, or the 2020 CEQ regs. It’s something I’ve watched with frustration, and I thought the conclusions of the Fleischman paper were more helpful in pointing out problem and solution areas than the framing of a critique that struck me as off base. Just to lay my cards on the table there.
Recognizing that the conclusions are more nuanced than “do less NEPA,” I really think the quantitative discussion and the limitations around PALS and the MYTR are worth discussing.
To distill some points, sans vehemence:
– I find the reinforcement of the planning/implementation dichotomy, especially in the form of a scholarly publication, problematic, as that’s the kind of dualistic thinking that I think leads to, as alluded to above, some of the more ham-fisted parts of regulatory change (page and time limits in NEPA being an example that really amounts to a process shell game where you put all the pages in appendices, or something).
– Quantitative data on what influences the likelihood of litigation, like the Fleischman paper’s, is certainly of value, as is examining the impact of different measures of project timeliness.
– Saying it “is not fast” is not of much value; a lot more work could be done there.
– What’s the most effective way of meeting the legal mandates – I’m with you there. The future of condition-based projects, for instance, will make that an important question to re-visit.
– An area of potential agreement for all involved: there are opportunities for better public-facing FS data that combines some of the functions of FACTS and PALS for a more global view, making it less difficult to tie NEPA to accomplishments without having to do a multi-day GIS exercise. I complain about seeing a dichotomy between planning and implementation, but it’s true that the data reinforces that way of thinking based on how it is housed and presented.
Thanks, Anonymous! Your mention of “fast” reminds me of Fred Norbury, Director of EMC in the WO, during the Process Predicament effort: “how can we say it takes too long and costs too much if we don’t know how long it takes and how much it costs?” I think this was about 20 years ago (?).
Your last point reminds me of my suggestion for an FIA-like-structured People’s Database effort https://forestpolicypub.com/2013/04/27/needed-coalition-for-public-access-to-information-on-national-forests-aka-the-peoples-database/ .. from 2013.
Forrest, thanks for writing this and engaging in this dialogue. At the meta level, the journal editors make the decision to publish and whether or not to engage the original authors. It seems to me that if we are to depend on the processes of the science biz to identify what is the “best” science, perhaps journals should all have the same rules on this?
It seems to me that this is one of those topics (and it’s not too late, IMHO) for which co-designed and co-produced research might fit well. Many of us are curious about questions that could be answered by (a potentially cleaned-up) PALS and the objection and litigation databases, including the FS itself, which could use them for better management of the process.
I think the underlying issue with the current debate is that the conclusion is “it’s not a problem,” which goes against some stakeholders’ observations that “it is a problem” to them.. From the original paper.. I don’t think you can get from the PALS database to the conclusion that the “FS did not need to update its reg.” It’s a conclusory bridge too far, IMHO. If I’d reviewed the original paper, that’s what I would have said. As I said in my comments here at the time, you can’t (shouldn’t) lump permits for Scout Camps with fuels reduction projects in Montana to arrive at an averaged conclusion.
“Our data do not provide support for the current proposal to expand the number of projects eligible for CEs (US Forest Service 2019) and do not support a significant justification for CEQ’s proposed rule-making (Council on Environmental Quality 2020) and associated executive action surrounding NEPA. In contrast to the justification for the proposed rule changes, the vast majority of USFS projects are completed quickly, use existing authority to decrease paperwork, and are not litigated. The average NEPA project takes the USFS less than 5 months to complete—this is largely due to the existing CEs, which cover the vast majority of the agency’s activities. We have presented evidence that suggests that project processing delays are likely due not to NEPA, and suggest, based on the USFS’s own analysis of its situation, that lack of available budget and staff is the main barrier to efficient project processing. There does not appear to be a general problem with projects being delayed by NEPA, and past studies have found little improvement from efforts to alter rules to speed NEPA timelines (DeWitt and Dewitt 2008). Instead, we find that a very small number of projects take a long time to get through the NEPA pipeline, as should be expected given the highly heterogeneous nature of the USFS’s multiple use mission and the need for careful analysis of complex and/or highly controversial projects. There is substantial variation between units that might be leveraged to improve project management. More efficient use of the National Forest Management Act Planning process to make clear high-level decisions would likely contribute to more efficient project-level NEPA decisionmaking (Brown and Nie 2019).”
In regard to budgets and management efforts: money generated by timber sales in R-8 Appalachian forests from the ’60s through the ’80s provided roughly half the funds needed for Forest Service management programs, including road maintenance. Intensive lobbying by the Sierra Club and other preservation-type groups decimated the timber program beginning in the early ’80s and did away, for all practical purposes, with this source of funds by the ’90s. The loss of these funds, and the loss of wildlife habitat created by the sales, have resulted in the loss of important aspects of the Forest Service management program over the past 30+ years.