The following comment was written by Forrest Fleischman and posted here, but I believe it deserves its own post, so here it is. Here's a link to Fleischman et al.'s original paper, "US Forest Service Implementation of the National Environmental Policy Act: Fast, Variable, Rarely Litigated, and Declining." Also, here's all the raw data (which the U.S. Forest Service has notoriously—and intentionally?—made nearly impossible to locate for many years). – mk
It's an honor to have one's work seen as important enough to merit a response. That said, as the lead author of the original paper, I found two things about this publication surprising.
First, the norm in the scientific process is to inform the original authors of a paper that a critique of their work is being published and to give them an opportunity to respond in the same issue. We were not given such an opportunity, which reflects poorly on JOF as a scientific publisher. We learned about the critique the same way you did; I read the paper for the first time yesterday.
Second, we can't figure out what Morgan et al. disagree with us about. For example, they note that the length of time for analysis varies among CEs, EAs, and EISs, which was something we highlighted in our original paper. They note some regional differences, and our original paper included an extensive discussion of regional and local differences.
You highlight that they take issue with our statement about Forest Service budgets (which they erroneously describe as "one of our conclusions"). We reported that the number of NEPA analyses was declining over the study period and wrote, "This decline is likely related to the combination of flat or declining real budget allocations, retirement of experienced staff without adequate replacements, and increasing fire impacts that divert agency resources away from routine land management (National Interagency Fire Center 2019)." In other words, we wrote a *clearly* speculative sentence, listing declining budgets as one of several well-documented problems the agency faces, and cited an *agency* source that discussed those problems (elsewhere in the paper we cite a number of other sources documenting flat or declining agency budgets in the face of rising fire costs). Although we presented this in clearly speculative language, I haven't found anyone who disagrees with it – including Morgan et al.! Morgan et al. look at the budget in more detail than we did, and what do they find? Flat or declining budgets. (They frame one budget measure as a slight increase, but the increase isn't statistically significant – which is to say, it is what a scientist would call no change. Looking at the graph presented in their paper, that budget declined through most of the study period and then rose modestly in the last few years of data, so even if the increase had been statistically significant, most of our study data was produced in a period of declining budgets.)
As for the data cleaning issues they highlight: give me a break. We spent hours on the phone with the PALS staff, working with them to understand the data (and the many data cleaning issues it contains). We spent months cleaning the data before publishing it and, like any good scientists, ran many versions of our analysis using different assumptions about which data were good. Most of the patterns we report are robust to just about any data cleaning assumption. We decided not to analyze the data for ongoing projects because, from speaking with the PALS staff, we learned that we would not be able to clearly distinguish projects that were discontinued or dropped from those that were suspended or still ongoing. Put another way, because the incomplete projects are ambiguous, analyzing those data is not likely to be very meaningful. According to the people who manage the PALS database, many of the uncompleted projects that Morgan et al. report as having long time frames are likely projects that were dropped (but not recorded as such in PALS), hence the long time frames. And including the incomplete projects doesn't actually lead to very different results – as Morgan et al. themselves show. In fact, they don't point to any data cleaning decision we made that changes any actual result in a substantively meaningful way.
We are debating whether to write a response to Morgan et al., but it's hard to write a response when it's apparent that your critics' analysis almost entirely supports your original analysis. To put it another way, we think Morgan et al. could have written a better paper if they had framed it not as a critique of our work (which doesn't really succeed, because their findings aren't substantively different from ours), but as an addition to it. I found some of their work interesting at first glance, although I haven't been through it in a lot of detail yet. For example, their detailed analysis of budgets seems valuable (and seems consistent with our original story), and it's interesting to learn that litigation is more common in Region 1 (again, this illustrates a point we made in our original paper: there appears to be a lot that could be learned by studying regional and forest-level variation within the agency). We ran numerous analyses, but I don't recall looking at litigation broken down by region.
There's a lot that can – and probably should – be done with this kind of data. I strongly agree with you that it's a shame the data isn't collected in a cleaner manner. It also ought to be openly, publicly available: there's no reason, other than a modest investment in building an interactive public-facing web portal, that it can't be shared openly and in real time with the public. We shouldn't need a bunch of academics to spend months cleaning data that should be clean and publicly available in the first place.