Response to the Journal of Forestry Article: “US Forest Service NEPA Fast, Variable, Rarely Litigated, and Declining”

There’s a new paper by Morgan et al. out in the Journal of Forestry in response to the Fleischman et al. paper we discussed here and here. Since I was one of the peer reviewers for this response paper, I’d like to give some of my meta-thoughts about how all this worked out, as it relates to the biz of scientific paper publishing in general. I respect all the scientists here; we are all victims of the current system. Some of the same solutions apply here as in the fish disagreement I posted about last Friday, even though there is no misconduct in this case.

  • First, this would never have been published had some researchers not felt strongly enough about it, had the time, and had access to the original data (thank you to the original authors!) to write another paper. How often does this happen? We have no clue. I’m thinking quite seldom.
  • Second, much of this second paper IMHO was unnecessary and could have been resolved by the first set of reviewers simply saying “stick to the analyses you did, and don’t assert anything that isn’t a direct result of what you found.” Say, in this case, with statements about the FS budget.
  • Third, an open pre-review process (even at the proposal stage) would have improved the first paper and answered the questions raised in this response paper, saving us all lots of time. For example, data cleaning procedures might have been discussed and, based on that conversation, clearly documented. In any scientific field, there are only a few people who “get” the specifics of the data: how it was obtained, the lab or modelling protocols, the statistics, and so on. Certainly editors try to get those people for reviews, but they would be the first to say that’s not easy to do; and many heads are better than three.
  • Fourth, either the Forest Service should do some sort of quality control on the PALS database, or people shouldn’t use it in research without a great many caveats. (It’s true I was involved in the development of PALS, so I do have a bias toward having it up-to-date and available to the public for our own queries.)

What most interested me about Morgan et al. were their observations about the data in PALS and areas for improvement of PALS, as well as the need to make it easier to locate information on objections and litigation. Here’s just a brief example.

Our examination of the PALS-MYTR revealed very different elapsed days to completion for the three NEPA analysis types. Therefore, the type of NEPA analysis the agency uses for a proposed land management action should be a major determinant of the time it takes to complete that NEPA analysis. The agency was seeking, with the proposed rule changes, to increase the number of CEs it can use and to broaden some of the CE authorities. When a proposed action is suitable for a CE, its NEPA analysis is far more likely to take less time and far less likely to be litigated based on the litigation rates discussed below.

The PALS-MYTR contains 1,269 ongoing analyses, which Fleischman did not analyze. Of the ongoing analyses, 320 are identified as “complete”, 655 “in progress”, and 294 “on hold”. Many (505) of the ongoing and in progress analyses were initiated in FY 2016–2019, but there are over 400 ongoing analyses from FY 2005–2015. Presumably the on hold analyses are paused, but there were over 100 documents on hold as of April 2019 that were initiated between FY 2005 and 2013. This is a long time for a NEPA analysis to be “on hold.” This is another indication of a dataset that needs more complete documentation and checking and cleaning before it is used for policy analysis. Further, the number of elapsed days from initiation until April 1, 2019, for all ongoing analyses ranges from 101 to 5,203 days, with an average of 1,180 elapsed days (more than three years). There are more than 880 NEPA analyses ongoing and not marked complete that will have elapsed days greater than the five-month average the original authors portrayed as proof that NEPA is “fast.” By analysis type, 824 (65 %) of the ongoing analyses are CEs, followed by 331 EAs, and 114 EISs. The presence of 249 ongoing and in progress CEs and 187 ongoing and in progress EAs initiated from FY 2014–2018 does not indicate that NEPA is “fast.” These data may suggest that cleaning of the PALS-MYTR dataset should be completed before drawing conclusions about the timeliness or efficiency of NEPA.
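
For readers who want to poke at this kind of tally themselves, here’s a minimal sketch (not the authors’ actual code) of the elapsed-days calculation Morgan et al. describe, assuming a CSV export of the database. The file name and column names (“status”, “initiation_date”, “analysis_type”) are made up for illustration; the real PALS-MYTR export may label its fields differently.

```python
# Minimal sketch: tally elapsed days for ongoing analyses in a PALS-MYTR-style export.
# The file name and column names below are hypothetical placeholders.
import pandas as pd

AS_OF = pd.Timestamp("2019-04-01")  # snapshot date used in the response paper

df = pd.read_csv("pals_mytr_export.csv", parse_dates=["initiation_date"])

# Keep the analyses that have not been signed/finished (the "ongoing" set).
ongoing = df[df["status"].isin(["complete", "in progress", "on hold"])].copy()

# Elapsed days from initiation to the snapshot date.
ongoing["elapsed_days"] = (AS_OF - ongoing["initiation_date"]).dt.days

print(ongoing["status"].value_counts())         # counts by status
print(ongoing["elapsed_days"].describe())       # min, mean, max elapsed days
print(ongoing.groupby("analysis_type").size())  # CE vs. EA vs. EIS breakdown
```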

I think it is useful to (1) have a relatively accurate database, (2) have it available to the public, and (3) look at the database for information about managing and understanding how the FS does NEPA. Having said that, even the old GAO reports said that R-1 had more than its share of litigation. So in simple language, as I said in previous comments, you can’t just take perceived regional problems, average them across the nation, and conclude they’re not important. I mean you can, but that’s a value judgment, not necessarily “what the science says.”

If you want to read the whole paper (it’s a must-read for NEPA aficionados), you can ask for an e-reprint from the lead author.

7 thoughts on “Response to the Journal of Forestry Article: “US Forest Service NEPA Fast, Variable, Rarely Litigated, and Declining””

  1. Hi Sharon,
    It’s an honor to have one’s work seen as important enough to merit a response. That being said, as the lead author of the original paper, I found two things about this publication surprising.
    First of all, in the scientific process, the norm is to inform the original authors of a paper that a critique of their paper is being published and give them an opportunity to respond in the same issue. We were not given such an opportunity, and this reflects poorly on JOF as a scientific publisher. We learned about the critique the same way you did; I just read this paper yesterday.
    Second, we can’t figure out what Morgan et al. disagree with us about. For example, they highlight that the length of time for analysis varies between CEs, EAs, and EISs, which was something we highlighted in our original paper. They highlight some regional differences, and our original paper included an extensive discussion of regional and local differences.
    You highlight that they take issue with our statement about Forest Service budgets (which they erroneously describe as “one of our conclusions”). We reported that the number of NEPA analyses was declining over the time period of study, and wrote “This decline is likely related to the combination of flat or declining real budget allocations, retirement of experienced staff without adequate replacements, and increasing fire impacts that divert agency resources away from routine land management (National Interagency Fire Center 2019).” So we wrote a *clearly* speculative sentence, including declining budgets as one of several well-documented issues the agency faces, and cited an *agency* source that discussed these problems (elsewhere in the paper we cite a whole bunch of other sources that document flat or declining agency budgets in the face of rising fire costs). Although we presented this work in language that was clearly speculative, I haven’t found anyone who disagrees with it, including Morgan et al.! Morgan et al. look in more detail than we did at the budget, and what did they find? Flat or declining budgets. (They frame one budget measure as a slight increase, but the increase isn’t statistically significant, which is to say, it’s what a scientist would call no change. Looking at the graph presented in their paper, that budget declined through most of the study period and then had a modest increase in the last few years of data, so even if the increase had been statistically significant, most of our study data was produced in a period of declining budgets.)
    As for the data cleaning issues they highlight, give me a break. We spent hours on the phone with the PALS people, working with them to try to understand the data (and the many data cleaning issues it contains). We spent months cleaning this data before publishing, and, like any good scientists, ran many versions of our analysis using different assumptions about which data were good. Most of the patterns we report are robust to just about any data cleaning assumption. We decided not to analyze the data for ongoing projects because, from speaking with the PALS people, we learned that we would not be able to clearly distinguish projects that were discontinued or dropped from those that were suspended from those that were ongoing. Put another way, since the incomplete projects are ambiguous, analyzing those data is not likely to be very meaningful. According to the people who manage the PALS database, many of the incomplete projects that Morgan et al. report as having long time frames are likely to be projects that were dropped (but were not listed as such in PALS), hence the long time frames. And including the incomplete projects doesn’t actually lead to very different results, as Morgan et al. themselves show. In fact, they don’t point to any data cleaning decisions we made that change any actual results in substantively meaningful ways.
    We are debating whether we should write a response to Morgan et al., but it’s hard to write a response when it’s apparent that your critics’ analysis almost entirely supports your original analysis. Or to put it another way, we think Morgan et al. could have written a better paper if they had written it not as a critique of our work (which they don’t really succeed at, because their findings aren’t substantively different from ours) but as an addition to it. I found some of their work interesting at first glance, although I haven’t really been through it in a lot of detail yet. For example, the detailed analysis of budgets they did seems valuable (and seems to be consistent with our original story), and it’s interesting to know that litigation is more common in Region 1 (again, this illustrates a point we made in our original paper, i.e., that there appears to be a lot that could be learned by studying regional and forest-level variation inside the agency). We ran numerous analyses, but I don’t recall looking at litigation broken down by region.
    There’s a lot that can, and probably should, be done with this kind of data. I strongly agree with you that it’s a shame the data isn’t collected in a cleaner manner, and it ought also to be openly available to the public; there’s no reason, other than a modest investment in building an interactive public-facing web portal, that it can’t be shared openly and in real time. We shouldn’t need a bunch of academics to spend a few months cleaning data when the data should be clean and publicly available in the first place.

    • First, the article has not appeared in print and won’t until at least November. It is impossible to co-publish papers when they are published online first after peer review.

      Second, if you would like to publish a response, there is plenty of time to get it into the November or a later issue.

      Third, it was my intention to make you aware of the article in question, but I have been traveling and couldn’t do it immediately after it came out online.

      • Keith, thanks for responding! I and many others appreciate that being a journal editor is often a fairly thankless job. And yet, we value peer-reviewed research so highly! It’s quite paradoxical IMHO.

  2. And I will add to my previous comment that the FS is in a (slow, haha) process to overhaul the eMNEPA tools, the larger suite of tools that PALS ties into.

  3. For those who follow this sort of thing: https://academic.oup.com/jof/advance-article/doi/10.1093/jofore/fvab076/6503657

    Response to the response.

    Much of it boils down to talking past one another (disputes over who said what about the relative formality of a hypothesis); ultimately disappointing in that regard.

    Example:
    “Our paper tested no hypotheses,” but it implied several, which the other authors clearly stated and attempted to test systematically, the upshot being that the conclusions drawn needed formal hypothesis testing to match the strength of the claims.

    “Our paper made no causal claims,” but it clearly speculated about causes and the necessity of revising regulations, so is it not appropriate for someone to test those claims more formally?

    It will be interesting to see a response to the response to the response. It really seems like the response to the response is to claim “not applicable!”, whereas the intent of the original response was to question not the study as a whole but specifically the final conclusions drawn from it, on the grounds that they rest more weight on the data than it can support.

    Final note: it seems that the “response to the response” shifts the ground a little by claiming much more modest conclusions for the original study than that study itself made. The original article did indeed provide valuable trend data and kicked off the conversation about what works and what doesn’t in an interesting direction, but it also claimed to reach conclusions about the role of CEs vs. EAs vs. EISs, conclusions about what to make of the relative abundance of litigation, and even further claims about the merits of policy change. The response to the response doesn’t really touch on these more ambitious claims. You can couch that as speculation all you like, but that doesn’t make the claims off limits for further evaluation.

    A general question on the examples cited: is it in any way historically demonstrable that NEPA-mandated public involvement led to the change in the 10 a.m. policy or allowable-cut measures? I’m inclined to think not, unless you collapse wildly divergent histories into something for which NEPA can somehow take credit, and ignore NFMA.

