SJ and I were having a discussion which I will reproduce here, because it brings up questions beyond the NWFP.
First, I asked these questions.
I would still like to see an independent group examine how well it did on:
1. The social and economic aspects, as per the principle to "never forget the human and the economic dimensions of these problems." We know that there were economists involved (with differing opinions), and I wonder how many social scientists were involved. As I recall, Bob Lee, a sociologist, was not a proponent of the approach, and it led to a kerfuffle with Charlie Philpot, but I haven't been able to find copies of the letter exchange, which I think would be useful to future historians.
2. How well did the "government working together" work? The REO: was it another level of bureaucracy, or did it provide value? Has it naturally died out? What happened?
3. Survey and Manage: pros, cons, was it a good investment, did it add value?
4. Adaptive management areas: the story I heard was that, for litigators, cutting trees experimentally was not on the table either. Is that true? What happened such that they apparently didn't work as intended?
5. However many years later, thanks to the NWFP FAC (even though there are few people from the NWFP terrain on the E side), it is officially recognized that dry forests are different and require different management, and yet many FS people were saying that at the time. Why did it take so long to figure this out? What about SW Oregon? Were regional differences and knowledge adequately considered? To what extent was "the science" Corvallis-centered, and has this monopoly been adequately broken up? Why isn't the FS or BLM or OSU figuring out how to do an independent review of the NWFP?
After all, one of Clinton's principles was that the government should work for us. The same government that can do an after-action review on a vehicle rollover can't initiate a review of a much more extensive and expensive process?
It doesn't make sense, unless powerful entities don't want to hear what might come up. But all of those entities (OSU, FS, BLM) have a potential COI, so it would be interesting to design a truly independent review.
SJ added links to more information.
The monitoring reports (available here: https://www.fs.usda.gov/r6/reo/monitoring/) and the Science Synthesis (available here: https://www.fs.usda.gov/pnw/pubs/pnw_gtr966.pdf) answer many if not most of your questions.
Another source – perhaps the Cliff Notes version – is the law review article I wrote with Professor Mike Blumm (available here: https://lawcommons.lclark.edu/faculty_articles/146/).
A longer form source is The Making of the Northwest Forest Plan by Franklin, Johnson, and Reeves (available from booksellers).
After examining and reflecting, I thought that perhaps we were talking past each other. I was talking about "how well it worked and what was learned about how we could have done it better (more efficiently, effectively, and with fewer negative impacts)." I was basically thinking from the management or public administration perspective.
So I looked up Program Evaluation, and it turns out that GAO (the Government Accountability Office) had this interesting report. Check out (3) continuous learning. But as I reviewed this, I thought the NWFP was not alone; there are probably many FS programs that have not had formal program evaluations. Suggestions?
So far it doesn't appear that the NWFP has had a formal program evaluation, let alone an independent external review. It would be a great deal of work, but so was (is) the NWFP. Characteristics might include independence, transparency to the public, and opportunities for listening to stakeholders. Yes, a program evaluation could become a bureaucratic morass if not carefully designed and implemented, but that could equally be true of the NWFP itself, and how would we know? Finally, I think it is interesting that the Colorado and Idaho State Roadless Rules had a national FACA committee to ensure that the national perspective was taken into consideration; apparently this wasn't thought necessary for the NWFP, which covers three states.
I think about this a lot. For big, very visible efforts like the NWFP, but also for smaller experiments like the A to Z project on the Colville NF, or Barry Wiensma's small sales program on the IPNF–all of these efforts likely resulted in some very important lessons on what worked, what was hung up by path dependency (or what Jennifer Pahlka calls "waterfall culture"), and what could improve future efforts while still working within existing structures. Oh, how I love these questions! The problem is that few within or outside of federal agencies seem to want to fund this type of work. It seems to me it would work better if the funds to accomplish program evaluation were written into the budget for any experiment or pilot program initiated by the FS or other NR agencies. Then it could be put out to bid or accomplished through agreements with universities.
Chelsea, a couple of thoughts.
There is a real power issue involved in who does the evaluating. For example, when GAO wrote their review of litigation and I was in WO NEPA, I wrote comments saying that even if litigation was just a problem for Region 1, it was still a problem. That didn't make it into the final document.
I also have had bad experiences in coop agreements with one university.
My best experiences in similar spaces have been with conflict resolution professionals: their real-world experience at listening to different views and working through disagreements tends to help. Given my experience, an evaluation could be done by the agency, with the design assisted by environmental conflict resolution groups and with stakeholder involvement (including academics, including public administration academics).
It would be fun to try on something relatively small and do a lessons learned.
A couple of comments: First, I would hope that your one bad experience with a coop agreement with a university didn't ruin you on the benefits of partnering with universities. Many of us (faculty) are really committed to better government in action.
Also interesting that you cite concern about disagreements and resolving conflicts in the context of program evaluations/reviews. I don't see program evaluations as needing to communicate any kind of consensus; rather, they would unearth the full range of perspectives on how well a program met its intended goals and resulted in desired outcomes. The latter (outcomes/goals) are where stakeholder involvement and collaborative process are most crucial.
No, absolutely not. I would not do a coop agreement with that specific group again.
My point was that in my experience ECR professionals were more used to developing ways to organize and get information from people with different perspectives. Different groups would tell the ECR folks things that they wouldn’t tell the agency directly. Let me think on this some more and ask some ECR friends if they have ever been involved in program evaluation.
Thomas, Franklin, et al. evaluated the early years of the NWFP in 2006:
Abstract: In the 1990s the federal forests in the Pacific Northwest underwent the largest shift in management focus since their creation, from providing a sustained yield of timber to conserving biodiversity, with an emphasis on endangered species. Triggered by a legal challenge to the federal protection strategy for the Northern Spotted Owl (Strix occidentalis caurina), this shift was facilitated by a sequence of science assessments that culminated in the development of the Northwest Forest Plan. The plan, adopted in 1994, called for an extensive system of late-successional and riparian reserves along with some timber harvest on the intervening lands under a set of controls and safeguards. It has proven more successful in stopping actions harmful to conservation of old-growth forests and aquatic systems than in achieving restoration goals and economic and social goals. We make three suggestions that will allow the plan to achieve its goals: (1) recognize that the Northwest Forest Plan has evolved into an integrative conservation strategy, (2) conserve old-growth trees and forests wherever they occur, and (3) manage federal forests as dynamic ecosystems.
https://faculty.washington.edu/jff/Thomas_Franklin_Gordon_Johnson_NW_Forest_Plan_Review_CB_2006.pdf
Mulder et al. developed a detailed NWFP monitoring strategy in 1999:
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=bf627ec2c4082718c12868bbd0b43ab5540c0791
I don’t know if the agencies ever fully implemented this strategy, but I doubt it, since monitoring is the first thing to get axed when resources are scarce, as this report from 2019 makes clear:
https://www.researchgate.net/profile/Tzeidle-Wasserman/publication/331848980_Broader-Scale_Monitoring_for_Federal_Forest_Planning_Challenges_and_Opportunites/links/5cba04ee4585156cd7a46ffb/Broader-Scale-Monitoring-for-Federal-Forest-Planning-Challenges-and-Opportunites.pdf?_tp=eyJjb250ZXh0Ijp7ImZpcnN0UGFnZSI6InB1YmxpY2F0aW9uIiwicGFnZSI6InB1YmxpY2F0aW9uIn19
Although not focused on the NWFP, this report suggests a way forward for monitoring the NWFP after (if?) its forthcoming amendment is completed.
None of these constitutes an outside, independent review of the NWFP, and any such effort today would likely be at least somewhat compromised by the data that wasn’t gathered and the analysis that wasn’t done over the last couple of decades.
Rich J., yes, certain authors of the effort evaluated the NWFP, but that lacks a bit of independence; plus, it doesn't appear that the voices of other affected parties were involved.
The Mulder monitoring strategy is just about the ecological part. Among the critiques of the monitoring I have heard (and remember, I was far away at the time): it seemed like a cash cow for certain biologists to study certain things and tap into the pool of NFS $. The other thing I heard (from a plant person) was their frustration at having to keep track of a "species" that only one scientist could identify as a separate species. My suspicion is that that was the tip of the skeptical iceberg, but it was not cool to question too publicly at the time.
What is interesting to me about the "Broader scale" paper is that it assumes that larger-scale monitoring is useful and the problem is how to do it (answer: more funding). Another question would be: "What forest monitoring has been useful? To whom? What are the characteristics of that monitoring?" Of course, the 2012 Rule says to do monitoring. But the effort to draft the Rule never did an evaluation of previous forest monitoring efforts and what turned out to be useful, for what kinds of decisions, and whether any adaptation based on that was ever done. In other words, was it a good investment in terms of decision-making?
If we go to the Blue Mountains Forest Partners example, stakeholders had specific questions and seem to have used monitoring to answer those questions in real time outside the Plan Revision framework. I can’t help but point out that as a scientist, sometimes designed experiments are better than monitoring to answer specific questions.
It is assumed that monitoring will lead to adaptation based on that information, but we also don't know how often that occurs for plan-level monitoring.
“It has proven more successful in stopping actions harmful to conservation of old-growth forests and aquatic systems than in achieving restoration goals and economic and social goals.”
I don't know why this should surprise anyone. Meeting standards for projects is under the control of the agency and mandatory (often necessary in forest plans because of substantive legal requirements). Goals and objectives are "aspirational," and depend on funding and who knows what else. Add to this that pulling things together to achieve restoration goals on the land is much more under the control of the agency than influencing economic and social goals for a community. What I have often seen with forest plans is that there was pressure to project unrealistic ("optimal") output levels, given the other operating requirements, which naturally looks like a failure when they don't happen.
Yes, there have been several NWFP monitoring reports – check out Treesearch. The 30-year report will be coming out soon.
Monitoring reports are interesting but not the same as evaluation.
I thought this was interesting from the 20-year report: https://www.fs.usda.gov/r6/reo/monitoring/downloads/lsog/Nwfp20yrMonitoringReportSummaryOldGrowth.pdf
“Losses of about 2.5% from wildfire and 2.5% from timber harvesting were expected each decade. Observed losses from wildfire were about what was expected (5% over two decades), but losses from timber harvesting were about one quarter of what was anticipated. Results are consistent with expectations for older forest abundance, diversity, and connectivity for this period of time. Nothing in the findings indicate that attainment of desired outcomes over the next few decades is not feasible; however, we noted some portions of the NWFP area have been setback by decades from achieving those outcomes particularly resulting from large wildfires in the fire-prone portions of the NWFP area.” Clearly before the recent large fires.
Sharon, where did you pull those screenshots from?
https://www.gao.gov/assets/gao-21-404sp.pdf
It's hard to imagine how one can fix an issue without a full evaluation of how the current situation is working. It's equally hard to imagine a group fixing something that doesn't include the on-the-ground folks who were responsible for implementing the NWFP on the ground. Could be that the objective is not to fix something, but to implement a predetermined policy, such as the administration's position on mature and old growth.