Monitoring- Solving the Puzzle

Alex Dunn raises another question that is definitely a piece of the planning rule discussion: what about monitoring? People do a lot of monitoring; at the same time, there is never enough money for monitoring. Once I spent some time attempting to frame the “monitoring problem,” and even remember doing some interviews, but I could not even achieve consensus on the framing of the problem. That’s when you know you have a serious problem.

Here are a couple of pieces to the puzzle:

A. The conundrum: who decides what, at what scale?

1. Logically, each forest would develop an integrated monitoring plan, from the broad scale down to the project level. Yet a variety of handbooks require different monitoring, so it seems like a patchwork. One unit once told me, “we don’t know what we’re going to monitor because it depends on what the new wildlife biologist is interested in.” So it seems to be a constantly shifting patchwork.

2. But some very important things don’t make sense to monitor separately by forest; they call for regionwide or species-wide monitoring plans. For a given species, why would monitoring plans vary by forest?

3. Monitoring should be done across all lands, so how does that fit? Should the FS work with other agencies, the States, and landscape-scale collaboratives?

4. Watershed monitoring makes sense down a drainage or river. That scale would then be larger than the landscape-scale collaboratives.

It’s almost like we should distinguish some basic things to monitor, say air and water quality, and do them basically the same way nationwide and across all lands, while each of the other important things to monitor deserves consideration of what scale is appropriate. Yet we expect “forest plan monitoring” to be some kind of anchor. Why? What’s that about?

B. Another piece of the puzzle is that some units have monitoring programs that seem fairly successful: annually, stakeholders go out and review the results, and the stakeholders and the unit discuss potential causes of the results, future research questions, and potential changes in management practices.

These two pieces don’t really fit together: difficulty and challenges on one hand, yet perceived success on the other.

I’m sure there are more pieces to the monitoring puzzle; perhaps by carefully examining all the pieces we could attempt to solve it. If we could decide what to monitor, and explain how we would be accountable, it might be a convincing approach to appropriators, which could then get around the funding problem.

10 thoughts on “Monitoring- Solving the Puzzle”

  1. Sharon,

    I’m also glad that Alex puts this important issue of monitoring on the table. I agree that monitoring presents a number of tough problems, including the scale and funding issues you two raise.

    How to implement a successful monitoring program, in an adaptive management framework, deserves serious consideration and an explanation of how exactly it would work. This is one reason I disliked the 2005/07 planning regulations. Here we have the USFS proposing a “paradigm shift” in planning, recommending a move towards more adaptive management. But nowhere did the agency clearly explain how exactly monitoring would be done and paid for in this new framework. Whenever I read plans that are supposedly adaptive management-based, I head straight to the monitoring section, which is, of course, usually the vaguest.

    It’s hard to be against something like adaptive management (I certainly don’t want stagnant mismanagement), but there are numerous policy issues to be worked out beforehand. If they are not, adaptive management becomes “muddling through”—same wine, new bottle.

    I’ll try to cajole a few wildlife people to speak to this issue, as monitoring wildlife populations (given NFMA’s diversity mandate) might put our discussion on an interesting track.


  2. I agree with Martin that the 2005/2008 rule didn’t clearly explain how monitoring would actually be done and paid for. But I think the problem with that rule was actually worse – we didn’t design our plans themselves to match up well with a monitoring strategy. On the surface, the basic idea was that the achievement of desired conditions would form the basis of a monitoring strategy. But in practice, we were struggling with the aspirational and timeless nature of the DCs, and with the scale mismatch between forestwide or management-area-wide DCs and implementation in smaller project areas.

    We also struggled with the design of measurable performance measures in a planning process where we typically didn’t have existing forestwide data, while facing substantial uncertainty about future trends. We couldn’t commit to quantitative targets over 15-year periods, much less the time horizons that traditional silviculture seems to require. It’s also very difficult to make commitments in a plan, because only the zoning aspects and standards/guidelines really have any meaning at the project level. So structurally, we never designed our plans to allow monitoring/evaluation to make a difference in changing them.

    Fundamentally, we never adopted the critical elements of adaptive management: establishing learning objectives, hypotheses to test, and diagnostic models to validate. Those elements were simply not in our plan model. Interestingly enough, those elements aren’t even in the EMS design that was adopted by the 2005/2008 rule, which seems more oriented to documentation and transparency than to the classic adaptive management model.

  3. So let’s take a look at what the law requires vis a vis “monitoring.” NFMA says:

    “(C) insure research on and (based on continuous monitoring and assessment in the field) evaluation of the effects of each management system to the end that it will not produce substantial and permanent impairment of the productivity of the land.”

    The statute’s monitoring focus is narrow — productivity of the land. Some may argue that land productivity should include wildlife populations and water quality. But NFMA provides no support for these views. When read in context and in light of NFMA’s legislative history, land productivity clearly refers to the soil’s ability to grow trees. Congress was worried that inappropriate clearcutting and practices like the Bitterroot’s infamous terracing were degrading soil productivity.

    There are certainly good reasons to monitor water quality and wildlife, probably as research efforts by FS scientists (e.g., northern spotted owl population monitoring). But such monitoring has no place in NFMA forest plans.

  4. There are definitely two different ideas here.

    One is that we need to monitor the effectiveness and implementation of practices, whether framed as “plan-do-check-act” from a business perspective or from a natural resource perspective. Then we need to take those monitoring results and, if the practices are not having the desired effect, try other things.

    The second is that, regardless of the utility of such an idea, NFMA plans are not where monitoring belongs.

    I am with John in that desired conditions may be a monitoring nightmare. If we only have so much funding, a strategy would be to focus on the basics: air and water quality and populations of key species. A simple plan for forests would be to describe what key questions need to be answered through monitoring, how that monitoring would be achieved (design, where, and how often), give it a price tag, and commit to doing it. Every year, results would be shared with the public, and new directions and ideas to try would be developed through public discussion.

    The original idea of EMS was that it would link to those key questions (not necessarily DCs) and be a framework for disciplined monitoring and discussion of adaptive management.

    The idea is not difficult. Yet it has been difficult in practice. And I agree with Andy that it does not need to be associated with NFMA planning.

    Yet monitoring and changing course is something that everyone supports, so we should do it, through forest plans or not.

    • Just want to make sure I’m following this correctly, as I see the virtues of simplifying the planning process, but also believe that monitoring has a place in a forest plan—even if it’s limited to some degree (e.g., soil and wildlife).

      Michael Anderson and Charles Wilkinson’s comprehensive history of NFMA summarizes the Act’s diversity mandate as requiring “FS planners to treat the wildlife resource as a controlling, co-equal factor in forest management.” If the agency is to ensure diversity (and probably viability), monitoring is a must. How else to ensure it but with a valid and reliable monitoring program?

      Andy, are you saying that monitoring wildlife is obviously a necessity, but that the specifics don’t belong in the plan?

      My other question for you, Andy, is whether your KISS proposal would simply move additional analytical requirements to the project level. In other words, the forest plan becomes NFMA/timber-centric (and simplified), but all the tough analysis gets shifted to the project level? (That was one of my primary concerns about the 05/07 planning regulations.)

      I do feel strongly that monitoring has a role to play in the process, somewhere at least. I’m reminded of the controversial Ecology Center v. Austin decision (later reversed by 9th Cir.):

      “It is arbitrary and capricious for the FS to irreversibly ‘treat’ more and more old-growth forest without first determining that such treatment is safe and effective for dependent species….the Service is asking us to grant it the license to continue treating old-growth forests while excusing it from ever having to verify that such treatment is not harmful.”

      I realize that this decision is not planning-based, but it raises some monitoring questions nonetheless.

      At some point I would like evidence—coming from a valid and reliable monitoring program—that proposed actions are having desired effects (e.g., on old growth-dependent species, soil productivity, etc.). I want more than unverified hypotheses in a plan, or project proposal for that matter.

      • Martin,

        Congress did not require wildlife monitoring in a NFMA plan. I’m of the view that if Congress didn’t require it, it doesn’t belong. I think this because forest planning is so broken that its best salvation lies in stripping the planning process of everything that isn’t required by law.

        Plan vs. projects . . . a minimalist forest plan should consist of immediately implementable timber management decisions that don’t require further project-level or NEPA planning. Then the plan can truly be said to have made tangible on-the-ground decisions. And, thus, people might be more engaged in the forest planning process.

        NEPA tiering is not mandatory. The FS could choose to write one EIS for its timber sale program (i.e., the minimal forest plan required by NFMA) and proceed to implement the individual sales thereafter without further NEPA planning.

  5. Martin- as a forest biologist I would have to say that designed studies have an equally valid, or more valid, role than monitoring in determining scientific defensibility. Both designed experiments and monitoring have roles. If they disagree, that’s a situation that “shouts watch out.”

    PS If we applied your perspective to climate science and GCM models, without the proven ability to predict on-the-ground climatic changes… Just sayin’ 😉

  6. I’m delighted to see these multiple postings on monitoring, as the practice, coupled with evaluation of monitoring results, can lead to the learning-based planning that interests me so much. The Journal of Forestry will soon be publishing a paper with Tom DeLuca as the lead author on a three-tiered monitoring approach that encourages “contextual” large-scale variables, well-distributed plot-based experiments guided by science staff, and “citizen-science” effectiveness monitoring, with the latter being the hardest nut to crack. Who cares what NFMA does or doesn’t say about monitoring – if measurement of consequences doesn’t occur, there is no point in pretending to “plan.”

    The hard part is actually DOING this, which we’ve been trying, with limited success, here on Forest Service and University of Montana management projects over the past year. First, you’ve got to pick a very few simple, robust indicators and have an idea of WHO is really going to go out there with a soil infiltrometer, WHO will store the data, and WHO will set up the sampling design. And then you need assurance that the same measurements, in the same places, will occur at a meaningful frequency.

    Martin is right – if this isn’t addressed at the outset with adequate funding, then there’s no point in trying, because getting people out there to conduct the measurements has real costs in terms of time, equipment, and eventual evaluative analysis. This raises the ugly truth that if we actually want to engage in planning that has meaning, we need to reverse where we spend our resources, putting them at the end of the project instead of the beginning.

  7. I think monitoring things is good. But what I hear Andy saying is: don’t associate it with NFMA if we can avoid it. I tend to agree with him, because I think monitoring should be integrated across plans, projects, etc., rather than being a disparate set of different monitoring topics, procedures, etc. that add up to ??

    I would like a set of monitoring procedures that are meaningful, at an appropriate scale, specific, integrated and prioritized. I would like the results reviewed regularly with the public and changes made to management based on that (or to policy, or requests for more R&D). If part of it “counts” as NFMA and part “doesn’t” I think it makes it confusing to us and collaborators.

    However the question of whether monitoring anything is required by NFMA is not something I know enough about to weigh in on.

  8. Back in 2005 I addressed some of this puzzle in A Simpler Way (Monitoring and Evaluation Edition).

    A Snip:

    For monitoring, as for the other aspects of adaptive management or management writ larger, I highly recommend W. Edwards Deming’s The New Economics: For Industry, Government, Education (1994). In The New Economics Deming outlines theory and practice for monitoring and evaluation that is much simpler and much more owned-by-practitioners than what we normally see. How many Forest Service managers are familiar with Deming? …

    Here are two article-length classics that help frame monitoring and evaluation, and highlight both the simplicity and excitement practitioners can find by framing things appropriately and finding simple means to deal with complex systems:

    Margaret Wheatley and Myron Kellner-Rogers, “What Do We Measure and Why? Questions About the Uses of Measurement.”

    John J. DiIulio, Jr. “Measuring Performance When There Is No Bottom Line.”

    Finally, it does little good to attend mainly to evaluation and monitoring without also paying attention to management in general. For those desirous of learning a bit more about this most complex and wicked undertaking, try these:

    A Simpler Way, Margaret J. Wheatley and Myron Kellner-Rogers, 1996: A magical treatment of how to deal simply with the complexity of natural and social systems that enfolds us. Treating information and relationships as co-equals, Wheatley and Kellner-Rogers lead us forward away from rigidity and over-complexification, and toward self-organization, personal identity, and coherence.

