Science Forum Panel 3: Species Diversity

This morning, the third panel of the science forum discussed the latest approaches to species diversity.

Kevin McKelvey of the Forest Service Rocky Mountain Research Station began by emphasizing monitoring of species, where quantification is essential. (slides pdf) In the overall framework of planning, then monitoring, then a new plan, there is a very different data standard for planning than for monitoring.  In planning, things really can’t be quantified – you ask what will fit the mission statement.  The plan is the broad aspirational thing, but monitoring is the reality.  He used a business analogy, where planning is launching a new product line, and monitoring is looking at the quarterly profits.

If monitoring is so important, why haven’t we done more?  McKelvey gave two reasons.  First, science has not provided the appropriate tools and direction.  Second, monitoring has been too expensive and difficult.  But there are new monitoring methods that weren’t available even the last time we wrote a planning rule. 

In the old way of monitoring, the gold standard was collecting population trend and size data over time.  But you can’t get population data across large areas.  You’d have to repeatedly capture a large portion of the population, which isn’t really possible across large geographic domains.  Plus, even if you can get the data, it’s not really that useful.  McKelvey used the example of the dynamics of voles, whose population numbers jump all over the place, so you can’t tell how they are doing.  He also explained that indices aren’t useful, because we often don’t know the relationship between the index number and the population.  One index often used is habitat.  But he used an example where more grass doesn’t necessarily mean more elk: maybe the elk got shot out, or the grass was in the wrong place.  Another index often used is surrogate species.  But he used the example of elephants, which don’t indicate what other species are present.

Over the past three years a new idea has developed: looking at presence or absence.  Presence/absence is not a surrogate for species abundance, but is a separate idea.  It has to be both: presence and absence.  You can estimate the likelihood of the false absence rate.  Today, nearly an entire branch of statistics has developed around this idea.  You can look at the area occupied over time, and whether or not the range is contracting.  McKelvey said that species range is a better metric than population size for long-term viability.  It’s also important for monitoring range shifts due to climate change.  Plus, the spatial part of the analysis is a nice hook to looking at management being done on the land.  Presence/absence data are also easier to collect than abundance data. 
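To make the idea concrete, here is a minimal sketch of the kind of occupancy statistics McKelvey is pointing to: repeated presence/absence surveys let you estimate both the probability a site is occupied and the probability of detecting the species when it is there, which is what lets you quantify the false absence rate. The simulated data, parameter values, and number of visits below are invented for illustration; real analyses typically use dedicated occupancy software.

```python
# A minimal single-season occupancy sketch: from repeated presence/absence
# surveys, estimate the probability a site is occupied (psi) and the per-visit
# detection probability (p), which together give the false-absence rate.
# Illustrative only; not a substitute for dedicated occupancy software.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_sites, n_visits = 200, 4
true_psi, true_p = 0.6, 0.4

occupied = rng.random(n_sites) < true_psi
detections = (rng.random((n_sites, n_visits)) < true_p) & occupied[:, None]

def neg_log_lik(params):
    psi, p = 1.0 / (1.0 + np.exp(-params))        # logit scale -> probabilities
    d = detections.sum(axis=1)                    # detections per site
    k = n_visits
    # Sites with >=1 detection are certainly occupied; all-zero sites are either
    # occupied-but-missed or truly unoccupied.
    lik = np.where(
        d > 0,
        psi * p**d * (1 - p)**(k - d),
        psi * (1 - p)**k + (1 - psi),
    )
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
false_absence = (1 - p_hat) ** n_visits           # chance of missing an occupied site
print(f"occupancy ~ {psi_hat:.2f}, detection ~ {p_hat:.2f}, "
      f"false-absence rate over {n_visits} visits ~ {false_absence:.2f}")
```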

There is an exciting new suite of tools: forensic DNA work – collecting hairs, scats, etc.  Just by sampling water, you can find the presence of fish DNA.  This radically decreases the cost, and the level of skill required of the person collecting the data.  It doesn’t demand the skills of something like a breeding bird survey.

Presence/absence has some problems because it is insensitive to things like habitat fragmentation.  It tells you the “wheres” of it all but not how the population is put together.  So McKelvey urges a second monitoring method: looking at landscape genetics, which is really good at filling the gaps in knowledge.  In most cases, if you’re handling hair, scat, or other detritus, you have enough material to do genetic sampling.  Historically, lab costs have been a big deal.  In 1990 it cost $10 for one base pair.  Those costs were reduced to only $1 for up to 1 million pairs, and now it’s even 1/3 of that.  This stuff will be cheap and fast.  McKelvey gave an example of a study on wolverines, where they identified corridors based on empirical data. 

Marilyn Stoll, a biologist with the Fish and Wildlife Service in Florida, discussed the integration of science, policy, and stakeholder involvement in her recovery work in the Everglades. (slides pdf) She said that there are many legal authorities for an ecosystem approach, including the affirmative obligations of section 7(a)(1) of the Endangered Species Act. 

Stoll said that recovery plans contain important information that should be used in land management planning.  For instance, the South Florida recovery plan covers 68 species, and it contains new science and recovery tasks.  It addresses 23 natural ecosystems and lists restoration tasks.  She explained the need to communicate with others in order to implement the actions.  She mentioned that recovery plans are helpful when you look at the reasons the species were listed and what the threats were.

Stoll described a conceptual model under ESA section 7(a)(2) where you look at how species status can decline from healthy populations, to species of concern, candidate, proposed, threatened, endangered, and lastly in jeopardy.  It might be helpful to think about where in the model the species currently is, and which way it’s headed.  You look at the baseline, the status of the species, the effects of the proposed action, and cumulative effects (state and private actions) – those are the components of the jeopardy analysis.  

In the Everglades, she is looking at water quality, timing, and distribution.  There is an engineered system they are trying to restore.  They are developing an integrated science strategy, with adaptive management at different scales.  More importantly, they are communicating the science for all different audiences at all different times.  This varies from easy to read stuff like maps colored green or red, to more complex ecological models, focusing on hypothesis clusters.  Stoll gave a few examples, including restoring the channelized basin around the Kissimmee River, and addressing the barrier created by a road and canal.

Gary Morishima, a private consultant in Washington state, emphasized a holistic approach. (slides pdf) In explaining how we must look beyond borders, he quoted Chief Seattle:   “All things are connected.  Whatever befalls the earth, befalls the sons of the earth.” Morishima said that tribes have traditionally managed lands according to this tradition.

Morishima said there is no firm definition of biodiversity.  He said the U.N. Convention on Biological Diversity has said that it’s the variety of life, but it’s valued by different people and cultures for different reasons, which range from aesthetic to economic.  Over history most species have become extinct.  Nature is indifferent.  Ecology and evolution are intertwined.  The environment is inherently unstable; species adapt or they die.  Human societies have evolved during the Holocene, a period of relative stability.  Although some human influence is not bad, and sometimes species richness is improved due to human involvement, some have talked about a sixth great mass extinction being caused by humans.  We care because of our ethics and values.  The U.N. declared 2010 the International Year of Biodiversity, yet no international reports since 1992 have shown any improvements.  2011 will be the U.N. International Year of Forests.

Morishima said that our society’s emphasis on individualism, concept of property, and drive to accumulate private capital lead us to becoming “pixelized” at the smallest possible unit.  This leads to isolation, fragmentation, compartmentalism, and costs being transferred to others.  This pixelized view of the world is hard to put back together again.  He gave an example of King County, Washington, with a land ownership pattern that is highly pixelized: missing landscape components, disconnected properties, externalities, and divergence of management goals.  The land ownership is not coincident with ecosystem function, and even the National Forest has been pixelized by management allocation schemes of allowed/restricted/prohibited.  Nationwide, forests are disappearing as land ownership becomes fragmented into smaller parcels.

Morishima said that it’s time to step back and think things over, to determine the right goals and whether those goals are effective for managing for a suite of benefits.  We need to ask what society wants from our forests, both economic and ecological.  Often we can’t get there from here, and many of the things we want we can’t get, when we just look within administrative boundaries.  The big question is how we coordinate and integrate.  He said that species goals can’t be met on Forest Service lands alone.  We need a system view.  Not all forest lands are equal, and we can’t expect everything for everyone, everywhere, all the time.

Morishima gave some examples of “greenprinting” in King County, where ecological values are assigned to each parcel.  Another concept is to use “anchor forests” on the landscape to support transportation, manufacturing, forest health, and ecosystem function. 

He talked about the obstacles of communication and distrust. Distrust arises from perceptions of risk.  He cited Ezrahi’s work on pragmatic rationalism, describing the relationship between politicians and scientific experts.  If they both agree, there is an efficient means to the end.  If they both disagree, you can have biostitution.  There is a need to search for “serviceable truth” that does not sacrifice social interests for scientific certainty.  There is no free lunch.

Morishima explained a new paradigm of panarchy (see Dave Iverson’s related post), resiliency, and consideration of social and ecological systems.  Later, in a followup question, Morishima said that panarchy is based on the recognition that we can’t know, much less control everything in the system.  So we need to develop systems adapted to change and disturbance – both socially and ecologically.  He said that integration work has to be supported.  What the public wants is not input – they want influence.

He quoted the Secretary of Agriculture’s all lands/all hands approach, to work collaboratively to effectuate cooperation.  The Forest Service must change, overcoming institutional barriers to collaboration and a reluctance to devolve decision making.  We must support collaboration at the local level, stakeholder involvement, independent facilitation, and multidisciplinary communication.  We need to replace our pixelized window on the world with a landscape view of social and economic processes and realities.  Morishima concluded by saying that this new approach is not so new after all: it reflects how Native Americans have managed lands for generations.  All things are connected, part of the earth, part of this, for we are merely a strand in the world – we didn’t create the world.

Bill Zielinski of the Forest Service Pacific Southwest Research Station concluded the panel presentations with a discussion of the conceptual thinking behind conservation planning. (slides pdf) He talked about the two common components, sometimes called the “coarse filter” and the “fine filter.”  A coarse filter approach assumes a representation of ecological types and ecological processes.  A fine filter approach is a complement that looks at specialized elements.  Most scientists seem to agree that a combination is a decent compromise, as used by the Nature Conservancy and in the forest restoration approach described by Tom Sisk on the first day of the science forum.  Zielinski said the coarse filter is cost effective and easy to implement (Schulte et al. 2006), but it assumes you have information on vegetation composition and structure.  It takes more than cover types and seral stages, which are not a predictor of populations.  More commonly, the literature (for instance Noon et al. 2009; Schlossberg and King 2008; Flather et al. 1997) shows a wide distribution of population sizes within a vegetation type.  The other shortcoming of a coarse filter approach is rare local types.  The fine filter approach assumes you have indices; now the trend is toward presence/absence monitoring. 

Zielinski said that we need to respect our ignorance – we don’t understand the complexity of nature sufficiently to develop a protocol for sustaining ecosystems.  We often don’t know system dynamics; we don’t know what to protect, what to restore, or what to connect.  But we can’t delay our actions until we understand the full extent of diversity on public lands.  He argued for a spatially extensive and economical method to collect information on species.  Zielinski said that we can exploit existing platforms, like plots from the Forest Inventory and Analysis (FIA) program, and models like the Forest Vegetation Simulator (FVS).  He used this existing information for small, high-density species to build a habitat model that can predict occurrence at all FIA plots.  For instance, he was involved in a study that looked at 300 2.5-acre plots in Northern California for terrestrial mollusks.  For larger animals that range over larger areas, he uses the same approach but looks at an important habitat element like resting places.  For instance, he was involved in a study that used FIA plots in four Southern Sierra forests to predict resting habitat for fishers.  He did it for two time steps of the FIA series.  He used FVS to predict conditions of stands and plots in the future.  He linked FVS to the habitat models he built from FIA data, so he could answer questions like the effects of thinning on fishers.  He added that the approach can be expanded to multiple species with simple detection surveys.  Later in the science panel session, Zielinski responded to a question from the audience about how sparse the FIA samples are, and how forests and districts will sometimes supplement the data.
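A rough sketch of the workflow Zielinski describes – fit an occurrence model on current inventory-plot attributes, then score a projected future condition for the same plots to ask how a treatment shifts predicted habitat – might look like the following. Every attribute, coefficient, and the “thinning” scenario here is a made-up illustration, not FIA data or an FVS projection.

```python
# Sketch of a plot-based habitat modeling workflow: fit a species occurrence
# model on current inventory-plot attributes, then score a projected future
# condition for the same plots to compare predicted habitat before and after
# a treatment. All values are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_plots = 500

# Hypothetical current plot attributes: basal area, canopy cover (%), and
# count of large trees per plot.
X_now = np.column_stack([
    rng.uniform(5, 60, n_plots),
    rng.uniform(20, 95, n_plots),
    rng.poisson(8, n_plots),
])

# Simulated detections: occurrence is more likely on dense plots with large trees.
logit = -6 + 0.05 * X_now[:, 0] + 0.04 * X_now[:, 1] + 0.15 * X_now[:, 2]
y = rng.random(n_plots) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X_now, y)

# Hypothetical "thinned" future condition: lower basal area and canopy cover,
# large trees retained (a stand-in for a growth-and-treatment projection).
X_future = X_now * np.array([0.7, 0.85, 1.0])

print("mean predicted occurrence, current:",
      round(float(model.predict_proba(X_now)[:, 1].mean()), 2))
print("mean predicted occurrence, thinned:",
      round(float(model.predict_proba(X_future)[:, 1].mean()), 2))
```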

Discussion

There were a number of follow-up questions for the panel.

Regarding the question of monitoring systems, Morishima said that they aren’t paying attention to the information that tribes have.  They have a permanence of place: they will observe things, but won’t always express things in the terms that we use.

Responding to a question about conflict between ecosystem goals and species goals, Zielinski talked about ecosystem goals for thinning, noting that not all species benefit from those broader goals.  Stoll said we need to manage for the at-risk species, because they are probably endangered precisely because they are niche specialists.

Morishima said that the rule needs to avoid “rigidity traps”.  We should focus on flexibility for species and social systems to adapt.  Under panarchy and social/ecology integration, we need to let things self organize in order to provide for resilience.

Zielinski referred to the previous day’s examples in Connie Millar’s talk about pikas in high elevation microclimates.  Forestry in the past has created homogenized environments.  We need to take our guide from ecological processes and look at heterogeneous landscapes.  Many species in a bind will have more local refugia – like the pika below the talus.  Stoll added that climate change will also affect human populations, and she encouraged landscape conservation cooperatives to look at human movement in response to climate change.

The panel also discussed how species should be selected for monitoring.  McKelvey said there are a number of criteria, some social, some functional.  In Region 1 they ranked species, emphasizing things like primary excavators or pollinators.  Zielinski said that just when you come to believe there are key species, the science shifts to other species.  The suite of species changes with science and society’s interest.  McKelvey added that once you start monitoring, you’re in for a run.  If five years into the game you want to do something else, you won’t produce anything coherent.  Same with models – everything is changing: computer capacities, remote sensing capacities.  But the monitoring is what you put in place 25 years ago.  Regarding reductions in monitoring costs, McKelvey said you just go down the priority list until you run out of money.  Zielinski takes a more practical approach, looking at “gravy data” or “incidental data.”

Somewhat surprisingly to the panel, a question about the 1982 rule viability provision didn’t come up until the end.  Zielinski said that viability has had an interesting history.  In its most formal definition it requires difficult data collection: you can’t compute the probability of persistence.  Viability may be more nuanced these days, like maintenance of geographic range.  Zielinski said that viability analysis is not his area of expertise, but he thought the concept is evolving.  McKelvey said the formal population viability analysis (PVA) process can’t be done.  The determination of the probability of surviving can’t be done.  Think of Lewis and Clark predicting what the area they were exploring would look like in 200 years.  McKelvey said that if you back up and look at the concepts, you think about what is well distributed and connected.  If a population is doing that, it’s probably viable.  If we monitor them, then we can contribute to viability.  Morishima added that for many species, you can’t guarantee viability looking at Forest Service lands alone.  The whole issue of diversity in the 1982 rule could set the Forest Service up for an impossible situation when you expand the concept to nonvertebrates.  You could set up a “rigidity trap.”  Stoll acknowledged that you still need PVAs for listed species.

Secretary of Agriculture Remarks at Science Forum

Secretary of Agriculture Tom Vilsack started the second day of the science forum this morning to emphasize his attention to the Forest Service planning rulemaking effort.   He said that the Obama administration believes in science.  He noted that forests are a source of enormous economic opportunity.  They are a source of water for economic livelihood and drinking water.  This rule is an important blueprint.   He said that past rulemaking processes haven’t been collaborative, resulting in litigation and misguided policies while forests have suffered.

Vilsack said that people are surprised that he even knows about the Forest Service.  But he called the Department “an every way every day agency” which is not just about farming.  He said many people don’t even know the Forest Service is in Agriculture.  But he said that this is a personal matter – when he first got the job, his son in Colorado (an outdoor enthusiast) told him that he’s got responsibility for the Forest Service.  Vilsack later talked about his interest in forestry after family experiences in Iowa planting pine trees and watching them grow.

He told the audience that they were all important in this process.  Vilsack said that the President is focused on this, as part of the conversation on his America’s Great Outdoors initiative and an upcoming White House summit.  The country is faced with obesity issues, and part of the problem is that people don’t spend enough time in the great outdoors.  The planning process is part of this larger dialogue.

 

K.I.S.S. in Rule Form, Part 5

In drafting these K.I.S.S. model rules, I look first at what the original 1979 and subsequent 1982 rules have to say on each subject. I use the 1979 rules because I have a ragged paper copy of that day’s federal register with the rules in it. This heirloom was given to me when I was hired as an assistant to teach NFMA planning to Forest Service interdisciplinary teams. My boss told me to read the rules, which were hot off the press, and be prepared to “teach” them the following week. I look to the 1982 rules because they are the rules under which all forest plans were promulgated.

It was with some amusement that I noticed, for the first time, that the 1982 rules fail to faithfully implement NFMA’s nominal public participation requirement (see the link’s paragraph (d)). I have fixed that problem below:

36 CFR 219.6: Public Participation

(a) The revised land management plan shall be made available to the public electronically and at convenient locations in the vicinity of the unit for a period of at least three months before the revised plan is adopted. During this three-month period, public meetings at locations that foster public participation in the plan revision shall be held.

(b) If the land management plan revisions, singly or in combination with the vegetation management and timber harvest program, require an environmental impact statement pursuant to Section 102(2)(C) of the National Environmental Policy Act (“NEPA”), 42 U.S.C. § 4321 et seq., the public participation process set forth in Council on Environmental Quality regulations, 40 CFR Part 1500 et seq., shall be followed.

(c) In addition to the requirements of (a) and (b) above, other processes suited to the decisions made in the plan revision, including the vegetation management and timber harvest program, may be used to involve the public.

Stickin’ to the Science- Models and More

John has done a fantastic job of summarizing the panelists’ presentations at the Science Forum. However, I think we need to carefully watch what is claimed as scientific information, especially when that information tends to be uniquely privileged and thus can remove debate from the democratic, public sphere if it becomes a “science” issue. “Science” at its extreme, can become an ever-broadening mantle that can run to personal experiences of scientists, pontification by scientists, and so on.

But what is scientific information, given the variety of fields involved in a complicated field like natural resource management? “Science” can be models, field measurements, interviews with people, GIS exercises, and so on.
Scientific information gets its privileged status from claims of objectivity and physical and biological reality.

So here are some of my impressions, as a scientist and an observer of the science enterprise. First of all, I think modelers cannot be objective about models. No more than botanists can be objective about plants, or wildlife biologists about wildlife. There is an inherent connection between love of a thing and choosing it as a vocation. A good scientist, like a good manager, has a fire in her/his belly for the work. One ecologist notably said “ecosystems are more complex than we think, they are more complex than we can think.” I actually think that that is a paraphrase of J.B.S. Haldane, who said “the universe is queerer than we think, it is queerer than we can think.” How can we believe that they are more complex than we can think, and yet expect managers and the public to put energy into consideration of model outputs without independent empirical evidence that predictions are somewhat accurate?

Role of modeling in forest planning

Nevertheless, the scientists involved in modeling focused on models. Dr. Williams, who runs monitoring programs but is probably not one of the modeling community, focused on monitoring and real world observations, due to uncertainty and the complexity and potential unmodelability of complex systems. I agree with his emphasis on observations. Is that related to the fact we don’t work in models? Does the fire in the belly come as a precursor, or an effect, of working on something like models?

It is the essential conundrum of science – those who know the most have the most inherent conflict of interest in the importance and utility of their work. As the expression goes, if all you have is a hammer…everything looks like a nail.

I think we need to have more serious discussions about the appropriate role of models when there is as much uncertainty as there is about the future. From the science perspective, no doubt, models can synthesize existing information and they are useful to inform scientific understanding. But to then say that they need to be used in planning is a leap. Are they good enough to be better than talking to the public about “we don’t really know, this could happen or that, let’s think through some scenarios?” Are quantitative computer models necessarily better than explaining to the public what scientists currently think about interactions?

If interactions are too complex to predict, then they are too complex to predict- and let’s admit it and use adaptive management. Or use simple, explainable heuristic models. If we are going to use them, then we should wait for 10 years and select the ones with the most predictive value. Weather models were discussed at the Science Forum as a potential approach for the use of models. The problem seems to be that no one in natural resources wants to wait to get the data points. I think we can honor the role of models in increasing scientific understanding without determining that they are predictive enough to be useful in planning.

Sticking to Science

When we invite scientists to speak, we have to be careful about their knowledge claims. Telling stories about their experiences with collaboration isn’t scientific knowledge, it is practitioner experience. In another example, the Precautionary Principle is a human value about how to make decisions under uncertainty. Decision science is, need I say, a separate discipline from biology and ecology. When scientists or scientific organizations advocate for a position like that, in my view, they should separate their science claims from their personal values. Roger Pielke in his book “The Honest Broker” calls these “stealth advocates.” It takes just a minute to add “this isn’t a scientific point of view, it is a value judgment” when you make such a statement, but the power of trust in science and scientists is, indeed, priceless. Ask the climate scientists.

Synchronistically, Roger Pielke recirculated a quote today on his blog from the book Break Through by Nordhaus and Shellenberger, which may be as relevant to the planning rule as to climate policy.

The questions before us are centrally about how we will survive, who will survive, and how we will live. These are questions that climatologists and other scientists can inform but not decide. For their important work, scientists deserve our gratitude, not special political authority. What’s needed today is a politics that seeks authority not from Nature or Science but from a compelling vision of the future that is appropriate for the world we live in and the crises we face.

Science Forum Panel 2: Landscape Models and Monitoring

This afternoon, the second panel at the science forum addressed the technical questions of landscape scale modeling and monitoring, and the related question of adaptive management.

Eric Gustafson of the Forest Service Northern Research Station started the panel by describing the merits of landscape models, which he called the “computational formalism of state-of-the-art knowledge.” (slides pdf) He said that well-verified models are as good as the science they reflect.  They are a synthesis of the state of the art.  They can be generalizations, but taken as general expectations they are useful for research and planning.  Models can account for spatial processes and spatial dynamics, consider long temporal scales and large spatial scales, and make predictions about the expected range of future states (composition, pattern, biomass).  These models don’t accurately predict individual events.

Gustafson mentioned the “pathway-based succession models” often used in the West (VDDT/TELSA, LANDSUM, SIMPPLLE, RMLANDS, Fire-BGC, FETM, HARVEST).  In these models, the disturbance simulation may be process-based or mechanistic.  He has direct experience with a different type of model, the “process-based successional model” (LANDIS, LANDIS-II), for eastern forests with less predictable successional trajectories.  He said that climate change, or novel conditions, may require a process-based approach.  For these models, the landscape scales and long time frames mean that observable validation is not possible, so modelers validate independent model components that are as simple and discrete as possible, then verify component interactions, and compare model behavior with known ecosystem behavior.  Modelers also conduct sensitivity and uncertainty analysis.  Open source models are preferable because their code is available for many eyes to spot a problem.
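For readers unfamiliar with the pathway-based (state-and-transition) family Gustafson mentions, here is a minimal sketch of the general idea: each cell sits in a vegetation state, advances along a successional pathway each time step, and has some probability of being reset by disturbance, so the output is an expected range of future landscape compositions rather than a prediction of individual events. The states, probabilities, and time step are invented for illustration and are not taken from any of the named models.

```python
# Minimal state-and-transition (pathway-based) succession sketch: each cell is
# in a vegetation state, advances one step along a successional pathway per
# decade, and may be reset to an early state by disturbance. The result is the
# landscape composition after many time steps, not a forecast of single events.
import numpy as np

rng = np.random.default_rng(42)
states = ["grass", "shrub", "young_forest", "mature_forest"]
succession_next = {"grass": "shrub", "shrub": "young_forest",
                   "young_forest": "mature_forest", "mature_forest": "mature_forest"}
fire_prob = {"grass": 0.10, "shrub": 0.08, "young_forest": 0.03, "mature_forest": 0.02}

n_cells, n_steps = 10_000, 20                       # 20 decadal steps = 200 years
landscape = np.array(["grass"] * n_cells, dtype=object)

for _ in range(n_steps):
    for i, state in enumerate(landscape):
        if rng.random() < fire_prob[state]:
            landscape[i] = "grass"                   # stand-replacing disturbance
        else:
            landscape[i] = succession_next[state]    # move along the pathway

for s in states:
    print(f"{s:>13}: {np.mean(landscape == s):.2f}")  # share of cells in each state
```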

Gustafson said that models have been used to compare outcomes of management alternatives, looking at plans once developed, or to compute the effects of proposed management (species and age class, biomass, spatial pattern, habitat for a species of interest).  Gustafson prefers looking at comparisons of alternatives rather than absolute projections. 

When projecting the impacts of alternatives, he suggests a focus on appropriately large spatial and temporal scales for evaluating ecosystem drivers and responses;  accounting for any important spatial dynamics of forest regenerative and degenerative processes; and accounting for interactions among the drivers of ecosystem dynamics and conditions (disturbance, global changes).

Steve McNulty of the Forest Service Southern Research Station talked about climate change and water. (slides pdf) He began with some observations about addressing water resources.  First, if we only focus on climate change and water resources, the forest plan will fail.  We need to look at the big picture.  Second, if we consider water as a stand-alone ecosystem service, the forest plan will fail.  If we just stress water, then we’ll miss the impact on forest growth, carbon, or biodiversity.  Also, if we only look at the effects of climate change on water, we’ll miss more important factors affecting water quantity: population change, changes in the seasonal timing of precipitation, vegetation change, and all the sectors of water demand.

McNulty used as an example the results from a model called WaSSI (Water Supply Stress Index), which looks at environmental water use and human water use.  The model shows that the vast majority of water is used by the ecosystem (only a sliver of 2 to 5 percent is human use).  Human water use in the West is mainly for irrigation.  The WaSSI model looks at both climate change and population change.
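The basic quantity behind a water supply stress index is a demand-to-supply ratio for a watershed. The toy numbers below are invented purely to illustrate that ratio and the small human “sliver” McNulty describes; the actual WaSSI model works from modeled evapotranspiration, runoff, and sector-level withdrawals under climate and population scenarios.

```python
# Toy demand-to-supply water stress ratio in the spirit of a water supply
# stress index (higher = more stressed). All numbers are invented.
ecosystem_use_mm = 520.0   # annual evapotranspiration by vegetation (illustrative)
human_use_mm = 18.0        # irrigation, municipal, industrial withdrawals (illustrative)
supply_mm = 610.0          # precipitation-derived water supply (illustrative)

stress_index = (ecosystem_use_mm + human_use_mm) / supply_mm
human_share = human_use_mm / (ecosystem_use_mm + human_use_mm)

print(f"water stress index ~ {stress_index:.2f}")
print(f"human share of total water use ~ {human_share:.1%}")  # the small 'sliver'
```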

McNulty also gave other examples of the breadth of the analysis: groundwater loss can be a factor, and it doesn’t matter what climate change does to surface flows if we’ve lost our groundwater.  Another example is the interaction between land use change and climate change.  If we reduce irrigation by 20%, it has as big an impact as climate change.  McNulty also described the relationships between water availability, carbon sequestration, and biodiversity.  The more you evapotranspire, the more productivity you have – it’s good for carbon storage but bad for water yield. 

McNulty also mentioned the TACCIMO project (Template for Assessing Climate Change Impacts and Management Options) with the Research Station and the Forest Service Southern Region.   This project looks at current forest plans, climate change science, and you can search by geographic location to obtain a planning assessment report. 

Ken Williams, the chief of the USGS Cooperative Research Units, spoke about adaptive management in the Department of the Interior. (slides pdf) Ken was one of the authors of the DOI adaptive management technical guide.   The DOI has worked for many years to develop a systematic approach to resource systems, recognizing the importance of reducing uncertainty.  The DOI framework can be useful to forest planning.

The management situation for the Forest Service involves complex forest systems operating at multiple scales; fluctuating environmental conditions and management actions; decision making required in the near term; uncertainty about long term consequences; and lots of stakeholders with different viewpoints.  There are four factors of uncertainty: environmental variation; partial controllability; partial observability; and structural uncertainty (lack of understanding about the processes).  All of this uncertainty limits the ability to manage effectively and efficiently.

The DOI approach emphasizes learning through management, and adjusting management strategy based on what is learned (not just trial and error); focusing on reducing uncertainty about the influence of management actions; and improving management as a result of improved understanding.  Williams said we need to understand that this is adaptive MANAGEMENT and not adaptive SCIENCE.  Science takes its value from the contribution it makes. 

Here are the common features of an adaptive management approach: a management framework; uncertainty about management consequences; iterative decision making; the potential for learning through the process of management itself; and potential improvement of management based on learning.  Adaptive management implementation features the iterative process of decisionmaking; followed by monitoring; followed by analysis and assessment; followed by learning; followed by future decisionmaking. At each point in time, decisions are guided by management objectives; assessments provide new learning, and the process rolls through time. 

There are two key outcomes: improved understanding over time, and improved management over time.  Williams described two stages of management and learning: a deliberative phase (phase I) with stakeholder involvement, objectives, potential management alternatives, predictive models, and monitoring protocols and plans; then an iterative phase (phase II) with decision making and post-decision feedback on the system being managed.  Sometimes you have to break out of the iterative phase to get back to the deliberative phase.
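One common way to formalize the iterative phase Williams outlines is to carry a set of competing system models and update the weight on each as monitoring results come in, so structural uncertainty shrinks as management proceeds. The sketch below is a hypothetical two-model example of that bookkeeping, not the DOI technical guide’s procedure; the effect sizes and observation noise are invented.

```python
# Sketch of the iterative adaptive-management loop: predict with competing
# models of how the system responds to management, act, monitor the outcome,
# and update the weight on each model by Bayes' rule so structural uncertainty
# shrinks over time. The two "models," effect sizes, and noise are invented.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

model_effects = {"weak_response": 0.02, "strong_response": 0.10}  # competing hypotheses
weights = {"weak_response": 0.5, "strong_response": 0.5}          # initial model weights
obs_sd = 0.04                                                     # monitoring noise
true_effect = 0.10                                                # "reality", unknown to the manager

for year in range(1, 6):
    observed = true_effect + rng.normal(0.0, obs_sd)              # this year's monitoring result
    likelihood = {m: norm.pdf(observed, loc=e, scale=obs_sd)
                  for m, e in model_effects.items()}
    total = sum(weights[m] * likelihood[m] for m in weights)
    weights = {m: weights[m] * likelihood[m] / total for m in weights}
    print(f"year {year}: weight on strong_response = {weights['strong_response']:.2f}")
```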

What about the fit to forest planning?  Williams talked about considering: forests are dynamic systems; planning is mandated to be science based and stakeholder driven, acknowledging the importance of transparency, acknowledging uncertainty, recognizing the value of management adaptation, building upon experience, and understanding.  Every one of these planning attributes is a hallmark of adaptive decisionmaking.

The last panelist was Sam Cushman from the Forest Service Rocky Mountain Research Station. (slides pdf) Cushman cited several of his papers, which talked about the problems with generalizations and the need for specific, detailed desired conditions and specific monitoring.  He said that his research shows the problem with the indicator species concept – no species can explain more than 5% of the variability of the bird community; even select poolings of species fail to explain more than 10% of the variability among species.  He said that ecosystem diversity cannot be used as a surrogate for species diversity.  In practice there is a conundrum: there is a limit to the specificity with which you can describe diversity – we have limited data, so we can only define diversity very coarsely.  His research shows that habitat explains less than half of species abundance.  Cover types are inconsistent surrogates for habitat.  Also, vegetation cover type maps don’t predict the occurrence and dominance of forest trees: his analysis found you can only explain about 20%, at best only 12%.  Classified cover type maps are surprisingly poor predictors of forest vegetation.  80% of variability in tree species importance among plots was not explained even by a combination of three maps. 

To be useful, desired condition statements should be detailed, specific, quantitative, and appropriately scaled.  Research shows that the detailed composition and structure of vegetation and seral stages, and their pattern across the landscape, have a strong relationship to species occurrence.  Cushman described an example looking at grizzly bear habitat, looking at detailed pixels rather than patches.  Cushman also discussed his research on species gene flow in complex landscapes, and his research on connectivity models. 

Cushman added that monitoring is the foundation of adaptive management.  If desired condition statements are specified in detail, with quantitative benchmarks at appropriate scales, we then need good monitoring data – representative data that is carefully thought out.  We need large samples for precision, recent data, appropriate spatial scale, standardized protocols, statistical power, and high precision.  We need to look at the long term.  Adaptive management needs to be supported by monitoring, and it must be funded.

Science Forum Panel 1: Landscape Ecology

The first panel of today’s science forum emphasized landscape scale planning across multiple ownerships.

Tom Sisk from Northern Arizona University began with the word “practical.”  (slides pdf) The next 20 years of the science about landscape scale management will focus on transparency, inclusion, and public deliberation.  We are learning to “scale up” our management because of our experience with uncharacteristically large fires and changes in fire regimes.  Sisk showed a map of the large footprint of the Rodeo-Chediski fire in the state of Arizona.  The fire spread until it ran out of forest.  Also, wide-ranging species operate at landscape scales, especially when land uses have the potential to fragment habitat.  The protection of diversity will require a continued focus on viability, but with advances in monitoring and assessment, we can estimate species distribution with presence/absence data (which was done for goshawks on over 2 million acres in Northern Arizona).  Finally, exotic species are changing ecosystem dynamics across the continent.  Sisk pointed out the predicted occurrence of cheatgrass using preliminary maps of areas most vulnerable to invasion.  He added that the covariates driving these models involve climate, which makes them useful in predicting effects of climate change.

Later during the panel discussion period, Sisk said that an agreement in Northern Arizona brought people together.  Those efforts filtered into a state-based process.  The State office took the next steps, which led to a statewide strategy.  Sisk said that different people who participated in the overall process had different levers they could use to shape the decisions.  He added that there was a pulling of people together through the various jurisdictions.  For scientists, these processes can be a challenge since they provide the analytical framework, matching analysis to questions.  He said that data in LANDFIRE and other programs help provide capacity.

Sisk also pointed out the collaborative work being done on 3 million acres in northern New Mexico.  He said that one way to address the historic controversies is to empower people with information.  This new science can reside in everybody’s minds.   Land ownership can fragment our thinking rather than looking at integrated landscape properties.  Sisk described working with 50 people in breakout groups to identify priority areas for management attention, and recommending types of management in those priority areas.   Next, they looked at the predicted effects and what the landscape would look like. 

Sisk said that science must be transparent, but not dumbed down.  It must be rigorous, repeatable, and defensible so that it will inspire confident action.  The public must own the science if they are to trust and accept decisions based on it.  The planning process should provide a predictive capacity and allow exploration of alternative scenarios.  Scientists should inform and not attempt to dictate decisions.  He said that landscape ecology is a mature field, but it will require a bold attempt to embrace multiple scales. 

Jim Vose from the Forest Service Southern Research Station talked about watershed science. (slides pdf) Much of what is known is at small scales, because watersheds are so complex.  The challenge becomes taking the small scale knowledge and moving it to the watershed and landscape scale.  He added that watersheds don’t operate in isolation – they are connected.  He said that disturbances and habitats go beyond watersheds.  Movement of materials is the primary way that watersheds interact, and hydrologic flowpaths are the primary way that small watersheds interact with larger watersheds. 

Water is an excellent integrator of watershed health: the cleanest water comes off certain landscape types.  As water flows through land uses, water quality declines.  Landscape design and configuration has a big effect on water quality. Any type of land management alters water processes and impacts resilience.  Most best management practices (BMPs) deal with the  physical aspects of land disturbance. 

Vose described innovations in linking “bottom-up” (streamflow gauging) and “top-down” (satellite imagery) technology.  There is more data from the last 20 years than the previous 80 years.  There is a variety of sensing technology, which can be used to develop monitoring networks.  Networks can go to satellites, to monitor what’s going on with landscapes.  We can quantify watershed processes, as data comes together from the bottom to the top, all the way to landscapes.  We can address the relationship of large scale carbon and water cycling.  Vose also mentioned the importance of large networks of experimental forests and the new megamonitoring network of the National Ecological Observatory Network (NEON).  Regarding climate change, Vose mentioned that we can look at long-term data in new ways. 

Vose said that we need to focus on the Where? How? What? and Why? of restoration.  The “where” is really important (it’s not random acts of restoration).  We need to look at high risk areas and “avoid the unmanageable and manage the unavoidable.”  Finally, the really big challenge is the “how” – looking across ownerships requires partnerships and approaches, and recognition of connectivity.

Connie Millar from the Forest Service Pacific Southwest Research Station said that we need to confront all the dimensions of climate change, not just greenhouse gases. (slides pdf) We need to understand that climate is a driver of change.  Climate often changes in cycles, nested at different scales from annual changes to millennial changes.  Different physical mechanisms drive changes at different cycles, like an orchestra where the music is the combined force of the instruments: individual cycles, gradual or directional, extreme or abrupt.  However, species and ecosystems respond to the orchestra with “a cacophony of responses.” 

Forests we manage are still responding to the long term cycles of history that they have encountered.  That makes a difference in how we can address climate in the future. For instance, when looking at glacial-interglacial cycles, we have evidence of over 40 cycles of greenhouse gas levels.  Over time, species have moved up and down in elevation to respond to climate (north and south in the east).  There are enduring changes in genetic diversity which we’re still seeing today.   There have been a number of ecosystem “flip flops”: some transitions had rapid change, analogous to what we have now.  Some warming has occurred in less than 75 years, with replacement of whole forest types.

We’ve had cycles of 1000-year periods due to changes in solar activity – we’ve just come out of a cold phase.  A little ice age ended in 1920.  Before that, there was a medieval anomaly with some warming and century-long droughts.  Tall, fast growing trees are now above tree line due to a change over 650 years, the lifespan of a single tree.  Their size and survival now depend upon their area of origination, not where they currently reside.

Millar says that the shortest scale changes are the decadal or annual scale changes.  The 20-50 year cycles affect parts of the world differently – internal ocean circulation and ocean temperature.  The shortest is the El Niño/La Niña 2-8 year cycle.  She said that anthropogenic forcing is important at these shorter time scales.  We could see type conversions, invasions into barren lands, massive forest dying, higher elevation mortality as insects move upslope, and fires occurring in winter.  There are also important aspects of climate change at local scales, driven either by natural forces or greenhouse gases.  Millar showed examples of the ice cover in the Great Lakes affecting shipping, and how 70% of wildfires in the West are La Niña events during decadal cycles.  For local management at small scales, the local climate responses can be decoupled from effects at regional scales.  For instance, the American pika (the temperate polar bear) can escape the heat in the air-conditioned ground. 

Millar said that climate is a fundamental architect of ecosystem change in all its flux and dynamics.  The historical conditions are increasingly poor reference and we need to look forward.  Also, human land use cover creates new challenges.

Later during the panel discussion period, Millar said that the planning rule should address climate as a macro-disturbance process.  The rule should provide for adaptive capacity of species (species ranges and species diversity) by understanding climate’s role.

Millar also answered a follow-up question from the audience about definitions.  She advised using definitions carefully and specifically.  The term “resistance” describes ways to armor species in the short run against climate effects.  The term “resilience” is the most widely interpreted term, from the meaning used in the field of engineering to the idea of returning something to its previous state.  It can be thought of as the capacity of a system to absorb change without changing its state.  Millar said that resistance and resilience are mid-term strategies.  Meanwhile, “restoration” can be a range of options from small scale to large scale.  It will be important to define what we mean.

Max Moritz from UC Berkeley concluded the first panel with a discussion of fire ecology. (slides pdf) He  said that the conventional wisdom is that climate change has resulted in increased fire activity because of warmer spring temperatures and earlier snowmelt.  But for shrublands, we haven’t seen the same effects.  Fire regimes respond to changes to multiple climate variables (temperature, precipitation quantity, and timing).

We’ve been worried about the problem of fire exclusion.  But scales of climate change and fire are quite broad.  Fires cross administrative boundaries, and different ecosystems may be adapted to very different fire regimes. 

We also only think about one kind of fire, a low severity fire.  Yet a lot of ecosystems have different fire sensitivities even to these types of fire.

Moritz said he liked the ecosystem service framework, which can give us common terms and currencies.  For fire, we need some common definition and understanding of both beneficial and detrimental roles.  Healthy?  Resilience?  A lot of the definitions and a lot of the goals can actually contradict each other.

Moritz described a case study of range shifts to show the benefits and detriments of fire.  The rate and intensity of climate change is important relative to the plasticity and dispersal ability of species.  Many species will have to move 1 km per year to keep up with their suitable climate envelope.  There is a huge extinction potential.  Moritz also mentioned that many habitat models are missing the effects of fire.  He said that we might see large scale invasions and retreats of fire.  As far as species movement, fire can hasten “trailing edge” losses and the benefits of habitat change. 

Regarding how fire releases carbon, he said we tend to emphasize stocks instead of fluxes.  Ecosystems have a carrying capacity – forests will regain a standing carbon stock.  Fire is a temporary blip – if you factor in regrowth it’s not quite as drastic.  Black carbon, or “biochar” charcoal, is a relatively stable form of carbon – with every fire some carbon is fixed relatively permanently.

Moritz said that the ongoing fragmentation of land ownership is a problem, especially when landowners have a decreasing interest in ecosystems and the resources needed to reduce fire related consequences.

Finally, Moritz said that there are lots of uncertainties.  Fire regimes are fuzzy concepts to begin with.  There is often almost no knowledge of some areas: for instance, we have very little info on wind patterns.  However, the broad scope of the planning rule is a plus.

Discussion

After the presentations, there was a panel discussion and question and answer period.  The discussion centered on the topics of uncertainty, scale of analysis, integration of science with local knowledge, and the value of historical data in light of climate change.

Uncertainty

Sisk observed that uncertainty is one of the two or three key pieces of landscape planning.  Scientists try to reduce uncertainty, which is a foreign concept to nonscientists.  A scientist must take responsibility for being clear about that.  People aren’t ready to receive uncertainty – they are ready to receive the data, but combining data sources each with its own level of uncertainty will blow up the whole process.   Vose added that there is a tremendous amount of pressure on scientists to tie practice and science in an uncertain way.  Sisk observed that scientists can relate to a scenario based exploration with a sensitivity component.

Sisk emphasized that adaptive management and management flexibility aren’t the same.  Adaptive management is a structured process where you collect monitoring data and feed it into a learning structure established in the plan a priori.  The results of data collection constrain the decision space.  So adaptive management is less flexible.  

There was a followup question from the audience about the uncertainty of science.  Sisk said that the literature has shown for a long time that technical experts can be wrong.  He added that science informs the citizens and leaders in a democracy.  How to interpret uncertainty is best left to deliberation in a democratic society. Millar added that having panels when there are known differences in the science is important.  It’s important to display the differences in a transparent process.

One audience participant expressed frustration at Forest Service project NEPA documents that fail to address climate change because the issue is uncertain.  Millar said that there always is insufficient knowledge, so that is a cop out.   When addressing climate change we need to think about nesting a local analysis within larger spatial scales, and work with the knowledge we have.  Vose said that some body of information can be synthesized.

Sisk said that when we lack information we still need to make tough decisions.  Sisk said that the “Precautionary Principle” has been used in Europe and was mentioned in the Rio conference.  In order to protect the environment, management should be applied according to ecosystem capabilities.  Sisk said that a lack of scientific certainty should not be used as a reason to postpone environmental protections.  Later, there was a follow-up question from the audience about whether we should be using adaptive management instead of the Precautionary Principle.  Sisk reminded everyone that the Precautionary Principle cuts both ways: both taking and not taking action.  We should not let uncertainty stop us in our tracks.

Regarding adaptive management, Vose gave examples of testing that was done, and talked about the importance of the network of experimental forests.  He said we should go slow, but test, monitor, and adapt.  He said that models are becoming increasingly sophisticated for making projected assessments.  Moritz said that in the absence of knowledge, if we can identify thresholds that we can seek and test for, we can provide some boundaries.

Scale of Analysis

At the beginning of the discussion period, Sisk noted that one thread throughout all of the talks was the need to be able to work at different scales to do intelligent planning.  Vose said that the scale of operation is the larger scale.  Sisk said that for the West, THE issue is water and the future of water resources.  Water influences the level of organization and how we scale planning.  John Wesley Powell had this figured out.  In some areas watersheds make sense, other areas they don’t, as an organizing tool.  Millar added that we should also crosswalk our analysis with nested temporal scales.  Moritz added that although “firesheds” is a term that is not well defined, it brings up this idea of nested scales.  But we first need a common language and common definitions.

There was a question from the audience about NFMA planning at a forest scale.  Vose said that watersheds are all relative, and you need to look at all scales of watersheds in planning, but a common focus is the 8th level HUC.  The real challenge is when you mix National Forest System and private land with different land uses, with no jurisdiction or a means to work in a coordinated way. 

Integrating Science with Other Sources of Knowledge

One audience participant was concerned about science being compromised in local collaborative processes.  The question is “how can the public own the science while keeping science sound?”  Sisk said that science has a high social relevance, and science efforts are supported by society.  Training is important.  Scientists have a responsibility to get their information back into the social side of decisionmaking.  We need to make a concerted effort to improve the integrity of the knowledge.  It’s the responsibility of scientists to participate in the discussions about the valid use of their science.  In local collaborative processes, when conducted in a respectful and inclusive manner, there is a receptivity to science.  This isn’t the case when decisions “based on science” are laid out for people.  Sisk mentioned the idea of “braiding” knowledge, referring to an example of Hopi collaboration.  It’s not watering down the knowledge, but braiding the knowledge together, which is stronger.  Sisk said this is the most promising vision to address these huge challenges.  Millar added that there is a long history of observation among tribes.

Moritz said that most scientists working at the landscape scale are not as much reductionists as other scientists.  At this scale, the problems are all connected.  Vose added that the most successes are when landscape and smaller scales come together.  The NEON program is an attempt to bridge that gap.

There was a later follow-up question from the audience about communications with the public.  Moritz said that you can’t expect the public to dig through the journals, but the public has to have some level of literacy.  It has to go back and forth.  Millar talked about the role of a “boundary spanner,” someone who can communicate from the formal science to the applied world.  We might not have answers at the scale of the operational level.  Sisk added that a collaborative process can bridge communication.  Of course some collaborative processes are imperfect, but communications in collaboration processes are not engineered – they take place more organically.  Moritz added that sometimes the science is out there, and there is a responsibility for scientific literacy.

Role of the Historic Range of Variability (HRV) Assessment Given Climate Change

There was a question from the audience on the reliance of an HRV assessment in a landscape analysis when climate change may be creating conditions not previously experienced.  Millar said that we can still be informed by an historic process.  The HRV concept is still important but we need a different pair of glasses.   We need a realignment while using knowledge from the past about where we’ve been, looking at different earlier patterns that are more similar to where we are going.

Moritz said we need to be educated by HRV, but not be blinded by it.  There are two pieces of info: where we came from and where we are going.

Climate Change Strategies

Moritz said that the idea of assisted migration of species is relatively new and scary.  After a fire, if a species has taken a big hit, assisted migration might be a tool.  Millar said that extreme events have historically produced major changes in ecological trajectories.  Large scale disturbances are opportunities for quickly catching up.  We can enable succession rather than force it.  Systems can reequilibrate or catch up.  Sisk said that our first line of defense should be to maintain connectivity so that organisms can move on their own.  Some animals and long-lived plants can do that better than we think.  Regarding extreme events, Vose said that we can understand hydrologic responses to extreme events.  There are tools and models available.

K.I.S.S. in Rule Form, Part 4

After a day listening to ecologists talk about landscape models, I am further inspired to urge planning rules that keep-it-simple-sweet.

There are three kinds of government rules. Most government rules regulate the behavior of private concerns, e.g., point-source pollution and building codes. A few regulate the behavior of other government agencies, e.g., Endangered Species Act consultation and CEQ NEPA process. Fewer still self-regulate an agency’s own behavior. The NFMA planning rule falls in this last category.

I don’t know about you, but if I wrote enforceable rules to regulate my own behavior, I’d make sure the rules were as spare and flexible as possible. Thus I offer the following rules to implement NFMA’s inventory and interdisciplinary mandates:

36 CFR 219.4: Inventories

The revision shall be based upon inventory data, maps, graphic material, and explanatory aids, of a kind, character, and quality, and to the detail appropriate for the land management plan revisions and vegetation management and timber harvest program decisions made.

36 CFR 219.5: Interdisciplinary Preparation

An interdisciplinary approach shall be used in the revision of the land management plan. The disciplines of the preparers shall be appropriate to: 1) the formulation of the vegetation management and timber harvest program; and, 2) the new information and changed circumstances and conditions in the unit that warrant revision of the plan.

Learning from Failure

One of the Meridian Institute consultants (working with the Forest Service on the “new planning rule”) recently asked me how I might frame discussions for the NFMA rule. Here is what I offered, adding that I thought it already too late for the kind of slow, thoughtful reflection/conversation that might make for effective change:

Suppose we could begin again. The public lands have just now been declared public. All laws, customs, and values are as they are, except that there is no RPA/NFMA. How might we begin to design a public process for managing the national forests as part of the nation’s public lands intermingled with private lands? How might we begin to discuss the possibility of a design process? How best to engage stakeholders?

Now fold in the RPA/NFMA, and the customs and history of the US Forest Service. What might we do now with the NFMA rule? How might that step fit within other design steps that might lead us to a useful outcome for managing the national forests?

Today I’ll add that we might want some “framing” to evaluate the eventual outcome of this NFMA “rule” effort. As I was pondering that, and remembering the many past failed attempts to reform planning and management in the Forest Service, I reread The Logic of Failure, by Dietrich Dörner. (book review) Then I did some internet sleuthing, and found a nice little reflective design blog that also built from Dörner’s wisdom. One tidbit of wisdom was titled “Metamorphosis: Transforming Non-designers into Designers” (pdf). I thought of Forest Service planners and managers. Maybe, if ever they are to learn, some might learn from this little paper. Here, altered a bit to get closer to the Forest Service’s task at hand, is the heart of the message:

[Consider] moving through three transitions:

(P) Pre-emergence
(T) Transitional
(D) Designerly Thinking

Characteristic of each of these transitions is a penetration of barriers. Rather than progression along a smooth continuum, you penetrate these (intellectual, practical, psychological and social) barriers in a step-like function. …

Barriers (letters in parentheses indicate the transitional stage(s) where the barrier occurs):

  1. Design definitions. Naïve designers [tweak what has been framed too narrowly]; experienced designers also include [interaction, experience from others, emotion, and a ‘systems perspective’]. (P)
  2. Best solution. Naïve designers hold onto the belief that there is a best solution; experienced designers believe there exist many solutions, judged by critical criteria and presented through a design argument or explanation. (P)
  3. Technology-centered vs. human-centered. Naïve designers focus on the technology; experienced designers study human behavior, motivation and need. It’s very difficult to “let go” of gadgets and things; there’s an over-fascination with techno-fetishism among naïve designers. (P, T)
  4. Me and we. Naïve designers defend their own designs; experienced designers look to their team for inspiration and solutions. (P, T)
  5. User research. Naïve designers underplay the role of user research; they know what people want. Tools such as personas [pdf] are resisted rather than embraced naturally in the design process. Experienced designers do not make assumptions about human desires and motivations; they study them instead. (P, T)
  6. Algorithm / design paradox. Naïve designers expect to memorize algorithmic solutions to problems; experienced designers learn to deal with ill-structured problems, seemingly paradoxical situations and design thinking. (P, T)
  7. IT domination. Naïve designers tend to overemphasize efficiency, effectiveness, scalability; experienced designers include experience and emotion. (T)
  8. Idea loyalty. Naïve designers hold onto a single idea; experienced designers engage in systematic exploration of multiple ideas. (T)
  9. Critique culture. Naïve designers worry about [internally generated performance measures]; experienced designers welcome critique. (T, D)
  10. Notebook. Naïve designers [focus on] a particular project; experienced designers sketch continuously, deriving inspiration from all contexts. (T, D)
  11. Role. Naïve designers are learning what they do and how to do it; experienced designers begin to defend the position of design in a multi-person development team made up of designers and non-designers. (T, D)
  12. Research and philosophy. Naïve designers find solutions [patterned from past experience, “best management practices”, etc. — single-loop learning]; experienced designers explore philosophical foundations of design as well [i.e. double-loop learning]. (D)
  13. Reflective designer. Naïve designers spend little to no time reflecting on how they are designing; experienced designers can look at themselves “out of body” as they design. (D)
  14. Omnipresence. Naïve designers see design embedded in objects [or events]; experienced designers see systems that affect designs and designs that affect systems. (D)
  15. External / internal. Naïve designers find external answers to design problems; experienced designers begin to look internally and introspectively for inspiration and resolution. (D)

Maybe we can use these barriers/’barrier busters’ to see flaws in the Forest Service’s design strategy/tactics. Or maybe the Forest Service and its bevy of consultants can use them. Or maybe the roundtable participants can use them. Or maybe I’m just once again wandering about in the wilderness of esoteric thought. More on Dörner’s book in a later post. But it is a “must read” for planners and managers.

Science Forum Introductory Remarks

The two-day science forum began this morning with introductory remarks from Harris Sherman,  Department of Agriculture Undersecretary for Natural Resources and Environment.  Sherman will be the responsible official for the planning rule.   Sherman began his remarks by saying that the planning rule must be rooted in integrity – it must be based on sound science. 

He mentioned how the Secretary of Agriculture has focused on restoration – 115 million acres of NFS lands nationwide are in need of restoration.  For the remainder of the lands, we need to maintain forest health.  We need to address these challenges through landscape approaches.  We also sometimes overlook the importance of forests for water.  Over 60 million people get their water from forests.   We need to think about climate change, and we might mitigate the effects, or adapt to the changes.  We need to be concerned about the diversity of species.  Also, job creation is an important part of this – jobs in restoration or biofuels. 

Sherman said that collaboration is important, and it’s exciting to see the way that people are coming together.  He said that there is common recognition that we have serious problems.  He said that good science underpins each of these areas.  Without good science we’ll have a flawed rule that people will not buy into.  Sherman concluded by emphasizing vision, integrity, and support: developing a rule that will survive legal challenges.

Other introductory speakers included Ann Bartuska, FS deputy chief of Research and Development.   Bartuska pointed to the concept of best available science, and the challenge to answer what it is, when it is, and how we should use it.  She said that science should be the underpinning of the discussions.  She said that the Forest Service tries to make science-based decisions, and has built a science organization to support that, and many scientists from the organizations are present at this conference.  She pointed out challenges such as uncertainty, variability, scale, and complexity.  She also asked what happens if science changes – is it still best science?

Hank Kashdan, associate chief, said that the science forum is an indicator of the collaborative rulemaking effort.  He said there is a clear intent to involve people across the country.  He also said that we can’t overstate the importance of science in this process. 

The director of Ecosystem Management Coordination, Tony Tooke, finished the introductory remarks by mentioning the target to complete the rule by November 2011.  He defined success as having a rule that is grounded in science and robust collaboration, with plans that can respond to current needs and improve management.

Some Loose Threads Around Accountability

Let’s take a look at this article in the Eugene Register-Guard from this morning. Note that our fellow blogger Andy Stahl is also quoted in this piece.

Oregon Wild, a nonprofit conservation group, also has submitted written comments, said Doug Heiken, who characterized previous efforts at revision as attempts to weaken the rules and limit public engagement.
“We want to keep the agency accountable and make sure the forests are focused on clean water and biodiversity and carbon storage and recreation and all the wonderful things that come from our national forests,” he said.

This started me thinking about the concept that John Rupe originally brought up on this blog as a critical component of the Rule- accountability.

My first question was “accountability for what?” Accountability generally sounds like a good idea (we don’t want a bunch of feds running around willy-nilly spending taxpayers’ money on whatever they feel like). Nevertheless, if you asked our colleagues in the timber industry community, they would probably also like accountability: for meeting timber objectives in plans. Do they and Mr. Heiken mean the same thing by accountability? I doubt it.

Does accountability mean to you “if the FS agrees to do something, then it should do it”? Or “when Congress funds the FS to do something, it should do it”? Or something else? In the context of NFMA, when people such as Mr. Heiken bring it up, I think they mean “accountability is having viability in the NFMA regulations.” But if people disagree on what they want the FS to be accountable for, how can the FS “be accountable” to meet desires that may be inherently incompatible?

The Administration has set goals for agencies for GHG reduction. Will agencies not be “accountable” for that? Congress asks the agency to do many things through the budget process. Are we “accountable” for those? Is being taken to court the only way to get the agency to do something?

While I was ruminating on the various possible meanings of accountability, our Regional Forester gave us some information from their Safety Learning Trip to Los Alamos. Note: I may not have captured this accurately.

According to what I heard, the national leadership folks were talking about how to become a learning organization in terms of safety. One thing I wrote down was something like “accountability is seen as a codeword for culpability.” I think the idea is something along the lines of “if you think you are going to get beat up for an error, you aren’t going to report it.” That does not lead to a learning organization. Safety is an important thing our agency does, and we want to be a learning organization around it. Environmental protection is also an important thing our agency does, so we want to be a learning organization in that arena as well. In the case of environmental protection, how does accountability mesh with being a learning organization?

We assume that people want to be safe, but make mistakes through cultural habits or for other reasons. Can’t we assume that people also want to protect the environment, and examine what keeps them from doing that?

So here are three questions: 1) What does FS accountability mean to you?

2) How does this fit (if it does) with the FS becoming a learning organization, or, perhaps better, the FS plus collaborators becoming a “learning community”?

3) Do the above thoughts have implications for a planning rule? If so, what are they?