This afternoon, the second panel at the science forum addressed the technical questions of landscape-scale modeling and monitoring, and the related question of adaptive management.
Eric Gustafson of the Forest Service Northern Research Station started the panel by describing the merits of landscape models, which he called the “computational formalism of state-of-the-art knowledge.” (slides pdf) He said that well-verified models are as good as the science they reflect. They are a synthesis of the state of the art. They can be generalizations, but taken as general expectations they are useful for research and planning. Models can account for spatial processes and spatial dynamics, consider long temporal scales and large spatial scales, and make predictions about the expected range of future states (composition, pattern, biomass). These models don’t accurately predict individual events.
Gustafson mentioned the “pathway-based succession models” often used in the West (VDDT/TELSA, LANDSUM, SIMPPLLE, RMLANDS, Fire-BGC, FETM, HARVEST). In these models, the disturbance simulation may be process-based or mechanistic. He has direct experience with a different type of model, the “process-based succession model” (LANDIS, LANDIS-II), developed for eastern forests with less predictable successional trajectories. He said that climate change, or novel conditions generally, may require a process-based approach. For these models, the landscape scales and long time frames mean that direct validation against observations is not possible, so modelers validate independent model components that are as simple and discrete as possible, then verify component interactions and compare model behavior with known ecosystem behavior. Modelers also conduct sensitivity and uncertainty analyses. Open-source models are preferable because their code is available for many eyes to spot problems.
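To give a sense of what a sensitivity analysis involves, here is a minimal one-at-a-time sketch built around a hypothetical toy model (not LANDIS-II or any model named above): each parameter is perturbed around its baseline while the others are held fixed, and the shift in a summary output is recorded.

```python
# Minimal one-at-a-time sensitivity sketch (hypothetical toy model, not LANDIS-II).
# A stand's biomass is projected with a simple logistic growth-plus-disturbance rule;
# each parameter is perturbed +/-10% around its baseline to see how final biomass responds.

def project_biomass(growth_rate, carrying_capacity, disturbance_prob, years=100, seed_biomass=10.0):
    """Toy projection: logistic growth with a fixed expected disturbance loss each year."""
    biomass = seed_biomass
    for _ in range(years):
        biomass += growth_rate * biomass * (1.0 - biomass / carrying_capacity)
        biomass *= (1.0 - 0.5 * disturbance_prob)  # expected fraction lost to disturbance
    return biomass

baseline = {"growth_rate": 0.08, "carrying_capacity": 300.0, "disturbance_prob": 0.01}
base_out = project_biomass(**baseline)

for name, value in baseline.items():
    for factor in (0.9, 1.1):  # perturb one parameter at a time by +/-10%
        perturbed = dict(baseline, **{name: value * factor})
        out = project_biomass(**perturbed)
        print(f"{name} x{factor:.1f}: final biomass {out:.1f} "
              f"({100 * (out - base_out) / base_out:+.1f}% vs baseline)")
```

Parameters whose perturbations produce the largest swings in the output are the ones that most deserve careful estimation and uncertainty treatment.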
Gustafson said that models have been used to compare the outcomes of management alternatives, to evaluate plans once developed, and to project the effects of proposed management (species and age class, biomass, spatial pattern, habitat for a species of interest). He prefers comparisons among alternatives to absolute projections.
When projecting the impacts of alternatives, he suggests focusing on appropriately large spatial and temporal scales for evaluating ecosystem drivers and responses, accounting for any important spatial dynamics of forest regenerative and degenerative processes, and accounting for interactions among the drivers of ecosystem dynamics and conditions (disturbance, global change).
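His preference for relative comparisons can be illustrated with a minimal sketch (hypothetical scenario names and numbers, not from the talk): report each alternative as a change from a no-action baseline run with the same model and inputs, rather than as a stand-alone absolute projection.

```python
# Hypothetical sketch of reporting alternatives relative to a no-action baseline.
# Scenario names and projected values are invented; the point is the relative framing.

projected_biomass = {        # e.g., tonnes/ha at year 100 from the same landscape model
    "no_action": 182.0,
    "alternative_A": 205.0,  # hypothetical restoration-focused scenario
    "alternative_B": 168.0,  # hypothetical increased-harvest scenario
}

baseline = projected_biomass["no_action"]
for name, value in sorted(projected_biomass.items()):
    if name == "no_action":
        continue
    change = 100.0 * (value - baseline) / baseline
    print(f"{name}: {change:+.1f}% biomass relative to no action at year 100")
```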
Steve McNulty of the Forest Service Southern Research Station talked about climate change and water. (slides pdf) He began with some observations about addressing water resources. First, if we focus only on climate change and water resources, the forest plan will fail; we need to look at the big picture. Second, if we consider water as a stand-alone ecosystem service, the forest plan will fail: if we stress only water, we’ll miss the impacts on forest growth, carbon, and biodiversity. Also, if we look only at the effects of climate change on water, we’ll miss factors that matter more for water quantity: population change, changes in the seasonal timing of precipitation, vegetation change, and all the sectors of water demand.
McNulty used as an example results from a model called WaSSI (Water Supply Stress Index), which looks at both environmental and human water use. The model shows that the vast majority of water is used by the ecosystem (only a sliver of 2 to 5 percent is human use), and that human water use in the West goes mainly to irrigation. The WaSSI model accounts for both climate change and population change.
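As a rough illustration of what a supply-stress ratio of this kind reports (a simplification with invented numbers, not the actual WaSSI implementation), total sector demand can be compared against available supply for a basin; a climate or population scenario simply adjusts those inputs and recomputes the index.

```python
# Simplified, hypothetical water-stress calculation in the spirit of a supply/demand index.
# This is NOT the WaSSI code base; the supply and sector demands below are invented numbers.

def stress_index(supply, demands):
    """Ratio of total water demand to available supply; values near or above 1 indicate stress."""
    return sum(demands.values()) / supply

basin_supply = 1200.0  # hypothetical annual supply, million cubic meters
sector_demands = {     # hypothetical annual sector withdrawals, million cubic meters
    "irrigation": 540.0,
    "municipal": 110.0,
    "thermoelectric": 90.0,
    "industrial": 45.0,
}

print(f"Stress index: {stress_index(basin_supply, sector_demands):.2f}")
# A climate scenario would shrink or shift supply; a population scenario would raise
# municipal demand; either way the same index supports the comparison.
```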
McNulty also gave other examples of the breadth of the analysis. Groundwater loss can be a factor: it doesn’t matter what climate change does to surface flows if we’ve lost our groundwater. Another example is the interaction between land-use change and climate change: reducing irrigation by 20 percent has as big an impact as climate change itself. McNulty also described the relationships among water availability, carbon sequestration, and biodiversity. The more a forest evapotranspires, the more productive it is – good for carbon storage but bad for water yield.
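That trade-off follows from a simple annual water balance (a stylized sketch with made-up numbers, not McNulty’s analysis): streamflow yield is roughly precipitation minus evapotranspiration, so whatever raises ET to grow more biomass leaves less water as yield.

```python
# Stylized annual water balance (hypothetical numbers) showing the ET / water-yield trade-off.
# yield ~= precipitation - evapotranspiration (ignoring storage change for simplicity)

precipitation_mm = 1100.0

for et_mm in (500.0, 650.0, 800.0):          # higher ET roughly tracks higher forest productivity
    yield_mm = precipitation_mm - et_mm      # water left over as streamflow and recharge
    print(f"ET {et_mm:.0f} mm -> water yield {yield_mm:.0f} mm")
```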
McNulty also mentioned the TACCIMO project (Template for Assessing Climate Change Impacts and Management Options), a joint effort of the Southern Research Station and the Forest Service Southern Region. TACCIMO links current forest plans with climate change science, and users can search by geographic location to obtain a planning assessment report.
Ken Williams, chief of the USGS Cooperative Research Units, spoke about adaptive management in the Department of the Interior. (slides pdf) Williams was one of the authors of the DOI adaptive management technical guide. The DOI has worked for many years to develop a systematic approach to managing resource systems, recognizing the importance of reducing uncertainty. The DOI framework can be useful to forest planning.
The management situation for the Forest Service involves complex forest systems operating at multiple scales; fluctuating environmental conditions and management actions; decision making required in the near term; uncertainty about long term consequences; and lots of stakeholders with different viewpoints. There are four factors of uncertainty: environmental variation; partial controllability; partial observability; and structural uncertainty (lack of understanding about the processes). All of this uncertainty limits the ability to manage effectively and efficiently.
The DOI approach emphasizes learning through management and adjusting management strategy based on what is learned (not just trial and error); focusing on reducing uncertainty about the influence of management actions; and improving management as a result of improved understanding. Williams said we need to understand that this is adaptive MANAGEMENT and not adaptive SCIENCE: science takes its value from the contribution it makes to management.
The common features of an adaptive management approach are: a management framework; uncertainty about management consequences; iterative decision making; the potential for learning through the process of management itself; and potential improvement of management based on learning. Implementation is an iterative cycle: decision making, followed by monitoring, followed by analysis and assessment, followed by learning, followed by the next round of decision making. At each point in time, decisions are guided by management objectives, assessments provide new learning, and the process rolls forward through time.
There are two key outcomes: improved understanding over time and improved management over time. Williams described two stages of management and learning: a deliberative phase (phase I) with stakeholder involvement, objectives, potential management alternatives, predictive models, and monitoring protocols and plans; then an iterative phase (phase II) with decision making and post-decision feedback on the system being managed. Sometimes you have to break out of the iterative phase and return to the deliberative phase.
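One common way to make the learning step concrete (a generic illustration, not necessarily the DOI guide’s formulation) is to carry competing models of how the system responds to management, predict the monitored outcome under each, and shift weight toward the models that predict best as monitoring data accumulate.

```python
# Generic sketch of the iterative adaptive-management loop with model-weight updating.
# The models, error assumption, and monitoring record below are hypothetical placeholders.
import math

def likelihood(predicted, observed, sd=5.0):
    """Gaussian likelihood of an observation given a model's prediction (assumed error model)."""
    return math.exp(-0.5 * ((observed - predicted) / sd) ** 2)

# Two competing hypotheses about how a monitored response changes with treated area.
models = {
    "strong_response": lambda treated_area: 100.0 + 0.8 * treated_area,
    "weak_response":   lambda treated_area: 100.0 + 0.2 * treated_area,
}
weights = {name: 0.5 for name in models}   # start with equal belief in each model

# Each cycle: decide (here, a chosen treatment level), monitor, assess, and update belief.
monitoring_record = [(20.0, 112.0), (40.0, 130.0), (60.0, 149.0)]   # (treated_area, observed)

for treated_area, observed in monitoring_record:
    updated = {name: weights[name] * likelihood(model(treated_area), observed)
               for name, model in models.items()}
    total = sum(updated.values())
    weights = {name: w / total for name, w in updated.items()}      # Bayes-style renormalization
    print(f"after observing {observed}: weights = "
          + ", ".join(f"{n}={w:.2f}" for n, w in weights.items()))
```

As the weights shift, later decisions lean more on the better-supported model, which is the sense in which management itself generates the learning.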
What about the fit to forest planning? Williams noted that forests are dynamic systems and that planning is mandated to be science-based and stakeholder-driven, to be transparent, to acknowledge uncertainty, to recognize the value of adapting management, and to build upon experience and understanding. Every one of these planning attributes is a hallmark of adaptive decision making.
The last panelist was Sam Cushman from the Forest Service Rocky Mountain Research Station. (slides pdf) Cushman cited several of his papers on the problems with generalizations and the need for specific, detailed desired conditions and specific monitoring. He said his research shows the problem with the indicator species concept: no single species can explain more than 5% of the variability in the bird community, and even select poolings of species fail to explain more than 10% of the variability among species. He said that ecosystem diversity cannot be used as a surrogate for species diversity. In practice there is a conundrum: there is a limit to how specifically diversity can be described, because limited data mean diversity can only be defined very coarsely. His research shows that habitat explains less than half of species abundance, and cover types are inconsistent surrogates for habitat. Vegetation cover type maps also fail to predict the occurrence and dominance of forest trees: his analysis found that cover type explained only about 12 to 20 percent of the variation at best. Classified cover type maps are surprisingly poor predictors of forest vegetation: 80% of the variability in tree species importance among plots was not explained even by a combination of three maps.
To be useful, desired condition statements should be detailed, specific, quantitative, and appropriately scaled. Research shows that the detailed composition and structure of vegetation and seral stages, and their pattern across the landscape, show strong relationships with species’ habitat. Cushman described an example of grizzly bear habitat analyzed at the level of detailed pixels rather than patches. He also discussed his research on species gene flow in complex landscapes and on connectivity models.
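Connectivity models of this kind are commonly built on resistance surfaces; the toy sketch below (a generic illustration, not Cushman’s actual methods or software) finds a least-cost route across a small, made-up resistance grid.

```python
# Toy resistance-surface connectivity sketch (generic illustration, not Cushman's tools).
# Cells hold movement resistance; a least-cost path between two points is found with Dijkstra.
import heapq

resistance = [            # hypothetical 4x5 resistance raster (higher = harder to cross)
    [1, 1, 4, 9, 9],
    [2, 1, 4, 9, 1],
    [9, 1, 1, 1, 1],
    [9, 9, 9, 2, 1],
]
rows, cols = len(resistance), len(resistance[0])
start, goal = (0, 0), (0, 4)

# Dijkstra over the grid, with step cost = mean resistance of the two cells crossed.
dist = {start: 0.0}
frontier = [(0.0, start)]
while frontier:
    cost, (r, c) = heapq.heappop(frontier)
    if (r, c) == goal:
        print(f"least-cost distance from {start} to {goal}: {cost:.1f}")
        break
    if cost > dist.get((r, c), float("inf")):
        continue
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            step = 0.5 * (resistance[r][c] + resistance[nr][nc])
            new_cost = cost + step
            if new_cost < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = new_cost
                heapq.heappush(frontier, (new_cost, (nr, nc)))
```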
Cushman added that monitoring is the foundation of adaptive management. If desired condition statements are specified in detail, with quantitative benchmarks at appropriate scales, we then need good, representative monitoring data that is carefully thought out: large samples, recent data, appropriate spatial scales, standardized protocols, statistical power, and high precision. We need to look at the long term. Adaptive management must be supported by monitoring, and that monitoring must be funded.
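To give a sense of what statistical power implies for sample sizes (a standard two-sample calculation with hypothetical numbers, not anything presented at the forum), the sketch below estimates how many plots are needed to detect a given change in a monitored mean.

```python
# Standard two-sample sample-size calculation (illustrative; hypothetical numbers).
# Estimates plots needed per sampling occasion to detect a change of `delta` in a monitored
# mean with two-sided significance alpha and the given power, assuming roughly normal errors.
import math
from statistics import NormalDist

def plots_needed(delta, sd, alpha=0.05, power=0.8):
    """Plots per occasion to detect a mean change of `delta` (same units as sd)."""
    z_alpha = NormalDist().inv_cdf(1.0 - alpha / 2.0)   # two-sided significance quantile
    z_beta = NormalDist().inv_cdf(power)                 # quantile for the desired power
    return math.ceil(2.0 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Hypothetical example: detect a 5-unit change when plot-to-plot SD is 12 units.
print(plots_needed(delta=5.0, sd=12.0))   # smaller changes or noisier plots need many more plots
```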