Floods, Climate and Fires- Front Range, Sept. 2013

[Image: Boulder flood]

The history of flooding, and of the economic and social hardships it brings, is related, of course, to ideas of Nature and to Book Club topics. However, in the interest of topical and timely information, I thought that this piece by Roger Pielke, Jr. was relevant and worth posting outside of Book Club.

Here’s his blog post, titled “Against the 100 year flood.”

He cited his paper, “Nine Fallacies of Floods” (apparently peer-reviewed, for those who watch that). It’s well worth a read, and worth comparing to the western wildfire situation.

No matter what the climate future holds, flood impacts on society may continue to get worse. A study conducted by the U.S. Congressional Office of Technology Assessment concluded that ‘despite recent efforts, vulnerability to flood damages is likely to continue to grow’ (OTA, 1993, p. 253). The study based this conclusion on the following factors, which have very little to do with climate:
1. Populations in and adjacent to flood-prone areas, especially in coastal areas, continue to increase, putting more property and greater numbers of people at risk,
2. flood-moderating wetlands continue to be destroyed,
3. little has been done to control or contain increased runoff from upstream
development (e.g., runoff caused by paving over land),
4. many undeveloped areas have not yet been mapped (mapping has been concentrated in already-developed areas), and people are moving into such areas without adequate information concerning risk,
5. many dams and levees are beginning to deteriorate with age, leaving property owners with a false sense of security about how well they are protected,
6. some policies (e.g., provision of subsidies for building roads and bridges) tend to encourage development in flood plains.
At a minimum, when people blame climate change for damaging flood events, they direct attention away from the fact that decision makers already have the means at their disposal to significantly address the documented U.S. flood problem.

I also thought his citation of the hydrology and climate science communities’ discussion of stationarity was interesting. If hydrologists and climatologists disagree, how can we point to “science” as a path forward? Maybe we have to gird our loins, use our own brains and experiences, and talk to people (and their elected officials) about what they think are the best approaches to policy choices.

Here’s the link to the stationarity paper… I only have the abstract, due to lack of open access.

After 2½ days of discussion it became clear that the assembled community had yet to reach an agreement on whether or not to replace the assumption of stationarity with an assumption of nonstationarity or something else. Hydrologists were skeptical that data gathered to this point in the 21st Century point to any significant change in river parameters. Climatologists, on the other hand, point to climate change and the predicted shift away from current conditions to a more turbulent flood and drought filled future. Both groups are challenged to provide immediate guidance to those individuals in and outside the water community who today must commit funds and efforts on projects that will require the best estimates of future conditions. The workshop surfaced many approaches to dealing with these challenges. While there is good reason to support additional study of the death of stationarity, its implications, and new approaches, there is also a great need to provide those in the field the information they require now to plan, design, and operate today’s projects.

I have a good deal of sympathy for the hydrologists; getting dam management wrong, in either drought or flood conditions, can have more serious and life-threatening implications than getting tree planting wrong. If we don’t know what’s going to happen in the future, we need to be flexible and pay attention to what’s really happening on the land, and not so much to models. Just a thought.

32 thoughts on “Floods, Climate and Fires- Front Range, Sept. 2013”

  1. Galloway presented the same (similar?) stationarity talk here, full (free) access online: http://www.usbr.gov/research/climate/Workshop_Nonstat.pdf (note: big download 9.5 mb) Along with a number of other talks on the subject from the same workshop. Good stuff Sharon, thanks for bringing it up. Here’s an article I found from a few years back, perhaps the origin of the “stationarity is dead” phrase: http://wwwpaztcn.wr.usgs.gov/julio_pdf/milly_et_al.pdf

  2. p.s. I wouldn’t agree with the “not so much models” statement, I think we need even more (and improving) models. After all, models helped put Neil Armstrong on the moon and get him back home again, Voyager 1 wouldn’t now be in interstellar space without models, nobody gets behind the controls of a 747 without lots of simulator (model) time, etc. etc. For all kinds of things we’ve never done/encountered before, models are an indispensable tool. Like the old cliché says, prediction is difficult, especially about the future. That doesn’t lessen the value of real-world data, otherwise (to me) it’s like saying, my right hand works just fine, why do I need a left hand? Similarly, just a thought.

  3. Obviously, this isn’t as black and white as some would like. We want to be able to model everything perfectly so that we can make this a perfect world without needing or recognizing God. We have this mindset that nothing is too big for us to figure out and we can take all of the risk out of living if only we can come up with a model that models everything. We are ignorant and arrogant.

    Here is why this discussion about the death of stationarity is so silly:
    As my reference below states, “everything is nonstationarity”. Stationarity only exists in small, well-defined pieces of the totality of nonstationarity, which encompasses all that exists anywhere and everywhere. So, in this question of hydrologists versus climatologists considering global warming, we are talking about two separate things. One is a major component (climate change) that is so complex that we can’t understand it well enough to model with any degree of certainty (consider your local weather forecast and extrapolate that out for a hundred years), and the other is a subcomponent that is small enough to understand well enough to model within certain previously researched ranges of occurrences. To try to convert a stationarity model to a nonstationarity model (as the climatologists were trying to get the hydrologists to do) is worthless, because it destroys the predictability of the stationarity model in its range of suitable application. It does this by introducing randomness in all cases, including when it is inappropriate.

    Climate modeling (nonstationarity) is pure arrogance. But wait, it is even worse than that. As I understand it, Climate Models are adaptations of Econometric Modeling. Econometric Modeling is the most egregious example of arrogance in the face of continuously proven failure after failure in terms of predictive value.

    Exogenous variables overpower the nonstationarity model every time. A model based on nonstationarity is only a hypothesis; it is a possibility with an unknown probability of occurrence. A validated model based on stationarity is based on proven research; it is a possibility with a fairly high probability of occurrence, provided it is used within the scope of its validation data set.

    So, if we are worried about what “could be”, “might be” or “maybe” as a result of climate change, then we don’t buy a home, or work, below a dam at any elevation below the point determined by known hydrological relationships for bedrock. If the soils aren’t stable, then don’t buy a house or work below a dam in the same drainage. Some people don’t live in tornado alley because they are afraid of tornadoes, while others don’t live on the San Andreas fault because they are afraid of earthquakes. One might consider moving to the moon, but then there are those massive craters to consider.

    Interpreted from this reference: http://www.wmo.int/pages/prog/hwrp/chy/chy14/documents/ms/Stationarity_and_Nonstationarity.pdf

    Stationarity applies only to small closed loop systems where we can identify all of the significant variables and develop a model to make reasonably accurate predictions of “if this” then “that will result”. These models are reasonably good in terms of predictive reliability when data don’t fall significantly outside of the normal observations from which the model was built. These models often fail when the underlying mathematical equations are extrapolated to extreme situations (e.g., supersaturated soils in unforeseen deluges that reconfigure the cross section of the drainage).

    Nonstationarity occurs when we don’t know enough to be able to develop a model that does a decent job of predicting outcomes. The apparent randomness that blows any modeling efforts out of the water is the result of not having a closed loop system where we know and understand all of the parameters and their interactions. The randomness is the result of what we call exogenous variables (pertinent variables that we don’t recognize as pertinent, or variables and interactions that are so complex that we can’t quantify them and maybe don’t even know that they exist).
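
    As a rough, made-up illustration of that contrast (nothing here comes from the WMO reference, and the numbers are arbitrary): a stationary series keeps roughly the same mean and variance over its whole record, while a nonstationary one does not.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000

        # Stationary: noise around a fixed mean (constant mean and variance).
        stationary = 10 + rng.normal(0, 2, n)

        # Nonstationary: the same kind of noise plus a slow drift in the mean.
        nonstationary = 10 + 0.01 * np.arange(n) + rng.normal(0, 2, n)

        for name, series in (("stationary", stationary), ("nonstationary", nonstationary)):
            first, second = series[: n // 2], series[n // 2:]
            print(name, round(first.mean(), 2), round(second.mean(), 2))
        # The stationary series has nearly identical half-record means;
        # the drifting series does not, so a model fitted to its early
        # record stops describing its later record.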

  4. @Guy Knudsen
    Guy: I think there is a huge gulf between predictive models that are focused on mechanics and those involving biology or climate. Still, the latter models are important for inventory and analytical uses and for providing potential outcomes — just not for predicting the future involving natural events or biological interrelationships outside the lab. Botkin has a very good discussion on this in Part One of his new book. I also presented a paper on this topic in relation to Global Warming and conifer forests for the EPA in 1991 (pp. 193-202): http://www.nwmapsco.com/ZybachB/Reports/1993_EPA_Global_Warming/index.html#Report

    The Conclusions on pg. 201 were written more than 20 years ago, but I think they are still relevant today.

  5. @Guy Knudsen

    All of the models that you describe are closed loop systems based on well-established science in the areas of astronomy, physics, fuels, metallurgy, etc. The variables were well known and easily understood and had known mathematical calculations. So by definition they were stationary rather than nonstationary.

    When it was discovered that direct mathematical calculation of trajectory and trajectory corrections wasn’t going to be fast enough on the computers of that day (from my advanced scientific programming class in grad school), they didn’t resort to nonstationary models. The solution was found in the development of stationary mathematical approximation algorithms (models of a mathematical equation rather than the actual calculation) that were fast enough on the computers of that day to allow for reasonably accurate course correction in a timely manner. They didn’t model calculation times or accuracy. They knew exactly what the calculation time was and what the absolute accuracy of each approximation was. No unproven theories were relied on. Either the approximation was fast enough and accurate enough to allow for timely and effective course correction or it wasn’t.
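
    A minimal sketch of that kind of approximation (illustrative only, with sin(x) standing in for the actual guidance calculations): the cost per call is fixed and the worst-case error is known in advance rather than estimated.

        import math

        def sin_approx(x, terms=5):
            # Truncated Taylor series: a fixed, known amount of work per call.
            return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
                       for k in range(terms))

        x = 0.7
        approx, exact = sin_approx(x), math.sin(x)
        # For an alternating series like this one, the error is no larger
        # than the first omitted term, so the accuracy is known up front.
        error_bound = x ** 11 / math.factorial(11)
        print(approx, exact, abs(approx - exact) <= error_bound)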

    Possible exogenous variables, such as unknown meteorites, had to be excluded from the model as uncontrollable. Weather was eliminated from the closed loop system by not launching if there was any chance of other uncontrollable variables influencing the outcome. Unfortunately that human judgement wasn’t always correct, but a nonstationary model based on that same human judgement/theory would have been just as wrong.

    So in reply to your statement “For all kinds of things we’ve never done/encountered before, models are an indispensable tool. Like the old cliché says, prediction is difficult, especially about the future. That doesn’t lessen the value of real-world data”
    → That depends on whether or not we have a closed loop system and all of the real world data for all of the predictive variables in that closed loop system. Climate change, macroeconomic modeling, and NSO modeling are open systems subject to exogenous variables, like the Barred Owl and the reduction of early seral forest for forage, that weren’t considered or were totally discounted based on the judgment of the “experts”. GIGO = Garbage In, Garbage Out. Or to put it another way, just because it is a printout on green bar paper doesn’t make it correct. For those of you youngsters who don’t know what green bar paper is: it was the standard computer printout paper back in the ’80s and earlier. The 120-character-per-line paper was shaded green for three lines, then white for the next three lines, and so forth.
    → Forgive an old man a little humor here. We had a linear programming optimization program that considered current prices, costs, and production yields in order to determine what products our plywood plant should make in order to maximize the weekly profit under current conditions. The program was to be reconciled against actual results on a monthly basis, and where there was a significant discrepancy, the plywood plant was to go through the model and determine what caused the variance and either correct the model for current operating conditions or fix the underperforming operating process. The Xerox copies of the green bar reports that I got looked a little suspicious, so I went to the plant and got the original reports and, lo and behold, the perfect model performance was the result of whiteout and a typewriter with the same ball-head print element as was used on the computer printer.

    You can see the whiteout on a stationary model which has been validated against actuals. Not so much on a nonstationary model, which cannot be validated because there are no actuals to compare it to.

  6. @bobzybach
    My only quibble with your conclusions was when you said the “need for accurate predictive models was critical. ”

    Since we still don’t have them 20 years later, I have to wonder if we haven’t done OK stumbling along doing what used to work, watching the results, and course correcting as needed.

  7. @Sharon

    Unfortunately, on USFS lands, we don’t use what used to work, and I can’t accept that we have done OK on those lands, for all of the reasons that people are probably tired of me citing. We threw established science away on USFS lands for hunches and attempts to freeze forests in a moment of time. We have ended up with a mess much bigger than any messes that people could have made with what used to work, and works even better now where it is used. Advances in the ancillary sciences, BMPs, and independent audits have helped to identify where tradeoffs are needed and to ensure that a better job is done now than in the past to meet real environmental concerns. The basic, fundamental, core scientific principles behind maintaining forest vigor haven’t changed.

  8. @Sharon
    Sharon: I think I was caught up in the Global Warming excitement of the time to some degree and also becoming enthused about the potential of newer computers, after 10 years on my Apple IIe. Mostly, though, I didn’t say the need for “accurate predictive models” was critical, I said “predictive models that can provide insights to biospheric responses to climate change and large-scale conifer forest disturbances is critical” — with the keyword being “insights.” I can still make a case for the latter statement, I think, but the word “critical” becomes more of a political term than a human survival term in this context. And I didn’t use the word “accurate” at all — where I DID use it was in the 2nd conclusion: “None of the models in current use [1991] has demonstrated an ability to make accurate or reliable projections,” which I think remains true, nearly 25 years later — even since introduction and widespread use of the Internet and the amazing advancement in computer technologies after that time.

    In reviewing this paper to defend myself, I did just notice that I cited Botkin four times, which I think is interesting — more than any other author. I don’t think I ever noticed that before, even when working for him on the Oregon Coast salmon study a few years later.

    [Gil: I never took any classes in predictive modeling, or ever trusted them very much (“zero success rate in predicting the past”). I’d be very interested in your comments if you have the time to read this paper. Much of my subsequent interest in documenting past conditions came from this paper and the realization that predictive models that couldn’t predict past conditions had virtually no chance of predicting future conditions.]

  9. Gil and Bob, thanks for your comments. Gil, you’re right about my previous examples, I did cherry-pick them only to support the general contention that models have their place (finding that place and then using them appropriately is the challenge). I would quibble with your descriptions of stationarity and non-stationarity; first, stationarity doesn’t need a small loop or well-understood variables, it typically is a descriptive rather than mechanistic approach, and stochasticity (randomness) is inherent in the process. For example, gathering flood data over many years, determining the mean and variance, and using the resulting model to estimate probabilities of future events (a rough sketch of that calculation follows the reference list below). Non-stationarity doesn’t necessarily involve the introduction of (more) randomness into the process; it can be something very deterministic (non-random) like putting a dam in the river (causing the stationary model to no longer fit).

    Bob, I read your article and enjoyed it, thanks. I agree with (all) your conclusions. Glad to see you mentioned individual-based models a little; in my opinion that’s the wave of the future with ever-increasing computational power (and admittedly I’m enthusiastic about them because that’s the main approach I use).

    I will add that I think the utility of models becomes more apparent when we free ourselves from the notion that they’re only supposed to supply answers (not their strong suit, as Gil/Bob/Sharon noted), but rather think of them as tools to ask questions, and to focus that process. In other words, research tools and not (necessarily) management tools. That’s my bias, of course; here are a few I’ve published over the years (please don’t feel obliged to read them):
    -Sudarshana, P., and G. R. Knudsen. 2006. Quantification and modeling of plasmid mobilization on seeds and roots. Curr. Microbiol. 52:455-459.
    -Knudsen, G. R., J. P. Stack, S. O. Schuhmann, K. Orr, and C. LaPaglia. 2006. Individual-based approach to modeling hyphal growth of a biocontrol fungus in soil. Phytopathology 96:1108-1115.
    -Knudsen, G. R., and D. J. Schotzko. 1999. Spatial simulation of epizootics caused by Beauveria bassiana in Russian wheat aphid populations. Biological Control 16:318-326.
    -Knudsen, G. R., D. J. Schotzko, and C. R. Krag. 1994. Fungal entomopathogen effect on numbers and spatial patterns of the Russian wheat aphid (Homoptera: Aphididae) on preferred and nonpreferred host plants. Environ. Entomol. 23:1558-1567.
    -Schotzko, D. J., and Knudsen, G. R. 1992. Use of geostatistics to evaluate a spatial simulation model of aphid population dynamics. Environ. Entomol. 21:1271-1282.
    -Knudsen, G. R. 1991. Models for the survival of bacteria applied to the foliage of crop plants. Pages 191-216 in: C. J. Hurst, ed., Modeling the Environmental Fate of Microorganisms. ASM Publications.
    -Knudsen, G. R., and Schotzko, D. J. 1991. Simulation of Russian wheat aphid movement and population dynamics on preferred and non-preferred host plants. Ecol. Modell. 57:117-131.
    -Knudsen, G. R., and Stack, J. P. 1991. Modeling growth and dispersal of fungi in natural environments. Pages 625-645 in: D. K. Arora, K. G. Mukerji, B. Rai, and G. R. Knudsen, eds., Handbook of Applied Mycology, Vol. I: Soil and Plants. Marcel Dekker, New York.
    -Knudsen, G. R. 1989. Model to predict aerial dispersal of bacteria during environmental release. Appl. and Environ. Microbiol. 55:2641-2647.
    -Knudsen, G. R., Johnson, C. S., and Spurr, H. W., Jr. 1988. A simulation model explores fungicide strategies to control peanut leafspot. Peanut Science 15:39-43.
    -Knudsen, G. R., and G. W. Hudler. 1987. Use of a computer simulation model to evaluate a plant disease biocontrol agent. Ecol. Modell. 35:45-62.
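
    A minimal sketch of the flood example mentioned before the reference list, with invented numbers and a deliberately oversimplified normal model (real flood-frequency studies generally fit skewed distributions such as log-Pearson Type III):

        import numpy as np
        from scipy.stats import norm

        # Hypothetical annual peak flows (cfs), invented for illustration.
        peaks = np.array([1200, 950, 1800, 2100, 1400, 1600, 990, 2500,
                          1300, 1750, 2050, 1150, 1900, 1450, 1680])

        mean, std = peaks.mean(), peaks.std(ddof=1)

        # Estimated probability that a given year's peak exceeds a design
        # threshold, treating the peaks as draws from a fixed (stationary)
        # normal distribution.
        threshold = 2400
        print(round(norm.sf(threshold, loc=mean, scale=std), 3))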

  10. @Guy Knudsen
    Guy, I remember that’s what we used to say about models, that they are used to refine thinking and test ideas. And empirical ones can be helpful… fire behavior, pesticide dispersal… but I think the key thing is that, to be useful, it must be capable of being found to be wrong. Which is basically the point of a scientific hypothesis.

    My concern is with models that are not empirically testable; specifically, my concern is the interpretation that goes into making public policy choices as if the answers from these models were valid. Thanks for the opportunity to clarify. Everyone uses models, either heuristic or explicitly mathematical.

  11. @Guy Knudsen

    Your explanation of stationarity and nonstationarity really didn’t clear up my confusion. In spite of a great deal of modeling experience, these terms are totally new to me. I find them to be rather esoteric and a distraction to the discussion of the utility of modeling. What does it add to the terminology that I am familiar with in the last three paragraphs of this comment of mine?

    After looking at the titles for your published papers, I just can’t understand why one would resort to a model if the subjects lend themselves to statistically sound experiments. Maybe I am missing something but, unless these models were already the result of such closed loop system research, they would only be conjecture and statistically valid research would seem to be more appropriate.

    @Sharon

    You have gotten to the root of the problem with your very simple and to-the-point statement that “the key thing is that, to be useful, it must be capable of being found to be wrong”, and I would add ‘in a meaningful time frame relative to the decision that the model is intended to provide guidance on’.

    I would disagree with your statement that models are used to “refine thinking and test ideas”. This implies to me that your association with modeling has been mostly with open system models. I reject your statement in that it dismisses the predictive ability of many closed loop system models on the basis of your experience with open system models.

    @Guy Knudsen
    @Sharon

    CLOSED LOOP SYSTEM MODELS that are built from and validated by A POSTERIORI (after the fact) data sets, set up under statistically robust experimental designs that weren’t confounded by exogenous occurrences, have good predictive value. They are very useful for more than just “refining thinking and testing ideas”. They may or may not be any better than open system models for “what if” scenarios that extrapolate beyond the underlying data sets.

    As Guy states, we commonly limit these closed loop models to mechanical systems, but I contend that there are many examples in forestry. We have site index, volumetrics, and growth and yield. These begin to transition to open loop systems in that exogenous variables can affect them, but, like the rocket ship trajectory calculations that ignore unknown meteorites as mentioned above, they can be reconciled against actuals and adjusted accordingly after the exogenous occurrence, just as in the mechanical system example that I gave for a plywood plant. Exogenous variables such as fire, beetles, and wetter- or drier-than-normal occurrences are reflected in inventory, and we adjust/reconcile and move on until the next unpredictable event intervenes. These STATIC models are extremely critical for calculation of the sustainability of any forest management scheme.

    OPEN SYSTEM MODELS attempt to include exogenous variables by introducing randomness through STOCHASTIC processes (relating to, or characterized by conjecture). These systems are A PRIORI (before the fact) and are often used to “refine thinking and test ideas”. However, they are pretty much a waste of time in my opinion because of their poor predictive reliability. If you can’t rely on them for predictive purposes then they are not science and are just talk expressed in mathematical algorithms or heuristic logic structures. So, why would you want to use them to “refine thinking and test ideas”?

  12. @bobzybach

    I just saw your request for me to look over your paper above.

    It shows that you along with Guy and Sharon live/lived in a different modeling world than I do/did. I have totally discounted open system models as pure conjecture. My ability to hold a job meant that I had to do more than conjecture. I had to provide concrete actionable results that were verifiable in their ability to predict past results. In high school through grad school (masters in Quantitative Forest Management in 1969 from UGA under the best minds in growth and yield and forestry operations research at that time), I learned that it wasn’t science if it wasn’t the result of a statistically sound experimental design. To be science you had to be able to reproduce the results with independent testing. Anything else was conjecture no matter what spin you put on it. Theory is conjecture.

    Re: “Much of my subsequent interest in documenting past conditions came from this paper and the realization that predictive models that couldn’t predict past conditions had virtually no chance of predicting future conditions.”
    –> Agree, as I said in comment #11 “If you can’t rely on them for predictive purposes then they are not science and are just talk expressed in mathematical algorithms or heuristic logic structures. So, why would you want to use them to “refine thinking and test ideas”?”

    So your conclusions about predictive models that aren’t predictive are dead on. However, in the conclusion to your paper, I think that your list of modeling needs is basically chasing after the wind. Our time would be much better spent defining and refining closed loop system models than conjecturing with open system models. Conjecturing can be done without a model, although I will admit that modeling conjecture may help to reveal some inconsistencies between conjectures.

    What I find disgusting is when Chicken Little claims that conjecture is science just because it came out of a model based on the opinions of some scientists rather than being based on true science.

  13. Why is the word “Fires” in the title of this piece? Is there any evidence that fires have much of anything to do with the flooding?

  14. I’m sure that people will argue about that.. like they will argue about climate change influences.

    My point was comparing Roger’s paper and the section I quoted about preparation to what people say about wildfires…

    It’s about midway through the post.

    He cited his paper, “Nine Fallacies of Floods” (apparently peer-reviewed, for those who watch that). It’s well worth a read, and worth comparing to the western wildfire situation.

  15. @Matthew Koehler
    When soils become hydrophobic, stormwater runoff is often enhanced. I’ve seen that MANY times on the salvage projects I have worked on. The Rabbit Creek fire (200,000+ acres) of 1994 had a massive 150,000 cubic yard slide block the Boise River after a storm event. Flood damages in San Bernardino happened after both big fire years. Roads were washed out near Lake Isabella after the McNally Fire burned. After the 1994 fires on the Tahoe, there were damaging floods the following winter.

    I would be interested to see burn intensity maps correlated with flood damage intensity. Here is a view of the landscape directly above Boulder, with plenty of fire-scarred barren ground.

    https://maps.google.com/maps?hl=en&ll=40.041678,-105.379486&spn=0.112364,0.264187&t=h&z=13

    Of course, there are plenty of other examples of catastrophic erosion after intense wildfires. I do wonder why most of the roads there are built in the canyon bottoms, except that it was probably cheaper and easier to build them there, instead of using ridges and switchbacks. It’s hard to say which is better and which is worse, if you have to build a road.

  16. @larryharrellfotoware

    Once again, I’ll ask, “Why is the word “Fires” in the title of this piece? Is there any evidence that fires have much of anything to do with the flooding?”

    Sorry, but both Sharon’s and Larry’s responses do nothing to answer the basic question I’ve asked. Should I change the title to say “Floods, Climate and Salvage Logging- Front Range, Sept. 2013?” After all, we can assume that some post-fire salvage logging took place somewhere along the Front Range.

    My opinion is that it’s simply irresponsible to put the word “Fire” in the title of this piece, which has nothing to do with fire. Thanks.

  17. Gil: “In spite of a great deal of modeling experience, these terms are totally new to me. I find them to be rather esoteric and a distraction to the discussion of the utility of modeling. What does it add to the terminology that I am familiar with in the last three paragraphs of this comment of mine?” … “OPEN SYSTEM MODELS attempt to include exogenous variables by introducing randomness through STOCHASTIC processes (relating to, or characterized by conjecture). These systems are A PRIORI (before the fact) and are often used to ‘refine thinking and test ideas’. However, they are pretty much a waste of time in my opinion because of their poor predictive reliability. If you can’t rely on them for predictive purposes then they are not science…”
    Gil, it’s a big tent and you’re certainly entitled to your opinion about when models are useful or not, and why. Realize that there’s a large community of actual scientists who are doing science and modeling, and publishing their work in the scientific literature, who will disagree with you, which doesn’t invalidate your opinion, but it may mean that folks won’t pay as much attention to it as you would like. Note that equating stochasticity with “conjecture” is inaccurate. Stochasticity, or randomness, permeates the statistical analyses that you (rightly) are an advocate of. Means (averages) and variance, for example, are derived from probability distributions of measured values, so for example, even the calculation of site index has stochastic underpinnings, as do growth and yield models, etc. I wholeheartedly agree with you that models such as the ones you mentioned (site index, growth and yield, production optimization, etc.) are invaluable for management, and more theoretical models typically are not. But the point is, the latter aren’t intended to be management models. Their role (in large part anyway) is to open up new vistas, re-frame the questions, and point to promising experimental approaches (which is similar to what Sharon said above, also her statement that models are in some ways analogous to hypotheses has a lot of truth to it). From those more theoretical approaches, management models may ensue. Best, -Guy

  18. @Matthew Koehler

    Let me go one step further… in Roger’s paper he states:

    1. Populations in and adjacent to flood-prone areas, especially in coastal areas, continue to increase, putting more property and greater numbers of people at risk,
    2. flood-moderating wetlands continue to be destroyed,
    3. little has been done to control or contain increased runoff from upstream
    development (e.g., runoff caused by paving over land),
    4. many undeveloped areas have not yet been mapped (mapping has been concentrated in already-developed areas), and people are moving into such areas without adequate information concerning risk,
    5. many dams and levees are beginning to deteriorate with age, leaving property owners with a false sense of security about how well they are protected,
    6. some policies (e.g., provision of subsidies for building roads and bridges) tend to encourage development in flood plains.

    I would say 1 and 4 are similar to development in fire-prone areas. I wonder if there are 6-like policies that encourage development in fire-prone areas?

  19. I wonder if any post-fire logging took place. Looking at the map of the area, you can see why the effects of fire would be included in any discussion. Though I have heard this was a very unusual rain event.

  20. Sharon, My issue is that you have taken the tragedy of what’s likely a 1,000 to 10,000 year flood event on the Front Range and somehow tied it to wildfire in the title, making it seem as if the floods have something to do with wildfire….even though there has been zero evidence that the flooding had anything to do with wildfire.

  21. @Sharon
    I see things the same way. There appears to be a parallel between the two, with some people preferring to do nothing about the impacts, and others providing mitigation measures designed to provide relief to already-existing development. That would include special areas in the forests, including endangered species habitats and places people cherish. Note that in Yosemite, they acted to protect Sequoia groves, instead of letting “whatever happens” happen. Certainly, “whatever happens” isn’t good for creeks and rivers, either.

  22. @Matthew Koehler
    Matthew… check out the quoted section of Roger’s post below. You used the terms “1000-10,000 year flood event” incorrectly, as I read it. The term is flawed, as Roger elaborates in his paper. Further, they are not that rare in Boulder, as he also points out.

    It’s interesting how insurance and regulatory needs drove something that doesn’t really make sense, and gives the public the wrong idea. I hope that doesn’t happen as regulations and insurance companies enter the fire rating biz.

    After many decades of relatively frequent flooding in the early parts of the 20th century, Boulder has been on a lucky streak which had, until this week, lasted over forty years:

    Serious floods have affected downtown Boulder in 1894, 1896, 1906, 1909, 1916, 1921, 1938, and 1969 with the worst being those of May 31-June 2, 1894 and May 7, 1969. The flood of 1969 was the result of four days of almost continuous rainfall (11.27” measured in Morrison and 9.34” at the Boulder Hydroelectric Plant three miles up Boulder Canyon from town).

    This lucky streak led to concerns, such as these expressed in 2008:

    Eric Lessard, an engineering project manager with the city’s utilities department, said it’s hard not to get complacent, because it’s been so long since the 1894 flood that inundated the city.

    “That’s one of the biggest problems we have — we’ve been really, really fortunate in Boulder. We haven’t had any major floods in many, many years. It starts to give people a false sense of confidence”
    Despite the long lucky streak, in recent decades Boulder, and the Colorado Front Range, have devoted considerable resources to flood mitigation efforts. It will be interesting in the months and years to come to assess the effectiveness of those efforts. Many lessons will no doubt be learned about what might have been done better, but I will be surprised if the many years of planning, investment and structural mitigation did not dramatically reduce the possible impacts of the recent floods.

    and

    In the aftermath of this week’s Boulder flood some observers are already trying to out-do each other by making bigger and bigger claims of the so-called N-year flood. As might be expected the biggest claims (a 1,000-year event has the record so far!) are made by those who seek to link causality of Colorado disaster to human-caused climate change in a simplistic way (those interested in this topic can have a look at the second fallacy covered in the paper below). There has been better reporting too, such as this from NBC.
    And the second fallacy is:

    2.1. FLOOD FREQUENCIES ARE WELL UNDERSTOOD
    Flood experts use the terms ‘stage’ and ‘discharge’ to refer to the size of a flood (Belt, 1975). A flood stage is the depth of a river at some point and is a function of the amount of water, but also the capacity of a river channel and floodplain and other factors. Hence, upstream and downstream levees and different uses of floodplain land can alter a flood’s stage. A flood discharge refers to the volume of water passing a particular point over a period of time. For example, in 1993 St. Louis experienced ‘the highest stage we’ve ever had, but not the biggest volume’.
    We’ve had bigger flows, but the stage was different because the water could flow from bluff to bluff. Now we have communities in the floodplain. Every time you do something on a floodplain, you change the flood relationship. Every time a farmer plants a field or a town puts in a levee, it affects upstream flooding. That’s why you can’t really compare flooding at different times in history (G. R. Dryhouse quoted in Corrigan, 1993).

    According to the World Meteorological Organization’s International Glossary of Hydrology, ‘flood frequency’ is defined as ‘the number of times a flood above a given discharge or stage is likely to occur over a given number of years’ (WMO, 1993). In the United States, flood frequencies are central to the operations of the National Flood Insurance Program, which uses the term ‘base flood’ to note ‘that in any given year there is a one percent chance that a flood of that magnitude could be equalled or exceeded’ (FIFMTF, 1992, p. 9-7). The ‘base flood’ is more commonly known as ‘the 100-year flood’ and is ‘probably the most misunderstood floodplain management term’ (FIFMTF, 1992, p. 9-7).

    A determination of the probability of inundation for various elevations within a community is based on analysis of peak flows at a point on a particular river or stream. However, ‘there is no procedure or set of procedures that can be adopted which, when rigidly applied to the available data, will accurately define the flood potential of any given watershed’ (USWRC, 1981, p. 1). For many reasons, including limitations on the data record and potential change in climate, ‘risk and uncertainty are inherent in any flood frequency analysis’ (USWRC, 1981, p. 2). Nevertheless, quantification of risk is a fundamental element of flood insurance as well as many aspects of flood-related decision making.

    In order to quantify flood risk, in the early 1970s the National Flood Insurance Program adopted the 100-year-flood standard (FIFMTF, 1992, p. 8-2). The standard was adopted in order to standardize comparison of areas of risk between communities. Since that time the concept of the N-year flood has become a common fixture in policy, media, and public discussions of floods. Unfortunately, ‘the general public almost universally does not properly understand the meaning of the term’ (FIFMTF, 1992, p. 9-7). Misconceptions about the meaning of the term creates obstacles to proper understanding of the flood problem and, consequently, the development of effective responses.

    The 100-year standard refers to a flood that has a one percent chance of being exceeded in any given year. It does not refer to a flood that occurs ‘once every 100 years’. In fact, for a home in a 100-year flood zone there is a greater than 26% chance that it will see at least one 100-year flood over a period of 30 years (and, similarly, about a 63% chance over 100 years). The general formula for the cumulative probability of at least one flood of annual probability P is C = 1 − (1 − P)^N, where N equals the number of years from now, and C is the cumulative probability over period N (P is assumed to be constant and events are independent from year to year). By choosing values for P and C one can compute the number of years that the cumulative probability (C) covers. [A short numerical check of this formula follows the excerpt.]

    The concept and terminology of the ‘100-year floodplain’ was formally adopted by the federal government as a standard for all public agencies in 1977 under Executive Order 11988. In 1982 FEMA reviewed the policy and found that it was being used in the agencies and, lacking a better alternative, concluded that the policy should be retained (FIFMTF, 1992, p. 8-3). However, despite the FEMA review, use of the concept of the 100-year flood is encumbered by a number of logical and practical difficulties (cf. Lord, 1994).
    First, there is general confusion among users of the term about what it means. Some use the term to refer to a flood that occurs every 100 years, as did the Midwestern mayor who stated that ‘after the 1965 flood, they told us this wouldn’t happen again for another 100 years’ (IFMRC, 1994, p. 59). Public confusion is widespread: A farmer suffering through Midwest flooding for the second time in three years complained that ‘Two years ago was supposed to be a 100-year flood, and they’re saying this is a 75-year flood, What kind of sense does that make? You’d think they’d get it right’ (Peterson, 1995).
    Second, the ‘100-year flood’ is only one of many possible probabilistic measures of an area’s flood risk. For instance, in the part of the floodplain that is demarcated as the ‘100-year floodplain’ it is only the outer edge of that area that is estimated to have an annual probability of flooding of 0.01, yet confusion exists (Myers, 1994). Areas closer to the river have higher probabilities of flooding, e.g., there are areas of a floodplain with a 2% annual chance of flooding (50-year floodplain), 10% annual chance (10-year floodplain), 50% annual chance (2-year floodplain) etc., and similarly, areas farther from the river have lower probabilities of flooding. The ‘100-year floodplain’ is arbitrarily chosen for regulatory reasons and does not reflect anything fundamentally intrinsic to the floodplain.
    Third, the ‘100-year floodplain’ is determined based on past flood records and is thus subject to considerable errors with respect to the probabilities of future floods. According to Burkham (1978) errors in determination of the ‘100-year flood’ may be off by as much as 50% of flood depth. Depending on the slope of the flood plain, this could translate into a significant error in terms of distance from the river channel. A FEMA press release notes that ‘in some cases there is a difference of only inches between the 10- and the 100-year flood levels’ (FEMA, 1996). Further, researchers are beginning to realize an ‘upper limit’ on what can be known about flood frequencies due to the lack of available trend data (Bobée and Rasmussen, 1995).

    Fourth, the 100-year floodplain is not a natural feature, but rather is defined by scientists and engineers based on the historical record. Consequently, while the ‘100-year floodplain’ is dynamic and subject to redefinition based on new flood events that add to the historical record, the regulatory definition is much more difficult to change. For instance, following two years of major flooding on the Salt River in Phoenix, Arizona, the previously estimated 100-year flood was reduced to a 50-year flood (FIFMTF, 1992, p. 9-7). What happens to the structures in redefined areas? Any changes in climate patterns, especially precipitation, will also modify the expected probabilities of inundation. For example, some areas of the upper Midwest have documented a trend of increasing precipitation this century (Changnon and Kunkel, 1995; Bhowmik et al., 1994). Furthermore, human changes to the river environment, e.g., levees and land use changes, can also alter the hydraulics of floods. Finally, the extensive use of the term ‘100-year flood’ focuses attention on that aspect of flooding, sometimes to the neglect of the area beyond the 100-year flood plain (Myers, 1994).

    What can be done? Given the pervasive use of the concept of the ‘100-year flood’ in flood insurance and regulatory decision-making it seems that adoption of an alternative concept is unlikely. Nevertheless, there are a number of steps that can be taken by those who use the concept when dealing with policy makers and the public. First, we need to be more precise with language. The FIFMTF (1992) recommends the phrase ‘one percent annual chance flood’ as a preferred alternative to ‘100-year flood’, ‘base flood’, or ‘one percent flood’. Another alternative is ‘national base flood standard’ which removes reference to probability (Thomas, 1996, personal communication). Second, when communicating with the public and the media, flood experts could take care to convert annual exceedances into annual probabilities. And third, policy documents could rely less on the ‘100-year flood’ to illustrate examples and propose policies, and at the very least explicitly discuss floods of different magnitudes.
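
    The cumulative-probability formula quoted above is easy to check numerically; a minimal sketch, using only the 1% annual chance and the 30- and 100-year horizons discussed in the excerpt:

        def chance_of_at_least_one(p_annual, years):
            # C = 1 - (1 - P)^N, assuming a constant annual probability
            # and independence from year to year, as the excerpt states.
            return 1 - (1 - p_annual) ** years

        print(round(chance_of_at_least_one(0.01, 30), 2))   # 0.26 over 30 years
        print(round(chance_of_at_least_one(0.01, 100), 2))  # 0.63 over 100 years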

  23. Sharon, you write: “Matthew… You used the terms ‘1000-10,000 year flood event’ incorrectly, as I read it.” Why? Matthew simply said “…what’s likely a 1,000 to 10,000 year flood event on the Front Range.” There is such a thing as a 1,000-year flood, it’s a flood whose magnitude only has a 0.1% chance of happening in a given year. You can plug in any number, the definition still works. Matthew didn’t use it incorrectly, as (somewhat ironically) your lengthy post above explains.

    Also, to say that “they are not that rare in Boulder” is simply wrong; consider: “On average Boulder receives about 1.7 inches of rain during the month of September. As of 7 AM on September 16, Boulder had received 17.17 inches of rain so far in the month, smashing the all-time record of 9.59 inches set in May of 1995. 9.08 inches fell on Sept. 12, nearly doubling the previous daily record of 4.80 inches set on July 31, 1919. In fact, Boulder has already broken its yearly record for precipitation—with more than three months left in the year, and the rain still falling.” (source: http://science.time.com/2013/09/17/the-science-behind-colorados-thousand-year-flood/#ixzz2fBSWhKsx) It does come back to the stationarity/non-stationarity discussion, since the probabilities underlying the “n-year flood” designation come from a historical time series (stationarity). -Guy

    • I think Roger’s point was that it is misleading to say “1,000-year” when you mean “a .1% chance.” But either way, as you point out, they are based on stationarity… they had to use something to do zoning for floods, so they came up with that.

      I don’t know if it’s “simply wrong” based on this:

      Serious floods have affected downtown Boulder in 1894, 1896, 1906, 1909, 1916, 1921, 1938, and 1969 with the worst being those of May 31-June 2, 1894 and May 7, 1969. The flood of 1969 was the result of four days of almost continuous rainfall (11.27” measured in Morrison and 9.34” at the Boulder Hydroelectric Plant three miles up Boulder Canyon from town).

      What I was saying is that floods in Boulder have not been that unusual. What you are saying is that this September’s rainfall in Boulder has never been observed since precip has been recorded. Both those things could be true it seems to me.

  24. @Guy Knudsen

    Re: “Note that equating stochasticity with “conjecture” is inaccurate. Stochasticity, or randomness, permeates the statistical analyses that you (rightly) are an advocate of. Means (averages) and variance, for example, are derived from probability distributions of measured values, so for example, even the calculation of site index has stochastic underpinnings, as do growth and yield models, etc.”

    –> We are pretty far apart so I will limit my response to the above to saying that stochastic modeling has nothing to do with the variance in the underlying data set. Yes, both stochastic and deterministic/static models are built from data sets that include random variation but only one tries to reintroduce that variation into the model’s predictive outcomes. My experience has been that it is more effective to run a deterministic/static model under a few “what if” scenarios than it is to develop a stochastic model. To avoid a slanted/outlier view of the outcome, you need to repeatedly run a stochastic model which (if the model is properly configured to represent the appropriate PDF) will only bring your mean from the stochastic runs roughly back to what a single run of a deterministic model would have told you in the first place.
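
    A minimal sketch of that point, using a toy model with made-up numbers (not any real forestry or climate model):

        import numpy as np

        rng = np.random.default_rng(1)

        def growth(years, rate):
            # Toy deterministic/static "model": a starting value plus a
            # constant annual increment.
            return 100 + rate * years

        deterministic = growth(20, rate=2.5)

        # Stochastic version: draw the rate from a distribution and
        # average a large number of runs.
        rates = rng.normal(2.5, 0.5, 10_000)
        stochastic_mean = growth(20, rates).mean()

        print(deterministic, round(stochastic_mean, 1))  # the two agree closely

    That agreement is only guaranteed when the model is linear in its random inputs; for strongly nonlinear models the two can diverge, which is the usual argument offered for running the stochastic version.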

    I do recognize that there are a lot of people with a vested interest in disagreeing with me. As you say, “it may mean that folks won’t pay as much attention to it as you would like”. It is not about me; I’m not interested in getting published, or making a name for myself, or making sure that my research isn’t discredited and my money cut off. Even when employed, peer pressure was not a concern of mine. Peer pressure is a tool that people use to box you in, to keep you from finding the truth before they do. To me it is about showing some youngsters that they have gotten so smart that they are being hopelessly ineffective trying to produce the ultimate, all-comprehensive model, which can never be validated and so is conjecture.

    So why do I bother? Sometimes I really wonder but I guess that it is just a drive that I have to give back to the profession and improve the state of our forests by communicating my 48 years of involvement in the forestry and wood products industry. It is one of my loves just as it appears to be one of yours.

  25. @Sharon

    Re: “either way, as you point out they are based on stationarity.. they had to use something to do zoning for floods so they came up with that”

    –> Please explain to me why “based on stationarity” couldn’t have been said more simply and more easily understood without the use of esoteric jargon by saying ‘based on history’? I asked Guy a similar question, but he has not answered.

  26. @Guy Knudsen

    Re: “It does come back to the stationarity/non-stationarity discussion, since the probabilities underlying the “n-year flood” designation come from a historical time series (stationarity)”

    After further reading, I believe that you and your fellow believers are the ones who would be found lacking in a court of statisticians and/or mathematicians.

    → Are you saying that someone calculated a Time Series Model to predict a “n-year flood” contour? Or did they simply look at the historical data set and calculate the Cumulative PDF of rainfall or river gauge levels and look at the 1% tail on the upper end and say that is the amount of rain in a period that will raise the river level to this point on the river gauge? According to this source http://ga.water.usgs.gov/edu/100yearflood.html it is the latter.
    So again, I fail to see where your reference to a time series or stationarity adds anything other than obfuscation (lack of clarity). In fact, as I will show, your statement reveals your misunderstanding of stationarity.

    → Reference to stationarity and a time series would only be appropriate if you were trying to predict the sequence of annual high river levels for X number of years into the future with a stochastic (probabilistic) model. That would require that you had identified some independent variables that could be predicted into the future year by year on a deterministic basis, and then introduced their variation into the model in order to convert it to a stochastic model. But your quoted statement above has nothing to do with what I just explained, as verified by the following quote:

    → “Stationarity is a property of an underlying stochastic process, and not of observed data.” (quoted from the bottom of page one of this document: http://www.wmo.int/pages/prog/hwrp/chy/chy14/documents/ms/Stationarity_and_Nonstationarity.pdf)
    – So, contrary to your claim, historical observations (observed data) have nothing to do with stationarity. Stationarity and nonstationarity only have to do with predictive stochastic modeling. The only difference between stationarity and nonstationarity is that nonstationarity requires a disjointed function introduced by something like a 0/1 variable in order to change the model from a continuous function to a disjointed function.
    – So, please tell me why this paper is wrong and you are right?

  27. Gil: “Please explain to me why “based on stationarity” couldn’t have been said more simply and more easily understood without the use of esoteric jargon by saying ‘based on history’? I asked Guy a similar question, but he has not answered.”

    Sorry not to have answered before, you’re right it could be said differently, but “stationarity” is an old and standard concept in statistics, economics, engineering, etc., so not especially esoteric in those and similar contexts. Data “based on history” IS essentially a time series, because history reflects time and things happen serially. You could say it either way but the terms stationarity and time series are intimately and pretty much universally associated (a few references below). And no argument with your quote that stationarity is a property of an underlying stochastic process (as is variance, for example), but we use it (as we use variance) to evaluate data, so they DO indeed have something to do with each other. Beyond that, this isn’t a statistics blog so we’re probably into the overkill zone for this discussion. Best, -Guy

    http://www.cas.usf.edu/~cconnor/geolsoc/html/chapter11.pdf
    “Chapter 11 Stationary and non-stationary time series. G. P. Nason. Time series analysis is about the study of data collected through time. The field of time series is a vast one that pervades many areas of science and engineering particularly statistics and signal processing… the first thing to say is that there are several excellent texts on time series analysis. Most statistical books concentrate on stationary time series and some texts have good coverage of “globally non-stationary” series such as those often used in financial time series. For a general, elementary introduction to time series analysis the author highly recommends the book by (Chatfield 2003).”
    “This article is a brief survey of several kinds of time series model and analysis. Section 11.1 covers stationary time series which, loosely speaking, are those whose statistical properties remain constant over time….”
    “Stationary models form the basis for a huge proportion of time series analysis methods….”
    “In time series analysis the basic building block is the purely random process…. A purely random process is a stochastic process, {ε_t}, t = −∞, …, ∞, where each element ε_t is (statistically) independent of every other element ε_s, for s ≠ t, and each element has an identical distribution.”
    “11.1.2 Stationarity. Loosely speaking a stationary process is one whose statistical properties do not change over time. More formally, a strictly stationary stochastic process is one where, given t_1, …, t_ℓ, the joint statistical distribution of X_{t_1}, …, X_{t_ℓ} is the same as the joint statistical distribution of X_{t_1+τ}, …, X_{t_ℓ+τ}, for all ℓ and τ.”

    http://www-stat.wharton.upenn.edu/~stine/stat910/lectures/02_stationarity.pdf
    “Statistics 910, #2 Examples of Stationary Time Series”
    “Stationarity. Strict stationarity (Defn 1.6): Probability distribution of the stochastic process {X_t} is invariant under a shift in time.”

    http://www.investopedia.com/articles/trading/07/stationary.asp
    “Financial institutions and corporations as well as individual investors and researchers often use financial time series data (such as asset prices, exchange rates, GDP, inflation and other macroeconomic indicators) in economic forecasts, stock market analysis or studies of the data itself.” “Data points are often non-stationary or have means, variances and covariances that change over time. Non-stationary behaviors can be trends, cycles, random walks or combinations of the three…Non-stationary data, as a rule, are unpredictable and cannot be modeled or forecasted. The results obtained by using non-stationary time series may be spurious in that they may indicate a relationship between two variables where one does not exist. In order to receive consistent, reliable results, the non-stationary data needs to be transformed into stationary data. In contrast to the non-stationary process that has a variable variance and a mean that does not remain near, or returns to a long-run mean over time, the stationary process reverts around a constant long-term mean and has a constant variance independent of time.”

    http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc442.htm
    (Engineering Statistics Handbook)
    “Stationarity A common assumption in many time series techniques is that the data are stationary. A stationary process has the property that the mean, variance and autocorrelation structure do not change over time. Stationarity can be defined in precise mathematical terms, but for our purpose we mean a flat looking series, without trend, constant variance over time, a constant autocorrelation structure over time and no periodic fluctuations (seasonality).”
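
    A minimal illustration of those definitions, with simulated data only: a random walk has a mean that wanders (nonstationary), while its first differences behave like the “flat looking series” the handbook describes.

        import numpy as np

        rng = np.random.default_rng(2)
        steps = rng.normal(0, 1, 2000)

        walk = np.cumsum(steps)   # random walk: a classic nonstationary series
        diffs = np.diff(walk)     # first differences: back to stationary noise

        for name, series in (("random walk", walk), ("differences", diffs)):
            quarters = np.array_split(series, 4)
            print(name, [round(q.mean(), 2) for q in quarters])
        # The walk's mean drifts from one quarter of the record to the next;
        # the differenced series stays centered near zero throughout.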

  28. Guy: You are right that this isn’t a statistics blog, yet it has been a very interesting and relevant discussion to this point, and I particularly appreciate both your and Gil’s efforts to bring the jargon into Plain English that can be readily understood by the rest of us. Despite being a baseball statistics nut for much of my youth, and despite being an excellent mathematician until being waylaid by New Math geometry in high school, I did poorly in stats while in college; almost entirely due to the irritating and unnecessary jargon that was being imposed upon us. I see this — rightly, so near as I can tell — as just one more method of filtering out unwanted members of inner-circle societies by substituting wordplay and numbers games for logic and creative thinking, thereby excluding most of us from the discussion. Gil is right when he uses the word “esoteric,” and there is no apparent purpose for this other than creating a subset of society for unstated reasons. Statistically speaking, if one does a demographic analysis of the racial, sexual, and employment characteristics of individuals who excel in statistics-speak (“statistics”), the result is both predictable and unnerving. Church Latin was used in the Middle Ages for similar purposes.

  29. @Guy Knudsen

    Re: “Data “based on history” IS essentially a time series, because history reflects time and things happen serially”

    –> Let’s go back to the subject of the original post by Sharon. Your statement above has nothing to do with determining an n-year flood stage. The historical sequence (serial order / interdependence between observations) of events IS NOT considered in the calculation. The calculation treats the observations as independent observations in order to come up with the 1% probability of occurrence as a proxy for a 100-year flood stage.

    The sequence of those historical observations could have been reordered into every possible permutation of the data set, and the calculation of the n-year flood stage for each possible sequence would produce exactly the same numerical result, right down to the umpty-umpth decimal point. This is true because the sequence of events does not influence the inherent variation in the observation data set; so, likewise, it doesn’t have any impact on the calculation of the cumulative distribution function, which is the source of the n-year flood stage value.
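
    A minimal sketch of that point, with invented stage data (real flood-frequency work fits a probability distribution to the record rather than reading off a raw percentile, but the order-invariance is the same either way):

        import numpy as np

        rng = np.random.default_rng(3)
        # Hypothetical annual peak stages (feet), one value per year of record.
        stages = rng.gamma(shape=4.0, scale=2.0, size=80)

        # A "100-year" stage taken as the 99th percentile of the record:
        # the calculation never looks at which year each value came from.
        original = np.percentile(stages, 99)
        reshuffled = np.percentile(rng.permutation(stages), 99)

        print(round(original, 2), round(reshuffled, 2))  # identical values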

    As I said before, the sequence of observations is only pertinent to building a predictive time series model which has nothing to do with calculating the n year flood stage.

    With that, I will attempt to let this discussion drop – We come from different statistical worlds in terms of our teachers, associates and applications experience – As best as I can tell, you are not an anarchist so you must be a good guy. 🙂

    – and they all cried ‘Hooray! – Fini!’ 🙂

