Hotshot Wakeup Interview with Jason Forthofer of the US Forest Service Missoula Fire Laboratory

Jason Forthofer, mechanical engineer, stands in an area burned by the Carr Fire, one of the devastating California wildfires in 2018. (Photo provided by Bret Butler, U.S. Forest Service)

I thought this was a terrific podcast, an interview with Jason Forthofer of the Missoula Fire Sciences Lab by the Hotshot Wakeup on his Substack. He has a gift for talking about fire models in a way that is easy to understand, at least for TSW-ites and our ilk. And the range of tools he’s working on, and their practical applications, are fascinating. At least to me.

He talks about AI and machine learning. I’ve always been interested in these new-fangled analysis contraptions, so I asked Jason these questions:

When you say “AI,” what do you mean exactly? Do you mean machine learning? I kind of thought that machine learning was empirical also, based on loading data into it. But then you mentioned a combination of using your physical model with AI. We have many older readers, so if you could explain this a bit more (or anything else you wanted to say but did not get to, or links to key papers), that would be great!

Below are his answers.

Yes, when I was saying “AI” I was primarily talking about machine learning.  I often use these terms interchangeably, but I understand that there are some differences.  In the context of the spread model work we are doing with Google, we are using machine learning, and specifically a method called deep learning, which uses the idea of a neural network.

I would say that you are correct that AI and machine learning could be considered essentially empirical models.  And yes, often these models use learning data that comes from measurements of actual phenomena.  So in the case of fire spread, for example, you could burn some fires in a laboratory and vary, say, the wind speed and measure the outcome (let’s say you measure the fire’s spread rate).  An empirical model would, in one way or another, correlate the input (wind speed) to the outcome (fire spread rate).  For simple cases like this you could do a curve fit to the data, just like you might learn in an elementary physics or math class (one common method is the “least squares” fit).  You could also use a more sophisticated and complex method like machine learning.  From my limited experience with machine learning, I would say that it really is a kind of very sophisticated “curve fitting” method.  As the phenomena you are trying to model get more complex, with, for example, many different inputs and outputs and complex relations between the variables, more complex methods like machine learning may work better than the simpler methods.
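As an illustration of the “curve fitting” idea (this sketch is mine, not Jason’s, and uses made-up numbers rather than any real lab data), a least-squares fit of spread rate against wind speed might look like this in Python; the data, units, and quadratic form are purely hypothetical:

```python
import numpy as np

# Hypothetical lab measurements: wind speed (m/s) and observed spread rate (m/min)
wind_speed = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
spread_rate = np.array([0.3, 0.6, 1.4, 2.5, 3.9, 5.8])

# Least-squares fit of a quadratic curve: spread_rate ≈ a*wind² + b*wind + c
coeffs = np.polyfit(wind_speed, spread_rate, deg=2)
model = np.poly1d(coeffs)

# The fitted "empirical model" can now predict spread rate at a new wind speed
print(model(2.5))
```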

But machine learning can also use data that is output from another model instead of actual measurements of the real phenomena, which is what we are doing in collaboration with Google.  Instead of the machine learning algorithm using lab or field measurements to learn from, we are feeding it input/output from our relatively slow-running, physically based model of fire spread.  The whole purpose of doing this is to have a predictive model that is fast running.  To give you an idea of the speed-up in computation, some preliminary investigations we have done show that the machine learning model (which learns from our physically based model) can predict fire spread somewhere around 100,000 times faster than the original physically based model.  The huge benefit of this is that it essentially allows us to use our machine learning model to predict fire spread over large landscapes (where tens of thousands or more of these small fire behavior calculations must be done).  It would not be feasible to do such a simulation using the original physically based model.
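To show the surrogate-model idea in miniature (again my sketch, with an invented stand-in for the physics model; none of the functions, inputs, or network settings here come from the actual Fire Lab/Google work), a slow model can generate the training pairs and a small neural network can learn to reproduce them:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_physics_model(wind, slope, fuel_moisture):
    """Stand-in for a slow, physically based spread-rate calculation (invented formula)."""
    return 0.8 * wind**1.5 * (1 + 0.02 * slope) * np.exp(-4.0 * fuel_moisture)

# Generate training data by running the slow model over sampled inputs
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0, 0.02], [15.0, 45.0, 0.30], size=(5000, 3))  # wind, slope, moisture
y = slow_physics_model(X[:, 0], X[:, 1], X[:, 2])

# Train a small neural network as a fast surrogate for the slow model
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
surrogate.fit(X, y)

# The surrogate can now answer many spread-rate queries cheaply,
# e.g. one query per cell of a large landscape grid
predictions = surrogate.predict(X[:10])
```

Once trained, the surrogate is the thing that gets called tens of thousands of times across a landscape, which is where the speed-up pays off.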

************

When I think about different research that uses machine learning, I think it’s safe in the hands of folks like Jason who are experienced with the real-world processes they model.  If you are trying to relate machine learning to older methods that you understand, like, say, linear regression, I found this Forbes article helpful.  Let’s be careful out there!

Pattern verification is an especially powerful way of using machine learning models both to confirm that they are picking up on theoretically suggested signals and, perhaps even more importantly, to understand the biases and nuances of the underlying data. Unrelated variables moving together can reveal a powerful and undiscovered new connection with strong predictive or explanatory power. On the other hand, they could just as easily represent spurious statistical noise or a previously undetected bias in the data.

Bias detection is all the more critical as we deploy machine learning systems in applications with real world impact using datasets we understand little about.

Perhaps the biggest issue with current machine learning trends, however, is our flawed tendency to interpret or describe the patterns captured in models as causative rather than correlations of unknown veracity, accuracy or impact.

One of the most basic tenets of statistics is that correlation does not imply causation. In turn, a signal’s predictive power does not necessarily imply in any way that that signal is actually related to or explains the phenomena being predicted.
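As a quick numerical illustration of that last point (my addition, not the Forbes article’s), two series generated with no connection to each other can still show a sizeable correlation:

```python
import numpy as np

rng = np.random.default_rng(42)
a = np.cumsum(rng.normal(size=500))  # random walk 1
b = np.cumsum(rng.normal(size=500))  # random walk 2, generated independently

# Sample correlation between two unrelated series is often far from zero
r = np.corrcoef(a, b)[0, 1]
print(f"correlation between two unrelated series: {r:.2f}")
```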

