1) If you haven’t seen it, please go back to my post yesterday on Managed Fire. Perhaps because of the Supreme Court case, it doesn’t seem to have gotten much attention. And we haven’t heard from other parts of the country whether the Arizona approach is similar to how other regions go about managed fire (MF).
But on to science…
It seems to me that this is an opportunity to get folks together to get a handle on better managing the USG R&D portfolio. I’m talking about serious change, so it would have to be some kind of bipartisan institution, maybe begun in this Admin, dedicated to making better use of the R&D budget, making decisions and prioritization more transparent, and ensuring that research intended to be applied can actually be applied. Maybe that’s too big an ask, but I would at least like to see us start down that path. In this effort, little science (us) would have a voice along with Big Science (say, the usual OSTP suspects).
2) People are very concerned about One Wildfire Agency in terms of losing local capacity. But am I the only one concerned about centralizing fire research in the Intelligence Center (or will it be centralized)? I’m concerned about losing FS research (the kind of useful work you see in the screenshot above) as larger, more aggressive research agencies may be empowered to set research agendas: more satellites and high tech (nothing wrong with high tech), less emphasis on communities and human firefighters, and the idea that you can understand something without understanding any of the moving parts or mechanisms. Maybe it’s my time in DC, trying to hoover up any scraps from the Big Science funding table, but I think that very careful structuring of how R&D is designed, prioritized, and funded is important.
3) The previous Admin had an open-access effort; I think that’s really important to keep up or expand. Fortunately, the linked article on paywalls is not paywalled.
In Washington, D.C., these shifts prompted both Republicans and Democrats to urge the federal government to revise its access policies. In 2013, then-President Barack Obama attempted to strike a compromise—via the 1-year embargo rule—between publishers and open-access advocates.
But many—including Biden, then Obama’s vice president—were not happy with that deal. In a 2016 speech, for example, Biden noted, “The taxpayers fund $5 billion a year in cancer research, but once it’s published, nearly all of that sits behind [pay]walls. Tell me how this is moving the [scientific] process along more rapidly.”
4) In addition to some kind of cross-checking that different agencies (or units within agencies) aren’t doing duplicative research, I’d like to see a clearer check on research panels for “research that is supposed to be helpful to policy makers or managers or resource professionals.” That would involve a check on whether something is already known, and whether more knowledge is actually helpful to anyone. Perhaps reviewers are too polite to say “I think we already know this” or “no one cares about that plant and its future under modeled climate change.” Often I found in the FS that folks in NFS, for relational reasons, would not want to question what R&D folks were doing: “why annoy them?” and “it’s their funding.” The same can be true for university researchers (as I was advised, “don’t question them, they’ll write even worse things about us”). So my view is that to get an unbiased look you would need to select people out of the relational line of fire.
5) I think perhaps only in JFSP is there a direct link between the questions of managers and resource professionals and what gets prioritized, designed, and funded. For example, I think we could know which parts of the country do managed fire, how they do it, what communities think, and so on. We have excellent literature on PF, MT, and community views. Perhaps a group could have an annual review of what is being studied (broader than, but inclusive of, USG funding) and what gaps there are, and move from there to actually funding work determined to be useful.
6) I don’t mean to pick on these people; certainly they are the right people: FS, USGS, and Forest folks. From the abstract (the FS has an open version on their website here):
However, lack of natural ponderosa pine regeneration in undisturbed forests (i.e., no occurrence of stand-replacing events) may require management treatments to promote regeneration.
In addition to effects on near-surface temperature and soil moisture, management conducive to natural regeneration was associated with the density of competing tree species, understory litter and debris cover, and adult tree cone production.
**********
Our results show that existing forest management treatments have the potential to promote natural ponderosa pine regeneration in the SWUS, but will require assessment and modification through time to remain effective.
**********
I think there are questions around “is climate change impacting the things we already know about forests?” and maybe this study was an example of that. Another way to prioritize would be “hey, these things we’ve noticed are different and they seem problematic; what can we do about them?” Of course, the scientists were supported by the Climate Adaptation Science Center; but isn’t every management or non-management practice now considered “climate adaptation”? When I was working, it seemed like there was a burgeoning business in creating new centers with new administrative structures that seemed duplicative, at least to us users. When we asked questions about that, I remember USGS folks saying “it’s too late, Congress already funded them; you get to help us decide where to put them.” USGS has Adaptation Science Centers, USDA has Climate Hubs. Is anyone checking for duplication or gaps?
Here’s a study on Forest Ops in the Northern Forest Climate Hub.
7) I thought this was an interesting idea: Let Unfunded Grant Applications See the Light of Day
Something very similar happened with the National Science Foundation (NSF). In 2023, the Academies called out NSF for not meeting its legal obligation to share data: “Granting access to data on all applicants for program assessment purposes, as is called for in the legislation mandating this review, and establishing processes that would allow for structured evaluation of policies and procedures would help NSF understand the effectiveness of its initiatives and how its programs could be improved.”
Despite expectations from policymakers and statutes that data from science agencies be available for analysis, both of us—longtime open science advocates who have worked in various government and industry positions and who are writing only in our personal capacities and not on behalf of anyone else—have heard top researchers complain that they can’t get access to information on unfunded proposals, or, on the rare occasions when they do, access is conditional on allowing the agency to veto any publications using the data.
Without knowing what proposals go unfunded, there is no way to know whether agencies are supporting a wide range of ideas or favoring a narrow theory. Are “high-risk, high reward” proposals getting a chance? Have hard-won changes in grant policies actually helped early-career researchers? Do the questions researchers ask change in response to demands from Congress or calls from citizen groups? These questions seem both valuable and straightforward. Yet metaresearchers (those who research how research is done) are unable to address such topics with any certainty. The public cannot know, for example, how many NIH grant applications come from historically Black colleges or universities, or how many researchers propose to study gain-of-function in viral genomes, without knowing what is included in all R&D grant applications, funded and not.
…
Researchers could make more efficient progress by using past grant proposals to refine their approach and so avoid wasting months or even years on unrealistic proposals, particularly if reviewers’ comments and scores are also shared. They could learn whether they are pursuing projects already deemed unpromising by funding agencies, which could prompt them to try other areas, or to have a pre-application conversation with a program officer to gain a better understanding of an agency’s interest. Researchers with similar interests would be able to discover each other’s work and potentially join forces, leading to stronger proposals, more impactful research, and collaborations formed much earlier than those enabled by publications and conference presentations.
Though agencies can look across their own applications for insights, access to a complete picture of the research landscape across the federal government would allow funding agencies to make more informed decisions about funding priorities. As articulated in the Foundations for Evidence-Based Policymaking Act signed into law by President Trump in 2019, when agencies can share and access information across the government, they are able to more efficiently identify research trends, understand the derivative impacts of their own work, and craft decisions and policies informed by evidence. They can also build more effective, cross-agency initiatives, such as NSF’s Smart Health funding opportunity, designed to support cross-agency efforts to incorporate information science in health care.
8) Indirect cost caps. There has been much concern about NIH reducing its caps. A reasonable person could wonder “why shouldn’t all USG research have the same caps?” I’ve never really heard this kind of thing discussed. Certainly it would seem easier for grant administrators and PIs to deal with. I ran across this guide to indirect costs from my old agency, NIFA (that is one agency in one department, USDA!).
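One wrinkle that makes “the same caps” trickier than it sounds: agencies state caps against different bases, so the same-sounding percentage can mean very different dollars. Here’s a back-of-the-envelope sketch (the numbers are purely illustrative, not any agency’s actual rates):

Let D be the direct costs and r the indirect rate applied to D, so the total award is T = D(1 + r) and the indirect dollars are r × D. A cap c on the indirect share of the total award means r × D ≤ c × T, which works out to r ≤ c / (1 − c). So a hypothetical cap of 30% of the total award allows r up to 0.30/0.70 ≈ 42.9% of direct costs, nearly three times a hypothetical 15%-of-direct-costs cap, even though “30 versus 15” sounds like a factor of two. Any apples-to-apples comparison across agencies would have to start by converting everyone’s caps onto the same base.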