For some mysterious reason, my previous post here on the GAO report on the Forest Service has received a great many hits over the past few weeks. I am interpreting that to mean that at least some readers are interested in the science/policy interface, but who knows? Anyway, this has led me to write a series of posts with my observations on the scientific enterprise. Those who are not interested should feel free to flip past.
So here is something I wrote recently, thanks to Bob Zybach and the folks at ESIPRI, when they asked what I think a good process for vetting research studies used in policy would look like.
Eight Steps to Vet Scientific Information for Policy Fitness
Peer review as currently practiced is probably fine for deciding among proposals or which articles should be published in journals. However, when public funding is at stake for major investments, and people's lives and property are on the line, I think we should up our game and review at a more professional level. I was recently asked my thoughts on how to do that, and here they are. It takes all eight steps, outlined below.
1. Is the research structured to answer the policy question? Often the policy question is nuanced... say, "what should we do to protect homes from wildland fires and protect public and firefighter safety?" This is often where research goes off the rails. For example, historical vegetation ecologists study the past and claim that there was plenty of fire in the past... but that information is actually not particularly relevant to the policy question. To get funding to do the study (and to claim that it's relevant), all they need to do is pass a panel of scientists who basically use the "it sounds plausible to a person unfamiliar with the policy issues" criterion.
It seems obvious, but for scientific information to be policy relevant, policy folks have to be involved in framing the question. Most, if not all, research production systems that I am aware of do not have this step.
2. Did they choose the right disciplines and/or methods to answer the policy question? Clearly a variety of disciplines could make useful contributions, but each also has an inherent conflict of interest if you rely on its practitioners to tell you whether their work is relevant or not.
3. Statistical review by a statistician. If a study uses statistics, they need to be reviewed not by a peer but by a statistician; you can't depend on journals to do this. The Forest Service used to have station statisticians (and still does?) to review proposals, so people worked out their differences in thinking and experimental design before a project was too far down the road. (A sketch of the kind of question a statistician would ask appears after this list.)
4. The quality assurance/quality control (QA/QC) procedures for the equipment and data used need to be documented (and also reviewed by others). If you are unfamiliar with QA/QC applications, you might start with the attached paper (lockhart_2009_forest-policy-and-economics); it has a number of citations and also covers the implications of the Data Quality Act. What is odd is that the NAPAP program led the way for QA/QC, but it's not clear how that has been carried forward to today. It might be interesting to take the top-cited papers in forest ecology or management, or whatever policy-relevant field you choose, and review their QA/QC procedures. (A sketch of a simple data QA/QC screen appears after this list.)
5. Traditional within-discipline peer review.
6. Careful review of the logic path from facts found to conclusions drawn. It is natural for universities and other institutions to hype the importance of research findings. Since people will also use the findings to promote their own policy agendas, and because a paper can be misused even if the scientist is careful (e.g., the 4 Mile Fire), it is all the more important to be specific and careful about interpretations and conclusions. If the findings lead to conclusions outside the current general consensus, it is also best that the authors forthrightly discuss different hypotheses for why their findings are different. Don't just let the press hype "new findings show" as if the previous studies were irrelevant. The authors know more about the work than anyone else, so they should be willing to share what they think about the differences in an upfront way. That's how "science" is supposed to progress: by building on previous work.
7. An important part of professional review for studies involving models should be "what background work did you do?" Did you use sensitivity analysis for your assumptions? Did you compare model projections to the real world? If not, why not? In fact, the relationship of empirical data to your work should be clearly described, since scientific information derives its legitimacy from its predictive value in the real world, not from being a group hug of scientists within a discipline. (A sketch of what this background work might look like appears after this list.)
8. Post-publication requirements: public access to data and open online review. This should be absolutely required for use in important policy discussions.
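To make step 3 concrete, here is a minimal sketch of one question a statistician would ask before a project gets too far down the road: does the planned design have enough statistical power to detect the effect it claims to test? The numbers and the choice of Python's statsmodels package are my own illustration, not part of any prescribed process.

```python
# Hypothetical power check for a two-group field comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed design: a medium effect size (0.5) and 20 plots per treatment.
power = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Power with 20 plots per group: {power:.2f}")  # well under the usual 0.8

# How many plots per group would 80% power actually require?
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Plots per group needed for 80% power: {n_needed:.0f}")
```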
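For step 4, here is an equally minimal sketch of the kind of automated data screen a QA/QC plan might document: flag out-of-range values, missing readings, and duplicate records before any analysis is run. The column names and plausibility bounds are hypothetical.

```python
# Illustrative QA/QC screen for a table of field measurements.
import pandas as pd

# Hypothetical columns and physically plausible bounds for each.
BOUNDS = {"dbh_cm": (0, 300), "temp_c": (-50, 60)}

def qc_report(df: pd.DataFrame) -> dict:
    """Count duplicate rows, missing readings, and out-of-range values."""
    report = {"n_rows": len(df), "n_duplicates": int(df.duplicated().sum())}
    for col, (lo, hi) in BOUNDS.items():
        report[f"{col}_missing"] = int(df[col].isna().sum())
        out = (~df[col].between(lo, hi)) & df[col].notna()
        report[f"{col}_out_of_range"] = int(out.sum())
    return report

# Tiny made-up example: one missing reading, one impossible tree diameter.
data = pd.DataFrame({"dbh_cm": [32.1, None, 5400.0], "temp_c": [18.2, 18.2, -3.0]})
print(qc_report(data))
```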
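And for step 7, a toy illustration of the background work I mean: a one-at-a-time sensitivity analysis of a model's assumptions, plus a comparison of projections against observed values. The "model" and every number here are invented purely to show the mechanics.

```python
# Toy sensitivity analysis and model-vs-observation check.
import numpy as np

def fuel_model(load, moisture, wind):
    # Invented stand-in for a fire-behavior model: predicted flame length (m).
    return 0.3 * load * (1 - moisture) * (1 + 0.5 * wind)

baseline = {"load": 10.0, "moisture": 0.15, "wind": 2.0}

# One-at-a-time sensitivity: perturb each input by +/-20% and record the swing.
for name in baseline:
    outputs = []
    for factor in (0.8, 1.2):
        args = dict(baseline)
        args[name] *= factor
        outputs.append(fuel_model(**args))
    print(f"{name}: output ranges {min(outputs):.2f}-{max(outputs):.2f} m")

# Compare projections to (hypothetical) field observations.
predicted = np.array([fuel_model(10, 0.15, 2), fuel_model(6, 0.25, 1)])
observed = np.array([5.4, 2.1])
rmse = np.sqrt(np.mean((predicted - observed) ** 2))
print(f"RMSE against observations: {rmse:.2f} m")
```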
Do you agree? Do you have additions, deletions, or other comments? Why do you think the bar is currently so low in terms of review?