Practice of Science Friday: Proposed Vectors of Scientific Coolness

The US Temple of Science: Home of Coolness

Previously we’ve talked about how scientific research is funded, and how researchers may not expect to be rewarded for studying something that would help out practitioners, because such work is neither funded nor cool. In some fields, there is not even a direct linkage between practitioners and researchers.

Penalized for Utility
At one time I was asked to be the representative of research users on a panel conducting the research grade evaluation for a Forest Service scientist. He had been doing useful work, funded by people interested in what he was doing, and seemed to be doing a good job. Now remember that Forest Service research is supposed to study useful, mission-oriented things. One of the researchers on the panel said that the scientist under review didn’t deserve an upgrade because he was doing “spray and count” research (he was an entomologist). The implicit assumption in our applied work was that excellence is not determined by the user. I asked, “but you guys funded him to do this; how can you turn around and say he should be working on something else?” In my own line of work, what was cool was to apply some new theoretical approach from a cooler field and see if it worked. I’d also like to point out that I’ve heard the same kind of story from university engineering departments, e.g., “the State folks (it was a state university) asked Jane to investigate how they could solve a problem, but Jane was denied tenure because the kind of research she did wasn’t theoretical enough.”

Beware of Science Tool Fads
Ecologists did this with systems theory, which came from a higher field (math). Sometime in the 90’s, I was on a team reviewing one of the Forest Service Stations. There were some fish geneticists at the Station who had done excellent, ground-breaking work learning where trout go in streams, which is very important to know for a variety of reasons. The Assistant Director told me that that kind of research was passé, and that in the future scientists would only study “systems.” But, I asked, how can you understand a system without understanding anything in it? To be fair, GE trees for forest applications were also a fad.

Big Science and Big Money
In 1988, I was at the Society of American Foresters Convention in Rochester, New York, and had dinner with a Station Director. I was a geneticist with National Forests at the time. He said that his station wouldn’t be doing much work related to the National Forests any more, as now their scientists could compete for climate change bucks and do real science. (thank goodness this wasn’t a western or southern Station Director).

I’m not being hard on the Forest Service here, it happens to be where I worked (and FS scientists themselves are generally quite helpful).

Right now I’m thinking of different vectors of coolness. There is the abstraction vector, which may go back to the class-based distinctions from the 19th century that we talked about last time: the more abstract, the better. But what you study has a coolness vector as well. Elementary particles are better than rocks and organisms, and people (social sciences) are at the bottom. Tools have a coolness vector of their own. Superconducting super colliders: great. Satellite data, computer models, lidar and so on: good. Counting or measuring plants: not so cool. Interviewing people: well, barely science. Feedback from users is another vector. Highest is “what users? We are adding to human knowledge”; somewhere along the line is “national governments should be listening to us, but we’re not asking them what’s important”; and finally, “yes, we determine our research agenda by speaking to the people with problems.”

There is also some kind of feeling that non-interventional work (“studying distributions of plants”) is better than intervening in the lives of plants. Perhaps that is the old class-based vector of coolness transformed into “environmental” vs. “exploitative.”

Given that both of these are improving the environment, which do you think is cooler and why?

Methane Detectives

Collaborative Adaptive Rangeland Management

Practice of Science Friday: The Lack of Coolness of Applied Science- Some History

I think Experimental Forests and Ranges are scientifically way cool.

Everyone has heard of basic and applied sciences. Generally, basic is thought to be better by scientists, and applied, more useful by others. But why is this, and how does it affect the practice of science today? First, let’s return to Peter Medawar.

The hard and fast distinction between pure and applied science is a quaint relic of the days when it was widely and authoritatively believed that axioms and generative ideas of some privileged sciences (the ‘Pure’ Sciences strictly so called) were known with certainty by intuition or revelation, while the Applied Sciences grew out of merely empirical observations concerning ‘matters of fact or existence’. The distinction between pure and applied science persisted in Victorian and Edwardian times as the basis of a class distinction between activities that did or did not become a gentleman (‘Pure Science’ being a genteel occupation and ‘Applied Science’ having disreputable associations with manual work or with trade). This class distinction is now widely believed to have been rather damaging to this country.

Medawar also argued strongly in the 70’s that basic and applied funding decisions needed to be made by scientists, and not by users of science. Here’s a link to a history paper on this. It did not go unnoticed by others that the position that scientists must make all funding decisions about science is a bit self-serving for an enterprise funded by tax dollars. But what does this have to do with us today?

A few months ago, I attended a Forest Service Partners shindig and ran into a person in Forest Service R&D. I asked him how things were going, and he said Congress was on his back about not funding useful research. Of course, this has nothing to do with FS R&D as currently constituted; it has been an ongoing tension for as long as there have been serious levels of public funding for scientific research, and not just in the US. (This is my version of history, and others are invited to add their own perspectives and experiences.)

The Fund for Rural America was an attempt by Congress to convince researchers to do useful things for rural Americans. You can check out the provisions of the 1996 Farm Bill here (note that the same conversation was happening close to 25 years ago):

(C) USE OF GRANT.—
(i) IN GENERAL.—A grant made under this paragraph may be used by a grantee for 1 or more of
the following uses:
(I) Outcome-oriented research at the discovery end of the spectrum to provide breakthrough results.
(II) Exploratory and advanced development and technology with well-identified outcomes.
(III) A national, regional, or multi-State program oriented primarily toward extension programs and education programs demonstrating and supporting the competitiveness of United States agriculture.

Notice the “outcome-oriented.”

Yet, let’s look at the other forces pushing toward basic research. Here’s a 2014 report from the National Academy of Sciences.

USDA has played a key role in supporting extramural research for agriculture since the passage of the Hatch Act in 1887, but its use of competitive funding as a mechanism to support extramural research began more recently (see Figure 3-1). A peer-review competitive grants program was proposed as a means of moving a publicly funded agricultural research portfolio toward the more basic end of the R&D spectrum.2 A 1989 National Research Council report stated that “there is ample justification for increased allocations for the [competitive] grants program to a level that would approximate 20 percent of the USDA’s research budget, at least one half of which would be for basic research related to agriculture” (NRC, 1989, pp. 49–50).

The National Research Council study (part of the National Academies, what I call the Temple of Science) pushed toward more basic research. Why at least 50% of the competitive grants for basic research? I’m sure they had a rationale, but I’m not sure that Congress would agree. So again, we see the Science Establishment going for more basic (and potentially less utility and accountability) research, and Congress later pushing back, asking for public funds to have more accountability (outcome-oriented).

I also had a ringside seat for part of a transition. At one time USDA had (more) formula funds that were given to land grant schools to figure out useful things. For forest science, at best the Dean would get users and others together to help prioritize research via discussions at the state level. Or, at worst, he might give the funds to his buddies or use them as trade for something. But then USDA CSREES (now NIFA) hired a bunch of folks from the National Science Foundation, who came with the idea that only scientists can judge whether research is worth doing, and that the best way to do so is to bring them together for panels in DC. The FRA staff, including me, even got in trouble with the Powers that Were for allowing a user on a panel.

Meanwhile, back at the Forest Service, Forest Service scientists received less funding and were told to look for it elsewhere, which necessarily led them to focus on what the other scientists who run grant programs think is cool.

If we go back to the history paper above, we can see that politicians can see the self-interest of scientists in this, but it has nevertheless been difficult for politicians (even appropriators!) and the public to get a grip on it. Of course, agriculture, health, nutrition, engineering, and different technologies have different communities, funding sources and approaches, so the applied sciences, like science in general, are not one thing.

If something is to be useful, then framing the question (which, as we’ve seen, is extremely important in figuring out which disciplines and approaches are helpful) should definitely include users (in our case, practitioners, land managers and so on). I’m not saying that researchers don’t do this through their own personal commitment; I’m saying that the systems in place do not necessarily support it, and could be changed to support it.

Another tendency is for departments to centralize scientists (e.g. USGS), which can also drive them farther away from research that helps their agency colleagues. IMHO the Forest Service was wise, politically astute, and/or lucky to retain their own research scientists, even if it leads sometimes to intramural drama. A small price to pay for a modicum of independence from the Science Establishment.

Practice of Science Friday: The Abstraction of Science and Who Counts as a Scientist?

For many of us, our natural terrain is not history and philosophy of science. But we need to dip into that world a bit to understand the context for what people mean when they use the term “science” in discussions today.

“Science” itself is an abstraction. Andrew Greeley, the sociologist of religion, priest and fiction writer, puts these words into the mouth of Bishop Blackie Ryan in one of his novels (speaking of individualism): “Actually,” I continued blissfully, “the word is a label, an artifact under which one may subsume a number of often contrasting and sometimes contradictory developments and ideas. Such constructs may be useful for shorthand conversation and perhaps for undergraduate instruction, but they ought not to be reified as if there is some overpowering reality in the outside world that corresponds to them.” Blackie goes on to say that “in my experience all words that end in ‘ism’ or ‘ization’ are also constructs that should not be confused with reality.”

Any abstraction can be defined by different people, at different times, for different ends, or can simply change as ideas drift through time with no discernible cause. Privileged access to forms of communication may lead to other definitions falling by the wayside, or to no open discussion of definitions at all (because generally, who has the patience?), which leads to batting abstractions back and forth instead of deeper dialogue. Note: some of this can be laid at the feet of Plato and Aristotle, but I don’t think we need to go there.

Nevertheless, we can see that there are differences between science in the time of Thomas Aquinas, when theology was the “queen of the sciences,” and today, when some have claimed that physics is now the queen (no doubt agriculture and forest research remain serfs under any classification scheme). But perhaps we should skip ahead and talk about whom we define as a “scientist” today.

Maybe a good place to start would be 1981, when Sir Peter Medawar wrote a book called “Advice to a Young Scientist.” Medawar won the Nobel Prize for his work in immunology and also wrote quite a bit about science and scientists. The 80’s are when the advisors, or the advisors of the advisors, of today’s students were being trained, and are also within the memory of many alive today.

Here are a few quotes from that book:

There is no such thing as a Scientific Mind. Scientists are people of very dissimilar temperaments doing different things in very different ways. Among scientists are collectors, classifiers and compulsive tidiers-up; many are detectives by temperament and many are explorers; some are artists and others artisans. There are poet-scientists and philosopher-scientists and even a few mystics. What sort of mind or temperament can all these people be supposed to have in common? Obligative scientists must be very rare, and most people who are in fact scientists could easily have been something else instead.

It is not easy and will not always be necessary to draw a sharp distinction between “real” research scientists and those who carry out scientific operations apparently by rote. Among those half-million or so practitioners who classified themselves as scientists might easily have been the kind of man employed by any large and well-regulated public swimming pool: the man who checks the hydrogen-ion concentration of the water and keeps an eye on the bacterial and fungal flora. I can almost hear the contemptuous snort with which the pretensions of such a one to be thought a scientist will be dismissed.

But wait; scientist is as scientist does. If the attendant is intelligent and ambitious, he may build upon his school science by trying to bone up a little bacteriology or medical mycology in a public library or at night school, where he will certainly learn that the warmth and wetness that make the swimming pool agreeable to human beings are also conducive to the growth of microorganisms. Conversely, the chlorine that discourages bacteria is equally offensive to human beings; the attendant’s thoughts might easily turn to the problem of how best to keep down the bacteria and the fungi without enormous cost to his employer and without frightening his patrons away. Perhaps he will experiment on a small scale in his evaluation of alternative methods of purification. He will in any case keep a record of the relationship between the density of the population of microorganisms and the number of users of the pool, and experiment with adjusting the concentration of chlorine in accordance with his expectation of the number of his patrons on any particular day. If he does these things, he will be acting as a scientist rather than as a hired hand. The important thing is the inclination to get at the truth of matters as far as he is able and to take the steps that will make it reasonably likely he will do so.

One of the most challenging aspects of increasing total knowledge, in our forest world, is to bring practitioner knowledge and academic knowledge together. Right now we can discuss our own definitions of scientists by paycheck, or by training, or by engaging in structured learning. These distinctions are still contested today and underlie many policy discussions.

Practice of Science Friday: Location, Location, Location of Scientists and Impact on Science

Requiescat in pace, Bend Silviculture Lab

If you have been around the research business long enough, you have seen many research topics, administrative inclinations, and locations come and go. In the early 80’s, I worked for the National Forests in central Oregon.

Most of the research was done, and most of the scientists located, on the wet West Side (WWS) of Oregon. It made sense at the time, as those folks, private and public, were engaged in intensive timber management and wanted to know how to do it right. In addition, the land grant university with responsibility for agriculture and forestry (OSU) was in Corvallis. But that left us with a relative handful of scientists and projects everywhere from Bend to Lakeview to Pendleton. Now, OSU didn’t have an SRUS (a science regional utility standard) requiring their research portfolio to be relevant to all parts of the state equally. The east side was a kind of scientific stepchild. At least SW Oregon had the FIR Program, which produced much information helpful to practitioners there.

I’ll tell a couple of stories about “West-side-itis.” At one time, the Ochoco, Winema, Fremont and Deschutes silviculture folks had a training session with (Drs.) Chad Oliver and Bruce Larsen for a few days at Pringle Falls. I think both may have been at UW (Seattle) at the time. They were both fantastic teachers, especially as a duo: Bruce rapid-fire, Chad laid-back and Southern, which made long sessions entertaining as well as informative. They were talking about trees competing for light, and someone asked, “how do the models handle it if it is competition for water, not light?” It took a while for them to answer, because in their WWS world, it wasn’t really an important question. I hope you can see my point. People tend to study and understand what’s around them; in Oregon, the scientific topo lines would have been highest in Corvallis (note that the FS had a big lab there, which made sense, so their scientists could interact with OSU, EPA and other scientists) and gone to zero somewhere between Klamath Falls and Hart Mountain.

One more story. In the early 80’s, an OSU economics prof came out to talk to us, and we met with the Weyerhaeuser folks who also had land in our area. The OSU fellow said that to use the latest science we needed to increase the size of our clearcuts to be more like the Weyco folks’. This was, of course, long before (Dr.) Jerry Franklin came up with “big messy clearcuts”: first there were the clearcuts that people didn’t like, which were then made smaller, and later Jerry decided big messy ones were better. If we didn’t do it, of course, we weren’t using “the latest science,” which had been inspired, planned, designed, carried out, and analyzed without our input, and without consideration of (major) differences in the environment. But if you work in an area that folks don’t come out and study, doing things that science funders don’t find interesting, can you then be criticized for not having scientific evidence for what you’re doing? Well... yes.

Of course, research managers have been challenged by declining budgets. But when you think of the closure of the Bend Silviculture Lab or the Macon Fire Sciences Lab, a person has to wonder what geographical diversity and knowledge was lost. Diversity is a good thing, right? At least today, that’s a value. Back in those days, you needed a building, upkeep, and computer and staff support to have people somewhere. Maybe, especially in today’s virtual society, there’s an argument for locating federal researchers far away from universities and closer to the people working directly with the creatures, people, plants, landscapes or situations they study. Not the least advantage, in my mind, would be interacting more with their practitioner colleagues and developing a better understanding of what they do and why they do it.

For an excellent history of ponderosa research on the east side of Oregon, check out this paper by Les Joslin: Ponderosa Promise: A History of US Forest Service Research in Central Oregon.

Practice of Science Friday: II. Disciplines Over Time: Why I Miss (Traditional) Forest Economists

You may remember that some weeks ago I asked the question, “why doesn’t California export pellets, since they have much material to remove in fuels reduction, and the BC folks seem to be doing it and making money?” Well, the first thing I did was to look at the faculty at the University of California, Berkeley, where I noticed a lack of people whose CVs would indicate that they would know. I followed the path up the coast to OSU, where the key people were once again retired. The best information I received was from the FS PNW Station (current employees) and UBC (another retired fellow). There was an active person at UW who didn’t return my email. Now, I’m not saying that a lack of forest economists leads to a lack of information about economic opportunities, but the question was in my mind. Who decided not to hire people in that discipline, and why? What disciplines did they pick instead? And do they ask users of scientific information (research stakeholders) when they make those decisions?

Nevertheless, there are people working assiduously on these topics, so here’s a shout-out to them. Here is a link to presentations from the Western Forest Economists Meeting in June of this year.

But here’s what I remember about some of them, admittedly through the rose-colored filter of nostalgia. I’d be interested in others’ thoughts.

Disagreement is Good and Welcomed

Two economist jokes illustrate that point: “Ronald Reagan used to say that if Trivial Pursuit were designed by economists, it would have 100 questions and 3,000 answers,” and “President Truman once said he wanted an economic adviser who was one-handed. Why? Because normally the economists giving him advice would say, ‘On the one hand... and on the other...’” (For more economist jokes, see this site.)

Most of the economists I ran into held their opinions lightly, and gave them as opinions. Perhaps economics is a discipline that engenders humility, because projections are often... well... wrong, and obviously so (think of climate models: they may be wrong, but only experts know). One of (Dr. Tom) Mills’ favorite expressions, when I worked with him, was that “reasonable people could disagree” about a topic we were discussing. I don’t think I hear that expression much anymore. Perhaps that’s because more people in forest policy come from a legal background, where people may be trained to be less overtly humble about their opinions (and to use strong language about why people who disagree are wrong).

Economists As Policy Experts: Just The Facts or Assumptions

Second, at the time, forest policy was mostly the domain of forest economists. I remember an apocryphal story, also about (Dr. Tom) Mills. Back in the day, there was the idea that if folks generated a large amount of information in long-term analysis projects like ICBEMP, the Northwest Forest Plan and so on, people would agree on the science, and then, building on that plus monitoring key things, controversy and litigation about land management actions would be reduced. As I said, I heard this story and can’t say that it’s true: supposedly, in one of those lengthy summary documents, Mills crossed out all the “shoulds,” with the argument that scientists don’t make normative claims. He was definitely swimming against the tide in the Northwest at the time.

I also remember (Dr. Richard) Haynes saying that it was interesting how different disciplines derived their knowledge claims. For example, at one point he mentioned to me that some of the wildlife biologists he was working with (on ICBEMP?) did not use sensitivity analysis on their assumptions. Sensitivity analysis is an approach for when you have uncertainty: you make assumptions, and you want to know how sensitive the answer is to each assumption. It seems like a good thing to do, but with no economists around, who will be asking those kinds of cross-disciplinary questions?
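Haynes’ point is simple enough to sketch in a few lines of code. Here is a minimal, hypothetical example of one-at-a-time sensitivity analysis; the “model” and every number in it are invented for illustration, not drawn from any real wildlife or economic analysis. You perturb each assumption and watch how much the answer moves:

```python
# Toy one-at-a-time sensitivity analysis. The "model" below is a
# made-up score depending on three assumed parameters; the point is
# the procedure, not the model.

def habitat_score(survival=0.8, dispersal_km=5.0, patch_density=0.3):
    # Hypothetical formula, purely for illustration.
    return survival * patch_density * (1 + dispersal_km / 10.0)

assumptions = {"survival": 0.8, "dispersal_km": 5.0, "patch_density": 0.3}
baseline = habitat_score(**assumptions)

# Perturb each assumption by +/-20% and record the swing in the output.
for name, value in assumptions.items():
    lo = habitat_score(**{**assumptions, name: value * 0.8})
    hi = habitat_score(**{**assumptions, name: value * 1.2})
    swing = (hi - lo) / baseline * 100
    print(f"{name:>14}: output swings {swing:+.1f}% for a +/-20% change")
```

Running this shows that the score responds fully (a 40% swing) to survival and patch density but much less to dispersal distance, so you would know which assumptions deserve the most scrutiny. The same habit applies whether the model is a spreadsheet or a simulation.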

It’s a sociology of science question as to who or what entities select research topics and approaches, which ultimately determine “what science says.” Those entities seem to be located in funding agencies and in university hiring (no doubt influenced by what funding is available).

Sharon’s Guide to Understanding Scientific Papers: I. Querying the Abstract

In the spirit of teaching a person to fish, I’ll explain my own process for reviewing scientific papers. Because I’ve been in the forest biology/forest ecology biz for almost 50 years, it may be easier for me. But I’m hoping that curious TSW readers will be able to adapt these steps for your own use, and I’ll give you some hints to make things easier. Along the way, you’ll also find out what peer reviewers may look at, and what they don’t or can’t. I hope others will share their own methods and shortcuts.

We’ll start with the paper Jon posted here:

1. Get a copy of the paper. Some are open-access (yay!). The next step is to go to Google Scholar and look it up; often you will find a free copy there. The last step is to write the corresponding author (there’s usually an envelope and an email address if you hover over the list of authors) and ask for a reprint. Back in the day, we would send each other postcards and slip copies in the surface mail; this is pretty much the modern equivalent of that process. So far, no one has turned me down or failed to reply to an email. That’s how I got the copy I am posting, so thanks to author Thom. The paper is Thom et al. (2019) – The climate sensitivity of carbon, timber, and species richness co-varies with forest age in boreal-temperate North America.

2. Look at the abstract with an eye to data sources, methods and conclusions. What are they measuring? (One of the most difficult things is wading through the terminology, but it has to be done.)

We focused on a number of ESB indicators to (a) analyze associations among carbon storage, timber growth rate, and species richness along a forest development gradient; (b) test the sensitivity of these associations to climatic changes; and (c) identify hotspots of climate sensitivity across the boreal–temperate forests of eastern North America.

What is “ESB”? It’s some combination of ecosystem services and biodiversity. There are many possible indicators of those (e.g., genetic diversity of amphibians, species diversity of insects, and so on for biodiversity). So to relate what they measured to what we know, we’ll have to dive deeper into the methods section. We may have our own experience with carbon measurements, but not so much with species richness.

The data used were FIA and other plot information, and the authors used modeling to test the sensitivity to climate change. By now, you may be curious and ask, “how can you tell what climate change will do? How can you tell what aspects will be sensitive?” That, again, will have to wait for the methods section.

Next, I look for the conclusions in the abstract:

While regions with a currently low combined ESB performance benefited from climate change, regions with a high ESB performance were particularly vulnerable to climate change. In particular, climate sensitivity was highest east and southeast of the Great Lakes, signaling potential priority areas for adaptive management. Our findings suggest that strategies aimed at enhancing the representation of older forest conditions at landscape scales will help sustain ESB in a changing world.

Then I try to paraphrase it in my own words. I came up with: “if you combine indicators, the regions with low marks now get higher marks after climate change, and regions with high marks now will go down; that would be east and SE of the Great Lakes.” Natural questions would be, “do all indicators go the same way?” and “how sensitive are these findings to the way you combine them and which ones you include?”

And how does the above relate to the “old forest conditions” that strategies should enhance?

3. Write down your questions. This is particularly helpful if you can’t get back to this for a day or so. In this case, my questions would be:
a) what ESB indicators did they use? It sounds like carbon, timber and species richness, but it could be others as well.
b) how did they figure out what changes would occur due to climate change?
c) how did they figure out whether an indicator was sensitive to climate change?
d) do all indicators change in the same direction and/or how sensitive are the findings to the way they are combined and which ones are in and out?
e) how does all this relate to “old forest conditions?”
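Question (d) is worth dwelling on, because combined indices can hide a lot. Here is a toy illustration (the regions, indicator values, and weights are all invented, and have nothing to do with the actual paper): the same two regions can swap rank depending on how the indicators are weighted.

```python
# Toy composite index: the top-ranked region flips depending on how
# the (hypothetical) indicators are weighted.

regions = {
    "Region A": {"carbon": 0.9, "timber": 0.2, "richness": 0.8},
    "Region B": {"carbon": 0.5, "timber": 0.9, "richness": 0.4},
}

def composite(scores, weights):
    # Weighted sum of indicator scores.
    return sum(scores[k] * wt for k, wt in weights.items())

equal = {"carbon": 1 / 3, "timber": 1 / 3, "richness": 1 / 3}
timber_heavy = {"carbon": 0.2, "timber": 0.6, "richness": 0.2}

for label, weights in [("equal weights", equal), ("timber-heavy", timber_heavy)]:
    ranked = sorted(regions, key=lambda r: composite(regions[r], weights),
                    reverse=True)
    print(f"{label}: top region is {ranked[0]}")
```

With equal weights, Region A comes out on top; weight timber more heavily and Region B wins. That is exactly why it matters how a paper builds its combined index, and whether the authors report the sensitivity of their rankings to those choices.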

Next steps will be in my next post.

Let’s Count the USDA Research Agencies Affected By The Disclaimer Kerfuffle

USDA APHIS Animal Health Laboratory Network


One of the benefits of being an Old Person who has lived through many administrations is the ability to see (or at least introduce the idea of) some outrage-provoking action decried by some media outlets as being a tempest in a teapot, or business as usual. Many thanks to Susan Jane Brown for finding this letter from USDA and including it in her comment here. The letter says that agencies with existing policies don’t have to follow this policy. So the next logical question is, “what agencies have their own policies?”

Let’s look at the WaPo story:

“Not every study published by a USDA scientist is required to have this disclaimer. Some research agencies at USDA, including the Agricultural Research Service, the Economic Research Service, the National Agricultural Statistics Service and the Forest Service have “agency-specific policies” that determine when a disclaimer is appropriate, said William Trenkle, the USDA scientific integrity officer.”

Now folks may wonder, as I did, having worked for the Research, Education and Economics mission area at USDA, what scientific agencies are left? Just NIFA (formerly CSREES), which doesn’t hire scientists but grants funding to other scientists... so is that even relevant, since those studies would not be “published by a USDA scientist”? All very confusing.

But Trenkle also says:

Others, such as the National Institute of Food and Agriculture and the Animal and Plant Health Inspection Service, have been encouraged to follow the new guidelines.

By my count, that leaves APHIS hanging out by itself as the center of this controversy. Poor APHIS. Most of us think of it as mostly a regulatory agency anyway, so this whole drama could actually be good press for the APHIS research program. After some hunting around, I found many interesting things they do at this site. I could be wrong, but it doesn’t look like the kind of research with big partisan payoffs one way or another that is likely to be changed by political appointees. But it is good to know that there are scientists working away on extremely important but not particularly trendy research.

Let’s Discuss the Rebuttal to the Peery et al. Agenda-Driven Science Paper

Many thanks to Derek for posting this link to the LBH group’s rebuttal to the paper. I think this is a great thing to discuss, as it gives us insight into the science process as practiced in real life. Many folks who read this blog have not experienced it directly.

Here’s what I agree with: everyone has an agenda, if only to do research that can be funded and is helpful to people. Having different views and proclivities is part of being human. It’s only when you make the case that there is a thing called “Science” with a unique authority and objectivity, which therefore deserves a favored place at the decision-making table, that the lived, diverse, conflicted reality of the scientific enterprise becomes an issue. Frankly, no one believes that scientists are unbiased and objective, except perhaps on topics that don’t have value implications. Remember the old days, when people did research on whether bare-root or plug seedlings had better survival?

In the rebuttal, the authors state:

 Peery et al.’s personal attacks have no place in science. Like many other scientists, we believe that National Forest management should be motivated and driven by ecological science and conservation biology principles, not timber commodity production imperatives and monetary incentives.

I think that this is a great quote because it lays out exactly what they believe, and it turns out that their findings are in line with those beliefs. I think I agree about “personal attacks,” and we might agree on what counts as personal (conduct) versus research critique.

But imagine if you got another group of scientists together who said:

“Like many other scientists, we believe that National Forest management should be motivated and driven by Congressional statutes, which include concepts of multiple use and environmental review and species protection. We believe that the role of science and scientists is to provide insight into the trade-offs that may occur, understand the social, economic, physical, and ecological impacts of activities, and help develop ways to reduce negative impacts.”

If you didn’t understand the details of their research, which group would you have a tendency to trust?

Peery et al. attack us personally and question our motives, citing our criticism and concerns regarding the USDA Forest Service’s commercial logging program on federal public lands. It is troubling to see Peery et al. personally attacking independent scientists, in the pages of an Ecological Society of America journal, for seeking public access to government-funded scientific data and for raising questions about the scientific integrity of decisions to log public lands. Such personal attacks do not belong in scientific discourse.

But decisions about logging public lands don’t have to do with “scientific integrity”.. because they are not scientific decisions. Again there seems to be a tendency to think that “science” should determine, rather than inform, policy. It simply can’t, not only for the political-science reason that we aren’t a technocracy and voting still counts, but for the pragmatic reason that scientists disagree. Nevertheless, I do agree that personal attacks don’t belong in scientific journals.

It could be that the Gutiérrez-Peery lab may suffer from funding bias, also known as sponsorship bias, funding outcome bias, funding publication bias, or funding effect, referring to the tendency of a scientific study to support the interests of the study’s financial sponsor (Krimsky 2006). As RJ Gutiérrez wrote when he severed our data-sharing agreement, “We have signed a ‘neutrality agreement’ with the MOU partners associated with the Sierra Nevada Adaptive Management Project. Essentially, this means that use of Eldorado and SNAMP data in a way that could be perceived as conflicting with USFS management or antagonistic to them would be perceived as a violation of the agreement.” (Supporting Information ‘RGutierrez1’ and ‘USFS&UWisc_contract’). Peery et al. have a long-term financial relationship with the USDA Forest Service—an agency that sells timber from public lands to private logging corporations and retains revenue from such sales for its budget. In light of the Forest Service’s financial interest in commercial logging on public lands, and the fact that the spotted owl has been a major thorn in the side of the Forest Service’s commercial logging program, candid disclosure of conflicts of interest from spotted owl scientists employed by the Forest Service, including any conditions or constraints associated with that employment, are particularly important.

I am interested in the data-sharing agreement; I had never heard of such a thing. Perhaps others know more. But the idea that FS-employed and FS-funded scientists come to the conclusions they do because of their source of funding sounds a bit like an attack, not only on the owl folks but on pretty much everyone who accepts FS bucks for research.

Note: I grew up professionally in the Pacific Northwest with FS scientists Jerry Franklin and Jack Ward Thomas (who weren’t toeing the timber management line in the 80’s), and having worked for Forest Service R&D for years, my experience is that the timber production side of the FS and its scientists mostly maintain a pretty good firewall. I’d be interested in others’ observations and experiences.

Behind the Science Curtain with One Carbon Science Study: III. Journals, Space and the Streetlight Effect
Structural Problem: Using the Power of Hyperlinks in Scientific Publishing.. or Not.

For those of us who grew up with paper journals, journal article size was circumscribed by paper publishing. Now publishing in journals is online, but complex datasets may well require more “room” to get at how things are calculated and which numbers are used from which data sets. This seems especially true for some kinds of studies that use a variety of other data sets.  Without that clarity, how can reviewers provide adequate peer review?  As Todd Morgan says:

In some ways, I think journal articles can be a really poor way to communicate some of this information/science.  One key reason is space & word count limits. These limits really restrict authors’ abilities to clearly describe their data sources & methods in detail – especially when working with multiple data sets from different sources and/or multiple methods.  And so much of this science related to carbon uses gobs of data from various sources, originally designed to measure different things and then mashes those data up with a bunch of mathematical relationships from other sources.
For example, the Smith et al. 2006 source cited in the CBM article you brought to my attention is a 200-page document with all sorts of tables from other sources and different tables to be used for different data sets when calculating carbon for HWP. And it sources information from several other large documents with data compiled from yet other sources. From the methods presented in the CBM article, I’m not exactly sure which tables & methods the authors used from Smith et al. and I don’t know exactly what TPO data they used, how they included fuelwood, or why they added mill residue…

As Morgan and I were discussing this via email, Chris Woodall, a Forest Service scientist, and one of the authors of the study added:

“This paper was an attempt to bring together very spatially explicit remotely sensed information (a central part of the NASA carbon monitoring systems grant) with information regarding the status and fate of forest carbon, whether ecosystem or HWP. We encountered serious hurdles trying to attribute gross carbon changes to disturbance agents, whether fire, wind, or logging. So much so that deriving net change from gross really eluded us, resulting in our paper only getting into CBM as opposed to much higher tier journals. The issue that took the most work was trying to join TPO data (often at the combo-county level) with gridded data, which led us to developing a host of lookup tables to carry our C calculations through.”
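To make the lookup-table issue concrete, here is a minimal sketch of the kind of join Woodall describes: mapping each grid cell to a “combo county” so county-level TPO values can be attached to gridded carbon estimates. All names, codes, and numbers below are hypothetical, invented purely for illustration; they are not from the study.

```python
# Hypothetical lookup table: grid cell id -> combo-county code
cell_to_combo = {
    "cell_001": "CA_combo_07",
    "cell_002": "CA_combo_07",
    "cell_003": "NV_combo_02",
}

# Hypothetical TPO removals (tonnes C/yr) reported at the combo-county level
tpo_by_combo = {
    "CA_combo_07": 1250.0,
    "NV_combo_02": 80.0,
}

# Hypothetical share of each combo county's total that falls in a given cell,
# used to apportion county totals down to the grid
cell_area_share = {"cell_001": 0.6, "cell_002": 0.4, "cell_003": 1.0}

# The join: apportion combo-county TPO to cells via the lookup table
tpo_by_cell = {
    cell: tpo_by_combo[combo] * cell_area_share[cell]
    for cell, combo in cell_to_combo.items()
}

for cell, value in sorted(tpo_by_cell.items()):
    print(f"{cell}: {value:.1f} t C/yr")
```

Even in this toy version you can see where the judgment calls hide: the area shares, the cell-to-county mapping, and the assumption that county totals apportion cleanly by area are all choices a reviewer would need to see spelled out.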

Woodall brings up two structural problems:

Structural Problem: NASA Funding and the Scientific Equivalent of the Streetlight Effect.
https://en.wikipedia.org/wiki/Streetlight_effect

NASA has money for carbon monitoring based on remote sensing. Therefore, folks will use remote sensing for carbon monitoring and try to link it to other things that aren’t necessarily measured well by remote sensing. Would the approach have been the same if Agency X had funded proposals to “do the best carbon monitoring possible” and given lots of money to collect new data specifically to answer carbon questions?

Structural Problem: Not all journals are created equal.
But the public and policymakers don’t have a phone app where you type in the journal and it spits out a ranking. And what would you do with that information anyway? Also, some folks have had trouble publishing in the highest-ranked journals (e.g., Nature and Science) when their conclusions don’t fit certain worldviews, not necessarily because their methods were incorrect or their results shaky. So knowing the journal can help you determine how strong the evidence is.. or not. But clearly Chris points out that in this case, the research only made it into a lower-tier journal. Does that mean anything to policymakers? Should it?

Behind the Science Curtain with One Carbon Science Study: II. A Simple Suggestion For Improving Peer Review

I found this in a paper about making bio jet fuel. I’m sure that’s not the original source.
Thanks to Matthew for pointing out that all kinds of scientists have conscious and unconscious agendas.  That’s why open QA/QC, collaborative design, public peer review, and other techniques can be so valuable for increasing peoples’ confidence in scientific products.  I think everyone agrees on that.  To that end, let’s talk more about the Harris et al. paper. Remember we are looking at it because I looked at Figure 3 and wondered about carbon emissions due to timber harvesting in Nevada and Southern California, places I thought I knew did not have much going on in the timber harvest biz.

Todd Morgan, whose data were used in the study (without his being asked), raises some questions about double-counting emissions from mill residues. Now that’s a pretty technical thing. I can’t tell who is right, and I don’t know the field well enough to know if there is a reviewer out there who a) knows enough to tell the difference, and b) does not have skin in the game (personality conflicts with the authors, and so on), so that I could ask that (unbiased) person for their point of view. To know that, you really have to know the folks in a discipline. In many cases, it’s hard to find people like this willing to do reviews (not paid extra, some credit), let alone write a piece for The Smokey Wire (not paid extra, potentially negative credit).

Nevertheless, there is one very simple thing journal editors could require that would have an enormous positive influence IMHO. If a paper uses a variety of datasets, I would require a letter from each data source a) acknowledging that they were asked about the use of their data, and b) stating whether, in their view, the data were used appropriately. These write-ups would then be shared with all reviewers.

If I put on my reviewer hat, I would say “Hey, we can’t do that! We’d spend all our time waiting for people to write up their stuff, and we can’t force them to do that, and besides, it’s unlikely I’ll be able to tell who was right if they disagree.”

I don’t think most non-scientists understand how difficult it is to do quality peer review, nor exactly what peer reviewers do. They don’t (can’t) check data sets or calculations. They mostly check whether the right techniques (appear to have been) used, whether the findings are plausible, whether the right citations are included (the reviewers’ own work ;)), and whether the conclusions follow. I think the whole biz was easier when I was a young scientist: perhaps disciplines were easier to understand, scientists perhaps tried harder to be objective, there was less emphasis on multidisciplinary big-data manipulation studies, and findings were easier to ground-truth by observation in the area studied.

You get what you pay for, and no one pays for peer review. I acknowledge that many scientists work their tushes off with little acknowledgement or support in this area, but if anyone really cared about good scientific products, we (society) could do a great deal more to support the quality process.

Here are Morgan’s specific concerns about mill residues in the paper:

“I’m skeptical of the methods that have led to such high proportions of carbon loss attributable to harvest (Table 5) – particularly in several western states.
One major concern I have is how/why mill residues seem to be counted twice. My understanding of the Smith et al [31] publication is that mill residue (e.g., sawdust, etc.) from processing timber products (e.g., sawlogs) into primary products (e.g., lumber) is accounted for in the sequestered vs. emitted fractions for each product. For example, we see that about 40% of the sawlog volume (in Figure 6) is emitted, 40% is landfill, and 20% is in-use. So, adding mill residue is essentially double-counting the carbon emissions and sequestration associated with the wood harvested for products. Since the authors assume dispositions for mill residues which show 80-90% emitted (Figure 6), it looks to me like double counting of mill residue is causing much higher amounts of emitted carbon (loss) from harvesting.”
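Morgan’s worry is arithmetic at heart, so it can be illustrated with made-up numbers. This sketch assumes, purely hypothetically, that 40% of sawlog carbon ends up as mill residue; the disposition fractions loosely follow the figures he quotes (sawlogs ~40% emitted, residue ~85% emitted). None of these numbers are from the paper.

```python
sawlog_carbon = 100.0   # tonnes C harvested as sawlogs (made-up)
residue_share = 0.40    # hypothetical fraction of sawlog C that becomes mill residue

# Counted once: apply the sawlog disposition fraction, which (per Morgan's
# reading of Smith et al.) already folds mill-residue fate into the total.
emitted_once = sawlog_carbon * 0.40

# The suspected error: apply the sawlog fraction to the whole sawlog AND
# separately add mill residue at ~85% emitted -- counting residue twice.
emitted_twice = sawlog_carbon * 0.40 + sawlog_carbon * residue_share * 0.85

print(f"emitted, counted once:   {emitted_once:.0f} t C")
print(f"emitted, residue added:  {emitted_twice:.0f} t C")
```

Under these invented numbers, the double count inflates harvest emissions from 40 to 74 tonnes, which is the flavor of overstatement Morgan suspects in Table 5.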

If you had reviewed the paper, wouldn’t you want to see the authors’ answer to this question? And perhaps involve Smith et al. in the discussion?