Thanks to Matthew for pointing out that all kinds of scientists have conscious and unconscious agendas. That’s why open QA/QC, collaborative design, public peer review, and other techniques can be so valuable for increasing people’s confidence in scientific products. I think everyone agrees on that. To that end, let’s talk more about the Harris et al. paper. Remember, we are looking at it because I looked at Figure 3 and wondered about carbon emissions due to timber harvesting in Nevada and Southern California, places I thought I knew did not have much going on in the timber harvest biz.
Todd Morgan, whose data were used in the study (without his being asked), raises some questions about double-counting emissions from mill residues. Now that’s a pretty technical thing. I can’t tell who is right, and I don’t know the field well enough to know whether there is a reviewer out there who a) knows enough to tell the difference and b) does not have skin in the game (personality conflicts with the authors, and so on), whom I could ask for an unbiased point of view. To know that, you really have to know the folks in a discipline. In many cases, it’s hard to find such people willing to do reviews (not paid extra, some credit), let alone write a piece for The Smokey Wire (not paid extra, potentially negative credit).
Nevertheless, there is one very simple thing journal editors could require that would have an enormous positive influence, IMHO. If a paper uses a variety of datasets, I would require a letter from each data source a) acknowledging that they were asked for their data and b) affirming or questioning whether their data were used appropriately. These writeups would then be shared with all reviewers.
If I put on my reviewer hat, I would say “Hey, we can’t do that! We’d spend all our time waiting for people to write up their stuff, and we can’t force them to do that, and besides, it’s unlikely I’ll be able to tell who was right if they disagree.”
I don’t think most non-scientists understand how difficult it is to do quality peer review, nor exactly what peer reviewers do. They don’t (can’t) check datasets or calculations. They mostly check whether the right techniques appear to have been used, whether the findings are plausible, whether the right citations are included (the reviewers’ own work ;)), and whether sound conclusions are drawn. I think the whole biz was easier when I was a young scientist: perhaps disciplines were easier to understand, scientists tried harder to be objective, there was less emphasis on multidisciplinary big-data manipulation studies, and findings were easier to ground-truth by observation in the area studied.
You get what you pay for, and no one pays for peer review. I acknowledge that many scientists work their tushes off with little acknowledgement or support in this area, but if anyone really cared about good scientific products, we (society) could do a great deal more to support the quality process.
Here are Morgan’s specific concerns about mill residues in the paper:
“I’m skeptical of the methods that have led to such high proportions of carbon loss attributable to harvest (Table 5) – particularly in several western states.
One major concern I have is how/why mill residues seem to be counted twice. My understanding of the Smith et al. publication is that mill residue (e.g., sawdust, etc.) from processing timber products (e.g., sawlogs) into primary products (e.g., lumber) is accounted for in the sequestered vs. emitted fractions for each product. For example, we see that about 40% of the sawlog volume (in figure 6) is emitted, 40% is landfill, and 20% is in-use. So, adding mill residue is essentially double-counting the carbon emissions and sequestration associated with the wood harvested for products. Since the authors assume dispositions for mill residues which show 80-90% emitted (Figure 6), it looks to me like double counting of mill residue is causing much higher amounts of emitted carbon (loss) from harvesting.”
If you had reviewed the paper, wouldn’t you want to see the authors’ answer to this question? And perhaps involve Smith et al. in the discussion?