Separating Science from Non-Science, or Not, by Daniel Sarewitz

Image: Simulation of a Higgs boson decaying into four muons (Science & Society Picture Library / Getty)

When we talk about what “science tells us,” it’s sometimes good to reflect on the abstraction of “science” and on what makes any human activity count as science. In this piece, Dan Sarewitz looks at several different perspectives, from theoretical physics to improving antifreeze formulations.

What, then, joins Hossenfelder’s field of theoretical physics to ecology, epidemiology, cultural anthropology, cognitive psychology, biochemistry, macroeconomics, computer science, and geology? Why do they all get to be called science? Certainly it is not similarity of method. The methods used to search for the subatomic components of the universe have nothing at all in common with the field geology methods in which I was trained in graduate school. Nor is something as apparently obvious as a commitment to empiricism a part of every scientific field. Many areas of theory development, in disciplines as disparate as physics and economics, have little contact with actual facts, while other fields now considered outside of science, such as history and textual analysis, are inherently empirical. Philosophers have pretty much given up on resolving what they call the “demarcation problem,” the search for definitive criteria to separate science from nonscience; maybe the best that can be hoped for is what John Dupré, invoking Wittgenstein, has called a “family resemblance” among fields we consider scientific. But scientists themselves haven’t given up on assuming that there is a single thing called “science” that the rest of the world should recognize as such.

The demarcation problem matters because the separation of science from nonscience is also a separation of those who are granted legitimacy to make claims about what is true in the world from the rest of us Philistines, pundits, provocateurs, and just plain folks. In a time when expertise and science are supposedly under attack, some convincing way to make this distinction would seem to be of value. Yet Hossenfelder’s jaunt through the world of theoretical physics explicitly raises the question of whether the activities of thousands of physicists should actually count as “science.” And if not, then what in tarnation are they doing?

When Hossenfelder writes about “science” or the “scientific method” she seems to have in mind a reasoning process wherein theories are formulated to extend or modify our understanding of the world and those theories in turn generate hypotheses that can be subjected to experimental or observational confirmation—what philosophers call “hypothetico-deductive” reasoning. This view is sensible, but it is also a mighty weak standard to live up to. Pretty much any decision is a bet on logical inferences about the consequences of an intended action (a hypothesis) based on beliefs about how the world works (theories). We develop guiding theories (prayer is good for you; rotate your tires) and test their consequences through our daily behavior—but we don’t call that science. We can tighten up Hossenfelder’s apparent definition a bit by stipulating that hypothesis-testing needs to be systematic, observations carefully calibrated, and experiments adequately controlled. But this has the opposite problem: It excludes a lot of activity that everyone agrees is science, such as Darwin’s development of the theory of natural selection, and economic modeling based on idealized assumptions like perfect information flow and utility-maximizing human decisions.

Of course the standard explanation of the difficulties with theoretical physics would simply be that science advances by failing, that it is self-correcting over time, and that all this flailing about is just what has to happen when you’re trying to understand something hard. Some version of this sort of failing-forward story is what Hossenfelder hears from many of her colleagues. But if all this activity is just self-correction in action, then why not call alchemy, astrology, phrenology, eugenics, and scientific socialism science as well, because in their time, each was pursued with sincere conviction by scientists who believed they were advancing reliable knowledge about the world? On what basis should we say that the findings of science at any given time really do bear a useful correspondence to reality? When is it okay to trust what scientists say? Should I believe in susy or not?

The popularity of general-audience books about fundamental physics and cosmology has long baffled me. When, say, Brian Greene, in his 1999 bestseller The Elegant Universe, writes of susy that “Since supersymmetry ensures that bosons and fermions occur in pairs, substantial cancellations occur from the outset—cancellations that significantly calm some of the frenzied quantum effects,” should I believe that? Given that (despite my Ph.D. in a different field of science) I don’t have a prayer of understanding the math behind susy, what does it even mean to “believe” such a statement? How would it be any different from “believing” Genesis or Jabberwocky? Hossenfelder doesn’t seem so far from this perspective: “I don’t see a big difference between believing nature is beautiful and believing God is kind.”

5 thoughts on “Separating Science from Non-Science, or Not, by Daniel Sarewitz”

  1. Science isn’t perfect, but it sure beats the alternative, unfounded opinion, which appears in this forum a lot. Maybe we need a post about the valuable intellectual traits for critical thinking:

    Intellectual Humility: Having a consciousness of the limits of one’s knowledge, including a sensitivity to circumstances in which one’s native egocentrism is likely to function self-deceptively; sensitivity to bias, prejudice and limitations of one’s viewpoint. Intellectual humility depends on recognizing that one should not claim more than one actually knows. It does not imply spinelessness or submissiveness. It implies the lack of intellectual pretentiousness, boastfulness, or conceit, combined with insight into the logical foundations, or lack of such foundations, of one’s beliefs.

    Intellectual Courage: Having a consciousness of the need to face and fairly address ideas, beliefs or viewpoints toward which we have strong negative emotions and to which we have not given a serious hearing. This courage is connected with the recognition that ideas considered dangerous or absurd are sometimes rationally justified (in whole or in part) and that conclusions and beliefs inculcated in us are sometimes false or misleading. To determine for ourselves which is which, we must not passively and uncritically “accept” what we have “learned.” Intellectual courage comes into play here, because inevitably we will come to see some truth in some ideas considered dangerous and absurd, and distortion or falsity in some ideas strongly held in our social group. We need courage to be true to our own thinking in such circumstances. The penalties for non-conformity can be severe.

    Intellectual Empathy: Having a consciousness of the need to imaginatively put oneself in the place of others in order to genuinely understand them, which requires the consciousness of our egocentric tendency to identify truth with our immediate perceptions or long-standing thought or belief. This trait correlates with the ability to reconstruct accurately the viewpoints and reasoning of others and to reason from premises, assumptions, and ideas other than our own. This trait also correlates with the willingness to remember occasions when we were wrong in the past despite an intense conviction that we were right, and with the ability to imagine our being similarly deceived in a case-at-hand.

    Intellectual Integrity: Recognition of the need to be true to one’s own thinking; to be consistent in the intellectual standards one applies; to hold one’s self to the same rigorous standards of evidence and proof to which one holds one’s antagonists; to practice what one advocates for others; and to honestly admit discrepancies and inconsistencies in one’s own thought and action.

    Intellectual Perseverance: Having a consciousness of the need to use intellectual insights and truths in spite of difficulties, obstacles, and frustrations; firm adherence to rational principles despite the irrational opposition of others; a sense of the need to struggle with confusion and unsettled questions over an extended period of time to achieve deeper understanding or insight.

    Faith In Reason: Confidence that, in the long run, one’s own higher interests and those of humankind at large will be best served by giving the freest play to reason, by encouraging people to come to their own conclusions by developing their own rational faculties; faith that, with proper encouragement and cultivation, people can learn to think for themselves, to form rational viewpoints, draw reasonable conclusions, think coherently and logically, persuade each other by reason and become reasonable persons, despite the deep-seated obstacles in the native character of the human mind and in society as we know it.

    Fairmindedness: Having a consciousness of the need to treat all viewpoints alike, without reference to one’s own feelings or vested interests, or the feelings or vested interests of one’s friends, community or nation; implies adherence to intellectual standards without reference to one’s own advantage or the advantage of one’s group.

    https://web.archive.org/web/20040803030458/http://www.criticalthinking.org:80/University/intraits.html

    Our land management agencies are required to use best available science and use it appropriately. NEPA requires federal agencies to rely upon “high quality” information and “accurate scientific analysis.” 40 C.F.R. § 1500.1(b). The scientific information upon which an agency relies must be of “high quality” because “accurate scientific analysis, expert agency comments, and public scrutiny are essential to implementing NEPA.” Idaho Sporting Congress v. Thomas, 137 F.3d 1146, 1151 (9th Cir. 1998) (internal quotations omitted); see also Portland Audubon Society v. Espy, 998 F.2d 699, 703 (9th Cir. 1993) (overturning a decision that “rests on stale scientific evidence, incomplete discussion of environmental effects . . . and false assumptions”).
    During ESA Section 7 consultation, the agency “shall use the best scientific and commercial data available.” 16 U.S.C. § 1536(a)(2). “[T]he Federal agency requesting formal consultation . . . shall provide the Service with the best scientific and commercial data available or which can be obtained during the consultation,” to serve as the basis for the Fish and Wildlife Service’s subsequent biological opinion (BO). 50 C.F.R. § 402.14(d).

    Given how much we still have to learn about ecosystems and the effects of human management, all land management should be conducted within a framework of intentional learning, with constant monitoring and feedback between management actions and their consequences. Consider the adaptive management framework and methods described in V. Sit and B. Taylor, eds., Statistical methods for adaptive management studies, B.C. Ministry of Forests Research Branch, Victoria, B.C. (http://www.for.gov.bc.ca/hfd/pubs/docs/lmh/lmh42.htm). For instance, the agency should disclose the reliability of the scientific studies and other evidence used to support the NEPA analysis and the decision.

    A few key sources of evidence for the manager to know about, listed here in increasing order of reliability, include anecdotes and expert judgement, retrospective studies, nonexperimental (observational) studies, and experimental manipulation.

    As a whole, anecdotal information should be used with a great deal of caution, or at least with rigorous peer review, to help avoid problems such as motivational bias.

    “[E]xpert judgement cannot replace statistically sound experiments.” … Anecdotes and expert judgement alone are not recommended for evaluating management actions because of their low reliability and unknown bias.

    Inventories and surveys are not the same as nonexperimental studies; they display patterns but do not reveal correlates. Nevertheless, inventories and surveys can be useful in adaptive management. They provide information from which to select random samples, or a baseline of conditions from which to monitor changes over time. Inventories and surveys should still adhere to strict sampling protocols…

    When existing data are used, how well they can address the critical management question should be assessed honestly and accurately.

    Is there a ‘best’ statistical approach to adaptive management? The answer to this question is an unqualified ‘yes.’ The best approach for answering the questions ‘Did this action have the desired effect?’ and ‘Are the basic assumptions underlying management decisions correct?’ is to use controlled, randomized experiments with sufficient sample sizes and duration.

    Marcot, B. G. 1998. Selecting appropriate statistical procedures and asking the right questions: a synthesis. Pp. 129-142 in V. Sit and B. Taylor, eds., Statistical methods for adaptive management studies. B.C. Ministry of Forests Research Branch, Victoria, B.C. http://www.for.gov.bc.ca/hfd/pubs/docs/lmh/lmh42.htm
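
    To make Marcot’s point concrete, here is a minimal sketch of a controlled, randomized experiment in Python (assuming numpy and scipy are available). Everything in it is hypothetical: the plot counts, the “thinning” treatment, and the effect sizes are invented for illustration, not taken from the cited handbook.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Step 1: randomize. Assign 20 hypothetical plots to treatment
    # (e.g., thinning) or control purely at random, so unknown site
    # differences are spread across both groups instead of biasing one.
    n_plots = 20
    assignment = rng.permutation([True] * (n_plots // 2) + [False] * (n_plots // 2))

    # Step 2: measure a response on every plot. The "measurements" here
    # are simulated: control plots average 100 units, treated plots 110,
    # both with natural variability (sd = 12).
    response = np.where(assignment,
                        rng.normal(110, 12, n_plots),
                        rng.normal(100, 12, n_plots))

    # Step 3: ask "did this action have the desired effect?" with a
    # two-sample t-test comparing treated plots against controls.
    treated, control = response[assignment], response[~assignment]
    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"treated mean {treated.mean():.1f}, control mean {control.mean():.1f}, p = {p_value:.3f}")

    Rerun with different seeds, the same design also shows why “sufficient sample sizes and duration” matter: with few plots or a small effect, the p-value bounces around and the experiment cannot separate treatment from noise.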

    A lot of people appear to suffer from a mental condition known as “belief perseverance”:

    “People tend to hold on to their beliefs even when it appears that they shouldn’t. Belief perseverance is the tendency to cling to one’s initial belief even after receiving new information that contradicts or disconfirms the basis of that belief. … The third type involves naive theories, beliefs about how the world works. … At least three psychological processes underlie belief perseverance. One involves use of the “availability heuristic” to decide what is most likely to happen. … A second process concerns “illusory correlation,” in which one sees or remembers more confirming cases and fewer disconfirming cases than really exists. A third process involves “data distortions,” in which confirming cases are inadvertently created and disconfirming cases are ignored. … Research also has investigated ways to reduce belief perseverance. The most obvious solution, asking people to be unbiased, doesn’t work. However, several techniques do reduce the problem. The most successful is to get the person to imagine or explain how the opposite belief might be true. This de-biasing technique is known as counterexplanation.”

    Anderson, C.A. (2007). Belief perseverance (pp. 109-110). In R. F. Baumeister & K. D. Vohs (Eds.), Encyclopedia of Social Psychology. Thousand Oaks, CA: Sage. http://www.psychology.iastate.edu/faculty/caa/abstracts/2005-2009/07a.pdf

    • Then again, scientific opinions are where it’s at in forestry when dealing with site-specific conditions. You can check at the door the offensive comments pretending that some of us reject science. Are you looking to upgrade from a “broad brush”?

    • Second, thanks for posting the traits for critical thinking. Oddly, I just graduated from a school that was supposed to teach critical thinking but did not reflect those values in practice (a liberal arts university).

      As to adaptive management, various efforts have been tried, including the Adaptive Management Areas in the NW Forest Plan. Does anyone know of a paper showing how that worked?
      The quote below is interesting:
      “[E]xpert judgement cannot replace statistically sound experiments.” … Anecdotes and expert judgement alone are not recommended for evaluating management actions because of their low reliability and unknown bias.

      What is a statistically sound experiment? Back in the old days (at OSU) we had designed experiments in which we varied treatments and watched what happened. Think seedlings and nursery treatments. Experimental forests were designed for long-term studies. But it seems to me that many of the studies we look at today are not designed experiments: they take existing data and make a series of assumptions, because given the nature of the question you probably can’t run a designed experiment. They are modeling exercises, which are fine, but they are not the same as a statistically sound designed experiment (the difference is sketched in code below).
      It seems to me that Gluckman’s idea of extended peer review with practitioners would make scientific information more meaningful. I see both/and, not either/or, in terms of involving scientists of varying disciplines and an array of practitioners.
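
      To illustrate that difference in code (a sketch with entirely invented numbers, again assuming numpy): in the simulated “existing data” below, drier sites were more likely to have been treated, so a naive comparison mixes the treatment effect with the site effect. A modeling exercise can adjust for that, but only if the confounder was measured and the model is right, which are exactly the assumptions a randomized design does not need.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 200

      # Hypothetical existing data: no randomization. Drier sites were
      # more likely to have been treated.
      dryness = rng.uniform(0, 1, n)
      treated = (rng.uniform(0, 1, n) < dryness).astype(float)
      true_effect = 5.0
      outcome = 50 - 20 * dryness + true_effect * treated + rng.normal(0, 3, n)

      # Naive comparison of treated vs. untreated plots: biased, because
      # treated plots are systematically drier.
      naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

      # Modeling exercise: regress outcome on treatment AND dryness.
      # This recovers the effect only because dryness was measured and
      # the linear model happens to match how the data were generated.
      X = np.column_stack([np.ones(n), treated, dryness])
      beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

      print(f"true effect: {true_effect}")
      print(f"naive difference in existing data: {naive:.1f}")
      print(f"model-adjusted estimate: {beta[1]:.1f}")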

  2. Just because there isn’t a completed long-term study of the effects of thinning in the Sierra Nevada doesn’t mean we shouldn’t do it. There will be no “latest science” contradicting current styles of active forest management in the Sierra Nevada. Measuring the benefits of the current style of thinning over five decades would be nice, but I doubt that is even possible within the government.

