When a person in policy or management tries to assemble research produced by the current system into a package relevant to a real-world problem, the available pool of scientific information can be an awkward jumble of technologies and strategies. An analogy might be designing a car by contacting a couple of hundred groups and asking them to work on car design. Some might work on door design, others on engines. But perhaps no one works on steering or air conditioning. And there are probably a few who decide that what is really needed is a train or an airplane and work on that instead. It is no one’s job to ensure that all the pieces are there or that the pieces fit together. Within the research system, there is little to encourage interdisciplinary cooperation, since the journals, funding agencies and other power structures of research communities tend to be either disciplinary or drawn from a restrictive subset of disciplines (e.g., ecological economics). This makes the task of taking the discrete nuggets of scientific information and arraying them into a meaningful policy analysis yet another mix of science, art and intuition.
DIVERGENT SELECTION, INDIRECT SELECTION AND DISCIPLINARY DRIFT
Research administrators have selected for certain traits in scientists. Certainly it is desirable to measure accomplishments, and this has been done by counting papers published and grants awarded. However, two questions arise. First, are there undesirable indirect effects from this selection? Second, how divergent is this selection from selection in the policy arena, and how would such a divergence influence scientists working with policy makers?
No doubt other forces also drive disciplinary fragmentation, but the proliferation of journals, symposia, associations and subdisciplinary communities is conspicuous. This is good for publication records, but difficult for individuals who want to keep up with, or synthesize, scientific findings. In terms of worldview, there tends to be some “disciplinary drift” as well.
Each scientific discipline contains the paradox that the more the circumstances are controlled to obtain accurate data, the less relevant the answer is to the real world. Science used to depend for its legitimacy on designed experiments, which could be replicated and tested. As issues like global climate change, or even evolution, come under scrutiny, however, it is recognized that in most cases rerunning the clock is not possible, and even if it were, stochastic forces might lead to a variety of possible outcomes. Therefore, as problems get more complex, science grows less “scientific.” We depend more and more on a given scientific community, rather than on reproducibility, to determine what is good science and what is bad. But as disciplines splinter and recombine, the scientific communities may be mixing values and science in varying proportions, with unquestioned approaches and unstated assumptions and paradigms. Thus, in today’s complex world, there may ultimately be no quality control on this science.
In addition, the combination of publication counts as an organizational target and disciplinary drift can lead scientists to amplify minor discoveries or to reinvent what is common knowledge in another discipline. There is also a tendency to make the simple arcane and esoteric so that the discovery appears important. In policy, citizens and their predilections are key. In research, both citizens and practitioners are often left out of decisions and are not the target of communication. This is a major difference between the two worlds.
Scientists can become advocates for technologies they develop, or amplify threats from someone else’s technology. In this environment, it is difficult for a policy maker to see past the self-serving nature of these debates and obtain balanced information. For example, Jasanoff (1990) cites the Ecological Society of America adopting an influential public position on assessing the risk of releasing genetically engineered organisms into the environment. According to Jasanoff, this action was prompted in large part by a desire to enhance the organization’s professional standing; significantly, it postdates a report from the National Research Council in which the institutionally more powerful community of molecular biologists and biochemists had articulated somewhat different principles, downplaying ecological consequences. Today, almost nine years later, with substantial sums of research funds invested in the interim, we are no closer to understanding the true environmental risks of GMOs than we were then. This is clearly an indictment of the scientific establishment’s ability, or desire, to look beyond its interests in increased research funding by discipline and develop information useful to the citizens of the U.S.