Peter Williams on Valuing Collaborative Input to Agency Decisions

Happy St. Patrick’s Day everyone! It’s snowing and blowing and drifting here pretty seriously so I won’t be heading to the pub.

I thought everyone could use a break from trying to understand the complexities of oil and gas production, so here’s a brief post by Peter Williams, a collaboration literature expert, in answer to my question:

If you have a collaborative group and the FS doesn’t accept their recommendations, is there some literature around what is considered “good faith”, or what is input that is accepted or valued, and how do you tell if it’s valued if it’s not entirely accepted?

Granted, now that I look at it, perhaps I went off on a bit of a tangent in bringing up the “good faith” concept, but I think his answer is still helpful.

On the good-faith question, the literature is difficult because “good faith” has a specific meaning in business, as does “stakeholder,” and because collaboration tends to mean something different in the private sector than in the public sector, owing to the question of “decision authority” or responsibility. The question, and even the phrasing, differs between the two sectors.

I guess a third consideration might be that a “values-based” or values-driven decision process can lead to a situation where a recommendation isn’t followed, but the values driving that recommendation are applied to make a different decision. That result would seem consistent with a collaborative process, whereas a collaborative process that produces a recommendation and then treats the recommendation as somewhat of a “position” is actually less collaborative than some participants might realize (Kuhn would call this “internally inconsistent” in his discussion of scientific revolutions).

Here’s an example from the business world where “good-faith” and stakeholder are both emphasized, but in a different way than you and I would mean:

**Here’s the abstract of that paper for those interested:**

Although stakeholder theory is concerned with stakeholder engagement, substantive operational barometers of engagement are lacking in the literature. This theoretical paper attempts to strengthen the accountability aspect of normative stakeholder theory with a more robust notion of stakeholder engagement derived from the concept of good faith. Specifically, it draws from the labor relations field to argue that altered power dynamics are essential underpinnings of a viable stakeholder engagement mechanism. After describing the tenets of substantive engagement, the paper draws from the labor relations and commercial law literatures to describe the characteristics of good faith as dialogue, negotiation, transparency, and totality of conduct; explains how they can be adapted and applied to the stakeholder context; and suggests the use of mediation and non-binding arbitration. The paper concludes by addressing anticipated objections and shortcomings and discussing implications for theory and research.

Back to Peter:

In my own work (FWIW), I’ve drawn from the decision-making literature, framing collaboration as a decision process and the question about “good-faith” as input to a decision, especially when the decision process is a collaborative one (i.e., decision is informed by a collaborative process, as opposed to being made collaboratively).

What I do is ask early about how participants (including the decision-maker(s)) might measure success. And then I split that into several key parts because sometimes success can be defined in an inappropriate or counter-productive way, like whether a group recommendation is accepted by the decision-maker (positional) as opposed to used by that person (values-driven). Also, there’s the question of whether the input is valued in the sense of informing the decision and whether it’s valued in the sense of having been heard. Both can be important, so speaking to both seems essential.

In practice, what I do is look for ways the deciding official or agency can, first, make it clear that they heard the input, which is similar to active listening. Then I look for ways to make a clear link between what was heard and the pending decision, so the discussion becomes “here’s what I heard” followed by “here’s how your input and participation helped.” And I also then talk about how the decision leads to actions (i.e., implementation) and that there are ways to stay engaged to help implement the action or otherwise pursue those driving values.

7 thoughts on “Peter Williams on Valuing Collaborative Input to Agency Decisions”

  1. Ah! This is brilliant. I’ve been ruminating about many of these concepts and questions as part of the second chapter of my dissertation (blah, blah, blah). I appreciate Sharon highlighting the distinction between input that is accepted/adopted and input that is valued as well as Peter’s elaboration on those concepts. I also find it particularly interesting to hear someone come at the questions from an entirely different angle and look forward to reading the full article that is abstracted above.

    FWIW, I have been thinking of these concepts in slightly different ways, in terms of factors that contribute to bureaucratic willingness to accept influence (from participatory processes like collaboration) and the ways in which bureaucrats demonstrate responsiveness. Implicit in my formulations, and perhaps Peter’s as well, are two ideas: that the role of bureaucrats is to make decisions based on public values (not entirely consistent with Weber’s technocratic model of bureaucracy) and that participatory democratic processes like collaboration, which involve a diverse set of potentially impacted publics, reveal an approximation of public values (or public preferences, if you’re in the rational choice camp). Looked at this way, we can see the tension in worldviews between technocratic decision-making associated with representative democracy and values-driven decision-making associated with participatory or direct democracy.

    Returning to Sharon and Peter’s discussion, perhaps “good faith” could be demonstrated via bureaucratic responsiveness? Would that address Peter’s scenario of “a “values-based” or values-driven decision process … lead[ing] to a situation where a recommendation isn’t followed, but the values driving that recommendation are applied to make a different decision”? The challenge is: how do you measure whether a decision was values-driven and whether those values were derived from the deliberative process? Can we only use metrics of success as defined by the subjective perception of participants?

    • Chelsea, I don’t know if this will be helpful, and I’m not at all familiar with the literature, but the tension between technocratic and values-driven decision-making is a theme in Justin Farrell’s excellent (IMHO) book The Battle for Yellowstone. Here’s a quote from the introduction that outlines how he approaches that in the book.

      I make two arguments: First, that this large-scale social change has important moral causes and consequences, as competing groups erect and protect new moral boundaries in the fight for nature. Second, that this new social and moral arrangement fostered protracted environmental conflict. I present the cast of characters involved in GYE conflicts, and then document the rise of conflict using a host of original time-series indicators, across a variety of institutional fields (e.g., lawsuits, voting segregation, congressional attention, scientific disputes, public responses, interest group conflict, carrying capacity conflict).

      In the remaining chapters I build on this sociohistorical approach, descending from the bird’s-eye-view level down into the inner workings of some of the most contentious and intractable conflicts in the GYE. In doing so, I am able to show specifically and concretely how morality and spirituality actually influence the day-to-day practices of environmental conflict. Each fine-grained case study shows the different ways in which these cultural elements are tangled up in, and come to influence, disputes that, on the face of it, appear to be purely rational, secular, scientific, legal, or economic.

    • It sounds like you and the author above are conceptualizing the bureaucrats as receiving collaborative input, then either accepting/responding or not. Then there’s a second frame (not discussed as much here) in which the bureaucrats are actually part of the collaborative decision, as participants with their own interests. I would say that the line between those is pretty gray. In my experience, when the line officer believes they need collaborative input, their needs will shape the process in a way that is functionally the same as participation even if they aren’t participating directly. In either case, they are very likely to either accept or reflect the input in the decision. In that case, a “response” *is* the agency’s participation, just in a different setting (not sitting at the table, but on paper).

      But if they don’t believe they need the input and are just checking a box because collaboration is an expectation, then they’re much less likely to accept it, and efforts to “respond” will often ring hollow (i.e., sound like excuses). The key distinction isn’t procedural (how the active listening and responding occurs); it’s substantive (does the agency need the input to solve a problem?). If not, the agency isn’t likely to accept the input, and its response is likely to be unsatisfactory to the group.

      My advice to the agency: if you don’t think you need the input, don’t waste our time. And if you signal that you do want the input, you’re making a promise you had better be ready to keep.

      This is my long way of coming around to your question about how we know whether the agency was responsive: collaborative group perceptions are really the only reliable way. Only the collaborative group can say whether the response is meaningful and respectful of the input, or just a hollow excuse.

  2. Having spent quite a bit of time working on collaboratives, I’ve found it is best to avoid reaching the point where a collaborative group makes a formal recommendation that is then largely ignored. If it gets to that point, the result is often a loss of trust for a very long time. I am still bitter, many years later, about several projects I worked on where the collaborative was put in the awkward position of working with agency staff to develop thoughtful, detailed, science-based alternatives to agency actions that were then almost completely ignored in the decision. Seeing the results of those actions on the ground, and knowing what could have been, is a forever reminder of that terrible process.

    Although the agency is the decision-making authority, and the one with its neck on the line, the agency is also a member of the collaborative. The agency should not be asking the collaborative for the first time what it thinks of a project the agency spent the past six months developing; that is a recipe for collaborative failure. This is a concept that can be difficult for some Rangers and Supervisors to understand, which is why personalities are so important. The process works when the agency acts as a member of a collaborative that includes the group in the development of potentially contentious projects from very early in the project design phase. This is the only way the values of the group can truly be incorporated.

    • A. I don’t understand part of what you said: “develop thoughtful, detailed, science-based alternatives to agency actions.” Wouldn’t collaboratives help the agency design the preferred alternative? Perhaps I could better wrap my brain around this with more specifics about what these ideas were and what happened on the ground that was not good.

  3. In my observation, so-called “collaborative” groups are often anything but collaborative. Instead, they are often just a single organization, or a coalition of like-minded organizations, that has arrogated to itself the title of “collaborative” and claims to represent a community consensus while steamrolling anyone with opposing views and ignoring any stakeholders and user groups whom their proposal would harm. While they may have the support of local governments, they are often highly controversial and face widespread opposition that they simply ignore and pretend doesn’t exist.

    This has been the case with both major collaboratives I’ve been following in central Colorado lately: the Gunnison Public Lands Initiative (really just a coalition of Wilderness advocacy groups and left-wing County Commissions) and Envision Chaffee County (really just one group of disgruntled locals fed up with camping and recreation in general, which was founded by a sitting county commissioner and wrote the county’s recreation plan). Both groups started with a particular agenda and either ignored or completely excluded anyone with opposing viewpoints from being involved with their “collaborative.” Then they get all peeved when the land management agencies actually follow their multiple-use mandate to manage public lands for the benefit of all instead of following their particular agenda.

    Here’s a short (5-page) relevant article (by someone whom I imagine has worked with Peter Williams) that takes off from the idea of “decision authority” being unclear and distinguishes it from “decision space.” (I would differ in viewing the “legal” limitations as part of the former rather than the latter.) But I think it does a better job of showing how collaboration can help expand the discretionary decision space by bringing in new ideas, in the scenario alluded to by Sam:

    “Agency personnel often ask, ‘What do citizen collaborative groups bring to the table that can assist in providing things necessary to the decision process?’” The article then provides an example of successful collaboration under this scenario: herbicide use after the Rim Fire.


The Smokey Wire : National Forest News and Views