Bert Baumgaertner and Charles Lassiter – “Convergence and Shared Reflective Equilibrium”

In this post, Bert Baumgaertner and Charles Lassiter discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Photo of two men looking down on the train tracks from a diverging bridge.
“Quai Saint-Bernard, Paris” (1932) Henri Cartier-Bresson

Imagine you’re convinced that you should pull the lever to divert the trolley because it’s better to save more lives. But suppose you find the thought of pushing the Fat Man off the bridge too ghoulish to consider seriously. You have a few options to resolve the tension:

  1. you might revise your principle that saving more lives is always better;
  2. you could revise your intuition about the Fat Man case;
  3. you could postpone the thought experiment until you get clearer on your principles;
  4. you could identify how the Fat Man case differs from the original case of the lone engineer on the trolley track.

These are our options when we are engaging in reflective equilibrium. We’re trying to square our principles and judgments about particular cases, adjusting each until a satisfactory equilibrium is reached.

Now imagine there’s a group of us, all trying to arrive at an equilibrium but without talking to one another. Will we all converge on the same equilibrium?

Consider, for instance, two people—Tweedledee and Tweedledum. They are both thinking about what to do in the many variations of the Trolley Problem. For each variation, Tweedledee and Tweedledum might have a hunch or they might not. They might share hunches or they might not. They might consider variations in the same order or they might not. They might start with the same initial thoughts about the problem or they might not. They might have the same disposition for relieving the tension or they might not.

Just this brief gloss suggests that there are a lot of places where Tweedledee and Tweedledum might diverge. But we didn’t just want suggestive considerations; we wanted to get more specific about the processes involved and about how likely divergence or convergence would be.

To this end, we imagined an idealized version of the process. First, each agent begins with a rule of thumb, intuitions about cases, and a disposition for how to navigate any tensions that arise. Each agent then considers one case at a time. “Considering a case” means comparing the case under discussion to the paradigm cases sanctioned by the rule. If the case under consideration is similar enough to the paradigm cases, the agent accepts it, which amounts to saying, “this situation falls into the extension of my rule.” Sometimes an agent might have an intuition that a case falls into the extension of the rule even though it’s not close enough to the paradigm cases. This is when our agents deliberate, using one of the four strategies mentioned above.

In order to get a sense of how likely it is that Tweedledee and Tweedledum would converge, we needed to systematically explore the space of possible ways in which the process of reflective equilibrium could go. So we built a computer model of it. As we built the model, we purposely made choices we thought would favor the success of a group of agents reaching a shared equilibrium. This gives us a kind of “best case” scenario: adding in real-world complications would only make reaching a shared equilibrium harder, not easier.

An example or story used for consideration, like a particular Trolley Problem variation, is made up of a set of features. Other versions share some of those features but differ on others. So we represented a case as a string of yes/no bits, like YYNY, where Y in positions 1, 2, and 4 means the case has the respective feature, while N in position 3 means it does not. Of course, examples used in real debates are much more complicated and nuanced, but having only four possible features should only make it easier to reach agreement. Cases carry labels representing intuitions: a label of “IA” means a person has an intuition to accept the case as an instance of a principle, “IR” means an intuition to reject it, and “NI” means they have no intuition about it. Finally, a principle consists of a “center” case and a similarity threshold (how many bit values can differ?) that together define the extension of cases falling under the principle.
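To make this concrete, here is a minimal sketch, in Python, of how these pieces could be represented. Everything below is our illustrative reconstruction rather than the paper’s actual code: the names `Case`, `Intuition`, and `Principle` are assumptions, and similarity is measured simply as the number of differing bits.

```python
# Illustrative sketch of the model's building blocks (our reconstruction,
# not the authors' implementation).

from dataclasses import dataclass
from enum import Enum


class Intuition(Enum):
    IA = "accept"  # intuition to accept the case as an instance of the principle
    IR = "reject"  # intuition to reject it
    NI = "none"    # no intuition about the case


# A case is a fixed-length run of yes/no features, e.g. "YYNY".
Case = tuple[bool, ...]


def case_from_string(s: str) -> Case:
    """Build a case from a Y/N string like 'YYNY'."""
    return tuple(ch == "Y" for ch in s)


@dataclass
class Principle:
    center: Case    # the paradigm ("center") case
    threshold: int  # how many feature values may differ

    def covers(self, case: Case) -> bool:
        """A case falls in the principle's extension iff it differs from
        the center in at most `threshold` features."""
        return sum(a != b for a, b in zip(self.center, case)) <= self.threshold
```

For example, `Principle(case_from_string("YYNY"), 1)` covers `case_from_string("YNNY")`, which differs in only one feature, but not `case_from_string("NNNN")`, which differs in three.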

We then represented the process of reflective equilibrium as a kind of negotiation between principles and intuitions, carried out by checking whether the case an intuition is about is or isn’t a member of the extension of the principle. To be sure, the real world is much more complicated, but the simplicity of our model makes it easier to see what sorts of things can get in the way of reaching shared equilibrium.
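Continuing the sketch above, and again only as an illustration of the kind of step involved (the random strategy choice and its effects are our assumptions, standing in for an agent’s disposition), a single round of this negotiation might look like this:

```python
# An illustrative single step of the negotiation (our sketch, not the
# authors' code), building on Case, Intuition, and Principle above.

import random


def consider(principle: Principle, case: Case, intuition: Intuition,
             rng: random.Random) -> tuple[Principle, Intuition]:
    """Compare an intuition about a case with the principle's extension,
    resolving any tension with one of the four strategies."""
    in_extension = principle.covers(case)

    # No intuition, or intuition and principle agree: nothing to resolve.
    if intuition is Intuition.NI or (intuition is Intuition.IA) == in_extension:
        return principle, intuition

    # Tension: the agent's "disposition" picks a resolution strategy.
    strategy = rng.choice(
        ["revise_principle", "revise_intuition", "postpone", "distinguish"]
    )
    if strategy == "revise_principle":
        # Widen the extension to admit the case, or narrow it to exclude it.
        delta = 1 if intuition is Intuition.IA else -1
        return Principle(principle.center, max(0, principle.threshold + delta)), intuition
    if strategy == "revise_intuition":
        # Bring the intuition into line with the principle's verdict.
        return principle, Intuition.IA if in_extension else Intuition.IR
    # "postpone" and "distinguish" leave both unchanged in this toy version;
    # a richer model would set the case aside or add a distinguishing feature.
    return principle, intuition
```

Even in this toy version, two agents with identical starting points can drift apart: a differently seeded random number generator, or cases arriving in a different order, is enough to push them toward different revisions.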

What we found is that it is very hard to converge on a single interpersonal equilibrium. Even in the best-case scenario, with very charitable interpretations of some “plausible” assumptions, we don’t see convergence.

Analysts of the process of reflective equilibrium are right that interpersonal convergence might not happen if people have different starting places. But they underestimate how hard reaching convergence is even when Tweedledee and Tweedledum start from the same place. The reason is that, even if we rule out all of the implausible decision points, there are still many plausible decision points at which Tweedledee and Tweedledum can diverge. They might both change their rule of thumb, for instance, but change it in slightly different ways. Small differences, particularly early in the process, lead to substantial divergence.

Why does this matter? Despite how hard convergence is in our model, in the real world we find it all over the place, like philosophers’ shared intuitions about Gettier cases, supposedly arrived at from our La-Z-Boys. On our representation of reflective equilibrium, such convergence is highly unlikely, which suggests we should look elsewhere for an explanation. One alternative explanation we suggest (and explore in other work) is the idea of “precedent”, i.e., information one has about the commitments and rules of others, and how that information might serve as a guide in one’s own process of deliberation.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4654/.

About the authors

Bert Baumgaertner grew up in Ontario, Canada, completing his undergraduate degree at Wilfrid Laurier University. He moved to the sunny side of the continent to do his graduate studies at the University of California, Davis. In 2013 he moved to Idaho to start his professional career as a philosophy professor, where he concurrently developed a passion for trail running and thru-hiking in the literal Wilderness areas of the Pacific Northwest. He is now Associate Professor of Philosophy at the University of Idaho. He considers himself a computational philosopher whose research draws from philosophy and the cognitive and social sciences. He uses agent-based models to address issues in social epistemology.

Charles Lassiter was born in Washington DC and grew up in Virginia, later moving to New Jersey and New York for undergraduate and graduate studies. In 2013, he left the safety and familiarity of the East Coast to move to the comparative wilderness of the Pacific Northwest for a job at Gonzaga University, where he is currently Associate Professor of Philosophy and Director of the Center for the Applied Humanities. His research focuses on issues of enculturation and embodiment (broadly construed) for an understanding of mind and judgment (likewise broadly construed). He spends a lot of time combing through large datasets of cultural values and attitudes relevant to social epistemology.