Posted on

Bert Baumgaertner and Charles Lassiter –“Convergence and Shared Reflective Equilibrium”

In this post, Bert Baumgaertner and Charles Lassiter discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Photo of two men looking down on the train tracks from a diverging bridge.
“Quai Saint-Bernard, Paris” (1932) Henri Cartier-Bresson

Imagine you’re convinced that you should pull the lever to divert the trolley because it’s better to save more lives. But suppose you find the thought of pushing the Fat Man off the bridge too ghoulish to consider seriously. You have a few options to resolve the tension:

  1. you might revise your principle that saving more lives is always better;
  2. you could revise your intuition about the Fat Man case;
  3. you could postpone the thought experiment until you get clearer on your principles;
  4. you could identify how the Fat Man case is different from the original one of the lone engineer on the trolley track.

These are our options when we are engaging in reflective equilibrium. We’re trying to square our principles and judgments about particular cases, adjusting each until a satisfactory equilibrium is reached.

Now imagine there’s a group of us, all trying to arrive at an equilibrium but without talking to one another. Will we all converge on the same equilibrium?

Consider, for instance, two people—Tweedledee and Tweedledum. They are both thinking about what to do in the many variations of the Trolley Problem. For each variation, Tweedledee and Tweedledum might have a hunch or they might not. They might share hunches or they might not. They might consider variations in the same order or they might not. They might start with the same initial thoughts about the problem or they might not. They might have the same disposition for relieving the tension or they might not.

Just this brief gloss suggests that there are a lot of places where Tweedledee and Tweedledum might diverge. But we didn’t just want suggestive considerations; we wanted to get more specific about the processes involved and about how likely divergence or convergence would be.

To this end, we imagined an idealized version of the process. First, each agent begins with a rule of thumb, intuitions about cases, and a disposition for how to navigate any tensions that arise. Each agent considers one case at a time. “Considering a case” means comparing the case under discussion to the paradigm cases sanctioned by the rule. If the case under consideration is similar enough to the paradigm cases, the agent accepts the case, which amounts to saying, “this situation falls into the extension of my rule.” Sometimes, an agent might have an intuition that the case falls into the extension of the rule, but it’s not close enough to the paradigm cases. This is when our agents deliberate, using one of the four strategies mentioned above.

In order to get a sense of how likely it is that Tweedledee and Tweedledum would converge, we needed to systematically explore the space of the possible ways in which the process of reflective equilibrium could go. So, we built a computer model of it. As we built the model, we purposely made choices we thought would favor the success of a group of agents reaching a shared equilibrium. By doing so, we have a kind of “best case” scenario. Adding in real-world complications would make reaching a shared equilibrium only harder, not easier.

An example or story used for consideration, like a particular Trolley problem, is made up of a set of features. Other versions share some of those features but differ on others. So we represented each case as a string of yes/no bits, like YYNY, where Y in positions 1, 2, and 4 means the case has the respective feature, while N in position 3 means it does not. Of course, examples used in real debates are much more complicated and nuanced, but having only four possible features should only make it easier to reach agreement. Cases carry labels representing intuitions: a label of “IA” means a person has an intuition to accept the case as an instance of a principle, “IR” means an intuition to reject it, and “NI” means they have no intuition about it. Finally, a principle consists of a “center” case and a similarity threshold (how many bit values can differ?) that together define the extension of cases falling under the principle.

We then represented the process of reflective equilibrium as a kind of negotiation between principles and intuitions by checking whether the relevant case of the intuition is or isn’t a member of the extension of the principle. To be sure, the real world is much more complicated, but the simplicity of our model makes it easier to see what sorts of things can get in the way of reaching shared equilibrium.
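The representation just described can be sketched in a few lines of code. This is a toy reconstruction for illustration only; the names (`Principle`, `covers`, the sample cases and threshold) are ours, and the authors’ actual model surely differs in detail.

```python
# Toy reconstruction of the model's representation (illustrative only).

def distance(case_a, case_b):
    """Number of feature positions on which two cases differ."""
    return sum(x != y for x, y in zip(case_a, case_b))

class Principle:
    """A rule of thumb: a 'center' paradigm case plus a similarity threshold."""
    def __init__(self, center, threshold):
        self.center = center
        self.threshold = threshold

    def covers(self, case):
        # A case falls in the rule's extension when it differs from the
        # paradigm case in at most `threshold` feature bits.
        return distance(case, self.center) <= self.threshold

# Intuition labels: IA = intuit accept, IR = intuit reject, NI = no intuition.
intuitions = {"YYNY": "IA", "NNNY": "IA", "YNNN": "NI"}

rule = Principle(center="YYYY", threshold=1)

for case, label in intuitions.items():
    accepted = rule.covers(case)
    # Tension: an intuition to accept a case the rule excludes (or vice
    # versa) is what triggers deliberation, via one of the four strategies.
    tension = (label == "IA" and not accepted) or (label == "IR" and accepted)
    print(case, label, "in extension:", accepted, "tension:", tension)
```

On this sketch, the case NNNY produces tension: the agent intuits acceptance, but the case lies outside the rule’s extension. How an agent resolves that tension (revise the rule, revise the intuition, postpone, or distinguish the case) is exactly where agents can begin to diverge.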

What we found is that it is very hard to converge on a single interpersonal equilibrium. Even in the best-case scenario, with very charitable interpretations of some “plausible” assumptions, we don’t see convergence.

Analysts of the process of reflective equilibrium are right that interpersonal convergence might not happen if people have different starting places. But they underestimate how hard convergence is even when Tweedledee and Tweedledum start from the same place. The reason is that, even after we rule out all of the implausible decision points, there remain many plausible decision points at which Tweedledee and Tweedledum can diverge. They might both change their rule of thumb, for instance, but change it in slightly different ways. Small differences, particularly early in the process, lead to substantial divergence.

Why does this matter? However challenging convergence is in our model, in the real world we find it all over the place, like philosophers’ intuitions about Gettier cases, supposedly reached from our La-Z-Boys. On our representation of reflective equilibrium, such convergence is highly unlikely, suggesting that we should look elsewhere for an explanation. One alternative explanation we suggest (and explore in other work) is the idea of “precedent”, i.e., information one has about the commitments and rules of others, which might serve as a guide in one’s own process of deliberation.

Want more?

Read the full article at

About the authors

Bert Baumgaertner grew up in Ontario, Canada, completing his undergraduate degree at Wilfrid Laurier University. He moved to the sunny side of the continent to do his graduate studies at University of California, Davis. In 2013 he moved to Idaho to start his professional career as a philosophy professor, where he concurrently developed a passion for trail running and through-hiking in the literal Wilderness areas of the Pacific Northwest. He is now Associate Professor of Philosophy at University of Idaho. He considers himself a computational philosopher whose research draws from philosophy and the cognitive and social sciences. He uses agent-based models to address issues in social epistemology. 

Charles Lassiter was born in Washington DC and grew up in Virginia, later moving to New Jersey and New York for undergraduate and graduate studies. In 2013, he left the safety and familiarity of the East Coast to move to the comparative wilderness of the Pacific Northwest for a job at Gonzaga University, where he is currently Associate Professor of Philosophy and Director of the Center for the Applied Humanities. His research focuses on issues of enculturation and embodiment (broadly construed) for an understanding of mind and judgment (likewise broadly construed). He spends a lot of time combing through large datasets of cultural values and attitudes relevant to social epistemology.


Cathy Mason – “Reconceiving Murdochian Realism”

In this post, Cathy Mason discusses the article she recently published in Ergo. The full-length version of Cathy’s article can be found here.

A picture of a vase with irises.
“Irises” (1890) Vincent van Gogh

Iris Murdoch’s ethics is filled with discussions of moral reality, moral truth, and how things really stand morally. What exactly does she mean by these? Her style is certainly non-standard for philosophy, and her ideas are remarkably wide-ranging, but it can seem appealing to think that, at heart, her metaethical commitments largely align with those of standard realists. I suggest, however, that this reading of Murdoch is mistaken: her realism amounts to something else altogether.

I take standard realism to be roughly captured by the following definition from Sayre-McCord:

Moral realists hold that there are moral facts, that it is in light of these facts that peoples’ moral judgments are true or false, and that the facts being what they are (and so the judgments being true, when they are) is not merely a reflection of our thinking the facts are one way or another. That is, moral facts are what they are even when we see them incorrectly or not at all. (Sayre-McCord 2005: 40)

Does Murdoch subscribe to this view? It can certainly be tempting to think so. She repeatedly talks about ‘realism’ and ‘objectivity’, and remarks like the following seem well-understood in standard realist terms:

The authority of morals is the authority of truth, that is of reality. (TSG 374)

The ordinary person does not, unless corrupted by philosophy, believe that he creates values by his choices. He thinks that some things really are better than others and that he is capable of getting it wrong. (TSG 380)

Here, Murdoch clearly commits to the idea that some moral claims are true, and that what makes them true is not something to do with the valuer, but something about the world. All this sounds very much like standard realism.

However, it would be a mistake to think that these surface similarities point towards a deeper congruence between Murdoch and standard realists. For a start, realists typically take moral facts to be one kind among many. Just as there are mathematical facts and psychological facts, so too there are moral facts. Yet Murdoch repeatedly insists that all reality is moral—and thus that all facts are in some sense moral facts (e.g. IP 329, OGG 357, MGM 35). Moreover, though Murdoch insists on the truth of some moral claims, she understands the notion of truth very differently from standard realists.  Whereas realists typically regard truth as something abstract, Murdoch suggests that it can only be understood in relation to truthfulness and the search for truth. The seeming agreement between Murdoch and standard realists on the truth of some ethical claims thus belies deeper disagreements between them.

What’s more, standard realism is hard to square with some wider views Murdoch holds. First, she suggests that some moral concepts can be genuinely private: fully virtuous agents may have different moral concepts without either of their conceptual schemas being inaccurate or incomplete. Second, she suggests that there can be private moral reasons: moral reasons need not be universal. It is hard to see how there could be room for private moral concepts and reasons within standard realism: either there are facts corresponding to a moral belief, or there are not. If there are, then it is a kind of moral ignorance to ignore such facts. If not, then the belief is simply false. Finally, Murdoch rejects the idea common in standard realism that the moral supervenes on the non-moral, since she suggests that there simply is no non-moral reality.

What, then, does Murdoch have in mind when she discusses realism? In most cases where Murdoch introduces ideas such as realism or objectivity, she is discussing the moral perceiver’s relation to the thing perceived, rather than only talking about the thing perceived. Her realism is a claim about the reality of the moral where reality is understood as that which is discerned by the virtuous perceiver.

Take, for example, the following passages:

[T]he realism (ability to perceive reality) required for goodness is a kind of intellectual ability to perceive what is true, which is automatically at the same time a suppression of self. (OGG 353)

[A]nything which alters consciousness in the direction of unselfishness, objectivity and realism is to be connected with virtue. (TSG 369)

In both of these quotes, Murdoch discusses the relation between a moral perceiver and the thing perceived. Realism or objectivity is talked of not as a metaphysical feature of objects, properties or facts, but as a feature of moral agents who are epistemically engaged with the world.

Of course, the standard realist might allow that there is such a thing as realism as a feature of a moral perceiver, and understand this in terms of accessing facts or properties which independently exist. Yet this ordering of explanations is ruled out by Murdoch’s insistence that reality itself is a normative (moral) concept. What is objectively real, for Murdoch, cannot be understood apart from ethics, apart from the essentially human activity of seeking to understand the world which is subject to moral evaluation. This is not to suggest that reality is a solely moral concept: it is also linked to truth, to how the world is. But it is to suggest that a conception of how the world is, of reality, must be essentially ethical.

What kind of relation, then, must the realistic observer stand in to the thing observed? Murdoch suggests that no non-moral answer can be given here, no description that demarcates the realistic stance in an ethically neutral way. However, a description can be given in rich ethical terms. To be realistic is best understood as doing justice to the thing one is confronted with, being faithful to the reality of it, being truthful about it, and so on. All of these terms capture the idea that perception can be genuinely cognitive, whilst at the same time being a fundamentally ethical task.

Want more?

Read the full article at


  • Murdoch, Iris (1999). “The Idea of Perfection”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (299–337). Penguin. [IP]
  • Murdoch, Iris (1999). “On God and Good”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (337–63). Penguin. [OGG]
  • Murdoch, Iris (1999). “The Sovereignty of Good Over Other Concepts”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (363–86). Penguin. [TSG]
  • Murdoch, Iris (2012). “Metaphysics as a Guide to Morals”. Vintage Digital. [MGM]
  • Sayre-McCord, Geoffrey (2005). “Moral Realism”. In David Copp (Ed.), The Oxford Handbook of Ethical Theory (39–62). Oxford University Press.

About the author

Cathy Mason is an Assistant Professor in Philosophy at the Central European University (Vienna). She is currently working on a book on Iris Murdoch’s ‘metaethics’, as well as some ideas concerning the ethics of friendship.


Victor Lange and Thor Grünbaum – “Measurement Scepticism, Construct Validation, and Methodology of Well-Being Theorising”

A young pregnant woman is holding a small balance for weighing gold. In front of her is a jewelry box and a mirror; on her right, a painting of the last judgment.
“Woman Holding a Balance” (c. 1664) Johannes Vermeer

In this post, Victor Lange and Thor Grünbaum discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Many of us think that decisions and actions are justified, at least partially, in relation to how they affect the well-being of the involved individuals. Consider how politicians and lawmakers often justify, implicitly or explicitly, their policy decisions and acts by reference to the well-being of citizens. In more radical terms, one might be an ethical consequentialist and claim that well-being is the ultimate justification of any decision or action.

It would therefore be wonderful if we could precisely measure the well-being of individuals. Contemporary psychology and social science contain a wide variety of scales for this purpose. Most often, these scales measure well-being by self-report. For example, subjects rate the degree to which they judge or feel satisfied with their own lives, or they report the ratio of positive to negative emotions. Yet, even though such scales have been widely adopted, many researchers express scepticism about whether they actually measure well-being at all. In our paper, we label this view measurement scepticism about well-being.

Our aim is not to develop or motivate measurement scepticism. Instead, we consider a recent and interesting reply to such scepticism, put forward by Anna Alexandrova (2017; see also Alexandrova and Haybron, 2016). According to Alexandrova, we can build an argument against measurement scepticism by employing a standard procedure of scientific psychology called construct validation. 

Construct validation is a psychometric procedure. Researchers use the procedure to assess the degree to which a scale actually measures its intended target phenomenon. If psychologists and social scientists have a reliable procedure to assess the degree to which a scale really measures what it is intended to measure, it seems obvious that we should use it to test well-being measurements. For the present purpose, let us highlight two key aspects of the procedure. 

First, construct validation utilises convergent and discriminant correlational patterns between the scores of various scales as a source of evidence. Convergent correlations concern the relation between scores on the target scale (intended to measure well-being) and scores on other scales (assumed to measure either well-being or some closely related phenomenon, such as wealth or physical health). Discriminant correlations concern non-significant relations between scores on the target scale and scores on scales that we expect to measure phenomena unrelated to well-being (e.g., scales measuring perceptual acuity). When assessing construct validity, researchers consider whether a scale exhibits attractive convergent correlations (whether subjects with high scores on the target well-being scale also score high on physical health, for example) and attractive discriminant correlations (e.g., whether subjects’ scores on the target well-being scale show no significant correlation with perceptual acuity).
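The logic of convergent and discriminant evidence can be sketched with made-up numbers. Everything below is hypothetical: the scores and scale pairings are invented for illustration and come from no real study.

```python
# Toy illustration of convergent vs. discriminant correlational evidence.
# All data below are invented; they are not from any real validation study.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

target_wellbeing  = [3, 5, 7, 4, 8, 6, 2, 9]  # scale under validation
physical_health   = [4, 5, 8, 4, 9, 5, 3, 8]  # closely related phenomenon
perceptual_acuity = [7, 2, 5, 9, 4, 6, 3, 5]  # assumed unrelated to well-being

# Convergent evidence: the target scale should correlate strongly with
# measures of related phenomena...
r_convergent = pearson(target_wellbeing, physical_health)

# ...while discriminant evidence requires a near-zero correlation with
# measures of unrelated phenomena.
r_discriminant = pearson(target_wellbeing, perceptual_acuity)

print("convergent:", round(r_convergent, 2))
print("discriminant:", round(r_discriminant, 2))
```

With these invented scores, the convergent correlation comes out high and the discriminant correlation near zero, which is the pattern a validator hopes to see; persistent failures of either pattern are what put pressure on the scale or its underlying theory.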

Second, the examination of correlational patterns depends on theory. Initially, we need a theory to build our scale (for instance, a theory of how well-being is expressed in the target population). Moreover, we need a theory to tell us what correlations we should expect (i.e., how answers on our scale should correlate with answers on other scales). This means that, when engaging in construct validation, researchers test a scale and its underlying theory holistically: validating the target scale involves testing both the scale and the theory of well-being that underlies it. Consequently, the procedure requires that researchers remain open to revising their underlying theory if they persistently observe the wrong correlational patterns. Given this holistic nature of the procedure, correlational patterns might lead to revisions of one’s theory of well-being, perhaps even to abandoning it.

The question now is this: Does the procedure of construct validation provide a good answer to measurement scepticism about well-being? While we acknowledge that for many psychological phenomena (e.g., intelligence) the procedures of construct validation might provide a satisfying reply to various forms of measurement scepticism, things are complicated with well-being. Here the normative nature of well-being rears its philosophical head. We argue that an acceptable answer to the question depends on the basic assumptions about the methodology of well-being theorising. Let us clarify by distinguishing between two methodological approaches.

First, methodological naturalism about well-being theorising claims that we should theorise about well-being in the same way we investigate any other natural phenomenon, namely, by ordinary inductive procedures of scientific investigation. Consequently, our theory of well-being should be open to revision on empirical grounds. Second, methodological non-naturalism claims that theorising about well-being should be limited to the methods known from traditional (moral) philosophy. The question of well-being is a question about what essentially and non-derivatively makes a person’s life go best. Well-being has an ineliminable normative or moral nature. Hence, the question of what well-being is, is a question only for philosophical analysis.

The reader might see the problem now. Since construct validation requires openness to theory revision by correlational considerations, it is a procedure that only a methodological naturalist can accept. Consequently, if measurement scepticism is motivated by a form of non-naturalism, we cannot reject it by using construct validation. Non-naturalists will not accept that theorising about well-being can be a scientific and empirical project. This result is all the more important because many proponents of measurement scepticism seem to be methodological non-naturalists.  

In conclusion, if justifying an action or a social policy over another often requires assessing consequences for well-being, then scepticism about measurement of well-being becomes an important obstacle. We cannot address this scepticism head-on with the procedures of construct validation. Such procedures assume something the sceptic might not accept, namely, that our theory of well-being should be open to empirical revisions. Instead, we need to start by making our methodological commitments explicit. 

Want more?

Read the full article at


  • Alexandrova, Anna (2017). A Philosophy for the Science of Well-Being. Oxford University Press. 
  • Alexandrova, Anna and Daniel M. Haybron (2016). “Is Construct Validation Valid?” Philosophy of Science, 83(5), 1098–109. 

About the authors

Victor Lange is a PhD fellow at the Section for Philosophy and a member of the CoInAct group at the Department of Psychology, University of Copenhagen. His research focuses on attention, meditation, psychotherapy, action control, mental action, and psychedelic-assisted therapy. He is part of the platform Regnfang, which publishes podcasts about the sciences of the mind.

Thor Grünbaum is an associate professor at the Section for Philosophy and the Department of Psychology, University of Copenhagen. He is head of the CoInAct research group. His research interests are in philosophy of action (planning, control, and knowledge), philosophy of psychology (explanation, underdetermination, methodology), and cognitive science (sense of agency, prospective memory, action control).


Laura Schroeter and François Schroeter – “Bad News for Ardent Normative Realists?”

Portrait of a man composed by painting on the canvas various objects traditionally associated with fire – such as sticks, wood, guns and other tools – in such a way that they compose a human head.
“Fire” (1566) Giuseppe Arcimboldo

In this post, Laura and François Schroeter discuss their article recently published in Ergo. The full-length version of the article can be found here.

Many metaethicists are attracted to a position Matti Eklund (2017) calls ‘Ardent Normative Realism’. The main motivation behind this position can be illustrated with the help of a couple of examples.

Imagine you are disagreeing with a friend about whether abortion at 20 weeks is morally wrong. Imagine further that the two of you have a very different understanding of what it takes for an action to be morally wrong: you think that morality is determined by God’s law, while your friend does not. Despite this divergence, you two seem to be genuinely disagreeing about the same topic. If we interpreted you as talking past each other, we would be failing to take the normative authority of morality seriously (Enoch 2011). Both of you are interested in what is morally wrong tout court, not what is morally wrong according to the idiosyncratic standards of some individual.

Similarly, if we imagine two separate communities debating the same issue, we would have to say that they are interested in finding out whether abortion at 20 weeks is morally wrong tout court, rather than whether it is wrong according to the normative standards specific to the community. Settling for less would deflate the normative authority of morality.

In order to vindicate these intuitions, proponents of Ardent Normative Realism endorse a strong form of metaphysical realism in the moral domain. According to the Ardent Normative Realist, “reality itself favors certain ways of valuing and acting” (Eklund 2017: 1). If two communities disagree on moral questions, they cannot both be getting it right. At most, one of them is “limning” the normative structure of reality (22).

Now, suppose we grant that reality does indeed favor certain ways of valuing and acting. The Ardent Realist still faces an important problem. Given their radically different understandings of what makes an action morally wrong, how is it possible for individuals and communities to pick out the same reference with their moral terms? Contrast moral terms with the term ‘bachelor’, for example. The term ‘bachelor’ has the same reference, even when used by different individuals, because we all have very similar empirical criteria for who counts as a bachelor: unmarried eligible males. But imagine we introduce a new term, ‘nuba’, and different individuals have radically divergent views about what it takes for something to be a nuba. How can the term ‘nuba’ pick out the same property when it is used by individuals who rely on different application criteria?

To address this problem, many Ardent Realists have been tempted by a thesis Eklund calls ‘Referential Normativity’:

Two predicates or concepts conventionally associated with the same normative role are thereby determined to have the same reference. (Eklund 2017: 10)

Imagine that all it takes to count as competent with our new term, ‘nuba’, is that it plays the same normative role in one’s psychology that English speakers associate with ‘morally wrong’. For instance, if a speaker judges that an action is nuba, they will be disposed to avoid performing that action or to feel guilt if they do perform it. According to Referential Normativity, all it takes for speakers to pick out the same reference is that they take ‘nuba’ to play this normative role; their divergent empirical criteria for classifying actions as ‘nuba’ are strictly irrelevant to fixing its reference.

Obviously, it would be great news for Ardent Realists if Referential Normativity were true. However, we argue that Referential Normativity is just too good to be true. To show what’s problematic about it, we need to step back and ask foundational questions about how reference is determined. There is much controversy in the philosophical literature concerning this topic, but we seek to sidestep those divergences by focusing on points of agreement among theorists of reference determination.

What is the point of referential ascriptions? We suggest that ascribing a specific reference to an adjective like ‘nuba’ must:

(i) help explain the reasoning and actions of subjects using the term ‘nuba’, and

(ii) set truth-conditions for assessing whether assertions and beliefs involving ‘nuba’ are correct. 

Suppose, for instance, that we interpret competent speakers’ use of ‘nuba’ as attributing the property of being loud. This interpretation flouts both (i) and (ii). The interpretation is not explanatory because most users of ‘nuba’ will not associate its defining normative role with all and only loud actions, and so attributing this reference will not help to explain their reasoning and actions. And the interpretation does not set a plausible standard of correctness because there is no plausible story why all competent users are failing to live up to their semantic commitments if they fail to apply ‘nuba’ to loud actions. We must conclude that the interpretation of ‘nuba’ as referring to being loud is mistaken. 

In the full-length version of our paper, we examine different attempts to reconcile Referential Normativity with constraints (i) and (ii). We argue that these attempts all fail. In a nutshell, Referential Normativity tries to pull a rabbit out of a hat. The mere normative role associated with a term like ‘morally wrong’ is insufficient to ground the ascription of any empirically instantiated property as its reference. 

Want more?

Read the full article at


  • Eklund, Matti (2017). Choosing Normative Concepts. Oxford University Press.
  • Enoch, David (2011). Taking Morality Seriously: A Defense of Robust Realism. Oxford University Press.

About the authors


Laura and François Schroeter are Associate Professors of Historical and Philosophical Studies at the University of Melbourne. 

Laura received her PhD from the University of Michigan. After that, she took up a postdoctoral fellowship at the Research School of the Social Sciences at the Australian National University. She joined the University of Melbourne in 2008. Her research focuses on the philosophy of language, the philosophy of mind, and metaethics. She has written extensively about two-dimensional semantics, concept individuation, and normative concepts.

François received his PhD from the University of Fribourg. He joined the Philosophy Department at Melbourne in 2003, after spending time at the University of Michigan and at the Research School of Social Sciences at the Australian National University. He is interested in normative concepts, metaethics, and moral psychology.