Cathy Mason – “Reconceiving Murdochian Realism”

In this post, Cathy Mason discusses the article she recently published in Ergo. The full-length version of Cathy’s article can be found here.

A picture of a vase with irises.
“Irises” (1890) Vincent van Gogh

Iris Murdoch’s ethics is filled with discussions of moral reality, moral truth, and how things really stand morally. What exactly does she mean by these? Her style is certainly non-standard for philosophy, and her ideas are remarkably wide-ranging, but it can seem appealing to think that, at heart, her metaethical commitments largely align with standard realists’. I suggest, however, that this reading of Murdoch is mistaken: her realism amounts to something else altogether.

I take standard realism to be roughly captured by the following definition from Sayre-McCord:

Moral realists hold that there are moral facts, that it is in light of these facts that people’s moral judgments are true or false, and that the facts being what they are (and so the judgments being true, when they are) is not merely a reflection of our thinking the facts are one way or another. That is, moral facts are what they are even when we see them incorrectly or not at all. (Sayre-McCord 2005: 40)

Does Murdoch subscribe to this view? It can certainly be tempting to think so. She repeatedly talks about ‘realism’ and ‘objectivity’, and remarks like the following seem well-understood in standard realist terms:

The authority of morals is the authority of truth, that is of reality. (TSG 374)

The ordinary person does not, unless corrupted by philosophy, believe that he creates values by his choices. He thinks that some things really are better than others and that he is capable of getting it wrong. (TSG 380)

Here, Murdoch clearly commits to the idea that some moral claims are true, and that what makes them true is not something to do with the valuer, but something about the world. All this sounds very much like standard realism.

However, it would be a mistake to think that these surface similarities point towards a deeper congruence between Murdoch and standard realists. For a start, realists typically take moral facts to be one kind among many. Just as there are mathematical facts and psychological facts, so too there are moral facts. Yet Murdoch repeatedly insists that all reality is moral—and thus that all facts are in some sense moral facts (e.g. IP 329, OGG 357, MGM 35). Moreover, though Murdoch insists on the truth of some moral claims, she understands the notion of truth very differently from standard realists.  Whereas realists typically regard truth as something abstract, Murdoch suggests that it can only be understood in relation to truthfulness and the search for truth. The seeming agreement between Murdoch and standard realists on the truth of some ethical claims thus belies deeper disagreements between them.

What’s more, standard realism is hard to square with some wider views Murdoch holds. First, she suggests that some moral concepts can be genuinely private: fully virtuous agents may have different moral concepts without either of their conceptual schemas being inaccurate or incomplete. Second, she suggests that there can be private moral reasons: moral reasons need not be universal. It is hard to see how there could be room for private moral concepts and reasons within standard realism: either there are facts corresponding to a moral belief, or there are not. If there are, then it is a kind of moral ignorance to ignore such facts. If not, then the belief is simply false. Finally, Murdoch rejects the idea common in standard realism that the moral supervenes on the non-moral, since she suggests that there simply is no non-moral reality.

What, then, does Murdoch have in mind when she discusses realism? In most cases where Murdoch introduces ideas such as realism or objectivity, she is discussing the moral perceiver’s relation to the thing perceived, rather than only talking about the thing perceived. Her realism is a claim about the reality of the moral where reality is understood as that which is discerned by the virtuous perceiver.

Take, for example, the following passages:

[T]he realism (ability to perceive reality) required for goodness is a kind of intellectual ability to perceive what is true, which is automatically at the same time a suppression of self. (OGG 353)

[A]nything which alters consciousness in the direction of unselfishness, objectivity and realism is to be connected with virtue. (TSG 369)

In both of these quotes, Murdoch discusses the relation between a moral perceiver and the thing perceived. Realism or objectivity is talked of not as a metaphysical feature of objects, properties or facts, but as a feature of moral agents who are epistemically engaged with the world.

Of course, the standard realist might allow that there is such a thing as realism as a feature of a moral perceiver, and understand this in terms of accessing facts or properties which independently exist. Yet this ordering of explanations is ruled out by Murdoch’s insistence that reality itself is a normative (moral) concept. What is objectively real, for Murdoch, cannot be understood apart from ethics, apart from the essentially human activity of seeking to understand the world which is subject to moral evaluation. This is not to suggest that reality is a solely moral concept: it is also linked to truth, to how the world is. But it is to suggest that a conception of how the world is, of reality, must be essentially ethical.

What kind of relation, then, must the realistic observer stand in to the thing observed? Murdoch suggests that no non-moral answer can be given here, no description that demarcates the realistic stance in an ethically neutral way. However, a description can be given in rich ethical terms. To be realistic is best understood as doing justice to the thing one is confronted with, being faithful to the reality of it, being truthful about it, and so on. All of these terms capture the idea that perception can be genuinely cognitive, whilst at the same time being a fundamentally ethical task.

Want more?

Read the full article at


  • Murdoch, Iris (1999). “The Idea of Perfection”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (299–337). Penguin. [IP]
  • Murdoch, Iris (1999). “On God and Good”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (337–63). Penguin. [OGG]
  • Murdoch, Iris (1999). “The Sovereignty of Good Over Other Concepts”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (363–86). Penguin. [TSG]
  • Murdoch, Iris (2012). Metaphysics as a Guide to Morals. Vintage Digital. [MGM]
  • Sayre-McCord, Geoffrey (2005). “Moral Realism”. In David Copp (Ed.), The Oxford Handbook of Ethical Theory (39–62). Oxford University Press.

About the author

Cathy Mason is an Assistant Professor in Philosophy at the Central European University (Vienna). She is currently working on a book on Iris Murdoch’s ‘metaethics’, as well as some ideas concerning the ethics of friendship.

Victor Lange and Thor Grünbaum – “Measurement Scepticism, Construct Validation, and Methodology of Well-Being Theorising”

A young pregnant woman is holding a small balance for weighing gold. In front of her is a jewelry box and a mirror; on her right, a painting of the last judgment.
“Woman Holding a Balance” (c. 1664) Johannes Vermeer

In this post, Victor Lange and Thor Grünbaum discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Many of us think that decisions and actions are justified, at least partially, in relation to how they affect the well-being of the involved individuals. Consider how politicians and lawmakers often justify, implicitly or explicitly, their policy decisions and acts by reference to the well-being of citizens. In more radical terms, one might be an ethical consequentialist and claim that well-being is the ultimate justification of any decision or action.

It would therefore be wonderful if we could precisely measure the well-being of individuals. Contemporary psychology and social science contain a wide variety of scales for this purpose. Most often, these scales measure well-being by self-report. For example, subjects rate the degree to which they judge or feel satisfied with their own lives, or they report the ratio of positive to negative emotions. Yet, even though such scales have been widely adopted, many researchers express scepticism about whether they actually measure well-being at all. In our paper, we label this view measurement scepticism about well-being.

Our aim is not to develop or motivate measurement scepticism. Instead, we consider a recent and interesting reply to such scepticism, put forward by Anna Alexandrova (2017; see also Alexandrova and Haybron, 2016). According to Alexandrova, we can build an argument against measurement scepticism by employing a standard procedure of scientific psychology called construct validation. 

Construct validation is a psychometric procedure. Researchers use the procedure to assess the degree to which a scale actually measures its intended target phenomenon. If psychologists and social scientists have a reliable procedure to assess the degree to which a scale really measures what it is intended to measure, it seems obvious that we should use it to test well-being measurements. For the present purpose, let us highlight two key aspects of the procedure. 

First, construct validation utilises convergent and discriminant correlational patterns between the scores of various scales as a source of evidence. Convergent correlations concern the relation between scores on the target scale (intended to measure well-being) and scores on other scales (assumed to measure either well-being or some closely related phenomenon, such as wealth or physical health). Discriminant correlations concern non-significant relations between scores on the target scale and scores on scales that we expect to measure phenomena unrelated to well-being (e.g., scales measuring perceptual acuity). When assessing the construct validity of a scale, researchers evaluate whether it exhibits the expected convergent correlations (whether subjects with high scores on the target well-being scale also score high on physical health, for example) and the expected discriminant correlations (e.g., whether subjects’ scores on the target well-being scale are, as expected, uncorrelated with perceptual acuity).
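As a concrete illustration of these two sources of evidence, here is a small Python sketch that simulates scores on a hypothetical target well-being scale, a related health scale, and an unrelated perceptual-acuity scale, then checks the convergent and discriminant correlations. The scales, data, and thresholds are entirely invented for illustration.

```python
# Toy sketch of convergent/discriminant correlation checks in construct
# validation. All scores are simulated; scale names and thresholds are
# hypothetical, not from any real validation study.
import numpy as np

rng = np.random.default_rng(0)
n = 200

health = rng.normal(size=n)                               # related scale
wellbeing = 0.6 * health + rng.normal(scale=0.8, size=n)  # target scale, built to track health
acuity = rng.normal(size=n)                               # unrelated scale (perceptual acuity)

def pearson_r(x, y):
    """Pearson correlation between two vectors of scale scores."""
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

r_convergent = pearson_r(wellbeing, health)    # should be sizeable and positive
r_discriminant = pearson_r(wellbeing, acuity)  # should be near zero
print(f"convergent r = {r_convergent:.2f}, discriminant r = {r_discriminant:.2f}")
```

In a real validation study, which correlations to expect would itself be dictated by the underlying theory of well-being, so a persistently wrong pattern puts pressure on the theory as much as on the scale.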

Second, the examination of correlational patterns depends on theory. Initially, we need a theory to build our scale (for instance, a theory of how well-being is expressed in the target population). Moreover, we need a theory to tell us what correlations we should expect (i.e. how answers on our scale should correlate with other scales). This means that, when engaging in construct validation, researchers test a scale and its underlying theory holistically. That is, the construct validation of the target scale involves testing both the scale and the theory of well-being that underlies it. Consequently, the procedure of construct validation requires that researchers remain open to revising their underlying theory if they persistently observe the wrong correlational patterns. Given this holistic nature of the procedure, correlational patterns might lead to revisions of one’s theory of well-being, perhaps even to abandoning it. 

The question now is this: Does the procedure of construct validation provide a good answer to measurement scepticism about well-being? While we acknowledge that for many psychological phenomena (e.g., intelligence) the procedures of construct validation might provide a satisfying reply to various forms of measurement scepticism, things are more complicated with well-being. Here the normative nature of well-being rears its philosophical head. We argue that an acceptable answer depends on one’s basic assumptions about the methodology of well-being theorising. Let us clarify by distinguishing between two methodological approaches.

First, methodological naturalism about well-being theorising claims that we should theorise about well-being in the same way we investigate any other natural phenomenon, namely, by the ordinary inductive procedures of scientific investigation. Consequently, our theory of well-being should be open to revision on empirical grounds. Second, methodological non-naturalism claims that theorising about well-being should be limited to the methods known from traditional (moral) philosophy. The question of well-being is a question about what essentially and non-derivatively makes a person’s life go best. Well-being has an ineliminably normative or moral nature. Hence, the question of what well-being is, is a question only for philosophical analysis.

The reader might see the problem now. Since construct validation requires openness to theory revision by correlational considerations, it is a procedure that only a methodological naturalist can accept. Consequently, if measurement scepticism is motivated by a form of non-naturalism, we cannot reject it by using construct validation. Non-naturalists will not accept that theorising about well-being can be a scientific and empirical project. This result is all the more important because many proponents of measurement scepticism seem to be methodological non-naturalists.  

In conclusion, if justifying an action or a social policy over another often requires assessing consequences for well-being, then scepticism about measurement of well-being becomes an important obstacle. We cannot address this scepticism head-on with the procedures of construct validation. Such procedures assume something the sceptic might not accept, namely, that our theory of well-being should be open to empirical revisions. Instead, we need to start by making our methodological commitments explicit. 

Want more?

Read the full article at


  • Alexandrova, Anna (2017). A Philosophy for the Science of Well-Being. Oxford University Press. 
  • Alexandrova, Anna and Daniel M. Haybron (2016). “Is Construct Validation Valid?” Philosophy of Science, 83(5), 1098–109. 

About the authors

Victor Lange is a PhD fellow at the Section for Philosophy and a member of the CoInAct group at the Department of Psychology, University of Copenhagen. His research focuses on attention, meditation, psychotherapy, action control, mental action, and psychedelic-assisted therapy. He is part of the platform Regnfang, which publishes podcasts about the sciences of the mind.

Thor Grünbaum is an associate professor at the Section for Philosophy and the Department of Psychology, University of Copenhagen. He is head of the CoInAct research group. His research interests are in philosophy of action (planning, control, and knowledge), philosophy of psychology (explanation, underdetermination, methodology), and cognitive science (sense of agency, prospective memory, action control).

Kristie Miller – “Against Passage Illusionism”

Detail of Salvador Dalí’s tarot card “The Magician” (1983)

In this post, Kristie Miller discusses her article recently published in Ergo. The full-length version of Kristie’s article can be found here.

It might seem obvious that we experience the passing of time. Certainly, in some trivial sense we do. It is now late morning. Earlier, it was early morning. It seems to me as though some period of time has elapsed since it was early morning. Indeed, during that period it seemed to me as though time was elapsing, in that I seemed to be located at progressively later times.

One question that arises is this: in what do these seemings consist? One way to put the question is to ask what content our experience has. What state of the world does the experience represent as being the case?

Philosophers disagree about which answer is correct. Some think that time itself passes. In other words, they think that there is a unique set of events that are objectively, metaphysically, and non-perspectivally present, and that which events those are, changes. Other philosophers disagree. They hold that time itself is static; it does not pass, because no events are objectively, metaphysically, and non-perspectivally present, such that which events those are, changes. Rather, whether an event is present is a merely subjective or perspectival matter, to be understood in terms of where the event is located relative to some agent.

Those who claim that time itself passes typically use this claim to explain why we experience it as passing: we experience time as passing because it does. What, though, should we say if we think that time does not pass, but is rather static? You might think that the most natural thing to say would be that we don’t experience time as passing. We don’t represent there being a set of events that are non-perspectivally present, such that which events those are, changes. Of course, we represent various events as occurring in a certain temporal order, and as being separated by a certain temporal duration, and we experience ourselves as being located at some times (rather than others) – but none of that involves us representing that some events have a special metaphysical status, and that which events have that status, changes. So, on this view, we have veridical experiences of static time.

Interestingly, however, until quite recently this was not the orthodox view. Instead, the orthodoxy was a view known as passage illusionism. This is the view that although time does not pass, it nevertheless seems to us as though it does. So, we are subject to an illusion in which things seem to us some way that they are not. In my paper I argue against passage illusionism. I consider various ways that the illusionist might try to explain the illusion of time passing, and I argue that none of them is plausible.

The illusionist’s job is quite difficult. First, the illusion in question is pervasive. At all times that we are conscious, it seems to us as though time passes. Second, the illusion is of something that does not exist – it is not an experience which could, in other circumstances, be veridical.

In the psychological sciences, illusions are explained by appealing to cognitive mechanisms that typically function well in representing some feature(s) of our environment. In most conditions, these mechanisms deliver us veridical experiences. In some local environments, however, certain features mislead the mechanism into misrepresenting the world, generating an illusion. These kinds of explanation, however, involve illusions that are not pervasive (they occur only in some local environments) and are not of something that does not exist (they are the product of mechanisms that normally deliver veridical experiences). This gives us reason to doubt that any explanation of this kind will work for the passage illusionist.

I consider a number of mechanisms that represent aspects of time, including those that represent temporal order, duration, simultaneity, motion and change. I argue that, regardless of how we think about the content of mental states, we should conclude that none of the representational states generated by these mechanisms individually, or jointly, represent time as passing.

First, suppose we think that the content of our experiences is exhausted by the things in the world that those experiences typically co-vary with. For instance, suppose you have a kind of mental state which typically co-varies with the presence of cows. On this view, that mental state represents cows, and nothing more. I argue that if we take this view of representational content, then none of the contents generated by the functioning of the various mechanisms that represent aspects of time could, either severally or (importantly) jointly, represent time as passing. For even if our brains could in some way ‘knit together’ some of these contents into a new percept, such contents don’t have the right features to generate a representation of time passing. For instance, they don’t include a representation of objective, non-perspectival presence. So, if we hold this view of mental content, we should think that passage illusionism is false.

Alternatively, we might think that our mental states do represent the things in the world with which they typically co-vary, but that their content is not exhausted by representing those things. So, the illusionist could argue that we experience passage by representing various temporal features, such that our experiences have not only that content, but also some extra content, and that jointly this generates a representation of temporal passage.

I argue that it is very hard to see why we would come to have experiences with this particular extra content. Representing that certain events are objectively, metaphysically, and non-perspectivally present, and that which events these are, changes, is a very sophisticated representation. If it is not an accurate representation, it’s hard to see why we would come to have it. Further, it seems plausible that the human experience of time is, in this regard, similar to the experience of some non-human animals. Yet it seems unlikely that non-human animals would come to have such sophisticated representations, if the world does not in fact contain passage.

So, I conclude, it is much more likely, if time does not pass, that we have veridical experiences of a static world rather than illusory experiences of a dynamical world.

Want more?

Read the full article at

About the author

Kristie Miller is Professor of Philosophy and Director of the Centre for Time at the University of Sydney. She writes on the nature of time, temporal experience, and persistence, and she also undertakes empirical work in these areas. At the moment, she is mostly focused on the question of whether, assuming we live in a four-dimensional block world, things seem to us just as they are. She has published widely in these areas, including three recent books: “Out of Time” (OUP 2022), “Persistence” (CUP 2022), and “Does Tomorrow Exist?” (Routledge 2023). She has a new book underway on the nature of experience in a block world, which hopefully will be completed by the end of 2024. 

Russ Colton – “To Have A Need”

A man is hanging from the hand of a clock fixed on the exterior wall of a six-story building, risking his life.
Harold Lloyd in the 1923 movie “Safety Last!”

In this post, Russ Colton discusses the article he recently published in Ergo. The full-length version of his article can be found here.

Every day we notice our own needs or those of others and are moved to address them. We often feel obliged to do so, even for strangers. Whatever is needed seems important in a way that perks and luxuries do not. Some philosophers take such observations quite seriously and give need a key role in their moral and political theories. Yet they characterize the concept of need differently, and sometimes not very fully. To help us understand and assess their ideas—but also just for the sake of improving our understanding of a common concept that enters into everyone’s practical and moral thinking—I want to try to say as clearly as possible what it means to have a need. My paper is focused on this task of conceptual clarification.

Throughout, I am concerned with a certain kind of need—welfare need, as I call it, which is the need for whatever promotes a certain minimum level of life quality, like the need for air, education, or self-confidence. This differs from a goal need (aka “instrumental need”), which is a need for whatever is required to achieve some goal—whether that goal is good for you or not—like the need for a bottle opener or a bank loan.

It is well-known that sometimes we say a person needs something when they have neither a welfare need nor a goal need for it: for example, “The employee needs to be fired.” In such cases, however, we can readily deny that the person has the need—the employee does not have a need to be fired. By contrast, in cases of welfare or goal need, the person has the need. Thus, insofar as we are interested in need because of its connection to human welfare, the specific concept of having a need may be more important than needing, which is why I focus on the former. In considering examples that test my analysis, it is best to think in terms of having a need.

Among philosophers, perhaps the most popular gloss on welfare need is this: to need something is to require it in order to avoid harm. This idea is approximately correct, but it needs improvement. The relevant notions of requirement and harm must be pinned down, and the idea must be broadened, since people also need what is required to reduce danger, like vaccines and seatbelts. To make these improvements, I offer two analyses of having a need—one that captures the original intuition about harm avoidance, and a broader one that captures the concept in full by covering both harms and dangers.

David Wiggins (Needs, Values, Truth) is the only theorist who has tried to make the requirement aspect of the harm-avoidance idea precise. In broad strokes, his view is this: I need to have X if and only if, under the present circumstances, necessarily, if I avoid harm, then I have X, where necessity here is constrained by what is “realistically conceivable.”

This idea has a number of problems. One of the most serious arises when we have a need for some future X that will be unmet. If a non-actual X can count as realistically conceivable, there seems to be nothing preventing the non-actual possibility that, even without it, whatever harmful process was headed toward us is eventually thwarted by other means, leaving us unharmed. But that means I can have a need for X even though I could avoid harm without it.

Another problem is that often, when we’re in a pinch in a given circumstance, we view multiple things that could save us as needed. If I were short of money to pay rent at the end of the month, then each of the following assertions would be reasonable: I need more money in my bank account; I need a friend to lend me money; I need the landlord to give me more time. But on Wiggins’s necessity approach, given the circumstances, I can need only one X, which will have to be the disjunction of all potential rent solutions.

I argue that these (and other) problems are readily avoided with a counterfactual-conditional approach, along these lines: I have a need for something when, without it, my life would be (in some sense) harmed.
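The contrast between the two approaches can be displayed schematically. The notation below is my own gloss on the prose formulations above, not notation from Wiggins or from the paper:

```latex
% Wiggins-style necessity analysis (roughly):
%   given the present circumstances C, necessarily,
%   if I avoid harm then I have X.
\mathrm{Need}(X) \;\iff\; \Box_{C}\,\bigl(\mathrm{AvoidHarm} \rightarrow \mathrm{Have}(X)\bigr)

% Counterfactual-conditional analysis (roughly):
%   were I to lack X, my life would (in some sense) be harmed.
\mathrm{Need}(X) \;\iff\; \bigl(\neg\,\mathrm{Have}(X) \mathrel{\Box\!\!\rightarrow} \mathrm{Harmed}\bigr)
```

Here $\Box_{C}$ restricts necessity to what is “realistically conceivable” in circumstances $C$, and $\mathrel{\Box\!\!\rightarrow}$ is the Lewis–Stalnaker counterfactual conditional.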

Understanding the relevant notion of harm requires attending to how we balance positive and negative effects on welfare over time. I explore this in the paper and conclude that when you have a need for something, your life from then on would be better on the whole, and less unsatisfactory for some period, with it than without it.

There will be different intuitions about what counts as unsatisfactory. My analysis is neutral on this, but I do make a case for the claim that, on our most ordinary conception of need, the relevant sense of ‘unsatisfactory’ is ‘not good’. With this idea in hand, my analysis implies a very natural idea: if you lack what you need, your life for a time will not be good and will be worse than it would otherwise be, and this loss will not be outweighed by any benefit.

Finally, I extend the analysis to our needs for what makes us safer, things without which we would be in more danger independently of whether we would be harmed. This is challenging because many present needs are for future benefits, and the risks relevant to our future welfare can change during the interval. Fortunately, there are easy ways to address the relevant issues so that the analysis remains quite simple. Roughly put: I now have a need for X if and only if it is now highly probable that, at the time of X, the expected value (quality) of my life from now on would, for some period, be less unsatisfactory with X than without, and would on the whole be higher.
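The final analysis just stated might be rendered schematically as follows; the symbols are my own shorthand for the prose formulation above, not the paper's formalism:

```latex
% t = now; t_X = the time of X.
% V^{w}_{I} = expected value (quality) of my life over interval I,
%             in scenario w \in \{\text{with } X,\ \text{without } X\}.
\mathrm{Need}_t(X) \;\iff\;
  \Pr_t\Bigl[\,
      \exists\,\text{period } p:\;
        V^{\text{with}}_{p} \text{ is less unsatisfactory than } V^{\text{without}}_{p}
      \;\wedge\;
      V^{\text{with}}_{[t,\infty)} > V^{\text{without}}_{[t,\infty)}
  \Bigr] \;\text{is high}
```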

Want more?

Read the full article at

About the author

Russ Colton received his PhD from the University of Massachusetts Amherst. His current research interests are primarily in ethics.

Ashley Atkins – “Race and the Politics of Loss: Revisiting the Legacy of Emmett Till”

Dana Schutz’s “Open Casket”, part of the 2017 Whitney Biennial © Benjamin Norman / The New York Times / Redux

In this post, Ashley Atkins discusses the article she recently published in Ergo. The full-length version of Ashley’s article can be found here.

The exhibition of Dana Schutz’s Open Casket at the 2017 Whitney Biennial sparked immediate and passionate criticism: a protest was staged in front of the painting on its opening day; a public discussion of the controversy surrounding the painting was held by the Whitney during its exhibition; and sometime in between, a public letter was penned that called for the painting’s destruction.

There are so few paintings of the dead in open casket that a painting of this kind was almost certain to capture people’s attention, even to shock. Helen Molesworth, a curator of contemporary art and former Chief Curator of the Museum of Contemporary Art in Los Angeles, articulates this shock in response to a recent exhibition of Alice Neel’s work, which included Dead Father (1946).

Image of a dead old man in an open coffin.
“Dead Father” (1946) Alice Neel © The Estate of Alice Neel

Though Molesworth “grew up in a tradition where you see dead people in coffin,” the act of making an image of this subject matter was felt to border on obscenity: “Nobody takes a picture of the dead person in a coffin, people don’t make paintings—oil paintings—of dead people in coffins. This is like […] almost taboo to me. Still, when I see that painting [Dead Father] I…I am a little shocked still.”

Obscenity was one of the charges brought against Open Casket (also an oil painting), but even so the feeling was that it bordered on something more sinister. Here we have a painting not of an intimate but of a stranger and, as Hannah Black—who led the call for its destruction—summed it up, a painting of a dead black boy by a white artist.

What propelled the controversy was the painting’s connection to its presumed—though not explicitly identified—subject, Emmett Till, a black teenager lynched in Mississippi in 1955, whose disfigured and badly decomposed body was laid in an open casket for the duration of a four-day public viewing at the insistence of his mother, Mamie Till-Mobley. What exactly, critics pointedly asked, is the artist’s relationship to this legacy? What does it mean to look at this painting?

Open Casket and the criticism surrounding it presents us with an opportunity to revisit this legacy and to examine, in particular, the significance of Mamie Till-Mobley’s public presentation of the body of her son, including her sanctioning of the publication of photographs of his body in casket, which continue to circulate. What can her actions mean to us today?

Schutz’s stated aims in painting Open Casket offer an illuminating starting point. Till-Mobley’s relationship to her son needed, in Schutz’s view, to be reflected in some way in this image; though the violence done to him was horrific and real and should be acknowledged as such—one of the reasons for its continuing political significance—the painting could not simply be grotesque (and I think we can see something in it of the tenderness that appears in Dead Father, what Fran Lebowitz unguardedly describes as beauty in her discussion of Neel’s painting). The image would also be, somehow, an American image.

We can find a basis for these ideas in Mamie Till-Mobley’s own reflections on these events, particularly in her declaration that all Americans needed to be impacted by the sight of the body as a whole so that they might together say what they had seen (Till-Mobley & Benson 2003: 140). As she reveals in her autobiography, Till-Mobley had herself studied the violence done to her son. She could describe it forensically, inch by inch, she tells us, but something other than this kind of engagement with the body of her son was intended by her invitation to Americans to look together and say what they saw. This was something she alone could not do. Americans also needed to see pictures of her son as he had been, she proposed, so that they could see what was lost to them.

It was through being initiated into rites of mourning, as we might put it, that Americans were to participate in this legacy. The most provocative aspect of Open Casket—what was experienced by critics as its intrusion into the mourning of others—is also, in my view, the central thread linking it to this legacy, namely, its engagement with Till-Mobley’s invitation to mourn her son’s legacy as an American one.

If this is right, why have critics neglected to consider that the painting might be a mournful one, or at least to judge its failure in these terms?

One important reason is that these critics do not understand this legacy in the terms that I set out; they would reject the idea that “racial losses” are to be mourned collectively. The critical reception of this legacy is a divided one. It assigns two complementary functions to the photographs of Emmett Till in casket: on the one hand, these photographs are understood to facilitate mourning (to provide shelter, warning, and inspiration) among those vulnerable to white violence and, on the other hand, these photographs circulate as evidence and are meant to expose those implicated in this violence (as Schutz was said to be through her painting). The aim of exposure can be seen in the rhetoric surrounding the violence associated with the iconography of Emmett Till’s death. The violence, critics insist, speaks for itself, bears its own witness, without any need for subjective response (of which mournfulness is a paradigm). It is on such grounds that Schutz’s painting is criticized for being too subjective.

But if this is right, why was there any need for Till-Mobley to invite all Americans to look together? What use do we have for the idea that the loss of her son might be conceived of as a common loss?

It is tempting to understand Till-Mobley’s invitation to Americans within a tradition of political thought that sees democracy as requiring continual sacrifice and, relatedly, as requiring that citizens cultivate a capacity to mourn such losses (Allen 2004; McIvor 2016). Though this tradition is acutely aware of the ways in which the burdens of loss have historically been shifted onto African Americans, among others, legacies of racial violence and loss engendered in this manner are thought to be no less collective for being borne inequitably. It is on these grounds that even these losses are to be mourned—that is, acknowledged as losses—by all citizens.

This tradition misses, however, the significance of and the challenge presented by Till-Mobley’s invitation. She did not assume that the loss of her son could already constitute a collective loss. She proposed that it be seen as such; i.e., that Americans come to think of themselves as people who had suffered this loss and needed, collectively, to put into words what it was and how it impacted them—an act of political re-envisioning so bold that its fulfillment would perhaps have been understood as a political refounding of the country. In this sense, we might see her as participating in the political lineage of Abraham Lincoln, who, it has been argued, also used a concrete occasion of mourning, at Gettysburg, to offer a re-envisioning of the country so bold that it has been conceptualized as a refounding (Wills 1992; Nussbaum 2013).

On July 25, 2023, President Joe Biden signed a proclamation establishing the Emmett Till and Mamie Till-Mobley National Monument in both Mississippi and Illinois (the family’s home state). The new national monument “will help tell the story of the events surrounding Emmett Till’s murder, their significance in the civil rights movement and American history, and the broader story of Black oppression, survival, and bravery in America.”

We can’t yet know how this story will be told. Will it present Emmett Till’s death as a loss to be mourned and, if so, by whom (what nation)? Will it present these events with tenderness, with beauty, which helps us to bear the ugliness of death and violence? Will Mamie Till-Mobley’s contribution to the civil rights movement be memorialized, as is standard, as helping “catalyze” this movement? This implies not only that her gesture was of great significance but that it was significant mainly for what followed, what would conventionally be thought of as properly political actions. Seeing Till-Mobley in this way would reflect the view of her contemporaries, among them powerful leaders in the NAACP who eventually publicly cut ties with her. In describing her grief as a catalyst and a benefit to later generations—the living rather than the dead—they were making the point that grief was not itself of political significance, but rather a dangerous distraction from these other, properly political ends.

The question of what Till-Mobley’s gesture can mean to us today will depend on many things, including our understanding of the prospects for a mournful politics.

Want more?

Read the full article at


  • Allen, Danielle (2004). Talking to Strangers: Anxieties of Citizenship Since Brown v. Board of Education. University of Chicago Press.
  • McIvor, David W. (2016). Mourning in America: Race and the Politics of Loss. Cornell University Press.
  • Till-Mobley, Mamie and Christopher Benson (2003). Death of Innocence: The Story of the Hate Crime That Changed America. Random House.
  • Nussbaum, Martha (2013). Political Emotions: Why Love Matters for Justice. Harvard University Press.
  • Wills, Garry (1992). Lincoln at Gettysburg: The Words That Remade America. Simon and Schuster.

About the author

Ashley Atkins is an Associate Professor of Philosophy at Western Michigan University. She received an NEH Fellowship this year in support of a book project that examines grief through the lens of contemporary memoir. “Race and the Politics of Loss” is part of a series of papers exploring legacies of racial violence and loss in democratic politics.

Posted on

Corey Dethier – “Interpreting the Probabilistic Language in IPCC Reports”

A young sibyl (sacred interpreter of the word of god in pagan religions) argues with an old prophet (sacred interpreter of the word of god in monotheistic religions). It looks as if the discussion will go on for a long while.
Detail of “A sibyl and a prophet” (ca. 1495) Andrea Mantegna

In this post, Corey Dethier discusses his article recently published in Ergo. The full-length version of Corey’s article can be found here.

Every few years, the Intergovernmental Panel on Climate Change (IPCC) releases reports on the current status of climate science. These reports are massive reviews of the existing literature by the most qualified experts in the field. As such, IPCC reports are widely taken to represent our best understanding of what the science currently tells us. For this reason, the IPCC’s findings are important, as is their method of presentation.

The IPCC typically qualifies its findings using different scales. In its 2013 report, for example, the IPCC says that the sensitivity of global temperatures to increases in CO2 concentration is “likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence) and very unlikely greater than 6°C (medium confidence)” (IPCC 2013, 81).
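These likelihood terms come from the IPCC’s calibrated uncertainty language, which pairs each term with a nominal probability range. The sketch below transcribes those ranges as given in the IPCC’s guidance note for lead authors; the lookup function is purely my own illustration, not anything the IPCC itself uses:

```python
# Nominal probability ranges behind the IPCC's calibrated likelihood terms,
# as stated in the IPCC's guidance note on treating uncertainties.
# The ranges are part of the published scale; consistent_terms() is just an
# illustrative helper for exploring the scale.
LIKELIHOOD_SCALE = {
    "virtually certain":      (0.99, 1.00),
    "extremely likely":       (0.95, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely":               (0.00, 0.33),
    "very unlikely":          (0.00, 0.10),
    "extremely unlikely":     (0.00, 0.05),
    "exceptionally unlikely": (0.00, 0.01),
}

def consistent_terms(p):
    """Return every calibrated term whose nominal range contains probability p."""
    return [term for term, (lo, hi) in LIKELIHOOD_SCALE.items() if lo <= p <= hi]

print(consistent_terms(0.70))  # → ['likely']
```

Note that the ranges deliberately overlap and nest (“very likely” statements are also “likely”), so a single probability can fall under several terms; the scale is coarser than a precise probability assignment.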

You might wonder what exactly these qualifications mean. On what grounds does the IPCC say that something is “likely” as opposed to “very likely”? And why does it assign “high confidence” to some claims and “medium confidence” to others? If you do wonder about this, you are not alone. Even many of the scientists involved in writing the IPCC reports find these qualifications confusing (Janzwood 2020; Mach et al. 2017). My recent paper – “Interpreting the Probabilistic Language in IPCC Reports” – aims to clarify this issue, with particular focus on the IPCC’s appeal to the likelihood scale.

Traditionally, probabilistic language such as “likely” has been interpreted in two ways. On a frequentist interpretation, something is “likely” when it happens with relatively high frequency in similar situations, while it is “very likely” when it happens with a much greater frequency. On a personalist interpretation, something is “likely” when you are more confident that it will happen than not, while something is “very likely” when you are much more confident.
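As a toy contrast (my own illustration, not drawn from the paper), suppose an event in fact has a 0.7 chance of occurring. The frequentist reading cashes out “likely” as a high relative frequency across repeated similar situations, while the personalist reading cashes it out as a high degree of belief, for instance a posterior credence after updating on observed trials:

```python
import random

random.seed(0)

# Frequentist reading: "likely" means the event happens with high relative
# frequency across many similar situations.
trials = [random.random() < 0.7 for _ in range(10_000)]
frequency = sum(trials) / len(trials)  # close to 0.7

# Personalist (Bayesian) reading: "likely" means an agent's credence in the
# event is high, e.g. a posterior expectation after observing 14 successes
# in 20 trials, starting from a uniform prior (Laplace's rule of succession).
successes, n = 14, 20
credence = (successes + 1) / (n + 2)  # = 15/22, about 0.68

print(f"frequency: {frequency:.2f}, credence: {credence:.2f}")
```

Both numbers come out close to 0.7, but they answer different questions: one is a fact about repetitions of the situation, the other a fact about an agent’s belief state.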

Which of these interpretations better fits the IPCC’s practice? I argue that neither does. My main reason is that both interpretations are closely tied to specific methodologies in statistics. The frequentist interpretation is appropriate for “classical” statistical testing, whereas the personalist interpretation is appropriate when “Bayesian” methods are used. The details of the differences between these methods do not matter for our present purposes. My main point is that climate scientists use both kinds of statistics in their research, and since the IPCC’s reports review all of the relevant literature, the same language is used to summarize results derived from both methods.

If neither of the traditional interpretations works, what should we use instead? My suggestion is the following: we should understand the IPCC’s use of probabilistic terms more like a letter grade (an A or a B or a C, etc.) than like a strict probabilistic claim tied to a particular statistical methodology.

An A in geometry or English suggests that a student is well-versed in the subject according to the standards of the class. If the standards are sufficiently rigorous, we can conclude that the student will probably do well when faced with new problems in the same subject area. But an A in geometry does not mean that the student will correctly solve geometry problems with a given frequency, nor does it specify an appropriate amount of confidence that you should have that they’ll solve a new geometry problem. 

The IPCC’s use of terms such as “likely” is similar. When the IPCC says that a claim is likely, that’s like saying that it got a C in a very hard test. When the IPCC says that sensitivity is “extremely unlikely less than 1°C”, that’s like saying that this claim fails the test entirely. In this analogy, the IPCC’s judgments of confidence reflect the experts’ evaluation of the quality of the class or test: “high confidence” means that the experts think that the test was very good. But even when a claim passes the test with full marks, and the test is judged to be very good, this only gives us a qualitative evaluation. Just as you shouldn’t conclude that an A student will get 90% of problems right in the future, you also shouldn’t conclude that something that the IPCC categorizes as “very likely” will happen at least 90% of the time. The judgment has an important qualitative component, which a purely numerical interpretation would miss.

It would be nice – for economists, for insurance companies, and for philosophers obsessed with precision – if the IPCC could make purely quantitative probabilistic claims. At the end of my paper, I discuss whether the IPCC should strive to do so. I’m on the fence: there are both costs and benefits. Crucially, however, my analysis suggests that this would require the IPCC to go beyond its current remit: in order to present results that allow for a precise quantitative interpretation of its probability claims, the IPCC would have to do more than simply summarize the current state of the research. 

Want more?

Read the full article at


  • IPCC (2013). Climate Change 2013: The Physical Science Basis. Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Thomas F. Stocker, Dahe Qin, et al. (Eds.). Cambridge University Press.
  • Janzwood, Scott (2020). “Confident, Likely, or Both? The Implementation of the Uncertainty Language Framework in IPCC Special Reports”. Climatic Change 162, 1655–75.
  • Mach, Katharine J., Michael D. Mastrandrea, et al. (2017). “Unleashing Expert Judgment in Assessment”. Global Environmental Change 44, 1–14.

About the author

Corey Dethier is a postdoctoral fellow at the Minnesota Center for Philosophy of Science. He has published on a variety of topics relating to epistemology, rationality, and scientific method, but his main research focus is on epistemological and methodological issues in climate science, particularly those raised by the use of idealized statistical models to answer questions about climate change.

Posted on

Christine Bratu – “How (Not) To Wrong Others with Our Thoughts”

image of two cherubs thinking
Detail of the San Sisto Madonna (c. 1513-1514) Raphael

In this post, Christine Bratu discusses her article recently published in Ergo. The full-length version of Christine’s article can be found here.

Imagine Jim attends a fancy reception and, seeing a person of color standing around in a tuxedo, concludes that they are a waiter (when, in fact, they, too, are a guest). Alternatively, picture Anna who, during a prestigious conference, sees a young woman setting up a laptop at the lectern and concludes that she is part of the organizing team (when, in fact, this woman is the renowned professor who will give the keynote lecture). In many of us, cases like these elicit the fundamental intuition that there is something morally problematic going on.

Some philosophers have used this intuition to argue for the possibility of doxastic wronging (Basu 2018, 2019a, 2019b; Basu and Schroeder 2019; Keller 2018). Cases like these, they argue, show that we have the moral duty not to have bigoted beliefs about each other. On their interpretation, the situations above are morally troublesome because, by believing classic racist and sexist stereotypes, Jim and Anna violate a duty they have towards their fellow party guest and keynote speaker, respectively. According to proponents of doxastic wronging, positing this morally grounded epistemic duty is the best way to explain our intuition, since in the situations depicted neither protagonist acts in a reprehensible way (in fact, neither of them acts at all!) – it’s their racist and sexist beliefs as such that are the problem.

I think this proposal is intriguing. Group-based discrimination is a serious moral and political problem, and the moral duty not to have bigoted beliefs seems perfectly tailored to strike at its root. Nevertheless, in my article I argue that we should reject the existence of such a duty: there is no such thing as doxastic wronging. I argue for this by presenting what I call the liberal challenge.

I start from the assumption that positing any new, morally grounded epistemic duties comes at a price, because it constitutes a curtailment of our freedom of thought. We should only accept such curtailment if we can thereby gain something comparably important. I then point out three strategies that advocates of doxastic wronging could adopt to convince us that we are gaining something comparably important, and I explain why I think that all three of them fail.

First, the advocates of doxastic wronging could claim that positing a duty not to have bigoted beliefs helps us avoid bigoted actions. This strategy fails, I argue, because we are already under the moral obligation not to act in bigoted ways. If the reason for limiting our freedom of thought is merely to decrease the risk of bigoted actions, then placing us under this new obligation is superfluous.

Second, these philosophers could claim that positing a duty not to have bigoted beliefs helps us avoid the practical vices that bigoted beliefs manifest, such as arrogance. This strategy fails, I argue, because – while we might be morally better, i.e. more virtuous, if we avoided vices like arrogance – we are under no moral obligation to do so.

Third, they could claim that positing a duty not to have bigoted beliefs is necessary to avoid the intrinsic harm of being the object of bigoted beliefs. This third strategy starts off more promisingly, as it is based on a correct observation. Most of us desire not to be the objects of bigoted beliefs. People who think about us in bigoted ways frustrate this legitimate desire, and so it seems that they thereby harm us. Yet even if we grant that bigoted beliefs harm their targets, we cannot conclude that the resulting harm is important enough to justify restricting our freedom of thought. People frustrate each other’s legitimate desires all the time. We frustrate our parents’ legitimate desire to see us flourish when we let our talents go to waste, and we frustrate our partners’ legitimate desire to continue the relationship when we break up with them. Cases like these show that frustrating someone’s legitimate desire is not sufficient for our behavior to count as morally impermissible. To make this strategy work, proponents of doxastic wronging must, in addition, argue that the desire not to be the objects of bigoted beliefs is so important that its frustration is morally impermissible. However, I contend that they can only do so by appealing to the impermissibility of either bigoted actions or the vices that bigoted beliefs manifest. In other words, they can only do so by falling back on one of the first two strategies. And since I’ve already shown that those strategies fail, so does this one.

If we reject the duty not to have bigoted beliefs – as I argue we should – what about our initial intuition? What is wrong with Jim’s assumption that a person of color is most likely a waiter rather than a guest, or with Anna’s assumption that a young woman at the conference podium is most likely an organizer rather than the keynote speaker?

It seems to me that the best way to make sense of these cases is to explain them not in terms of doxastic wronging, but rather in terms of doxastic harming. People like Jim and Anna do not violate any obligations they have toward their targets when they think about them in racist or sexist ways. However, they do frustrate their targets’ desire not to be the objects of bigoted beliefs, and they thereby harm them. When we reproach people like Jim and Anna for their hurtful thoughts, we are accusing them not of having done something they were morally not allowed to do, but rather of having done something it would have been better not to do (even though they were morally allowed to do it).

The change in perspective I propose does not make light of the morally problematic nature of bigoted beliefs. On the contrary, it ensures that the criticism we level against people who entertain such beliefs hits its mark properly, by avoiding the moralistic overreach of making morally grounded demands on what other people believe.

Want more?

Read the full article at


  • Basu, Rima (2018). “Can Beliefs Wrong?” Philosophical Topics 46 (1): 1–17.
  • Basu, Rima (2019a). “The Wrongs of Racist Beliefs”. Philosophical Studies 176 (9): 2497–515.
  • Basu, Rima (2019b). “What We Epistemically Owe to Each Other”. Philosophical Studies 176 (4): 915–31.
  • Basu, Rima and Mark Schroeder (2019). “Doxastic Wronging”. In Brian Kim and Matthew McGrath (Eds.), Pragmatic Encroachment in Epistemology, 181–205. Routledge.
  • Keller, Simon (2018). “Belief for Someone Else’s Sake”. Philosophical Topics 46 (1): 19–35.

About the author

Christine Bratu is a professor of philosophy at the University of Göttingen in Germany. She received her PhD in philosophy from the Ludwig-Maximilian University of Munich. Her research interests are in feminist philosophy, moral and political philosophy (especially issues of disrespect and discrimination) and topics at the intersection between ethics and epistemology. 

Posted on

Quill Kukla and Mark Lance – “Telling Gender: The Pragmatics and Ethics of Gender Ascriptions”

picture of gendered bathrooms where the gendered icons are replaced by a shark and a T-Rex.

In this post, Quill Kukla and Mark Lance discuss their article recently published in Ergo. The full-length version of the article can be found here.

Debates over the validity or appropriateness of gender ascriptions, whether imposed on someone else (“You may pretend you’re a woman, but you’re actually a man!”) or self-proclaimed (“I am a man!”; “I don’t have a gender!”), typically turn to what gender “really is” and who “really” has which gender. We argue that such metaphysical turns are usually irrelevant distractions and redirections. We claim that gender ascriptions like “You are a man” or “I am not a woman” are not, first and foremost, functioning to make truth claims about substantive features of the world.

This may be a surprising claim. After all, a sentence like “You are a man” is grammatically a declarative. Declaratives are what we use to make claims about the world – Paris is the capital of France; metals conduct electricity; there is a deer in the meadow. The grammatical form of a sentence is generally an indicator of the pragmatic force of uttering that sentence, so sentences with declarative grammar normally function to make truth claims, which are appropriate if they match the world and not if they don’t. But this connection is not universal. If I say to my roommate “It’s really hot in here!” this can function as a request to open the window or turn down the heat. “The meeting is adjourned” might describe a social status of the meeting, but more typically, it functions to bring about or constitute the adjournment.

Imagine that one person says to another, “You and I are friends!” and the second person responds, “No, we are not.” It seems unlikely that they are disagreeing about a substantive issue of fact. They are likely not disagreeing over the empirical criteria for friendship, whatever those might be, or the evidence concerning whether they meet those criteria. Rather, the utterance “You and I are friends!” is a kind of social proposal. In calling you my friend, I am proposing that we relate to one another in specific ways and take ourselves as having various commitments to one another; I am making a claim on a certain normative relationship to you. The utterance functions more like “I bet you ten dollars” or “I take you as my spouse” than as a factual claim. To say “We are friends” to someone is to try to position us in social space with respect to one another. And to reject the friendship claim is to reject this proposed positioning.

Similarly, we want to claim that the primary function of gender ascriptions is to establish a normative positioning in social space. First-person gender ascriptions (“I am a woman!”) are attempts to claim a specific position in gendered social space, while second-person and third-person gender ascriptions (“You are no man!”; “He is a man!”) are attempts to impose a position in gendered social space. Most gender ascriptions sustain a position that someone already has rather than constituting one from scratch, but even these work to incrementally solidify that position.

Our position in gendered social space, or the gender we are taken as having (or lacking), inflects nearly every aspect of how we are expected and demanded to negotiate the social and material world. It shapes how we are supposed to hold our body and modulate our voice; what clothes we are supposed to wear; how we are supposed to manifest sexual attraction and attractiveness; where and how we pee; what hobbies and jobs we are supposed to have; who we compete against in sports events and which sports we take up in the first place; what our relationship is to our children; and so forth. Even fetuses, once recognized as ‘boys’ or ‘girls’, are expected to become babies for whom certain nursery and clothing colors and emotions and behaviors are appropriate. Such norms are modulated by race, age, ability, class, body shape, and more; there is not a single, consistent set of norms for each gender, but rather a complex and often contradictory web of norms in which we are all differently positioned. But these structures of social significance are inescapable. To occupy a position in gendered social space is to be situated with respect to this complex network of norms. One can transgress or resist any subset of these norms, of course, but they are still the norms that carve out expectations and evaluations and social uptake for almost every dimension of our social and material existence.

When people disagree over whether a gender attribution is appropriate, it is rarely primarily an empirical disagreement. There is no single, widely accepted empirical definition of gender. It is varyingly defined in terms of anatomy, gametes, genetics, psychology, social role, self-identification, and phenotype. But we argue that what is at stake in most disagreements over gender attributions is not which empirical features someone has, but rather whether it is appropriate to take someone as positioned in a specific way within gendered social space. When a trans woman claims, “I am a woman,” and someone responds, “No, you are a man,” they are not generally arguing about empirical matters, but rather, as in the friendship case, about how a claim on a social position will or won’t be ratified.

If the function of gendered language is not (primarily) to describe the world, but to establish, ratify, reinforce, or oppose the taking up of social roles, then the appropriateness of such linguistic performances is not a matter of truth and falsity. Rather, we should evaluate what we ought to say and how we ought to respond to one another’s gender ascriptions in terms of how we ought to organize social space, and how much respect individuals should be given for determining their own gender ascription.

We argue that core norms of self-determination and autonomy demand wide respect for and deference to first-person attributions (or rejections) of gender (“I am a man,” etc.). What gender is, or whether it is anything at all, is simply irrelevant to these core norms. Saying “Yes, you are a man” is endorsing a person’s right to choose the social role they wish to inhabit, similar to recognizing a person’s choice of career, spouse, or hobby. Indeed, people generally think of rules that force social positions on people, such as Jim Crow laws and caste systems, as paradigmatic antidemocratic violations of self-determination. Since gender norms govern many of the most intimate dimensions of our bodily lives, forcing gendered social positions seems especially unjustified. The only ethical reason to contravene someone’s first-person claim upon a social position is that their making it harms others. (Some claims on social positions do this. If I declare, “I am your sovereign master!”, I am claiming a social position, but obviously one you have every right to reject, because it directly harms you and your own self-determination. We find the idea that one person’s gender claim has a significant chance of harming someone else absurd, although we recognize that transphobes do try to assert this.)

Thus, we claim that first-personal gender attributions are virtually always justified, not because people are infallible about their own gender, but because of the ethical function of these attributions. Likewise, second- and third-personal gender attributions that contradict or foreclose first-personal attributions are almost always unjustified. Debates over the metaphysics of gender may be of philosophical curiosity to some, but they are distractions when it comes to everyday questions about when and how to respect people’s claimed gender, or lack thereof.

Want more?

Read the full article at

About the authors

Quill Kukla is Professor of Philosophy and Director of Disability Studies at Georgetown University. From 2021 to 2023, they also held a Humboldt Stiftung Research Award at the Institut für Philosophie at Leibniz Universität Hannover. They received a PhD in Philosophy from the University of Pittsburgh and an MA in Geography from the City University of New York, and completed a Greenwall Postdoctoral Fellowship at The Johns Hopkins School of Public Health. Their most recent book is City Living: How Urban Dwellers and Urban Spaces Make One Another (Oxford University Press 2021), and their forthcoming book is entitled Sex Beyond ‘Yes!’ (W. W. Norton & Co. 2024).

Mark Lance, PhD University of Pittsburgh, is Professor of Philosophy and Professor of Justice and Peace at Georgetown University. He has published in areas ranging from relevance logic, to philosophy of language, to metaethics and contributed to public education projects through The Institute for Social Ecology, the Institute of Anarchist Studies, the Peace and Justice Studies Association, and the US Campaign for Palestinian Rights. He is an activist who has been arrested 13 times in civil disobedience actions protesting US government crimes. His most recent book is Toward a Revolution as Nonviolent as Possible (with Matt Meyer). Outside activism and philosophy, he is a rowing coach, chess player, and former orchestral trumpet player.

Posted on

Naftali Weinberger – “Signal Manipulation and the Causal Analysis of Racial Discrimination”

picture of fragmented parts of a Caucasian woman's face rearranged and surrounded by pearls.
“Sheherazade” (1950) René Magritte

In this post, Naftali Weinberger discusses the article he recently published in Ergo. The full-length version of Naftali’s article can be found here.

After the first presidential debate between Hillary Clinton and Donald Trump, the consensus was that Clinton came out ahead, but that Trump exceeded expectations. Some sensed sexism, claiming: had Trump been a woman and Clinton a man, there’s no way observers would have thought the debate was even close, given the difference between the candidates’ policy backgrounds.

How could we test this hypothesis? Some professors at NYU staged a play with the candidates’ genders swapped. A female actor played Trump and imitated his words and gestures, and a male actor played Clinton. Afterwards, participants were given a questionnaire. Surprisingly, audience members disliked male Clinton more than observers of the initial debate disliked the original. “Why is he smiling so much?”, some asked. And: “isn’t he a bit effeminate?”

Does this show there was no sexism? Here we need to be careful. Smiling is not gender-neutral, since norms for how much people are expected to smile are themselves gendered. So perhaps we need to rerun the experiment, and change not just the actors’ genders, but also modify the gestures in gender-conforming ways such that male Clinton smiles less. The worry is that the list of required modifications might be open-ended. The public persona Clinton has developed over the last half century is not independent of her gender. If we start changing every feature that gets interpreted through a gendered lens, we may end up changing all of them. 

This example illustrates how tricky it can be to test claims about the effects of demographic variables such as gender and race. I wrote “Signal Manipulation and the Causal Analysis of Racial Discrimination” because I believe it is crucial to be able to empirically test at least some claims about discrimination, and that causal methods are necessary for doing so.

Studying racial discrimination requires one to bring together research from disparate academic areas. Whether race can be treated as a causal variable falls within causal inference. What race is, is a question for sociologists. Why we care specifically about discrimination against protected categories such as race is a matter for legal theorists and political philosophers.

Let’s start with whether race can be causal. Causal claims are typically tested by varying one factor while keeping others fixed. For instance, in a clinical trial one randomly assigns members to receive either the drug or the placebo. But does it make sense to vary just someone’s race or gender, while keeping everything else about them fixed?

This concern is often framed in terms of whether it is possible to experimentally manipulate race, and some claim that all causal variables must be potentially manipulable. I argue that manipulability is not the primary issue at stake in modeling discrimination. Rather, certain failures of manipulability point to a deeper problem in understanding race causally. Specifically, causal reasoning involves disentangling causal and merely evidential relevance: Does taking the drug promote recovery, or is it just that learning someone took the drug is evidence they were likely to recover (due to being healthier initially)? If one really could not change someone’s race without changing everything about them, the distinction between causal and evidential relevance would collapse.

We now turn to what race is. A key debate concerns whether it is biologically essential or socially constructed. Some think that race is non-manipulable only if it is understood biologically. Maya Sen and Omar Wasow argue that race is a socially constructed composite, and that even though one cannot intervene on the whole, one can manipulate components (e.g. dialect). Sen and Wasow do not theorize about the relationship between race and its components, and I believe this is by design. The underlying presupposition is that if race is constructed, it is nothing over and above the components through which it is socially mediated.

Yet race’s being socially constructed does not entail that it reduces to its social manifestations. To give Ron Mallon’s example: a dollar’s value is socially constructed, but this does not entail that there is nothing more to being a dollar than being perceived as one. Within our socially constructed value system, a molecule-for-molecule perfect counterfeit is still a counterfeit. The upshot of this is that even if race is a composite such that we can only manipulate particular components, it does not follow that race just is its components. The relationship between social construction and manipulability is more nuanced than has been presupposed.

Finally, how does the causal status of race connect to legal theories of discrimination? Discrimination law only makes sense given a distinction between discrimination on the basis of protected categories and mere arbitrary treatment. An employer who does not hire someone because the applicant simply annoys them might be irrational, but is not violating discrimination law. I argue that in order to distinguish between racial discrimination and arbitrary treatment, we need to be able to talk about whether race itself made a difference. This involves varying it independently of other factors and thus modeling it causally.

Where does this leave us with Clinton and Trump? I’d suggest that if we really can’t change Clinton’s perceived gender without changing everything about her, we cannot disentangle causal from evidential relevance, and causal reasoning does not apply. Fortunately, not all cases are like this. In audit studies, one can change a racially relevant cue (such as the name on a resume) and thereby plausibly change only the racial information the employer receives. And this does not entail that race is only the name. Instead of asking whether race is a cause, we should ask when it is fruitful to model race causally, with a spectrum from cases like audit studies (in which it is) to cases like Clinton’s (in which it isn’t). And even in audit studies, treating race as separable is an idealization, since one does not model it in all of its sociological complexity. If what I argue in the article is correct, however, this modeling exercise is indispensable for legally analyzing discrimination and designing interventions to mitigate it.
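
The audit-study logic can be sketched in a few lines (hypothetical numbers; the `callback` employer model is invented for illustration): matched resumes are sent out in pairs, differing only in the name-based racial cue, so any gap in callback rates is attributable to the cue itself.

```python
import random

random.seed(1)

def callback(quality, black_signal):
    """Hypothetical employer: responds to resume quality, minus a penalty
    triggered purely by the racial cue (the name)."""
    p = 0.2 + 0.5 * quality - (0.1 if black_signal else 0.0)
    return random.random() < p

def audit_study(n=50_000):
    """Average callback gap across matched resume pairs."""
    gap = 0
    for _ in range(n):
        quality = random.random()
        # Same resume sent twice; ONLY the name-based racial cue differs
        gap += callback(quality, black_signal=False) - callback(quality, black_signal=True)
    return gap / n

print(audit_study())  # recovers the cue's effect of roughly 0.1
```

Note that the design varies the cue, not race in its full sociological complexity; that is precisely the idealization described above.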

Want more?

Read the full article at

About the author

Naftali Weinberger is a scientific researcher at the Munich Center for Mathematical Philosophy. His work concerns the use of causal methodology to address foundational questions arising in the philosophy of science as well as questions arising in particular sciences, including biology, psychometrics, neuroscience, and cognitive science. He has two primary research projects: one on causation in complex dynamical systems and another on the use of causal methods for the analysis of racial discrimination. He is currently trying to convince philosophers that causal representations are implicitly relative to a particular time-scale and that it is therefore crucial to pay attention to temporal dynamics when designing and evaluating policy interventions.


Markus Pantsar – “On Radical Enactivist Accounts of Arithmetical Cognition”

Two children selling fruit from a basket count the coins they just received.
Detail of “The Little Fruit Seller” (c. 1670-1675) Bartolomé Esteban Murillo

In this post, Markus Pantsar discusses the article he recently published in Ergo. The full-length version of his article can be found here.

Traditionally, cognitive science has held the view that the human mind works through, or is at least best explained by, mental representations and computations (e.g., Chomsky 1965/2015; Fodor 1975; Marr 1982; Newell 1980). Radical enactivist accounts of cognition challenge this paradigm. According to them, the most basic forms of cognition do not involve mental representations or mental content; representations (and content) exist only in minds that have access to linguistic and sociocultural truth-telling practices (Hutto and Myin 2013, 2017).

As presented by Hutto and Myin, radical enactivism is a general approach to the philosophy of cognition. It is partly from this generality that it gets much of its force and appeal. However, a general theory of cognition ultimately needs to be tested on particular cognitive phenomena. In my paper, I set out to do just that with regard to arithmetical cognition. I am not a radical enactivist, but neither am I antagonistic to the approach. My aim is to provide a dispassionate analysis based on the progress that has been made in the empirical study and philosophy of numerical cognition.

Arithmetical cognition is especially well suited to test radical enactivism (Zahidi 2021). This is not because arithmetic itself suggests the existence of non-linguistic representations. In fact, ever since Dedekind and Peano presented axiomatizations of arithmetic, it has been clear that the entire arithmetic of natural numbers can be presented in a very simple language with only a handful of rules (i.e., the axioms) (Dedekind 1888; Peano 1889).
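
To see how little machinery is needed, here is a sketch of my own (not from the article) of the Peano-style recursion equations in Python, with numbers built from zero and a successor function:

```python
# Natural numbers as iterated successors of zero: 0, S(0), S(S(0)), ...
ZERO = None

def S(n):
    """Successor of n."""
    return ("S", n)

def add(m, n):
    """Addition defined by the two recursion equations:
       m + 0 = m;  m + S(n) = S(m + n)."""
    if n is ZERO:
        return m
    return S(add(m, n[1]))

def to_int(n):
    """Read off an ordinary integer, for display only."""
    return 0 if n is ZERO else 1 + to_int(n[1])

two = S(S(ZERO))
three = S(S(S(ZERO)))
print(to_int(add(two, three)))  # 5
```

Everything here follows from zero, successor, and two recursion equations; nothing about this formal simplicity settles how arithmetic is cognized.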

It is not arithmetic as a mathematical theory that presents challenges for radical enactivism; it is rather the development of arithmetic. This development happens on two levels. First, at the level of individuals, we have the ontogenetic development of arithmetical cognition. Second, at the level of populations and cultures, we have the phylogenetic and cultural-historical development of arithmetic. In my paper I focus on the ontogenetic level, because it is at that level that radical enactivism faces its most serious challenge.

It is commonly accepted that, in learning arithmetical knowledge and skills, children apply their innate, evolutionarily-acquired proto-arithmetical abilities (Pantsar 2014, 2019). These abilities – sometimes also called “quantical” (Núñez 2017) – are already present in human infants, and we share them with many non-human animals.

According to the most common view, there are two main proto-arithmetical abilities (Knops 2020). The first is subitizing: the ability to determine the number of objects in our field of vision without counting. Subitizing enables detecting exact quantities, but it stops working beyond three or four objects. For larger collections, there is an estimating ability. This ability is not limited to small quantities, but it becomes increasingly inaccurate as the size of the observed collection increases.

For the present topic, the literature on subitizing and estimating presents interesting questions. Following the work of Elizabeth Spelke (2000) and Susan Carey (2009), it is commonplace to associate each ability with a special core cognitive system (Hyde 2011). Subitizing is associated with the object tracking system (OTS), which allows for the parallel observation of objects in the subitizing range, up to three or four. Estimating is associated with the approximate number system (ANS), which is thought to be a numerosity-specific system.

The problem for the radical enactivist is that, under most interpretations, both the OTS and the ANS are based on non-linguistic representations. The OTS is based on the observed objects occupying mental object files, one file per object (Beck 2017; Carey 2009). For example, when I see three apples, three object files are occupied, and we can understand this as a representation of the number of apples.

The ANS, on the other hand, is usually interpreted as representing quantities on a mental number line (Dehaene 2011). This line is likely to be logarithmic, given that the estimating ability becomes less accurate as the quantities become larger. Studies on anumerical cultures in the Amazon provide further evidence of this; members of those cultures tend to place quantities on a (physical) number line in a logarithmic manner (Dehaene et al. 2008; but see Núñez 2011).
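
The ratio-dependence that motivates the logarithmic interpretation can be sketched with a toy model (the Weber fraction of 0.15 and the Gaussian noise are illustrative assumptions, not results from the article): if estimates carry noise proportional to the magnitude, discriminating two quantities depends on their ratio rather than their absolute difference.

```python
import random

random.seed(2)

WEBER = 0.15  # assumed Weber fraction

def ans_estimate(n):
    """Noisy magnitude estimate: noise scales with n (scalar variability),
    equivalent to constant noise on a logarithmic number line."""
    return random.gauss(n, WEBER * n)

def discrimination_rate(a, b, trials=20_000):
    """How often the simulated ANS correctly judges a < b."""
    return sum(ans_estimate(a) < ans_estimate(b) for _ in range(trials)) / trials

print(discrimination_rate(4, 5))    # a 4:5 ratio ...
print(discrimination_rate(40, 50))  # ... gives the same accuracy at 40:50
print(discrimination_rate(45, 50))  # a tighter ratio is harder, despite the
                                    # same absolute difference of 5
```

Constant noise on a logarithmic scale and magnitude-proportional noise on a linear scale are behaviorally hard to tell apart, which is one reason the debate turns on further evidence.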

Therefore, we have good empirical evidence in support of the idea that proto-arithmetical abilities are to be interpreted in terms of non-linguistic representations. Now the question is: can radical enactivism provide an alternative explanation of proto-arithmetical abilities without invoking representations?

This proves to be difficult, because it requires answering what is perhaps the most fundamental question in the field: namely, what exactly is a mental representation? Should visual memories, for example, be considered representations? For the radical enactivist they should not, but little evidence or argumentation has been provided to support this denial. In the present context, we must ask: could the OTS and the ANS work without using representations? Radical enactivism says so, but there is little solid evidence in support of this view.

Nonetheless, it should also be noted that the object files and the mental number line, as explanations of the functioning of the OTS and the ANS respectively, are currently nothing more than theoretical postulations: neither object files nor a mental number line have been located in the brain at the neuronal level, although fMRI studies give us good clues about where to look (Nieder 2016).

To be sure, some monkey studies have detected the existence of number neurons: i.e., specific groups of neurons whose firing is connected to observing a particular (small) quantity of objects (Nieder 2016), and one could infer that such number neurons count as representations of quantities in the brain. But this inference is exactly the kind of inference that radical enactivists have warned us against. Radical enactivists agree that there is non-linguistic processing of information in the brain, but they deny that in such cases there is content, i.e., representations. In the words of Hutto and Myin, brains process non-linguistic information-as-covariance, but not information-as-content (Hutto and Myin 2013:67).

In conclusion, where do we stand? Is there a way forward in the debate on representations? I believe there is, but it would be presumptuous to claim that philosophers can find it on their own. Instead, we will need a better empirical understanding of the neuronal activity associated with the functioning of the OTS and the ANS. At the same time, it would also be misguided to expect empirical data alone to resolve the issue. We will not find groups of neurons that are unassailably non-linguistic representations, and philosophers will need to continue working with empirical researchers in an effort to gain more knowledge about proto-arithmetical abilities.

Want more?

Read the full article at


  • Beck, J. (2017). “Can Bootstrapping Explain Concept Learning?” Cognition 158:110–21.
  • Carey, S. (2009). The Origin of Concepts. Oxford: Oxford University Press.
  • Chomsky, N. (2015). Aspects of the Theory of Syntax (50th anniversary ed.). Cambridge, MA: MIT Press. (Original work published 1965)
  • Dedekind, R. (1888). Was sind und was sollen die Zahlen? Stetigkeit und irrationale Zahlen (S. Müller-Stach, Ed.). Berlin: Springer Spektrum.
  • Dehaene, S., V. Izard, E. Spelke, and P. Pica. (2008). “Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures.” Science 320:1217–20.
  • Dehaene, S. (2011). The Number Sense: How the Mind Creates Mathematics (Rev. and updated ed.). New York: Oxford University Press.
  • Fodor, J. (1975). The Language of Thought. Cambridge, MA: Harvard University Press.
  • Hutto, D. D., and E. Myin. (2013). Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press.
  • Hutto, D. D., and E. Myin. (2017). Evolving Enactivism: Basic Minds Meet Content. Cambridge, MA: MIT Press.
  • Hyde, D. C. (2011). “Two Systems of Non-Symbolic Numerical Cognition.” Frontiers in Human Neuroscience 5:150.
  • Knops, A. (2020). Numerical Cognition: The Basics. New York: Routledge.
  • Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W.H. Freeman and Company.
  • Newell, A. (1980). “Physical Symbol Systems.” Cognitive Science 4(2):135–83.
  • Nieder, A. (2016). “The Neuronal Code for Number.” Nature Reviews Neuroscience 17(6):366.
  • Núñez, R. E. (2011). “No Innate Number Line in the Human Brain.” Journal of Cross-Cultural Psychology 42(4):651–68.
  • Núñez, R. E. (2017). “Is There Really an Evolved Capacity for Number?” Trends in Cognitive Sciences 21:409–24.
  • Pantsar, M. (2014). “An Empirically Feasible Approach to the Epistemology of Arithmetic.” Synthese 191(17):4201–29. doi: 10.1007/s11229-014-0526-y.
  • Pantsar, M. (2019). “The Enculturated Move from Proto-Arithmetic to Arithmetic.” Frontiers in Psychology 10:1454.
  • Peano, G. (1889). “The Principles of Arithmetic, Presented by a New Method.” Pp. 101–34 in Selected Works of Giuseppe Peano, edited by H. Kennedy. Toronto: University of Toronto Press.
  • Spelke, E. S. (2000). “Core Knowledge.” American Psychologist 55(11):1233–43. doi: 10.1037/0003-066X.55.11.1233.
  • Zahidi, K. (2021). “Radicalizing Numerical Cognition.” Synthese 198(Suppl 1):529–45.

About the author

Markus Pantsar is a guest professor at RWTH Aachen University. He holds the title of docent at the University of Helsinki. Pantsar’s main research fields are the philosophy of mathematics and artificial intelligence. His upcoming book Numerical Cognition and the Epistemology of Arithmetic (Cambridge University Press) will present a detailed, empirically informed philosophical account of arithmetical knowledge.