
Markus Pantsar – “On Radical Enactivist Accounts of Arithmetical Cognition”

Two children selling fruit from a basket count the coins they just received.
Detail of “The Little Fruit Seller” (c. 1670-1675) Bartolomé Esteban Murillo

In this post, Markus Pantsar discusses the article he recently published in Ergo. The full-length version of his article can be found here.

Traditionally, cognitive science has held the view that the human mind works through, or is at least best explained by, mental representations and computations (e.g., Chomsky 1965/2015; Fodor 1975; Marr 1982; Newell 1980). Radical enactivist accounts of cognition challenge this paradigm. According to them, the most basic forms of cognition do not involve mental representations or mental content; representations (and content) exist only in minds that have access to linguistic and sociocultural truth-telling practices (Hutto and Myin 2013, 2017).

As presented by Hutto and Myin, radical enactivism is a general approach to the philosophy of cognition. It is partly from this generality that it gets much of its force and appeal. However, a general theory of cognition ultimately needs to be tested on particular cognitive phenomena. In my paper, I set out to do just that with regard to arithmetical cognition. I am not a radical enactivist, but neither am I antagonistic to the approach. My aim is to provide a dispassionate analysis based on the progress that has been made in the empirical study and philosophy of numerical cognition.

Arithmetical cognition is especially suited to test radical enactivism (Zahidi 2021). This is not because arithmetic itself suggests the existence of non-linguistic representations. In fact, ever since Dedekind and Peano presented axiomatizations of arithmetic, it has been clear that the entire arithmetic of natural numbers can be presented in a very simple language with only a handful of rules (i.e., the axioms) (Dedekind 1888; Peano 1889).
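For illustration, here is one standard modern presentation of those rules (the axioms of first-order Peano arithmetic; Dedekind's and Peano's own formulations differ in detail):

```latex
\begin{align*}
&\forall x\; S(x) \neq 0 && \text{zero is not a successor}\\
&\forall x \forall y\; \big(S(x) = S(y) \rightarrow x = y\big) && \text{the successor function is injective}\\
&\forall x\; x + 0 = x, \quad \forall x \forall y\; x + S(y) = S(x + y) && \text{recursive definition of addition}\\
&\forall x\; x \cdot 0 = 0, \quad \forall x \forall y\; x \cdot S(y) = x \cdot y + x && \text{recursive definition of multiplication}\\
&\big(\varphi(0) \land \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))\big) \rightarrow \forall x\, \varphi(x) && \text{induction schema, for every formula } \varphi
\end{align*}
```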

It is not arithmetic as a mathematical theory that presents challenges for radical enactivism; it is rather the development of arithmetic. This development happens on two levels. First, at the level of individuals, we have the ontogenetic development of arithmetical cognition. Second, at the level of populations and cultures, we have the phylogenetic and cultural-historical development of arithmetic. In my paper I focus on the ontogenetic level, because it is at that level that radical enactivism faces its most serious challenge.

It is commonly accepted that, in learning arithmetical knowledge and skills, children apply their innate, evolutionarily-acquired proto-arithmetical abilities (Pantsar 2014, 2019). These abilities – sometimes also called “quantical” (Núñez 2017) – are already present in human infants, and we share them with many non-human animals.

According to the most common view, there are two main proto-arithmetical abilities (Knops 2020). The first is subitizing: the ability to determine the number of objects in our field of vision without counting. Subitizing enables detecting exact quantities, but it stops working beyond three or four objects. For larger collections, there is an estimating ability. This ability is not limited to small quantities, but it becomes increasingly inaccurate as the size of the observed collection increases.

For the present topic, the literature on subitizing and estimating presents interesting questions. Following the work of Elizabeth Spelke (2000) and Susan Carey (2009), it is commonplace to associate each ability with a special core cognitive system (Hyde 2011). Subitizing is associated with the object tracking system (OTS), which allows for the parallel observation of objects in the subitizing range, up to three or four. Estimating is associated with the approximate number system (ANS), which is thought to be a numerosity-specific system.

The problem for the radical enactivist is that, under most interpretations, both the OTS and ANS are based on non-linguistic representations. The OTS is based on the observed objects occupying mental object files, one file for one object (Beck 2017; Carey 2009). For example, when I see three apples, three object files are occupied, and we can understand this as a representation of the number of the apples.

The ANS, on the other hand, is usually interpreted as representing quantities on a mental number line (Dehaene 2011). This line is likely to be logarithmic, given that the estimating ability becomes less accurate as the quantities become larger. Studies on anumerical cultures in the Amazon provide further evidence of this; members of those cultures tend to place quantities on a (physical) number line in a logarithmic manner (Dehaene et al. 2008; but see Núñez 2011).

Therefore, we have good empirical evidence in support of the idea that proto-arithmetical abilities are to be interpreted in terms of non-linguistic representations. Now the question is: can radical enactivism provide an alternative explanation for proto-arithmetical abilities without invoking representations?

This proves to be difficult, because it requires answering what is perhaps the most fundamental question in the field: namely, what exactly is a mental representation? Should visual memories, for example, be considered representations? For the radical enactivist they should not, but little evidence or argumentation has been provided to support this denial. In the present context, we must ask: could the OTS and the ANS work without using representations? Radical enactivism says so, but there is little solid evidence in support of this view.

Nonetheless, it should also be noted that the object files and the mental number line, as explanations of the functioning of the OTS and the ANS, respectively, are currently nothing more than theoretical postulations: neither object files nor a mental number line has been located in the brain at the neuronal level, although fMRI studies give us good clues about where to look (Nieder 2016).

To be sure, some monkey studies have detected the existence of number neurons: i.e., specific groups of neurons whose firing is connected to observing a particular (small) quantity of objects (Nieder 2016), and one could infer that such number neurons count as representations of quantities in the brain. But this inference is exactly the kind of inference that radical enactivists have warned us against. Radical enactivists agree that there is non-linguistic processing of information in the brain, but they deny that in such cases there is content, i.e., representations. In the words of Hutto and Myin, brains process non-linguistic information-as-covariance, but not information-as-content (Hutto and Myin 2013:67).

In conclusion, where do we stand? Is there a way forward in the debate on representations? I believe there is, but it would be spurious to claim that philosophers can find it on their own. Instead, we will need a better empirical understanding of the neuronal activity associated with the functioning of the OTS and the ANS. At the same time, it would also be misguided to expect empirical data alone to resolve the issue. We will not find groups of neurons that are unassailably non-linguistic representations, and philosophers will need to continue working with empirical researchers in an effort to gain more knowledge about the proto-arithmetical abilities.

Want more?

Read the full article at


  • Beck, J. (2017). “Can Bootstrapping Explain Concept Learning?” Cognition 158:110–21.
  • Carey, S. (2009). The Origin of Concepts. Oxford: Oxford University Press.
  • Chomsky, N. (2015). Aspects of the Theory of Syntax (50th anniversary ed.). Cambridge, MA: MIT Press. (Original work published 1965)
  • Dedekind, R. (1888). Was sind und was sollen die Zahlen? Reprinted in S. Müller-Stach (Ed.), Richard Dedekind: Was sind und was sollen die Zahlen? Stetigkeit und irrationale Zahlen. Berlin: Springer Spektrum.
  • Dehaene, S., V. Izard, E. Spelke, and P. Pica (2008). “Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures.” Science 320:1217–20.
  • Dehaene, S. (2011). The Number Sense: How the Mind Creates Mathematics (rev. and updated ed.). New York: Oxford University Press.
  • Fodor, J. (1975). The Language of Thought. Cambridge, MA: Harvard University Press.
  • Hutto, D. D., and E. Myin (2013). Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press.
  • Hutto, D. D., and E. Myin (2017). Evolving Enactivism: Basic Minds Meet Content. Cambridge, MA: MIT Press.
  • Hyde, D. C. (2011). “Two Systems of Non-Symbolic Numerical Cognition.” Frontiers in Human Neuroscience 5:150.
  • Knops, A. (2020). Numerical Cognition: The Basics. New York: Routledge.
  • Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman and Company.
  • Newell, A. (1980). “Physical Symbol Systems.” Cognitive Science 4(2):135–83.
  • Nieder, A. (2016). “The Neuronal Code for Number.” Nature Reviews Neuroscience 17(6):366.
  • Núñez, R. E. (2011). “No Innate Number Line in the Human Brain.” Journal of Cross-Cultural Psychology 42(4):651–68.
  • Núñez, R. E. (2017). “Is There Really an Evolved Capacity for Number?” Trends in Cognitive Sciences 21:409–24.
  • Pantsar, M. (2014). “An Empirically Feasible Approach to the Epistemology of Arithmetic.” Synthese 191(17):4201–29. doi: 10.1007/s11229-014-0526-y.
  • Pantsar, M. (2019). “The Enculturated Move from Proto-Arithmetic to Arithmetic.” Frontiers in Psychology 10:1454.
  • Peano, G. (1889). “The Principles of Arithmetic, Presented by a New Method.” Pp. 101–34 in Selected Works of Giuseppe Peano, edited by H. Kennedy. Toronto: University of Toronto Press.
  • Spelke, E. S. (2000). “Core Knowledge.” American Psychologist 55(11):1233–43. doi: 10.1037/0003-066X.55.11.1233.
  • Zahidi, K. (2021). “Radicalizing Numerical Cognition.” Synthese 198(Suppl 1):529–45.

About the author

Markus Pantsar is a guest professor at RWTH Aachen University. He holds the title of docent at the University of Helsinki. Pantsar’s main research fields are the philosophy of mathematics and artificial intelligence. His upcoming book “Numerical Cognition and the Epistemology of Arithmetic” (Cambridge University Press) will present a detailed, empirically informed philosophical account of arithmetical knowledge.


Kevin Richardson – “Exclusion and Erasure: Two Types of Ontological Oppression”

Painting in which one half of the view is obstructed by a person looking out (but you can see some of the sky around her), and the other half is obstructed by a curtain (but you can see some of the sky from a cut-out)
“Decalcomania” (1966) René Magritte © Magritte Gallery 2021

In this post, Kevin Richardson discusses the article he recently published in Ergo. The full-length version of Kevin’s article can be found here.

Between July 2021 and December 2022, there were 4,000 cases of books being banned in US public schools. The most frequently banned book of the 2022–2023 school year was Gender Queer: A Memoir, by Maia Kobabe. Kobabe’s graphic novel is a coming-of-age story in which the author questions the gender binary. The gender binary is the set of social norms that tells us that there are only two genders (man and woman), that these genders are biologically defined, and that everyone has exactly one of them. In the book, Kobabe comes out as non-binary, identifying with neither of the two standard gender categories.

Gender Queer and other books that are directly or indirectly critical of the gender binary have been under attack. Not only is there legislation that bans books about trans, non-binary, and genderqueer people; there is also legislation that aspires to ban the people themselves. As of this writing, 83 of the 574 anti-trans bills proposed in the US this year have reportedly been passed. The bills are anti-trans because they target trans people by restricting their access to gender-affirming care, reclassifying drag shows as “adult entertainment,” codifying the right of teachers not to respect students’ preferred pronouns, and so on.

At a rapidly accelerating pace, we see more attempts to make the lives of trans people impossible. In Normal Life, Dean Spade, legal theorist and activist, writes:

"Trans people are told by the law, state agencies, private discriminators, and our families that we are impossible people who cannot exist, cannot be seen, cannot be classified, and cannot fit anywhere."

Republicans and conservatives everywhere are on a mission to eliminate the legal possibility of trans people and LGBTQ people more generally.

How should we understand this notion of “making impossible”? In my paper, “Exclusion and Erasure: Two Types of Ontological Oppression”, I describe two ways in which trans people are made impossible: exclusion and erasure.

Ontological exclusion is what happens when an institution wrongfully refuses to let you participate in it because of your social identity. For example, trans woman Calliope Wong was rejected from Smith College, a women’s college in Northampton, Massachusetts, on the grounds that she was not eligible to be a student at the college. In 2013, Smith College defined being a woman in terms of being female, a biological property they took Wong not to have.

We also see ontological exclusion in the current movement for so-called gender critical feminism. According to these feminists, feminism should be a movement based on a person’s sex, not their gender. This means that women’s sports, and access to women’s restrooms, are to be legally restricted to cisgender (as opposed to transgender) women.

I contrast ontological exclusion with ontological erasure. A case of ontological erasure happens when an institution fails to determinately recognize your social identity. In my paper, I discuss the case of Bryn Mawr College, another women’s college. While Smith College outright rejected trans applicants, Bryn Mawr for a time held an ambiguous position toward trans women. They did not determinately rule out trans applicants, but they also did not determinately acknowledge the legitimacy of trans applicants. Instead, they claimed that they would consider trans applicants on a case-by-case basis.

This is a case of ontological erasure because the social institution erases the existence of the category of trans people itself. It does this by failing to have a determinate judgment about whether trans people can apply. Trans, genderqueer, and non-binary people defy the gender binary. As such, trans identities are often perceived as indeterminate: one is taken to be neither determinately a man nor determinately a woman, neither determinately a woman nor determinately a non-woman.

In my paper, I write about how erasure can also be oppressive to people who inhabit marginalized identities. I focus on the case of trans people, but erasure is possible whenever you have people who sit in the gaps between the dominant social categories: multiracial people, bisexual people, and so on. Being erased, I argue, is a case of what Robin Dembroff and Cat Saint-Croix call “agential identity discrimination”. You are discriminated against, not simply in virtue of your identity, but in virtue of your attempt to get others to recognize your identity.

There is much more detail in the paper, but in this blog post, I want to highlight a few things that are important in light of recent events. The political climate for LGBTQ people has changed drastically over the course of my writing and publishing this paper. While my paper focuses on erasure as a static, largely hidden phenomenon, I  want to emphasize that erasure is much more dynamic and public than it may appear in my article.

Today, there is an increased effort to enforce gender boundaries. This means there is an increased effort to engage in ontological exclusion. More institutions are removing the indeterminacy from their definitions of gender, ruling out the ability of many trans people to participate or feel safe within them. There is currently a “gender panic”, as sociologists Kristen Schilt and Laurel Westbrook call it. Gender panics occur when there is a perceived threat to the gender binary. In defense of the binary, there is an intense affirmation of the boundaries of gender.

At the same time as there is an effort to draw the boundaries around who is, and is not, a woman, there is also an effort to make this very boundary-drawing effort invisible. For example, the book Gender Queer is being taken off of shelves because it is likely to lead young people to question the gender binary. The goal is not simply to exclude genderqueer people from public spaces (and society more generally), but to eradicate the very possibility of the category genderqueer. Erasure exists alongside exclusion; erasure and exclusion complement and reinforce each other.

Want more?

Read the full article at

About the author

Kevin Richardson is an Assistant Professor of Philosophy at Duke University. He mainly researches social ontology, with an emphasis on the ontology of gender, sexual orientation, and race.


Jonas Werner – “Extended Dispositionalism and Determinism”

Picture of a tall glass tumbler, meant to represent the glass’s disposition to break.
“Nr. 1598” (2001) Peter Dreher © Courtesy of the Peter Dreher Foundation

In this post, Jonas Werner discusses the article he recently published in Ergo. The full-length version of Jonas’ article can be found here.

You should handle glasses with care because they are fragile and could break. Dispositions, like fragility or inflammability, go hand in hand with possibilities, like the possibility that the glass breaks or the paper ignites.

Modal dispositionalists take this observation as the starting point for their theory of metaphysical modality. Every possibility, so they claim, is underwritten by a disposition. Some of these dispositions are possessed to a very small degree, some are iterated (dispositions to acquire certain dispositions), and some were lost and are only a thing of the past.

At the heart of the modal dispositionalist’s position lies the following biconditional:

It is metaphysically possible that p just in case something has, had, or will have an (iterated or non-iterated) disposition to be such that p.

I call the proponent of this biconditional the “classic dispositionalist”.

I argue that, for some p, it is possible that p although nothing has, had, or will have a disposition to be such that p. Some possibilities are only indirectly underwritten by dispositions. 

Why should one be unhappy with classic dispositionalism? Because, when combined with certain plausible assumptions, it quickly leads to a disaster.

The first assumption is that dispositions are always future-directed: nothing can be disposed to change the past. For the classic dispositionalist, this immediately leads to the result that the first moment in time (if there is one) could not have been different.

The second assumption is that there are immutable truths about the dispositional roles of fundamental physical objects. Plausibly, nothing has the power to change these dispositional roles. For example, nothing can stop electrons from repelling protons. As a result, truths about the fundamental dispositional roles of fundamental objects turn out to be necessary for the classic dispositionalist.

The problem with both assumptions is that they generate too many necessities. To see the force of this worry, assume (something close to) determinism:

A complete description of the state of the universe at the first moment in time, in conjunction with immutable truths about the dispositional roles of physical objects, entails a complete description of every later state of the universe. 

Now, the first assumption gave us that the state of the universe at the first moment in time is necessarily the way it is, while the second assumption gave us that fundamental dispositional roles are necessarily the way they are. Whatever is entailed by necessities is itself a necessity. Hence, we get the result that every state of the world obtains by necessity.
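The argument can be sketched schematically (our reconstruction, not notation from the paper), writing $F$ for a complete description of the first state of the universe, $D$ for the immutable dispositional truths, and $S$ for a complete description of any later state:

```latex
\begin{align*}
&\Box F && \text{first assumption: the first state could not have been different}\\
&\Box D && \text{second assumption: dispositional roles are immutable}\\
&\Box\big((F \land D) \rightarrow S\big) && \text{determinism: the entailment holds of necessity}\\
&\therefore\; \Box S && \text{since } \Box(p \rightarrow q) \rightarrow (\Box p \rightarrow \Box q)
\end{align*}
```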

According to this result, for a match that never ignites – because I accidentally dropped it in a pond, for example – it is impossible that it ignites. If we were still to subscribe to classic dispositionalism, we would even have to say that it was never flammable. This seems absurd!

Of course, there is some room for manoeuvre for the classic dispositionalist, which I discuss in some detail in the paper. For now, I just wish to mention that the case based on determinism is just an extreme version of the general worry that the classic dispositionalist might be forced to accept necessities that are incompatible with the manifestation of some dispositions.

In the second part of my paper, I propose a variant of dispositionalism that is immune to this problem, which I dub “extended dispositionalism”.

Clearly, the right-to-left part of the biconditional has to be saved. Something having a disposition to be such that p needs to be sufficient for the possibility that p, otherwise the central idea of modal dispositionalism is lost. But the dispositionalist need not say that something having a disposition to be such that p is necessary for it being possible that p.

Extended dispositionalism allows that possibilities are indirectly underwritten by dispositions; it allows that the left-to-right direction of our biconditional fails. This blocks the problem described above, because from the fact that nothing is disposed to be such that p we need not conclude that it is not possible that p.

How can possibilities be indirectly underwritten by dispositions?

In a nutshell, we can take a collection of true propositions to be a candidate for a collection of metaphysical necessities just in case every disposition is such that its manifestation is logically consistent with the conjunction of all propositions in this set.
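Schematically (a gloss on the idea, with notation not taken from the paper), writing $M(d)$ for the manifestation of disposition $d$:

```latex
\Gamma \text{ is a candidate collection of necessities} \iff
\Gamma \text{ contains only truths and, for every disposition } d,\;
\Gamma \cup \{M(d)\} \text{ is logically consistent.}
```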

There will be many such collections. However, some of them might be more plausible candidates for a basis of all metaphysical necessities than others. Maybe there is a unique maximal collection; maybe the best candidate is the largest collection that avoids objectionable arbitrariness; or maybe it turns out that what’s necessary is indeterminate.

In any case, the method of looking for a collection of necessities that is compatible with the right-to-left direction of our biconditional has it that dispositions keep their role as the source of modality.

Still, we might have possibilities that are not the manifestation of any dispositions. We could, for example, hold that the first state of the universe is not necessary, although nothing ever has, had, or will have a disposition for it to be different.

Want more?

Read the full article at

About the author

Jonas Werner is a postdoctoral fellow at the Massachusetts Institute of Technology. Previously, he was a postdoctoral researcher at the University of Bern. He received his PhD from the University of Hamburg. His research focuses on metaphysics and the philosophy of language. 


Joshua Shepherd and J. Adam Carter – “Knowledge, Practical Knowledge, and Intentional Action”

picture of a catch in baseball
“Safe!” (ca. 1937) Jared French © National Baseball Hall of Fame Library

In this post, Joshua Shepherd and J. Adam Carter discuss the article they recently published in Ergo. The full-length version of their article can be found here.

A popular family of views, often inspired by Anscombe, maintains that knowledge of what I am doing (under some description) is necessary for that doing to qualify as an intentional action. We argue that these views are wrong: intentional action does not require knowledge of this sort, because intentional action and knowledge have different levels of permissiveness regarding modally close failures.

Our argument revolves around a type of case that is similar in some (but not all) ways to Davidson’s famous carbon copier case. Here is one version of the type:

The greatest hitter of all time (call him Pujols) approaches the plate and forms an intention to hit a home run – that is, to hit the ball some 340 feet or more in the air, such that it flies out of the field of play. Pujols believes he will hit a home run, and he has the practical belief, as he is swinging, that he is hitting a home run. As it happens, Pujols’s behavior, from setting his stance and eyeing the pitcher, to locating the pitch, to swinging the bat and making contact with the ball, is an exquisite exercise of control. Pujols hits a home run, and so his belief that he is doing just that is true.

Given the skill and control Pujols has with respect to hitting baseballs, Pujols intentionally hits a home run. (If one thinks hitting a home run is too unlikely, we consider more likely events, like Pujols getting a base hit. If one doesn’t like baseball, we consider other examples.)

But Pujols does not know that he is doing so. For in many very similar circumstances, Pujols does not succeed in hitting a home run. Pujols’s belief that he is hitting a home run is unsafe.
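The safety condition driving this verdict can be stated schematically (our gloss, not the authors’ formulation): $S$ knows that $p$ only if, in the worlds $w'$ close to the actual world $w$, $S$’s belief that $p$ on the same basis is true:

```latex
K_S\, p \;\rightarrow\; \forall w' \approx w:\ \big(B_S\, p \text{ at } w' \rightarrow p \text{ is true at } w'\big)
```

Since Pujols fails to hit a home run in many close worlds where he forms the same belief, the consequent fails and knowledge is ruled out, even though the intentional action is not.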

When intentional action is at issue, it is commonly the case that explanations that advert to control sit comfortably alongside the admission that in nearby cases, the agent fails. Fallibility is a hallmark of human agency, and our attributions of intentional action reflect our tacit sense that some amount of risk, luck, and the cooperation of circumstance is often required to some degree – even for simple actions.

The same thing is not true of knowledge. When it comes to attributing knowledge, we simply have much less tolerance for luck and for failure in similar circumstances.

One interesting objection to our argument appeals to an Anscombe-inspired take on the kind of knowledge involved in intentional action.

Anscombe famously distinguished between contemplative and non-contemplative forms of knowledge. A central case of non-contemplative knowledge, for Anscombe, is the case of practical knowledge – a special kind of self-knowledge of what the agent is doing that does not simply mirror what the agent is doing, but is somehow involved in its unfolding. The important objection to our argument is that the argument makes most sense if applied to contemplative knowledge, but fails to take seriously the unique nature of non-contemplative, practical knowledge.

We discuss a few different ways of understanding practical knowledge, due to Michael Thompson, Kim Frost, and Will Small. The notion of practical knowledge is fascinating, and there are important insights in these authors. But we think it is not too difficult to apply our argument to a claim that practical knowledge is necessary for intentional action.

Human agents sometimes know exactly how to behave and make no specific mistake, and yet they fail. Sometimes they behave in indistinguishable ways, and they succeed. Most of the time, human agents behave imperfectly, but there is room for error, and they succeed. The chance involved in intentional action is incompatible with both contemplative and non-contemplative knowledge.

We also discuss a probabilistic notion of knowledge due to Sara Moss (and an extension of it to action by Carlotta Pavese), and whether it might be of assistance. It won’t.

Consider Ticha, the pessimistic basketball player.

Ticha significantly underrates herself and her chances, even though she is quite a good shooter. She systematically forms beliefs about her chances that are false, believing that success is unlikely when it is likely. When Ticha lines up a shot that has, say, a 50% chance of success, she believes that the chances are closer to 25%. Ticha makes the shot. 

Was Ticha intentionally making the shot, and did she intentionally make it? Plausibly, yes.

Did Ticha have probabilistic knowledge along the way? Plausibly, no, since her probabilistic belief was false.

The moral of our paper, then, has implications for how we understand the essence of intentional action. We contrast two perspectives on this.

The first is an angelic perspective that sees knowledge of what one is doing as of the essence of what one is intentionally doing, that limns agency by emphasizing powers of rationality and the importance of self-consciousness, and that views the typical case of intentional action as one in which the agent’s success is very close to guaranteed, resulting from the perfect exercise of agentive capacities.

The second is an animal perspective that emphasizes the limits of our powers of execution, planning, and perception, and thus emphasizes the need for agency to involve special kinds of mental structure, as well as a range of tricks, techniques, plans, and back-up plans.

We think the natural world provides more insight into the nature of agency, and of intentional action, than the sources that motivate the angelic perspective. We also think there is room within the animal perspective for a proper philosophical treatment of knowledge-in-action. But that’s a separate conversation.

Want more?

Read the full article at

About the authors

Joshua Shepherd is ICREA Research Professor at the Universitat Autònoma de Barcelona, and PI of Rethinking Conscious Agency, funded by the European Research Council. He works on issues in the philosophy of action, psychology, and neuroethics. His latest book, The Shape of Agency, is available open access from Oxford University Press.

J. Adam Carter is Professor in Philosophy at the University of Glasgow. His research is mainly in epistemology, with special focus on virtue epistemology, know-how, cognitive ability, intentional action, relativism, social epistemology, epistemic luck, epistemic value, group knowledge, understanding, and epistemic defeat.


Eyal Tal and Hannah Tierney – “Cruel Intentions and Evil Deeds”

Pop-art depiction of a man and woman riding away in a car with evil intentions
“In the Car” (1963) © Roy Lichtenstein

In this post, Hannah Tierney and Eyal Tal discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Doing the right thing can be difficult. Doing the morally worthy thing can be even harder.

Accounts of moral worth aim to determine the kinds of motivations that elevate merely right actions (actions that happen to conform to the correct normative theory) to morally worthy actions (actions that merit praise or credit).

Some argue that an agent performs a morally worthy action if and only if they do it because the action is morally right (Herman 1981; Jeske 1998; Sliwa 2016; Johnson King 2020). Others argue that a morally worthy action is that which an agent performs because of features that make the action right (Arpaly 2003; Arpaly & Schroeder 2014; Markovits 2010).

What sets these views apart is the kind of motivation each takes to be essential for an action’s moral worth.

When an agent is motivated to do the right thing because of the action’s moral rightness, she has a higher-order motivation to perform this action. When an agent is motivated to do the right thing because of a particular right-making feature of the action, she has a first-order motivation to perform this action. Higher-order theorists (Sliwa 2016; Johnson King 2020) argue that higher-order motivations are necessary and sufficient for moral worth, while first-order motivations are largely irrelevant. In contrast, first-order theorists (Arpaly 2003; Markovits 2010) argue that first-order motivations are necessary and sufficient for moral worth, while higher-order motivations are irrelevant.

In an important sense, higher-order and first-order views of moral worth are diametrically opposed. The motivations that one camp argues are necessary and sufficient for moral worth are the very motivations that the other camp argues are irrelevant.

Nevertheless, proponents of these opposing views share something important. With the exception of Arpaly (2003) and Arpaly & Schroeder (2014), they theorize about the nature of moral worth by focusing mainly on the moral worth of, and praiseworthiness or creditworthiness for, right actions.

Yet each of these properties has a negatively valenced counterpart that attaches to wrong actions. Just as agents can deserve praise or credit for doing the right thing, they can deserve blame or discredit for doing the wrong thing. While the former actions have moral worth, the latter actions have what we will call moral counterworth.

In our paper, we explore the moral counterworth of wrong actions in order to shed new light on the nature of moral worth. Contrary to theorists in both camps, we argue that more than one kind of motivation can affect the moral worth of actions. 

Compare the following cases: 

Selfish Gossip: Cecile learns of a good friend’s embarrassing secret. She knows that it would be wrong to reveal it, and she does not wish to do wrong. While at a party, an opportunity to be the centre of attention arises. Wanting to be popular, Cecile succumbs to temptation and reveals her friend’s secret. 
Cruel Gossip: Sebastian learns of a good friend’s embarrassing secret. He knows that it would be wrong to reveal it, and he does not wish to do wrong. While at a party, an opportunity arises to humiliate his friend by revealing the secret. Wanting to embarrass his friend, Sebastian succumbs to temptation and reveals his friend’s secret.

Though both Cecile and Sebastian are blameworthy for revealing their friend’s secret, they are not equally blameworthy. Sebastian is (much) more blameworthy than Cecile, and his action possesses more counterworth than Cecile’s action.

What could explain this difference? The only difference between Cecile and Sebastian lies in their first-order motivations. Cecile’s motivation to reveal her friend’s secret is selfish—she cares more about being popular than about her friend’s privacy. But Sebastian’s motivation to tell the secret is cruel—he desires to harm his friend by embarrassing them.

Sebastian’s cruel first-order motivation renders him more blameworthy than Cecile. If this is right, then first-order motivations are not irrelevant to moral counterworth—they can directly contribute to the degree to which an agent is blameworthy. 

Reflecting on cases of wrong actions indicates that higher-order motivations can impact moral counterworth as well.

Compare the case of Selfish Gossip, in which Cecile reveals a friend’s secret in order to be the centre of attention despite having the higher-order motivation not to perform wrong actions, to the following case:

Evil Gossip: Isabelle learns of a good friend’s embarrassing secret. She knows that it would be wrong to reveal it, and she wishes to do wrong. While at a party, an opportunity to be the centre of attention arises. Wanting to both be popular and do wrong, Isabelle reveals her friend’s secret.

While both Cecile and Isabelle are blameworthy for their actions, Isabelle is (much) more blameworthy. The relevant difference between Cecile and Isabelle lies in their higher-order motivations.

Cecile possesses a higher-order motivation not to reveal her friend’s secret—she knows that doing so is wrong and does not want to do the wrong thing. In contrast, Isabelle possesses a higher-order motivation to reveal the secret—she wants to reveal the secret because doing so is wrong. 

We submit that Isabelle’s motivation to do wrong renders her more blameworthy than Cecile. And if we are right that Isabelle’s motivation to do wrong enhances the degree to which she is blameworthy for doing wrong, then higher-order motivations are not irrelevant to moral counterworth. 

From here, we defend the following argument: 

(1)	First-order and higher-order motivations can each affect moral counterworth.
(2)	Moral counterworth and moral worth are relevantly similar, such that the kinds of motivations that affect the former can also affect the latter.
(3)	Therefore, first-order and higher-order motivations can each affect the moral worth of an agent’s action.

In our paper, we defend each premise from potential objections and conclude by explaining how reflection on moral counterworth serves to support recently developed accounts of moral worth that make room for the relevance of both higher-order and first-order motivations (Isserow 2019, 2020; Portmore 2022; Singh 2020).

Want more?

Read the full article at


  • Arpaly, N. (2003). Unprincipled Virtue: An Inquiry into Moral Agency. Oxford University Press. 
  • Arpaly, N. & Schroeder, T. (2014). In Praise of Desire. Oxford University Press. 
  • Herman, B. (1981). “On the Value of Acting from the Motive of Duty.” The Philosophical Review 90(3): 359–382.
  • Isserow, J. (2019). “Moral Worth and Doing the Right Thing by Accident.” Australasian Journal of Philosophy 97: 251–264.
  • Isserow, J. (2020). “Moral Worth: Having it Both Ways.” The Journal of Philosophy 117(10): 529–556. 
  • Jeske, D. (1998). “A Defense of Acting from Duty.” The Journal of Value Inquiry 32(1): 61–74.
  • Johnson King, Z. (2020). “Accidentally Doing the Right Thing.” Philosophy and Phenomenological Research 100(1): 186–206.
  • Markovits, J. (2010). “Acting for the Right Reasons.” The Philosophical Review 119 (2): 201–242. 
  • Portmore, D. (2022). “Moral Worth and Our Ultimate Moral Concerns.” Oxford Studies in Normative Ethics, volume 12.
  • Singh, K. (2020). “Moral Worth, Credit, and Non-Accidentality.” Oxford Studies in Normative Ethics, volume 10.
  • Sliwa, P. (2016). “Moral Worth and Moral Knowledge.” Philosophy and Phenomenological Research 93(2): 393–418. 

About the authors

Eyal Tal received his PhD in philosophy from the University of Arizona. He is interested in epistemology, ethics, metaethics, metaphysics, philosophy of psychiatry, and philosophy of science.

Hannah Tierney is Assistant Professor in the philosophy department at the University of California, Davis. She specializes in ethics and metaphysics, and she writes mainly on issues of free will, moral responsibility, and personal identity.

Posted on

Charles Goldhaber – “The Humors in Hume’s Skepticism”

An enigmatic winged female figure is surrounded by the symbols of mathematics, alchemy, and the crafts. She looks gloomy and melancholic, but also in the grip of inspired and creative contemplation. She is thought to be a symbol of scientific and artistic creativity.
“Melencolia I” (1514) Albrecht Dürer

In this post, Charles Goldhaber discusses the article he recently published in Ergo. The full-length version of Charles’ article can be found here.

Something very surprising occurs in the “Conclusion” to Book I of David Hume’s A Treatise of Human Nature: he pauses to survey some of the skeptical strands within the “science of man”, and doing so causes his generally dispassionate tone to explode into a turbulent personal narrative.

He describes himself as plunging into an extremely gloomy mood. Sunken, he fancies himself

“some strange uncouth monster…inviron’d with the deepest darkness” (T

When this “philosophical melancholy and delirium” reaches a fever pitch, Hume reports:

“The intense view of these manifold contradictions and imperfections in human reason has so wrought upon me, and heated my brain, that I am ready to reject all belief and reasoning, and can look upon no opinion even as more probable or likely than another” (T–9).

Hume’s drastic shift in literary style and severe mental state are striking, but perhaps even more surprising is the fact that, within just a few pages, he seems to have recovered entirely. He says:

“I feel an ambition to arise in me of contributing to the instruction of mankind, and of acquiring a name by my inventions and discoveries” (T 

He then launches headfirst into his account of the passions in Book II, seemingly untroubled by the skeptical implications of his own findings.

What happened here? How did Hume break out of his melancholy and resume his philosophical pursuits? Why did he even bother to write about his mental state—and in such a curiously stylish way at that?

I think the style of his writing is our best clue to answering all these questions. Or rather, the clue is Hume’s use of a swath of words and images deriving from ancient and medieval medicine.

On the humoral theory, health consists in a balance of four humors, or essential bodily liquids: black bile, blood, yellow bile, and phlegm. Each of them corresponds to one of four temperaments, or dispositions to certain characteristic passions, actions, and ailments.

Interestingly, Hume’s recovery from “philosophical melancholy” in the “Conclusion” involves four transitional stages: before resuming philosophy, Hume is melancholic, then enjoys social pleasures, feels aggression toward philosophy, and finally rests. Close attention to Hume’s description of these transitional stages reveals that he makes deliberate allusions to one of the four humors or temperaments in each.

This suggests that his recovery occurs when his cycle through the four humors produces a balance between them. For Hume, a healthy mental state is one which incorporates a moderate degree of all four temperaments.

Hume’s humoral allusions resolve a textual puzzle about the progression of his personal narrative in the “Conclusion.” Perhaps more interestingly, they also help us understand how Hume conceived of skepticism and its role in human life.

By associating excessive skepticism with melancholy, heated brains, and lycanthropic delusions, Hume invites his readers to conceive of it along the lines of classical diseases resulting from the excess of black bile. Such diseases were thought to have been especially common in philosophers, whose intense, often brooding reflections encouraged the production of black bile and the corresponding “melancholic” temperament. Philosophers would benefit from tempering these excesses through activities associated with the other humors or temperaments, as Hume himself does in the “Conclusion.”

Yet black bile was not taken to be inherently unhealthy. Some of it was necessary for humoral balance. Likewise, though Hume emphasizes the dangers of excessive skepticism, he finds a more moderate degree to be salutary. Hume’s invocation of the humoral theory of medicine then helps us see that skepticism can both threaten and restore a healthy mind, depending on the degree of its predominance in our thought.

You might wonder whether Hume would really invoke an antiquated medical theory in his writings. After all, Hume was highly critical of the “occult qualities” appealed to by the “antients” (T, and he wrote the Treatise during a boom in Scottish medical innovations.

You might also worry that Hume’s invocation of humoral theory saps his claims on the value and management of skepticism of any plausibility. After all, we know that health is more than a matter of balancing liquids.

Both worries are answerable. First, though by Hume’s youth humoral medicine had lost its status as the dominant theoretical orthodoxy, it continued to hold sway over medical practice. On top of this, temperament psychology remained a rich source of themes in literature well beyond Hume’s life.

Second, Hume’s point that skepticism can be both healthy and harmful depending on degree does not rely on any literal endorsement of humoral theory. Indeed, it’s unclear which medical theory Hume endorsed, if any. What is clear is that Hume found humoral theory to be a helpful analogy for thinking about how skepticism can be moderated in ways that promote healthy doxastic dispositions – and that is a point on which we can agree, even while rejecting the medical theory.

The humoral allusions in Hume’s discussion of skepticism can help us revive a promising approach to epistemology, one with no real modern equivalent. The core idea is that proper mental functioning involves a balance of tendencies to reason and believe in certain ways. Certain epistemic vices, such as skepticism and dogmatism, are extreme expressions of the very same tendencies. The vices are then more a matter of degree than of doctrine.

As a result, even skeptics and dogmatists can lead us toward proper mental functioning, when adopting some share of their dispositions helps us correct our own imbalances. We do not need to accept humoral theory to appreciate this idea, but it’s an idea that Hume’s invocations of humoral theory can lead us to see for the first time.

Want more?

Read the full article at

About the author

Charles Goldhaber is a Visiting Assistant Professor at Haverford College. His research focuses on skepticism in contemporary epistemology and the early modern era, especially in Hume and Kant.

Posted on

F. J. Elbert – “God and the Problem of Blameless Moral Ignorance”

Elohim is a Hebrew name for God. This picture illustrates the Book of Genesis. Adam is shown growing out of the earth, a piece of which Elohim holds in his left hand.
“Elohim Creating Adam” (1795) William Blake

In this post, F. J. Elbert discusses his article recently published in Ergo. The full-length version of the article can be found here.

The Abrahamic religions (Judaism, Christianity, and Islam) share more than just the belief that Abraham was an important prophet. They also hold in common the view that God is the perfectly good creator of the world who has designed it so that any gratuitous evil is of our choosing rather than God’s responsibility.

The origin story is the same in outline, and it is found in Genesis. God placed Adam and Eve in a garden free of evil. However, Adam and Eve knowingly and willingly disobeyed God. They introduced evil into the world by rebelling against their creator. God bore no fault in their fall. 

Suppose we grant that there is a creator. It does not follow that humans have an overriding moral obligation to praise and obey the being who created them. On the contrary, if the creator is responsible for a morally unsupportable evil, then it follows that the creator is not perfectly good. 

Consider the following origin story, which we can call “The Garden of Blameless Disobedience”. In this story, the creator gives Adam and Eve conflicting commands. Adam is told they can eat every fruit in the garden except apples, and Eve is told they can eat anything except strawberries. Suppose Adam eats strawberries, and Eve takes great delight in the occasional apple. Each disobeys a command the creator has given. However, assume each has an all-things-considered obligation, or one that trumps all other commitments, to obey the creator. In that case, the creator could not have a morally sufficient reason for giving them conflicting commands, because no moral good could possibly result. In the Garden of Eden, Adam and Eve are wholly responsible for introducing gratuitous evil into the world. In contrast, in the Garden of Blameless Disobedience, the creator is responsible for the evil and hence the creator is not God. 

There is a variant of “The Garden of Blameless Disobedience” in which Adam and Eve are also not culpable for introducing evil into the world. We can call it “The Garden of Blameless Confusion”. In it, the creator commands Eve not to eat strawberries, but the creator does not speak to Adam. However, Adam sincerely but mistakenly believes that the creator has commanded him that the only fruit they cannot eat is apples. In this garden, the creator designs Adam so that, through no fault of his own, he does not reliably form beliefs about what the creator has commanded. He accepts some commands as originating from his creator when they do not. As a result, Adam and Eve quarrel unnecessarily about what fruit they can and cannot eat. Again, assuming they have a paramount or overriding obligation to obey their creator, Adam’s sincere but mistaken belief that they should not eat apples can serve no greater moral purpose; there cannot be a morally sufficient reason for doing what is all things considered morally wrong. In the Garden of Blameless Confusion, the creator does not deserve unsurpassed praise and unquestioning obedience, and therefore the creator is not God.

My argument in “God and the Problem of Blameless Moral Ignorance” is that our world is much more like the Garden of Blameless Disobedience or the Garden of Blameless Confusion than the Garden of Eden.

Any creator whom we have an overriding obligation to praise and obey cannot be responsible for or the cause of any of our wrongdoing. God cannot create a state of affairs in which we stumble into evil. But suppose there is a creator who is the architect of a world in which we sometimes blamelessly attribute false commands to her or him. In that case, that creator is responsible for the ensuing evil. Since it is morally better that we don’t disobey God, even unwittingly, or violate one of our fundamental moral obligations, it is not enough that we are not culpable when we do. That we are blameless does not exonerate the creator.

Some theists agree. They deny that God is responsible for our mistaken moral beliefs or for attributing commands to Him that he did not give.

Nonetheless, they also hold that every acceptance of a false command and every fundamental mistaken moral belief is due to sin. They believe God has given us a faculty, the “sensus divinitatis”, which, somewhat like a conscience, provides all who do not hate God with knowledge of His existence and basic demands. According to them, all false beliefs about our fundamental moral obligations and God’s commands originate in pride and a rebellious desire to direct one’s life rather than submit to God’s will. 

However, we have overwhelming evidence that this latter claim is false. While it is undoubtedly the case that human beings often knowingly and willingly do what is wrong, there are also many instances in which people do what is wrong while sincerely aiming at the good and fulfilling God’s will.

Consider the following example (I discuss more in the paper). Some theists believe God has commanded them to provide women with abortions under certain circumstances. Others think that God has forbidden abortion in every instance. Can this difference in belief be attributed, in every instance, to a hatred of God? Surely not.

There are a host of cases in which sincere believers, roughly equal in charity and devotional practices, disagree about what our fundamental moral obligations are.

Given the existence of blameless moral ignorance, it is inconceivable that God exists. God cannot be responsible for evil which serves no greater moral purpose. Any creator who designs human beings so that they are blamelessly mistaken about what they most ought to do is a lesser god.

There cannot be a morally sufficient reason for either causing or allowing rational agents to do what is, all-things-considered, morally wrong. For the Creator of the world to be worthy of the highest praise and unquestioning obedience, the moral structure of the world must be good, and recognizably so.

Want more?

Read the full article at

About the author

F. J. Elbert received a Ph.D. in philosophy from Vanderbilt University. His research focuses on the implications of blameless fundamental moral disagreement in the fields of ethics, political philosophy, and philosophy of religion.

Posted on

Eliran Haziza – “Assertion, Implicature, and Iterated Knowledge”

Picture of various circles in many sizes and colors, all enclosed within one big, starkly black circle.
“Circles in a Circle” (1923) Wassily Kandinsky

In this post, Eliran Haziza discusses his article recently published in Ergo. The full-length version of Eliran’s article can be found here.

It’s common sense that you shouldn’t say stuff you don’t know. I would seem to be violating some norm of speech if I were to tell you that it’s raining in Topeka when I don’t know it to be true. Philosophers have formulated this idea as the knowledge norm of assertion: speakers must assert only what they know.

Speech acts are governed by all sorts of norms. You shouldn’t yell, for example, and you shouldn’t speak offensively. But the idea is that the speech act of assertion is closely tied to the knowledge norm. Other norms apply to many other speech acts: it’s not only assertions that shouldn’t be yelled, but also questions, promises, greetings, and so on. The knowledge norm, in some sense, makes assertion the kind of speech act that it is.

Part of the reason for the knowledge norm has to do with what we communicate when we assert. When I tell you that it’s raining in Topeka, I make you believe, if you accept my words, that it’s raining in Topeka. It’s wrong to make you believe things I don’t know to be true, so it’s wrong to assert them.

However, I can get you to believe things not only by asserting but also by implying them. To take an example made famous by Paul Grice: suppose I sent you a letter of recommendation for a student, stating only that he has excellent handwriting and attends lectures regularly. You’d be right to infer that he isn’t a good student. I asserted no such thing, but I did imply it. If I don’t know that the student isn’t good, it would seem to be wrong to imply it, just as it would be wrong to assert it.

If this is right, then the knowledge norm of assertion is only part of the story of the epistemic requirements of assertion. It’s not just what we explicitly say that we must know, it’s also what we imply.

This is borne out by conversational practice. We’re often inclined to reply to suspicious assertions with “How do you know that?”. This is one of the reasons to think there is in fact a knowledge norm of assertion. We ask speakers how they know because they’re supposed to know, and because they’re not supposed to say things they don’t know.

The same kind of reply is often warranted not to what is said but to what is implied. Suppose we’re at a party, and you suggest we try a bottle of wine. I say “Sorry, but I don’t drink cheap wine.” It’s perfectly natural to reply “How do you know this wine is cheap?” I didn’t say that this wine was cheap, but I did clearly imply it, and it’s perfectly reasonable to hold me accountable not only to knowing that I don’t drink cheap wine, but also to knowing that this particular wine is cheap.

Implicature, or what is implied, may not appear to commit us to knowing it because implicatures often can be canceled. I’m not contradicting myself if I say in my recommendation letter that the student has excellent handwriting, attends lectures regularly, and is also a brilliant student. Nor is there any inconsistency in saying that I don’t drink cheap wine, and this particular wine isn’t cheap. Same words, but the addition prevents what would have been otherwise implied.

Nevertheless, once an implicature is made (and it’s not made when it’s canceled), it is expected to be known, and it violates a norm if it’s not. So it’s not only assertion that has a knowledge norm, but implicature as well: speakers must imply only what they know. This has an interesting and perhaps unexpected consequence: If there is a knowledge norm for both assertion and implicature, the KK thesis is true.

The KK thesis is the controversial claim that you know something only if you know that you know it. This is also known as the idea that knowledge is luminous.

Why would it be implied by the knowledge norms of assertion and implicature? If speakers must assert only what they know, then any assertion implies that the speaker knows it. In fact, this seems to be why it’s so natural to reply “How do you know?” The speaker implies that she knows, and we ask how. But if speakers must know not only what they assert but also what they imply, then they must assert only what they know that they know. This reasoning can be repeated: if speakers must assert only what they know that they know, then any assertion implies that the speaker knows that she knows it. The speaker must know what she implies. So she must assert only what she knows that she knows that she knows. And so on.

The result is that speakers must have indefinitely iterated knowledge that what they assert is true: they must know that they know that they know that they know …
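The regress can be put schematically. Writing $Kp$ for “the speaker knows that $p$” (standard epistemic-logic shorthand, used here only for illustration, not notation from the article itself):

```latex
\begin{align*}
&\text{Assertion norm: assert } p \text{ only if } Kp\\
&\text{So an assertion of } p \text{ implies } Kp\\
&\text{Implicature norm: imply } q \text{ only if } Kq\\
&\text{So asserting } p \text{ requires } K(Kp) = KKp\\
&\text{The assertion then also implies } KKp\text{, requiring } K(KKp) = KKKp\\
&\text{Iterating: asserting } p \text{ requires } K^{n}p \text{ for every } n \ge 1
\end{align*}
```

And if the KK thesis holds ($Kp \rightarrow KKp$), a single application of $K$ already secures every further iteration, so the demand collapses back into the ordinary knowledge norm.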

This might seem a ridiculously strict norm on assertion. How could anyone ever be in a position to assert anything?

The answer is that if the KK thesis is true, the iterated knowledge norm is the same as the knowledge norm: if knowing entails knowing that you know, then it also entails indefinitely iterated knowledge. So you satisfy the iterated knowledge norm simply by satisfying the knowledge norm. If we must know what we say and imply to be true, then knowledge is luminous.

Want more?

Read the full article at

About the author

Eliran Haziza is a PhD candidate at the University of Toronto. He works mainly in the philosophy of language and epistemology, and his current research focuses on inquiry, questions, assertion, and implicature.

Posted on

Cameron Buckner – “A Forward-Looking Theory of Content”

Self-portrait of Vincent Van Gogh from 1889.
“Self-portrait” (1889) Vincent Van Gogh

In this post, Cameron Buckner discusses the article he recently published in Ergo. The full-length version of Cameron’s article can be found here.

As far as kinds of thing go, representations are awfully weird. They are things that by nature are about other things. Van Gogh’s self-portrait is about Van Gogh; and my memory of breakfast this morning is about some recently-consumed steel-cut oats.

The relationship between a representation and its target implicates history; part of what it is to be a portrait of Van Gogh is to have been crafted by Van Gogh to resemble his reflection in a mirror, and part of what it is to be the memory of my breakfast this morning is to be formed through perceptual interaction with my steel-cut oats.

Mere historical causation isn’t enough for aboutness, though; a broken window isn’t about the rock thrown through it. Aboutness thus also seems to implicate accuracy or truth evaluations. The painting can portray Van Gogh accurately or inaccurately; and if I misremember having muesli for breakfast this morning, then my memory is false. Representation thus also introduces the possibility of misrepresentation.

As if things weren’t already bad enough, we often worry about indeterminacy regarding a representation’s target. Suppose, for example, that Van Gogh’s portrait resembles both himself and his brother Theo, and we can’t decide who it portrays. Sometimes this can be settled by asking about explicit intentions; we can simply ask Van Gogh who he intended to paint. Unfortunately, explicit intentions fail to resolve the content of basic mental states like concepts, which are rarely formed through acts of explicit intent.

To paraphrase Douglas Adams, allowing the universe to contain a kind of thing whose very nature muddles together causal origins, accuracy, and indeterminacy in this way made a lot of people very angry and has widely been regarded as a bad move.

There was a period from 1980 to 1995, which I call the “heyday of work on mental content”, when it seemed like the best philosophical minds were working on these issues and would soon sort them out. Fodor, Millikan, Dretske, and Papineau formed a generation of “philosophical naturalists” who hoped that respectable scientific concepts like information and biological function would definitively address these tensions.

Information theory promised to ground causal origins and aboutness in the mathematical firmament of probability theory, and biological functions promised to harmonize historical origins, correctness, and the possibility of error using the respectable melodies of natural selection or associative learning.

Dretske, for example, held that associative learning bestows neural states with representational functions. By detecting correlations between bodily movements produced in response to external stimuli and rewards (such as the contingency between a rat’s pressing a bar when a light is on and its receipt of a food pellet), instrumental conditioning creates a link between a perceptual state triggered by the light and a motor state that controls bar-pressing movements, causing the rat to reliably press the bar more often when the light is activated. In this case, Dretske says, the neural state of detecting the light indicates that the light is on; when learning recruits this indicator to control bar-pressing movements, it bestows upon it the function of indicating this state of affairs going forward, a function it retains even if it is later triggered in error by something else (thus explaining misrepresentation as well).

This is a lovely way of weaving together causal origins, accuracy, and determinacy, and, like many other graduate students in the 1990s and 2000s, I got awfully excited about it when I first heard about it. Unfortunately, it still doesn’t work. There are lots of quibbles, but the main issue is that, despite appearances, it still has a hard time allowing for a representation to be both determinate and (later) tokened in error.

Figure 1. A diagram of Dretske’s “structuring cause” solution to the problem of mental content. On his view, neural state N is about stimulus conditions F if learning recruits N to cause movements M because of its ability to indicate F in the learning history. In recruiting N to indicate F going forward, Dretske says that it provides a “structuring cause” explanation of behavior; that it indicated F in the past explains why it now causes M. However, if content is fixed in the past in this way, then organisms can later persist in error indefinitely (e.g. token N in the absence of F) without ever changing their representational strategies. On my view, such persistent error provides evidence that the organism doesn’t actually regard tokening N in the absence of F as an error, that F is not actually the content of N (by the agent’s own lights).

I present the argument as a dilemma on the term “indication”. Indication either requires perfect causal covariation or something less. Consider the proverbial frog and its darting tongue: if the frog will also eat lead pellets flicked through its visual field, then its representation can only perfectly covary with some category that includes lead pellets, such as “small, dark, moving speck”. On this ascription, it looks impossible for the frog ever to make a mistake, because all and only small dark moving specks will ever trigger its tongue movements. If, on the other hand, indication during recruitment can be less than perfect, then we could say that the representation means something more intuitively satisfying like “fly”, but then we’ve lost the tight relationship between information theory and causal origins needed to settle indeterminacy, because there are lots of other candidate categories that the representation imperfectly indicated during learning (such as insect, food item, etc.).

This is all pretty familiar ground; what is less familiar is that there is a relatively unexplored “forward-looking” alternative that starts to look very good in light of this dilemma.

To my mind, the views that determine content by looking backward to causal history get into trouble precisely because they do not assign error a role in the content-determination process. On these backward-looking views, error is a mere byproduct of representation: organisms can persist in error indefinitely despite having their noses rubbed in evidence of their mistakes, like the frog that will go on eating BBs until its belly is full of lead.

Representational agents are not passive victims of error; in ideal circumstances, they should react to errors, specifically by revising their representational schemes to make those errors less likely in the future. Part of what it is to have a representation of X is to regard evidence that you’ve activated that representation in the absence of X as a mistake.

Content ascriptions should thus be grounded in the agent’s own epistemic capacities for revising its representations to better indicate their contents in response to evidence of representational error. Specifically, on my view, a representation means whatever it indicates at the end of its likeliest revision trajectory—a view that, not coincidentally, happens to fit very well with a family of “predictive processing” approaches to cognition that have recently achieved unprecedented success in cognitive science and artificial intelligence.

Want more?

Read the full article at

About the author

Cameron Buckner is an Associate Professor in the Department of Philosophy at the University of Houston. His research primarily concerns philosophical issues that arise in the study of non-human minds, especially animal cognition and artificial intelligence. He just finished writing a book (forthcoming from OUP in Summer 2023) that uses empiricist philosophy of mind to understand recent advances in deep-neural-network-based artificial intelligence.

Posted on

Brendan Balcerak Jackson, David DiDomenico, and Kenji Lota – “In Defense of Clutter”

Picture of a cluttered room with books, prints, musical instruments, ceramic containers, and other random objects disorderly covering every bit of surface available.
“Old armour, prints, pictures, pipes, China (all crack’d), 
old rickety tables, and chairs broken back’d” (1882) Benjamin Walter Spiers

In this post, Brendan Balcerak Jackson, David DiDomenico, and Kenji Lota discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Suppose I believe that mermaids are real, and this belief brings me joy. Is it okay for me to believe that mermaids are real? On the one hand, it is tempting to think that if my belief doesn’t harm anyone, then it is okay for me to have it. On the other hand, it seems irrational for me to believe that mermaids are real when I don’t have any evidence or proof to support this belief. Are there standards that I ought to abide by when forming and revising my beliefs? If there are such standards, what are they?

Two philosophical views about the standards that govern what we ought to believe are pragmatism and the epistemic view. Pragmatism holds that our individual goals, desires, and interests are relevant to these standards; according to pragmatists, the fact that a belief brings me joy is a good reason for me to have it. The epistemic view holds that the only relevant considerations are those that speak for or against the truth of the belief; although believing that mermaids are real brings me joy, this is not a good reason because it is not evidence that the belief is true.

Gilbert Harman famously argued for a standard on belief formation and revision that he called ‘The Principle of Clutter Avoidance’:

One should not clutter one’s mind with trivialities (Harman 1986: 12). 

For example, suppose that knowing Jupiter’s circumference would not serve any of my goals, desires, or interests. If I end up believing truly that Jupiter’s circumference is 272,946 miles (perhaps I stumble upon this fact while scrolling through TikTok), am I doing something I ought not to do?

According to Harman, I ought not to form this belief because doing so would clutter my mind. Why waste valuable cognitive resources believing things that are irrelevant to one’s own wellbeing? Harman’s view is that our cognitive resources shouldn’t be wasted in this way, and this is his rationale for accepting the Principle of Clutter Avoidance.

Many epistemologists are inclined to accept Harman’s principle, or something like it. This matters because the principle appears to lend considerable weight to pragmatism over the epistemic view. Picking up on Harman’s ideas about avoiding cognitive clutter, Jane Friedman has recently argued that Harman’s principle has the following potential implication:

Evidence alone doesn’t demand belief, and it can’t even, on its own, permit or justify belief (Friedman 2018: 576). 

Rather, genuine standards of belief revision must combine considerations about one’s interests with more traditional epistemic sorts of considerations. Friedman argues that the need to avoid clutter implies that evidence can be overridden by consideration of our interests: even if your evidence suggests that some proposition is true, Harman’s principle may prohibit you from believing it. According to Friedman, accepting Harman’s principle leads to a picture of rational belief revision that is highly “interest-driven”, according to which our practical interests have a significant role to play.

These are radical implications, in our view, and so we wonder whether Harman’s principle should be accepted. Is it a genuine principle of rational belief revision? Our aim in “In Defense of Clutter” is to argue that it is not. Moreover, we offer an alternative way to account for clutter avoidance that is consistent with the epistemic view.

Suppose that you believe with very good evidence that it will rain and, with equally good evidence, that if it will rain, then your neighbor will bring an umbrella to work. An obvious logical consequence of these two beliefs—one that we may suppose you are able to appreciate—is that your neighbor will bring an umbrella to work.

This information may well be unimportant for you. It may be that no current interest of yours would be served by settling the question of whether your neighbor will bring an umbrella to work. But suppose that in spite of this you ask the question anyway. Having asked it, isn’t it clear that you ought to answer it in the affirmative? At the very least, isn’t it clear that you are permitted to do so? The question has come up, and you can easily see the answer. How can you be criticized for answering it?

In general, if a question comes up, surely it is okay to answer it in whatever way is best supported by your evidence. According to the Principle of Clutter Avoidance, however, you should not answer the question, because this would be to form a belief that doesn’t serve any of your practical interests. This is implausible. The answer to your question clearly follows from beliefs that are well supported by your evidence.

Can we account for the relevance of clutter avoidance without being led to this implausible result? Here is our proposal. Rather than locating the significance of cognitive clutter at the level of rational belief revision, we locate its significance at earlier stages of inquiry.

Philosophers have written extensively on rational belief revision, but comparably little about earlier stages of inquiry; for example, about asking or considering questions, and about the standards that govern these activities. If we zoom out from rational belief revision and reorient our focus on earlier stages of inquiry, we can bring the significance of cognitive clutter into view.

We propose that clutter considerations play a role in determining how lines of inquiry ought to be opened and pursued over time, but they are irrelevant to closing lines of inquiry by forming beliefs.

It is okay to answer a question in whatever way is best supported by one’s evidence, but a thinker makes a mistake when they ask or consider junk questions—questions whose answers will not serve any of their interests. This enables us to take seriously the considerations of cognitive economy that Harman, Friedman, and many others find compelling, without thereby being led to an interest-driven epistemology.

Want more?

Read the full article at


  • Friedman, Jane (2018). “Junk Beliefs and Interest-Driven Epistemology”. Philosophy and Phenomenological Research, 97(3), 568–83.
  • Harman, Gilbert (1986). Change in View. MIT Press.

About the authors

Brendan Balcerak Jackson’s research focuses on natural language semantics and pragmatics, on linguistic understanding and communication, and on reasoning and rationality more generally. He has a PhD in philosophy, with a concentration in linguistics, from Cornell University, and he has worked as a researcher and teacher at various universities in the United States, Australia, and Germany. Since April 2023, he has been a member of the Semantic Computing Research Group at the University of Bielefeld.

David DiDomenico is a Lecturer in the Department of Philosophy at Texas State University. His research interests are in epistemology and the philosophy of mind.

Kenji Lota is a doctoral student at the University of Miami. They are interested in epistemology and the philosophy of language and action.