Gabriel De Marco and Thomas Douglas – Nudge Transparency Is Not Required for Nudge Resistibility

In this post, Gabriel De Marco and Thomas Douglas discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Image of a variety of cakes on display.
“Cakes” (1963) Wayne Thiebaud © National Gallery of Art


Food Placement. In order to encourage healthy eating, cafeteria staff place healthy food options at eye-level, whereas unhealthy options are placed lower down. Diners are more likely to pick healthy foods and less likely to pick unhealthy foods than they would have been had foods instead been distributed randomly.

Interventions like this are often called nudges. Though many agree that it is, at least sometimes, permissible to nudge people, there is a thriving debate about when, exactly, it is so.

In the now-voluminous literature on the ethics of nudging, some authors have suggested that nudging is permissible only when the nudge is easy to resist. But what does it take for a nudge to be easy to resist? Authors rarely give accounts of this, yet they often seem to assume what we call

The Awareness Condition (AC). A nudge is easy to resist only if the agent can easily become aware of it.

We think AC is false. In our paper, we mount a more developed argument for this, but in this blog post we simply advance one counterexample and consider one possible response to it.

Here’s the counterexample:

Giovanni and Liliana: Giovanni, the owner of a company, wants his workers to pay for the more expensive, unhealthy snacks in the company cafeteria, so, without informing his office workers, he instructs the cafeteria staff to place these snacks at eye level. While in line at the cafeteria, Liliana (who is on a diet) sees the unhealthy food, and is a bit tempted by it, partly as a result of the nudge. Recognizing the temptation, she performs a relatively easy self-control exercise: she reminds herself of her plan to eat healthily, and why she has it. She thinks about how following a diet is going to be difficult, and once she starts making exceptions, it’s just going to be easier to make exceptions later on. After this, she decides to take the salad and leave the chocolate pudding behind. Although she was aware that she was tempted to pick the chocolate pudding, she was not aware that she was being nudged, nor did she have the capacity to easily become aware of this, since Giovanni went to great lengths to hide his intentions.

Did Liliana resist the nudge? We think so. We also think that the nudge was easily resistible for her, even though she did not have the capacity to easily become aware of the fact that she was being nudged. If you agree, then we have a straightforward counterexample to AC.

In response, someone might argue that, although Liliana resists something, she does not resist the nudge. Rather, she resists the effects of the nudge: the (increased) motivation to pick the chocolate pudding. Resisting the nudge, rather than its effects, requires that one intends to act contrary to the nudge. But Liliana doesn’t intend to do that. Although she intends to pick the healthy option, to pick the salad, or to not pick the chocolate pudding, she does not intend to act contrary to the nudge.

If resisting a nudge requires that one intend to act contrary to it, then Liliana does not resist the nudge, and the counterexample to AC fails. Yet we deny that resisting a nudge requires any such intention. We grant that one way of resisting a nudge is to act contrary to it with that intention, and that resisting it in this way requires awareness of the nudge; but we do not think that this is the only way to resist a nudge. Partly, we think this because we find it plausible that Liliana, like the agents in other similar cases, does resist the nudge.

But further, we think that, if resisting a nudge requires intending to act contrary to the nudge, this will cast doubt on the thought that nudges ought to be easy to resist. Suppose that there are two reasonable ways of understanding “resisting a nudge.” On one understanding, resistance requires that the agent acts contrary to the nudge and intends to do so. Liliana does not resist the nudge on this understanding. On a second, broader way of understanding resistance, one need not intend to act contrary to the nudge in order to resist it; it is enough simply to act contrary to the nudge. Liliana does resist the nudge in this way.

Now consider two claims:

The strong claim: A nudge is permissible only if it is easy to act contrary to it with the intention of doing so.

The weak claim: A nudge is permissible only if it is easy to act contrary to it.

Are these claims plausible? We think that the weak claim might be, but the strong claim is not.

Consider again Food Placement. This is a nudge just like Giovanni’s, except that the food placement is intended to get more people to pick the healthy food option over the unhealthy one, rather than the reverse. In this version of the case, Giovanni wants to do what is in the best interests of his staff. According to the strong claim, this nudge would be impermissible insofar as his staff cannot easily become aware of it. And this is so even though it would be permissible for Giovanni to put the healthy foods at eye level randomly. Moreover, it would remain so even if all of the following are true:

  1. the nudge only very slightly increases the nudgee’s motivation to take the healthy food,
  2. the nudgee acts contrary to this motivation and picks the same unhealthy food she would have picked in the absence of the nudge,
  3. she finds it very easy to act contrary to the nudge in this way,
  4. her acting contrary to the nudge in this way is a reflection of her values or desires, and
  5. her acting contrary to the nudge is the result of normal deliberation which is not significantly influenced by the nudge.

We find it hard to believe that this nudge is impermissible or, even more weakly, that we have a strong or substantial reason against implementing it.

We think, then, that if nudges have to be easily resistible in order to be ethically acceptable, this will be because something like the weak claim holds. On this view, a nudge can meet this requirement if it is easy for the nudgee to resist it in our broader sense, and this is compatible with it being difficult for the nudgee to become aware of the nudge, as in our Giovanni and Liliana case.

Want more?

Read the full article at

About the authors

Gabriel De Marco is a Research Fellow in Applied Moral Philosophy at the Oxford Uehiro Centre for Practical Ethics. His research focuses on free will, moral responsibility, and the ethics of influence.

Tom Douglas is Professor of Applied Philosophy and Director of Research at the Oxford Uehiro Centre for Practical Ethics. His research focuses especially on the ethics of using medical and neuro-scientific technologies for non-therapeutic purposes, such as cognitive enhancement, crime prevention, and infectious disease control. He is currently leading the project ‘Protecting Minds: The Right to Mental Integrity and the Ethics of Arational Influence’, funded by the European Research Council.


Cathy Mason – “Reconceiving Murdochian Realism”

In this post, Cathy Mason discusses the article she recently published in Ergo. The full-length version of Cathy’s article can be found here.

A picture of a vase with irises.
“Irises” (1890) Vincent van Gogh

Iris Murdoch’s ethics is filled with discussions of moral reality, moral truth and how things really stand morally. What exactly does she mean by these? Her style is certainly a non-standard philosophical style, and her ideas are remarkably wide-ranging, but it can seem appealing to think that at heart her metaethical commitments largely align with standard realists’. I suggest, however, that this reading of Murdoch is mistaken: her realism amounts to something else altogether.

I take standard realism to be roughly captured by the following definition from Sayre-McCord:

Moral realists hold that there are moral facts, that it is in light of these facts that peoples’ moral judgments are true or false, and that the facts being what they are (and so the judgments being true, when they are) is not merely a reflection of our thinking the facts are one way or another. That is, moral facts are what they are even when we see them incorrectly or not at all. (Sayre-McCord 2005: 40)

Does Murdoch subscribe to this view? It can certainly be tempting to think so. She repeatedly talks about ‘realism’ and ‘objectivity’, and remarks like the following seem well-understood in standard realist terms:

The authority of morals is the authority of truth, that is of reality. (TSG 374)

The ordinary person does not, unless corrupted by philosophy, believe that he creates values by his choices. He thinks that some things really are better than others and that he is capable of getting it wrong. (TSG 380)

Here, Murdoch clearly commits to the idea that some moral claims are true, and that what makes them true is not something to do with the valuer, but something about the world. All this sounds very much like standard realism.

However, it would be a mistake to think that these surface similarities point towards a deeper congruence between Murdoch and standard realists. For a start, realists typically take moral facts to be one kind among many. Just as there are mathematical facts and psychological facts, so too there are moral facts. Yet Murdoch repeatedly insists that all reality is moral—and thus that all facts are in some sense moral facts (e.g. IP 329, OGG 357, MGM 35). Moreover, though Murdoch insists on the truth of some moral claims, she understands the notion of truth very differently from standard realists. Whereas realists typically regard truth as something abstract, Murdoch suggests that it can only be understood in relation to truthfulness and the search for truth. The seeming agreement between Murdoch and standard realists on the truth of some ethical claims thus belies deeper disagreements between them.

What’s more, standard realism is hard to square with some wider views Murdoch holds. First, she suggests that some moral concepts can be genuinely private: fully virtuous agents may have different moral concepts without either of their conceptual schemas being inaccurate or incomplete. Second, she suggests that there can be private moral reasons: moral reasons need not be universal. It is hard to see how there could be room for private moral concepts and reasons within standard realism: either there are facts corresponding to a moral belief, or there are not. If there are, then it is a kind of moral ignorance to ignore such facts. If not, then the belief is simply false. Finally, Murdoch rejects the idea common in standard realism that the moral supervenes on the non-moral, since she suggests that there simply is no non-moral reality.

What, then, does Murdoch have in mind when she discusses realism? In most cases where Murdoch introduces ideas such as realism or objectivity, she is discussing the moral perceiver’s relation to the thing perceived, rather than only talking about the thing perceived. Her realism is a claim about the reality of the moral where reality is understood as that which is discerned by the virtuous perceiver.

Take, for example, the following passages:

[T]he realism (ability to perceive reality) required for goodness is a kind of intellectual ability to perceive what is true, which is automatically at the same time a suppression of self. (OGG 353)

[A]nything which alters consciousness in the direction of unselfishness, objectivity and realism is to be connected with virtue. (TSG 369)

In both of these quotes, Murdoch discusses the relation between a moral perceiver and the thing perceived. Realism or objectivity is talked of not as a metaphysical feature of objects, properties or facts, but as a feature of moral agents who are epistemically engaged with the world.

Of course, the standard realist might allow that there is such a thing as realism as a feature of a moral perceiver, and understand this in terms of accessing facts or properties which independently exist. Yet this ordering of explanations is ruled out by Murdoch’s insistence that reality itself is a normative (moral) concept. What is objectively real, for Murdoch, cannot be understood apart from ethics, apart from the essentially human activity of seeking to understand the world which is subject to moral evaluation. This is not to suggest that reality is a solely moral concept: it is also linked to truth, to how the world is. But it is to suggest that a conception of how the world is, of reality, must be essentially ethical.

What kind of relation, then, must the realistic observer stand in to the thing observed? Murdoch suggests that no non-moral answer can be given here, no description that demarcates the realistic stance in an ethically neutral way. However, a description can be given in rich ethical terms. To be realistic is best understood as doing justice to the thing one is confronted with, being faithful to the reality of it, being truthful about it, and so on. All of these terms capture the idea that perception can be genuinely cognitive, whilst at the same time being a fundamentally ethical task.

Want more?

Read the full article at


  • Murdoch, Iris (1999). “The Idea of Perfection”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (299–337). Penguin. [IP]
  • Murdoch, Iris (1999). “On God and Good”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (337–63). Penguin. [OGG]
  • Murdoch, Iris (1999). “The Sovereignty of Good Over Other Concepts”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (363–86). Penguin. [TSG]
  • Murdoch, Iris (2012). Metaphysics as a Guide to Morals. Vintage Digital. [MGM]
  • Sayre-McCord, Geoffrey (2005). “Moral Realism”. In David Copp (Ed.), The Oxford Handbook of Ethical Theory (39–62). Oxford University Press.

About the author

Cathy Mason is an Assistant Professor in Philosophy at the Central European University (Vienna). She is currently working on a book on Iris Murdoch’s ‘metaethics’, as well as some ideas concerning the ethics of friendship.


Victor Lange and Thor Grünbaum – “Measurement Scepticism, Construct Validation, and Methodology of Well-Being Theorising”

A young pregnant woman is holding a small balance for weighing gold. In front of her is a jewelry box and a mirror; on her right, a painting of the last judgment.
“Woman Holding a Balance” (c. 1664) Johannes Vermeer

In this post, Victor Lange and Thor Grünbaum discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Many of us think that decisions and actions are justified, at least partially, in relation to how they affect the well-being of the involved individuals. Consider how politicians and lawmakers often justify, implicitly or explicitly, their policy decisions and acts by reference to the well-being of citizens. In more radical terms, one might be an ethical consequentialist and claim that well-being is the ultimate justification of any decision or action.

It would therefore be wonderful if we could precisely measure the well-being of individuals. Contemporary psychology and social science contain a wide variety of scales for this purpose. Most often, these scales measure well-being by self-reports: for example, subjects rate the degree to which they judge or feel satisfied with their own lives, or they report the ratio of positive to negative emotions. Yet, even though such scales have been widely adopted, many researchers express scepticism about whether they actually measure well-being at all. In our paper, we label this view measurement scepticism about well-being.

Our aim is not to develop or motivate measurement scepticism. Instead, we consider a recent and interesting reply to such scepticism, put forward by Anna Alexandrova (2017; see also Alexandrova and Haybron, 2016). According to Alexandrova, we can build an argument against measurement scepticism by employing a standard procedure of scientific psychology called construct validation. 

Construct validation is a psychometric procedure. Researchers use the procedure to assess the degree to which a scale actually measures its intended target phenomenon. If psychologists and social scientists have a reliable procedure to assess the degree to which a scale really measures what it is intended to measure, it seems obvious that we should use it to test well-being measurements. For the present purpose, let us highlight two key aspects of the procedure. 

First, construct validation utilises convergent and discriminant correlational patterns between the scores of various scales as a source of evidence. Convergent correlations concern the relations between scores on the target scale (intended to measure well-being) and scores on other scales (assumed to measure either well-being or some closely related phenomenon, such as wealth or physical health). Discriminant correlations concern relations between scores on the target scale and scores on scales that we expect to measure phenomena unrelated to well-being (e.g., scales measuring perceptual acuity); here the expectation is that no significant correlation will be found. When assessing the construct validity of a scale, researchers consider whether it exhibits attractive convergent correlations (whether subjects with high scores on the target well-being scale also score high on physical health, for example) and attractive discriminant correlations (whether subjects’ scores on the target well-being scale are, as expected, uncorrelated with perceptual acuity).

Second, the examination of correlational patterns depends on theory. Initially, we need a theory to build our scale (for instance, a theory of how well-being is expressed in the target population). Moreover, we need a theory to tell us what correlations we should expect (i.e. how answers on our scale should correlate with other scales). This means that, when engaging in construct validation, researchers test a scale and its underlying theory holistically. That is, the construct validation of the target scale involves testing both the scale and the theory of well-being that underlies it. Consequently, the procedure of construct validation requires that researchers remain open to revising their underlying theory if they persistently observe the wrong correlational patterns. Given this holistic nature of the procedure, correlational patterns might lead to revisions of one’s theory of well-being, perhaps even to abandoning it. 

The question now is this: Does the procedure of construct validation provide a good answer to measurement scepticism about well-being? While we acknowledge that for many psychological phenomena (e.g., intelligence) the procedures of construct validation might provide a satisfying reply to various forms of measurement scepticism, things are complicated with well-being. Here the normative nature of well-being rears its philosophical head. We argue that an acceptable answer to the question depends on the basic assumptions about the methodology of well-being theorising. Let us clarify by distinguishing between two methodological approaches.

First, methodological naturalism about well-being theorising claims that we should theorise about well-being in the same way we investigate any other natural phenomenon, namely, by the ordinary inductive procedures of scientific investigation. Consequently, our theory of well-being should be open to revision on empirical grounds. Second, methodological non-naturalism claims that theorising about well-being should be limited to the methods known from traditional (moral) philosophy. The question of well-being is a question about what essentially and non-derivatively makes a person’s life go best. Well-being has an ineliminable normative or moral nature. Hence, the question of what well-being is, is a question only for philosophical analysis.

The reader might see the problem now. Since construct validation requires openness to theory revision by correlational considerations, it is a procedure that only a methodological naturalist can accept. Consequently, if measurement scepticism is motivated by a form of non-naturalism, we cannot reject it by using construct validation. Non-naturalists will not accept that theorising about well-being can be a scientific and empirical project. This result is all the more important because many proponents of measurement scepticism seem to be methodological non-naturalists.  

In conclusion, if justifying an action or a social policy over another often requires assessing consequences for well-being, then scepticism about measurement of well-being becomes an important obstacle. We cannot address this scepticism head-on with the procedures of construct validation. Such procedures assume something the sceptic might not accept, namely, that our theory of well-being should be open to empirical revisions. Instead, we need to start by making our methodological commitments explicit. 

Want more?

Read the full article at


  • Alexandrova, Anna (2017). A Philosophy for the Science of Well-Being. Oxford University Press. 
  • Alexandrova, Anna and Daniel M. Haybron (2016). “Is Construct Validation Valid?” Philosophy of Science, 83(5), 1098–109. 

About the authors

Victor Lange is a PhD fellow at the Section for Philosophy and a member of the CoInAct group at the Department of Psychology, University of Copenhagen. His research focuses on attention, meditation, psychotherapy, action control, mental action, and psychedelic-assisted therapy. He is part of the platform Regnfang, which publishes podcasts about the sciences of the mind.

Thor Grünbaum is an associate professor at the Section for Philosophy and the Department of Psychology, University of Copenhagen. He is head of the CoInAct research group. His research interests are in philosophy of action (planning, control, and knowledge), philosophy of psychology (explanation, underdetermination, methodology), and cognitive science (sense of agency, prospective memory, action control).


Russ Colton – “To Have A Need”

A man is hanging from the hand of a clock fixed on the exterior wall of a six-story building, risking his life.
Harold Lloyd in the 1923 movie “Safety Last!”

In this post, Russ Colton discusses the article he recently published in Ergo. The full-length version of his article can be found here.

Every day we notice our own needs and the needs of others, and we are moved to address them. We often feel obliged to do so, even for strangers. Whatever is needed seems important in a way that perks and luxuries do not. Some philosophers take such observations quite seriously and give need a key role in their moral and political theories. Yet they characterize the concept of need differently, and sometimes not very fully. To help us understand and assess their ideas—but also just for the sake of improving our understanding of a common concept that enters into everyone’s practical and moral thinking—I want to try to say as clearly as possible what it means to have a need. My paper is focused on this task of conceptual clarification.

Throughout, I am concerned with a certain kind of need—welfare need, as I call it, which is the need for whatever promotes a certain minimum level of life quality, like the need for air, education, or self-confidence. This differs from a goal need (aka “instrumental need”), which is a need for whatever is required to achieve some goal—whether that goal is good for you or not—like the need for a bottle opener or a bank loan.

It is well-known that sometimes we say a person needs something when they have neither a welfare need nor a goal need for it: for example, “The employee needs to be fired.” In such cases, however, we can readily deny that the person has the need—the employee does not have a need to be fired. By contrast, in cases of welfare or goal need, the person has the need. Thus, insofar as we are interested in need because of its connection to human welfare, the specific concept of having a need may be more important than needing, which is why I focus on the former. In considering examples that test my analysis, it is best to think in terms of having a need.

Among philosophers, perhaps the most popular gloss on welfare need is this: to need something is to require it in order to avoid harm. This idea is approximately correct, but it needs improvement. The relevant notions of requirement and harm must be pinned down, and the idea must be broadened, since people also need what is required to reduce danger, like vaccines and seatbelts. To make these improvements, I offer two analyses of having a need—one that captures the original intuition about harm avoidance, and a broader one that captures the concept in full by covering both harms and dangers.

David Wiggins (Needs, Values, Truth) is the only theorist who has tried to clarify with precision the requirement aspect of the harm-avoidance idea. In broad strokes, his view is this: I need to have X if and only if, under the present circumstances, necessarily, if I avoid harm, then I have X, where necessity here is constrained by what is “realistically conceivable.”

This idea has a number of problems. One of the most serious arises when we have a need for some future X that will be unmet. If a non-actual X can count as realistically conceivable, there seems to be nothing preventing the non-actual possibility that, even without it, whatever harmful process was headed toward us is eventually thwarted by other means, leaving us unharmed. But that means I can have a need for X even though I could avoid harm without it.

Another problem is that often, when we’re in a pinch in a given circumstance, we view multiple things that could save us as needed. If I were short of money to pay rent at the end of the month, then each of the following assertions would be reasonable: I need more money in my bank account; I need a friend to lend me money; I need the landlord to give me more time. But on Wiggins’s necessity approach, given the circumstances, I can need only one X, which will have to be the disjunction of all potential rent solutions.

I argue that these (and other) problems are readily avoided with a counterfactual-conditional approach, along these lines: I have a need for something when, without it, my life would be (in some sense) harmed.

Understanding the relevant notion of harm requires attending to how we balance positive and negative effects on welfare over time. I explore this in the paper and conclude that when you have a need for something, your life from then on would be better on the whole, and less unsatisfactory for some period, with it than without it.

There will be different intuitions about what counts as unsatisfactory. My analysis is neutral among these, but I do make a case for the claim that, on our most ordinary conception of need, the relevant sense of “unsatisfactory” is “not good”. With this idea in hand, my analysis implies a very natural idea: if you lack what you need, your life for a time will not be good and will be worse than it would otherwise be, and this loss will not be outweighed by any benefit.

Finally, I extend the analysis to our needs for what makes us safer, things without which we would be in more danger independently of whether we would be harmed. This is challenging because many present needs are for future benefits, and the risks relevant to our future welfare can change during the interval. Fortunately, there are easy ways to address the relevant issues so that the analysis remains quite simple. Roughly put: I now have a need for X if and only if it is now highly probable that, at the time of X, the expected value (quality) of my life from now on would, for some period, be less unsatisfactory with X than without, and would on the whole be higher.

Want more?

Read the full article at

About the author

Russ Colton received his PhD from the University of Massachusetts Amherst. His current research interests are primarily in ethics.


Corey Dethier – “Interpreting the Probabilistic Language in IPCC Reports”

A young sibyl (sacred interpreter of the word of god in pagan religions) argues with an old prophet (sacred interpreter of the word of god in monotheistic religions). It looks as if the discussion will go on for a long while.
Detail of “A sibyl and a prophet” (ca. 1495) Andrea Mantegna

In this post, Corey Dethier discusses his article recently published in Ergo. The full-length version of Corey’s article can be found here.

Every few years, the Intergovernmental Panel on Climate Change (IPCC) releases reports on the current status of climate science. These reports are massive reviews of the existing literature by the most qualified experts in the field. As such, IPCC reports are widely taken to represent our best understanding of what the science currently tells us. For this reason, the IPCC’s findings are important, as is their method of presentation.

The IPCC typically qualifies its findings using different scales. In its 2013 report, for example, the IPCC says that the sensitivity of global temperatures to increases in CO2 concentration is “likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence) and very unlikely greater than 6°C (medium confidence)” (IPCC 2013, 81).

You might wonder what exactly these qualifications mean. On what grounds does the IPCC say that something is “likely” as opposed to “very likely”? And why does it assign “high confidence” to some claims and “medium confidence” to others? If you do wonder about this, you are not alone. Even many of the scientists involved in writing the IPCC reports find these qualifications confusing (Janzwood 2020; Mach et al. 2017). My recent paper – “Interpreting the Probabilistic Language in IPCC Reports” – aims to clarify this issue, with particular focus on the IPCC’s appeal to the likelihood scale.

Traditionally, probabilistic language such as “likely” has been interpreted in two ways. On a frequentist interpretation, something is “likely” when it happens with relatively high frequency in similar situations, while it is “very likely” when it happens with a much greater frequency. On a personalist interpretation, something is “likely” when you are more confident that it will happen than not, while something is “very likely” when you are much more confident.

Which of these interpretations better fits the IPCC’s practice? I argue that neither of them does. My main reason is that both interpretations are closely tied to specific methodologies in statistics. The frequentist interpretation is appropriate for “classical” statistical testing, whereas the personalist interpretation is appropriate when “Bayesian” methods are used. The details about the differences between these methods do not matter for our present purposes. My main point is that climate scientists use both kinds of statistics in their research, and since the IPCC’s report reviews all of the relevant literature, the same language is used to summarize results derived from both methods.

If neither of the traditional interpretations works, what should we use instead? My suggestion is the following: we should understand the IPCC’s use of probabilistic terms more like a letter grade (an A or a B or a C, etc.) than as strict probabilistic claims implying a certain probabilistic methodology.

An A in geometry or English suggests that a student is well-versed in the subject according to the standards of the class. If the standards are sufficiently rigorous, we can conclude that the student will probably do well when faced with new problems in the same subject area. But an A in geometry does not mean that the student will correctly solve geometry problems with a given frequency, nor does it specify an appropriate amount of confidence that you should have that they’ll solve a new geometry problem. 

The IPCC’s use of terms such as “likely” is similar. When the IPCC says that a claim is likely, that’s like saying that it got a C on a very hard test. When the IPCC says that climate sensitivity is “extremely unlikely” to be less than 1°C, that’s like saying that this claim fails the test entirely. In this analogy, the IPCC’s judgments of confidence reflect the experts’ evaluation of the quality of the class or test: “high confidence” means that the experts think that the test was very good. But even when a claim passes the test with full marks, and the test is judged to be very good, this only gives us a qualitative evaluation. Just as you shouldn’t conclude that an A student will get 90% of problems right in the future, you also shouldn’t conclude that something that the IPCC categorizes as “very likely” will happen at least 90% of the time. The judgment has an important qualitative component, which a purely numerical interpretation would miss.

It would be nice – for economists, for insurance companies, and for philosophers obsessed with precision – if the IPCC could make purely quantitative probabilistic claims. At the end of my paper, I discuss whether the IPCC should strive to do so. I’m on the fence: there are both costs and benefits. Crucially, however, my analysis suggests that this would require the IPCC to go beyond its current remit: in order to present results that allow for a precise quantitative interpretation of its probability claims, the IPCC would have to do more than simply summarize the current state of the research. 

Want more?

Read the full article at


  • IPCC (2013). Climate Change 2013: The Physical Science Basis. Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Thomas F. Stocker, Dahe Qin, et al. (Eds.). Cambridge University Press.
  • Janzwood, Scott (2020). “Confident, Likely, or Both? The Implementation of the Uncertainty Language Framework in IPCC Special Reports”. Climatic Change 162, 1655–75.
  • Mach, Katharine J., Michael D. Mastrandrea, et al. (2017). “Unleashing Expert Judgment in Assessment”. Global Environmental Change 44, 1–14.

About the author

Corey Dethier is a postdoctoral fellow at the Minnesota Center for Philosophy of Science. He has published on a variety of topics relating to epistemology, rationality, and scientific method, but his main research focus is on epistemological and methodological issues in climate science, particularly those raised by the use of idealized statistical models to answer questions about climate change.

Posted on

Christine Bratu – “How (Not) To Wrong Others with Our Thoughts”

image of two cherubs thinking
Detail of the San Sisto Madonna (c. 1513-1514) Raphael

In this post, Christine Bratu discusses her article recently published in Ergo. The full-length version of Christine’s article can be found here.

Imagine Jim attends a fancy reception and, seeing a person of color standing around in a tuxedo, concludes that they are a waiter (when, in fact, they, too, are a guest). Alternatively, picture Anna who, during a prestigious conference, sees a young woman setting up a laptop at the lectern and concludes that she is part of the organizing team (when, in fact, this woman is the renowned professor who will give the keynote lecture). In many of us, cases like these elicit the fundamental intuition that there is something morally problematic going on.

Some philosophers have used this intuition to argue for the possibility of doxastic wronging (Basu 2018, 2019a, 2019b; Basu and Schroeder 2019; Keller 2018). Cases like these, they argue, show that we have the moral duty not to have bigoted beliefs about each other. On their interpretation, the situations above are morally troublesome because, by believing classic racist and sexist stereotypes, Jim and Anna violate a duty they have towards their fellow party guest and keynote speaker, respectively. According to proponents of doxastic wronging, positing this morally grounded epistemic duty is the best way to explain our intuition, since in the situations depicted neither protagonist acts in a reprehensible way (in fact, neither of them acts at all!) – it’s their racist and sexist beliefs as such that are the problem.

I think this proposal is intriguing. Group-based discrimination is a serious moral and political problem, and the moral duty not to have bigoted beliefs seems perfectly tailored to strike at its root. Nevertheless, in my article I argue that we should reject the existence of such a duty: there is no such thing as doxastic wronging. I argue for this by presenting what I call the liberal challenge.

I start from the assumption that positing any new, morally grounded epistemic duties comes at a price, because it constitutes a curtailment of our freedom of thought. We should only accept such curtailment if we can thereby gain something comparably important. I then point out three strategies that advocates of doxastic wronging could adopt to convince us that we are gaining something comparably important, and I explain why I think that all three of them fail.

First, the advocates of doxastic wronging could claim that positing a duty not to have bigoted beliefs helps us avoid bigoted actions. This strategy fails, I argue, because we are already under the moral obligation not to act in bigoted ways. If the reason for limiting our freedom of thought is merely to decrease the risk of bigoted actions, then placing us under this new obligation is superfluous.

Second, these philosophers could claim that positing a duty not to have bigoted beliefs helps us avoid the practical vices that bigoted beliefs manifest, such as arrogance. This strategy fails, I argue, because – while we might be morally better, i.e. more virtuous, if we avoided vices like arrogance – we are under no moral obligation to do so.

Third, they could claim that positing a duty not to have bigoted beliefs is necessary to avoid the intrinsic harm of being the object of bigoted beliefs. This third strategy starts off more promisingly, as it is based on a correct observation: most of us desire not to be the objects of bigoted beliefs. People who think about us in bigoted ways frustrate this legitimate desire, and so it seems that they thereby harm us. Yet even if we grant that bigoted beliefs harm their targets, we cannot conclude that the resulting harm is important enough to justify restricting our freedom of thought. People frustrate each other’s legitimate desires all the time. We frustrate our parents’ legitimate desire to see us flourish when we let our talents go to waste, and we frustrate our partners’ legitimate desire to continue the relationship when we break up with them. Cases like these show that frustrating someone’s legitimate desire is not sufficient for our behavior to count as morally impermissible. To make this strategy work, proponents of doxastic wronging must, in addition, argue that the desire not to be the objects of bigoted beliefs is so important that its frustration is morally impermissible. However, I contend that they can only do so by appealing to the impermissibility of either bigoted actions or the vices that bigoted beliefs manifest. In other words, they can only do so by falling back on one of the first two strategies. And since I’ve already shown that those strategies fail, so does this one.

If we reject the duty not to have bigoted beliefs – as I argue we should – what about our initial intuition? What is wrong with Jim’s assumption that a person of color is most likely a waiter rather than a guest, or with Anna’s assumption that a young woman at the conference podium is most likely an organizer rather than the keynote speaker?

It seems to me that the best way to make sense of these cases is to explain them not in terms of doxastic wronging, but rather in terms of doxastic harming. People like Jim and Anna do not violate any obligations they have toward their targets when they think about them in racist or sexist ways. However, they do frustrate their desire not to be the objects of bigoted beliefs, and they thereby harm them. When we reproach people like Jim and Anna for their hurtful thoughts, we are accusing them not of having done something they were morally not allowed to do, but rather of having done something it would have been better not to do (even though they were morally allowed to do it).

The change in perspective I propose does not make light of the morally problematic nature of bigoted beliefs. On the contrary, it ensures that the criticism we level against people who entertain such beliefs hits its mark properly, precisely because it avoids the moralistic overreach of making morally grounded demands on what other people believe.

Want more?

Read the full article at


  • Basu, Rima (2018). “Can Beliefs Wrong?” Philosophical Topics 46 (1): 1–17.
  • Basu, Rima (2019a). “The Wrongs of Racist Beliefs”. Philosophical Studies 176 (9): 2497–515.
  • Basu, Rima (2019b). “What We Epistemically Owe to Each Other”. Philosophical Studies 176 (4): 915–31.
  • Basu, Rima and Mark Schroeder (2019). “Doxastic Wronging”. In Brian Kim and Matthew McGrath (Eds.), Pragmatic Encroachment in Epistemology, 181–205.
  • Keller, Simon (2018). “Belief for Someone Else’s Sake”. Philosophical Topics 46 (1): 19–35.

About the author

Christine Bratu is a professor of philosophy at the University of Göttingen in Germany. She received her PhD in philosophy from the Ludwig-Maximilian University of Munich. Her research interests are in feminist philosophy, moral and political philosophy (especially issues of disrespect and discrimination) and topics at the intersection between ethics and epistemology. 

Posted on

Eyal Tal and Hannah Tierney – “Cruel Intentions and Evil Deeds”

Pop-art depiction of a man and woman riding away in a car with evil intentions
“In the Car” (1963) © Roy Lichtenstein

In this post, Hannah Tierney and Eyal Tal discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Doing the right thing can be difficult. Doing the morally worthy thing can be even harder.

Accounts of moral worth aim to determine the kinds of motivations that elevate merely right actions (actions that happen to conform to the correct normative theory) into morally worthy actions (actions that merit praise or credit).

Some argue that an agent performs a morally worthy action if and only if they do it because the action is morally right (Herman 1981; Jeske 1998; Sliwa 2016; Johnson King 2020). Others argue that a morally worthy action is that which an agent performs because of features that make the action right (Arpaly 2003; Arpaly & Schroeder 2014; Markovits 2010).

What sets these views apart is the kind of motivation each takes to be essential for an action’s moral worth.

When an agent is motivated to do the right thing because of the action’s moral rightness, she has a higher-order motivation to perform this action. When an agent is motivated to do the right thing because of a particular right-making feature of the action, she has a first-order motivation to perform this action. Higher-order theorists (Sliwa 2016; Johnson King 2020) argue that higher-order motivations are necessary and sufficient for moral worth, while first-order motivations are largely irrelevant. In contrast, first-order theorists (Arpaly 2003; Markovits 2010) argue that first-order motivations are necessary and sufficient for moral worth, while higher-order motivations are irrelevant.

In an important sense, higher-order and first-order views of moral worth are diametrically opposed. The motivations that one camp argues are necessary and sufficient for moral worth are the very motivations that the other camp argues are irrelevant.

Nevertheless, proponents of these opposing views share something important. With the exception of Arpaly (2003) and Arpaly & Schroeder (2014), they theorize about the nature of moral worth by focusing mainly on the moral worth of, and praiseworthiness or creditworthiness for, right actions.

Yet each of these properties has a negatively valenced counterpart that attaches to wrong actions. Just as agents can deserve praise or credit for doing the right thing, they can deserve blame or discredit for doing the wrong thing. While the former actions have moral worth, the latter actions have what we will call moral counterworth.

In our paper, we explore the moral counterworth of wrong actions in order to shed new light on the nature of moral worth. Contrary to theorists in both camps, we argue that more than one kind of motivation can affect the moral worth of actions. 

Compare the following cases: 

Selfish Gossip: Cecile learns of a good friend’s embarrassing secret. She knows that it would be wrong to reveal it, and she does not wish to do wrong. While at a party, an opportunity to be the centre of attention arises. Wanting to be popular, Cecile succumbs to temptation and reveals her friend’s secret. 
Cruel Gossip: Sebastian learns of a good friend’s embarrassing secret. He knows that it would be wrong to reveal it, and he does not wish to do wrong. While at a party, an opportunity arises to humiliate his friend by revealing the secret. Wanting to embarrass his friend, Sebastian succumbs to temptation and reveals his friend’s secret.

Though both Cecile and Sebastian are blameworthy for revealing their friend’s secret, they are not equally blameworthy. Sebastian is (much) more blameworthy than Cecile and his action possesses more counterworth than Cecile’s action.

What could explain this difference? The only difference between Cecile and Sebastian lies in their first-order motivations. Cecile’s motivation to reveal her friend’s secret is selfish—she cares more about being popular than her friend’s privacy. But Sebastian’s motivation to tell the secret is cruel—he desires to harm his friend by embarrassing them.

Sebastian’s cruel first-order motivation renders him more blameworthy than Cecile. If this is right, then first-order motivations are not irrelevant to moral counterworth—they can directly contribute to the degree to which an agent is blameworthy. 

Reflecting on cases of wrong actions indicates that higher-order motivations can impact moral counterworth as well.

Compare the case of Selfish Gossip, in which Cecile reveals a friend’s secret in order to be the centre of attention despite having the higher-order motivation not to perform wrong actions, to the following case:

Evil Gossip: Isabelle learns of a good friend’s embarrassing secret. She knows that it would be wrong to reveal it, and she wishes to do wrong. While at a party, an opportunity to be the centre of attention arises. Wanting to both be popular and do wrong, Isabelle reveals her friend’s secret.

While both Cecile and Isabelle are blameworthy for their actions, Isabelle is (much) more blameworthy. The relevant difference between Cecile and Isabelle lies in their higher-order motivations.

Cecile possesses a higher-order motivation not to reveal her friend’s secret—she knows that doing so is wrong and does not want to do the wrong thing. In contrast, Isabelle possesses a higher-order motivation to reveal the secret—she wants to reveal the secret because doing so is wrong. 

We submit that Isabelle’s motivation to do wrong renders her more blameworthy than Cecile. And if we are right that Isabelle’s motivation to do wrong enhances the degree to which she is blameworthy for doing wrong, then higher-order motivations are not irrelevant to moral counterworth. 

From here, we defend the following argument: 

(1)	First-order and higher-order motivations can each affect moral counterworth.
(2)	Moral counterworth and moral worth are relevantly similar, such that the kinds of motivations that affect the former can also affect the latter.
(3)	Therefore, first-order and higher-order motivations can each affect the moral worth of an agent’s action.

In our paper, we defend each premise from potential objections and conclude by explaining how reflection on moral counterworth supports recently developed accounts of moral worth that make room for the relevance of both higher-order and first-order motivations (Isserow 2019, 2020; Portmore 2022; Singh 2020).

Want more?

Read the full article at


  • Arpaly, N. (2003). Unprincipled Virtue: An Inquiry into Moral Agency. Oxford University Press. 
  • Arpaly, N. & Schroeder, T. (2014). In Praise of Desire. Oxford University Press. 
  • Herman, B. (1981). “On the Value of Acting from the Motive of Duty.” Philosophical Review 90(3): 359–382.
  • Isserow, J. (2019). “Moral Worth and Doing the Right Thing by Accident.” Australasian Journal of Philosophy 97: 251–264.
  • Isserow, J. (2020). “Moral Worth: Having it Both Ways.” The Journal of Philosophy 117(10): 529–556. 
  • Jeske, D. (1998). “A Defense of Acting from Duty.” The Journal of Value Inquiry 32(1): 61–74.
  • Johnson King, Z. (2020). “Accidentally Doing the Right Thing.” Philosophy and Phenomenological Research 100(1): 186–206.
  • Markovits, J. (2010). “Acting for the Right Reasons.” The Philosophical Review 119 (2): 201–242. 
  • Portmore, D. (2022) “Moral Worth and our Ultimate Moral Concerns.” Oxford Studies in Normative Ethics, volume 12. 
  • Singh, K. (2020). “Moral Worth, Credit, and Non-Accidentality.”  Oxford Studies in Normative Ethics, volume 10. 
  • Sliwa, P. (2016). “Moral Worth and Moral Knowledge.” Philosophy and Phenomenological Research 93(2): 393–418. 

About the authors

Eyal Tal received his PhD in philosophy from the University of Arizona. He is interested in epistemology, ethics, metaethics, metaphysics, philosophy of psychiatry, and philosophy of science.

Hannah Tierney is Assistant Professor in the philosophy department at the University of California, Davis. She specializes in ethics and metaphysics, and she writes mainly on issues of free will, moral responsibility, and personal identity.

Posted on

Alycia LaGuardia-LoBianco – “Trauma and Compassionate Blame”

Allegoric fresco representing the sufferings of weak mankind, the well-armed strong, compassion and ambition in their quest for happiness.
Detail from the Beethoven Frieze “The Sufferings of Weak Mankind, the Well-Armed Strong, Compassion and Ambition” (1902) Gustav Klimt

In this post, Alycia LaGuardia-LoBianco discusses the article she recently published in Ergo. The full-length version of Alycia’s article can be found here.

When someone we love hurts us, our responses are influenced by our relationship with her: our hurt is tinged with love, care, expectations, and a shared history, among other things. These responses may be further complicated if, in addition, that person’s harmful behavior has been shaped by a traumatic past. Having experienced a traumatic event may partly shape the way a person behaves: a veteran may lash out at her family; a victim of abuse may repeat that abuse on his family. Though this is of course not true of all survivors of trauma, past trauma can shape present behavior. And once we recognize that a loved one’s harmful behavior may be caused by their past trauma, we may think that we shouldn’t blame them for the hurt they caused. After all, shouldn’t the trauma they’ve suffered exempt them from blame?

I argue that the recognition of traumatic histories should have an impact on how we blame loved ones—but not by making blame inappropriate. Rather, these histories should motivate us to take a broader view of that person’s wrongdoing in the context of their traumatic past. It should motivate what I call ‘compassionate blame’: an attitude that considers the person as both someone who has caused harm and someone who has suffered harm. This attitude recognizes the unfortunate reality that someone has been unfairly shaped to commit harms, so that blame for that harm is bound up with compassion for the person who suffered. 

Should we blame those with a traumatic past for their harmful behavior?

When considering traumatic influences on harmful behavior, an intuitive view holds that survivors ought not be blamed for what they’ve done: traumatic histories exempt them from blame. Why might this be the case?

First, we might think that it is inappropriate to blame survivors because they have suffered from the trauma they’ve experienced. To heap blame upon a survivor may seem cruel or callous; they have already endured enough. This reason for exempting survivors is a version of a concern against blaming the victim, and it is admirably merciful.

However, the fact that one has suffered does not bear on whether they are blameworthy for their behavior, even when that suffering is relevantly connected to the subsequent harm committed. The consideration of avoiding a further burden on survivors may have an impact on how we express our blame, but it does not actually change whether survivors are blameworthy. So, the fact that survivors have suffered cannot be an exempting condition for blame.

Second, we might think that survivors ought not be blamed because they did not control the traumatic circumstances they endured. If the conditions that partly shaped a person’s behaviors are outside their control, we may be reluctant to blame them for those behaviors. After all, it seems an intuitive aspect of moral responsibility that we are only responsible for actions over which we have some relevant control. 

Although we should recognize that we are all vulnerable to good and bad luck, we should nonetheless be hesitant to forgo responsibility because of it. Our choices and actions are built out of conditions of our past which are not entirely of our choosing. That we are sometimes responsible for conditions over which we had no control—including the ways our characters have been partly shaped by forces beyond us—is a widespread feature of our lives, and it does not normally undermine responsibility.

Similarly, genuine relationships seem to require a basic expectation of responsibility even among the vicissitudes of luck. Exempting a survivor’s behavior because of their past may result in treating them merely as the product of their trauma, and this would seem to hinder a genuine relationship with them. Moreover, exemption from blame risks undermining the seriousness of the wrong at issue.

Behind these objections is a broad concern about proper regard for survivors. We don’t want to patronizingly reduce survivors to their trauma, or to avoid blaming them in a way that is unfair to their victims, even though we do want to remain sensitive to their past suffering. Our relationship is with the person, not with their past, so we should, first, acknowledge that survivors are responsible for what they have done wrong, and then also ask how the reality of their trauma should impact our response. 

Cultivating an attitude of compassionate blame

It may be tempting to conclude from the foregoing arguments that, because trauma does not exempt, survivors should be straightforwardly blamed. Against this, I suggest that the reality of trauma should impact our blaming practices: we should be sensitive to the trauma endured and the harm committed in an attitude of compassionate blame.

Compassion is an emotion in which “the perception of the other’s negative condition evokes sorrow or suffering in the one who feels the emotion” (Snow 1991: 196), together with a set of beliefs about the other’s suffering (Snow 1991: 198). Blame adds an emotional valence to our beliefs regarding the connection between the survivor’s traumatic circumstances and their harmful behavior. Though they may seem to pull us in different directions, the feelings of compassion and blame are perfectly compatible, and we have complex emotional experiences of this sort all the time.

Compassionate blame allows us to recognize the seriousness of the harms at issue, treat the survivor as a responsible person, and appropriately acknowledge their suffering. It enables us to respond appropriately to a difficult situation in which those who have been hurt go on to hurt others, and to do so in a way that attends to the complex moral features of these relationships.

Want more?

Read the full-length version of this article at


  • Snow, N. E. (1991). Compassion. American Philosophical Quarterly, 28(3), 195–205. 

About the author

Alycia LaGuardia-LoBianco is an Assistant Professor of Philosophy at Grand Valley State University, where she teaches and researches in feminist philosophy, ethics, moral psychology, and the philosophy of psychiatry. She is especially curious about how experiences of oppression, trauma, and mental illness shape personal identity and responsibility.

Posted on

Laura Schroeter and François Schroeter – “Bad News for Ardent Normative Realists?”

Portrait of a man composed by painting on the canvas various objects traditionally associated with fire – such as sticks, wood, guns and other tools – in such a way that they compose a human head.
“Fire” (1566) Giuseppe Arcimboldo

In this post, Laura and François Schroeter discuss their article recently published in Ergo. The full-length version of the article can be found here.

Many metaethicists are attracted to a position Matti Eklund (2017) calls ‘Ardent Normative Realism’. The main motivation behind this position can be illustrated with the help of a couple of examples.

Imagine you are disagreeing with a friend about whether abortion at 20 weeks is morally wrong. Imagine further that the two of you have a very different understanding of what it takes for an action to be morally wrong: you think that morality is determined by God’s law, while your friend does not. Despite this divergence, you two seem to be genuinely disagreeing about the same topic. If we interpreted you as talking past each other, we would be failing to take the normative authority of morality seriously (Enoch 2011). Both of you are interested in what is morally wrong tout court, not what is morally wrong according to the idiosyncratic standards of some individual.

Similarly, if we imagine two separate communities debating the same issue, we would have to say that they are interested in finding out whether abortion at 20 weeks is morally wrong tout court, rather than whether it is wrong according to the normative standards specific to the community. Settling for less would deflate the normative authority of morality.

In order to vindicate these intuitions, proponents of Ardent Normative Realism endorse a strong form of metaphysical realism in the moral domain. According to the Ardent Normative Realist, “reality itself favors certain ways of valuing and acting” (Eklund 2017: 1). If two communities disagree on moral questions, they cannot both be getting it right. At most, one of them is “limning” the normative structure of reality (Eklund 2017: 22).

Now, suppose we grant that reality does indeed favor certain ways of valuing and acting. The Ardent Realist still faces an important problem. Given their radically different understandings of what makes an action morally wrong, how is it possible for individuals and communities to pick out the same reference with their moral terms? Contrast moral terms with a term like ‘bachelor’. The term ‘bachelor’ has the same reference, even when used by different individuals, because we all have very similar empirical criteria for who counts as a bachelor: unmarried eligible males. But imagine we introduce a new term, ‘nuba’, and different individuals have radically divergent views about what it takes for something to be a nuba. How can the term ‘nuba’ pick out the same property when it is used by individuals who rely on different application criteria?

To address this problem, many Ardent Realists have been tempted by a thesis Eklund calls ‘Referential Normativity’:

Two predicates or concepts conventionally associated with the same normative role are thereby determined to have the same reference. (Eklund 2017: 10)

Imagine that all it takes to count as competent with our new term, ‘nuba’, is that the term plays the same normative role in one’s psychology that English speakers associate with ‘morally wrong’. For instance, if a speaker judges that an action is nuba, they will be disposed to avoid performing that action, or to feel guilt if they do perform it. According to Referential Normativity, all it takes for speakers to pick out the same reference is that they take ‘nuba’ to play this normative role; their divergent empirical criteria for classifying actions as ‘nuba’ are strictly irrelevant to fixing its reference.

Obviously, it would be great news for Ardent Realists if Referential Normativity were true. However, we argue that Referential Normativity is just too good to be true. To show what’s problematic about it, we need to step back and ask foundational questions about how reference is determined. There is much controversy in the philosophical literature concerning this topic, but we seek to sidestep those divergences by focusing on points of agreement among theorists of reference determination.

What is the point of referential ascriptions? We suggest that ascribing a specific reference to an adjective like ‘nuba’ must:

(i) help explain the reasoning and actions of subjects using the term ‘nuba’, and

(ii) set truth-conditions for assessing whether assertions and beliefs involving ‘nuba’ are correct. 

Suppose, for instance, that we interpret competent speakers’ use of ‘nuba’ as attributing the property of being loud. This interpretation flouts both (i) and (ii). It is not explanatory because most users of ‘nuba’ will not associate the term’s defining normative role with all and only loud actions, and so attributing this reference will not help to explain their reasoning and actions. And it does not set a plausible standard of correctness because there is no plausible story about why all competent users would be failing to live up to their semantic commitments whenever they fail to apply ‘nuba’ to loud actions. We must conclude that the interpretation of ‘nuba’ as referring to being loud is mistaken.

In the full-length version of our paper, we examine different attempts to reconcile Referential Normativity with constraints (i) and (ii). We argue that these attempts all fail. In a nutshell, Referential Normativity tries to pull a rabbit out of a hat. The mere normative role associated with a term like ‘morally wrong’ is insufficient to ground the ascription of any empirically instantiated property as its reference. 

Want more?

Read the full article at


  • Eklund, M. (2017). Choosing Normative Concepts. Oxford, Oxford University Press.
  • Enoch, D. (2011). Taking Morality Seriously: A Defense of Robust Realism. Oxford, Oxford University Press.

About the authors


Laura and François Schroeter are Associate Professors of Historical and Philosophical Studies at the University of Melbourne. 

Laura received her PhD from the University of Michigan. After that, she took up a postdoctoral fellowship at the Research School of the Social Sciences at the Australian National University. She joined the University of Melbourne in 2008. Her research focuses on the philosophy of language, the philosophy of mind, and metaethics. She has written extensively about two-dimensional semantics, concept individuation, and normative concepts.

François received his PhD from the University of Fribourg. He joined the Philosophy Department at Melbourne in 2003, after spending time at the University of Michigan and at the Research School of Social Sciences at the Australian National University. He is interested in normative concepts, metaethics, and moral psychology.