Justin Capes – “The W-Defense Defended”

In this post, Justin Capes discusses the article he recently published in Ergo. The full-length version of Justin’s article can be found here.

Image of a street sign indicating that expectations and reality go in opposite directions.

A person deserves blame for what she did only if she could have avoided doing it. This principle of alternative possibilities (PAP), as it has come to be known, sounds plausible. But why think it’s true?

David Widerker (2000, 2003) suggests an answer that I find attractive. Widerker’s argument, which is known as the W-defense, goes something like this:

Premise 1: A person deserves blame for what she did only if it would have been reasonable to expect her not to do it.

Premise 2: It would have been reasonable to expect a person not to do what she did only if she could have avoided doing it.

Conclusion: A person deserves blame for what she did only if she could have avoided doing it.
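
(For readers who like to see the logical skeleton, here is one way to sketch the argument’s form. The predicate letters are my shorthand, not Widerker’s: read B(x, a) as “x deserves blame for doing a,” E(x, a) as “it would have been reasonable to expect x not to do a,” and C(x, a) as “x could have avoided doing a.”

$$
\begin{aligned}
\text{Premise 1:}\quad & \forall x\,\forall a\,\big(B(x,a)\rightarrow E(x,a)\big)\\
\text{Premise 2:}\quad & \forall x\,\forall a\,\big(E(x,a)\rightarrow C(x,a)\big)\\
\text{Conclusion:}\quad & \forall x\,\forall a\,\big(B(x,a)\rightarrow C(x,a)\big)
\end{aligned}
$$

The conclusion follows from the premises by a simple chaining of conditionals, so the argument is valid; the interesting question is whether the premises are true.)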

This argument is significant, as it promises to advance a debate many believe has reached an impasse. But does it deliver? Does it yield a convincing argument for PAP? 

Some think not. In a past philosophical life, I complained that Premise 1 is unmotivated (Capes 2010). Others have complained that it requires us to reject intuitively plausible judgments about particular cases (Frankfurt 2003; McKenna 2005, 2008). And still others have complained that Premise 2 depends on the controversial ‘ought’ implies ‘can’ maxim (Fischer 2006).

None of these complaints, though, is legitimate. I respond to each of them in turn.

Start with the complaint (lodged by my past philosophical self) that Premise 1 is unmotivated. This isn’t so. Premise 1 is supported by many of our ordinary and uncontroversial moral practices. Consider, for example, what we do when we want to persuade others that we don’t deserve blame for what we did. One of the most obvious strategies is to argue that expecting us to behave any differently would have been unreasonable – in behaving as we did, we didn’t fail to live up to any reasonable expectations. If we can establish this, either by showing that we didn’t fail to live up to others’ expectations or by showing that, if we did, the expectations in question were unreasonable, that will suffice, it seems, to demonstrate that we aren’t blameworthy for our behavior. Premise 1 thus looks pretty plausible.

Often, though, ideas that sound plausible have counterexamples. Consider the following case (modeled on a famous example from Frankfurt 1969):

Shooter: Jones shoots Smith without hesitation. But if Jones had hesitated, a nefarious neuroscientist would have taken control of Jones’s brain and forced him to shoot Smith, and there is nothing Jones could have done to stop this from happening.

Because Jones shot Smith on his own, without being caused to do so by the neuroscientist, many judge that Jones deserves some blame for shooting Smith, even though he couldn’t have avoided shooting Smith (since the neuroscientist would have forced him to shoot Smith if he hadn’t done so on his own). Thus, many philosophers think cases like Shooter are counterexamples to PAP. 

Many of those same philosophers also think cases like Shooter are counterexamples to Premise 1 of the W-defense. They grant, for the sake of argument, that it wouldn’t have been reasonable to expect Jones to avoid shooting Smith, given that he couldn’t have avoided doing so. However, they claim that Jones deserves some blame for shooting Smith, nonetheless.

So, here’s the situation. Premise 1 is plausible, but it also seems to conflict with our intuitive sense that Jones deserves some blame for what he did. What’s a W-defender to do? 

As I see it, we should have our cake and eat it too. Here’s the recipe. Note that, in the example, Jones shoots Smith on his own (i.e., without being forced to do so), and, although Jones couldn’t have avoided shooting Smith, he could have avoided shooting Smith on his own. For example, he could have thought twice about shooting Smith, hesitating enough to prompt the neuroscientist to intervene and force him to shoot Smith. Moreover, we could reasonably have expected Jones to do just that. We can therefore justly blame Jones for shooting Smith on his own (or for shooting Smith without hesitation), since we could reasonably have expected Jones to avoid doing that. What we can’t justly blame Jones for, though, is shooting Smith, as we couldn’t have reasonably expected him to avoid doing that.

So, Jones does deserve some blame for something in this case, but what he deserves blame for is something he could have avoided and that we could reasonably have expected him to avoid. In this way, we can retain PAP and Premise 1 of the W-defense without having to deny that there is indeed something in Shooter for which Jones deserves some blame.

What about Premise 2? Well, it arguably entails the controversial deontic maxim that ‘ought’ implies ‘can’ (the Maxim, for short), and some see this as reason to reject it (Fischer 2006: 210). However, I argue that the case for Premise 2 is stronger than the case against the Maxim. So, if Premise 2 entails the Maxim, we should accept both claims.

To illustrate this, imagine the following:

Sandy walks by a lake with twenty children in it, all of whom are clearly drowning. She can’t rescue all twenty children; there’s not enough time and no one else around to help.

What would it be reasonable to expect Sandy to do in this situation? The obvious answer is: to save as many of the drowning children as she can. But why isn’t it reasonable to expect her to save all twenty? Here, too, the answer is obvious; it’s because she can’t save all twenty. 

I think we will be hard pressed to account for these judgments without appealing to the general idea (of which Premise 2 is an instance) that expecting something of someone is reasonable only if the person can comply with the expectation. Since the judgments in question are correct, I think we should accept the general idea (and thus Premise 2).

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/5717/.

References

  • Capes, Justin. (2010). “The W-Defense.” Philosophical Studies 150: 61-77.
  • Fischer, John Martin. (2006). My Way. New York: Oxford University Press.
  • Frankfurt, Harry. (1969). “Alternate Possibilities and Moral Responsibility.” Journal of Philosophy 66: 829-839.
  • Frankfurt, Harry. (2003). “Some Thoughts Concerning PAP.” In Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities, eds. David Widerker and Michael McKenna, 339-348. Aldershot, UK: Ashgate Press.
  • McKenna, Michael. (2005). “Where Frankfurt and Strawson Meet.” Midwest Studies in Philosophy 29: 163-180.
  • McKenna, Michael. (2008). “Frankfurt’s Argument Against the Principle of Alternative Possibilities: Looking Beyond the Examples.” Noûs 42: 770-793.
  • Widerker, David. (2000). “Frankfurt’s Attack on Alternative Possibilities: A Further Look.” Philosophical Perspectives 14: 181-201.
  • Widerker, David. (2003). “Blameworthiness and Frankfurt’s Argument Against the Principle of Alternative Possibilities.” In Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities, eds. David Widerker and Michael McKenna, 53-74. Aldershot, UK: Ashgate Press.

About the author

Justin Capes is Associate Professor of Philosophy at Flagler College. He writes on issues in ethics and the philosophy of action, especially those that concern proper responses to wrongdoing.

Gabriel De Marco and Thomas Douglas – Nudge Transparency Is Not Required for Nudge Resistibility

In this post, Gabriel De Marco and Thomas Douglas discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Image of a variety of cakes on display.
“Cakes” (1963) Wayne Thiebaud © National Gallery of Art

Consider,

Food Placement. In order to encourage healthy eating, cafeteria staff place healthy food options at eye-level, whereas unhealthy options are placed lower down. Diners are more likely to pick healthy foods and less likely to pick unhealthy foods than they would have been had foods instead been distributed randomly.

Interventions like this are often called nudges. Though many agree that it is, at least sometimes, permissible to nudge people, there is a thriving debate about when, exactly, it is so.

In the now-voluminous literature on the ethics of nudging, some authors have suggested that nudging is permissible only when the nudge is easy to resist. But what does it take for a nudge to be easy to resist? Authors rarely give accounts of this, yet they often seem to assume what we call

The Awareness Condition (AC). A nudge is easy to resist only if the agent can easily become aware of it.

We think AC is false. In our paper, we mount a more developed argument for this, but in this blog post we simply advance one counterexample and consider one possible response to it.

Here’s the counterexample:

Giovanni and Liliana: Giovanni, the owner of a company, wants his workers to pay for the more expensive, unhealthy snacks in the company cafeteria, so, without informing his office workers, he instructs the cafeteria staff to place these snacks at eye level. While in line at the cafeteria, Liliana (who is on a diet) sees the unhealthy food, and is a bit tempted by it, partly as a result of the nudge. Recognizing the temptation, she performs a relatively easy self-control exercise: she reminds herself of her plan to eat healthily, and why she has it. She thinks about how following a diet is going to be difficult, and once she starts making exceptions, it’s just going to be easier to make exceptions later on. After this, she decides to take the salad and leave the chocolate pudding behind. Although she was aware that she was tempted to pick the chocolate pudding, she was not aware that she was being nudged, nor did she have the capacity to easily become aware of this, since Giovanni went to great lengths to hide his intentions.

Did Liliana resist the nudge? We think so. We also think that the nudge was easily resistible for her, even though she did not have the capacity to easily become aware of the fact that she was being nudged. If you agree, then we have a straightforward counterexample to AC.

In response, someone might argue that, although Liliana resists something, she does not resist the nudge. Rather, she resists the effects of the nudge: the (increased) motivation to pick the chocolate pudding. Resisting the nudge, rather than its effects, requires that one intend to act contrary to the nudge. But Liliana doesn’t intend to do that. Although she intends to pick the healthy option, to pick the salad, or to not pick the chocolate pudding, she does not intend to act contrary to the nudge.

If resisting a nudge requires that one intend to act contrary to the nudge, then Liliana does not resist the nudge, and the counterexample to AC fails. Yet we do not think that resisting a nudge requires that one intend to act contrary to the nudge. While we grant that a way of resisting a nudge is to do so while intending to act contrary to it, and that resisting it in this way requires awareness of the nudge, we do not think that this is the only way to resist a nudge. Partly, we think this because we find it plausible that Liliana, like agents in other similar cases, does resist the nudge.

But further, we think that, if resisting a nudge requires intending to act contrary to the nudge, this will cast doubt on the thought that nudges ought to be easy to resist. Suppose that there are two reasonable ways of understanding “resisting a nudge.” On one understanding, resistance requires that the agent act contrary to the nudge and intend to do so. Liliana does not resist the nudge on this understanding. On a second, broader way of understanding resistance, one need not intend to act contrary to the nudge in order to resist it; it is enough simply to act contrary to the nudge. Liliana does resist the nudge in this way.

Now consider two claims:

The strong claim: A nudge is permissible only if it is easy to act contrary to it with the intention of doing so.

The weak claim: A nudge is permissible only if it is easy to act contrary to it.

Are these claims plausible? We think that the weak claim might be, but the strong claim is not.

Consider again Food Placement. This is a nudge just like Giovanni’s, except that the food placement is intended to get more people to pick the healthy option over the unhealthy one, rather than the reverse. Suppose that, in this version of the case, it is Giovanni who arranges the food, and that he wants to do what is in the best interests of his staff. According to the strong claim, this nudge would be impermissible insofar as his staff cannot easily become aware of it, since one cannot intentionally act contrary to a nudge of which one is unaware. And this is so even though it would be permissible for Giovanni to distribute the foods randomly, with the healthy options ending up at eye level by chance. Moreover, the nudge would remain impermissible even if all of the following are true:

  1. the nudge only very slightly increases the nudgee’s motivation to take the healthy food,
  2. the nudgee acts contrary to this motivation and picks the same unhealthy food she would have picked in the absence of the nudge,
  3. she finds it very easy to act contrary to the nudge in this way,
  4. her acting contrary to the nudge in this way is a reflection of her values or desires, and
  5. her acting contrary to the nudge is the result of normal deliberation which is not significantly influenced by the nudge.

We find it hard to believe that this nudge is impermissible, or even, more weakly, that we have a strong or substantial reason against implementing it.

We think, then, that if nudges have to be easily resistible in order to be ethically acceptable, this will be because something like the weak claim holds. On this view, a nudge can meet this requirement if it is easy for the nudgee to resist it in our broader sense, and this is compatible with it being difficult for the nudgee to become aware of the nudge, as in our Giovanni and Liliana case.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4635/.

About the authors

Gabriel De Marco is a Research Fellow in Applied Moral Philosophy at the Oxford Uehiro Centre for Practical Ethics. His research focuses on free will, moral responsibility, and the ethics of influence.

Tom Douglas is Professor of Applied Philosophy and Director of Research at the Oxford Uehiro Centre for Practical Ethics. His research focuses especially on the ethics of using medical and neuro-scientific technologies for non-therapeutic purposes, such as cognitive enhancement, crime prevention, and infectious disease control. He is currently leading the project ‘Protecting Minds: The Right to Mental Integrity and the Ethics of Arational Influence’, funded by the European Research Council.

Joshua Shepherd and J. Adam Carter – “Knowledge, Practical Knowledge, and Intentional Action”

In this post, Joshua Shepherd and J. Adam Carter discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Picture of a catch in baseball.
“Safe!” (ca. 1937) Jared French © National Baseball Hall of Fame Library

A popular family of views, often inspired by Anscombe, maintains that knowledge of what I am doing (under some description) is necessary for that doing to qualify as an intentional action. We argue that these views are wrong, that intentional action does not require knowledge of this sort, and that the reason is that intentional action and knowledge have different levels of permissiveness regarding modally close failures.

Our argument revolves around a type of case that is similar in some (but not all) ways to Davidson’s famous carbon copier case. Here is one version of the type:

The greatest hitter of all time (call him Pujols) approaches the plate and forms an intention to hit a home run – that is, to hit the ball some 340 feet or more in the air, such that it flies out of the field of play. Pujols believes he will hit a home run, and he has the practical belief, as he is swinging, that he is hitting a home run. As it happens, Pujols’s behavior, from setting his stance and eyeing the pitcher, to locating the pitch, to swinging the bat and making contact with the ball, is an exquisite exercise of control. Pujols hits a home run, and so his belief that he is doing just that is true.

Given the skill and control Pujols has with respect to hitting baseballs, Pujols intentionally hits a home run. (If one thinks hitting a home run is too unlikely, we consider more likely events, like Pujols getting a base hit. If one doesn’t like baseball, we consider other examples.)

But Pujols does not know that he is doing so. For in many very similar circumstances, Pujols does not succeed in hitting a home run. Pujols’s belief that he is hitting a home run is unsafe.

When intentional action is at issue, it is commonly the case that explanations that advert to control sit comfortably alongside the admission that, in nearby cases, the agent fails. Fallibility is a hallmark of human agency, and our attributions of intentional action reflect our tacit sense that some amount of risk, luck, and cooperation from circumstance is often required – even for simple actions.

The same thing is not true of knowledge. When it comes to attributing knowledge, we simply have much less tolerance for luck and for failure in similar circumstances.

One interesting objection to our argument appeals to an Anscombe-inspired take on the kind of knowledge involved in intentional action.

Anscombe famously distinguished between contemplative and non-contemplative forms of knowledge. A central case of non-contemplative knowledge, for Anscombe, is the case of practical knowledge – a special kind of self-knowledge of what the agent is doing that does not simply mirror what the agent is doing, but is somehow involved in its unfolding. The important objection to our argument is that the argument makes most sense if applied to contemplative knowledge, but fails to take seriously the unique nature of non-contemplative, practical knowledge.

We discuss a few different ways of understanding practical knowledge, due to Michael Thompson, Kim Frost, and Will Small. The notion of practical knowledge is fascinating, and there are important insights in these authors’ work. But we think it is not too difficult to apply our argument to the claim that practical knowledge is necessary for intentional action.

Human agents sometimes know exactly how to behave, they make no specific mistake, and yet they fail. Sometimes they behave in indistinguishable ways, and they succeed. Most of the time, human agents behave imperfectly, but there is room for error, and they succeed. The chance involved in intentional action is incompatible with both contemplative and non-contemplative knowledge.

We also discuss a probabilistic notion of knowledge due to Sarah Moss (and an extension of it to action by Carlotta Pavese), and whether it might be of assistance. It won’t.

Consider Ticha, the pessimistic basketball player.

Ticha significantly underrates herself and her chances, even though she is quite a good shooter. She systematically forms beliefs about her chances that are false, believing that success is unlikely when it is likely. When Ticha lines up a shot that has, say, a 50% chance of success, she believes that the chances are closer to 25%. Ticha makes the shot. 

Was Ticha intentionally making the shot, and did she intentionally make it? Plausibly, yes.

Did Ticha have probabilistic knowledge along the way? Plausibly, no, since her probabilistic belief was false.

The moral of our paper, then, has implications for how we understand the essence of intentional action. We contrast two perspectives on this.

The first is an angelic perspective that sees knowledge of what one is doing as of the essence of what one is intentionally doing, that limns agency by emphasizing powers of rationality and the importance of self-consciousness, and that views the typical case of intentional action as one in which the agent’s success is very close to guaranteed, resulting from the perfect exercise of agentive capacities.

The second is an animal perspective that emphasizes the limits of our powers of execution, planning, and perception, and thus emphasizes the need for agency to involve special kinds of mental structure, as well as a range of tricks, techniques, plans, and back-up plans.

We think the natural world provides more insight into the nature of agency, and of intentional action, than the sources that motivate the angelic perspective. We also think there is room within the animal perspective for a proper philosophical treatment of knowledge-in-action. But that’s a separate conversation.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2277/.

About the authors

Joshua Shepherd is ICREA Research Professor at Universitat Autònoma de Barcelona, and PI of Rethinking Conscious Agency, funded by the European Research Council. He works on issues in the philosophy of action, psychology, and neuroethics. His most recent book, The Shape of Agency, is available open access from Oxford University Press.

J. Adam Carter is Professor in Philosophy at the University of Glasgow. His research is mainly in epistemology, with special focus on virtue epistemology, know-how, cognitive ability, intentional action, relativism, social epistemology, epistemic luck, epistemic value, group knowledge, understanding, and epistemic defeat.