
Joshua Shepherd and J. Adam Carter – “Knowledge, Practical Knowledge, and Intentional Action”

“Safe!” (ca. 1937) Jared French © National Baseball Hall of Fame Library

In this post, Joshua Shepherd and J. Adam Carter discuss the article they recently published in Ergo. The full-length version of their article can be found here.

A popular family of views, often inspired by Anscombe, maintains that knowledge of what I am doing (under some description) is necessary for that doing to qualify as an intentional action. We argue that these views are wrong: intentional action does not require knowledge of this sort, because intentional action and knowledge differ in how tolerant they are of modally close failures.

Our argument revolves around a type of case that is similar in some (but not all) ways to Davidson’s famous carbon copier case. Here is one version of the type:

The greatest hitter of all time (call him Pujols) approaches the plate and forms an intention to hit a home run – that is, to hit the ball some 340 feet or more in the air, such that it flies out of the field of play. Pujols believes he will hit a home run, and he has the practical belief, as he is swinging, that he is hitting a home run. As it happens, Pujols’s behavior, from setting his stance and eyeing the pitcher, to locating the pitch, to swinging the bat and making contact with the ball, is an exquisite exercise of control. Pujols hits a home run, and so his belief that he is doing just that is true.

Given the skill and control Pujols has with respect to hitting baseballs, Pujols intentionally hits a home run. (If one thinks hitting a home run is too unlikely, we consider more likely events, like Pujols getting a base hit. If one doesn’t like baseball, we consider other examples.)

But Pujols does not know that he is doing so. For in many very similar circumstances, Pujols does not succeed in hitting a home run. Pujols’s belief that he is hitting a home run is unsafe.

When intentional action is at issue, explanations that advert to control commonly sit comfortably alongside the admission that in nearby cases, the agent fails. Fallibility is a hallmark of human agency, and our attributions of intentional action reflect our tacit sense that some amount of risk, luck, and cooperation from circumstance is often required – even for simple actions.

The same thing is not true of knowledge. When it comes to attributing knowledge, we simply have much less tolerance for luck and for failure in similar circumstances.

One interesting objection to our argument appeals to an Anscombe-inspired take on the kind of knowledge involved in intentional action.

Anscombe famously distinguished between contemplative and non-contemplative forms of knowledge. A central case of non-contemplative knowledge, for Anscombe, is the case of practical knowledge – a special kind of self-knowledge of what the agent is doing that does not simply mirror what the agent is doing, but is somehow involved in its unfolding. The important objection to our argument is that the argument makes most sense if applied to contemplative knowledge, but fails to take seriously the unique nature of non-contemplative, practical knowledge.

We discuss a few different ways of understanding practical knowledge, due to Michael Thompson, Kim Frost, and Will Small. The notion of practical knowledge is fascinating, and there are important insights in these authors’ work. But we think it is not too difficult to apply our argument to the claim that practical knowledge is necessary for intentional action.

Human agents sometimes know exactly how to behave, they make no specific mistake, and yet they fail. Sometimes they behave in indistinguishable ways, and they succeed. Most of the time, human agents behave imperfectly, but there is room for error, and they succeed. The chance involved in intentional action is incompatible with both contemplative and non-contemplative knowledge.

We also discuss a probabilistic notion of knowledge due to Sarah Moss (and an extension of it to action by Carlotta Pavese), and whether it might be of assistance. It won’t.

Consider Ticha, the pessimistic basketball player.

Ticha significantly underrates herself and her chances, even though she is quite a good shooter. She systematically forms beliefs about her chances that are false, believing that success is unlikely when it is likely. When Ticha lines up a shot that has, say, a 50% chance of success, she believes that the chances are closer to 25%. Ticha makes the shot. 

Was Ticha intentionally making the shot, and did she intentionally make it? Plausibly, yes.

Did Ticha have probabilistic knowledge along the way? Plausibly, no, since her probabilistic belief was false.

The moral of our paper, then, has implications for how we understand the essence of intentional action. We contrast two perspectives on this.

The first is an angelic perspective that sees knowledge of what one is doing as of the essence of what one is intentionally doing, that limns agency by emphasizing powers of rationality and the importance of self-consciousness, and that views the typical case of intentional action as one in which the agent’s success is very close to guaranteed, resulting from the perfect exercise of agentive capacities.

The second is an animal perspective that emphasizes the limits of our powers of execution, planning, and perception, and thus emphasizes the need for agency to involve special kinds of mental structure, as well as a range of tricks, techniques, plans, and back-up plans.

We think the natural world provides more insight into the nature of agency, and of intentional action, than the sources that motivate the angelic perspective. We also think there is room within the animal perspective for a proper philosophical treatment of knowledge-in-action. But that’s a separate conversation.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2277/.

About the authors

Joshua Shepherd is ICREA Research Professor at Universitat Autònoma de Barcelona, and PI of Rethinking Conscious Agency, funded by the European Research Council. He works on issues in the philosophy of action, psychology, and neuroethics. His most recent book, The Shape of Agency, is available open access from Oxford University Press.

J. Adam Carter is Professor in Philosophy at the University of Glasgow. His research is mainly in epistemology, with special focus on virtue epistemology, know-how, cognitive ability, intentional action, relativism, social epistemology, epistemic luck, epistemic value, group knowledge, understanding, and epistemic defeat.


Eliran Haziza – “Assertion, Implicature, and Iterated Knowledge”

“Circles in a Circle” (1923) Wassily Kandinsky

In this post, Eliran Haziza discusses his article recently published in Ergo. The full-length version of Eliran’s article can be found here.

It’s common sense that you shouldn’t say stuff you don’t know. I would seem to be violating some norm of speech if I told you that it’s raining in Topeka without knowing it to be true. Philosophers have formulated this idea as the knowledge norm of assertion: speakers must assert only what they know.

Speech acts are governed by all sorts of norms. You shouldn’t yell, for example, and you shouldn’t speak offensively. But the idea is that the speech act of assertion is closely tied to the knowledge norm. Other norms apply to many other speech acts: it’s not only assertions that shouldn’t be yelled, but also questions, promises, greetings, and so on. The knowledge norm, in some sense, makes assertion the kind of speech act that it is.

Part of the reason for the knowledge norm has to do with what we communicate when we assert. When I tell you that it’s raining in Topeka, I make you believe, if you accept my words, that it’s raining in Topeka. It’s wrong to make you believe things I don’t know to be true, so it’s wrong to assert them.

However, I can get you to believe things not only by asserting but also by implying them. To take an example made famous by Paul Grice: suppose I sent you a letter of recommendation for a student, stating only that he has excellent handwriting and attends lectures regularly. You’d be right to infer that he isn’t a good student. I asserted no such thing, but I did imply it. If I don’t know that the student isn’t good, it would seem to be wrong to imply it, just as it would be wrong to assert it.

If this is right, then the knowledge norm of assertion is only part of the story of the epistemic requirements of assertion. It’s not just what we explicitly say that we must know, it’s also what we imply.

This is borne out by conversational practice. We’re often inclined to reply to suspicious assertions with “How do you know that?” This is one of the reasons to think there is in fact a knowledge norm of assertion. We ask speakers how they know because they’re supposed to know, and because they’re not supposed to say things they don’t know.

The same kind of reply is often warranted not to what is said but to what is implied. Suppose we’re at a party, and you suggest we try a bottle of wine. I say “Sorry, but I don’t drink cheap wine.” It’s perfectly natural to reply “How do you know this wine is cheap?” I didn’t say that this wine was cheap, but I did clearly imply it, and it’s perfectly reasonable to hold me accountable not only to knowing that I don’t drink cheap wine, but also to knowing that this particular wine is cheap.

Implicature, or what is implied, may not appear to commit us to knowledge, because implicatures can often be canceled. I’m not contradicting myself if I say in my recommendation letter that the student has excellent handwriting, attends lectures regularly, and is also a brilliant student. Nor is there any inconsistency in saying that I don’t drink cheap wine, and this particular wine isn’t cheap. The words are the same, but the addition prevents what would otherwise have been implied.

Nevertheless, once an implicature is made (and it’s not made when it’s canceled), it is expected to be known, and it violates a norm if it’s not. So it’s not only assertion that has a knowledge norm, but implicature as well: speakers must imply only what they know. This has an interesting and perhaps unexpected consequence: If there is a knowledge norm for both assertion and implicature, the KK thesis is true.

The KK thesis is the controversial claim that you know something only if you know that you know it. This is also known as the idea that knowledge is luminous.

Why would it be implied by the knowledge norms of assertion and implicature? If speakers must assert only what they know, then any assertion implies that the speaker knows it. In fact, this seems to be why it’s so natural to reply “How do you know?” The speaker implies that she knows, and we ask how. But if speakers must know not only what they assert but also what they imply, then they must assert only what they know that they know. This reasoning can be repeated: if speakers must assert only what they know that they know, then any assertion implies that the speaker knows that she knows it. The speaker must know what she implies. So she must assert only what she knows that she knows that she knows. And so on.

The result is that speakers must have indefinitely iterated knowledge that what they assert is true: they must know that they know that they know that they know …
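
In schematic form (the notation here is ours, not the article’s), writing Kp for “the speaker knows that p”, the iteration runs:

  assert p ⇒ Kp      (knowledge norm of assertion)
  assert p ⇒ KKp     (asserting p implies Kp, and what is implied must be known)
  assert p ⇒ KKKp    (the same step, applied again)
  …
  assert p ⇒ K…Kp    (with any number of iterations of K)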

This might seem a ridiculously strict norm on assertion. How could anyone ever be in a position to assert anything?

The answer is that if the KK thesis is true, the iterated knowledge norm is the same as the knowledge norm: if knowing entails knowing that you know, then it also entails indefinitely iterated knowledge. So you satisfy the iterated knowledge norm simply by satisfying the knowledge norm. If we must know what we say and imply to be true, then knowledge is luminous.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2236/.

About the author

Eliran Haziza is a PhD candidate at the University of Toronto. He works mainly in the philosophy of language and epistemology, and his current research focuses on inquiry, questions, assertion, and implicature.


Brendan Balcerak Jackson, David DiDomenico, and Kenji Lota – “In Defense of Clutter”

“Old armour, prints, pictures, pipes, China (all crack’d), old rickety tables, and chairs broken back’d” (1882) Benjamin Walter Spiers

In this post, Brendan Balcerak Jackson, David DiDomenico, and Kenji Lota discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Suppose I believe that mermaids are real, and this belief brings me joy. Is it okay for me to believe that mermaids are real? On the one hand, it is tempting to think that if my belief doesn’t harm anyone, then it is okay for me to have it. On the other hand, it seems irrational for me to believe that mermaids are real when I don’t have any evidence or proof to support this belief. Are there standards that I ought to abide by when forming and revising my beliefs? If there are such standards, what are they?

Two philosophical views about the standards that govern what we ought to believe are pragmatism and the epistemic view. Pragmatism holds that our individual goals, desires, and interests are relevant to these standards. According to pragmatists, the fact that a belief brings me joy is a good reason for me to have it. The epistemic view holds that all that matters are considerations that speak for or against the truth of the belief; although believing that mermaids are real brings me joy, this is not a good reason because it is not evidence that the belief is true. 

Gilbert Harman famously argued for a standard on belief formation and revision that he called ‘The Principle of Clutter Avoidance’:

One should not clutter one’s mind with trivialities (Harman 1986: 12). 

For example, suppose that knowing Jupiter’s circumference would not serve any of my goals, desires, or interests. If I end up believing truly that Jupiter’s circumference is 272,946 miles (perhaps I stumble upon this fact while scrolling through TikTok), am I doing something I ought not to do?

According to Harman, I ought not to form this belief because doing so would clutter my mind. Why waste valuable cognitive resources believing things that are irrelevant to one’s own wellbeing? Harman’s view is that our cognitive resources shouldn’t be wasted in this way, and this is his rationale for accepting the Principle of Clutter Avoidance.

Many epistemologists are inclined to accept Harman’s principle, or something like it. This is significant because the principle appears to lend considerable weight to pragmatism over the epistemic view. Picking up on Harman’s ideas about avoiding cognitive clutter, Jane Friedman has recently argued that Harman’s principle has the following potential implication:

Evidence alone doesn’t demand belief, and it can’t even, on its own, permit or justify belief (Friedman 2018: 576). 

Rather, genuine standards of belief revision must combine considerations about one’s interests with more traditional epistemic sorts of considerations. Friedman argues that the need to avoid clutter implies that evidence can be overridden by consideration of our interests: even if your evidence suggests that some proposition is true, Harman’s principle may prohibit you from believing it. According to Friedman, accepting Harman’s principle leads to a picture of rational belief revision that is highly “interest-driven”, according to which our practical interests have a significant role to play.

These are radical implications, in our view, and so we wonder whether Harman’s principle should be accepted. Is it a genuine principle of rational belief revision? Our aim in “In Defense of Clutter” is to argue that it is not. Moreover, we offer an alternative way to account for clutter avoidance that is consistent with the epistemic view.

Suppose that you believe with very good evidence that it will rain and, with equally good evidence, that if it will rain, then your neighbor will bring an umbrella to work. An obvious logical consequence of these two beliefs—one that we may suppose you are able to appreciate—is that your neighbor will bring an umbrella to work.
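
(Schematically, in notation that is ours rather than the authors’: you believe p and p → q, each on good evidence, and q follows by modus ponens.)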

This information may well be unimportant for you. It may be that no current interest of yours would be served by settling the question of whether your neighbor will bring an umbrella to work. But suppose that in spite of this you ask the question anyway. Having asked it, isn’t it clear that you ought to answer it in the affirmative? At the very least, isn’t it clear that you are permitted to do so? The question has come up, and you can easily see the answer. How can you be criticized for answering it?

In general, if a question comes up, surely it is okay to answer it in whatever way is best supported by your evidence. According to the Principle of Clutter Avoidance, however, you should not answer the question, because this would be to form a belief that doesn’t serve any of your practical interests. This is implausible. The answer to your question clearly follows from beliefs that are well supported by your evidence.

Can we account for the relevance of clutter avoidance without being led to this implausible result? Here is our proposal. Rather than locating the significance of cognitive clutter at the level of rational belief revision, we locate its significance at earlier stages of inquiry.

Philosophers have written extensively on rational belief revision, but comparably little about earlier stages of inquiry; for example, about asking or considering questions, and about the standards that govern these activities. If we zoom out from rational belief revision and reorient our focus on earlier stages of inquiry, we can bring the significance of cognitive clutter into view.

We propose that clutter considerations play a role in determining how lines of inquiry ought to be opened and pursued over time, but they are irrelevant to closing lines of inquiry by forming beliefs.

It is okay to answer a question in whatever way is best supported by one’s evidence, but a thinker makes a mistake when they ask or consider junk questions—questions whose answers will not serve any of their interests. This enables us to take seriously the considerations of cognitive economy that Harman, Friedman, and many others find compelling, without thereby being led to an interest-driven epistemology.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2257/.

References

  • Friedman, Jane (2018). “Junk Beliefs and Interest-Driven Epistemology”. Philosophy and Phenomenological Research, 97(3), 568–83.
  • Harman, Gilbert (1986). Change in View: Principles of Reasoning. MIT Press.

About the authors

Brendan Balcerak Jackson’s research focuses on natural language semantics and pragmatics, as well as linguistic understanding and communication, and on reasoning and rationality more generally. He has a PhD in philosophy, with a concentration in linguistics, from Cornell University, and he has worked as a researcher and teacher at various universities in the United States, Australia, and Germany. Since April 2023, he has been a member of the Semantic Computing Research Group at the University of Bielefeld.

David DiDomenico is a Lecturer in the Department of Philosophy at Texas State University. His research interests are in epistemology and the philosophy of mind.

Kenji Lota is a doctoral student at the University of Miami. They are interested in epistemology and the philosophy of language and action.