
Paul L. Franco – “Susan Stebbing on Logical Positivism and Communication”

In this post, Paul L. Franco discusses his article recently published in Ergo. The full-length version of Paul’s article can be found here.

Lizzie Susan Stebbing, photographed by Howard Coster (1939) © National Portrait Gallery, London

In anthologies aimed at giving readers an overview of analytic philosophy in the early twentieth century, we are used to seeing works by G.E. Moore, Bertrand Russell, Rudolf Carnap, and Ludwig Wittgenstein listed together. But upon reading these anthologies it is not immediately obvious what, say, Moore’s common-sense philosophy shares with Carnap’s scientific philosophy. Moore waves his hands to prove an external world; Carnap uses formal languages to logically construct it. Yet both belong to a tradition now called analytic philosophy. Following Alan Richardson, I think an interesting question in the history of analytic philosophy concerns how this happened.

One common story centers on A.J. Ayer’s visit around 1933 to the Vienna Circle to study with Moritz Schlick, Carnap’s colleague and a leading representative of the logical positivist movement. Ayer distilled the lessons from his visit in his book Language, Truth and Logic (1936). In a readable style – more accessible than the technical work of some Vienna Circle members – Ayer brought the good word of verificationism to an Anglophone audience, resulting in vigorous debate.

Like Siobhan Chapman, Michael Beaney, and others, I think that this story – although not entirely wrong – neglects Susan Stebbing’s role in shaping early analytic philosophy. She contributed through her involvement with the journal Analysis, which published papers on logical positivism before 1936. She was also a central institutional figure in other ways, inviting Schlick and Carnap to lecture in London. In contrast with Ayer, who admitted that the extent of his scientific background was listening to a Geiger counter once in a lab, Stebbing, like the logical positivists, paid close attention to science. 

Stebbing’s sustained engagement with logical positivists in articles and reviews in the thirties is central to their reception in the British context. This work is also a core part of Stebbing’s rich output on philosophical analysis. For these reasons, her work illuminates early analytic philosophy’s development.

My paper reconstructs and interprets Stebbing’s criticisms of the logical positivist conception of analysis. The centerpiece is “Logical Positivism and Analysis”, in which she contrasts her understanding of the logical positivist approach with the sort of analysis Moore practices. Stebbing argues that Moore insists on a threefold distinction between:

  1. knowing that a proposition is true;
  2. understanding its meaning;
  3. giving an analysis of it.

Accordingly, philosophical analysis doesn’t give the meaning of statements or justify them. Instead, it clarifies relationships between statements which are already known and understood.

Although she is not an acolyte of Moore, Stebbing agrees with the fundamentals of his account and contrasts it with the picture offered by logical positivism. On her view, the logical positivist conception of analysis – represented by Wittgenstein, Schlick, and Carnap – begins with the principle of verification. This principle says the meaning of a statement is its method of verification. To know a statement’s meaning is to know what verifies it, and philosophical analysis clarifies a statement’s meaning by revealing its verification conditions. Carnap was also committed to what he called methodological solipsism. This is the view that the verification of statements about physical objects and other minds is provided by that which is immediately given in phenomenal experience. Adopting this methodological commitment means that verification conditions reduce to first-personal statements about experience.

Stebbing asks how the principle of verification can ground communication in light of methodological solipsism. In her view, the logical positivists should be able to answer this question: they are interested in meaning and knowledge, and communication is necessary for intersubjective knowledge. Here we come to the crux of her criticisms. She says that the identification of meaning with verification conditions collapses Moore’s threefold distinction. She then argues that, in collapsing the distinction, and given Carnap’s methodological solipsism, the principle of verification yields counterintuitive conclusions about the meaning of statements about other minds and the past.

For example, on Stebbing’s account of logical positivism, the meaning of your statement “I have a toothache” is, for me, given in first-personal statements about my experience of your bodily behavior, your utterances, and so on. Similarly, the meaning of historical statements like “Queen Anne died in 1714” is given by first-personal statements about my experience when consulting the relevant records. After all, the verification theory of meaning identifies the meaning of statements with their verification conditions and methodological solipsism says those are found in statements about what is given in phenomenal experience. But Stebbing thinks this conflates knowing that a statement is true with understanding its meaning. For her, it is clear that you don’t intend to communicate about my experience in talking about your toothache. It is also clear that when you speak about Queen Anne’s death, you do not intend to communicate about the way I would verify it. Instead, in talking about your toothache, you intend to communicate about your experience; in talking about Queen Anne’s death, you intend to communicate about the world. Stebbing thinks that I understand the meaning of both statements because they are about the “same sort” (Stebbing 1934, 170) of things I could experience, even though I’m not currently experiencing them. For Stebbing, it is this “same-sortness” of experience which grounds our understanding of the meaning of statements about other minds and history, not our knowledge of their verification conditions.

These are just the basics; my paper has other details of Stebbing’s criticisms – and related ones by Margaret MacDonald – that I’m tempted to mention but won’t. Instead, I’ll close by explaining how paying close attention to Stebbing’s engagement with logical positivism can be helpful. As I see it, there are three main upshots.

First, we can better understand Stebbing’s novel contributions to the analytic turn in philosophy, especially her attention to the nuances of different types of philosophical analysis. 

Second, we realize that the well-worn, presumed-to-be-devastating objection that the principle of verification fails to meet its own criteria for meaningfulness doesn’t appear in Stebbing’s work. Rather, she is concerned about whether logical positivism provides an account of meaning that explains successful communication. Whatever problems verificationism was thought to have, they were more interesting than whether the principle of verification is verifiable.

Third, by paying close attention to Stebbing’s focus on communication, we can better understand how her appeal to the common-sense conviction that we understand what we are talking about when we talk in clear and unambiguous ways is echoed in criticisms of logical positivism in ensuing decades – in particular, in the criticisms developed by ordinary language philosophers like J.L. Austin and P.F. Strawson. 

Stebbing shaped the understanding of logical positivism in a way that made the logical positivists’ brand of philosophical analysis recognizably similar to that of philosophers who didn’t share their scientific concerns. In doing so, she helped create the big tent that is early analytic philosophy.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/5185/.

About the author

Paul L. Franco is Associate Teaching Professor in Philosophy at the University of Washington-Seattle. His research is in the history of analytic philosophy, the history of philosophy of science, values in science, and intersections between the three areas. He currently serves as the treasurer for HOPOS (the International Society for the History of Philosophy of Science).


Bryan Pickel and Brian Rabern – “Against Fregean Quantification”

In this post, Bryan Pickel and Brian Rabern discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Still life of various kinds of fruit lying on a tablecloth.
“Martwa natura” (1910) Witkacy

A central achievement of early analytic philosophy was the development of a formal language capable of representing the logic of quantifiers. It is widely accepted that the key advances emerged in the late nineteenth century with Gottlob Frege’s Begriffsschrift. According to Dummett,

“[Frege] resolved, for the first time in the whole history of logic, the problem which had foiled the most penetrating minds that had given their attention to the subject.” (Dummett 1973: 8)

However, the standard expression of this achievement came in the 1930s with Alfred Tarski, albeit with subtle and important adjustments. Tarski introduced a language that regiments quantified phrases found in natural or scientific languages, where the truth conditions of any sentence can be specified in terms of meanings assigned to simpler expressions from which it is derived.

Tarski’s framework serves as the lingua franca of analytic philosophy and allied disciplines, including foundational mathematics, computer science, and linguistic semantics. It forms the basis of the predicate logic conventionally taught in introductory logic courses – recognizable by its distinctive symbols, such as inverted “A”s (∀) and backward “E”s (∃), truth-functions, predicates, names, and variables.

This formalism proves indispensable for tasks such as expressing the Peano Axioms, elucidating the truth-conditional ambiguity of statements like “Every linguist saw a philosopher,” or articulating metaphysical relationships between parts and wholes. Additionally, its computationally more manageable fragments have found applications in semantic web technologies and artificial intelligence.

Yet, from the outset there was dissatisfaction with Tarski’s methods. To see where the dissatisfaction originates, first consider the non-quantified fragment of the language. For this fragment, the truth conditions of any complex sentence can be specified in terms of the truth conditions of its simpler sentences, and the truth conditions of any simple sentence, in turn, can be specified in terms of the referents of its parts. For example, the sentence ‘Hazel saw Annabel and Annabel waved’ is true if and only if its component sentences ‘Hazel saw Annabel’ and ‘Annabel waved’ are both true. ‘Hazel saw Annabel’ is true if and only if the referents of ‘Hazel’ and ‘Annabel’ stand in the seeing relation. ‘Annabel waved’ is true if and only if the referent of ‘Annabel’ waved. For this fragment, then, truth and reference can be considered central to semantic theory.

This feature can’t be maintained for the full language, however. To regiment quantifiers, Tarski introduced open sentences and variables, effectively displacing truth and reference with “satisfaction by an assignment” and “value under an assignment”. Consider for instance a sentence such as ‘Hazel saw someone who waved’. A broadly Tarskian analysis would be this: ‘there is an x such that: Hazel saw x and x waved’. For Tarski, variables do not refer absolutely, but only relative to an assignment. We can speak of the variable x as being assigned to different individuals: to Annabel or to Hazel. Similarly, an open sentence such as ‘Hazel saw x’ or ‘x waved’ is not true or false, but only true or false relative to an assignment of values to its variables.

This aspect of Tarski’s approach is the root cause of dissatisfaction, yet it constitutes his unique method for resolving “the problem” – i.e., the problem of multiple generality that Frege had previously solved. Tarski used the additional structure to explain the truth conditions of multiply quantified sentences such as ‘Everyone saw someone who waved’, or ‘For every y, there is an x such that: y saw x and x waved’. The overall sentence is true if and only if for every assignment of a value to ‘y’, there is an extension of that assignment giving a value to ‘x’ such that ‘y saw x’ and ‘x waved’ are both true on the extended assignment.
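To make the role of assignments concrete, here is a minimal sketch in Python of a Tarski-style evaluator. This is my own toy model, not anything from the paper: the domain, the names ‘Hazel’ and ‘Annabel’, and the predicates ‘saw’ and ‘waved’ are just the illustrative examples from above, and individuals are simply represented by their names.

```python
# A minimal sketch of Tarski-style satisfaction: formulas are evaluated
# relative to an assignment of values to variables. Toy model only.

DOMAIN = {"Hazel", "Annabel"}
SAW = {("Hazel", "Annabel")}   # Hazel saw Annabel
WAVED = {"Annabel"}            # Annabel waved

def denote(term, assignment):
    """Names denote absolutely (here, themselves); variables denote only
    relative to an assignment."""
    return assignment.get(term, term)

def satisfies(formula, assignment):
    """Return True iff the assignment satisfies the formula."""
    op = formula[0]
    if op == "saw":
        return (denote(formula[1], assignment), denote(formula[2], assignment)) in SAW
    if op == "waved":
        return denote(formula[1], assignment) in WAVED
    if op == "and":
        return satisfies(formula[1], assignment) and satisfies(formula[2], assignment)
    if op == "exists":   # ("exists", variable, body)
        var, body = formula[1], formula[2]
        return any(satisfies(body, {**assignment, var: d}) for d in DOMAIN)
    if op == "forall":   # ("forall", variable, body)
        var, body = formula[1], formula[2]
        return all(satisfies(body, {**assignment, var: d}) for d in DOMAIN)
    raise ValueError(f"unknown operator: {op}")

# 'There is an x such that: Hazel saw x and x waved'
f1 = ("exists", "x", ("and", ("saw", "Hazel", "x"), ("waved", "x")))

# 'For every y, there is an x such that: y saw x and x waved'
f2 = ("forall", "y", ("exists", "x", ("and", ("saw", "y", "x"), ("waved", "x"))))

print(satisfies(f1, {}))  # True: the assignment sending x to Annabel satisfies the body
print(satisfies(f2, {}))  # False: no value for x works when y is assigned to Annabel
```

Note how the recursion bottoms out in satisfaction relative to an assignment rather than in truth simpliciter: the open sentence ‘Hazel saw x’ is neither true nor false on its own, only satisfied or not by a given assignment. This is exactly the displacement of truth and reference described above.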

Tarski’s theory is formally elegant, but its foundational assumptions are disputed. This has prompted philosophers to revisit Frege’s earlier approach to quantification.

According to Frege, a “variable” is not even an expression of the language but instead a typographic aspect of a distributed quantifier sign. So Frege would think of a sentence such as  ‘there is an x such that: Hazel saw x and x waved’ as divisible into two parts:

  1. there is an x such that: … x….
  2. Hazel saw … and … waved

Frege would say that expression (2) is a predicate that is true or false of individuals depending on whether Hazel saw them and they waved. For Frege, this predicate is derived by starting with a full sentence such as ‘Hazel saw Annabel and Annabel waved’ and removing the name ‘Annabel’. In this way, Frege seems to give a semantics for quantification that more naturally extends the non-quantified portion of the language. As Evans says:

[T]he Fregean theory with its direct recursion on truth is very much simpler and smoother than the Tarskian alternative…. But its interest does not stem from this, but rather from examination at a more philosophical level. It seems to me that serious exception can be taken to the Tarskian theory on the ground that it loses sight of, or takes no account of, the centrality of sentences (and of truth) in the theory of meaning. (Evans 1977: 476)

In short: Frege did it first, and Frege did it better.

Our paper “Against Fregean Quantification” takes a closer look at these claims. We identify three respects in which the Fregean approach has been held to make an advance on Tarski: it treats quantifiers as predicates of predicates, the basis of the recursion includes only names and predicates, and complex predicates do not contain variable markers.

However, we show that in each case the Fregean approach must similarly abandon the centrality of truth and reference to its semantic theory. Most surprisingly, we show that rather than extending the semantics of the non-quantified portion of the language, the Fregean turns ordinary proper names into variable-like expressions. In doing so, the Fregean approach leads to a typographic variant of the most radical of Tarskian views: variabilism, the view that names should be modeled as Tarskian variables.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2906/.

References

  • Dummett, Michael. (1973). Frege: Philosophy of Language. London: Gerald Duckworth.
  • Evans, Gareth. (1977). “Pronouns, Quantifiers, and Relative Clauses (I)”. Canadian Journal of Philosophy 7(3): 467–536.
  • Frege, Gottlob. (1879). Begriffsschrift: Eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a.d.S.: Louis Nebert.
  • Tarski, Alfred. (1935). “The Concept of Truth in Formalized Languages”. In Logic, Semantics, Metamathematics (1956): 152–278. Clarendon Press.

About the authors

Bryan Pickel is Senior Lecturer in Philosophy at the University of Glasgow. He received his PhD from the University of Texas at Austin. His main areas of research are metaphysics, the philosophy of language, and the history of analytic philosophy.

Brian Rabern is Reader at the School of Philosophy, Psychology, and Language Sciences at the University of Edinburgh. Additionally, he serves as a software engineer at GraphFm. He received his PhD in Philosophy from the Australian National University. His main areas of research are the philosophy of language and logic.


Corey Dethier – “Interpreting the Probabilistic Language in IPCC Reports”

A young sibyl (sacred interpreter of the word of god in pagan religions) argues with an old prophet (sacred interpreter of the word of god in monotheistic religions). It looks as if the discussion will go on for a long while.
Detail of “A sibyl and a prophet” (ca. 1495) Andrea Mantegna

In this post, Corey Dethier discusses his article recently published in Ergo. The full-length version of Corey’s article can be found here.

Every few years, the Intergovernmental Panel on Climate Change (IPCC) releases reports on the current status of climate science. These reports are massive reviews of the existing literature by the most qualified experts in the field. As such, IPCC reports are widely taken to represent our best understanding of what the science currently tells us. For this reason, the IPCC’s findings are important, as is their method of presentation.

The IPCC typically qualifies its findings using different scales. In its 2013 report, for example, the IPCC says that the sensitivity of global temperatures to a doubling of CO2 concentration is “likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence) and very unlikely greater than 6°C (medium confidence)” (IPCC 2013, 81).

You might wonder what exactly these qualifications mean. On what grounds does the IPCC say that something is “likely” as opposed to “very likely”? And why does it assign “high confidence” to some claims and “medium confidence” to others? If you do wonder about this, you are not alone. Even many of the scientists involved in writing the IPCC reports find these qualifications confusing (Janzwood 2020; Mach et al. 2017). My recent paper – “Interpreting the Probabilistic Language in IPCC Reports” – aims to clarify this issue, with particular focus on the IPCC’s appeal to the likelihood scale.

Traditionally, probabilistic language such as “likely” has been interpreted in two ways. On a frequentist interpretation, something is “likely” when it happens with relatively high frequency in similar situations, while it is “very likely” when it happens with a much greater frequency. On a personalist interpretation, something is “likely” when you are more confident that it will happen than not, while something is “very likely” when you are much more confident.

Which of these interpretations better fits the IPCC’s practice? I argue that neither of them does. My main reason is that both interpretations are closely tied to specific methodologies in statistics. The frequentist interpretation is appropriate for “classical” statistical testing, whereas the personalist interpretation is appropriate when “Bayesian” methods are used. The details about the differences between these methods do not matter for our present purposes. My main point is that climate scientists use both kinds of statistics in their research, and since the IPCC’s report reviews all of the relevant literature, the same language is used to summarize results derived from both methods.

If neither of the traditional interpretations works, what should we use instead? My suggestion is the following: we should understand the IPCC’s use of probabilistic terms more like a letter grade (an A or a B or a C, etc.) than like a strict probabilistic claim that implies a particular statistical methodology.

An A in geometry or English suggests that a student is well-versed in the subject according to the standards of the class. If the standards are sufficiently rigorous, we can conclude that the student will probably do well when faced with new problems in the same subject area. But an A in geometry does not mean that the student will correctly solve geometry problems with a given frequency, nor does it specify an appropriate amount of confidence that you should have that they’ll solve a new geometry problem. 

The IPCC’s use of terms such as “likely” is similar. When the IPCC says that a claim is likely, that’s like saying that it got a C in a very hard test. When the IPCC says that sensitivity is “extremely unlikely less than 1°C”, that’s like saying that this claim fails the test entirely. In this analogy, the IPCC’s judgments of confidence reflect the experts’ evaluation of the quality of the class or test: “high confidence” means that the experts think that the test was very good. But even when a claim passes the test with full marks, and the test is judged to be very good, this only gives us a qualitative evaluation. Just as you shouldn’t conclude that an A student will get 90% of problems right in the future, you also shouldn’t conclude that something that the IPCC categorizes as “very likely” will happen at least 90% of the time. The judgment has an important qualitative component, which a purely numerical interpretation would miss.

It would be nice – for economists, for insurance companies, and for philosophers obsessed with precision – if the IPCC could make purely quantitative probabilistic claims. At the end of my paper, I discuss whether the IPCC should strive to do so. I’m on the fence: there are both costs and benefits. Crucially, however, my analysis suggests that this would require the IPCC to go beyond its current remit: in order to present results that allow for a precise quantitative interpretation of its probability claims, the IPCC would have to do more than simply summarize the current state of the research. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4637/.

References

  • IPCC (2013). Climate Change 2013: The Physical Science Basis. Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Thomas F. Stocker, Dahe Qin, et al. (Eds.). Cambridge University Press.
  • Janzwood, Scott (2020). “Confident, Likely, or Both? The Implementation of the Uncertainty Language Framework in IPCC Special Reports”. Climatic Change 162, 1655–75.
  • Mach, Katharine J., Michael D. Mastrandrea, et al. (2017). “Unleashing Expert Judgment in Assessment”. Global Environmental Change 44, 1–14.

About the author

Corey Dethier is a postdoctoral fellow at the Minnesota Center for Philosophy of Science. He has published on a variety of topics relating to epistemology, rationality, and scientific method, but his main research focus is on epistemological and methodological issues in climate science, particularly those raised by the use of idealized statistical models to answer questions about climate change.


Quill Kukla and Mark Lance – “Telling Gender: The Pragmatics and Ethics of Gender Ascriptions”

Picture of gendered bathrooms where the gender icons are replaced by a shark and a T-Rex.

In this post, Quill Kukla and Mark Lance discuss their article recently published in Ergo. The full-length version of the article can be found here.

Debates over the validity or appropriateness of gender ascriptions, whether imposed on someone else (“You may pretend you’re a woman, but you’re actually a man!”) or self-proclaimed (“I am a man!”; “I don’t have a gender!”), typically turn to what gender “really is” and who “really” has which gender. We argue that such metaphysical turns are usually irrelevant distractions and redirections. We claim that gender ascriptions like “You are a man” or “I am not a woman” are not, first and foremost, functioning to make truth claims about substantive features of the world.

This may be a surprising claim. After all, a sentence like “You are a man” is grammatically a declarative. Declaratives are what we use to make claims about the world – Paris is the capital of France; metals conduct electricity; there is a deer in the meadow. The grammatical form of a sentence is generally an indicator of the pragmatic force of uttering that sentence, so sentences with declarative grammar normally function to make truth claims, which are appropriate if they match the world and not if they don’t. But this connection is not universal. If I say to my roommate “It’s really hot in here!” this can function as a request to open the window or turn down the heat. “The meeting is adjourned” might describe a social status of the meeting, but more typically, it functions to bring about or constitute the adjournment.

Imagine that one person says to another, “You and I are friends!” and the second person responds, “No, we are not.” It seems unlikely that they are disagreeing about a substantive issue of fact. They are likely not disagreeing over the empirical criteria for friendship, whatever those might be, or the evidence concerning whether they meet those criteria. Rather, the utterance “You and I are friends!” is a kind of social proposal. In calling you my friend, I am proposing that we relate to one another in specific ways and take ourselves as having various commitments to one another; I am making a claim on a certain normative relationship to you. The utterance functions more like “I bet you ten dollars” or “I take you as my spouse” than as a factual claim. To say “We are friends” to someone is to try to position us in social space with respect to one another. And to reject the friendship claim is to reject this proposed positioning.

Similarly, we want to claim that the primary function of gender ascriptions is to establish a normative positioning in social space. First-person gender ascriptions (“I am a woman!”) are attempts to claim a specific position in gendered social space, while second-person and third-person gender ascriptions (“You are no man!”; “He is a man!”) are attempts to impose a position in gendered social space. Most gender ascriptions mostly sustain a position that someone already has rather than constituting one from scratch, but they still work to incrementally solidify such a position.

Our position in gendered social space, or the gender we are taken as having (or lacking), inflects nearly every aspect of how we are expected and demanded to negotiate the social and material world. It shapes how we are supposed to hold our body and modulate our voice; what clothes we are supposed to wear; how we are supposed to manifest sexual attraction and attractiveness; where and how we pee; what hobbies and jobs we are supposed to have; who we compete against in sports events and which sports we take up in the first place; what our relationship is to our children; and so forth. Even fetuses, once recognized as ‘boys’ or ‘girls’, are expected to become babies for whom certain nursery and clothing colors and emotions and behaviors are appropriate. Such norms are modulated by race, age, ability, class, body shape, and more; there is not a single, consistent set of norms for each gender, but rather a complex and often contradictory web of norms in which we are all differently positioned. But these structures of social significance are inescapable. To occupy a position in gendered social space is to be situated with respect to this complex network of norms. One can transgress or resist any subset of these norms, of course, but they are still the norms that carve out expectations and evaluations and social uptake for almost every dimension of our social and material existence.

When people disagree over whether a gender attribution is appropriate, it is rarely primarily an empirical disagreement. There is no single, widely accepted empirical definition of gender. It is varyingly defined in terms of anatomy, gametes, genetics, psychology, social role, self-identification, and phenotype. But we argue that what is at stake in most disagreements over gender attributions is not which empirical features someone has, but rather whether it is appropriate to take someone as positioned in a specific way within gendered social space. When a trans woman claims, “I am a woman,” and someone responds, “No, you are a man,” they are not generally arguing about empirical matters, but rather, as in the friendship case, about how a claim on a social position will or won’t be ratified.

If the function of gendered language is not (primarily) to describe the world, but to establish, ratify, reinforce, or oppose the taking up of social roles, then the appropriateness of such linguistic performances is not a matter of truth and falsity. Rather, we should evaluate what we ought to say and how we ought to respond to one another’s gender ascriptions in terms of how we ought to organize social space, and how much respect individuals should be given for determining their own gender ascription.

We argue that core norms of self-determination and autonomy demand wide respect for and deference to first-person attributions (or rejections) of gender (“I am a man,” etc.). What gender is, or whether it is anything at all, is simply irrelevant to these core norms. Saying “Yes, you are a man” is endorsing a person’s right to choose the social role they wish to inhabit, similar to recognizing a person’s choice of career, spouse, or hobby. Indeed, people generally think of rules that force social positions on people, such as Jim Crow laws and caste systems, as paradigmatic antidemocratic violations of self-determination. Since gender norms govern many of the most intimate dimensions of our bodily lives, forcing gendered social positions seems especially unjustified. The only ethical reason to contravene someone’s first-person claim upon a social position is if their doing so harms others, and we find the idea that this is true in the case of gender completely without merit or serious evidence. (Not every first-person claim on a social position is harmless. If I declare, “I am your sovereign master!”, I am claiming a social position, but obviously one you have every right to reject, because it harms you and your own self-determination directly. We find the idea that one person’s gender claim has a significant chance of harming someone else absurd, although we recognize that transphobes do try to assert this.)

Thus, we claim that first-personal gender attributions are virtually always justified, not because people are infallible about their own gender, but because of the ethical function of these attributions. Likewise, second- and third-personal gender attributions that contradict or foreclose first-personal attributions are almost always unjustified. Debates over the metaphysics of gender may be of philosophical curiosity to some, but they are distractions when it comes to everyday questions about when and how to respect people’s claimed gender, or lack thereof.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2911/.

About the authors

Quill Kukla is Professor of Philosophy and Director of Disability Studies at Georgetown University. From 2021 to 2023, they also held a Humboldt Stiftung Research Award at the Institut für Philosophie at Leibniz Universität Hannover. They received a PhD in Philosophy from the University of Pittsburgh and an MA in Geography from the City University of New York, and completed a Greenwall Postdoctoral Fellowship at The Johns Hopkins School of Public Health. Their most recent book is City Living: How Urban Dwellers and Urban Spaces Make One Another (Oxford University Press 2021) and their forthcoming book is entitled Sex Beyond ‘Yes!’ (W. W. Norton & Co. 2024).

Mark Lance, PhD University of Pittsburgh, is Professor of Philosophy and Professor of Justice and Peace at Georgetown University. He has published in areas ranging from relevance logic to philosophy of language to metaethics, and has contributed to public education projects through The Institute for Social Ecology, the Institute of Anarchist Studies, the Peace and Justice Studies Association, and the US Campaign for Palestinian Rights. He is an activist who has been arrested 13 times in civil disobedience actions protesting US government crimes. His most recent book is Toward a Revolution as Nonviolent as Possible (with Matt Meyer). Outside activism and philosophy, he is a rowing coach, chess player, and former orchestral trumpet player.


Eliran Haziza – “Assertion, Implicature, and Iterated Knowledge”

Picture of various circles in many sizes and colors, all enclosed within one big, starkly black circle.
“Circles in a Circle” (1923) Wassily Kandinsky

In this post, Eliran Haziza discusses his article recently published in Ergo. The full-length version of Eliran’s article can be found here.

It’s common sense that you shouldn’t say stuff you don’t know. I would seem to be violating some norm of speech if I were to tell you that it’s raining in Topeka without knowing it to be true. Philosophers have formulated this idea as the knowledge norm of assertion: speakers must assert only what they know.

Speech acts are governed by all sorts of norms. You shouldn’t yell, for example, and you shouldn’t speak offensively. But the idea is that the speech act of assertion is closely tied to the knowledge norm. Other norms apply to many other speech acts: it’s not only assertions that shouldn’t be yelled, but also questions, promises, greetings, and so on. The knowledge norm, in some sense, makes assertion the kind of speech act that it is.

Part of the reason for the knowledge norm has to do with what we communicate when we assert. When I tell you that it’s raining in Topeka, I make you believe, if you accept my words, that it’s raining in Topeka. It’s wrong to make you believe things I don’t know to be true, so it’s wrong to assert them.

However, I can get you to believe things not only by asserting but also by implying them. To take an example made famous by Paul Grice: suppose I sent you a letter of recommendation for a student, stating only that he has excellent handwriting and attends lectures regularly. You’d be right to infer that he isn’t a good student. I asserted no such thing, but I did imply it. If I don’t know that the student isn’t good, it would seem to be wrong to imply it, just as it would be wrong to assert it.

If this is right, then the knowledge norm of assertion is only part of the story of the epistemic requirements of assertion. It’s not just what we explicitly say that we must know, it’s also what we imply.

This is borne out by conversational practice. We’re often inclined to reply to suspicious assertions with “How do you know that?”. This is one of the reasons to think there is in fact a knowledge norm of assertion. We ask speakers how they know because they’re supposed to know, and because they’re not supposed to say things they don’t know.

The same kind of reply is often warranted not to what is said but to what is implied. Suppose we’re at a party, and you suggest we try a bottle of wine. I say “Sorry, but I don’t drink cheap wine.” It’s perfectly natural to reply “How do you know this wine is cheap?” I didn’t say that this wine was cheap, but I did clearly imply it, and it’s perfectly reasonable to hold me accountable not only to knowing that I don’t drink cheap wine, but also to knowing that this particular wine is cheap.

Implicature, or what is implied, may not appear to carry a commitment to knowledge, because implicatures can often be canceled. I’m not contradicting myself if I say in my recommendation letter that the student has excellent handwriting, attends lectures regularly, and is also a brilliant student. Nor is there any inconsistency in saying that I don’t drink cheap wine, but this particular wine isn’t cheap. Same words, but the addition prevents what would otherwise have been implied.

Nevertheless, once an implicature is made (and it’s not made when it’s canceled), it is expected to be known, and it violates a norm if it’s not. So it’s not only assertion that has a knowledge norm, but implicature as well: speakers must imply only what they know. This has an interesting and perhaps unexpected consequence: If there is a knowledge norm for both assertion and implicature, the KK thesis is true.

The KK thesis is the controversial claim that you know something only if you know that you know it. This is also known as the idea that knowledge is luminous.

Why would it be implied by the knowledge norms of assertion and implicature? If speakers must assert only what they know, then any assertion implies that the speaker knows it. In fact, this seems to be why it’s so natural to reply “How do you know?” The speaker implies that she knows, and we ask how. But if speakers must know not only what they assert but also what they imply, then they must assert only what they know that they know. This reasoning can be repeated: if speakers must assert only what they know that they know, then any assertion implies that the speaker knows that she knows it. The speaker must know what she implies. So she must assert only what she knows that she knows that she knows. And so on.
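Schematically – in my own notation, not the article’s – writing Kp for “the speaker knows that p”, the regress runs as follows:

\[
\begin{aligned}
&\text{Knowledge norm of assertion: asserting } p \text{ requires } Kp.\\
&\text{So any assertion of } p \text{ implicates } Kp.\\
&\text{Knowledge norm of implicature: implicating } q \text{ requires } Kq.\\
&\text{Hence asserting } p \text{ requires } KKp; \text{ iterating the step, it requires } K^{n}p \text{ for every } n \geq 1.
\end{aligned}
\]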

The result is that speakers must have indefinitely iterated knowledge that what they assert is true: they must know that they know that they know that they know …

This might seem a ridiculously strict norm on assertion. How could anyone ever be in a position to assert anything?

The answer is that if the KK thesis is true, the iterated knowledge norm is the same as the knowledge norm: if knowing entails knowing that you know, then it also entails indefinitely iterated knowledge. So you satisfy the iterated knowledge norm simply by satisfying the knowledge norm. If we must know what we say and imply to be true, then knowledge is luminous.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2236/.

About the author

Eliran Haziza is a PhD candidate at the University of Toronto. He works mainly in the philosophy of language and epistemology, and his current research focuses on inquiry, questions, assertion, and implicature.