Bryan Pickel and Brian Rabern – “Against Fregean Quantification”

In this post, Bryan Pickel and Brian Rabern discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Still life of various kinds of fruit lying on a tablecloth.
“Martwa natura” (1910) Witkacy

A central achievement of early analytic philosophy was the development of a formal language capable of representing the logic of quantifiers. It is widely accepted that the key advances emerged in the late nineteenth century with Gottlob Frege’s Begriffsschrift. According to Dummett,

“[Frege] resolved, for the first time in the whole history of logic, the problem which had foiled the most penetrating minds that had given their attention to the subject.” (Dummett 1973: 8)

However, the standard expression of this achievement came in the 1930s with Alfred Tarski, albeit with subtle and important adjustments. Tarski introduced a language that regiments quantified phrases found in natural or scientific languages, where the truth conditions of any sentence can be specified in terms of meanings assigned to simpler expressions from which it is derived.

Tarski’s framework serves as the lingua franca of analytic philosophy and allied disciplines, including foundational mathematics, computer science, and linguistic semantics. It forms the basis of the predicate logic conventionally taught in introductory logic courses – recognizable by its distinctive symbols such as inverted “A’s” and backward “E’s,” truth-functions, predicates, names, and variables.

This formalism proves indispensable for tasks such as expressing the Peano Axioms, elucidating the truth-conditional ambiguity of statements like “Every linguist saw a philosopher,” or articulating metaphysical relationships between parts and wholes. Additionally, its computationally more manageable fragments have found applications in semantic web technologies and artificial intelligence.

Yet, from the outset there was dissatisfaction with Tarski’s methods. To see where the dissatisfaction originates, first consider the non-quantified fragment of the language. For this fragment, the truth conditions of any complex sentence can be specified in terms of the truth conditions of its simpler sentences, and the truth conditions of any simple sentence, in turn, can be specified in terms of the referents of its parts. For example, the sentence ‘Hazel saw Annabel and Annabel waved’ is true if and only if its component sentences ‘Hazel saw Annabel’ and ‘Annabel waved’ are both true. ‘Hazel saw Annabel’ is true if the referents of ‘Hazel’ and ‘Annabel’ stand in the seeing relation. ‘Annabel waved’ is true if the referent of ‘Annabel’ waved. For this fragment, then, truth and reference can be considered central to semantic theory.

This feature can’t be maintained for the full language, however. To regiment quantifiers, Tarski introduced open sentences and variables, effectively displacing truth and reference with “satisfaction by an assignment” and “value under an assignment”. Consider, for instance, a sentence such as ‘Hazel saw someone who waved’. A broadly Tarskian analysis would be this: ‘there is an x such that: Hazel saw x and x waved’. For Tarski, variables do not refer absolutely, but only relative to an assignment. We can speak of the variable x as being assigned to different individuals: to Annabel or to Hazel. Similarly, an open sentence such as ‘Hazel saw x’ or ‘x waved’ is not true or false, but only true or false relative to an assignment of values to its variables.

This aspect of Tarski’s approach is the root cause of dissatisfaction, yet it constitutes his unique method for resolving “the problem” – i.e., the problem of multiple generality that Frege had previously solved. Tarski used the additional structure to explain the truth conditions of multiply quantified sentences such as ‘Everyone saw someone who waved’, or ‘For every y, there is an x such that: y saw x and x waved’. The overall sentence is true if, for every assignment of a value to ‘y’, there is an assignment that preserves that value of ‘y’ and assigns a value to ‘x’ on which ‘y saw x’ and ‘x waved’ are both true.
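
In the now-standard notation (ours, not the post’s), write g for an assignment and g[x ↦ d] for the assignment just like g except that it sends ‘x’ to the object d. The two quantifier clauses then run:

\[
g \models \forall y\,\varphi \iff \text{for every object } d,\ g[y \mapsto d] \models \varphi
\]
\[
g \models \exists x\,\varphi \iff \text{for some object } d,\ g[x \mapsto d] \models \varphi
\]

So ‘For every y, there is an x such that: y saw x and x waved’ is true on g just in case, for every object d, there is an object d′ such that g[y ↦ d][x ↦ d′] satisfies both ‘y saw x’ and ‘x waved’.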

Tarski’s theory is formally elegant, but its foundational assumptions are disputed. This has prompted philosophers to revisit Frege’s earlier approach to quantification.

According to Frege, a “variable” is not even an expression of the language but instead a typographic aspect of a distributed quantifier sign. So Frege would think of a sentence such as ‘there is an x such that: Hazel saw x and x waved’ as divisible into two parts:

  1. there is an x such that: … x….
  2. Hazel saw … and … waved

Frege would say that expression (2) is a predicate that is true or false of individuals depending on whether Hazel saw them and they waved. For Frege, this predicate is derived by starting with a full sentence such as ‘Hazel saw Annabel and Annabel waved’ and removing the name ‘Annabel’. In this way, Frege seems to give a semantics for quantification that more naturally extends the non-quantified portion of the language. As Evans says:

[T]he Fregean theory with its direct recursion on truth is very much simpler and smoother than the Tarskian alternative…. But its interest does not stem from this, but rather from examination at a more philosophical level. It seems to me that serious exception can be taken to the Tarskian theory on the ground that it loses sight of, or takes no account of, the centrality of sentences (and of truth) in the theory of meaning. (Evans 1977: 476)

In short: Frege did it first, and Frege did it better.

Our paper “Against Fregean Quantification” takes a closer look at these claims. We identify three respects in which the Fregean approach has been held to make an advance on Tarski: it treats quantifiers as predicates of predicates, the basis of the recursion includes only names and predicates, and the complex predicates do not contain variable markers.

However, we show that in each case, the Fregean approach must similarly abandon the centrality of truth and reference to its semantic theory. Most surprisingly, we show that rather than extending the semantics of the non-quantified portion of the language, the Fregean turns ordinary proper names into variable-like expressions. In doing so, the Fregean arrives at a typographic variant of the most radical of Tarskian views: variabilism, the view that names should be modeled as Tarskian variables.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2906/.

References

  • Dummett, Michael. (1973). Frege: Philosophy of Language. London: Gerald Duckworth.
  • Evans, Gareth. (1977). “Pronouns, Quantifiers, and Relative Clauses (I)”. Canadian Journal of Philosophy 7(3): 467–536.
  • Frege, Gottlob. (1879). Begriffsschrift: Eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a.d.S.
  • Tarski, Alfred. (1935). “The Concept of Truth in Formalized Languages”. In Logic, Semantics, Metamathematics (1956): 152–278. Clarendon Press.

About the authors

Bryan Pickel is Senior Lecturer in Philosophy at the University of Glasgow. He received his PhD from the University of Texas at Austin. His main areas of research are metaphysics, the philosophy of language, and the history of analytic philosophy.

Brian Rabern is Reader at the School of Philosophy, Psychology, and Language Sciences at the University of Edinburgh. Additionally, he serves as a software engineer at GraphFm. He received his PhD in Philosophy from the Australian National University. His main areas of research are the philosophy of language and logic.

Christopher Frugé – “Janus-Faced Grounding”

In this post, Christopher Frugé discusses the article he recently published in Ergo. The full-length version of his article can be found here.

Picture of the Roman double-faced god Janus: one face, older in age, looks back to the past, while the other, younger, looks forward to the future.
Detail of “Bust of the Roman God Janus” (1569) © The New York Public Library

Grounding is the generation of the less fundamental from the more fundamental. The fact that Houston is a city is not a fundamental aspect of reality. Rather, it’s grounded in facts about people and, ultimately, fundamental physics.

What is the status of grounding itself? Most theorists of ground think that grounding is non-fundamental and so must itself be grounded. Yet, if grounding is always grounded, then every grounding fact generates an infinite regress of the grounding of grounding facts, where each grounding fact needs to be grounded in turn. I argue that this regress is vicious, and so some grounding facts must be fundamental.

Grounding theorists take grounding to be grounded because it seems to follow from two principles about fundamentality. Purity says that fundamental facts can’t contain any non-fundamental facts. Completeness says that the fundamental facts generate all the non-fundamental facts. The idea behind purity is that the fundamental is supposed to be ‘pure’ of the non-fundamental. There can’t be any chairs alongside quarks at the most basic level of reality. Completeness stems from the thought that the fundamental is ‘enough’ to generate the rest of reality. Once the base layer has been put in place, then it produces everything else.

These principles are plausible, but they lead to regress. For example, the fact that Houston is a city is grounded in certain fundamental physical facts. By the standard construal of purity, this grounding fact is non-fundamental, since it ‘contains’ a non-fundamental element: the fact that Houston is a city. But by completeness, non-fundamental facts must be grounded, so this grounding fact must be grounded. But then this grounding fact must be grounded for the same reason, and so on forever.

We have what’s called the fact regress:
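
[A grounds B] is grounded by C1; [C1 grounds [A grounds B]] is grounded by C2; [C2 grounds [C1 grounds [A grounds B]]] is grounded by C3; and so on without end. (A schematic rendering: square brackets mark grounding facts, and each Ci is whatever grounds the next grounding fact in the chain.)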

The standard take among grounding theorists is that the regress isn’t vicious, just a surprising discovery. This is because it doesn’t violate the well-foundedness of grounding. Well-foundedness requires that each path of grounding (A grounds B, C grounds A, D grounds C, and so on) must at some point come to an end.

The fact regress doesn’t violate well-foundedness, because each grounding fact can ground out in something fundamental. It’s just that each grounding fact needs to be grounded. Consider a case where A is fundamental and grounds B, but this grounding fact is grounded in a fundamental C. And that grounding fact is grounded in a fundamental D and so on. This satisfies well-foundedness but is an instance of the fact regress.

Nonetheless, I claim that the fact regress is still vicious. This is because what’s grounded doesn’t merely depend on its ground but also depends on the grounds of its grounding fact – and on the grounds of each grounding fact in the path of grounding of grounding. Call this connection dependence.

Why is connection dependence a genuine form of dependence? Suppose that A grounds B, where B isn’t grounded in anything else. But say that C grounds that A grounds B, where A grounds B isn’t grounded in anything else. Then, B depends not just on A but also on C. If C were removed, then A wouldn’t ground B. So then B would not be generated by anything and so would not come into being. For example, suppose a collection of particles grounds the composite whole of those particles only because a composition operation grounds this grounding fact. Then if, perhaps counterpossibly, there were no composition operation, those particles would not ground that whole. Similar reasoning applies at each step in the path of grounding of grounding.

So, then, the fact regress is bad for the same reason that violations of well-foundedness are bad. Without well-foundedness, it could be that each ground would need to be grounded in turn, and so the creation of a non-fundamental element of reality would never end up coming about because it would always require a further ground. Yet, given the fact regress, there can also be no stopping point—no point from which what’s grounded is ultimately able to be generated from its grounds. So determination, and hence what’s determined, would always be deferred and never achieved.

Therefore, I uphold well-connectedness, which requires that every path of grounding of grounding facts terminates in an ungrounded grounding fact:
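
Schematically: [A grounds B] is grounded by C1, [C1 grounds [A grounds B]] is grounded by C2, and so on, until the chain reaches a grounding fact that is itself ungrounded and fundamental.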

This prohibits the fact regress.

Well-connectedness falls out of the proper interpretation of completeness, which imposes the requirement that the fundamental is enough for the non-fundamental. For any portion of non-fundamental reality, there is some portion of the fundamental that is ‘enough’ to produce it. If well-connectedness is violated, then there is no portion of fundamental reality that is sufficient unto itself to produce any bit of non-fundamental reality. There would always have to be a further determination of how the fundamental determines the non-fundamental. But at some point the grounding of grounding must stop. Some grounding facts must be fundamental.

However, the fact regress seems to fall out of completeness and purity. So what gives? I think the key is to see that the proper interpretation of purity doesn’t require that grounding facts be grounded.

There’s a distinction between what’s called factive and non-factive grounding. Roughly put, A non-factively grounds B if and only if, given A, A generates B. A factively grounds B just in case A non-factively grounds B and A obtains. So it could be that A non-factively grounds B even if B doesn’t obtain, because A doesn’t obtain. Thus, in a legitimate sense, the fact that A non-factively grounds B doesn’t ‘contain’ A or B, since that grounding fact can obtain without either A or B obtaining. We can think of the non-factive grounding facts as ‘mentioning’ the ground and the grounded without ‘containing’ them. And this is consistent with purity, since fundamental non-factive grounding facts don’t have any non-fundamental constituents.

Want more?

Read the full article at: https://journals.publishing.umich.edu/ergo/article/id/4664/.

About the author

Christopher Frugé is a Junior Research Fellow at the University of Oxford in St John’s College. He received his PhD from Rutgers. He works on foundational and normative ethics as well as metaphysics. 

Gabriel De Marco and Thomas Douglas – “Nudge Transparency Is Not Required for Nudge Resistibility”

In this post, Gabriel De Marco and Thomas Douglas discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Image of a variety of cakes on display.
“Cakes” (1963) Wayne Thiebaud © National Gallery of Art

Consider:

Food Placement. In order to encourage healthy eating, cafeteria staff place healthy food options at eye-level, whereas unhealthy options are placed lower down. Diners are more likely to pick healthy foods and less likely to pick unhealthy foods than they would have been had foods instead been distributed randomly.

Interventions like this are often called nudges. Though many agree that it is, at least sometimes, permissible to nudge people, there is a thriving debate about when, exactly, it is so.

In the now-voluminous literature on the ethics of nudging, some authors have suggested that nudging is permissible only when the nudge is easy to resist. But what does it take for a nudge to be easy to resist? Authors rarely give accounts of this, yet they often seem to assume what we call

The Awareness Condition (AC). A nudge is easy to resist only if the agent can easily become aware of it.

We think AC is false. In our paper, we mount a more developed argument for this, but in this blog post we simply advance one counterexample and consider one possible response to it.

Here’s the counterexample:

Giovanni and Liliana: Giovanni, the owner of a company, wants his workers to pay for the more expensive, unhealthy snacks in the company cafeteria, so, without informing his office workers, he instructs the cafeteria staff to place these snacks at eye level. While in line at the cafeteria, Liliana (who is on a diet) sees the unhealthy food, and is a bit tempted by it, partly as a result of the nudge. Recognizing the temptation, she performs a relatively easy self-control exercise: she reminds herself of her plan to eat healthily, and why she has it. She thinks about how following a diet is going to be difficult, and once she starts making exceptions, it’s just going to be easier to make exceptions later on. After this, she decides to take the salad and leave the chocolate pudding behind. Although she was aware that she was tempted to pick the chocolate pudding, she was not aware that she was being nudged, nor did she have the capacity to easily become aware of this, since Giovanni went to great lengths to hide his intentions.

Did Liliana resist the nudge? We think so. We also think that the nudge was easily resistible for her, even though she did not have the capacity to easily become aware of the fact that she was being nudged. If you agree, then we have a straightforward counterexample to AC.

In response, someone might argue that, although Liliana resists something, she does not resist the nudge. Rather, she resists the effects of the nudge: the (increased) motivation to pick the chocolate pudding. Resisting the nudge, rather than its effects, requires that one intends to act contrary to the nudge. But Liliana doesn’t intend to do that. Although she intends to pick the healthy option, to pick the salad, or to not pick the chocolate pudding, she does not intend to act contrary to the nudge.

If resisting a nudge requires that one intend to act contrary to the nudge, then Liliana does not resist the nudge, and the counterexample to AC fails. Yet we do not think that resisting a nudge requires that one intend to act contrary to the nudge. While we grant that a way of resisting a nudge is to do so while intending to act contrary to it, and that resisting it in this way requires awareness of the nudge, we do not think that this is the only way to resist a nudge. Partly, we think this because we find it plausible that Liliana (and agents in other similar cases) do resist the nudge.

But further, we think that, if resisting a nudge requires intending to act contrary to the nudge, this will cast doubt on the thought that nudges ought to be easy to resist. Suppose that there are two reasonable ways of understanding “resisting a nudge.” On one understanding, resistance requires that the agent acts contrary to the nudge and intends to do so. Liliana does not resist the nudge on this understanding. On a second, broader way of understanding resistance, one need not intend to act contrary to the nudge in order to resist it; it is enough simply to act contrary to the nudge. Liliana does resist the nudge in this way.

Now consider two claims:

The strong claim: A nudge is permissible only if it is easy to act contrary to it with the intention of doing so.

The weak claim: A nudge is permissible only if it is easy to act contrary to it.

Are these claims plausible? We think that the weak claim might be, but the strong claim is not.

Consider again Food Placement. This was a case of a nudge just like Giovanni’s nudge, except that the food placement is intended to get more people to pick the healthy food option over the unhealthy one, rather than the reverse. In this version of the case, Giovanni wants to do what is in the best interests of his staff. According to the strong claim, this nudge would be impermissible insofar as his staff cannot easily become aware of the nudge, and so cannot easily act contrary to it with the intention of doing so. And this is so even though it would be permissible for Giovanni to put the healthy foods at eye level randomly. Moreover, it would remain so even if all the following are true:

  1. the nudge only very slightly increases the nudgee’s motivation to take the healthy food,
  2. the nudgee acts contrary to this motivation and picks the same unhealthy food she would have picked in the absence of the nudge,
  3. she finds it very easy to act contrary to the nudge in this way,
  4. her acting contrary to the nudge in this way is a reflection of her values or desires, and
  5. her acting contrary to the nudge is the result of normal deliberation which is not significantly influenced by the nudge.

We find it hard to believe that this nudge is impermissible, or even more weakly, that we have a strong or substantial reason against implementing it.

We think, then, that if nudges have to be easily resistible in order to be ethically acceptable, this will be because something like the weak claim holds. On this view, a nudge can meet this requirement if it is easy for the nudgee to resist it in our broader sense, and this is compatible with it being difficult for the nudgee to become aware of the nudge, as in our Giovanni and Liliana case.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4635/.

About the authors

Gabriel De Marco is a Research Fellow in Applied Moral Philosophy at the Oxford Uehiro Centre for Practical Ethics. His research focuses on free will, moral responsibility, and the ethics of influence.

Tom Douglas is Professor of Applied Philosophy and Director of Research at the Oxford Uehiro Centre for Practical Ethics. His research focuses especially on the ethics of using medical and neuro-scientific technologies for non-therapeutic purposes, such as cognitive enhancement, crime prevention, and infectious disease control. He is currently leading the project ‘Protecting Minds: The Right to Mental Integrity and the Ethics of Arational Influence’, funded by the European Research Council.

Cathy Mason – “Reconceiving Murdochian Realism”

In this post, Cathy Mason discusses the article she recently published in Ergo. The full-length version of Cathy’s article can be found here.

A picture of a vase with irises.
“Irises” (1890) Vincent van Gogh

Iris Murdoch’s ethics is filled with discussions of moral reality, moral truth and how things really stand morally. What exactly does she mean by these? Her style is certainly a non-standard philosophical style, and her ideas are remarkably wide-ranging, but it can seem appealing to think that at heart her metaethical commitments largely align with standard realists’. I suggest, however, that this reading of Murdoch is mistaken: her realism amounts to something else altogether.

I take standard realism to be roughly captured by the following definition from Sayre-McCord:

Moral realists hold that there are moral facts, that it is in light of these facts that peoples’ moral judgments are true or false, and that the facts being what they are (and so the judgments being true, when they are) is not merely a reflection of our thinking the facts are one way or another. That is, moral facts are what they are even when we see them incorrectly or not at all. (Sayre-McCord 2005: 40)

Does Murdoch subscribe to this view? It can certainly be tempting to think so. She repeatedly talks about ‘realism’ and ‘objectivity’, and remarks like the following seem well-understood in standard realist terms:

The authority of morals is the authority of truth, that is of reality. (TSG 374)

The ordinary person does not, unless corrupted by philosophy, believe that he creates values by his choices. He thinks that some things really are better than others and that he is capable of getting it wrong. (TSG 380)

Here, Murdoch clearly commits to the idea that some moral claims are true, and that what makes them true is not something to do with the valuer, but something about the world. All this sounds very much like standard realism.

However, it would be a mistake to think that these surface similarities point towards a deeper congruence between Murdoch and standard realists. For a start, realists typically take moral facts to be one kind among many. Just as there are mathematical facts and psychological facts, so too there are moral facts. Yet Murdoch repeatedly insists that all reality is moral—and thus that all facts are in some sense moral facts (e.g. IP 329, OGG 357, MGM 35). Moreover, though Murdoch insists on the truth of some moral claims, she understands the notion of truth very differently from standard realists.  Whereas realists typically regard truth as something abstract, Murdoch suggests that it can only be understood in relation to truthfulness and the search for truth. The seeming agreement between Murdoch and standard realists on the truth of some ethical claims thus belies deeper disagreements between them.

What’s more, standard realism is hard to square with some wider views Murdoch holds. First, she suggests that some moral concepts can be genuinely private: fully virtuous agents may have different moral concepts without either of their conceptual schemas being inaccurate or incomplete. Second, she suggests that there can be private moral reasons: moral reasons need not be universal. It is hard to see how there could be room for private moral concepts and reasons within standard realism: either there are facts corresponding to a moral belief, or there are not. If there are, then it is a kind of moral ignorance to ignore such facts. If not, then the belief is simply false. Finally, Murdoch rejects the idea common in standard realism that the moral supervenes on the non-moral, since she suggests that there simply is no non-moral reality.

What, then, does Murdoch have in mind when she discusses realism? In most cases where Murdoch introduces ideas such as realism or objectivity, she is discussing the moral perceiver’s relation to the thing perceived, rather than only talking about the thing perceived. Her realism is a claim about the reality of the moral where reality is understood as that which is discerned by the virtuous perceiver.

Take, for example, the following passages:

[T]he realism (ability to perceive reality) required for goodness is a kind of intellectual ability to perceive what is true, which is automatically at the same time a suppression of self. (OGG 353)

[A]nything which alters consciousness in the direction of unselfishness, objectivity and realism is to be connected with virtue. (TSG 369)

In both of these quotes, Murdoch discusses the relation between a moral perceiver and the thing perceived. Realism or objectivity is talked of not as a metaphysical feature of objects, properties or facts, but as a feature of moral agents who are epistemically engaged with the world.

Of course, the standard realist might allow that there is such a thing as realism as a feature of a moral perceiver, and understand this in terms of accessing facts or properties which independently exist. Yet this ordering of explanations is ruled out by Murdoch’s insistence that reality itself is a normative (moral) concept. What is objectively real, for Murdoch, cannot be understood apart from ethics, apart from the essentially human activity of seeking to understand the world which is subject to moral evaluation. This is not to suggest that reality is a solely moral concept: it is also linked to truth, to how the world is. But it is to suggest that a conception of how the world is, of reality, must be essentially ethical.

What kind of relation, then, must the realistic observer stand in to the thing observed? Murdoch suggests that no non-moral answer can be given here, no description that demarcates the realistic stance in an ethically neutral way. However, a description can be given in rich ethical terms. To be realistic is best understood as doing justice to the thing one is confronted with, being faithful to the reality of it, being truthful about it, and so on. All of these terms capture the idea that perception can be genuinely cognitive, whilst at the same time being a fundamentally ethical task.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4653/.

References

  • Murdoch, Iris (1999). “The Idea of Perfection”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (299–337). Penguin. [IP]
  • Murdoch, Iris (1999). “On God and Good”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (337–63). Penguin. [OGG]
  • Murdoch, Iris (1999). “The Sovereignty of Good Over Other Concepts”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (363–86). Penguin. [TSG]
  • Murdoch, Iris (2012). “Metaphysics as a Guide to Morals”. Vintage Digital. [MGM]
  • Sayre-McCord, Geoffrey (2005). “Moral Realism”. In David Copp (Ed.), The Oxford Handbook of Ethical Theory (39–62). Oxford University Press.

About the author

Cathy Mason is an Assistant Professor in Philosophy at the Central European University (Vienna). She is currently working on a book on Iris Murdoch’s ‘metaethics’, as well as some ideas concerning the ethics of friendship.

Victor Lange and Thor Grünbaum – “Measurement Scepticism, Construct Validation, and Methodology of Well-Being Theorising”

A young pregnant woman is holding a small balance for weighing gold. In front of her is a jewelry box and a mirror; on her right, a painting of the last judgment.
“Woman Holding a Balance” (c. 1664) Johannes Vermeer

In this post, Victor Lange and Thor Grünbaum discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Many of us think that decisions and actions are justified, at least partially, in relation to how they affect the well-being of the involved individuals. Consider how politicians and lawmakers often justify, implicitly or explicitly, their policy decisions and acts by reference to the well-being of citizens. In more radical terms, one might be an ethical consequentialist and claim that well-being is the ultimate justification of any decision or action.

It would therefore be wonderful if we could precisely measure the well-being of individuals. Contemporary psychology and social science contain a wide variety of scales for this purpose. Most often, these scales measure well-being by self-reports. For example, subjects rate the degree to which they judge or feel satisfied with their own lives, or they report the ratio of positive to negative emotions. Yet, even though such scales have been widely adopted, many researchers express scepticism about whether they actually measure well-being at all. In our paper, we label this view measurement scepticism about well-being.

Our aim is not to develop or motivate measurement scepticism. Instead, we consider a recent and interesting reply to such scepticism, put forward by Anna Alexandrova (2017; see also Alexandrova and Haybron, 2016). According to Alexandrova, we can build an argument against measurement scepticism by employing a standard procedure of scientific psychology called construct validation. 

Construct validation is a psychometric procedure. Researchers use the procedure to assess the degree to which a scale actually measures its intended target phenomenon. If psychologists and social scientists have a reliable procedure to assess the degree to which a scale really measures what it is intended to measure, it seems obvious that we should use it to test well-being measurements. For the present purpose, let us highlight two key aspects of the procedure. 

First, construct validation utilises convergent and discriminant correlational patterns between the scores of various scales as a source of evidence. Convergent correlations concern the relation between scores on the target scale (intended to measure well-being) and scores on other scales (assumed to measure either well-being or some closely related phenomenon, such as wealth or physical health). Discriminant correlations concern non-significant relations between scores on the target scale and scores on scales that we expect to measure phenomena unrelated to well-being (e.g., scales measuring perceptual acuity). When assessing construct validity, researchers evaluate a scale by considering whether it exhibits the expected convergent correlations (whether subjects with high scores on the target well-being scale also score high on physical health, for example) and discriminant correlations (e.g., whether scores on the target well-being scale are uncorrelated with perceptual acuity).
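
As a toy illustration of this evidential pattern (ours, not the authors’; the scale names and subject data below are simulated stand-ins), a target scale should correlate strongly with related measures and negligibly with unrelated ones:

```python
import numpy as np

# Simulated scores, one entry per subject. The scale names are invented
# stand-ins, and the correlational structure is built in by construction.
rng = np.random.default_rng(0)
n = 200
wellbeing = rng.normal(size=n)                             # target scale
life_satisfaction = wellbeing + 0.5 * rng.normal(size=n)   # related scale
perceptual_acuity = rng.normal(size=n)                     # unrelated scale

def pearson_r(x, y):
    """Pearson correlation between two score vectors."""
    return np.corrcoef(x, y)[0, 1]

# Convergent evidence: the target scale should correlate with related scales.
print(f"convergent r:   {pearson_r(wellbeing, life_satisfaction):+.2f}")
# Discriminant evidence: correlation with unrelated scales should be near zero.
print(f"discriminant r: {pearson_r(wellbeing, perceptual_acuity):+.2f}")
```

A real validation study would of course gather these scores from subjects rather than simulate them; the point is only the shape of the evidence the procedure looks for.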

Second, the examination of correlational patterns depends on theory. Initially, we need a theory to build our scale (for instance, a theory of how well-being is expressed in the target population). Moreover, we need a theory to tell us what correlations we should expect (i.e. how answers on our scale should correlate with other scales). This means that, when engaging in construct validation, researchers test a scale and its underlying theory holistically. That is, the construct validation of the target scale involves testing both the scale and the theory of well-being that underlies it. Consequently, the procedure of construct validation requires that researchers remain open to revising their underlying theory if they persistently observe the wrong correlational patterns. Given this holistic nature of the procedure, correlational patterns might lead to revisions of one’s theory of well-being, perhaps even to abandoning it. 

The question now is this: Does the procedure of construct validation provide a good answer to measurement scepticism about well-being? While we acknowledge that for many psychological phenomena (e.g., intelligence) the procedures of construct validation might provide a satisfying reply to various forms of measurement scepticism, things are complicated with well-being. Here the normative nature of well-being rears its philosophical head. We argue that an acceptable answer to the question depends on the basic assumptions about the methodology of well-being theorising. Let us clarify by distinguishing between two methodological approaches.

First, methodological naturalism about well-being theorising claims that we should theorise about well-being in the same way we investigate any other natural phenomenon, namely, by ordinary inductive procedures of scientific investigation. Consequently, our theory of well-being should be open to revision on empirical grounds. Second, methodological non-naturalism claims that theorising about well-being should be limited to the methods known from traditional (moral) philosophy. The question of well-being is a question about what essentially and non-derivatively makes a person’s life go best. Well-being has an ineliminable normative or moral nature. Hence, the question of what well-being is, is a question only for philosophical analysis.

The reader might see the problem now. Since construct validation requires openness to theory revision by correlational considerations, it is a procedure that only a methodological naturalist can accept. Consequently, if measurement scepticism is motivated by a form of non-naturalism, we cannot reject it by using construct validation. Non-naturalists will not accept that theorising about well-being can be a scientific and empirical project. This result is all the more important because many proponents of measurement scepticism seem to be methodological non-naturalists.  

In conclusion, if justifying an action or a social policy over another often requires assessing consequences for well-being, then scepticism about measurement of well-being becomes an important obstacle. We cannot address this scepticism head-on with the procedures of construct validation. Such procedures assume something the sceptic might not accept, namely, that our theory of well-being should be open to empirical revisions. Instead, we need to start by making our methodological commitments explicit. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4663/.

References

  • Alexandrova, Anna (2017). A Philosophy for the Science of Well-Being. Oxford University Press. 
  • Alexandrova, Anna and Daniel M. Haybron (2016). “Is Construct Validation Valid?” Philosophy of Science, 83(5), 1098–109. 

About the authors

Victor Lange is a PhD fellow at the Section for Philosophy and a member of the CoInAct group at the Department of Psychology, University of Copenhagen. His research focuses on attention, meditation, psychotherapy, action control, mental action, and psychedelic-assisted therapy. He is part of the platform Regnfang, which publishes podcasts about the sciences of the mind.

Thor Grünbaum is an associate professor at the Section for Philosophy and the Department of Psychology, University of Copenhagen. He is head of the CoInAct research group. His research interests are in philosophy of action (planning, control, and knowledge), philosophy of psychology (explanation, underdetermination, methodology), and cognitive science (sense of agency, prospective memory, action control).

Kristie Miller – “Against Passage Illusionism”

Detail of Salvador Dalí’s tarot card “The Magician” (1983)

In this post, Kristie Miller discusses her article recently published in Ergo. The full-length version of Kristie’s article can be found here.

It might seem obvious that we experience the passing of time. Certainly, in some trivial sense we do. It is now late morning. Earlier, it was early morning. It seems to me as though some period of time has elapsed since it was early morning. Indeed, during that period it seemed to me as though time was elapsing, in that I seemed to be located at progressively later times.

One question that arises is this: in what do these seemings consist? One way to put the question is to ask what content our experience has. What state of the world does the experience represent as being the case?

Philosophers disagree about which answer is correct. Some think that time itself passes. In other words, they think that there is a unique set of events that are objectively, metaphysically, and non-perspectivally present, and that which events those are, changes. Other philosophers disagree. They hold that time itself is static; it does not pass, because no events are objectively, metaphysically, and non-perspectivally present, such that which events those are, changes. Rather, whether an event is present is a merely subjective or perspectival matter, to be understood in terms of where the event is located relative to some agent.

Those who claim that time itself passes typically use this claim to explain why we experience it as passing: we experience time as passing because it does. What, though, should we say if we think that time does not pass, but is rather static? You might think that the most natural thing to say would be that we don’t experience time as passing. We don’t represent there being a set of events that are non-perspectivally present, and that which those are, changes. Of course, we represent various events as occurring in a certain temporal order, and as being separated by a certain temporal duration, and we experience ourselves as being located at some times (rather than others) – but none of that involves us representing that some events have a special metaphysical status, and that which events have that status, changes. So, on this view, we have veridical experiences of static time.

Interestingly, however, until quite recently this was not the orthodox view. Instead, the orthodoxy was a view known as passage illusionism. This is the view that although time does not pass, it nevertheless seems to us as though it does. So, we are subject to an illusion in which things seem to us some way that they are not. In my paper I argue against passage illusionism. I consider various ways that the illusionist might try to explain the illusion of time passing, and I argue that none of them is plausible.

The illusionist’s job is quite difficult. First, the illusion in question is pervasive. At all times that we are conscious, it seems to us as though time passes. Second, the illusion is of something that does not exist – it is not an experience which could, in other circumstances, be veridical.

In the psychological sciences, illusions are explained by appealing to cognitive mechanisms that typically function well in representing some feature(s) of our environment. In most conditions, these mechanisms deliver us veridical experiences. In some local environments, however, certain features mislead the mechanism into misrepresenting the world, generating an illusion. These kinds of explanation, however, involve illusions that are not pervasive (they occur only in some local environments) and are not of something that does not exist (they are the product of mechanisms that normally deliver veridical experiences). This gives us reason to doubt that any explanation of this kind will work for the passage illusionist.

I consider a number of mechanisms that represent aspects of time, including those that represent temporal order, duration, simultaneity, motion and change. I argue that, regardless of how we think about the content of mental states, we should conclude that none of the representational states generated by these mechanisms individually, or jointly, represent time as passing.

First, suppose we think that the content of our experiences is exhausted by the things in the world that those experiences typically co-vary with.  For instance, suppose you have a kind of mental state which typically co-varies with the presence of cows. On this view, that mental state represents cows, and nothing more. I argue that if we take this view of representational content, then none of the contents generated by the functioning of the various mechanisms that represent aspects of time, could either severally or, importantly, jointly, represent time as passing. For even if our brains could in some way ‘knit together’ some of these contents into a new percept, such contents don’t have the right features to generate a representation of time passing. For instance, they don’t include a representation of objective, non-perspectival presence. So, if we hold this view on mental content, we should think that passage illusionism is false.

Alternatively, we might think that our mental states do represent the things in the world with which they typically co-vary, but that their content is not exhausted by representing those things. So, the illusionist could argue that we experience passage by representing various temporal features, such that our experiences have not only that content, but also some extra content, and that jointly this generates a representation of temporal passage.

I argue that it is very hard to see why we would come to have experiences with this particular extra content. Representing that certain events are objectively, metaphysically, and non-perspectivally present, and that which events these are, changes, is a very sophisticated representation. If it is not an accurate representation, it’s hard to see why we would come to have it. Further, it seems plausible that the human experience of time is, in this regard, similar to the experience of some non-human animals. Yet it seems unlikely that non-human animals would come to have such sophisticated representations, if the world does not in fact contain passage.

So, I conclude, it is much more likely, if time does not pass, that we have veridical experiences of a static world rather than illusory experiences of a dynamical world.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2914/.

About the author

Kristie Miller is Professor of Philosophy and Director of the Centre for Time at the University of Sydney. She writes on the nature of time, temporal experience, and persistence, and she also undertakes empirical work in these areas. At the moment, she is mostly focused on the question of whether, assuming we live in a four-dimensional block world, things seem to us just as they are. She has published widely in these areas, including three recent books: “Out of Time” (OUP 2022), “Persistence” (CUP 2022), and “Does Tomorrow Exist?” (Routledge 2023). She has a new book underway on the nature of experience in a block world, which hopefully will be completed by the end of 2024. 

Russ Colton – “To Have A Need”

A man is hanging from the hand of a clock fixed on the exterior wall of a six-story building, risking his life.
Harold Lloyd in the 1923 movie “Safety Last!”

In this post, Russ Colton discusses the article he recently published in Ergo. The full-length version of his article can be found here.

Every day we notice our own needs and those of others, and we are moved to address them. We often feel obliged to do so, even for strangers. Whatever is needed seems important in a way that perks and luxuries do not. Some philosophers take such observations quite seriously and give need a key role in their moral and political theories. Yet they characterize the concept of need differently, and sometimes not very fully. To help us understand and assess their ideas—but also just for the sake of improving our understanding of a common concept that enters into everyone’s practical and moral thinking—I want to try to say as clearly as possible what it means to have a need. My paper is focused on this task of conceptual clarification.

Throughout, I am concerned with a certain kind of need—welfare need, as I call it, which is the need for whatever promotes a certain minimum level of life quality, like the need for air, education, or self-confidence. This differs from a goal need (aka “instrumental need”), which is a need for whatever is required to achieve some goal—whether that goal is good for you or not—like the need for a bottle opener or a bank loan.

It is well-known that sometimes we say a person needs something when they have neither a welfare need nor a goal need for it: for example, “The employee needs to be fired.” In such cases, however, we can readily deny that the person has the need—the employee does not have a need to be fired. By contrast, in cases of welfare or goal need, the person has the need. Thus, insofar as we are interested in need because of its connection to human welfare, the specific concept of having a need may be more important than needing, which is why I focus on the former. In considering examples that test my analysis, it is best to think in terms of having a need.

Among philosophers, perhaps the most popular gloss on welfare need is this: to need something is to require it in order to avoid harm. This idea is approximately correct, but it needs improvement. The relevant notions of requirement and harm must be pinned down, and the idea must be broadened, since people also need what is required to reduce danger, like vaccines and seatbelts. To make these improvements, I offer two analyses of having a need—one that captures the original intuition about harm avoidance, and a broader one that captures the concept in full by covering both harms and dangers.

David Wiggins (Needs, Values, Truth) is the only theorist who has tried to clarify with precision the requirement aspect of the harm-avoidance idea. In broad strokes, his view is this: I need to have X if and only if, under the present circumstances, necessarily, if I avoid harm, then I have X, where necessity here is constrained by what is “realistically conceivable.”
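
Put semi-formally (our notation, not Wiggins’s):

\[
\text{I need } X \iff \Box_{rc}\,\big(\text{I avoid harm} \rightarrow \text{I have } X\big)
\]

where \( \Box_{rc} \) is necessity restricted to circumstances that are realistically conceivable given the present ones.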

This idea has a number of problems. One of the most serious arises when we have a need for some future X that will be unmet. If a non-actual X can count as realistically conceivable, there seems to be nothing preventing the non-actual possibility that, even without it, whatever harmful process was headed toward us is eventually thwarted by other means, leaving us unharmed. But that means I can have a need for X even though I could avoid harm without it.

Another problem is that often, when we’re in a pinch in a given circumstance, we view multiple things that could save us as needed. If I were short of money to pay rent at the end of the month, then each of the following assertions would be reasonable: I need more money in my bank account; I need a friend to lend me money; I need the landlord to give me more time. But on Wiggins’s necessity approach, given the circumstances, I can need only one X, which will have to be the disjunction of all potential rent solutions.

I argue that these (and other) problems are readily avoided with a counterfactual-conditional approach, along these lines: I have a need for something when, without it, my life would be (in some sense) harmed.
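
Using the counterfactual conditional \( \Box\!\!\rightarrow \) (‘if it were the case that …, it would be the case that …’), one natural regimentation of this idea (ours, and rougher than the paper’s official analysis) is:

\[
\text{I have a need for } X \iff \neg\,\text{I have } X \mathrel{\Box\!\!\rightarrow} \text{my life is harmed}
\]

Because a counterfactual is evaluated at what would actually happen without X, rather than at everything realistically conceivable, a merely conceivable rescue no longer counts; and since ‘without more money in my account, I would be harmed’ and ‘without the landlord’s extension, I would be harmed’ can both be true at once, several things can count as needed in the same pinch.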

Understanding the relevant notion of harm requires attending to how we balance positive and negative effects on welfare over time. I explore this in the paper and conclude that when you have a need for something, your life from then on would be better on the whole, and less unsatisfactory for some period, with it than without it.

There will be different intuitions about what counts as unsatisfactory. My analysis is neutral, but I do make a case for the claim that, on our most ordinary conception of need, the relevant sense of “unsatisfactory” is “not good.” With this idea in hand, my analysis yields a very natural thought: if you lack what you need, your life for a time will not be good and will be worse than it would otherwise be, and this loss will not be outweighed by any benefit.

Finally, I extend the analysis to our needs for what makes us safer, things without which we would be in more danger independently of whether we would be harmed. This is challenging because many present needs are for future benefits, and the risks relevant to our future welfare can change during the interval. Fortunately, there are easy ways to address the relevant issues so that the analysis remains quite simple. Roughly put: I now have a need for X if and only if it is now highly probable that, at the time of X, the expected value (quality) of my life from now on would, for some period, be less unsatisfactory with X than without, and would on the whole be higher.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4643/.

About the author

Russ Colton received his PhD from the University of Massachusetts Amherst. His current research interests are primarily in ethics.

Ashley Atkins – “Race and the Politics of Loss: Revisiting the Legacy of Emmett Till”

Dana Schutz’s “Open Casket”, part of the 2017 Whitney Biennial © Benjamin Norman / The New York Times / Redux

In this post, Ashley Atkins discusses the article she recently published in Ergo. The full-length version of Ashley’s article can be found here.

The exhibition of Dana Schutz’s Open Casket at the 2017 Whitney Biennial sparked immediate and passionate criticism: a protest was staged in front of the painting on its opening day; a public discussion of the controversy surrounding the painting was held by the Whitney during its exhibition; and sometime in between, a public letter was penned that called for the painting’s destruction.

There are so few paintings of the dead in open casket that a painting of this kind was almost certain to capture people’s attention, even to shock. Helen Molesworth, a curator of contemporary art and former Chief Curator of the Museum of Contemporary Art in Los Angeles, articulates this shock in response to a recent exhibition of Alice Neel’s work, which included Dead Father (1946).

Image of a dead old man in an open coffin.
“Dead Father” (1946) Alice Neel © The Estate of Alice Neel

Though Molesworth “grew up in a tradition where you see dead people in coffin,” the act of making an image of this subject matter was felt to border on obscenity: “Nobody takes a picture of the dead person in a coffin, people don’t make paintings—oil paintings—of dead people in coffins. This is like […] almost taboo to me. Still, when I see that painting [Dead Father] I…I am a little shocked still.”

Obscenity was one of the charges presented against Open Casket (also an oil painting) but even so the feeling was that this bordered on something more sinister. Here we have a painting not of an intimate but of a stranger and, as Hannah Black—who led the call for its destruction—summed it up, a painting of a dead black boy by a white artist.

What propelled the controversy was the painting’s connection to its presumed—though not explicitly identified—subject, Emmett Till, a black teenager lynched in Mississippi in 1955, whose disfigured and badly decomposed body was laid in an open casket for the duration of a four-day public viewing at the insistence of his mother, Mamie Till-Mobley. What exactly, critics pointedly asked, is the artist’s relationship to this legacy? What does it mean to look at this painting?

Open Casket and the criticism surrounding it present us with an opportunity to revisit this legacy and to examine, in particular, the significance of Mamie Till-Mobley’s public presentation of the body of her son, including her sanctioning of the publication of photographs of his body in casket, which continue to circulate. What can her actions mean to us today?

Schutz’s stated aims in painting Open Casket offer an illuminating starting point. Till-Mobley’s relationship to her son needed, in Schutz’s view, to be reflected in some way in this image; though the violence done to him was horrific and real and should be acknowledged as such—one of the reasons for its continuing political significance—the painting could not simply be grotesque (and I think we can see something in it of the tenderness that appears in Dead Father, what Fran Lebowitz unguardedly describes as beauty in her discussion of Neel’s painting). The image would also be, somehow, an American image.

We can find a basis for these ideas in Mamie Till-Mobley’s own reflections on these events, particularly in her declaration that all Americans needed to be impacted by the sight of the body as a whole so that they might together say what they had seen (Till-Mobley & Benson 2003: 140). As she reveals in her autobiography, Till-Mobley had herself studied the violence done to her son. She could describe it forensically, inch by inch, she tells us, but something other than this kind of engagement with the body of her son was intended by her invitation to Americans to look together and say what they saw. This was something she alone could not do. Americans also needed to see pictures of her son as he had been, she proposed, so that they could see what was lost to them.

It was through being initiated into rites of mourning, as we might put it, that Americans were to participate in this legacy. The most provocative aspect of Open Casket—what was experienced by critics as its intrusion into the mourning of others—is also, in my view, the central thread linking it to this legacy, namely, its engagement with Till-Mobley’s invitation to mourn her son’s legacy as an American one.

If this is right, why have critics neglected to consider that the painting might be a mournful one, or at least to judge its failure in these terms?

One important reason is that these critics do not understand this legacy in the terms that I set out; they would reject the idea that “racial losses” are to be mourned collectively. The critical reception of this legacy is a divided one. It assigns two complementary functions to the photographs of Emmett Till in casket: on the one hand, these photographs are understood to facilitate mourning (to provide shelter, warning, and inspiration) among those vulnerable to white violence and, on the other hand, these photographs circulate as evidence and are meant to expose those implicated in this violence (as Schutz was said to be through her painting). The aim of exposure can be seen in the rhetoric surrounding the violence associated with the iconography of Emmett Till’s death. The violence, critics insist, speaks for itself, bears its own witness, without any need for subjective response (of which mournfulness is a paradigm). It is on such grounds that Schutz’s painting is criticized for being too subjective.

But if this is right, why was there any need for Till-Mobley to invite all Americans to look together? What use do we have for the idea that the loss of her son might be conceived of as a common loss?

It is tempting to understand Till-Mobley’s invitation to Americans within a tradition of political thought that sees democracy as requiring continual sacrifice and, relatedly, as requiring that citizens cultivate a capacity to mourn such losses (Allen 2004; McIvor 2016). Though this tradition is acutely aware of the ways in which the burdens of loss have historically been shifted onto African Americans, among others, legacies of racial violence and loss engendered in this manner are thought to be no less collective for being borne inequitably. It is on these grounds that even these losses are to be mourned—that is, acknowledged as losses—by all citizens.

This tradition misses, however, the significance of and the challenge presented by Till-Mobley’s invitation. She did not assume that the loss of her son could already constitute a collective loss. She proposed that it be seen as such; i.e., that Americans come to think of themselves as people who had suffered this loss and needed, collectively, to put into words what it was and how it impacted them—an act of political re-envisioning so bold that its fulfillment would perhaps have been understood as a political refounding of the country. In this sense, we might see her as participating in the political lineage of Abraham Lincoln, who, it has been argued, also used a concrete occasion of mourning, in Gettysburg, to offer a vision of the country so bold in its re-envisioning that it has been conceptualized in these terms (Wills 1992; Nussbaum 2013).

On July 25, 2023, President Joe Biden signed a proclamation establishing the Emmett Till and Mamie Till-Mobley National Monument in both Mississippi and Illinois (the family’s home state). The new national monument “will help tell the story of the events surrounding Emmett Till’s murder, their significance in the civil rights movement and American history, and the broader story of Black oppression, survival, and bravery in America.”

We can’t yet know how this story will be told. Will it present Emmett Till’s death as a loss to be mourned and, if so, by whom (what nation)? Will it present these events with tenderness, with the kind of beauty that helps us to bear the ugliness of death and violence? Will Mamie Till-Mobley’s contribution to the civil rights movement be memorialized, as is standard, as helping “catalyze” this movement? This implies not only that her gesture was of great significance but that it was significant mainly for what followed, for what would conventionally be thought of as properly political actions. Seeing Till-Mobley in this way would reflect the view of her contemporaries, among them powerful leaders in the NAACP who eventually publicly cut ties with her. In describing her grief as a catalyst and a benefit to later generations—to the living rather than to the dead—they were making the point that grief was not itself of political significance, but rather a dangerous distraction from these other, properly political ends.

The question of what Till-Mobley’s gesture can mean to us today will depend on many things, including our understanding of the prospects for a mournful politics.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2250/.

References

  • Allen, Danielle (2004). Talking to Strangers: Anxieties of Citizenship Since Brown v. Board of Education. University of Chicago Press.
  • McIvor, David W. (2016). Mourning in America: Race and the Politics of Loss. Cornell University Press.
  • Nussbaum, Martha (2013). Political Emotions: Why Love Matters for Justice. Harvard University Press.
  • Till-Mobley, Mamie and Christopher Benson (2003). Death of Innocence: The Story of the Hate Crime That Changed America. Random House.
  • Wills, Garry (1992). Lincoln at Gettysburg: The Words That Remade America. Simon and Schuster.

About the author

Ashley Atkins is an Associate Professor of Philosophy at Western Michigan University. She received an NEH Fellowship this year in support of a book project that examines grief through the lens of contemporary memoir. “Race and the Politics of Loss” is part of a series of papers exploring legacies of racial violence and loss in democratic politics.

Posted on

Corey Dethier – “Interpreting the Probabilistic Language in IPCC Reports”

A young sibyl (sacred interpreter of the word of god in pagan religions) argues with an old prophet (sacred interpreter of the word of god in monotheistic religions). It looks as if the discussion will go on for a long while.
Detail of “A sibyl and a prophet” (ca. 1495) Andrea Mantegna

In this post, Corey Dethier discusses his article recently published in Ergo. The full-length version of Corey’s article can be found here.

Every few years, the Intergovernmental Panel on Climate Change (IPCC) releases reports on the current status of climate science. These reports are massive reviews of the existing literature by the most qualified experts in the field. As such, IPCC reports are widely taken to represent our best understanding of what the science currently tells us. For this reason, the IPCC’s findings are important, as is their method of presentation.

The IPCC typically qualifies its findings using two scales: one for likelihood and one for confidence. In its 2013 report, for example, the IPCC says that the sensitivity of global temperatures to increases in CO2 concentration is “likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence) and very unlikely greater than 6°C (medium confidence)” (IPCC 2013, 81).

You might wonder what exactly these qualifications mean. On what grounds does the IPCC say that something is “likely” as opposed to “very likely”? And why does it assign “high confidence” to some claims and “medium confidence” to others? If you do wonder about this, you are not alone. Even many of the scientists involved in writing the IPCC reports find these qualifications confusing (Janzwood 2020; Mach et al. 2017). My recent paper – “Interpreting the Probabilistic Language in IPCC Reports” – aims to clarify this issue, with particular focus on the IPCC’s appeal to the likelihood scale.
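For orientation, the IPCC does publish a calibration that pairs each likelihood term with a nominal probability range. The sketch below renders that pairing as a simple lookup table; the ranges follow the calibrated-language guidance used for the 2013 report, but the code, the names, and the formatting are mine, purely for illustration.

```python
# A sketch of the IPCC's calibrated likelihood scale as a lookup table.
# The nominal ranges follow the guidance used for the Fifth Assessment
# Report; the code itself is illustrative, not anything the IPCC provides.

LIKELIHOOD_SCALE = {
    "virtually certain":      (0.99, 1.00),
    "extremely likely":       (0.95, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely":               (0.00, 0.33),
    "very unlikely":          (0.00, 0.10),
    "extremely unlikely":     (0.00, 0.05),
    "exceptionally unlikely": (0.00, 0.01),
}

def gloss(term: str) -> str:
    """Pair a likelihood term with its nominal probability range."""
    low, high = LIKELIHOOD_SCALE[term]
    return f"{term!r}: nominally {low:.0%} to {high:.0%}"

for term in ("likely", "very likely", "extremely unlikely"):
    print(gloss(term))
```

Even with these nominal ranges in hand, the interpretive question remains: what kind of probability do the numbers express?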

Traditionally, probabilistic language such as “likely” has been interpreted in two ways. On a frequentist interpretation, something is “likely” when it happens with relatively high frequency in similar situations, while it is “very likely” when it happens with a much greater frequency. On a personalist interpretation, something is “likely” when you are more confident that it will happen than not, while something is “very likely” when you are much more confident.
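To make the contrast concrete, here is a toy sketch, entirely my own and with made-up numbers, of how each interpretation would cash out a claim of the form “it is likely that E”.

```python
import random

random.seed(0)  # reproducibility for the toy simulation

# Frequentist reading: "likely" reports a relative frequency across many
# similar situations. Simulate 10,000 such situations in which the event
# in fact occurs with chance 0.7.
outcomes = [random.random() < 0.7 for _ in range(10_000)]
frequency = sum(outcomes) / len(outcomes)
print(f"frequentist: the event occurred in {frequency:.1%} of similar cases")

# Personalist reading: "likely" reports a single agent's degree of belief;
# no repeated trials are involved, just a credence above 0.5.
credence = 0.7
print(f"personalist: the agent's credence in the event is {credence}")
```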

Which of these interpretations better fits the IPCC’s practice? I argue that neither of them does. My main reason is that both interpretations are closely tied to specific methodologies in statistics. The frequentist interpretation is appropriate for “classical” statistical testing, whereas the personalist interpretation is appropriate when “Bayesian” methods are used. The details about the differences between these methods do not matter for our present purposes. My main point is that climate scientists use both kinds of statistics in their research, and since the IPCC’s report reviews all of the relevant literature, the same language is used to summarize results derived from both methods.

If neither of the traditional interpretations works, what should we use instead? My suggestion is the following: we should understand the IPCC’s probabilistic terms as more like letter grades (an A or a B or a C, etc.) than like strict probabilistic claims implying a particular statistical methodology.

An A in geometry or English suggests that a student is well-versed in the subject according to the standards of the class. If the standards are sufficiently rigorous, we can conclude that the student will probably do well when faced with new problems in the same subject area. But an A in geometry does not mean that the student will correctly solve geometry problems with a given frequency, nor does it tell you how confident you should be that they’ll solve a new geometry problem.

The IPCC’s use of terms such as “likely” is similar. When the IPCC says that a claim is likely, that’s like saying that it got a C on a very hard test. When the IPCC says that sensitivity is “extremely unlikely less than 1°C”, that’s like saying that this claim fails the test entirely. In this analogy, the IPCC’s judgments of confidence reflect the experts’ evaluation of the quality of the class or test: “high confidence” means that the experts think that the test was very good. But even when a claim passes the test with full marks, and the test is judged to be very good, this only gives us a qualitative evaluation. Just as you shouldn’t conclude that an A student will get 90% of problems right in the future, you also shouldn’t conclude that something that the IPCC categorizes as “very likely” will happen at least 90% of the time. The judgment has an important qualitative component, which a purely numerical interpretation would miss.
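Put schematically, on this reading an IPCC finding carries a qualitative pair, a grade for the claim and an appraisal of the test it passed, and no single probability follows from that pair. The sketch below is my own way of modeling the analogy, with all names and structure assumed for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """A schematic IPCC-style finding on the 'letter grade' reading."""
    claim: str
    likelihood: str  # the grade the claim earned, e.g. "likely"
    confidence: str  # the experts' appraisal of the test, e.g. "high confidence"

    def summary(self) -> str:
        # Deliberately qualitative: no probability is computed, because on
        # the grade reading none is implied by the two terms together.
        return (f"{self.claim} -- graded {self.likelihood!r} on a test "
                f"the experts rate with {self.confidence!r}")

sensitivity = Finding(
    claim="climate sensitivity is in the range 1.5 to 4.5 degrees C",
    likelihood="likely",
    confidence="high confidence",
)
print(sensitivity.summary())
```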

It would be nice – for economists, for insurance companies, and for philosophers obsessed with precision – if the IPCC could make purely quantitative probabilistic claims. At the end of my paper, I discuss whether the IPCC should strive to do so. I’m on the fence: there are both costs and benefits. Crucially, however, my analysis suggests that this would require the IPCC to go beyond its current remit: in order to present results that allow for a precise quantitative interpretation of its probability claims, the IPCC would have to do more than simply summarize the current state of the research. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4637/.

References

  • IPCC (2013). Climate Change 2013: The Physical Science Basis. Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Thomas F. Stocker, Dahe Qin, et al. (Eds.). Cambridge University Press.
  • Janzwood, Scott (2020). “Confident, Likely, or Both? The Implementation of the Uncertainty Language Framework in IPCC Special Reports”. Climatic Change 162, 1655–75.
  • Mach, Katharine J., Michael D. Mastrandrea, et al. (2017). “Unleashing Expert Judgment in Assessment”. Global Environmental Change 44, 1–14.

About the author

Corey Dethier is a postdoctoral fellow at the Minnesota Center for Philosophy of Science. He has published on a variety of topics relating to epistemology, rationality, and scientific method, but his main research focus is on epistemological and methodological issues in climate science, particularly those raised by the use of idealized statistical models to answer questions about climate change.

Posted on

Christine Bratu – “How (Not) To Wrong Others with Our Thoughts”

Image of two cherubs thinking.
Detail of the San Sisto Madonna (c. 1513-1514) Raphael

In this post, Christine Bratu discusses her article recently published in Ergo. The full-length version of Christine’s article can be found here.

Imagine Jim attends a fancy reception and, seeing a person of color standing around in a tuxedo, concludes that they are a waiter (when, in fact, they, too, are a guest). Alternatively, picture Anna who, during a prestigious conference, sees a young woman setting up a laptop at the lectern and concludes that she is part of the organizing team (when, in fact, this woman is the renowned professor who will give the keynote lecture). In many of us, cases like these elicit the fundamental intuition that there is something morally problematic going on.

Some philosophers have used this intuition to argue for the possibility of doxastic wronging (Basu 2018, 2019a, 2019b; Basu and Schroeder 2019; Keller 2018). Cases like these, they argue, show that we have the moral duty not to have bigoted beliefs about each other. On their interpretation, the situations above are morally troublesome because, by believing classic racist and sexist stereotypes, Jim and Anna violate a duty they have towards their fellow party guest and keynote speaker, respectively. According to proponents of doxastic wronging, positing this morally grounded epistemic duty is the best way to explain our intuition, since in the situations depicted neither protagonist acts in a reprehensible way (in fact, neither of them acts at all!) – it’s their racist and sexist beliefs as such that are the problem.

I think this proposal is intriguing. Group-based discrimination is a serious moral and political problem, and the moral duty not to have bigoted beliefs seems perfectly tailored to strike at its root. Nevertheless, in my article I argue that we should reject the existence of such a duty: there is no such thing as doxastic wronging. I argue for this by presenting what I call the liberal challenge.

I start from the assumption that positing any new, morally grounded epistemic duties comes at a price, because it constitutes a curtailment of our freedom of thought. We should only accept such curtailment if we can thereby gain something comparably important. I then point out three strategies that advocates of doxastic wronging could adopt to convince us that we are gaining something comparably important, and I explain why I think that all three of them fail.

First, the advocates of doxastic wronging could claim that positing a duty not to have bigoted beliefs helps us avoid bigoted actions. This strategy fails, I argue, because we are already under the moral obligation not to act in bigoted ways. If the reason for limiting our freedom of thought is merely to decrease the risk of bigoted actions, then placing us under this new obligation is superfluous.

Second, these philosophers could claim that positing a duty not to have bigoted beliefs helps us avoid the practical vices that bigoted beliefs manifest, such as arrogance. This strategy fails, I argue, because – while we might be morally better, i.e. more virtuous, if we avoided vices like arrogance – we are under no moral obligation to do so.

Third, they could claim that positing a duty not to have bigoted beliefs is necessary to avoid the intrinsic harm of being the object of bigoted beliefs. This third strategy starts off more promisingly, as it is based on a correct observation: most of us desire not to be the objects of bigoted beliefs. People who think about us in bigoted ways frustrate this legitimate desire, and so it seems that they thereby harm us. Yet even if we grant that bigoted beliefs harm their targets, we cannot conclude that the resulting harm is important enough to justify restricting our freedom of thought. People frustrate each other’s legitimate desires all the time. We frustrate our parents’ legitimate desire to see us flourish when we let our talents go to waste, and we frustrate our partners’ legitimate desire to continue the relationship when we break up with them. Cases like these show that frustrating someone’s legitimate desire is not sufficient for our behavior to count as morally impermissible. To make this strategy work, proponents of doxastic wronging must, in addition, argue that the desire not to be the objects of bigoted beliefs is so important that its frustration is morally impermissible. However, I contend that they can only do so by appealing to the impermissibility of either bigoted actions or the vices that bigoted beliefs manifest. In other words, they can only do so by falling back on one of the first two strategies. And since I’ve already shown that those strategies fail, so does this one.

If we reject the duty not to have bigoted beliefs – as I argue we should – what about our initial intuition? What is wrong with Jim’s assumption that a person of color is most likely a waiter rather than a guest, or with Anna’s assumption that a young woman at the conference podium is most likely an organizer rather than the keynote speaker?

It seems to me that the best way to make sense of these cases is to explain them not in terms of doxastic wronging, but rather in terms of doxastic harming. People like Jim and Anna do not violate any obligations they have toward their targets when they think about them in racist or sexist ways. However, they do frustrate their targets’ desire not to be the objects of bigoted beliefs, and they thereby harm them. When we reproach people like Jim and Anna for their hurtful thoughts, we are accusing them not of having done something they were morally not allowed to do, but rather of having done something it would have been better not to do (even though they were morally allowed to do it).

The change in perspective I propose does not make light of the morally problematic nature of bigoted beliefs. On the contrary, it ensures that the criticism we level against people who entertain such beliefs hits its mark properly by avoiding the moralistic overreach of making morally grounded demands on what other people believe.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/3595/.

References

  • Basu, Rima (2018). “Can Beliefs Wrong?” Philosophical Topics 46 (1): 1–17.
  • Basu, Rima (2019a). “The Wrongs of Racist Beliefs”. Philosophical Studies 176 (9): 2497–515.
  • Basu, Rima (2019b). “What We Epistemically Owe to Each Other”. Philosophical Studies 176 (4): 915–31.
  • Basu, Rima and Mark Schroeder (2019). “Doxastic Wronging”. In Brian Kim and Matthew McGrath (Eds.), Pragmatic Encroachment in Epistemology, 181–205. Routledge.
  • Keller, Simon (2018). “Belief for Someone Else’s Sake”. Philosophical Topics 46 (1): 19–35.

About the author

Christine Bratu is a professor of philosophy at the University of Göttingen in Germany. She received her PhD in philosophy from the Ludwig Maximilian University of Munich. Her research interests are in feminist philosophy, moral and political philosophy (especially issues of disrespect and discrimination), and topics at the intersection between ethics and epistemology.