Alexander Edlich and Alfred Archer – “Tightlacing and Abusive Normative Address”

In this post, Alexander Edlich and Alfred Archer discuss their article recently published in Ergo. The full-length version of their article can be found here.

“Standing woman viewed from the front, clasping her corset” (ca. 1883) Edgar Degas

When we interact with each other, we draw on assumptions about what the other is like. When I talk to someone, I assume they can understand me; when I ask them for help, I assume they are capable of helping; when I offer help, I assume it is in their interest. These assumptions are often neither explicit nor conscious, but they guide our interactions.

Likewise, when we act, we are influenced by assumptions about what we ourselves are like. I only get into the pool because I assume that I can swim; I go to bed early because I assume that I need a certain amount of sleep; I hesitate to run for a public office because I am unsure I can handle its pressures.

Our agency is thus deeply influenced by our conception of ourselves. This also holds for our moral agency. We may feel under a duty to help others because we assume not only that they need help but also that we are able to offer the help they need at no great cost to ourselves. Conversely, we think that others should help when we assume they are able to and it is not too demanding. Whether we think we have a certain duty, then, depends in part on what kind of people we understand ourselves to be, including what we understand our capacities to be and what we think will count as unduly burdensome for us. 

This makes us vulnerable to having our self-conceptions wrongfully distorted by others in ways that control how we think we should act. We call this kind of wrong “tightlacing”.

Tightlacing occurs when someone is subjected to influences that foreseeably distort their self-conception in a way that makes them place overburdening demands on themselves. This may come about in various ways, but moral address, with the assumptions it makes about its target, is a particularly useful vehicle for it. Consider two examples:

  • Some parents want their children to suppress their emotions, especially where they are negative, and this can lead to emotional abuse. If normal episodes of, say, a child’s anger regularly elicit responses like “Keep your ridiculous anger to yourself”, “Why are you annoying us with this?”, or “Your mother/father is having such a hard time already, and now you’re being so unhelpful”, a moral demand is made that conveys an assumption about what the child is like. They are expected to be able to suppress their anger in order not to annoy their parents, and this conveys that their anger is something they should be in control of, and that managing it away has no cost to them.
  • Survivors of political atrocities like the Holocaust are sometimes expected to overcome feelings of resentment to enable a society to move forward. Where they refuse to do so, they may find themselves accused of vengefulness and egoism. Their resentment is not treated as a proper emotional reaction to the horror they experienced, but as something they, in the interest of society, should be able to get rid of. This expectation, too, conveys a view of their emotional nature.

In these examples, people are addressed with moral demands based on the assumption that they can regulate their emotions away in order to benefit others without significant cost to themselves. Given the perceived authority of moral demands and the fact that these problematic assumptions are left implicit, this risks manipulating the moral addressees into accepting the view conveyed. If successful, this strategy induces in them a distorted view of themselves: for example, that they are beings with no significant affective nature, or at least with an affective nature that can be easily disposed of. As a consequence, they will tend not to view their emotions as a natural and integral part of themselves, and they will not recognise that emotion regulation can be a difficult and costly task. In short, their view of themselves might become distorted.

This tightlaces them: if someone is made to think their affective nature carries no weight and is easily managed away, they will make moral demands on themselves that are based on this assumption. Once manipulated into thinking that one’s anger has no significance, or that emotion regulation has no costs, these agents are likely to expect themselves to regulate their anger away even when doing so is in fact inappropriate. They will apply norms to themselves that appear to be justified but, given their actual nature, are in fact decidedly overburdening. Victims of tightlacing thus find themselves wearing a normative corset which does not fit what they are like.

We are not saying that tightlacing is the only form of wrongdoing occurring in these examples. Nor do we think that tightlacing only occurs in relation to managing feelings; it occurs whenever someone’s view of themselves is changed such that they apply overburdening demands to themselves. 

Tightlacing wrongs people in many ways: it is manipulative, it makes unreasonable demands, and it is likely harmful to its victims. But, in addition to this, there are two especially significant ways in which it is wrongful. First, by pressuring its victims into applying unreasonable demands to themselves, it denies their rights and makes them complicit in this denial. Second, it erases who they are and, again, makes them complicit in this erasure.

To sum up, our actions and interactions depend on our conception of what we and others are like. This makes us vulnerable not only to other people’s conception of us, but also to what they may do to alter our conception of ourselves. Not abiding by a distorted conception of ourselves, and not being tightlaced into unfitting norms, matters greatly to our lives and our freedom. To break free of unfitting normative corsets, we need to let go of such distorted conceptions. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4644/.

About the authors

Alexander Edlich is a postdoctoral researcher at Ludwig-Maximilians-Universität Munich, where he completed his PhD in 2023. He works on moral responsibility (specifically blame, protest, and apology), the philosophy of emotions, and feminist and LGBTQ ethics.

Alfred Archer is Associate Professor of Philosophy at Tilburg University. He is interested in ethics, social philosophy, philosophy of sport and moral psychology. He is the co-author of Honouring and Admiring the Immoral: An Ethical Guide (Routledge 2021), Why It’s Ok to be a Sports Fan (Routledge 2024) and Extravagance and Misery: The Emotional Regime of Market Societies (Oxford University Press 2024).

Sebastian Bender – “Spinoza on the Essences of Singular Things”

In this post, Sebastian Bender discusses the article he recently published in Ergo. The full-length version of Sebastian’s article can be found here.

Picture of a red drop at the center of an intricate double spiral pattern painted on a rugged cloth
“At the core” (1935) Paul Klee

Essences play a key role in Spinoza’s philosophy, but it is surprisingly difficult to figure out what he takes them to be. Are essences concepts, as some commentators suggest (Wolfson 1934; Newlands 2018)? Or are they something in things? And what is their theoretical role in Spinoza’s system?

The surprising finding of the paper is that Spinoza’s account of essence is much further removed from traditional Aristotelian accounts than one might expect.

The notion of essence has had a central role in the history of Western philosophy at least since Aristotle, and Spinoza’s account can only be understood against this background. For Aristotle and his scholastic successors, the essence of a thing tells us what that thing is. A well-known example is the Aristotelian definition of a human being as a rational animal. This definition expresses the essence of a human. It states that being rational and being an animal are essential to being human.

Three aspects of traditional Aristotelian essences are noteworthy. First, they are universal. Tina and Tom both belong to the species ‘human’ because they share the same essence. Thus, Aristotelians hold that essences explain why individuals belong to a certain kind or species. Second, the essential features of a thing are its core features, and they are explanatorily prior to any non-essential features. The ability to laugh, for example, is something humans cannot lack, but it is not an essential quality because it depends on rationality, which is explanatorily prior. Finally, according to Aristotelians essential features are intrinsic. This last point is often not mentioned, presumably because it is taken for granted (it is made explicit, though, by Cohen & Reeve 2021).

In the early modern period, many aspects of the Aristotelian metaphysical framework are overthrown. This includes, for instance, the notions of substantial form and prime matter, which many early modern philosophers deem useless or confused. Other Aristotelian concepts and tools, however, continue to be used; among them is the notion of essence. Despite their strongly anti-Aristotelian rhetoric, philosophers such as Descartes, Hobbes, and Cavendish more or less adhere to an Aristotelian conception of essence (Schechtman 2024).

What about Spinoza? As recent scholarship has shown, there is at least one clear point of divergence between Spinoza and the Aristotelian tradition: Spinozistic essences are most likely individual essences (Martin 2008; Della Rocca 2008). For Spinoza, humans do not all share the same essence; instead each human being has a highly specific individual essence. In fact, Spinoza tends to view general kind concepts, such as ‘human’ or ‘horse,’ as epistemically problematic. Such concepts may mislead us because they tempt us to ignore real and meaningful differences between distinct things in the world. Like many other philosophers of the second half of the seventeenth century (including, e.g., Leibniz), Spinoza severs the connection between essences and kinds (Schechtman 2024).

Setting this issue aside, however, many commentators have argued that Spinoza by and large adopts an Aristotelian framework of essence. Here is Thomas Ward’s succinct summary of this reading:

Although [Spinoza] rejects part of the Aristotelian conception of essence, according to which it is in virtue of its essence that a thing is a member of a kind, he nevertheless retains a different part of an Aristotelian conception of essence, according to which an essence is some structural feature of a thing which causally explains other, non-essential features. (Ward 2011, p. 44)

Thus, it seems that according to Spinoza essences are intrinsic features of things, and that they are explanatorily prior in that they account for the less fundamental features of such things. Except for the fact that his essences are individual while Aristotelian essences are universal, then, Spinoza seems to accept much of the Aristotelian framework.

In contrast, I argue that Spinoza’s account of essence is much less Aristotelian than this commonly held view might suggest. The main issue is that Spinoza questions an idea which Aristotelians, and many other philosophers, simply take for granted: that the essence of a thing tells us what that thing is. On Spinoza’s view, essences – at least the essences of singular things – can do so only in part. 

To see why this is so, it is important to note (i) how singular things relate to God for Spinoza, and (ii) how singular things relate to their own causal history.

As for the first point, Spinoza is a substance monist, who holds that God is the only substance. Everything else—be it tables, apples, or planets—is ‘in’ God and can only be ‘conceived through’ God (E1def5). Thus, in order to (fully) understand what a certain singular thing is, one needs to understand God.

As for the second point, Spinoza holds that “[t]he cognition of an effect depends on, and involves, the cognition of its cause” (E1ax4, translation modified). Thus, in order to (fully) understand what a singular thing is, one must grasp the entire causal history of the thing.

It may seem, then, that essences are really packed, or ‘overloaded,’ for Spinoza (Della Rocca 2008; Lin 2012). But this is not his view. In fact, Spinoza explicitly tries to avoid ‘overloading’ essences. In an important passage, he writes that “singular things can neither be nor be conceived without God, and nevertheless, God does not pertain to their essence” (E2p10s2). Similarly, Spinoza does not include the causal history of singular things in their essences.

The result is that, unlike Aristotelians, Spinoza believes that the essences of singular things do not render these things fully conceivable. Both God and the causal history of a thing are needed to fully grasp what a thing is. But since Spinoza excludes information about God and causal history from the essences of singular things, grasping these essences does not enable us to (fully) understand what the things they are essences of truly are. From this we can conclude that Spinoza’s view of essences and their theoretical role is quite different from the traditional Aristotelian account.

Want more?

See the full article at https://journals.publishing.umich.edu/ergo/article/id/2266/.

References

  • Cohen, S. Marc and C. D. C. Reeve (2021). “Aristotle’s Metaphysics.” In: The Stanford Encyclopedia of Philosophy (Winter 2021 Edition). Ed by Edward N. Zalta. URL = <https://plato.stanford.edu/archives/win2021/entries/aristotle-metaphysics/>.
  • Della Rocca, Michael (2008). Spinoza. Routledge.
  • Lin, Martin (2012). “Rationalism and Necessitarianism.” Noûs 46(3): 418–48.
  • Martin, Christopher (2008). “The Framework of Essences in Spinoza’s Ethics.” British Journal for the History of Philosophy 16(3): 489–509.
  • Newlands, Samuel (2018). Reconceiving Spinoza. Oxford University Press.
  • Schechtman, Anat (2024). “Modern.” In: The Routledge Handbook of Essence in Philosophy. Ed. by Kathrin Koslicki and Michael Raven. Routledge, 41-52.
  • Spinoza, Baruch de (1985). The Collected Works of Spinoza (2 vols.). Ed. and trans. by E. Curley. Princeton University Press. [References to the Ethics (E) are cited by using the following abbreviations: ax = axiom, d = demonstration, def = definition, p = proposition, s = scholium.]
  • Ward, Thomas (2011). “Spinoza on the Essences of Modes.” British Journal for the History of Philosophy 19(1): 19–46.
  • Wolfson, Harry (1934). The Philosophy of Spinoza. Harvard University Press.

About the author

Picture of the author

Sebastian Bender is Assistant Professor in Philosophy at the University of Göttingen. His research focuses on early modern philosophy, especially on the metaphysics, epistemology, philosophy of mind, and political philosophy of this era. He writes on figures such as Francisco Suárez, René Descartes, Nicolas Malebranche, Baruch de Spinoza, Gottfried Wilhelm Leibniz, Anne Conway, John Locke, Margaret Cavendish, George Berkeley, David Hume, and Immanuel Kant.

Nir Ben-Moshe – “An Adam Smithian Account of Humanity”

In this post, Nir Ben-Moshe discusses his article recently published in Ergo. The full-length version of Nir’s article can be found here.

Photo of two people in a room without sensory reference points.
“Bridget’s Bardo” © James Turrell/Courtesy The Pace Gallery/Photo by Florian Holzherr

Some sentimentalists, inspired by Adam Smith’s moral philosophy, have tried to establish a strong normative connection between one’s own perspective and the perspectives of others, as a way of showing that one is reciprocally bound to, and ought to be engaged with, one’s fellow human beings (Debes 2012; Stueber 2017; Fleischacker 2019).

I understand the connection via Christine Korsgaard’s claim that

“valuing humanity in your own person [. . .] implies, entails, or involves valuing it in that of others” (Korsgaard 1996: 132)

Based on Smith’s moral philosophy, I offer a sentimentalist defense of a version of the Korsgaardian claim: my valuing my own humanity, my unique perspective, entails my valuing your humanity, your unique perspective.

Following Samuel Fleischacker (2019: 31), the conception of humanity that I develop builds on the notion of perspective at the heart of Smith’s account of sympathy.

Smith has an account of sympathy based on imaginative projection, according to which we use our imagination in order to place ourselves in another actor’s situation (TMS I.i.1.2).

“[Sympathy] does not arise so much from the view of the passion, as from that of the situation which excites it” (TMS I.i.1.10)

This account pays special attention to an agent’s perspective on the situation, including the causes of their passions: 

“The first question which we ask is, What has befallen you? Till this be answered, […] our fellow-feeling is not very considerable” (TMS I.i.1.9).

I associate humanity with having a unique perspective and being aware of this perspective. More specifically, a human being is the type of being that is aware, qua spectator, of its own unique perspective, qua actor.

The main argument of the paper relies on three components of Smith’s moral theory.

The first component is the impartial spectator, which Smith understands as the standpoint of a person in general, a neutral point of view. This standpoint humbles our self-love and makes us appreciate that our perspective is but one of a multitude and in no respect better than any other. From this standpoint, Smith argues, we take into account the perspectives and interests of all concerned. 

The second component is the normative status of other-oriented sympathy. When discussing sympathy early in TMS, Smith focuses on imagining being oneself in another actor’s situation (self-oriented sympathy). However, later in TMS, Smith focuses on imagining being another actor in that actor’s situation (other-oriented sympathy). I make the case that Smith thought that other-oriented sympathy is the proper form of sympathy. The key idea is that the appropriate form of sympathy to experience from a neutral point of view is a form of sympathy that is not influenced by self-love.

The third component is the desire for mutual sympathy with others: people are pleased when others sympathize with them and when they are able to sympathize with others.

In order to appreciate the importance of these three components of Smith’s thought, I make use of a distinction that Nagel (1986: 170) draws between recognizing a perspective and occupying it.

The standpoint of the impartial spectator is a standpoint from which I recognize that all perspectives have equal worth: adopting it makes me appreciate that my own perspective is no more privileged than—indeed, is equal to—other people’s perspectives; it tells me that insofar as my perspective is worthy of recognition, your perspective is worthy of recognition, too.

However, this type of recognition would merely show me that your perspective has normative force for you in the same way in which my perspective has normative force for me; it would not show, in and of itself, that your perspective has normative force for me. This is so because, while the recognition of equality necessitates consistency, this consistency can be attained by recognizing that each perspective has normative force for its author; it does not necessitate my engagement with your perspective. 

This is where other-oriented sympathy, as the proper form of sympathy, has a crucial role to play: the impartial spectator requires me to see the situation from your perspective rather than my own; it requires me, that is, to try to occupy your perspective, and not merely recognize it. 

This requirement to occupy someone else’s perspective does not come about ex nihilo, but rather builds on the aforementioned third component, namely, the desire for mutual sympathy with others. According to Smith, the desire for mutual sympathy leads to a process in which we constantly imagine being in the other person’s situation and augment our sympathy so as to match the experiences of the actor; this process, in turn, leads people to construct a rudimentary internal spectator, albeit one that is merely averaged out from the perspectives of the people they have encountered. When the impartial spectator is constructed, he reaffirms the sympathetic efforts to see the situation from others’ perspectives, giving normative authority to other-oriented sympathy.

The combination of being required to both recognize and occupy your perspective means that your perspective has normative force for me in the following way. Insofar as I value my perspective as a unique perspective, I am required to engage with your perspective and consider it from the inside—I am required to attempt to sympathize with you in your person and character in the same way in which I, qua spectator, sympathize with myself, qua actor, in my own person and character—while also recognizing that your perspective has equal standing to my own.

In engaging with your perspective in this way, I confront a perspective that is equal in standing to, but different in content from, mine; it demands a rational response from me and thereby becomes normative for me. The proposed Smithian account therefore shows that my valuing my own humanity, my unique perspective, entails my valuing your humanity, your unique perspective.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4662/.

References

  • Debes, Remy (2012). “Adam Smith on Dignity and Equality”. British Journal for the History of Philosophy, 20(1), 109–40.
  • Fleischacker, Samuel (2019). Being Me Being You: Adam Smith & Empathy. University of Chicago Press.
  • Korsgaard, Christine M. (1996). The Sources of Normativity. Onora O’Neill (Ed.). Cambridge University Press.
  • Nagel, Thomas (1986). The View from Nowhere. Oxford University Press.
  • Smith, Adam (1976). The Theory of Moral Sentiments [TMS]. D. D. Raphael and A. L. Macfie (Eds.). Liberty Fund.
  • Stueber, Karsten R. (2017). “Smithian Constructivism: Elucidating the Reality of the Normative Domain”. In Remy Debes and Karsten R. Stueber (Eds.), Ethical Sentimentalism: New Perspectives (192–209). Cambridge University Press.

About the author

Nir Ben-Moshe is Associate Professor in the Department of Philosophy and Health Innovation Professor in the Carle Illinois College of Medicine at the University of Illinois Urbana-Champaign. His research falls primarily into two areas. The first area lies at the intersection of contemporary moral philosophy and 18th-century moral philosophy. The second area is biomedical ethics. He is currently working on a book entitled Idealization and the Moral Point of View: An Adam Smithian Account of Moral Reasons.

Craig Agule – “Defending Elective Forgiveness”

In this post, Craig Agule discusses his article recently published in Ergo. The full-length version of Craig’s article can be found here.

Self-portrait of Frida Kahlo with a calm monkey, an angry black cat, and a necklace of thorns
“Self-Portrait With Thorn Necklace and Hummingbird” (1940) Frida Kahlo

Not all that long ago, I got angry at someone close to me. They had slighted me; it was not a terrible wrong, but it was enough to be angry (or, at least, I was angry, and several confidants told me that my reaction was reasonable). This person deserved my resentment, and I righteously felt it. But as time passed, and as our relationship continued, I found myself wondering what to do with my anger:  should I hold on to it, or should I forgive this person, letting my anger go? 

I kept coming back to two related puzzles. First, what sort of reasons would support or oppose my forgiving this person? For example, should I be thinking about whether they had taken responsibility? This person had acted poorly and never apologized. Or should I rather be thinking about the consequences of being angry? My anger was both unproductive and disruptive. Were those downsides of anger enough reason to forgive the unrepentant wrongdoer?

I came to see that there was good reason to forgive, but this brought me to the second puzzle:  If it was wise and prudent to forgive, then wouldn’t holding on to my anger be unwise and imprudent? The thought irked me.  Even if it would be reasonable to forgive, this person deserved my anger, and I was entitled to be angry. I was allowed to forgive, I thought, but not required to forgive.

I was irked because I suspected a tension in my thinking about forgiveness. On the one hand, I take forgiveness to be principled, in that we can forgive for reasons and we can offer others reason to forgive. On the other hand, I take forgiveness to be elective, such that, at least in many cases, it is acceptable both to forgive and to withhold forgiveness.

One way to defuse the tension is to pick between these two features of forgiveness, and so a number of philosophers defend principled forgiveness, deflating or abandoning electivity. We may identify reasons to forgive by thinking about the function of anger and blame. Perhaps we can say, with Miranda Fricker, that the point of blame is to bring the wrongdoer and the blamer into an aligned moral understanding. Or perhaps we can say, with Pamela Hieronymi, that the point of blame is to protest a past action that persists as a threat. If the point of blame is something like this, then we might have good reason to forgive in cases where blame has become pointless. If, for example, the wrongdoer has earnestly apologized, that apology might be adequate evidence that the wrongdoer has come to the right moral understanding and is therefore unlikely to repeat their wrong.

This can help us organize our thinking about whether and when to forgive. We think about the point of our anger, and we think about whether anger remains useful. Yet despite this advantage, this way of thinking also threatens the electivity of forgiveness. According to this framework, withholding forgiveness from a repentant wrongdoer might be irrational and even morally inappropriate.

In this paper, I argue that forgiveness is both principled and elective by looking closely at the nature of blame. Reactions like blame, I claim, are in the business of marking and making significance. When we blame someone, we prioritize their culpability and wrongdoing. This affects how we perceive and treat the person. When we forgive, our priorities change. The person’s wrongdoing and culpability are no longer quite as significant for our relationship.

Noticing the role of priorities at the heart of both blame and forgiveness helps to explain why forgiveness is elective. Our priorities are largely (although not entirely!) up to us, given that we have great freedom to settle for ourselves what sort of life we want to lead. Because forgiveness is largely a matter of our changing priorities, whether we should forgive has as much to do with who we are and want to be as it has to do with external facts, such as whether the wrongdoer has apologized. This is a deep sort of electivity!

At the same time, forgiveness remains principled. We might, for example, explain our forgiveness by reference to the wrongdoer’s apology. That apology might have been particularly important to us, or it might have causally prompted us to revisit our own priorities. The apology, then, provides an explanation for our coming to forgive, even if it does not compel forgiveness.

Thinking of forgiveness in terms of priorities helps us to reframe our thinking about forgiveness. For instance, it enables us to make sense of both conditional and unconditional forgiveness: sometimes we forgive because the things that we care about in the world have changed, and sometimes we forgive because our cares themselves have changed.

Thinking in terms of priorities also helps us to understand why forgiveness is not, normatively, entirely up to us. Although we have tremendous leeway to set our own priorities, there are some legitimate demands others can make on us regarding our priorities, often grounded in our relationships and interactions. Thus, there might well be some cases, albeit probably rare, where failing to forgive is blameworthy!

More generally, my defense of truly elective forgiveness pushes us to look inward in thinking about whether we should hold on to anger. Of course, we should think about the wrongdoer’s wrongs and culpability. Yet we should also keep in mind that holding on to our anger is itself a way of prioritizing, and so it is also important to think about how our anger fits into what we care about and should care about.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4647/.

About the author

Craig K. Agule is Assistant Professor of Philosophy at Rutgers University–Camden. He is interested in philosophy of law and moral psychology, particularly issues concerning moral and legal responsibility and the normative conditions of blame and of punishment.

Alexandre Billon – “The Sense of Existence”

In this post, Alexandre Billon discusses his article recently published in Ergo. The full-length version of Alexandre’s article can be found here.

Bathers at Asnières (1884) Georges Seurat © The National Gallery

This chair, the moon, your toe, this hedgehog, and I, we all exist. But what does it mean for these things to exist? What does it mean for them to be real? 

These questions may seem among the most abstract and sublime in all of philosophy, and certain professors, who dominated the French University when I was an undergraduate, elevated them to the status of disquieting metaphysical idols, burning up the gaze of the crowd while slowly revealing themselves, in the thick mists of the Black Forest, to the wise.

A simpler way of approaching these questions is to rely on the way existence spontaneously appears to us — let us call this the “sense of existence”. If I want to know what’s in front of me, I appeal to the verdict of my experience. Think, for example, of the proponents of the A-theory of time, according to which the present is a distinguished moment, so to speak, at the center of time. At the heart of their view is not a complex scientific theory, but rather the way the present appears to us. 

The phenomenological tradition is famous for having sought to elucidate the nature of existence on the basis of the sense of existence. The problem is that, despite what a cursory glance might suggest, the phenomenologists don’t agree with each other at all. There is, for example, less common ground between Martin Heidegger’s and Michel Henry’s theses on this topic than between René Descartes’ and David Hume’s theses on the self.

This disagreement is not surprising. Look at a table in front of you and question your sense of the existence of this table. My expectation is that you will conclude that it is not obvious that the existence of the table, as opposed to the table itself, appears to you in any way whatsoever. The sense of existence, if it is there, is elusive, and many philosophers have abandoned the idea of relying on experience in order to investigate existence.

I think this abandonment is premature; there are ways of clarifying the sense of existence. To do this, we first need to be clear about various competing answers. This will ensure that we don’t passively accept the first proposal that comes along. Then, we need to use psychopathology to overcome some of the limitations of introspective analysis.

Let us first consider deflationary accounts of the sense of existence, which hold, with David Hume and Immanuel Kant, that our sense of the existence of a particular is nothing over and above our sense of that particular. For instance, they hold that, when we sense a cat, we don’t have an impression of the existence of the cat over and above our impression of the cat.

In contrast to deflationary accounts, there are several interesting non-deflationary theories. These can be traced back to the seminal work of the Encyclopedists (Turgot) and Ideologists (Condillac, Destutt de Tracy, Maine de Biran). They all grant that there is a kind of impression of existence, but they disagree about the content of this impression, which can be:

  • an impression of the resistance of the real thing (Maine de Biran, Olivier Massin, Frédérique de Vignemont);
  • an impression of the spatial depth of the real thing (Edmund Husserl);
  • an impression that the real thing is an object of possible action, or “affordance” (Pierre Janet, Henri Bergson);
  • an impression that the real thing is temporally present (Henri Bergson); or, finally,
  • an impression that the real thing is in direct contact with us, or that we are acquainted with it (Turgot, Mohan Matthen, perhaps Jérôme Dokic).

Which of these answers is best?

To evaluate these theories, we can try to determine why these impressions (of resistance, of depth, of being temporally present, etc.) merit the title of impressions of reality, whereas impressions of, say, redness or circularity do not.

The theorist of resistance can, for example, answer this question by invoking the idea that to exist is to have causal powers and that resistance is the mark of causal powers. The advocate of the Husserlian theory of depth can say that perceiving a thing as having depth is perceiving that it has hidden aspects, which do not reduce to the aspects we perceive, and that this is precisely what forms our sense of the reality of that thing. Similarly, the theory of temporally present character can be justified by claiming that the past and the future are unreal, whereas to be real, to exist, is to be now.

The acquaintance theorist, however, seems unable to produce a plausible answer. Unless they accept a kind of solipsistic idealism according to which what exists is what appears to them directly, it is difficult to see why such an appearance could be the mark of reality.

How else may we adjudicate which of these theories is best? 

I suggest we look at a fairly common pathology: derealization (in the DSM-5-TR, ‘depersonalization/derealization disorder’). Patients suffering from derealization seem to perceive the world exactly like us, except for existence. They see objects, their shapes and colors, but these objects seem unreal to them and, in extreme cases, literally non-existent.

This pathology is classically analyzed as involving an experiential gap. This suggests that deflationist answers are wrong: normally, there is in our experience something like an impression of existence in addition to our impressions of objects.

Moreover, derealization patients correctly perceive the resistance of things and, although they sometimes have disorders in the perception of the present, this is not always the case. This rules out the theory of resistance and the theory of the present. 

In addition, by analyzing the subjective reports of patients, we find that they have no sensorimotor problems. On this basis, I argue that the theories of depth and affordances should also be ruled out.

How to characterize our sense of existence, then? 

Karl Jaspers claims that this is a primitive impression, which we have no means of describing other than as an impression… of reality. I do not agree. There is, I think, a theory that is both plausible and compatible with the fact that people affected by derealization have normal sensorimotor abilities. This is the theory according to which our sense of existence is a sense of substantiality. 

According to this theory, sensing that an object really exists is, roughly, sensing that it does not reduce to a bundle of properties but rather is a substrate, which carries and unifies these properties. This theory explains both why the reality of an object is normally perceived and why this perception has no sensorimotor consequences.

David Chalmers briefly considers this theory in an article in the New York Times. According to him, however, the theory is undermined by modern science.

Quantum wave functions with indeterminate values? That seems as ethereal and unsubstantial as virtual reality. But hey! We’re used to it. 

If the structuralists – who claim that nothing has a real substrate beyond structure – or the digitalists – who claim that we live in a simulation and therefore nothing has a real substrate beyond digital structure – were right, then our sense of existence would indeed be massively erroneous. This would imply that patients who suffer from derealization see the world better than us. 

However, pace Chalmers, modern scientific evidence per se does not compel us to endorse structuralism or digitalism, and if we reject both doctrines, we might take our sense of existence as a good guide for the metaphysics of reality. This sounds like a perfectly viable option to me. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/3593/.

About the author

Alexandre Billon is Associate Professor of Philosophy at the University of Lille. Before that, he was a Postdoctoral Fellow at the Jean Nicod Institute. In his main works, he draws on the study of psychopathology to better understand the mind and the origin of our metaphysical intuitions. 

Mario Hubert and Federica Malfatti – “Towards Ideal Understanding”

In this post,  Mario Hubert and Federica Isabella Malfatti discuss their article recently published in Ergo. The full-length version of their article can be found here.

“Sophia Kramskaya Reading” (1863) Ivan Kramskoi

If humans were omniscient, there would be no epistemology, or at least it would be pretty boring. What makes epistemology such a rich and often controversial endeavor are the limits of our understanding and the breadth of our own ignorance.

The world is like a large dark cave, and we are only equipped with certain tools (such as cameras, flashlights, or torches) to make particular spots inside the cave visible. For example, some flashlights create a wide light-cone with low intensity; others create a narrow light-cone with high intensity; some cameras help us to see infrared light to recognize warm objects, etc. From the snippets made visible by these tools, we may construct the inner structure of the cave.

The burden of non-omniscient creatures is to find appropriate tools to increase our understanding of the world and to identify and acknowledge the upper bound of what we can expect to understand. We try to do so in our article “Towards Ideal Understanding”, where we also identify five such tools: five criteria that can guide us to the highest form of understanding we can expect.

Imagine the best conceivable theory for some phenomenon. What would this theory be like? According to most philosophers of science and epistemologists, it would be:

  1. intelligible, i.e. easily applied to reality, and
  2. sufficiently true, i.e. sufficiently accurate about the nature and structure of the domain of reality for which it is supposed to account.

Our stance towards intelligibility and sufficient truth is largely uncontroversial, apart from our demand that we need more. What else does a scientific theory need to provide to support ideal understanding of reality? We think it also needs to fulfill the following three criteria:

  1. sufficient representational accuracy,
  2. reasonable endorsement, and
  3. fit.

The first criterion we introduce describes the relation between a theory and the world, while the other two describe the relation between the theory and the scientist.

We think that the importance of representational accuracy is not much appreciated in the literature (a notable exception is Wilkenfeld 2017). Some types of explanation aim to represent the inner structure of the world. For example, mechanistic explanations explain a phenomenon by breaking it up into (often unobservable) parts, whose interactions generate the phenomenon. But whether you believe in the postulated unobservable entities and processes depends on your stance in the realism-antirealism debate. We think, however, that even an anti-realist should agree that mechanisms can increase your understanding (see also Colombo et al. 2015). In this way, representational accuracy can be at least regarded as a criterion for the pragmatic aspect of truth-seeking. 

How a scientist relates to a scientific theory also matters for a deeper form of understanding. Our next two criteria take care of this relation. Reasonable endorsement describes the attitude of a scientist toward alternative theories such that the commitment to a theory must be grounded in good reasons. Fit is instead a coherence criterion, and it describes how a theory fits into the intellectual background of a scientist.

For example, it might happen that we are able to successfully use a theory (fulfilling the intelligibility criterion) but still find the theory strange or puzzling. This is not an ideal situation, as we argue in the paper. Albert Einstein’s attitude towards Quantum Mechanics exemplifies such a case. He, an architect of Quantum Mechanics, remained puzzled about quantum non-locality throughout his life to the point that he kept producing thought-experiments to emphasize the incompleteness of the theory. Thus, we argue that a theory that provides a scientist with the highest conceivable degree of understanding is one that does not clash with, but rather fits well into the scientist’s intellectual background.

The five criteria we discuss are probably not the whole story about ideal understanding, and there might be further criteria to consider. We regard the above ones as necessary, though not sufficient.

An objector might complain: If you acknowledge that humans are not omniscient, then why do you introduce ideal understanding, which seems like a close cousin of omniscience? If you can reach this ideal, then it is not an ideal. But if you cannot reach it, then why is it useful?

Similar remarks have been raised in political philosophy about the ideal state (Barrett 2023). Our response is sympathetic to Aristotle, who introduces three ideal societies even though they cannot be established in the world. These ideals are exemplars that spur us to strive for improvement, and they also serve as reference points for recognizing how much we still do not understand. Furthermore, this methodology has been the standard for much of the history of epistemology (Pasnau 2018). Sometimes certain traditions need to be overcome, but keeping and aspiring to an ideal (even if we can never reach it) seems not to be one of them… at least to us.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4651/.

References

  • Barrett, J. (2023). “Deviating from the Ideal”. Philosophy and Phenomenological Research 107(1): 31–52.
  • Colombo, M., Hartmann, S., and van Iersel, R. (2015). “Models, Mechanisms, and Coherence”. The British Journal for the Philosophy of Science 66(1): 181–212.
  • Pasnau, R. (2018). After Certainty: A History of Our Epistemic Ideals and Illusions. Oxford University Press.
  • Wilkenfeld, D. A. (2017). “MUDdy Understanding”. Synthese, 194(4): 1-21.

About the authors

Mario Hubert is Assistant Professor of Philosophy at The American University in Cairo. From 2019 to 2022, he was the Howard E. and Susanne C. Jessen Postdoctoral Instructor in Philosophy of Physics at the California Institute of Technology. His research combines the fields of philosophy of physics, philosophy of science, metaphysics, and epistemology. His article When Fields Are Not Degrees of Freedom (co-written with Vera Hartenstein) received an Honourable Mention in the 2021 BJPS Popper Prize Competition.

Federica Isabella Malfatti is Assistant Professor at the Department of Philosophy of the University of Innsbruck. She studied Philosophy at the Universities of Pavia, Mainz, and Heidelberg. She was a visiting fellow at the University of Cologne and spent research periods at the Harvard Graduate School of Education and UCLA. Her work lies at the intersection between epistemology and philosophy of science. She is the leader and primary investigator of TrAU!, a project funded by the Tyrolean Government, which aims at exploring the relation between trust, autonomy, and understanding.

Brigitte Everett, Andrew J. Latham and Kristie Miller  – “Locating Temporal Passage in a Block World”

In this post, Brigitte Everett, Andrew J. Latham and Kristie Miller discuss the article they recently published in Ergo. The full-length version of their article can be found here.

“Dynamism of a Cyclist” (1913) Umberto Boccioni

Imagine a universe where a single set of events exists. Past, present, and future events all exist, and they are all equally real—the extinction of the dinosaurs, the birth of a baby, the creation of a sentient robot. The sum total of reality never grows or shrinks, so the totality of events that exist never changes. We may call this a non-dynamical universe. Does time pass in such a world?

If your answer to the above question is “no”, then perhaps you think that time passes only in a dynamical universe.

A dynamist is someone who thinks that there is an objective present time and that which time is present constantly changes. Many dynamists think that time only passes in dynamical worlds (Smith 1994, Craig 2000, Schlesinger 1994). Perhaps more surprisingly, many non-dynamists—those who deny that there is an objective, constantly changing present time—have also traditionally held that time does not pass in non-dynamical worlds.

However, recently some non-dynamists have argued that in our world there is anemic temporal passage, namely, very roughly, the succession of events that occurs in a non-dynamical world (Deng 2013, Deng 2019, Bardon 2013, Skow 2015, Leininger 2018, Leininger 2021). These theorists argue that anemic temporal passage deserves the name “temporal passage”. One way of interpreting this claim is as the claim that anemic passage satisfies our ordinary, folk concept of temporal passage.

Viewed in this way, we can see a dispute between, on the one hand, those who think that anemic temporal passage is not temporal passage at all, because it does not satisfy our folk concept of temporal passage, and, on the other hand, those who think it is temporal passage, because it does. 

We sought to determine whether our folk concept of temporal passage is a concept of something that is essentially dynamical; that is, whether we have a folk concept of temporal passage that is only satisfied in dynamical worlds, or whether something that exists in non-dynamical worlds, such as anemic passage, can satisfy that concept. 

You might wonder why any of this matters. One reason is that the non-dynamical view of time has often been accused of being highly revisionary. It is often claimed to be a view on which what seem like platitudes turn out to be false. For instance, you might think it’s platitudinous that time passes, and yet, it is argued, if a non-dynamical view of time is true, then this platitude turns out to be false. So, if our world were indeed that way, it would turn out to be very different from how we take it to be.

To determine whether our folk concept of temporal passage would be satisfied in a non-dynamical world, we undertook several empirical studies that probe people’s concept of temporal passage. 

We found that, overall, participants judged that time passes in a non-dynamical universe, when our world was stipulated to be non-dynamical. That is, a majority of participants made this judgement. In particular, we found that a majority of people who in fact think that our world is non-dynamical, judge that there is temporal passage in it. As for people who in fact think that our world is most like a moving spotlight world, we found that they judge that, were our world non-dynamical, it would nevertheless contain temporal passage. Interestingly, though, with regards to people who think that either presentism or the growing block theory is most likely true of our world, we obtained a different result: they did not think that our world would contain temporal passage, were it non-dynamical. 

In a second experiment we asked participants to read a vignette claiming that “time flows or flies or marches, years roll, hours pass… time flows like a river” and other vivid descriptions of passage, and then we asked them to state how likely it is that the description is true of a dynamical vs. a non-dynamical world.  We found that participants judged that the description is equally likely to be true of a non-dynamical world as it is of a dynamical world. 

In the last experiment we probed whether people think that time passage is mind-dependent. Overall, we found that participants judged that time passes regardless of whether there are any minds to experience its passing or not.

Our results indicate, first, that the folk concept of temporal passage can be satisfied in a non-dynamical world, and second, that it is not a concept of something essentially mind-dependent. This suggests that non-dynamists should not concede that theirs is a view on which, in some ordinary sense, time fails to pass. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4639/.

References

  • Bardon, A. (2013). A Brief History of the Philosophy of Time. Oxford University Press.
  • Craig, W. L. (2000). The Tensed Theory of Time: A Critical Examination. Kluwer Academic.
  • Deng, N. (2013). “Our Experience of Passage on the B-Theory”. Erkenntnis 78(4): 713-726.
  • Deng, N. (2019). “One Thing After Another: Why the Passage of Time is Not an Illusion”. In A. Bardon, V. Arstila, S. Power & A. Vatakis (eds.) The Illusions of Time: Philosophical and Psychological Essays on Timing and Time Perception, pp. 3-15. Palgrave Macmillan.
  • Leininger, L. (2018). “Objective Becoming: In Search of A-ness”. Analysis, 78(1): 108-117.  
  • Leininger, L. (2021). “Temporal B-Coming: Passage Without Presentness”. Australasian Journal of Philosophy, 99(1): 1-17.
  • Schlesinger, G. (1994). “Temporal Becoming”. In L. N. Oaklander and Q. Smith (eds.) The New Theory of Time, pp. 214–220. Yale University Press.
  • Skow, B. (2015). Objective Becoming. Oxford University Press.
  • Smith, Q. (1994). “Introduction: The Old and New Tenseless Theory of Time”. In L. N. Oaklander and Q. Smith (eds.) The New Theory of Time, pp. 17–22. Yale University Press.

About the authors

Brigitte Everett is a doctoral student in the Department of Philosophy at the University of Sydney. Her research interests focus on the philosophy of time.

Andrew J. Latham is an AIAS-PIREAU Fellow at the Aarhus Institute of Advanced Studies and Postdoctoral Researcher in the Department of Philosophy and History of Ideas at Aarhus University. He works on topics in philosophy of mind, metaphysics (especially free will), experimental philosophy and cognitive neuroscience.

Kristie Miller is Professor of Philosophy and Director of the Centre for Time at the University of Sydney. She writes on the nature of time, temporal experience, and persistence, and she also undertakes empirical work in these areas. At the moment, she is mostly focused on the question of whether, assuming we live in a four-dimensional block world, things seem to us just as they are. She has published widely in these areas, including three recent books: “Out of Time” (OUP 2022), “Persistence” (CUP 2022), and “Does Tomorrow Exist?” (Routledge 2023). She has a new book underway on the nature of experience in a block world, which hopefully will be completed by the end of 2024.

Bert Baumgaertner and Charles Lassiter – “Convergence and Shared Reflective Equilibrium”

In this post, Bert Baumgaertner and Charles Lassiter discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Photo of two men looking down on the train tracks from a diverging bridge.
“Quai Saint-Bernard, Paris” (1932) Henri Cartier-Bresson

Imagine you’re convinced that you should pull the lever to divert the trolley because it’s better to save more lives. But suppose you find the thought of pushing the Fat Man off the bridge too ghoulish to consider seriously. You have a few options to resolve the tension:

  1. you might revise your principle that saving more lives is always better;
  2. you could revise your intuition about the Fat Man case;
  3. you could postpone the thought experiment until you get clearer on your principles;
  4. you could identify how the Fat Man case is different from the original one of the lone engineer on the trolley track.

These are our options when we are engaging in reflective equilibrium. We’re trying to square our principles and judgments about particular cases, adjusting each until a satisfactory equilibrium is reached.

Now imagine there’s a group of us, all trying to arrive at an equilibrium but without talking to one another. Will we all converge on the same equilibrium?

Consider, for instance, two people—Tweedledee and Tweedledum. They are both thinking about what to do in the many variations of the Trolley Problem. For each variation, Tweedledee and Tweedledum might have a hunch or they might not. They might share hunches or they might not. They might consider variations in the same order or they might not. They might start with the same initial thoughts about the problem or they might not. They might have the same disposition for relieving the tension or they might not.

Just this brief gloss suggests that there are a lot of places where Tweedledee and Tweedledum might diverge. But we didn’t just want suggestive considerations; we wanted to get more specific about the processes involved and about how likely divergence or convergence would be.

To this end, we imagined an idealized version of the process. First, each agent begins with a rule of thumb, intuitions about cases, and a disposition for how to navigate any tensions that arise. Each agent considers a case at a time. “Considering a case” means comparing the case under discussion to the paradigm cases sanctioned by the rule. If the case under consideration is similar enough to the paradigm cases, the agent accepts the case, which amounts to saying, “this situation falls into the extension of my rule.” Sometimes, an agent might have an intuition that the case falls into the extension of the rule, but it’s not close enough to the paradigm cases. This is when our agents deliberate, using one of the four strategies mentioned above.

In order to get a sense of how likely it is that Tweedledee and Tweedledum would converge, we needed to systematically explore the space of the possible ways in which the process of reflective equilibrium could go. So, we built a computer model of it. As we built the model, we purposely made choices we thought would favor the success of a group of agents reaching a shared equilibrium. By doing so, we have a kind of “best case” scenario. Adding in real-world complications would make reaching a shared equilibrium only harder, not easier.

An example or story that is used for consideration, like a particular Trolley problem, is made up of a set of features. Other versions have some of the same features but differ on others. So we imagined there is a string of yes/no bits, like YYNY, where Y in positions 1, 2, and 4 means the case has that respective feature, while N in position 3 means the case does not. Of course examples used in real debates are much more complicated and nuanced, but having only four possible features should only make it easier to reach agreement. Cases have labels representing intuitions. A label of “IA” means a person has an intuition to accept the case as an instance of a principle, “IR” means to reject it, and “NI” means they have no intuition about it. Finally, a principle consists of a “center” case and a similarity threshold (how many bit values can differ?) that defines the extension of cases that fall under the principle. 

We then represented the process of reflective equilibrium as a kind of negotiation between principles and intuitions by checking whether the relevant case of the intuition is or isn’t a member of the extension of the principle. To be sure, the real world is much more complicated, but the simplicity of our model makes it easier to see what sorts of things can get in the way of reaching shared equilibrium.
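A minimal sketch of this kind of model might look as follows. The sketch is purely illustrative: the names (Case, Principle, Agent) and the particular tension-resolution move shown (widening or narrowing the similarity threshold) are stand-ins for one of the four strategies listed above, and the actual model differs in its details.

```python
from dataclasses import dataclass

# A case is a tuple of yes/no feature values, e.g. ("Y", "Y", "N", "Y").
Case = tuple[str, ...]

def distance(a: Case, b: Case) -> int:
    """Number of feature positions on which two cases differ."""
    return sum(x != y for x, y in zip(a, b))

@dataclass
class Principle:
    center: Case    # the paradigm case anchoring the rule of thumb
    threshold: int  # how many bit values may differ and still fall under the rule

    def covers(self, case: Case) -> bool:
        return distance(self.center, case) <= self.threshold

@dataclass
class Agent:
    principle: Principle
    intuitions: dict  # maps a case to "IA" (accept), "IR" (reject), or "NI" (none)

    def consider(self, case: Case) -> None:
        intuition = self.intuitions.get(case, "NI")
        if intuition == "IA" and not self.principle.covers(case):
            # Tension: the intuition accepts the case but the principle excludes it.
            # One resolution strategy (revising the principle) is to widen the
            # threshold just enough to admit the case; an agent could instead
            # revise the intuition, postpone, or distinguish the case.
            self.principle.threshold = distance(self.principle.center, case)
        elif intuition == "IR" and self.principle.covers(case):
            # Reverse tension: narrow the rule so the case no longer falls under it.
            self.principle.threshold = distance(self.principle.center, case) - 1

# Two agents starting from the same principle can still end up with different
# thresholds (different equilibria) if they meet cases in a different order or
# resolve tensions with different strategies.
dee = Agent(Principle(center=("Y", "Y", "Y", "Y"), threshold=1),
            intuitions={("Y", "N", "N", "Y"): "IA"})
dee.consider(("Y", "N", "N", "Y"))
print(dee.principle.threshold)  # widened from 1 to 2
```

Even in this stripped-down version, the order in which cases arrive and the choice of resolution strategy are enough to pull two initially identical agents toward different equilibria.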

What we found is that it is very hard to converge on a single interpersonal equilibrium. Even in the best case scenario, with very charitable interpretations of some “plausible” assumptions, we don’t see convergence.

Analysts of the process of reflective equilibrium are right that interpersonal convergence might not happen if people have different starting places. But they underestimate how hard reaching convergence is even when Tweedledee and Tweedledum start from the same place. The reason is that, even once we rule out all of the implausible decision points, there remain so many plausible decision points at which Tweedledee and Tweedledum can diverge. They might both change their rule of thumb, for instance, but change it in slightly different ways. Small differences, particularly early in the process, lead to substantial divergence.

Why does this matter? However challenging convergence is in our model, in the real world we find it all over the place, such as philosophers’ shared intuitions about Gettier cases, supposedly arrived at from our La-Z-Boys. On our representation of reflective equilibrium, such convergence is highly unlikely, which suggests that we should look elsewhere for an explanation. One alternative explanation we suggest (and explore in other work) is the idea of “precedent”: the information one has about the commitments and rules of others, which might serve as a guide in one’s own process of deliberation.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4654/.

About the authors

Bert Baumgaertner grew up in Ontario, Canada, completing his undergraduate degree at Wilfrid Laurier University. He moved to the sunny side of the continent to do his graduate studies at University of California, Davis. In 2013 he moved to Idaho to start his professional career as a philosophy professor, where he concurrently developed a passion for trail running and through-hiking in the literal Wilderness areas of the Pacific Northwest. He is now Associate Professor of Philosophy at University of Idaho. He considers himself a computational philosopher whose research draws from philosophy and the cognitive and social sciences. He uses agent-based models to address issues in social epistemology. 

Charles Lassiter was born in Washington DC and grew up in Virginia, later moving to New Jersey and New York for undergraduate and graduate studies. In 2013, he left the safety and familiarity of the East Coast to move to the comparative wilderness of the Pacific Northwest for a job at Gonzaga University, where he is currently Associate Professor of Philosophy and Director of the Center for the Applied Humanities. His research focuses on issues of enculturation and embodiment (broadly construed) for an understanding of mind and judgment (likewise broadly construed). He spends a lot of time combing through large datasets of cultural values and attitudes relevant to social epistemology.

Posted on

Igor Douven, Frank Hindriks, and Sylvia Wenmackers – “Moral Bookkeeping”

In this post, Igor Douven, Frank Hindriks, and Sylvia Wenmackers discuss their article recently published in Ergo. The full-length version of their article can be found here.

“Allegory of Justice Punishing Injustice” (1737) Jean-Marc Nattier

Imagine a mayor who has to decide whether to build a bridge over a nearby river, connecting two parts of the city. He is informed that the construction project will also negatively affect the local wildlife. The mayor responds: “I don’t care about what will happen to some animals. I want to improve the flow of traffic.” So, he has the bridge built, and the populations of wild animals decline as a result of it.

This fictional mayor sounds like a proper movie villain: he knows that his actions will harm wild animals and he doesn’t even care! We expect that people reading this vignette will blame him for his actions. But how does their moral verdict change if the mayor’s project happens to have positive side-effects for the wildlife, to which he is equally indifferent? Would people praise him as much as they blamed him in the first case?

According to most philosophers, someone can be praiseworthy only if they intended to bring about the beneficial result. Yet many philosophers also think that someone can be blamed for the negative side-effects of their actions, even if they did not cause them intentionally. This presumed difference between the assignment of praise and blame is the Mens Rea Asymmetry. (Mens rea is Latin for ‘guilty mind’.) However, data about how people actually assign praise or blame to others does not support this hypothesis.

One source of evidence that runs counter to the hypothesis of the Mens Rea Asymmetry is Joshua Knobe’s influential paper from 2003, which can be seen as the birth of experimental philosophy. His results show, among other things, that respondents do assign praise to agents who bring about a beneficial but unintended side-effect. We used the structure of the vignette from Knobe’s study to produce similar scenarios, including the mayor deciding about a bridge in the above example.

In order to explain the observed violations of the praise/blame asymmetry, we formulated a new hypothesis. Our moral compositionality hypothesis assumes that people evaluate others by taking into account their intentions as well as the outcome of their actions to come to an overall assignment of praise (when the judgment is net positive) or blame (when net negative). In principle, the overall judgment could be a complicated function of the two separate aspects, but we focused on a very simple version of the compositionality hypothesis: people’s overall judgment of someone’s actions is equal to the sum of their judgment of the agent’s intention and of the outcome of the action. We call this the Moral Bookkeeping hypothesis.
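In its simplest form, then, the hypothesis is just the addition of two signed scores. Here is a minimal sketch with entirely hypothetical numbers, purely to illustrate the bookkeeping:

```python
def moral_bookkeeping(intention_score, outcome_score):
    """Simplest version of the hypothesis: overall judgment = intention + outcome.
    Positive totals read as praise, negative as blame (the scores are hypothetical)."""
    total = intention_score + outcome_score
    verdict = "praise" if total > 0 else "blame" if total < 0 else "neutral"
    return total, verdict

# The indifferent mayor: neutral intention (0), harmful outcome (-2) -> blame.
print(moral_bookkeeping(0, -2))   # (-2, 'blame')
# Same indifference, helpful outcome (+2) -> some praise, on this hypothesis.
print(moral_bookkeeping(0, 2))    # (2, 'praise')
```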

To put our hypothesis to the test, we asked nearly 300 participants to score how blameworthy or praiseworthy the mayor was for his decision and likewise for other agents in two similar scenarios. As already mentioned, we varied whether the potential side-effect of the decision was harmful or helpful. To study the respondents’ judgments of an agent’s intentions and outcomes separately, we included cases where the agent wasn’t informed about potential side-effects and where the potential side-effects didn’t occur after all. We also considered decision makers who were aware of potential side-effects, without knowing whether they would be positive, neutral, or negative.

As expected, we found further evidence against the Mens Rea Asymmetry. Our results also corroborated the Moral Bookkeeping hypothesis, including its counterintuitive prediction that respondents still assign praise or blame to decision makers who weren’t aware of potential side-effects. Moreover, participants assigned more praise than blame to decision makers who unintentionally brought about the respective positive or negative side-effect. This finding remains puzzling to us as well.

Finally, based on our data, more complicated versions of the general compositionality thesis cannot be ruled out either. We hope that this work will inspire further experiments to unravel how exactly we come to our moral verdicts about others.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4645/.

References

  • Knobe, J. (2003). “Intentional Action and Side-effects in Ordinary Language”. Analysis, 63(3): 190–194.

About the authors

Igor Douven is a CNRS Research Professor at the IHPST/Panthéon-Sorbonne University in Paris.

Frank Hindriks is Professor of Ethics, Social and Political Philosophy at the Faculty of Philosophy of the University of Groningen, the Netherlands.

Sylvia Wenmackers is a Research Professor in Philosophy of Science at KU Leuven, Belgium.

Posted on

Bryan Pickel and Brian Rabern – “Against Fregean Quantification”

In this post, Bryan Pickel and Brian Rabern discuss the article they recently published in Ergo. The full-length version of their article can be found here.

“Martwa natura” (1910) Witkacy

A central achievement of early analytic philosophy was the development of a formal language capable of representing the logic of quantifiers. It is widely accepted that the key advances emerged in the late nineteenth century with Gottlob Frege’s Begriffsschrift. According to Dummett,

“[Frege] resolved, for the first time in the whole history of logic, the problem which had foiled the most penetrating minds that had given their attention to the subject.” (Dummett 1973: 8)

However, the standard expression of this achievement came in the 1930s with Alfred Tarski, albeit with subtle and important adjustments. Tarski introduced a language that regiments quantified phrases found in natural or scientific languages, where the truth conditions of any sentence can be specified in terms of meanings assigned to simpler expressions from which it is derived.

Tarski’s framework serves as the lingua franca of analytic philosophy and allied disciplines, including foundational mathematics, computer science, and linguistic semantics. It forms the basis of the predicate logic conventionally taught in introductory logic courses – recognizable by its distinctive symbols such as inverted “A’s” and backward “E’s,” truth-functions, predicates, names, and variables.

This formalism proves indispensable for tasks such as expressing the Peano Axioms, elucidating the truth-conditional ambiguity of statements like “Every linguist saw a philosopher,” or articulating metaphysical relationships between parts and wholes. Additionally, its computationally more manageable fragments have found applications in semantic web technologies and artificial intelligence.

Yet, from the outset there was dissatisfaction with Tarski’s methods. To see where the dissatisfaction originates, first consider the non-quantified fragment of the language. For this fragment, the truth conditions of any complex sentence can be specified in terms of the truth conditions of its simpler sentences, and the truth conditions of any simple sentence, in turn, can be specified in terms of the referents of its parts. For example, the sentence ‘Hazel saw Annabel and Annabel waved’ is true if and only if its component sentences ‘Hazel saw Annabel’ and ‘Annabel waved’ are both true. ‘Hazel saw Annabel’ is true if the referents of ‘Hazel’ and ‘Annabel’ stand in the seeing relation. ‘Annabel waved’ is true if the referent of ‘Annabel’ waved. For this fragment, then, truth and reference can be considered central to semantic theory.

This feature can’t be maintained for the full language, however. To regiment quantifiers, Tarski introduced open sentences and variables, effectively displacing truth and reference with “satisfaction by an assignment” and “value under an assignment”. Consider for instance a sentence such as ‘Hazel saw someone who waved’. A broadly Tarskian analysis would be this: ‘there is an x such that: Hazel saw x and x waved’. For Tarski, variables do not refer absolutely, but only relative to an assignment. We can speak of the variable x as being assigned to different individuals: to Annabel or to Hazel. Similarly, an open sentence such as ‘Hazel saw x’ or ‘x waved’ is not true or false, but only true or false relative to an assignment of values to its variables.

This aspect of Tarski’s approach is the root cause of the dissatisfaction, yet it constitutes his distinctive method for resolving “the problem” – i.e., the problem of multiple generality that Frege had previously solved. Tarski used the additional structure to explain the truth conditions of multiply quantified sentences such as ‘Everyone saw someone who waved’, or ‘For every y, there is an x such that: y saw x and x waved’. The whole sentence is true if, for every assignment of a value to ‘y’, there is an assignment that keeps that value for ‘y’ and assigns some value to ‘x’ on which ‘y saw x’ and ‘x waved’ both come out true.
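To see the machinery in miniature, here is a small Python sketch of assignment-relative evaluation over a toy two-person domain; the individuals and the ‘saw’/‘waved’ facts are invented purely for illustration:

```python
# Toy Tarskian semantics: open sentences are evaluated relative to an assignment
# of values to variables, not absolutely (domain and facts are invented).

domain = {"Hazel", "Annabel"}
saw = {("Hazel", "Annabel")}      # pairs (seer, seen)
waved = {"Annabel"}               # who waved

def satisfies(assignment):
    """The open sentence 'y saw x and x waved', relative to an assignment."""
    return (assignment["y"], assignment["x"]) in saw and assignment["x"] in waved

def everyone_saw_someone_who_waved():
    """'For every y, there is an x such that: y saw x and x waved'.
    True iff every way of assigning a value to 'y' can be extended by some value
    for 'x' on which the open sentence is satisfied."""
    return all(
        any(satisfies({"y": y, "x": x}) for x in domain)
        for y in domain
    )

print(satisfies({"y": "Hazel", "x": "Annabel"}))   # True on this assignment
print(satisfies({"y": "Hazel", "x": "Hazel"}))     # False on this one
print(everyone_saw_someone_who_waved())            # False: Annabel saw no one who waved
```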

Tarski’s theory is formally elegant, but its foundational assumptions are disputed. This has prompted philosophers to revisit Frege’s earlier approach to quantification.

According to Frege, a “variable” is not even an expression of the language but instead a typographic aspect of a distributed quantifier sign. So Frege would think of a sentence such as  ‘there is an x such that: Hazel saw x and x waved’ as divisible into two parts:

  (i) there is an x such that: … x….
  (ii) Hazel saw … and … waved

Frege would say that expression (ii) is a predicate that is true or false of individuals depending on whether Hazel saw them and they waved. For Frege, this predicate is derived by starting with a full sentence such as ‘Hazel saw Annabel and Annabel waved’ and removing the name ‘Annabel’. In this way, Frege seems to give a semantics for quantification that more naturally extends the non-quantified portion of the language. As Evans says:

[T]he Fregean theory with its direct recursion on truth is very much simpler and smoother than the Tarskian alternative…. But its interest does not stem from this, but rather from examination at a more philosophical level. It seems to me that serious exception can be taken to the Tarskian theory on the ground that it loses sight of, or takes no account of, the centrality of sentences (and of truth) in the theory of meaning. (Evans 1977: 476)

In short: Frege did it first, and Frege did it better.
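Put in the same toy terms as before, the Fregean picture runs the recursion directly on truth: the complex predicate is something true or false of individuals, obtained by removing a name from a closed sentence, and the quantifier is a predicate of that predicate. A minimal sketch, again with invented facts and with no assignments to variables in sight:

```python
# Toy Fregean picture: no variables or assignments, just predicates and a
# second-level predicate playing the role of the quantifier (facts invented).

domain = {"Hazel", "Annabel"}
saw = {("Hazel", "Annabel")}
waved = {"Annabel"}

# Start from 'Hazel saw Annabel and Annabel waved' and remove the name 'Annabel':
def hazel_saw_and_waved(d):
    """The first-level predicate 'Hazel saw ... and ... waved', true or false of individuals."""
    return ("Hazel", d) in saw and d in waved

def something(predicate):
    """The existential quantifier as a predicate of predicates."""
    return any(predicate(d) for d in domain)

print(something(hazel_saw_and_waved))   # True: the predicate is true of Annabel
```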

Our paper “Against Fregean Quantification” takes a closer look at these claims. We identify three respects in which the Fregean approach has been held to make an advance on Tarski: it treats quantifiers as predicates of predicates, the basis of the recursion includes only names and predicates, and the complex predicates do not contain variable markers.

However, we show that in each case the Fregean approach must similarly abandon the centrality of truth and reference in its semantic theory. Most surprisingly, we show that rather than extending the semantics of the non-quantified portion of the language, the Fregean turns ordinary proper names into variable-like expressions. The Fregean approach thereby ends up as a typographic variant of the most radical of Tarskian views: variabilism, the view that names should be modeled as Tarskian variables.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2906/.

References

  • Dummett, Michael. (1973). Frege: Philosophy of Language. London: Gerald Duckworth.
  • Evans, Gareth. (1977). “Pronouns, Quantifiers, and Relative Clauses (I)”. Canadian Journal of Philosophy 7(3): 467–536.
  • Frege, Gottlob. (1879). Begriffsschrift: Eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a.d.S.
  • Tarski, Alfred. (1935). “The Concept of Truth in Formalized Languages”. In Logic, Semantics, Metamathematics (1956): 152–278. Clarendon Press.

About the authors

Bryan Pickel is Senior Lecturer in Philosophy at the University of Glasgow. He received his PhD from the University of Texas at Austin. His main areas of research are metaphysics, the philosophy of language, and the history of analytic philosophy.

Brian Rabern is Reader at the School of Philosophy, Psychology, and Language Sciences at the University of Edinburgh. Additionally, he serves as a software engineer at GraphFm. He received his PhD in Philosophy from the Australian National University. His main areas of research are the philosophy of language and logic.