Posted on

F. J. Elbert – “God and the Problem of Blameless Moral Ignorance”

Elohim is a Hebrew name for God. This picture illustrates the Book of Genesis. Adam is shown growing out of the earth, a piece of which Elohim holds in his left hand.
“Elohim Creating Adam” (1795) William Blake

In this post, F. J. Elbert discusses his article recently published in Ergo. The full-length version of the article can be found here.

The Abrahamic religions (Judaism, Christianity, and Islam) share more than just the belief that Abraham was an important prophet. They also hold in common the view that God is the perfectly good creator of the world who has designed it so that any gratuitous evil is of our choosing rather than God’s responsibility.

The origin story is the same in outline, and it is found in Genesis. God placed Adam and Eve in a garden free of evil. However, Adam and Eve knowingly and willingly disobeyed God. They introduced evil into the world by rebelling against their creator. God bore no fault in their fall. 

Suppose we grant that there is a creator. It does not follow that humans have an overriding moral obligation to praise and obey the being who created them. On the contrary, if the creator is responsible for a morally unsupportable evil, then it follows that the creator is not perfectly good. 

Consider the following origin story, which we can call “The Garden of Blameless Disobedience”. In this story, the creator gives Adam and Eve conflicting commands. Adam is told they can eat every fruit in the garden except apples, and Eve is told they can eat anything except strawberries. Suppose Adam eats strawberries, and Eve takes great delight in the occasional apple. Each disobeys a command the creator has given. However, assume each has an all-things-considered obligation, or one that trumps all other commitments, to obey the creator. In that case, the creator could not have a morally sufficient reason for giving them conflicting commands, because no moral good could possibly result. In the Garden of Eden, Adam and Eve are wholly responsible for introducing gratuitous evil into the world. In contrast, in the Garden of Blameless Disobedience, the creator is responsible for the evil and hence the creator is not God. 

There is a variant of “The Garden of Blameless Disobedience” in which Adam and Eve are also not culpable for introducing evil into the world. We can call it “The Garden of Blameless Confusion”. In it, the creator commands Eve not to eat strawberries, but the creator does not speak to Adam. However, Adam sincerely but mistakenly believes that God has commanded that the only fruit they cannot eat is apples. In this garden, the creator designs Adam so that, through no fault of his own, he does not reliably form beliefs about what the creator has commanded. He accepts some commands as originating from his creator when they do not. As a result, Adam and Eve quarrel unnecessarily about what fruit they can and cannot eat. Again, assuming they have a paramount or overriding obligation to obey their creator, Adam’s sincere but mistaken belief that they should not eat apples can serve no greater moral purpose; there cannot be a morally sufficient reason for doing what is all things considered morally wrong. In the Garden of Blameless Confusion, the creator does not deserve unsurpassed praise and unquestioning obedience, and therefore the creator is not God.

My argument in “God and the Problem of Blameless Moral Ignorance” is that our world is much more like the Garden of Blameless Disobedience or the Garden of Blameless Confusion than the Garden of Eden.

Any creator whom we have an overriding obligation to praise and obey cannot be responsible for or the cause of any of our wrongdoing. God cannot create a state of affairs in which we stumble into evil. But suppose there is a creator who is the architect of a world in which we sometimes blamelessly attribute false commands to her or him. In that case, that creator is responsible for the ensuing evil. Since it is morally better that we don’t disobey God, even unwittingly, or violate one of our fundamental moral obligations, it is not enough that we are not culpable when we do. That we are blameless does not exonerate the creator.

Some theists agree. They deny that God is responsible for our mistaken moral beliefs or for attributing commands to Him that he did not give.

Nonetheless, they also hold that every acceptance of a false command and every fundamental mistaken moral belief is due to sin. They believe God has given us a faculty, the “sensus divinitatis”, which, somewhat like a conscience, provides all who do not hate God with knowledge of His existence and basic demands. According to them, all false beliefs about our fundamental moral obligations and God’s commands originate in pride and a rebellious desire to direct one’s life rather than submit to God’s will. 

However, we have overwhelming evidence that this latter claim is false. While it is undoubtedly the case that human beings often knowingly and willingly do what is wrong, there are also many instances in which people do what is wrong while sincerely aiming at the good and fulfilling God’s will.

Consider the following example (I discuss more in the paper). Some theists believe God has commanded them to provide women with abortions under certain circumstances. Others think that God has forbidden abortion in every instance. Can this difference in belief always be attributed to a hatred of God? Surely not.

There are a host of cases in which sincere believers, roughly equal in charity and devotional practices, disagree about what our fundamental moral obligations are.

Given the existence of blameless moral ignorance, it is inconceivable that God exists. God cannot be responsible for evil which serves no greater moral purpose. Any creator who designs human beings so that they are blamelessly mistaken about what they most ought to do is a lesser god.

There cannot be a morally sufficient reason for either causing or allowing rational agents to do what is, all-things-considered, morally wrong. For the Creator of the world to be worthy of the highest praise and unquestioning obedience, the moral structure of the world must be good, and recognizably so.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2233/

About the author

F. J. Elbert received a Ph.D. in philosophy from Vanderbilt University. His research focuses on the implications of blameless fundamental moral disagreement in the fields of ethics, political philosophy, and philosophy of religion.


Eliran Haziza – “Assertion, Implicature, and Iterated Knowledge”

Picture of various circles in many sizes and colors, all enclosed within one big, starkly black circle.
“Circles in a Circle” (1923) Wassily Kandinsky

In this post, Eliran Haziza discusses his article recently published in Ergo. The full-length version of Eliran’s article can be found here.

It’s common sense that you shouldn’t say stuff you don’t know. I would seem to be violating some norm of speech if I were to tell you that it’s raining in Topeka when I don’t know it to be true. Philosophers have formulated this idea as the knowledge norm of assertion: speakers must assert only what they know.

Speech acts are governed by all sorts of norms. You shouldn’t yell, for example, and you shouldn’t speak offensively. But the idea is that the speech act of assertion is closely tied to the knowledge norm. Other norms apply to many other speech acts: it’s not only assertions that shouldn’t be yelled, but also questions, promises, greetings, and so on. The knowledge norm, in some sense, makes assertion the kind of speech act that it is.

Part of the reason for the knowledge norm has to do with what we communicate when we assert. When I tell you that it’s raining in Topeka, I make you believe, if you accept my words, that it’s raining in Topeka. It’s wrong to make you believe things I don’t know to be true, so it’s wrong to assert them.

However, I can get you to believe things not only by asserting but also by implying them. To take an example made famous by Paul Grice: suppose I sent you a letter of recommendation for a student, stating only that he has excellent handwriting and attends lectures regularly. You’d be right to infer that he isn’t a good student. I asserted no such thing, but I did imply it. If I don’t know that the student isn’t good, it would seem to be wrong to imply it, just as it would be wrong to assert it.

If this is right, then the knowledge norm of assertion is only part of the story of the epistemic requirements of assertion. It’s not just what we explicitly say that we must know, it’s also what we imply.

This is borne out by conversational practice. We’re often inclined to reply to suspicious assertions with “How do you know that?”. This is one of the reasons to think there is in fact a knowledge norm of assertion. We ask speakers how they know because they’re supposed to know, and because they’re not supposed to say things they don’t know.

The same kind of reply is often warranted not to what is said but to what is implied. Suppose we’re at a party, and you suggest we try a bottle of wine. I say “Sorry, but I don’t drink cheap wine.” It’s perfectly natural to reply “How do you know this wine is cheap?” I didn’t say that this wine was cheap, but I did clearly imply it, and it’s perfectly reasonable to hold me accountable not only to knowing that I don’t drink cheap wine, but also to knowing that this particular wine is cheap.

Implicature, or what is implied, may not appear to commit us to knowing it because implicatures often can be canceled. I’m not contradicting myself if I say in my recommendation letter that the student has excellent handwriting, attends lectures regularly, and is also a brilliant student. Nor is there any inconsistency in saying that I don’t drink cheap wine, and this particular wine isn’t cheap. Same words, but the addition prevents what would have been otherwise implied.

Nevertheless, once an implicature is made (and it’s not made when it’s canceled), it is expected to be known, and it violates a norm if it’s not. So it’s not only assertion that has a knowledge norm, but implicature as well: speakers must imply only what they know. This has an interesting and perhaps unexpected consequence: If there is a knowledge norm for both assertion and implicature, the KK thesis is true.

The KK thesis is the controversial claim that you know something only if you know that you know it. This is also known as the idea that knowledge is luminous.

Why would it be implied by the knowledge norms of assertion and implicature? If speakers must assert only what they know, then any assertion implies that the speaker knows it. In fact, this seems to be why it’s so natural to reply “How do you know?” The speaker implies that she knows, and we ask how. But if speakers must know not only what they assert but also what they imply, then they must assert only what they know that they know. This reasoning can be repeated: if speakers must assert only what they know that they know, then any assertion implies that the speaker knows that she knows it. The speaker must know what she implies. So she must assert only what she knows that she knows that she knows. And so on.

The result is that speakers must have indefinitely iterated knowledge that what they assert is true: they must know that they know that they know that they know …
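The regress can be written out schematically. Here is a rough sketch (my notation, not the article’s), writing K for “the speaker knows that”:

```latex
% K = "the speaker knows that"; the regress generated by the two norms
\begin{align*}
\text{assert } p &\;\Rightarrow\; \text{imply } Kp
  && \text{(knowledge norm of assertion)}\\
\text{imply } Kp &\;\Rightarrow\; \text{must } KKp
  && \text{(knowledge norm of implicature)}\\
\text{assert } p &\;\Rightarrow\; \text{imply } KKp \;\Rightarrow\; \text{must } KKKp\\
&\;\;\vdots\\
\text{assert } p &\;\Rightarrow\; \text{must } K^{n}p \quad \text{for every } n
\end{align*}
```

Each new step only reapplies the first two norms, which is why the regress needs no further premises.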

This might seem a ridiculously strict norm on assertion. How could anyone ever be in a position to assert anything?

The answer is that if the KK thesis is true, the iterated knowledge norm is the same as the knowledge norm: if knowing entails knowing that you know, then it also entails indefinitely iterated knowledge. So you satisfy the iterated knowledge norm simply by satisfying the knowledge norm. If we must know what we say and imply to be true, then knowledge is luminous.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2236/.

About the author

Eliran Haziza is a PhD candidate at the University of Toronto. He works mainly in the philosophy of language and epistemology, and his current research focuses on inquiry, questions, assertion, and implicature.


Cameron Buckner – “A Forward-Looking Theory of Content”

Self-portrait of Vincent Van Gogh from 1889.
“Self-portrait” (1889) Vincent van Gogh

In this post, Cameron Buckner discusses the article he recently published in Ergo. The full-length version of Cameron’s article can be found here.

As far as kinds of thing go, representations are awfully weird. They are things that by nature are about other things. Van Gogh’s self-portrait is about Van Gogh; and my memory of breakfast this morning is about some recently-consumed steel-cut oats.

The relationship between a representation and its target implicates history; part of what it is to be a portrait of Van Gogh is to have been crafted by Van Gogh to resemble his reflection in a mirror, and part of what it is to be the memory of my breakfast this morning is to be formed through perceptual interaction with my steel-cut oats.

Mere historical causation isn’t enough for aboutness, though; a broken window isn’t about the rock thrown through it. Aboutness thus also seems to implicate accuracy or truth evaluations. The painting can portray Van Gogh accurately or inaccurately; and if I misremember having muesli for breakfast this morning, then my memory is false. Representation thus also introduces the possibility of misrepresentation.

As if things weren’t already bad enough, we often worry about indeterminacy regarding a representation’s target. Suppose, for example, that Van Gogh’s portrait resembles both himself and his brother Theo, and we can’t decide who it portrays. Sometimes this can be settled by asking about explicit intentions; we can simply ask Van Gogh who he intended to paint. Unfortunately, explicit intentions fail to resolve the content of basic mental states like concepts, which are rarely formed through acts of explicit intent.

To paraphrase Douglas Adams, allowing the universe to contain a kind of thing whose very nature muddles together causal origins, accuracy, and indeterminacy in this way made a lot of people very angry and has widely been regarded as a bad move.

There was a period from 1980 to 1995, which I call the “heyday of work on mental content”, when it seemed like the best philosophical minds were working on these issues and would soon sort them out. Fodor, Millikan, Dretske, and Papineau were among a generation of “philosophical naturalists” who hoped that respectable scientific concepts like information and biological function would definitively resolve these tensions.

Information theory promised to ground causal origins and aboutness in the mathematical firmament of probability theory, and biological functions promised to harmonize historical origins, correctness, and the possibility of error using the respectable melodies of natural selection or associative learning.

Dretske, for example, held that associative learning bestows neural states with representational functions. By detecting correlations between bodily movements produced in response to external stimuli and rewards—such as the contingency between a rat’s pressing of a bar when a light is on and receipt of a food pellet reward—instrumental conditioning creates a link between a perceptual state triggered by the light and a motor state that controls bar-pressing movements, causing the rat to reliably press the bar more often in the future when the light is activated. Dretske says that in this case the neural state of detecting the light indicates that the light is on, and when learning recruits this indicator to control bar-pressing movements, it bestows upon it the function of indicating this state of affairs going forward—a function it retains even if it is later triggered in error, by something else (thus explaining misrepresentation as well).

This is a lovely way of weaving together causal origins, accuracy, and determinacy, and, like many other graduate students in the 1990s and 2000s, I got awfully excited about it when I first heard about it. Unfortunately, it still doesn’t work. There are lots of quibbles, but the main issue is that, despite appearances, it still has a hard time allowing for a representation to be both determinate and (later) tokened in error.

Figure 1. A diagram of Dretske’s “structuring cause” solution to the problem of mental content. On his view, neural state N is about stimulus conditions F if learning recruits N to cause movements M because of its ability to indicate F in the learning history. In recruiting N to indicate F going forward, Dretske says that it provides a “structuring cause” explanation of behavior; that it indicated F in the past explains why it now causes M. However, if content is fixed in the past in this way, then organisms can later persist in error indefinitely (e.g. token N in the absence of F) without ever changing their representational strategies. On my view, such persistent error provides evidence that the organism doesn’t actually regard tokening N in the absence of F as an error, that F is not actually the content of N (by the agent’s own lights).

I present the argument as a dilemma on the term “indication”. Indication either requires perfect causal covariation, or something less. Consider the proverbial frog and its darting tongue; if the frog will also eat lead pellets flicked through its visual field, then its representation can only perfectly covary with some category that includes lead pellets, such as “small, dark, moving speck”. On this ascription, it looks impossible for the frog to ever make a mistake, because all and only small dark moving specks will ever trigger its tongue movements. If on the other hand indication during recruitment can be less than perfect, then we could say that the representation means something more intuitively satisfying like “fly”, but then we’ve lost the tight relationship between information theory and causal origins to settle indeterminacy, because there are lots of other candidate categories that the representation imperfectly indicated during learning (such as insect, food item, etc.).

This is all pretty familiar ground; what is less familiar is that there is a relatively unexplored “forward-looking” alternative that starts to look very good in light of this dilemma.

To my mind, the views that determine content by looking backward to causal history get into trouble precisely because they do not assign error a role in the content-determination process. Error on these views is a byproduct of representation; on backward-looking views, organisms can persist in error indefinitely despite having their noses rubbed in evidence of their mistake, like the frog that will go on eating BBs until its belly is full of lead.

Representational agents are not passive victims of error; in ideal circumstances, they should react to errors, specifically by revising their representational schemes to make those errors less likely in the future. Part of what it is to have a representation of X is to regard evidence that you’ve activated that representation in the absence of X as a mistake.

Content ascriptions should thus be grounded in the agent’s own epistemic capacities for revising its representations to better indicate their contents in response to evidence of representational error. Specifically, on my view, a representation means whatever it indicates at the end of its likeliest revision trajectory—a view that, not coincidentally, happens to fit very well with a family of “predictive processing” approaches to cognition that have recently achieved unprecedented success in cognitive science and artificial intelligence.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2238/.

About the author

Cameron Buckner is an Associate Professor in the Department of Philosophy at the University of Houston. His research primarily concerns philosophical issues which arise in the study of non-human minds, especially animal cognition and artificial intelligence. He just finished writing a book (forthcoming from OUP in Summer 2023) that uses empiricist philosophy of mind to understand recent advances in deep-neural-network-based artificial intelligence.


Brendan Balcerak Jackson, David DiDomenico, and Kenji Lota – “In Defense of Clutter”

Picture of a cluttered room with books, prints, musical instruments, ceramic containers, and other random objects disorderly covering every bit of surface available.
“Old armour, prints, pictures, pipes, China (all crack’d), 
old rickety tables, and chairs broken back’d” (1882) Benjamin Walter Spiers

In this post, Brendan Balcerak Jackson, David DiDomenico, and Kenji Lota discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Suppose I believe that mermaids are real, and this belief brings me joy. Is it okay for me to believe that mermaids are real? On the one hand, it is tempting to think that if my belief doesn’t harm anyone, then it is okay for me to have it. On the other hand, it seems irrational for me to believe that mermaids are real when I don’t have any evidence or proof to support this belief. Are there standards that I ought to abide by when forming and revising my beliefs? If there are such standards, what are they?

Two philosophical views about the standards that govern what we ought to believe are pragmatism and the epistemic view. Pragmatism holds that our individual goals, desires, and interests are relevant to these standards. According to pragmatists, the fact that a belief brings me joy is a good reason for me to have it. The epistemic view holds that all that matters are considerations that speak for or against the truth of the belief; although believing that mermaids are real brings me joy, this is not a good reason because it is not evidence that the belief is true. 

Gilbert Harman famously argued for a standard on belief formation and revision that he called ‘The Principle of Clutter Avoidance’:

One should not clutter one’s mind with trivialities (Harman 1986: 12). 

For example, suppose that knowing Jupiter’s circumference would not serve any of my goals, desires, or interests. If I end up believing truly that Jupiter’s circumference is 272,946 miles (perhaps I stumble upon this fact while scrolling through TikTok), am I doing something I ought not to do?

According to Harman, I ought not to form this belief because doing so would clutter my mind. Why waste valuable cognitive resources believing things that are irrelevant to one’s own wellbeing? Harman’s view is that our cognitive resources shouldn’t be wasted in this way, and this is his rationale for accepting the Principle of Clutter Avoidance.

Many epistemologists are inclined to accept Harman’s principle, or something like it. This matters because the principle appears to lend significant weight to pragmatism over the epistemic view. Picking up on Harman’s ideas about avoiding cognitive clutter, Jane Friedman has recently argued that Harman’s principle has the following potential implication:

Evidence alone doesn’t demand belief, and it can’t even, on its own, permit or justify belief (Friedman 2018: 576). 

Rather, genuine standards of belief revision must combine considerations about one’s interests with more traditional epistemic sorts of considerations. Friedman argues that the need to avoid clutter implies that evidence can be overridden by consideration of our interests: even if your evidence suggests that some proposition is true, Harman’s principle may prohibit you from believing it. According to Friedman, accepting Harman’s principle leads to a picture of rational belief revision that is highly “interest-driven”, according to which our practical interests have a significant role to play.

These are radical implications, in our view, and so we wonder whether Harman’s principle should be accepted. Is it a genuine principle of rational belief revision? Our aim in “In Defense of Clutter” is to argue that it is not. Moreover, we offer an alternative way to account for clutter avoidance that is consistent with the epistemic view.

Suppose that you believe with very good evidence that it will rain and, with equally good evidence, that if it will rain, then your neighbor will bring an umbrella to work. An obvious logical consequence of these two beliefs—one that we may suppose you are able to appreciate—is that your neighbor will bring an umbrella to work.

This information may well be unimportant for you. It may be that no current interest of yours would be served by settling the question of whether your neighbor will bring an umbrella to work. But suppose that in spite of this you ask the question anyway. Having asked it, isn’t it clear that you ought to answer it in the affirmative? At the very least, isn’t it clear that you are permitted to do so? The question has come up, and you can easily see the answer. How can you be criticized for answering it?

In general, if a question comes up, surely it is okay to answer it in whatever way is best supported by your evidence. According to the Principle of Clutter Avoidance, however, you should not answer the question, because this would be to form a belief that doesn’t serve any of your practical interests. This is implausible. The answer to your question clearly follows from beliefs that are well supported by your evidence.

Can we account for the relevance of clutter avoidance without being led to this implausible result? Here is our proposal. Rather than locating the significance of cognitive clutter at the level of rational belief revision, we locate its significance at earlier stages of inquiry.

Philosophers have written extensively on rational belief revision, but comparably little about earlier stages of inquiry; for example, about asking or considering questions, and about the standards that govern these activities. If we zoom out from rational belief revision and reorient our focus on earlier stages of inquiry, we can bring the significance of cognitive clutter into view.

We propose that clutter considerations play a role in determining how lines of inquiry ought to be opened and pursued over time, but they are irrelevant to closing lines of inquiry by forming beliefs.

It is okay to answer a question in whatever way is best supported by one’s evidence, but a thinker makes a mistake when they ask or consider junk questions—questions whose answers will not serve any of their interests. This enables us to take seriously the considerations of cognitive economy that Harman, Friedman, and many others find compelling, without thereby being led to an interest-driven epistemology.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2257/

References

  • Friedman, Jane (2018). “Junk Beliefs and Interest-Driven Epistemology”. Philosophy and Phenomenological Research, 97(3), 568–83.
  • Harman, Gilbert (1986). Change in View. MIT Press.

About the authors

Brendan Balcerak Jackson’s research focuses on natural language semantics and pragmatics, as well as linguistic understanding and communication, and on reasoning and rationality more generally. He has a PhD in philosophy, with a concentration in linguistics, from Cornell University, and he has worked as a researcher and teacher at various universities in the United States, Australia, and Germany. Since April 2023, he has been a member of the Semantic Computing Research Group at the University of Bielefeld.

David DiDomenico is a Lecturer in the Department of Philosophy at Texas State University. His research interests are in epistemology and the philosophy of mind.

Kenji Lota is a doctoral student at the University of Miami. They are interested in epistemology and the philosophy of language and action.


Ten-Herng Lai – “Civil Disobedience, Costly Signals, and Leveraging Injustice”

Anti-riot police aiming for students' heads and violently dragging them off the streets during a protest in Taiwan in 2014.
“Anti-riot police aiming for students’ heads and violently dragging them off the streets” © Courtesy of Democracy at 4am

In this post, Ten-Herng Lai discusses the article he recently published in Ergo. The full-length version of Ten’s article can be found here.

Illegal activities that are caught are normally punished, often with good reason. Activities that are harmful to others should be deterred (Tadros, 2011). Offenders usually take advantage of others, and it is sometimes the business of the state to make sure that offenders relinquish the unfair benefits they have unjustly acquired (Dagger, 1997). The state is also in a good position to convey blame and express disapproval towards wrongdoers in our name as citizens (Duff, 2001). 

Not all offenders, however, are appropriate targets of punishment. For one reason or another, one may be excused or even fully justified in breaching the law. Many have argued that civil disobedience—a deliberate breach of the law that is predominantly nonviolent, often highly restrained, typically respectful even if confrontational, and primarily communicative in expressing disapproval towards policies or political inaction and demanding political change—falls under the category of permissible law breaching (Brownlee, 2012; Celikates, 2016; Markovits, 2005; Rawls, 1999; Smith, 2013).

Accordingly, civil disobedience serves as an auxiliary mechanism to our legal system. Our democracy is imperfect at best. Individual laws may be unjust, even if the overall rule of law is worth preserving. Despite our best efforts to uphold political equality and ensure that the rights and interests of all stakeholders are taken into consideration, we make mistakes. Legal means of addressing democratic failures often work, but occasionally they turn out to be futile or simply take too long to facilitate urgently needed political change while people continue to suffer from injustice and irreversible harm. Civil disobedience is a call for immediate action: we need racial equality now; we need climate action now; we need to end gender-based oppression now; we need to pay attention to the voices of the politically marginalised now.

It is wrong for the state to punish civil disobedients when their actions are called for. We would effectively be deterring and silencing this indispensable remedy for our democratic deficits. Moreover, these activists have taken no unfair advantage over their fellow citizens through their illegal actions. Ordinary citizens do their fair share in supporting fair and just institutions by obeying the law. Civil disobedients, in contrast, put effort into improving the institutions through their political engagement. They sacrifice their time and effort, and sometimes even risk the hostility of their fellow citizens, to make the state more just. They do more than their fair share (Moraro, 2019). It is not just that the state, through punishment, would be blaming those who are not blameworthy. The state simply lacks any standing to blame these disobedients: their actions are called for because the state fails to live up to the standards of justice and democracy.

However, a problem arises when we consider how civil disobedience works. Civil disobedience works, as I contend, as a costly social signal. To bring about the necessary political change, we must effectively allocate public attention to worthy issues. The voices of different groups and parties, worthy and unworthy, compete for this limited resource. Civil disobedience is a solution to this problem. It is a reliable indicator of the worthiness of the underlying issue it represents. Civil disobedients speak in a way that those without the relevant sincerity and seriousness would be unwilling to speak. The speech is costly because it is illegal and thus punished. Those with less urgent pleas would be unwilling to incur the costs of punishment, as the gains realized through political change are not worth the costs. Those with unreasonable political proposals would also be screened out: they would be paying a hefty price just to be heard and quickly dismissed.
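The screening logic here can be made explicit with a toy signalling model (the formalization and the symbols are mine, offered as an illustration rather than the article's own):

```latex
% Toy costly-signalling sketch. An agent i protests if and only if
% the expected benefit of the political change outweighs the cost:
\[
  \text{protest}_i \iff p\,v_i > c
\]
% where p is the probability that protest secures the change,
% v_i is the value agent i places on that change, and
% c is the expected cost of punishment.
%
% The signal separates urgent from non-urgent causes only when the
% cost falls between the two expected benefits:
\[
  p\,v_{\text{low}} < c < p\,v_{\text{high}}
\]
```

On this sketch, removing punishment drives $c$ toward zero; the left inequality then fails, every agent is willing to protest, and the act carries no information about urgency.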

By refraining from punishing civil disobedience, however, the state risks rendering civil disobedience “cheap talk.” It is no longer costly and thus unable to distinguish itself from other sorts of political noise. Those who suffer from oppression and marginalization would thus be robbed of one effective means to distinguish themselves from others, as this reliable indicator of worthy and urgent issues is neutralized. They would be left with no morally appropriate means to call attention to their plight or would have to escalate and resort to more radical means of protest should such means be morally permissible (Delmas, 2018; Lai, 2019). Disturbingly, by attempting to adhere to the apparent demands of justice regarding punishing civil disobedience, the state would effectively silence the oppressed and marginalized.

Maybe civil disobedience works merely by capturing attention through its illegality and disruptive nature; or maybe it is costly (and thus reliable) only because of the brutal arrests, the burdensome trials, and the hefty fines. Regarding the former, it is dubious whether merely forcing others to listen works without also demonstrating relevant sincerity and seriousness; otherwise, advertisement bombardment would be more effective than it actually is. Regarding the latter, brutal arrests and burdensome trials are by no means morally innocuous. These unjustified acts are not solutions because, well, they are unjustified. Fines, on the other hand, can be sponsored or crowdfunded. The commodification of civil disobedience is highly undesirable because neither “pay to protest” nor “being paid to protest” helps to demonstrate the sincerity and seriousness of protestors.

Overall, (Houston) we have a problem. It is unjust to punish civil disobedience when the latter is called for, but not punishing civil disobedience risks rendering it useless. This is because civil disobedience is reliable because it is costly, and costly because it is punished. Civil disobedience leverages punitive injustice to amplify its illocutionary force; take the punishment away, and civil disobedience becomes cheap, is no longer perceived as reliable, and is thus useless.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/1137/

References

  • Brownlee, K. (2012). Conscience and conviction: The case for civil disobedience. Oxford University Press.
  • Celikates, R. (2016). “Rethinking civil disobedience as a practice of contestation—Beyond the liberal paradigm”. Constellations, 23(1), 37–45.
  • Dagger, R. (1997). Civic virtues: Rights, citizenship, and republican liberalism. Oxford University Press.
  • Delmas, C. (2018). A Duty to Resist: When Disobedience Should Be Uncivil. Oxford University Press.
  • Duff, A. (2001). Punishment, communication, and community. Oxford University Press.
  • Lai, T.-H. (2019). “Justifying uncivil disobedience”. Oxford Studies in Political Philosophy, (5), 90–114.
  • Markovits, D. (2005). “Democratic disobedience”. Yale Law Journal, 114 (8), 1897–1952.
  • Moraro, P. (2019). “Punishment, Fair Play and the Burdens of Citizenship”. Law and Philosophy, 38 (3), 289–311.
  • Rawls, J. (1999). A Theory of Justice. Oxford University Press.
  • Smith, W. (2013). Civil disobedience and deliberative democracy. Routledge.
  • Tadros, V. (2011). The ends of harm: The moral foundations of criminal law. Oxford University Press.

About the author

Ten-Herng Lai is currently a Teaching Fellow at the University of Melbourne. He received his PhD from the Australian National University in 2020. In 2021, he was a Post-Doctoral Research Fellow of the Society for Applied Philosophy at the Australian National University. Starting August 2023, he will be a Lecturer in Philosophy at the University of Stirling. His research interests include social movements, democracy, statues and monuments.

Posted on

Thomas Brouwer – “Social Inconsistency”

A panorama of a chaotic social life in the Southern Netherlands in the 16th century, at a hectic time of transition from Shrove Tuesday to Lent, the period between Christmas and Easter.
“The Fight Between Carnival and Lent” (1559) Pieter Bruegel the Elder

In this post, Thomas Brouwer discusses the article he recently published in Ergo. The full-length version of Thomas’ article can be found here.

Social reality consists in all the things that we humans layer onto the world by means of our social interactions. It includes social norms, customs, fashions, conventions and laws; organizations such as businesses and universities; social groupings like genders, sub-cultures and socio-economic classes; artifacts such as tools, artworks, currencies, and buildings; languages, cuisines, and religions.

The elements that make up social reality come about in various ways, some the products of conscious design, some arising spontaneously out of social interactions. In neither case is quality of construction guaranteed. We are all familiar with the variety of defects social institutions can exhibit. They can be wasteful, unjust, fragile, and easily subverted; they can prove inflexible when circumstances change; they can be opaque. The focus of my investigation is a further, less familiar type of defect: inconsistency.

Inconsistency is a logical notion. A set of statements is inconsistent when you can logically derive a contradiction from it; in other words, when it implies that something is the case and also not the case. Since it is hard to act effectively on contradictory information, inconsistency can be practically problematic; but inconsistency is also tricky philosophically. In many systems of logic – particularly classical logic and intuitionistic logic – contradictions have the troubling property that they entail everything. Once a body of claims implies that something both is and is not the case, it also implies any other claim.
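The “entails everything” property, which logicians call explosion (ex falso quodlibet), rests on a short, standard derivation. For any claim $A$ and an arbitrary claim $B$:

```latex
\begin{align*}
  1.\quad & A \land \lnot A && \text{assumed contradiction} \\
  2.\quad & A               && \text{from 1, conjunction elimination} \\
  3.\quad & A \lor B        && \text{from 2, disjunction introduction} \\
  4.\quad & \lnot A         && \text{from 1, conjunction elimination} \\
  5.\quad & B               && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```

Paraconsistent logics block this derivation, typically by rejecting disjunctive syllogism (the final step): if $A$ can be both true and false, the truth of $A \lor B$ together with $\lnot A$ no longer guarantees the truth of $B$.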

Philosophers have often taken this to motivate a metaphysical claim, namely that the world itself has to be consistent. If you could write down everything true about it, your list would not contain any contradictions. The idea is simple: if there were inconsistencies among the facts, and inconsistencies entail everything, then literally everything would be the case. The moon would be made of cheese and pigs would fly. So, if the world were inconsistent, you’d think we’d have noticed.

Since the latter half of the twentieth century, however, logicians have developed alternative logics which don’t ‘explode’ (as logicians like to put it) in the face of inconsistency. This logical innovation has spurred a philosophical one: some philosophers have been exploring the view, once regarded as a non-starter, that the world can sometimes be inconsistent. This view is called dialetheism, and it comes in different flavours, depending on where in the world you suspect inconsistency. Often, arguments for dialetheism focus on logical paradoxes such as the Liar paradox (‘this sentence is false’), for which satisfying consistent solutions are hard to achieve.

The social world has so far received little attention from dialetheists, with the exception of Priest (1987, ch. 13) and Bolton & Cull (2020). Yet it might be one of the likeliest places to find inconsistency. Here is why.

One major way in which we shape social reality is by laying down conditions for certain social states of affairs. For example, by developing shared expectations and aesthetic reactions, we make it the case that if you put on a certain cut of trousers, you will be unfashionable; by passing a criminal law, we make it the case that if you commit a certain act, you will be a criminal. The mechanics of laying down conditions – or, as we might call it, social construction – have been variously described by social metaphysicians. In my article, I build particularly on Brian Epstein’s (2015) theory. An appealing feature of his theory, as I see it, is that it allows for a realistic amount of disorderliness in the construction of social reality. It allows that the different elements of social reality are constructed through disparate processes, which may involve entirely different people with a variety of purposes, and it allows that the people involved in these processes may lack insight into or substantive control over what they are doing. Social reality is just what ends up emerging out of this dispersed, uncoordinated and often confused activity.

One among many things that can go awry, amid this activity, is that we can end up laying down a condition for something to be the case, and also a condition for it not to be the case, in such a way that these conditions are jointly satisfiable. This is not the sort of thing that we would do if we were clear-eyed and coordinated, but we are not always clear-eyed and coordinated.

Complex regulations are a good case to think about. Consider for instance the intricacies of a tax code, and the scope it offers for devising and revising criteria in muddled ways over time. It is not so strange to think that a person can end up both qualifying and not qualifying for some tax break. Or think about games: the philosopher Ted Cohen argued in 1990 that under the then-current rules of baseball, if a runner hit the base at the same time as being tagged, they were safe and also out (and therefore not safe). Such scenarios are not surprising on the kind of picture of social reality which I sketched. On that picture, consistency in the social world is something that we would have to achieve through care and coordination, not something that is already built in.

Philosophically, this is just an opening move. One might admit that yes, we can screw up our social institutions in such a way that they appear to produce contradictions. But are these really contradictions, or will a more subtle metaphysics reveal that these contradictions are mere surface appearances? I think many philosophers would want to say so. In my article, I develop and consider several cases against social inconsistency on their behalf. Some are more promising than others – but my ultimate conclusion is that we should remain open to social inconsistency.

If this is right, what follows? First off, unless we also want to think that absolutely everything is true, we should embrace some form of paraconsistent logic. But there are further consequences to think about as well. Social facts often have normative import; if you fall in a certain tax bracket, for example, then you should pay that much tax. If there are social inconsistencies, however, some of them could generate dilemmas: situations in which you ought to do something, but you also ought not do it. Many philosophers think dilemmas cannot happen, because of the principle that ought implies can. Social inconsistency might, among other things, give us a reason to re-examine that commitment.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2258/

References

  • Bolton, Emma and Matthew J. Cull (2020). “Contradiction Club: Dialetheism and the Social World”. Journal of Social Ontology 5(2), pp. 169–80.
  • Cohen, Ted (1990). “There Are No Ties at First Base”. Yale Review 79(2), pp. 314-22. Reprinted in Eric Bronson (ed.), Baseball and Philosophy (2004, pp. 73-86). McLean: Open Court Books.
  • Epstein, Brian (2015). The Ant Trap: Rebuilding the Foundations of the Social Sciences. Oxford: Oxford University Press.
  • Priest, Graham (1987/2006). In Contradiction (second edition). Oxford: Oxford University Press.

About the author

Thomas Brouwer is a Research Fellow and Research Development Assistant at the University of Leeds. He studied at the University of Leiden, in the Netherlands, did his PhD at Leeds, and worked at the University of Aberdeen before returning to Leeds. After working initially in metaphysics and the philosophy of logic, he now works mainly in social ontology. He is especially interested in the metaphysics of social facts, the actions and attitudes of groups, and the mechanics of social norms and conventions.

Posted on

Henry Clarke – “Mental Filing Systems: A User’s Guide”

The painting depicts a person looking out from an interior through a sash window, which echoes the theme of the article: a thinker's view of the world being made up of compartmentalized bodies of information.
“Tall Windows” (1913) Wilhelm Hammershøi

In this post, Henry Clarke discusses his article recently published in Ergo. The full-length version of Henry’s article can be found here.

For many, if not all, of the objects we can think about, we have a reasonably rich conception of what they are like. One of the basic representational functions of the mind is to draw together different bits of information to make up these conceptions. An image that naturally suggests itself is of the mind as a kind of filing system, with folders dedicated to different objects, and descriptions placed in each to be called upon when needed.

The mental filing system idea is philosophically useful in that it provides a framework for understanding how a thinker can treat her beliefs as being about one and the same thing. Doing so results in her being prepared to draw certain inferences from the content of a conception without having to establish that the relevant identity holds. For example, someone with a mental file that contains the descriptions tall, leafy, has brown bark can immediately infer that something is tall, leafy, with brown bark. The inference presupposes that being tall, leafy, etc. are all true of one and the same object. This sort of presupposition is a fundamental part of thought, and of a thinker’s perspective on her thought. Having such a perspective sustains the sort of rationally structured view of the world that we can develop and make use of, and mental files have seemed to many like a good way of capturing this.

But does the mental filing system idea actually tell us something about how the mind works? Or is it just a dispensable metaphor? Rachel Goodman and Aidan Gray (2022) have argued (taking François Recanati’s theory of mental files as their working example) that mental filing – taking one’s beliefs immediately to be about the same thing – doesn’t require taking the file image too seriously. Their skepticism stems from an analysis of what it means to say that we ‘can’ treat our beliefs as being about the same thing without having to figure out the relevant identity. They argue that this is a matter of rational permissibility, and if that’s so, then mental files aren’t needed.

Why would that be? Inferences are permissible because of the contents of the beliefs they involve. If reference isn’t enough to account for permissibility, as it appears not to be, then there must be some other feature that is. Following Kit Fine (2007), Goodman and Gray call this feature coordination. Coordination, like other representational features, can be attributed because it helps to make sense of what and how a thinker thinks what they do. But if we have this representational feature, which can be attributed to make sense of treating beliefs as being about the same thing, then files add nothing. You can have mental filing without mental files. The appeal of this result is that it seems to give a more refined picture of what is actually involved in rationally structured thinking, without unnecessary metaphors.

That is their argument, in outline. Does it work? The main problem is that it overlooks the psychology of mental filing. What calls for explanation is not just the permissibility of inferences that presuppose identity, but also the fact that thinkers are prepared to make them. Mental files can be brought in as entities whose function is to account for when a thinker is prepared to draw these inferences. The causal basis of that function might be described in other terms that tell us how the function is carried out. But as a hypothesis – that at some functional level there is something that brings together different bits of information and so provides the causal basis for a thinker being disposed to presuppose identity – mental files do the job nicely.

This causal-functional view of files shows that there is a notion of a mental file that does some work. This undermines Goodman and Gray’s argument because it renders coordination, the extra representational feature, redundant. Suppose we have the identity-presupposing dispositions in place, because of the presence of a mental file. Then the question is, do we need to add a representational feature (coordination) to make them permissible? It seems not. If a thinker is disposed to make the inferences, and nothing indicates that there is something faulty going on, then the inferences are in good standing. The results of the inferences will be (at least potentially) relied upon by the thinker in pursuing her plans and projects – things that matter to her. This means that, were there to be something to indicate that the conception in question was somehow incorrectly formed, then the thinker should be motivated to check the inferences she is disposed to draw. If she is rational, then she will do this, and so were there to be a problem, the inferences would not be made. Having the dispositions and monitoring their viability is enough for permissibility because it makes them manifestly reliable. So Goodman and Gray’s conclusion ought to be inverted: coordination doesn’t add anything that would otherwise be missing from the account that files provide.

There is other work for files to do as well. Goodman and Gray suggest that the basis for coordination is a thinker reliably gathering information together that does concern one and the same object. But we need files for this to happen. Interpreting new information when finding out more about what objects are like calls upon the conceptions we already have. The content of a mental file will tell us how to locate the new information we get from various sources: in order to recognize something we’ve already encountered, or to determine (for example) that a new person we meet really is someone new and not an old acquaintance, we need to use information integrated by mental files.

Forming a picture of the world means gathering together these smaller-scale pictures of objects, and to do that, we need the kind of structure that mental files provide. They are not just a metaphor. But how exactly they work remains to be uncovered.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2249/

References

  • Kit Fine (2007). Semantic Relationism. Blackwell.
  • Rachel Goodman and Aidan Gray (2022). “Mental Filing”. Noûs, 56(1), 204–226.

About the author

Henry Clarke is a Senior Project Editor in the Humanities at Oxford University Press. He received his PhD from UCL in 2016. His research focuses on the philosophy of mind.

Posted on

Alycia LaGuardia-LoBianco – “Trauma and Compassionate Blame”

Allegoric fresco representing the sufferings of weak mankind, the well-armed strong, compassion and ambition in their quest for happiness.
Detail from the Beethoven Frieze “The Sufferings of Weak Mankind, the Well-Armed Strong, Compassion and Ambition” (1902) Gustav Klimt

In this post, Alycia LaGuardia-LoBianco discusses the article she recently published in Ergo. The full-length version of Alycia’s article can be found here.

When someone we love hurts us, our responses are influenced by our relationship with her: our hurt is tinged with love, care, expectations, a shared history, among other things. These responses may be further complicated if, in addition, that person’s harmful behavior has been shaped by a traumatic past. Having experienced a traumatic event may partly shape the way a person behaves. For instance, a veteran may lash out at her family; a victim of abuse may repeat that abuse on his family. Though this is of course not true for all survivors of trauma, there can be ways that past trauma shapes present behavior. And as a result of recognizing that a loved one’s harmful behavior may be caused by their past trauma, we may think that we shouldn’t blame them for the hurt they caused. After all, shouldn’t the trauma they’ve suffered exempt them from blame? 

I argue that the recognition of traumatic histories should have an impact on how we blame loved ones—but not by making blame inappropriate. Rather, these histories should motivate us to take a broader view of that person’s wrongdoing in the context of their traumatic past. It should motivate what I call ‘compassionate blame’: an attitude that considers the person as both someone who has caused harm and someone who has suffered harm. This attitude recognizes the unfortunate reality that someone has been unfairly shaped to commit harms, so that blame for that harm is bound up with compassion for the person who suffered. 

Should we blame those with a traumatic past for their harmful behavior?

When considering traumatic influences on harmful behavior, an intuitive view holds that survivors ought not be blamed for what they’ve done: traumatic histories exempt them from blame. Why might this be the case?

First, we might think that it is inappropriate to blame survivors because they have suffered from the trauma they’ve experienced. To heap blame upon a survivor may seem cruel or callous; they have already endured enough. This reason for exempting survivors is a version of a concern against blaming the victim, and it is admirably merciful.

However, the fact that one has suffered does not bear on whether they are blameworthy for their behavior, even when that suffering is relevantly connected to the subsequent harm committed. The consideration of avoiding a further burden on survivors may have an impact on how we express our blame, but it does not actually change whether survivors are blameworthy. So, the fact that survivors have suffered cannot be an exempting condition for blame.

Second, we might think that survivors ought not be blamed because they did not control the traumatic circumstances they endured. If the conditions that partly shaped a person’s behaviors are outside their control, we may be reluctant to blame them for those behaviors. After all, it seems an intuitive aspect of moral responsibility that we are only responsible for actions over which we have some relevant control. 

Although we should recognize that we are all vulnerable to good and bad luck, we should nonetheless be hesitant to forgo responsibility because of it. Our choices and actions are built out of conditions of our past which are not entirely of our choosing. That we are sometimes responsible for conditions over which we had no control—including the ways our characters have been partly shaped by forces beyond us—is a widespread feature of our lives, and it does not normally undermine responsibility.

Similarly, genuine relationships seem to require a basic expectation of responsibility even among the vicissitudes of luck. Exempting a survivor’s behavior because of their past may result in treating them merely as the product of their trauma, and this would seem to hinder a genuine relationship with them. Moreover, exemption from blame risks undermining the seriousness of the wrong at issue.

Behind these objections is a broad concern about proper regard for survivors. We don’t want to patronizingly reduce survivors to their trauma, or to avoid blaming them in a way that is unfair to their victims, even though we do want to remain sensitive to their past suffering. Our relationship is with the person, not with their past, so we should, first, acknowledge that survivors are responsible for what they have done wrong, and then also ask how the reality of their trauma should impact our response. 

Cultivating an attitude of compassionate blame

It may be tempting to conclude from the foregoing arguments that, because trauma does not exempt, survivors should be straightforwardly blamed. Against this, I suggest that the reality of trauma should impact our blaming practices: we should be sensitive to the trauma endured and the harm committed in an attitude of compassionate blame.

Compassion is an emotion in which “the perception of the other’s negative condition evokes sorrow or suffering in the one who feels the emotion” (Snow 1991: 196) along with a set of beliefs about the other’s suffering (Snow 1991: 198). Blame adds an emotional valence to our beliefs regarding the connection between the survivor’s traumatic circumstances and their harmful behavior. Though they may seem to pull us in different directions, the feelings of compassion and blame are perfectly compatible, and we have complex emotional experiences of this sort all the time.

Compassionate blame allows us to recognize the seriousness of the harms at issue, treat the survivor as a responsible person, and appropriately acknowledge their suffering. It enables us to respond appropriately to a difficult situation in which those who have been hurt hurt others, and to do so in a way that attends to the complex moral features of these relationships.

Want more?

Read the full-length version of this article at https://journals.publishing.umich.edu/ergo/article/id/1116/

References

  • Snow, N. E. (1991). Compassion. American Philosophical Quarterly, 28(3), 195–205. 

About the author

photo of the author

Alycia LaGuardia-LoBianco is an Assistant Professor of Philosophy at Grand Valley State University, where she teaches and researches in feminist philosophy, ethics, moral psychology, and the philosophy of psychiatry. She is especially curious about how experiences of oppression, trauma, and mental illness shape personal identity and responsibility.

Posted on

Laura Schroeter and François Schroeter – “Bad News for Ardent Normative Realists?”

Portrait of a man composed by painting on the canvas various objects traditionally associated with fire – such as sticks, wood, guns and other tools – in such a way that they compose a human head.
“Fire” (1566) Giuseppe Arcimboldo

In this post, Laura and François Schroeter discuss their article recently published in Ergo. The full-length version of the article can be found here.

Many metaethicists are attracted to a position Matti Eklund (2017) calls ‘Ardent Normative Realism’. The main motivation behind this position can be illustrated with the help of a couple of examples.

Imagine you are disagreeing with a friend about whether abortion at 20 weeks is morally wrong. Imagine further that the two of you have a very different understanding of what it takes for an action to be morally wrong: you think that morality is determined by God’s law, while your friend does not. Despite this divergence, you two seem to be genuinely disagreeing about the same topic. If we interpreted you as talking past each other, we would be failing to take the normative authority of morality seriously (Enoch 2011). Both of you are interested in what is morally wrong tout court, not what is morally wrong according to the idiosyncratic standards of some individual.

Similarly, if we imagine two separate communities debating the same issue, we would have to say that they are interested in finding out whether abortion at 20 weeks is morally wrong tout court, rather than whether it is wrong according to the normative standards specific to the community. Settling for less would deflate the normative authority of morality.

In order to vindicate these intuitions, proponents of Ardent Normative Realism endorse a strong form of metaphysical realism in the moral domain. According to the Ardent Normative Realist, “reality itself favors certain ways of valuing and acting” (Eklund 2017: 1). If two communities disagree on moral questions, they cannot both be getting it right. At most, one of them is “limning” the normative structure of reality (22).

Now, suppose we grant that reality does indeed favor certain ways of valuing and acting. The Ardent Realist still faces an important problem. Given their radically different understandings of what makes an action morally wrong, how is it possible for individuals and communities to pick out the same reference with their moral terms? Contrast moral terms with the term ‘bachelor’, for example. The term ‘bachelor’ has the same reference, even when used by different individuals, because we all have very similar empirical criteria for who counts as a bachelor: unmarried eligible males. But imagine we introduce a new term, ‘nuba’, and different individuals have radically divergent views about what it takes for something to be a nuba. How can the term ‘nuba’ pick out the same property when it is used by individuals who rely on different application criteria?

To address this problem, many Ardent Realists have been tempted by a thesis Eklund calls ‘Referential Normativity’:

Two predicates or concepts conventionally associated with the same normative role are thereby determined to have the same reference. (Eklund 2017: 10)

Imagine that all it takes to count as competent with our new term, ‘nuba’, is that it plays the same normative role in one’s psychology that English speakers associate with ‘morally wrong’. For instance, if a speaker judges that an action is nuba, they will be disposed to avoid performing that action or to feel guilt if they do perform it. According to Referential Normativity, all it takes for speakers to pick out the same reference is that they take ‘nuba’ to play this normative role; their divergent empirical criteria for classifying actions as ‘nuba’ are strictly irrelevant to fixing its reference.

Obviously, it would be great news for Ardent Realists if Referential Normativity were true. However, we argue that Referential Normativity is just too good to be true. To show what’s problematic about it, we need to step back and ask foundational questions about how reference is determined. There is much controversy in the philosophical literature concerning this topic, but we seek to sidestep those divergences by focusing on points of agreement among theorists of reference determination.

What is the point of referential ascriptions? We suggest that ascribing a specific reference to an adjective like ‘nuba’ must:

(i) help explain the reasoning and actions of subjects using the term ‘nuba’, and

(ii) set truth-conditions for assessing whether assertions and beliefs involving ‘nuba’ are correct. 

Suppose, for instance, that we interpret competent speakers’ use of ‘nuba’ as attributing the property of being loud. This interpretation flouts both (i) and (ii). The interpretation is not explanatory because most users of ‘nuba’ will not associate its defining normative role with all and only loud actions, and so attributing this reference will not help to explain their reasoning and actions. And the interpretation does not set a plausible standard of correctness because there is no plausible story about why all competent users would be failing to live up to their semantic commitments if they fail to apply ‘nuba’ to loud actions. We must conclude that the interpretation of ‘nuba’ as referring to being loud is mistaken.

In the full-length version of our paper, we examine different attempts to reconcile Referential Normativity with constraints (i) and (ii). We argue that these attempts all fail. In a nutshell, Referential Normativity tries to pull a rabbit out of a hat. The mere normative role associated with a term like ‘morally wrong’ is insufficient to ground the ascription of any empirically instantiated property as its reference. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/1135/

References

  • Eklund, M. (2017). Choosing Normative Concepts. Oxford: Oxford University Press.
  • Enoch, D. (2011). Taking Morality Seriously: A Defense of Robust Realism. Oxford: Oxford University Press.

About the authors

Laura and François Schroeter are Associate Professors of Historical and Philosophical Studies at the University of Melbourne. 

Laura received her PhD from the University of Michigan. After that, she took up a postdoctoral fellowship at the Research School of the Social Sciences at the Australian National University. She joined the University of Melbourne in 2008. Her research focuses on the philosophy of language, the philosophy of mind, and metaethics. She has written extensively about two-dimensional semantics, concept individuation, and normative concepts.

François received his PhD from the University of Fribourg. He joined the Philosophy Department at Melbourne in 2003, after spending time at the University of Michigan and at the Research School of Social Sciences at the Australian National University. He is interested in normative concepts, metaethics, and moral psychology.

Posted on

Kengo Miyazono – “Visual Experiences without Presentational Phenomenology”

The image represents a landscape in the style of cubism, where the surfaces of three dimensional objects are laid out in two-dimensional space with alienating effects. This is meant to be somewhat analogous to the visual experience of patients with derealization/depersonalization disorder described in the article.
“Mediterranean Landscape” (1952) © Pablo Picasso

In this post, Kengo Miyazono discusses the article he recently published in Ergo. The full-length version of Kengo’s paper can be found here.

Compare the following quotes.

[1] Suppose you are standing in a field on a bright sunny day. Your vision is good, and you know that, and you’ve no thought to distrust your eyes. A friend shouts from behind. You turn. It looks as if a rock is flying at your face. You wish not to be hit. [...] Your visual experience will place a moving rock before the mind in a uniquely vivid way. Its phenomenology will be as if a scene is made manifest to you. [...] Such phenomenology involves a uniquely vivid directedness upon the world. Visual phenomenology makes it for a subject as if a scene is simply presented. Veridical perception, illusion and hallucination seem to place objects and their features directly before the mind. (Sturgeon 2000, 9)
[2] Everything appears as through a veil [...] Things do not look as before, they are somehow altered, they seem strange, two-dimensional. [...] Everything seems extraordinarily new as if I had not seen it for a long time. (Jaspers 1997, 62) 
[3] Familiar things look strange and foreign. [...] It’s all just there and it’s all strange somehow. I see everything through a fog. Fluorescent lights intensify the horrible sensation and cast a deep veil over everything. I’m sealed in plastic wrap, closed off, almost deaf in the muted silence. It is as if the world were made of cellophane or glass. (Simeon & Abugel 2006, 81) 

The first quote is from Scott Sturgeon’s discussion of the phenomenology of visual experience. The second and the third quotes are subjective reports of patients with depersonalization-derealization disorder. In my view, these quotes, although taken from very different contexts, are referring to the same thing. Or, more precisely, the first quote is describing the presence of something, while the second and the third quotes are describing the absence of it. The thing in question is “presentational phenomenology” (Chudnoff 2012; “Scene-Immediacy” in Sturgeon’s own terminology).

My hypothesis is that presentational phenomenology is absent from visual experiences in cases of derealization. This hypothesis provides a plausible explanation of the peculiar subjective reports of derealization. Frequent expressions of derealization reported in the Cambridge Depersonalization Scale (Sierra & Berrios 2000) include the following:

  • Out of the blue, I feel strange, as if I were not real or as if I were cut off from the world.
  • What I see looks ‘flat’ or ‘lifeless’, as if I were looking at a picture.
  • My surroundings feel detached or unreal, as if there were a veil between me and the outside world.

A remarkable feature of the subjective reports of derealization is that they are metaphorical, not literal. As Jaspers points out, patients seem unable to express their experience directly. They do not think that the world has really changed; they just feel as if everything looked different to them (Jaspers 1997, 62).

Another remarkable feature is that the metaphorical expressions of derealization have some recurrent themes. People with derealization often say that they feel as if they were in a “fog”, “dream”, or “bubble”, or as if there were a “veil” or a “glass wall” between them and external objects. Metaphors of this kind seem to express the idea of indirectness or detachment. They also say that they feel as if they were looking at a “picture” or a “movie”, or as if external objects were “flat”. Metaphors of this kind seem to express the idea of representation.

My hypothesis explains why subjective reports of derealization tend to be metaphorical rather than literal. When presentational phenomenology is absent from visual experience, most patients (except philosophers of mind) do not have a suitable concept (such as the concept of “presentational phenomenology”) to refer to what is missing in a direct, non-metaphorical manner; the best thing they can do is to describe it metaphorically. 

My hypothesis also explains the recurrent themes of the metaphors, namely indirectness and representation. In general, presentational phenomenology involves a sense of directness (e.g. “place objects and their features directly before the mind” in the first quote above) as well as a sense of presentation (e.g. “as if a scene is simply presented” in the first quote). Thus, it makes sense that patients with depersonalization-derealization disorder would use metaphorical expressions of in-directness and re-presentation in order to signal its absence.

Is the hypothesis that presentational phenomenology is absent from visual experiences in cases of derealization also empirically plausible?

The general consensus in the empirical and clinical literature is that affective or interoceptive abnormalities are at the core of depersonalization and derealization (e.g. Sierra 2009; Sierra & Berrios 1998; Seth, Suzuki, & Critchley 2012). One might think this poses a problem for my hypothesis: the empirically and clinically plausible view might seem to be that derealization is an affective or interoceptive abnormality rather than an abnormality in presentational phenomenology. Note, however, that this objection presupposes that an abnormality in presentational phenomenology is not itself an affective or interoceptive abnormality. A different, better interpretation is available: an abnormality in presentational phenomenology constitutes, at least in part, the affective/interoceptive abnormality in question. On this interpretation, the two accounts are not alternatives at all; presentational phenomenology is, generally speaking, a kind of affective phenomenology.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/1156/.

References

  • Chudnoff, Elijah (2012). “Presentational Phenomenology”. In Sofia Miguens and Gerhard Preyer (Eds.), Consciousness and Subjectivity (51–72). Ontos Verlag.
  • Jaspers, Karl (1997). General Psychopathology (Vol. 1). Trans. J. Hoenig and Marian W. Hamilton. Johns Hopkins University Press.
  • Seth, Anil K., Keisuke Suzuki, and Hugo D. Critchley (2012). “An Interoceptive Predictive Coding Model of Conscious Presence”. Frontiers in Psychology, 2(395), 1–16.
  • Sierra, Mauricio (2009). Depersonalization: A New Look at A Neglected Syndrome. Cambridge University Press.
  • Sierra, Mauricio and German E. Berrios (1998). “Depersonalization: Neurobiological Perspectives”. Biological Psychiatry, 44(9), 898–908.
  • Sierra, Mauricio and German E. Berrios (2000). “The Cambridge Depersonalisation Scale: A New Instrument for the Measurement of Depersonalisation”. Psychiatry Research, 93(2), 153–164.
  • Simeon, Daphne and Jeffrey Abugel (2006). Feeling Unreal: Depersonalization Disorder and the Loss of the Self. Oxford University Press.
  • Sturgeon, Scott (2000). Matters of Mind: Consciousness, Reason and Nature. Routledge.

About the author

Kengo Miyazono is Associate Professor of Philosophy at Hokkaido University. Previously, he was Associate Professor at Hiroshima University and Research Fellow at the University of Birmingham. He received his PhD from the University of Tokyo. He specializes in philosophy of mind, philosophy of psychology, and philosophy of psychiatry.