
Kristie Miller – “Against Passage Illusionism”

Detail of Salvador Dalí’s tarot card “The Magician” (1983)

In this post, Kristie Miller discusses her article recently published in Ergo. The full-length version of Kristie’s article can be found here.

It might seem obvious that we experience the passing of time. Certainly, in some trivial sense we do. It is now late morning. Earlier, it was early morning. It seems to me as though some period of time has elapsed since it was early morning. Indeed, during that period it seemed to me as though time was elapsing, in that I seemed to be located at progressively later times.

One question that arises is this: in what do these seemings consist? One way to put the question is to ask what content our experience has. What state of the world does the experience represent as being the case?

Philosophers disagree about which answer is correct. Some think that time itself passes. In other words, they think that there is a unique set of events that are objectively, metaphysically, and non-perspectivally present, and that which events those are, changes. Other philosophers disagree. They hold that time itself is static; it does not pass, because no events are objectively, metaphysically, and non-perspectivally present, such that which events those are, changes. Rather, whether an event is present is a merely subjective or perspectival matter, to be understood in terms of where the event is located relative to some agent.

Those who claim that time itself passes typically use this claim to explain why we experience it as passing: we experience time as passing because it does. What, though, should we say if we think that time does not pass, but is rather static? You might think that the most natural thing to say would be that we don’t experience time as passing. We don’t represent there being a set of events that are non-perspectivally present, and that which those are, changes. Of course, we represent various events as occurring in a certain temporal order, and as being separated by a certain temporal duration, and we experience ourselves as being located at some times (rather than others) – but none of that involves us representing that some events have a special metaphysical status, and that which events have that status, changes. So, on this view, we have veridical experiences of static time.

Interestingly, however, until quite recently this was not the orthodox view. Instead, the orthodoxy was a view known as passage illusionism. This is the view that although time does not pass, it nevertheless seems to us as though it does. So, we are subject to an illusion in which things seem to us some way that they are not. In my paper I argue against passage illusionism. I consider various ways that the illusionist might try to explain the illusion of time passing, and I argue that none of them is plausible.

The illusionist’s job is quite difficult. First, the illusion in question is pervasive. At all times that we are conscious, it seems to us as though time passes. Second, the illusion is of something that does not exist – it is not an experience which could, in other circumstances, be veridical.

In the psychological sciences, illusions are explained by appealing to cognitive mechanisms that typically function well in representing some feature(s) of our environment. In most conditions, these mechanisms deliver veridical experiences. In some local environments, however, certain features lead the mechanism to misrepresent the world, generating an illusion. Explanations of this kind, however, involve illusions that are not pervasive (they occur only in some local environments) and are not of something that does not exist (they are the product of mechanisms that normally deliver veridical experiences). This gives us reason to doubt that any explanation of this kind will work for the passage illusionist.

I consider a number of mechanisms that represent aspects of time, including those that represent temporal order, duration, simultaneity, motion and change. I argue that, regardless of how we think about the content of mental states, we should conclude that none of the representational states generated by these mechanisms individually, or jointly, represent time as passing.

First, suppose we think that the content of our experiences is exhausted by the things in the world with which those experiences typically co-vary. For instance, suppose you have a kind of mental state which typically co-varies with the presence of cows. On this view, that mental state represents cows, and nothing more. I argue that if we take this view of representational content, then none of the contents generated by the functioning of the various mechanisms that represent aspects of time could, either severally or (importantly) jointly, represent time as passing. For even if our brains could in some way ‘knit together’ some of these contents into a new percept, such contents don’t have the right features to generate a representation of time passing. For instance, they don’t include a representation of objective, non-perspectival presence. So, if we hold this view of mental content, we should think that passage illusionism is false.

Alternatively, we might think that our mental states do represent the things in the world with which they typically co-vary, but that their content is not exhausted by representing those things. So, the illusionist could argue that we experience passage by representing various temporal features, such that our experiences have not only that content, but also some extra content, and that jointly this generates a representation of temporal passage.

I argue that it is very hard to see why we would come to have experiences with this particular extra content. Representing that certain events are objectively, metaphysically, and non-perspectivally present, and that which events these are, changes, is a very sophisticated representation. If it is not an accurate representation, it’s hard to see why we would come to have it. Further, it seems plausible that the human experience of time is, in this regard, similar to the experience of some non-human animals. Yet it seems unlikely that non-human animals would come to have such sophisticated representations, if the world does not in fact contain passage.

So, I conclude, it is much more likely, if time does not pass, that we have veridical experiences of a static world rather than illusory experiences of a dynamical world.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2914/.

About the author

Kristie Miller is Professor of Philosophy and Director of the Centre for Time at the University of Sydney. She writes on the nature of time, temporal experience, and persistence, and she also undertakes empirical work in these areas. At the moment, she is mostly focused on the question of whether, assuming we live in a four-dimensional block world, things seem to us just as they are. She has published widely in these areas, including three recent books: “Out of Time” (OUP 2022), “Persistence” (CUP 2022), and “Does Tomorrow Exist?” (Routledge 2023). She has a new book underway on the nature of experience in a block world, which hopefully will be completed by the end of 2024. 


Markus Pantsar – “On Radical Enactivist Accounts of Arithmetical Cognition”

Two children selling fruit from a basket count the coins they just received.
Detail of “The Little Fruit Seller” (c. 1670-1675) Bartolomé Esteban Murillo

In this post, Markus Pantsar discusses the article he recently published in Ergo. The full-length version of his article can be found here.

Traditionally, cognitive science has held the view that the human mind works through, or is at least best explained by, mental representations and computations (e.g., Chomsky 1965/2015; Fodor 1975; Marr 1982; Newell 1980). Radical enactivist accounts of cognition challenge this paradigm. According to them, the most basic forms of cognition do not involve mental representations or mental content; representations (and content) exist only in minds that have access to linguistic and sociocultural truth-telling practices (Hutto and Myin 2013, 2017).

As presented by Hutto and Myin, radical enactivism is a general approach to the philosophy of cognition. It is partly from this generality that it gets much of its force and appeal. However, a general theory of cognition ultimately needs to be tested on particular cognitive phenomena. In my paper, I set out to do just that with regard to arithmetical cognition. I am not a radical enactivist, but neither am I antagonistic to the approach. My aim is to provide a dispassionate analysis based on the progress that has been made in the empirical study and philosophy of numerical cognition.

Arithmetical cognition is especially well suited to test radical enactivism (Zahidi 2021). This is not because arithmetic itself suggests the existence of non-linguistic representations. In fact, ever since Dedekind and Peano presented their axiomatizations of arithmetic, it has been clear that the entire arithmetic of natural numbers can be presented in a very simple language with only a handful of rules (i.e., the axioms) (Dedekind 1888; Peano 1889).
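
To see just how few rules are needed, here is one standard modern formulation of the Dedekind–Peano axioms, together with the usual recursive definitions of addition and multiplication. This is a textbook rendering rather than anything quoted from the paper; (P5) is the induction schema, with φ ranging over formulas of the arithmetical language.

    \begin{align*}
    &(\mathrm{P1})\quad 0 \in \mathbb{N}\\
    &(\mathrm{P2})\quad n \in \mathbb{N} \rightarrow S(n) \in \mathbb{N}\\
    &(\mathrm{P3})\quad S(n) \neq 0\\
    &(\mathrm{P4})\quad S(m) = S(n) \rightarrow m = n\\
    &(\mathrm{P5})\quad \bigl(\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(S(n)))\bigr) \rightarrow \forall n\,\varphi(n)\\
    &(\mathrm{Rec})\quad m + 0 = m,\quad m + S(n) = S(m + n),\quad m \cdot 0 = 0,\quad m \cdot S(n) = (m \cdot n) + m
    \end{align*}

Elementary arithmetic can be derived from these few rules, which is why the mathematical theory itself poses no special problem for the radical enactivist.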

It is not arithmetic as a mathematical theory that presents challenges for radical enactivism; it is rather the development of arithmetic. This development happens on two levels. First, at the level of individuals, we have the ontogenetic development of arithmetical cognition. Second, at the level of populations and cultures, we have the phylogenetic and cultural-historical development of arithmetic. In my paper I focus on the ontogenetic level, because it is at that level that radical enactivism faces its most serious challenge.

It is commonly accepted that, in learning arithmetical knowledge and skills, children apply their innate, evolutionarily-acquired proto-arithmetical abilities (Pantsar 2014, 2019). These abilities – sometimes also called “quantical” (Núñez 2017) – are already present in human infants, and we share them with many non-human animals.

According to the most common view, there are two main proto-arithmetical abilities (Knops 2020). The first is subitizing: the ability to determine the number of objects in our field of vision without counting. Subitizing yields exact quantities, but it stops working beyond three or four objects. For larger collections, there is an estimating ability. This ability is not limited to small quantities, but it becomes increasingly inaccurate as the size of the observed collection grows.

For the present topic, the literature on subitizing and estimating presents interesting questions. Following the work of Elizabeth Spelke (2000) and Susan Carey (2009), it is commonplace to associate each ability with a special core cognitive system (Hyde 2011). Subitizing is associated with the object tracking system (OTS), which allows for the parallel observation of objects in the subitizing range, up to three or four. Estimating is associated with the approximate number system (ANS), which is thought to be a numerosity-specific system.

The problem for the radical enactivist is that, under most interpretations, both the OTS and the ANS are based on non-linguistic representations. The OTS is based on the observed objects occupying mental object files, one file per object (Beck 2017; Carey 2009). For example, when I see three apples, three object files are occupied, and we can understand this as a representation of the number of apples.

The ANS, on the other hand, is usually interpreted as representing quantities on a mental number line (Dehaene 2011). This line is likely to be logarithmic, given that the estimating ability becomes less accurate as the quantities become larger. Studies on anumerical cultures in the Amazon provide further evidence of this; members of those cultures tend to place quantities on a (physical) number line in a logarithmic manner (Dehaene et al. 2008; but see Núñez 2011).
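
To make the two postulated mechanisms concrete, here is a minimal toy sketch (mine, not the paper’s): the object tracking system is exact but capped at three or four items, while the approximate number system is modelled, in the spirit of Dehaene-style accounts, as a noisy reading on a logarithmic scale, so that error grows with the size of the collection. The capacity limit and noise level are illustrative assumptions.

    import math
    import random

    SUBITIZING_LIMIT = 4    # object-file capacity, usually put at three or four items
    ANS_LOG_NOISE = 0.15    # illustrative noise level on the logarithmic scale

    def object_tracking_system(n_objects):
        """Toy OTS: one object file per object, so exact, but only within the subitizing range."""
        if n_objects <= SUBITIZING_LIMIT:
            return n_objects
        return None  # beyond capacity, the OTS delivers no exact answer

    def approximate_number_system(n_objects):
        """Toy ANS: a noisy position on a logarithmic 'mental number line'."""
        noisy_log = random.gauss(math.log(n_objects), ANS_LOG_NOISE)
        return round(math.exp(noisy_log))

    for n in (3, 8, 40):
        print(n, object_tracking_system(n), approximate_number_system(n))
    # Typical run: 3 comes out exact via the OTS; 8 and 40 only get estimates,
    # and the absolute error tends to be larger for 40 than for 8.

Because the noise is constant on the logarithmic scale, two quantities are discriminable roughly in proportion to their ratio, which is the Weber-law signature reported in the estimation literature and reflected in the logarithmic placements observed by Dehaene et al. (2008).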

Therefore, we have good empirical evidence in support of the idea that proto-arithmetical abilities are to be interpreted in terms of non-linguistic representations. Now the question is: can radical enactivism provide an alternative explanation for proto-arithmetical abilities without invoking representations?

This proves to be difficult, because it requires answering what is perhaps the most fundamental question in the field: namely, what exactly is a mental representation? Should visual memories, for example, be considered representations? For the radical enactivist they should not, but little evidence or argumentation has been provided to support this denial. In the present context, we must ask: could the OTS and the ANS work without using representations? Radical enactivism says so, but there is little solid evidence in support of this view.

Nonetheless, it should also be noted that the object files and the mental number line as explanations of the functioning of the OTS and the ANS, respectively, are currently nothing more than theoretical postulations: neither object files nor a mental number line have been located in the brain at the neuronal level, although fMRI studies give us good clues on where to look (Nieder 2016).

To be sure, some monkey studies have detected the existence of number neurons: i.e., specific groups of neurons whose firing is connected to observing a particular (small) quantity of objects (Nieder 2016), and one could infer that such number neurons count as representations of quantities in the brain. But this inference is exactly the kind of inference that radical enactivists have warned us against. Radical enactivists agree that there is non-linguistic processing of information in the brain, but they deny that in such cases there is content, i.e., representations. In the words of Hutto and Myin, brains process non-linguistic information-as-covariance, but not information-as-content (Hutto and Myin 2013:67).

In conclusion, where do we stand? Is there a way forward in the debate on representations? I believe there is, but it would be spurious to claim that philosophers can find it on their own. Instead, we will need a better empirical understanding of the neuronal activity associated with the functioning of the OTS and the ANS. At the same time, it would also be misguided to expect empirical data alone to resolve the issue. We will not find groups of neurons that are unassailably non-linguistic representations, and philosophers will need to continue working with empirical researchers in an effort to gain more knowledge about the proto-arithmetical abilities.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/3120/.

References

  • Beck, J. (2017). “Can Bootstrapping Explain Concept Learning?” Cognition 158: 110–21.
  • Carey, S. (2009). The Origin of Concepts. Oxford: Oxford University Press.
  • Chomsky, N. (2015). Aspects of the Theory of Syntax (50th anniversary ed.). Cambridge, MA: MIT Press. (Original work published 1965)
  • Dedekind, R. (1888). Was sind und was sollen die Zahlen? Stetigkeit und irrationale Zahlen. Edited by S. Müller-Stach. Berlin: Springer Spektrum.
  • Dehaene, S., V. Izard, E. Spelke, and P. Pica (2008). “Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures.” Science 320: 1217–20.
  • Dehaene, S. (2011). The Number Sense: How the Mind Creates Mathematics (rev. and updated ed.). New York: Oxford University Press.
  • Fodor, J. (1975). The Language of Thought. Cambridge, MA: Harvard University Press.
  • Hutto, D. D., and E. Myin (2013). Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press.
  • Hutto, D. D., and E. Myin (2017). Evolving Enactivism: Basic Minds Meet Content. Cambridge, MA: MIT Press.
  • Hyde, D. C. (2011). “Two Systems of Non-Symbolic Numerical Cognition.” Frontiers in Human Neuroscience 5: 150.
  • Knops, A. (2020). Numerical Cognition: The Basics. New York: Routledge.
  • Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman.
  • Newell, A. (1980). “Physical Symbol Systems.” Cognitive Science 4(2): 135–83.
  • Nieder, A. (2016). “The Neuronal Code for Number.” Nature Reviews Neuroscience 17(6): 366.
  • Núñez, R. E. (2011). “No Innate Number Line in the Human Brain.” Journal of Cross-Cultural Psychology 42(4): 651–68.
  • Núñez, R. E. (2017). “Is There Really an Evolved Capacity for Number?” Trends in Cognitive Sciences 21: 409–24.
  • Pantsar, M. (2014). “An Empirically Feasible Approach to the Epistemology of Arithmetic.” Synthese 191(17): 4201–29. doi: 10.1007/s11229-014-0526-y.
  • Pantsar, M. (2019). “The Enculturated Move from Proto-Arithmetic to Arithmetic.” Frontiers in Psychology 10: 1454.
  • Peano, G. (1889). “The Principles of Arithmetic, Presented by a New Method.” In Selected Works of Giuseppe Peano, edited by H. Kennedy, 101–34. Toronto: University of Toronto Press.
  • Spelke, E. S. (2000). “Core Knowledge.” American Psychologist 55(11): 1233–43. doi: 10.1037/0003-066X.55.11.1233.
  • Zahidi, K. (2021). “Radicalizing Numerical Cognition.” Synthese 198(Suppl 1): 529–45.

About the author

Markus Pantsar is a guest professor at RWTH Aachen University and holds the title of docent at the University of Helsinki. His main research fields are the philosophy of mathematics and artificial intelligence. His upcoming book, “Numerical Cognition and the Epistemology of Arithmetic” (Cambridge University Press), will present a detailed, empirically informed philosophical account of arithmetical knowledge.


Cameron Buckner – “A Forward-Looking Theory of Content”

Self-portrait of Vincent Van Gogh from 1889.
“Self-portrait” (1889) Vincent van Gogh

In this post, Cameron Buckner discusses the article he recently published in Ergo. The full-length version of Cameron’s article can be found here.

As far as kinds of thing go, representations are awfully weird. They are things that by nature are about other things. Van Gogh’s self-portrait is about Van Gogh; and my memory of breakfast this morning is about some recently-consumed steel-cut oats.

The relationship between a representation and its target implicates history; part of what it is to be a portrait of Van Gogh is to have been crafted by Van Gogh to resemble his reflection in a mirror, and part of what it is to be the memory of my breakfast this morning is to be formed through perceptual interaction with my steel-cut oats.

Mere historical causation isn’t enough for aboutness, though; a broken window isn’t about the rock thrown through it. Aboutness thus also seems to implicate accuracy or truth evaluations. The painting can portray Van Gogh accurately or inaccurately; and if I misremember having muesli for breakfast this morning, then my memory is false. Representation thus also introduces the possibility of misrepresentation.

As if things weren’t already bad enough, we often worry about indeterminacy regarding a representation’s target. Suppose, for example, that Van Gogh’s portrait resembles both himself and his brother Theo, and we can’t decide who it portrays. Sometimes this can be settled by asking about explicit intentions; we can simply ask Van Gogh who he intended to paint. Unfortunately, explicit intentions fail to resolve the content of basic mental states like concepts, which are rarely formed through acts of explicit intent.

To paraphrase Douglas Adams, allowing the universe to contain a kind of thing whose very nature muddles together causal origins, accuracy, and indeterminacy in this way made a lot of people very angry and has widely been regarded as a bad move.

There was a period from 1980 to 1995, which I call the “heyday of work on mental content”, when it seemed like the best philosophical minds were working on these issues and would soon sort them out. Fodor, Millikan, Dretske, and Papineau formed a generation of “philosophical naturalists” who hoped that respectable scientific concepts like information and biological function would definitively resolve these tensions.

Information theory promised to ground causal origins and aboutness in the mathematical firmament of probability theory, and biological functions promised to harmonize historical origins, correctness, and the possibility of error using the respectable melodies of natural selection or associative learning.

Dretske, for example, held that associative learning bestows representational functions on neural states. On his account, instrumental conditioning works by detecting correlations between bodily movements produced in response to external stimuli and rewards (such as the contingency between a rat’s pressing a bar when a light is on and its receipt of a food pellet). This creates a link between a perceptual state triggered by the light and a motor state that controls bar-pressing movements, so that the rat reliably presses the bar more often in the future when the light comes on. Dretske says that in this case the neural state of detecting the light indicates that the light is on, and that when learning recruits this indicator to control bar-pressing movements, it bestows upon it the function of indicating that state of affairs going forward, a function it retains even if it is later triggered in error by something else (thus explaining misrepresentation as well).
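
Purely as an illustration (this is not Dretske’s own formalism, and all the numbers are made up), here is a toy simulation of the recruitment story: an indicator state N fires when the light is on, reward-driven learning strengthens the link from N to the bar-press movement M, and once the link is in place N drives M even on occasions when N is tokened in error.

    import random

    random.seed(0)
    link_strength = 0.0   # strength of the learned N -> M link
    LEARNING_RATE = 0.2

    def indicator_fires(light_on, error_rate=0.0):
        """Toy indicator state N: tracks the light, but can later be triggered in error."""
        return light_on or (random.random() < error_rate)

    # Training: reward arrives only when the bar is pressed while the light is on.
    for _ in range(200):
        light_on = random.random() < 0.5
        n_fires = indicator_fires(light_on)
        pressed = n_fires and random.random() < max(link_strength, 0.1)  # a little exploration
        reward = 1.0 if (pressed and light_on) else 0.0
        if n_fires and pressed:
            link_strength += LEARNING_RATE * (reward - link_strength)

    print(f"link strength after training: {link_strength:.2f}")  # close to 1.0

    # After recruitment, a spurious tokening of N (light off) still drives pressing:
    spurious_n = indicator_fires(light_on=False, error_rate=1.0)
    print("spurious firing still causes pressing:", spurious_n and link_strength > 0.5)

In Dretske’s terms, training has recruited N, in virtue of its having indicated the light, to cause M; that historical fact is what is supposed to fix N’s content.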

Dretske’s story is a lovely way of weaving together causal origins, accuracy, and determinacy, and, like many other graduate students in the 1990s and 2000s, I got awfully excited when I first heard it. Unfortunately, it still doesn’t work. There are lots of quibbles, but the main issue is that, despite appearances, it still has a hard time allowing for a representation to be both determinate and (later) tokened in error.

Figure 1. A diagram of Dretske’s “structuring cause” solution to the problem of mental content. On his view, neural state N is about stimulus conditions F if learning recruits N to cause movements M because of its ability to indicate F in the learning history. In recruiting N to indicate F going forward, Dretske says that it provides a “structuring cause” explanation of behavior; that it indicated F in the past explains why it now causes M. However, if content is fixed in the past in this way, then organisms can later persist in error indefinitely (e.g. token N in the absence of F) without ever changing their representational strategies. On my view, such persistent error provides evidence that the organism doesn’t actually regard tokening N in the absence of F as an error, that F is not actually the content of N (by the agent’s own lights).

I present the argument as a dilemma about the term “indication”. Indication either requires perfect causal covariation, or something less. Consider the proverbial frog and its darting tongue: if the frog will also eat lead pellets flicked through its visual field, then its representation can only perfectly covary with some category that includes lead pellets, such as “small, dark, moving speck”. On this ascription, it looks impossible for the frog ever to make a mistake, because all and only small dark moving specks will ever trigger its tongue movements. If, on the other hand, indication during recruitment can be less than perfect, then we could say that the representation means something more intuitively satisfying, like “fly”; but then we have lost the tight link to information theory and causal origins that was supposed to settle indeterminacy, because there are lots of other candidate categories that the representation imperfectly indicated during learning (such as insect, food item, etc.).

This is all pretty familiar ground; what is less familiar is that there is a relatively unexplored “forward-looking” alternative that starts to look very good in light of this dilemma.

To my mind, the views that determine content by looking backward to causal history get into trouble precisely because they do not assign error a role in the content-determination process. Error on these views is a byproduct of representation; on backward-looking views, organisms can persist in error indefinitely despite having their noses rubbed in evidence of their mistake, like the frog that will go on eating BBs until its belly is full of lead.

Representational agents are not passive victims of error; in ideal circumstances, they should react to errors, specifically by revising their representational schemes to make those errors less likely in the future. Part of what it is to have a representation of X is to regard evidence that you’ve activated that representation in the absence of X as a mistake.

Content ascriptions should thus be grounded in the agent’s own epistemic capacities for revising its representations to better indicate their contents in response to evidence of representational error. Specifically, on my view, a representation means whatever it indicates at the end of its likeliest revision trajectory—a view that, not coincidentally, happens to fit very well with a family of “predictive processing” approaches to cognition that have recently achieved unprecedented success in cognitive science and artificial intelligence.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2238/.

About the author

Cameron Buckner is an Associate Professor in the Department of Philosophy at the University of Houston. His research primarily concerns philosophical issues which arise in the study of non-human minds, especially animal cognition and artificial intelligence. He just finished writing a book (forthcoming from OUP in Summer 2023) that uses empiricist philosophy of mind to understand recent advances in deep-neural-network-based artificial intelligence.


Henry Clarke – “Mental Filing Systems: A User’s Guide”

The painting depicts a person looking out from an interior through a sash window, which echoes the theme of the article: a thinker's view of the world being made up of compartmentalized bodies of information.
“Tall Windows” (1913) Wilhelm Hammershøi

In this post, Henry Clarke discusses his article recently published in Ergo. The full-length version of Henry’s article can be found here.

For many, if not all, of the objects we can think about, we have a reasonably rich conception of what they are like. One of the basic representational functions of the mind is to draw together different bits of information to make up these conceptions. An image that naturally suggests itself is of the mind as a kind of filing system, with folders dedicated to different objects, and descriptions placed in each to be called upon when needed.

The mental filing system idea is philosophically useful in that it provides a framework for understanding how a thinker can treat her beliefs as being about one and the same thing. Doing so results in her being prepared to draw certain implications from the content of a conception without having to decide whether the relevant identity holds. For example, someone with a mental file that contains the descriptions tall, leafy, has brown bark can immediately infer that something is tall, leafy, with brown bark. The inference presupposes that being tall, leafy, etc. are all true of one and the same object. This sort of presupposition is a fundamental part of thought, and of a thinker’s perspective on her thought. Having such a perspective sustains the sort of rationally structured view of the world that we can develop and make use of, and mental files have seemed to many like a good way of capturing this.
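
Purely as an illustration of the image (not a claim about cognitive architecture, which is exactly what is in dispute below), one can think of a file as a small data structure that binds several predicates to a single mental dossier, so that the identity-presupposing conjunction comes for free:

    from dataclasses import dataclass, field

    @dataclass
    class MentalFile:
        """Toy 'mental file': a dossier of predicates all taken to hold of one object."""
        label: str
        predicates: set = field(default_factory=set)

        def add(self, predicate: str) -> None:
            self.predicates.add(predicate)

        def conjoined_claim(self) -> str:
            # Because every predicate sits in the same file, the thinker is entitled to the
            # conjunction without first establishing any identity claim.
            return "there is one thing x such that " + " and ".join(
                f"x is {p}" for p in sorted(self.predicates)
            )

    tree_file = MentalFile("that tree")
    for p in ("tall", "leafy", "brown-barked"):
        tree_file.add(p)
    print(tree_file.conjoined_claim())
    # -> there is one thing x such that x is brown-barked and x is leafy and x is tall

Whether thinkers actually deploy anything like this structure, or only the dispositions such a structure would implement, is the question taken up next.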

But does the mental filing system idea actually tell us something about how the mind works? Or is it just a dispensable metaphor? Rachel Goodman and Aidan Gray (2022) have argued (taking François Recanati’s theory of mental files as their working example) that mental filing – taking one’s beliefs immediately to be about the same thing – doesn’t require taking the file image too seriously. The source of their skepticism is an analysis of what it means to say that we ‘can’ treat our beliefs as being about the same thing without having to figure out the relevant identity. They argue that this is a matter of rational permissibility, and that if that’s so, then mental files aren’t needed.

Why would that be? Inferences are permissible because of the contents of the beliefs they involve. If reference isn’t enough to account for permissibility, as it appears not to be, then there must be some other feature that is. Following Kit Fine (2007), Goodman and Gray call this feature coordination. Coordination, like other representational features, can be attributed because it helps to make sense of what a thinker thinks and how they think it. But if we have this representational feature, which can be attributed to make sense of treating beliefs as being about the same thing, then files add nothing. You can have mental filing without mental files. The appeal of this result is that it seems to give a more refined picture of what is actually involved in rationally structured thinking, without unnecessary metaphors.

That is their argument, in outline. Does it work? The main problem is that it overlooks the psychology of mental filing. What calls for explanation is not just the permissibility of inferences that presuppose identity, but also the fact that thinkers are prepared to make them. Mental files can be brought in as entities whose function is to account for when a thinker is prepared to draw these inferences. The causal basis of that function might be described in other terms that tell us how the function is carried out. But as a hypothesis – that at some functional level there is something that brings together different bits of information and so provides the causal basis for a thinker being disposed to presuppose identity – mental files do the job nicely.

This causal-functional view of files shows that there is a notion of a mental file that does some work. This undermines Goodman and Gray’s argument because it renders coordination, the extra representational feature, redundant. Suppose we have the identity-presupposing dispositions in place, because of the presence of a mental file. Then the question is: do we need to add a representational feature (coordination) to make them permissible? It seems not. If a thinker is disposed to make the inferences, and nothing indicates that there is something faulty going on, then the inferences are in good standing. The results of the inferences will be (at least potentially) relied upon by the thinker in pursuing her plans and projects – things that matter to her. This means that, were there to be something to indicate that the conception in question was somehow incorrectly formed, the thinker should be motivated to check the inferences she is disposed to draw. If she is rational, then she will do this, and so were there to be a problem, the inferences would not be made. Having the dispositions and monitoring their viability is enough for permissibility because it makes them manifestly reliable. So Goodman and Gray’s conclusion ought to be inverted: coordination doesn’t add anything that would otherwise be missing from the account that files provide.

There is other work for files to do as well. Goodman and Gray suggest that the basis for coordination is a thinker reliably gathering together information that does concern one and the same object. But we need files for this to happen. Interpreting new information when finding out more about what objects are like calls upon the conceptions we already have. The content of a mental file will tell us how to locate the new information we get from various sources: in order to recognize something we’ve already encountered, or to determine (for example) that a new person we meet really is someone new and not an old acquaintance, we need to use information integrated by mental files.

Forming a picture of the world means gathering together these smaller-scale pictures of objects, and to do that, we need the kind of structure that mental files provide. They are not just a metaphor. But how exactly they work remains to be uncovered.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2249/

References

  • Fine, Kit (2007). Semantic Relationism. Blackwell.
  • Goodman, Rachel and Aidan Gray (2022). “Mental Filing”. Noûs 56(1), 204–226.

About the author

Henry Clarke is a Senior Project Editor in the Humanities at Oxford University Press. He received his PhD from UCL in 2016. His research focuses on the philosophy of mind.


Kengo Miyazono – “Visual Experiences without Presentational Phenomenology”

The image represents a landscape in the style of cubism, where the surfaces of three dimensional objects are laid out in two-dimensional space with alienating effects. This is meant to be somewhat analogous to the visual experience of patients with derealization/depersonalization disorder described in the article.
“Mediterranean Landscape” (1952) © Pablo Picasso

In this post, Kengo Miyazono discusses the article he recently published in Ergo. The full-length version of Kengo’s paper can be found here.

Compare the following quotes.

[1] Suppose you are standing in a field on a bright sunny day. Your vision is good, and you know that, and you’ve no thought to distrust your eyes. A friend shouts from behind. You turn. It looks as if a rock is flying at your face. You wish not to be hit. [...] Your visual experience will place a moving rock before the mind in a uniquely vivid way. Its phenomenology will be as if a scene is made manifest to you. [...] Such phenomenology involves a uniquely vivid directedness upon the world. Visual phenomenology makes it for a subject as if a scene is simply presented. Veridical perception, illusion and hallucination seem to place objects and their features directly before the mind. (Sturgeon 2000, 9)
[2] Everything appears as through a veil [...] Things do not look as before, they are somehow altered, they seem strange, two-dimensional. [...] Everything seems extraordinarily new as if I had not seen it for a long time. (Jaspers 1997, 62) 
[3] Familiar things look strange and foreign. [...] It’s all just there and it’s all strange somehow. I see everything through a fog. Fluorescent lights intensify the horrible sensation and cast a deep veil over everything. I’m sealed in plastic wrap, closed off, almost deaf in the muted silence. It is as if the world were made of cellophane or glass. (Simeon & Abugel 2006, 81) 

The first quote is from Scott Sturgeon’s discussion of the phenomenology of visual experience. The second and the third quotes are subjective reports of patients with depersonalization-derealization disorder. In my view, these quotes, although taken from very different contexts, are referring to the same thing. Or, more precisely, the first quote is describing the presence of something, while the second and the third quotes are describing the absence of it. The thing in question is “presentational phenomenology” (Chudnoff 2012; “Scene-Immediacy” in Sturgeon’s own terminology).

My hypothesis is that presentational phenomenology is absent from visual experiences in cases of derealization. This hypothesis provides a plausible explanation of the peculiar subjective reports of derealization. Frequent expressions of derealization reported in the Cambridge Depersonalization Scale (Sierra & Berrios 2000) include the following:

Out of the blue, I feel strange, as if I were not real or as if I were cut off from the world.
What I see looks ‘flat’ or ‘lifeless’, as if I were looking at a picture.
My surroundings feel detached or unreal, as if there were a veil between me and the outside world. 

A remarkable feature of the subjective reports of derealization is that they are metaphorical, not literal. As Jaspers points out, it seems to be impossible for the patients to express their experience directly: they do not think that the world has really changed; they just feel as if everything looked different to them (Jaspers 1997: 62).

Another remarkable feature is that the metaphorical expressions of derealization have some recurrent themes. People with derealization often say that they feel as if they were in a “fog”, “dream”, or “bubble”, or as if there were a “veil” or a “glass wall” between them and external objects. Metaphors of this kind seem to express the idea of indirectness or detachment. They also say that they feel as if they were looking at a “picture” or a “movie”, or as if external objects were “flat”. Metaphors of this kind seem to express the idea of representation.

My hypothesis explains why subjective reports of derealization tend to be metaphorical rather than literal. When presentational phenomenology is absent from visual experience, most patients (except philosophers of mind) do not have a suitable concept (such as the concept of “presentational phenomenology”) to refer to what is missing in a direct, non-metaphorical manner; the best thing they can do is to describe it metaphorically. 

My hypothesis also explains the recurrent themes of the metaphors, namely indirectness and representation. In general, presentational phenomenology involves a sense of directness (e.g. “place objects and their features directly before the mind” in the first quote above) as well as a sense of presentation (e.g. “as if a scene is simply presented” in the first quote). Thus, it makes sense that patients with depersonalization-derealization disorder would use metaphorical expressions of in-directness and re-presentation in order to signal its absence.

Is the hypothesis that presentational phenomenology is absent from visual experiences in cases of derealization also empirically plausible?

The general consensus in the empirical and clinical literature is that affective or interoceptive abnormalities are at the core of depersonalization and derealization (e.g. Sierra 2009; Sierra & Berrios 1998; Seth, Suzuki, & Critchley 2012). One might think that this is a problem: the empirically and clinically plausible view might seem to be that derealization is an affective or interoceptive abnormality rather than an abnormality in presentational phenomenology. Note, however, that this interpretation presupposes that an abnormality in presentational phenomenology is not also an affective or interoceptive abnormality. A different, better interpretation is also available: that an abnormality in presentational phenomenology in itself constitutes, at least in part, the affective/interoceptive abnormality in question. This interpretation suggests that these are not at all alternative accounts, and that presentational phenomenology is, generally speaking, a kind of affective phenomenology.  

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/1156/.

References

  • Chudnoff, Elijah (2012). “Presentational Phenomenology”. In Sofia Miguens and Gerhard Preyer (Eds.), Consciousness and Subjectivity (51–72). Ontos Verlag.
  • Jaspers, Karl (1997). General Psychopathology (Vol. 1). Trans. J. Hoenig and Marian W. Hamilton. Johns Hopkins University Press.
  • Seth, Anil K., Keisuke Suzuki, and Hugo D. Critchley (2012). “An Interoceptive Predictive Coding Model of Conscious Presence”. Frontiers in Psychology 2(395), 1–16.
  • Sierra, Mauricio (2009). Depersonalization: A New Look at a Neglected Syndrome. Cambridge University Press.
  • Sierra, Mauricio and German E. Berrios (1998). “Depersonalization: Neurobiological Perspectives”. Biological Psychiatry 44(9), 898–908.
  • Sierra, Mauricio and German E. Berrios (2000). “The Cambridge Depersonalisation Scale: A New Instrument for the Measurement of Depersonalisation”. Psychiatry Research 93(2), 153–164.
  • Simeon, Daphne and Jeffrey Abugel (2006). Feeling Unreal: Depersonalization Disorder and the Loss of the Self. Oxford University Press.
  • Sturgeon, Scott (2000). Matters of Mind: Consciousness, Reason and Nature. Routledge.

About the author

Kengo Miyazono is Associate Professor of Philosophy at Hokkaido University. Previously, he was Associate Professor at Hiroshima University and Research Fellow at the University of Birmingham. He received his PhD from the University of Tokyo. He specializes in philosophy of mind, philosophy of psychology, and philosophy of psychiatry.