Mario Hubert and Federica Malfatti – “Towards Ideal Understanding”

In this post, Mario Hubert and Federica Isabella Malfatti discuss their article recently published in Ergo. The full-length version of their article can be found here.

“Sophia Kramskaya Reading” (1863) Ivan Kramskoi

If humans were omniscient, there would be no epistemology, or at least it would be pretty boring. What makes epistemology such a rich and often controversial endeavor are the limits of our understanding and the breadth of our own ignorance.

The world is like a large dark cave, and we are only equipped with certain tools (such as cameras, flashlights, or torches) to make particular spots inside the cave visible. For example, some flashlights create a wide light-cone with low intensity; others create a narrow light-cone with high intensity; some cameras help us to see infrared light to recognize warm objects, etc. From the snippets made visible by these tools, we may reconstruct the inner structure of the cave.

The burden of non-omniscient creatures is to find appropriate tools to increase our understanding of the world and to identify and acknowledge the upper bound of what we can expect to understand. We try to do so in our article “Towards Ideal Understanding”, where we also identify five such tools: five criteria that can guide us to the highest form of understanding we can expect.

Imagine the best conceivable theory for some phenomenon. What would this theory be like? According to most philosophers of science and epistemologists, it would be:

  1. intelligible, i.e. easily applied to reality, and
  2. sufficiently true, i.e. sufficiently accurate about the nature and structure of the domain of reality for which it is supposed to account.

Our stance towards intelligibility and sufficient truth is largely uncontroversial, apart from our claim that they are not enough. What else does a scientific theory need to provide to support ideal understanding of reality? We think it also needs to fulfill the following three criteria:

  1. sufficient representational accuracy,
  2. reasonable endorsement, and
  3. fit.

The first criterion we introduce describes the relation between a theory and the world, while the other two describe the relation between the theory and the scientist.

We think that the importance of representational accuracy is not much appreciated in the literature (a notable exception is Wilkenfeld 2017). Some types of explanation aim to represent the inner structure of the world. For example, mechanistic explanations explain a phenomenon by breaking it up into (often unobservable) parts, whose interactions generate the phenomenon. But whether you believe in the postulated unobservable entities and processes depends on your stance in the realism-antirealism debate. We think, however, that even an anti-realist should agree that mechanisms can increase your understanding (see also Colombo et al. 2015). In this way, representational accuracy can be at least regarded as a criterion for the pragmatic aspect of truth-seeking. 

How a scientist relates to a scientific theory also matters for a deeper form of understanding. Our next two criteria take care of this relation. Reasonable endorsement describes the attitude of a scientist toward alternative theories such that the commitment to a theory must be grounded in good reasons. Fit is instead a coherence criterion, and it describes how a theory fits into the intellectual background of a scientist.

For example, it might happen that we are able to successfully use a theory (fulfilling the intelligibility criterion) but still find the theory strange or puzzling. This is not an ideal situation, as we argue in the paper. Albert Einstein’s attitude towards Quantum Mechanics exemplifies such a case. He, an architect of Quantum Mechanics, remained puzzled about quantum non-locality throughout his life to the point that he kept producing thought-experiments to emphasize the incompleteness of the theory. Thus, we argue that a theory that provides a scientist with the highest conceivable degree of understanding is one that does not clash with, but rather fits well into the scientist’s intellectual background.

The five criteria we discuss are probably not the whole story about ideal understanding, and there might be further criteria to consider. We regard the above ones as necessary, though not sufficient.

An objector might complain: If you acknowledge that humans are not omniscient, then why do you introduce ideal understanding, which seems like a close cousin of omniscience? If you can reach this ideal, then it is not an ideal. But if you cannot reach it, then why is it useful?

Similar remarks have been raised in political philosophy about the ideal state (Barrett 2023). Our response is sympathetic to Aristotle, who introduces three ideal societies even if they cannot be established in the world. These ideals are exemplars to strive for improvement, and they are also references to recognize how much we still do not understand. Furthermore, this methodology has been the standard for much of the history of epistemology (Pasnau 2018). Sometimes certain traditions need to be overcome, but keeping and aspiring to an ideal (even if we can never reach it) seems not to be one of them… at least to us.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4651/.

References

  • Barrett, J. (2023). “Deviating from the Ideal”. Philosophy and Phenomenological Research 107(1): 31–52.
  • Colombo, M., Hartmann, S., and van Iersel, R. (2015). “Models, Mechanisms, and Coherence”. The British Journal for the Philosophy of Science 66(1): 181–212.
  • Pasnau, R. (2018). After Certainty: A History of Our Epistemic Ideals and Illusions. Oxford University Press.
  • Wilkenfeld, D. A. (2017). “MUDdy Understanding”. Synthese 194(4): 1–21.

About the authors

Mario Hubert is Assistant Professor of Philosophy at The American University in Cairo. From 2019 to 2022, he was the Howard E. and Susanne C. Jessen Postdoctoral Instructor in Philosophy of Physics at the California Institute of Technology. His research combines the fields of philosophy of physics, philosophy of science, metaphysics, and epistemology. His article When Fields Are Not Degrees of Freedom (co-written with Vera Hartenstein) received an Honourable Mention in the 2021 BJPS Popper Prize Competition.

Federica Isabella Malfatti is Assistant Professor at the Department of Philosophy of the University of Innsbruck. She studied Philosophy at the Universities of Pavia, Mainz, and Heidelberg. She was a visiting fellow at the University of Cologne and spent research periods at the Harvard Graduate School of Education and UCLA. Her work lies at the intersection between epistemology and philosophy of science. She is the leader and primary investigator of TrAU!, a project funded by the Tyrolean Government, which aims at exploring the relation between trust, autonomy, and understanding.

Brigitte Everett, Andrew J. Latham and Kristie Miller – “Locating Temporal Passage in a Block World”

In this post, Brigitte Everett, Andrew J. Latham and Kristie Miller discuss the article they recently published in Ergo. The full-length version of their article can be found here.

“Dynamism of a Cyclist” (1913) Umberto Boccioni

Imagine a universe where a single set of events exists. Past, present, and future events all exist, and they are all equally real—the extinction of the dinosaurs, the birth of a baby, the creation of a sentient robot. The sum total of reality never grows or shrinks, so the totality of events that exist never changes. We may call this a non-dynamical universe. Does time pass in such a world?

If your answer to the above question is “no”, then perhaps you think that time passes only in a dynamical universe.

A dynamist is someone who thinks that there is an objective present time and that which time that is constantly changes. Many dynamists think that time only passes in dynamical worlds (Smith 1994, Craig 2000, Schlesinger 1994). Perhaps more surprisingly, many non-dynamists—those who deny that there is an objective present time and that which time that is constantly changes—have also traditionally held that time does not pass in non-dynamical worlds.

However, recently some non-dynamists have argued that in our world there is anemic temporal passage, namely, very roughly, the succession of events that occurs in a non-dynamical world (Deng 2013, 2019; Bardon 2013; Skow 2015; Leininger 2018, 2021). These theorists argue that anemic temporal passage deserves the name “temporal passage”. One way of interpreting this claim is as the claim that anemic passage satisfies our ordinary, folk concept of temporal passage.

Viewed in this way, we can see a dispute between, on the one hand, those who think that anemic temporal passage is not temporal passage at all, because it does not satisfy our folk concept of temporal passage, and, on the other hand, those who think it is temporal passage, because it does. 

We sought to determine whether our folk concept of temporal passage is a concept of something that is essentially dynamical; that is, whether we have a folk concept of temporal passage that is only satisfied in dynamical worlds, or whether something that exists in non-dynamical worlds, such as anemic passage, can satisfy that concept. 

You might wonder why any of this matters. One reason is that the non-dynamical view of time has often been accused of being highly revisionary. It is often claimed to be a view on which what seem like platitudes turn out to be false. For instance, you might think it’s platitudinous that time passes, and yet, it is argued, if a non-dynamical view of time is true, then this platitude turns out to be false. So, if our world were indeed that way, it would turn out to be very different from how we take it to be.

To determine whether our folk concept of temporal passage would be satisfied in a non-dynamical world, we undertook several empirical studies that probe people’s concept of temporal passage. 

We found that, overall, participants judged that time passes in a non-dynamical universe, when our world was stipulated to be non-dynamical. That is, a majority of participants made this judgement. In particular, we found that a majority of people who in fact think that our world is non-dynamical, judge that there is temporal passage in it. As for people who in fact think that our world is most like a moving spotlight world, we found that they judge that, were our world non-dynamical, it would nevertheless contain temporal passage. Interestingly, though, with regards to people who think that either presentism or the growing block theory is most likely true of our world, we obtained a different result: they did not think that our world would contain temporal passage, were it non-dynamical. 

In a second experiment we asked participants to read a vignette claiming that “time flows or flies or marches, years roll, hours pass… time flows like a river” and other vivid descriptions of passage, and then we asked them to state how likely it is that the description is true of a dynamical vs. a non-dynamical world. We found that participants judged that the description is equally likely to be true of a non-dynamical world as it is of a dynamical world.

In the last experiment we probed whether people think that time passage is mind-dependent. Overall, we found that participants judged that time passes regardless of whether there are any minds to experience its passing or not.

Our results indicate, first, that the folk concept of temporal passage can be satisfied in a non-dynamical world, and second, that it is not a concept of something essentially mind-dependent. This suggests that non-dynamists should not concede that theirs is a view on which, in some ordinary sense, time fails to pass. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4639/.

References

  • Bardon, A. (2013). A Brief History of the Philosophy of Time. Oxford University Press.
  • Craig, W. L. (2000). The Tensed Theory of Time: A Critical Examination. Kluwer Academic.
  • Deng, N. (2013). “Our Experience of Passage on the B-Theory”. Erkenntnis 78(4): 713–726.
  • Deng, N. (2019). “One Thing After Another: Why the Passage of Time is Not an Illusion”. In A. Bardon, V. Arstila, S. Power & A. Vatakis (eds.) The Illusions of Time: Philosophical and Psychological Essays on Timing and Time Perception, pp. 3–15. Palgrave Macmillan.
  • Leininger, L. (2018). “Objective Becoming: In Search of A-ness”. Analysis 78(1): 108–117.
  • Leininger, L. (2021). “Temporal B-Coming: Passage Without Presentness”. Australasian Journal of Philosophy 99(1): 1–17.
  • Schlesinger, G. (1994). “Temporal Becoming”. In L. N. Oaklander and Q. Smith (eds.) The New Theory of Time, pp. 214–220. Yale University Press.
  • Skow, B. (2015). Objective Becoming. Oxford University Press.
  • Smith, Q. (1994). “Introduction: The Old and New Tenseless Theory of Time”. In L. N. Oaklander and Q. Smith (eds.) The New Theory of Time, pp. 17–22. Yale University Press.

About the authors

Brigitte Everett is a doctoral student in the Department of Philosophy at the University of Sydney. Her research interests focus on the philosophy of time.

Andrew J. Latham is an AIAS-PIREAU Fellow at the Aarhus Institute of Advanced Studies and Postdoctoral Researcher in the Department of Philosophy and History of Ideas at Aarhus University. He works on topics in philosophy of mind, metaphysics (especially free will), experimental philosophy and cognitive neuroscience.

Kristie Miller is Professor of Philosophy and Director of the Centre for Time at the University of Sydney. She writes on the nature of time, temporal experience, and persistence, and she also undertakes empirical work in these areas. At the moment, she is mostly focused on the question of whether, assuming we live in a four-dimensional block world, things seem to us just as they are. She has published widely in these areas, including three recent books: “Out of Time” (OUP 2022), “Persistence” (CUP 2022), and “Does Tomorrow Exist?” (Routledge 2023). She has a new book underway on the nature of experience in a block world, which hopefully will be completed by the end of 2024.

Bert Baumgaertner and Charles Lassiter – “Convergence and Shared Reflective Equilibrium”

In this post, Bert Baumgaertner and Charles Lassiter discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Photo of two men looking down on the train tracks from a diverging bridge.
“Quai Saint-Bernard, Paris” (1932) Henri Cartier-Bresson

Imagine you’re convinced that you should pull the lever to divert the trolley because it’s better to save more lives. But suppose you find the thought of pushing the Fat Man off the bridge too ghoulish to consider seriously. You have a few options to resolve the tension:

  1. you might revise your principle that saving more lives is always better;
  2. you could revise your intuition about the Fat Man case;
  3. you could postpone the thought experiment until you get clearer on your principles;
  4. you could identify how the Fat Man case is different from the original one of the lone engineer on the trolley track.

These are our options when we are engaging in reflective equilibrium. We’re trying to square our principles and judgments about particular cases, adjusting each until a satisfactory equilibrium is reached.

Now imagine there’s a group of us, all trying to arrive at an equilibrium but without talking to one another. Will we all converge on the same equilibrium?

Consider, for instance, two people—Tweedledee and Tweedledum. They are both thinking about what to do in the many variations of the Trolley Problem. For each variation, Tweedledee and Tweedledum might have a hunch or they might not. They might share hunches or they might not. They might consider variations in the same order or they might not. They might start with the same initial thoughts about the problem or they might not. They might have the same disposition for relieving the tension or they might not.

Just this brief gloss suggests that there are a lot of places where Tweedledee and Tweedledum might diverge. But we didn’t just want suggestive considerations; we wanted to get more specific about the processes involved and about how likely divergence or convergence would be.

To this end, we imagined an idealized version of the process. First, each agent begins with a rule of thumb, intuitions about cases, and a disposition for how to navigate any tensions that arise. Each agent considers one case at a time. “Considering a case” means comparing the case under discussion to the paradigm cases sanctioned by the rule. If the case under consideration is similar enough to the paradigm cases, the agent accepts the case, which amounts to saying, “this situation falls into the extension of my rule.” Sometimes, an agent might have an intuition that the case falls into the extension of the rule, but the case is not close enough to the paradigm cases. This is when our agents deliberate, using one of the four strategies mentioned above.

In order to get a sense of how likely it is that Tweedledee and Tweedledum would converge, we needed to systematically explore the space of the possible ways in which the process of reflective equilibrium could go. So, we built a computer model of it. As we built the model, we purposely made choices we thought would favor the success of a group of agents reaching a shared equilibrium. By doing so, we have a kind of “best case” scenario. Adding in real-world complications would make reaching a shared equilibrium only harder, not easier.

An example or story that is used for consideration, like a particular Trolley problem, is made up of a set of features. Other versions have some of the same features but differ on others. So we represented each case as a string of yes/no bits, like YYNY, where Y in positions 1, 2, and 4 means the case has that respective feature, while N in position 3 means it does not. Of course, examples used in real debates are much more complicated and nuanced, but having only four possible features should only make it easier to reach agreement. Cases carry labels representing intuitions: a label of “IA” means a person has an intuition to accept the case as an instance of a principle, “IR” means an intuition to reject it, and “NI” means they have no intuition about it. Finally, a principle consists of a “center” case and a similarity threshold (how many bit values can differ?) that together define the extension of cases falling under the principle.

We then represented the process of reflective equilibrium as a kind of negotiation between principles and intuitions by checking whether the relevant case of the intuition is or isn’t a member of the extension of the principle. To be sure, the real world is much more complicated, but the simplicity of our model makes it easier to see what sorts of things can get in the way of reaching shared equilibrium.
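The bit-string encoding just described can be made concrete in a few lines of code. The sketch below is our own minimal illustration of the idea, not the authors’ actual model: the function names, the score values, and the simple accept/reject/tension check are all invented for exposition, and the four deliberation strategies are left out.

```python
# A minimal sketch of the case/principle encoding described above.
# All names and parameter choices here are illustrative, not the authors' code.

def distance(case_a, case_b):
    """Number of differing feature bits between two strings like 'YYNY'."""
    return sum(1 for a, b in zip(case_a, case_b) if a != b)

def falls_under(principle, case):
    """A principle is a (center, threshold) pair; a case falls under it
    when it differs from the center in at most `threshold` features."""
    center, threshold = principle
    return distance(center, case) <= threshold

def consider(principle, case, intuition):
    """Compare a case to a principle and report whether the principle and
    the intuition ('IA', 'IR', or 'NI') agree or stand in tension."""
    in_extension = falls_under(principle, case)
    if intuition == "NI":                  # no intuition: defer to the principle
        return "accept" if in_extension else "reject"
    if intuition == "IA" and in_extension:
        return "accept"
    if intuition == "IR" and not in_extension:
        return "reject"
    return "tension"                       # principle and intuition disagree

principle = ("YYYY", 1)                    # center case plus similarity threshold
print(consider(principle, "YYNY", "IA"))   # close to the center: accept
print(consider(principle, "NNNY", "IA"))   # intuited, but too far: tension
```

A “tension” result is exactly the point at which one of the four deliberation strategies from the list above would kick in, e.g. by shifting the center case or widening the threshold.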

What we found is that it is very hard to converge on a single interpersonal equilibrium. Even in the best case scenario, with very charitable interpretations of some “plausible” assumptions, we don’t see convergence.

Analysts of the process of reflective equilibrium are right that interpersonal convergence might not happen if people have different starting places. But they underestimate that even if Tweedledee and Tweedledum start from the same place, reaching convergence is hard. The reason is that, even if we rule out all of the implausible decision points, there are still so many plausible decision points at which Tweedledee and Tweedledum can diverge. They might both change their rule of thumb, for instance, but they might change it in slightly different ways. Small differences—particularly early in the process—lead to substantial divergence.

Why does this matter? Even though reaching a shared equilibrium is very hard in our model, in the real world we find convergence all over the place—like philosophers’ intuitions about Gettier cases—supposedly from our La-Z-Boys. On our representation of reflective equilibrium, such convergence is highly unlikely, suggesting that we should look elsewhere for an explanation. One alternative explanation we suggest (and explore in other work) is the idea of “precedent”, i.e., information one has about the commitments and rules of others, and how those might serve as guides in one’s own process of deliberation.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4654/.

About the authors

Bert Baumgaertner grew up in Ontario, Canada, completing his undergraduate degree at Wilfrid Laurier University. He moved to the sunny side of the continent to do his graduate studies at University of California, Davis. In 2013 he moved to Idaho to start his professional career as a philosophy professor, where he concurrently developed a passion for trail running and through-hiking in the literal Wilderness areas of the Pacific Northwest. He is now Associate Professor of Philosophy at University of Idaho. He considers himself a computational philosopher whose research draws from philosophy and the cognitive and social sciences. He uses agent-based models to address issues in social epistemology. 

Charles Lassiter was born in Washington DC and grew up in Virginia, later moving to New Jersey and New York for undergraduate and graduate studies. In 2013, he left the safety and familiarity of the East Coast to move to the comparative wilderness of the Pacific Northwest for a job at Gonzaga University, where he is currently Associate Professor of Philosophy and Director of the Center for the Applied Humanities. His research focuses on issues of enculturation and embodiment (broadly construed) for an understanding of mind and judgment (likewise broadly construed). He spends a lot of time combing through large datasets of cultural values and attitudes relevant to social epistemology.

Igor Douven, Frank Hindriks, and Sylvia Wenmackers – “Moral Bookkeeping”

In this post, Igor Douven, Frank Hindriks, and Sylvia Wenmackers discuss their article recently published in Ergo. The full-length version of their article can be found here.

Allegorical image of Justice punishing Injustice.
“Allegory of Justice Punishing Injustice” (1737) Jean-Marc Nattier

Imagine a mayor who has to decide whether to build a bridge over a nearby river, connecting two parts of the city. He is informed that the construction project will also negatively affect the local wildlife. The mayor responds: “I don’t care about what will happen to some animals. I want to improve the flow of traffic.” So, he has the bridge built, and the populations of wild animals decline as a result of it.

This fictional mayor sounds like a proper movie villain: he knows that his actions will harm wild animals and he doesn’t even care! We expect that people reading this vignette will blame him for his actions. But how does their moral verdict change if the mayor’s project happened to realize positive side-effects for the wildlife, although he was similarly indifferent to that? Would people praise him as much as they blamed him in the first case?

According to most philosophers, someone can only be praiseworthy if they had the intention to bring about a beneficial result. Yet, many philosophers also think that someone can be blamed for the negative side-effects of their actions, even if they did not intentionally cause them. This presumed difference between the assignment of praise and blame is the Mens Rea Asymmetry. (Mens rea is Latin for ‘guilty mind’.) However, data about how people actually assign praise or blame to others does not support this hypothesis.

One source of evidence that runs counter to the hypothesis of the Mens Rea Asymmetry is Joshua Knobe’s influential paper from 2003, which can be seen as the birth of experimental philosophy. His results show, among other things, that respondents do assign praise to agents who bring about a beneficial but unintended side-effect. We used the structure of the vignette from Knobe’s study to produce similar scenarios, including the mayor deciding about a bridge in the above example.

In order to explain the observed violations of the praise/blame asymmetry, we formulated a new hypothesis. Our moral compositionality hypothesis assumes that people evaluate others by taking into account their intentions as well as the outcome of their actions to come to an overall assignment of praise (when the judgment is net positive) or blame (when net negative). In principle, the overall judgment could be a complicated function of the two separate aspects, but we focused on a very simple version of the compositionality hypothesis: people’s overall judgment of someone’s actions is equal to the sum of their judgment of the agent’s intention and of the outcome of the action. We call this the Moral Bookkeeping hypothesis.
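In its simplest form, the Moral Bookkeeping hypothesis is just addition over two signed scores. The toy function below is our illustration only; the scoring scale, the example numbers, and the function name are invented, and the authors’ actual experiments elicited ratings from participants rather than computing them.

```python
# A toy illustration of the Moral Bookkeeping hypothesis: the overall verdict
# is the sum of a judgment of the intention and a judgment of the outcome.
# Scores and scenarios are invented for illustration.

def overall_judgment(intention_score, outcome_score):
    """Positive totals read as praise, negative totals as blame."""
    total = intention_score + outcome_score
    if total > 0:
        return ("praise", total)
    if total < 0:
        return ("blame", -total)
    return ("neutral", 0)

# The indifferent mayor: neutral intention, harmful outcome.
print(overall_judgment(0, -3))   # ('blame', 3)
# The same indifference with a helpful side-effect instead.
print(overall_judgment(0, 2))    # ('praise', 2)
```

On this simple version, an agent with no relevant intention can still be blamed or praised purely through the outcome term, which is the counterintuitive prediction the paper reports as borne out by the data.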

To put our hypothesis to the test, we asked nearly 300 participants to score how blameworthy or praiseworthy the mayor was for his decision and likewise for other agents in two similar scenarios. As already mentioned, we varied whether the potential side-effect of the decision was harmful or helpful. To study the respondents’ judgements of an agent’s intentions and outcomes separately, we included cases where the agent wasn’t informed about potential side-effects and where the potential side-effects didn’t occur after all. We also considered decision makers who were aware of potential side-effects, without knowing whether they would be positive, neutral or negative.

As expected, we found further evidence against the Mens Rea Asymmetry. Our results also corroborated the Moral Bookkeeping hypothesis, including its counterintuitive prediction that respondents still assign praise or blame to decision makers who weren’t aware of potential side-effects. Moreover, participants assigned more praise than blame to decision makers who unintentionally brought about the respective positive or negative side-effect. This finding remains puzzling to us as well.

Finally, based on our data, more complicated versions of the general compositionality thesis cannot be ruled out either. We hope that this work will inspire further experiments to unravel how exactly we come to our moral verdicts about others.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4645/.

References

  • Knobe, J. (2003). “Intentional Action and Side Effects in Ordinary Language”. Analysis 63(3): 190–194.

About the authors

Igor Douven is a CNRS Research Professor at the IHPST/Panthéon-Sorbonne University in Paris.

Frank Hindriks is Professor of Ethics, Social and Political Philosophy at the Faculty of Philosophy of the University of Groningen, the Netherlands.

Sylvia Wenmackers is a Research Professor in Philosophy of Science at KU Leuven, Belgium.

Bryan Pickel and Brian Rabern – “Against Fregean Quantification”

In this post, Bryan Pickel and Brian Rabern discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Still life of various kinds of fruit lying on a tablecloth.
“Martwa natura” (1910) Witkacy

A central achievement of early analytic philosophy was the development of a formal language capable of representing the logic of quantifiers. It is widely accepted that the key advances emerged in the late nineteenth century with Gottlob Frege’s Begriffsschrift. According to Dummett,

“[Frege] resolved, for the first time in the whole history of logic, the problem which had foiled the most penetrating minds that had given their attention to the subject.” (Dummett 1973: 8)

However, the standard expression of this achievement came in the 1930s with Alfred Tarski, albeit with subtle and important adjustments. Tarski introduced a language that regiments quantified phrases found in natural or scientific languages, where the truth conditions of any sentence can be specified in terms of meanings assigned to simpler expressions from which it is derived.

Tarski’s framework serves as the lingua franca of analytic philosophy and allied disciplines, including foundational mathematics, computer science, and linguistic semantics. It forms the basis of the predicate logic conventionally taught in introductory logic courses – recognizable by its distinctive symbols such as inverted “A’s” and backward “E’s,” truth-functions, predicates, names, and variables.

This formalism proves indispensable for tasks such as expressing the Peano Axioms, elucidating the truth-conditional ambiguity of statements like “Every linguist saw a philosopher,” or articulating metaphysical relationships between parts and wholes. Additionally, its computationally more manageable fragments have found applications in semantic web technologies and artificial intelligence.

Yet, from the outset there was dissatisfaction with Tarski’s methods. To see where the dissatisfaction originates, first consider the non-quantified fragment of the language. For this fragment, the truth conditions of any complex sentence can be specified in terms of the truth conditions of its simpler sentences, and the truth conditions of any simple sentence, in turn, can be specified in terms of the referents of its parts. For example, the sentence ‘Hazel saw Annabel and Annabel waved’ is true if and only if its component sentences ‘Hazel saw Annabel’ and ‘Annabel waved’ are both true. ‘Hazel saw Annabel’ is true if the referents of ‘Hazel’ and ‘Annabel’ stand in the seeing relation. ‘Annabel waved’ is true if the referent of ‘Annabel’ waved. For this fragment, then, truth and reference can be considered central to semantic theory.

This feature can’t be maintained for the full language, however. To regiment quantifiers, Tarksi  introduced open sentences and variables, effectively displacing truth and reference with “satisfaction by an assignment” and “value under an assignment”. Consider for instance a sentence such as  ‘Hazel saw someone who waved’. A broadly Tarskian analysis would be this: ‘there is an x such that: Hazel saw x and x waved’. For Tarski, variables do not refer absolutely, but only relative to an assignment. We can speak of the variable x as being assigned to different individuals: to Annabel or to Hazel. Similarly, an open sentence such as ‘Hazel saw x’ or ‘x waved’ is not true or false, but only true or false relative to an assignment of values to its variables.

This aspect of Tarski’s approach is the root cause of dissatisfaction, yet it constitutes his unique method for resolving “the problem” – i.e., the problem of multiple generality that Frege had previously solved. Tarski used the additional structure to explain the truth conditions of multiply quantified sentences such as ‘Everyone saw someone who waved’, or ‘For every y, there is an x such that: y saw x and x waved’. The overall sentence is true if every assignment of a value to ‘y’ can be extended to an assignment of values to both ‘y’ and ‘x’ on which ‘y saw x’ and ‘x waved’ are both true.
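The Tarskian recursion for this example can be made concrete in a few lines. The toy domain, the relations, and all names below are our invention for illustration; the code shows only satisfaction by an assignment for this one sentence, not Tarski’s general apparatus.

```python
# A minimal sketch of Tarskian "satisfaction by an assignment" for the
# sentence 'For every y, there is an x such that: y saw x and x waved'.
# The domain and relations are invented for illustration.

domain = {"Hazel", "Annabel", "Caleb"}
saw = {("Hazel", "Annabel"), ("Annabel", "Caleb"), ("Caleb", "Annabel")}
waved = {"Annabel", "Caleb"}

def open_sentence(assignment):
    """'y saw x and x waved': true or false only relative to an assignment
    of individuals to the variables 'y' and 'x'."""
    y, x = assignment["y"], assignment["x"]
    return (y, x) in saw and x in waved

# The quantified sentence is true if every assignment to 'y' can be
# extended by some value for 'x' that satisfies the open sentence.
result = all(
    any(open_sentence({"y": y, "x": x}) for x in domain)
    for y in domain
)
print(result)
```

Note that `open_sentence` on its own has no truth value; it only becomes evaluable once an assignment is supplied, which is exactly the displacement of truth by satisfaction that the post describes.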

Tarksi’s theory is formally elegant, but its foundational assumptions are disputed. This has prompted philosophers to revisit Frege’s earlier approach to quantification.

According to Frege, a “variable” is not even an expression of the language but instead a typographic aspect of a distributed quantifier sign. So Frege would think of a sentence such as ‘there is an x such that: Hazel saw x and x waved’ as divisible into two parts:

  1. there is an x such that: … x….
  2. Hazel saw … and … waved

Frege would say that expression (2) is a predicate that is true or false of individuals depending on whether Hazel saw them and they waved. For Frege, this predicate is derived by starting with a full sentence such as ‘Hazel saw Annabel and Annabel waved’ and removing the name ‘Annabel’. In this way, Frege seems to give a semantics for quantification that more naturally extends the non-quantified portion of the language. As Evans says:

[T]he Fregean theory with its direct recursion on truth is very much simpler and smoother than the Tarskian alternative…. But its interest does not stem from this, but rather from examination at a more philosophical level. It seems to me that serious exception can be taken to the Tarskian theory on the ground that it loses sight of, or takes no account of, the centrality of sentences (and of truth) in the theory of meaning. (Evans 1977: 476)

In short: Frege did it first, and Frege did it better.
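To see how the two pictures differ, the Fregean treatment of the same example can also be sketched in code. Again the domain and extensions are our own invention; the sketch only illustrates the idea of quantifiers as second-level predicates:

```python
# A toy model of the Fregean picture: the quantifier 'there is an x
# such that: ... x ...' is a second-level predicate, true or false of
# first-level predicates; the first-level predicate is what remains
# of a complete sentence once a name is removed. Domain and
# extensions are invented purely for illustration.

DOMAIN = {"Hazel", "Annabel", "Oskar"}
SAW = {("Hazel", "Annabel")}
WAVED = {"Annabel"}

# First-level predicate: 'Hazel saw ... and ... waved', obtained from
# 'Hazel saw Annabel and Annabel waved' by removing the name 'Annabel'.
def hazel_saw_and_waved(a):
    return ("Hazel", a) in SAW and a in WAVED

# Second-level predicate: true of a predicate iff that predicate is
# true of at least one individual in the domain.
def something(pred):
    return any(pred(d) for d in DOMAIN)

# 'There is an x such that: Hazel saw x and x waved'
print(something(hazel_saw_and_waved))   # True (Annabel)
```

Here the recursion runs directly on truth: `hazel_saw_and_waved` is true or false of individuals outright, and no assignment parameter appears anywhere – which is exactly the simplicity Evans praises.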

Our paper “Against Fregean Quantification” takes a closer look at these claims. We identify three features in which the Fregean approach has been held to make an advance on Tarski: it treats quantifiers as predicates of predicates, the basis of the recursion includes only names and predicates, and the complex predicates do not contain variable markers.

However, we show that in each case, the Fregean approach must similarly abandon the centrality of truth and reference to its semantic theory. Most surprisingly, we show that rather than extending the semantics of the non-quantified portion of the language, the Fregean turns ordinary proper names into variable-like expressions. In doing so, the Fregean arrives at a typographic variant of the most radical of Tarskian views: variabilism, the view that names should be modeled as Tarskian variables.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2906/.

References

  • Dummett, Michael. (1973). Frege: Philosophy of Language. London: Gerald Duckworth.
  • Evans, Gareth. (1977). “Pronouns, Quantifiers, and Relative Clauses (I)”. Canadian Journal of Philosophy 7(3): 467–536.
  • Frege, Gottlob. (1879). Begriffsschrift: Eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a.d.S.: Louis Nebert.
  • Tarski, Alfred. (1935). “The Concept of Truth in Formalized Languages”. In Logic, Semantics, Metamathematics (1956): 152–278. Clarendon Press.

About the authors

Bryan Pickel is Senior Lecturer in Philosophy at the University of Glasgow. He received his PhD from the University of Texas at Austin. His main areas of research are metaphysics, the philosophy of language, and the history of analytic philosophy.

Brian Rabern is Reader at the School of Philosophy, Psychology, and Language Sciences at the University of Edinburgh. Additionally, he serves as a software engineer at GraphFm. He received his PhD in Philosophy from the Australian National University. His main areas of research are the philosophy of language and logic.

Posted on

Christopher Frugé – “Janus-Faced Grounding”

In this post, Christopher Frugé discusses the article he recently published in Ergo. The full-length version of his article can be found here.

Picture of the Roman double-faced god Janus: one face, older in age, looks back to the past, while the other, younger, looks forward to the future.
Detail of “Bust of the Roman God Janus” (1569) © The New York Public Library

Grounding is the generation of the less fundamental from the more fundamental. The fact that Houston is a city is not a fundamental aspect of reality. Rather, it’s grounded in facts about people and, ultimately, fundamental physics.

What is the status of grounding itself? Most theorists of ground think that grounding is non-fundamental and so must itself be grounded. Yet, if grounding is always grounded, then every grounding fact generates an infinite regress of the grounding of grounding facts, where each grounding fact needs to be grounded in turn. I argue that this regress is vicious, and so some grounding facts must be fundamental.

Grounding theorists take grounding to be grounded because it seems to follow from two principles about fundamentality. Purity says that fundamental facts can’t contain any non-fundamental facts. Completeness says that the fundamental facts generate all the non-fundamental facts. The idea behind purity is that the fundamental is supposed to be ‘pure’ of the non-fundamental. There can’t be any chairs alongside quarks at the most basic level of reality. Completeness stems from the thought that the fundamental is ‘enough’ to generate the rest of reality. Once the base layer has been put in place, then it produces everything else.

These principles are plausible, but they lead to regress. For example, the fact that Houston is a city is grounded in certain fundamental physical facts. By the standard construal of purity, this grounding fact is non-fundamental, since it ‘contains’ a non-fundamental element: the fact that Houston is a city. But by completeness, non-fundamental facts must be grounded, so this grounding fact must be grounded. But then this grounding fact must be grounded for the same reason, and so on forever.

We thus have what’s called the fact regress.

The standard take among grounding theorists is that the regress isn’t vicious, just a surprising discovery. This is because it doesn’t violate the well-foundedness of grounding. Well-foundedness requires that each path of grounding – A grounds B, and C grounds A, and D grounds C, and so on – must at some point come to an end.

The fact regress doesn’t violate well-foundedness, because each grounding fact can ground out in something fundamental. It’s just that each grounding fact needs to be grounded. Consider a case where A is fundamental and grounds B, but this grounding fact is grounded in a fundamental C. And that grounding fact is grounded in a fundamental D and so on. This satisfies well-foundedness but is an instance of the fact regress.

Nonetheless, I claim that the fact regress is still vicious. This is because what’s grounded doesn’t merely depend on its ground but also depends on the grounds of its grounding fact – and on the grounds of each grounding fact in the path of grounding of grounding. Call this connection dependence.

Why is connection dependence a genuine form of dependence? Suppose that A grounds B, where B isn’t grounded in anything else. But say that C grounds that A grounds B, where A grounds B isn’t grounded in anything else. Then, B depends not just on A but also on C. If C were removed, then A wouldn’t ground B. So then B would not be generated by anything and so would not come into being. For example, if a collection of particles grounds the composite whole of those particles only via a composition operation grounding this grounding fact, then if, perhaps counterpossibly, there were no composition operation, those particles would not ground that whole. Similar reasoning applies at each step in the path of grounding of grounding.

So, then, the fact regress is bad for the same reason that violations of well-foundedness are bad. Without well-foundedness, it could be that each ground would need to be grounded in turn, and so the creation of a non-fundamental element of reality would never end up coming about because it would always require a further ground. Yet, given the fact regress, there can also be no stopping point—no point from which what’s grounded is ultimately able to be generated from its grounds. So determination, and hence what’s determined, would always be deferred and never achieved.

Therefore, I uphold well-connectedness, which requires that every path of grounding of grounding facts terminates in an ungrounded grounding fact.

This prohibits the fact regress.

Well-connectedness falls out of the proper interpretation of completeness, which imposes the requirement that the fundamental is enough for the non-fundamental. For any portion of non-fundamental reality, there is some portion of the fundamental that is ‘enough’ to produce it. If well-connectedness is violated, then there is no portion of fundamental reality that is sufficient unto itself to produce any bit of non-fundamental reality. There would always have to be a further determination of how the fundamental determines the non-fundamental. But at some point the grounding of grounding must stop. Some grounding facts must be fundamental.

However, the fact regress seems to fall out of completeness and purity. So what gives? I think the key is to see that the proper interpretation of purity doesn’t require that grounding facts be grounded.

There’s a distinction between what’s called factive and non-factive grounding. Roughly put, A non-factively grounds B if and only if given A then A generates B. A factively grounds B just in case A non-factively grounds B and A obtains. So it could be that A non-factively grounds B even though B doesn’t obtain, since A doesn’t obtain. Thus, in a legitimate sense, the fact that A non-factively grounds B doesn’t ‘contain’ A or B, since that grounding fact can obtain without either A or B obtaining. We can think of the non-factive grounding facts as ‘mentioning’ the ground and the grounded without ‘containing’ them. And this is consistent with purity, since fundamental non-factive grounding facts don’t have any non-fundamental constituents.

Want more?

Read the full article at: https://journals.publishing.umich.edu/ergo/article/id/4664/.

About the author

Christopher Frugé is a Junior Research Fellow at the University of Oxford in St John’s College. He received his PhD from Rutgers. He works on foundational and normative ethics as well as metaphysics. 

Posted on

Gabriel De Marco and Thomas Douglas – “Nudge Transparency Is Not Required for Nudge Resistibility”

In this post, Gabriel De Marco and Thomas Douglas discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Image of a variety of cakes on display.
“Cakes” (1963) Wayne Thiebaud © National Gallery of Art

Consider:

Food Placement. In order to encourage healthy eating, cafeteria staff place healthy food options at eye-level, whereas unhealthy options are placed lower down. Diners are more likely to pick healthy foods and less likely to pick unhealthy foods than they would have been had foods instead been distributed randomly.

Interventions like this are often called nudges. Though many agree that it is, at least sometimes, permissible to nudge people, there is a thriving debate about when, exactly, it is so.

In the now-voluminous literature on the ethics of nudging, some authors have suggested that nudging is permissible only when the nudge is easy to resist. But what does it take for a nudge to be easy to resist? Authors rarely give accounts of this, yet they often seem to assume what we call

The Awareness Condition (AC). A nudge is easy to resist only if the agent can easily become aware of it.

We think AC is false. In our paper, we mount a more developed argument for this, but in this blog post we simply advance one counterexample and consider one possible response to it.

Here’s the counterexample:

Giovanni and Liliana: Giovanni, the owner of a company, wants his workers to pay for the more expensive, unhealthy snacks in the company cafeteria, so, without informing his office workers, he instructs the cafeteria staff to place these snacks at eye level. While in line at the cafeteria, Liliana (who is on a diet) sees the unhealthy food, and is a bit tempted by it, partly as a result of the nudge. Recognizing the temptation, she performs a relatively easy self-control exercise: she reminds herself of her plan to eat healthily, and why she has it. She thinks about how following a diet is going to be difficult, and once she starts making exceptions, it’s just going to be easier to make exceptions later on. After this, she decides to take the salad and leave the chocolate pudding behind. Although she was aware that she was tempted to pick the chocolate pudding, she was not aware that she was being nudged, nor did she have the capacity to easily become aware of this, since Giovanni went to great lengths to hide his intentions.

Did Liliana resist the nudge? We think so. We also think that the nudge was easily resistible for her, even though she did not have the capacity to easily become aware of the fact that she was being nudged. If you agree, then we have a straightforward counterexample to AC.

In response, someone might argue that, although Liliana resists something, she does not resist the nudge. Rather, she resists the effects of the nudge: the (increased) motivation to pick the chocolate pudding. Resisting the nudge, rather than its effects, requires that one intends to act contrary to the nudge. But Liliana doesn’t intend to do that. Although she intends to pick the healthy option, to pick the salad, or to not pick the chocolate pudding, she does not intend to act contrary to the nudge.

If resisting a nudge requires that one intend to act contrary to the nudge, then Liliana does not resist the nudge, and the counterexample to AC fails. Yet we do not think that resisting a nudge requires that one intend to act contrary to the nudge. While we grant that a way of resisting a nudge is to do so while intending to act contrary to it, and that resisting it in this way requires awareness of the nudge, we do not think that this is the only way to resist a nudge. Partly, we think this because we find it plausible that Liliana (and agents in other similar cases) do resist the nudge.

But further, we think that, if resisting a nudge requires intending to act contrary to the nudge, this will cast doubt on the thought that nudges ought to be easy to resist. Suppose that there are two reasonable ways of understanding “resisting a nudge.” On one understanding, resistance requires that the agent acts contrary to the nudge and intends to do so. Liliana does not resist the nudge on this understanding. On a second, broader way of understanding resistance, one need not intend to act contrary to the nudge in order to resist it; it is enough simply to act contrary to the nudge. Liliana does resist the nudge in this way.

Now consider two claims:

The strong claim: A nudge is permissible only if it is easy to act contrary to it with the intention of doing so.

The weak claim: A nudge is permissible only if it is easy to act contrary to it.

Are these claims plausible? We think that the weak claim might be, but the strong claim is not.

Consider again Food Placement. This is a nudge just like Giovanni’s, except that the food placement is intended to get more people to pick the healthy food option over the unhealthy one, rather than the reverse. In this version of the case, Giovanni wants to do what is in the best interests of his staff. According to the strong claim, this nudge would be impermissible insofar as his staff cannot easily become aware of the nudge. And this is so even though it would be permissible for Giovanni to arrive at the very same placement by distributing the foods randomly. Moreover, it would remain so even if all the following are true:

  1. the nudge only very slightly increases the nudgee’s motivation to take the healthy food,
  2. the nudgee acts contrary to this motivation and picks the same unhealthy food she would have picked in the absence of the nudge,
  3. she finds it very easy to act contrary to the nudge in this way,
  4. her acting contrary to the nudge in this way is a reflection of her values or desires, and
  5. her acting contrary to the nudge is the result of normal deliberation which is not significantly influenced by the nudge.

We find it hard to believe that this nudge is impermissible, or even more weakly, that we have a strong or substantial reason against implementing it.

We think, then, that if nudges have to be easily resistible in order to be ethically acceptable, this will be because something like the weak claim holds. On this view, a nudge can meet this requirement if it is easy for the nudgee to resist it in our broader sense, and this is compatible with it being difficult for the nudgee to become aware of the nudge, as in our Giovanni and Liliana case.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4635/.

About the authors

Gabriel De Marco is a Research Fellow in Applied Moral Philosophy at the Oxford Uehiro Centre for Practical Ethics. His research focuses on free will, moral responsibility, and the ethics of influence.

Tom Douglas is Professor of Applied Philosophy and Director of Research at the Oxford Uehiro Centre for Practical Ethics. His research focuses especially on the ethics of using medical and neuro-scientific technologies for non-therapeutic purposes, such as cognitive enhancement, crime prevention, and infectious disease control. He is currently leading the project ‘Protecting Minds: The Right to Mental Integrity and the Ethics of Arational Influence’, funded by the European Research Council.

Posted on

Cathy Mason – “Reconceiving Murdochian Realism”

In this post, Cathy Mason discusses the article she recently published in Ergo. The full-length version of Cathy’s article can be found here.

A picture of a vase with irises.
“Irises” (1890) Vincent van Gogh

Iris Murdoch’s ethics is filled with discussions of moral reality, moral truth and how things really stand morally. What exactly does she mean by these? Her style is certainly a non-standard philosophical style, and her ideas are remarkably wide-ranging, but it can seem appealing to think that at heart her metaethical commitments largely align with standard realists’. I suggest, however, that this reading of Murdoch is mistaken: her realism amounts to something else altogether.

I take standard realism to be roughly captured by the following definition from Sayre-McCord:

Moral realists hold that there are moral facts, that it is in light of these facts that people’s moral judgments are true or false, and that the facts being what they are (and so the judgments being true, when they are) is not merely a reflection of our thinking the facts are one way or another. That is, moral facts are what they are even when we see them incorrectly or not at all. (Sayre-McCord 2005: 40)

Does Murdoch subscribe to this view? It can certainly be tempting to think so. She repeatedly talks about ‘realism’ and ‘objectivity’, and remarks like the following seem well-understood in standard realist terms:

The authority of morals is the authority of truth, that is of reality. (TSG 374)

The ordinary person does not, unless corrupted by philosophy, believe that he creates values by his choices. He thinks that some things really are better than others and that he is capable of getting it wrong. (TSG 380)

Here, Murdoch clearly commits to the idea that some moral claims are true, and that what makes them true is not something to do with the valuer, but something about the world. All this sounds very much like standard realism.

However, it would be a mistake to think that these surface similarities point towards a deeper congruence between Murdoch and standard realists. For a start, realists typically take moral facts to be one kind among many. Just as there are mathematical facts and psychological facts, so too there are moral facts. Yet Murdoch repeatedly insists that all reality is moral—and thus that all facts are in some sense moral facts (e.g. IP 329, OGG 357, MGM 35). Moreover, though Murdoch insists on the truth of some moral claims, she understands the notion of truth very differently from standard realists.  Whereas realists typically regard truth as something abstract, Murdoch suggests that it can only be understood in relation to truthfulness and the search for truth. The seeming agreement between Murdoch and standard realists on the truth of some ethical claims thus belies deeper disagreements between them.

What’s more, standard realism is hard to square with some wider views Murdoch holds. First, she suggests that some moral concepts can be genuinely private: fully virtuous agents may have different moral concepts without either of their conceptual schemas being inaccurate or incomplete. Second, she suggests that there can be private moral reasons: moral reasons need not be universal. It is hard to see how there could be room for private moral concepts and reasons within standard realism: either there are facts corresponding to a moral belief, or there are not. If there are, then it is a kind of moral ignorance to ignore such facts. If not, then the belief is simply false. Finally, Murdoch rejects the idea common in standard realism that the moral supervenes on the non-moral, since she suggests that there simply is no non-moral reality.

What, then, does Murdoch have in mind when she discusses realism? In most cases where Murdoch introduces ideas such as realism or objectivity, she is discussing the moral perceiver’s relation to the thing perceived, rather than only talking about the thing perceived. Her realism is a claim about the reality of the moral where reality is understood as that which is discerned by the virtuous perceiver.

Take, for example, the following passages:

[T]he realism (ability to perceive reality) required for goodness is a kind of intellectual ability to perceive what is true, which is automatically at the same time a suppression of self. (OGG 353)

[A]nything which alters consciousness in the direction of unselfishness, objectivity and realism is to be connected with virtue. (TSG 369)

In both of these quotes, Murdoch discusses the relation between a moral perceiver and the thing perceived. Realism or objectivity is talked of not as a metaphysical feature of objects, properties or facts, but as a feature of moral agents who are epistemically engaged with the world.

Of course, the standard realist might allow that there is such a thing as realism as a feature of a moral perceiver, and understand this in terms of accessing facts or properties which independently exist. Yet this ordering of explanations is ruled out by Murdoch’s insistence that reality itself is a normative (moral) concept. What is objectively real, for Murdoch, cannot be understood apart from ethics, apart from the essentially human activity of seeking to understand the world which is subject to moral evaluation. This is not to suggest that reality is a solely moral concept: it is also linked to truth, to how the world is. But it is to suggest that a conception of how the world is, of reality, must be essentially ethical.

What kind of relation, then, must the realistic observer stand in to the thing observed? Murdoch suggests that no non-moral answer can be given here, no description that demarcates the realistic stance in an ethically neutral way. However, a description can be given in rich ethical terms. To be realistic is best understood as doing justice to the thing one is confronted with, being faithful to the reality of it, being truthful about it, and so on. All of these terms capture the idea that perception can be genuinely cognitive, whilst at the same time being a fundamentally ethical task.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4653/.

References

  • Murdoch, Iris (1999). “The Idea of Perfection”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (299–337). Penguin. [IP]
  • Murdoch, Iris (1999). “On God and Good”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (337–63). Penguin. [OGG]
  • Murdoch, Iris (1999). “The Sovereignty of Good Over Other Concepts”. In Peter Conradi (Ed.), Existentialists and Mystics: Writings on Philosophy and Literature (363–86). Penguin. [TSG]
  • Murdoch, Iris (2012). Metaphysics as a Guide to Morals. Vintage Digital. [MGM]
  • Sayre-McCord, Geoffrey (2005). “Moral Realism”. In David Copp (Ed.), The Oxford Handbook of Ethical Theory (39–62). Oxford University Press.

About the author

Cathy Mason is an Assistant Professor in Philosophy at the Central European University (Vienna). She is currently working on a book on Iris Murdoch’s ‘metaethics’, as well as some ideas concerning the ethics of friendship.

Posted on

Victor Lange and Thor Grünbaum – “Measurement Scepticism, Construct Validation, and Methodology of Well-Being Theorising”

A young pregnant woman is holding a small balance for weighing gold. In front of her is a jewelry box and a mirror; on her right, a painting of the last judgment.
“Woman Holding a Balance” (c. 1664) Johannes Vermeer

In this post, Victor Lange and Thor Grünbaum discuss the article they recently published in Ergo. The full-length version of their article can be found here.

Many of us think that decisions and actions are justified, at least partially, in relation to how they affect the well-being of the involved individuals. Consider how politicians and lawmakers often justify, implicitly or explicitly, their policy decisions and acts by reference to the well-being of citizens. In more radical terms, one might be an ethical consequentialist and claim that well-being is the ultimate justification of any decision or action.

It would therefore be wonderful if we could precisely measure the well-being of individuals. Contemporary psychology and social science contain a wide variety of scales for this purpose. Most often, these scales measure well-being by self-reports. For example, subjects rate the degree to which they judge or feel satisfied with their own lives, or they report the ratio of positive to negative emotions. Yet, even though such scales have been widely adopted, many researchers express scepticism about whether they actually measure well-being at all. In our paper, we label this view measurement scepticism about well-being.

Our aim is not to develop or motivate measurement scepticism. Instead, we consider a recent and interesting reply to such scepticism, put forward by Anna Alexandrova (2017; see also Alexandrova and Haybron, 2016). According to Alexandrova, we can build an argument against measurement scepticism by employing a standard procedure of scientific psychology called construct validation. 

Construct validation is a psychometric procedure. Researchers use the procedure to assess the degree to which a scale actually measures its intended target phenomenon. If psychologists and social scientists have a reliable procedure to assess the degree to which a scale really measures what it is intended to measure, it seems obvious that we should use it to test well-being measurements. For the present purpose, let us highlight two key aspects of the procedure. 

First, construct validation utilises convergent and discriminant correlational patterns between the scores of various scales as a source of evidence. Convergent correlations concern the relation between scores on the target scale (intended to measure well-being) and scores on other scales (assumed to measure either well-being or some closely related phenomenon, such as wealth or physical health). Discriminant correlations concern non-significant relations between scores on the target scale and scores on scales that we expect to measure phenomena unrelated to well-being (e.g., scales measuring perceptual acuity). When assessing the construct validity of a scale, researchers evaluate whether it exhibits the expected convergent correlations (whether subjects with high scores on the target well-being scale also score high on physical health, for example) and discriminant correlations (whether subjects’ scores on the target well-being scale are uncorrelated with perceptual acuity, for example).
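As a toy numerical illustration of these two evidential patterns (all scores below are fabricated by us; a real validation study would of course use participants’ actual responses), one might compute:

```python
# Toy illustration of convergent vs. discriminant correlational
# evidence. All scores are fabricated for illustration only.

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Scores for ten subjects on three scales (invented).
wellbeing = [3, 5, 2, 4, 5, 1, 3, 4, 2, 5]    # target well-being scale
health    = [2, 5, 2, 4, 4, 1, 3, 5, 3, 5]    # related construct
acuity    = [3, 4, 2, 3, 2, 3, 4, 2, 4, 3]    # unrelated construct

# Convergent evidence: scores on the target scale should correlate
# strongly with scores on scales for related constructs.
print(round(pearson(wellbeing, health), 2))
# Discriminant evidence: they should show no substantial correlation
# with scales for unrelated constructs such as perceptual acuity.
print(round(pearson(wellbeing, acuity), 2))
```

A target scale whose scores failed to correlate with the health scores, or correlated strongly with the acuity scores, would to that extent lack construct validity.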

Second, the examination of correlational patterns depends on theory. Initially, we need a theory to build our scale (for instance, a theory of how well-being is expressed in the target population). Moreover, we need a theory to tell us what correlations we should expect (i.e. how answers on our scale should correlate with other scales). This means that, when engaging in construct validation, researchers test a scale and its underlying theory holistically. That is, the construct validation of the target scale involves testing both the scale and the theory of well-being that underlies it. Consequently, the procedure of construct validation requires that researchers remain open to revising their underlying theory if they persistently observe the wrong correlational patterns. Given this holistic nature of the procedure, correlational patterns might lead to revisions of one’s theory of well-being, perhaps even to abandoning it. 

The question now is this: Does the procedure of construct validation provide a good answer to measurement scepticism about well-being? While we acknowledge that for many psychological phenomena (e.g., intelligence) the procedures of construct validation might provide a satisfying reply to various forms of measurement scepticism, things are complicated with well-being. Here the normative nature of well-being rears its philosophical head. We argue that an acceptable answer to the question depends on the basic assumptions about the methodology of well-being theorising. Let us clarify by distinguishing between two methodological approaches.

First, methodological naturalism about well-being theorising claims that we should theorise about well-being in the same way we investigate any other natural phenomenon, namely, by ordinary inductive procedures of scientific investigation. Consequently, our theory of well-being should be open to revision on empirical grounds. Second, methodological non-naturalism claims that theorising about well-being should be limited to the methods known from traditional (moral) philosophy. The question of well-being is a question about what essentially and non-derivatively makes a person’s life go best. Well-being has an ineliminable normative or moral nature. Hence, the question of what well-being is, is a question only for philosophical analysis.

The reader might see the problem now. Since construct validation requires openness to theory revision by correlational considerations, it is a procedure that only a methodological naturalist can accept. Consequently, if measurement scepticism is motivated by a form of non-naturalism, we cannot reject it by using construct validation. Non-naturalists will not accept that theorising about well-being can be a scientific and empirical project. This result is all the more important because many proponents of measurement scepticism seem to be methodological non-naturalists.  

In conclusion, if justifying an action or a social policy over another often requires assessing consequences for well-being, then scepticism about measurement of well-being becomes an important obstacle. We cannot address this scepticism head-on with the procedures of construct validation. Such procedures assume something the sceptic might not accept, namely, that our theory of well-being should be open to empirical revisions. Instead, we need to start by making our methodological commitments explicit. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4663/.

References

  • Alexandrova, Anna (2017). A Philosophy for the Science of Well-Being. Oxford University Press. 
  • Alexandrova, Anna and Daniel M. Haybron (2016). “Is Construct Validation Valid?” Philosophy of Science, 83(5), 1098–1109.

About the authors

Victor Lange is a PhD fellow at the Section for Philosophy and a member of the CoInAct group at the Department of Psychology, University of Copenhagen. His research focuses on attention, meditation, psychotherapy, action control, mental action, and psychedelic-assisted therapy. He is part of the platform Regnfang, which publishes podcasts about the sciences of the mind.

Thor Grünbaum is an associate professor at the Section for Philosophy and the Department of Psychology, University of Copenhagen. He is head of the CoInAct research group. His research interests are in philosophy of action (planning, control, and knowledge), philosophy of psychology (explanation, underdetermination, methodology), and cognitive science (sense of agency, prospective memory, action control).

Posted on

Kristie Miller – “Against Passage Illusionism”

Detail of Salvador Dalí’s tarot card “The Magician” (1983)

In this post, Kristie Miller discusses her article recently published in Ergo. The full-length version of Kristie’s article can be found here.

It might seem obvious that we experience the passing of time. Certainly, in some trivial sense we do. It is now late morning. Earlier, it was early morning. It seems to me as though some period of time has elapsed since it was early morning. Indeed, during that period it seemed to me as though time was elapsing, in that I seemed to be located at progressively later times.

One question that arises is this: in what do these seemings consist? One way to put the question is to ask what content our experience has. What state of the world does the experience represent as being the case?

Philosophers disagree about which answer is correct. Some think that time itself passes. In other words, they think that there is a unique set of events that are objectively, metaphysically, and non-perspectivally present, and that which events those are, changes. Other philosophers disagree. They hold that time itself is static; it does not pass, because no events are objectively, metaphysically, and non-perspectivally present, such that which events those are, changes. Rather, whether an event is present is a merely subjective or perspectival matter, to be understood in terms of where the event is located relative to some agent.

Those who claim that time itself passes typically use this claim to explain why we experience it as passing: we experience time as passing because it does. What, though, should we say if we think that time does not pass, but is rather static? You might think that the most natural thing to say would be that we don’t experience time as passing. We don’t represent there being a set of events that are non-perspectivally present, and that which those are, changes. Of course, we represent various events as occurring in a certain temporal order, and as being separated by a certain temporal duration, and we experience ourselves as being located at some times (rather than others) – but none of that involves us representing that some events have a special metaphysical status, and that which events have that status, changes. So, on this view, we have veridical experiences of static time.

Interestingly, however, until quite recently this was not the orthodox view. Instead, the orthodoxy was a view known as passage illusionism. This is the view that although time does not pass, it nevertheless seems to us as though it does. So, we are subject to an illusion in which things seem to us some way that they are not. In my paper I argue against passage illusionism. I consider various ways that the illusionist might try to explain the illusion of time passing, and I argue that none of them is plausible.

The illusionist’s job is quite difficult. First, the illusion in question is pervasive. At all times that we are conscious, it seems to us as though time passes. Second, the illusion is of something that does not exist – it is not an experience which could, in other circumstances, be veridical.

In the psychological sciences, illusions are explained by appealing to cognitive mechanisms that typically function well in representing some feature(s) of our environment. In most conditions, these mechanisms deliver us veridical experiences. In some local environments, however, certain features mislead the mechanism into misrepresenting the world, generating an illusion. Explanations of this kind, however, involve illusions that are not pervasive (they occur only in some local environments) and are not of something that does not exist (they are the product of mechanisms that normally deliver veridical experiences). This gives us reason to doubt that any explanation of this kind will work for the passage illusionist.

I consider a number of mechanisms that represent aspects of time, including those that represent temporal order, duration, simultaneity, motion, and change. I argue that, regardless of how we think about the content of mental states, we should conclude that none of the representational states generated by these mechanisms, individually or jointly, represents time as passing.

First, suppose we think that the content of our experiences is exhausted by the things in the world that those experiences typically co-vary with. For instance, suppose you have a kind of mental state which typically co-varies with the presence of cows. On this view, that mental state represents cows, and nothing more. I argue that if we take this view of representational content, then none of the contents generated by the functioning of the various mechanisms that represent aspects of time could, either severally or, importantly, jointly, represent time as passing. For even if our brains could in some way ‘knit together’ some of these contents into a new percept, such contents don’t have the right features to generate a representation of time passing. For instance, they don’t include a representation of objective, non-perspectival presence. So, if we hold this view of mental content, we should think that passage illusionism is false.

Alternatively, we might think that our mental states do represent the things in the world with which they typically co-vary, but that their content is not exhausted by representing those things. So, the illusionist could argue that we experience passage by representing various temporal features, such that our experiences have not only that content, but also some extra content, and that jointly this generates a representation of temporal passage.

I argue that it is very hard to see why we would come to have experiences with this particular extra content. Representing that certain events are objectively, metaphysically, and non-perspectivally present, and that which events these are, changes, is a very sophisticated representation. If it is not an accurate representation, it’s hard to see why we would come to have it. Further, it seems plausible that the human experience of time is, in this regard, similar to the experience of some non-human animals. Yet it seems unlikely that non-human animals would come to have such sophisticated representations, if the world does not in fact contain passage.

So, I conclude, it is much more likely, if time does not pass, that we have veridical experiences of a static world rather than illusory experiences of a dynamical world.

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/2914/.

About the author

Kristie Miller is Professor of Philosophy and Director of the Centre for Time at the University of Sydney. She writes on the nature of time, temporal experience, and persistence, and she also undertakes empirical work in these areas. At the moment, she is mostly focused on the question of whether, assuming we live in a four-dimensional block world, things seem to us just as they are. She has published widely in these areas, including three recent books: “Out of Time” (OUP 2022), “Persistence” (CUP 2022), and “Does Tomorrow Exist?” (Routledge 2023). She has a new book underway on the nature of experience in a block world, which hopefully will be completed by the end of 2024.