Corey Dethier – “Interpreting the Probabilistic Language in IPCC Reports”

A young sibyl (a sacred interpreter of the word of the gods in pagan religions) argues with an old prophet (a sacred interpreter of the word of God in monotheistic religions). It looks as if the discussion will go on for a long while.
Detail of “A Sibyl and a Prophet” (ca. 1495) by Andrea Mantegna

In this post, Corey Dethier discusses his article recently published in Ergo. The full-length version of the article is linked at the end of this post.

Every few years, the Intergovernmental Panel on Climate Change (IPCC) releases reports on the current state of climate science. These reports are massive reviews of the existing literature, written by the most qualified experts in the field, and they are widely taken to represent our best understanding of what the science currently tells us. For this reason, the IPCC’s findings are important, and so is the way those findings are presented.

The IPCC typically qualifies its findings using two scales: one for likelihood and one for confidence. In its 2013 report, for example, the IPCC says that the sensitivity of global temperatures to increases in CO2 concentration is “likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence) and very unlikely greater than 6°C (medium confidence)” (IPCC 2013, 81).

You might wonder what exactly these qualifications mean. On what grounds does the IPCC say that something is “likely” as opposed to “very likely”? And why does it assign “high confidence” to some claims and “medium confidence” to others? If you do wonder about this, you are not alone. Even many of the scientists involved in writing the IPCC reports find these qualifications confusing (Janzwood 2020; Mach et al. 2017). My recent paper – “Interpreting the Probabilistic Language in IPCC Reports” – aims to clarify this issue, with particular focus on the IPCC’s appeal to the likelihood scale.

Traditionally, probabilistic language such as “likely” has been interpreted in two ways. On a frequentist interpretation, something is “likely” when it happens with relatively high frequency in similar situations, while it is “very likely” when it happens with a much greater frequency. On a personalist interpretation, something is “likely” when you are more confident that it will happen than not, while something is “very likely” when you are much more confident.

Which of these interpretations better fits the IPCC’s practice? I argue that neither does. My main reason is that both interpretations are closely tied to specific statistical methodologies. The frequentist interpretation is appropriate for “classical” statistical testing, whereas the personalist interpretation is appropriate when “Bayesian” methods are used. The details of the differences between these methods do not matter for present purposes. What matters is that climate scientists use both kinds of statistics in their research, and since the IPCC’s reports review all of the relevant literature, the same language is used to summarize results derived from both methods.

If neither of the traditional interpretations works, what should we use instead? My suggestion is the following: we should understand the IPCC’s probabilistic terms as something more like letter grades (an A, a B, a C, etc.) than as strict probabilistic claims that presuppose a particular statistical methodology.

An A in geometry or English suggests that a student is well-versed in the subject according to the standards of the class. If those standards are sufficiently rigorous, we can conclude that the student will probably do well when faced with new problems in the same subject area. But an A in geometry does not mean that the student will correctly solve geometry problems at some specified frequency, nor does it tell you exactly how confident you should be that they’ll solve the next geometry problem.

The IPCC’s use of terms such as “likely” is similar. When the IPCC says that a claim is likely, that’s like saying that the claim earned a C on a very hard test. When the IPCC says that sensitivity is “extremely unlikely less than 1°C”, that’s like saying that the claim that sensitivity is below 1°C fails the test entirely. On this analogy, the IPCC’s judgments of confidence reflect the experts’ evaluation of the quality of the class or test: “high confidence” means that the experts think the test was very good. But even when a claim passes the test with full marks, and the test is judged to be very good, this only gives us a qualitative evaluation. Just as you shouldn’t conclude that an A student will get 90% of future problems right, you shouldn’t conclude that something the IPCC categorizes as “very likely” will happen at least 90% of the time. The judgment has an important qualitative component, which a purely numerical interpretation would miss.

It would be nice – for economists, for insurance companies, and for philosophers obsessed with precision – if the IPCC could make purely quantitative probabilistic claims. At the end of my paper, I discuss whether the IPCC should strive to do so. I’m on the fence: there are both costs and benefits. Crucially, however, my analysis suggests that this would require the IPCC to go beyond its current remit: in order to present results that allow for a precise quantitative interpretation of its probability claims, the IPCC would have to do more than simply summarize the current state of the research. 

Want more?

Read the full article at https://journals.publishing.umich.edu/ergo/article/id/4637/.

References

  • IPCC (2013). Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Thomas F. Stocker, Dahe Qin, et al. (Eds.). Cambridge University Press.
  • Janzwood, Scott (2020). “Confident, Likely, or Both? The Implementation of the Uncertainty Language Framework in IPCC Special Reports”. Climatic Change 162, 1655–75.
  • Mach, Katharine J., Michael D. Mastrandrea, et al. (2017). “Unleashing Expert Judgment in Assessment”. Global Environmental Change 44, 1–14.

About the author

Corey Dethier is a postdoctoral fellow at the Minnesota Center for Philosophy of Science. He has published on a variety of topics relating to epistemology, rationality, and scientific method, but his main research focus is on epistemological and methodological issues in climate science, particularly those raised by the use of idealized statistical models to answer questions about climate change.