Four Waves in the Philosophy of Statistics

January 25, 2014

Deborah Mayo is posting slides from her Philosophy of Statistics course here.

Mayo on Confirmation and the Tacking Problem

October 20, 2013

Deborah Mayo has an engaging post on Bayesian confirmation theory here.

Epistemic Utility Theory 2013

June 4, 2013

Summer School on Epistemic Utility Theory
EUT 2013, Bristol
August 17-18, 2013

EUT is organized by the Department of Philosophy at the University of Bristol and coordinated by Richard Pettigrew, Jason Konek, and Ben Levinstein.

Invited speakers:
  • Jim Joyce
  • Katie Steele
  • Rachael Briggs
  • Branden Fitelson
  • Kenny Easwaran

The registration deadline is July 15, 2013; the graduate student paper CfP deadline is July 5, 2013.

For details, visit

[Cross posted at Certain Doubts]

PROGIC 2013 Program

May 6, 2013

Munich Center for Philosophy of Science

September 17-18, 2013

Invited talks

  • Igor Douven (Groningen) Conditionals and closure. Abstract.
  • Alan Hájek (Australian National University) Probabilities of counterfactuals and counterfactual probabilities. Abstract.
  • Kevin T. Kelly & Hanti Lin (Carnegie Mellon) Qualitative reasoning that tracks Jeffrey conditioning. Abstract.
  • Hannes Leitgeb (Munich) Belief and stable probability. Abstract.
  • Peter Milne (Stirling) Information, confirmation, and conditionals. Abstract.

Contributed talks

  • Glauber De Bona, Fabio G. Cozman & Marcelo Finger (São Paulo) Towards classifying propositional probabilistic logics.
  • Liam Bright (Carnegie Mellon) Measuring degrees of incoherence.
  • Teddy Groves (Kent) An application of Carnapian inductive logic to philosophy of statistics.
  • Hykel Hosni (LSE/Scuola Normale Superiore), Tommaso Flaminio (DiSTA) & Lluís Godo (IIIA) On the logical structure of de Finetti’s notion of event.
  • Arthur Paul Pedersen (Max Planck Institute) Prospects for a theory of non-Archimedean expected utility: Impossibilities and possibilities.
  • Dana Scott (Carnegie Mellon) A stochastic λ-calculus.
  • Stanislav O. Speranski (Novosibirsk State) Quantification over events in probability logic and its applications to elementary analysis.
  • Sean Walsh (Irvine) Empiricism, probability, and knowledge of arithmetic.
  • Jon Williamson (Kent) & Jürgen Landes (Munich) Objective Bayesian epistemology for inductive logic on predicate languages.

For more information:

Rotten apples, Lockean belief, and Booleanosis

May 3, 2013

The Lockean thesis maintains that an individual fully believes a proposition p just when he has a high level of confidence in p.  The received view has it that the problem with Lockean accounts of qualitative belief  is summed up by Henry Kyburg’s lottery paradox, which pits high-probability acceptance rules against the rule of adjunction. For Kyburg, there was no paradox, but instead a misplaced commitment to the rule of adjunction, a condition he famously described as “conjunctivitis”.  

Less often observed is a problem for Lockean belief and disjunction (Kyburg, Teng, and Wheeler 2007). It turns out that Lockeans expose themselves to a pernicious form of amalgamation reversal (a.k.a. “Simpson’s paradox”) that cannot be handled by the known recipes for avoiding such reversals (Good and Mittal 1987). Below the fold is an example and short discussion.

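For readers who want the flavor of an amalgamation reversal before getting to the post's own example, here is a generic numeric sketch using the well-known kidney-stone figures from Charig et al. (1986); these numbers are standard illustrations, not the example behind the fold.

```python
from fractions import Fraction as F

# Classic reversal data: (successes, trials) for treatments A and B
# in each subgroup of the 1986 kidney-stone study.
data = {
    "small stones": {"A": (81, 87), "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return F(successes, trials)  # exact success rate

# Within every subgroup, A outperforms B ...
for group, d in data.items():
    assert rate(*d["A"]) > rate(*d["B"])

# ... yet after amalgamating the subgroups, B outperforms A.
totals = {t: (sum(data[g][t][0] for g in data),
              sum(data[g][t][1] for g in data)) for t in ("A", "B")}
assert rate(*totals["A"]) < rate(*totals["B"])
print(totals)  # {'A': (273, 350), 'B': (289, 350)}
```

Good and Mittal (1987) characterize sampling designs under which such reversals cannot occur; the point of the post is that the Lockean case falls outside those recipes.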

CMU Ockham’s Razor Workshop: Follow-up

June 27, 2012

The CMU Ockham’s Razor workshop was last weekend, and several participants have written up comments on their blogs:

Deborah Mayo, at her blog Error Statistics [here];

Cosma Shalizi, at his blog, Three-Toed Sloth  [Day 1, Day 2, Day 3] ;

Larry Wasserman, on his new blog, Normal Deviate [here], which also has a nice précis of Peter Grünwald’s talk on “Self-repairing Bayesian Statistics”.

Schervish on Strictly Proper Scoring Rules for Incentive-Compatible Elicitation

April 10, 2012

Mark Schervish, Professor and Head of the Department of Statistics at Carnegie Mellon University, will deliver a Games and Decisions lecture, “Incentive-Compatible Elicitation,” on Wednesday, April 11, 2012, at Carnegie Mellon University.

Schervish joined the Department of Statistics in 1979 after earning a doctoral degree in statistics from the University of Illinois at Urbana-Champaign and a master’s degree in applied mathematics from the University of Michigan. His research interests in statistics are broad, spanning problems concerning foundations, methodology, theory, and applications. In addition to numerous articles, Schervish is author of Theory of Statistics (Springer) and co-author of Rethinking the Foundations of Statistics (with Teddy Seidenfeld and Jay Kadane; CUP) and Probability and Statistics (with Morris H. DeGroot; Addison-Wesley). What follows is an abstract of his Games and Decisions lecture.

Strictly proper scoring rules have been advertised as tools that allow the elicitation of various aspects of subjective probability distributions by providing the proper incentives to induce agents to honestly report their beliefs. We give a brief overview and report some results that raise some questions about the ability to implement the incentive structure as intended. The results extend to all statistical decision problems and raise issues that should be addressed whenever applying statistical decision theory in practice.

Games and Decisions Group
Department of Philosophy
Carnegie Mellon University

Wednesday, April 11, 2012
12:30-1:30 pm   Baker Hall 135

As usual, all are invited to attend. To ensure that we can accommodate all lunchtime guests, please contact Teddy Seidenfeld or Kevin Zollman to signal your intention to attend.
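As background on the lecture's subject, here is a minimal sketch (not Schervish's construction) of why a strictly proper scoring rule rewards honesty: under the Brier score, an agent whose true probability for an event is p uniquely minimizes expected loss by reporting p. The belief 0.7 and the reporting grid are illustrative choices.

```python
# Expected Brier loss when the agent's true probability for the event is p
# but she reports q: loss is (1 - q)^2 if the event occurs, q^2 if not.
def expected_brier(p, q):
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.7                              # assumed true belief, for illustration
grid = [i / 100 for i in range(101)] # candidate reports 0.00, 0.01, ..., 1.00
best = min(grid, key=lambda q: expected_brier(p, q))
print(best)  # 0.7 -- honest reporting is the unique optimum
```

Schervish's point in the abstract is that implementing this incentive structure in practice is subtler than the textbook picture suggests; the sketch only shows the textbook picture.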

Distributions as Programs, and Limitations on Automating Probabilistic Inference

March 27, 2012

Daniel Roy, Newton Fellow at the University of Cambridge, will deliver a Games and Decisions lecture, “Distributions as Programs, and Limitations on Automating Probabilistic Inference,” on Wednesday, March 28, 2012, at Carnegie Mellon University. What follows is an abstract of his Games and Decisions lecture.

When is Bayesian reasoning possible? And when is it efficient?

In this talk we will explore a computational perspective on these questions, investigating the class of computable probability distributions and the fundamental limitations of using this class to describe and compute conditional distributions. We will describe a computable distribution with a noncomputable conditional distribution, demonstrating that generic probabilistic inference cannot be automated by any algorithm (even an inefficient one). We will also highlight positive results showing that computing conditional probabilities is possible in the presence of additional structure such as exchangeability and noise (both common in hierarchical Bayesian models), along with some results on the efficiency of computing conditional probabilities.

This theoretical work bears on work in Machine Learning and Artificial Intelligence on formal “probabilistic programming” languages (which enable the specification of complex probabilistic models) and their implementations (which can be used to perform automated Bayesian reasoning), and also provides a fresh take on foundational questions about conditional probability.

Games and Decisions Group
Department of Philosophy
Carnegie Mellon University

Wednesday, March 28, 2012
12:30-1:30 pm   Baker Hall 135

As usual, all are invited to attend. To ensure that we can accommodate all lunchtime guests, please contact Teddy Seidenfeld or Kevin Zollman to signal your intention to attend.
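To give a concrete, if toy, sense of the "distributions as programs" idea in the abstract above, here is a hypothetical sketch (in no way Roy's framework): a distribution is represented by a sampler program, and conditioning is implemented by rejection sampling. The model, its 80% noise level, and all names are illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed for a reproducible estimate

def model():
    """A generative program: sample a hypothesis, then a noisy observation."""
    h = random.random() < 0.5                    # fair-coin prior
    obs = h if random.random() < 0.8 else not h  # 80%-reliable evidence (assumed)
    return h, obs

def condition(observed, trials=100_000):
    """Conditioning by rejection: run the program, keep runs matching the data."""
    kept = [h for h, obs in (model() for _ in range(trials)) if obs == observed]
    return sum(kept) / len(kept)   # Monte Carlo estimate of P(h | obs)

print(round(condition(True), 2))   # ~0.8, matching Bayes: 0.4 / (0.4 + 0.1)
```

Roy's negative result is precisely that no such recipe generalizes: for some computable distributions, no algorithm at all computes the conditional, however long it runs. The sketch works only because this toy model has the benign noise structure the positive results describe.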

Decisions without Sharp Probabilities, or Unsharp Probabilities without Decisions?

February 21, 2012

We are pleased to announce that Paul Weirich, Curators’ Professor of the Department of Philosophy at the University of Missouri-Columbia, will deliver a Games and Decisions lecture, “Decisions without Sharp Probabilities,” on Wednesday, February 22, 2012, at Carnegie Mellon University. Weirich, having earned a B.A. in philosophy from Saint Louis University, pursued a doctorate in philosophy at the University of California, Los Angeles, earning a Ph.D. in 1977 for his thesis, Probability and Utility for Decision Theory, written under the supervision of Tyler Burge. In addition to numerous articles, Weirich is author of Collective Rationality: Equilibrium in Cooperative Games (OUP: 2010), Realistic Decision Theory: Rules for Nonideal Agents in Nonideal Circumstances (OUP: 2004), Decision Space: Multidimensional Utility Analysis (CUP: 2001), and Equilibrium and Rationality: Game Theory Revised by Decision Rules (CUP: 1998). What follows is an abstract of his Games and Decisions lecture.

Adam Elga (2010) argues that no principle of rationality leads from unsharp probabilities to decisions. He concludes that a perfectly rational agent does not have unsharp probabilities. This paper defends unsharp probabilities. It shows how unsharp probabilities may ground rational decisions.

Unsharp probabilities arise from sparse or unspecific evidence. For example, meteorological evidence, because unspecific, often does not suggest a sharp probability that tomorrow will bring rain. An agent may assign to rain a range of probabilities going from, say, 0.4 to 0.6. Elga argues that unsharp probability assignments may lead an agent to a sure loss. In this event, a dilemma arises: the agent may have either unsharp probability assignments that accurately represent evidence, or sharp probabilities that prevent sure losses. Should an agent’s probability assignments be faithful to the evidence, or should they promote practical success? This paper maintains that an agent’s probability assignments can attain both goals.

Games and Decisions Group
Department of Philosophy
Carnegie Mellon University

Wednesday, February 22, 2012
12:30-1:30 pm   Baker Hall 135

As usual, all are invited to attend. To ensure that we can accommodate all lunchtime guests, please contact Teddy Seidenfeld or Kevin Zollman to signal your intention to attend.
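The two-bet structure behind Elga's argument can be sketched numerically. The [0.4, 0.6] interval matches the abstract's rain example; the payoffs are assumed stakes of the kind standardly used to present the argument, not figures quoted from the lecture.

```python
# Payoffs of two bets on tomorrow's weather, by outcome (assumed stakes).
bet_A = {"rain": -10, "dry": 15}
bet_B = {"rain": 15, "dry": -10}

def ev(bet, p_rain):
    """Expected value of a bet under a sharp credence p_rain."""
    return p_rain * bet["rain"] + (1 - p_rain) * bet["dry"]

# Taking both bets guarantees a gain of 5 whatever the weather:
combined = {w: bet_A[w] + bet_B[w] for w in bet_A}
print(combined)   # {'rain': 5, 'dry': 5}

# Every sharp credence strictly inside (0.4, 0.6) gives each bet positive
# expected value, so a sharp agent accepts both and pockets the sure 5.
# But each bet's expected value vanishes at one endpoint of [0.4, 0.6],
# so an agent who defers to every member of the interval can decline each
# bet in turn and forgo the sure gain -- Elga's challenge.
print(ev(bet_A, 0.6), ev(bet_B, 0.4))
```

Weirich's abstract claims the unsharp agent can keep interval-valued credences while still avoiding this practical cost; the sketch only sets up the challenge, not his answer to it.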

Kelly and Lin at the ILLC

January 19, 2012

At the ILLC in Amsterdam, a new monthly LogiCIC seminar series has been organized within the ERC project on “The Logical Structure of Correlated Information Change”. The organizers of the first seminar, Sonja Smets and Nina Gierasimczuk, invite all to participate.

Every month, the seminar will host one or two invited speakers who present their latest research results on topics in Logic, Epistemology and Philosophy of Science. For the opening of this seminar next Tuesday, two speakers will present: Kevin T. Kelly and Hanti Lin from Carnegie Mellon University.

Time: Tuesday, January 24, 2012, 16:00-18:00
Place: Amsterdam, Science Park 904, room A1.10

16:00-16:50 Kevin T. Kelly (joint with Hanti Lin), “Propositional Reasoning that Tracks Probabilistic Reasoning”
16:50-17:10 Coffee Break
17:10-18:00 Hanti Lin (joint with Kevin T. Kelly), “Uncertain Acceptance and Contextual Dependence on Questions”


Title: Propositional Reasoning that Tracks Probabilistic Reasoning
Abstract: This paper concerns the extent to which propositional reasoning can track probabilistic reasoning, which addresses kinematic problems that extend the familiar lottery paradox. An acceptance rule (Leitgeb 2010) assigns to each Bayesian credal state p a propositional belief revision method B_p, which specifies an initial belief state B_p(\top) that is revised into the new propositional belief state B_p(E) upon receipt of information E. The acceptance rule *tracks* Bayesian conditioning when B_p(E) = B_p|_E(\top) for every E such that p(E) > 0; namely, when acceptance by propositional belief revision equals Bayesian conditioning followed by acceptance. Standard proposals for acceptance and belief revision do not track Bayesian conditioning. The “Lockean” rule that accepts propositions above a probability threshold is subject to the familiar lottery paradox (Kyburg 1961), and we show that it is also subject to new and more stubborn paradoxes when the tracking property is taken into account. Moreover, we show that the familiar AGM approach to belief revision (Harper 1975 and Alchourrón, Gärdenfors, and Makinson 1985) cannot be realized in a sensible way by an acceptance rule that tracks Bayesian conditioning. Finally, we present a plausible, alternative approach that tracks Bayesian conditioning and avoids all of the paradoxes. It combines an odds-based acceptance rule proposed originally by Levi (1996) with a non-AGM belief revision method proposed originally by Shoham (1987). As an application, the lottery paradox receives a new solution motivated by dynamic concerns.
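The lottery paradox for the Lockean threshold rule mentioned in the abstract can be sketched in a few lines; the 100-ticket lottery and the 0.99 threshold are assumed choices for illustration.

```python
from fractions import Fraction as F  # exact arithmetic avoids float-threshold artifacts

n = 100
threshold = F(99, 100)   # Lockean acceptance threshold (assumed)

# In a fair n-ticket lottery, each proposition "ticket i loses" has
# probability 1 - 1/n, which meets the threshold, so each is accepted.
p_loses = 1 - F(1, n)
assert p_loses >= threshold

# But the conjunction "every ticket loses" has probability 0 (some ticket
# wins), so closing the accepted beliefs under conjunction commits the
# agent to a proposition she is certain is false.
p_all_lose = F(0)
assert p_all_lose < threshold
```

This is the static paradox; the kinematic paradoxes the abstract announces arise when the same rule is asked to track Bayesian conditioning as information arrives.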

Title: Uncertain Acceptance and Contextual Dependence on Questions
Abstract: The preface paradox goes like this: an author may argue for a thesis in each chapter of her book, but in the preface she does not want to be committed to the conjunction of all the theses, allowing for the possibility of error. The paradox illustrates a problem about acceptance of uncertain propositions across questions: for each chapter, there is the binary question whether its conclusion is correct; the preface asks a more complex question, namely, which theses are correct. The paradox is that asking for more can yield less. This paper addresses the extent to which acceptance of uncertain propositions depends on the question in context, by providing two impossibility results, formulated as follows. Let uncertainty be modeled by subjective probability. Understand a *question* as having potential, complete answers that are mutually exclusive and jointly exhaustive; understand *answers* as disjunctions of complete answers. Assume that accepted answers within each question are closed under entailment. Assume, further, that acceptance is *sensible* in the sense that a contradiction is never accepted, that answers of certainty are always accepted, and that every answer can be accepted without certainty. Then, as our first result, it is impossible that acceptance is *independent of questions*, namely, that if a proposition is accepted as an answer to a question, then it is accepted in every question to which it is an answer.

In light of the preceding result, one might settle on a weaker sense of question-independence. Say that a question is *refined* by another question if and only if each answer to the former question continues to be an answer to the latter question. As a weakening of question-independence, *refinement-monotonicity* requires that when an answer is accepted in a question, that answer is also accepted in every refined question. But refinement-monotonicity is too strong to be plausible because, by our second result, it is inconsistent with two intuitive principles for reasoning within each individual question. These two principles are: *cautious monotonicity* (i.e., do not retract accepted propositions when you learn what you already accept), and *case reasoning* (i.e., accept a proposition if it would be accepted no matter whether information E or its negation is learned), where information learning is assumed to follow the Bayesian ideal of conditioning.

