In light of some recent discussion over at New Apps, I bring you Clark Glymour’s manifesto …

December 23, 2011

Recent posts like this and like this over at the excellent New Apps blog have generated some intense discussion and have prompted Clark Glymour to write the statement that follows. Clark asked me to post the statement on Choice and Inference, and I’m happy to do so because I think Clark keeps it real.

[Note: I went to Carnegie Mellon and believe that it is a really fantastic, cutting-edge place.]

[Update: The previous version attributed a remark of Dick Rorty’s to Brian Leiter. The corrected version follows below. See [1] as well as the corresponding note at the end of this version.]

%%%%% begin Clark’s statement %%%%%

Manifesto
I am sometimes credited with the remark, due to Nelson Goodman, that “there are two kinds of people in the world: the logical positivists and the god-damned English professors.” While it’s a cute summary, I don’t agree. Departments of English provide sinecures for good authors who lack a mass audience and would otherwise go hungry or not write; they contain people who know a lot about the history of literature, and someone ought to know that. Similar plaudits apply to some faculty in history and in modern languages. Humanities departments also house faculty whose principal work is a great deal of foolishness, garbed in neolexia, who spread it to undergraduates. Nothing would be lost and something would be gained if these people were pruned from universities and offered work with brooms.

Neither do I agree about the logical positivists. Carnap’s work, and that of his disciples, such as Hempel, is largely a history of missed opportunities. Except for Gödel’s theorems, the philosophical implications of the mathematical, statistical and empirical sciences developing all around him were essentially ignored, and Carnap’s “principle of tolerance” was an invitation to triviality. As Russell put it, “God exists,” “God doesn’t exist”—no problem for Carnap, just different languages. And as Dana Scott once said, “Carnap was great at defining, but he never proved a damned thing.” Actually he did, but almost entirely elementary things, or as Awodey and Carus say of his work on categoricity, “trivial proofs.” Reichenbach, who was more closely engaged with the sciences, was ever a day late and a dollar short. His work on special relativity was unsound, and inferior to previous work in English; his quantum logic was a mess and ignored the previous good work by Birkhoff; his confused theory of probability was justly eclipsed by Kolmogorov’s.

Richard Rorty[1] has written that contemporary philosophers are largely embarrassed by the positivists. I am not. For all I find them wanting in retrospect, Carnap was the grandfather of artificial intelligence: his students, Walter Pitts and Herbert Simon, were among the fathers. The echo of the Aufbau must have been heard in Carnap’s teaching. Reichenbach’s student, Hilary Putnam, combined computation theory, logic, and Reichenbach’s central idea about inductive inference to create the subject of computational learning theory. Reichenbach’s Elements of Symbolic Logic was the most serious attempt to formalize substantial parts of ordinary language, and for some while it had an influence in linguistics.

There is a larger reason I do not find the positivists embarrassing: the contrast case on the continent. The positivists, not just the two I have emphasized, wrote with scientific and liberal ambitions, and at least with a passing connection with mathematics and science; in a time in which philosophy on the continent was embracing obscurantism and vicious, totalitarian politics, they stood for liberal politics. When National Socialism came, they left home and country, but not in some cases, as with Hempel, before helping to ferry Jews out of Germany. Compare Heidegger, whose defenses of National Socialism echo some of his philosophical views (the German language is, next to Greek, closest to “Being”), or Merleau-Ponty (Stalin’s mass murders were regrettable, but necessary to the advance of socialism). Sartre sat out much of World War II as a Vichy professor, replacing a Jew who had been dismissed. There is no thinking in these people worthy of the title; Sartre’s work varies from the sophomoric (Les Mouches) to a series of puns passing as profound (L’Être et le Néant). The heirs of their remoteness from analytic thought were Lacan and Derrida and Pol Pot. Their political heirs are English professors who remonstrate about sexual oppression, but have never guided a frightened woman through a mindless, aggressive crowd to a clinic. That’s embarrassing.

Contemporary “formal philosophers” have two ancestors: Carnap, who promoted the linguistic mode they practice (logify everything), and the English mathematical philosophers Russell and Ramsey (probabilify everything). Much as I approve of Ramsey and Russell, I do not wholly approve of either legacy. Much of the work in formal philosophy is ill-motivated technicalia, much of it is ritualized (yet another soundness and completeness theorem for yet another system of modal logic, etc.), and much of it (for example, the ever-growing work on singular causation) is in Carnap’s faulty spirit: definitions without proofs or algorithms, and neglect of the relevant work in computer science.

Contemporary philosophy of science has another ancestor, Thomas Kuhn. He is the unwitting grandfather of the incessant summaries of scientific work, supplemented with comments vague or vapid, now passing as philosophy in many departments. Such work is more often welcome there than is “formal philosophy,” perhaps because it takes less effort to understand, or perhaps not: not much is illuminated when some simple principle about explanation is illustrated by a recapitulation of string theory.

I advocate material philosophy, and I will try to explain what I mean, which is actually rather broad, and of course rather vague. In The Dynamics of Reason, Michael Friedman wrote that the service of philosophy is to provide “new frameworks, new possibilities for science that are in some sense outside of science.” I paraphrase, and I agree. Friedman gives no examples from 20th-century philosophy, but there are many. I have already mentioned two, Carnap’s Aufbau and Putnam’s creation of computational learning theory, which had anticipations in other philosophical work, for example John Kemeny’s. Ramsey’s work in mathematics and in the foundations of subjective probability is another case. Each of these efforts had enormous ramifications, but I do not expect or demand that all of the work in the spirit of Friedman’s vision be so consequential. I could give a very long list of examples. I will give a few.

Patrick Suppes was among the first to realize the implications for education of the digital computer, and he inaugurated a broad project on computer instruction combined with empirical research on learning. Along the way he won the National Medal of Science. His very idea has been wonderfully continued by two of my colleagues, Wilfried Sieg and Richard Scheines. David Lewis rose to a challenge about how meanings could arise without a pre-established understanding between communicators, and in Convention he answered it. Brian Skyrms and his students have extended the basic ideas to a variety of settings, and Skyrms has used related techniques (evolutionary game theory) to speculate on the evolution of norms. Lewis contributed a logical theory where one was really needed, for counterfactuals. In the 1960s there was a lot of writing about relativity and conventionality; David Malament really understood the theory, and cleared matters up. He went on to investigate ways in which features of gravitational models are in principle underdetermined, and, perhaps as an amusement, to compute lower bounds on the energy required to execute a causal circle. Philosophers and others learned from him. From John Earman we learned how various pieces of modern cosmology do not fit together, where the holes are, and much else. From Elliott Sober we got a new take on evolution. Philosophers and statisticians alike want to posit probabilities over sentences, but how would that work with a language adequate to science and mathematics, say first order logic? Haim Gaifman told us, and worked out the implications for what is and what is not learnable. Putnam’s innovation opened the way to generalization to many epistemological and methodological issues. Gaifman, Kevin Kelly and Scott Weinstein seized the opportunity. Bayesian statisticians overlooked many fundamental issues: decisions among multiple agents, resolution of incoherence, etc. Teddy Seidenfeld and his collaborators addressed them. Peter Spirtes and Richard Scheines combined work in statistics and computer science to produce the graphical representation of causal relations, the fundamental result on the implications of such representations for experimental prediction, and the first feasible procedures for searching for such models from data. Their work is used now in many places; the website with software deriving from their ideas receives a hundred hits a week. Recently, collaborating with a computer scientist, Patrik Hoyer, Frederick Eberhardt broke outside of traditional experimental design to give almost complete procedures for learning linear structures from experiments. And so on. (My apologies to the many contributors my brief summary omits, especially to those using philosophical background to write insightfully and importantly about public policy.)
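The Spirtes–Scheines results just mentioned turn on the fact that different causal structures leave different conditional-independence signatures in data. The Python sketch below is my own toy illustration, not their search procedures (constraint-based algorithms such as PC do far more); it only exhibits the kind of signature such a search would exploit.

```python
# Illustrative only: a chain and a collider over three variables imply
# different conditional-independence facts, which constraint-based
# causal search exploits. Not actual search code.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing each on c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

# Chain X -> Y -> Z: X and Z correlate, but are independent given Y.
x = rng.normal(size=n)
y = x + rng.normal(size=n)
z = y + rng.normal(size=n)
print(round(np.corrcoef(x, z)[0, 1], 2), round(partial_corr(x, z, y), 2))
# ~0.58  ~0.00

# Collider X -> Z <- Y: X and Y are independent, but correlate given Z.
x = rng.normal(size=n)
y = rng.normal(size=n)
z = x + y + rng.normal(size=n)
print(round(np.corrcoef(x, y)[0, 1], 2), round(partial_corr(x, y, z), 2))
# ~0.00  ~-0.50
```

A search procedure runs tests like these over variable pairs and conditioning sets and pieces the verdicts together into a graph; the point here is only that the data alone can distinguish the two structures.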

Why should this work be done in philosophy departments? For at least two reasons: because philosophy teaches an eye for hidden presuppositions, equivocations, and bad arguments generally; and because philosophy departments can be homes to brilliant people who are, at least initially, outsiders to the science of the day, people who will take up questions that may have been made invisible to scientists because of disciplinary blinkers, people who look at issues, in small ways or large, just as Friedman’s vision proposes. A real use of philosophy departments is to provide shelter for such thinkers, and in the long run they may be the salvation of philosophy as an academic discipline.

One might think this work, and much else like it that realizes Friedman’s vision in various ways, would be an inspiration to philosophers. Not so. It is largely regarded as marginal or idiosyncratic, “not philosophy.” Philosophy, while it can be combined with empirical work, is an a priori effort, and the tools of the a priori are opinion, logic, mathematics and the theory and practice of computation. To use them in the service of Friedman’s vision requires, as well, a knowledge of the sciences. Learning logic and mathematics, learning to prove and to program, or at least how to write a decent algorithm, requires some sustained effort that philosophers have largely forsworn, not only for themselves but also in the instruction they give to their graduate students. The run of philosophers use, and even acknowledge as philosophical tools, only the first, called “intuition.” (I am reminded of a remark by a philosopher, Laurie Paul in fact, who complained when I used a bit of elementary Boolean algebra in a lecture that philosophers should not be expected to know such things. In one sense of “expected” she was, alas, right.) I do not think philosophical work based only on intuition is always worthless, but it is a little bit like refusing to learn to walk on perfectly good legs and instead walking on your fingertips. It is obtuse.

Of late it has been remarked that there is a sociological break in philosophy. More a fragmentation, I should say. Conventional analytic philosophy (analytic metaphysics, theoretical ethics, traditional epistemology, philosophy of mind) has become cramped and parochial, a subject on the verge of swallowing itself. The same could be said for a good deal of formal philosophy. As Tim Maudlin put it to me once, normal science may be boring, but it produces something; normal philosophy is boring and produces nothing. (Again, I paraphrase.)

Salvation? Were I a university administrator facing a contracting budget, I would not look to eliminate biosciences or computer engineering. I would notice that the philosophers seem smart, but their writings are tediously incestuous and of no influence except among themselves, and I would conclude that my academy could do without such a department. (Phi Beta Kappa would protest, of course.) But not if I found that my philosophy department retrieved a million dollars a year in grants and fellowships, contained members whose work is cited and used in multiple subjects, and taught the traditional subject well to the university’s undergraduates. I am in such a department, and I will never again be a university administrator, but the time is here when many university administrators are in fact in the situation I imagine, and some of them may come to conclusions like mine.

Clark Glymour

[1] In a previous post this was misattributed to Brian Leiter. My apologies to Professor Leiter.


Touching remarks concerning the origins of the PGR

December 8, 2011

As the most powerful man in academic philosophy, law professor Brian Leiter needs no introduction to the philosophical community. Equipped with a deep understanding of measurement, Professor Leiter, with the help of several other expert methodologists, has prepared the latest version of The Philosophical Gourmet Report. With all of the serious discussion that surrounds the PGR, I enjoyed reading the following lighthearted remarks by Professor Leiter concerning the origins of the PGR:

“I get asked this at various intervals, and while it’s been covered in some news stories over the years, I think I’ve never posted the explanation here, so I might as well to satisfy the curiosity of anyone who is curious.

I first produced a short version of the PGR in 1989, when I was a PhD student at Michigan.  It was for undergrads at Michigan thinking about grad school in philosophy, and it was based on the research I had done on PhD programs in philosophy prior to coming to Michigan.   In the 1980s, one of the best-known rankings was the “Gourman Report,” by Jack Gourman, a Cal State poli sci professor, who ranked all fields (and assigned minute numerical differences:  e.g., Princeton was 4.89 in philosophy, but Pitt was 4.82), but never disclosed the methodology.  My suspicion was that Gourman simply adjusted his rankings every few years based on the most recent National Research Council ranking (this was when the NRC actually did useful reputational surveys)–so, in the 1980s, the last one was 1982.  And it was already becoming out-of-date when I was a senior in college in 1983-84.   Anyway, I called my type-written report on philosophy PhD programs in 1989 the “Anti-Gourman Report.”

To my surprise, it was popular not just with the undergrads at Michigan, but with my fellow students, who asked if they could photocopy it and send it to friends at their undergrad schools.  And so it began.  I updated it each year, giving my ‘gestalt’ sense of programs, listing major faculty moves, and so on.   As it grew more and more popular via the informal photocopy method of distribution, I decided I better change the name, lest Jack Gourman get cranky!  Since Gourman was close to Gourmand, and since I wasn’t catering to Gourmands, but Gourmets, I settled on….”

I can only speculate as to how many other leading philosophers spent as much time ranking things during their graduate studies.  Perhaps this is an occasion for a poll!


Joe Halpern: Substantive Rationality and Backward Induction

December 1, 2011

The following news is from Yang Liu and Rush Stewart over at FPCU (http://blogs.cuit.columbia.edu/logic/):

Substantive Rationality and Backward Induction
Joe Halpern (CS, Cornell)
Friday, December 9, 11 AM
716 Philosophy Hall, Columbia University

Abstract.
Some of the major puzzles in game theory today involve the notion of rationality. Assuming that all players are rational, and know that they are all rational, and know that they know, etc., results in strategies that seem highly irrational. At the 1998 TARK (Theoretical Aspects of Rationality and Knowledge) conference, there was a 2.5-hour round table, involving some leading game theorists and philosophers, on “Common knowledge of rationality and the backward induction solution for games of perfect information”. During the discussion Robert Aumann stated the following theorem:

  • Common knowledge of substantive rationality implies the backward induction solution in games of perfect information.

Robert Stalnaker then stated the following theorem:

  • Common knowledge of substantive rationality does not imply the backward induction solution in games of perfect information.

In this talk I will carefully explain all the relevant notions (games of perfect information, knowledge and common knowledge, strategies, rationality, and substantive rationality) and explain why, although both Aumann and Stalnaker were apparently using the same definitions, they were able to (correctly) prove such different results. The key turns out to lie in getting a good model of counterfactual reasoning in games. I will in fact provide a formal model that allows us to prove both results and to understand the technical differences between them. The model has the added advantage of giving us a deeper insight into what conclusions we can draw from rationality and common knowledge of rationality. No prior knowledge will be assumed.
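A minimal sketch of backward induction may make the puzzle concrete. The Python below is an invented toy, not from the talk (the Node class, the payoffs, and the centipede-style game are all assumptions): working from the leaves up, each mover picks the action that is best for her given the already-computed play below, and in the game constructed here that reasoning ends the game at the first node even though both players would do better by passing throughout.

```python
# A toy backward-induction solver for finite games of perfect information.
# The game below is a small centipede-style game, chosen because its
# backward-induction solution looks "highly irrational" in the abstract's
# sense. All names and numbers here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Node:
    player: Optional[int] = None                # index of the mover; None at a leaf
    payoffs: Optional[Tuple[int, int]] = None   # leaf payoffs: (player 0, player 1)
    children: Dict[str, "Node"] = field(default_factory=dict)

def backward_induction(node):
    """Return the payoff profile reached when every mover, from the
    leaves up, picks the action maximizing her own payoff."""
    if node.payoffs is not None:
        return node.payoffs
    best = None
    for child in node.children.values():
        outcome = backward_induction(child)
        if best is None or outcome[node.player] > best[node.player]:
            best = outcome
    return best

leaf = lambda a, b: Node(payoffs=(a, b))
# Four decision nodes; at each, the mover may "take" (end the game) or
# "pass" (let the pot grow). Passing to the end yields (5, 5).
n4 = Node(player=1, children={"take": leaf(3, 6), "pass": leaf(5, 5)})
n3 = Node(player=0, children={"take": leaf(4, 2), "pass": n4})
n2 = Node(player=1, children={"take": leaf(1, 3), "pass": n3})
n1 = Node(player=0, children={"take": leaf(2, 0), "pass": n2})

print(backward_induction(n1))  # (2, 0): the first mover takes at once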


Greg Wheeler on coherence — 12pm on 11/14 in 716 Philosophy Hall at Columbia University

November 12, 2011

Here is the abstract for Greg’s talk:

%%%%%%%%%%%%%%%%%%%%%

Coherence at last!

Gregory Wheeler (joint work with Richard Scheines, Carnegie Mellon)

Coherentism maintains that coherent beliefs are more likely to be true than incoherent beliefs, and that coherent evidence provides more confirmation of a hypothesis when the evidence is made coherent by the explanation provided by that hypothesis. Although probabilistic models of credence ought to be well-suited to justifying such claims, negative results from Bayesian epistemology have suggested otherwise. In this essay we argue that the connection between coherence and confirmation should be understood as a relation mediated by the causal relationships among the evidence and a hypothesis, and we offer a framework for doing so by fitting together probabilistic models of coherence, confirmation, and causation. We show that the causal structure among the evidence and hypothesis is sometimes enough to determine whether the coherence of the evidence boosts confirmation of the hypothesis, makes no difference to it, or even reduces it.  We also show that, ceteris paribus, it is not the coherence of the evidence that boosts confirmation, but rather the ratio of the coherence of the evidence to the coherence of the evidence conditional on a hypothesis.
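A toy computation may help fix the final claim in mind. The sketch below is not from the paper: it assumes Shogenji's ratio measure of coherence (one standard choice; Wheeler and Scheines may use another) and a made-up distribution in which a hypothesis H is a common cause of two evidence items E1 and E2. It computes the coherence of the evidence, the coherence of the evidence conditional on H, and the ratio the abstract's last sentence mentions.

```python
# Illustrative assumptions: Shogenji's coherence measure,
# C(E1, E2) = P(E1 & E2) / (P(E1) * P(E2)), and an invented joint
# distribution where hypothesis H is a common cause of E1 and E2.

from itertools import product

p_h = 0.3                                 # prior on the hypothesis
P = {}                                    # joint P(h, e1, e2)
for h, e1, e2 in product([True, False], repeat=3):
    p_e = 0.9 if h else 0.2               # each evidence item tracks H
    P[(h, e1, e2)] = ((p_h if h else 1 - p_h)
                      * (p_e if e1 else 1 - p_e)
                      * (p_e if e2 else 1 - p_e))

def prob(pred):
    return sum(p for w, p in P.items() if pred(*w))

def coherence(given=lambda h: True):
    """Shogenji coherence of E1 and E2, optionally conditional on H."""
    z = prob(lambda h, e1, e2: given(h))
    joint = prob(lambda h, e1, e2: e1 and e2 and given(h)) / z
    m1 = prob(lambda h, e1, e2: e1 and given(h)) / z
    m2 = prob(lambda h, e1, e2: e2 and given(h)) / z
    return joint / (m1 * m2)

c = coherence()                            # ~1.61: the evidence coheres
c_h = coherence(given=lambda h: h)         # 1.0: E1, E2 independent given H
print(c, c_h, c / c_h)                     # the ratio the abstract invokes
```

Because H screens off E1 from E2 in this structure, the conditional coherence is exactly 1 and the ratio reduces to the unconditional coherence; richer causal structures pull the two apart, which is the kind of dependence on causal structure the talk addresses.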


Two items that might be of interest to the FE community

October 9, 2011

First, slides from each of the Progic 2011 talks are now available.

Second, if you happen to be around Toronto later this month, then you might want to check out the following event honoring the work of Isaac Levi.


Progic 2011 on September 10th and 11th at Columbia University

September 2, 2011

The Progic conference series is intended to promote interactions between probability and logic. The fifth installment of the series will be held at Columbia University in New York on September 10th and 11th of 2011. While several of the earlier Progic meetings included a special focus, Progic 2011 will honor Haim Gaifman’s contributions to the intersection of probability and logic. Progic 2011 will consist of 11 talks, including several invited talks (see the schedule below).

Progic 2011 will also include a memorial session to honor Horacio Arló-Costa (Carnegie Mellon), who was scheduled to speak at the conference but passed away on July 14, 2011.

Here is the schedule:

Saturday, September 10th – in 602 Hamilton Hall

Morning session

9:45-10:00 Opening remarks

10:00-11:00 Mixing modality and probability (yet again)

Dana Scott (Carnegie Mellon)

11:00-11:20 Q&A

10 min break

11:30-12:00 From Bayesian epistemology to inductive logic

Jon Williamson (Kent)

12:00-12:10 Q&A

5 min break

12:15-12:45 Ultralarge lotteries: dissolving the lottery paradox using non-standard analysis

Sylvia Wenmackers (Groningen)

12:45-12:55 Q&A

Lunch

Afternoon session

2:25-3:25 T.b.a.

Rohit Parikh (CUNY Graduate Center)

3:25-3:45 Q&A

5 min break

3:50-4:20 Coherence based probability logic: philosophical and psychological applications

Niki Pfeifer (Munich)

4:20-4:30 Q&A

20 min break

4:50-5:20 Matryoshka epistemology: the role of cores in belief and decision

Paul Pedersen (Carnegie Mellon)

5:20-5:30 Q&A

5:30-6:30 Memorial for Horacio Arló-Costa

Sunday, September 11th – in 403 IAB

Morning session

10:25-10:30 Opening announcements

10:30-11:30 Pure inductive logic

Jeff Paris (Manchester)

11:30-11:50 Q&A

10 min break

12:00-12:30 Probabilities on sentences in an expressive logic

M. Hutter (ANU), J. Lloyd (ANU), K. Ng (ANU), and W. Uther (National ICT)

12:30-12:40 Q&A

Lunch

Afternoon session

2:10-2:40 Confirmation as partial entailment: a representation theorem in inductive logic

Vincenzo Crupi (Munich) and Katya Tentori (Trento)

2:40-2:50 Q&A

5 min break

2:55-3:25 On a priori and a posteriori reasoning

Anubav Vasudevan (Chicago)

3:25-3:35 Q&A

10 min break

3:45-4:45 T.b.a.

Haim Gaifman (Columbia)

4:45-5:05 Q&A

5:05-5:15 Closing remarks


WSF Interactive Broadcast: The Illusion of Certainty

June 21, 2011

This interactive broadcast from the World Science Festival begins at 2pm on June 22 and builds on The Illusion of Certainty: Risk, Probability, and Chance, which was held at WSF on June 2. Here is the abstract from that earlier program:

“Stuff happens. The weather forecast says it’s sunny, but you just got drenched. You got a flu shot—but you’re sick in bed with the flu. Your best friend from Boston met your other best friend from San Francisco. Coincidentally. What are the odds? Risk, probability, chance, coincidence—they play a significant role in the way we make decisions about health, education, relationships, and money. But where does this data come from and what does it really mean? How does the brain find patterns and where can these patterns take us? When should we ditch the data and go with our gut? Join us in a captivating discussion that will demystify the chancy side of life.”


