NASSLLI 2012 – Scholarships Available

December 27, 2011

The North American Summer School for Logic, Language, and Information
June 18-22, 2012
University of Texas, Austin
http://nasslli2012.com/

There are 50 student scholarships available. (But hurry!)


In light of some recent discussion over at New Apps, I bring you Clark Glymour’s manifesto …

December 23, 2011

Recent posts like this and like this over at the excellent New Apps blog have generated some intense discussion and have prompted Clark Glymour to write the statement that follows. Clark asked me to post the statement on Choice and Inference, and I’m happy to do so because I think Clark keeps it real.

[Note: I went to Carnegie Mellon and believe that it is a really fantastic, cutting-edge place.]

[Update: The previous version attributed a remark of Dick Rorty's to Brian Leiter. The corrected version follows below. See [1] as well as the corresponding note at the end of this version.]

%%%%% begin Clark’s statement %%%%%

Manifesto
I am sometimes credited with the remark, due to Nelson Goodman, that “there are two kinds of people in the world: the logical positivists and the god-damned English professors.” While it’s a cute summary, I don’t agree. Departments of English provide sinecures for good authors who lack a mass audience and would otherwise go hungry or not write; they contain people who know a lot about the history of literature, and someone ought to know that. Similar plaudits apply to some faculty in history and in modern languages. Humanities departments also house faculty whose principal work is a great deal of foolishness, garbed in neolexia, who spread it to undergraduates. Nothing would be lost and something would be gained if these people were pruned from universities and offered work with brooms.

Neither do I agree about the logical positivists. Carnap’s work, and that of his disciples, such as Hempel, is largely a history of missed opportunities. Except for Gödel’s theorems, the philosophical implications of the mathematical, statistical and empirical sciences developing all around him were essentially ignored, and Carnap’s “principle of tolerance” was an invitation to triviality. As Russell put it, “God exists,” “God doesn’t exist”—no problem for Carnap, just different languages. And as Dana Scott once said, “Carnap was great at defining, but he never proved a damned thing.” Actually he did, but almost entirely elementary things, or as Awodey and Carus say of his work on categoricity, “trivial proofs.” Reichenbach, who was more closely engaged with the sciences, was ever a day late and a dollar short. His work on special relativity was unsound, and inferior to previous work in English; his quantum logic was a mess and ignored the previous good work by Birkhoff; his confused theory of probability was justly eclipsed by Kolmogoroff’s.

Richard Rorty[1] has written that contemporary philosophers are largely embarrassed by the positivists. I am not. For all I find them wanting in retrospect, Carnap was the grandfather of artificial intelligence: his students, Walter Pitts and Herbert Simon, were among the fathers. The echo of the Aufbau must have been heard in Carnap’s teaching. Reichenbach’s student, Hilary Putnam, combined computation theory, logic, and Reichenbach’s central idea about inductive inference to create the subject of computational learning theory. Reichenbach’s Elements of Symbolic Logic was the most serious attempt to formalize substantial parts of ordinary language, and for some while it had an influence in linguistics.

There is a larger reason I do not find the positivists embarrassing: the contrast case on the continent. The positivists, not just those two I have emphasized, wrote with scientific and liberal ambitions, and at least with a passing connection with mathematics and science; in a time in which philosophy on the continent was embracing obscurantism and vicious, totalitarian politics they stood for liberal politics. When National Socialism came, they left home and country, but not in some cases, as with Hempel, before helping to ferry Jews out of Germany. Compare Heidegger, whose defenses of National Socialism echo some of his philosophical views (the German language is, next to Greek, closest to “Being”) or Merleau-Ponty (Stalin’s mass murders were regrettable, but necessary to the advance of socialism). Sartre sat out much of World War II as a Vichy professor, replacing a Jew who had been dismissed. There is no thinking in these people worthy of the title; Sartre’s work varies from sophomoric (Les Mouches) to a series of puns passing as profound (L’Être et le Néant). The heirs of their remoteness from analytic thought were Lacan and Derrida and Pol Pot. Their political heirs are English professors who remonstrate about sexual oppression, but have never guided a frightened woman through a mindless, aggressive crowd to a clinic. That’s embarrassing.

Contemporary “formal philosophers” have two ancestors: Carnap, who promoted the linguistic mode they practice (logify everything) and the English mathematical philosophers, Russell and Ramsey (probabilify everything). Much as I approve of Ramsey and Russell, I do not wholly approve of either legacy. Much of the work in formal philosophy is ill-motivated technicalia, much of it is ritualized (yet another soundness and completeness theorem for yet another system of modal logic, etc.), much of it (for example, the ever growing work on singular causation) is in Carnap’s faulty spirit: definitions without proofs or algorithms and neglect of the relevant work in computer science.

Contemporary philosophy of science has another ancestor, Thomas Kuhn. He is the unwitting grandfather of the incessant summaries of scientific work, supplemented with comments vague or vapid, now passing as philosophy in many departments. Such work is more often welcome there than is “formal philosophy,” perhaps because it takes less effort to understand, or perhaps not: not much is illuminated when some simple principle about explanation is illustrated by a recapitulation of string theory.

I advocate material philosophy, and I will try to explain what I mean, which is actually rather broad, and of course rather vague. In The Dynamics of Reason, Michael Friedman wrote that the service of philosophy is to provide “new frameworks, new possibilities for science that are in some sense outside of science.” I paraphrase, and I agree. Friedman gives no examples from 20th century philosophy but there are many. I have already mentioned two, Carnap’s Aufbau and Putnam’s creation of computational learning theory, which had anticipations in other philosophical work, for example John Kemeny’s. Ramsey’s work in mathematics and in the foundations of subjective probability is another case. Each of these efforts had enormous ramifications, but I do not expect or demand that all of the work in the spirit of Friedman’s vision be so consequential. I could give a very long list of examples. I will give a few.

Patrick Suppes was among the first to realize the implications for education of the digital computer, and he inaugurated a broad project on computer instruction combined with empirical research on learning. Along the way he won the National Medal of Science. His very idea has been wonderfully continued by two of my colleagues, Wilfried Sieg and Richard Scheines. David Lewis rose to a challenge about how meanings could arise without a pre-established understanding between communicators, and in Convention he answered it. Brian Skyrms and his students have extended the basic ideas to a variety of settings, and Skyrms has used related techniques (evolutionary game theory) to speculate on the evolution of norms. Lewis contributed a logical theory where one was really needed, for counterfactuals. In the 1960s there was a lot of writing about relativity and conventionality; David Malament really understood the theory, and cleared matters up. He went on to investigate ways in which features of gravitational models are in principle underdetermined, and, perhaps as an amusement, to compute lower bounds on the energy required to execute a causal circle. Philosophers and others learned from him. From John Earman we learned how various pieces of modern cosmology do not fit together, where the holes are, and much else. From Eliot Sober we got a new take on evolution. Philosophers and statisticians alike want to posit probabilities over sentences, but how would that work with a language adequate to science and mathematics, say first order logic? Haim Gaifman told us, and worked out the implications for what is and what is not learnable. Putnam’s innovation opened the way to generalization to many epistemological and methodological issues. Gaifman, Kevin Kelly and Scott Weinstein seized the opportunity. Bayesian statisticians overlooked many fundamental issues: decisions among multiple agents, resolution of incoherence, etc. Teddy Seidenfeld and his collaborators addressed them. Peter Spirtes and Richard Scheines combined work in statistics and computer science to produce the graphical representation of causal relations, the fundamental result on the implications of such representations for experimental prediction, and the first feasible procedures for searching for such models from data. Their work is used now in many places; the website with software deriving from their ideas receives a hundred hits a week. Recently, collaborating with a computer scientist, Patrick Hoyer, Frederick Eberhardt broke outside of traditional experimental design to give almost complete procedures for learning linear structures from experiments. And so on. (My apologies to the many contributors my brief summary omits, especially to those using philosophical background to write insightfully and importantly about public policy.)

Why should this work be done in philosophy departments? At least for two reasons. Because philosophy teaches an eye for hidden presuppositions, equivocations, bad arguments generally; and because philosophy departments can be homes to brilliant people who are, at least initially, outsiders to the science of the day, people who will take up questions that may have been made invisible to scientists because of disciplinary blinkers, people who look at issues, in small ways or large, just as Friedman’s vision proposes. A real use of philosophy departments is to provide shelter for such thinkers, and in the long run they may be the salvation of philosophy as an academic discipline.

One might think this work, and much else like it, that realizes Friedman’s vision in various ways, would be an inspiration to philosophers. Not so. It is largely regarded as marginal or idiosyncratic, “not philosophy.” Philosophy, while it can be combined with empirical work, is an a priori effort, and the tools of the a priori are opinion, logic, mathematics and the theory and practice of computation. To use them, Friedman’s vision requires as well a knowledge of the sciences. Learning logic and mathematics, learning to prove and to program, or at least how to write a decent algorithm, requires some sustained effort that philosophers have largely foresworn not only for themselves but also in the instruction they give to their graduate students. The run of philosophers use, and even acknowledge as philosophical tools, only the first, called “intuition.” (I am reminded of a remark by a philosopher, Laurie Paul in fact, who complained when I used a bit of elementary Boolean algebra in a lecture that philosophers should not be expected to know such things. In one sense of “expected” she was, alas, right.) I do not think philosophical work based only on intuition is always worthless, but it is a little bit like refusing to learn to walk on perfectly good legs and instead walking on your fingertips. It is obtuse.

Of late it has been remarked that there is a sociological break in philosophy. More a fragmentation, I should say. Conventional analytic philosophy–analytic metaphysics, theoretical ethics, traditional epistemology, philosophy of mind–has become cramped and parochial, a subject on the verge of swallowing itself. The same could be said for a good deal of formal philosophy. As Tim Maudlin put it to me once, normal science may be boring but it produces something; normal philosophy is boring and produces nothing. (Again, I paraphrase.)

Salvation? Were I a university administrator facing a contracting budget, I would not look to eliminate biosciences or computer engineering. I would notice that the philosophers seem smart, but their writings are tediously incestuous and of no influence except among themselves, and I would conclude that my academy could do without such a department. (Phi Beta Kappa would protest, of course.) But not if I found that my philosophy department retrieved a million dollars a year in grants and fellowships, and contained members whose work is cited and used in multiple subjects, and whose faculty taught the traditional subject well to the university’s undergraduates. I am in such a department, and I will never again be a university administrator, but the time is here when many university administrators are in fact in the situation I imagine, and some of them may come to conclusions like mine.

Clark Glymour

[1] In a previous post this was misattributed to Brian Leiter. My apologies to Professor Leiter.


Kyburg Special Issue in Synthese

December 18, 2011

Horacio Arló-Costa and I were finishing the Synthese special issue on Henry Kyburg this past summer when Horacio passed away. The issue is in the queue for final production, but the papers are all available through Online First.

It is a terrific collection of papers, which stands as much as a tribute to Horacio as it does to Henry.  Thanks again to all of the contributors, and thanks especially to John Symons and Vincent Hendricks at Synthese for championing the project.

 


CfP: ESSLLI 2012 Student Session

December 17, 2011

ESSLLI 2012 STUDENT SESSION
The 24th European Summer School in Logic, Language and Information

Opole, Poland, August 6-17, 2012

Deadline for submissions: March 20, 2012

http://loriweb.org/ESSLLI2012StuS/

ABOUT:

The Student Session of the 24th European Summer School in Logic, Language, and Information (ESSLLI) will take place in Opole, Poland on August 6-17, 2012. We invite submissions of original, unpublished work from students in any area at the intersection of Logic & Language, Language & Computation, or Logic & Computation.


Touching remarks concerning the origins of the PGR

December 8, 2011

As the most powerful man in academic philosophy, law professor Brian Leiter needs no introduction to the philosophical community. Equipped with a deep understanding of measurement, Professor Leiter, with the help of several other expert methodologists, has prepared the latest version of The Philosophical Gourmet Report. With all of the serious discussion that surrounds the PGR, I enjoyed reading the following lighthearted remarks by Professor Leiter concerning the origins of the PGR:

“I get asked this at various intervals, and while it’s been covered in some news stories over the years, I think I’ve never posted the explanation here, so I might as well to satisfy the curiosity of anyone who is curious.

I first produced a short version of the PGR in 1989, when I was a PhD student at Michigan.  It was for undergrads at Michigan thinking about grad school in philosophy, and it was based on the research I had done on PhD programs in philosophy prior to coming to Michigan.   In the 1980s, one of the best-known rankings was the “Gourman Report,” by Jack Gourman, a Cal State poli sci professor, who ranked all fields (and assigned minute numerical differences:  e.g., Princeton was 4.89 in philosophy, but Pitt was 4.82), but never disclosed the methodology.  My suspicion was that Gourman simply adjusted his rankings every few years based on the most recent National Research Council ranking (this was when the NRC actually did useful reputational surveys)–so, in the 1980s, the last one was 1982.  And it was already becoming out-of-date when I was a senior in college in 1983-84.   Anyway, I called my type-written report on philosophy PhD programs in 1989 the “Anti-Gourman Report.”

To my surprise, it was popular not just with the undergrads at Michigan, but with my fellow students, who asked if they could photocopy it and send it to friends at their undergrad schools.  And so it began.  I updated it each year, giving my ‘gestalt’ sense of programs, listing major faculty moves, and so on.   As it grew more and more popular via the informal photocopy method of distribution, I decided I better change the name, lest Jack Gourman get cranky!  Since Gourman was close to Gourmand, and since I wasn’t catering to Gourmands, but Gourmets, I settled on….”

I can only speculate as to how many other leading philosophers spent as much time ranking things during their graduate studies.  Perhaps this is an occasion for a poll!


Joe Halpern: Substantive Rationality and Backward Induction

December 1, 2011

The following news is from Yang Liu and Rush Stewart over at FPCU (http://blogs.cuit.columbia.edu/logic/):

Substantive Rationality and Backward Induction
Joe Halpern (CS, Cornell)
Friday, December 9, 11 AM
716 Philosophy Hall, Columbia University

Abstract.
Some of the major puzzles in game theory today involve the notion of rationality. Assuming that all players are rational, and know that they are all rational, and know that they know, etc., results in strategies that seem highly irrational. At the 1998 TARK (Theoretical Aspects of Rationality and Knowledge) conference, there was a 2.5-hour round table, involving some leading game theorists and philosophers, on “Common knowledge of rationality and the backward induction solution for games of perfect information”. During the discussion, Robert Aumann stated the following theorem:

  • Common knowledge of substantive rationality implies the backward induction solution in games of perfect information.

Robert Stalnaker then stated the following theorem:

  • Common knowledge of substantive rationality does not imply the backward induction solution in games of perfect information.

In this talk I will carefully explain all the relevant notions (games of perfect information, knowledge and common knowledge, strategies, rationality, and substantive rationality) and explain why, although both Aumann and Stalnaker were apparently using the same definitions, they were able to (correctly) prove such different results. The key turns out to lie in getting a good model of counterfactual reasoning in games. I will in fact provide a formal model that allows us to prove both results and to understand the technical differences between them. The model has the added advantage of giving us a deeper insight into what conclusions we can draw from rationality and common knowledge of rationality. No prior knowledge will be assumed.
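
For readers who would like to see the last of these notions concretely: on any finite game tree of perfect information, the backward induction solution can be computed mechanically by solving the last mover’s choices first and working back to the root. The Python sketch below is mine, not Joe’s; the Node class, the action labels, and the toy payoffs are illustrative assumptions rather than anything from the talk.

# A minimal sketch of backward induction on a finite game tree of perfect
# information. The Node class and the toy example below are illustrative
# assumptions, not material from the talk.

from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple


@dataclass
class Node:
    """A node in a perfect-information game tree."""
    player: Optional[int] = None               # index of the player who moves here; None at a leaf
    payoffs: Optional[Tuple[int, ...]] = None  # payoff vector at a leaf
    children: Dict[str, "Node"] = field(default_factory=dict)  # action label -> subtree


def backward_induction(node):
    """Solve the subtree rooted at `node`; return (payoff vector, action chosen here)."""
    if node.payoffs is not None:               # leaf: payoffs are fixed, nothing to choose
        return node.payoffs, None
    best_action, best_value = None, None
    for action, child in node.children.items():
        value, _ = backward_induction(child)
        # the player who moves here picks the action maximizing her own payoff component
        if best_value is None or value[node.player] > best_value[node.player]:
            best_action, best_value = action, value
    return best_value, best_action


# A two-stage, centipede-like example with players 0 and 1: mutual continuation
# would give (3, 3), yet backward induction has player 0 stop at the first move.
leaf = lambda *p: Node(payoffs=p)
game = Node(player=0, children={
    "continue": Node(player=1, children={"continue": leaf(3, 3), "stop": leaf(0, 4)}),
    "stop": leaf(2, 1),
})
print(backward_induction(game))  # -> ((2, 1), 'stop')

With those payoffs, the computed solution has player 0 stop at once for (2, 1), even though both players continuing would have yielded (3, 3); that gap between the backward induction outcome and what cooperation could achieve is the flavor of the puzzle the abstract describes.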

