After a longish break, the Montague Grammar series returns with the first installment discussing Montague’s actual analysis of his fragment of English. Montague’s analysis of nouns (proper and common) hinges on logical devices known as generalized quantifiers, which were first studied by Mostowski in the 1950s and Lindström in the 1960s. They noticed that some concepts, like “there exist uncountably many X”, were not definable in terms of the “ordinary” existential and universal quantifiers: new quantifiers, the generalized quantifiers, had to be introduced. Generalized quantifiers are not of a greater “order” than ordinary quantifiers: first-order generalized quantifiers make use of exactly the same model-theoretic framework as the ordinary quantifiers, but divvy up its structures in different ways. Montague’s logic is a “higher-order” intensional logic, but the same principles apply: his new quantificational devices do not fundamentally alter the theory of types on which they are based.
There are three generalized quantifiers introduced in PTQ. The first one is written with a combining inverted breve: Î. This symbolizes λuφ, the set of all objects which, when substituted for the variable u, make the proposition φ hold. The second one is written with a circonflex, used in intensional logic to symbolize the intension of a word: ^uφ represents “the property of objects of type u expressed by φ”. The third generalized quantifier, written with an asterisk, applies the breve quantifier to properties of individual concepts (intensions): a* is equivalent to Î[I(^a)], or the set of properties which the intension of the term a (the function determining what object it is in various possible worlds) has. If we iterate the asterisk, we get the set of properties of properties of an individual concept (in PTQ this is written with a cursive P, and forms the basis of Montague’s semantic analysis of the word “to be”).
These devices are necessary to incorporate the “puzzles of intensionality” which motivated Montague Grammar, such as the difference between de re and de dicto readings of “John talks about a unicorn”, into its semantic analysis: in particular, the asterisk is used to symbolize the “meaning” of a word, since no matter whether one is using an expression in an extensional or an intensional context it can be defined by all the properties of its “sense”. (In early Montague Grammar seminars, they used to ask “What is the meaning of life?” and answer by writing “life*” on the chalkboard, but this really means that the meaning of life is everything that is true of every way it could be.)
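If the asterisk seems elusive, a few lines of Python may help. This is a toy, purely extensional sketch of the idea (it ignores intensions and possible worlds entirely, which is precisely what PTQ does not do): treat a “property” as a set of individuals, and treat a proper name not as denoting an individual but as denoting the set of all properties that individual has.

```python
# Toy extensional sketch: a "property" is a set of individuals, and the
# analogue of a* is the set of all properties the individual falls under.

properties = {
    "talks": frozenset({"john", "mary"}),
    "flies": frozenset({"pegasus"}),
}

def star(individual):
    """The toy analogue of a*: the set of (names of) properties the individual has."""
    return {name for name, extension in properties.items() if individual in extension}

# "John talks" comes out true because talking is among John's properties.
print("talks" in star("john"))   # True
print("flies" in star("john"))   # False
```

The payoff of this “type-raised” treatment is uniformity: quantified noun phrases like “every man” are also sets of properties, so proper names and quantifier phrases can occupy the same grammatical slots.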
Earlier this month (a productive one for posts, if nothing else) I offered a speculative application of hybrid modal logic to literary language. In thinking about that sort of thing, I’m operating at the limits of my abilities, which are well beyond the limits of what can be coherently conceived and formulated: but I think there’s room for a little more clarity, so I’m going to elaborate on a connection between hybridity and Heidegger’s concept of Ereignis that occurred to me some years ago.
In German Ereignis ordinarily means “event” or “a happening”; but it occurs throughout Heidegger’s work with slightly different valences, and assumes a central place in his thought following the Kehre or “turn” after Being and Time. His “secret book” from that time, only published in 1989, bore the title Beiträge zur Philosophie (vom Ereignis); this was rendered by the English translators as Contributions to Philosophy (from Enowning). Now, “enowning” is probably not a word you’ve been throwing around in your everyday speech, but there’s an etymological logic to it: eigen means “own”, in the sense of one’s own, and forms the root of the more familiar Heideggerian term eigentlich (“authentic”). Heidegger plays off this etymological origin in his use of the term, which leads other translators to render the word as “event of appropriation” or “propriation”; for his own part, Heidegger insisted the word was untranslatable.
Well, what the hell is Ereignis? Understanding the concept requires situating it within Heidegger’s later conception of the “history of being”; I’ve talked smack about that view of the history of philosophy before, but for the moment we’ll give him his due. For later Heidegger, “being” is not a brute fact or timeless dimension of human experience but something that irrupted into human consciousness with the Greeks and can undergo decisive changes (such as he hoped the Nazi-Zeit would bring). Ereignis is a word for that irruptive dimension, the historical point at which thought can latch onto Being: it is equally implicated in thought, being, and history.
What could this possibly have to do with the expressive facilities for talking about individual possible worlds? Well, consider this: each use of a “nominal” to refer to a point is a miniature instance of Ereignis (though Heidegger called it a singulare tantum, Latinspeak for a noun used only in the singular, we won’t reopen that issue at this point). Using the nominal, we can talk about how something is in a way distinct from its “essence” as parceled out over different possible configurations. Furthermore, I think there is a natural-language phenomenon which illustrates this very nicely: plays on words like the title of this post, which “hybridize” sayings and phrases in a way that subverts figural expectations. That sort of hybrid language creates a path of access to a truly singular description, one which breaks the bonds of the “eternal return” of metaphor in cliche and offers access to a form of words which says one thing, be it true or not. (I’m attracted to this as a theory of prose, but perhaps it has the consequence that one is not speaking prose without even knowing it after all.)
Earlier this decade, I resolved not to kill myself until Blackburn, de Rijke, and Venema’s Modal Logic came out in paperback. It was a good decision, since it’s a terrific book; I still haven’t absorbed everything in it, but I’ve had five years or so to think about the “high points” of their exposition. One of these is their introductory treatment of hybrid logic, which was invented by Arthur Prior in the ’60s and which has gradually become a topic of widespread interest: today there is an entry in the Stanford Encyclopedia of Philosophy, but I’ll provide a slightly different exposition for very different purposes here. I’ve already talked quite a bit about Kripke semantics for modal logic, which hinges on “points” or “possible worlds” at which modal formulas are evaluated to determine their validity or invalidity. The basic idea of hybrid logic is to add devices for referring to particular points by name, so that instead of only quantifying across all possible worlds we can evaluate a formula at one specific point.
The concept came to Prior in tense-logical form, as a formalization of sentences describing events occurring at a particular point in time; but it can be generalized to semantics for alethic, deontic, and epistemic modalities as well. The main hybrid operator is usually written in the form “@nq”, which like the email symbol indicates that “at” the possible world n under consideration (which is designated by a “nominal”, a name for the state) proposition q holds. Now, the proof theory and model theory of hybrid logic are well understood by this point, but the practical functions it might serve in analysis are less clear: I’d like to suggest one that’s been kicking around my head for a while. Ordinary modal logic is often used to formalize phenomena that are “modally robust”, like counterfactual causal dependency: “if the proper conditions obtained, q would happen”. Such things are, in the language of the neo-Kantian Wilhelm Windelband, “nomothetic”: they illustrate a world of laws (such as Wilfrid Sellars claimed concepts were unthinkable without).
Now, to my mind hybrid logic tackles the other area outlined by Windelband, the “idiographic”. In history and other Geisteswissenschaften, attention must be paid to individual persons and events using Verstehen, powers of understanding that do not operate according to strict, exceptionless laws. Hybrid modalities give us the expressive resources to talk about individual “states of affairs”: in fact, it’s possible to translate the whole of first-order logic into hybrid logic (provided we introduce an additional hybrid operator ∀ quantifying over points). It seems to me this is also a serviceable representation of the non-metaphorical description of human events, that which does not strictly fall under the ambit of natural laws or those of “poetic politics” without being meaningless: obviously our access to such truths is a bit more fallible than what we can merely deduce, but even those who scoff at “unscientific” social theory can hardly do without such observations.
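To make the @-operator concrete, here is a minimal sketch of evaluating hybrid formulas over a toy Kripke frame (my own illustration, not anything from Blackburn et al.): formulas are nested tuples, nominals name individual worlds, and @ shifts evaluation to the world a nominal picks out.

```python
# A toy hybrid-logic evaluator: nominals are names for single worlds, and
# the @-operator evaluates its formula at the named world.

worlds = {"w1", "w2"}
access = {("w1", "w2"), ("w2", "w2")}
valuation = {"q": {"w2"}}
nominals = {"n": "w2"}            # the nominal n names world w2

def holds(formula, world):
    op = formula[0]
    if op == "atom":              # an ordinary proposition letter
        return world in valuation[formula[1]]
    if op == "nom":               # a nominal is true at exactly one world
        return nominals[formula[1]] == world
    if op == "at":                # @n φ: evaluate φ at the world n names
        return holds(formula[2], nominals[formula[1]])
    if op == "dia":               # ◊φ: φ holds at some accessible world
        return any(holds(formula[1], v) for u, v in access if u == world)
    raise ValueError(op)

# @n q is true no matter where we evaluate it, since q is true at n's world:
print(holds(("at", "n", ("atom", "q")), "w1"))   # True
```

Notice how @n q behaves “idiographically”: its truth-value is anchored to one particular point, regardless of the world from which we ask.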
I finally finished Badiou’s Being and Event; I’m not sure I would break his ideas out at job interviews (though any philosophy would probably be out of place at the job interviews I’m likely to have), but I really found it pretty educational and not very implausible. However, I am enough of a partisan of what Badiou elsewhere calls “the little style” in philosophy of logic to want to take his philosophy and recast it in more familiar analytic contexts; one of which is, of course, “context” itself. Since the discovery of “double indexing” by Hans Kamp in the ’60s, the importance of assessing utterances in temporal and locational context (in fact, assessing them in all the different contexts that make essential contributions to their meaning) has bulked large in formal semantics. Context also includes the relevant parts of prior discourse: Kamp’s “Discourse Representation Theory” and Irene Heim’s “file-change semantics” attempt to demonstrate how anaphoric dependency of a pronoun on an earlier noun phrase works by a process of contextual adjustment.
Finally, context includes the nebulous category of “common knowledge”, what is presupposed by participants in a communicative interaction; theories like the Gricean theory of “implicature” attempt to account for features of discourse that exploit “maxims” of conversational practice to convey more than they literally say. However, this side of pragmatics — though it is potentially the most interesting — founders on theoretical problems that do not confront theories of indexicals or other “explicit” context-dependent items. Grice’s work elaborates how we might construct the meaning (that is, the intention in the mind of the speaker which they wish to convey) of a clever double entendre by considering the plain form of the words against the background of conversational dynamics; I once tried to clarify this dual character of the implicature (the fact that we can “intuit” its meaning and yet systematically work it out) by drawing an analogy to the “realizability” semantics for mathematical constructivism.
Enter Badiou? Perhaps. Being and Event is thoroughly opposed to mathematical constructivism of the intuitionistic variety and also to the less-commonly-explored implications of “constructibility” in set theory (he misses a chance to include “minimal” logic — the subintuitionistic logic which excludes the rule ex falso quodlibet, “conclude anything you like from a logical falsehood” — at a place in such considerations where it would have been appropriate, but it is of limited mathematical interest anyway). Badiou exalts the power of “generic sets”, sets not constructible by specification conditions but which can be approached sidelong through the technique called “forcing”: to him, they are ontologically equivalent to the field of truths, which cannot be collapsed into mere knowledges amenable to constructivistic-nominalistic treatment. Now, the reader may say, “What could this possibly have to do with context in semantics? Badiou is not especially sympathetic to the idea of language as the major determinant of thought anyway.” I think it might be important (in no very fetishistic way) to consider one such possible connection.
Robert Stalnaker has formalized some of the above pragmatic considerations by speaking of a “context set”, a set of possible worlds compatible with the information which has been presented to date: context change changes the context set. Perhaps the context set is a “generic set”, and instead of “what is said” in an utterance directly or through implicature being solely a matter of the appropriate constructions we enter the domain of Badiou’s “truth-procedures”, where “militancy” and “fidelity” to the truth are important (rather than merely oodles of bon sens). On this view, determining the symbolic valences or “poetic meaning” of an utterance would be equivalent to Badiou’s definition of scientific truth as a practical application of “forcing conditions” being compatible with a statement extending our knowledge: not forced on us by an irresistible Logic Of The Real, but a selective, partially unsystematic, and perhaps in a very properly Freudian sense partially unconscious process that still yields meaningful results.
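The mechanical part of Stalnaker’s picture, at least, is easy to sketch (the Badiouan part, of course, is not): a context set is a set of possible worlds, a proposition is the set of worlds where it holds, and assertion updates the context by intersection.

```python
# Stalnaker-style context change in miniature: asserting a proposition
# shrinks the context set to the worlds compatible with it.

context_set = {"w1", "w2", "w3", "w4"}
propositions = {                    # a proposition = the set of worlds where it holds
    "it_is_raining":  {"w1", "w2"},
    "john_is_inside": {"w2", "w3"},
}

def assert_prop(context, prop):
    """Keep only the worlds compatible with the newly asserted proposition."""
    return context & propositions[prop]

context_set = assert_prop(context_set, "it_is_raining")
context_set = assert_prop(context_set, "john_is_inside")
print(context_set)   # {'w2'}: the one world compatible with everything said
```

The suggestion above amounts to saying that the interesting context sets may not be reachable by any such tidy sequence of intersections — that is what makes the analogy to generic sets tempting.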
It’s time for the world’s shortest and most simple-minded introduction to the model theory of modal logic. Since “model theory” generally employs some fairly exotic concepts, I suppose it’d be best to begin by trying to concretize the idea of a model of a sentence. A model of a logical sentence establishes a systematic correspondence between the parts of that sentence and mathematical entities possessing the same formal properties. Since these sentences, like sentences in a natural language, can be indefinitely complicated by operations for building new sentences out of old ones (like joining two sentences by “and”), establishing this correspondence for all sentences requires a way of “disassembling” an arbitrary sentence: as I’ve remarked before, in model theory this takes the form of a recursive definition, where the meaning of a longer sentence is defined in terms of its subsentences until we reach semantic primitives, which are “satisfied” (represented) by arbitrarily chosen mathematical entities. For example, the recursive definition for a logic involving disjunction (“inclusive or”) would feature a clause stating this:
“X v Y” is satisfied if either X is satisfied or Y is satisfied.
All the regular “extensional” connectives have relatively simple clauses like this: when we get to the quantifiers things get a little more complicated, but Tarski solved that problem in the ’30s by making the mathematical entities satisfying sentences infinite sequences of objects and assigning variables to specific positions in those sequences (“for all x” is satisfied by a sequence if every sequence differing from it in at most the xth position satisfies the subsentence, “there is some x” if at least one such sequence does). He used this complete recursive definition of logical sentences to define a logical truth as a sentence which cannot fail to be satisfied. What took a longer time was figuring out how to model-theoretically represent modal or “intensional” operators, which make the meaning of a sentence not a straightforward truth-functional consequence of the meaning of its component parts; Kripke solved this problem by developing a model theory using “possible worlds”. A model of an ordinary first-order language can be given as an ordered pair, a domain of objects and a satisfaction relation: a model of a modal language consists of an ordered triple, written <W,R,V>. Let me explain each element.
W is the set of possible worlds; V is the valuation, which specifies which propositions are true in which particular possible worlds; and R is the “accessibility relation”, which determines how relevant truth in one possible world is to truth in another. If a statement is true in an accessible world, the statement is possibly true in the world under consideration (symbolized ◊x): if a statement is true in all accessible worlds, the statement is necessarily true in the world under consideration (□x). Varying the accessibility relation is very important for comparative study of modal logics: a different accessibility relation gives you a model of a different modal logic. Conveniently for us, Montague uses the most intuitive accessibility relation, an equivalence relation: it is reflexive (world x is accessible from itself), transitive (if x is accessible from y and y is accessible from z, x is accessible from z), and symmetric (if x is accessible from y, y is accessible from x). This means that the worlds fall into clusters within which every world is accessible from every other, such that a statement which is true in some possible world of a cluster is possibly true throughout it, or is necessarily possible: this is the model-theoretic counterpart of an axiom of the modal logic S5, ◊x→□◊x, and S5 is the logic defined by equivalence-relation models.
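Here is the triple <W,R,V> rendered as a toy Python model checker (my illustration, not Montague’s), which we can use to confirm that ◊x→□◊x holds everywhere when R is an equivalence relation — here, for simplicity, the total relation on W.

```python
# A toy Kripke model <W,R,V> with an equivalence accessibility relation,
# used to check the S5 pattern: ◊x → □◊x.

W = {"u", "v", "w"}
R = {(a, b) for a in W for b in W}     # total relation: reflexive, transitive, symmetric
V = {"x": {"v"}}                       # the proposition x is true only at world v

def true_at(formula, world):
    op = formula[0]
    if op == "atom":
        return world in V[formula[1]]
    if op == "dia":    # ◊φ: φ is true at some accessible world
        return any(true_at(formula[1], b) for a, b in R if a == world)
    if op == "box":    # □φ: φ is true at all accessible worlds
        return all(true_at(formula[1], b) for a, b in R if a == world)
    raise ValueError(op)

dia_x = ("dia", ("atom", "x"))
# ◊x → □◊x comes out true at every world of this model:
print(all((not true_at(dia_x, w)) or true_at(("box", dia_x), w) for w in W))  # True
```

Swapping in a non-symmetric R would let you watch the axiom fail, which is a nice way to feel the correspondence between frame conditions and modal logics.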
But the language of metaphysical possibility and necessity, or “alethic” modality, is not the only intensional logic possible. One other such logic is tense logic, which has historical roots in the ancient and medieval philosophy of time but erupted into the modern philosophical consciousness through the work of Arthur Prior. Ordinary tense logic is “multi-modal”, featuring two primitive modalities Gx (x is going to be the case) and Hx (x has been the case); read existentially, as they are here, these are the operators standardly written F and P, with G and H usually reserved for the universal readings “it is always going to be the case” and “it has always been the case”. These can be combined to express many of the statements about time, such as Gx → GHx (if it is going to be the case that x, after that point it is going to be the case that x has been the case). Ordinary tense logic requires an accessibility relation modeling the order of instants of time before and after other instants, and Montague chooses a linear order like “less than or equal to”, which is reflexive, transitive, and antisymmetric (if a is accessible from b, and b is accessible from a, then a must be b). Gx is true if x is true at some instant of time following (accessible from) the instant of time under consideration, and Hx is true if x was true at some instant of time which the instant of time under consideration is accessible from.
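The tense operators, on the existential reading given above, can be sketched over a handful of integer instants with ≤ as the accessibility relation (again a toy of my own, not Montague’s formalism):

```python
# Tense operators over the instants 0..4, with the reflexive order <= as
# accessibility. G: "x at some later-or-equal instant"; H: "x at some
# earlier-or-equal instant" (the existential readings used in the post).

instants = range(5)
true_at_instant = {"x": {2}}      # x holds only at instant 2

def G(prop, t):
    """x is going to be the case: x holds at some instant t2 with t <= t2."""
    return any(t2 in true_at_instant[prop] for t2 in instants if t <= t2)

def H(prop, t):
    """x has been the case: x held at some instant t2 with t2 <= t."""
    return any(t2 in true_at_instant[prop] for t2 in instants if t2 <= t)

print(G("x", 0))   # True: x lies in the future of instant 0
print(H("x", 0))   # False: x has not yet been the case at instant 0
print(H("x", 3))   # True: by instant 3, x has been the case
```

The antisymmetry of ≤ does the work here that symmetry did for S5: it makes the “worlds” behave like an ordered timeline rather than a cluster of mutually accessible alternatives.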
There is one further application of modality in Montague Grammar: intensions. The arbitrary mathematical entities representing things and truth-values in Montague Grammar are called “e” and “t”: Montague says we can think of them as the numbers 0 and 1, if we like (i.e. it’s not important what they actually are). He adds to this a third entity, “s”, representing “senses” or intensions: these three categories combine type-theoretically, as in <s,t>, a function from senses to truth values. Generally speaking an intension is a function from possible worlds to “extensions”, either things or truth-values: the intension of “blue” would be a function pairing each possible world with the set of things which are blue in that possible world. Carnap developed intensions as a way of modeling Frege’s idea of the Sinn or “cognitive significance” of a word, not just what it happened to refer to but what it would refer to if the world were different (or, as the case might very well be, we believed it was different): the adequacy of this as an interpretation of Frege has been hotly contested for decades, but a highly ramified use of intensions is critical for Montague’s analysis of the meaning of words, even common nouns like “ball”.
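An intension in this sense is simple to sketch: a function (here, a dictionary of my own invention) from possible worlds to extensions, which we evaluate at a world to recover the familiar extensional meaning.

```python
# A toy intension for "blue": a function from possible worlds to extensions,
# i.e. to the set of things which are blue at each world.

blue_intension = {
    "w_actual": {"sky", "ocean"},
    "w_other":  {"sky"},          # at w_other the ocean is, say, green
}

def extension_at(intension, world):
    """The extension of an expression at a world is its intension applied to that world."""
    return intension[world]

print("ocean" in extension_at(blue_intension, "w_actual"))  # True
print("ocean" in extension_at(blue_intension, "w_other"))   # False
```

This is why intensions can model cognitive significance: “blue” and a predicate that merely happens to have the same actual extension come apart at other worlds.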
Having discussed categorial grammar, I can introduce a logical notation employed by Montague which in some respects runs counter to it in intention: the “lambda calculus”. In the early 1930s, the logician Alonzo Church was searching for an alternative to axiomatic set theory for formulating fundamental mathematical principles. Instead of the “functions-as-graphs” concept set theory borrowed from Frege, Church wanted to use the more intuitive conception we have of mathematical functions as methods or rules for deriving an answer by following precise steps. The solution he came up with, the lambda calculus, involves two complementary ideas — a way of specifying the methodical content of a function, and a way of computing that function for a specific value.
The first operation, called “function abstraction”, takes a variable and indicates the procedure the value of the variable is to be substituted into. The variable is written after a lower-case Greek lambda (whence the name) and before a period separating it from the expression of the procedure: λx.x+2 signifies the function that takes a number and adds two to it. Function abstractions can be nested within other function abstractions: using a procedure developed before Church by Moses Schoenfinkel (but known as “currying”, after Haskell Curry, who developed ideas similar to Church’s contemporaneously) functions of two variables can be represented by iteration of function abstractions using only one variable: λx.λy.(x+y) represents the familiar procedure of adding one variable to another. As originally conceived, lambda abstraction could employ predicates as well: λP.”John is P” would symbolize a function taking any predicate and applying it to John — but although a variant of this predicate abstraction is crucially important for Montague Grammar, it is fraught with peril, as I will explain below.
Like quantifiers, the lambda expression binds the variable in the expression: if not all variables in an expression are bound or “spoken for” by lambdas, either directly or by currying, then the function is not fully defined. If the function is fully defined, then we can generate a result by the operation of “function application”: written (f)x, it returns the value of the function f for the value x. The application (λx.x+2)3 returns the value 5, for example, since 3 is substituted in the expression x+2 and that expression is evaluated. So far, the lambda calculus might seem superfluous, since we already know how to carry out the operation of defining a function and evaluating it. However, “currying” gives a little taste of the power of defining mathematical concepts this way: and in fact all the mathematical objects used in set theory can be given “functional” definitions using the lambda calculus.
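Python’s own lambda makes these examples runnable directly: abstraction, application, and the curried two-place function all carry over.

```python
# Function abstraction and application with Python's lambda.

add_two = lambda x: x + 2            # λx.x+2
print(add_two(3))                    # 5, as in (λx.x+2)3

# Currying: λx.λy.(x+y), a one-variable abstraction returning another one.
curried_add = lambda x: lambda y: x + y
print(curried_add(2)(3))             # 5: apply to 2, then apply the result to 3
```

Note that curried_add(2) is itself a perfectly good function (“add two to things”), which is the point of currying: partial application comes for free.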
For example, function abstraction doesn’t have to be tied to a “concrete” mathematical procedure: we can put a lambda next to a variable ranging over functions, and define a “function of functions” like composition. Even the natural numbers can be defined using the lambda calculus: in “Church numerals”, the number 0 is represented as λf.λx.x, a function which takes another function and applies it to x 0 times, returning x. In fact, the lambda calculus was so powerful one could easily derive a contradiction similar to Russell’s paradox for naive set theory by abstracting over predicates, as was quickly noticed by Church’s students Stephen Kleene and J. B. Rosser. There are two ways around this. One way is to stick with the contradiction-free fragment of lambda calculus abstracting only over functions, the “λI-calculus”; and although this notation is not powerful enough to derive all set-theoretic concepts it is far from useless, as it is expressively equivalent to the formal model of computation devised by Church’s student Alan Turing, the “Turing Machine”.
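Church numerals can likewise be written out in Python: a numeral n is the function that applies f to x exactly n times, and a decoder just counts those applications.

```python
# Church numerals: the number n is "apply f to x, n times".

zero = lambda f: lambda x: x                      # λf.λx.x — applies f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f

def to_int(n):
    """Decode a Church numeral by counting applications of +1 starting from 0."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(zero))    # 0
print(to_int(three))   # 3
```

Addition, multiplication, and the rest can be built up in the same style, which is what it means to say the numbers themselves get “functional” definitions.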
Consequently, virtually all programming languages employ methods similar to the lambda calculus for specifying subroutines, and the “functional” programming languages directly emulate the lambda calculus’s ability to specify functions of functions (in fact, programs in languages like Haskell are “desugared” into a version of the lambda calculus during compilation). However, the λI-calculus isn’t quite enough for the purposes of Montague Grammar — so I need to say a little bit about the other way around the paradoxes, the typed lambda calculus. The theory of types was introduced by Bertrand Russell to deal with the set-theoretic paradoxes: in all its forms, it amounts to carefully circumscribing the “levels” involved in a mathematical operation, to prevent paradoxical entities like “the set of all sets which are members of themselves”. Using a variant of the theory of types developed by Frank Ramsey, Church introduced a contradiction-free version of the lambda calculus. In the typed lambda calculus, the “type” of the function input and its output are both specified using the notation A→B: the function can only accept inputs of type A and return outputs of type B.
As before, this restriction becomes more intelligible when you consider more complicated formulations: a function which takes a function as argument and returns another function would be typed (A→B)→(A→B); the input must have the type of a function, A→B. Going into how Montague Grammar uses typed lambda expressions would be too much too soon, but it is critically important that the prospective Montague Grammarian develop some facility with them. (If my exposition has left you cold, the paper-puzzle game Alligator Eggs may trick you into “getting” these concepts.)
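Python’s type hints give a rough feel for the discipline (taking the simplest case, where A and B coincide): a function typed (A→A)→(A→A) accepts only functions whose input and output types line up, and returns another such function.

```python
# A sketch of the A→B typing discipline via Python type hints: "twice"
# has type (A→A)→(A→A), so it only accepts self-composable functions.

from typing import Callable, TypeVar

A = TypeVar("A")

def twice(f: Callable[[A], A]) -> Callable[[A], A]:
    """Take a function of type A→A and return the A→A function applying it twice."""
    return lambda x: f(f(x))

add_two = twice(lambda n: n + 1)   # an int→int argument yields an int→int result
print(add_two(5))                  # 7
```

A static checker would reject twice applied to a function of mismatched type, which is the typed lambda calculus’s way of heading off the paradoxes: ill-leveled applications simply fail to be well-formed.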
I’ll start off the series on Montague Grammar today. The exposition will follow Montague’s paper “The Proper Treatment of Quantification in Ordinary English”, since that has historically been the most influential of Montague’s semantic writings: I guess I might say something about “Universal Grammar” at the end, although I expect the reader will find mastering the ideas of “PTQ” to be more than enough. As for acquiring your own copy, “PTQ” was published after Montague’s untimely death in a Synthese volume, then reprinted in Formal Philosophy, Montague’s collected papers. Most large university libraries will have Formal Philosophy and the paper is (rather unhappily) short and suitable for photocopying: however, it was also recently made available again in the anthology Formal Semantics, co-edited by Barbara Partee (a linguist who is responsible for Montague’s posthumous influence in that discipline).
The paper has four sections: the first is devoted to syntactic rules for a fragment of English — which is small, but includes a number of “intensional” verbs that make trouble for less complicated semantic approaches. These syntactic rules make use of categorial grammar, the topic of this post. I think explaining categorial grammar in sufficient generality, for people who might have no more “mathematical sophistication” than I had when I started out reading this stuff, will require going pretty far back: back, in fact, to Dirichlet’s definition of a mathematical function. When we are using them naively, to calculate results, mathematical functions seem to be “rules” which we follow to arrive at a certain result. But this approach is fraught with unclarity, and a major advance in the foundations of mathematics occurred when Gustav Lejeune Dirichlet defined a mathematical function as a collection of ordered pairs, one element of a pair being from the domain and another from the range; such a structure is known as a “graph”.
Now, if you took high school algebra after the introduction of “New Math” (i.e., are not seventy years old) someone once tried to teach you this definition; maybe it even took. But the real power of Dirichlet’s definition comes when you consider “higher-order” functions like composition, where you feed the results of one function into another function. Getting the composition of “functions-as-rules” straight in your head is very tricky, but the definition in terms of graphs is simple: just as one function can be represented by ordered pairs <d, r>, composition can be represented by a graph which pairs an ordered pair of the two functions being composed with the composed function, whose pairs run from the first function’s domain to (a subset of) the second function’s range — schematically, <<<a, b>,<c, d>>,<a, d>>. In this way, you can explain the functional articulation of mathematical concepts with ease.
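The graph conception is easy to play with in code: represent each function as a set of ordered pairs (a dictionary, in Python), and build the composite graph by chaining pairs through the shared middle element.

```python
# Functions-as-graphs: each function is a set of ordered pairs (a dict),
# and composition chains the pairs through their shared middle values.

f = {1: "a", 2: "b"}          # a function from numbers to letters
g = {"a": True, "b": False}   # a function from letters to truth-values

def compose(first, second):
    """The graph of 'second after first': all pairs <d, second(first(d))>."""
    return {d: second[r] for d, r in first.items() if r in second}

print(compose(f, g))   # {1: True, 2: False}
```

No talk of “rules” or “procedures” is needed: the composite is just another set of ordered pairs, built from the two given ones.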
What does all this have to do with the semantics of natural language? Well, enter Gottlob Frege. Frege’s attempt to formalize the language of mathematics required an analysis of language which divided up parts of speech in a really novel way, inventing the quantifiers we are familiar with today: functions-as-graphs are at the heart of his method. For example, Frege analyzed predicates (“x is red”) as functions from objects to truth-values: if an object possesses the property described by a predicate, the function maps it onto the truth-value “true”, and if not onto “false”. That might seem obvious, but other parts of speech can be given more complex but illuminating glosses in this manner: the Polish logician Kazimierz Ajdukiewicz consequently took Frege’s syntactic analysis and formalized it as categorial grammar.
The building-blocks of categorial grammar are noun phrases (often written “N”), sentences (“S”), and functional relationships between them (symbolized by a slash, with the result category written before the slash and the argument category after it): the predicate example given above would be written “S/N”, since it takes a noun phrase and returns a sentence. An adverb, which takes a predicate and returns another predicate, would be written (S/N)/(S/N). Now, Montague added one further twist to the categorial-grammar framework; he noticed that some expressions of English were categorially equivalent, yet commonly identified as different parts of speech. For example, some verb phrases modifying another verb phrase (“try to”) would have the same analysis as the adverbs above. To “save the appearances”, he used a double slash, e.g. “S//N”, to keep one set of expressions distinct from the others.
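The combinatory heart of categorial grammar fits in a few lines (a toy of my own, deliberately agnostic about how one writes the slash): a category is either basic or a function from one category to another, and applying a functional category to an argument of the right category yields its result category.

```python
# A tiny categorial-grammar checker: categories are basic ("N", "S") or
# functional ("fn", argument_category, result_category), and application
# succeeds only when the argument's category matches.

N, S = "N", "S"
PRED = ("fn", N, S)            # a predicate: takes a noun phrase, gives a sentence
ADV = ("fn", PRED, PRED)       # an adverb: takes a predicate, gives a predicate

def apply_cat(fn_cat, arg_cat):
    """Combine a functional category with an argument of the right category."""
    _, arg, result = fn_cat
    if arg != arg_cat:
        raise TypeError("category mismatch")
    return result

print(apply_cat(PRED, N))       # S: a predicate plus a noun phrase makes a sentence
print(apply_cat(ADV, PRED))     # the adverb returns another predicate category
```

Parsing, on this view, is just checking that the categories of the words in a string can be applied to one another until a single S remains.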
Boy, digging into Continental philosophy and applying it to analytic issues could get to be a habit; this week’s selection is Gadamer’s Truth and Method, which has been a surprising read in a couple of ways. I was previously quite familiar with Habermas’ critique of Gadamer’s attitude to tradition, and recently had my concomitant suspicions about Gadamer’s “anti-Nazi” attitudes confirmed by revisionist histories of his life during the Third Reich. Gadamer is certainly much more the conservative than the left-liberal “Continental philosophers” of the American continent let on, but the substance of his conservatism in his “great work” is different than I expected. He does take off from Heidegger, but not in the direction of decisionistic Existenz, or critique of metaphysics: he makes an ingenious application of the elements of Heidegger’s thought which are only intimated in Being and Time, especially Heidegger’s positive doctrine of Being. Gadamer’s critique of “aesthetic consciousness” and “historical consciousness” aims to reaffirm the importance of being to the employment of normative, “common-sensical” categories in the humanities and social sciences: he argues that without the human mind’s concrete involvement with the world and others (which is encapsulated in Heidegger’s refined concept of Being), these domains are unintelligible.
I am less impressed with the final section of the work, on language: it seems to suffer from the fact that Gadamer’s claims are often very inexact (in a way that post-structuralist thought is only purported to be). However, it still seems that the general thrust of the work has some bearing on issues in analytic philosophy of language, particularly the choice between a “realistic” truth-conditional semantics and an “anti-realist” one based on meaning as conceptual role. Contemporary opinion seems to reject the flirtation of the ’80s and ’90s with “inferentialist” accounts of meaning as use, but I think the cost of the decision for realism is perhaps underestimated. Perhaps the real trouble with the “entity” theory of meaning that conceptual-role semantics hoped to supplant is not its “Platonist” assumptions about the structure of propositions, but that it really implies a “realism” about the reference of common-sensical discourse. To speak Gadamerese, realistic talk about the famous “middle-sized dry goods” implies that we have an unquestioned and unquestionable grasp of their being, one which is not at all derived from (and perhaps even incompatible with) a reduction of mental states, or of the properties of ordinary physical objects as we encounter them, to microphysical substrates. In other words, maybe our ordinary discourse is already “semantically perfect” in its reference to reality: too perfect to permit scientific recalibrations of the kind desirable to many of the people who want to go in for realism in semantics.
I’m now about halfway through Being and Event, and I have some preliminary comments. Firstly, the explanations of set theory Badiou includes are really very good, and the book wouldn’t be the worst way to be introduced to axiomatic set theory: it is true that set theorists don’t really think the axioms have the ontological significance Badiou attributes to them, but the actual setting-out of the formal machinery is not inaccurate. Secondly, the figure Badiou reminds me of the most is the mature John Dewey. This might be surprising, since Badiou is associated with positive valuations for things Dewey is held to have inveighed against, like Platonism — but they are as one in a kind of structuralism about logic and the philosophy of thought. What I mean by “structuralism” in this context is that the features of logical articulation Badiou and Dewey engage with are grasped independently of an ultimate import, as “furniture of the world”: this is quite different from a formalism where the “normative” significance of thought, its cultural value or the end it purportedly tends toward, bulks much larger.
Perhaps such an approach is naive, for reasons familiar from mainstream analytic philosophy, but it certainly has its liberating aspects. However, my third observation has to do with a way in which Badiou's theory of the "event" is quite agreeably consonant with some mainstream themes. For Badiou, an event comes into existence when we are dealing with a "situation" composed of elements which are not "normal", i.e. not ordinary pure sets, whose members are all also subsets and are themselves composed only of further sets (that is, with no urelements in the picture). The "normal" makes up the natural world, but there are configurations of objects which cannot be completely accounted for in naturalistic discourse: as part of his program for relating "presentation" and "representation" of objects to their political analogues, Badiou gives an analogy to a family whose members are not known to the authorities, and which only appears as a unit on outings.
The appearance of the family is an "event", since the family members are "presented" but not represented in the state apparatus. But with this explanation, Badiou returns from the further reaches of Marxist criticism of state power to a very familiar theme in analytic thought: Frege's sense and reference. Frege called the sense of a word its "mode of presentation", and drawing on Badiou we have an explanation of this: the event is the sense of the situation, and the history that begins with it is the "cognitive significance" sense is invoked to explain, the thoughts we have about the world which swing free of a scientifically regimented account of the world's inhabitants (the "natural multiples", or, one might say, references). In short, perhaps what we have from Badiou is a very elegant story about the larger significance of Sinn and its relationship to "what there is"; I suppose further reading will make it clear what he himself thinks about the role of these events in "subjectivation".
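Setting the political analogy aside for a moment, Badiou's criterion of "normality" can be glossed in standard set-theoretic notation. The following is my own reconstruction in ordinary symbolism, not Badiou's typography: a multiple is normal when everything presented in it (an element) is also represented in it (a subset), which is just the set theorist's notion of transitivity, and the "natural multiples" are those in which normality recurs all the way down.

```latex
% Transitivity: every element of x is also a subset of x --
% everything "presented" in x is also "represented" in x.
\mathrm{Trans}(x) \;\longleftrightarrow\; \forall y\,(y \in x \rightarrow y \subseteq x)

% Natural multiples: normality all the way down. Under the axiom of
% foundation, these hereditarily transitive sets are exactly the ordinals.
\mathrm{Nat}(x) \;\longleftrightarrow\; \mathrm{Trans}(x) \wedge \forall y\,(y \in x \rightarrow \mathrm{Trans}(y))
```

The unregistered family then marks the opposite pole: a multiple that is presented in the situation while its own elements are not, which is (as I understand it) what Badiou calls an event-site.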
This weekend, avoiding some other things I probably should have been doing, I re-read part I of the Philosophical Investigations. Some weeks previously I read the Tractatus, and the same thing struck me both times: just how little Wittgenstein is a soup-to-nuts metaphysician, and how unsuitable his work is for generalizing beyond the very specific problems he sets himself in the philosophy of logic and philosophy of mind. That being said, I'm going to grapple with a feature of Wittgenstein's discussion of rule-following in the Investigations which is little remarked upon. As is well known, in the middle of part I Wittgenstein rejects what seems to be an intuitive understanding of how human beings follow rules in, e.g., continuing numerical series; he holds that to follow a rule is to engage in a practice, rather than to have an infinitely extensible prototype of the rule's applications.
What commentators pay relatively little attention to is one of the options Wittgenstein rejects: that to be able to continue a mathematical series is to have an idea of its algebraic equation. He argues that a would-be calculator may have an idea of the equation for a series and yet be unable to produce the series, while someone who has an intuitive grasp of the rule can produce the series without having an explicit understanding of the equation that “produces” it. This point about the adequacy of a rough-and-ready understanding dovetails nicely with another neglected point at the beginning of the book. In dismissing the project of discerning “logical form” so central to the Tractatus, Russell, and Frege, Wittgenstein argues that one of the failings of understanding language by assimilating words to the tractable form of the proper name is that this falsifies, not only the practice of using the word grammatically, but also the thought which is actually associated with it. Since the majority of the Investigations is very hard on the use of mental states to conceptualize understanding, this early attention to language and thought has gone unnoticed.
These two facets of Wittgenstein's work, taken together, suggest a program for the philosophy of mind which is moderately "anti-logocentric": if it is wrong to understand the use of concepts by appeal to logic, because this falsifies the actual habits of mind, we not only need to ask whether mental processes are instantiable in familiar domains or only in never-never-land. Additionally, the surveying of human beings' grasp of the world in understanding (Wittgenstein later uses "grammar" in a more positive sense to describe this) must form an unprejudiced basis for thinking of the capacities we attribute to the human mind and the "mechanisms" by which they operate. For example, consider the concept. Today it is no longer the function from objects to truth-values that Frege described: we attribute concepts to the "realm of sense", and attempt to describe their enabling of general thought in terms compatible with what we learn from the most trustworthy cognitive science.
But according to Wittgenstein, balance is not enough: if we prejudice the operations of concepts by understanding them as “intensional”, we would be leaving the sober analysis of mind for an unfortunately “metaphysical” consideration of the forms by which we represent thought. It seems to me that Wittgenstein is provocatively saying that this, which many today understand as the primary task of the “philosophy of thought”, is not enough if we want to truly understand the mind and language.