You are currently browsing the category archive for the ‘Logic’ category.
When it comes to philosophical logic, Johan van Benthem of the University of Amsterdam and Stanford is the current ‘heavyweight champion’. His innovative approach to the mechanics of modal logic has been dazzling people since the 1980s: the most recent incarnation of his wisdom is a lecture course, “Modal Logic for Open Minds”, which addresses Amsterdam topics like bisimulation in a friendly manner. Very cool.
Very interesting: the complete German text of Gottlob Frege’s Grundgesetze der Arithmetik (Basic Laws of Arithmetic) has been put online here. There have been online projects underway for years to translate the entirety of these volumes into English, and this resource surely ought to help with that; for readers with some German, though, it remedies the paltry availability of the German text all on its own, which is très cool. Take a look.
*What proves this?*
I have some idea that the computer science community, which even writes in English worldwide, has gotten the notion via someone or other’s writing on the P and NP problem (forever solved negatively, at the cost of also solving one of the greatest problems of mathematics ever) that “you’re never gonna stop all the teenage leather” — and adult leather is kind of insane. However, I would like to say something further: Mahatma Gandhi was one of the worst people who ever lived; his only effective British critic, “George Orwell”, was rather better. Gandhi wanted everyone to live, that is die, that is die fast under horrible circumstances amidst undistributed plenty; Orwell wanted everyone to die, that is some people to die, the right people to die, the fewest possible people to die.
My contribution to the debate? Gandhi said that Hindustan and Pakistan were the two “eyes of India”: I say, to myself if necessary, that like “Blinky” on The Simpsons, theoretical logic has three eyes — proof theory, model theory, and recursion theory. Any genuine logical result is perfectly intertranslatable between the three logical idioms; anything less is “philosophical logic”, an attempt to understand why you don’t understand, why it has to be some way you don’t know it is, because you can’t know it is, because it isn’t. To conclude ex abrupto, Anil Gupta’s “revision theory of truth” is Anil Gupta’s, that is John Nash’s: they are friends, I guess, and Nash explained his view of the world to Gupta, and unlike Nash, Gupta found it simpatico. But inasmuch as ye have done it unto one of the least of these my brethren, don’t forget that Woodrow Wilson was a hophead.
Material culled from a series of emails sent on June 17, based on work done a year ago:
It is rather obvious that model theory was invented in the fabulous ’30s, by the man who later coined the term “theory of models” and wrote further works on the topic that philosophers don’t read, since it “wouldn’t do”. I think the pudding-proof is that the Completeness Theorem — of which Löwenheim-Skolem (a deliciously Teutonic hyphenated name) is a consequence, when you really understand what it says — is not really a model-theoretic result.
The model-theoretic idiom in which the completeness theorem was cast (what’s his name and what was he about?) partially conceals that it’s just a consequence of the structure of the first-order proof system; if you doubt this, consider Kalmár’s proof of propositional completeness. We need model theory for limitative results (!) like non-arithmeticality, incompleteness, and maybe even the very uncanny undecidability theorem (oh, if we cast the problem in that language of “productions” that mysteriously appeared at some point, I guess not).
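For the doubters, here is what the propositional case bottoms out in: a brute-force truth-table check, which is the engine a Kalmár-style completeness argument runs on. A sketch of mine in Python (the tuple encoding and function names are my own conventions, not anything from Kalmár):

```python
from itertools import product

def eval_formula(formula, assignment):
    # Formulas are nested tuples: ('var', name), ('not', f),
    # ('and', f, g), ('or', f, g), or ('implies', f, g).
    op = formula[0]
    if op == 'var':
        return assignment[formula[1]]
    if op == 'not':
        return not eval_formula(formula[1], assignment)
    if op == 'and':
        return eval_formula(formula[1], assignment) and eval_formula(formula[2], assignment)
    if op == 'or':
        return eval_formula(formula[1], assignment) or eval_formula(formula[2], assignment)
    if op == 'implies':
        return (not eval_formula(formula[1], assignment)) or eval_formula(formula[2], assignment)
    raise ValueError(op)

def variables(formula):
    if formula[0] == 'var':
        return {formula[1]}
    return set().union(*(variables(f) for f in formula[1:]))

def is_tautology(formula):
    # Check every row of the truth table: 2^n rows for n variables.
    vs = sorted(variables(formula))
    return all(eval_formula(formula, dict(zip(vs, vals)))
               for vals in product([False, True], repeat=len(vs)))

# Peirce's law ((p -> q) -> p) -> p, a classical tautology.
p, q = ('var', 'p'), ('var', 'q')
peirce = ('implies', ('implies', ('implies', p, q), p), p)
```

Here `is_tautology(peirce)` comes out `True`, while a contingent formula like `('or', p, q)` fails on the all-false row.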
Recent results suggest a vast preponderance of oracles make P unequal to NP; You Know This. What does this finding in “empirical mathematics” mean? I thought it meant this: the P and NP problem can be solved negatively not only because of the obvious observation about a proposition of n literals having 2^n combinations of stipulatively independent truth-values, but also BECAUSE THE CONTINUUM HYPOTHESIS IS UNSOLVABLE. “Sometimes you can put a 2^aleph-null set (the oracularly solvable SAT) and an aleph-null set in one-one correspondence, but most of the time you can’t.”
What could that even mean? That you are confused when you think you can do this, like people who believe in the Rebus of Picardy ‘cuz they read about it in Vico — or just ‘cuz. The “reducible” set of real cardinality, “solvable SAT”, is fake: if P=NP you already know the answers regarding satisfiability you would otherwise have to work out nondeterministically like any old validity testing in propositional logic, you just don’t know that you know them for whatever reason (including, I suppose, that it is lucrative to be confuséd).
To be crystal-clear about something you already understand better than me, what I am saying is this: it is a reductio that P cannot equal NP, because it would mean the end of logic and the end of the world; strictly speaking, the oracles that permit P=NP reduce the level of propositional complexity by one order to the cardinality 2^aleph-null-minus-one, which, it is well known, is equipollent to aleph-null. This is a hysteron proteron. Furthermore, it is dubious that people who float the idea that 99% of oracles are in the first class and 1% in the second are in total possession of the true meaning of what they say: they could be schizophrenic, or in love.
UPDATE 6/29/09-6/30/09: If people are still “curious”, let me give my reply to the objection to P != NP from “oracles”. For those that don’t know, an oracle in computation/recursion theory is a set of answers to some problem (membership in a set of strings, say) that a regular computer can “query” and use to determine its next state. Although “oracle machines” can solve problems that unrelativized TMs cannot, like the Halting problem, they stricto sensu do not change the complexity of a problem.
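To fix ideas about what “querying” looks like, here is the standard self-reducibility trick, sketched in Python: given a yes/no oracle for the SAT decision problem (played here, hypothetically, by brute force), a satisfying assignment can be extracted with one oracle query per variable. The encoding and the names `brute_sat` and `find_assignment` are my own:

```python
from itertools import product

def brute_sat(clauses, fixed):
    # Play the oracle: decide whether the CNF (a list of clauses, each a
    # list of (variable, wanted_truth_value) pairs) is satisfiable under
    # the partial assignment `fixed`, by exhaustive search.
    free = sorted({v for cl in clauses for v, _ in cl} - set(fixed))
    for vals in product([False, True], repeat=len(free)):
        a = dict(fixed, **dict(zip(free, vals)))
        if all(any(a[v] == want for v, want in cl) for cl in clauses):
            return True
    return False

def find_assignment(clauses):
    # Search-to-decision: armed with a yes/no oracle for satisfiability,
    # extract a full satisfying assignment with one query per variable.
    if not brute_sat(clauses, {}):
        return None
    fixed = {}
    for v in sorted({v for cl in clauses for v, _ in cl}):
        fixed[v] = True
        if not brute_sat(clauses, fixed):  # one oracle query
            fixed[v] = False
    return fixed
```

For instance, `find_assignment([[('p', True), ('q', True)], [('p', False)]])`, i.e. (p or q) and not-p, returns `{'p': False, 'q': True}`; the oracle answers the yes/no questions, and only polynomially many questions get asked.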
How do oracles touch the question of SAT? Like this: an oracle for a satisfiability-prover is essentially equivalent to adding quantifiers to a first-order language built upon the propositional sentences being quantified over; a clue to this is given in the first edition of Hopcroft and Ullman, where the oracle used to “prove” P=NP contains the set of all true quantified Boolean sentences. How can first-order logic and propositional logic interact? Well, the Henkin-style proofs of completeness for first-order logic involve “scapegoat” theories, where every sentence involving quantification is reduced to an existentially quantified sentence and then cashed out in terms of the propositional sentences that satisfy it.
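The quantified Boolean sentences just mentioned really can be “cashed out” propositionally: each quantifier over a Boolean variable expands into the conjunction or disjunction of its two instances. A toy evaluator of mine (the tuple-and-lambda encoding is an assumption, not anything in Hopcroft and Ullman):

```python
def eval_qbf(prefix, matrix, assignment=None):
    # Evaluate a quantified Boolean sentence by expansion:
    #   exists x . phi  ==  phi[x:=False] or  phi[x:=True]
    #   forall x . phi  ==  phi[x:=False] and phi[x:=True]
    # `prefix` is a list like [('forall', 'x'), ('exists', 'y')];
    # `matrix` is any predicate on the completed assignment dict.
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    (q, v), rest = prefix[0], prefix[1:]
    branches = [eval_qbf(rest, matrix, dict(assignment, **{v: b}))
                for b in (False, True)]
    return any(branches) if q == 'exists' else all(branches)

# "forall x exists y (x != y)" is true: take y = not x.
xor_matrix = lambda a: a['x'] != a['y']
```

So `eval_qbf([('forall', 'x'), ('exists', 'y')], xor_matrix)` is `True`, while swapping the quantifiers makes it `False` — quantifier order matters, just as in first-order logic.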
In categorial terms, an oracle is a “subobject classifier” mapping an n-tuple onto a truth-value; as previously mentioned here, adding subobject classifiers to a cartesian closed category (the categorial model of computability) creates an elementary topos capable of modeling first-order logic. What’s the catch for SAT? Firstly, the set of valid sentences of predicate logic is recursively enumerable, not recursive: an algorithm can “eventually” produce all the valid sentences, but no algorithm can discover all the sentences invalid in predicate logic.
Secondly, an oracle that can be applied to SAT to make P=NP come out true is, as mentioned above, a device that reduces the complexity of the proposition being considered by one order — putting it within (well within) the class of transfinite numbers with a cardinality equal to that of the natural numbers. The most intuitive way I find to think about this is an oracle containing the truth-value of every “literal” in the proposition under consideration, which adds just enough “randomness” for the proof procedure to be deterministic and in P from there on out; you could also think of an oracle machine, using quantified formulae, which solves the validity of the SAT sentence’s negation.
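That intuition is easy to make concrete: once an oracle hands you the truth-value of every literal, checking a CNF sentence is a single deterministic linear pass, with no search left over. A minimal sketch of mine (the `~`-prefixed literal names and the particular oracle are hypothetical):

```python
def eval_cnf(clauses, literal_oracle):
    # With an "oracle" supplying the truth-value of every literal,
    # checking a CNF sentence is one deterministic linear pass:
    # no nondeterministic search remains.
    return all(any(literal_oracle[lit] for lit in clause)
               for clause in clauses)

# A hypothetical oracle fixing p = True, q = False.
oracle = {'p': True, '~p': False, 'q': False, '~q': True}
# (p or q) and (not q) comes out true under that oracle:
clauses = [['p', 'q'], ['~q']]
```

`eval_cnf(clauses, oracle)` is `True` here; all the hard work was smuggled into the oracle, which is exactly the point.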
Thirdly, as Löwenheim-Skolem (an essential part of Lindström’s Theorem) shows, any set of sentences that has a model — that is, an interpretation on which all its sentences come out true — has a countable model, i.e. one whose elements are recursively enumerable like the natural numbers they correspond to; “richer” models of computation have a level of complexity which is, well, too complex. At any rate, oracles are not enough to override our overwhelming impression that P cannot equal NP; as for “natural proofs” and other recent developments, I like functional and complete propositional logic better than, well…
Now, for a piece of philosophy of mathematics. It’s hard to see how you could take a “sideways-on” approach to mathematics: it is true if anything is, right? Well, maybe we can stretch that out a little further than what is involved in the usual Third Realm calisthenics. A piece of mathematics is something that will be true for all time: mathematical insight means seeing into the ages, what was and ever will be. Conversely, lack of mathematical acuity — our inability to solve a problem — is caused by the deceptions of the age, our follies and delusions.
As regards my own very modest mathematical output, I have adduced considerations that P cannot equal NP in “A Cool Million (Some Thoughts on P and NP)”, “Further Thoughts on P and NP”, and “Why P Cannot Equal NP”. They do not have the form of a formal proof and have failed to convince the contemporary complexity community (as a whole), but it is my honest conviction that the very simple logical difficulties associated with reducing the complexity of DEXPTIME-hard NP-complete problems are insuperable. The considerations are “dumb”, but honestly limitative results always are — the Incompleteness Theorem is a piece of simple trickery that would hold no interest were it not true.
Really, I think the hope that a brand new algorithm will crack the problem is a pipe dream; the essential nondeterminacy of the Satisfiability problem is just untouchable. Why does it appeal, then? Because computer science is about technical control, and the temptation of a computational Eden where all cryptographic algorithms can be cracked and mathematical proofs grow on trees is just too strong. (The harder-headed claim is that problems from oracles — which are rather moldy, as they can be read about in the original edition of Hopcroft and Ullman — and other considerations may make the P and NP question impossible to solve; but it may just be our own damn fault for not defining our terms clearly. Always hard to tell.)
Before blogs really got off the ground, I spent my days (!) writing ‘experimental’ reviews of books on Amazon. Often I had no very good idea of what the book was about, and even when I did many people thought I did a very poor job conveying it; I’ve only recently brought my favorable/unfavorable response ratio to 1:1. Furthermore, nobody but nobody has ever bought me anything off my Amazon Wish List — this is too bad, because a couple of the selections are very attractively priced, so much so that the unionized Powell’s employees a lot of people want to help out with online purchases would just tell you to “go for it” (they’re not talking to me, long story).
Here are three selections:
Wilfrid Hodges, Model Theory (paperback)
‘High-test’ logic textbooks are often incredibly expensive: as previously mentioned, Cambridge’s theoretical computer science series is reasonable, but North-Holland/Elsevier, Springer, and Oxford books can easily cost $200 apiece. The same was true of the “big red” model theory textbook from Wilfrid Hodges, which retails at $280 in the hardback edition available since 1993 but now costs only $64.80 in the paperback edition released this year, just a little more than the extracts from the book published as A Shorter Model Theory. Hodges takes a more mathematical — specifically, algebraic — approach to model theory than the old standby by Chang and Keisler, but at this price you can afford to be (slightly) fashionable.
Friedrich Nietzsche, Sämtliche Werke: Kritische Studienausgabe (15 vol box set)
I’m no big fan of Nietzsche, but the over-reliance on tendentious interpretations surely does those of us being subjected to “perspectival thinking” and “ethical naturalism” by intellectual rounders no favors. The student with a grasp of German and a reasonable budget for scholarly books need not take the word of Walter Kaufmann, R.J. Hollingdale, or anyone more recent about Nietzsche’s ideas, since the complete edition of his works and notes compiled by Colli and Montinari — though purged of some critical apparatus for the student edition — is available for under 200 dollars; as a reviewer points out, this works out to something like $12.50 a volume (their figure is lower; it used to be an even better deal, but the dollar’s slide against the Euro intervened). In particular, the material in Colli-Montinari vol. VIII (selections from which made up the Wille zur Macht and Kate Sturge’s recent Writings from the Late Notebooks) is very interesting and very different from what one might expect.
Various Artists, The Complete Stax-Volt Singles Vol. 2 (9 CDs)
This is one I’ve been waiting on a long time; long enough, in fact, for most of the material to become available free on the Internet. But artists need your money, just like Stax artists did when they sold the rights to their ’60s hits at the end of the decade and had to record a lot of new material very quickly (this box set covers only three years). Surprise, surprise: a lot of these records — like Linda Lyndell’s “What A Man”, Jean Knight’s “Mr. Big Stuff”, and Eddie Floyd’s “I’ve Never Found A Girl (To Love Me Like You Do)” — were instant classics. The first box set is in your public library; the second should be in somebody’s stocking.
1) Is forcing a model of the full resolution calculus? That forcing statements are written with a double turnstile suggests forcing was originally intended as a semantics for something; resolution seems to be the neatest proof-theoretic fit for the check of consistency against generic sets (hard to say whether countable transitive models introduce more structure than first-order resolution can account for, though).
UPDATE: The answer to this question is “Duh, no, since forcing creates models of ZF”. Alternate suggestion: perhaps Gentzen’s “reduction rules” for proving the consistency of arithmetic with methods stronger than those of ordinary logic are a better fit.
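For what “resolution” concretely does in the propositional case, here is a minimal saturation-based refutation checker, a sketch of mine (the signed-literal encoding is my own assumption, not standard notation from any text mentioned here):

```python
from itertools import combinations

def resolve(c1, c2):
    # All resolvents of two clauses; clauses are frozensets of signed
    # literals, e.g. ('p', True) for p and ('p', False) for not-p.
    return [(c1 - {(v, s)}) | (c2 - {(v, not s)})
            for v, s in c1 if (v, not s) in c2]

def refutable(clauses):
    # Saturate the clause set under resolution; deriving the empty
    # clause refutes the set, i.e. shows it unsatisfiable.
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # the empty clause
                    return True
                new.add(r)
        if new <= clauses:         # saturated without contradiction
            return False
        clauses |= new

# {p or q, not-p, not-q} is unsatisfiable:
unsat = [{('p', True), ('q', True)}, {('p', False)}, {('q', False)}]
```

`refutable(unsat)` is `True` (resolving down to the empty clause), while a lone clause like `{p or q}` saturates without contradiction.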
2) Do Lindström’s theorem and the “standard translation” of modal logics into first-order logic suggest first-order logic is the most expressive complete logic? The “negative translation” of classical logic into intuitionistic logic seems to suggest otherwise, but the completeness of IPC is a thorny thicket. (I must admit ignorance of completeness w.r.t. languages with generalized quantifiers, although they certainly do not fit Lindström’s characterization.)
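The “standard translation” itself is mechanical enough to write down: an atom becomes a unary predicate of the evaluation world, and box and diamond become quantified steps along the accessibility relation R. A sketch of mine for the basic modal language (the string syntax for the first-order output is my own convention):

```python
import itertools

def standard_translation(formula, world, fresh=None):
    # The van Benthem standard translation of basic modal logic into
    # first-order logic: p becomes P(w); box steps along R with a
    # universal quantifier, diamond with an existential one.
    # Formulas are nested tuples: ('p',), ('not', f), ('and', f, g),
    # ('box', f), ('dia', f).
    fresh = fresh or itertools.count(1)   # fresh world-variable counter
    op = formula[0]
    if op == 'not':
        return f"~({standard_translation(formula[1], world, fresh)})"
    if op == 'and':
        left = standard_translation(formula[1], world, fresh)
        right = standard_translation(formula[2], world, fresh)
        return f"({left} & {right})"
    if op == 'box':
        v = f"w{next(fresh)}"
        body = standard_translation(formula[1], v, fresh)
        return f"forall {v} (R({world},{v}) -> {body})"
    if op == 'dia':
        v = f"w{next(fresh)}"
        body = standard_translation(formula[1], v, fresh)
        return f"exists {v} (R({world},{v}) & {body})"
    return f"{op.upper()}({world})"       # an atomic proposition letter
```

So `standard_translation(('box', ('p',)), 'w')` yields `'forall w1 (R(w,w1) -> P(w1))'`; every output lands in the fragment of first-order logic the translation carves out (I use fresh variables here rather than the two-variable economy of the official translation).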
3) What is the semantical status of cut-elimination proofs? We know by Guessing What Great Logicians Thought that they are not proof-theoretical analogues of completeness proofs, since completeness proofs for sequent calculi (e.g., Kleene’s) do not operate on the same principles. How about soundness? Cut-elimination demonstrates that inferences introduced by an “irregular” procedure, an instance of the cut rule, can be proven using more strictly regimented rules — in other words, that the cut-free rules suffice to justify the Urform of an inference justified any old way (Cut).
I always thought one of the more interesting and relatively unremarked-upon elements of the Philosophical Investigations was the item included in the Preface after Wittgenstein’s thanks to Ramsey:
Even more than to this — always certain and forcible — criticism I am indebted to that which a teacher of this university, Mr P. Sraffa, for many years unceasingly practised on my thoughts. I am indebted to this stimulus for the most consequential ideas of this book.
(For those who don’t know “P. Sraffa” was Piero Sraffa, editor of Ricardo’s collected works and a Marxist economist; he and Wittgenstein were great friends, partially because he was as Keynes said “the most leisured man who ever lived”. He also holds the distinction of having purchased the books which Gramsci’s Prison Notebooks are commentaries on.)
However, do you suppose Wittgenstein’s later work was also in part inspired by a reaction to the rise of Tarskian semantics, similar to C.S. Lewis’ famous reaction to Anscombe’s dinner-table refutation: the idea that someone had “sharpened” one’s own theses to the point where they pointed in the opposite direction, and that the thing to do was concentrate on something one knew better?