Monday, May 23, 2005

The Unusual Effectiveness of Logic in Computer Science

A couple of days ago I found, in my folder of unused papers, a non-technical logic-in-computer-science paper that has turned out to be a highly interesting read. The paper is:

On the Unusual Effectiveness of Logic in Computer Science by Halpern, Harper, Immerman, Kolaitis, Vardi, and Vianu

The authors of this paper are renowned authorities in the area of logic in computer science, each with his own substantial contributions to the field.

Here is a summary of the paper. The authors start by mentioning two attempts at explaining the unreasonable effectiveness of mathematics in the natural sciences: one by E.P. Wigner, a Nobel Laureate in Physics, and one by R.W. Hamming, a winner of the ACM Turing Award (which can be construed as the Nobel Prize for Computer Science). In their separate papers, Wigner and Hamming each give a number of examples supporting their arguments, but conclude that the question "why is mathematics so unreasonably effective?" remains open. The above paper by Halpern et al. is basically an attempt to answer the question "why is logic so effective in computer science?". As Wigner and Hamming did, Halpern et al. give a number of examples supporting their assertion:

  • Example 1: Descriptive Complexity. The reader should know roughly what this is by now.
  • Example 2: Logic as a Database Query Language. This area is devoted to the study of abstract query languages in database theory. Examples of abstract query languages are relational calculus (equivalent to first-order logic), and conjunctive queries (see here for a definition).
  • Example 3: Type Theory in Programming Language Research. Type theory is often used for building useful type systems for strongly-typed programming languages including Haskell (the computer science reader should know that C, for example, is not strongly-typed).
  • Example 4: Reasoning about Knowledge. You may think of this area as "the theory of agents". Reasoning about knowledge plays a key role in burgeoning computer science fields including distributed computing, game theory, and AI.
  • Example 5: Automated Verification in Semiconductor Designs. The meaning is self-explanatory.
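To make Example 2 a little more concrete: a conjunctive query is a first-order formula built only from conjunction and existential quantification. Here is a rough sketch (my own illustration, not from the paper) of how such a query can be evaluated over relations stored as Python sets of tuples; the Parent relation and the "grandparent" query are invented examples:

```python
# A toy "database": one relation, stored as a set of tuples.
# Parent(x, y) means x is a parent of y.
Parent = {("alice", "bob"), ("bob", "carol")}

# Conjunctive query Q(x) = exists y, z . Parent(x, y) AND Parent(y, z)
# ("x is a grandparent"), evaluated naively by nested loops over the
# relation, joining on the shared variable y.
def grandparents():
    return {x for (x, y) in Parent
              for (y2, z) in Parent
              if y == y2}

print(grandparents())  # {'alice'}
```

The nested-loop join mirrors the existential quantifiers directly, which is one reason relational calculus and first-order logic line up so neatly.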

Aside: I would love to see proof complexity and bounded arithmetic discussed in this paper.

Incidentally, Halpern et al. make a very interesting remark in their paper: that logic seems more connected to computer science than to mathematics. To a theoretical computer scientist like myself, this claim seems self-evident. I'm curious whether philosophers and/or mathematicians and/or logicians think so too ...


Jon said...

I think that the claim that logic is more useful for computer science than for mathematics should be taken with a grain of salt. It is true that things like proof theory and modal logic are far better suited to computer science. While there are interactions between, say, maths and proof theory, most of this is in the direction maths -> proof theory.

However, traditional model theory has had very important ramifications for mathematics. A famous example is Hrushovski's proof of the function-field version of the Mordell-Lang conjecture. More systematic examples arise in group theory. There, it turns out that groups of finite Morley rank are a good place to look for generalising a lot of the finite group theory stuff (see Alexandre Borovik's homepage for more on this).

Peter McB said...

We have to be extremely careful here that we are reasoning from cause to effect, and not the reverse. Mathematics may be "unusually effective" in the natural sciences because the only way we can apprehend nature is through tools we have developed, the primary one of which is mathematics. If there were elements of Nature for which mathematics was not effective, we would not be able to model these, and so would not know of them.

The same is true for logic in computer science. Over the last 50 years we have developed a particular model of computation (Turing machines, et al) which may or may not be the only model of computational phenomena. The model we have developed seems to have a close fit with logic, but since logic is currently our only way of apprehending formally the phenomena we call computation, this close fit is hardly a surprise.

Many criticisms have been made of Turing-type models of computation, for example, that they do not deal with interacting processes, nor with processes that are always on, nor with processes that continue to receive inputs while processing, etc. (See the work of Peter Wegner, for example, on models of interactive computation.) Arguably, no logic thus far proposed deals adequately with phenomena such as these.

The real mystery to me is how proponents of the Turing model could continue to argue for it when all around them, starting with their desktop PC, are computational phenomena which do not fit it.

twidjaja said...

Thanks to both of you for the interesting comments. I feel I've been learning something new and useful from these comments.

Note, however, that I said "logic seems more useful for computer science than for mathematics", not "is more useful". I said this because mathematical logic (proof theory, finite and infinite model theory, and recursion theory) has been applied to many subareas of computer science (eg, computation theory, theory of agents, AI, distributed computing, programming languages, model checking, cryptography, security, ...), and to some extent the reverse is true as well. These extremely fruitful interactions between logic and computer science have been going on for more than half a century and have shown no sign of faltering. Some people have thus come to regard logic as "the calculus of computer science".

Notwithstanding the deep connection between math and logic that Jon mentioned, having been a pure math student myself for three years, I can say that most people in my math department think it is pointless to study metamathematics (ie, logic). The same goes for some other math departments in Australia that I've visited. This gives me the impression that logic is just not part of mainstream mathematics.

Peter is of course right to say that logic seems unusually effective for computer science because logic may be the only way for us, at least for now, to view our popular models of computation, like Turing machines. Who knows what will happen if we someday have something like analog computers ... I'll be interested to know whether our knowledge of logic still helps. I hope so ... otherwise, an entirely new logic will have to be developed for it.

Jon said...

I believe that the reason why many mathematicians are anti-logic is that they think of logic as studying the foundations of mathematics. This is exemplified by identifying "logic" with "metamathematics". It is true that the average mathematician couldn't care less whether or not they use, say, the axiom of choice. And they have good cause for that position.

The same is certainly not true for model theory, where one cannot escape it. This is not surprising, since when you play with infinite objects, you need a sound basis from which to work. The way that a lot of mathematicians escape it is by sticking to more traditional subjects (say, finite group theory).

As far as analogue computers go, it is certainly reasonable that control theory should play a large role in their study. If you want to formally verify such beasts, then you run into all sorts of difficulties if you try to just use traditional model checking wizardry. Check out Jen Davoren's work (she is at your (old) university) for how to approach this using logic. Basically, one takes a product of a modal logic of space and a temporal logic, which allows one to talk about flows, set-valued maps, etc. This way, it links logic with things used in control theory.

twidjaja said...

Jon, you're right about most mathematicians thinking of logic as the study of the foundations of math. Also, thanks for the pointer to Jen Davoren. Will be interesting to have a chat with her before I leave.

Peter, have you looked at computability logic? This logic seems to capture the notion of interactive computation.

Peter McB said...

Thanks, Anthony. I have seen computability logic, but don't yet understand it. My issue is with any discrete formalism trying to capture interaction, which my intuitions tell me is only going to be properly understood when modeled as if it were a continuous phenomenon. Vaughan Pratt's work (Stanford CS) on categorical models for concurrency, although not there yet, is in the right direction, IMO. See, for example, this.

You may be interested in the papers presented at this recent workshop, on the Foundations of Interactive Computation, FInCo. The papers will eventually be published in ENTCS (Electronic Notes in Theoretical CS).

-- Peter McB.

Anonymous said...

Please don't think I'm claiming to have special insights, but these are my opinions (formed since reading Arbib's Brains, Machines, and Mathematics more than 40 years ago).

I think maths can be thought of as a pidgin English, or English for a special purpose.
It is interesting to note that, decades after the typesetting program LaTeX, we are still not able to write mathematical notation, e.g. on a tablet, and have the computer understand it.

Is a Turing machine model adequate?
It seems to be usable for understanding a human receiving and sending lines of text. We (i.e. humanity) have a lot of experience over the past 50 years in working out how to implement a device, e.g. to play a simple game, as a "state machine".

Can a Turing machine handle interacting processes? I think it can, because we can imagine a control panel showing the processes - hence we can think about how interacting with the processes could be programmed as a state machine.
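One rough way to see the point (my own sketch, not the commenter's): two interacting finite-state processes can be composed into a single product state machine, which a Turing machine can then simulate step by step. The two toy processes below, a light and a parity counter reacting to the same input, are invented purely for illustration:

```python
# Process A: a light that toggles on each "press" input.
A = {("off", "press"): "on", ("on", "press"): "off"}
# Process B: a counter that flips parity on each "press" input.
B = {("even", "press"): "odd", ("odd", "press"): "even"}

def product_step(state, symbol):
    """Advance both processes together on the same input symbol,
    treating the pair of states as one state of a product machine."""
    a, b = state
    return (A[(a, symbol)], B[(b, symbol)])

state = ("off", "even")
for _ in range(3):
    state = product_step(state, "press")
print(state)  # ('on', 'odd')
```

The product machine is just another (larger) state machine, which is the essence of the argument that interaction per se does not escape the Turing model.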

I wonder if the number of states we need to look at is very large. E.g. a human might look at a screen and decide what to do, say what marks they will make on a graphics tablet. This is a very large alphabet of possible inputs: even at very crude resolution it would be a 100*100 grid of bits, which gives 2^10000 possible inputs. That is a large number. To get something that can run in our universe, we need to simplify. E.g. a 16*16 grid, perhaps enough to represent Chinese characters, would need 256 bits. This gives an alphabet of size 2^256, which is still as big as our universe.
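The arithmetic in the comment above is easy to check (my own sketch; the grid sizes are the commenter's examples). Since Python integers have arbitrary precision, the numbers can be computed exactly:

```python
def alphabet_size(width, height):
    """Number of distinct black-and-white images on a width x height
    grid of bits, i.e. the size of the input 'alphabet' if a whole
    screenful counts as one symbol."""
    return 2 ** (width * height)

# Crude 100x100 resolution: 10,000 bits, so 2^10000 possible inputs.
coarse = alphabet_size(100, 100)
# A 16x16 grid (roughly enough for a Chinese character): 256 bits.
fine = alphabet_size(16, 16)

print(len(str(coarse)))  # how many decimal digits 2^10000 has
print(len(str(fine)))    # how many decimal digits 2^256 has
```

Even the "simplified" 16x16 alphabet is a 78-digit number, which supports the comment's point that raw screen inputs must be abstracted before any state-machine analysis is feasible.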

I think it's ok to say the Turing model will suffice. Why not allow an alphabet such as A-Z, but also other alphabets, e.g. the set of all sentences or texts that there is room for on the input screen...

I think the Turing model is simply a state machine model - we can generalise, perhaps, from the original model.