## Saturday, October 29, 2005

### Favorite Logic of October 2005: Monadic Second-Order Logic


We now move to a logic that is much more expressive than FO. In the previous "Favorite Logic" posts, I talked about the gory details of FO. Now, rather than babble about the current logic under consideration, I shall just sample some results that are particularly interesting from my point of view. Readers who happen to be "inspired" by this post are welcome to send me an email for further results or references.

Monadic second-order logic (MSO) is a natural extension of first-order logic with quantifiers over sets of elements in the universe. More precisely, if σ is a vocabulary, MSO sentences ψ over σ are of the form

ψ := Q1X1...QmXm φ

where Qi ∈ {∃,∀}, Xi is a second-order variable, and φ is an FO sentence over σ ∪ {Xi: 1 ≤ i ≤ m}. So, that's the syntax. I will define the semantics by means of an example; the reader should be able to generalize easily.

Suppose σ = {E} is just a vocabulary with one binary relation symbol. We wish to express 2-colorability, i.e., the property

2COL = { G : G is a 2-colorable graph }

It can be proved that 2COL is inexpressible in FO. But, is it expressible in MSO? Positive.

∃ X ∃ Y( (X and Y partition the vertex-set) and (no two adjacent vertices both belong to X or both belong to Y) )

Suppose we are given a graph G = (V,E). Then, we can evaluate this formula as follows: this formula is true in G iff there exist unary relations (i.e. sets) X and Y over G (i.e. X, Y ⊆ V) such that the body of this formula evaluates to true in the structure (V,E,X,Y) (i.e. iff G is 2-colorable). In this case, we intend X and Y to correspond to the two color classes that partition the vertex-set.
The first conjunct above can be expressed by the FO sentence ∀x( X(x) <-> ¬Y(x) ), which says that every vertex belongs to exactly one of X and Y. As an exercise, the reader might enjoy coming up with an FO formula that expresses the second conjunct.
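The semantics just described also suggests a hopelessly inefficient, but faithful, way to evaluate the sentence: simply try all candidate sets. Here is a minimal Python sketch of this brute-force evaluation; the graph representation and the function name are my own choices for illustration:

```python
from itertools import product

def is_2colorable(vertices, edges):
    """Brute-force evaluation of the MSO sentence for 2COL:
    try every X subseteq V (with Y = V \\ X, so X and Y partition V)
    and check that no edge lies entirely inside X or inside Y."""
    for bits in product([False, True], repeat=len(vertices)):
        in_X = dict(zip(vertices, bits))  # in_X[v] False means v is in Y
        if all(in_X[u] != in_X[v] for (u, v) in edges):
            return True
    return False

# A 4-cycle is 2-colorable; a triangle is not.
print(is_2colorable([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))  # True
print(is_2colorable([1, 2, 3], [(1, 2), (2, 3), (1, 3)]))             # False
```

Note that the loop over subsets X mirrors the two second-order existential quantifiers (choosing X determines Y, since the two sets partition the vertex-set), while the inner check is exactly the FO body.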

In fact, we can use the same trick to express k-colorability in MSO for any fixed k. Hence, we can immediately conclude that model-checking for MSO is NP-hard when only the models are part of the input (i.e., the data complexity is NP-hard). Moreover, since MSO is closed under negation, non-k-colorability is also expressible in MSO, and therefore the data complexity is coNP-hard as well.

Let's conclude this post with a beautiful result that connects MSO to formal language theory. A proof of the result can be found in Straubing's monograph

Finite Automata, Formal Logic, and Circuit Complexity

[Incidentally, David Molnar had a nice post that explores the connection between Straubing's result on "simulating monoids" and homomorphic cryptosystems. So, I thought it would be nice to mention a result from his book, although it was originally proven by Büchi in the 1960s.] Roughly speaking, the result says:

A language (a set of strings) L is regular iff L is MSO-definable

What does it mean for a language to be MSO-definable? For notational convenience, let's restrict ourselves to the alphabet {0,1}. First, a binary string s = s0...sn-1 can be considered as a logical structure

S(s) := ({0,...,n-1}; <, U)

whose universe is {0,...,n-1}, encoding the "positions" of the string; < is interpreted as the usual linear ordering on this universe; and U is a unary relation with the interpretation that i ∈ U iff si = 1. So, if S(s) is the logical encoding of a string s and φ is a formula over the vocabulary {<,U}, the notion of 'S(s) |= φ' makes sense. The language defined by φ, then, is

{ s ∈ {0,1}* : S(s) |= φ }
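To make the encoding concrete, here is a small Python sketch that builds S(s) and evaluates one FO (hence MSO) sentence on it, namely ∃x( U(x) and ∀y( y ≤ x ) ), which defines the regular language of strings ending in 1. The representation and function names are my own choices:

```python
def encode(s):
    """Encode a binary string s as the structure S(s) = ({0,...,n-1}; <, U),
    where < is the usual order on positions and U marks the 1-positions."""
    universe = range(len(s))
    U = {i for i in universe if s[i] == "1"}
    return universe, U

def ends_with_one(s):
    """Evaluate the FO sentence  exists x ( U(x) and forall y ( y <= x ) )
    on S(s); it defines the regular language (0|1)*1."""
    universe, U = encode(s)
    return any(x in U and all(y <= x for y in universe) for x in universe)

print(ends_with_one("0101"))  # True
print(ends_with_one("0110"))  # False
print(ends_with_one(""))      # False
```

Capturing genuinely second-order sentences (quantifying over subsets of positions) can be done the same way by looping over all subsets of the universe, at exponential cost.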

Büchi's result says that the MSO-definable languages are precisely the regular languages. Doner, Thatcher, and Wright extended this result to trees:

A set of trees is regular iff it is definable in MSO.

These days, this result is widely employed in the area of "Foundations of XML", which has put automata, logic, and complexity theory under the same roof; nothing short of wondrous. If this interests you, take a look at some of the surveys here.

## Monday, October 24, 2005

### Let's play games

About a month ago I saved a draft on the complexity of games in finite model theory. Then, this morning, David Eppstein left a comment on my previous post, which reminded me of this draft. Incidentally, I have been a fan of his recreational math pages, especially the combinatorial game theory page, ever since taking a course on the theory of computation two years ago. Anyway, let's play some games now! The games that we will play are not something I invented myself ;-). They are actually pervasive in finite model theory.

Here is the description. Two graphs G and G', and a natural number k are given. There are two players in this game, Spoiler and Duplicator, who play for k rounds. In the i'th round (1 ≤ i ≤ k), Spoiler starts by picking a vertex in one of the graphs. Then, Duplicator has to respond by picking a vertex in the other graph. After k rounds have been played, we obtain the vertices a1, ..., ak of G, and the vertices b1, ..., bk of G'. Duplicator is declared the winner if the map taking each ai to bi is an isomorphism between the subgraphs of G and G' induced by the ai's and the bi's. Otherwise, Spoiler wins.

By the way, why are the players called Spoiler and Duplicator? If you think about it, Spoiler's goal is to show that the two graphs are not isomorphic (i.e. to spoil), while Duplicator's goal is to show that the two graphs are isomorphic by matching (or "duplicating") each of Spoiler's moves in such a way that no difference between the two graphs is revealed. So, Duplicator is the "good" guy, while Spoiler is the "bad" guy. Duplicator has a winning strategy in this game if, no matter how Spoiler moves in each round, Duplicator can always find a move that guarantees a win for Duplicator.

The problem is, given G, G', and k, decide whether Duplicator has a winning strategy. Determine its complexity! I will post the solution and some open problems later.

But, let me give you a simple example. Suppose G = Kk and G' = Kk+1, the complete graphs on k and k+1 vertices. Then, it is easy to see that Duplicator has a winning strategy in the i-round game for every 0 ≤ i ≤ k (why?). However, a moment's thought reveals that, for any i-round game on G and G' with i > k, Spoiler has a winning strategy. When G and G' are more complicated structures, it becomes harder to decide whether Duplicator has a winning strategy.
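For small graphs, one can decide the winner directly by searching the game tree: Duplicator wins the k-round game from a position iff the position is a partial isomorphism and, for every Spoiler move, some Duplicator reply wins the remaining k-1 rounds. Below is a naive (exponential-time) Python sketch of this search; the graph representation (adjacency-set dicts) and function names are mine:

```python
def partial_iso(G, H, a, b):
    """Check that the map a[i] -> b[i] is a partial isomorphism:
    it preserves equality of picked vertices and (non-)adjacency."""
    n = len(a)
    return all((a[i] == a[j]) == (b[i] == b[j]) and
               (a[j] in G[a[i]]) == (b[j] in H[b[i]])
               for i in range(n) for j in range(n))

def duplicator_wins(G, H, k, a=(), b=()):
    """Game-tree search: Spoiler may move in either graph each round,
    and Duplicator must have a reply that keeps her winning."""
    if not partial_iso(G, H, a, b):
        return False
    if k == 0:
        return True
    return (all(any(duplicator_wins(G, H, k - 1, a + (u,), b + (v,))
                    for v in H) for u in G) and
            all(any(duplicator_wins(G, H, k - 1, a + (u,), b + (v,))
                    for u in G) for v in H))

K2 = {1: {2}, 2: {1}}
K3 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(duplicator_wins(K2, K3, 2))  # True
print(duplicator_wins(K2, K3, 3))  # False
```

On the complete-graph example with k = 2, the search confirms that Duplicator wins the 2-round game on K2 versus K3 but loses once a third round lets Spoiler pick three distinct vertices.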

## Sunday, October 23, 2005

I have just added links to several interesting blogs to my blogroll: David Molnar's blog, Kurt Van Etten's, and Scott Aaronson's (the complexity zoo keeper). It's great to see the blogging community growing so well. This comes at a price, however: there is just no time to regularly visit every single interesting blog out there. There are just too many good ones these days!

Also, I have recently discovered a nice-looking online textbook on 'universal algebra'. I haven't read it yet, but it has some nice applications to computer science (automata theory, rewriting systems, etc.) and model theory. If you are a complexity theorist, one motivation to look at this book may be Straubing's monograph

Finite Automata, Formal Logic, and Circuit Complexity (Progress in Theoretical Computer Science), Springer, 1994.

In this monograph, Straubing demonstrates the possibility of attacking tantalizing open problems in circuit complexity using automata theory, semigroup theory, and finite model theory. In fact, I have heard a rumour that Straubing very recently proved a separation result in complexity theory using this technique ...

## Saturday, October 22, 2005

### Strange things happen

Around this time last year I was busy preparing my applications to PhD programs in the U.K. and North America. In particular, the University of Cambridge, which happened to be one of the universities I was interested in, required each applicant to write a short research proposal. [As you of course know, I chose the University of Toronto, which was my first preference all along.] So, I had a discussion with Anuj Dawar, the finite model theorist at Cambridge I wished to work with, on what sort of research directions to pursue for my PhD studies. He gave me two research problems:

1. The homomorphism preservation theorem in the finite.
2. Do order-invariant FO properties have a zero-one law?

Don't worry if you don't know what these mean. The important thing to know is that both are long-standing open problems in the area. Both problems looked formidable to me then (and they still do now), so I didn't bother thinking about them. Anyway, to my surprise, both problems have been resolved this year in quick succession. The first was solved positively by Benjamin Rossman, who just started graduate school this year. His paper, titled

Existential Positive Types and Preservation Under Homomorphisms

was recently published in this year's LICS proceedings and won the Kleene Award for the best student paper. [Congratulations, Ben!] You can find the paper on his web site. The second problem was solved very recently by Leonid Libkin, my supervisor. You will have to wait if you want to get the paper.

I wish I had collaborated with these guys on these problems. D'oh!

## Thursday, October 20, 2005

### Special Programmes on Logic and Algorithms in Cambridge

During the first half of next year, there will be a series of workshops on Logic and Algorithms (broadly construed) at the Isaac Newton Institute for Mathematical Sciences (Cambridge). A look at the workshop titles is enough to convince ourselves that there is something for everyone:

1. Finite and Algorithmic Model Theory (9-13 January).

2. Logic and Databases (27 February - 3 March).

3. Mathematics of Constraint Satisfaction: Algebra, Logic, and Graph Theory (20-24 March).

4. New Directions in Proof Complexity (10-13 April).

5. Constraints and Verification (8-12 May).

6. Games and Verification (3-7 July).

There will be quite a number of invited speakers from the University of Toronto, including Stephen Cook, Leonid Libkin, and Toniann Pitassi.

## Monday, October 17, 2005

### Logics capturing complexity classes

David Molnar asked a good question: (in my words) what does it mean to have a logic for a complexity class over ordered structures, as opposed to over all (i.e. both ordered and unordered) structures? To answer this question, I will have to define what it means for a logic to capture a complexity class. As always, I assume that you have read preliminaries 1 and 2, which can be accessed from the sidebar. [Please let me know if your browser has any problem displaying the symbols.]

We will first recall the definition of "expressibility" of a property in a logic. Suppose we are given a σ-property (or boolean query) Q over a class C of σ-structures, i.e., Q is just a subset of C. Then, Q is said to be expressible in a logic L over C if there exists an L-sentence f such that, for any A ∈ C, A ∈ Q iff A |= f. When C is not mentioned, C is assumed to be the class of all finite σ-structures.

Observe that one can think of boolean queries as an analogue of the notion of languages in the sense of formal language theory, or, more loosely, of "computational problems". For example, consider the problem of testing Hamiltonicity of a graph. We can think of this problem as a boolean query H over the graph vocabulary such that a (finite) graph is in H iff it is Hamiltonian. As a matter of fact, it is possible to encode any σ-structure as a (binary) string, and a string as a τ-structure for some vocabulary τ. Therefore, we can view any σ-boolean query as a language, and vice versa. So, we may redefine any complexity class K to be the set of all σ-properties that are computable in K, where σ ranges over all relational vocabularies. Hence, PTIME is the set of boolean queries that are computable in polynomial time. For convenience, given a class C of structures, we also define KC to be the set of properties Q that are computable in K when we restrict the input to Q to structures in C that have the same vocabulary as Q. When C is not mentioned, we assume C to be the set of all finite structures.
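To make the structure-to-string direction concrete: a graph on vertices {0,...,n-1} can be encoded, for instance, as the row-by-row flattening of its adjacency matrix. This is one standard encoding choice, not the only one, and the function name below is my own. A small Python sketch:

```python
def graph_to_string(n, edges):
    """Encode an undirected graph on vertices {0,...,n-1} as the
    row-by-row flattening of its n x n adjacency matrix."""
    adj = [["0"] * n for _ in range(n)]
    for (u, v) in edges:
        adj[u][v] = adj[v][u] = "1"
    return "".join("".join(row) for row in adj)

# A triangle on {0,1,2}:
print(graph_to_string(3, [(0, 1), (1, 2), (0, 2)]))  # 011101110
```

Under such an encoding, a graph property and the language of encodings of its members are interchangeable, which is exactly what lets us speak of boolean queries being "computable in K".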

We are now ready for the most important definition in descriptive complexity. A logic L is said to capture a complexity class K over a class C of finite structures if, for any property Q over C: Q ∈ KC iff Q is expressible in L over C. Again, when C is not mentioned, it is assumed to be the set of all finite structures.

We know that existential second-order logic (ESO) captures NP. This is Fagin's result, which was discussed in Lance's blog recently. On the other hand, it is open whether there exists a logic that captures PTIME. We know, however, that there exists a logic (FO+LFP: first-order logic with a least fixed point operator) that captures PTIME over all ordered structures. [By ordered structures we mean that a designated binary relation in the structure is interpreted as a total linear ordering on the universe.] This result was proven independently by Neil Immerman and Moshe Vardi in the 80's. You might be wondering whether FO+LFP captures PTIME over all finite structures. The answer is no: even testing whether a graph has an even number of vertices cannot be done in FO+LFP without the help of an "external" linear order. The question of whether there exists a logic for PTIME was first raised by Chandra and Harel in the early 80's. Also, as I mentioned in a previous post, Yuri Gurevich conjectured that there is no logic for P (which would imply that P ≠ NP). On a related note, it is also open whether there is a logic for any important complexity class below NP, such as PTIME, L, NL, etc.

A related question is the following: why do we want a logic that captures complexity classes over all finite structures (not just over ordered structures)? The answer is technical: linear orderings make inexpressibility proofs more complicated by an order of magnitude!

## Saturday, October 15, 2005

### Fun Proof Techniques

About three years ago, I took a course on the theory of computation. I remember that during the second lecture, our instructor introduced us to the mathematical background, and in particular to proof techniques. He started by saying "in mathematics, we have proof by induction, proof by construction, and proof by contradiction ...". Before he finished the lecture, he presented us with a list of fun proof techniques which work in practice, some of which are listed here. My favorite ones are:

1. Proof by blatant assertion: this is marked by expressions like "obviously", "trivially", "any moron can see that".
2. Proof by example: you sketch a proof that the theorem is true for n = 4, and claim that this proof generalizes to all n.
3. Proof by picture: picture + handwaving makes a pretty compelling argument in a classroom.
4. Proof by omission: we state the theorem and the proof is left as an exercise for the reader.
5. Proof by accumulated evidence: for three decades we have worked very hard to produce an efficient algorithm for NP-complete problems without results; therefore P != NP.
6. Proof by reference to famous people: I saw Karp on the elevator, and he thought that our problem is NP-complete!
7. Proof by confusion: P = NP <=> P - NP = 0 <=> P(1-N) = 0 <=> P = 0 or N = 1; but the last expression is false and therefore P != NP.

What are your favorite proof techniques?

### An approach to prove P != NP

Let K be a complexity class. We wish to recursively enumerate exactly those "graph languages" that can be decided in K. By graph languages, we mean languages whose inputs are graphs (say in adjacency matrix format) such that two isomorphic graphs are either both in the language or both not in the language. So, graph languages are "closed under isomorphisms". If this can be done for K, we say that the set of K graph properties is recursively indexable.

The set of NP graph properties is known to be recursively indexable. This is due to the fact that we have a logic for NP (namely, existential second-order logic). On the other hand, it is a long-standing open problem whether the P graph properties are recursively indexable. [No, you can't use the usual enumeration of clocked polynomial-time Turing machines, as it contains "bad" graph languages.] Of course, the existence of a logic for P, which is another big open problem in finite model theory, would imply that the P graph properties are recursively indexable. [(LFP+<)inv captures PTIME properties over ordered structures only.] Yuri Gurevich conjectured that the P graph properties are not recursively indexable, and therefore that there is no logic for P. See his paper. In particular, if the conjecture is true, then P is different from NP, as the NP graph properties are recursively indexable! So, try to prove that the P graph properties are not recursively indexable!

## Thursday, October 13, 2005

### Favorite Logic of September 2005: First-order Logic (3)

previous|next

We are somewhat behind schedule, so let's close things out now. This post concludes our series of posts on first-order logic as the favorite logic of last month (oops!). I will give a glimpse of the expressive power of first-order logic, and of tools for showing inexpressibility results. The literature on this topic is so vast that it is impossible to cover all possible results and tools; so, I will mention only a selection of results and tools, and give examples of how to apply them. Also, see the preliminaries before reading this post.

Let's get right down to business. Our main problem can be described as follows: suppose Q is a σ-property (i.e. boolean query), and we want to know whether there exists a first-order sentence that defines Q. If there is one, then it suffices to come up with a sentence f that defines Q. If not, what shall we do?

Well ... in model theory, we have a plethora of tools in our toolbox:

1. Compactness: a theory T is consistent iff every finite subset of T is consistent.
2. (Restricted form of) Lowenheim-Skolem: If a theory has an infinite model, then it has a countable model.

Let's try compactness on an example. Suppose we want to show that the property GC of testing graph connectivity is not first-order definable. Suppose, to the contrary, that the sentence f defines GC. Expand the vocabulary with two constant symbols c and d, and consider the theory T = {f} ∪ { πn : n is a positive natural number } where πn is the first-order sentence

there is no path of length at most n from c to d

Each πn is easily expressible in first-order logic. Every finite subset of T is consistent: just take the largest n for which πn appears in the subset, and take a path that is longer than n, with c and d interpreted as its endpoints (such an "elongated" path is a connected graph, so it satisfies f as well). By compactness, T itself is consistent, so it has a model M. But in M, the elements c and d are not connected by any (finite) path, so M is not connected, contradicting the assumption that f defines connectivity. Therefore, no first-order sentence defines GC.

That's short and simple! The problem, however, is that this doesn't prove that FO cannot express GC in the finite (i.e. when the ground set of GC is restricted to finite graphs only), as the compactness theorem fails in the finite. So, how do we prove that GC cannot be expressed in the finite? It is a sad fact that most of the tools from model theory fail in the finite. Some of the few that survive include the Ehrenfeucht-Fraisse game theorem, Gaifman-locality, and Hanf-locality. Here we shall use the notion of Hanf-locality to prove that GC is not FO-definable.

Let me try to intuitively illustrate the notion of Hanf-locality for graphs. A graph property P (over all finite graphs) is said to be Hanf-local if there exists a number d such that for every two finite graphs A and B:

if A and B d-locally look the same, then A and B agree on P.

The smallest such number d is called the Hanf-locality rank of P. What do we mean by "d-locally look the same"? A and B d-locally look the same if there exists a bijection g from V(A) to V(B), the vertex sets of A and B, such that for every c in V(A) there exists an isomorphism from the d-neighborhood of A around c to the d-neighborhood of B around g(c) which maps c to g(c). That is, if you have only limited local vision (say, up to distance d) and you pick a spot c in the graph A, I can always give you another spot in the graph B (via the bijection) such that you cannot see the difference between your surroundings at these two different spots.

Now, our strategy for proving that GC is not first-order expressible is:

1. Show that every first-order definable query is Hanf-local.
2. Show that GC is not Hanf-local.

We won't prove item 1. We will, however, sketch how to prove item 2. Suppose GC is Hanf-local with rank d. Then, define two graphs G and G' as follows. Take a number p >> d. G is a cycle with 2p nodes, and G' is the disjoint union of two cycles, each with p nodes (so both graphs have 2p nodes). You can easily check that there is only one type of d-neighborhood (up to isomorphism) around any vertex c in G and G', namely a path on 2d+1 vertices with c right in the middle. So, just take any bijection between the vertices of G and G' to show that they d-locally look the same. However, G is connected while G' is not, so they disagree on GC, contradicting the assumption that GC is Hanf-local!
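One can verify the neighborhood claim by brute force for small p and d. Below is a Python sketch (the graph construction, BFS, and function names are my own scaffolding): for p > 2d+1, every d-neighborhood in either graph induces a path on 2d+1 vertices, so any bijection between the two vertex sets witnesses that the graphs d-locally look the same.

```python
from collections import deque

def cycle(n, offset=0):
    """Adjacency sets of a cycle on vertices offset, ..., offset+n-1."""
    return {offset + i: {offset + (i - 1) % n, offset + (i + 1) % n}
            for i in range(n)}

def d_neighborhood(G, c, d):
    """Vertices at distance <= d from c, by BFS."""
    dist, queue = {c: 0}, deque([c])
    while queue:
        u = queue.popleft()
        if dist[u] < d:
            for v in G[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
    return set(dist)

def neighborhood_is_path(G, c, d):
    """Check that the d-neighborhood around c induces a path on 2d+1
    vertices: two endpoints of degree 1, the rest of degree 2."""
    N = d_neighborhood(G, c, d)
    degrees = sorted(len(G[v] & N) for v in N)
    return len(N) == 2 * d + 1 and degrees == [1, 1] + [2] * (2 * d - 1)

p, d = 10, 2                            # p >> d
G = cycle(2 * p)                        # one cycle with 2p nodes
H = {**cycle(p), **cycle(p, offset=p)}  # two disjoint cycles, p nodes each
print(all(neighborhood_is_path(G, c, d) for c in G))  # True
print(all(neighborhood_is_path(H, c, d) for c in H))  # True
```

Since every d-neighborhood in both graphs is the same path type, any bijection at all does the job, exactly as in the sketch of item 2.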

I would love to talk about Ehrenfeucht-Fraisse games and Gaifman-locality, which are equally important tools in finite model theory for proving FO inexpressibility. Sadly, this would only make this post longer than it already is. So, I will refer you to Leonid Libkin's Elements of Finite Model Theory for more on this (and also for the proper acknowledgement of these results!).

## Tuesday, October 11, 2005

### Happy Thanksgiving!

Most of you in the U.S. are probably bewildered by what I'm saying. However, yesterday was Thanksgiving Day in Canada. This was my first encounter with any kind of "thanksgiving day" in my entire life, as I am from Australia.

Anyway, I guess I will have to get used to this custom, as I will be here for another 5 years. So, happy Thanksgiving! Thanks to all of you who have left useful comments on my blog. I do learn a lot from all these comments.

### Lance's Recent Post on Fagin's Theorem

Lance Fortnow posted his favorite theorem of this month on his blog: it's Fagin's theorem, one of the most important results (and the first of its kind) in descriptive complexity theory. Nice!

## Monday, October 10, 2005

### Favorite Logic of September 2005: First-order Logic (2)

(previous|next)

Howdy! Today we will resume our discussion of first-order logic. In particular, we will talk about model-checking.

Diving right in ... the problem is as follows: given a finite structure M and an FO sentence f, we want to test whether M |= f, i.e., whether f is true in M. Immediately, I hear you say that this problem is trivially decidable, as we can use the inductive definition of M |= f to do the testing. Well ... it is decidable, but we want to know its complexity. We won't talk about how to represent M, although it matters in this case, as we will only mention results. You can represent f in any way you like, e.g., as the parse tree of f.

There are three common measures for the complexity of this problem:

1. Data complexity: the input is M (i.e. f is fixed). So, we measure the performance of any algorithm for the problem in terms of |M| only.
2. Expression complexity: the input is f (i.e. M is fixed).
3. Combined complexity: the input is (M,f).

The data complexity measure is commonly used in the field of databases, where usually M is very big and the query f is small. The combined and expression complexity measures are commonly used in the area of verification (commonly called model-checking), where f can be very large and complex. If the reader is curious about either of these areas, I gladly recommend "Foundations of Databases" by Abiteboul et al. and "Logic in Computer Science" by Huth and Ryan.

It is not hard to analyze the running time of the algorithm for testing M |= f induced by the usual inductive definition of |=. You will get O(|f| x |M|^k), where k is the width of f, i.e., the maximum number of free variables over all subformulas of f. So, k does depend on f; that is, the algorithm runs in time polynomial in |M| but exponential in |f|. In fact, one can show that the data complexity of this problem for FO is in AC^0, whereas the expression and combined complexity are PSPACE-complete. As an exercise, the reader might want to try to prove the latter, as it is quite simple (hint: reduce from QBF).
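The inductive evaluation algorithm is easy to write down explicitly. Here is a minimal Python sketch for FO over a single binary relation E, with formulas represented as nested tuples (an ad-hoc syntax of my own); note how each nested quantifier contributes a factor of |M| to the running time, which is where the |M|^k bound comes from:

```python
def holds(M, f, env=None):
    """Naive inductive evaluation of M |= f under assignment env.
    M is a pair (universe, E); formulas are nested tuples such as
    ('E', 'x', 'y'), ('not', g), ('and', g, h), ('exists', 'x', g)."""
    env = env or {}
    universe, E = M
    op = f[0]
    if op == 'E':
        return (env[f[1]], env[f[2]]) in E
    if op == 'not':
        return not holds(M, f[1], env)
    if op == 'and':
        return holds(M, f[1], env) and holds(M, f[2], env)
    if op == 'exists':
        # each nested quantifier multiplies the running time by |universe|
        return any(holds(M, f[2], {**env, f[1]: a}) for a in universe)
    raise ValueError("unknown connective: %s" % op)

# "some vertex has an outgoing edge": exists x exists y E(x,y)
M = ({1, 2, 3}, {(1, 2), (2, 3)})
print(holds(M, ('exists', 'x', ('exists', 'y', ('E', 'x', 'y')))))  # True
```

Universal quantification can be taken as derived (forall x g is not exists x not g), so the four cases above already cover full FO over this vocabulary.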

Papadimitriou and Yannakakis have an interesting paper that refines these complexity measures using the theory of parameterized complexity. The motivation of the paper is the following: although |f| << |M|, the fact that |f| appears in the exponent of |M| is no good, because even for "small" |f|, say |f| = 20, an algorithm that runs in O(n^20) time cannot be considered tractable (especially when n is really big, as in database applications). So, the question is whether it is possible to have an algorithm for testing M |= f that runs in time g(|f|)·|M|^c, where g is a function from natural numbers to natural numbers, and c is a constant independent of both |M| and |f|. The result of the paper essentially stipulates that any algorithm for evaluating M |= f must have |f| in the exponent of |M|, unless the W hierarchy collapses. See the following reference if interested:

On Complexity of Database Queries, Papadimitriou and Yannakakis, JCSS, Vol 58, 1999.

We will next talk about the expressive power of FO.

## Saturday, October 01, 2005

### Bugs!

I had just spent about two hours writing the second part of Favorite Logic of September 2005 when, upon saving it as a draft, my Firefox froze and all the text got erased. Ah well. In view of this, and since there are no working computers in my office, I think I will just start posting when I get my PowerBook back, which is next week.