## Monday, December 19, 2005

### Immerman's descriptive complexity survey

The most recent issue of SIGACT News (vol. 36, no. 4) contains a guest column by Neil Immerman on progress in descriptive complexity. [Thanks to Kurt for making me aware of this article.] The article concisely surveys some recent progress in finite model theory with an emphasis on descriptive complexity. The style is extremely informal, and I believe it makes a perfect weekend read. I highly recommend the article to anybody interested in logic or computational complexity.

Let me say a little bit about the content of the article (no spoilers!). It starts by motivating the study of descriptive complexity. Then it moves on to the spectrum problem and Fagin's theorem (that NP = existential second-order logic), which marked the beginning of descriptive complexity. We then see some other capturing results in descriptive complexity (e.g. characterizations of P, PSPACE, etc.). My favorite capturing result mentioned here is DSPACE[n^v] = VAR[v+1], where VAR[k] denotes the set of problems expressible by a sequence of first-order sentences using at most k variables. And so forth... but let me mention another interesting point from the article. Notice that most of these capturing results assume a linear ordering on the universe. [In fact, a linear ordering is necessary for these results to work.] For example, the Immerman-Vardi theorem says that FO(LFP) captures PTIME over the class of ordered structures; however, FO(LFP) does not capture PTIME over the class of all structures. In fact, a long-standing open problem in finite model theory is whether there exists a logic capturing order-independent PTIME properties. The article nicely surveys our progress towards resolving this problem. Another interesting thing that you will get to know a bit about is the phenomenon that one logic may be exponentially more "succinct" than another logic with the same expressive power.

A fair question to ask now is to what extent descriptive complexity has helped us answer structural complexity questions, such as separating two complexity classes. Let me try to provide some answers to this question. I believe that the first switching lemma, which separates AC0 from LOGSPACE, was proved by Miklos Ajtai using finite model theory. This result was discovered independently, around the same time, by Furst, Saxe, and Sipser using combinatorics. Most people, even finite model theorists, think that the combinatorial proof by Furst et al. is easier to understand than Ajtai's. [Of course, we now have Hastad's switching lemma with some elegant proofs.] Next I will mention the Immerman-Szelepcsenyi theorem that NSPACE(f(n)) is closed under complementation. Immerman proved this by appealing to descriptive complexity, while Szelepcsenyi used some elementary combinatorics. In this sense, a skeptic might say that descriptive complexity has not really helped structural complexity theory. But what has? Nothing much, really, with expander graphs as the one exception (recall Reingold's recently proved theorem that L = SL). On the other hand, I believe that descriptive complexity helps us understand computational complexity. Many problems and classes in complexity theory have simple and elegant correspondences in descriptive complexity theory. In fact, some logical characterizations of complexity classes are commonly used by computational complexity theorists (e.g. the logical characterization of each level of the polynomial-time hierarchy). In this sense, I am positive that one day somebody who understands computational complexity will, with some help from descriptive complexity, resolve some of the most tantalizing open problems in structural complexity.

## Saturday, December 10, 2005

### Foundations of XML: Validating XML Documents (1)

Apologies for my long silence. Things got unexpectedly hectic for me in the past two to three weeks for various reasons. Anyway, as I promised in my previous post, I will talk further about the foundations of XML, should the reader be interested. I received two comments expressing interest, one from Jon Cohen, which assures me that at least somebody is going to read this post :-). So, on with our discussion of the foundations of XML. Let's spend some time now talking about validation of XML documents. [I am still preparing my post on navigating XML documents and XPath.] In this post, I will talk about a practical validation language that goes by the name of DTD (Document Type Definition).

Roughly speaking, the aim of validation is to ensure that an XML document "makes sense". More precisely, the problem is, given an XML document (i.e. a labeled unranked tree T) and a statement f in some specification language L, to test whether T satisfies f. The most frequently used language L for validating XML documents is DTD. Here, we use a formal model of DTDs that does not incorporate XML "data values", e.g., text within XML tags (see my previous post). Let us start with an example. As always, we fix a finite alphabet Σ of discourse. Continuing with our example in the previous post, suppose we want to test whether an unranked tree T satisfies the following properties:

1. its root is labeled "books",
2. every node labeled "books" may have zero or more children labeled "book", and
3. every node labeled "book" must have four children labeled "title", "author", "year", and "publisher" (in this order), optionally followed by one child labeled "rank".

A DTD that asserts this property is the following:
```
books     -> book*
book      -> title, author, year, publisher, rank?
title     -> ε
author    -> ε
year      -> ε
publisher -> ε
rank      -> ε
```

If you have studied regular languages, regular expressions, and context-free grammars, then chances are that you won't have any problem understanding what the previous DTD says. Loosely, each rule (i.e. V -> e, where V is a "label" in Σ and e is a regular expression over Σ) specifies that the labels c1, ..., cn of the children of a V-labeled node in an unranked tree T, read left to right, must form a string generated by e. The Kleene star '*' stands for "zero or more". The question mark '?' abbreviates "zero or one". The comma ',' means "followed by" in the sense of string concatenation. Finally, the symbol 'ε' denotes the empty string. Therefore, formally, a DTD over Σ is a pair (r,P) where r is the root symbol in Σ and P is a set of rules over Σ such that r does not appear in the "body" of any rule in P (i.e. on the right-hand side).
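To make this semantics concrete, here is a minimal validator sketch in Python (my own toy encoding, not from any DTD library). It assumes each label is abbreviated by a single character, so that each rule's body becomes an ordinary regular expression over the children's labels; a tree is a nested (label, children) pair.

```python
import re

# Hypothetical single-character abbreviations for the labels of the example.
ALIAS = {"books": "B", "book": "b", "title": "t", "author": "a",
         "year": "y", "publisher": "p", "rank": "r"}

# The rules of the example DTD, one regular expression per label:
RULES = {
    "books": "b*",        # books -> book*
    "book": "taypr?",     # book  -> title, author, year, publisher, rank?
    "title": "", "author": "", "year": "", "publisher": "", "rank": "",
}

def satisfies(tree, rules):
    """Check every node's children, read left to right, against its rule."""
    label, children = tree
    word = "".join(ALIAS[c[0]] for c in children)
    if re.fullmatch(rules[label], word) is None:
        return False
    return all(satisfies(c, rules) for c in children)

def valid(tree, dtd):
    """A DTD is a pair (root symbol, rules), as in the formal definition."""
    root, rules = dtd
    return tree[0] == root and satisfies(tree, rules)

book = ("book", [("title", []), ("author", []),
                 ("year", []), ("publisher", [])])
print(valid(("books", [book, book]), ("books", RULES)))     # True
print(valid(("books", [("title", [])]), ("books", RULES)))  # False
```

The optional "rank" child is handled for free by the regular expression: "taypr?" matches both "tayp" and "taypr".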

Although DTD is a nice practical specification language for XML, it fails to satisfy some properties of both practical and theoretical interest. For example, the sets of trees recognizable by DTDs are not closed under union or complementation. [Recall that regular languages are closed under union, intersection, and complementation (see here for more closure properties of regular languages).] Also, DTDs cannot recognize "contexts" (e.g. how do you create a DTD in which the tag "name" means two different things, such as the "name" of a "book", which should have no children, and the "name" of a "person", which should have exactly two children labeled "First Name" and "Last Name"?). Nonetheless, this problem does not seem to bother programmers that much, which explains why DTD is still the most popular XML validation language. However, there are some XML validation languages that address the aforementioned problems. I believe the most popular such language is XSD (XML Schema Definition). We shall consider a formal model of XSD next. We shall later see that XSD has the full power of the yardstick logic for XML validation, which is monadic second-order logic.
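To see the non-closure under union, consider this small sketch (my own toy example, not from the article). Take L1 = {a(b(c), b(c))} and L2 = {a(b(d), b(d))}: since a DTD has only one rule per label, any DTD whose language contains both trees must allow a b-node to have either a c-child or a d-child, and therefore also accepts the "mixed" tree a(b(c), b(d)), which lies in neither language.

```python
import re

# A tree is a (label, children) pair over single-character labels; a DTD is
# given here as a map from each label to a regex over its children's labels.
def satisfies(tree, rules):
    label, children = tree
    word = "".join(c[0] for c in children)
    return (re.fullmatch(rules[label], word) is not None
            and all(satisfies(c, rules) for c in children))

# dtd1 accepts exactly the tree a(b(c), b(c)); dtd2 exactly a(b(d), b(d)).
dtd1 = {"a": "bb", "b": "c", "c": ""}
dtd2 = {"a": "bb", "b": "d", "d": ""}

# Merging the two rule sets is the best a single DTD can do, and it
# necessarily over-generates:
merged = {"a": "bb", "b": "c|d", "c": "", "d": ""}
mixed = ("a", [("b", [("c", [])]), ("b", [("d", [])])])  # a(b(c), b(d))

print(satisfies(mixed, dtd1))    # False: not in L1
print(satisfies(mixed, dtd2))    # False: not in L2
print(satisfies(mixed, merged))  # True: the merged DTD over-generates
```

The same one-rule-per-label restriction is exactly what prevents DTDs from distinguishing the two uses of "name" above.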