_{n}(f) and the limit, if it exists, by μ(f). [Of course, this means that μ and μ_{n} range between 0 and 1.] In other words, every sentence f is *almost surely false* or *almost surely true*. For simplicity, let us restrict our attention to undirected loopless graphs, i.e., σ contains only one binary relation E, which is both symmetric and irreflexive. Below is perhaps the most fundamental theorem about the 0-1 law.

Theorem (Fagin 1976, Glebskii et al. 1969): First-order logic admits the 0-1 law.

Most finite model theorists equate the admission of the 0-1 law with the inability to do "counting". Consider the property EVEN = { G : G has an even number of vertices }. We see that μ_{n}(EVEN) = 1 iff n is even, and hence μ(EVEN) does not exist. I have yet to see a property whose asymptotic probability μ converges to something other than 0 or 1, or diverges, but has nothing to do with counting. Consider the following properties:

- μ(BIPARTITE) = 0

- μ(HAMILTONIAN) = 1

- μ(CONNECTED) = 1

- μ(TREE) = 0
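As a quick sanity check on claims like these, one can estimate μ_n empirically by sampling from the (G_n, 1/2) model. The sketch below is my own addition, not from the post; the helper names `random_graph`, `is_connected`, and `estimate_mu` are mine.

```python
import random

def random_graph(n, p=0.5):
    """Sample a graph on vertices {0, ..., n-1}: each of the n(n-1)/2
    possible edges is included independently with probability p."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def is_connected(adj):
    """Graph search from vertex 0; connected iff every vertex is reached."""
    seen = {0}
    stack = [0]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == len(adj)

def estimate_mu(n, prop, trials=300):
    """Monte Carlo estimate of mu_n(prop) under the (G_n, 1/2) model."""
    return sum(prop(random_graph(n)) for _ in range(trials)) / trials

for n in (5, 10, 20, 40):
    print(n, estimate_mu(n, is_connected))
```

Already at modest n, the estimate for CONNECTED is essentially 1, consistent with μ(CONNECTED) = 1.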

On the other hand, if you admit constant or function symbols, it is possible to have first-order properties whose asymptotic probability converges to something other than 0 or 1. So it is crucial to assume that our vocabulary σ is purely relational.
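A standard illustration of this (my addition, not from the post): over a vocabulary with a single unary function symbol f, the first-order sentence ∃x (f(x) = x) holds in a uniformly random function on n elements with probability 1 - (1 - 1/n)^n, which tends to 1 - 1/e ≈ 0.632, neither 0 nor 1. A quick simulation, with names of my choosing:

```python
import random

def has_fixed_point(n, trials=20000):
    """Monte Carlo estimate of Pr[some x has f(x) = x] when f is a
    uniformly random function from {0, ..., n-1} to itself."""
    hits = sum(any(random.randrange(n) == x for x in range(n))
               for _ in range(trials))
    return hits / trials

# Exact probability is 1 - (1 - 1/n)**n, which tends to 1 - 1/e ≈ 0.632.
print(has_fixed_point(50))
```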

What is the use of the 0-1 law in finite model theory, apart from the amusement of some theoreticians? [I have to confess that many proofs about the 0-1 law, including Fagin's proof of the above theorem, are of great beauty.] Well, a primary concern of finite model theory is to determine how expressive a logic is over a given class of finite structures. By proving that a logic admits the 0-1 law, we show that **any** property whose asymptotic probability does not converge to either 0 or 1 is not expressible in the logic. For example, an immediate corollary of the above theorem is that first-order logic cannot express EVEN.

Let me briefly outline the proof ideas of the above theorem, which recur in 0-1 law proofs. We only need to show that, for each natural number k, there exists a property P_{k} of graphs such that:

- μ(P_{k}) = 1, and

- any two models of P_{k} cannot be distinguished by a first-order sentence of quantifier rank k. [Recall that the quantifier rank of a first-order sentence is the maximum depth of nesting of quantifiers in the sentence.]

Why is this sufficient? Let f be a first-order sentence of quantifier rank k. There are two cases. First, there exists a model of P_{k} that is also a model of f. In this case, it is not hard to show that *any* model of P_{k} is also a model of f, which implies that μ(f) ≥ μ(P_{k}) = 1. The second case is that no model of P_{k} is a model of f, i.e., all of them are models of "not f". Therefore, μ("not f") ≥ μ(P_{k}) = 1. That is, we have μ(f) = 0.

The property P_{k} is commonly referred to as an *extension axiom*. Note that P_{k} does not have to be L-definable (where L is the logic that is to admit the 0-1 law).
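For concreteness, one common form of the k-extension property states: for every two disjoint vertex sets X and Y with |X| + |Y| ≤ k, some vertex z outside X ∪ Y is adjacent to every vertex of X and to no vertex of Y. A brute-force checker (a sketch of mine; exact formulations vary across presentations):

```python
from itertools import combinations

def satisfies_extension(adj, k):
    """Check the k-extension property: for all disjoint vertex sets X, Y
    with 1 <= |X| + |Y| <= k, some z outside X and Y is adjacent to every
    vertex of X and to no vertex of Y.  adj maps vertices to neighbour sets."""
    vertices = list(adj)
    for size in range(1, k + 1):
        for group in combinations(vertices, size):   # the set X ∪ Y
            for r in range(size + 1):
                for X in combinations(group, r):
                    Y = [v for v in group if v not in X]
                    if not any(set(X) <= adj[z] and not adj[z] & set(Y)
                               for z in vertices if z not in group):
                        return False
    return True

# Example: the 5-cycle satisfies the 1-extension property; the triangle
# does not (no vertex is non-adjacent to a given one).
```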

Two excellent expositions of the basics of 0-1 laws are chapter 12 of Leonid Libkin's Elements of Finite Model Theory and James Lynch's DIMACS 1995-1996 tutorial notes on "Random Finite Models".

## 5 comments:

Again, thanks for some great posts!

Can you tell us anything about quantitative strengthenings of 0-1 laws? What are the typical convergence rates of the probabilities as model size grows?

Also, for places/logics where the 0-1 law fails, are there any remnants of the phenomenon?

Thanks for your comments!

Great questions!

> Can you tell us anything about

> quantitative strengthenings of 0-1 laws?

> What are the typical convergence rates

> of the probabilities as model size

> grows?

If I understand your question correctly, the most well-known quantitative strengthening of the 0-1 law (for first-order logic) is probably the Spencer-Shelah theorem.

Let me first fix some notation before stating the theorem. The random graph model (G_n, p(n)), where G_n is the set of all graphs with vertex set {0,...,n-1} and p(n) is a function from the natural numbers to [0,1], is essentially the probability space that you get by picking each edge independently with probability p(n). You can check that we used the model (G_n, 1/2) in the posting. Random graph models of the form (G_n, c), where c is a constant, are called uniform random graph models. However, graph theorists like Joel Spencer mostly deal with non-uniform probability models. Consider the function p_c(n) = 1/n^c, where c is a positive real number. [This means that one deals with "sparse" graphs.] With respect to the model (G_n, p_c(n)), when does first-order logic admit the 0-1 law? The Spencer-Shelah theorem says that it admits the 0-1 law iff c is irrational. This is perhaps the deepest theorem with respect to the 0-1 law, and it is covered in full detail in Spencer's book "The Strange Logic of Random Graphs".

> Also, for places/logics where the 0-1

> law fails, are there any remnants of the

> phenomenon?

The 0-1 law is actually only one of the many properties that logicians care about. It is, by the way, the strongest and the most useful of such properties. When a logic fails to have the 0-1 law, logicians consider some other properties, like:

1. Convergence. A logic has the convergence property if mu(f) always exists for each sentence f in the logic.

2. Slow convergence rate (if 1 fails). A logic has the slow convergence rate property if mu_{n+1}(f) - mu_{n}(f) converges (necessarily to 0, since the sequence mu_{n}(f) is bounded) for each sentence f in the logic.
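EVEN makes a handy (if trivial) illustration here, sketched below with names of my choosing: μ_n(EVEN) alternates between 0 and 1, so the difference μ_{n+1}(EVEN) - μ_n(EVEN) alternates between 1 and -1 and does not converge. Hence any logic that could express EVEN would fail both properties above.

```python
def mu_even(n):
    """mu_n(EVEN): the fraction of n-vertex graphs with an even number
    of vertices is 1 when n is even and 0 when n is odd."""
    return 1 if n % 2 == 0 else 0

diffs = [mu_even(n + 1) - mu_even(n) for n in range(1, 10)]
print(diffs)  # alternates 1, -1, 1, ... so the difference sequence does not converge
```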

Lynch's tutorial is probably the best source of information on the issues above.

Thanks!

Spencer-Shelah sounds fascinating, although that's not the kind of result I was thinking about.

I wanted to know, for the (G_n,1/2) model, if you fix a first-order graph property f with mu(f) = 0 (/1), how fast does the probability mu_{n}(f) of satisfying it approach 0 (/1) as n increases? Is it always exponentially, or slower in some cases?

I checked with my supervisor and the calculation in his book (chapter 12). It is exponentially fast.
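For one concrete instance, a back-of-the-envelope union bound of my own (not the book's calculation) already exhibits the exponential rate for CONNECTED in the (G_n, 1/2) model: if the graph is disconnected, some vertex set S with 1 ≤ |S| ≤ n/2 has no edge to its complement, an event of probability 2^{-|S|(n-|S|)}.

```python
from math import comb

def disconnection_bound(n):
    """Union bound on Pr[G(n, 1/2) is disconnected]: sum over all vertex
    sets S with 1 <= |S| <= n/2 of the probability 2^{-|S|(n-|S|)} that
    no edge crosses between S and its complement."""
    return sum(comb(n, s) * 2.0 ** (-s * (n - s)) for s in range(1, n // 2 + 1))

for n in (10, 20, 30, 40):
    print(n, disconnection_bound(n))
```

The bound is dominated by the |S| = 1 term, n·2^{-(n-1)}, so 1 - μ_n(CONNECTED) indeed decays exponentially in n.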

Great post, very useful indeed.
