Thursday, June 22, 2006
Australia qualifies for round of 16
After losing 2-0 to Brazil, the Socceroos needed at least a draw against Croatia. What a nerve-racking (and controversial) game that was! In the end, Australia managed a 2-2 draw with Croatia. Although I support Australia through and through, I don't think the Socceroos secured the result convincingly. In a way, we were lucky to get the draw! Next: Italy vs. Australia.
Wednesday, June 14, 2006
Linear Temporal Logic (1)
As their name suggests, temporal logics are logics in which the truth values of formulas may change over time. The significance of temporal logic in computer science is indisputable, especially in the verification of safety-critical reactive systems. Model checkers for many flavors of temporal logic have been developed to the extent that they can quickly verify real-world systems with a huge number of states. Furthermore, in contrast to many other tools and techniques of formal methods, which are semi-automatic and relatively expensive, model checking is fully automatic and quite cheap (excellent free model checkers like SPIN exist). Last year these tools and techniques were awarded the ACM Paris Kanellakis Theory and Practice Award. Amir Pnueli, the researcher who introduced the idea of using temporal logic in verification, was awarded the ACM Turing Award in 1996.
As we mentioned, temporal logic comes in a number of flavors. At present we deal only with Linear Temporal Logic (LTL). It is linear because we evaluate each LTL formula with respect to a vertex-labeled infinite path p_0 p_1 ..., where each vertex p_i in the path corresponds to a point in time. The "present" is p_0; the "future" is any p_i with i ≥ 0. Each vertex is labeled by an element of 2^Σ, i.e., a subset of Σ, where Σ is a set of atomic propositions. In other words, the label of a vertex indicates which atomic propositions are true at that vertex. Here is a somewhat contrived example. Suppose that Σ = {a,b}, and take the path {a}^2000 {a,b}^ω, whose first 2000 vertices are {a}-labeled and whose remaining vertices are {a,b}-labeled. An example of an LTL formula is "at some point in the future, b holds", which is true on this path. Another example is "a holds until b holds", which is also true.
More formally, the syntax of LTL formulas can be defined in Backus-Naur form as follows:
φ, ψ ::= p (p ∈ Σ) | φ ∧ ψ | ¬φ | X φ | φ U ψ
Given a 2^Σ-labeled (infinite) path w, we write w^i to refer to the infinite subpath of w starting from the ith vertex (the initial vertex is the 0th vertex). We may now define the semantics of LTL formulas:
- w |= p (p ∈ Σ) if the label of the initial vertex of w contains p
- w |= φ ∧ ψ if w |= φ and w |= ψ
- w |= ¬ φ if it is not the case that w |= φ
- w |= X φ if w^1 |= φ
- w |= φ U ψ if there exists j ≥ 0 such that w^j |= ψ and, for all 0 ≤ i < j, we have w^i |= φ
We use F φ to abbreviate "true U φ", which says that φ holds at some point in the future. Note that the "future" also includes the present. We use G φ to abbreviate ¬F¬φ, which says that φ must hold at all points in the path, i.e., globally. Finally, note that all other Boolean connectives can be expressed with {¬,∧}, so we may freely use any Boolean connective when writing formulas.
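To make the semantics above concrete, here is a small evaluator for LTL over ultimately periodic paths u·v^ω, given as a finite prefix u and a nonempty cycle v (lists of label sets). The tuple encoding of formulas and all names here are my own illustrative choices, not standard; U is computed as a least fixpoint over the finitely many distinct suffixes of the "lasso".

```python
# A naive LTL evaluator over ultimately periodic paths u·v^ω. Formulas are
# nested tuples: ('p', a), ('true',), ('not', f), ('and', f, g),
# ('X', f), ('U', f, g). Illustrative sketch, not from the post.

def holds(formula, prefix, cycle):
    path = prefix + cycle                 # positions 0..N-1
    N, loop = len(path), len(prefix)      # last position loops back to `loop`
    succ = lambda i: i + 1 if i < N - 1 else loop

    def table(f):                         # truth value of f at every position
        op = f[0]
        if op == 'true':
            return [True] * N
        if op == 'p':                     # atomic proposition
            return [f[1] in path[i] for i in range(N)]
        if op == 'not':
            return [not b for b in table(f[1])]
        if op == 'and':
            return [x and y for x, y in zip(table(f[1]), table(f[2]))]
        if op == 'X':
            t = table(f[1])
            return [t[succ(i)] for i in range(N)]
        if op == 'U':                     # least fixpoint; N rounds suffice
            a, b = table(f[1]), table(f[2])
            res = b[:]
            for _ in range(N):
                res = [b[i] or (a[i] and res[succ(i)]) for i in range(N)]
            return res
        raise ValueError(f'unknown operator {op!r}')

    return table(formula)[0]              # truth at the initial position

def F(f): return ('U', ('true',), f)      # eventually
def G(f): return ('not', F(('not', f)))   # globally
```

On a shrunken version of the example above, the path {a}^3 {a,b}^ω, both F b and a U b evaluate to true at the initial position.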
How do we use LTL to verify real-world systems? The answer requires that we first define the semantics of LTL formulas with respect to Kripke structures. A Kripke structure K over a finite alphabet Σ is a tuple (S, E, λ: S -> 2^Σ), where S is a set of vertices (or states) and E is a set of directed edges on S. It is normally assumed that each vertex in S has at least one outgoing edge. The function λ defines which propositions in Σ are true in a given state. The reader should convince themselves that Kripke structures can model many practical systems. You can start with traffic lights, in which case Σ = {green, red, yellow}, S consists of three states (one for each color of the light), and E models how traffic lights work (e.g. green -> yellow -> red). A more interesting example is Dijkstra's algorithm for the dining philosophers problem.
Now, given a state s in a structure K, we may define the semantics of LTL formulas: K,s |= φ if for every directed infinite path w in K starting from s, we have w |= φ. For example, if K models traffic lights, then the formulas G( green -> F red ) and G( green -> X yellow ) hold in every state of K. [The first formula says that it is globally true that if the light is presently green, then at some point in the future it will turn red.]
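Since the traffic-light structure is deterministic (every state has exactly one successor), there is a single infinite path from each state, and checking K,s |= G( green -> X yellow ) amounts to inspecting each state once along that path. A minimal sketch, with the transition relation hard-coded; the function name is mine:

```python
# Deterministic Kripke structure: each state has exactly one successor,
# so the unique path from any state is a lasso and one pass suffices.
def holds_globally_green_then_yellow(succ, start):
    s, seen = start, set()
    while s not in seen:              # walk the unique path until it loops
        seen.add(s)
        if s == 'green' and succ[s] != 'yellow':
            return False              # a green state not followed by yellow
        s = succ[s]
    return True

# the traffic-light cycle green -> yellow -> red -> green
lights = {'green': 'yellow', 'yellow': 'red', 'red': 'green'}
```

Starting from any of the three states, the check succeeds on this structure.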
What we are interested in is an algorithm that solves the following model-checking problem: given a Kripke structure K over Σ, a state s in K, and an LTL formula φ over Σ, decide whether K,s |= φ. The problem is PSPACE-complete. The best known algorithm, which runs in time 2^O(|φ|) · |K|, was given by Vardi and Wolper using the theory of automata on infinite strings. Although the complexity of the algorithm is exponential in the size of the formula, it runs quite fast in practice because practical specifications are usually short. We shall talk about this algorithm in the near future.
Monday, June 12, 2006
Tuesday, May 02, 2006
Game theoretic characterization of treewidth
There is no doubt that the notion of treewidth, introduced by Robertson and Seymour, is one of the most important concepts introduced in mathematics and computer science in the past thirty years. Roughly speaking, the treewidth of a graph G measures how much G looks like a tree. For example, the treewidths of a tree, of a cycle, and of a clique with n vertices are, respectively, 1, 2, and n-1.
Some readers might not be aware of an intuitive way of understanding treewidth provided by the cops-and-robber game, as pointed out by Seymour and Thomas in their paper "Graph searching and a min-max theorem for treewidth", Journal of Combinatorial Theory B, Vol. 58 (1993). The following presentation follows the second paper cited at the bottom. The k-cops-and-robber game, where k ≥ 1, takes a graph G = (V,E) as a parameter. The game has two players: the cops and the robber. It goes as follows:
- In round 0, the cops choose a subset X_0 of V of size at most k. The robber chooses a vertex v_0 of V - X_0.
- In round i+1, the cops choose a subset X_{i+1} of V of size at most k. If possible, the robber chooses a vertex v_{i+1} of V - X_{i+1} such that there is a (possibly empty) path from v_i to v_{i+1} that does not pass through X_i ∩ X_{i+1}. Otherwise, the cops win the game right there.
If this game has no end (i.e. the robber can evade capture), then the robber wins. The cops are said to have a winning strategy on the k-cops-and-robber game on G if they can play in such a way that they will eventually win the game. Otherwise, we say that the robber has a winning strategy on this game.
It is helpful to think of the set X_{i+1} - X_i as the set of vertices above which the cops are hovering in helicopters in round i+1. [This tells us that the cops can move between vertices unrestrictedly.] The set X_{i+1} ∩ X_i can be thought of as the vertices where the cops have landed. So, it takes two moves for a cop to be on the ground at a particular vertex v: she must first fly to v, and then land at v. On the other hand, a cop may take off toward another vertex (while remaining in a helicopter) in just one move. The robber can see the cops in their helicopters, and may elude capture by running at great speed to another vertex, as long as he does not run through a cop on the ground during his escape.
Theorem (Seymour and Thomas): A graph G has treewidth ≤ k-1 iff the cops have a winning strategy in the k-cops-and-robber game on G.
Equivalently, a graph G has treewidth k-1 iff k is the smallest number k' such that the cops have a winning strategy in the k'-cops-and-robber game on G. Let's try this game on a tree. The cops can win the game using 2 cops only. The idea is to gradually shrink the set of vertices the robber can go to in the next round by alternately moving the two cops in a "depth-first traversal" manner. On the other hand, it is clear that the robber player has a winning strategy on a tree if he is chased by only a poor lone cop.
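For tiny graphs, these claims can be checked by brute force: treat the game as a reachability game over positions (current cop set, robber vertex) and compute the cops' winning region as a least fixpoint. This is a naive sketch of mine, exponential in |V|, not an algorithm from the literature:

```python
from itertools import combinations

# Brute-force solver for the k-cops-and-robber helicopter game.
def cops_win(vertices, edges, k):
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    cop_sets = [frozenset(c) for r in range(k + 1)
                for c in combinations(sorted(vertices), r)]

    def reachable(src, blocked):
        # vertices reachable from src along paths avoiding `blocked`
        seen, stack = {src}, [src]
        while stack:
            for w in adj[stack.pop()] - blocked:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    def replies(X, r, X2):
        # legal robber moves when the cops move from X to X2
        return [v for v in reachable(r, X & X2) if v not in X2]

    win = set()        # positions (X, r) from which the cops force capture
    changed = True
    while changed:     # least-fixpoint computation of the winning region
        changed = False
        for X in cop_sets:
            for r in set(vertices) - X:
                if (X, r) in win:
                    continue
                if any(all((X2, v) in win for v in replies(X, r, X2))
                       for X2 in cop_sets):
                    win.add((X, r))
                    changed = True
    # round 0: the cops pick X0, the robber picks any vertex outside X0
    return any(all((X0, v) in win for v in set(vertices) - X0)
               for X0 in cop_sets)
```

On a path with four vertices (a tree, treewidth 1) two cops win but one does not, and on the 4-cycle (treewidth 2) three cops are needed, matching the theorem.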
In recent years, some researchers in databases and verification have proposed variants of the cops-and-robber games so as to explain some phenomena in those fields. Let me conclude by mentioning two such papers:
- Gottlob, Leone, and Scarcello. Robbers, marshals, and guards: game theoretic and logical characterization of hypertree width, JCSS 66, 2003.
- Berwanger, Dawar, Hunter, and Kreutzer. DAG-width and Parity Games, STACS 2006.
Tuesday, April 25, 2006
Automatic Structures: Part 2 -- The Büchi-Bruyère Theorem
We now concentrate on the proof of the Büchi-Bruyère theorem, probably the most important theorem concerning automatic structures, which says that the set of regular relations coincides with the set of relations definable in M_univ (defined below). By the way, this theorem is folklore in the sense that its proof is unpublished but well known to everybody in the community.
We first need some definitions. Suppose that s := (s_1,...,s_n) ∈ (Σ*)^n. Define a string [s] over the alphabet (Σ ∪ {#})^n whose length is max{|s_1|,...,|s_n|}, and whose ith symbol is (a_1,...,a_n), where a_j is the ith symbol of s_j if i ≤ |s_j|, and a_j is # otherwise. One might visualize [s] as follows: stack s_1,...,s_n in a left-aligned column and pad each string s_i with # so that all the resulting rows have equal length; then read this matrix as a string [s] whose jth symbol is the jth column. A subset S of (Σ*)^n is said to be regular if the set { [s] : s ∈ S } is regular.
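The convolution [s] is easy to compute directly from this definition; here is a small sketch (the function name is my own):

```python
# The convolution [s]: stack the strings as rows, pad the short ones
# with '#', and read off the columns.
def convolve(*strings, pad='#'):
    n = max(len(s) for s in strings)
    return [tuple(s[i] if i < len(s) else pad for s in strings)
            for i in range(n)]
```

For example, convolve('001', '01', '00001') yields the five columns (0,0,0)(0,1,0)(1,#,0)(#,#,0)(#,#,1), matching the padding example from Part 1.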
Fix an alphabet Σ of size at least two. Consider the infinite structure M_univ := (Σ*, ≤, (L_a)_{a ∈ Σ}, el) where
- the universe is the set of all Σ-strings,
- x ≤ y iff x is a prefix of y,
- L_a(x) is true iff the rightmost symbol of x is a, and
- el(x,y) is true iff |x| = |y| (|x| denotes the length of x).
What properties are (first-order) definable in M_univ? Here are some simple ones:
- |x| ≤ |y| (i.e. the string x is no longer than the string y),
- |x| = |y| + k for some fixed constant k,
- im-pref(x,y) (i.e. x = y.a for some letter a ∈ Σ), and
- the kth symbol of x is a (where k is fixed and a ∈ Σ).
I apologize for overloading the symbol '≤', owing to the lack of HTML symbols. For example, the first property above can be expressed as
∃ s( s ≤ y ∧ el(x,s) ).
The second property is also easily expressible, but you might need more quantifiers and the relation im-pref. The third property can be expressed by saying that y < x and there is no z with y < z < x (here '<' is the irreflexive version of the prefix relation ≤).
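As a sanity check on the first property, note that any witness s for ∃s( s ≤ y ∧ el(x,s) ) must be a prefix of y, so the existential quantifier only needs to range over the finitely many prefixes of y. A small sketch of this bounded evaluation (all names are mine):

```python
def prefix(s, y):           # the prefix relation ≤ of M_univ
    return y.startswith(s)

def el(x, y):               # the equal-length relation
    return len(x) == len(y)

def len_leq(x, y):          # evaluates ∃s( s ≤ y ∧ el(x, s) )
    return any(prefix(s, y) and el(x, s)
               for s in (y[:i] for i in range(len(y) + 1)))
```

Indeed, a prefix of y of length |x| exists exactly when |x| ≤ |y|.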
Now, a subset S of (Σ*)^n is said to be definable in M_univ if there exists a first-order formula φ(x_1,...,x_n) in the vocabulary of M_univ such that
S = { s : M_univ |= φ(s) }.
Theorem (Büchi-Bruyère): A subset S of (Σ*)^n is definable in M_univ iff S is regular.
Proof sketch:
(<=) Suppose that S is recognized by the automaton A = (Q, q_0, F, δ: Q × Γ -> Q) over the alphabet Γ = (Σ ∪ {#})^n, where Q = {q_0,...,q_l} is a finite set of states, q_0 ∈ Q is the initial state, F ⊆ Q is the set of final states, and δ is the transition function. So, for all s ∈ (Σ*)^n, we have s ∈ S iff the string a_1...a_k = [s] is accepted by A, i.e., there exists a run p_0...p_k such that p_0 = q_0, p_k ∈ F, and δ(p_i, a_{i+1}) = p_{i+1}. The defining formula for S is
φ(x_1,...,x_n) = ∃v_0,...,v_l( ψ_len ∧ ψ_char ∧ ψ_start ∧ ψ_end ∧ ψ_trans )
where:
- ψ_len asserts that |v_i| = max{|x_j| : 1 ≤ j ≤ n} + 1 for each i.
- ψ_char asserts that [(v_0,...,v_l)] = w_0...w_k is a characteristic sequence, i.e., each v_i is a string of 0s and 1s and, for each position h, exactly one of the strings v_i has value 1 at h. [Intuitively, we want each w_j to represent the state p_j in our accepting run.]
- ψ_start asserts that the first position of v_0 has value 1.
- ψ_end asserts that the last position of some v_j with q_j ∈ F has value 1.
- ψ_trans asserts that the transition from w_j to w_{j+1} respects δ. [This will be a huge table, quite tedious to write down.]
The reader should convince herself that all of the above conditions are definable in M_univ.
(=>) The proof is by induction on the formula φ(x_1,...,x_n) defining S. The base case (i.e. atomic formulas) is very easy and left to the reader. The cases where φ is of the form ψ_1 ∨ ψ_2 or ¬ψ follow immediately from the closure of regular languages under union and complementation. What remains is the case where φ is of the form ∃x_{n+1} ψ(x_1,...,x_{n+1}). Suppose that the (n+1)-ary regular relation R defined by ψ is recognized by the automaton A = (Q, q_0, F, δ). To construct an automaton A' for S, one first applies the pumping lemma to show that there exists a number K such that if (s_1,...,s_{n+1}) ∈ R, where |s_{n+1}| > |s_j| for 1 ≤ j ≤ n, then there exists another string s'_{n+1} such that (s_1,...,s'_{n+1}) ∈ R and |s'_{n+1}| ≤ max{|s_j| : 1 ≤ j ≤ n} + K. We construct A' as follows. First, make K+1 isomorphic copies of A (with different labels), where the ith copy is denoted by A_i = (Q_i = {q_0^i,...,q_l^i}, q_0^i, F_i, δ_i). The states of A' are the union of the Q_i's. The start states consist of all the q_0^i's. The final states are the union of the F_i's. The transition function δ' works as follows: whenever δ_i(q_j^i, (c_1,...,c_{n+1})) = q_h^i, where each c_m ∈ Σ ∪ {#}, we put δ'(q_j^i, (c_1,...,c_n)) = q_h^i. Note that A' is nondeterministic. It is easy to check that A' recognizes S. (QED)
Thursday, April 20, 2006
Automatic Structures: Part 1
Several weeks ago, my supervisor Leonid Libkin gave a presentation about automatic structures in our reading group meeting. I would like to talk about these nice animals in some detail. This is the first part in a series of posts giving a flavor of automatic structures and sketching some important proofs. If there is interest, I will also talk about how one might apply automatic structures to program verification (this line of research is still under intense development).
Finite model theory primarily concerns finite structures, and has been successfully applied to database theory and to logical approaches to verification (i.e. model checking). On the other hand, finite structures are often too restrictive. For example, when modeling C programs, the use of infinite structures is often inevitable. For this reason, a lot of effort has been put into extending the framework of finite model theory to infinite structures. Such approaches include metafinite model theory, embedded finite model theory, and automatic structures. In the sequel, we are primarily interested in automatic structures.
Roughly speaking, automatic structures are structures whose universe and relations can each be represented by a finite automaton. A simple example of an automatic structure is Presburger arithmetic (N,+), where N is the set of natural numbers. Here is how one might represent (N,+) using finite automata. Represent each number in binary with the least significant bit first (e.g. 4 = 001, 2 = 01); call the resulting set of representations bin(N). So, bin(N) is a language over Σ = {0,1}. It is simple to devise a finite automaton that recognizes precisely bin(N). Now, how do we represent the ternary relation
+ = { (bin(i),bin(j),bin(k)) : i + j = k, i,j,k ∈ N }
with an automaton? First, it is possible to represent the relation + as a language L over the alphabet {0,1,#}^3 defined in the following way. For i,j,k ∈ bin(N), pad i,j,k with the padding symbol # so that the resulting strings i',j',k' are of the same length. For example, if i = 001, j = 01, and k = 00001, then i' = 001##, j' = 01###, and k' = 00001. Now, we may treat the tuple (i',j',k') as a string over the alphabet {0,1,#}^3; e.g., for i',j',k' in the above example, the resulting string is (0,0,0)(0,1,0)(1,#,0)(#,#,0)(#,#,1). Then, if i + j = k, put the string (i',j',k') in L. Having defined L, it is not hard to exhibit an automaton that recognizes precisely L. [This automaton resembles the commonplace algorithm for addition.]
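The automaton for L can be sketched directly: its state is the current carry, and it checks each column of the padded triple against the school addition algorithm, reading the padding symbol # as 0. A sketch under my own encoding of columns as tuples:

```python
# Carry automaton for + over least-significant-bit-first convolutions:
# a column (x, y, z) is consistent when x + y + carry = z + 2*carry'.
def accepts_addition(columns):
    bit = lambda c: 0 if c == '#' else int(c)
    carry = 0
    for x, y, z in columns:
        total = bit(x) + bit(y) + carry
        if total % 2 != bit(z):
            return False         # this column contradicts the sum
        carry = total // 2
    return carry == 0            # no carry may be left over at the end

# 4 + 2 = 6 in reversed binary: 001 + 01 = 011, read columnwise
example = [('0', '0', '0'), ('0', '1', '1'), ('1', '#', '1')]
```

Accepting exactly the consistent column sequences is what makes + a regular relation.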
What is so cool about automatic structures? First, it is somewhat immediate that automatic structures have decidable FO theories. Second, there exists a universal automatic structure M_univ, in which all other automatic structures can be interpreted via FO translations (or reductions). Furthermore, the set of relations definable in M_univ captures precisely the regular relations, which gives an easy way to prove properties of regular languages. By the way, automatic structures also give a nice way of proving that model checking certain types of infinite transition systems is decidable.
Anyway, this post was merely intended to whet the reader's appetite. I hope it was enticing enough. In the next post, I will give a precise definition of automatic structures and prove the Büchi-Bruyère theorem that the set of relations definable in M_univ coincides with the set of regular relations.
Wednesday, March 22, 2006
Reading Group
I am at present organizing a reading group on "Model Checking on Nice Infinite Structures". Its purpose is to help the participants get a big picture of the area, know which problems are presently open, and understand proof ideas without having to read every paper. In sum, if you want to get up to speed with the state of the art, attend the meetings!
Another hot topic in the area is "Automatic Structures". Roughly speaking, automatic structures are logical structures that can be presented by finite automata. For example, one can obtain a characterization of regular languages using this technique. For more, see this paper.
Two upcoming meetings are as follows:
- My supervisor Leonid Libkin will give an introduction to Automatic Structures on Thursday, March 23, from 2pm-3pm, UofT St. George Campus, PT378.
- Pablo Barcelo will give the first part of his presentation on Walukiewicz's proof of Muchnik's theorem on Friday, March 24, from 3pm-4pm, UofT St. George Campus, PT378.
Feel free to stop by if you are in Toronto.
Friday, March 17, 2006
403 Forbidden
Some of you have informed me that you could not access this blog recently. In fact, I became aware of the problem when trying to publish some of my recent posts.
I recently learned that this is caused by problems with some of Blogger's servers (for more, read this). It affects some, but not all, Blogger blogs (unfortunately, including Logicomp), but the problem usually goes away within 24 hours. So, if your computer fails to access this blog next time, please be patient and come back the following morning :-)
Thursday, March 09, 2006
Model-checking on infinite transition systems
"Program testing can be used to show the presence of bugs, but never to show their absence!", said Edsger Dijkstra. Model checking is one well-known approach to the automatic verification of programs. The framework can be described as follows: given a representation of a program P as a finite transition system (a.k.a. a Kripke structure) M(P) and a formal specification f in a specification language L, check whether f is true in M(P), in symbols M(P) |= f. The specification language can be any of your favorite logics, but the most frequently used ones include LTL, CTL, CTL*, and the μ-calculus. Recently, a lot of effort has been made to extend the framework to suitable classes of infinite structures. In this post, I will mostly talk about model checking on infinite transition systems. The logic that we frequently use in this case is monadic second-order logic (MSO), as
- it subsumes most modal logics that we use in verification, including all the aforementioned logics, and
- MSO is a well-behaved and well-studied logic.
Here is a quick memory refresher: MSO is first-order logic (FO) extended with quantification over sets and with atomic formulas of the form "x ∈ X", meaning that the element x of the domain D of the given interpretation belongs to the set X, which is interpreted as a subset of D.
I will talk about one simple kind of infinite transition system that goes by the name of pushdown graphs. A pushdown graph is nothing but the transition graph of a pushdown automaton. Here, a pushdown automaton is a tuple (Q, A, Γ, q0, Z0, Δ), where Q is a finite set of states, A the input alphabet, Γ the stack alphabet, q0 ∈ Q the initial state, Z0 ∈ Γ the initial stack symbol, and the transition relation Δ a finite subset of Q × A × Γ × Γ* × Q. A transition (q,a,v,α,q') is to be interpreted as "whenever I am in state q, see the letter a on the input tape, and see the symbol v on top of the stack, I replace v by the word α and move to the new state q'". Further, for technical reasons, it is usually wise to assume that no transition rule pops the stack symbol Z0. Now the pushdown graph of this automaton is the infinite graph G = (V, (Ea)a ∈ A), where:
- V is the set of configurations of the automaton (i.e., words from QΓ*, the current state followed by the stack contents) that are reachable from q0Z0 by a finite number of applications of Δ,
- Ea is the set of all pairs (qvw, q'αw) from V2 for which there is a transition (q,a,v,α,q') in Δ.
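To make the definition concrete, here is a small Python sketch enumerating the configurations of a hypothetical one-state pushdown automaton (the automaton and all names are mine, purely for illustration); since the pushdown graph itself is infinite, we bound the stack height:

```python
from collections import deque

# Toy pushdown automaton: one state 'p', stack alphabet {'X', 'Z'}.
# Reading 'a' pushes an X; reading 'b' pops an X. Z0 = 'Z' is never popped.
delta = [
    ('p', 'a', 'Z', 'XZ', 'p'),   # push an X onto the bottom symbol
    ('p', 'a', 'X', 'XX', 'p'),   # push another X
    ('p', 'b', 'X', '',   'p'),   # pop one X
]

def reachable_configs(max_stack=3):
    """Configurations qw in Q·Γ* reachable from pZ, with stack height bounded."""
    start = ('p', 'Z')
    seen, queue = {start}, deque([start])
    while queue:
        q, stack = queue.popleft()
        for (q1, a, v, alpha, q2) in delta:
            if q1 == q and stack and stack[0] == v:
                new = (q2, alpha + stack[1:])   # replace top symbol v by word alpha
                if len(new[1]) <= max_stack and new not in seen:
                    seen.add(new)
                    queue.append(new)
    return sorted(q + w for q, w in seen)

print(reachable_configs(3))  # ['pXXZ', 'pXZ', 'pZ']
```

The vertex set V is exactly what this enumeration converges to as the bound grows; the edge relations Ea and Eb connect configurations related by a single transition.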
A result of Muller and Schupp is that the MSO model-checking problem on pushdown graphs is decidable. The proof is by a direct MSO-interpretation into S2S (the MSO theory of two successors), and uses Rabin's deep result that S2S is decidable. I recommend
W. Thomas. Constructing Infinite Graphs with a Decidable MSO-theory
for a nice presentation of this proof.
Now comes the most important question: what sort of queries can one ask in MSO about pushdown graphs? The most useful one is reachability, i.e., given two configurations C and C', determine whether C' is reachable from C. This is how you write it in MSO:
REACH(x,x') ≡ ∀X( (x ∈ X ∧ ∀y,z( y ∈ X ∧ E(y,z) → z ∈ X )) → x' ∈ X )
As usual, E(x,y) is an abbreviation for ⋁a ∈ A Ea(x,y). It turns out that there are lots of fancy infinite graphs on which MSO model checking is decidable. But this is a subject for future posts.
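The formula says that x' belongs to every set X that contains x and is closed under the edge relation, i.e., to the least such set. On a finite graph that least set can be computed by fixpoint iteration, as in this Python sketch (the example graph is made up):

```python
# Compute the smallest set containing x and closed under the edge
# relation E, mirroring the second-order quantification in REACH(x, x').
def reach(edges, x):
    closed = {x}
    changed = True
    while changed:
        changed = False
        for (y, z) in edges:
            if y in closed and z not in closed:
                closed.add(z)
                changed = True
    return closed

edges = {(1, 2), (2, 3), (4, 5)}
print(reach(edges, 1))        # {1, 2, 3}
print(3 in reach(edges, 1))   # True:  REACH(1, 3) holds
print(5 in reach(edges, 1))   # False: REACH(1, 5) fails
```

Of course, on an infinite pushdown graph one cannot iterate like this naively; that decidability is exactly what the interpretation into S2S buys us.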
Tuesday, March 07, 2006
Slides from Logic and Databases Workshop at Cambridge
Last week I attended a logic and databases workshop at Cambridge, UK, part of a special programme on logic and algorithms at the Newton Institute. As there were some excellent talks, I want to point out that the slides are available online. I personally recommend the following: Neven's, Lynch's, Segoufin's, Schweikardt's, Libkin's, Szeider's, and Koch's.
Resuming to Blog
Apologies for having been silent for a long time. Thanks to those who over the last few months left comments on my blog or emailed me directly, and apologies if I did not reply. Until very recently, I was not sure which research track in finite model theory I wanted to pursue for my graduate studies, even though I had a general overview of the area. It is a difficult, but crucial, task for a new researcher in finite model theory (I believe even for first-year graduate students like myself) to pin down exactly where (s)he should start, as the area is both deep and diverse. Realizing that this is extremely important, I desperately tried to read more and talk more with my advisor and others (yes, at the expense of blogging and replying to emails). I am glad to say that I have found something that I really like and am confident that I can contribute to, which also implies that I may resume blogging.
I realize that it will take some time for my readers to resume reading this blog (I certainly hope they will), so I will start slow. In case you are in Toronto, I am planning to start a reading group focusing on the "monadic theory of tree-like structures" and its applications to computer verification (e.g., see this book). If you're curious but not sure what it is, don't worry: I will give you a flavor of the area in the next few posts. The goal is to get a big picture of the area, know which problems are open, and understand the important proof ideas without having to read every paper. [Each participant will take turns presenting one result each time we meet.] Everyone is warmly welcome to participate. [I will make a more detailed announcement in a week.]
Saturday, February 11, 2006
New Book on Parameterized Complexity
Jörg Flum and Martin Grohe have just published their new book, Parameterized Complexity Theory. Flum and Grohe are two giants in finite model theory who have contributed much to the development of parameterized complexity theory, especially logical characterizations of parameterized complexity classes. At this moment, one can only purchase the book from Springer, as it appeared just this week.
I can't compare this book with Downey and Fellows's book, as I haven't read Flum and Grohe's. However, I heard from several people who proofread the book before publication that Flum and Grohe's book is much better written and more readable than Downey and Fellows's.
Sunday, January 22, 2006
How much damage can be caused by a peer reviewer having a bad day?
Check out this cool article! You will find some hilarious reviews for award-winning papers authored by Dijkstra, Codd, Shannon, etc.
Saturday, January 21, 2006
Almost sure theory
We all know that Hilbert's Entscheidungsproblem, i.e., deciding whether a first-order sentence is valid, is undecidable; Church, Turing, and others proved it. Recall that first-order logic admits the 0-1 law, i.e., every FO sentence is almost surely true (AST) or almost surely false (ASF). So, what about the problem of deciding whether an FO sentence is AST? It turns out that this is decidable. I shall outline some important ideas for proving this statement.
Let us restrict ourselves to graphs, as before. In proving that FO admits the 0-1 law, we mentioned the extension axioms EA = {Pk}k ∈ N, where N is the set of natural numbers. Extension axioms are nothing more than graph properties; in our case, the Pk are FO-expressible. Although for the purpose of this post we need not understand what the extension axioms say, I will define them here for future reference:
Pk ≡ ∀x1, ..., x2k( ⋀i ≠ j xi ≠ xj →
∃z( ⋀1 ≤ i ≤ 2k z ≠ xi
∧ ⋀i ≤ k E(z,xi)
∧ ⋀i > k ¬E(z,xi)))
Recall that a theory T is a set of sentences. A model of T is a structure that satisfies all sentences of T. The theory T is consistent if it has a model, and complete if for each sentence f (of the same vocabulary) we have either T |= f, i.e., every model of T is a model of f, or T |= ¬f. A consistent and complete theory T is decidable if it is possible to decide whether T |= f for any given sentence f.
Theorem: EA is consistent (and has a unique countable model), complete, and decidable.
We will not prove the first two statements, as they need more model theory (see Libkin's "Elements of Finite Model Theory" for details). To prove that EA is decidable, we need only invoke an old result from recursion theory: every complete, recursively axiomatizable theory is decidable. The above definition of Pk gives us a recursive enumeration of the axioms.
The unique countable model for EA, denoted RG, is called the random graph, as its probabilistic construction is often emphasized in the literature. Using the above theorem, we derive an easy corollary:
Corollary: For any FO sentence f, RG |= f iff μ(f) = 1.
With these results, it is a breeze to conclude that deciding whether an FO sentence is AST is decidable. Determining the exact complexity of the problem requires a hairier analysis, which we will not do here. It turns out that the problem is PSPACE-complete; the proof can be found in
E. Grandjean. Complexity of the first-order theory of almost all finite structures. Information and Control, Volume 57, 1983.
Recent work in this area mostly deals with determining fragments of second-order logic that admit the 0-1 law, and whether the almost sure theory with respect to each such fragment is decidable. [It is easy to see that SO in its full power does not admit the 0-1 law.] This, however, lies outside the scope of our current discussion.
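Since each extension axiom holds almost surely, a moderately large random graph G(n, 1/2) should already satisfy P1: for all distinct x1, x2 there is a z, distinct from both, adjacent to x1 but not to x2. Here is a toy Python check of that (the harness and parameters are my own, purely illustrative):

```python
import itertools
import random

def random_graph(n, rng):
    """Sample G(n, 1/2): each potential edge is present with probability 1/2."""
    adj = {i: set() for i in range(n)}
    for i, j in itertools.combinations(range(n), 2):
        if rng.random() < 0.5:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def satisfies_P1(adj):
    """Check P_1: every ordered pair (x1, x2) has a witness z with
    z != x1, x2, E(z, x1), and not E(z, x2)."""
    nodes = list(adj)
    for x1, x2 in itertools.permutations(nodes, 2):
        if not any(z != x1 and z != x2 and x1 in adj[z] and x2 not in adj[z]
                   for z in nodes):
            return False
    return True

# A complete graph violates P_1 (every candidate z is adjacent to x2),
# while a random graph on 60 vertices satisfies it except with tiny probability.
complete = {i: {j for j in range(3) if j != i} for i in range(3)}
print(satisfies_P1(complete))                            # False
print(satisfies_P1(random_graph(60, random.Random(0))))
```

This is, of course, only an empirical illustration of "almost surely"; the decidability result above rests on the completeness of EA, not on sampling.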
Saturday, January 07, 2006
The 0-1 Law
To start off the year 2006, I want to talk about the 0-1 law, an extremely important and elegant topic in finite model theory. A logic L (over a fixed vocabulary σ) is said to admit the 0-1 law if, for any sentence f of L, the fraction of σ-structures with universe {0,...,n-1} that satisfy f converges to either 0 or 1 as n approaches infinity. We denote this fraction by μn(f) and the limit, if it exists, by μ(f). [Of course, this means that μ and μn range between 0 and 1.] In other words, every sentence f is almost surely false or almost surely true. For simplicity, let us restrict our attention to undirected loopless graphs, i.e., σ contains only one binary relation E, which is both symmetric and irreflexive. Below is perhaps the most fundamental theorem about the 0-1 law.
Theorem (Fagin 1976, Glebskii et al. 1969): First-order logic admits the 0-1 law
Most finite model theorists equate admission of the 0-1 law with the inability to do "counting". Consider the property EVEN = { G : G has an even number of vertices }. We see that μn(EVEN) = 1 iff n is even, and hence μ(EVEN) does not exist. I have yet to see a property whose asymptotic probability μ converges to something other than 0 or 1, or diverges, that has nothing to do with counting. Consider the following properties:
- μ(BIPARTITE) = 0
- μ(HAMILTONIAN) = 1
- μ(CONNECTED) = 1
- μ(TREE) = 0
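To see such a limit emerging, here is a small Monte Carlo sketch (my own toy code, with made-up parameters) estimating μn(CONNECTED) by sampling graphs on {0,...,n-1} uniformly at random, i.e., with each edge present with probability 1/2:

```python
import random

def is_connected(n, edges):
    """Depth-first search from vertex 0; connected iff everything is reached."""
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in range(n):
            if v not in seen and ((u, v) in edges or (v, u) in edges):
                seen.add(v)
                stack.append(v)
    return len(seen) == n

def estimate_mu(n, trials, rng):
    """Fraction of `trials` uniformly sampled n-vertex graphs that are connected."""
    hits = 0
    for _ in range(trials):
        edges = {(i, j) for i in range(n) for j in range(i + 1, n)
                 if rng.random() < 0.5}
        hits += is_connected(n, edges)
    return hits / trials

rng = random.Random(0)
for n in (3, 6, 12):
    print(n, estimate_mu(n, 200, rng))  # the estimates climb toward 1 as n grows
```

Already at n = 12 almost every sampled graph is connected, matching μ(CONNECTED) = 1 in the list above.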
On the other hand, if one admits constant or function symbols, it is possible to have first-order properties whose asymptotic probability converges to something other than 0 or 1. So, it is crucial to assume that our vocabulary σ is purely relational.
What is the use of the 0-1 law in finite model theory, apart from the amusement of some theoreticians? [I have to confess that many proofs about the 0-1 law, including Fagin's proof of the above theorem, are of great beauty.] Well, a primary concern of finite model theory is to determine how expressive a logic is over a given class of finite structures. By proving that a logic admits the 0-1 law, we show that any property whose asymptotic probability does not converge to either 0 or 1 is not expressible in the logic. For example, an immediate corollary of the above theorem is that first-order logic cannot express EVEN.
Let me briefly outline the proof idea of the above theorem, which recurs in 0-1 law proofs. We need only show that, for each natural number k, there exists a property Pk of graphs such that
- μ(Pk) = 1, and
- Any two models of Pk cannot be distinguished by a first-order sentence of quantifier rank k. [Recall that the quantifier rank of a first-order sentence is the maximum depth of quantifier nesting in the sentence.]
Why is this sufficient? Let f be a first-order sentence of quantifier rank k. There are two cases. First, there exists a model of Pk that is also a model of f. In this case, it is not hard to show that any model of Pk is also a model of f, which implies that μ(f) ≥ μ(Pk) = 1. The second case is that all models of Pk are not models of f, i.e., they are models of "not f". Therefore, μ("not f") ≥ μ(Pk) = 1. That is, we have μ(f) = 0.
The property Pk is commonly referred to as an extension axiom. Note that Pk does not have to be L-definable (where L is the logic that is to admit the 0-1 law).
Two excellent expositions of the basics of 0-1 laws are Chapter 12 of Leonid Libkin's Elements of Finite Model Theory and James Lynch's DIMACS 1995-1996 tutorial notes on "Random Finite Models".
Starting the year 2006
Sorry for having been silent for quite a while. This winter "break" turned out to be quite a busy one for me, with quite a bit of homework and research to do. Anyway, belated happy New Year! I hope that this will be a great and productive year for all of us.
I have just resumed reading the latest on the blogosphere, with still quite a bit of catching up to do. For now, I will mention a couple of noteworthy things that happened:
- Kenny Easwaran wrote an introductory article on "forcing", a well-known technique in set theory for proving independence results. I will write more on it when I finish reading the article.
- Igor Naverniouk, a PhD student at the University of Toronto, has just started a new blog on True Artificial Intelligence. The first couple of entries look quite interesting.