Saturday, September 24, 2005

P vs. NP: let's think out loud

We all know this question. It is listed as one of the seven Millennium Prize Problems by the Clay Mathematics Institute. We also know that most people believe that P is different from NP. For example, in the P vs. NP poll by Gasarch (published in ACM SIGACT News), out of the 77 people who offered an opinion, 61 believed that P is different from NP, 9 believed that P = NP, and the rest believed that the question is "formally independent" in some form. Well, what do you believe? If you are as yet undecided, I will try to unravel the reasons why most people believe that P is different from NP, and then explain why it is equally reasonable to believe that P = NP. I will not talk about whether P vs. NP is formally independent of ZFC.

The two most common reasons for believing that P is different from NP are the following:

  1. There are (literally) tons of problems which are NP-complete (i.e., if any one of them is in P, then all of them are in P), and nobody has been able to devise a polynomial-time algorithm for any of these problems. However, from our experience, we know that we are "good" at devising polynomial-time algorithms, but not very good at proving lower bounds. [Just look at all the great algorithms we have devised. Also witness that we can't even prove that satisfiability for propositional logic cannot be solved in O(n^2) time.]
  2. We can intuitively think of P as the set of problems for which we can produce proofs in polynomial time, and of NP as the set of problems for which, given a proof, we can check in polynomial time that it is indeed a proof (see the small verifier sketch after this list). Well, from our experience with math, we know that producing proofs is usually much harder than verifying them.
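To make the "easy to verify" half concrete, here is a minimal sketch of a polynomial-time certificate checker for satisfiability. The encoding (clauses as lists of signed variable indices) is just one convenient choice, nothing canonical; the point is that checking a claimed satisfying assignment takes a single linear pass, whereas finding one is the direction nobody knows how to do in polynomial time.

```python
def verify_sat(cnf, assignment):
    """Check that `assignment` (a dict mapping variable index -> bool)
    satisfies every clause of `cnf` (a list of clauses, each clause a list
    of signed variable indices: +i means x_i, -i means not x_i)."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# (x1 or not x2) and (x2 or x3)
cnf = [[1, -2], [2, 3]]
print(verify_sat(cnf, {1: True, 2: False, 3: True}))   # True
print(verify_sat(cnf, {1: False, 2: True, 3: False}))  # False
```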


Well, both of these reasons sound very reasonable. However, I also think that there are holes in these arguments (because of which, it is equally reasonable to believe that P = NP). Most people associate the complexity class P with the set of problems that we can solve "efficiently", and classify non-polynomial algorithms as bad (or inefficient) algorithms. [If you have done some computer science courses, you will recognize this immediately.] But this is a misleading assumption. For example, the simplex algorithm for solving linear programming runs in exponential time in the worst case (although its average-case and smoothed-analysis behaviour turns out to be polynomial), yet the algorithm runs very efficiently in practice. There are also polynomial-time algorithms for linear programming, but it seems that none of them has consistently outperformed the simplex algorithm in practice. Furthermore, the exponential behaviour of the simplex algorithm occurs only on pathological examples that never appear in practice. On the other hand, there are polynomial-time algorithms that do not actually run efficiently in practice. A striking example (quoting Cook's paper) comes from a result of Robertson and Seymour which says that every minor-closed family of graphs can be recognized in O(n^3) time, but the hidden constant is so huge that the algorithm is useless in practice.

So, maybe there do exist polynomial-time algorithms for satisfiability for propositional logic; it's just that the smallest exponents are so big (maybe bigger than the number of atoms in the visible universe, 10^90). [Of course, the time hierarchy theorem tells us that there is a problem solvable in time n^k but not in time n^(k-1).] Perhaps the smallest polynomial-time algorithm that solves satisfiability for propositional logic already has size exceeding 10^90. Who knows! Perhaps we are good at devising efficient algorithms, but not so good at devising polynomial-time algorithms in general. Maybe the following question is more relevant: assuming that P = NP, give an upper bound on the size of the algorithm (by this I mean a multi-tape Turing machine) that solves satisfiability in polynomial time; must it have 10, 100, or 10^90 states?
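For reference, the precise statement behind that bracketed remark is the standard deterministic time hierarchy theorem (for multi-tape Turing machines):

$$\mathrm{DTIME}\big(f(n)\big) \subsetneq \mathrm{DTIME}\big(g(n)\big) \quad \text{whenever } f(n)\log f(n) = o\big(g(n)\big) \text{ and } g \text{ is time-constructible}.$$

Taking f(n) = n^(k-1) and g(n) = n^k for any k >= 2 gives exactly the n^(k-1) versus n^k separation mentioned above, so arbitrarily large exponents really are necessary for some problems.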

In a sense, if you are a practically minded person, P vs. NP doesn't really matter at all, and there is a technical reason why. Recall that the linear speedup theorem (see Papadimitriou's Computational Complexity) says that if a problem is solvable in time `f(n)`, then for any `epsilon > 0` there exists an algorithm that solves the same problem and runs in time `epsilon * f(n) + n + 2`. So, if your algorithm runs in time 2^n, just use this theorem to speed up your algorithm for all n that is relevant in practice. [Of course, if you check the proof, the size of the resulting machine grows so quickly as `epsilon` shrinks (and you need a tiny `epsilon` to cover all practically relevant `n`) that there is no way you could ever write the machine down. So, practically minded people will probably settle for heuristics instead.]
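To see concretely why the theorem covers "all n that is relevant in practice": starting from a machine running in time 2^n and choosing epsilon = 2^{-k} in the theorem, the sped-up machine runs in time

$$2^{-k} \cdot 2^{n} + n + 2 \;=\; 2^{n-k} + n + 2 \;\le\; n + 3 \qquad \text{for all } n \le k,$$

so for any fixed cut-off k it is essentially linear-time on every input of length at most k, at the cost (as noted above) of a machine far too large to ever write down.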

Anyway, I think that it is equally possible [correction: 'very' -> 'equally'] that P equals NP, and that the proof may be non-constructive (or that we may never actually find a proof, because the length of the shortest possible proof exceeds the number of atoms in the visible universe).

12 comments:

anthonywlin said...

Kenny, you're missing the point. I by no means suggested that P = NP. The main point of the argument is that to believe P = NP is just as reasonable as to believe that P is different from NP. I'm also suggesting that one has to be impartial if one wants to resolve the P vs. NP problem. One very good example is the resolution of the question of whether NSPACE is closed under complementation. It was open for 25 years until an undergrad (Szelepcsenyi), who was given this problem as a challenge and was totally unaware that it was open, solved it positively (it was also resolved independently by Immerman at around the same time). Almost everybody had conjectured the negation of this statement. I'm sure one can easily come up with other examples.

anthonywlin said...

BTW: the proof that NSPACE is closed under complementation given by Szelepcsenyi turns out to be unexpectedly simple.

anthonywlin said...

On a second reading, it sounded like I was suggesting P = NP. So, I have changed the word 'very' in the last paragraph to 'equally'. I hope this makes my reader happy.

anthonywlin said...

Another similar story concerns Huffman coding. I can't recall the exact details of the story, which Alistair Moffat (a professor in my former computer science department who works on compression) told me. The lucky student this time was Huffman himself. Actually, the same thing happened with Donald Knuth: he was assigned a challenge (which turned out to be an open problem) with the promise that he would get an A+ immediately, so he ended up solving the problem and not attending the rest of the course!

Anyway, I do agree with you on your last point, as long as the odds are not something like 80/20 in favour of P not being equal to NP.

Anonymous said...

While you're on the topic of inefficient algorithms, a tangentially related but slightly off-topic result is that for any language L that we've shown to be in NP intersect coNP by actually giving the nondeterministic algorithms, we know an algorithm for L that runs in polynomial time iff L is in P. Of course, nobody uses this algorithm in practice to try to solve factoring (for example), since it starts off with "take an enumeration of all polynomial-time TMs..." Anyhow, this is one method we can use to get at "inefficient polynomial-time algorithms." It's true that there don't seem to be many of them... or at least, they aren't talked about much, since they aren't algorithms of any particular practical value.

Still, if (for example) we managed to show a formal independence result for some language, such an algorithm might be useful, and would be pretty much the only option for (potentially) finding exact solutions in polynomial time... one could imagine augmenting the algorithm with a counter that keeps track of where in the enumeration it last was, to try to cut down on the overhead. You wouldn't (by hypothesis) be able to show that the algorithm was polynomial-time, but if the counter remained at the same index in the enumeration after many instances, you could take that as a source of "inductive evidence" that the algorithm would be polynomial time.

Slightly more on-topic now, it's worth noting that lower bounds are often so hard to prove because they simply aren't true. (E.g., there is a circuit that computes the negations of three input variables using only two NOT gates... and as that example suggests, there are many others, too.) Of course, I haven't said much, since you can take this statement in two ways:

i) we haven't shown P != NP because we haven't hit upon the right lower bound

ii) we haven't shown P != NP because it isn't true

Oh well!

anthonywlin said...

Brendan, thanks for your comments. Could you please tell me where you encountered that result (that for any L shown to be in NP intersect coNP by explicit nondeterministic algorithms, we know an algorithm for L that runs in polynomial time iff L is in P)? I have heard of this result (I think it was proved by Levin), but I couldn't find the paper in which it was proven.

Anonymous said...

Anthony: I encountered it on a midterm exam, so unfortunately I don't know where it was done first either... I'll let you know if I find out, though.

anthonywlin said...

Brendan: OK, thanks. But I guess what's more important to me is the proof. How did you prove the result on the midterm exam? I guess it can't be that complicated.

Anonymous said...

Anthony: to be precise, the question on the midterm was to show this for the case of factoring; after the fact, it was noted that the result generalizes... I had been a little careless and overstated how far the result generalizes. I'll discuss this at the end.

For the case of factoring, the problem isn't very difficult. Half of the trick is stepping through an enumeration of clocked polynomial-time TMs, and the other half makes use of the fact that we have certificates for both YES and NO instances. Run the kth TM to completion on the input string x to obtain f_k(x). Now, run the verifier algorithms on f_k(x); if the YES or NO instance verifier accepts, decide YES or NO, respectively. Otherwise, move to the k+1st TM in the enumeration.

We know that if factoring is in polynomial time, then there is some K'-th polynomial-time TM in the enumeration computing the factorization of any input n, factor(n) = f_K'(n)... and it's easy to design the YES/NO instance verifiers to take the factorization of the input as their certificates. Assuming that we've been clever about the enumeration, only enumerating TMs whose running-time bounds have degree at most d, it's clear that our running time is O(K' n^d) = O(n^d), since K' is just a constant.
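Here is a minimal sketch of that control flow, specialised to factoring. To be clear, the list of candidate "machines" below is a hypothetical stand-in (a real proof enumerates all clocked polynomial-time TMs, which I'm not going to write down); the sketch just shows the run-then-verify loop end to end.

```python
def is_prime(m):
    """Naive trial-division primality test (fine for a toy example)."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def verify_factorization(n, factors):
    """Certificate check: are all claimed factors prime, and do they multiply to n?"""
    prod = 1
    for p in factors:
        if not is_prime(p):
            return False
        prod *= p
    return prod == n

def bogus_candidate(n):
    """A 'machine' in the enumeration that happens to output garbage."""
    return [n]  # n itself is usually not prime, so this usually fails verification

def trial_division_candidate(n):
    """A correct (but slow in the bit-length of n) factoring procedure."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def universal_search(n, candidates):
    """Run each candidate in turn; accept the first output that verifies."""
    for k, machine in enumerate(candidates):
        answer = machine(n)
        if verify_factorization(n, answer):
            return k, answer  # index of the successful 'machine' and its output
    return None

print(universal_search(91, [bogus_candidate, trial_division_candidate]))
# -> (1, [7, 13])
```

If the K'-th machine really does factor correctly in polynomial time, the loop never goes past index K', which is where the O(K' n^d) = O(n^d) bound comes from; the counter idea from the earlier comment amounts to remembering the last successful index k across inputs.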

As I mentioned earlier, I was being pretty cavalier in stating the generalization of the problem: we do need to know the YES/NO verification algorithms, but perhaps more critically, we used the fact that the decision problem is polynomial-time equivalent to a function problem.

To be correct, the statement of the theorem should say that for any language L we have constructively shown to be in NP intersect coNP we can construct a TM M such that if the associated TFNP problem is in FP, then M decides L in polynomial time. ("the associated TFNP problem" is one that computes a witness of the appropriate type) It's clear also that we can construct a TM N that solves the associated TFNP problem, i.e., one that prints out the witness that satisfies one of the verifiers.

Clearly, if L is not polynomial-time decidable, then the associated TFNP problem is not in FP... we just happen to get the "iff" for factoring because we chose the decision problem specifically to guarantee that L in P implies the TFNP problem is in FP. This also happens to be true of most of the problems we're interested in contained in NP intersect coNP, since we're almost always studying polynomial-time equivalents of our function problems of interest... unfortunately, it's not clear whether or not it's true for all of NP intersect coNP. (sorry about that!!)

Anyway, this bugged me today, so I spent a while trying to track down the result. It does turn out to be Levin's work: there's a Russian version of the article, "Universal enumeration problems", from 1972, and apparently an English version, "Universal sequential search problems", in Problems of Information Transmission, 9(3):265-266, from 1973.

anthonywlin said...

Brendan: You're right that the proof in the case of factoring is easy. Actually, since we have a poly-time algorithm for primality, we only need to know that factoring is a TFNP problem, and Levin's clever enumeration trick will work. In this case, the verifier just takes a decomposition of x into a sequence of numbers and tests whether or not this is *the* (up to ordering) valid decomposition of x into a product of prime factors.

Also, yes, it doesn't seem obvious how this generalizes to all problems in NP intersect coNP. I'll think about this more carefully. Thanks also for the reference.

Well ... I guess at the end of the day what really matters is whether there is a poly-time algorithm for factoring ...

One last question: how do you turn factoring into a decision problem? Is the corresponding decision problem { (n,a,b) : n has a prime factor x with a < x < b }?

Anonymous said...

Anthony: at the end of the day, I'd make it a point to include discrete logarithm and Nash equilibria for bimatrix games, too. Well, perhaps with emphasis on factoring and discrete logarithm... it seems mildly disconcerting that we already know how to do it, provided it can be done at all, since this is the stuff cryptosystems are made of...

Anyway, the decision problem you stated works. You can even get away with {(n,k): n has a prime factor < k} ... a Turing reduction is just binary search, divide, and repeat.
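To spell out that reduction, here is a small sketch: given an oracle for the decision language {(n, k) : n has a prime factor < k}, binary search pins down the smallest prime factor, and dividing it out and repeating recovers the whole factorization. The oracle below is implemented by brute force purely so the sketch runs; in the intended setting it would be the hypothetical polynomial-time decision procedure.

```python
def has_prime_factor_below(n, k):
    """Stand-in oracle for {(n, k) : n has a prime factor < k}.
    (Having a prime factor below k is equivalent to having any nontrivial
    divisor below k.) Brute force here, just so the sketch runs."""
    return any(n % d == 0 for d in range(2, k))

def smallest_prime_factor(n):
    """Binary search for the least prime factor of n (n > 1) using the oracle."""
    lo, hi = 2, n + 1  # invariant: the least prime factor lies in [lo, hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if has_prime_factor_below(n, mid):
            hi = mid   # some prime factor is < mid
        else:
            lo = mid   # every prime factor is >= mid
    return lo

def factor(n):
    """Divide out smallest prime factors until nothing is left."""
    factors = []
    while n > 1:
        p = smallest_prime_factor(n)
        factors.append(p)
        n //= p
    return factors

print(factor(84))  # -> [2, 2, 3, 7]
```

Each prime factor costs about log n oracle calls, and n has at most log n prime factors, so the whole reduction makes only polynomially many queries.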

anthonywlin said...

Yes, I agree that what you mentioned is a disconcerting fact, at least in theory. In practice, as you mentioned before, this algorithm, whether polynomial or not, might just be too slow to invert the RSA function. So, it might not matter in the end.