Efficient computation, Internet security and the limits of human knowledge
Dedicated to Philippe Flajolet.
This is one of the Clay Mathematics Institute Millennium Problems. Although it is a mathematical problem, it has a rather different flavour from other mathematical problems, and also far-reaching philosophical implications.
It is about problems we understand versus problems we want to understand.
- Computation is everywhere.
- Algorithm, the language of computation.
- Efficient algorithms: P.
- Efficient verification: NP.
- Universality: NP completeness.
Computation: every process which is a sequence of simple local steps. Examples:
- Development of a living being (e.g. a child)
- Infection spreading (at cell scale or at planet scale: "local" can mean many things).
- Proving theorems
The subconscious mind computes. Beauty and emotion can arise from computation.
In 1936, Alan Turing (1912-1954) formalized computation. Church-Turing thesis: everything that can be computed mechanically can be computed by a Turing machine.
An algorithm has a finite description, although there are infinitely many instances (inputs).
Not all problems can be solved by computation. Turing showed that one cannot tell whether a given program always halts (whatever the input). This is related to Matiyasevich's theorem on Diophantine equations.
If a problem has an algorithm, the next question is: how long does it take to give an answer?
Addition of two n-digit integers takes about n operations.
Elementary-school multiplication of two n-digit integers requires roughly n^2 operations. Can one do faster? Yes! Nearly linear time.
Brute-force factoring of an n-digit integer (check whether 2, 3, 4, … is a divisor) takes about 10^(n/2) operations. Can one do faster? Yes! But the best known algorithms are still nearly exponential. This is crucial for everyday life.
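The brute-force method above can be sketched in a few lines. The function name is my own; the point is that the loop runs about 10^(n/2) times for an n-digit input, i.e. exponentially in the input size:

```python
# Brute-force factoring by trial division: try 2, 3, 4, ... up to sqrt(N).
# For an n-digit number N ~ 10^n this is about 10^(n/2) steps,
# exponential in n (the length of the input, not its value).
def trial_division(N):
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d  # smallest nontrivial divisor
        d += 1
    return N  # no divisor found: N is prime

print(trial_division(91))   # 7
print(trial_division(101))  # 101 (prime)
```

Already a 30-digit product of two 15-digit primes makes this loop run about 10^15 times, which is why the method is hopeless in practice.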
4. The class P
Cobham, Edmonds and Rabin, circa 1965, defined P as the class of problems with algorithms that answer on inputs of size n in polynomial time, i.e. in about n, n^2, n^3, … steps.
Many problems have fast algorithms, of daily use.
Dijkstra : Shortest path between two locations on a network. Your computer does it in a fraction of a second.
Knuth-Morris-Pratt : Pattern matching (text processing, genomes)
Cooley-Tukey: Fast Fourier Transform (audio and image processing). Used by Gauss in 1805 for computing planetary orbits.
Peterson, Berlekamp-Massey: Error-correcting codes (CDs, satellite communication).
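The first of these, Dijkstra's shortest-path algorithm, fits in a few lines; a minimal sketch (graph encoding and names are my own choices) showing the polynomial-time behaviour that makes route finding feel instant:

```python
import heapq

# Dijkstra's algorithm on a weighted graph given as adjacency lists
# {node: [(neighbor, weight), ...]}. Each edge is relaxed at most once
# per improvement, giving near-linear running time with a binary heap.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 1, 'C': 3}
```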
Question: Do all problems which admit an algorithm also admit a class P algorithm?
5. The class NP
Search problems: Shortest path, Pattern matching, Factoring, Theorem proving and Sudoku are instances of search problems. They may be of variable difficulty, but they share a common property: if someone hands you a solution for a given instance, it is easy to check that it is indeed a solution. Even checking a mathematical proof is easy, provided the proof is written in a formal language.
This is what class NP is about. It was defined formally by Cook and Levin in 1971, but the idea can be found in a letter of Gödel to von Neumann in 1956.
NP does NOT mean "not polynomial". It means "nondeterministic polynomial time".
The P versus NP problem asks whether, for every search problem whose solutions can be efficiently checked, solutions can in fact be efficiently found.
Most people think that NP is not equal to P, i.e. that finding is harder than checking.
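The finding/checking asymmetry is easy to see on Sudoku: verifying a filled grid is one polynomial-time scan, while filling one in from a few clues may require search. A minimal checker (my own sketch, standard 9x9 rules):

```python
# Verify a claimed Sudoku solution: every row, column and 3x3 box must
# contain the digits 1..9 exactly once. This is the "easy to check"
# direction of NP; finding the grid from clues is the hard direction.
def is_valid_sudoku(grid):  # grid: 9x9 list of lists of digits 1..9
    def ok(cells):
        return sorted(cells) == list(range(1, 10))
    rows = all(ok(row) for row in grid)
    cols = all(ok([grid[r][c] for r in range(9)]) for c in range(9))
    boxes = all(ok([grid[r + i][c + j] for i in range(3) for j in range(3)])
                for r in (0, 3, 6) for c in (0, 3, 6))
    return rows and cols and boxes

# A valid grid built from a standard shifting pattern:
solution = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
print(is_valid_sudoku(solution))  # True
```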
6. What is in NP?
Most everyday-life search problems are in NP. Engineering (given constraints, find a suitable design), … Indeed, checking whether a design meets the constraints is easy (if it were not, we probably would not start looking).
If P = NP, this would mean that creativity can be automated. Utopia!
7. Universality: NP-completeness
Are Sudoku, Theorem proving, Factoring hard? We do not know, but:
Theorem 1 If Sudoku is easy, then so are Theorem proving and Factoring.
In fact, a Sudoku solver can solve any NP problem.
NP-complete problems are all equivalent: if one turns out to be easy, all of them will be.
Theorem proving (with a prescribed length of the proof) is NP-complete. We do not know about Factoring.
Question from the audience: But Factoring has a subexponential algorithm? Answer: so what? There exist NP-complete problems which have subexponential algorithms.
Thousands of problems in all fields of science have been shown to be NP-complete (e.g. protein threading problem with sequence amino-acid preference is NP-complete).
8. P ≠ NP as a law of nature
Nature keeps solving NP-complete problems (e.g. optimal shapes). Why ?
Possibly, nature deals only with special inputs. Inputs are bounded (by the number of atoms in the universe), but this does not matter, since the limitations of non-polynomial algorithms are already visible on ridiculously small inputs.
Or the model is wrong, and computers cannot do everything. If so, can one use natural processes to enhance the power of computers? Yes! This is how randomness, and later quantum physics, were introduced into Turing's model.
9. Positive consequences of P ≠ NP
Are hard problems useful? Yes!
If Factoring is hard, then personal data are safely stored and exchanged, all of us bank on that.
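A toy illustration of that dependence (not from the talk; the numbers are the standard tiny textbook example and offer no real security): in RSA-style encryption, anyone can encrypt with the public key, but decrypting requires a private exponent that is easy to compute only if you can factor N.

```python
# Toy RSA: security rests on the assumed hardness of Factoring.
p, q = 61, 53                # secret primes (tiny, illustration only)
N, e = p * q, 17             # public key: N = 3233, exponent e
phi = (p - 1) * (q - 1)      # computable only from the factorization of N
d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, N)      # anyone can encrypt with (N, e)
print(pow(cipher, d, N))     # holder of d decrypts -> 42
```

An eavesdropper who could factor N = 3233 back into 61 x 53 would recover phi, hence d; for thousand-bit N, no known classical algorithm does this in any reasonable time.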
10. Questions from the audience
10.1. What progress has been made?
We know that some very weak models (where algorithms are severely restricted) cannot solve all NP problems.
We know restrictions on certain methods: we can explain why certain techniques for proving lower bounds on complexity must fail.
Interesting interaction with statistical physicists.
10.2. Further questions
Power of quantum computation: Shor's result that Factoring is in P in the quantum computation model came as a surprise (although Feynman had alluded to this possibility). Possibly, this might lead to the discovery of flaws in quantum physics. Research is active, since there are many problems for which we do not have fast quantum algorithms.
Parallelism in nature: In the weather game, every atom plays the role of a computer which optimizes its own position, temperature, speed… So nature is massively parallel. Nevertheless, the number of processors is only linear in the input size, so it does not matter for P versus NP.