Notes of Ryan O’Donnell’s lecture nr 1

1. How to prove hardness of approximation results

The first lecture is mainly definitions, the second will state results, and the third will go into some interesting mathematics.

P/NP is a satisfactory framework for decision problems, but not so much for optimization problems.

Example 1

  • MAX CUT (see Mathieu’s first lecture).
  • MAX 3LIN: Given a set of 3-variable equations, each of the form {x_i + x_j + x_k = 0 \pmod 2} or {x_i + x_j + x_k = 1 \pmod 2}, find an assignment satisfying as many equations as possible.
  • MAX INDEPENDENT SET: Given a graph {G}, find an independent set {S} of vertices (no edges between any two of them), with {|S|} as large as possible.
  • MAX {k}-COVER: Given sets {S_1,\ldots, S_m} covering {\Omega}, given {k}, choose {k} of them so that the union is as large as possible.

Definition 1 Let us call the value of a solution the relevant quantity, normalized to lie in {[0,1]}.

Given an input {I}, let {Opt(I)} be the value of the best solution. Given an algorithm {A}, let {Alg_A (I)} be the value of the solution {A} achieves on {I}.
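
For concreteness, here is a minimal sketch (mine, not from the lecture; the input encoding is an assumption) of the normalized value for MAX CUT: the fraction of edges crossing the cut, which always lies in {[0,1]}.

  # Assumed encoding: a graph is a list of edges (u, v); a cut is the set of
  # vertices on one side. The value is the fraction of edges crossing the cut.
  def cut_value(edges, side):
      crossing = sum(1 for u, v in edges if (u in side) != (v in side))
      return crossing / len(edges)

  # Example: on a 4-cycle, the cut {0, 2} crosses all 4 edges, so the value is 1.0.
  print(cut_value([(0, 1), (1, 2), (2, 3), (3, 0)], {0, 2}))  # -> 1.0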

Here is the key definition.

Definition 2 Say that algorithm {A} {(c,s)}-approximates a problem if

  1. {A} is efficient, i.e. runs in polynomial time (randomized algorithms, i.e. the class BPP, are allowed too);
  2. For all inputs {I}, if {Opt(I)\geq c}, then {Alg_A (I)\geq s}.

Example 2 The greedy algorithm {(1,1-e^{-1})}-approximates MAX {k}-COVER.

Greedy means: pick the largest set, then the set covering the largest part of what remains, and so on. It is the best known algorithm.
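
A minimal sketch of this greedy rule (the set/universe encoding below is my own assumption, not from the lecture):

  # Greedy for MAX k-COVER: repeatedly pick the set covering the most
  # not-yet-covered elements. Sets are given as Python sets.
  def greedy_k_cover(sets, k):
      covered, chosen = set(), []
      for _ in range(k):
          best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
          chosen.append(best)
          covered |= sets[best]
      return chosen, covered

  # Example: with k = 2 the greedy choice covers the whole universe {1,...,6}.
  print(greedy_k_cover([{1, 2, 3}, {4, 5, 6}, {1, 4}], 2))  # -> ([0, 1], {1, 2, 3, 4, 5, 6})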

Example 3 The trivial algorithm “output {x_i=0} or {1}” {(c,1/2)}-approximates MAX 3LIN, for every {c}. Gaussian elimination {(1,1)}-approximates MAX 3LIN.

Again, these are the best known algorithms.
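
To see where the {1/2} comes from, here is a minimal sketch under one reading of the trivial algorithm (my interpretation, not necessarily the lecturer’s): output either the all-zeros or the all-ones assignment, whichever satisfies more equations. Since {0+0+0=0} and {1+1+1=1 \pmod 2}, every equation is satisfied by exactly one of the two candidates, so the better one always has value at least {1/2}.

  # Assumed encoding: each equation is ((i, j, k), b), meaning x_i + x_j + x_k = b (mod 2).
  def trivial_3lin(equations, n):
      frac_ones = sum(1 for _, b in equations if b == 1) / len(equations)
      # The all-ones assignment satisfies exactly the b = 1 equations (1+1+1 = 1 mod 2);
      # the all-zeros assignment satisfies exactly the b = 0 ones, so the fractions sum to 1.
      if frac_ones >= 0.5:
          return [1] * n, frac_ones
      return [0] * n, 1 - frac_ones

  # Example: 2 of the 3 equations have right-hand side 1, so all-ones has value 2/3.
  eqs = [((0, 1, 2), 0), ((0, 1, 3), 1), ((1, 2, 3), 1)]
  print(trivial_3lin(eqs, n=4))  # -> ([1, 1, 1, 1], 0.666...)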

Next we turn to more sophisticated algorithms, starting with a theorem by Alon and Kalai (?)… 1994.

Theorem 3 There is an SDP-based algorithm (studying the Lovász {\theta}-function) which {(0.33334,n^{-1/4})}-approximates MAX INDEPENDENT SET.

Next, Goemans and Williamson’s 1994 Theorem.

Theorem 4 There is an SDP-based algorithm which {(c,\frac{1}{\pi}\arccos(1-2c))}-approximates MAX CUT, provided {c\geq 0.845\ldots}.

Best known.
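
The guarantee curve can be evaluated numerically. A small sketch (mine, not from the lecture) computes {s(c)=\frac{1}{\pi}\arccos(1-2c)} and the ratio {s(c)/c} over {c\geq 0.845}, which bottoms out around the familiar Goemans-Williamson constant {\approx 0.878}.

  import math

  # Goemans-Williamson guarantee: if Opt >= c (with c >= 0.845...), the algorithm
  # achieves value at least s(c) = arccos(1 - 2c) / pi.
  def gw_soundness(c):
      return math.acos(1 - 2 * c) / math.pi

  cs = [t / 1000 for t in range(845, 1001)]        # c from 0.845 to 1.000
  worst = min(cs, key=lambda c: gw_soundness(c) / c)
  print(worst, gw_soundness(worst) / worst)        # ~ 0.845, ~ 0.878
  print(gw_soundness(1.0))                         # -> 1.0, i.e. s(1) = 1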

Linial: what does GW do for smaller {c}? Answer: something linear in {c}, which does not look good.

Can we do better? Or prove that one cannot do better?

1.1. Inapproximability results

All results stated are under the assumption P{\not=}NP.

Feige 1996.

Theorem 5 MAX {k}-COVER is {(1,1-e^{-1}+\delta)}-inapproximable for all {\delta>0}.

Håstad 1998.

Theorem 6 MAX 3LIN is {(1-\delta,\frac{1}{2}+\delta)}-inapproximable for all {\delta>0}.

Question: but doesn’t Gaussian elimination do better? Answer: yes, but only under the assumption that there is a fully satisfying assignment, i.e. {c=1}.

Dinur and Safra 2002.

Theorem 7 MAX INDEPENDENT SET is {(\frac{1}{3}-\delta,\frac{1}{?}+\delta)}-inapproximable for all {\delta>0}.

Khot and Regev 2003.

Theorem 8 MAX INDEPENDENT SET is {(\frac{1}{2}-\delta,\delta)}-inapproximable for all {\delta>0}, under a different complexity assumption, UGC.

Only 50% of people believe in UGC.

Khot, Kindler, Mossel, O’Donnell, based on Mossel, O’Donnell and Oleszkiewicz 2005.

Theorem 9 MAX CUT is {(c-\delta,\frac{1}{\pi}\arccos(1-2c)+\delta)}-inapproximable for all {\delta>0}, under a different complexity assumption, UGC.

Raghavendra 2009.

Theorem 10 Let MAX BLAH be any constraint satisfaction problem. There is an SDP-based algorithm which {(c,S_{BLAH}(c))}-approximates MAX BLAH. And there is a matching inapproximability result: MAX BLAH is {(c-\delta,S_{BLAH}(c+\delta)+\delta)}-inapproximable for all {\delta>0}, assuming UGC.

Pisier: can this be phrased in terms of a ratio? Answer: that would be weaker.

1.2. How can one prove such inapproximability results?

These are NP-hardness results. They are proved by Karp-reduction from known NP-hard problems.

The fact that MAX {k}-COVER is NP-hard can be stated as follows: MAX {k}-COVER is {(1,1)}-inapproximable assuming P{\not=}NP. To prove this, one shows that {3}-COLORABILITY reduces in polynomial time to {(1,1)}-approximating MAX {k}-COVER. This means producing a polytime algorithm that, given a graph {G}, outputs sets {S_1 ,\ldots,S_m} and {k} satisfying

  1. Completeness. {G} {3}-colorable {\Rightarrow} there exist {k} sets covering {1}-fraction of {\Omega}.
  2. Soundness. {G} not {3}-colorable {\Rightarrow} for all {k}-tuples of sets, they cover {<1}-fraction of {\Omega}.

Proof: Introduce a {k}-cover gadget for each vertex and edge pair. Attach them to the given graph, obtaining an instance {R(G)}. Take {k} = the number of vertices. Completeness holds by design. Soundness is proved by contrapositive: if there were {k} sets covering a {1}-fraction of {R(G)}, then one could decode them into a {3}-colouring of {G}.

More details can be found in standard textbooks. \Box
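
To make the roles of completeness and soundness concrete, here is a schematic sketch (my own; the reduction {R} and the hypothetical {(1,1)}-approximation algorithm {A} are left abstract, and their interfaces are assumptions) of how such a reduction would decide {3}-COLORABILITY.

  # Schematic use of the reduction R described above.
  def decide_3_colorability(G, R, A):
      sets, k, universe = R(G)       # polynomial-time construction of the k-COVER instance
      value = A(sets, k, universe)   # fraction of the universe covered by A's choice of k sets
      # Completeness: G 3-colorable   => Opt = 1, so the (1,1)-algorithm A returns value 1.
      # Soundness:    G not 3-colorable => every k sets cover < 1, so value < 1.
      return value == 1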

The strategy for Feige’s {(1,1-e^{-1}+\delta)}-inapproximability theorem goes along similar lines, except that the Soundness statement is stronger. The reduction is longer: it goes through intermediate problems like LABEL COVER (also called MAX PROJECTION). In fact, many hardness results I will discuss start from the hardness of {(1,\delta)}-approximating MAX PROJECTION. This relies on the PCP theorem and Parallel Repetition.

Definition 11 For all integers {L\geq R}, the MAX PROJECTION {(L,R)}-problem has

– as input a bipartite graph {G=((U,V),E)}, {|U|=|V|}, {G} regular, and for each edge {e=uv\in E}, a projection constraint, i.e. a map {\pi_{uv}:[L]\rightarrow[R]};

– as output, assignments {\alpha:U\rightarrow[L]} and {\beta:V\rightarrow [R]};

– the goal is to maximize the fraction of consistent edges, i.e. edges {uv\in E} with {\pi_{uv}(\alpha(u))=\beta(v)}.

One can think of {[L]} and {[R]} as label or color sets, and of {\pi_{uv}} as coloring rules which depend on the edge.
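
A minimal sketch (mine; the instance encoding is assumed) of how the objective is evaluated, given label assignments on the two sides:

  # Assumed encoding: each edge is (u, v, pi), where pi is a dict mapping labels
  # in [L] to labels in [R]; alpha labels the U side, beta labels the V side.
  def projection_value(edges, alpha, beta):
      consistent = sum(1 for u, v, pi in edges if pi[alpha[u]] == beta[v])
      return consistent / len(edges)

  # Tiny example with L = 2, R = 1: every projection is the constant map 0,
  # so any assignment is consistent on every edge and the value is 1.0.
  edges = [("u1", "v1", {0: 0, 1: 0}), ("u1", "v2", {0: 0, 1: 0})]
  print(projection_value(edges, {"u1": 0}, {"v1": 0, "v2": 0}))  # -> 1.0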

The following is the heart of Feige’s inapproximability theorem.

Theorem 12 For all {\delta>0}, there exist {L}, {R=\delta^{-O(1)}} such that MAX PROJECTION{(L,R)} is {(1,\delta)}-inapproximable assuming P{\not=}NP.

Périfel: Can one prove {(c,\delta c)}-inapproximability? Answer: the stated result is in fact stronger (not so obvious, but easy).
