**Approximability versus hardness: a duality**

More philosophical talk.

MAX 3SAT’s optimal approximation algorithm is trivial, whereas a huge amount of work went into proving the matching tight inapproximability bound, culminating in Håstad’s theorem.
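The trivial algorithm alluded to is, presumably, the uniformly random assignment: a clause on 3 distinct variables is satisfied with probability exactly $7/8$, which matches Håstad’s $7/8+\epsilon$ hardness. A minimal sketch (the DIMACS-style clause encoding is my assumption, not from the notes):

```python
import random

def random_assignment_value(clauses, num_vars, trials=2000, seed=0):
    """Average fraction of satisfied clauses under uniformly random assignments.

    `clauses` is a list of 3-tuples of nonzero ints: +i means variable i,
    -i means its negation (a hypothetical DIMACS-style encoding).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Index 0 is unused so literals can be looked up by absolute value.
        assign = [rng.random() < 0.5 for _ in range(num_vars + 1)]
        sat = sum(
            any(assign[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        total += sat / len(clauses)
    return total / trials

# Each clause on 3 distinct variables is violated only by 1 of 8 settings,
# so the expected satisfied fraction is 7/8 = 0.875.
clauses = [(1, 2, 3), (-1, 2, -4), (1, -3, 4), (-2, -3, -4)]
print(random_assignment_value(clauses, num_vars=4))  # close to 0.875
```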

MAX CUT has a trivial algorithm which is far from optimal. The optimal algorithm for MAX CUT, due to Goemans and Williamson, is based on semidefinite programming (SDP) and follows a strategy (relaxation, then rounding) which has been successful in many other problems.
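Here the trivial algorithm is the random cut: each vertex picks a side with probability $1/2$, so each edge is cut with probability $1/2$, giving a $1/2$-approximation in expectation, far from the Goemans–Williamson ratio $\approx 0.878$. A minimal sketch:

```python
import random

def random_cut_value(edges, n, trials=2000, seed=1):
    """Average fraction of edges cut when each vertex picks a side uniformly."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        side = [rng.random() < 0.5 for _ in range(n)]
        cut = sum(side[u] != side[v] for u, v in edges)
        total += cut / len(edges)
    return total / trials

# A 5-cycle: the maximum cut has 4 of the 5 edges, while a random cut
# cuts half the edges in expectation, illustrating the factor-1/2 guarantee.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(random_cut_value(edges, 5))  # close to 0.5
```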

Inapproximability results start from the PCP theorem, which can be formulated as an inapproximability result for LABEL COVER with a small gap. This gap can be amplified thanks to parallel repetition; the price to pay is large label sets.
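Quantitatively, parallel repetition (Raz’s theorem, with Holenstein’s simplification) states roughly the following: if a two-prover game $G$ with answer alphabet of size $s$ has value $1-\epsilon$, then

$$\mathrm{val}(G^{\otimes n}) \;\le\; \bigl(1-\epsilon^{3}\bigr)^{\Omega(n/\log s)},$$

so the gap grows with the number of repetitions $n$; but the repeated game has answer alphabet of size $s^{n}$, which is the “large labels” price.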

Harnessing hardness (Bellare–Goldreich–Sudan, Håstad). This amounts to a reduction, using gadgets. For instance, to reduce Unique Games with a label set of size $k$ to MAX CUT, the gadget consists in replacing the label set by the cube $\{0,1\}^k$, producing a larger graph. Assignments for a Unique Games instance can be upgraded into cuts of this graph by mapping each label to the corresponding coordinate half-cube (aka dictator function). Conversely, efficient cuts of the large graph must cut cubes nearly into half-cubes, and so labellings can be associated to them.
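To make the dictator encoding concrete, here is a toy sketch (the helper names are mine): label $i$ is mapped to the $i$-th coordinate half-cube of $\{0,1\}^k$, and that cut crosses exactly the $2^{k-1}$ cube edges in direction $i$.

```python
from itertools import product

def dictator_cut(k, i):
    """The i-th dictator function on the k-cube: side = i-th coordinate."""
    return {x: x[i] for x in product((0, 1), repeat=k)}

def cube_edges(k):
    """Edges of the k-dimensional hypercube (vertices differing in one bit)."""
    for x in product((0, 1), repeat=k):
        for j in range(k):
            if x[j] == 0:
                yield x, x[:j] + (1,) + x[j + 1:]

k, i = 4, 2
cut = dictator_cut(k, i)
crossing = [(x, y) for x, y in cube_edges(k) if cut[x] != cut[y]]
# The i-th half-cube cut crosses exactly the 2^(k-1) edges in direction i.
print(len(crossing))  # 8 when k = 4
```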

Where do relaxations, roundings and gadgets come from? Raghavendra has a systematic way of producing them for a large class of CSPs over constant-size alphabets (but it does not apply to VERTEX COVER, for instance).

Kumar, Manokaran, Tulsiani and Vishnoi (KMTV) have an alternate approach which applies to a different class, strict monotone CSPs over constant-size alphabets, which contains VERTEX COVER.

**1. Raghavendra’s method**

**Example 1.** Raghavendra’s relaxation for MAX CUT. For each edge $e=uv$, Raghavendra introduces 4 new variables $\mu_e(a,b)$, $a,b\in\{0,1\}$, according to the 4 possible values for the endpoints (belong to the cut or not), and adds to GW’s SDP the constraints that these local probabilities be nonnegative, sum to 1, and be consistent with the inner products of the SDP vectors.
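In symbols, a sketch of the shape of this relaxation (the “BasicSDP” form, not necessarily Raghavendra’s exact notation): for each vertex $u$ and value $a\in\{0,1\}$ there is a vector $b_{u,a}$, and for each edge $e=uv$ a local distribution $\mu_e$ on $\{0,1\}^2$, tied together by

$$\langle b_{u,a},\, b_{v,b}\rangle \;=\; \Pr_{\mu_e}[x_u=a,\ x_v=b] \qquad (e=uv,\ a,b\in\{0,1\}),$$

with objective (for MAX CUT)

$$\max \sum_{e=uv} \Pr_{\mu_e}[x_u \neq x_v].$$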

Rounding amounts to cutting the sphere into many parts and brute-force searching among cuts of the set of parts. Raghavendra’s approach does not explicitly give the value of the approximation ratio; for MAX CUT, for instance, it takes some nice mathematics to extract it.

**2. KMTV’s approach**

**Definition 1.** A strict $k$-CSP has:

Input: a $k$-uniform hypergraph and, for each hyperedge $e$, a subset $S_e \subseteq \{0,1\}^k$.

Output: an assignment of boolean variables $x_v$, one per vertex, such that for every hyperedge $e$, the corresponding $k$-tuple belongs to $S_e$.

Objective: minimize the number of variables set to $1$, i.e. $\sum_v x_v$.

It is monotone if all the $S_e$’s are monotone (upward-closed) sets.

In the sequel, we stick to one example, VERTEX COVER.
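VERTEX COVER fits the definition as a strict monotone 2-CSP: hyperedges are the graph edges, and the allowed set for every edge is $S_e=\{01,10,11\}$ (at least one endpoint in the cover), which is monotone. A quick sanity check:

```python
from itertools import product

# VERTEX COVER as a strict 2-CSP: for each edge, the allowed pairs
# are those with at least one endpoint selected.
S_vc = {(0, 1), (1, 0), (1, 1)}

def is_monotone(S, k):
    """S is monotone if raising any coordinate from 0 to 1 stays inside S."""
    for x in S:
        for j in range(k):
            if x[j] == 0 and x[:j] + (1,) + x[j + 1:] not in S:
                return False
    return True

def cost(assignment):
    """Strict-CSP objective for VERTEX COVER: number of selected vertices."""
    return sum(assignment)

print(is_monotone(S_vc, 2))  # True

# Brute-force optimum on a triangle: every edge must be covered.
edges = [(0, 1), (1, 2), (2, 0)]
best = min(
    (x for x in product((0, 1), repeat=3)
     if all((x[u], x[v]) in S_vc for u, v in edges)),
    key=cost,
)
print(cost(best))  # 2
```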

**2.1. The relaxation**

Replace $x_v \in \{0,1\}$ by $x_v \in [0,1]$; for VERTEX COVER this gives the usual LP: minimize $\sum_v x_v$ subject to $x_u + x_v \ge 1$ for every edge $uv$.

**2.2. Rounding**

Partition $[0,1]$ into buckets and decide that variables whose LP values fall in the same bucket are equal. Then brute-force search among the possible assignments, one boolean choice per bucket.
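A toy sketch of this bucket rounding for VERTEX COVER (it assumes a fractional LP solution is given as input; the helper names are mine):

```python
from itertools import product

def bucket_round(lp_values, edges, num_buckets=4):
    """Round a fractional VERTEX COVER solution by bucketing LP values.

    Variables whose LP values fall in the same bucket of [0,1] are forced
    to be equal; we then brute-force over the 2^num_buckets per-bucket
    choices and return the cheapest assignment covering every edge.
    """
    bucket_of = [min(int(x * num_buckets), num_buckets - 1) for x in lp_values]
    best = None
    for choice in product((0, 1), repeat=num_buckets):
        assign = [choice[bucket_of[v]] for v in range(len(lp_values))]
        if all(assign[u] or assign[v] for u, v in edges):
            if best is None or sum(assign) < sum(best):
                best = assign
    return best

# Path 0-1-2 with LP optimum x = (0, 1, 0): rounding recovers the cover {1}.
edges = [(0, 1), (1, 2)]
print(bucket_round([0.0, 1.0, 0.0], edges))  # [0, 1, 0]
```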

**2.3. Analysis**

This rounding procedure comes out of analyzing a reduction from Unique Games. The gadget is an $R$-cube (one for each vertex $v$), where $R$ is the size of the alphabet of the Unique Games instance. Put weights on the vertices of this cube according to the $p$-biased distribution (product of Bernoulli($p$) measures). The LP solution provides us with, for each edge, a probability distribution on $\{0,1\}^2$; edges between the cubes of adjacent vertices are put according to these distributions.
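The weights in question are, presumably, those of the standard $p$-biased product measure on the $R$-cube:

$$\mu_p(x) \;=\; p^{\,|x|}\,(1-p)^{\,R-|x|}, \qquad x\in\{0,1\}^R,\ \ |x|=\textstyle\sum_i x_i,$$

i.e. each coordinate is an independent Bernoulli($p$) bit.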

Completeness is easy.

Soundness analysis requires an invariance principle.

**3. Conclusion**

MAX CSPs and strict CSPs do not contain SPARSEST CUT, COLORING,…

We have some understanding of problems where variables are associated with vertices. But what about variables associated with edges? Parallel repetition does not apply any more.

For SPARSEST CUT, we need better isoperimetric results, in the sub-constant regime; see Sherman.

Systematic ways to produce relaxations?

**References**