## Notes of Nisheeth Vishnoi’s talk, Jan. 21

Approximability versus hardness: a duality

This is a more philosophical talk.

MAX 3SAT’s algorithm is trivial, whereas a huge amount of work has been put in proving the tight inapproximability bound, culminating with Håstad’s theorem.
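The trivial algorithm is a uniformly random assignment: a clause on three distinct variables is satisfied by 7 of its 8 local assignments, so in expectation a random assignment satisfies a ${7/8}$ fraction of the clauses, matching Håstad’s inapproximability bound. A minimal sketch checking this count (the clause encoding as signed integers is my own convention):

```python
from itertools import product

def clause_sat_fraction(clause):
    """Fraction of assignments to the clause's variables that satisfy it.
    A clause is a list of nonzero ints; -v means the negation of variable v."""
    variables = sorted({abs(lit) for lit in clause})
    count = 0
    for bits in product([False, True], repeat=len(variables)):
        val = dict(zip(variables, bits))
        if any(val[abs(lit)] if lit > 0 else not val[abs(lit)] for lit in clause):
            count += 1
    return count / 2 ** len(variables)

# any clause on 3 distinct variables is satisfied by 7 of its 8 assignments
print(clause_sat_fraction([1, 2, 3]))    # 0.875
print(clause_sat_fraction([1, -2, 3]))   # 0.875
```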

MAX CUT has a trivial algorithm (a uniformly random cut achieves ratio ${1/2}$) which is far from optimal. The optimal algorithm for MAX CUT is based on SDP, and follows a strategy (relaxation, rounding) which has been successful in many other problems.
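The rounding step of the Goemans–Williamson strategy can be sketched on its own: given the unit vectors produced by the SDP (assumed here, no solver involved), cut the sphere by a random hyperplane through the origin.

```python
import random

def hyperplane_round(vectors, seed=0):
    """GW-style rounding: draw a random Gaussian direction g and place each
    vector on the side of the hyperplane orthogonal to g it falls on."""
    rnd = random.Random(seed)
    dim = len(vectors[0])
    g = [rnd.gauss(0, 1) for _ in range(dim)]
    return [1 if sum(gi * vi for gi, vi in zip(g, v)) >= 0 else -1
            for v in vectors]

# antipodal vectors (an edge the SDP stretches to angle pi) are separated
# by every hyperplane that does not contain them
v = [0.6, 0.8]
sides = hyperplane_round([v, [-x for x in v]])
print(sides[0] != sides[1])
```

The probability that an edge ${\{i,j\}}$ is cut is ${\theta_{ij}/\pi}$, where ${\theta_{ij}}$ is the angle between ${v_i}$ and ${v_j}$; optimizing the worst-case ratio of this against the SDP value gives the GW constant.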

Inapproximability results start from the PCP theorem, which can be formulated as an inapproximability result for LABEL COVER, with a small gap. This gap can be amplified thanks to parallel repetition; the price to pay is large label sets.

Harnessing hardness (BGS, Håstad). This amounts to a reduction, using gadgets. For instance, for reducing Unique Games with labels of size ${k}$ to MAX CUT, the gadget consists in replacing the label set by the cube ${\{ 0,1 \}^k}$, producing a larger graph. Assignments for a Unique Games instance can be upgraded into cuts in this graph by mapping labels to the corresponding coordinate half-cube (aka dictator function). Conversely, efficient cuts of the large graph must cut cubes nearly into half-cubes, and so labellings can be associated to them.
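The upgrading direction can be made concrete: a label ${\ell}$ becomes the coordinate half-cube ${\{x : x_\ell = 1\}}$, i.e. the cut given by the ${\ell}$-th dictator function. A toy sketch (function name is mine):

```python
from itertools import product

def dictator_cut(k, label):
    """Upgrade a label in {0,...,k-1} to a cut of the cube {0,1}^k:
    one side is the coordinate half-cube {x : x[label] == 1}."""
    return {x for x in product([0, 1], repeat=k) if x[label] == 1}

cut = dictator_cut(3, 1)
print(len(cut))  # 4: exactly half of the 8 cube vertices
```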

Where do relaxations, roundings and gadgets come from? Raghavendra has a systematic way of producing them for a large class of CSPs over size ${q}$ alphabets (but it does not apply to VERTEX COVER, for instance).

Kumar, Manokaran, Tulsiani and Vishnoi (KMTV) have an alternative approach which applies to a different class, strict monotone CSPs over size ${q}$ alphabets, which contains VERTEX COVER.

1. Raghavendra’s method

Example 1 Raghavendra’s relaxation for MAX CUT. For each edge, Raghavendra introduces 4 new variables ${\mu_{ij}^{\pm1,\pm1}}$, according to the 4 possible values for the endpoints (belong to the cut or not), and adds to GW’s SDP the constraints

$\displaystyle \begin{array}{rcl} v_i \cdot v_j &=&\mu_{ij}^{1,1}+\mu_{ij}^{-1,-1}-\mu_{ij}^{1,-1}-\mu_{ij}^{-1,1},\\ &&\mu_{ij} \textrm{ is a probability distribution on }\{\pm 1\}^2. \end{array}$
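As a sanity check on the constraint: for an integral cut, the natural ${\mu_{ij}}$ is the point mass on the pair of endpoint values, and the linear identity then recovers the inner product ${\pm 1}$ of the corresponding ${\pm 1}$ assignments. A small sketch (helper names are mine):

```python
def edge_marginal(a, b):
    """Point-mass distribution mu_ij on {-1,1}^2 for an integral assignment
    giving endpoint i the value a and endpoint j the value b."""
    return {(s, t): float((s, t) == (a, b)) for s in (1, -1) for t in (1, -1)}

def inner_product(mu):
    """v_i . v_j as determined by mu via the SDP's linear constraint."""
    return mu[(1, 1)] + mu[(-1, -1)] - mu[(1, -1)] - mu[(-1, 1)]

mu = edge_marginal(1, -1)   # the edge is cut
print(inner_product(mu))    # -1.0, consistent with antipodal unit vectors
```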

Rounding amounts to cutting the sphere into many parts and brute-force searching among cuts of the set of parts. Raghavendra’s approach does not explicitly give the value of the approximation ratio. For instance, for MAX CUT, it requires some nice mathematics to get it.

2. KMTV’s approach

Definition 1 A strict ${q}$-CSP has

• Input: a ${q}$-uniform hypergraph, and for each hyperedge ${e}$, a subset ${A_e \subset\{ 0,1 \}^q}$.
• Output: an assignment of boolean variables, one per vertex, such that for every hyperedge, the corresponding ${q}$-tuple belongs to ${A_e}$.
• Objective: minimize ${\sum x_i}$.

It is monotone if all ${A_e}$'s are monotone sets.

In the sequel, we stick to one example, VERTEX COVER.
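VERTEX COVER indeed fits Definition 1: each edge ${e=\{i,j\}}$ carries ${A_e=\{(0,1),(1,0),(1,1)\}}$ (“at least one endpoint is in the cover”), which is a monotone set. A quick sketch checking monotonicity, i.e. closure under raising coordinates (function names are mine):

```python
from itertools import product

def is_monotone(A, q):
    """A subset A of {0,1}^q is monotone if whenever y is in A, every x
    obtained from y by raising some coordinates to 1 is also in A."""
    def dominates(x, y):
        return all(a >= b for a, b in zip(x, y))
    return all(x in A
               for y in A
               for x in product([0, 1], repeat=q)
               if dominates(x, y))

# VERTEX COVER's edge constraint: at least one endpoint in the cover
A_vc = {(0, 1), (1, 0), (1, 1)}
print(is_monotone(A_vc, 2))  # True
```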

2.1. The relaxation

Replace ${x_i \in [0,1]}$ by

$\displaystyle \forall \textrm{ edges }\{i,j\},~(x_i ,x_j) \in \textrm{ convex hull of }\{(1,0),(0,1),(1,1)\}.$
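For VERTEX COVER this hull constraint coincides with the standard LP region ${x_i+x_j\geq 1}$, ${x_i,x_j\in[0,1]}$: membership in the hull can be decided by solving for the barycentric weights of the three extreme points. A small check (a sketch, names mine):

```python
def in_hull(x, y, eps=1e-9):
    """Membership of (x, y) in conv{(1,0), (0,1), (1,1)}: the barycentric
    weights are determined linearly, so just check they are nonnegative."""
    l3 = x + y - 1          # weight of (1,1)
    l1 = x - l3             # weight of (1,0), equals 1 - y
    l2 = y - l3             # weight of (0,1), equals 1 - x
    return min(l1, l2, l3) >= -eps

# on [0,1]^2 the hull is exactly the vertex-cover LP region x + y >= 1
grid = [i / 10 for i in range(11)]
agree = all(in_hull(x, y) == (x + y >= 1 - 1e-9) for x in grid for y in grid)
print(agree)  # True
```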

2.2. Rounding

Partition ${[0,1]}$ into ${1/\delta}$ buckets and decide that variables falling in the same bucket are equal. Then brute-force search among the ${2^{1/\delta}}$ possible assignments.
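The two steps above can be sketched as follows for VERTEX COVER (helper names and the toy instance are mine; this is a sketch of the bucketing idea, not the KMTV algorithm verbatim):

```python
from itertools import product

def bucket_round(lp_values, edges, delta=0.25):
    """Force variables whose LP values fall in the same width-delta bucket
    to be equal, then brute-force over the 2^(1/delta) bucket assignments
    and keep the smallest one that covers every edge."""
    n_buckets = int(1 / delta)
    buckets = [min(int(v / delta), n_buckets - 1) for v in lp_values]
    best = None
    for choice in product([0, 1], repeat=n_buckets):
        x = [choice[b] for b in buckets]
        if all(x[i] or x[j] for i, j in edges):          # feasible cover
            if best is None or sum(x) < sum(best):
                best = x
    return best

# path on 4 vertices with a fractional LP solution
edges = [(0, 1), (1, 2), (2, 3)]
cover = bucket_round([0.5, 0.9, 0.1, 0.7], edges)
print(cover)  # [0, 1, 1, 0]: the two middle vertices cover the path
```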

2.3. Analysis

This rounding procedure comes out of analyzing the reduction from UG. The gadget is an ${r}$-cube (one for each vertex ${i}$), where ${r}$ is the size of the UG alphabet. Put weights on the vertices of this cube according to the ${x_i}$-biased distribution (product of ${x_i}$-Bernoullis). The LP solution provides us with, for each edge, a probability distribution on ${\{(1,0),(0,1),(1,1)\}}$. Put edges between the two corresponding cubes according to this distribution.

Completeness is easy.

Soundness analysis requires an invariance principle.

3. Conclusion

MAX CSPs and strict CSPs do not contain SPARSEST CUT, ${3}$ COLORING,…

We have some understanding of problems where variables are associated with vertices. But what about variables associated with edges? Parallel repetition does not apply any more.

For SPARSEST CUT, we need better isoperimetric results in the sub-constant regime, see Sherman.

Are there systematic ways to produce relaxations?
