## Notes of Irit Dinur’s lecture nr 2

1. Reductions among CSPs

Let me repeat what a ${q}$-CSP is. It is a constraint satisfaction problem, i.e. there are boolean variables ${x_i}$. An instance is a collection of ${q}$-ary constraints, i.e. boolean functions, each depending on a subset of at most ${q}$ variables. Notation: for an instance ${\phi}$, ${Sat(\phi)}$ is the maximum fraction of constraints that can be simultaneously satisfied by an assignment of the variables.

Example 1 3SAT is a ${3}$-CSP. MAX CUT is a ${2}$-CSP. ${3}$-COLORING is a ${2}$-CSP provided one accepts ${3}$-valued variables.

The PCP theorem says that for all these CSPs, there is an inapproximability ratio. Indeed, these problems reduce to each other.

Example 2 Reducing ${3}$-COLORING to 3SAT.

This uses a tiny gadget: coding each ${3}$-valued variable by a set (of size ${q}$) of boolean variables, and cooking up a set of ${q}$-ary constraints translating the ${2}$-ary constraint ${x_i \not=x_j}$ on ${3}$-valued variables.
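A minimal sketch of one such gadget (my illustrative choice: the standard one-hot encoding, with three booleans per vertex; function names are mine, not from the lecture). All clauses have at most 3 literals, so the result is a valid 3SAT instance:

```python
def three_coloring_to_3sat(edges, n):
    """Encode 3-COLORING of a graph on n vertices as 3SAT clauses.

    Clauses are lists of signed literals (sign, var), where the boolean
    variable (v, k) means 'vertex v has colour k'.
    """
    clauses = []
    for v in range(n):
        # at least one colour per vertex (a 3-ary clause)
        clauses.append([(+1, (v, k)) for k in range(3)])
    for (u, v) in edges:
        for k in range(3):
            # the endpoints of an edge may not share colour k (a 2-ary clause)
            clauses.append([(-1, (u, k)), (-1, (v, k))])
    return clauses

def satisfies(clauses, assignment):
    """Check a boolean assignment (dict var -> bool) against all clauses."""
    return all(any((assignment[var] if s > 0 else not assignment[var])
                   for s, var in clause)
               for clause in clauses)
```

For a triangle, the one-hot encoding of a proper coloring satisfies every clause, while a monochromatic assignment violates the edge clauses.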

Example 3 Reducing ${3}$-COLORING to LABEL COVER.

An instance of LABEL COVER is a bipartite graph, two sets of labels ${[L]}$ and ${[R]}$, and, for each edge, a constraint telling which pairs of labels at its endpoints are allowed.

If ${G}$ is ${3}$-colorable, then ${Sat(LC)=1}$.

If every ${3}$-coloring of ${G}$ properly colors at most a ${1-\epsilon}$ fraction of edges, then ${Sat(LC)<1-\frac{\epsilon}{2}}$.
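The reduction can be sketched as follows (this is one common variant, my own reconstruction, not necessarily the exact one from the lecture): left vertices of the LABEL COVER instance are the edges of ${G}$, labelled by pairs of distinct colours; right vertices are the vertices of ${G}$, labelled by a single colour; each LC constraint demands that the pair at an edge projects consistently onto its endpoint.

```python
from itertools import product

def label_cover_from_graph(edges):
    """Build LC constraints: one per (edge of G, endpoint) pair.

    Each entry is (e, v, allowed) where allowed is the set of
    (left-label, right-label) pairs the constraint accepts.
    """
    lc_edges = []
    for e in edges:
        u, v = e
        for endpoint, pos in ((u, 0), (v, 1)):
            allowed = {((a, b), c)
                       for a, b in product(range(3), repeat=2) if a != b
                       for c in range(3) if (a, b)[pos] == c}
            lc_edges.append((e, endpoint, allowed))
    return lc_edges

def sat_fraction(lc_edges, left_label, right_label):
    """Fraction of LC constraints satisfied by a labelling."""
    ok = sum((left_label[e], right_label[v]) in allowed
             for e, v, allowed in lc_edges)
    return ok / len(lc_edges)
```

A proper ${3}$-coloring of ${G}$ yields a labelling with `sat_fraction` equal to ${1}$, matching the completeness claim above.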

So a reduction a priori induces a loss in inapproximability ratios. We want to avoid that, i.e. amplify gaps instead.

2. Repetition

Sequential repetition, i.e. launching the PCP verifier independently ${\ell}$ times, improves the soundness (the probability of false acceptance) from a constant, say ${\frac{1}{2}}$, to ${2^{-\ell}}$. On the CSP side, the corresponding operation amounts to replacing a set of constraints ${\mathcal{C}=\{C_1 ,\ldots,C_m\}}$ by

$\displaystyle \begin{array}{rcl} \mathcal{C}^{\ell}=\{C_{i_{1}}\wedge C_{i_{2}}\wedge\cdots\wedge C_{i_{\ell}}\,;\,i_1,\ldots,i_\ell\in\{1,\ldots,m\}\}. \end{array}$

If ${Sat(\mathcal{C})=1-\epsilon}$, then ${Sat(\mathcal{C}^{\ell})=(1-\epsilon)^{\ell}}$. However, the arity has been multiplied by ${\ell}$. The cost of reducing back to arity ${q}$ kills the benefit of sequential repetition. So this trivial gadget does not help.
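The multiplicativity ${Sat(\mathcal{C}^{\ell})=Sat(\mathcal{C})^{\ell}}$ can be checked by brute force on a toy instance (a sketch; the representation of constraints as boolean callables is my choice):

```python
from itertools import product

def sat(constraints, n_vars):
    """Sat(C): best fraction of constraints satisfied over all 0/1 assignments."""
    best = 0.0
    for x in product([0, 1], repeat=n_vars):
        best = max(best, sum(c(x) for c in constraints) / len(constraints))
    return best

def seq_repeat(constraints, l):
    """C^l: all l-tuples of constraints, AND-ed together."""
    return [lambda x, t=t: all(c(x) for c in t)
            for t in product(constraints, repeat=l)]
```

For a fixed assignment satisfying a ${p}$ fraction of constraints, a uniformly random ${\ell}$-tuple is fully satisfied with probability ${p^{\ell}}$, which is why the maximum over assignments is exactly ${Sat(\mathcal{C})^{\ell}}$.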

One possible way out of this vicious circle is the use of samplers, i.e. selecting only randomly chosen ${\ell}$-tuples of constraints. We can even derandomize sequential repetition, using expanders…

Another, more efficient, way is parallel repetition.

Example 4 Parallel repetition of LABEL COVER.

Let ${(B,A)}$ be a bipartite graph, part of a LABEL COVER instance. Then ${(B^{\ell},A^{\ell})}$ is a bipartite graph defined as follows. Put an edge between ${(a_1, \ldots,a_{\ell})}$ and ${(b_1,\ldots,b_{\ell})}$ iff ${(a_i ,b_i)}$ is an edge for all ${i=1,\ldots,\ell}$. A labelling is consistent if it is consistent coordinatewise.
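The product construction can be sketched directly on the edge set (function name is mine); note that the edge set of the power has size ${|E|^{\ell}}$, since an edge of the product is exactly an ${\ell}$-tuple of original edges:

```python
from itertools import product

def parallel_product(edges, l):
    """Edges of the l-fold product of a bipartite graph: an l-tuple of
    left vertices is joined to an l-tuple of right vertices iff each
    coordinate pair is an edge of the original graph."""
    edge_set = set(edges)
    a_side = sorted({a for a, _ in edges})
    b_side = sorted({b for _, b in edges})
    return [(ta, tb)
            for ta in product(a_side, repeat=l)
            for tb in product(b_side, repeat=l)
            if all((x, y) in edge_set for x, y in zip(ta, tb))]
```

This quadratic-or-worse size blowup (${|E|\mapsto|E|^{\ell}}$) is the cost of parallel repetition alluded to below.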

Ran Raz’s theorem will be stated precisely on Wednesday afternoon.

Theorem 1 If an instance ${LC}$ is satisfiable, then so is ${LC^{\|\ell}}$. If ${LC}$ is ${<1-\epsilon}$-satisfiable, then ${LC^{\|\ell}}$ is ${<\epsilon'(\epsilon)}$-satisfiable.

Note that this theorem cannot be derandomised (Feige and Kilian).

3. Proof of PCP theorem

3.1. Scheme of proof

We want to reduce ${3}$-COLORING to GAP-${3}$-COLORING.

Start with a graph ${G}$. If ${G}$ is not ${3}$-colorable, then ${Sat(G)\leq 1-\frac{1}{|E|}}$ (here ${Sat(G)}$ is the maximum fraction of properly colored edges). That’s a small gap; we want to amplify it until it becomes independent of ${G}$. We do this recursively: at each round, we want to double the gap. After ${\log(|E|)}$ rounds, we shall be done. Each doubling round will have 3 steps.

1. Regularisation (some innocuous preprocessing).
2. Amplification by parallel repetition.
3. Alphabet reduction.

3.2. Amplification step

Keep the same set of vertices, and take as new edges the walks of length ${t}$. We get a multigraph, whose adjacency matrix is the ${t}$-th power of the original one. ${q}$-colorings are taken to ${q'}$-colourings, where ${q'>q}$, as follows: the new colour of a vertex encodes the colours of all its neighbours within radius ${t}$ in the original graph ${G}$. The constraint on a new edge checks that these two lists of colours agree where they overlap and satisfy the original edge constraints.
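The powering step can be sketched via the adjacency matrix (a naive, dependency-free matrix power; the function name is mine): entry ${(i,j)}$ of ${A^t}$ counts walks of length ${t}$ from ${i}$ to ${j}$, i.e. the multiplicity of the edge ${(i,j)}$ in the powered multigraph.

```python
def power_graph_edges(adj, t):
    """Return A^t for an adjacency matrix given as a list of lists.

    Entry (i, j) is the number of length-t walks from i to j, hence the
    edge multiplicity in the powered multigraph.
    """
    n = len(adj)
    power = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(t):
        power = [[sum(power[i][k] * adj[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
    return power
```

For the path ${0-1-2}$, squaring the adjacency matrix produces the walk ${0-1-2}$ as a new edge and the closed walk ${0-1-0}$ as a self-loop.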

Proposition 2 The unsatisfaction gap increases by a definite factor, which is ${\geq 2}$ for ${t}$ large enough.

This looks a bit like parallel repetition, but it differs in that the size of the instance is merely multiplied by a constant, whereas parallel repetition would change ${n}$ into ${n^2}$.

You may think of it as a derandomized form of parallel repetition.

3.3. Alphabet reduction

To iterate the procedure, one needs to convert the multigraph with its ${q'}$-coloring back into a graph with a ${3}$-coloring.

This is more complicated. It involves composition, an idea that has not been exploited enough so far.