Today, we complete the proof of the PCP theorem. Recall that we show a reduction from 3-COLORING to GAP-3-COLORING. We do it recursively, in rounds. Each round doubles the gap, and the size of the graph is multiplied by at most a constant. Each round consists of 3 steps. The first one is innocuous (make the graph regular and expanding; this does not require zig-zag: inserting expanders in place of high-degree vertices makes the graph regular, then adding the edges of an expander on the same vertex set suffices for the needed expansion). The second one, gap amplification, can be thought of as a derandomized parallel repetition. It increases the number of colors. The third, alphabet reduction, brings us back to a 3-COLORING instance. The main tool for this is called *composition*.

Gap amplification consists in replacing vertices by their $t$-neighborhoods. Each vertex becomes aware of the badly colored edges in its $t$-neighborhood; this increases the proportion of bad edges in any coloring. Alphabet reduction works in the opposite way: it spreads vertices equipped with big labels into a cloud of vertices with 3 colors. This is Yoga!

**1. Gap amplification**

The vertex set does not change. The edges change: the edges of the new graph $G^t$ correspond to length-$t$ paths in the original graph $G$.
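The powering step above can be sketched in a few lines. This is a minimal illustration, not Dinur's actual construction: it enumerates the length-$t$ walks of a small example graph and joins their endpoints; the names (`power_edges`, the 4-cycle `G`) are mine.

```python
def power_edges(adj, t):
    """Edges of G^t: endpoint pairs of the length-t walks of G."""
    walks = [[v] for v in adj]               # length-0 walks
    for _ in range(t):
        walks = [w + [u] for w in walks for u in adj[w[-1]]]
    # an edge of G^t joins the two endpoints of a length-t walk
    return [(w[0], w[-1]) for w in walks]

# G = a 4-cycle, as an adjacency dict
G = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
E2 = power_edges(G, 2)   # 16 edges; in this bipartite graph they join same-parity vertices
```

Since the 4-cycle is 2-regular, $G^2$ has $4 \cdot 2^2 = 16$ edges (as a multiset), each joining two vertices of the same parity.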

The new colors (call them labels) encode 3-colorings of $t$-neighborhoods. The constraints check local consistency. The proof that a perfect coloring of $G$ gives rise to a perfect labeling of $G^t$ is straightforward.

The proof that

$$\mathrm{gap}(G^t)\ \ge\ \frac{\sqrt{t}}{O(1)}\cdot\min\Big(\mathrm{gap}(G),\ \frac{1}{t}\Big)$$

is much harder; it involves expansion. In one sentence: given a coloring of $G$, let $F$ be the set of ill-colored edges. Then the set of ill-labelled edges of $G^t$ is related to the $t$-neighborhood of $F$, which, thanks to expansion, is much larger than $F$. Here are the details. The precise consequence of expansion we need is that a random length-$t$ path in $G$ meets $F$ with probability $\Omega\big(\min(t\,|F|/|E|,\,1)\big)$.
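The consequence of expansion invoked here can be checked empirically. The following is only an illustrative Monte Carlo experiment (the complete graph $K_8$ stands in for a good expander, and the sample sizes are my choice): the chance that a random length-$t$ walk traverses an edge of a small set $F$ grows roughly linearly in $t$ before it saturates.

```python
import random

def walk_hits_F(adj, F, t):
    """Does a random length-t walk from a random vertex traverse an edge of F?"""
    v = random.choice(list(adj))
    for _ in range(t):
        w = random.choice(adj[v])
        if frozenset((v, w)) in F:
            return True
        v = w
    return False

n = 8
K = {v: [w for w in range(n) if w != v] for v in range(n)}  # complete graph K_8
F = {frozenset((0, 1))}                                     # one "bad" edge
random.seed(0)
est = {t: sum(walk_hits_F(K, F, t) for _ in range(20000)) / 20000
       for t in (1, 2, 4, 8)}
# while the probabilities are small, doubling t roughly doubles them
```

Here $|F|/|E| = 1/28$, and the estimates track $t/28$ closely for small $t$.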

Given a labelling of $G^t$, we must show that a large enough fraction of edges is ill-labelled. Define a coloring of $G$ by plurality (poll neighbors for their colors according to a random walk required to stop at each step with probability $1/t$, and choose the color which occurs most frequently). Let $F$ be the set of ill-colored edges of $G$. Choose a random path of length $t$ in $G$, conditioned on containing an edge from $F$. The probability that the corresponding constraint in $G^t$ is false is constant.

**2. Alphabet reduction**

Let $e = (u, v)$ be an edge of $G^t$. The corresponding constraint reads colors in a certain subgraph of $G$. Denote by $x$ (resp. $y$) the colors read in the $t$-neighborhood of $u$ (resp. $v$). The constraint $c_e(x, y)$ holds iff $x$ and $y$ are consistent, valid colorings in $G$.

Our next goal is to convert $c_e$ into a bunch of local coloring predicates over $x$ and $y$, so that a pair satisfying $c_e$ yields a perfect coloring, while a pair violating $c_e$ forces a constant fraction of the predicates to fail.

This looks circular: we are exactly back to our original PCP problem. But it is at a much smaller scale, depending on $t$ only, so we are allowed to use an exponential-size PCP, like the telephone bill PCP of my first lecture. This is our first encounter with a composition of PCPs. Prahladh Harsha will continue explaining composition. Recall the parallel between $q$-CSPs and PCP verifiers. Composition is easier to describe in the language of PCP verifiers, so his talk will deal with PCP verifiers.

**3. Questions**

Question: How does one get sharp inapproximability bounds? Answer: Raghavendra can do it for CSPs, but only conditionally on the UGC.

Linial: this work measures failure by the fraction of unsatisfied constraints. For me, the most natural formulation of the coloring problem is: what number of colors can we achieve in polynomial time? This is a different measurement, and little is known. The best lower bound is 5! Answer: assuming the UGC, the lower bound is much stronger.
