Notes on Sanjeev Arora’s lecture nr 2

Unique Games

A unique game problem depends on three parameters: two constants ${\epsilon>0}$ and ${\delta>0}$, and an integer ${p}$.

Input: a collection of 2-variable linear equations mod ${p}$. We want to decide between the following two cases:

• There is an assignment satisfying a fraction ${>1-\epsilon}$ of equations.
• No assignment satisfies more than a fraction ${\delta}$ of equations.

Note that there is an implicit constraint graph here, and the constraints can be expressed in the form ${x_j =\pi_{ij}(x_i)}$ where ${\pi_{ij}}$ is a bijection.
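As a concrete illustration (a minimal sketch; the toy instance and the helper name are invented for this example), constraints of the special form ${x_j = x_i + c \bmod p}$ — where the bijection ${\pi_{ij}}$ is a cyclic shift — can be represented as triples and scored as follows:

```python
def satisfied_fraction(equations, assignment, p):
    """Fraction of constraints x_j = x_i + c (mod p) satisfied by an assignment.

    Each equation is a triple (i, j, c); the shift l -> (l + c) % p plays the
    role of the bijection pi_ij from the text.
    """
    good = sum(1 for (i, j, c) in equations
               if assignment[j] == (assignment[i] + c) % p)
    return good / len(equations)

# Toy instance over Z_5: each triple (i, j, c) encodes x_j = x_i + c (mod 5).
p = 5
equations = [(0, 1, 2), (1, 2, 1), (0, 2, 3)]

# The assignment x = (0, 2, 3) satisfies all three equations.
x = {0: 0, 1: 2, 2: 3}
print(satisfied_fraction(equations, x, p))  # → 1.0
```

Deciding whether some assignment satisfies a ${>1-\epsilon}$ fraction, versus none satisfying more than a ${\delta}$ fraction, is the hard part; the check above is only the verification step.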

This is different from Ryan O’Donnell’s definition of MAX BIJECTION${(p)}$. But there is a reduction from MAX BIJECTION${(p)}$ to our class of linear problems (due to O’Donnell et al., relies on the KKL theorem).

1. Known approximation algorithms

Charikar, Makarychev and Makarychev give an algorithm that satisfies a ${1-O(\sqrt{\epsilon\log p})}$ fraction of the equations. So if ${\epsilon<\frac{c}{\log p}}$ for a small enough constant ${c}$, at least half the constraints are satisfied. So unique games are easy when ${\epsilon}$ is very small.

If the constraint graph is an expander, there is an algorithm, due to AKKSTV, that satisfies a ${1-10\epsilon}$ fraction of the equations. So unique games are easy when the constraint graph is an expander.

Both algorithms are based on SDP relaxations.

2. Charikar et al. method

2.1. The SDP relaxation

For the first algorithm (see Claire Mathieu’s lecture nr 4 and the comments there), introduce, for each variable ${i}$, ${p}$ vectors ${v_{i1},\ldots,v_{ip}}$.

Intended integral solution: if ${x_i}$ gets value ${\ell}$, set ${v_{i\ell}=1}$ and ${v_{i\ell'}=0}$ for ${\ell'\not=\ell}$ (i.e. all vectors are ${1}$-dimensional). So we impose the following constraints

$\displaystyle \begin{array}{rcl} \forall \ell\not=\ell', ~\forall i,&& v_{i\ell}\cdot v_{i\ell'}=0.\\ \forall i,&&\sum_{\ell}|v_{i\ell}|^2 =1.\\ \end{array}$

and choose the objective function (to be maximized)

$\displaystyle \begin{array}{rcl} \sum_{e=ij}\sum_{\ell=1}^{p}v_{i\ell}\cdot v_{j\pi_{e}(\ell)}. \end{array}$
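As a sanity check (a sketch; the toy instance is invented for this example), one can verify that the intended ${1}$-dimensional solution satisfies the SDP constraints, and that the objective then counts exactly the satisfied equations:

```python
p = 3
edges = [(0, 1, 1), (1, 2, 2)]     # (i, j, c) encodes x_j = x_i + c (mod p)
assignment = {0: 0, 1: 1, 2: 0}    # satisfies both constraints

# Intended 1-dimensional solution: v_{i,l} = 1 if x_i = l, else 0.
v = {i: [1.0 if l == assignment[i] else 0.0 for l in range(p)]
     for i in assignment}

# SDP constraints: within each fan, distinct vectors are orthogonal,
# and the squared lengths sum to 1.
for i in assignment:
    for l in range(p):
        for lp in range(l + 1, p):
            assert v[i][l] * v[i][lp] == 0.0
    assert sum(v[i][l] ** 2 for l in range(p)) == 1.0

# Objective: sum over edges e = ij of sum_l v_{i,l} . v_{j,pi_e(l)}.
obj = sum(v[i][l] * v[j][(l + c) % p]
          for (i, j, c) in edges for l in range(p))
print(obj)  # → 2.0: one unit per satisfied equation
```

The relaxation allows ${v_{i\ell}}$ to be arbitrary high-dimensional vectors subject to the same two constraints, which is where the rounding problem below comes from.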

2.2. Rounding

Think of the orthogonal vectors ${v_{i1},\ldots,v_{ip}}$ associated with a variable (i.e. vertex) as forming a fan. A high score implies that neighboring fans are correlated, but if the graph is large, distant fans need not be correlated any more, so it is not obvious how to extract an assignment of the variables from the geometry.

The rounding procedure amounts to projecting along a random Gaussian direction ${z}$. Let ${d}$ denote the dimension of the SDP solution (${d\leq np}$). For each fixed ${i}$, the random variables ${\frac{1}{\sqrt{d}\sigma}v_{i\ell}\cdot z}$ are independent standard Gaussians. For each ${i}$, we would like to assign to ${x_i}$ the value ${\ell}$ which maximizes ${|v_{i\ell}\cdot z|}$. The analysis gets a bit simpler if, instead, we look at all values ${\ell}$ such that ${v_{i\ell}\cdot z>t\sigma}$ and pick one at random. Here, ${t\simeq\sqrt{\log p}}$.
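A minimal sketch of this rounding (names and dimensions are illustrative, and this implements the simpler max-projection variant, not the thresholded one analyzed in the paper): draw one shared Gaussian direction ${z}$ and, for each variable, keep the label whose vector has the largest projection.

```python
import random

def gaussian_round(v, d, seed=None):
    """Round an SDP solution v[i][l] (each a d-dimensional vector) to labels.

    Projects every fan onto one shared random Gaussian direction z and
    assigns to each variable i the label l maximizing |v_{i,l} . z|.
    """
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(d)]
    labels = {}
    for i, fan in v.items():
        proj = [abs(sum(a * b for a, b in zip(vec, z))) for vec in fan]
        labels[i] = max(range(len(fan)), key=lambda l: proj[l])
    return labels

# On the intended integral solution (v_{i,l} = e_0 if x_i = l, else 0),
# the rounding recovers the assignment for any nonzero draw of z.
p, d = 3, 4
assignment = {0: 0, 1: 1, 2: 0}
e0, zero = [1.0, 0.0, 0.0, 0.0], [0.0] * 4
v = {i: [e0 if l == assignment[i] else zero for l in range(p)]
     for i in assignment}
print(gaussian_round(v, d, seed=1) == assignment)  # → True
```

The interesting case is of course a non-integral SDP solution, where nearby fans are only approximately aligned; that is what the analysis below controls.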

2.3. Analysis

If ${i}$ picks ${\ell}$ then, with conditional probability ${\geq 1-O(\sqrt{\epsilon\log p})}$, ${j}$ picks ${\pi_{ij}(\ell)}$.

3. Case of expanders

One is tempted by a propagation algorithm, but this does not work: even a small fraction of ill-labelled edges spoils everything in a general graph. Things are better for expanders.

Lemma 1 If ${\lambda_2 (G)\geq 0.1}$ and the average correlation on edges is ${\geq 1-\epsilon}$, then random pairs ${ij}$ (not only edges) are correlated, i.e. there exists a bijection ${\sigma}$ of ${[p]}$ such that

$\displaystyle \begin{array}{rcl} \sum_{\ell=1}^{p}v_{i\ell}\cdot v_{j\sigma(\ell)}\geq 0.9. \end{array}$

This suggests the following rounding. Pick ${i}$ at random, then pick a value ${\ell}$ with probability ${|v_{i\ell}|^2}$ (recall that ${\sum_{\ell}|v_{i\ell}|^2 =1}$). For every other ${j}$, pick the value ${\sigma(\ell)}$ obtained by identifying the correlating matching ${\sigma}$ (it is unique if it exists).
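For small ${p}$, the correlating matching can be found by brute force over permutations (a sketch with invented names; in practice one would match each ${\ell}$ to the ${\ell'}$ maximizing ${v_{i\ell}\cdot v_{j\ell'}}$ rather than enumerate all ${p!}$ bijections):

```python
import itertools

def best_matching(fan_i, fan_j):
    """Return the bijection sigma of [p] maximizing
    sum_l v_{i,l} . v_{j,sigma(l)}, together with that correlation value.

    Brute force over all p! permutations; fine for small p.
    """
    p = len(fan_i)

    def corr(sigma):
        return sum(sum(a * b for a, b in zip(fan_i[l], fan_j[sigma[l]]))
                   for l in range(p))

    best = max(itertools.permutations(range(p)), key=corr)
    return best, corr(best)

# Two perfectly correlated fans in R^2, related by the swap 0 <-> 1.
fan_i = [[1.0, 0.0], [0.0, 1.0]]
fan_j = [[0.0, 1.0], [1.0, 0.0]]
sigma, value = best_matching(fan_i, fan_j)
print(sigma, value)  # → (1, 0) 2.0
```

In the lemma’s setting the correlation ${\geq 0.9}$ forces this maximizer to be essentially unique, which is what makes the global rounding consistent.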

Proof of Lemma 1: Use the vectors ${\{v_{i\ell}\}}$ to write a candidate vector ${x\in{\mathbb R}^n}$ incorporating the correlations of edges/pairs of vertices, and substitute it into the Rayleigh quotient bound ${\frac{x^{\top}Lx}{|x|^2}\geq \lambda_2 (G)}$. To get a single vector out of a fan, use the tensor power ${v_{i\ell}^{\otimes m}}$ and sum up. Then take the inner product with a random vector ${z}$ to get ${x_i}$. $\Box$

4. More on expansion and unique games

When is this SDP not tight? DKSV construct a graph (the noisy hypercube) such that, locally, things look very correlated, but there is no global correlation. David Steurer will discuss his joint work with Raghavendra on small-set expansion and how it implies the UGC.
