## Notes of Johan Håstad’s talk, Jan. 20

Approximating linear threshold predicates

Joint work with Cheraghchi, Isaksson and Svensson.

1. Resistance

Let ${P}$ be a predicate on ${\{\pm 1\}}$-valued variables, i.e. a ${k}$-variable boolean function. The probability that a random element of ${\{\pm 1\}^k}$ satisfies ${P}$ is ${r_P =2^{-k}|P^{-1}(1)|}$. This gives an easy ${(c,r_P)}$-approximation to MAX P for any ${c}$ and ${P}$ (randomized, but easy to make deterministic using conditional expectations).
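As a quick illustration (not from the talk), ${r_P}$ can be computed by brute-force enumeration for small ${k}$; the helper `r_P` and the sample predicates `clause` and `maj3` below are my own names:

```python
from itertools import product

def r_P(P, k):
    """Fraction of points of {±1}^k accepted by the predicate P."""
    return sum(1 for x in product((-1, 1), repeat=k) if P(x)) / 2 ** k

# A 3SAT clause (OR of three positive literals) rejects only (-1,-1,-1):
clause = lambda x: not (x[0] == x[1] == x[2] == -1)
# Majority on three inputs:
maj3 = lambda x: sum(x) > 0

print(r_P(clause, 3))  # 0.875
print(r_P(maj3, 3))    # 0.5
```

So a random assignment satisfies a 3SAT clause with probability ${7/8}$ and Majority on three inputs with probability ${1/2}$.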

Question: is MAX P ${(1,r_P +\epsilon)}$-inapproximable? Is MAX P ${(1-\epsilon,r_P +\epsilon)}$-inapproximable?

Say ${P}$ is approximation resistant (AR) in the second case, and approximation resistant on satisfiable instances (ARSI) in the first case.

Example 1 3SAT is approximation resistant on satisfiable instances, 3LIN is approximation resistant.

AR looks sort of monotone: it is more likely for predicates that accept more inputs. This is not rigorously true, though.

Difference between AR and ARSI: SDPs and LPs do not care about perfect satisfiability, but hardness reductions do.

Steurer: what about BETWEEN US?

2. Why study linear predicates?

Austrin Mossel 2009:

Theorem 1 Take a measure ${\mu}$ on ${\{\pm 1\}^k}$ whose coordinates are pairwise independent and have vanishing expectation. Assume that

$\displaystyle \begin{array}{rcl} \mathop{\mathbb P}_{\mu} (P(x)\textrm{ is true})=c. \end{array}$

Then MAX P is ${(c-\epsilon,r_P +\epsilon)}$-inapproximable under UGC.

This, like most of this talk, is contained in Raghavendra’s work, except that it is not so easy to know when Raghavendra’s theorem applies.

Lemma 2 A subset ${S\subset\{\pm 1\}^k}$ does not support a pairwise independent measure ${\mu}$ iff there exists a quadratic polynomial ${Q}$ without constant term such that ${Q>0}$ on ${S}$.

Proof: Let ${T:x\mapsto (x,x\otimes x)\in {\mathbb R}^{k(k+1)/2}}$. If ${0\in}$ the convex hull of ${T(S)}$, then ${0=\sum_{x\in S}\mu(x)T(x)}$ and ${\mu}$ is the required measure. If not, there is a separating hyperplane, which provides the required quadratic polynomial. $\Box$
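As an illustration of the pairwise-independence condition (my example, not from the talk): the uniform measure on the satisfying set of 3LIN, ${x_1 x_2 x_3 =1}$, has mean-zero coordinates whose pairwise correlations vanish; for ${\pm 1}$-valued variables this implies pairwise independence, which is one way to see that 3LIN falls under Theorem 1. A brute-force check:

```python
from itertools import product, combinations

# Satisfying set of the 3LIN predicate x1*x2*x3 = 1 (four points).
S = [x for x in product((-1, 1), repeat=3) if x[0] * x[1] * x[2] == 1]

# Under the uniform measure on S, every coordinate has mean zero...
means = [sum(x[i] for x in S) / len(S) for i in range(3)]
# ...and every pair is uncorrelated; for ±1-valued variables this
# implies pairwise independence.
corrs = [sum(x[i] * x[j] for x in S) / len(S)
         for i, j in combinations(range(3), 2)]

print(means)  # [0.0, 0.0, 0.0]
print(corrs)  # [0.0, 0.0, 0.0]
```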

So the maximal ${P}$ not satisfying Austrin–Mossel’s assumptions is ${P=sign(Q)}$, with ${Q}$ a quadratic polynomial without constant term. When ${P}$ is quadratic, SDPs seem to be efficient, e.g. for MAX CUT. One might believe that such ${P}$ can be handled using SDPs, yielding a characterization of inapproximable predicates. It does not work out that way. So we lower our ambition and study predicates of the form ${P=sign(L)}$ where ${L}$ is linear, e.g. Majority on an odd number of inputs.

3. Our LP approximations

Take Majority on ${k=3}$ inputs. An instance of MAX Maj becomes a linear integer program, with constraints of the form

$\displaystyle \begin{array}{rcl} y_1 \pm y_2 \pm y_3 \geq 1 \end{array}$

and no objective function. Note that ${r_P=1/2}$, by symmetry. Relax integrality to ${y_i \in [-1,1]}$. Let ${\alpha}$ be the LP solution.

Rounding: set ${x_i =1}$ with probability ${\frac{1+\alpha_i}{2}}$, independently. For instance,

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}(Maj(x_1 ,x_2 ,x_3))=\frac{1}{2}(\alpha_1 +\alpha_2 +\alpha_3 -\alpha_1 \alpha_2 \alpha_3). \end{array}$
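Since, as a ${\pm 1}$-valued function, ${Maj(x_1,x_2,x_3)=\frac{1}{2}(x_1+x_2+x_3-x_1x_2x_3)}$, this follows from the independence of the ${x_i}$. A brute-force sanity check (my sketch, not from the talk):

```python
from itertools import product

def expected_maj3(alpha):
    """E[Maj(x1,x2,x3)] when x_i = 1 independently with prob (1+alpha_i)/2."""
    total = 0.0
    for x in product((-1, 1), repeat=3):
        p = 1.0
        for xi, ai in zip(x, alpha):
            p *= (1 + ai) / 2 if xi == 1 else (1 - ai) / 2
        total += p * (1 if sum(x) > 0 else -1)
    return total

a = (0.3, -0.5, 0.8)
closed_form = (a[0] + a[1] + a[2] - a[0] * a[1] * a[2]) / 2
assert abs(expected_maj3(a) - closed_form) < 1e-12
```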

This is hard to compute for more variables; the result is a symmetric polynomial in the ${\alpha_i}$. So change the probability from ${\frac{1+\alpha_i}{2}}$ to ${\frac{1+\epsilon\alpha_i}{2}}$. Expanding, for ${\epsilon=\Theta(1/k)}$, one gets ${\mathop{\mathbb E}(Maj(x_1 ,\ldots ,x_k))\sim k^{-3/2}}$. For ${\epsilon}$ constant, with more work, one can get ${\mathop{\mathbb E}(Maj(x_1 ,\ldots ,x_k))\sim k^{-1/2}}$. This shows that the LP algorithm satisfies a ${\frac{1}{2}+\Omega(k^{-1/2})}$-fraction of the constraints, and proves sharpness of Austrin–Mossel’s theorem for Majority.
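A numerical illustration of the biased rounding (my sketch; it assumes all ${\alpha_i=1}$, so every variable is set to ${1}$ with probability ${\frac{1+\epsilon}{2}}$): the expected value of Majority grows roughly like ${\epsilon\sqrt{k}}$ before saturating.

```python
from math import comb

def expected_maj(k, eps):
    """E[Maj_k] when each x_i = 1 independently with prob (1+eps)/2, k odd."""
    p = (1 + eps) / 2
    prob_plus = sum(comb(k, j) * p ** j * (1 - p) ** (k - j)
                    for j in range(k // 2 + 1, k + 1))
    return 2 * prob_plus - 1  # P(Maj = 1) - P(Maj = -1)

# The advantage grows with k, roughly like eps * sqrt(k):
for k in (9, 25, 81):
    print(k, round(expected_maj(k, 0.05), 3))
```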

For a general linear function ${L}$, the same LP and the same rounding give

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}(sign(L))=(c_1 \alpha_1 +\cdots+c_k \alpha_k)\epsilon+ O(\epsilon^2), \end{array}$

where the ${c_i}$’s are the degree-1 Fourier coefficients of ${P}$.
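The degree-1 Fourier coefficients are ${\hat{P}_i=\mathop{\mathbb E}(P(x)x_i)}$ under the uniform distribution; for Majority on ${3}$ inputs each equals ${\frac{1}{2}}$. A small enumeration (my sketch, not from the talk):

```python
from itertools import product

def fourier_level1(P, k):
    """Degree-1 Fourier coefficients hat{P}_i = E[P(x) * x_i], x uniform."""
    pts = list(product((-1, 1), repeat=k))
    return [sum(P(x) * x[i] for x in pts) / len(pts) for i in range(k)]

# Majority on 3 inputs, as a ±1-valued function:
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(fourier_level1(maj3, 3))  # [0.5, 0.5, 0.5]
```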

Theorem 3 If the ${\hat{P}_i}$ are usable as weights, i.e. if

$\displaystyle \begin{array}{rcl} P(x)=sign(\sum \hat{P}_i x_i), \end{array}$

then we get a ${(1,\frac{1}{2}+\delta)}$-approximation for some ${\delta>0}$.

This happens roughly when the coefficients ${w_i}$ of ${L}$ satisfy ${\sum_{i}(w_{i}^{3}-w_{i})\leq 3\sum_{i}w_{i}^{2}}$, up to some error.

Example 2 The Republic predicate ${P=sign(\frac{k}{3}x_1 +\sum_{i=2}^{k}x_i)}$ has ${\hat{P}_1 =1-c^k}$ and other ${\hat{P}_i =c^k}$. It is unknown whether it is AR or not.

The Monarchy predicate ${P=sign((k-2)x_1 +\sum_{i=2}^{k}x_i)}$ has ${\hat{P}_1 =1-c^k}$ and other ${\hat{P}_i =c^k}$. This has been solved by Austrin, Benabbas and Magen: it is not AR.
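For Monarchy the Fourier structure can be checked directly: ${P}$ agrees with ${x_1}$ except on the two inputs where ${x_2,\ldots,x_k}$ unanimously oppose it, so ${\hat{P}_1=1-2^{2-k}}$ and the other degree-1 coefficients equal ${2^{2-k}}$. A brute-force check at ${k=10}$ (my sketch, not from the talk):

```python
from itertools import product

def monarchy(x):
    """The Monarchy predicate sign((k-2)*x1 + x2 + ... + xk), ±1-valued."""
    k = len(x)
    return 1 if (k - 2) * x[0] + sum(x[1:]) > 0 else -1

k = 10
pts = list(product((-1, 1), repeat=k))
coeffs = [sum(monarchy(x) * x[i] for x in pts) / len(pts) for i in range(k)]

print(coeffs[0])  # 1 - 2**(2-k) = 0.99609375
print(coeffs[1])  # 2**(2-k) = 0.00390625
```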

4. Lower bounds

We can show that

• Majority is ${(1-\frac{1}{k+1},\frac{1}{2}+\epsilon)}$-inapproximable.
• Majority is ${(1-\epsilon,\frac{1}{2}+\Theta(k^{-1/2})+\epsilon)}$-inapproximable.

This is not sufficient to prove that Majority is AR. I wonder whether there is a linear predicate which is AR.
