Approximating linear threshold predicates
Joint with Cheraghchi, Isaksson and Svensson.
Let P be a predicate on n variables taking values in {-1,1}, i.e. an n-variable boolean function. The probability that a uniformly random element of {-1,1}^n satisfies P is p = |P^{-1}(1)|/2^n. This gives an easy p-approximation to MAX P for any P (randomized, but easy to make deterministic using conditional expectations).
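The easy algorithm can be sketched as follows (a minimal sketch; the encoding of constraints as signed literals and all function names are my own illustration, not notation from the talk):

```python
from itertools import product

def maj3(bits):
    # Majority on 3 inputs in {-1, 1}
    return sum(bits) > 0

def sat_prob(pred, literals, fixed):
    """Probability that the constraint `literals` (a list of
    (variable, sign) pairs) is satisfied when every variable not in
    `fixed` is set to +-1 uniformly at random."""
    free = sorted({v for v, _ in literals if v not in fixed})
    count = 0
    for values in product([-1, 1], repeat=len(free)):
        local = dict(zip(free, values))
        bits = [s * (fixed[v] if v in fixed else local[v]) for v, s in literals]
        count += pred(bits)
    return count / 2 ** len(free)

def derandomize(pred, constraints, n):
    """Conditional expectations: fix variables one at a time so that
    the expected number of satisfied constraints never decreases; the
    result satisfies at least p times the number of constraints."""
    fixed = {}
    for v in range(n):
        exps = {}
        for val in (-1, 1):
            fixed[v] = val
            exps[val] = sum(sat_prob(pred, c, fixed) for c in constraints)
        fixed[v] = max(exps, key=exps.get)
    return fixed
```

For Maj3 (p = 1/2), on any instance with m constraints this satisfies at least m/2 of them.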
Question: is MAX P (p+ε)-inapproximable on satisfiable instances? Is MAX P (p+ε)-inapproximable in general?
Say P is approximation resistant (AR) in the second case, approximation resistant on satisfiable instances (ARSI) in the first case.
Example 1 3SAT is approximation resistant on satisfiable instances; 3LIN is approximation resistant (but not ARSI, since satisfiable 3LIN instances can be solved by Gaussian elimination).
AR looks sort of monotone: predicates that accept more inputs seem more likely to be AR. This is not rigorously true, however.
Difference between AR and ARSI: SDP and LP relaxations do not care about perfect satisfiability, but hardness reductions do.
Steurer: what about Betweenness?
2. Why study linear predicates?
Austrin Mossel 2009:
Theorem 1 Suppose there is a measure on P^{-1}(1) whose coordinates are pairwise independent and have vanishing expectation. Then for every ε > 0, MAX P is (p+ε)-inapproximable under UGC.
This, like most of this talk, is contained in Raghavendra’s work, except that it is not so easy to tell when Raghavendra’s theorem applies.
Lemma 2 A subset S of {-1,1}^n does not support a balanced pairwise independent measure iff there exists a quadratic polynomial Q without constant term such that Q > 0 on S.
Proof: For x in S, let v(x) be the vector with coordinates x_i (i ≤ n) and x_i x_j (i < j). If 0 lies in the convex hull of {v(x) : x in S}, then the weights of the convex combination define the required measure. If not, there is a separating hyperplane, and its normal vector provides the coefficients of the required quadratic polynomial.
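Both directions of the lemma can be illustrated on 3-variable predicates (a quick numerical check; the function names are mine):

```python
from itertools import product

def moments(S):
    """First moments and pairwise correlations of the uniform measure on S."""
    n = len(S[0])
    firsts = [sum(x[i] for x in S) / len(S) for i in range(n)]
    seconds = [sum(x[i] * x[j] for x in S) / len(S)
               for i in range(n) for j in range(i + 1, n)]
    return firsts, seconds

# The accepted inputs of 3LIN support a balanced pairwise independent
# measure, namely the uniform one: all moments below vanish.
lin3 = [x for x in product([-1, 1], repeat=3) if x[0] * x[1] * x[2] == 1]
firsts, seconds = moments(lin3)

# The accepted inputs of Majority do not: Q(x) = x1 + x2 + x3 is a
# polynomial with no constant term that is strictly positive there,
# which by the lemma rules out such a measure.
maj = [x for x in product([-1, 1], repeat=3) if sum(x) > 0]
```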
So the maximal subsets not satisfying Austrin-Mossel’s assumptions are of the form {x : Q(x) > 0}, with Q a quadratic polynomial without constant term. When the objective is quadratic, SDPs seem to be efficient, e.g. MAX CUT. One may believe that this case can be handled using SDPs, which would give a characterization of inapproximable predicates. It does not work out that way. So we lower our ambition and study predicates of the form P(x) = sign(ℓ(x)) where ℓ is linear, e.g. Majority on an odd number of inputs.
3. Our LP approximations
Take Majority on 3 inputs. Deciding perfect satisfiability becomes a linear integer program with variables x_i in {-1,1}, constraints of the form x_i + x_j + x_k ≥ 1, and no objective. Note that for ±1 variables the sum x_i + x_j + x_k is odd, so this constraint is exactly the majority condition; by symmetry (negating literals) all constraints can be put in this form. Relax integrality into x_i in [-1,1]. Let y be the LP solution.
Rounding: set x_i = 1 with probability (1+y_i)/2. For 3 inputs one can compute the resulting success probability explicitly. This is hard to compute for more variables; the result is a symmetric polynomial in the y_i. So change the probability from (1+y_i)/2 to (1+εy_i)/2. Expanding for small ε, the expected fraction of satisfied equations is p plus a positive multiple of ε. For constant ε, with more work, one gets an advantage of order 1/√n over the random assignment. This shows that the LP algorithm achieves a fraction of the equations strictly better than random, and proves sharpness of Austrin-Mossel’s theorem for Majority.
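The small-bias rounding can be checked numerically for Majority on 3 inputs (a sketch; the values of ε and of the feasible LP point y are illustrative choices of mine):

```python
from itertools import product

def maj3_sat_prob(biases):
    """Exact probability that Maj3 accepts when each x_i is
    independently +1 with probability (1 + b_i)/2."""
    prob = 0.0
    for x in product([-1, 1], repeat=3):
        weight = 1.0
        for xi, bi in zip(x, biases):
            weight *= (1 + xi * bi) / 2
        if sum(x) > 0:
            prob += weight
    return prob

# An LP point y with y_i + y_j + y_k >= 1 on the constraint, and
# rounding biases eps * y_i for a small eps:
eps, y = 0.1, (1 / 3, 1 / 3, 1 / 3)
p = maj3_sat_prob(tuple(eps * yi for yi in y))
# The Fourier expansion Maj3 = 1/2 + (x1+x2+x3)/4 - x1*x2*x3/4 gives
# p = 1/2 + eps*(y1+y2+y3)/4 - eps^3 * y1*y2*y3/4, which beats 1/2.
```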
For general linear functions ℓ, same LP, same rounding: the expected fraction of satisfied equations is p + ε Σ_i \hat{P}(i) y_i + O(ε²), where the \hat{P}(i)’s are the degree-1 Fourier coefficients of P and the y_i are the LP values.
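Here \hat{P}(i) = E[P(x) x_i], with P viewed as 0/1-valued; a direct computation (function names mine) gives 1/4 for each coordinate of Majority on 3 inputs:

```python
from itertools import product

def fourier_deg1(pred, n):
    """Degree-1 Fourier coefficients hat(P)(i) = E[P(x) * x_i],
    with pred a 0/1-valued (or boolean) function on {-1,1}^n."""
    return [sum(pred(x) * x[i] for x in product([-1, 1], repeat=n)) / 2 ** n
            for i in range(n)]

def maj3(x):
    return sum(x) > 0

# For Majority on 3 bits, each hat(P)(i) equals 1/4.
```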
Theorem 3 If the Fourier coefficients \hat{P}(i) are usable as weights, i.e. if P(x) = sign(Σ_i \hat{P}(i) x_i) for all x, then we get a (p+δ)-approximation for some δ > 0.
This happens, roughly, when the coefficients of ℓ are proportional to the corresponding Fourier coefficients \hat{P}(i), up to some error.
Example 2 The Republic predicate has a large weight w_1 (of order n/2) on the first variable and all other w_i = 1. It is unknown whether it is AR or not.
The Monarchy predicate has w_1 = n-2 and all other w_i = 1. This has been solved by Austrin, Benabbas and Magen: it is not AR.
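For concreteness, a sketch of the Monarchy predicate as I understand its standard definition (the weight choice w_1 = n-2 is my reading, under which the first voter decides unless the others are unanimous):

```python
def monarchy(x):
    """sign((n-2)*x_1 + x_2 + ... + x_n) on {-1,1}^n: the first voter
    prevails unless all remaining voters unanimously disagree
    (assumed definition, with w_1 = n - 2 and the other weights 1)."""
    n = len(x)
    return (n - 2) * x[0] + sum(x[1:]) > 0
```

With n = 5 the monarch's +1 vote wins unless all four other voters vote -1.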
4. Lower bounds
We can show that
- Majority on 3 inputs is inapproximable beyond some explicit constant;
- Majority on n inputs is inapproximable beyond some explicit threshold.
This is not sufficient to prove that Majority is AR. I wonder whether there is a linear predicate which is AR.