**The pseudorandomness-learning nexus**

We focus on a class of Boolean functions called half-spaces: signs of linear forms. We also consider the class of small-depth circuits whose gates are half-spaces. We shall illustrate connections between learning and pseudorandomness on this class. Note that this class is not as easy as it looks.
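As a minimal sketch of the definition (the weights and threshold below are illustrative, not from the talk): a half-space is a function of the form sign(w·x − θ).

```python
# A half-space (linear threshold function): f(x) = sign(w.x - theta).
# Illustrative sketch; the particular weights are arbitrary examples.

def halfspace(w, theta):
    """Return the Boolean function x -> +1 if w.x >= theta, else -1."""
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else -1

# Majority on 3 bits (inputs in {-1,+1}) is the half-space with w = (1,1,1), theta = 0.
maj3 = halfspace([1, 1, 1], 0)
```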

**Question [O’Neill, 1971]**: How many half-spaces are needed to slice every edge of the hypercube?

This innocuous-looking question is still wide open.

**1. Learning**

**Problem**: Predict a Boolean function from a sample drawn according to some distribution.

- PAC learning (the most general model): the distribution is unknown.
- Uniform-distribution learning: assume the distribution is uniform.
- Query learning: the learner can ask for the value of the function at points of its choice.
- Agnostic learning: the unknown function need not be a half-space; look for the best possible predictor within the family of half-spaces.

Half-spaces are PAC-learnable. Indeed, finding a half-space consistent with the sample boils down to solving a system of linear inequalities, i.e. linear programming.
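As a minimal illustration of learning a consistent half-space (using the classic perceptron update rather than a full LP solver; the data below is synthetic):

```python
# Perceptron: finds a half-space consistent with linearly separable data.
# Minimal sketch, not the LP-based PAC algorithm itself.

def perceptron(samples, max_passes=100):
    """samples: list of (x, y) with x a tuple of reals and y in {-1, +1}.
    Returns a weight vector w with sign(w.x) = y on all samples
    (homogeneous half-space; a threshold can be handled by appending
    a constant coordinate 1 to every x)."""
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(max_passes):
        mistakes = 0
        for x, y in samples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                # Mistake: move w towards (or away from) x.
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:
            break
    return w
```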

**2. Pseudorandomness**

Theorem 1 (Peres 2001). Polynomials of degree O(1/ε²) suffice for an ε-approximation of any half-space.

Peres stated this in terms of noise sensitivity. See Ryan O’Donnell’s lecture notes.

This implies bounds on the learning time under the uniform distribution, via the low-degree algorithm.
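The low-degree algorithm estimates all Fourier coefficients of degree at most d from uniform samples and predicts with the resulting polynomial. A sketch of the quantities involved, computing the coefficients exactly by enumeration instead of estimating them from samples:

```python
from itertools import combinations, product

# Fourier coefficient: f_hat(S) = E_x[ f(x) * prod_{i in S} x_i ],
# the expectation over uniform x in {-1,+1}^n.
def fourier_coeff(f, n, S):
    total = 0
    for x in product([-1, 1], repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / 2 ** n

def low_degree_coeffs(f, n, d):
    """All Fourier coefficients of degree <= d: the quantities the
    low-degree algorithm would estimate from uniform samples."""
    return {S: fourier_coeff(f, n, S)
            for k in range(d + 1) for S in combinations(range(n), k)}
```

For Majority on 3 bits, the degree-1 coefficients are each 1/2, so most of the function's Fourier weight already sits at degree 1.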

**2.1. Pseudorandom generators for half-spaces**

Construct a small-support distribution that fools half-spaces. Candidate: a k-wise independent distribution. Indeed, Peres’ theorem can be interpreted in these terms (see Braverman’s talk this morning). But one must upgrade the approximation into a sandwiching one, and this cannot work in general: it follows from BGGP’s independence-sensitivity result for the Majority of majorities.
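The smallest instance of bounded independence (a hedged illustration, not the generator from the talk): pairwise-independent bits can be produced from a short seed by taking the parity of each nonempty subset of seed positions.

```python
from itertools import combinations

def pairwise_independent_bits(seed):
    """Expand k seed bits (each in {0,1}) into 2^k - 1 output bits:
    one parity per nonempty subset of seed positions. Over a uniform
    seed, the outputs are uniform and pairwise independent, since the
    XOR of two distinct subset-parities is itself a nonempty-subset
    parity, hence uniform."""
    k = len(seed)
    subsets = [S for r in range(1, k + 1) for S in combinations(range(k), r)]
    return [sum(seed[i] for i in S) % 2 for S in subsets]
```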

Our approach: replace the half-space by a low-degree polynomial.

We expect the linear form to be distributed approximately like a Gaussian.

Berry–Esseen: a quantitative comparison to a Gaussian. This requires that the coefficients of the linear form be uniformly small (regularity).
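A hedged numerical illustration of the regular case (all coefficients equal, the most regular form): the CDF of a normalized sum of n uniform ±1 bits is close to the standard Gaussian CDF, with a gap that shrinks as n grows.

```python
import math

def cdf_gap(n):
    """Max gap between the CDF of (x_1+...+x_n)/sqrt(n), with x_i uniform
    in {-1,+1}, and the standard Gaussian CDF, computed exactly by
    enumerating the binomial distribution. Berry-Esseen bounds this
    gap by O(1/sqrt(n)) in the regular case."""
    gap, cum = 0.0, 0.0
    for j in range(n + 1):
        t = (2 * j - n) / math.sqrt(n)       # value of the normalized sum
        phi = 0.5 * (1 + math.erf(t / math.sqrt(2)))  # Gaussian CDF at t
        gap = max(gap, abs(cum - phi))       # left limit at the jump
        cum += math.comb(n, j) / 2 ** n
        gap = max(gap, abs(cum - phi))       # value at the jump
    return gap
```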

**2.2. Sandwiching by polynomials**

Since the Gaussian is concentrated, one must achieve an excellent approximation near the origin. It turns out that degree poly(1/ε) suffices for an ε-approximation of a regular half-space.

What about irregular half-spaces?

Example 1.

Only the first two variables matter.

Example 2.

This becomes regular once one conditions on the head variables.

Lemma 2 (Servedio 2006). Randomly setting the head variables gives half-spaces that are regular or close to constant, up to a small error.

Theorem 3 (Diakonikolas, Gopalan, Jaiswal, Servedio, Viola). Half-spaces are fooled by a pseudorandom generator with seed length O(log n · poly(1/ε)).

**2.3. Sandwiching by branching programs**

Can one improve on the seed length? Replace polynomials by branching programs. Certain half-spaces require exponential width to be computed exactly; nevertheless, Meka and Zuckerman show that sandwiching with small-width branching programs is possible, and their generator achieves a shorter seed length.
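A hedged sketch of why some half-spaces have small-width branching programs: an unweighted threshold function can be computed by a read-once program whose state is the capped running count, so its width does not grow with n (whereas general integer weights can force the partial sum to take exponentially many values).

```python
def threshold_bp(n, theta):
    """Layered read-once branching program for THR_theta(x) =
    [x_1 + ... + x_n >= theta], with x_i in {0,1}. The state kept
    between layers is min(count so far, theta), so the width is
    theta + 1 regardless of n."""
    def evaluate(x):
        state = 0
        for xi in x:
            state = min(state + xi, theta)  # transition of one layer
        return state >= theta
    return evaluate
```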

If queries are allowed, then learning small width branching programs is possible.

**2.4. Application: counting Knapsack solutions**

This amounts to counting the number of points of the hypercube lying in a given half-space. Exact counting is #P-hard. Approximately? This is related to pseudorandom generators.

Morris–Sinclair (1999) use an MCMC approach. We give a deterministic algorithm: we approximate the given half-space by a suitable branching program.
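To make the counting problem concrete (a pseudo-polynomial exact baseline, not the approximation algorithm from the talk): when the weights are small integers, the solutions can be counted by dynamic programming.

```python
def count_knapsack(weights, budget):
    """Count subsets of `weights` (nonnegative integers) with total
    weight <= budget. Pseudo-polynomial DP in O(len(weights) * budget)
    time: exact, but not polynomial when the weights are large -- hence
    the interest in approximate counting."""
    counts = [1] + [0] * budget   # counts[s] = number of subsets of total s
    for w in weights:
        # Iterate downwards so each item is used at most once.
        for s in range(budget, w - 1, -1):
            counts[s] += counts[s - w]
    return sum(counts)
```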

**3. Conclusion**

Concerning half-spaces, our present state of knowledge is very similar on both sides, pseudorandomness and learning. This state of affairs might change someday if progress is made in cryptography.

The slides of this talk can be found here.