**$k$-wise independence and the moment problem**

Let $X$, $Y$ be $\{0,1\}^n$-valued random variables, $Y$ fully independent (say uniformly distributed), and $X$ $k$-wise independent. Which Boolean functions $f$ are fooled by $X$, i.e. when is $\mathbb{E}f(X)$ close to $\mathbb{E}f(Y)$?

**1. $k$-wise independence**

**1.1. Definition**

Definition 1 Say $X = (X_1, \dots, X_n)$ is $k$-wise independent with marginal $p$ if for every $S \subseteq \{1, \dots, n\}$ with $|S| \le k$, the variables $(X_i)_{i \in S}$ are independent, and $\mathbb{P}(X_i = 1) = p$. The set of $k$-wise independent distributions is denoted by $\mathcal{D}_k$.

Note that $\mathcal{D}_k$ is a convex set.

Example 1 Let $X_1, \dots, X_{n-1}$ be $\{0,1\}$-valued, independent and uniform, and set $X_n = X_1 + \cdots + X_{n-1}$ mod $2$. Then $X \in \mathcal{D}_{n-1}$.
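As a sanity check of Example 1, here is a small exhaustive script of my own for the case $n = 3$:

```python
from itertools import product

# Example 1 with n = 3: two independent uniform bits plus their parity.
# The support is the set of 4 even-weight vectors of {0,1}^3, each of
# probability 1/4.
n = 3
support = [(y1, y2, (y1 + y2) % 2) for y1, y2 in product((0, 1), repeat=2)]

# (n-1)-wise independence: every pair of coordinates is uniform on
# {0,1}^2, i.e. each of the 4 patterns appears exactly once in the support.
for i in range(n):
    for j in range(i + 1, n):
        counts = {}
        for x in support:
            counts[(x[i], x[j])] = counts.get((x[i], x[j]), 0) + 1
        assert len(counts) == 4 and all(c == 1 for c in counts.values())

# But X is not n-wise independent: the parity x_1 + x_2 + x_3 mod 2 is
# identically 0, so E[parity] = 0 instead of 1/2.
parities = {sum(x) % 2 for x in support}
print(parities)  # {0}
```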

**1.2. Construction**

Let $p = 1/2$. Let $C$ be a $d$-dimensional linear subspace of $\mathbb{F}_2^n$. Let $X$ be $\{0,1\}^n$-valued and uniformly distributed in $C$. Assume that $X$ is $k$-wise independent. Then every projection of $C$ to $k$ coordinates is onto. In particular, every nonzero vector in the dual code $C^{\perp}$ has at least $k+1$ entries equal to $1$. In other words, $C^{\perp}$ is an error correcting code with distance $k+1$.
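A concrete instance of this construction (my own illustration; the generator matrix below is one standard choice for the $[7,4]$ Hamming code, whose dual, the simplex code, has all nonzero words of weight $4$, so the uniform distribution on the code should be $3$-wise independent):

```python
from itertools import combinations, product

# One standard generator matrix of the [7,4] Hamming code.
G = [
    (1, 0, 0, 0, 1, 1, 0),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 0, 1, 1),
    (0, 0, 0, 1, 1, 1, 1),
]

# Enumerate the 16 codewords: all GF(2) combinations of the rows.
codewords = []
for coeffs in product((0, 1), repeat=4):
    word = tuple(sum(c * bit for c, bit in zip(coeffs, col)) % 2
                 for col in zip(*G))
    codewords.append(word)

# 3-wise independence check: the projection onto every 3 coordinates
# should be uniform, i.e. each of the 8 patterns appears 16/8 = 2 times.
ok = True
for idxs in combinations(range(7), 3):
    counts = {}
    for w in codewords:
        pat = tuple(w[i] for i in idxs)
        counts[pat] = counts.get(pat, 0) + 1
    ok = ok and len(counts) == 8 and all(c == 2 for c in counts.values())

print(ok)  # True
```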

**1.3. Independence sensitivity**

Let $f : \{0,1\}^n \to \{0,1\}$. The independence sensitivity of $f$ is the ratio of the extreme values of $\mathbb{E}f(X)$ over $X \in \mathcal{D}_k$.

Example 2 Percolation, i.e. $f$ detects the existence of a crossing.

How does it behave under $k$-wise independence? Ori will tell us tomorrow.

Example 3 Parity is very sensitive to independence, since our example of an $(n-1)$-wise independent distribution makes $\mathbb{E}\,\mathrm{Parity}(X)$ fall from $1/2$ to $0$.

Linear functions are sensitive in general.

Example 4 Majority is independence stable.

Proposition 2 If $X \in \mathcal{D}_k$ and $k$ tends to infinity, then $\mathbb{E}\,\mathrm{Maj}(X)$ tends to $1/2$.

Nevertheless, Ori will describe tomorrow a function, close to Majority, which is noise stable but not independence stable.

*Proof:* The first $k$ moments of $S = X_1 + \cdots + X_n$ coincide with those of a sum of fully independent variables. By the central limit theorem, if $k$ tends to infinity, all its moments (after normalization) tend to the moments of the standard Gaussian. Since the Gaussian is determined by its moments, the normalized sum converges to the Gaussian in law.
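To illustrate Majority's stability on a concrete member of $\mathcal{D}_3$ (my own example: the uniform distribution on the $[7,4]$ Hamming code, which is $3$-wise independent since its dual distance is $4$):

```python
from itertools import product
from math import comb

# Generator matrix of the [7,4] Hamming code; X is uniform on the code.
G = [
    (1, 0, 0, 0, 1, 1, 0),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 0, 1, 1),
    (0, 0, 0, 1, 1, 1, 1),
]
codewords = [
    tuple(sum(c * bit for c, bit in zip(coeffs, col)) % 2 for col in zip(*G))
    for coeffs in product((0, 1), repeat=4)
]

# The first 3 moments of S = X_1 + ... + X_7 agree with Bin(7, 1/2) ...
moments = [sum(sum(w) ** j for w in codewords) / 16 for j in (1, 2, 3)]
bin_moments = [sum(comb(7, s) * s ** j for s in range(8)) / 2 ** 7
               for j in (1, 2, 3)]

# ... and E[Maj(X)] equals the fully independent value 1/2 exactly.
maj = sum(1 for w in codewords if sum(w) >= 4) / len(codewords)
print(moments == bin_moments, maj)  # True 0.5
```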

Here is a quantitative version.

Theorem 3

Example 5 Tribes is independence stable.

**2. The And function**

Example 6 And$(x) = x_1 x_2 \cdots x_n$. If $X$ comes from an error-correcting code $C$, then $\mathbb{E}\,\mathrm{And}(X)$ equals $2^{-\dim C}$ if the all-ones vector belongs to $C$, and $0$ otherwise.

For fully independent $Y$ and for all $n$, $\mathbb{E}\,\mathrm{And}(Y) = 2^{-n}$.

Theorem 4 For , define

Then

If , a prime power, then

Define .

Proposition 5 For ,

*Proof:* One estimates .

Conversely, error correcting codes provide lower bounds. So upper and lower bounds match only in part of the range. Where is the truth?

The proof relies on the theory of the moment problem.

**3. Crash course on the moment problem**

**3.1. Symmetrization**

Given a random variable $X$, define its symmetrization $X^{\mathrm{sym}}$ as follows: sample $X$ and then apply a random permutation to its coordinates. Then $\mathbb{E}\,\mathrm{And}(X^{\mathrm{sym}}) = \mathbb{E}\,\mathrm{And}(X)$ and $X^{\mathrm{sym}}$ is $k$-wise independent if $X$ is. Observe that the distribution of $X^{\mathrm{sym}}$ is entirely determined by the distribution of $S = X_1 + \cdots + X_n$. Its first $k$ moments coincide with those of the binomial distribution $\mathrm{Bin}(n, 1/2)$. Conversely, any such distribution gives a symmetric $k$-wise independent distribution. So $\max_{X \in \mathcal{D}_k} \mathbb{E}\,\mathrm{And}(X)$ can be computed by maximizing the linear function $\mathbb{P}(S = n)$ over all distributions on $\{0, \dots, n\}$ with prescribed first $k$ moments. By LP duality,

$$\max_{X \in \mathcal{D}_k} \mathbb{E}\,\mathrm{And}(X) \;=\; \min_P \, \mathbb{E}\,P\big(\mathrm{Bin}(n, 1/2)\big)$$

over all polynomials $P$ of degree at most $k$ such that $P(n) \ge 1$ and $P(j) \ge 0$ for all $j = 0, \dots, n-1$.

**3.2. The classical moment problem**

Reference books: Krein-Nudelman, Karlin-Studden.

Given an interval $[a,b]$ and real numbers $m_1, \dots, m_k$, consider all probability distributions $\mu$ supported on $[a,b]$ with $\int x^j \, d\mu = m_j$ for all $j \le k$. Study $\max_\mu \mathbb{E}_\mu f$ or $\min_\mu \mathbb{E}_\mu f$ for a given function $f$.

We are interested in the interior of the space of distributions with prescribed moments $m_1, \dots, m_k$. Define the index function to be $1$ on $(a,b)$ and $1/2$ at $a$ and at $b$. The index of a random variable is the sum of the indices of the points of its support.

- There are exactly two distributions with index $(k+1)/2$ (known as principal representations), and none with lower index.
- There is exactly a $1$-parameter family of index-$(k+2)/2$ distributions (known as canonical representations).

They maximize $\mu(\{t\})$: the maximum is achieved by the canonical representation containing $t$ if $t \in (a,b)$, and by a principal representation if $t = a$ or $t = b$.

The Chebyshev-Markov-Stieltjes inequality states that if $\rho$ is the representation having $t$ among its support points, then for every distribution $\mu$ with the same moments,

$$\rho([a,t)) \;\le\; \mu([a,t)) \;\le\; \mu([a,t]) \;\le\; \rho([a,t]).$$
It follows that $\mathbb{E}_\mu f$ is extremal on principal representations for a large class of functions $f$.

Explicit expressions are given in terms of orthogonal polynomials (Krawtchouk polynomials for the binomial distribution).
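For the binomial weights this orthogonality can be checked numerically (a quick sketch using the standard formula for the Krawtchouk polynomials at $p = 1/2$; the choice $n = 7$ is arbitrary):

```python
from math import comb

# Krawtchouk polynomials for p = 1/2 on {0,...,n}:
#   K_j(x) = sum_i (-1)^i C(x, i) C(n - x, j - i).
n = 7

def K(j, x):
    return sum((-1) ** i * comb(x, i) * comb(n - x, j - i)
               for i in range(j + 1))

# They are orthogonal with respect to the Bin(n, 1/2) weights.
def inner(j, l):
    return sum(comb(n, x) * K(j, x) * K(l, x) for x in range(n + 1)) / 2 ** n

orth = all(inner(j, l) == 0 for j in range(4) for l in range(4) if j != l)
print(orth)  # True
```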

Theorem 7

**3.3. Application**

We get upper bounds on $\max_{X \in \mathcal{D}_k} \mathbb{E}\,\mathrm{And}(X)$, not quite sharp since our random variables are integer valued.

We also get lower bounds.

Theorem 8 (Markov, Lukacs) Every polynomial $P$ which is nonnegative on $[a,b]$ is of the form

$$P(x) = Q(x)^2 + (x-a)(b-x)\,R(x)^2$$

if $\deg P$ is even, and $P(x) = (x-a)\,Q(x)^2 + (b-x)\,R(x)^2$ if $\deg P$ is odd.

Using this, the dual minimum becomes a minimization over sums of squares weighted against the binomial distribution, which is expressible in terms of orthogonal polynomials. We can go through the whole calculation (Krawtchouk polynomials) and obtain Theorem 6.