**Influences, noise sensitivity and randomized algorithms**

I will survey a few famous theorems involving influences, together with sketches of proofs. I will mention applications to statistical physics and randomized algorithms.

**1. Influences**

**1.1. Boolean functions**

A Boolean function is a function $f:\{-1,1\}^n\to\{-1,1\}$. There are $2^{2^n}$ of them. That is a lot. The point of the lecture is to give some structural results on this set.

Example 1. Our favourite Boolean functions are:

- Majority: $f(x)=\mathrm{sign}(x_1+\cdots+x_n)$ (for odd $n$).
- Dictator: $f(x)=x_1$.
- Percolation crossing: fix a subdomain of size $n$ of the $\mathbb{Z}^2$-grid, and two pieces $A$ and $B$ of the boundary. Play bond percolation, each edge being open with probability $\frac12$, independently. Define $f=1$ iff there is an open path connecting $A$ to $B$.

In this last example, say an edge is *pivotal* if changing that edge changes $f$. The probability for a given edge to be pivotal is conjectured to be $n^{-5/4+o(1)}$. This is known for site percolation on the triangular lattice.

**1.2. Influences**

Definition 1. Given a probability distribution $\mu$ on the hypercube and $k\in\{1,\dots,n\}$, the *influence of the $k$-th coordinate* on a Boolean function $f$ is the probability that $k$ be pivotal, i.e.

$$I_k(f)=\mu\big(\{x:\,f(x)\neq f(\sigma_k x)\}\big),$$

where $\sigma_k x$ denotes $x$ with its $k$-th coordinate flipped.

Most of the time, the probability distribution on the hypercube will be uniform. When we use instead the $p$-biased Bernoulli distribution $\mu_p$, we denote the influence by $I_k^p(f)$.

Example 2. For the majority function,

$$I_k(\mathrm{MAJ}_n)=\binom{n-1}{(n-1)/2}\,2^{-(n-1)}\sim\sqrt{\frac{2}{\pi n}},$$

since coordinate $k$ is pivotal exactly when the other $n-1$ votes split evenly.

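These quantities are easy to check by brute force on small examples. A minimal sketch (the helper names `influence` and `majority` are ad hoc, not from the lecture):

```python
from itertools import product
from math import comb, isclose

def influence(f, n, k):
    """I_k(f): probability, uniform on {-1,1}^n, that flipping coordinate k changes f."""
    count = 0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[k] = -y[k]
        if f(x) != f(tuple(y)):
            count += 1
    return count / 2 ** n

def majority(x):
    return 1 if sum(x) > 0 else -1

n = 9  # odd, so there are no ties
# coordinate k is pivotal iff the other n-1 votes split evenly
exact = comb(n - 1, (n - 1) // 2) / 2 ** (n - 1)
assert isclose(influence(majority, n, 0), exact)
```

The same `influence` helper gives $I_1=1$, $I_k=0$ for $k\ge 2$ on the dictator function, as expected.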
Definition 2. The total influence is

$$I(f)=\sum_{k=1}^n I_k(f).$$

**1.3. Geometric interpretation**

Given a subset $A$ of the hypercube, define the *edge-boundary* $\partial_E A$ as the set of edges joining $A$ to its complement. If we use the uniform distribution on the hypercube, for the characteristic function $f=1_A$ of $A$,

$$I(f)=\frac{|\partial_E A|}{2^{n-1}}.$$
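
This identity can be verified exhaustively on a small cube; a sketch, with ad hoc names (`flip`, `boundary`) not taken from the text:

```python
from itertools import product
import random

random.seed(0)
n = 6
cube = list(product([0, 1], repeat=n))
A = set(random.sample(cube, 20))      # an arbitrary subset of the 6-cube

def flip(x, k):
    y = list(x)
    y[k] ^= 1
    return tuple(y)

# total influence of the characteristic function of A, uniform measure
I = sum(
    sum(1 for x in cube if (x in A) != (flip(x, k) in A)) / 2 ** n
    for k in range(n)
)
# number of hypercube edges joining A to its complement
boundary = sum(1 for x in A for k in range(n) if flip(x, k) not in A)
assert I == boundary / 2 ** (n - 1)
```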

**1.4. An easy estimate**

Proposition 3. For all $f$, there exists $k\in\{1,\dots,n\}$ such that

$$I_k(f)\ge\frac{\mathrm{Var}(f)}{n}.$$

*Proof:* The Poincaré inequality states that $\mathrm{Var}(f)\le\sum_{k=1}^n I_k(f)$; now take the largest term of the sum.

This argument applies to all real-valued functions.
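
As a sanity check, the inequality $\mathrm{Var}(f)\le\sum_k I_k(f)$ can be tested on random Boolean functions; a sketch (random truth tables, ad hoc names):

```python
from itertools import product
import random

random.seed(1)
n = 4
points = list(product([0, 1], repeat=n))

def flip(x, k):
    y = list(x)
    y[k] ^= 1
    return tuple(y)

for _ in range(100):
    table = {x: random.randint(0, 1) for x in points}  # a random Boolean function
    mean = sum(table.values()) / 2 ** n
    var = sum((v - mean) ** 2 for v in table.values()) / 2 ** n
    total = sum(
        sum(1 for x in points if table[x] != table[flip(x, k)]) / 2 ** n
        for k in range(n)
    )
    assert var <= total + 1e-12   # Poincaré: Var(f) <= I(f)
```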

**1.5. The KKL Theorem**

This is an improvement on the Poincaré inequality, taking into account the fact that the function is Boolean.

Theorem 4 (Kahn, Kalai, Linial 1988). There is a universal constant $c>0$ such that for every $f$, there exists $k$ such that

$$I_k(f)\ge c\,\mathrm{Var}(f)\,\frac{\log n}{n}.$$

Example 3 (Ben Or, Linial). The *tribes* function: divide $\{1,\dots,n\}$ into blocks of size $b$. Define $f=1$ iff at least one block is entirely $1$'s. The clever choice is $b\approx\log_2 n-\log_2\log_2 n$, which keeps $\mathbb{E}[f]$ bounded away from $0$ and $1$.

Then for every $k$,

$$I_k(f)=\Theta\Big(\frac{\log n}{n}\Big),$$

so the bound of Theorem 4 is sharp.

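For a toy tribes function all influences can be computed exhaustively. With $9$ variables in blocks of size $3$ we are far from the asymptotic regime, so the sketch below (ad hoc names) only checks the symmetry of the influences and their exact value $2^{-(b-1)}(1-2^{-b})^{m-1}$: coordinate $k$ is pivotal iff the rest of its block is all $1$'s and no other block is all $1$'s.

```python
from itertools import product
from math import isclose

b, m = 3, 3          # block size, number of blocks
n = b * m

def tribes(x):       # 1 iff at least one block is entirely 1's
    return int(any(all(x[i * b + j] for j in range(b)) for i in range(m)))

def influence(k):
    cnt = 0
    for x in product([0, 1], repeat=n):
        y = list(x)
        y[k] ^= 1
        if tribes(x) != tribes(tuple(y)):
            cnt += 1
    return cnt / 2 ** n

I = [influence(k) for k in range(n)]
assert len(set(I)) == 1   # all coordinates have the same influence
assert isclose(I[0], 2 ** -(b - 1) * (1 - 2 ** -b) ** (m - 1))
```
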
Remark 1. Assume that $f$ is invariant under a transitive group of permutations of $\{1,\dots,n\}$. Then all influences are equal, so Theorem 4 gives

$$I(f)\ge c\,\mathrm{Var}(f)\,\log n.$$

This can be viewed as an isoperimetric inequality.

**1.6. Proof of KKL theorem**

It uses some discrete harmonic analysis; we use the notation from the courses by Kindler or Raghavendra. Every function decomposes (Fourier–Walsh) as

$$f=\sum_{S\subseteq\{1,\dots,n\}}\hat f(S)\,\chi_S,$$

where

$$\chi_S(x)=\prod_{i\in S}x_i,\qquad \hat f(S)=\mathbb{E}[f\,\chi_S].$$

Parseval's identity is

$$\sum_S\hat f(S)^2=\mathbb{E}[f^2]=\|f\|_2^2.$$

Denote by

$$\nabla_k f=\frac{f-f\circ\sigma_k}{2}.$$

In general,

$$\nabla_k f=\sum_{S\ni k}\hat f(S)\,\chi_S,$$

thus

$$\|\nabla_k f\|_2^2=\sum_{S\ni k}\hat f(S)^2.$$

Thus influences read (since $\nabla_k f$ takes values in $\{-1,0,1\}$, its squared $L^2$-norm is the measure of its support)

$$I_k(f)=\mathbb{P}(\nabla_k f\neq 0)=\sum_{S\ni k}\hat f(S)^2.$$
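
The identity $I_k(f)=\sum_{S\ni k}\hat f(S)^2$ and Parseval can be verified by a brute-force Walsh transform. A sketch; the test function (majority of three of four variables) and all names are ad hoc:

```python
from itertools import product
from math import isclose

n = 4
points = list(product([-1, 1], repeat=n))

def f(x):                     # majority of the first 3 coordinates; ignores x[3]
    return 1 if x[0] + x[1] + x[2] > 0 else -1

def chi(S, x):                # Walsh character: product of x_i over i in S
    p = 1
    for i in S:
        p *= x[i]
    return p

subsets = [tuple(i for i in range(n) if (mask >> i) & 1) for mask in range(2 ** n)]
fhat = {S: sum(f(x) * chi(S, x) for x in points) / 2 ** n for S in subsets}

# Parseval: sum of squared coefficients equals E[f^2] = 1 for a +-1-valued f
assert isclose(sum(c ** 2 for c in fhat.values()), 1.0)

for k in range(n):
    pivotal = sum(
        1 for x in points
        if f(x) != f(tuple(-v if i == k else v for i, v in enumerate(x)))
    ) / 2 ** n
    assert isclose(pivotal, sum(c ** 2 for S, c in fhat.items() if k in S))
```
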
Assume (by contradiction) that for all $k$, $I_k(f)\le\varepsilon\frac{\log n}{n}$, with $\varepsilon$ small. Then all $\nabla_k f$ have small support. Heisenberg's uncertainty principle suggests that this implies that the spectrum is high, i.e. the Fourier weight of $f$ is carried by large sets $S$. The Bonami–Beckner inequality makes this quantitative. On the other hand,

$$\sum_k I_k(f)=\sum_S|S|\,\hat f(S)^2.$$

If the spectrum is high, the right-hand side is at least $c\,\mathrm{Var}(f)\log n$. This is much larger than the assumed bound $\sum_k I_k(f)\le\varepsilon\log n$, a contradiction.

**2. Application to statistical physics**

**2.1. Russo's formula**

Proposition 5. Let $f$ be monotone. Use the $p$-biased distribution on the hypercube. Then the derivative of $p\mapsto\mathbb{E}_p[f]$ at $p$ is the total influence $I^p(f)$:

$$\frac{d}{dp}\,\mathbb{E}_p[f]=\sum_{k=1}^n I_k^p(f)=I^p(f).$$
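
Russo's formula can be checked numerically on a small monotone function, here the $0$–$1$ majority of five bits under the $p$-biased measure (a sketch with ad hoc names; the derivative is approximated by a central difference):

```python
from itertools import product
from math import isclose

n, p = 5, 0.3
points = list(product([0, 1], repeat=n))

def f(x):
    return int(sum(x) >= 3)        # monotone: majority in the 0-1 convention

def mu(x, q):                      # q-biased product measure of the point x
    ones = sum(x)
    return q ** ones * (1 - q) ** (n - ones)

def E(q):                          # E_q[f]
    return sum(f(x) * mu(x, q) for x in points)

# total p-influence: sum over k of P_p(coordinate k is pivotal)
total = sum(
    sum(mu(x, p) for x in points
        if f(x) != f(tuple(1 - v if i == k else v for i, v in enumerate(x))))
    for k in range(n)
)
h = 1e-6
derivative = (E(p + h) - E(p - h)) / (2 * h)
assert isclose(derivative, total, rel_tol=1e-4)
# for MAJ_5, both sides equal 30 p^2 (1-p)^2
assert isclose(total, 30 * p ** 2 * (1 - p) ** 2)
```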

The KKL theorem implies the following lower bound (BKKKL): for all $p$, with a $p$-independent constant $c$, there exists $k$ such that

$$I_k^p(f)\ge c\,\mathrm{Var}_p(f)\,\frac{\log n}{n}.$$

For symmetric functions,

$$\frac{d}{dp}\,\mathbb{E}_p[f]\ge c\,\mathrm{Var}_p(f)\,\log n.$$

So the slope is big near the threshold; this is an instance of a *sharp threshold* result.

One can view this as a generalization of Kolmogorov's $0$–$1$ law.

Theorem 6. Let $A\subseteq\{0,1\}^{\mathbb{N}}$ be a monotone set, invariant under a transitive group of permutations of the coordinates. Then $\mu_p(A)$ is either $0$ or $1$.

**2.2. Bond percolation**

Kesten's proof that $p_c=\frac12$ for bond percolation on $\mathbb{Z}^2$ fits this framework. He knew that the crossing function is balanced at $p=\frac12$ (by duality); he proved by hand that influences were large, so that just above $\frac12$, connections across rectangles of fixed aspect ratio have high probability.

**2.3. First passage percolation**

Study balls of random metrics on $\mathbb{Z}^d$. One knows that a limit shape exists. KKL has been used (by Benjamini, Kalai and Schramm) to show that fluctuations around that shape are small.

**2.4. Dynamical percolation**

See Gabor Pete's talk on Friday.

**3. Randomized algorithms**

**3.1. Revealment**

Given a Boolean function $f$, one wants a randomized algorithm $A$ that computes $f$ with high probability, using few queries, i.e. minimizing the *revealment*

$$\delta(A)=\max_{1\le k\le n}\mathbb{P}(A\text{ queries coordinate }k).$$
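
For example, the revealment of a naive algorithm for $\mathrm{OR}_n$ (query the coordinates in a uniformly random order, stop at the first $1$) can be computed exactly by enumeration; a sketch with ad hoc names:

```python
from itertools import permutations, product
from fractions import Fraction

n = 4
orders = list(permutations(range(n)))
inputs = list(product([0, 1], repeat=n))
w = Fraction(1, len(orders) * len(inputs))   # uniform over (order, input) pairs

prob_queried = [Fraction(0)] * n
for order in orders:
    for x in inputs:
        for k in order:
            prob_queried[k] += w             # k gets queried ...
            if x[k] == 1:                    # ... and a 1 determines OR = 1
                break

delta = max(prob_queried)                    # revealment of this algorithm
assert len(set(prob_queried)) == 1           # symmetry between coordinates
assert delta == Fraction(15, 32)             # E|J| = 15/8 spread over 4 coordinates
```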

Theorem 7 (Schramm, Steif). For every Boolean function $f$, for all $k\ge 1$, for all algorithms $A$ computing $f$,

$$\sum_{|S|=k}\hat f(S)^2\le k\,\delta(A)\,\|f\|_2^2.$$

This means that small revealment gives a strong control on the low-frequency Fourier coefficients of $f$.

There is an algorithm with revealment $n^{-1/4+o(1)}$ for the connection function: launch an exploration of the interface from a random boundary point, i.e. travel through the domain querying at each step the colour of the encountered hex. This requires $n^{7/4+o(1)}$ queries.

There is a conjecturally better algorithm.

**3.2. Queried sets**

Let $J$ denote the set of variables queried by $A$. It is easy to show that

$$\mathbb{P}(k\in J)\ge I_k(f).$$

In fact, R. O'Donnell and R. Servedio could prove that, for monotone $f$,

$$I(f)\le\sqrt{\mathbb{E}[|J|]}.$$

This is very useful, since it works in the opposite direction from KKL, and gives lower bounds on revealment, since

$$\delta(A)\ge\frac{\mathbb{E}[|J|]}{n}\ge\frac{I(f)^2}{n}.$$

For instance, for the connection Boolean function, $\mathbb{E}[|J|]$ is bounded from above by the typical length of an $SLE_6$ curve, i.e. $n^{7/4+o(1)}$. Compare to the lower bound provided by Peres, Schramm et al.
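
Both bounds can also be tested exactly on a small monotone example: $\mathrm{MAJ}_3$, computed by querying the votes in a uniformly random order and stopping as soon as two answers agree. A sketch with ad hoc names (for $\mathrm{MAJ}_3$, $I_k=\frac12$: a vote is pivotal iff the other two disagree):

```python
from itertools import permutations, product
from fractions import Fraction

n = 3
orders = list(permutations(range(n)))
inputs = list(product([0, 1], repeat=n))
w = Fraction(1, len(orders) * len(inputs))

# algorithm for MAJ_3: query in uniformly random order, stop once two answers agree
p_queried = [Fraction(0)] * n
queries = Fraction(0)                # accumulates E|J|
for order in orders:
    for x in inputs:
        seen = []
        for k in order:
            p_queried[k] += w
            queries += w
            seen.append(x[k])
            if seen.count(0) == 2 or seen.count(1) == 2:
                break

I_k = Fraction(1, 2)                          # I_k(MAJ_3)
assert all(pk >= I_k for pk in p_queried)     # easy bound: P(k in J) >= I_k(f)
total_influence = n * I_k                     # I(MAJ_3) = 3/2
assert total_influence ** 2 <= queries        # O'Donnell-Servedio: I(f)^2 <= E|J|
```

Here $\mathbb{E}[|J|]=\frac52$, so the O'Donnell–Servedio bound $\frac94\le\frac52$ is nearly tight.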

**3.3. Witness revealment**

Theorem 8 (Schramm, Steif). For every Boolean function $f$, if there exists an algorithm computing $f$ with small revealment, then $f$ is noise sensitive.

Define a *witness* as a set of coordinates whose values suffice to prove that $f(x)=1$ (e.g. an open crossing path, for the connection Boolean function). The *witness revealment* of $f$ is the infimum, over all randomized algorithms which compute $f$ and exhibit a witness $W$, of $\max_k\mathbb{P}(k\in W)$.

**Question**: does small witness revealment imply noise sensitivity?