**Noise sensitivity in critical percolation**

Recall Garban’s talk: an algorithm to decide whether a crossing exists is to start an exploration interface from the left corner. If there is a left-right crossing, the exploration cannot reach the upper side.

Introduce a dynamic: flip each edge with small probability, once every second. We see that within a short stretch of time, the crossing switches between left-right and up-down several times. So there seem to be plenty of pivotal edges, i.e. edges that, when flipped, change the crossing event.
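Not from the talk, but to make this concrete: a minimal Python sketch (all names mine) that uses site percolation on a small grid of cells instead of bond percolation on $\mathbb{Z}^2$, with a BFS crossing check, a brute-force search for pivotal cells, and the resampling dynamic.

```python
import random
from collections import deque

def crosses_lr(grid):
    """True iff a 4-connected path of open cells joins the left and
    right columns of the square grid."""
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    queue = deque()
    for i in range(n):
        if grid[i][0]:
            seen[i][0] = True
            queue.append((i, 0))
    while queue:
        i, j = queue.popleft()
        if j == n - 1:
            return True
        for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= a < n and 0 <= b < n and grid[a][b] and not seen[a][b]:
                seen[a][b] = True
                queue.append((a, b))
    return False

def pivotal_cells(grid):
    """Cells that, when flipped, change the crossing event."""
    n = len(grid)
    base = crosses_lr(grid)
    pivotals = []
    for i in range(n):
        for j in range(n):
            grid[i][j] = not grid[i][j]      # flip one cell...
            if crosses_lr(grid) != base:
                pivotals.append((i, j))
            grid[i][j] = not grid[i][j]      # ...and flip it back
    return pivotals

def run_dynamics(n=8, eps=0.05, steps=30, seed=1):
    """Resample each cell with probability eps at every step and count
    how often the crossing indicator switches."""
    rng = random.Random(seed)
    grid = [[rng.random() < 0.5 for _ in range(n)] for _ in range(n)]
    state, switches = crosses_lr(grid), 0
    for _ in range(steps):
        for i in range(n):
            for j in range(n):
                if rng.random() < eps:
                    grid[i][j] = rng.random() < 0.5
        new_state = crosses_lr(grid)
        switches += (new_state != state)
        state = new_state
    return switches
```

The brute-force `pivotal_cells` is quadratic in the number of cells times a BFS, so it is only meant for tiny grids; it is the literal translation of the definition of a pivotal.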

**1. Pivotal edges**

From SLE technology, we know that the two-arm probability satisfies $\alpha_2(n) = n^{-1/4+o(1)}$ (on the triangular lattice),

so the length of the interface is $n^2\,\alpha_2(n) = n^{7/4+o(1)}$.

**1.1. How many pivotals are there?**

Let $\epsilon$ be the probability to switch a given edge. If $\epsilon\, n^{3/4}$ tends to $0$ (low noise), we don’t hit any pivotals. Indeed, an edge is pivotal iff it has four alternating arms, and $\alpha_4(n) = n^{-5/4+o(1)}$ on the triangular lattice, so $\mathbb{E}[\#\text{pivotals}] = n^2\,\alpha_4(n) = n^{3/4+o(1)}$. This implies asymptotically full correlation of successive configurations.

If $\epsilon\, n^{3/4}$ tends to infinity, we do hit many pivotals, but this alone does not imply asymptotic independence.

**1.2. The Fourier spectrum**

Let $f = f_n : \{-1,1\}^E \to \{0,1\}$ denote the indicator function of the crossing event. Let $T_\epsilon$ denote the noise operator, $T_\epsilon f(\omega) = \mathbb{E}[f(\omega^\epsilon) \mid \omega]$, where $\omega^\epsilon$ is obtained from $\omega$ by resampling each bit independently with probability $\epsilon$.

$T_\epsilon$ is diagonal in the Fourier–Walsh basis $\chi_S(\omega) = \prod_{i \in S} \omega_i$: namely, $T_\epsilon\, \chi_S = (1-\epsilon)^{|S|}\, \chi_S$.

Define the *spectral sample* $\mathcal{S}_f$ as the distribution on the set of subsets of $E$ given by $\mathbb{P}[\mathcal{S}_f = S] = \hat f(S)^2 / \|f\|_2^2$. Notation: $\mathcal{S} = \mathcal{S}_{f_n}$.

If for some sequence $\epsilon_n \to 0$, $\mathbb{P}[0 < |\mathcal{S}_{f_n}| < 1/\epsilon_n]$ tends to $0$, then we have asymptotic independence, since the correlations satisfy $\mathbb{E}[f(\omega)f(\omega^{\epsilon_n})] - \mathbb{E}[f]^2 = \sum_{S \neq \emptyset} \hat f(S)^2 (1-\epsilon_n)^{|S|} \to 0$.
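To make the definitions concrete, here is a brute-force computation (my own illustration, only feasible for tiny $n$) of the Fourier–Walsh coefficients and the spectral sample, together with a numerical check of the correlation formula above; all function names are mine.

```python
import math
from itertools import product

def fourier_walsh(f, n):
    """hat_f(S) = 2^{-n} sum_omega f(omega) chi_S(omega), with
    chi_S(omega) = prod_{i in S} omega_i; S encoded as a 0/1 tuple."""
    points = list(product((-1, 1), repeat=n))
    coeffs = {}
    for S in product((0, 1), repeat=n):
        chi = lambda w: math.prod(w[i] for i in range(n) if S[i])
        coeffs[S] = sum(f(w) * chi(w) for w in points) / 2 ** n
    return coeffs

def spectral_sample_law(coeffs):
    """P[S] = hat_f(S)^2 / ||f||_2^2 (Parseval gives the normalisation)."""
    norm2 = sum(c * c for c in coeffs.values())
    return {S: c * c / norm2 for S, c in coeffs.items()}

def noise_correlation(f, n, eps):
    """Exact E[f(w) f(w_eps)], resampling each bit with probability eps
    (so a bit actually flips with probability eps/2)."""
    total = 0.0
    for w in product((-1, 1), repeat=n):
        for v in product((-1, 1), repeat=n):
            p = math.prod((1 - eps / 2) if wi == vi else eps / 2
                          for wi, vi in zip(w, v))
            total += f(w) * f(v) * p
    return total / 2 ** n

def noise_correlation_fourier(coeffs, eps):
    """The Fourier side: sum_S hat_f(S)^2 (1-eps)^{|S|}."""
    return sum(c * c * (1 - eps) ** sum(S) for S, c in coeffs.items())
```

For majority on 3 bits, the singleton coefficients are $1/2$, the full-set coefficient is $-1/2$, and the two expressions for the noise correlation agree exactly.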

**1.3. Pivotals versus spectral sample**

An easy calculation gives $\mathbb{E}[|\mathcal{S}_f|] = \sum_i \mathrm{Inf}_i(f) = \mathbb{E}[|\mathcal{P}_f|]$ (for Boolean $f$ with $\|f\|_2 = 1$), where $\mathcal{P}_f$ denotes the random set of pivotal bits.

Also, for monotone $f$,

$\mathbb{P}[\{i,j\} \subseteq \mathcal{S}_f] = \mathbb{P}[\{i,j\} \subseteq \mathcal{P}_f]$

(but this does not hold for three or more points).

For percolation, $\mathbb{E}[|\mathcal{S}_n|^2] \leq C\, (\mathbb{E}|\mathcal{S}_n|)^2$, hence by the second moment method there exists $c > 0$ such that $\mathbb{P}\big[|\mathcal{S}_n| > c\, \mathbb{E}|\mathcal{S}_n|\big] > c$.
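The first- and two-point identities can be checked by brute force on a tiny monotone function. This sketch (mine) uses 3-bit majority, for which $\mathbb{E}|\mathcal{S}| = \mathbb{E}|\mathcal{P}| = 3/2$ and both two-point probabilities equal $1/4$.

```python
import math
from itertools import product

def fourier(f, n):
    """Brute-force Fourier-Walsh coefficients; S encoded as 0/1 tuple."""
    pts = list(product((-1, 1), repeat=n))
    return {S: sum(f(w) * math.prod(w[i] for i in range(n) if S[i])
                   for w in pts) / 2 ** n
            for S in product((0, 1), repeat=n)}

def pivotal_set(f, n, w):
    """Bits whose flip changes f at w."""
    piv = []
    for i in range(n):
        flipped = tuple(-x if j == i else x for j, x in enumerate(w))
        if f(flipped) != f(w):
            piv.append(i)
    return piv

maj = lambda w: 1 if sum(w) > 0 else -1
n = 3
coeffs = fourier(maj, n)
norm2 = sum(c * c for c in coeffs.values())   # = 1 for a +-1-valued f

# First moments: E|S_f| = sum_S |S| hat_f(S)^2 / ||f||^2, E|P_f| by counting.
E_S = sum(sum(S) * c * c for S, c in coeffs.items()) / norm2
E_P = sum(len(pivotal_set(maj, n, w))
          for w in product((-1, 1), repeat=n)) / 2 ** n

# Two-point identity for the monotone maj: P[{0,1} in S] = P[{0,1} in P].
p_S = sum(c * c for S, c in coeffs.items() if S[0] and S[1]) / norm2
p_P = sum(1 for w in product((-1, 1), repeat=n)
          if {0, 1} <= set(pivotal_set(maj, n, w))) / 2 ** n
```

For majority, bit $i$ is pivotal exactly when the other two bits disagree, which recovers $\mathbb{E}|\mathcal{P}| = 3 \cdot 1/2$ by hand.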

**1.4. Other events**

**Dictator** ($f(x) = x_1$) has $\mathcal{S}_f = \{1\}$ almost surely.

**Majority** has most of its spectral mass on singletons (so it is very noise stable), while its total influence $\sum_i \mathrm{Inf}_i(f) \asymp \sqrt{n}$ is huge (sharp threshold).

**Parity** ($f = \chi_{\{1,\dots,n\}}$, so $\mathcal{S}_f = \{1,\dots,n\}$ deterministically) is the most sensitive to noise.
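One can see the stability ordering of these three examples numerically; this is my illustration with $n = 3$ and noise level $\epsilon = 0.2$, computing $\mathbb{E}[f(\omega)f(\omega^\epsilon)]$ by exact enumeration.

```python
import math
from itertools import product

def noise_stability(f, n, eps):
    """E[f(w) f(w_eps)] for +-1-valued f, resampling each bit with
    probability eps (exact enumeration, so only for tiny n)."""
    total = 0.0
    for w in product((-1, 1), repeat=n):
        for v in product((-1, 1), repeat=n):
            p = math.prod((1 - eps / 2) if a == b else eps / 2
                          for a, b in zip(w, v))
            total += f(w) * f(v) * p
    return total / 2 ** n

n, eps = 3, 0.2
dictator = lambda w: w[0]
majority = lambda w: 1 if sum(w) > 0 else -1
parity   = lambda w: math.prod(w)

stab = {name: noise_stability(f, n, eps)
        for name, f in [("dictator", dictator),
                        ("majority", majority),
                        ("parity", parity)]}
```

The results match the spectral formula: dictator gives $(1-\epsilon)$, parity $(1-\epsilon)^n$, and majority sits strictly in between.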

**1.5. BKS**

Schramm tried to prove the existence of scaling limits using noise sensitivity.

Theorem 1 (Benjamini, Kalai, Schramm). A sequence of monotone Boolean functions is noise sensitive iff it is asymptotically uncorrelated with all weighted majorities.

Corollary 2. In particular, the left-right percolation crossing event is noise sensitive.

**1.6. Schramm and Steif**

Theorem 3. If $f$ can be computed by a randomized algorithm with revealment $\delta$ (the maximum over bits $i$ of the probability that the algorithm examines bit $i$), then for every $k \geq 1$, $\sum_{|S| = k} \hat f(S)^2 \leq \delta\, k\, \|f\|^2$.

This does not give a sharp bound for percolation: the best revealment achievable is still too large, whereas the conjectured typical size of the spectral sample is $n^{3/4+o(1)}$.

**2. Basic properties of the spectral sample**

Gil Kalai proposed to study the whole distribution of $\mathcal{S}_f$. We do not know how to sample from it efficiently on a classical computer (a quantum algorithm exists: Bernstein-Vazirani).

Now we know conformal invariance (Smirnov, Tsirelson, Schramm-Smirnov). Can we use it to describe $\mathcal{S}_{f_n}$ for the percolation crossing events $f_n$?

**2.1. General facts**

Lemma 4 (Linial, Mansour, Nisan; random restriction lemma).

This gives control over the spectral sample inside a given subset of the bits.

**2.2. Separation**

Lemma 5 (Kesten 1987; Damron, Sapozhnikov 2009; strong separation lemma). If a box $B$ is far from the sides of the square, then, conditioned on the four alternating interfaces reaching $B$, with a uniformly positive conditional probability the interfaces are well separated around $B$.

Corollary 6

**2.3. Self-similarity**

The number of $r$-boxes hit by $\mathcal{S}_n$ is $\approx (n/r)^{3/4+o(1)}$.

$\mathcal{S}_n$ is very different from a uniform random set of the same density. Typically, for $r \gg n^{5/8}$, a uniform random set of $n^{3/4}$ points in the $n \times n$ square intersects all $r$-boxes.

**2.4. Tightness**

We would like to estimate how much $|\mathcal{S}_n|$ differs from its expectation. This works for the zero set of a random walk on the line, using independence. But there is less independence for $\mathcal{S}_n$. For instance, we do not know how independent the events "$\mathcal{S}$ hits $B_1$" and "$\mathcal{S}$ hits $B_2$" are for disjoint boxes $B_1, B_2$. But we do understand "$\mathcal{S}$ hits $B$" versus "$\mathcal{S} \cap W = \emptyset$".

Theorem 7 (Garban, Pete, Schramm). On the triangular lattice, $\mathbb{P}\big[0 < |\mathcal{S}_n| < \lambda\, \mathbb{E}|\mathcal{S}_n|\big] \asymp \lambda^{2/3}$, uniformly in $n$.

So the scaling limit of $\mathcal{S}_n$ (rescaled by $n$) is a conformally invariant Cantor set with Hausdorff dimension $3/4$.

We also prove that the scaling limit of dynamical percolation exists as a Markov process. The above theorem implies that this process is ergodic.

**2.5. Questions**

- What about other Boolean functions? Do typical random restrictions of large Boolean functions look generic? Compare Szemerédi's Regularity Lemma (1975), and Chatterjee-Ledoux (2009) for submatrices of Hermitian matrices.
- Self-similarity of $\mathcal{S}_n$ and $\mathcal{P}_n$ implies that the entropy of these random sets should be at most of order $n^{3/4}$, so no $\log n$ factors as in the uniform case. This looks like fractal percolation on a $4$-ary tree.

**2.6. Connection with the Influence-Entropy conjecture**

**Influence-Entropy conjecture (Friedgut-Kalai 1996)**. There is a universal constant $C$ such that for every Boolean function $f$, $\sum_S \hat f(S)^2 \log \frac{1}{\hat f(S)^2} \leq C \sum_i \mathrm{Inf}_i(f)$; that is, the entropy of the spectral sample is at most $C$ times its expected size.

It is considered very hard (much stronger than KKL). Does it hold for the percolation crossing events $f_n$?
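For tiny examples the two sides of the conjectured inequality are easy to compute by brute force; this sketch (mine) evaluates the spectral entropy and the total influence for 3-bit parity and majority. Parity has entropy $0$ (point-mass spectrum) and influence $3$; majority has entropy $\log 4$ and influence $3/2$.

```python
import math
from itertools import product

def spectral_entropy_and_influence(f, n):
    """For +-1-valued f: entropy sum_S hat^2 log(1/hat^2), and total
    influence sum_i P[bit i is pivotal] (brute force over {-1,1}^n)."""
    pts = list(product((-1, 1), repeat=n))
    ent = 0.0
    for S in product((0, 1), repeat=n):
        c = sum(f(w) * math.prod(w[i] for i in range(n) if S[i])
                for w in pts) / 2 ** n
        if c != 0:
            ent += c * c * math.log(1 / (c * c))
    inf = 0.0
    for w in pts:
        for i in range(n):
            flipped = tuple(-x if j == i else x for j, x in enumerate(w))
            inf += (f(flipped) != f(w))
    return ent, inf / 2 ** n

parity = lambda w: math.prod(w)
maj = lambda w: 1 if sum(w) > 0 else -1
ent_p, inf_p = spectral_entropy_and_influence(parity, 3)
ent_m, inf_m = spectral_entropy_and_influence(maj, 3)
```

In both cases the entropy is bounded by the influence even with $C = 1$; of course this says nothing about the conjecture, which is about a uniform constant over all $n$ and all $f$.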