Before I go on, let me come back to the discussion of optimality of the KKL Theorem. It is sharp inasmuch as $n$ is large (the tribes example). One can be a bit more precise, even if one is merely interested in the total influence.

Theorem 1. Let $f$ be Boolean. If $I_k(f)\le\delta$ for every $k$, then
$$\sum_k I_k(f)\ \ge\ c\,\mathrm{Var}(f)\,\log\frac{1}{\delta}.$$

But there is life beyond a uniform bound on the influences. This is captured in Talagrand’s improvement.

Theorem 2 (Talagrand). Let $f$ be Boolean with $\mathrm{Var}(f)>0$. Then
$$\sum_k \frac{I_k(f)}{\log(1/I_k(f))}\ \ge\ c\,\mathrm{Var}(f).$$

This is sharp (up to a multiplicative constant) for describing the whole vector of influences. Examples are variants of tribes with unequal tribe sizes.
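The quantities in Talagrand's inequality can be computed by brute force for a small instance of tribes. The sketch below uses width $w=2$ and $t=3$ tribes (illustrative choices small enough to enumerate the whole cube) and checks the influences against the closed formula $I_k = 2^{-(w-1)}(1-2^{-w})^{t-1}$.

```python
from itertools import product
from math import log

def tribes(bits, w):
    """OR of ANDs over consecutive blocks of width w."""
    return int(any(all(bits[i:i + w]) for i in range(0, len(bits), w)))

w, t = 2, 3
n = w * t
points = list(product([0, 1], repeat=n))
vals = {x: tribes(x, w) for x in points}

mean = sum(vals.values()) / 2**n
var = mean * (1 - mean)  # variance of a Boolean 0/1 function

def influence(k):
    # probability that flipping coordinate k changes the output
    return sum(vals[x] != vals[x[:k] + (1 - x[k],) + x[k + 1:]]
               for x in points) / 2**n

I = [influence(k) for k in range(n)]
talagrand_sum = sum(Ik / log(1 / Ik) for Ik in I)
print(I[0], var, talagrand_sum)
```

Here every influence equals $2^{-1}(3/4)^2 = 9/32$, and the Talagrand sum comfortably dominates the variance; unequal tribe sizes would make the influence vector non-constant.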

**1. Application to first passage percolation**

What I discuss today in the special case of the square grid holds in much more generality, see Raphaël Rossignol’s work.

Theorem 3. Let $D_v$ denote the distance from $0$ to $v$ in the random metric on the square grid, with edge-lengths chosen independently at random in $\{1,2\}$. Then $\mathrm{Var}(D_v)\le C\,\dfrac{|v|}{\log|v|}$.

Theorem 4 (Kesten, Talagrand). In the same random metric, $\mathrm{Var}(D_v)\le C\,|v|$, and the fluctuations of $D_v$ around its mean have Gaussian-type tails at scale $\sqrt{|v|}$, in a suitable range of deviations.

One might believe that $D_v$ is asymptotically Gaussian. This is not the case.

**Conjecture**. $\mathrm{Var}(D_v)\approx|v|^{2/3}$.

**1.1. Our approach to first passage percolation**

$D_v$ is $\mathbb{Z}$-valued, not $\{0,1\}$-valued, but the underlying randomness lives on the discrete cube $\{1,2\}^E$, so most of our previous calculations apply to $D_v$.

Given a configuration $\omega\in\{1,2\}^E$, let $N(\omega)$ be the number of neighbours $\omega'$ of $\omega$ (configurations differing from $\omega$ in exactly one edge) such that $D_v(\omega')\ne D_v(\omega)$. Then, since changing one edge-length changes $D_v$ by at most $1$,
$$\mathrm{Var}(D_v)\ \le\ \mathbb{E}(N).$$

Things would be simple if one could prove that $N\le C|v|$ with high probability. We cannot prove this directly. Here is a simple trick to circumvent this. Let us consider separately the $\omega'$‘s which increase (resp. decrease) $D_v$, i.e.
$$N^{\pm}(\omega)=\#\{\omega' \text{ neighbour of } \omega \,:\, D_v(\omega')\gtrless D_v(\omega)\}.$$

They have equal expectations (for every pair of neighbours, one transition is counted in $N^+$ and the other in $N^-$), so
$$\mathbb{E}(N)=\mathbb{E}(N^+)+\mathbb{E}(N^-)=2\,\mathbb{E}(N^+).$$

**Claim**. $N^+\le 2|v|$.

Indeed, the only way to increase $D_v$ is to increase the length of one of the edges of the minimal path from $0$ to $v$. This path has at most $2|v|$ edges, since each edge has length at least $1$ while a monotone lattice path shows $D_v\le 2|v|$. Even better if there are several minimal paths.

Now we know that $\mathrm{Var}(D_v)\le\mathbb{E}(N)=2\,\mathbb{E}(N^+)\le 4|v|$. This is the proof of Kesten’s theorem.
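The Claim can be checked by brute force on a small instance. The sketch below (grid size, number of trials, and the plain-Dijkstra implementation are illustrative choices) draws random $\{1,2\}$ edge-lengths on a $3\times3$ grid, flips each edge in turn, and counts how many flips increase or decrease the distance to the opposite corner.

```python
import heapq
import random

def dijkstra(weights, nodes, src):
    """Single-source shortest paths; weights is a dict keyed by edge tuples."""
    dist = {u: float("inf") for u in nodes}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for (a, b), w in weights.items():
            if u in (a, b):
                other = b if u == a else a
                if d + w < dist[other]:
                    dist[other] = d + w
                    heapq.heappush(pq, (d + w, other))
    return dist

m = 3  # vertices per side
nodes = [(x, y) for x in range(m) for y in range(m)]
edges = [((x, y), (x + 1, y)) for x in range(m - 1) for y in range(m)] \
      + [((x, y), (x, y + 1)) for x in range(m) for y in range(m - 1)]
src, v = (0, 0), (m - 1, m - 1)
v_norm = abs(v[0]) + abs(v[1])  # graph distance |v|

random.seed(0)
max_Nplus = 0
for _ in range(20):
    w = {e: random.choice([1, 2]) for e in edges}
    D = dijkstra(w, nodes, src)[v]
    Nplus = 0
    for e in edges:
        w2 = dict(w)
        w2[e] = 3 - w[e]  # flip this edge's length: 1 <-> 2
        if dijkstra(w2, nodes, src)[v] > D:
            Nplus += 1
    assert Nplus <= 2 * v_norm  # the Claim: at most 2|v| flips can increase D_v
    max_Nplus = max(max_Nplus, Nplus)
print(max_Nplus, 2 * v_norm)
```

Only a weight-$1$ edge lying on every minimal path can increase $D_v$ when flipped, which is why the bound holds in every sample, not just in expectation.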

Next we are concerned with the tail $p=\mathbb{P}(D_v>a)$. Assume $p$ is small. Put
$$g=\max(D_v-a,\,0).$$

Then $g$ is non constant only on a $p$-fraction of the cube (essentially the event $\{D_v>a\}$), so
$$\mathrm{Var}(g)\ \le\ C|v|\,p.$$

This implies that $\mathbb{P}(D_v>a)$ decays exponentially in $a$, on the scale $\sqrt{|v|}$ (Kesten). Indeed, the variance of $D_v$ conditioned on $\{D_v>a\}$ is bounded above by $C|v|$ as well, so the probability is divided by a constant factor when $a$ is incremented by a multiple of $\sqrt{|v|}$.
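One step of this geometric decay can be spelled out. A sketch, taking for granted the bound $\operatorname{Var}(g)\le C|v|\,p$ for $g=\max(D_v-a,0)$ and $p=\mathbb{P}(D_v>a)$:

```latex
\begin{align*}
(\mathbb{E}g)^2 &\le p\,\mathbb{E}(g^2)
  && \text{(Cauchy--Schwarz: $g$ vanishes off $\{D_v>a\}$)}\\
\mathbb{E}(g^2) &= \operatorname{Var}(g)+(\mathbb{E}g)^2
  \le C|v|\,p + p\,\mathbb{E}(g^2)
  && \Longrightarrow\ \mathbb{E}(g^2)\le 2C|v|\,p \quad (p\le\tfrac12),\\
\mathbb{P}(D_v>a+t) &= \mathbb{P}(g>t)\ \le\ \frac{\mathbb{E}(g^2)}{t^2}
  \ \le\ \frac{2C|v|\,p}{t^2}.
\end{align*}
```

Taking $t=2\sqrt{2C|v|}$ divides the probability by $4$; iterating in steps of size $O(\sqrt{|v|})$ gives the exponential tail.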

**1.2. Improvements**

To improve exponential into Gaussian decay (and to get the $|v|/\log|v|$ bound on variance), we use hypercontractivity. We need to show that most of the Fourier weight of $D_v$ (or equivalently of $g$) sits above level $\log|v|$. By our version of Harper’s theorem (Theorem 2 of previous lecture), it suffices to show that all influences are small, i.e. that no single edge is too likely to be pivotal. It is enough to show it except for edges in small balls around $0$ and $v$.

I believe the following Lemma is true.

Lemma 5. For every edge outside small balls around $0$ and $v$, the probability that it belongs to a minimal path between $0$ and $v$ is small (polynomially small in $|v|$).

But I am unable to prove it. Here is how we circumvent it. Consider paths connecting short vertical segments placed at the endpoints, instead of the endpoints themselves. This defines a new function which is very close to $D_v$, and the Lemma holds for it.

To get Talagrand’s Gaussian tail estimate, consider again $g=\max(D_v-a,0)$. By our version of Harper’s Theorem, since $g$ has support of measure $p$, the Fourier transform is concentrated on sets of size at least $c\log(1/p)$. This gives $\mathrm{Var}(g)\le C|v|\,p/\log(1/p)$. The extra $\log(1/p)$ also gives Gaussian decay.

Last passage percolation refers to the longest path (paths are restricted to go north and east only). There are analogous results there as well, often sharper.

**2. Back to the density increasing argument**

For many problems, the inductive projection argument gives the best known results, although these are rather weak when compared with examples.

Let $A\subset\{0,1\}^n$ have measure $\mu(A)=a$. The KKL Theorem implies that there is a coordinate so that if one projects orthogonally to this coordinate, the measure of the projection is at least
$$a + c\,a(1-a)\,\frac{\log n}{n}.$$

Indeed, the effect of projecting orthogonally to a coordinate is to increase the measure by exactly half the influence of that coordinate. Iterating, this implies that there is a set of coordinates such that if we project orthogonally to these coordinates, the measure grows substantially.
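The identity behind this step, $\mu(\pi_k A)=\mu(A)+\tfrac12 I_k(A)$ (a fibre of the projection is covered as soon as it meets $A$, and a fibre meeting $A$ in exactly one point contributes both of its points to the influence), can be verified exhaustively on a random subset of a small cube. A minimal sketch, with $n=4$ and a seeded random set as illustrative choices:

```python
import random
from itertools import product

random.seed(1)
n = 4
cube = list(product([0, 1], repeat=n))
A = {x for x in cube if random.random() < 0.5}
mu_A = len(A) / 2**n

def influence(k):
    # probability that flipping coordinate k changes membership in A
    return sum((x in A) != ((x[:k] + (1 - x[k],) + x[k + 1:]) in A)
               for x in cube) / 2**n

def proj_measure(k):
    # forget coordinate k; a fibre contributes iff it meets A
    proj = {x[:k] + x[k + 1:] for x in A}
    return len(proj) / 2**(n - 1)

for k in range(n):
    assert abs(proj_measure(k) - (mu_A + influence(k) / 2)) < 1e-12
print(mu_A, [influence(k) for k in range(n)])
```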

The theorem even shows that there is a relatively small set of coordinates such that if we project orthogonally to them, the measure of the projection becomes close to $1$.

This is probably far from the true behaviour: the measure of the projection is probably exponentially close to $1$.

**Conjecture (B. Chor)**. Suppose $A$ has half measure. There exist coordinates such that the measure of the projection of $A$ to these coordinates is exponentially close to $1$.

A similar difficulty arises in the Blum, Luby, Rubinfeld linearity testing theorem.

The weak point is the fact that one has to iterate. Here is a third situation where the same issue arises.

Let $A\subset\mathbb{F}_3^n$. Assume that $A$ does not contain triples of distinct vectors whose sum is zero.

**Question**. How large can $|A|$ be?

Example: $A=\{0,1\}^n\subset\mathbb{F}_3^n$ has size $2^n$ (three distinct $0$-$1$ vectors cannot sum to zero mod $3$). One can do better: the record is about $2.21^n$.

Theorem 6 (Meshulam). $|A|\le C\,\dfrac{3^n}{n}$.

Recently, this was improved.

Theorem 7 (Bateman, Katz). $|A|\le C\,\dfrac{3^n}{n^{1+\epsilon}}$ for some $\epsilon>0$.

Note that Fourier analysis over $\mathbb{F}_3^n$ works quite similarly to the Boolean case. Meshulam’s argument goes as follows.

1. Give a Fourier theoretic description of the number of triples $(x,y,z)\in A^3$ with $x+y+z=0$ in terms of $\widehat{\mathbf{1}_A}$:
$$\#\{(x,y,z)\in A^3 : x+y+z=0\}\ =\ 3^{2n}\sum_{\xi\in\mathbb{F}_3^n}\widehat{\mathbf{1}_A}(\xi)^3,$$
where $\widehat{\mathbf{1}_A}(\xi)=3^{-n}\sum_x\mathbf{1}_A(x)\,\omega^{\xi\cdot x}$ and $\omega=e^{2\pi i/3}$.

To spoil this estimate requires a large Fourier coefficient. There must then be an affine subspace of codimension $1$ where $A$ has larger density.

2. Iterate.
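The Fourier count in step 1 can be sanity-checked by brute force in $\mathbb{F}_3^2$. The identity counts ordered triples, including the $|A|$ degenerate ones $(x,x,x)$ (which always sum to zero since $3x=0$). The set $A$ below is an arbitrary illustrative choice, not a record cap set.

```python
import cmath
import itertools

n, q = 2, 3
omega = cmath.exp(2j * cmath.pi / q)
points = list(itertools.product(range(q), repeat=n))

A = {(0, 0), (0, 1), (1, 2), (2, 2)}  # a small example set in F_3^2
f = {x: 1.0 if x in A else 0.0 for x in points}

def fhat(xi):
    # normalized Fourier coefficient: q^{-n} sum_x f(x) omega^{xi.x}
    return sum(f[x] * omega ** sum(a * b for a, b in zip(x, xi))
               for x in points) / q**n

# direct count of ordered triples (x, y, z) in A^3 with x + y + z = 0
direct = sum(1 for t in itertools.product(A, repeat=3)
             if all((t[0][i] + t[1][i] + t[2][i]) % q == 0 for i in range(n)))

fourier = q ** (2 * n) * sum(fhat(xi) ** 3 for xi in points)
assert abs(fourier.imag) < 1e-9 and abs(fourier.real - direct) < 1e-9
print(direct)
```

For this particular $A$ only the four degenerate triples occur, so both counts equal $4$.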

**3. Threshold behaviour of random graphs**

**3.1. Monotone functions**

Definition 8. A Boolean function $f$ is monotone if
$$x\le y\ \Rightarrow\ f(x)\le f(y).$$

The Fourier expression of influences for monotone functions is simpler: $I_k(f)=\hat f(\{k\})$. This follows from the general formula (in the $\pm1$ convention), for monotone $f$,
$$\hat f(\{k\})=\mathbb{E}\bigl(f(x)\,x_k\bigr),$$
since flipping $x_k$ upwards can only increase $f$, so this expectation equals the probability that coordinate $k$ is pivotal.

It follows that $\sum_k I_k(f)^2=\sum_k\hat f(\{k\})^2\le 1$.
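The identity $I_k(f)=\hat f(\{k\})$ can be checked directly for Majority on $3$ bits (an illustrative monotone function), working in the $\pm1$ convention in which the formula holds:

```python
from itertools import product

# Majority on 3 bits, in the +-1 convention
points = list(product([-1, 1], repeat=3))

def maj(x):
    return 1 if sum(x) > 0 else -1

def influence(k):
    # probability that flipping coordinate k changes the output
    return sum(maj(x) != maj(x[:k] + (-x[k],) + x[k + 1:])
               for x in points) / len(points)

def fhat_singleton(k):
    # degree-1 Fourier coefficient  E[f(x) x_k]
    return sum(maj(x) * x[k] for x in points) / len(points)

for k in range(3):
    assert abs(influence(k) - fhat_singleton(k)) < 1e-12
print([influence(k) for k in range(3)])  # each equals 1/2 for Maj3
```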

We shall use non uniform product distributions on $\{0,1\}^n$. For instance,
$$\mu_p(\omega)=p^{\sum_k\omega_k}\,(1-p)^{\,n-\sum_k\omega_k},$$

and in particular $\mu_{1/2}$ is the uniform measure.

Denote by $\mu_p(f)=\mu_p(\{f=1\})$ the probability that $f=1$ under $\mu_p$.

**Claim**. If $f$ is monotone, $p\mapsto\mu_p(f)$ is non decreasing.

Indeed, to simulate $\mu_{p'}$, $p'>p$, simulate first $\mu_p$ and then flip bits from $0$ to $1$ randomly again (each $0$ becomes $1$ independently with probability $(p'-p)/(1-p)$). This can only increase the value of $f$.

Lemma 9 (Russo). If $f$ is monotone,
$$\frac{d}{dp}\,\mu_p(f)=\sum_k I_k^p(f),$$
where $I_k^p$ denotes influence computed with respect to $\mu_p$.

**Proof**. Give each coordinate its own parameter $p_k$. Then $\mu_{p_1,\dots,p_n}(f)$ is linear in each variable, and
$$\frac{\partial}{\partial p_k}\,\mu_{p_1,\dots,p_n}(f)=I_k(f),$$
the influence computed with respect to $\mu_{p_1,\dots,p_n}$; summing over $k$ along the diagonal $p_1=\dots=p_n=p$ gives the Lemma.
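Russo's formula can be verified numerically for Majority on $3$ bits, where both sides equal $6p(1-p)$ (the choice of function and of the sample point $p=0.3$ are illustrative):

```python
from itertools import product

# Majority on 3 bits, 0/1 convention
points = list(product([0, 1], repeat=3))

def maj(x):
    return 1 if sum(x) >= 2 else 0

def weight(x, p):
    # mu_p-probability of the point x
    return p**sum(x) * (1 - p)**(3 - sum(x))

def mu_p(p):
    # mu_p-probability that maj = 1
    return sum(weight(x, p) for x in points if maj(x))

def sum_influences(p):
    # sum over k of the mu_p-probability that coordinate k is pivotal
    total = 0.0
    for k in range(3):
        for x in points:
            if maj(x) != maj(x[:k] + (1 - x[k],) + x[k + 1:]):
                total += weight(x, p)
    return total

p, h = 0.3, 1e-6
deriv = (mu_p(p + h) - mu_p(p - h)) / (2 * h)
print(deriv, sum_influences(p))  # both equal 6p(1-p) = 1.26 at p = 0.3
```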

**3.2. Sharp thresholds**

Let us plot $p\mapsto\mu_p(f)$. For Dictators, the graph is a straight line (an instance of a coarse threshold). Many monotone functions exhibit sharp thresholds: $\mu_p(f)$ jumps from nearly $0$ to nearly $1$ in a small window of values of $p$. This happens notably for properties of random graphs in the Erdős–Rényi model. The sharpest threshold is achieved by Majority.
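The contrast between coarse and sharp thresholds is already visible at small size. The sketch below (Majority on $9$ bits is an illustrative choice) computes $\mu_p(\mathrm{Maj}_9)$ from the binomial formula and estimates its slope at $p=1/2$, which by Russo's formula equals $9\binom{8}{4}/2^8\approx2.46$, against slope $1$ for a Dictator:

```python
from math import comb

def maj_mu(n, p):
    """mu_p-probability that Majority on n (odd) bits equals 1."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

h = 1e-5
slope_maj9 = (maj_mu(9, 0.5 + h) - maj_mu(9, 0.5 - h)) / (2 * h)
print(slope_maj9)  # Dictator's curve mu_p = p has slope 1; Maj9 is steeper
```

As $n$ grows, the slope of Majority at $p=1/2$ grows like $\sqrt{n}$, so the transition window shrinks.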