**Lower bounding distortion via metric invariants**

Let $X$ and $Y$ be metric spaces. Recall that $c_Y(X)$ is the infimum of bi-Lipschitz distortions of maps $X \to Y$. Typically, $Y$ is a Banach space and $X$ is combinatorial (a finite metric space). When $Y = L_2$, one denotes $c_{L_2}(X)$ by $c_2(X)$.

Theorem 1 (Enflo 1970). $\sup\{c_2(X) : X$ a metric space with at most $n$ points$\}$ tends to infinity as $n$ tends to infinity.

Enflo gave a lower bound of order $\sqrt{\log n}$. This is not sharp. The sharp bound is of order $\log n$. The upper bound is due to Bourgain (1986) and the matching lower bound to Linial–London–Rabinovich (1994) – they used expanders.

**1. Enflo's lower bound for the Hamming cube**

Enflo gave a sharp distortion lower bound, but for a suboptimal example, the Hamming cube $\{0,1\}^n$ with the $\ell^1$ metric (= graph metric on the 1-skeleton of the cube).

Proposition 2 (Enflo). For any $f : \{0,1\}^n \to L_2$,
$$\sum_{\text{diagonals } (x,\bar x)} \|f(x) - f(\bar x)\|^2 \le \sum_{\text{edges } (x,y)} \|f(x) - f(y)\|^2,$$
where $\bar x$ denotes the antipode of $x$.

Let us begin with a lemma.

Lemma 3 (Short diagonals). In $L_2$, for any quadrilateral, the sum of the squares of the diagonals is at most the sum of the squares of the sides: for all $x_1, x_2, x_3, x_4 \in L_2$,
$$\|x_1 - x_3\|^2 + \|x_2 - x_4\|^2 \le \|x_1 - x_2\|^2 + \|x_2 - x_3\|^2 + \|x_3 - x_4\|^2 + \|x_4 - x_1\|^2.$$

**Proof**. By summing over coordinates, it suffices to prove this for points $a, b, c, d$ on a line. There, the difference of the two sides is
$$(a-b)^2 + (b-c)^2 + (c-d)^2 + (d-a)^2 - (a-c)^2 - (b-d)^2 = (a - b + c - d)^2 \ge 0.$$
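The one-line identity above is easy to test numerically. The following Python sketch (illustrative, not part of the notes; the helper name `sq` is mine) checks, for random quadrilaterals, both the inequality and the fact that the slack is exactly the sum over coordinates of $(a - b + c - d)^2$:

```python
import random

def sq(u, v):
    # squared Euclidean distance between points u, v given as coordinate lists
    return sum((a - b) ** 2 for a, b in zip(u, v))

random.seed(0)
for _ in range(1000):
    # random quadrilateral x1, x2, x3, x4 in R^5
    x1, x2, x3, x4 = [[random.gauss(0, 1) for _ in range(5)] for _ in range(4)]
    diagonals = sq(x1, x3) + sq(x2, x4)
    sides = sq(x1, x2) + sq(x2, x3) + sq(x3, x4) + sq(x4, x1)
    # slack is exactly the coordinatewise sum of (a - b + c - d)^2 >= 0
    slack = sum((a - b + c - d) ** 2 for a, b, c, d in zip(x1, x2, x3, x4))
    assert abs(sides - diagonals - slack) < 1e-9
    assert diagonals <= sides + 1e-9
```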

**1.1. Proof of Proposition 2**

By induction on $n$. Edges of $\{0,1\}^n$ split into $E_0$ and $E_1$ (edges of the two opposite faces $\{0,1\}^{n-1} \times \{0\}$ and $\{0,1\}^{n-1} \times \{1\}$) and $E'$ (edges joining these faces). Let $D_0$ and $D_1$ be the diagonals of the opposite faces (these are not diagonals of $\{0,1\}^n$). The induction hypothesis gives
$$\sum_{D_0} \|f(x) - f(\bar x)\|^2 \le \sum_{E_0} \|f(x) - f(y)\|^2, \qquad \sum_{D_1} \|f(x) - f(\bar x)\|^2 \le \sum_{E_1} \|f(x) - f(y)\|^2.$$

For each diagonal $(x, x') \in D_0$, there is a unique (parallel) diagonal $(y, y') \in D_1$, with $y$ above $x$, such that $(x, y')$ and $(x', y)$ are diagonals of $\{0,1\}^n$. Then $(x, y)$ and $(x', y')$ belong to $E'$. The short diagonals lemma, applied to the quadrilateral with vertices $f(x), f(y), f(y'), f(x')$, gives
$$\|f(x) - f(y')\|^2 + \|f(y) - f(x')\|^2 \le \|f(x) - f(y)\|^2 + \|f(y) - f(y')\|^2 + \|f(y') - f(x')\|^2 + \|f(x') - f(x)\|^2.$$

Summing up such inequalities over all diagonals in $D_0$: on the left hand side, each diagonal of $\{0,1\}^n$ appears once. On the right hand side, each edge in $E'$ and each element of $D_0$ and $D_1$ occurs once. With the induction hypothesis, we are done.
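The cube inequality of Proposition 2 can also be checked numerically for random maps $f$. A small Python sketch (illustrative; the name `cube_inequality_slack` is my own):

```python
import itertools, random

def cube_inequality_slack(n, f):
    # (sum over edges of ||f(x)-f(y)||^2) minus (sum over diagonals of ||f(x)-f(xbar)||^2)
    verts = list(itertools.product((0, 1), repeat=n))
    sq = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    # flipping one coordinate enumerates every edge twice
    edges = sum(sq(f[x], f[tuple(1 - c if j == i else c for j, c in enumerate(x))])
                for x in verts for i in range(n)) / 2
    # flipping all coordinates enumerates every diagonal twice
    diags = sum(sq(f[x], f[tuple(1 - c for c in x)]) for x in verts) / 2
    return edges - diags

random.seed(1)
for n in (2, 3, 4):
    # random map f : {0,1}^n -> R^3; Proposition 2 says the slack is nonnegative
    f = {x: [random.gauss(0, 1) for _ in range(3)]
         for x in itertools.product((0, 1), repeat=n)}
    assert cube_inequality_slack(n, f) >= -1e-9
```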

**1.2. Proof of the sharp bound for the Hamming cube**

Assume (after rescaling) that $f$ is distance non-decreasing and $L$-Lipschitz. There are $2^{n-1}$ diagonals and $n 2^{n-1}$ edges. Diagonals in $\{0,1\}^n$ have length $n$ and edges have length 1, so their images have lengths $\ge n$ and $\le L$ respectively; therefore Proposition 2 gives
$$2^{n-1} n^2 \le \sum_{\text{diagonals}} \|f(x) - f(\bar x)\|^2 \le \sum_{\text{edges}} \|f(x) - f(y)\|^2 \le n 2^{n-1} L^2,$$
hence $L \ge \sqrt{n}$.
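The bound is attained by the identity embedding $\{0,1\}^n \subset \ell_2^n$: a pair at Hamming distance $k$ goes to a pair at Euclidean distance $\sqrt{k}$, so the distortion is exactly $\sqrt{n}$. A quick Python check (illustrative; `distortion_of_identity` is my own name):

```python
import itertools, math

def distortion_of_identity(n):
    # Hamming cube {0,1}^n mapped identically into Euclidean R^n:
    # a pair at Hamming distance k lands at Euclidean distance sqrt(k)
    verts = list(itertools.product((0, 1), repeat=n))
    ratios = []
    for x, y in itertools.combinations(verts, 2):
        k = sum(a != b for a, b in zip(x, y))   # Hamming distance
        ratios.append(math.sqrt(k) / k)         # Euclidean / Hamming ratio
    # distortion = (max expansion ratio) / (min expansion ratio)
    return max(ratios) / min(ratios)

for n in (2, 3, 4):
    assert abs(distortion_of_identity(n) - math.sqrt(n)) < 1e-9
```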

**1.3. Enflo type**

The crucial property of $L_2$ that we used is the one expressed in Proposition 2. We could have allowed a multiplicative constant in the Proposition; this would not have spoilt the result. This motivates the following definition.

Definition 4. Let $p \ge 1$. Say that a metric space $X$ has Enflo type $p$ if there exists $T > 0$ such that for all $n$ and all maps $f : \{0,1\}^n \to X$,
$$\sum_{\text{diagonals}} d\big(f(x), f(\bar x)\big)^p \le T^p \sum_{\text{edges}} d\big(f(x), f(y)\big)^p.$$

The best such $T$ is called the Enflo type $p$ constant of $X$, denoted $T_p(X)$.

By construction, if $X$ has Enflo type $p$, then
$$c_X(\{0,1\}^n) \ge \frac{n^{1 - \frac{1}{p}}}{T_p(X)}.$$

Enflo type is a bi-Lipschitz invariant. The Enflo type constant is covariant under bi-Lipschitz maps: if there exists a bi-Lipschitz embedding $f : X \to Y$ with distortion $D$, then $T_p(X) \le D\, T_p(Y)$.

This gives lower bounds on distortion:
$$c_Y(X) \ge \frac{T_p(X)}{T_p(Y)}.$$

**2. Banach spaces**

Now I give some background on Banach spaces that will lead us to more invariants.

**2.1. Rademacher type**

Say an infinite dimensional Banach space $B$ has Rademacher type $p$ if there exists $T > 0$ such that for all $n$ and all points $x_1, \dots, x_n \in B$,
$$\mathbb{E}\,\Big\| \sum_{i=1}^n \varepsilon_i x_i \Big\|^p \le T^p \sum_{i=1}^n \|x_i\|^p,$$
where the $\varepsilon_i$ are independent uniform $\{-1, 1\}$-valued random variables.
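In a Hilbert space, orthogonality makes the Rademacher average collapse: $\mathbb{E}\|\sum_i \varepsilon_i x_i\|^2 = \sum_i \|x_i\|^2$, i.e. Rademacher type 2 with constant $T = 1$ and equality. A short Python sketch (illustrative; exact averaging over all $2^n$ sign patterns, `expected_sq_norm` is my own name):

```python
import itertools, random

def expected_sq_norm(xs):
    # exact E || sum_i eps_i x_i ||^2 over uniform independent signs eps_i in {-1, 1}
    n = len(xs)
    total = 0.0
    for eps in itertools.product((-1, 1), repeat=n):
        v = [sum(e * x[j] for e, x in zip(eps, xs)) for j in range(len(xs[0]))]
        total += sum(c * c for c in v)
    return total / 2 ** n

random.seed(2)
xs = [[random.gauss(0, 1) for _ in range(4)] for _ in range(5)]
lhs = expected_sq_norm(xs)
rhs = sum(sum(c * c for c in x) for x in xs)
# Hilbert space: type 2 with constant 1, with equality (cross terms average to zero)
assert abs(lhs - rhs) < 1e-9
```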

This notion was introduced independently by Hoffmann-Jørgensen and by Maurey–Pisier. Note that $p$ cannot exceed 2 (the real line does not have Rademacher type $p$ for $p > 2$). Rademacher type is what Enflo type becomes when one restricts to linear maps. Therefore Enflo type $p$ implies Rademacher type $p$. A weak converse holds, as the following surprising theorem states.

Theorem 5 (Pisier). For Banach spaces, Rademacher type $p$ implies Enflo type $p'$ for all $p' < p$.

**Open question**. Does Rademacher type $p$ imply Enflo type $p$? This is known to hold for certain subclasses of Banach spaces, e.g. UMD spaces (Naor–Schechtman).

**2.2. Finite representability**

**Locality**. Rademacher type depends only on the geometry of finite dimensional linear subspaces. Such properties are called *local*.

Definition 6. Say a Banach space $B$ is (crudely) finitely representable in $B'$ if there exists $C \ge 1$ such that for every finite-dimensional subspace $E \subset B$, there exists a subspace $E' \subset B'$ with Banach–Mazur distance from $E$ at most $C$.

By definition, local properties are preserved by finite representability. The following theorem suggests that local properties may be expressed as bi-Lipschitz metric invariants.

Theorem 7 (Ribe). If two infinite dimensional Banach spaces $B$ and $B'$ are uniformly homeomorphic, then they are mutually finitely representable.

*Suggests* has to be taken seriously: the theorem merely gives a guideline, and only a small number of invariants are understood from this point of view. We shall give an example below.

**2.3. $p$-convexity**

Let $p \ge 2$. Say a Banach space is $p$-convex if it is uniformly convex with a modulus of convexity $\delta(\varepsilon) \ge c\, \varepsilon^p$.

Proposition 8. $B$ is $p$-convex iff there exists $K > 0$ such that for all $x, y \in B$,
$$\Big\| \frac{x+y}{2} \Big\|^p + \frac{1}{K^p} \Big\| \frac{x-y}{2} \Big\|^p \le \frac{\|x\|^p + \|y\|^p}{2}.$$

**Example**. $L^q$ is $q$-convex for $2 \le q < \infty$, with uniformly bounded constants. For $1 < q \le 2$, $L^q$ is 2-convex with 2-convexity constant of order $\frac{1}{\sqrt{q-1}}$.
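For $L^2$ itself, the parallelogram identity gives 2-convexity with $K = 1$, and the inequality of Proposition 8 holds with equality. A quick numerical check (illustrative Python; `parallelogram_gap` is my own name):

```python
import random

def parallelogram_gap(x, y):
    # | ||(x+y)/2||^2 + ||(x-y)/2||^2 - (||x||^2 + ||y||^2)/2 |, zero in a Hilbert space
    mid = sum(((a + b) / 2) ** 2 for a, b in zip(x, y))
    half_diff = sum(((a - b) / 2) ** 2 for a, b in zip(x, y))
    avg = (sum(a * a for a in x) + sum(b * b for b in y)) / 2
    return abs(mid + half_diff - avg)

random.seed(3)
for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(6)]
    y = [random.gauss(0, 1) for _ in range(6)]
    # parallelogram identity: L^2 is 2-convex with constant K = 1, with equality
    assert parallelogram_gap(x, y) < 1e-9
```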

Theorem 9 (Pisier). 1. Every uniformly convex Banach space can be renormed to be $p$-convex for some $p < \infty$.

2. Uniform convexity is a local property.

**3. Markov convexity**

If $(X_t)$ is a Markov chain on some set $S$, denote by $(\tilde X_t(s))$ the Markov chain that coincides with $(X_t)$ up to time $s$, and then continues an independent life (with the same transition probabilities).

Say a metric space $M$ is *Markov $p$-convex* if there exists $\Pi > 0$ such that for every Markov chain $(X_t)$ on a set $S$ and every map $f : S \to M$,
$$\sum_{k=0}^{\infty} \sum_{t \ge 2^k} \frac{\mathbb{E}\Big[ d\big(f(X_t), f(\tilde X_t(t - 2^k))\big)^p \Big]}{2^{kp}} \le \Pi^p \sum_{t \ge 1} \mathbb{E}\Big[ d\big(f(X_t), f(X_{t-1})\big)^p \Big].$$

The best such $\Pi$ is called the *Markov $p$-convexity constant* of $M$, denoted $\Pi_p(M)$.

**3.1. Trees**

Let us start with an example which is not Markov convex. Let $B_m$ be the complete rooted binary tree of depth $m$. Let $(X_t)$ be the standard downward random walk on $B_m$ (each child is chosen with probability $\frac{1}{2}$), stopped at the leaves.

Proposition 10. There is a $c > 0$ such that the Markov $p$-convexity constant of $B_m$ has to be
$$\Pi_p(B_m) \ge c\, (\log m)^{1/p}.$$

It follows that the infinite rooted binary tree is not Markov $p$-convex for any $p$. We see that trees are really extremal for Markov convexity: they allow maximal branching for Markov chains.

**Proof**. The right hand side (the denominator) equals $m$, since each step of the walk has length 1 by construction. To estimate the left hand side (the numerator), assume $2^k \le t \le m$ and consider the chains $X$ and $\tilde X(t - 2^k)$, which are independent over the last $2^k$ time steps. With probability $\frac{1}{2}$, at time $t - 2^k + 1$, they jump to different nodes. If this happens, later on, they walk down the tree along different branches, therefore the distance increases by 2 at each step, leading to
$$\mathbb{E}\Big[ d\big(X_t, \tilde X_t(t - 2^k)\big)^p \Big] \ge \frac{1}{2} \big( 2(2^k - 1) \big)^p,$$
and thus, for $k \ge 1$,
$$\frac{\mathbb{E}\Big[ d\big(X_t, \tilde X_t(t - 2^k)\big)^p \Big]}{2^{kp}} \ge c.$$
Summing over $t$ from $2^k$ to $m$ gives
$$\sum_{t = 2^k}^{m} \frac{\mathbb{E}\Big[ d\big(X_t, \tilde X_t(t - 2^k)\big)^p \Big]}{2^{kp}} \ge c\, (m - 2^k).$$
Summing over $k$ from 1 to $\log_2(m/2)$ gives
$$\sum_{k} \sum_{t} \frac{\mathbb{E}\Big[ d\big(X_t, \tilde X_t(t - 2^k)\big)^p \Big]}{2^{kp}} \ge c'\, m \log m,$$
whence $\Pi_p(B_m)^p\, m \ge c'\, m \log m$, i.e. $\Pi_p(B_m) \ge c''\, (\log m)^{1/p}$.
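The key geometric fact used above — once two downward walks separate, their distance grows by exactly 2 per step — can be checked directly on bit-string coordinates. An illustrative Python sketch (a node is encoded as the bit-string of its path from the root; `tree_distance` is my own name):

```python
import random

def tree_distance(u, v):
    # graph distance in the rooted binary tree between nodes u, v given as bit-strings
    k = 0
    while k < len(u) and k < len(v) and u[k] == v[k]:
        k += 1  # k = depth of the nearest common ancestor
    return (len(u) - k) + (len(v) - k)

random.seed(4)
# two walks agreeing down to some node, then jumping to DIFFERENT children
prefix = [random.randint(0, 1) for _ in range(10)]
u, v = prefix + [0], prefix + [1]
for step in range(1, 8):
    u = u + [random.randint(0, 1)]
    v = v + [random.randint(0, 1)]
    # once separated, the distance grows by exactly 2 per step
    assert tree_distance(u, v) == 2 * (step + 1)
```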

Theorem 11. If a metric space $M$ is Markov $p$-convex, then
$$c_M(B_m) \ge c\, \frac{(\log m)^{1/p}}{\Pi_p(M)}.$$

Indeed, the Markov $p$-convexity constant is Lipschitz covariant.

**3.2. The case of Banach spaces**

Theorem 12. An infinite dimensional Banach space can be renormed to be $p$-convex iff it is Markov $p$-convex.

I prove only one implication. Let $B$ be a $p$-convex Banach space, i.e. there exists $K$ such that for all $x, y \in B$,
$$\Big\| \frac{x+y}{2} \Big\|^p + \frac{1}{K^p} \Big\| \frac{x-y}{2} \Big\|^p \le \frac{\|x\|^p + \|y\|^p}{2}.$$

One first proves a consequence, the fork inequality: for all $x, y, z, w \in B$,
$$\frac{\|x - y\|^p}{2^{p-1}} + \frac{\|z - w\|^p}{2^{p-1} K^p} \le \frac{\|x - z\|^p + \|x - w\|^p}{2} + \frac{\|y - z\|^p + \|y - w\|^p}{2}.$$

Note that only the tip of the fork ($z$ and $w$) involves the $p$-convexity constant $K$; the other coefficients take specific values.
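For $p = 2$ and $K = 1$ (the Hilbert space case), the fork inequality can be tested numerically on random configurations. An illustrative Python sketch (the names `sq` and `fork_slack` are mine):

```python
import random

def sq(u, v):
    # squared Euclidean distance
    return sum((a - b) ** 2 for a, b in zip(u, v))

def fork_slack(x, y, z, w):
    # rhs - lhs of the fork inequality with p = 2, K = 1 (valid in a Hilbert space);
    # both lhs coefficients equal 1/2^{p-1} = 1/2 in this case
    lhs = sq(x, y) / 2 + sq(z, w) / 2
    rhs = (sq(x, z) + sq(x, w)) / 2 + (sq(y, z) + sq(y, w)) / 2
    return rhs - lhs

random.seed(5)
for _ in range(1000):
    x, y, z, w = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
    # fork inequality: the slack must be nonnegative
    assert fork_slack(x, y, z, w) >= -1e-9
```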

Apply the fork inequality to the images by $f$ of $X_{t - 2^k}$, $X_t$, $X_{t + 2^k}$, $\tilde X_{t + 2^k}(t)$, and take expectations. By independence,…

**3.3. Question**

Does one have a metric invariant characterizing uniform smoothness?