Notes from Sean Li’s lecture

Lower bounding distortion via metric invariants

Let {X} and {Y} be metric spaces. Recall that {c_Y(X)} is the infimum of bi-Lipschitz distortions of embeddings {X\rightarrow Y}. Typically, {Y} is a Banach space and {X} is combinatorial (a finite metric space). When {Y=\ell_2}, one sets

\displaystyle  \begin{array}{rcl}  c_2(n)=\sup\{c_{\ell_2}(X)\,;\,X\textrm{ an }n\textrm{-point metric space}\}. \end{array}

Theorem 1 (Enflo 1970) {c_2(n)} tends to infinity as {n} tends to infinity.

Enflo gave a lower bound of {\sqrt{\log n}}. This is not sharp: the sharp bound is {\log n}, up to constant factors. The upper bound is due to Bourgain (1986) and the matching lower bound to Linial-London-Rabinovich (1994), who used expanders.

1. Enflo’s lower bound for the Hamming cube

Enflo gave a sharp distortion lower bound, but for a suboptimal example: the Hamming cube {D_n=\{0,1\}^n} with the {\ell_1} metric (= the graph metric on the 1-skeleton of the cube).

Proposition 2 (Enflo) For any {f:D_n\rightarrow\ell_2},

\displaystyle  \begin{array}{rcl}  \sum_{diagonals\,xy}|f(x)-f(y)|^2\leq \sum_{edges\,uv}|f(u)-f(v)|^2. \end{array}

Let us begin with a

Lemma 3 (Short diagonals) In {\ell_2}, for any quadrilateral, the sum of squares of the diagonals is at most the sum of squares of the sides.

\displaystyle  \begin{array}{rcl}  |x-z|^2+|y-w|^2\leq|x-y|^2+|y-z|^2+|z-w|^2+|w-x|^2. \end{array}

Proof. By summing over coordinates, it suffices to prove this for points on the real line. There, setting {a=x-y} and {c=z-w}, the right-hand side minus the left-hand side equals {(a+c)^2\geq 0}.
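As a sanity check, the lemma can be verified numerically on random quadrilaterals; a minimal sketch in Python (the dimension and number of trials are arbitrary choices):

```python
import random

def sq(u, v):
    # squared Euclidean distance between two points given as coordinate lists
    return sum((a - b) ** 2 for a, b in zip(u, v))

random.seed(0)
d = 5  # ambient dimension, an arbitrary choice
for _ in range(1000):
    x, y, z, w = ([random.gauss(0, 1) for _ in range(d)] for _ in range(4))
    diagonals = sq(x, z) + sq(y, w)
    sides = sq(x, y) + sq(y, z) + sq(z, w) + sq(w, x)
    assert diagonals <= sides + 1e-9
```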

1.1. Proof of Proposition 2

By induction on {n}. The edges of {D_n} split into {E_0} and {E_1} (the edges of two opposite faces) and {E'} (the edges joining these faces). Let {diag_0} and {diag_1} be the diagonals of the opposite faces (these are not diagonals of {D_n}). The induction hypothesis gives

\displaystyle  \begin{array}{rcl}  \sum_{diag_0}|f(x)-f(y)|^2&\leq &\sum_{E_0}|f(u)-f(v)|^2,\\ \sum_{diag_1}|f(x)-f(y)|^2&\leq &\sum_{E_1}|f(u)-f(v)|^2. \end{array}

For each diagonal {uv\in diag_0}, there is a unique (parallel) diagonal {u'v'\in diag_1} such that {uv'} and {u'v} are diagonals of {D_n}. Then {uu'}, {vv'\in E'}. The short diagonals lemma, applied to the quadrilateral with vertices {f(u)}, {f(v)}, {f(v')}, {f(u')}, gives

\displaystyle  \begin{array}{rcl}  |f(u)-f(v')|^2+|f(u')-f(v)|^2&\leq&|f(u)-f(v)|^2+|f(v)-f(v')|^2\\ &&+|f(v')-f(u')|^2+|f(u')-f(u)|^2. \end{array}

Summing these inequalities, each diagonal of {D_n} appears once on the left-hand side, while each edge of {E'} and each element of {diag_0} and {diag_1} appears once on the right-hand side. Combining with the induction hypothesis, we are done.
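The proposition itself can be brute-force checked for a small cube; below is a sketch for the 3-cube with random maps into the plane (the helper `check_enflo` and all parameters are illustrative choices):

```python
import itertools, random

def check_enflo(n, f):
    # f: dict mapping each vertex of {0,1}^n to a point (tuple of reals)
    verts = list(itertools.product((0, 1), repeat=n))
    sq = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    # diagonals are antipodal pairs; each pair is seen twice, hence the /2
    diag = sum(sq(f[v], f[tuple(1 - c for c in v)]) for v in verts) / 2
    # edges are pairs at Hamming distance 1; again each is seen twice
    edge = sum(sq(f[u], f[v]) for u in verts for v in verts
               if sum(a != b for a, b in zip(u, v)) == 1) / 2
    return diag <= edge + 1e-9

random.seed(1)
verts = list(itertools.product((0, 1), repeat=3))
for _ in range(200):
    f = {v: (random.gauss(0, 1), random.gauss(0, 1)) for v in verts}
    assert check_enflo(3, f)
```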

1.2. Proof of the sharp bound for Hamming cube

Assume {f:D_n\rightarrow\ell_2} is distance non-decreasing and {D}-Lipschitz. There are {2^{n-1}} diagonals and {n\cdot 2^{n-1}} edges. Diagonals of {D_n} have length {n}, so their images have length {\geq n}; edges have length 1, so their images have length {\leq D}. Therefore

\displaystyle  \begin{array}{rcl}  2^{n-1}n^2\leq n\cdot 2^{n-1}D^2, \end{array}

hence {D\geq\sqrt{n}}.
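The counting of diagonals and edges used above is easy to confirm by enumeration for small {n}; a quick sketch:

```python
import itertools

def cube_counts(n):
    # count antipodal pairs (diagonals) and Hamming-distance-1 pairs (edges)
    verts = list(itertools.product((0, 1), repeat=n))
    diagonals = sum(1 for u in verts for v in verts
                    if all(a != b for a, b in zip(u, v))) // 2
    edges = sum(1 for u in verts for v in verts
                if sum(a != b for a, b in zip(u, v)) == 1) // 2
    return diagonals, edges

for n in range(1, 8):
    assert cube_counts(n) == (2 ** (n - 1), n * 2 ** (n - 1))
```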

1.3. Enflo type

The crucial property of {\ell_2} that we used is expressed in the Proposition. A multiplicative constant in the Proposition would not have spoilt the result. This motivates the following

Definition 4 Let {p>1}. Say that a metric space {X} has Enflo type {p} if there exists {T>0} such that for all {n} and all maps {f:D_n\rightarrow X},

\displaystyle  \begin{array}{rcl}  \sum_{diagonals\,xy}|f(x)-f(y)|^p\leq T^p\sum_{edges\,uv}|f(u)-f(v)|^p. \end{array}

The best {T:=T_p(X)} is called the Enflo type {p} constant of {X}.

By construction, if {X} has Enflo type {p},

\displaystyle  \begin{array}{rcl}  c_X(D_n)\geq T_p(X)^{-1}n^{1-\frac{1}{p}}. \end{array}

Enflo type is a bi-Lipschitz invariant. The Enflo type {p} constant is covariant under bi-Lipschitz maps: if there exists a bi-Lipschitz embedding {f:X\rightarrow Y}, then

\displaystyle  \begin{array}{rcl}  T_p(X)\leq dist(f) T_p(Y). \end{array}

This yields distortion lower bounds:

\displaystyle  \begin{array}{rcl}  dist(f)\geq\frac{T_p(X)}{T_p(Y)}. \end{array}

2. Banach spaces

Now I give some background on Banach spaces that will lead us to more invariants.

2.1. Rademacher type

Say an infinite dimensional Banach space {X} has Rademacher type {p} if there exists {T>0} such that for all {n} and all points {x_1,\ldots,x_n\in X},

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}_\epsilon(|\sum_{i=1}^n\epsilon_i x_i|^p)\leq T^p\sum_{i=1}^n|x_i|^p, \end{array}

where {\epsilon_i} are independent uniform {\pm 1}-valued random variables.

This notion was introduced independently by Jorgensen and by Maurey-Pisier. Note that {p} has to be {\leq 2} (the real line does not have Rademacher type {p} for {p>2}). Rademacher type is Enflo type with the maps {f:D_n\rightarrow X} restricted to linear ones. Therefore Enflo type {p} implies Rademacher type {p}. A weak converse holds, as the following surprising theorem states.
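In a Hilbert space, the type 2 inequality holds with {T=1}, and in fact with equality: expanding the square, the cross terms {\mathop{\mathbb E}(\epsilon_i\epsilon_j)} vanish for {i\neq j}. A small exact check in Python, averaging over all sign patterns (the sample points are arbitrary):

```python
import itertools

def avg_sq_signed_sum(xs):
    # E_eps |sum_i eps_i x_i|^2, averaged exactly over all 2^n sign vectors
    n, d = len(xs), len(xs[0])
    total = 0.0
    for eps in itertools.product((-1, 1), repeat=n):
        s = [sum(e * x[j] for e, x in zip(eps, xs)) for j in range(d)]
        total += sum(c * c for c in s)
    return total / 2 ** n

xs = [(1.0, 2.0, 0.0), (0.5, -1.0, 3.0), (2.0, 0.0, 1.0)]  # arbitrary points
lhs = avg_sq_signed_sum(xs)
rhs = sum(sum(c * c for c in x) for x in xs)  # sum of |x_i|^2
assert abs(lhs - rhs) < 1e-9  # equality: type 2 with constant 1
```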

Theorem 5 (Pisier) For Banach spaces, Rademacher type {p>1} {\Rightarrow} Enflo type {p'} for all {p'<p}.

Open question. Does Rademacher type {p>1} {\Rightarrow} Enflo type {p}? This is known to hold for certain subclasses of Banach spaces, e.g. UMD spaces (Naor-Schechtman).

2.2. Finite representability

Locality. Rademacher type {p} depends only on the geometry of finite dimensional linear subspaces. Such properties are called local.

Definition 6 Say a Banach space {X} is (crudely) finitely representable in {Y} if there exists {K\geq 1} such that for every finite dimensional subspace {Z\subset X}, there exists a subspace {Z'\subset Y} with Banach-Mazur distance from {Z} at most {K}.

By definition, local properties are preserved by finite representability. The following theorem suggests that local properties may be expressed as bi-Lipschitz metric invariants.

Theorem 7 (Ribe) If two infinite dimensional Banach spaces {X} and {Y} are uniformly homeomorphic, then they are mutually finitely representable.

“Suggests” has to be taken seriously: the theorem merely gives a guideline, and only a small number of invariants are understood from this point of view. We shall give an example below.

2.3. {p}-convexity

Let {p>1}. Say a Banach space is {p}-convex if it is uniformly convex with modulus of convexity of order {\epsilon^p}.

Proposition 8 {X} is {p}-convex iff there exists {K>0} such that for all {x}, {y\in X},

\displaystyle  \begin{array}{rcl}  |x|^p+|y|^p \geq 2|\frac{x-y}{2}|^p+2|\frac{x+y}{2K}|^p. \end{array}

Example. {L^p} is {p}-convex for {p\geq 2}, with bounded constants. For {p<2}, {L^p} is 2-convex with 2-convexity constant {O(\frac{1}{\sqrt{p-1}})}.
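For {p\geq 2}, Clarkson's inequality gives the inequality of Proposition 8 for the {\ell_p} norm with {K=1}; a numerical spot check (the exponent, dimension and number of trials are arbitrary choices):

```python
import random

def lp_norm(x, p):
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

random.seed(2)
p, d = 4, 6  # exponent p >= 2 and dimension, both arbitrary
for _ in range(1000):
    x = [random.gauss(0, 1) for _ in range(d)]
    y = [random.gauss(0, 1) for _ in range(d)]
    mid = [(a + b) / 2 for a, b in zip(x, y)]
    half_diff = [(a - b) / 2 for a, b in zip(x, y)]
    # Clarkson, p >= 2: |x|^p + |y|^p >= 2|(x+y)/2|^p + 2|(x-y)/2|^p
    lhs = lp_norm(x, p) ** p + lp_norm(y, p) ** p
    rhs = 2 * lp_norm(mid, p) ** p + 2 * lp_norm(half_diff, p) ** p
    assert lhs >= rhs - 1e-9
```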

Theorem 9 (Pisier) 1. Every uniformly convex Banach space can be renormed to be {p}-convex for some {p>1}.

2. Uniform convexity is a local property.

3. Markov convexity

If {(X_t)_{t\in{\mathbb Z}}} is a Markov chain on some set {\Omega}, denote by {(\tilde{X}_t(k))_{t\in{\mathbb Z}}} the Markov chain that coincides with {(X_t)} for {t\leq k}, and then continues independently, with the same transition probabilities.

Say a metric space {X} is Markov {p}-convex if there exists {\pi>0} such that for every Markov chain on a set {\Omega} and every map {f:\Omega\rightarrow X},

\displaystyle  \begin{array}{rcl}  \sum_{k=0}^{\infty}\sum_{t\in{\mathbb Z}}\frac{1}{2^{kp}}\mathop{\mathbb E}(d(f(X_t),f(\tilde{X}_t(t-2^k)))^p)\leq \pi^p\sum_{t\in{\mathbb Z}}\mathop{\mathbb E}(d(f(X_t),f(X_{t-1}))^p). \end{array}

The best {\pi} is called the Markov {p}-convexity constant {\pi_p(X)} of {X}.

3.1. Trees

Let us start with an example which is not Markov convex. Let {B_n} be the complete rooted binary tree of depth {n}. Let {X_t} be the standard downward random walk on {B_n}, started at the root, moving to either child with probability {1/2} and stopped at the leaves.

Proposition 10 There is a constant {C=C(p)>0} such that the Markov {p}-convexity constant satisfies

\displaystyle \pi_p(B_n)\geq C\,(\log n)^{1/p}.

It follows that the infinite rooted binary tree {B_\infty} is not Markov convex. We see that trees are really extremal for Markov convexity: they allow maximal branching for Markov chains.

Proof. The right-hand side (the denominator in the definition of {\pi}) equals {n}, since {d(X_t,X_{t+1})=1} by construction. To estimate the left-hand side (the numerator), assume that {0\leq t-2^k\leq t\leq n} and consider the chains {(X_t)} and {(\tilde{X}_t(t-2^k))}, which evolve independently over the last {2^k} time steps. With probability {1/2}, their first steps after time {t-2^k} lead to different children. If this happens, later on they walk down the tree along different branches, so {d(X_t,\tilde{X}_t(t-2^k))} increases by 2 at each step, leading to

\displaystyle  \begin{array}{rcl}  d(X_t,\tilde{X}_t(t-2^k))\geq 2.2^k, \end{array}

and thus

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}(d(X_t,\tilde{X}_t(t-2^k))^p)\geq \frac{1}{2}(2.2^k)^p\sim 2^{kp}. \end{array}

Summing over {t} from {2^k} to {n} gives

\displaystyle  \begin{array}{rcl}  \sum_{t=2^k}^{n}\mathop{\mathbb E}(d(X_t,\tilde{X}_t(t-2^k))^p)\geq (n-2^k)2^{kp}. \end{array}

Multiplying by the weights {\frac{1}{2^{kp}}} and summing over {k} from 0 to {\log_2(n/2)} (so that {n-2^k\geq n/2}) gives

\displaystyle  \begin{array}{rcl}  \sum_{k=0}^{\log_2(n/2)}\sum_{t=2^k}^{n}\frac{1}{2^{kp}}\mathop{\mathbb E}(d(X_t,\tilde{X}_t(t-2^k))^p)\geq \frac{n}{2}\log_2\frac{n}{2}, \end{array}

whence {\pi^p\gtrsim\log n}, as claimed.
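The divergence estimate at the heart of the proof can be confirmed by exact enumeration of all post-fork trajectories, for a small fork gap (here {2^k=4} steps; the walk is assumed to stay above the leaves):

```python
import itertools

def expected_dist_p(m, p):
    # Two walks share their history up to the fork, then each appends m
    # independent uniform bits; the tree distance is twice the number of
    # post-fork steps taken after the first disagreement.
    total = 0
    for a in itertools.product((0, 1), repeat=m):
        for b in itertools.product((0, 1), repeat=m):
            agree = 0
            while agree < m and a[agree] == b[agree]:
                agree += 1
            total += (2 * (m - agree)) ** p
    return total / 4 ** m

m, p = 4, 2  # fork gap 2^k = 4 and exponent p, both arbitrary choices
# the first post-fork steps differ with probability 1/2, giving distance 2m
assert expected_dist_p(m, p) >= 0.5 * (2 * m) ** p
```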

Theorem 11 If metric space {X} is Markov {p}-convex, then

\displaystyle  \begin{array}{rcl}  c_X(B_n)\geq C(\log n)^{1/p}. \end{array}

Indeed, the Markov {p}-convexity constant is Lipschitz covariant.

3.2. The case of Banach spaces

Theorem 12 An infinite dimensional Banach space can be renormed to be {p}-convex iff it is Markov {p}-convex.

I prove only one implication. Let {X} be a {p}-convex Banach space, i.e.

\displaystyle  \begin{array}{rcl}  |x|^p+|y|^p \geq 2|\frac{x-y}{2}|^p+2|\frac{x+y}{2K}|^p. \end{array}

One first proves a consequence, the fork inequality:

\displaystyle  \begin{array}{rcl}  \frac{1}{2^{p-1}}(|x-w|^p+|x-z|^p)+\frac{1}{4^{p-1}K}|z-w|^p \leq 2|y-x|^p+|y-w|^p+|z-y|^p. \end{array}

Note that only the tip of the fork ({z} and {w}) involves the {p}-convexity constant; the other coefficients take specific values.

Apply the fork inequality to the images under {f} of {X_{t-2^{k+1}}}, {X_{t-2^{k}}}, {X_t}, {\tilde{X}_t(t-2^k)} and take expectations. By independence,…

3.3. Question

Does one have a metric invariant for uniform smoothness?

About metric2011

metric2011 is a program of Centre Emile Borel, an activity of Institut Henri Poincaré, 11 rue Pierre et Marie Curie, 75005 Paris, France.
This entry was posted in Workshop lecture.
