Notes of Daniel Galicer’s lecture

The minimal distortion needed to embed a binary tree into {\ell^p}

I will cover theorems very similar to those in the previous talk, but the methods will be different, with a more combinatorial flavour.

1. Ramsey theory

1.1. Friends and strangers problem

How many people should be at a party in order to ensure that at least 3 of them know each other, or at least 3 of them are mutual strangers?

View the {n} participants as the vertices of a complete graph whose edges are coloured with two colours. We want a monochromatic triangle.

The answer is {n=6}.

1.2. Ramsey’s theorem

Theorem 1 (Ramsey) Given {r}, {s\in{\mathbb N}}, there exists {R(r,s)} such that if {n\geq R(r,s)}, then any graph on {n} vertices contains either an {r}-clique (complete subgraph) or an independent set of size {s} (no edges between its vertices).

We just saw that {R(3,3)=6}. It is known that {R(4,4)=18}.
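These small values can be checked by brute force. A quick Python sketch (names are mine, not from the lecture) verifying {R(3,3)=6}: some 2-colouring of {K_5} has no monochromatic triangle, while every 2-colouring of {K_6} has one.

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each pair (i, j) with i < j to colour 0 or 1."""
    return any(
        colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_has_mono_triangle(n):
    pairs = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(pairs, colours)))
        for colours in product((0, 1), repeat=len(pairs))
    )

assert not every_colouring_has_mono_triangle(5)   # R(3,3) > 5
assert every_colouring_has_mono_triangle(6)       # R(3,3) <= 6
```

The witness for {n=5} is the pentagon: colour the 5-cycle red and its complement (also a 5-cycle) blue.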

If aliens threatened Earth with war unless we could give them the value of {R(5,5)}, we should put all our efforts into finding it. If they asked for {R(6,6)}, we had better get ready for war (Erdős).

So do not think that these numbers are easy to compute. Finding rough estimates on them is a theory in itself.

2. Ramsey and embeddings

2.1. Embedding trees into {p}-convex Banach spaces

See Li’s lecture for the definition of {p}-convexity. Hilbert space is 2-convex. {L^p} is {p}-convex if {p\geq 2} and 2-convex if {1<p<2}.

Theorem 2 Let {X} be a {p}-convex Banach space. Let {B_n} be the complete rooted binary tree of depth {n}. For any embedding {f:B_n\rightarrow X},

\displaystyle  \begin{array}{rcl}  dist(f)\geq\Omega((\log n)^{1/p}). \end{array}

2.2. Hanner’s inequality

If {p>2},

\displaystyle  \begin{array}{rcl}  (|x|+|y|)^p +||x|-|y||^p \geq |x+y|^p+|x-y|^p\geq 2(|x|^p+|y|^p). \end{array}

If {p<2}, the inequalities are reversed.

2.3. Forks

A 4-tuple of points {\{x_0,x_1,x_2,x'_2\}} is a {\delta}-fork if {\{x_0,x_1,x_2\}} and {\{x_0,x_1,x'_2\}} are {(1+\delta)}-isomorphic to {\{0,1,2\}} (mapping {x_1} to 1).

Lemma 3 In a {p}-convex Banach space, {p\geq 2}, every {\delta}-fork satisfies

\displaystyle  \begin{array}{rcl}  |x_2-x'_2|=|x_0-x_1|O(\delta^{1/p}). \end{array}

Proof. One can assume that {x_0=0}, {|x_1|=1}. Let

\displaystyle  \begin{array}{rcl}  z:=x_1+\frac{x_2-x_1}{|x_2-x_1|}. \end{array}

Then {|z-x_1|=1}, {|z-x_2|\leq 2\delta}. Apply {p}-convexity inequality to unit vectors {x=x_1} and {y=z-x_1}. Their midpoint is {\frac{z}{2}}, and {|\frac{z}{2}|\geq 1-\delta}. Therefore {|x-y|\leq C\,\delta^{1/p}}. Furthermore,

\displaystyle  \begin{array}{rcl}  |x_2-2x_1|\leq |x_2-z|+|z-2x_1|=O(\delta^{1/p}),\quad |x'_2-2x_1|=O(\delta^{1/p}), \end{array}

thus {|x_2-x'_2|=O(\delta^{1/p})}.

2.4. Combinatorics in rooted trees

Let {T} be a rooted tree. Let {SP(T)} denote the set of pairs {(x,y)} of vertices with {x} on the path from the root to {y}.

Denote by {T_{k,h}} the complete {k}-ary tree of depth {h}.

Lemma 4 Let {k\geq r^{(h+1)^2}}. Colour {SP(T_{k,h})} in {r} colours. Then there is a copy {T'} of {B_h} such that the colour of any pair {(x,y)\in SP(T')} only depends on the levels of {x} and {y}.

We start with the following

Claim. If the leaves of {T_{k,h}} are coloured by {r'} colours and {k>r'}, then there is a copy of {B_h} inside {T_{k,h}} with monochromatic leaves.

This is proven by induction on {h}. Below the root, there are {k} trees isomorphic to {T_{k,h-1}}. By induction, each of them contains a binary subtree {B_{h-1}} with monochromatic leaves. Since {k>r'}, two of these binary subtrees have the same leaf colour. Connecting them via the root yields a binary subtree {B_h} with monochromatic leaves.

Back to the proof of the Lemma. We label each leaf {z} of {T_{k,h}} by the vector whose components are the colours of the {\frac{h(h+1)}{2}} pairs {(x,y)} along the path from the root {o} to {z}. So the number of labels is {r'\leq r^{h(h+1)/2}< r^{(h+1)^2}\leq k}. According to the Claim, there is a binary subtree {B_h} with monochromatic leaves, meaning that the colour of a pair depends only on the depths of its endpoints.

2.5. Matousek’s proof of the Theorem

We shall use the following easy facts.

Lemma 5 {T_{k,h}} embeds into {B_n} for {n=2h\lceil\log_2 k\rceil}.

Lemma 6 (Path embedding lemma) Given {\alpha}, {\beta\in(0,1)}, there exists {C} such that for every distance non-decreasing map

\displaystyle  \begin{array}{rcl}  f:\{0,1,\ldots,h\}\rightarrow M \end{array}

to some metric space {M}, with {h\geq 2^{CK^{\alpha}}}, where {K=Lip(f)}, there exists an arithmetic progression

\displaystyle  \begin{array}{rcl}  Z=\{x,x+a,x+2a\} \end{array}

such that the restriction of {f} to {Z} is {(1+\epsilon)}-isometric, for {\epsilon=\epsilon(\alpha,\beta,a)}.

Let {f:B_n\rightarrow X} be a distance non-decreasing map of distortion {K=(\log n)^{1/p}}. We shall look for forks in the image. We start with a complete {k}-ary subtree {T_{k,h}} of {B_n}. We colour the elements of {SP(T_{k,h})} according to the local distortion of {f}: the colour of the pair {(x,y)} is

\displaystyle  \begin{array}{rcl}  \lfloor\frac{K^p}{\beta}\frac{|f(x)-f(y)|}{d_{T_{k,h}}(x,y)}\rfloor, \end{array}

{\beta} a small constant.

Thanks to our Ramsey-type Lemma, we find a complete binary subtree {B_h} of {T_{k,h}} whose pairs are coloured according to depths only. Along each path from the root to a leaf, we find an arithmetic progression {\{y_0,y_1,y_2\}} of step {a} along which {f} is nearly isometric. Let {y'_2} be a node at the same depth as {y_2} and at distance {a} from {y_1}. The colour-depends-only-on-depth property implies that, as far as distances between images under {f} are concerned, {\{y_0,y_1,y'_2\}} behaves nearly as {\{y_0,y_1,y_2\}} does, i.e. {f(\{y_0,y_1,y_2,y'_2\})} is a fork. Therefore {d(f(y_2),f(y'_2))} is small, whereas {d(y_2,y'_2)=2a}.

Posted in Workshop lecture | Tagged | Leave a comment

Notes of Sean Li’s lecture

Lower bounding distortion via metric invariants

Let {X} and {Y} be metric spaces. Recall that {c_Y(X)} is the infimum of bi-Lipschitz distortions of maps {X\rightarrow Y}. Typically, {Y} is a Banach space and {X} is combinatorial (a finite metric space). When {Y=\ell_2}, one sets

\displaystyle  \begin{array}{rcl}  c_2(n)=\sup\{c_{\ell_2}(X)\,;\,X\textrm{ an }n\textrm{-point metric space}\}. \end{array}

Theorem 1 (Enflo 1970) {c_2(n)} tends to infinity as {n} tends to infinity.

Enflo gave a lower bound of {\sqrt{\log n}}. This is not sharp. The sharp bound is {\log n}. The upper bound is due to Bourgain (1986) and the lower bound to Linial-London-Rabinovich (1994) – they used expanders.

1. Enflo’s lower bound for the Hamming cube

Enflo gave a sharp distortion lower bound, but for a suboptimal example, the Hamming cube {D_n=\{0,1\}^n} with {\ell_1} metric (= graph metric on 1-skeleton of cube).

Proposition 2 (Enflo) For any {f:D_n\rightarrow\ell_2},

\displaystyle  \begin{array}{rcl}  \sum_{diagonals\,xy}|f(x)-f(y)|^2\leq \sum_{edges\,uv}|f(u)-f(v)|^2. \end{array}

Let us begin with a

Lemma 3 (Short diagonals) In {\ell_2}, for any quadrilateral, the sum of squares of diagonals is less than the sum of squares of sides.

\displaystyle  \begin{array}{rcl}  |x-z|^2+|y-w|^2\leq|x-y|^2+|y-z|^2+|z-w|^2+|w-x|^2. \end{array}

Proof. By summing over coordinates, it suffices to prove this for points on a line. There, it boils down to triangle inequality.
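The lemma is easy to test numerically; here is a hedged sketch (names are mine) checking it for random quadrilaterals in {{\mathbb R}^4}. In fact the difference of the two sides equals {|x-y+z-w|^2\geq 0}, so the inequality always holds.

```python
import random

def sq(u, v):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

random.seed(0)
for _ in range(200):
    # random quadrilateral x -> y -> z -> w in R^4
    x, y, z, w = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
    diagonals = sq(x, z) + sq(y, w)
    sides = sq(x, y) + sq(y, z) + sq(z, w) + sq(w, x)
    assert diagonals <= sides + 1e-9
```

Equality holds exactly for parallelograms ({x-y=w-z}), e.g. the unit square.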

1.1. Proof of Proposition

By induction on {n}. Edges of {D_n} split into {E_0} and {E_1} (edges of two opposite faces) and {E'} (edges joining these faces). Let {diag_0} and {diag_1} be the diagonals of the opposite faces (these are not diagonals of {D_n}). The induction hypothesis gives

\displaystyle  \begin{array}{rcl}  \sum_{diag_0}|f(x)-f(y)|^2&\leq &\sum_{E_0}|f(u)-f(v)|^2,\\ \sum_{diag_1}|f(x)-f(y)|^2&\leq &\sum_{E_1}|f(u)-f(v)|^2. \end{array}

For each diagonal {uv\in diag_0}, there is a unique (parallel) {u'v'\in diag_1} such that {uv'} and {u'v} are diagonals of {D_n}. Then {uu'} and {vv'\in E'}. The short diagonals lemma gives

\displaystyle  \begin{array}{rcl}  |f(u)-f(v')|^2+|f(u')-f(v)|^2&\leq&|f(u)-f(v)|^2+|f(v)-f(v')|^2\\ &&+|f(v')-f(u')|^2+|f(u')-f(u)|^2. \end{array}

Summing up such inequalities, on the left hand side, each diagonal of {D_n} appears once. On the right hand side, each edge in {E'} and each element of {diag_0} and {diag_1} occurs once. With induction hypothesis, we are done.

1.2. Proof of the sharp bound for Hamming cube

Assume {f:D_n\rightarrow\ell_2} is distance non-decreasing and {D}-Lipschitz. There are {2^{n-1}} diagonals and {n\cdot 2^{n-1}} edges. Diagonals of {D_n} have length {n}, so their images have length {\geq n}; edges have length 1, so their images have length {\leq D}. Therefore

\displaystyle  \begin{array}{rcl}  2^{n-1}n^2\leq n.2^{n-1}D^2, \end{array}

hence {D\geq\sqrt{n}}.
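Both the counts and Enflo's inequality can be machine-checked on a small cube; the map {f} below is random, since the Proposition holds for arbitrary maps into Euclidean space (hypothetical sketch, names are mine).

```python
from itertools import product
import random

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

n = 4
cube = list(product((0, 1), repeat=n))
pairs = [(u, v) for i, u in enumerate(cube) for v in cube[i + 1:]]
edges = [p for p in pairs if hamming(*p) == 1]       # distance-1 pairs
diagonals = [p for p in pairs if hamming(*p) == n]   # antipodal pairs

assert len(edges) == n * 2 ** (n - 1)     # n * 2^(n-1) edges
assert len(diagonals) == 2 ** (n - 1)     # 2^(n-1) diagonals

# Enflo's inequality for a random map f : D_n -> R^3
random.seed(0)
f = {u: [random.gauss(0, 1) for _ in range(3)] for u in cube}
dsq = lambda p: sum((a - b) ** 2 for a, b in zip(f[p[0]], f[p[1]]))
assert sum(dsq(p) for p in diagonals) <= sum(dsq(p) for p in edges) + 1e-9
```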

1.3. Enflo type

The crucial property of {\ell_2} used is expressed in the Proposition. We could have had a multiplicative constant in the Proposition, this would not have spoilt the result. This motivates the following

Definition 4 Let {p>1}. Say that metric space {X} has Enflo type {p} if there exists {T>0} such that for all {n} and all maps {f:D_n\rightarrow X},

\displaystyle  \begin{array}{rcl}  \sum_{diagonals\,xy}|f(x)-f(y)|^p\leq T\sum_{edges\,uv}|f(u)-f(v)|^p. \end{array}

The best {T:=T_p(X)} is called the Enflo type {p} constant of {X}.

By construction, if {X} has Enflo type {p},

\displaystyle  \begin{array}{rcl}  c_X(D_n)\geq T_p(X)^{-1/p}\,n^{1-\frac{1}{p}}. \end{array}

Enflo type is a bi-Lipschitz invariant. The Enflo type {p} constant is covariant under bi-Lipschitz maps: if {f:X\rightarrow Y} is a bi-Lipschitz embedding, then

\displaystyle  \begin{array}{rcl}  T_p(X)\leq dist(f)^p\, T_p(Y). \end{array}

This gives lower bounds on distortion:

\displaystyle  \begin{array}{rcl}  dist(f)\geq\Big(\frac{T_p(X)}{T_p(Y)}\Big)^{1/p}. \end{array}

2. Banach spaces

Now I give some background on Banach spaces that will lead us to more invariants.

2.1. Rademacher type

Say an infinite dimensional Banach space {X} has Rademacher type {p} if there exists {T>0} such that for all {n} and all points {x_1,\ldots,x_n\in X},

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}_\epsilon(|\sum_{i=1}^n\epsilon_i x_i|^p)\leq T^p\sum_{i=1}^n|x_i|^p, \end{array}

where {\epsilon_i} are independent uniform {\pm 1}-valued random variables.
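In a Hilbert space, Rademacher type 2 holds with constant {T=1}: by orthogonality of the {\epsilon_i}, the average of {|\sum\epsilon_i x_i|^2} over all sign patterns equals {\sum|x_i|^2} exactly. A quick check (hypothetical sketch, names are mine):

```python
from itertools import product
import random

random.seed(0)
d, n = 3, 4
xs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

def sqnorm(v):
    return sum(c * c for c in v)

# Average |sum eps_i x_i|^2 over all 2^n sign patterns
signs = list(product((-1, 1), repeat=n))
avg = sum(
    sqnorm([sum(e * x[j] for e, x in zip(eps, xs)) for j in range(d)])
    for eps in signs
) / len(signs)

# The cross terms cancel, leaving exactly the sum of squared norms
assert abs(avg - sum(sqnorm(x) for x in xs)) < 1e-9
```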

This notion was introduced independently by Jorgensen and by Maurey-Pisier. Note that {p} has to be {\leq 2} (even the real line fails Rademacher type {p} for {p>2}). Rademacher type is Enflo type restricted to linear maps {f:D_n\rightarrow X}. Therefore Enflo type {p} implies Rademacher type {p}. A weak converse holds, as the following surprising theorem states.

Theorem 5 (Pisier) For Banach spaces, Rademacher type {p>1} {\Rightarrow} Enflo type {p'} for all {p'<p}.

Open question. Does Rademacher type {p>1} {\Rightarrow} Enflo type {p} ? Known to hold for certain subclasses of Banach spaces. E.g. UMD spaces (Naor-Schechtman).

2.2. Finite representability

Locality. Rademacher type {p} depends only on the geometry of finite dimensional linear subspaces. Such properties are called local.

Definition 6 Say a Banach space {X} is (crudely) finitely representable in {Y} if there exists {K\geq 1} such that for every finite dimensional subspace {Z\subset X}, there exists a subspace {Z'\subset Y} with Banach-Mazur distance from {Z} at most {K}.

By definition, local properties are preserved by finite representability. The following theorem suggests that local properties may be expressed as bi-Lipschitz metric invariants.

Theorem 7 (Ribe) If two infinite dimensional Banach spaces {X} and {Y} are uniformly homeomorphic, then they are mutually finitely representable.

“Suggests” has to be taken seriously: the theorem merely gives a guideline, and only a small number of invariants are understood from this point of view. We shall give an example below.

2.3. {p}-convexity

Let {p>1}. Say a Banach space is {p}-convex if it is uniformly convex with modulus of convexity {\delta(\epsilon)\geq c\,\epsilon^p}.

Proposition 8 {X} is {p}-convex iff there exists {K>0} such that for all {x}, {y\in X},

\displaystyle  \begin{array}{rcl}  |x|^p+|y|^p \geq 2|\frac{x+y}{2}|^p+2|\frac{x-y}{2K}|^p. \end{array}

Example. {L^p} is {p}-convex for {p\geq 2}, with bounded constants. For {p<2}, {L^p} is 2-convex with 2-convexity constant {O(\frac{1}{\sqrt{p-1}})}.

Theorem 9 (Pisier) 1. Every uniformly convex Banach space can be renormed to be {p}-convex for some {p>1}.

2. Uniform convexity is a local property.

3. Markov type

If {(X_t)_{t\in{\mathbb Z}}} is a Markov chain on some set {\Omega}, denote by {(\tilde{X}_t(k))_{t\in{\mathbb Z}}} the Markov chain that coincides with {X_t} for {t\leq k} and then continues independently.

Say a metric space {X} is Markov {p}-convex if there exists {\pi>0} such that for every Markov chain on a set {\Omega} and every map {f:\Omega\rightarrow X},

\displaystyle  \begin{array}{rcl}  \sum_{k=0}^{\infty}\sum_{t\in{\mathbb Z}}\frac{1}{2^{kp}}\mathop{\mathbb E}(d(f(X_t),f(\tilde{X}_t(t-2^k)))^p)\leq \pi^p\sum_{t\in{\mathbb Z}}\mathop{\mathbb E}(d(f(X_t),f(X_{t-1}))^p). \end{array}

The best {\pi} is called the Markov {p}-convexity constant {\pi_p(X)} of {X}.

3.1. Trees

Let us start with an example which is not Markov convex. Let {B_n} be the complete rooted binary tree of depth {n}. Let {X_t} be the standard downward random walk on {B_n} (from a node, move to each of its two children with probability {1/2}), stopped at the leaves.

Proposition 10 There is a {C=C(p)>0} such that the Markov {p}-convexity constant satisfies

\displaystyle \pi_p(B_n)\geq C\,(\log n)^{1/p}.

It follows that the infinite rooted binary tree {B_\infty} is not Markov convex. We see that trees are really extreme for Markov convexity: they allow maximal branching for Markov chains.

Proof. The right hand side (the denominator in the definition of {\pi}) equals {n}, since {d(X_t,X_{t+1})=1} by construction. To estimate the left hand side (the numerator), assume that {0\leq t-2^k\leq t\leq n} and consider the chains {(X_t)} and {(\tilde{X}_t(t-2^k))}, which run independently over the last {2^k} time steps. With probability {1/2}, at the first independent step, they jump to different nodes. If this happens, they walk down the tree along different branches from then on, so {d(X_t,\tilde{X}_t(t-2^k))} increases by 2 at each step, leading to

\displaystyle  \begin{array}{rcl}  d(X_t,\tilde{X}_t(t-2^k))\geq 2\cdot 2^k, \end{array}

and thus

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}(d(X_t,\tilde{X}_t(t-2^k))^p)\geq \frac{1}{2}(2\cdot 2^k)^p\geq 2^{kp}. \end{array}

Summing over {t} from {2^k} to {n} gives

\displaystyle  \begin{array}{rcl}  \sum_{t=2^k}^{n}\mathop{\mathbb E}(d(X_t,\tilde{X}_t(t-2^k))^p)\geq (n-2^k)2^{kp}. \end{array}

Summing over {k} from 0 to {\log_2 n}, with the weights {2^{-kp}} from the definition, gives

\displaystyle  \begin{array}{rcl}  \sum_{k=0}^{\log_2 n}\frac{1}{2^{kp}}\sum_{t=2^k}^{n}\mathop{\mathbb E}(d(X_t,\tilde{X}_t(t-2^k))^p)\geq \sum_{k=0}^{\log_2 n}(n-2^k)\geq c\,n\log n, \end{array}

whence {\pi^p\geq c\log n}.
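The key lower bound {\mathbb{E}(d(X_t,\tilde{X}_t(t-2^k))^p)\geq\frac{1}{2}(2\cdot 2^k)^p} can be computed exactly: the two chains first diverge at step {j} with probability {2^{-j}}, after which the distance grows by 2 per step. A small exact computation (sketch, names are mine):

```python
def expected_dist_p(m, p):
    """Exact E[d^p] for two downward walks on the binary tree that agree
    up to some time and then run independently for m steps.  They first
    diverge at step j (probability 2**-j) and end at distance
    2*(m - j + 1); with probability 2**-m they never diverge (d = 0)."""
    return sum(2 ** -j * (2 * (m - j + 1)) ** p for j in range(1, m + 1))

m, p = 8, 2          # m = 2**k with k = 3
# The divergence-at-first-step term alone gives (1/2) * (2m)^p
assert expected_dist_p(m, p) >= 0.5 * (2 * m) ** p
```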

Theorem 11 If metric space {X} is Markov {p}-convex, then

\displaystyle  \begin{array}{rcl}  c_X(B_n)\geq C(\log n)^{1/p}. \end{array}

Indeed, the Markov {p}-convexity constant is Lipschitz covariant.

3.2. The case of Banach spaces

Theorem 12 An infinite dimensional Banach space can be renormed to be {p}-convex iff it is Markov {p}-convex.

I prove only one implication. Let {X} be a {p}-convex Banach space, i.e.

\displaystyle  \begin{array}{rcl}  |x|^p+|y|^p \geq 2|\frac{x+y}{2}|^p+2|\frac{x-y}{2K}|^p. \end{array}

One first proves a consequence, the fork inequality:

\displaystyle  \begin{array}{rcl}  \frac{1}{2^{p-1}}(|x-w|^p+|x-z|^p)+\frac{1}{4^{p-1}K}|z-w|^p \leq 2|y-x|^p+|y-w|^p+|z-y|^p. \end{array}

Note that only the tip of the fork ({z} and {w}) involves the {p}-convexity constant, the other coefficients take specific values.

Apply the fork inequality to the images by {f} of {X_{t-2^{k+1}}}, {X_{t-2^{k}}}, {X_t}, {\tilde{X}_t(t-2^k)} and take expectation. By independence,…

3.3. Question

Does one have a metric invariant for uniform smoothness?


Notes of Adriane Kaichouh’s lecture nr 2

1. Super-reflexivity

1.1. Uniform convexity

A Banach space {E} is uniformly convex if the midpoint of two points on the unit sphere which are sufficiently far apart is deep inside the ball.

Example. Hilbert spaces, {L^p} spaces with {1<p<\infty} are uniformly convex.

Uniform convexity passes to {\ell^2} direct sums {\ell^2(X,E)}.

1.2. Super-reflexivity

Uniform convexity is not stable under renorming.

Definition 1 A Banach space is super-reflexive if it admits an equivalent norm which is uniformly convex.

Here is an equivalent definition.

Theorem 2 (Enflo) A Banach space is super-reflexive iff every one of its ultrapowers is reflexive.

In other words, every space which is finitely representable in it is reflexive.

Recall that, given an ultrafilter {\mathcal{U}} on an index set {I}, the ultraproduct {\prod_{\mathcal{U}}E_i} of a family of Banach spaces {(E_i)_{i\in I}} is the quotient of {\ell^{\infty}(I,E_i)} by the subspace of families whose norm tends to 0 along {\mathcal{U}}. This is a Banach space. If all {E_i=E}, the ultraproduct is called an ultrapower.

2. Pestov’s theorem

Here is Pestov’s main theorem.

Theorem 3 (Pestov) Let {X} be a metric space. Let {G} be a locally finite group acting on {X} by isometries. Assume that {X} admits a coarse/uniform embedding into a Banach space {E}. Then there is a coarse/uniform embedding {\psi} from {X} to some ultrapower of some {\ell^2(\mathcal{U},E)}, and the action of {G} on {\psi(X)} extends to an action by affine isometries on the affine span of {\psi(X)}.

2.1. Proof

Let {\phi} be the initial coarse embedding. Start with a finite subgroup {F} of {G}. Define

\displaystyle  \begin{array}{rcl}  \psi_F : X\rightarrow \ell^2(F,E),\quad \psi_F(x)(f)=\frac{1}{\sqrt{|F|}}\phi(f^{-1}x). \end{array}

Then {\psi_F} has the same expansion and compression moduli as {\phi}, since each {f^{-1}} is an isometry of {X}, so that {d(f^{-1}x,f^{-1}y)=d(x,y)} for every coordinate. {F} acts on {\ell^2(F,E)} via left translations, and {\psi_F} is equivariant for this action.

Next we glue all maps {\psi_F} together. Let {I} denote the set of all finite subgroups of {G}. Because {G} is locally finite, there exists an ultrafilter {\mathcal{U}} on {I} such that for all {F\in I}, the set

\displaystyle  \begin{array}{rcl}  \{H\in I\,;\,F\subset H\} \end{array}

is in {\mathcal{U}}. Indeed, the intersection of two such subsets satisfies

\displaystyle  \begin{array}{rcl}  \{H\in I\,;\,F_1\subset H\}\cap\{H\in I\,;\,F_2\subset H\}=\{H\in I\,;\,\langle F_1,F_2\rangle\subset H\}, \end{array}

and the subgroup {\langle F_1,F_2\rangle} generated by two finite subgroups is again finite, since {G} is locally finite. So we are dealing with a filter basis.

Fix an origin {o\in X}. Let {V} be the ultraproduct over {\mathcal{U}} of the spaces {\ell^2(F,E)}, based at {\psi_F(o)} instead of 0 (this makes {\ell^2(F,E)} an affine space instead of a vector space, and likewise for the ultraproduct). Define {\psi(x)} as the sequence {(\psi_F(x))_{F\in I}}. It is well defined and has the same expansion and compression moduli as {\phi}.

Given {g\in G}, observe that {\mathcal{U}}-almost surely,

\displaystyle  \begin{array}{rcl}  g\psi(x)=\psi(gx) \end{array}

is well defined, and this provides an isometry of {\psi(X)}. Isometries of subsets of Hilbert spaces extend to affine hulls, so the {G} action extends.

As defined, {V} is not quite an ultrapower. But for every {F\in I}, {\ell^2(F,E)} embeds non canonically into {\ell^2(G,E)}, so {V} embeds into an ultrapower of {\ell^2(G,E)}.

2.2. Application to uniform embeddability of Urysohn space

Let {\phi:U\rightarrow E} be a uniform embedding. Let {G} be a dense locally finite subgroup of {Isom(U)}. Pestov's theorem implies that there is a uniform embedding {\psi} of {U} into a reflexive Banach space {V}, equivariant with respect to an affine isometric action of {G}. {\psi} is a homeomorphism onto {\psi(U)}. The topologies of pointwise convergence induced on {G} by its actions on {U} and on {\psi(U)} coincide, so, by density, the action on the affine span {S} of {\psi(U)} extends to {Isom(U)}. I.e. we get a continuous group homomorphism

\displaystyle  \begin{array}{rcl}  Isom(U)\rightarrow Aff(S)=O(S)\ltimes S. \end{array}

Since {\psi} is injective and equivariant, this homomorphism is injective.

Now we use the fact that {Isom(U)} contains a subgroup {H} isomorphic to {Homeo^+([0,1])}. So we get a linear isometric representation of {H} on a reflexive Banach space. According to Megrelishvili's theorem, this must be the trivial representation. So {H} acts faithfully by affine isometries with trivial linear part, i.e. by translations. Contradiction: translations commute, but {H} is non abelian.


Notes of Adriane Kaichouh’s lecture nr 1

Non-embeddability of Urysohn space

Theorem 1 (Pestov 2008) Urysohn space {U} does not embed uniformly in any super-reflexive Banach space.

1. Urysohn space

1.1. Universality

Urysohn space is the universal complete separable metric space: it contains an isometric copy of every complete separable metric space. In particular, it contains {c_0}. So Pestov's theorem follows from Kalton's non-embeddability result for {c_0} into reflexive Banach spaces. The point of the course is to study the method, which differs from Kalton's.

As far as universality goes, {U} is not so spectacular: Banach and Mazur observed that {C([0,1],{\mathbb R})} is also universal. However, Urysohn space is ultrahomogeneous: any isometry between finite subsets of {U} extends to a global isometry (this generalizes to compact subsets). Nevertheless, {U} was long forgotten, until Katetov gave a new construction.

1.2. Katetov’s construction

For any finite subset {A\subset U} and any isometric embedding of {A} into a finite metric space {B}, there is a copy of {B} inside {U} containing {A}. So any time a point is prescribed by its distances to the points of {A}, distances that satisfy the triangle inequality, that point in fact already sits in {U}. This is the basic step of Katetov's construction, the “one point extension”.

Given a metric space {X}, let

\displaystyle  E(X)=\{f:X\rightarrow{\mathbb R}\,;\,\forall x,\,x'\in X,\,|f(x)-f(x')|\leq d(x,x')\leq f(x)+f(x')\}.

It is equipped with the sup distance, and {X} sits in it isometrically, via {x\mapsto d(x,\cdot)}. Consider {E_2(X)=E(E(X))}, and iterate. Let {U} be the completion of

\displaystyle  \begin{array}{rcl}  \bigcup_{n\in{\mathbb N}}E_n(X). \end{array}

Actually, we take only those functions of {E(X)} which have finite support in the following sense: {f\in E(X)} is kept if there is a finite subset {A\subset X} such that {f} is the Katetov extension of its restriction to {A}, i.e. {f} is the largest function compatible with its values on {A},

\displaystyle  \begin{array}{rcl}  f(x)=\min_{a\in A}\,(f(a)+d(a,x)). \end{array}

This is necessary to get separable spaces.
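The extension formula is easy to implement; a toy sketch (names are mine, not from the lecture) checking that the largest extension of prescribed values on a finite set {A} satisfies the two defining inequalities of {E(X)} everywhere:

```python
def katetov_extension(A, fA, d):
    """The largest function on X compatible with the prescribed values
    fA on the finite set A: f(x) = min over a in A of fA[a] + d(a, x)."""
    def f(x):
        return min(fA[a] + d(a, x) for a in A)
    return f

# Toy check on X = {0,...,9} with the usual distance: extend a legal
# system of values on A = {0, 5} and verify the Katetov inequalities
# |f(x) - f(y)| <= d(x, y) <= f(x) + f(y) on all of X.
X = range(10)
d = lambda a, b: abs(a - b)
f = katetov_extension([0, 5], {0: 2.5, 5: 2.5}, d)
assert all(abs(f(x) - f(y)) <= d(x, y) <= f(x) + f(y) for x in X for y in X)
```

Note that the extension preserves the prescribed values: here {f(0)=f(5)=2.5}.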

Naturality: isometries of {X} extend uniquely to isometries of {E(X)}. So they also extend to isometries of {U}.

Theorem 2 (Uspensky) {Isom(U)} is universal for all Polish groups (i.e. separable topological groups admitting a complete compatible distance).

Example. {Isom(U)} contains {Homeo^+([0,1])}.

Later, we shall use a result about that group (Megrelishvili 2001): The only continuous isometric linear representation of {Homeo^+([0,1])} on a reflexive Banach space is the trivial representation.

Uspensky relies on the following theorem.

Theorem 3 (Gao-Kechris) Every Polish group is a closed subgroup of some {Isom(X)}, {X} complete separable.

2. The extension property

Let us discuss a strengthening of ultrahomogeneity.

Say a metric space {X} has the extension property if for every finite subset {A\subset X}, there exists a finite subset {B\subset X} containing it such that every partial isometry of {A} (i.e. an isometry between subsets of {A}) extends to a global isometry of {B}.

Example. {{\mathbb N}} with its usual distance has the extension property. Hrushovski has shown that the random graph (percolation in the complete graph) has the extension property.

Theorem 4 (Solecki) {U} has the extension property.

This is a hard theorem.

By iterating the extension property applied to {A\cup B}, and then adding points, one obtains ultrahomogeneity.

Here is one more property that we shall need tomorrow.

Theorem 5 {Isom(U)} contains a dense, locally finite subgroup.

Here, locally finite means that every finitely generated subgroup is finite.

This subgroup is obtained as an increasing union of finite groups (isometry groups of finite subsets), as a consequence of a strengthening of the extension property: not only do isometries extend from {A} to {B}, but the extension can be performed by a group homomorphism {Isom(A)\rightarrow Isom(B)}.


Notes of Michal Kraus’s lectures

Assouad’s embedding theorem

1. Motivation

1.1. Doubling constant

Recall that the doubling constant of a metric space is the least {K} such that every {R}-ball can be covered by {K} balls of radius {R/2}.

Example. {{\mathbb R}^n} and its subspaces are doubling.

1.2. An open problem

Open problem: characterize those metric spaces which admit a bi-Lipschitz embedding in some Euclidean space.

These spaces are doubling, but the converse fails. There are various counterexamples; the Heisenberg group with its Carnot metric is one of them.

1.3. Snowflaking

If one admits a little bit of snowflaking, the answer becomes simpler.

Theorem 1 (Assouad 1983) For every {K\geq 1} and {\alpha\in(0,1)}, there exist {N} and {D} such that every metric space with doubling constant {K}, once snowflaked with exponent {\alpha} (i.e. with metric {d} replaced by {d^\alpha}), admits a {D}-bi-Lipschitz embedding into {{\mathbb R}^N}.

I will present Assouad's original proof, based on tensor products.

2. Tensor product of Hilbert spaces

2.1. Algebraic tensor product

One can view {V\otimes W} as the quotient of the vector space freely generated by {V\times W} by the subspace generated by the elements

\displaystyle \begin{array}{rcl} (a_1 v_1+a_2v_2,b_1w_1+b_2w_2)-\sum_{i,\,j=1}^2 a_ib_j (v_i,w_j). \end{array}

2.2. Inner product

If {V} and {W} are Hilbert spaces, there is a unique inner product on {V\otimes W} which extends

\displaystyle \begin{array}{rcl} \langle v_1\otimes w_1,v_2\otimes w_2\rangle=\langle v_1,v_2\rangle\langle w_1,w_2\rangle. \end{array}

Take the completion with respect to this norm. One gets a Hilbert space {V\hat{\otimes}W}, which has the following two properties:

  1. {|v\otimes w|=|v||w|},
  2. {\mathrm{dim}(V\hat{\otimes}W)=\mathrm{dim}(V)\mathrm{dim}(W)}.

Example. {L^2([0,1])\hat{\otimes}L^2([0,1])=L^2([0,1]\times[0,1])}.
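Both properties are immediate in finite dimensions, where the tensor product is the Kronecker product of coordinate vectors; a quick check (hypothetical sketch, names are mine):

```python
import math

def kron(v, w):
    """Coordinates of v ⊗ w in the product basis."""
    return [a * b for a in v for b in w]

def norm(u):
    return math.sqrt(sum(c * c for c in u))

v, w = [3.0, 4.0], [1.0, 2.0, 2.0]       # |v| = 5, |w| = 3
assert math.isclose(norm(kron(v, w)), norm(v) * norm(w))   # property 1
assert len(kron(v, w)) == len(v) * len(w)                  # property 2
```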

3. Proof of Assouad’s theorem

3.1. A Lemma

The first step is to map {X} to some Euclidean space in a Lipschitz manner (with a possibly large constant {1/r}) in such a way that distances of the order of {r} are not compressed. Then the following Lemma will complete the proof.

Lemma 2 Let {\tau\in(0,1)}. Assume that, for all {i}, there exists a map {\phi_i:X\rightarrow{\mathbb R}^m} such that

  1. If {d(x,x')\in(\tau^{i+1},\tau^i]}, then {d(\phi_i(x),\phi_i(x'))\geq A},
  2. {d(\phi_i(x),\phi_i(x'))\leq B\min\{\tau^{-i}d(x,x'),1\}}.

Then, for all {\alpha\in(0,1)}, {X^\alpha} embeds in {{\mathbb R}^N} with controlled dimension {N} and bi-Lipschitz distortion {D}.

Proof. Fix some origin {o\in X}. Let {f:X\rightarrow{\mathbb R}^m\otimes{\mathbb R}^{2n}} be defined by

\displaystyle \begin{array}{rcl} f(x)=\sum_{i\in{\mathbb Z}}\tau^{i\alpha}(\phi_i(x)-\phi_i(o))\otimes e_i. \end{array}

Here, {(e_i)_{1\leq i\leq 2n}} is an orthonormal basis of {{\mathbb R}^{2n}}, and the notation is extended to {i\in{\mathbb Z}} by {e_{i+2n}=e_i}.

Then, choosing {k} such that {d(x,x')\in(\tau^{k+1},\tau^k]},

\displaystyle \begin{array}{rcl} |f(x)-f(x')|&\leq& \sum_{i\in{\mathbb Z}}\tau^{i\alpha}|\phi_i(x)-\phi_i(x')|\\ &\leq& \sum_{i<k}\tau^{i\alpha}B\tau^{-i}d(x,x')+\sum_{i\geq k}\tau^{i\alpha}B\\ &\leq&B(\frac{1}{1-\tau^\alpha}+\frac{1}{1-\tau^{1-\alpha}})d(x,x')^\alpha. \end{array}

Note that this Lipschitz estimate does not depend on the yet undefined dimension {n}.

For the compression modulus, the main contribution will come from scales {\tau^i} for {i=k-n,\ldots,k+n}. This part of the sum has norm bounded below by that of only one term, {|\phi_k(x)-\phi_k(x')|}, since we are dealing with a combination of pairwise orthogonal vectors. This has size {Ad(x,x')^\alpha}.

The rest of the sum is dealt with by the Lipschitz estimate. The upper bound is

\displaystyle B(\frac{\tau^{n\alpha}}{1-\tau^\alpha}+\frac{\tau^{n(1-\alpha)}}{1-\tau^{1-\alpha}})d(x,x')^\alpha.

Choosing {n} large enough so that {Ad(x,x')^\alpha} beats this term gives the compression bound. Note that both the embedding dimension and the distortion blow up as {\alpha} tends to 1.

3.2. Existence of embeddings with compression controlled at a fixed scale

Let {c=\tau^{i+1}}. Pick a maximal {c}-net {Y}. By the doubling property, the number of points of {Y} in a ball of radius {c/\tau} is bounded by some {m}. We shall use an {m}-colouring of {Y} such that points at distance {\leq c/\tau} have different colours. To make one, order the points of {Y} and assign colours inductively: for each new point {y}, examine the colours already assigned in {B(y,c/\tau)} and pick one that has not yet been used (fewer than {m} colours are excluded, so a suitable colour is always available).
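This inductive colouring is just greedy graph colouring; a minimal sketch (names are mine), on a toy one-dimensional net:

```python
def greedy_colouring(points, dist, r, m):
    """Greedily assign colours 0..m-1 so that any two points within
    distance r get different colours; succeeds whenever each point has
    fewer than m earlier neighbours within distance r."""
    colour = {}
    for p in points:
        used = {colour[q] for q in colour if dist(p, q) <= r}
        colour[p] = min(c for c in range(m) if c not in used)
    return colour

# Toy net: the integers 0..19 with r = 3.  Each point sees at most 3
# earlier neighbours within distance 3, so m = 4 colours suffice.
col = greedy_colouring(range(20), lambda a, b: abs(a - b), 3, 4)
assert all(col[a] != col[b] for a in range(20) for b in range(20)
           if a != b and abs(a - b) <= 3)
```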

Pick an orthonormal basis {(e_i)} of {{\mathbb R}^m}. Set

\displaystyle \begin{array}{rcl} \phi=\sum_{y\in Y}\frac{1}{c}g_y e_{colour\,of\,y}, \end{array}

where {g_y} is a tent function of the distance to {y}. Then {\phi} is {\frac{m}{c}}-Lipschitz. For points {x}, {x'} of {X} at distance in {(c,c/\tau)}, the main contribution in the sum comes from points of {Y} in a {c/\tau}-ball; these have different colours, so {|\phi(x)-\phi(x')|} is bounded below by the largest term, corresponding to a point of {Y} close to {x}, whose size is 1.

This completes the proof.

4. Further results

Clearly (use Hausdorff dimension, for instance), as {\alpha} tends to 0, the embedding dimension must tend to infinity. What if {\alpha} tends to 1?

Theorem 3 (Naor-Neiman 2012) Assouad’s construction can be improved in order that embedding dimension depends on doubling constant only (provided {\alpha\geq 1/2}).

The new construction uses stochastic decompositions.

4.1. Questions

What about isometric embedding of snowflakes? Le Donne: the Besicovitch covering property is a snowflake invariant. The Carnot metric on the Heisenberg group does not have it, so snowflakes of that metric never embed isometrically into Euclidean space.

Back to the characterization of metric spaces bi-Lipschitz embeddable in Euclidean space. Being Euclidean (i.e. bi-Lipschitz embeddable in {L^2}; there is a workable criterion for this) and doubling may be necessary and sufficient. If true, this would provide a very useful tool in computer science: non-linear dimension reduction in Euclidean spaces.


Notes of Ana Khukhro’s lecture

Wall structures and coarse embeddings

Lots of spaces have Property A, great! This provides many coarse embeddings into Hilbert space. Nevertheless, there exist spaces which coarsely embed but do not satisfy Property A. I will describe the best known obstruction to Property A, expansion; it also prevents Hilbertian embeddings. Then I will describe another mechanism for the construction of embeddings: spaces with walls.

1. Is Property A equivalent to coarse embeddability in Hilbert space?

1.1. Expansion

A sequence of finite graphs is called an expander if

  • Size tends to infinity.
  • Valency stays uniformly bounded.
  • Cheeger constant stays bounded below.

Note the forthcoming conference in Neuchatel (Switzerland), December 1-5: Expanders everywhere!

Given such a sequence, form a metric space, their disjoint union, by requiring that distance between distinct graphs be larger than their diameters.

Fact. An expander does not have Property A and does not coarsely embed in Hilbert space.

1.2. A space which does not have Property A but does coarsely embed in Hilbert space

The following example is due to P. Nowak (2002). One deals with embeddings in {L^1} (from the point of view of coarse embeddings, this is equivalent to {L^2}).

Let {F} be a finite group, with generating set {F} itself; embed it bi-Lipschitzly in {L^1}. Consider the direct sums {F^n} with the obvious generating systems. Each embeds in {L^1} with the same distortion (direct sum of embeddings), so the disjoint union of the {F^n} embeds as well.

When {F={\mathbb Z}_2}, we are dealing with Hamming cubes; we shall see below an alternative embedding in {\ell^1}.

Nowak verifies that Property A fails. This requires some effort (not included in my notes).

2. Wall spaces

2.1. Definition

Let {\Gamma} be a graph. A wall on {\Gamma} is a subset of edges whose removal yields exactly two connected components. A wall structure on {\Gamma} is a set of walls {\mathcal{W}} such that each edge is contained in exactly one wall. Notation :

\displaystyle  \begin{array}{rcl}  W(x|y)=\{\textrm{walls separating }x\textrm{ and }y\}. \end{array}

The wall (pseudo) metric is defined by

\displaystyle  \begin{array}{rcl}  d_w(x,y)=|W(x|y)|. \end{array}

When {d_w} is a metric, it embeds isometrically in {\ell^1(\mathcal{W})} as follows. Fix an origin {o}. Map vertex {x} to indicator of {W(x|o)}.

Example. The graph metric on a tree is a wall metric. Each single edge is a wall. Take all of them.

Example. Hamming cube. The set of edges in one direction is a wall. Take all of them. The resulting metric equals the Hamming metric.
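These two examples can be checked mechanically. Here is a small Python sketch of my own (not from the lecture), verifying on the 4-dimensional Hamming cube that the direction walls give back the Hamming metric and that the map {x\mapsto 1_{W(x|o)}} is an isometric embedding into {\ell^1}.

```python
from itertools import product

n = 4
cube = list(product([0, 1], repeat=n))
origin = (0,) * n

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

# Wall i (the edges in direction i) separates x and y exactly when
# x and y differ in coordinate i.
def walls_separating(x, y):
    return {i for i in range(n) if x[i] != y[i]}

# Wall metric: d_w(x, y) = |W(x|y)|
def d_wall(x, y):
    return len(walls_separating(x, y))

# Embedding: x -> indicator of W(x|origin), a vector in l^1({walls})
def embed(x):
    sep = walls_separating(x, origin)
    return [1 if i in sep else 0 for i in range(n)]

def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

# Wall metric equals Hamming metric, and the embedding is isometric.
assert all(d_wall(x, y) == hamming(x, y) for x in cube for y in cube)
assert all(l1(embed(x), embed(y)) == d_wall(x, y) for x in cube for y in cube)
```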

2.2. Bounded geometry

Ultimately, we would like to provide examples of finitely generated groups without Property A (but which embed coarsely in {\ell^1}). For this, we would like to produce a graph counterexample and then embed it in a group. This requires the graph to have bounded geometry.

Nowak’s example does not have this property (valency tends to infinity), so we turn to different constructions.

3. Residual finiteness

Say a group {G} is residually finite if every non trivial element has a non trivial image under some homomorphism to some finite group. Equivalently, the intersection of all finite index normal subgroups of {G} is {\{e\}}.

This often happens.

Examples. Free groups, abelian groups, and linear groups are residually finite. It is an open question whether all hyperbolic groups are residually finite.
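As a toy illustration of the definition (my own, not from the lecture), take {G={\mathbb Z}}: every non-trivial element {k} is detected by reduction modulo {|k|+1}, a homomorphism onto a finite cyclic group.

```python
# For a nonzero integer k, reduction mod m = |k|+1 is a homomorphism
# from Z onto the finite group Z/mZ, and it sends k to a non-trivial
# element since 0 < |k| < m.  This witnesses residual finiteness of Z.
def finite_quotient_detecting(k):
    assert k != 0
    m = abs(k) + 1          # order of the finite quotient Z/mZ
    return m, k % m         # (quotient order, image of k in Z/mZ)

for k in list(range(-50, 0)) + list(range(1, 51)):
    m, image = finite_quotient_detecting(k)
    assert image != 0       # k survives in some finite quotient
```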

Definition 1 Given a sequence of nested finite index normal subgroups

\displaystyle  \begin{array}{rcl}  G>N_1>N_2>\cdots,\quad \bigcap N_i=\{e\}, \end{array}

the box space is the coarse disjoint union of {Cay(G/N_i,S_i)}, where {S_i} is the projection to {G/N_i} of a fixed generating system of {G}.

Theorem 2 (Guentner) The box space of a group {G} has property A if and only if {G} is amenable.

Indeed, F\o lner sets can be used for the box space. Conversely, the F\o lner sets of the box space can be averaged to provide F\o lner sets for {G}.

3.1. Derived series

Given a finitely generated group {G}, the derived {m}-series {N_i} is defined recursively by {N_1=G} and

\displaystyle  \begin{array}{rcl}  N_{i+1}=[N_i,N_i]N_i^m. \end{array}

The idea is to kill commutators and {m}-th powers, i.e. to produce the largest quotient {N_i/N_{i+1}} which is abelian of exponent {m}. Then {N_i/N_{i+1}} is a direct sum of cyclic groups {{\mathbb Z}_m}.
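For instance, for the free group {F} on two generators {a}, {b} and {m=2}, the first step gives

\displaystyle  \begin{array}{rcl}  N_2=[F,F]F^2,\quad F/N_2\cong {\mathbb Z}_2\oplus{\mathbb Z}_2, \end{array}

the two {{\mathbb Z}_2} factors being generated by the images of {a} and {b}.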

Theorem 3 (Arzhantseva-Guentner-Spakula 2012) Let {F} be a non abelian free group. Let {\{N_i\}} be its derived 2-series. Let {X} be the corresponding box space. Then {X} embeds coarsely in {\ell^2}.

3.2. Proof

View {F/N_{i+1}} as a cover of {F/N_i}. This will provide us with a wall structure on {F/N_{i+1}} whose wall metric is close to the original metric.

The covering map is constructed explicitly by lifting a maximal subtree of {Cay(G/N_i,S_i)} and specifying how the copies are glued together. The fibers are cubes, which are spaces with walls.

The walls of {Cay(G/N_{i+1},S_{i+1})} are inverse images, under the covering map, of edges of {Cay(G/N_i,S_i)}. The wall metric is smaller than the graph metric, and they coincide below the girth. Since the girth tends to infinity, the wall metric gets closer and closer to the graph metric. This yields the {\ell^1} embedding.

3.3. Generalization

What we really use is the existence of embeddings for the disjoint union of the quotient groups, not so much the wall structure.


Notes of Thibault Pillon’s lecture nr 2

1. Metric aspects

F\o lner's criterion shows that amenability is a coarse equivalence invariant of groups. A-T-menability is not.

1.1. Yu’s Property A

From now on, we deal with metric spaces {X}. {X} is assumed to be uniformly discrete. Yu pursued a purely (coarse) metric version of Bekka-Cherix-Valette’s theorem. He proposed a sufficient condition for coarse embeddability which, in the special case of groups, turned out to have a flavour similar to amenability and a-T-menability (it does not compare with either property).

Definition 1 (Yu 2000) A uniformly discrete metric space {X} has property A if for all {R}, {\epsilon>0}, there exists a family {\{A_x\}_{x\in X}} of subsets of {X\times{\mathbb N}} such that

  1. \displaystyle \begin{array}{rcl} \frac{|A_x\Delta A_y|}{|A_x\cap A_y|}<\epsilon\quad\textrm{ whenever }d(x,y)<R. \end{array}

  2. There exists {S>0} such that {A_x\subset B(x,S)\times{\mathbb N}}.

The factor {{\mathbb N}} is there to allow for sets with multiplicities.

In other words, (1) mimics F\o lner’s criterion. Sets are indexed by points in order to cope with the absence of equivariance.

Examples. Finite metric spaces have property A. Amenable groups have property A.

1.2. Trees have property A

Let {T} be an infinite tree. Fix a root {o}. For {n>0}, set {A_x^{(n)}=} vertices of the geodesic from {x} to {o} with multiplicity 1, until {n} points have been counted. If {\delta=d(x,o)<n}, put multiplicity {n-\delta} on {o}. Then

\displaystyle \begin{array}{rcl} |A_x\Delta A_y|\leq 2d(x,y),\quad |A_x\cap A_y|\geq n-2d(x,y). \end{array}
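On the simplest infinite tree, a half-line rooted at {o=0}, these estimates can be verified mechanically. The following Python sketch is my own (note that the symmetric difference can be as large as {2d(x,y)}, a constant which is sharp already on a path).

```python
from collections import Counter

def A(x, n):
    """Multiset A_x^{(n)} on the half-line 0-1-2-... rooted at o = 0."""
    delta = x                                  # delta = d(x, o)
    if delta >= n:
        return Counter(range(x, x - n, -1))    # first n vertices toward o
    c = Counter(range(x, 0, -1))               # the delta vertices above o
    c[0] = n - delta                           # pad o with multiplicity
    return c

def sym_diff(a, b):                            # multiset |a \Delta b|
    return sum(abs(a[k] - b[k]) for k in set(a) | set(b))

def inter(a, b):                               # multiset |a \cap b|
    return sum(min(a[k], b[k]) for k in set(a) & set(b))

n, L = 10, 40
for x in range(L + 1):
    for y in range(L + 1):
        d = abs(x - y)
        Ax, Ay = A(x, n), A(y, n)
        assert sym_diff(Ax, Ay) <= 2 * d
        if d <= n // 2:                        # bound is vacuous otherwise
            assert inter(Ax, Ay) >= n - 2 * d
```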

1.3. Asymptotic dimension

Let {X} be a metric space, {\mathcal{U}} a cover of {X}. Its {R}-multiplicity is the maximum number of elements of {\mathcal{U}} meeting a given ball of radius {R}. {\mathcal{U}} is uniformly bounded if there is a common bound on the diameters of all elements of {\mathcal{U}}.

Definition 2 Say {X} has asymptotic dimension {\leq n} if for all {R>0}, there is a uniformly bounded cover of {R}-multiplicity {\leq n+1}.
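For example (a sketch of my own, not from the lecture), {{\mathbb Z}} has asymptotic dimension {\leq 1}: the disjoint intervals of length {3R} form a uniformly bounded cover, and any ball of radius {R} has diameter {2R<3R}, so it meets at most two consecutive intervals.

```python
R = 7
L = 3 * R   # interval length: any radius-R ball has diameter 2R < L

def member(k):
    """k-th member of the cover of Z: the interval [k*L, (k+1)*L)."""
    return range(k * L, (k + 1) * L)

# Every ball B(x, R) meets at most 2 members, so the R-multiplicity
# of this uniformly bounded cover is <= 2 = 1 + 1.
for x in range(-200, 201):
    ball = range(x - R, x + R + 1)
    touched = {p // L for p in ball}   # indices of members met by the ball
    assert len(touched) <= 2
```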

Examples. {{\mathbb Z}^n} has asymptotic dimension {n}.

{{\mathbb Z}^{(\infty)}}, {{\mathbb Z}\wr{\mathbb Z}} have infinite asymptotic dimension.

Gromov: Hyperbolic metric spaces have finite asymptotic dimension. However, it can be arbitrarily large.

Theorem 3 (Higson-Roe 2000) Finite asymptotic dimension implies Property A.

The proof is lengthy and not even included in my notes.

Theorem 4 (Yu 2000) Property A implies coarse embeddability in Hilbert space.

The construction is directly inspired from Bekka-Cherix-Valette. See below a more precise statement.

2. Quantitative Property A

Definition 5 (Tessera) Let {X} be a uniformly discrete metric space. Let {J:{\mathbb R}_+\rightarrow{\mathbb R}_+} be a nondecreasing function. Say that {X} satisfies Property A with gauge {J} and exponent {p} if there exists a sequence of families {(A_x^n)_{x\in X}} of subsets of {X\times {\mathbb N}} such that

  1. {|A_x^n|\geq J(n)^p}.
  2. {|A_x^n\Delta A_y^n|\leq C\,d(x,y)^p}.
  3. {A_x^n} is contained in {B(x,n)\times{\mathbb N}}.

In other words, we want quantitative control on the sizes and diameters of the sets in Yu's Property A.

Theorem 6 From {J}, one cooks up a class of nondecreasing functions {f} which are shown to be compression functions for Lipschitz coarse embeddings of {X} into {\ell^p}.

Proof. Fix a base point {o\in X}. Embed {X} in the {\ell^p} direct sum of countably many copies of {\ell^p(X\times{\mathbb N})} as follows. The {n}-component maps point {x\in X} to

\displaystyle \begin{array}{rcl} \frac{f(2^n)}{J(2^n)}\left(1_{A_x^{2^n}}-1_{A_o^{2^n}}\right). \end{array}

This is indeed in {\ell^p} provided

\displaystyle \begin{array}{rcl} \int_{1}^{\infty}(\frac{f(t)}{J(t)})^{p}\frac{dt}{t}<\infty. \end{array}
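Indeed, the integral is just the continuous version of the dyadic sum appearing in the norm estimate: assuming {f/J} varies by at most a bounded factor on each dyadic interval,

\displaystyle \begin{array}{rcl} \sum_{n\geq 0}\left(\frac{f(2^n)}{J(2^n)}\right)^{p}\approx\int_{1}^{\infty}\left(\frac{f(t)}{J(t)}\right)^{p}\frac{dt}{t}, \end{array}

since {\int_{2^n}^{2^{n+1}}\frac{dt}{t}=\log 2} for every {n}.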

In fact, the embedding is Lipschitz with compression {f}.

Examples. Spaces with subexponential growth have Property {A(J,p)} for

\displaystyle \begin{array}{rcl} J(t)=\left(\frac{t}{\log v(t)}\right)^{1/p}, \end{array}

where {v} denotes the volume growth function.

Doubling metric spaces have Property {A(J,p)} with {J(t)=t} and therefore coarsely embed into {\ell^p}.
