## Notes of Samuel Lelièvre’s Orsay lecture

Random groups

with Moon Duchin, Kasia Jankiewicz, Shelby Kilmer, John Mackay, Andrew Sánchez. This is the outcome of an undergraduate research cluster that took place last year at Tufts, Boston (12 participants, 6 weeks).

1. Gromov's density model

In the sphere ${S_\ell}$ of radius ${\ell}$ of the free group ${F_m}$, draw ${n}$ elements uniformly and independently at random. Consider the normal subgroup ${N}$ they generate, and the quotient group ${G=F_m/N}$.

Does one obtain non-isomorphic groups? This is not clear: it is not the same as drawing an isomorphism class of groups with a presentation of type ${(m,n)}$ at random. We shall see that in certain regimes, the resulting group is generically trivial!

Why the sphere? It helps a lot. In any case, in the ball, most elements lie on the boundary.

Terminology. The density of the drawing is the real number ${d}$ such that

$\displaystyle \begin{array}{rcl} n=|S_{d\ell}|. \end{array}$

In other words, one draws not a fraction ${d}$, but a power ${d}$, of the total number of elements.

Given a function ${\ell\mapsto N(\ell)}$, a property of groups is said to be asymptotically almost sure if the probability that it holds tends to 1 as ${\ell}$ tends to infinity, with ${m}$ fixed and ${n=N(\ell)}$.

How should one choose ${\ell\mapsto N(\ell)}$? It seems that fixing the density, i.e. taking ${n=(2m-1)^{d\ell}}$, is a good choice.
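The model is easy to simulate. Here is a minimal sketch in Python (the encoding of the generators as integers ${0,\dots,2m-1}$, with inversion given by flipping the last bit, is a convention chosen here, not part of the lecture):

```python
import random

def inverse(letter):
    # generator 2i and its inverse 2i+1 differ in the last bit
    return letter ^ 1

def sphere_size(m, ell):
    # |S_ell| = 2m(2m-1)^(ell-1): reduced words of length ell in F_m
    return 2 * m * (2 * m - 1) ** (ell - 1)

def random_reduced_word(m, ell, rng=random):
    # uniform element of the sphere S_ell: first letter uniform among 2m,
    # each next letter uniform among the 2m-1 letters that do not cancel
    word = [rng.randrange(2 * m)]
    for _ in range(ell - 1):
        nxt = rng.randrange(2 * m - 1)
        if nxt >= inverse(word[-1]):
            nxt += 1  # skip the forbidden (cancelling) letter
        word.append(nxt)
    return word

def random_presentation(m, ell, d, rng=random):
    # density model: draw n = (2m-1)^(d*ell) relators of length ell
    n = round((2 * m - 1) ** (d * ell))
    return [random_reduced_word(m, ell, rng) for _ in range(n)]
```

For instance, ${m=2}$, ${\ell=3}$ gives ${|S_3|=4\cdot 9=36}$, and ${d=0.3}$, ${\ell=10}$ yields 27 relators.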

Theorem 1 (Gromov 1993)

1. If ${d>\frac{1}{2}}$, then asymptotically almost surely ${G}$ has at most 2 elements.
2. If ${d<\frac{1}{2}}$, then asymptotically almost surely ${G}$ is infinite, hyperbolic, torsion-free, of dimension 2, and contains surface groups.

Other threshold effects are known:

1. at ${d=\frac{1}{5}}$, the Dehn property;
2. at ${d=\frac{1}{12}}$, the small cancellation property ${C'(\frac{1}{6})}$;
3. between ${d=\frac{1}{5}}$ and ${\frac{1}{3}}$, Kazhdan's property (T).

2. Results

Theorem 2 (DJKLMS 2015) Consider densities converging to ${\frac{1}{2}}$: ${d(\ell)=\frac{1}{2}-f(\ell)}$.

1. If ${f(\ell)\leq \frac{\log\ell}{4\ell}-\frac{\log\log\ell}{\ell}}$, then asymptotically almost surely ${G}$ has at most 2 elements.
2. If ${f(\ell)\geq 10^5 \frac{(\log\ell)^{1/3}}{\ell^{1/3}}}$, then asymptotically almost surely ${G}$ is infinite and hyperbolic.

3. Proofs

3.1. The trivial side

We relied on notes by Gady Kozma. The idea is to increase the effective density. If two relators have a long common branch and short remaining parts, their difference is a short relator. One views ${G}$ as a quotient of a random group with smaller length ${\ell}$ and density larger than ${\frac{1}{2}}$, and then we are done.

Probabilistic pigeonhole principle (birthday paradox): if ${\ell\rightarrow\infty}$ and ${n\rightarrow\infty}$ with ${\sqrt{\ell}=o(n)}$, then asymptotically almost surely there is a coincidence when ${n}$ objects are placed into ${\ell}$ drawers.
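This birthday-paradox behaviour is easy to observe numerically. A sketch (the parameters below are illustrative only):

```python
import random

def has_collision(num_objects, num_drawers, rng):
    # drop objects into drawers uniformly at random;
    # report whether some drawer receives two objects
    seen = set()
    for _ in range(num_objects):
        drawer = rng.randrange(num_drawers)
        if drawer in seen:
            return True
        seen.add(drawer)
    return False

def collision_frequency(num_objects, num_drawers, trials=200, seed=0):
    rng = random.Random(seed)
    hits = sum(has_collision(num_objects, num_drawers, rng) for _ in range(trials))
    return hits / trials
```

With ${10^6}$ drawers, ${10^4}$ objects (far above the square root) collide in essentially every trial, while 10 objects almost never do.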

One analyses the influence of the letters on one another in random reduced words. Set ${\mu=\frac{1}{2m-1}}$,

$\displaystyle \begin{array}{rcl} S_n=\sum_{k=0}^{n-1}(-\mu)^k \rightarrow \frac{1}{1+\mu}. \end{array}$

Write a random reduced word ${x_0x_1\ldots x_n\ldots}$. For ${n}$ even,

$\displaystyle \begin{array}{rcl} \mathop{\mathbb P}(x_n=x_0)=\mu S_{n-1},\quad \mathop{\mathbb P}(x_n=y\not=x_0)=\mu S_n. \end{array}$

For ${n}$ odd,

$\displaystyle \begin{array}{rcl} \mathop{\mathbb P}(x_n=x_0^{-1})=\mu S_{n-1},\quad \mathop{\mathbb P}(x_n=y\not=x_0^{-1})=\mu S_n. \end{array}$

Consequently, all these probabilities tend to ${\frac{\mu}{1+\mu}=\frac{1}{2m}}$.
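These formulas can be checked by exact computation: a random reduced word is a Markov chain on the ${2m}$ letters, each step uniform over the ${2m-1}$ letters that do not cancel the previous one. A sketch (letters encoded as integers with inverse = flip of the last bit, an encoding chosen here for convenience):

```python
from fractions import Fraction

def letter_distribution(m, n):
    # exact distribution of x_n in a random reduced word with x_0 = 0
    # (the inverse of letter a is a ^ 1)
    mu = Fraction(1, 2 * m - 1)
    dist = {a: Fraction(0) for a in range(2 * m)}
    dist[0] = Fraction(1)
    for _ in range(n):
        new = {a: Fraction(0) for a in range(2 * m)}
        for a, p in dist.items():
            for b in range(2 * m):
                if b != a ^ 1:        # b must not cancel a
                    new[b] += p * mu  # each allowed letter has probability mu
        dist = new
    return dist

def S(m, n):
    mu = Fraction(1, 2 * m - 1)
    return sum((-mu) ** k for k in range(n))

m, n = 2, 6  # n even
mu = Fraction(1, 2 * m - 1)
dist = letter_distribution(m, n)
assert dist[0] == mu * S(m, n - 1)  # P(x_n = x_0) = mu * S_{n-1}
assert dist[2] == mu * S(m, n)      # P(x_n = y), y != x_0
```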

Proposition 3 If there exists a function ${\ell\mapsto k(\ell)\leq \ell}$ such that

1. ${k-2\ell f(\ell)\rightarrow\infty}$,
2. ${\frac{\ell-2}{(2k+2)(2m-1)^{2k}}\rightarrow\infty}$,

then asymptotically almost surely, ${|G|\leq 2}$.

Indeed, the probabilistic pigeonhole principle and the first condition yield a reduced word ${w}$ of length ${2k}$ in ${F_m}$ such that ${w=1}$ in ${G}$. One uses it to shorten the other relators. To this end, observe that the tail (from position ${k+1}$ to ${\ell}$) of a randomly drawn word is a random word of length ${\ell-k}$. The probabilistic pigeonhole principle gives two random relators whose tails coincide but which differ at the ${k}$-th letter; their difference has length ${2k}$. Denote by ${R_w}$ the set of drawn words beginning with ${w}$. For letters ${x,y,z}$, the sets ${R_{xz}}$ and ${R_{yz}}$ are typically disjoint and non-empty; this yields a partition into ${2m(2m-1)}$ subsets…

3.2. The infinite hyperbolic side

We follow Yann Ollivier's book. The point is to estimate the probability that there exists a van Kampen diagram of size ${\leq K}$ violating a quadratic isoperimetric inequality with small constant. To control dependence effects, Ollivier counts abstract diagrams (cellulations of the plane) and estimates the probability that an abstract diagram of size ${\leq K}$ be realizable by the randomly drawn set of relators (i.e. that one can attach labels to the edges so that the words read along the boundaries of the faces are cyclic conjugates of the drawn relators). We use his realization probability estimate unchanged; only his count of abstract diagrams needs to be reworked.

## Notes of David Hume’s Orsay lecture

Expanders and separation

How different can expanders be?

1. Expanders

Definition 1 An ${\epsilon}$-expander is a sequence of finite graphs ${G_n}$ such that each ${G_n}$ has Cheeger constant ${>\epsilon}$.

A ${(d,\epsilon)}$-expander is a sequence of finite graphs ${G_n}$ such that each ${G_n}$ has maximal degree ${<d}$ and Cheeger constant ${>\epsilon}$.

Margulis: Cayley graphs of finite quotients of ${SL(3,{\mathbb Z})}$ (or of any residually finite group with property (T)) form an expander.

Lubotzky-Phillips-Sarnak and Lubotzky pursued the finite group line.

Wigderson : zig-zag construction of expanders.

2. Connection to Topology

Borel conjecture : given two closed aspherical manifolds, any homotopy equivalence is homotopic to a homeomorphism.

This is hard. An important related (philosophically weaker) question is the Novikov conjecture, which has partial solutions.

Yu : if a finitely generated group ${G}$ coarsely embeds in Hilbert space, then Novikov conjecture holds for all closed manifolds with fundamental group ${G}$.

Since expanders do not coarsely embed into Hilbert space, Gromov asked whether there exist finitely generated groups that coarsely contain expanders.

Gromov (followed by Coulon and Arzhantseva-Delzant) provided a slightly weaker construction. This was made more precise by

Osajda : there exist expander families with ${C'(1/6)}$ small cancellation labellings.

Idea: graphical small cancellation. Pick finite graphs ${G_n}$, ${n\in I}$, with oriented edges, labelled by a finite set ${S}$. Define

$\displaystyle G_I=\langle S\mid \text{words read along closed paths of } G_n,\ n\in I\rangle.$

Then graphical small cancellation theory gives sufficient conditions for the disjoint union of the ${G_n}$ to embed isometrically into the Cayley graph of ${G_I}$.

Theorem 2 There exists a continuum of ${(d,\epsilon)}$-expanders ${G_r}$, ${r}$ real number, such that ${G_r}$ does not coarsely embed into ${G_s}$ unless ${r=s}$.

Theorem 3 There exists a continuum of finitely generated groups ${G_r}$, ${r}$ real number, such that ${G_r}$ does not coarsely embed into ${G_s}$ unless ${r=s}$.

3. Separation

To distinguish expanders, we use separation.

Definition 4 (Benjamini-Schramm-Timar) ${G}$ finite graph. The cut-size of ${G}$ is the smallest ${k}$ such that there exists a subset ${A}$ of vertices of size ${k}$ such that every connected component of the complement of ${A}$ in ${G}$ contains at most half of the vertices of ${G}$.

For an infinite graph ${X}$, the separation profile ${sep_X}$ is the function

$\displaystyle sep_X(n)= \max \{\textrm{cut-size of a subgraph of size } < n\}.$
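On small graphs, cut-size (and hence the separation profile) can be computed by brute force. A sketch, exponential in the number of vertices, so for toy examples only:

```python
from itertools import combinations

def components(vertices, edges, removed):
    # connected components after deleting the vertices in `removed`
    alive = set(vertices) - set(removed)
    adj = {v: set() for v in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    comps, seen = [], set()
    for v in alive:
        if v not in seen:
            comp, stack = set(), [v]
            while stack:
                w = stack.pop()
                if w not in comp:
                    comp.add(w)
                    stack.extend(adj[w])
            seen |= comp
            comps.append(comp)
    return comps

def cut_size(vertices, edges):
    # smallest |A| such that every component of G - A has at most |V|/2 vertices
    n = len(vertices)
    for k in range(n + 1):
        for A in combinations(vertices, k):
            if all(len(c) <= n / 2 for c in components(vertices, edges, A)):
                return k

# a path on 5 vertices is cut by its middle vertex; a 6-cycle needs 2 vertices
assert cut_size(range(5), [(i, i + 1) for i in range(4)]) == 1
assert cut_size(range(6), [(i, (i + 1) % 6) for i in range(6)]) == 2
```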

Separation behaves rather differently from Cheeger constant.

Example 1 (Benjamini-Schramm-Timar) For bounded geometry graphs: if ${r : X\rightarrow Y}$ is Lipschitz with fibers of bounded size, then ${sep_X \leq C\, sep_Y + C}$.

Proposition 5 (Benjamini-Schramm-Timar)

$\displaystyle \begin{array}{rcl} sep_{{\mathbb Z}^k} &=& n^{(k-1)/k} .\\ sep_{H^k} &=& \begin{cases} \log n & \text{ if }k=2, \\ n^{(k-2)/(k-1)}& \text{ if }k\geq 3. \end{cases}\\ sep_{F_k} &=& 1.\\ sep_{F_2\times F_2} &=&\frac{n}{\log n}. \end{array}$

Theorem 6 ${X}$ infinite graph. Then ${sep_X}$ is not sublinear ${\Leftrightarrow X}$ contains an ${\epsilon}$-expander. If furthermore ${X}$ has bounded degree, then ${sep_X}$ is not sublinear ${\Leftrightarrow X}$ contains a ${(d,\epsilon)}$-expander.

4. Proof

4.1. Characterization of expanders

(i) If ${G}$ is a finite graph on ${n}$ vertices with Cheeger constant ${h > \epsilon}$, then ${cut(G) > n \epsilon/4}$. Indeed, let ${C}$ be a cut set for ${G}$. A greedy search provides a collection ${D}$ of components of ${G\setminus C}$ with total size between ${n/4}$ and ${n/2}$. Its boundary is contained in ${C}$.

(ii) Conversely, let ${G}$ be a finite graph of size ${n}$. There is a subgraph ${G'}$ of size ${\geq n/2}$ such that ${h(G')\geq cut(G)/2n}$.

4.2. Different separation profiles

To construct expanders with different separation profiles that can be embedded into groups, pick a ${(d,\epsilon)}$-expander ${(G_n)}$ such that

1. girth ${g(G_{n+1}) > 2 |G_n|}$,
2. ${|G_n| > 3 |G_{n-1}|}$.

Observe that if ${M}$, ${N}$ are infinite subsets of the integers with ${M\setminus N}$ infinite, then ${sep_{G_M}}$ is not bounded above by ${sep_{G_N}}$. Indeed, below the girth, the separation profile is dramatically low.

When embedded into groups, the receiving groups have a separation profile governed by the graph at certain scales, and are essentially hyperbolic at others. But hyperbolic implies polynomial separation profile.

## Notes of Daniel Galicer’s lecture

The minimal distortion needed to embed a binary tree into ${\ell^p}$

I will cover theorems very similar to those in the previous talk, but the methods will be different, with a more combinatorial flavour.

1. Ramsey theory

1.1. Friends and strangers problem

How many people must attend a party in order to ensure that at least three of them know each other, or at least three of them are mutual strangers?

View the ${n}$ participants as the vertices of a complete graph whose edges are coloured with two colours. We want a triangle painted in a single colour.

The answer is ${n=6}$.

1.2. Ramsey’s theorem

Theorem 1 (Ramsey) Given ${r}$, ${s\in{\mathbb N}}$, there exists ${R(r,s)}$ such that if ${n\geq R(r,s)}$, in any graph with ${n}$ vertices, there exists either an ${r}$-clique (complete subgraph) or an ${s}$-independent set (no edges between them).

We just saw that ${R(3,3)=6}$. It is known that ${R(4,4)=18}$.
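Both values ${R(3,3)=6}$ and ${R(3,3)>5}$ are small enough for exhaustive verification; a sketch (checking ${R(4,4)=18}$ this way is hopeless: ${2^{153}}$ colourings):

```python
from itertools import combinations, product

def has_mono_triangle(n, colour):
    # colour maps each edge (i, j), i < j, of K_n to 0 or 1
    return any(
        colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_has_mono_triangle(n):
    # exhaust all 2-colourings of the edges of K_n
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, bits)))
        for bits in product([0, 1], repeat=len(edges))
    )

assert every_colouring_has_mono_triangle(6)      # so R(3,3) <= 6
assert not every_colouring_has_mono_triangle(5)  # so R(3,3) > 5
```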

If aliens threatened Earth with war unless we could give them the value of ${R(5,5)}$, we should put all our efforts into finding it. If they asked for ${R(6,6)}$, we had better get ready for war (Erdős).

So do not think that these numbers are easy to compute. Finding rough estimates on them is a theory in itself.

2. Ramsey and embeddings

2.1. Embedding trees into ${p}$-convex Banach spaces

See Li's lecture for the definition of ${p}$-convexity. Hilbert space is 2-convex. ${L^p}$ is ${p}$-convex if ${p\geq 2}$ and 2-convex if ${1<p\leq 2}$.

Theorem 2 Let ${X}$ be a ${p}$-convex Banach space. Let ${B_n}$ be the complete rooted binary tree of depth ${n}$. For any embedding ${f:B_n\rightarrow X}$,

$\displaystyle \begin{array}{rcl} dist(f)\geq\Omega((\log n)^{1/p}). \end{array}$

2.2. Hanner’s inequality

If ${p>2}$,

$\displaystyle \begin{array}{rcl} (|x|+|y|)^p +||x|-|y||^p \geq |x+y|^p+|x-y|^p\geq 2(|x|^p+|y|^p). \end{array}$

If ${p<2}$, the inequalities are reversed.

2.3. Forks

A 4-tuple of points ${\{x_0,x_1,x_2,x'_2\}}$ is a ${\delta}$-fork if ${\{x_0,x_1,x_2\}}$ and ${\{x_0,x_1,x'_2\}}$ are ${(1+\delta)}$-isometric to ${\{0,1,2\}\subset{\mathbb R}}$ (mapping ${x_1}$ to 1).

Lemma 3 In a ${p}$-convex Banach space, ${p\geq 2}$, every ${\delta}$-fork satisfies

$\displaystyle \begin{array}{rcl} |x_2-x'_2|=|x_0-x_1|O(\delta^{1/p}). \end{array}$

Proof. One can assume that ${x_0=0}$, ${|x_1|=1}$. Let

$\displaystyle \begin{array}{rcl} z:=x_1+\frac{x_2-x_1}{|x_2-x_1|}. \end{array}$

Then ${|z-x_1|=1}$, ${|z-x_2|\leq 2\delta}$. Apply ${p}$-convexity inequality to unit vectors ${x=x_1}$ and ${y=z-x_1}$. Their midpoint is ${\frac{z}{2}}$, and ${|\frac{z}{2}|\geq 1-\delta}$. Therefore ${|x-y|\leq C\,\delta^{1/p}}$. Furthermore,

$\displaystyle \begin{array}{rcl} |x_2-2x_1|\leq |x_2-z|+|z-2x_1|=O(\delta^{1/p}),\quad |x'_2-2x_1|=O(\delta^{1/p}), \end{array}$

thus ${|x_2-x'_2|=O(\delta^{1/p})}$.

2.4. Combinatorics in rooted trees

Let ${T}$ be a rooted tree. Let ${SP(T)}$ denote the set of pairs ${(x,y)}$ of vertices with ${x}$ on the path from the root to ${y}$.

Denote by ${T_{k,h}}$ denote the complete ${k}$-ary tree of depth ${h}$.

Lemma 4 Let ${k\geq r^{(h+1)^2}}$. Colour ${SP(T_{k,h})}$ in ${r}$ colours. Then there is a copy ${T'}$ of ${B_h}$ such that the colour of any pair ${(x,y)\in SP(T')}$ only depends on the levels of ${x}$ and ${y}$.

Claim. If the leaves of ${T_{k,h}}$ are coloured with ${r'}$ colours and ${k>r'}$, then there is a copy of ${B_h}$ inside ${T_{k,h}}$ with monochromatic leaves.

This is proven by induction on ${h}$. Below the root, there are ${k}$ trees isomorphic to ${T_{k,h-1}}$. By induction, in each of them, there is a binary subtree ${B_{h-1}}$ with monocoloured leaves. Since ${k>r'}$, two of these binary subtrees have the same leaf colour. Connect them via the root, this yields a binary subtree ${B_h}$ with monocoloured leaves.
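The induction in the Claim is effective. A sketch (encoding a leaf of ${T_{k,h}}$ as its path from the root, a tuple of child indices; the colouring is passed as a function, and all names here are ad hoc):

```python
def mono_binary_subtree(k, h, colour, prefix=()):
    # returns (c, leaves): the 2^h leaves of a binary subtree of T_{k,h}
    # below `prefix`, all of colour c; requires k > number of colours
    if h == 0:
        return colour(prefix), [prefix]
    seen = {}
    for child in range(k):
        c, leaves = mono_binary_subtree(k, h - 1, colour, prefix + (child,))
        if c in seen:
            # pigeonhole: two children carry binary subtrees of the same
            # colour; joining them below `prefix` gives a copy of B_h
            return c, seen[c] + leaves
        seen[c] = leaves
    raise ValueError("needs k > number of colours")

# toy example: ternary tree of depth 3, leaves coloured by parity (2 colours)
c, leaves = mono_binary_subtree(3, 3, lambda leaf: sum(leaf) % 2)
assert len(leaves) == 2 ** 3
assert all(sum(l) % 2 == c for l in leaves)
```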

Back to the proof of the Lemma. We label each leaf ${z}$ of ${T_{k,h}}$ by a vector whose components are the colours of the ${\frac{h(h+1)}{2}}$ pairs ${(x,y)}$ along the path from the root ${o}$ to ${z}$. So the number of labels is ${r'\leq r^{h(h+1)/2}< r^{(h+1)^2}\leq k}$. According to the Claim, there is a binary subtree ${B_h}$ with monocoloured leaves, meaning that the colours of pairs depend only on their depths.

2.5. Matousek’s proof of the Theorem

We shall use the following easy facts.

Lemma 5 ${T_{k,h}}$ embeds into ${B_n}$ for ${n=2h[\log_2 k]}$.

Lemma 6 (Path embedding lemma) Given ${\alpha}$, ${\beta\in(0,1)}$, there exists ${C}$ such that for every distance non-decreasing map

$\displaystyle \begin{array}{rcl} f:\{0,1,\ldots,h\}\rightarrow M \end{array}$

into some metric space ${M}$, with ${h\geq 2^{CK^{\alpha}}}$ and ${K=Lip(f)}$, there exists an arithmetic progression

$\displaystyle \begin{array}{rcl} Z=\{x,x+a,x+2a\} \end{array}$

such that the restriction of ${f}$ to ${Z}$ is ${(1+\epsilon)}$-isometric, for ${\epsilon=\epsilon(\alpha,\beta,a)}$.

Let ${f:B_n\rightarrow X}$ be a distance non-decreasing map of distortion ${K=(\log n)^{1/p}}$. We shall look for forks in the image. We start with a complete ${k}$-ary subtree ${T_{k,h}}$ of ${B_n}$. We colour the elements of ${SP(T_{k,h})}$ according to the distortion of ${f}$: the colour is

$\displaystyle \begin{array}{rcl} \lfloor\frac{K^p}{\beta}\frac{|f(x)-f(y)|}{d_{T_{k,h}}(x,y)}\rfloor, \end{array}$

${\beta}$ a small constant.

Thanks to our Ramsey type Lemma, we find a complete binary subtree ${B_h}$ in ${T_{k,h}}$ with pairs coloured by their depths only. Along each path from the root to a leaf, we find an arithmetic progression ${\{y_0,y_1,y_2\}}$ of step ${a}$ along which ${f}$ is nearly isometric. Let ${y'_2}$ be a node situated at the same depth as ${y_2}$ and at distance ${a}$ from ${y_1}$. The colour=depth property implies that, as far as distances between images under ${f}$ are concerned, ${\{y_0,y_1,y'_2\}}$ behaves nearly as ${\{y_0,y_1,y_2\}}$ does, i.e. ${f(\{y_0,y_1,y_2,y'_2\})}$ is a fork. Therefore ${d(f(y_2),f(y'_2))}$ is small, whereas ${d(y_2,y'_2)=2a}$.

## Notes of Sean Li’s lecture

Lower bounding distortion via metric invariants

Let ${X}$ and ${Y}$ be metric spaces. Recall that ${c_Y(X)}$ is the infimum of bi-Lipschitz distortions of maps ${X\rightarrow Y}$. Typically, ${Y}$ is a Banach space and ${X}$ is combinatorial (a finite metric space). When ${Y=\ell_2}$, one sets

$\displaystyle \begin{array}{rcl} c_2(n)=\sup\{c_{\ell_2}(X)\,;\,X\textrm{ an }n\textrm{-point metric space}\}. \end{array}$

Theorem 1 (Enflo 1970) ${c_2(n)}$ tends to infinity as ${n}$ tends to infinity.

Enflo gave a lower bound of ${\sqrt{\log n}}$. This is not sharp. The sharp bound is ${\log n}$. The upper bound is due to Bourgain (1986) and the lower bound to Linial-London-Rabinovich (1994) – they used expanders.

1. Enflo’s lower bound for the Hamming cube

Enflo gave a sharp distortion lower bound, but for a suboptimal example, the Hamming cube ${D_n=\{0,1\}^n}$ with ${\ell_1}$ metric (= graph metric on 1-skeleton of cube).

Proposition 2 (Enflo) For any ${f:D_n\rightarrow\ell_2}$,

$\displaystyle \begin{array}{rcl} \sum_{diagonals\,xy}|f(x)-f(y)|^2\leq \sum_{edges\,uv}|f(u)-f(v)|^2. \end{array}$

Let us begin with a

Lemma 3 (Short diagonals) In ${\ell_2}$, for any quadrilateral, the sum of squares of diagonals is less than the sum of squares of sides.

$\displaystyle \begin{array}{rcl} |x-z|^2+|y-w|^2\leq|x-y|^2+|y-z|^2+|z-w|^2+|w-x|^2. \end{array}$

Proof. By summing over coordinates, it suffices to prove this for points on a line. There, it boils down to triangle inequality.

1.1. Proof of Proposition

By induction on ${n}$. The edges of ${D_n}$ split into ${E_0}$ and ${E_1}$ (the edges of two opposite faces) and ${E'}$ (the edges joining these faces). Let ${diag_0}$ and ${diag_1}$ be the diagonals of the opposite faces (these are not diagonals of ${D_n}$). The induction hypothesis gives

$\displaystyle \begin{array}{rcl} \sum_{diag_0}|f(x)-f(y)|^2&\leq &\sum_{E_0}|f(u)-f(v)|^2,\\ \sum_{diag_1}|f(x)-f(y)|^2&\leq &\sum_{E_1}|f(u)-f(v)|^2. \end{array}$

For each diagonal ${uv\in diag_0}$, there is a unique (parallel) ${u'v'\in diag_1}$ such that ${uv'}$ and ${u'v}$ are diagonals of ${D_n}$. Then ${uu'}$ and ${vv'\in E'}$. The short diagonals lemma gives

$\displaystyle \begin{array}{rcl} |f(u)-f(v')|^2+|f(u')-f(v)|^2&\leq&|f(u)-f(v)|^2+|f(v)-f(v')|^2\\ &&+|f(v')-f(u')|^2+|f(u')-f(u)|^2. \end{array}$

Summing up such inequalities, on the left hand side, each diagonal of ${D_n}$ appears once. On the right hand side, each edge in ${E'}$ and each element of ${diag_0}$ and ${diag_1}$ occurs once. With induction hypothesis, we are done.
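Proposition 2 is easy to test numerically: for the identity embedding of ${D_n}$ into ${{\mathbb R}^n}$ both sides equal ${n2^{n-1}}$, and any map into Euclidean space satisfies the inequality. A sketch (a map is represented here as a dict from vertices to vectors):

```python
import random
from itertools import product

def cube_sums(n, f):
    # sums of squared distances over the diagonals and the edges of D_n
    def sq(u, v):
        return sum((a - b) ** 2 for a, b in zip(f[u], f[v]))
    diag = sum(
        sq(x, tuple(1 - c for c in x))
        for x in product((0, 1), repeat=n) if x[0] == 0  # each diagonal once
    )
    edge = sum(
        sq(x, x[:i] + (1 - x[i],) + x[i + 1:])
        for x in product((0, 1), repeat=n) for i in range(n) if x[i] == 0
    )
    return diag, edge

n = 4
# identity embedding: equality n * 2^(n-1) on both sides
ident = {x: x for x in product((0, 1), repeat=n)}
d, e = cube_sums(n, ident)
assert d == e == n * 2 ** (n - 1)

# a random map into R^3 still satisfies the inequality
rng = random.Random(0)
f = {x: [rng.uniform(-1, 1) for _ in range(3)]
     for x in product((0, 1), repeat=n)}
d, e = cube_sums(n, f)
assert d <= e + 1e-9
```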

1.2. Proof of the sharp bound for Hamming cube

Assume ${f:D_n\rightarrow\ell_2}$ is distance non decreasing and ${D}$-Lipschitz. There are ${2^{n-1}}$ diagonals and ${n.2^{n-1}}$ edges. Diagonals in ${D_n}$ have length ${n}$ and edges have length 1, their images have lengths ${\leq D}$, therefore

$\displaystyle \begin{array}{rcl} 2^{n-1}n^2\leq n.2^{n-1}D^2, \end{array}$

hence ${D\geq\sqrt{n}}$.

1.3. Enflo type

The crucial property of ${\ell_2}$ used is expressed in the Proposition. We could have had a multiplicative constant in the Proposition, this would not have spoilt the result. This motivates the following

Definition 4 Let ${p>1}$. Say that metric space ${X}$ has Enflo type ${p}$ if there exists ${T>0}$ such that for all ${n}$ and all maps ${f:D_n\rightarrow X}$,

$\displaystyle \begin{array}{rcl} \sum_{diagonals\,xy}|f(x)-f(y)|^p\leq T\sum_{edges\,uv}|f(u)-f(v)|^p. \end{array}$

The best ${T:=T_p(X)}$ is called the Enflo type ${p}$ constant of ${X}$.

By construction, if ${X}$ has Enflo type ${p}$,

$\displaystyle \begin{array}{rcl} c_X(D_n)\geq T_p(X)^{-1/p}n^{1-\frac{1}{p}}. \end{array}$

Enflo type is a bi-Lipschitz invariant. The Enflo type ${p}$ constant is covariant under Lipschitz maps: if there exists a bi-Lipschitz map ${f:X\rightarrow Y}$, then

$\displaystyle \begin{array}{rcl} T_p(X)\leq dist(f) T_p(Y). \end{array}$

This gives lower bounds by

$\displaystyle \begin{array}{rcl} dist(f)\geq\frac{T_p(X)}{T_p(Y)}. \end{array}$

2. Banach spaces

Now I give some background on Banach spaces that will lead us to more invariants.

Say an infinite dimensional Banach space ${X}$ has Rademacher type ${p}$ if there exists ${T>0}$ such that for all ${n}$ and all points ${x_1,\ldots,x_n\in X}$,

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}_\epsilon(|\sum_{i=1}^n\epsilon_i x_i|^2)\leq T^p\sum_{i=1}^n|x_i|^p, \end{array}$

where ${\epsilon_i}$ are independent uniform ${\pm 1}$-valued random variables.

This notion was introduced independently by Hoffmann-Jørgensen and by Maurey-Pisier. Note that ${p}$ has to be ${\leq 2}$ (the real line does not have Rademacher type ${p}$ for ${p>2}$). Rademacher type is Enflo type when the maps ${f:D_n\rightarrow X}$ are restricted to linear maps. Therefore Enflo type ${p}$ implies Rademacher type ${p}$. A weak converse holds, as the following surprising theorem states.

Theorem 5 (Pisier) For Banach spaces, Rademacher type ${p>1}$ ${\Rightarrow}$ Enflo type ${p'}$ for all ${p'<p}$.

Open question. Does Rademacher type ${p>1}$ ${\Rightarrow}$ Enflo type ${p}$ ? Known to hold for certain subclasses of Banach spaces. E.g. UMD spaces (Naor-Schechtman).

2.2. Finite representability

Locality. Rademacher type ${p}$ depends only on the geometry of finite dimensional linear subspaces. Such properties are called local.

Definition 6 Say a Banach space ${X}$ is (crudely) finitely representable in ${Y}$ if there exists ${K\geq 1}$ such that for every finite dimensional subspace ${Z\subset X}$, there exists a subspace ${Z'\subset Y}$ with Banach-Mazur distance from ${Z}$ at most ${K}$.

By definition, local properties are preserved by finite representability. The following theorem suggests that local properties may be expressed as bi-Lipschitz metric invariants.

Theorem 7 (Ribe) If two infinite dimensional Banach spaces ${X}$ and ${Y}$ are uniformly homeomorphic, then they are mutually finitely representable.

“Suggests” has to be taken seriously: it merely gives a guideline, and only a small number of invariants are understood from this point of view. We shall give an example below.

2.3. ${p}$-convexity

Let ${p>1}$. Say a Banach space is ${p}$-convex if it is uniformly convex with modulus of convexity at least a constant times ${\epsilon^p}$.

Proposition 8 ${X}$ is ${p}$-convex iff there exists ${K>0}$ such that for all ${x}$, ${y\in X}$,

$\displaystyle \begin{array}{rcl} |x|^p+|y|^p \geq 2|\frac{x-y}{2}|^p+2|\frac{x+y}{2K}|^p. \end{array}$

Example. ${L^p}$ is ${p}$-convex for ${p\geq 2}$, with bounded constants. For ${p<2}$, ${L^p}$ is 2-convex with 2-convexity constant ${O(\frac{1}{\sqrt{p-1}})}$.

Theorem 9 (Pisier) 1. Every uniformly convex Banach space can be renormed to be ${p}$-convex for some ${p>1}$.

2. Uniform convexity is a local property.

3. Markov type

If ${(X_t)_{t\in{\mathbb Z}}}$ is a Markov chain on some set ${\Omega}$, denote by ${(\tilde{X}_t(k))_{t\in{\mathbb Z}}}$ the Markov chain that coincides with ${X_t}$ for ${t\leq k}$, and then continues an independent life.

Say a metric space ${X}$ is Markov ${p}$-convex if there exists ${\pi>0}$ such that for every Markov chain on a set ${\Omega}$ and every map ${f:\Omega\rightarrow X}$,

$\displaystyle \begin{array}{rcl} \sum_{k=0}^{\infty}\sum_{t\in{\mathbb Z}}\frac{1}{2^{kp}}\mathop{\mathbb E}(d(f(X_t),f(\tilde{X}_t(t-2^k)))^p)\leq \pi^p\sum_{t\in{\mathbb Z}}\mathop{\mathbb E}(d(f(X_t),f(X_{t-1}))^p). \end{array}$

The best ${\pi}$ is called the Markov ${p}$-convexity constant ${\pi_p(X)}$ of ${X}$.

3.1. Trees

Let us start with an example which is not Markov convex. Let ${B_n}$ be the complete rooted binary tree of depth ${n}$. Let ${X_t}$ be the standard downward random walk on ${B_n}$ (each child with probability ${1/2}$), stopped at the leaves.

Proposition 10 There is a ${C(p)}$ such that the Markov ${p}$-convexity constant satisfies

$\displaystyle \pi_p(B_n)\geq C\,(\log n)^{1/p}.$

It follows that the infinite rooted binary tree ${B_\infty}$ is not Markov convex. We see that trees are really extreme for Markov convexity: they allow maximal branching for Markov chains.

Proof. The right hand side (the denominator in ${\pi}$) equals ${n}$, since ${d(X_t,X_{t+1})=1}$ by construction. To estimate the left hand side (the numerator in ${\pi}$), assume that ${0\leq t-2^k\leq t\leq n}$ and consider the chains ${(X_t)}$ and ${(\tilde{X}_t(t-2^k))}$, which are independent over the last ${2^k}$ time steps. With probability ${1/2}$, at time ${t-2^k}$, they jump to different nodes. If this happens, they subsequently walk down the tree along different branches, so ${d(X_t,\tilde{X}_t(t-2^k))}$ increases by 2 at each step, leading to

$\displaystyle \begin{array}{rcl} d(X_t,\tilde{X}_t(t-2^k))\geq 2.2^k, \end{array}$

and thus

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}(d(X_t,\tilde{X}_t(t-2^k))^p)\geq \frac{1}{2}(2.2^k)^p\sim 2^{kp}. \end{array}$

Summing over ${t}$ from ${2^k}$ to ${n}$ gives

$\displaystyle \begin{array}{rcl} \sum_{t=2^k}^{n}\mathop{\mathbb E}(d(X_t,\tilde{X}_t(t-2^k))^p)\geq (n-2^k)2^{kp}. \end{array}$

Summing over ${k}$ from 0 to ${\log_2 n}$ gives

$\displaystyle \begin{array}{rcl} \sum_{k=0}^{\log_2 n}\frac{1}{2^{kp}}\sum_{t=2^k}^{n}\mathop{\mathbb E}(d(X_t,\tilde{X}_t(t-2^k))^p)\gtrsim n\log n, \end{array}$

whence ${\pi^p\geq\log n}$.
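The geometric step in the middle of the proof, that two walks agreeing up to time ${s}$ and diverging at step ${s+1}$ are at tree distance ${2(t-s)}$ at time ${t}$, can be checked directly by encoding a node of the tree as its sequence of left/right choices from the root (an encoding chosen here for convenience):

```python
import random

def tree_distance(u, v):
    # graph distance in the rooted binary tree between nodes u and v,
    # each given as a tuple of 0/1 choices from the root
    i = 0
    while i < min(len(u), len(v)) and u[i] == v[i]:
        i += 1  # i = depth of the last common ancestor
    return (len(u) - i) + (len(v) - i)

rng = random.Random(1)
s, steps = 5, 8  # agree for s steps, then diverge and walk `steps` more steps
prefix = tuple(rng.randrange(2) for _ in range(s))
walk  = prefix + (0,) + tuple(rng.randrange(2) for _ in range(steps - 1))
tilde = prefix + (1,) + tuple(rng.randrange(2) for _ in range(steps - 1))
assert tree_distance(walk, tilde) == 2 * steps
```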

Theorem 11 If metric space ${X}$ is Markov ${p}$-convex, then

$\displaystyle \begin{array}{rcl} c_X(B_n)\geq C(\log n)^{1/p}. \end{array}$

Indeed, the Markov ${p}$-convexity constant is Lipschitz covariant.

3.2. The case of Banach spaces

Theorem 12 An infinite dimensional Banach space can be renormed to be ${p}$-convex iff it is Markov ${p}$-convex.

I prove only one implication. Let ${X}$ be a ${p}$-convex Banach space, i.e.

$\displaystyle \begin{array}{rcl} |x|^p+|y|^p \geq 2|\frac{x-y}{2}|^p+2|\frac{x+y}{2K}|^p. \end{array}$

One first proves a consequence, the fork inequality:

$\displaystyle \begin{array}{rcl} \frac{1}{2^{p-1}}(|x-w|^p+|x-z|^p)+\frac{1}{4^{p-1}K}|z-w|^p \leq 2|y-x|^p+|y-w|^p+|z-y|^p. \end{array}$

Note that only the tip of the fork (${z}$ and ${w}$) involves the ${p}$-convexity constant, the other coefficients take specific values.

Apply the fork inequality to the images by ${f}$ of ${X_{t-2^{k+1}}}$, ${X_{t-2^{k}}}$, ${X_t}$, ${\tilde{X}_t(t-2^k)}$ and take expectation. By independence,…

3.3. Question

Does one have a metric invariant characterizing uniform smoothness?

## Notes of Adriane Kaichouh’s lecture nr 2

1. Super-reflexivity

1.1. Uniform convexity

A Banach space ${E}$ is uniformly convex if the midpoint of two points on the unit sphere which are sufficiently far apart is deep inside the ball.

Example. Hilbert spaces and ${L^p}$ spaces with ${1<p<\infty}$ are uniformly convex.

Uniform convexity carries over to ${\ell^2}$-direct sums ${\ell^2(X,E)}$.

1.2. Super-reflexivity

Uniform convexity is not stable under renorming.

Definition 1 A Banach space is super-reflexive if it admits an equivalent norm which is uniformly convex.

Here is an equivalent definition.

Theorem 2 (Enflo) A Banach space is super-reflexive iff every one of its ultrapowers is reflexive.

In other words, every space which is finitely representable in it is reflexive.

Recall that, given an ultrafilter ${\mathcal{U}}$ on an index set ${I}$, the ultraproduct ${\prod_{\mathcal{U}}E_i}$ of a family of Banach spaces ${(E_i)_{i\in I}}$ is the quotient of ${\ell^{\infty}(I,E_i)}$ by the subspace ${c_0(I,E_i)}$ of sequences that tend to 0 along ${\mathcal{U}}$. This is a Banach space. If all ${E_i=E}$, the ultraproduct is called an ultrapower.

2. Pestov’s theorem

Here is Pestov’s main theorem.

Theorem 3 (Pestov) Let ${X}$ be a metric space. Let ${G}$ be a locally finite group acting on ${X}$ by isometries. Assume that ${X}$ admits a coarse/uniform embedding into a Banach space ${E}$. Then there is a coarse/uniform embedding ${\psi}$ from ${X}$ to some ultrapower of some ${\ell^2(\mathcal{U},E)}$, and the action of ${G}$ on ${\psi(X)}$ extends to an action by affine isometries on the affine span of ${\psi(X)}$.

2.1. Proof

Let ${\phi}$ be the initial coarse embedding. Start with a finite subgroup ${F}$ of ${G}$. Define

$\displaystyle \begin{array}{rcl} \psi_F : X\rightarrow \ell^2(F,E),\quad \psi_F(x)(f)=\frac{1}{\sqrt{|F|}}\,\phi(f^{-1}x). \end{array}$

Then ${\psi_F}$ has the same expansion and compression moduli as ${\phi}$. ${F}$ acts on ${\ell^2(F,E)}$ via left translations, and ${\psi_F}$ is equivariant for this action.

Next we glue all maps ${\psi_F}$ together. Let ${I}$ denote the set of all finite subgroups of ${G}$. Because ${G}$ is locally finite, there exists an ultrafilter ${\mathcal{U}}$ on ${I}$ such that for all ${F\in I}$, the set

$\displaystyle \begin{array}{rcl} \{H\in I\,;\,F\subset H\} \end{array}$

is in ${\mathcal{U}}$. Indeed, the intersection of two such subsets

$\displaystyle \begin{array}{rcl} \{H\in I\,;\,F_1\subset H\}\cap\{H\in I\,;\,F_2\subset H\}=\{H\in I\,;\,\langle F_1,F_2\rangle\subset H\}, \end{array}$

and the subgroup ${\langle F_1,F_2\rangle}$ generated by two finite subgroups is finite again, since ${G}$ is locally finite. So we are dealing with a filter basis.

Fix an origin ${o\in X}$. Let ${V}$ be the ultraproduct over ${\mathcal{U}}$ of the spaces ${\ell^2(F,E)}$, based at ${\psi_F(o)}$ instead of 0 (this makes ${\ell^2(F,E)}$ an affine space instead of a vector space, and similarly for the ultraproduct). Define ${\psi(x)}$ as the sequence ${(\psi_F(x))_{F\in I}}$. It is well defined and has the same expansion and compression moduli as ${\phi}$.

Given ${g\in G}$, observe that ${\mathcal{U}}$-almost surely,

$\displaystyle \begin{array}{rcl} g\psi(x)=\psi(gx) \end{array}$

is well defined, and this provides an isometry of ${\psi(X)}$. Isometries of subsets of Hilbert spaces extend to affine hulls, so the ${G}$ action extends.

As defined, ${V}$ is not quite an ultrapower. For every ${F\in I}$, ${\ell^2(F,E)}$ embeds non-canonically into ${\ell^2(G,E)}$, so…

2.2. Application to uniform embeddability of Urysohn space

Let ${\phi:U\rightarrow E}$ be a uniform embedding. Let ${G}$ be a dense locally finite subgroup of ${Isom(U)}$. Pestov's theorem implies that there is a uniform embedding ${\psi}$ of ${U}$ into a reflexive Banach space ${V}$, equivariant with respect to an affine isometric action of ${G}$. ${\psi}$ is a homeomorphism onto ${\psi(U)}$. The topologies of pointwise convergence induced on ${G}$ by its actions on ${U}$ and on ${\psi(U)}$ coincide, so, by density, the action on the affine span ${S}$ of ${\psi(U)}$ extends to ${Isom(U)}$. I.e. we get a continuous group homomorphism

$\displaystyle \begin{array}{rcl} Isom(U)\rightarrow Aff(S)=O(S)\ltimes S. \end{array}$

Since the homomorphism comes with an injective equivariant map, it is injective.

Now we use the fact that ${Isom(U)}$ contains a subgroup ${H}$ isomorphic to ${Homeo^+([0,1])}$. So we get a linear isometric representation of ${H}$ on a reflexive Banach space. According to Megrelishvili's theorem, this must be the trivial representation. So ${H}$ acts faithfully by translations. Contradiction, since ${H}$ is non abelian.

## Notes of Adriane Kaichouh’s lecture nr 1

Non-embeddability of Urysohn space

Theorem 1 (Pestov 2008) Urysohn space ${U}$ does not embed uniformly in any super-reflexive Banach space.

1. Urysohn space

1.1. Universality

Urysohn space is the universal complete separable metric space: it contains an isometric copy of every complete separable metric space. In particular, it contains ${c_0}$, so Pestov's theorem follows from Kalton's non-embeddability result for ${c_0}$ into reflexive Banach spaces. The point of the course is to study the method, which differs from Kalton's.

As far as universality goes, ${U}$ is not so spectacular: Banach and Mazur observed that ${C([0,1],{\mathbb R})}$ is also universal. However, Urysohn space is ultrahomogeneous: any isometry between finite subsets of ${U}$ extends to a global isometry (this generalizes to compact sets). Nevertheless, ${U}$ was long forgotten, until Katetov gave a new construction.

1.2. Katetov’s construction

For any finite subset ${A\subset U}$ and any isometric embedding of ${A}$ into a finite metric space ${B}$, an isometric copy of ${B}$ containing ${A}$ also sits in ${U}$. So any time a point is prescribed by distances to the points of ${A}$ satisfying the triangle inequality, that point in fact sits in ${U}$. This is the basic step of Katetov's construction, the “one point extension”.

Given a metric space ${X}$, let

$\displaystyle E(X)=\{f:X\rightarrow{\mathbb R}\,;\,\forall x,x'\in X,\,|f(x)-f(x')|\leq d(x,x')\leq f(x)+f(x')\}.$

It is equipped with the sup distance. ${X}$ sits there isometrically, via ${x\mapsto d(x,\cdot)}$. Consider ${E_2(X)=E(E(X))}$, and iterate. Let ${U}$ be the completion of

$\displaystyle \begin{array}{rcl} \bigcup_{n\in{\mathbb N}}E_n(X). \end{array}$
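The inclusion of ${X}$ into ${E(X)}$ is the map ${x\mapsto d(x,\cdot)}$, and it is isometric for the sup distance. A quick numerical sanity check on a small finite metric space (the four planar points below are an arbitrary illustrative choice):

```python
import numpy as np

# A finite metric space: four points in the plane with the Euclidean metric.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
n = len(pts)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # distance matrix

# The map x -> d(x, .) lands in E(X); check it is isometric for the sup distance:
# sup_z |d(x,z) - d(y,z)| = d(x,y)  (<= by triangle inequality, = at z = y).
for x in range(n):
    for y in range(n):
        assert abs(np.max(np.abs(d[x] - d[y])) - d[x, y]) < 1e-12

# Each function d(x, .) satisfies the Katetov inequalities defining E(X):
# |f(z) - f(z')| <= d(z,z') <= f(z) + f(z').
for x in range(n):
    f = d[x]
    for z in range(n):
        for zp in range(n):
            assert abs(f[z] - f[zp]) <= d[z, zp] + 1e-12
            assert d[z, zp] <= f[z] + f[zp] + 1e-12
```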

Actually, we take only those functions of ${E(X)}$ which have finite support in the following sense: there is a finite subset ${A\subset X}$ such that ${f}$ is the largest function of ${E(X)}$ compatible with its values on ${A}$, i.e.

$\displaystyle \begin{array}{rcl} f(x)=\min_{a\in A}\big(f(a)+d(a,x)\big). \end{array}$

This is necessary to get separable spaces.
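The finitely supported extension can be computed explicitly: given Katetov values on ${A}$, the largest compatible function is ${f(x)=\min_{a\in A}(f(a)+d(a,x))}$, and it is again a Katetov function on all of ${X}$. A sketch on a made-up finite space (points and values chosen only for illustration):

```python
import numpy as np

# Toy ambient space X: five points on a line, with the usual metric.
X = np.array([0.0, 1.0, 2.5, 4.0, 7.0])
d = np.abs(X[:, None] - X[None, :])

# Support A = {X[0], X[2]} with prescribed Katetov values on A.
A = [0, 2]
fA = {0: 1.0, 2: 2.0}
for a in A:
    for b in A:  # values on A must themselves satisfy the Katetov inequalities
        assert abs(fA[a] - fA[b]) <= d[a, b] <= fA[a] + fA[b]

# Katetov extension: the largest function compatible with its values on A.
f = np.array([min(fA[a] + d[a, x] for a in A) for x in range(len(X))])

# The extension is again a Katetov function on all of X.
for x in range(len(X)):
    for y in range(len(X)):
        assert abs(f[x] - f[y]) <= d[x, y] + 1e-12
        assert d[x, y] <= f[x] + f[y] + 1e-12
```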

Naturality: isometries of ${X}$ extend uniquely to isometries of ${E(X)}$. So they also extend to isometries of ${U}$.

Theorem 2 (Uspensky) ${Isom(U)}$ is universal for all Polish groups (i.e. separable and admit a complete compatible distance).

Example. ${Isom(U)}$ contains ${Homeo^+([0,1])}$.

Later, we shall use a result about that group (Megrelishvili 2001): The only continuous isometric linear representation of ${Homeo^+([0,1])}$ on a reflexive Banach space is the trivial representation.

Uspensky’s proof relies on the following theorem.

Theorem 3 (Gao-Kechris) Every Polish group is a closed subgroup of some ${Isom(X)}$, ${X}$ complete separable.

2. The extension property

Let us discuss a strengthening of ultrahomogeneity.

Say a metric space ${X}$ has the extension property if for every finite subset ${A\subset X}$, there exists a finite subset ${B\subset X}$ containing it such that every partial isometry of ${A}$ (i.e. an isometry between subsets of ${A}$) extends to a global isometry of ${B}$.

Example. ${{\mathbb N}}$ with its usual distance has the extension property. Hrushovski has shown that the random graph (percolation in the complete graph) has the extension property.

Theorem 4 (Solecki) ${U}$ has the extension property.

This is a hard theorem.

By iterating the extension property applied to ${A\cup B}$, and then adding points, one obtains ultrahomogeneity.

Here is one more property that we shall need tomorrow.

Theorem 5 ${Isom(U)}$ contains a dense, locally finite subgroup.

Here, locally finite means that every finitely generated subgroup is finite.

This subgroup is obtained as an increasing union of finite groups, the isometry groups of finite subsets, thanks to a strengthening of the extension property: not only do isometries extend from ${A}$ to ${B}$, but the extension can be performed by a group homomorphism ${Isom(A)\rightarrow Isom(B)}$.

## Notes of Michal Kraus’s lectures

1. Motivation

1.1. Doubling constant

Recall that the doubling constant of a metric space is the least ${K}$ such that every ${R}$-ball can be covered by ${K}$ balls of radius ${R/2}$.

Example. ${{\mathbb R}^n}$ and its subsets are doubling.
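On the line, for instance, an ${R}$-ball is an interval of length ${2R}$, covered by the two balls of radius ${R/2}$ centred at ${x\pm R/2}$, so the doubling constant of ${{\mathbb R}}$ is 2. A direct check on sample points (illustrative data):

```python
import numpy as np

# Doubling in R: cover the R-ball around x by two R/2-balls.
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, 200)          # a sample subset of R
x, R = 0.0, 5.0
ball = pts[np.abs(pts - x) <= R]         # the points inside the R-ball

# Two balls of radius R/2, centred at x - R/2 and x + R/2, suffice:
covered = (np.abs(ball - (x - R/2)) <= R/2) | (np.abs(ball - (x + R/2)) <= R/2)
assert covered.all()
```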

1.2. An open problem

Open problem: characterize those metric spaces which admit a bi-Lipschitz embedding in some Euclidean space.

These spaces are doubling, but the converse fails. There are various counterexamples, the Heisenberg group in its Carnot metric is one of them.

1.3. Snowflaking

If one allows a little bit of snowflaking (replacing the metric ${d}$ by ${d^\alpha}$ for some ${\alpha\in(0,1)}$), the answer becomes simpler.

Theorem 1 (Assouad 1983) For every ${K\geq 1}$ and ${\alpha\in(0,1)}$, there exist ${N}$ and ${D}$ such that every metric space with doubling constant ${K}$, once snowflaked with exponent ${\alpha}$, admits a ${D}$-bi-Lipschitz embedding into ${{\mathbb R}^N}$.
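Note that snowflaking always produces a genuine metric space: since ${t\mapsto t^\alpha}$ is concave and vanishes at ${0}$, it is subadditive, so ${d^\alpha}$ still satisfies the triangle inequality. A numerical sanity check on random points (illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.5
pts = rng.standard_normal((20, 2))       # random points in the plane
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
ds = d ** alpha                          # the snowflaked metric d^alpha

# Triangle inequality for all triples (i, j, k): ds[i,j] <= ds[i,k] + ds[k,j].
ok = np.all(ds[:, :, None] <= ds[:, None, :] + ds[None, :, :] + 1e-9)
assert ok
```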

I will present Assouad’s original proof, based on tensor products.

2. Tensor product of Hilbert spaces

2.1. Algebraic tensor product

One can view ${V\otimes W}$ as the quotient of the vector space freely generated by ${V\times W}$ by the subspace generated by the elements

$\displaystyle \begin{array}{rcl} (a_1 v_1+a_2v_2,b_1w_1+b_2w_2)-\sum_{i,\,j=1}^2 a_ib_j (v_i,w_j). \end{array}$
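In finite dimensions the tensor product can be realised concretely by identifying ${v\otimes w}$ with the Kronecker product `np.kron(v, w)`; the defining relation above then becomes bilinearity of `kron`. A sketch with random vectors (the dimensions 3 and 4 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)
w1, w2 = rng.standard_normal(4), rng.standard_normal(4)
a1, a2, b1, b2 = 2.0, -1.0, 0.5, 3.0

# (a1 v1 + a2 v2) ⊗ (b1 w1 + b2 w2) = sum of a_i b_j (v_i ⊗ w_j):
lhs = np.kron(a1*v1 + a2*v2, b1*w1 + b2*w2)
rhs = sum(a*b*np.kron(v, w)
          for a, v in [(a1, v1), (a2, v2)]
          for b, w in [(b1, w1), (b2, w2)])
assert np.allclose(lhs, rhs)
```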

2.2. Inner product

If ${V}$ and ${W}$ are Hilbert spaces, there is a unique inner product on ${V\otimes W}$ which extends

$\displaystyle \begin{array}{rcl} \langle v_1\otimes w_1,v_2\otimes w_2\rangle=\langle v_1,v_2\rangle\langle w_1,w_2\rangle. \end{array}$

Take the completion with respect to the induced norm. One gets a Hilbert space ${V\hat{\otimes}W}$, which has the following two properties:

1. ${|v\otimes w|=|v||w|}$,
2. ${\mathrm{dim}(V\hat{\otimes}W)=\mathrm{dim}(V)\mathrm{dim}(W)}$.

Example. ${L^2([0,1])\hat{\otimes}L^2([0,1])=L^2([0,1]\times[0,1])}$.

3. Proof of Assouad’s theorem

3.1. A Lemma

The first step is to map ${X}$ to some Euclidean space in a Lipschitz manner (with a possibly large constant ${1/r}$) in such a way that distances of the order of ${r}$ are not compressed. Then the following Lemma completes the proof.

Lemma 2 Let ${\tau\in(0,1)}$. Assume that, for all ${i\in{\mathbb Z}}$, there exists a map ${\phi_i:X\rightarrow{\mathbb R}^m}$ such that

1. If ${d(x,x')\in(\tau^{i+1},\tau^i]}$, then ${d(\phi_i(x),\phi_i(x'))\geq A}$,
2. ${d(\phi_i(x),\phi_i(x'))\leq B\min\{\tau^{-i}d(x,x'),1\}}$.

Then, for all ${\alpha\in(0,1)}$, ${X^\alpha}$ embeds in ${{\mathbb R}^N}$ with controlled dimension ${N}$ and bi-Lipschitz distortion ${D}$.

Proof. Fix some origin ${o\in X}$. Let ${f:X\rightarrow{\mathbb R}^m\otimes{\mathbb R}^{2n}}$ be defined by

$\displaystyle \begin{array}{rcl} f(x)=\sum_{i\in{\mathbb Z}}\tau^{i\alpha}(\phi_i(x)-\phi_i(o))\otimes e_i. \end{array}$

Here, ${(e_i)_{1\leq i\leq 2n}}$ is an orthonormal basis of ${{\mathbb R}^{2n}}$, and the notation is extended to ${i\in{\mathbb Z}}$ by ${e_{i+2n}=e_i}$.

Then, choosing ${k}$ such that ${d(x,x')\in(\tau^{k+1},\tau^k]}$,

$\displaystyle \begin{array}{rcl} |f(x)-f(x')|&\leq& \sum_{i\in{\mathbb Z}}\tau^{i\alpha}|\phi_i(x)-\phi_i(x')|\\ &\leq& B\Big(\sum_{i\leq k}\tau^{i\alpha}\,\tau^{-i}d(x,x')+\sum_{i>k}\tau^{i\alpha}\Big)\\ &\leq& B\Big(\frac{1}{1-\tau^{1-\alpha}}+\frac{1}{1-\tau^{\alpha}}\Big)d(x,x')^{\alpha}. \end{array}$

Note that this Lipschitz estimate does not depend on the yet undefined dimension ${n}$.

For the compression modulus, the main contribution will come from scales ${\tau^i}$ for ${i=k-n,\ldots,k+n}$. This part of the sum has norm bounded below by that of only one term, ${\tau^{k\alpha}|\phi_k(x)-\phi_k(x')|}$, since we are dealing with a combination of pairwise orthogonal vectors. This term has size at least ${A\tau^{k\alpha}\geq A\,d(x,x')^\alpha}$.
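To spell out the orthogonality step (taking the window ${k-n\leq i<k+n}$ of length ${2n}$, so that the ${e_i}$ involved are pairwise distinct, and recalling ${d(x,x')\leq\tau^k}$):

$\displaystyle \Big|\sum_{k-n\leq i<k+n}\tau^{i\alpha}(\phi_i(x)-\phi_i(x'))\otimes e_i\Big|^2=\sum_{k-n\leq i<k+n}\tau^{2i\alpha}|\phi_i(x)-\phi_i(x')|^2\geq \tau^{2k\alpha}|\phi_k(x)-\phi_k(x')|^2\geq A^2\,d(x,x')^{2\alpha}.$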

The rest of the sum is dealt with by the Lipschitz estimate. The upper bound is

$\displaystyle B(\frac{\tau^{n\alpha}}{1-\tau^\alpha}+\frac{\tau^{n(1-\alpha)}}{1-\tau^{1-\alpha}})d(x,x')^\alpha.$

Choosing ${n}$ large enough so that ${A\,d(x,x')^\alpha}$ beats this term gives the compression bound. Note that both the embedding dimension and the distortion blow up as ${\alpha}$ tends to 1.

3.2. Existence of embeddings with compression controlled at a fixed scale

Let ${c=\tau^{i+1}}$. Pick a maximal ${c}$-net ${Y}$. By the doubling property, the number of points of ${Y}$ in a ball of radius ${c/\tau}$ is bounded by some ${m}$. We shall use an ${m}$-colouring of ${Y}$ with the property that points at distance ${\leq c/\tau}$ get different colours. To construct one, order the points of ${Y}$ and assign colours inductively: for each new point ${y}$, at most ${m-1}$ colours have already been used inside ${B(y,c/\tau)}$, so some colour remains available.
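The greedy colouring argument can be sketched concretely. A one-dimensional illustration (the net, interval and constants below are made up): any ${c/\tau}$-ball contains at most ${m}$ points of a ${c}$-separated set, so when a point is coloured, at most ${m-1}$ colours are blocked and ${m}$ colours suffice.

```python
import numpy as np

rng = np.random.default_rng(4)
c, tau = 1.0, 0.5

# Build a c-separated net Y in [0, 50] greedily from sorted random points.
Y = []
for y in np.sort(rng.uniform(0, 50, 200)):
    if not Y or y - Y[-1] >= c:
        Y.append(float(y))

# m bounds the number of points of Y in any ball of radius c/tau.
m = max(sum(abs(z - y) <= c / tau for z in Y) for y in Y)

colour = {}
for y in Y:                                   # colour the points in order
    used = {colour[z] for z in colour if abs(z - y) <= c / tau}
    colour[y] = min(set(range(m)) - used)     # some colour is always free

# Validity check: points within c/tau received different colours.
assert all(colour[y] != colour[z]
           for y in Y for z in Y
           if y != z and abs(y - z) <= c / tau)
```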

Pick an orthonormal basis ${(e_i)}$ of ${{\mathbb R}^m}$. Set

$\displaystyle \begin{array}{rcl} \phi=\sum_{y\in Y}\frac{1}{c}g_y e_{colour\,of\,y}, \end{array}$

where ${g_y}$ is a tent function of the distance to ${y}$. Then ${\phi}$ is ${\frac{m}{c}}$-Lipschitz. Again, for points ${x,x'}$ of ${X}$ at distance in ${(c,c/\tau)}$, the main contribution to the sum comes from points of ${Y}$ in a ${c/\tau}$-ball, which have different colours, so ${|\phi(x)-\phi(x')|}$ is bounded below by the largest term, corresponding to a point of ${Y}$ close to ${x}$, whose size is 1.

This completes the proof.

4. Further results

Clearly (using Hausdorff dimension, for instance), as ${\alpha}$ tends to 0, the embedding dimension must tend to infinity. What if ${\alpha}$ tends to 1?

Theorem 3 (Naor-Neiman 2012) Assouad’s construction can be improved so that the embedding dimension depends on the doubling constant only (provided ${\alpha\geq 1/2}$).

The new construction uses stochastic decompositions.

4.1. Questions

What about isometric embeddings of snowflakes? Le Donne: the Besicovitch covering property is a snowflake invariant. The Carnot metric on the Heisenberg group does not have it, so no snowflake of that metric embeds isometrically into a Euclidean space.

Back to the characterization of metric spaces bi-Lipschitz embeddable into Euclidean space. Being Euclidean (i.e. bi-Lipschitz embeddable into ${L^2}$; for this, a workable criterion exists) and doubling may be necessary and sufficient. If true, this would provide a very useful tool in computer science: nonlinear dimension reduction in Euclidean spaces.