## Notes of Michal Kraus’s lectures

1. Motivation

1.1. Doubling constant

Recall that the doubling constant of a metric space is the least ${K}$ such that every ${R}$-ball can be covered by ${K}$ ${R/2}$-balls.

Example. ${{\mathbb R}^n}$ and its subspaces are doubling.
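The doubling property can be explored numerically. Below is a minimal Python sketch (function names, the grid, and the parameters are my own, not from the lecture): a greedy procedure covers a ball in a finite sample of ${{\mathbb R}^2}$ by half-radius balls, and a packing argument bounds the number of balls used independently of the sample resolution.

```python
def greedy_half_cover(points, center, R, dist):
    """Cover the R-ball around `center` by R/2-balls centred at sample
    points: repeatedly pick an uncovered point and remove its R/2-ball."""
    ball = [p for p in points if dist(p, center) <= R]
    centres = []
    while ball:
        c = ball[0]
        centres.append(c)
        ball = [p for p in ball if dist(p, c) > R / 2]
    return centres

dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
# A fine grid sampling the plane; the number of centres needed stays
# bounded as the grid is refined, reflecting that R^2 is doubling.
grid = [(i / 10, j / 10) for i in range(-20, 21) for j in range(-20, 21)]
centres = greedy_half_cover(grid, (0.0, 0.0), R=1.0, dist=dist)
print(len(centres))
```

The centres produced are pairwise more than ${R/2}$ apart, so disjoint balls of radius ${R/4}$ around them fit inside a ball of radius ${5R/4}$; in the plane this caps their number at ${25}$, whatever the grid.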

1.2. An open problem

Open problem: characterize those metric spaces which admit a bi-Lipschitz embedding in some Euclidean space.

These spaces are doubling, but the converse fails. There are various counterexamples; the Heisenberg group with its Carnot metric is one of them.

1.3. Snowflaking

If one admits a little bit of snowflaking, the answer becomes simpler.

Theorem 1 (Assouad 1983) For every ${K\geq 1}$ and ${\alpha\in(0,1)}$, there exist ${N}$ and ${D}$ such that every metric space with doubling constant ${K}$, once snowflaked with exponent ${\alpha}$, admits a ${D}$-bi-Lipschitz embedding into ${{\mathbb R}^N}$.

I will present Assouad’s original proof, based on tensor products.

2. Tensor product of Hilbert spaces

2.1. Algebraic tensor product

One can view ${V\otimes W}$ as the quotient of the vector space freely generated by ${V\times W}$ by the subspace generated by the elements

$\displaystyle \begin{array}{rcl} (a_1 v_1+a_2v_2,b_1w_1+b_2w_2)-\sum_{i,\,j=1}^2 a_ib_j (v_i,w_j). \end{array}$

2.2. Inner product

If ${V}$ and ${W}$ are Hilbert spaces, there is a unique inner product on ${V\otimes W}$ which extends

$\displaystyle \begin{array}{rcl} \langle v_1\otimes w_1,v_2\otimes w_2\rangle=\langle v_1,v_2\rangle\langle w_1,w_2\rangle. \end{array}$

Take the completion with respect to this norm. One gets a Hilbert space ${V\hat{\otimes}W}$, which has the following two properties:

1. ${|v\otimes w|=|v||w|}$,
2. ${\mathrm{dim}(V\hat{\otimes}W)=\mathrm{dim}(V)\mathrm{dim}(W)}$.

Example. ${L^2([0,1])\hat{\otimes}L^2([0,1])=L^2([0,1]\times[0,1])}$.
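In coordinates, the tensor product of finite-dimensional spaces is realized by the Kronecker product. The following NumPy sketch (my illustration, not part of the lecture) checks the two properties above, together with the multiplicativity of the inner product on elementary tensors:

```python
import numpy as np

rng = np.random.default_rng(0)
v, v2 = rng.standard_normal(3), rng.standard_normal(3)
w, w2 = rng.standard_normal(4), rng.standard_normal(4)

vw = np.kron(v, w)  # v ⊗ w, realized in R^3 ⊗ R^4 ≅ R^12

# Property 2: dim(V ⊗ W) = dim(V) dim(W).
assert vw.shape == (12,)

# Property 1: |v ⊗ w| = |v| |w|.
assert np.isclose(np.linalg.norm(vw), np.linalg.norm(v) * np.linalg.norm(w))

# <v1 ⊗ w1, v2 ⊗ w2> = <v1, v2> <w1, w2> on elementary tensors.
assert np.isclose(np.dot(vw, np.kron(v2, w2)), np.dot(v, v2) * np.dot(w, w2))
```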

3. Proof of Theorem 1

3.1. A Lemma

The first step is to map ${X}$ to some Euclidean space in a Lipschitz manner (with a possibly large constant ${1/r}$) in such a way that distances of order ${r}$ are not compressed. Then the following Lemma will complete the proof.

Lemma 2 Let ${\tau\in(0,1)}$. Assume that, for all ${i\in{\mathbb Z}}$, there exists a map ${\phi_i:X\rightarrow{\mathbb R}^m}$ such that

1. If ${d(x,x')\in(\tau^{i+1},\tau^i]}$, then ${d(\phi_i(x),\phi_i(x'))\geq A}$,
2. ${d(\phi_i(x),\phi_i(x'))\leq B\min\{\tau^{-i}d(x,x'),1\}}$.

Then, for all ${\alpha\in(0,1)}$, ${X^\alpha}$ embeds in ${{\mathbb R}^N}$ with controlled dimension ${N}$ and bi-Lipschitz distortion ${D}$.

Proof. Fix some origin ${o\in X}$. Let ${f:X\rightarrow{\mathbb R}^m\otimes{\mathbb R}^{2n}}$ be defined by

$\displaystyle \begin{array}{rcl} f(x)=\sum_{i\in{\mathbb Z}}\tau^{i\alpha}(\phi_i(x)-\phi_i(o))\otimes e_i. \end{array}$

Here, ${(e_i)_{1\leq i\leq 2n}}$ is an orthonormal basis of ${{\mathbb R}^{2n}}$, and the notation is extended to ${i\in{\mathbb Z}}$ by ${e_{i+2n}=e_i}$.
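The map ${f}$ can be written down concretely once the sum is truncated to a finite range of scales. The Python sketch below is my own toy version; the ${\phi_i}$ shown, built from sine and cosine on ${X={\mathbb R}}$, are an assumption chosen to satisfy the two conditions of Lemma 2, not maps from the lecture.

```python
import numpy as np

def assouad_embedding(x, phi, o, tau, alpha, n, scales):
    """f(x) = sum_i tau^(i*alpha) (phi_i(x) - phi_i(o)) ⊗ e_{i mod 2n},
    with the infinite sum truncated to the index set `scales`."""
    m = len(phi(0)(o))
    f = np.zeros(m * 2 * n)
    for i in scales:
        e = np.zeros(2 * n)
        e[i % (2 * n)] = 1.0                      # e_{i+2n} = e_i
        block = tau ** (i * alpha) * (phi(i)(x) - phi(i)(o))
        f += np.kron(block, e)                    # R^m ⊗ R^{2n} ≅ R^{2mn}
    return f

# Toy phi_i on X = R: phi_i(x) = (sin(tau^{-i} x), cos(tau^{-i} x)) is
# tau^{-i}-Lipschitz, bounded by 2, and separates points at scale tau^i.
tau, alpha, n = 0.5, 0.7, 3
phi = lambda i: (lambda x: np.array([np.sin(x / tau ** i), np.cos(x / tau ** i)]))
f = lambda x: assouad_embedding(x, phi, 0.0, tau, alpha, n, range(-8, 9))
```

With ${m=2}$ and ${n=3}$, the image lives in ${{\mathbb R}^{2mn}={\mathbb R}^{12}}$, and ${f(o)=0}$ by construction.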

Then, choosing ${k}$ such that ${d(x,x')\in(\tau^{k+1},\tau^k]}$,

$\displaystyle \begin{array}{rcl} |f(x)-f(x')|&\leq& \sum_{i\in{\mathbb Z}}\tau^{i\alpha}|\phi_i(x)-\phi_i(x')|\\ &\leq& B\sum_{i\geq k}\tau^{i\alpha}+B\,d(x,x')\sum_{i<k}\tau^{-i(1-\alpha)}\\ &\leq& B\left(\frac{\tau^{-\alpha}}{1-\tau^{\alpha}}+\frac{\tau^{1-\alpha}}{1-\tau^{1-\alpha}}\right)d(x,x')^\alpha. \end{array}$

Note that this Lipschitz estimate does not depend on the yet undefined dimension ${n}$.

For the compression modulus, the main contribution will come from scales ${\tau^i}$ for ${i}$ within ${n}$ of ${k}$. Within a window of ${2n}$ consecutive indices the vectors ${e_i}$ are pairwise distinct, so this part of the sum is a combination of pairwise orthogonal vectors, and its norm is bounded below by that of the single term ${\tau^{k\alpha}|\phi_k(x)-\phi_k(x')|\geq A\tau^{k\alpha}\geq A\,d(x,x')^\alpha}$.

The rest of the sum is dealt with by the Lipschitz estimate. The upper bound is

$\displaystyle B(\frac{\tau^{n\alpha}}{1-\tau^\alpha}+\frac{\tau^{n(1-\alpha)}}{1-\tau^{1-\alpha}})d(x,x')^\alpha.$

Choosing ${n}$ large enough so that ${Ad(x,x')^\alpha}$ beats this term gives the compression bound. Note that both the embedding dimension and the distortion blow up as ${\alpha}$ tends to 1.
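Explicitly, once ${n}$ is chosen so large that the bracketed factor above is at most ${A/2B}$, combining the two estimates gives

$\displaystyle \begin{array}{rcl} |f(x)-f(x')|&\geq& A\,d(x,x')^\alpha-B\left(\frac{\tau^{n\alpha}}{1-\tau^\alpha}+\frac{\tau^{n(1-\alpha)}}{1-\tau^{1-\alpha}}\right)d(x,x')^\alpha\;\geq\;\frac{A}{2}\,d(x,x')^\alpha, \end{array}$

so ${f}$ is bi-Lipschitz onto its image, in dimension ${N=2mn}$.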

3.2. Existence of embeddings with compression controlled at a fixed scale

Let ${c=\tau^{i+1}}$. Pick a maximal ${c}$-net ${Y}$. By the doubling property, the number of points of ${Y}$ in a ball of radius ${c/\tau}$ is bounded by some ${m}$. We shall use an ${m}$-colouring of ${Y}$ with the property that points at distance ${\leq c/\tau}$ get different colours. To construct one, order the points of ${Y}$ and assign colours inductively: for each new point ${y}$, examine the colours already assigned in ${B(y,c/\tau)}$ and pick one that has not yet been used there; since this ball contains at most ${m-1}$ previously coloured points, a colour is always available.

Pick an orthonormal basis ${(e_i)}$ of ${{\mathbb R}^m}$. Set

$\displaystyle \begin{array}{rcl} \phi=\sum_{y\in Y}\frac{1}{c}\,g_y\, e_{\mathrm{colour}(y)}, \end{array}$

where ${g_y}$ is a tent function of the distance to ${y}$, supported in a ball of radius comparable to ${c}$. Then ${\phi}$ is ${\frac{m}{c}}$-Lipschitz. Again, for points ${x,x'}$ of ${X}$ at distance in ${(c,c/\tau]}$, the main contribution to the sum comes from points of ${Y}$ in a ${c/\tau}$-ball; these carry different colours, so ${|\phi(x)-\phi(x')|}$ is bounded below by the largest single term, corresponding to a point of ${Y}$ close to ${x}$, whose size is of order 1.
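The net and the colouring can both be produced greedily, exactly as in the inductive argument above. A minimal Python sketch (the function names and the planar test data are my own assumptions):

```python
import random

def maximal_net(points, c, dist):
    """Greedy maximal c-separated subset (a maximal c-net)."""
    net = []
    for p in points:
        if all(dist(p, q) >= c for q in net):
            net.append(p)
    return net

def greedy_colouring(net, radius, dist):
    """Colour net points so that points within `radius` of each other get
    different colours: scan the points in order, giving each the smallest
    colour unused in its `radius`-ball."""
    colour = {}
    for y in net:
        used = {colour[z] for z in colour if dist(z, y) <= radius}
        k = 0
        while k in used:
            k += 1
        colour[y] = k
    return colour

dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(500)]
c, tau = 0.1, 0.5
Y = maximal_net(pts, c, dist)
colour = greedy_colouring(Y, c / tau, dist)
```

By doubling, the number of colours stays bounded; the colour classes play the role of the orthonormal directions ${e_1,\ldots,e_m}$ above.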

This completes the proof.

4. Further results

Clearly (use Hausdorff dimension, for instance), as ${\alpha}$ tends to 0, the embedding dimension must tend to infinity. What if ${\alpha}$ tends to 1?

Theorem 3 (Naor-Neiman 2012) Assouad’s construction can be improved so that the embedding dimension depends only on the doubling constant (provided ${\alpha\geq 1/2}$).

The new construction uses stochastic decompositions.

4.1. Questions

What about isometric embeddings of snowflakes? Le Donne: the Besicovitch covering property is a snowflake invariant. The Carnot metric on the Heisenberg group does not have it, while Euclidean spaces do. So no snowflake of that metric embeds isometrically into a Euclidean space.

Back to the characterization of metric spaces bi-Lipschitz embeddable in Euclidean space. Being Euclidean (i.e. bi-Lipschitz embeddable in ${L^2}$; for this there is a workable criterion) and doubling may be necessary and sufficient. If true, this would provide a very useful tool in computer science: nonlinear dimension reduction in Euclidean spaces.