**Assouad’s embedding theorem**

**1. Motivation**

**1.1. Doubling constant**

Recall that the doubling constant of a metric space $X$ is the least $N$ such that every $2r$-ball in $X$ can be covered by $N$ $r$-balls.

**Example**. $\mathbb{R}^n$ and its subspaces are doubling.
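As a sanity check, here is a small Python sketch (an illustration, not part of the talk) which covers a ball of radius $r$ in $\mathbb{R}^2$ by balls of radius $r/2$ centred on a grid, and verifies that the number of balls needed does not depend on $r$:

```python
import itertools, math, random

def cover_centres(n, r):
    # Grid of spacing h = r/(2*sqrt(n)): every point of B(0, r) lies
    # within h*sqrt(n)/2 = r/4 of a grid point, so the balls of radius
    # r/2 centred at the grid points of norm <= 1.25*r cover B(0, r).
    h = r / (2 * math.sqrt(n))
    k = math.ceil(1.25 * r / h)
    axis = [i * h for i in range(-k, k + 1)]
    return [c for c in itertools.product(axis, repeat=n)
            if math.hypot(*c) <= 1.25 * r]

# The number of r/2-balls used is independent of r: doubling.
counts = {r: len(cover_centres(2, r)) for r in (0.1, 1.0, 37.0)}

# Spot-check that the balls really cover B(0, 1).
random.seed(0)
centres = cover_centres(2, 1.0)
def rand_in_ball():
    while True:
        p = (random.uniform(-1, 1), random.uniform(-1, 1))
        if math.hypot(*p) <= 1:
            return p
covered = all(min(math.dist(p, c) for c in centres) <= 0.5 + 1e-9
              for p in (rand_in_ball() for _ in range(200)))
```

The count grows with the dimension $n$ (like $C^n$), but for fixed $n$ it is the same for every radius — which is exactly the doubling property.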

**1.2. An open problem**

**Open problem**: characterize those metric spaces which admit a bi-Lipschitz embedding into some Euclidean space $\mathbb{R}^n$.

These spaces are necessarily doubling, but the converse fails. There are various counterexamples; the Heisenberg group in its Carnot metric is one of them.

**1.3. Snowflaking**

If one admits a little bit of snowflaking, the answer becomes simpler. (The $\alpha$-snowflake of a metric space $(X,d)$ is $(X, d^{\alpha})$, $0 < \alpha < 1$.)

**Theorem 1 (Assouad 1983)**. For every $\alpha \in (0,1)$ and $\delta \geq 1$, there exist $N = N(\alpha, \delta)$ and $L = L(\alpha, \delta)$ such that every metric space with doubling constant $\delta$, once snowflaked with exponent $\alpha$, admits an $L$-bi-Lipschitz embedding into $\mathbb{R}^N$.

I will present Assouad’s original proof, based on tensor products.

**2. Tensor product of Hilbert spaces**

**2.1. Algebraic tensor product**

One can view $V \otimes W$ as the quotient of the vector space freely generated by $V \times W$ by the subspace generated by the elements
$$(v+v', w) - (v,w) - (v',w), \quad (v, w+w') - (v,w) - (v,w'), \quad (\lambda v, w) - \lambda (v,w), \quad (v, \lambda w) - \lambda (v,w).$$

**2.2. Inner product**

If $H$ and $K$ are Hilbert spaces, there is a unique inner product on $H \otimes K$ which extends
$$\langle v \otimes w, v' \otimes w' \rangle = \langle v, v' \rangle\, \langle w, w' \rangle.$$

Take the completion with respect to this norm. One gets a Hilbert space $H \hat{\otimes} K$, which has the following two properties:

- $\|v \otimes w\| = \|v\|\, \|w\|$,
- if $(e_i)$ and $(f_j)$ are orthonormal bases of $H$ and $K$, then $(e_i \otimes f_j)$ is an orthonormal basis of $H \hat{\otimes} K$.

**Example**. $\mathbb{R}^m \otimes \mathbb{R}^n = \mathbb{R}^{mn}$.
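Concretely, one can realize $\mathbb{R}^m \otimes \mathbb{R}^n$ as $\mathbb{R}^{mn}$ via the Kronecker product and check the defining identity numerically; a minimal sketch in plain Python (the vectors are arbitrary illustrative data):

```python
def kron(v, w):
    # v tensor w, with R^m ⊗ R^n identified with R^{mn}
    return [vi * wj for vi in v for wj in w]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

norm = lambda a: dot(a, a) ** 0.5

v, vp = [1.0, 2.0], [3.0, -1.0]
w, wp = [0.5, 0.0, 2.0], [1.0, 4.0, -2.0]

# Defining identity: <v ⊗ w, v' ⊗ w'> = <v, v'> <w, w'>
lhs = dot(kron(v, w), kron(vp, wp))
rhs = dot(v, vp) * dot(w, wp)
```

In particular the norm is multiplicative, $\|v \otimes w\| = \|v\|\,\|w\|$, which is the property used in the proof below.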

**3. Proof of Assouad’s theorem**

**3.1. A lemma**

The first step is to map $X$ to some Euclidean space in a Lipschitz manner (with a possibly large constant $L$) in such a way that distances of the order of $r$ are not compressed. Then the following lemma will complete the proof.

**Lemma 2**. Let $N \in \mathbb{N}$ and $L \geq 1$. Assume that, for all $r > 0$, there exists a map $\phi_r : X \to \mathbb{R}^N$ such that

- if $r \leq d(x,y) \leq 2r$, then $|\phi_r(x) - \phi_r(y)| \geq \frac{1}{L}$,
- $\phi_r$ is $\frac{L}{r}$-Lipschitz and $\|\phi_r\|_{\infty} \leq L$.

Then, for all $\alpha \in (0,1)$, $(X, d^{\alpha})$ embeds into a Euclidean space with controlled dimension and bi-Lipschitz distortion.

**Proof**. Fix some origin $x_0 \in X$. Let $\Phi : X \to \mathbb{R}^N \otimes \mathbb{R}^m$ be defined by
$$\Phi(x) = \sum_{j \in \mathbb{Z}} 2^{j\alpha}\, \bigl(\phi_{2^j}(x) - \phi_{2^j}(x_0)\bigr) \otimes e_j.$$

Here, $(e_1, \dots, e_m)$ is an orthonormal basis of $\mathbb{R}^m$, and the notation $e_j$ is extended to $j \in \mathbb{Z}$ by $e_{j+m} = e_j$.

Then, choosing $k$ such that $2^k \leq d(x,y) < 2^{k+1}$,
$$|\Phi(x) - \Phi(y)| \leq \sum_{j \leq k} 2^{j\alpha} \cdot 2L + \sum_{j > k} 2^{j\alpha}\, \frac{L}{2^j}\, d(x,y) \leq C(\alpha)\, L\, d(x,y)^{\alpha}.$$

Note that this Lipschitz estimate does not depend on the yet undefined dimension $m$.

For the compression modulus, the main contribution will come from the scale $2^k$, where $2^k \leq d(x,y) < 2^{k+1}$. Since the blocks $\mathbb{R}^N \otimes e_s$, $s = 1, \dots, m$, are pairwise orthogonal, $|\Phi(x) - \Phi(y)|$ is bounded below by the norm of its projection onto the block of the scale $2^k$, i.e. by the norm of $\sum_{j \equiv k \bmod m} 2^{j\alpha}\, \bigl(\phi_{2^j}(x) - \phi_{2^j}(y)\bigr)$. The term $j = k$ alone has size at least $\frac{1}{L}\, 2^{k\alpha} \geq \frac{1}{2L}\, d(x,y)^{\alpha}$.

The rest of the block sum is dealt with by the Lipschitz estimate and the uniform bound. The scales $j \equiv k \bmod m$, $j \neq k$, contribute at most
$$\sum_{j \leq k-m} 2^{j\alpha} \cdot 2L + \sum_{j \geq k+m} 2^{j\alpha}\, \frac{L}{2^j}\, d(x,y) \leq C(\alpha)\, L\, \bigl(2^{-m\alpha} + 2^{-m(1-\alpha)}\bigr)\, d(x,y)^{\alpha}.$$

Choosing $m$ large enough so that $\frac{1}{2L}\, d(x,y)^{\alpha}$ beats this term gives the compression bound. Note that both the embedding dimension $Nm$ and the distortion blow up as $\alpha$ tends to 1.
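To see the mechanism at work, here is a hypothetical Python sketch for the simplest case $X \subset \mathbb{R}$: the maps $\phi_r$ are built from tent functions over the net $r\mathbb{Z}$ coloured mod 7 (as in the next subsection), the scales $2^j$ are combined with weights $2^{j\alpha}$ into blocks indexed by $j \bmod m$ (a concrete stand-in for tensoring with $e_j$), and the distortion on the snowflake $d^{1/2}$ is measured empirically. All constants (7 colours, $m = 8$, the scale range, the set $X$) are illustrative choices, not from the talk.

```python
import itertools, math

def phi(x, r):
    # phi_r : R -> R^7 for the net Z = r*Z coloured mod 7: net points
    # at distance <= 3r are at most 3 net steps apart, hence get
    # different colours.  Tent functions of width r.
    v = [0.0] * 7
    k0 = math.floor(x / r)
    for k in (k0, k0 + 1):   # the only tents that can be nonzero at x
        v[k % 7] += max(0.0, 1.0 - abs(x - k * r) / r)
    return v

def Phi(x, alpha=0.5, m=8, scales=range(-6, 8), x0=0.0):
    # Phi(x) = sum_j 2^{j*alpha} (phi_{2^j}(x) - phi_{2^j}(x0)) (x) e_{j mod m},
    # realised in R^{7m}: block s collects the scales with j = s mod m.
    out = [0.0] * (7 * m)
    for j in scales:
        r = 2.0 ** j
        w = r ** alpha
        px, p0 = phi(x, r), phi(x0, r)
        for i in range(7):
            out[(j % m) * 7 + i] += w * (px[i] - p0[i])
    return out

# Empirical distortion of Phi on the 1/2-snowflake of a finite X in R:
X = [0.3, 0.45, 1.0, 2.2, 3.7, 7.1]
ratios = [math.dist(Phi(a), Phi(b)) / abs(a - b) ** 0.5
          for a, b in itertools.combinations(X, 2)]
distortion = max(ratios) / min(ratios)
```

The ratios $|\Phi(x)-\Phi(y)| / d(x,y)^{1/2}$ stay within a bounded window for all pairs, as the lemma predicts; shrinking $m$ or the scale range degrades the lower bound, in line with the interference estimate above.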

**3.2. Existence of embeddings with compression controlled at a fixed scale**

Let $r > 0$. Pick a maximal $r$-net $Z \subset X$. The number of points of $Z$ in a ball of radius $3r$ is bounded by some $K = K(\delta)$. We shall use a $K$-colouring of $Z$ with the property that points of $Z$ at distance $\leq 3r$ have different colours. To make one, order the points of $Z$ and assign colours inductively: for each new point $z$, examine the colours already assigned in $B(z, 3r)$ and pick one that has not yet been assigned (there is always a suitable colour available, since $B(z, 3r)$ contains at most $K - 1$ previously coloured points).
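The greedy colouring can be sketched as follows (a hypothetical Python illustration, with the integer grid in the plane standing in for the net $Z$):

```python
import math

def greedy_colouring(net, r):
    # Assign to each net point the smallest colour not already used
    # within distance 3r.  Since a ball of radius 3r contains at most
    # K net points, at most K colours are ever needed.
    colour = {}
    for i, z in enumerate(net):
        taken = {colour[j] for j in range(i)
                 if math.dist(net[j], z) <= 3 * r}
        c = 0
        while c in taken:
            c += 1
        colour[i] = c
    return colour

# Example: the integer grid is an r-net of the plane for r = 0.6;
# a ball of radius 3r = 1.8 contains at most 9 grid points.
net = [(i, j) for i in range(6) for j in range(6)]
col = greedy_colouring(net, 0.6)
```

By construction, any two net points at distance $\leq 3r$ receive different colours, and the greedy rule never needs more colours than the maximal number of net points in a $3r$-ball.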

Pick an orthonormal basis $(e_1, \dots, e_K)$ of $\mathbb{R}^K$. Set
$$\phi_r(x) = \sum_{z \in Z} \psi_z(x)\, e_{c(z)},$$

where $c(z)$ is the colour of $z$ and $\psi_z(x) = \max\bigl(0,\, 1 - \frac{1}{r}\, d(x,z)\bigr)$ is a tent function of the distance to $z$. Then $\phi_r$ is $\frac{C}{r}$-Lipschitz and bounded. Again, for points $x, y$ of $X$ at distance in $[r, 2r]$, the main contribution in the sum comes from the points of $Z$ in a ball around $x$, which have pairwise different colours, so $|\phi_r(x) - \phi_r(y)|$ is bounded below by the largest term, the one corresponding to a point of $Z$ close to $x$, whose size is of order $1$.
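For $X = \mathbb{R}$ these two properties can be tested numerically. The sketch below uses hypothetical concrete choices (net $r\mathbb{Z}$, colours mod 7, tent width $r$) and checks the Lipschitz bound and the lower bound at scale $r$ on sampled pairs:

```python
import math

def phi(x, r):
    # phi_r : R -> R^7 for the net Z = r*Z coloured mod 7: net points
    # at distance <= 3r are at most 3 net steps apart, hence get
    # different colours.
    v = [0.0] * 7
    k0 = math.floor(x / r)
    for k in (k0, k0 + 1):   # the only tents that can be nonzero at x
        v[k % 7] += max(0.0, 1.0 - abs(x - k * r) / r)
    return v

r = 1.0
xs = [i * 0.01 for i in range(1001)]   # sample points in [0, 10]

# Lipschitz: |phi_r(x) - phi_r(y)| <= (C/r) |x - y| with a small C.
lip = max(math.dist(phi(x, r), phi(x + 0.005, r)) / 0.005 for x in xs)

# Compression control: a lower bound when r <= |x - y| <= 2r.
sep = min(math.dist(phi(x, r), phi(x + t, r))
          for x in xs for t in (1.0, 1.3, 1.7, 2.0))
```

On these samples the Lipschitz constant stays below $\sqrt{2}$ (at most two tents move at once) and the separation at scale $r$ stays above $1/\sqrt{2}$, consistent with "size of order 1" above.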

This completes the proof.

**4. Further results**

Clearly, as $\alpha$ tends to 0, the embedding dimension must tend to infinity (use Hausdorff dimension, for instance: the snowflake $(X, d^{\alpha})$ has Hausdorff dimension $\frac{1}{\alpha} \dim_H(X)$). What if $\alpha$ tends to 1?

**Theorem 3 (Naor–Neiman 2012)**. Assouad’s construction can be improved so that the embedding dimension depends on the doubling constant only (provided $\alpha > \frac{1}{2}$).

The new construction uses stochastic decompositions.

**4.1. Questions**

What about isometric embedding of snowflakes? Le Donne: the Besicovitch covering property is a snowflake invariant. The Carnot metric on the Heisenberg group does not have it. So snowflakes of that metric never embed isometrically into a Euclidean space.

Back to the characterization of metric spaces bi-Lipschitz embeddable in Euclidean space. Being bi-Lipschitz embeddable in Hilbert space (for which there is a workable criterion) and doubling may together be necessary and sufficient. If true, this would provide a very useful tool in computer science: nonlinear dimension reduction in Euclidean spaces.