Notes on James Lee's lecture no. 5

1. Negative type metrics

We started studying the distortion of embeddings of finite metric spaces into {L_p}-spaces. We discussed obstructions for a metric space to embed with small distortion into {L_2}-spaces. Today, we take steps in the direction of {L_1}-embeddings.

Terminology: say a metric is an {L_p}-metric if it is induced by a map to {L_p}.

1.1. Negative type

Definition 1 Say a metric {d} is of negative type iff {\sqrt{d}} is an {L_2}-metric.

By Schoenberg’s criterion, a metric {d} is of negative type iff for all {x_1 ,\ldots,x_n \in X} and all reals {c_1 ,\ldots,c_n} with {\sum_i c_i =0}, one has {\sum_{i,j}c_i c_j d(x_i ,x_j)\leq 0}, i.e. the matrix {(d(x_i ,x_j))_{ij}} is negative semidefinite on the hyperplane {\sum_i c_i =0}.
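As a quick numerical illustration (not part of the lecture), Schoenberg’s criterion can be tested mechanically: by the classical multidimensional scaling argument, {d} is of negative type iff {-\frac{1}{2}PDP} is positive semidefinite, where {D} is the distance matrix and {P=I-\frac{1}{n}J} is the centering projection. A Python sketch, with the shortest-path metric of {K_{2,3}} as an example that fails the test:

```python
import numpy as np

def is_negative_type(D, tol=1e-9):
    """d has negative type iff sqrt(d) embeds isometrically in L_2, which
    (classical MDS) holds iff -1/2 * P D P is positive semidefinite, with
    P = I - J/n the projection onto the hyperplane sum(c) = 0."""
    n = D.shape[0]
    P = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * P @ D @ P
    return bool(np.linalg.eigvalsh(G).min() >= -tol)

# Line metric |i - j| on 4 points: an L_1 metric, hence of negative type.
line = np.abs(np.subtract.outer(np.arange(4), np.arange(4))).astype(float)

# Shortest-path metric of K_{2,3} (points 0,1 on one side, 2,3,4 on the
# other): same-side pairs are at distance 2, cross pairs at distance 1.
K23 = np.full((5, 5), 1.0)
np.fill_diagonal(K23, 0.0)
K23[0, 1] = K23[1, 0] = 2.0
for i in (2, 3, 4):
    for j in (2, 3, 4):
        if i != j:
            K23[i, j] = 2.0

print(is_negative_type(line))  # True
print(is_negative_type(K23))   # False
```

The vector {c=(3,3,-2,-2,-2)} witnesses the failure for {K_{2,3}}: it sums to zero, yet {\sum_{i,j}c_i c_j d(x_i ,x_j)=12>0}.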

Proposition 2 {L_2}-metrics {\subset} {L_1}-metrics {\subset} negative type metrics.

We have seen problems which lead to minimizing linear functionals on the cone of {L_1}-metrics. Exact minimization over negative type metrics is doable (SDP). So one approach is to project every negative type metric to {L_1}-metrics, i.e. attempt to embed negative type metrics into {L_1} and control the distortion.

Proof: Claim: {L_1 ({\mathbb R})} is of negative type.

Indeed, let {T:L_1 ({\mathbb R})\rightarrow L_2 ({\mathbb R}^2)} be defined by

\displaystyle  \begin{array}{rcl}  Tf (x,y)=1_{\{0\leq y \leq f(x)\}}-1_{\{f(x)\leq y \leq 0\}} \end{array}

(the second indicator handles the negative values of {f}). Then


\displaystyle  \begin{array}{rcl}  \|Tf_1 -Tf_2 \|_{2}^{2}&=&\int|Tf_1 (x,y)-Tf_2 (x,y)|^2 \,dx\,dy\\ &=&\int|Tf_1 (x,y)-Tf_2 (x,y)| \,dx\,dy\\ &=&\int|f_1 (x)-f_2 (x)| \,dx\\ &=&\|f_1 -f_2\|_1 , \end{array}

i.e. the square root of the {L_1}-metric is an {L_2}-metric.
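As a discretized sanity check of the computation above (an illustration, not from the lecture), one can verify the identity {\|Tf_1 -Tf_2\|_2^2 =\|f_1 -f_2\|_1} for nonnegative step functions on a grid:

```python
import numpy as np

# Discretized check of ||Tf1 - Tf2||_2^2 = ||f1 - f2||_1, where
# (Tf)(x, y) = 1_{0 <= y <= f(x)}; f1, f2 are nonnegative step functions
# on [0, 1), sampled on a uniform grid (illustration only).
nx, ny = 1000, 1000
x = np.linspace(0, 1, nx, endpoint=False)
y = np.linspace(0, 1, ny, endpoint=False)

f1 = 0.5 + 0.3 * (x > 0.4)      # values in {0.5, 0.8}
f2 = 0.2 + 0.6 * (x > 0.7)      # values in {0.2, 0.8}

T1 = (y[None, :] <= f1[:, None]).astype(float)   # indicator of the sub-graph
T2 = (y[None, :] <= f2[:, None]).astype(float)

lhs = ((T1 - T2) ** 2).sum() / (nx * ny)         # ||Tf1 - Tf2||_2^2
rhs = np.abs(f1 - f2).sum() / nx                 # ||f1 - f2||_1
print(abs(lhs - rhs))                            # small: only grid error
```

The squared difference of indicators equals their absolute difference, which is exactly why the {L_2}-norm squared of {Tf_1 -Tf_2} computes the {L_1}-norm of {f_1 -f_2}.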

Claim: {\ell_2} embeds isometrically into {L_1}.

Indeed, let {\gamma} denote the Gaussian measure on an infinite dimensional Hilbert space {H}. If {X} is a centered Gaussian variable, {\mathop{\mathbb E}(|X|)=\sqrt{2/\pi}\sqrt{\mathop{\mathbb E}(X^2)}} is a function of {\mathop{\mathbb E}(X^2)} only. In other words, when restricted to linear functions on {H}, the {L_1 (\gamma)} and {L_2 (\gamma)} norms are proportional. \Box
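A Monte Carlo sketch (illustration only) of the proportionality behind this claim: for a standard Gaussian vector {g}, {\mathop{\mathbb E}|\langle v,g\rangle|=\sqrt{2/\pi}\,\|v\|_2}, so {v\mapsto \langle v,\cdot\rangle} carries the {\ell_2}-norm to a proportional {L_1 (\gamma)}-norm.

```python
import numpy as np

# For a standard Gaussian vector g, <v, g> is centered Gaussian with
# variance ||v||_2^2, so E|<v, g>| = sqrt(2/pi) * ||v||_2.
rng = np.random.default_rng(0)
g = rng.standard_normal((200000, 3))     # 200k Gaussian samples in R^3

v = np.array([1.0, -2.0, 2.0])           # ||v||_2 = 3
emp = np.abs(g @ v).mean()               # Monte Carlo estimate of E|<v, g>|
exact = np.sqrt(2 / np.pi) * np.linalg.norm(v)
print(emp, exact)                        # the two agree up to sampling error
```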

Bartholdi: Could such an isometric embedding {L_1 ({\mathbb R})\rightarrow L_2 ({\mathbb R}^2)} be given by a linear map? Answer: no. The image cannot be rectifiable.

Bartholdi: Why aren’t you interested in {\ell_1} rather than {L_1}? Answer: every finite {L_1}-metric on {n} points is also an {\ell_1}-metric, but in a larger dimension ({n\rightarrow \binom{n}{2}}). A metric is an {L_1}-metric iff its restriction to every finite subset is an {L_1}-metric. This fails for {\ell_1}: for instance, {L_1} does not bilipschitz embed into {\ell_1}. So {L_1} is the right space to look at.

1.2. The Goemans-Linial conjecture

Conjecture: Every metric space of negative type admits a bilipschitz embedding into {L_1}.

This was motivated by approximation algorithms for SPARSEST CUT, see Robi Krauthgamer’s second course.

It turns out to be false. Counterexamples were given by Subhash Khot and Nisheeth Vishnoi, and later by Jeff Cheeger and Bruce Kleiner: they show that the Heisenberg group with its Carnot metric does not bilipschitz embed into {L_1}, whereas Assaf Naor and I have found a left-invariant metric on the Heisenberg group, quasiisometric to the Carnot metric, which has negative type. An effective form of their theorem states

Theorem 3 (Cheeger, Kleiner, Naor) {\exists \delta>0} such that

\displaystyle  \begin{array}{rcl}  D_n (NEG,L_1)\geq \Omega((\log n)^{\delta}). \end{array}

Conversely, here is the best known upper bound on the distortion of {L_2}-embeddings of {n}-point metric spaces of negative type.

Theorem 4 (Arora, Lee, Naor)

\displaystyle  \begin{array}{rcl}  D_n (NEG,L_2)\leq O(\sqrt{\log n}\log\log n). \end{array}

Note that this implies that

\displaystyle  \begin{array}{rcl}  D_n (NEG,L_1)\leq O(\sqrt{\log n}\log\log n), \end{array}

which is nearly sharp. Let {\widetilde{NEG}} denote the class of metrics {d} such that {\sqrt{d}} is a metric which embeds into {L_2} with bounded distortion. Then

Theorem 5 (Lee, Sidiropoulos)

\displaystyle  \begin{array}{rcl}  D_n (\widetilde{NEG},L_1)\geq \Omega(\sqrt{\frac{\log n}{\log\log n}}). \end{array}

1.3. Observable diameter in {L_1}

The observable diameter was previously defined in terms of maps to the real line. There is a generalization where the real line is replaced by an arbitrary metric space as the target. It is crucial, for various issues in computer science, to understand the {L_1} observable diameter of various classes of metric spaces. Since every compact metric measure space can be approximated by graphs with counting measure, it is related to the best constant {D} in the Poincaré inequality for scalar functions on graphs:

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}_{x,y}|f(x)-f(y)| \leq \frac{1}{D}\,\mathop{\mathbb E}_{x\sim y}|f(x)-f(y)| . \end{array}
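To make the constant concrete (a toy computation, not from the lecture): any single test function {f} yields the upper bound {D\leq \mathop{\mathbb E}_{x\sim y}|f(x)-f(y)|/\mathop{\mathbb E}_{x,y}|f(x)-f(y)|} on the best constant. On the cycle {C_n} with {f=\cos}, this bound is of order {1/n}:

```python
import numpy as np
from itertools import combinations

# On the cycle C_n with counting measure, compare the average of
# |f(x)-f(y)| over all pairs with the average over edges; their ratio
# upper-bounds the best Poincare constant D.
n = 16
f = np.cos(2 * np.pi * np.arange(n) / n)   # a test function on C_n

all_pairs = np.mean([abs(f[i] - f[j]) for i, j in combinations(range(n), 2)])
edges = np.mean([abs(f[i] - f[(i + 1) % n]) for i in range(n)])
bound = edges / all_pairs
print(bound)   # well below 1: long cycles have a small Poincare constant
```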

For general finite metric spaces, the observable diameter in {L_1} is exponential in {n}. But for negative type metrics, there are bounds

\displaystyle  \begin{array}{rcl}  \log\log n \leq D\leq \sqrt{\log n}. \end{array}

In the Hilbert case, the story goes back to the 1960’s.

Theorem 6 (Enflo 1969) For all {f:\{ 0,1 \}^k \rightarrow L_2},

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}_{x,y}|f(x)-f(y)|^2 \leq O(k)\,\mathop{\mathbb E}_{x\sim y}|f(x)-f(y)|^2 . \end{array}

This implies that every bilipschitz embedding of the cube into {L_2} has distortion {\Omega(\sqrt{k})=\Omega(\sqrt{\log n})}.
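Enflo’s inequality can be checked by brute force on a small cube (an illustration, not from the lecture). For the identity map of {\{0,1\}^k} into {\ell_2}, the all-pairs average of {\|f(x)-f(y)\|^2} is exactly {k/2} while the edge average is {1}, so the ratio is {k/2}, consistent with (and below) the {O(k)} bound:

```python
import numpy as np
from itertools import product

# Brute-force check of Enflo's inequality for the identity map of the
# cube {0,1}^k into l_2: squared distances are Hamming distances.
k = 6
cube = np.array(list(product([0, 1], repeat=k)), dtype=float)

diffs = cube[:, None, :] - cube[None, :, :]
sq = (diffs ** 2).sum(-1)                  # squared l_2 = Hamming distance

all_pairs = sq.mean()                      # E_{x,y}, x and y independent
edges = sq[np.isclose(sq, 1.0)].mean()     # pairs at Hamming distance 1
print(all_pairs / edges)                   # exactly k/2 for the identity map
```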

Here comes the upper bound on the {L_1} observable diameter.

Theorem 7 (Arora, Rao, Vazirani) If {X} has negative type, then there exists a {1}-Lipschitz function {f:X\rightarrow{\mathbb R}} such that

\displaystyle  \begin{array}{rcl}  \sum_{x,\,y}|f(x)-f(y)|\geq\Omega(\frac{1}{\sqrt{\log n}})\sum_{x,\,y}d(x,y). \end{array}

Note that negative type can be replaced by the weaker hypothesis that {X} admits a quasisymmetric embedding into Hilbert space.

1.4. From {L_1}-observable diameter to {L_1}-embeddings

Theorem 8 (Lee) Let {X} have negative type. Let {\tau>0}, and let {\omega} be an integer valued measure on pairs of points of {X} at distance {\geq \tau}. Then there exist a {1}-Lipschitz function {f:X\rightarrow{\mathbb R}} and two sets {A}, {B\subset X} with {\omega(A\times B)\geq \frac{1}{100}\omega(X\times X)} that are separated by {f}, i.e. for all {x\in A}, {y\in B} with {d(x,y)\geq \tau},

\displaystyle  \begin{array}{rcl}  |f(x)-f(y)|\geq\Omega(\frac{\tau}{\sqrt{\log n}}). \end{array}

In other words, there exists a map to {L_1} which separates a definite proportion of the pairs that are far apart in {X}. Using it, Chawla, Gupta and Räcke improved Theorem 8 into a uniform distortion bound.

Theorem 9 (Chawla, Gupta, Räcke) Let {X} have negative type. Let {\tau>0}. There exists a {1}-Lipschitz map {f:X\rightarrow L_2} such that for all {x}, {y\in X} with {d(x,y)\geq\tau},

\displaystyle  \begin{array}{rcl}  |f(x)-f(y)|_{2}\geq \Omega(\frac{\tau}{\sqrt{\log n}}). \end{array}

In other words, {X} has an {\alpha}-ensemble with {\alpha=\sqrt{\log n}}.

Proof: Boosting: consider the pairs {\{x,y\}} such that {d(x,y)\geq \tau} and put the uniform measure {\omega_0} on them. Then modify {\omega_0} recursively as follows: apply Theorem 8 to {\omega_i} to get a function {f_i}, and set

\displaystyle  \begin{array}{rcl}  \omega_{i+1}(x,y)&=&\omega_{i}(x,y)\textrm{ if }f_i \textrm{ separates }x,\,y,\\ &=&2\omega_{i}(x,y)\textrm{ otherwise}. \end{array}

After {k=O(\log n)} steps, form the rescaled direct sum

\displaystyle  \begin{array}{rcl}  f:=\frac{1}{\sqrt{k}}(f_0 \oplus f_1 \oplus \cdots):X\rightarrow \ell_2 . \end{array}

One checks that every pair gets separated at some step {\leq O(\log n)}: the weight of a never-separated pair doubles at each step, while the total weight grows by a factor at most {2-\frac{1}{100}} per step (at least {\frac{1}{100}} of the weight is separated, hence not doubled), so after {O(\log n)} steps such a pair would outweigh the total. \Box
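The boosting loop can be sketched in a few lines. This is an illustration only: `separate` is a hypothetical oracle standing in for Theorem 8, the round count {200\log n} is a generous stand-in for the {O(\log n)} bound, and the demo oracle at the end is a toy.

```python
import math

def boost(pairs, separate, n):
    """Reweight unseparated pairs, then take a rescaled direct sum of the
    functions produced by the (hypothetical) Theorem 8 oracle `separate`,
    which returns a 1-Lipschitz f and the set of pairs it separates."""
    omega = {p: 1 for p in pairs}            # uniform initial measure
    fs = []
    for _ in range(math.ceil(200 * math.log(max(n, 2)))):
        f, separated = separate(omega)       # the Theorem 8 oracle
        fs.append(f)
        # double the weight of pairs the current function fails to separate
        omega = {p: (w if p in separated else 2 * w) for p, w in omega.items()}
    k = len(fs)
    return lambda x: [f(x) / math.sqrt(k) for f in fs]

# Toy demo: points on a line; the coordinate map separates every far pair.
pts = range(8)
pairs = [(i, j) for i in pts for j in pts if j - i >= 4]
oracle = lambda omega: ((lambda x: float(x)), set(omega))
emb = boost(pairs, oracle, n=8)
# the 1/sqrt(k) rescaling keeps the norm bounded: ||emb(5)||^2 stays 25
print(sum(v * v for v in emb(5.0)))
```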

We seem to be in bad shape: we have lost a factor of {\sqrt{\log n}} and still do not have a bilipschitz embedding into {L_2}. Local dimension will save us.

1.5. Local dimension

Let {X} be a finite metric space, and {x\in X}. Easily, one has

\displaystyle  \begin{array}{rcl}  \sum_{m\in{\mathbb Z}}\log\frac{|B(x,2^{m+1})|}{|B(x,2^{m})|}\leq \log|X|. \end{array}
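This telescoping inequality can be checked on a toy example (not from the lecture): the sum telescopes to {\log(|B(x,2^{M})|/|B(x,2^{-M})|)\leq \log|X|} once {2^{M}} exceeds the diameter and {2^{-M}} is below the minimal distance.

```python
import math
import numpy as np

# Telescoping check: sum over scales m of log(|B(x,2^{m+1})| / |B(x,2^m)|)
# equals log|X| here, since the ratios are 1 outside a finite range of m.
X = np.arange(64) * 1.3          # 64 points on a line, spacing 1.3
x = X[0]
d = np.abs(X - x)

total = 0.0
for m in range(-10, 10):          # 2^10 > diam(X), 2^-10 < minimal gap
    big = np.sum(d <= 2.0 ** (m + 1))
    small = np.sum(d <= 2.0 ** m)
    total += math.log(big / small)

print(total, math.log(len(X)))    # the sum equals log 64 here
```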

Idea: replace {\log n} by {\log\frac{|B(x,2^{m+1})|}{|B(x,2^{m})|}}, which measures the dimension of {X} at scale {2^m}, then glue the scales together. The gluing is achieved by the following theorem.

Theorem 10 (Lee) Let {X} be an {n}-point metric space. Let {\phi_m :X\rightarrow L_2} be {1}-Lipschitz maps. Then there exists a map {\phi :X\rightarrow L_2} such that

  1. {\|\phi\|_{Lip}\leq O(\sqrt{\log n})}.
  2. For all {x}, {y\in X},
    \displaystyle  \begin{array}{rcl}  |\phi(x)-\phi(y)|\geq \max_{m\in{\mathbb Z}} \sqrt{\lceil \log\frac{|B(x,2^{m+1})|}{|B(x,2^{m})|} \rceil}|\phi_m (x)-\phi_m (y)|. \end{array}

Proof: Let {\phi} be the direct sum of the {\phi_m}, rescaled by the weights in the theorem. The Lipschitz upper bound follows from the fact that the weights sum up to {\log n} at each point.

The weights are not continuous, in fact strongly discontinuous. Nevertheless, the function

\displaystyle  \begin{array}{rcl}  R_t (x)=\sup\{R\,;\, |B(x,R)|\leq 2^t\} \end{array}

is {1}-Lipschitz. Let

\displaystyle  \begin{array}{rcl}  \rho_{m,t}(x)=\rho(\frac{R_t (x)}{2^m}), \end{array}

where {\rho\geq 0} has support {[1/4,4]} and equals {1} on {[1/2,2]}. Use these numbers as weights, i.e. set

\displaystyle  \begin{array}{rcl}  \psi_t =\bigoplus_{m\in{\mathbb Z}}\rho_{m,t}\phi_m , \end{array}

and let {\phi} be the direct sum of all the {\psi_t}‘s. Then

  • {\psi_t}‘s are uniformly Lipschitz.
  • If {\rho_{m,t}(x)=1}, then
    \displaystyle  \begin{array}{rcl}  |\psi_t (x) -\psi_t (y)|_2 \geq \min\{\frac{2^m}{10},|\phi_m (x) -\phi_m (y)|_2 \}. \end{array}

Since this lower bound holds for roughly {\log\frac{|B(x,2^{m+1})|}{|B(x,2^{m-1})|}} values of {t}, the proof follows. \Box


About metric2011

metric2011 is a program of Centre Emile Borel, an activity of Institut Henri Poincaré, 11 rue Pierre et Marie Curie, 75005 Paris, France.
