Notes of Ofer Neiman’s lecture

Stochastic decompositions of metric spaces

This has applications to many problems: Ramsey decompositions, distributed computing… These decompositions are a basic tool in various parts of mathematics and in the design of algorithms, so it is good to know how to produce them.

1. Definitions

A stochastic decomposition is a probability distribution over partitions, i.e. a random partition.

1.1. Padded decompositions

Say a partition of a metric space is {\Delta}-bounded if all pieces (also called clusters) have diameter {\leq \Delta}. Given a point {x} and a partition {P}, {P(x)} denotes the cluster that contains {x}.

Definition 1 Fix a positive function {\epsilon} on {X}. Say a random {\Delta}-bounded partition is {\epsilon}-padded if, for every point {x\in X},

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb P}(B(x,\epsilon(x)\Delta)\subseteq P(x))\geq \frac{1}{2}. \end{array}

Most of the time, {\epsilon} will be a constant.

Example. Let {X={\mathbb R}}, {\Delta=1}. Consider the partition into integral intervals {[i,i+1)} and its translates by numbers uniformly chosen in {[0,1]}. This is {\epsilon}-padded for {\epsilon\leq 1/4}.
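To illustrate the definition, here is a minimal Python sketch (hypothetical code, not from the lecture) estimating the padding probability of this randomly shifted partition of {{\mathbb R}} by Monte-Carlo:

```python
import random

def padded_prob(x, eps, trials=200_000):
    """Estimate P(B(x, eps) is contained in the cluster of x) for the
    partition of R into unit intervals shifted by a uniform s in [0, 1).
    The cluster of x is [floor(x - s) + s, floor(x - s) + s + 1), so
    B(x, eps) stays inside iff frac(x - s) lies in [eps, 1 - eps]."""
    hits = sum(eps <= (x - random.random()) % 1.0 <= 1.0 - eps
               for _ in range(trials))
    return hits / trials

print(padded_prob(3.7, 0.25))  # concentrates near 1 - 2*0.25 = 1/2
```

The estimate is close to {1-2\epsilon}, which is {\geq \frac{1}{2}} exactly when {\epsilon\leq 1/4}.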

Example. Let {X=\ell_p^d=({\mathbb R}^d,\ell_p)}, {\Delta=1}. The product partition (randomly translated axis-parallel cubes of side {d^{-1/p}}, hence of {\ell_p}-diameter {1}) is {\epsilon}-padded for {\epsilon\leq \frac{1}{d\,d^{1/p}}}.

1.2. Decomposability

Definition 2 A metric space is called {\alpha}-decomposable if, for every {\Delta>0}, there exists a {\Delta}-bounded random partition which is {\frac{1}{\alpha}}-padded.

In general, every finite metric space is {\alpha}-decomposable with {\alpha=O(\log|X|)}.

Example. Planar graphs are {\alpha}-decomposable for a universal constant {\alpha}. More generally, every minor-excluding family of graphs is {\alpha}-decomposable for some constant {\alpha} depending only on the family.

Example. Bounded tree-width graphs are {\alpha}-decomposable for a universal constant {\alpha}.

2. Doubling metrics

2.1. The result

Today, I will prove the following.

Theorem 3 Every bounded {\lambda}-doubling metric space is {O(\log\lambda)}-decomposable.

Doubling means finite-dimensional, in a metric sense.

Definition 4 A metric space is {\lambda}-doubling if for every point {x} and {r>0}, {B(x,2r)} can be covered by {\lambda} balls of radius {r}. The doubling dimension is {\log\lambda}.
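As a toy illustration of the definition (hypothetical code; a greedy separated set gives only an upper bound on the covering number), one can estimate the doubling constant of a finite point set at one scale:

```python
def covering_number(points, center, R, r, dist):
    """Upper bound on the number of balls of radius r needed to cover
    the points of B(center, R): pick a greedy maximal r-separated
    subset; the r-balls around it cover everything, since any
    uncovered point would have been added to the subset."""
    ball = [p for p in points if dist(p, center) <= R]
    centers = []
    for p in ball:
        if all(dist(p, c) > r for c in centers):
            centers.append(p)
    return len(centers)

# doubling check at one scale: cover B(0, 2r) by balls of radius r = 1
grid = [(i / 2, j / 2) for i in range(-4, 5) for j in range(-4, 5)]
cheb = lambda a, b: max(abs(a[0] - b[0]), abs(a[1] - b[1]))  # l_infty
print(covering_number(grid, (0.0, 0.0), 2.0, 1.0, cheb))
```

For {\ell_\infty^d}, {2^d} balls of radius {r} always suffice to cover {B(x,2r)}; the greedy count may be somewhat larger, consistent with doubling dimension {O(d)}.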

Example. {\ell_p^d} has doubling dimension {O(d)}.

2.2. Proof of doubling theorem

Fix {\Delta>0}. Let {N} be a {\Delta/4}-net of {X}. The partition will be based on balls centered at points of {N}; we must specify radii and an ordering of the net points. Choose {r\in[\frac{\Delta}{4},\frac{\Delta}{2}]} uniformly at random and, independently, a uniformly random ordering {\sigma} of {N}.

For each {u\in N}, define a cluster as follows

\displaystyle  \begin{array}{rcl}  C_u=\{x\in X\,;\,d(u,x)\leq r\textrm{ and }\forall v\in N\textrm{ such that }\sigma(v)<\sigma(u),\,d(v,x)>r\}. \end{array}

The clusters of the random partition are the nonempty {C_u}'s. Since {N} is a {\Delta/4}-net, every {x\in X} is within distance {\Delta/4\leq r} of some net point, so the {C_u}'s cover {X}; and {C_u\subseteq B(u,r)} has diameter {\leq 2r\leq\Delta}, so the partition is {\Delta}-bounded.
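The construction can be sketched in Python (a hypothetical implementation for finite point sets; a greedy construction stands in for an arbitrary {\Delta/4}-net):

```python
import random

def greedy_net(points, radius, dist):
    """Greedy net: net points are more than `radius` apart, and every
    input point is within `radius` of some net point."""
    net = []
    for p in points:
        if all(dist(p, q) > radius for q in net):
            net.append(p)
    return net

def random_partition(points, delta, dist):
    """One sample of the random Delta-bounded partition: each point
    joins the first net point, in a uniformly random ordering, whose
    r-ball contains it, where r is uniform in [Delta/4, Delta/2]."""
    net = greedy_net(points, delta / 4, dist)
    r = random.uniform(delta / 4, delta / 2)
    sigma = list(range(len(net)))
    random.shuffle(sigma)                 # random ordering of the net
    clusters = {}
    for x in points:
        for i in sigma:
            if dist(net[i], x) <= r:      # x lands in the cluster of net[i]
                clusters.setdefault(i, []).append(x)
                break
    return list(clusters.values())

# 1-D sanity check: every cluster has diameter at most Delta = 1
pts = [i / 10 for i in range(100)]
parts = random_partition(pts, 1.0, lambda a, b: abs(a - b))
assert all(max(c) - min(c) <= 1.0 for c in parts)
```

Scanning the net in the order {\sigma} and stopping at the first point within distance {r} is exactly the defining condition of {C_u}.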

Fix {x\in X}. Note that if {u\in N} is such that {d(u,x)>\Delta}, then {C_u\cap B(x,\frac{\Delta}{k})=\emptyset}, where {k=100\log\lambda>2}: indeed {C_u\subseteq B(u,r)}, and {r+\frac{\Delta}{k}\leq\frac{\Delta}{2}+\frac{\Delta}{2}\leq\Delta<d(u,x)}. So only points of {S=B(x,\Delta)\cap N} may play a role in defining the clusters that meet {B(x,\frac{\Delta}{k})}.

Claim: {|S|\leq \lambda^3}. Indeed, by applying the doubling property three times, {B(x,\Delta)} can be covered by {\lambda^3} balls of radius {\frac{\Delta}{8}}, and each of them contains at most one point of {S}, since points of the net {N} are {\frac{\Delta}{4}}-separated while such a ball has diameter {\frac{\Delta}{4}}.

One could sum crude bounds over the effects of all points of {S}, but this would only give a decomposability parameter {\alpha} polynomial in {\lambda}. To get the logarithmic bound, one uses the randomness of the ordering. Let {s_1,\ldots,s_m} be the points of {S} ordered by increasing distance from {x}, and let {t=\frac{\Delta}{100\log\lambda}}. Say that {s_j} cuts {B(x,t)} if

  1. {d(x,s_j)-t\leq r\leq d(x,s_j)+t};
  2. {\sigma(s_j)<\sigma(s_i)} for every {i<j}, i.e. {s_j} comes first among {s_1,\ldots,s_j} in the ordering {\sigma}.

We must estimate the probability that the ball {B(x,t)} is cut, i.e. {\mathop{\mathbb P}(B(x,t)\not\subset P(x))}. Indeed, if {B(x,t)\not\subset P(x)}, then some {s_j} cuts {B(x,t)}: take for {s_j} the first net point, in the ordering {\sigma}, whose {r}-ball meets {B(x,t)}. The union bound gives

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb P}(B(x,t)\not\subset P(x))&\leq&\sum_{j=1}^m \mathop{\mathbb P}(s_j\textrm{ cuts }B(x,t))\\ &=&\sum_{j=1}^m \mathop{\mathbb P}(d(x,s_j)-t\leq r\leq d(x,s_j)+t\textrm{ and }\forall i<j,\,\sigma(s_j)<\sigma(s_i))\\ &=&\sum_{j=1}^m \mathop{\mathbb P}(d(x,s_j)-t\leq r\leq d(x,s_j)+t)\,\mathop{\mathbb P}(\forall i<j,\,\sigma(s_j)<\sigma(s_i))\\ &\leq&\sum_{j=1}^m \frac{2t}{\Delta/4}\cdot\frac{1}{j}\\ &\leq&\frac{8t}{\Delta}\log(\lambda^3)<\frac{1}{2}, \end{array}

since {t=\frac{\Delta}{100\log\lambda}} and {m\leq\lambda^3} (the factorization uses that {r} and {\sigma} are independent, and the last step uses {\sum_{j=1}^m\frac{1}{j}\lesssim\log m}). Thus {B(x,\epsilon\Delta)\subseteq P(x)} with probability at least {\frac{1}{2}} for {\epsilon=\frac{1}{100\log\lambda}}, which proves the theorem.
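As a quick sanity check on the arithmetic (a hypothetical snippet; it treats the harmonic sum as {\log(\lambda^3)}), note that the final bound is a constant, independent of both {\Delta} and {\lambda}:

```python
import math

def cut_probability_bound(lam):
    """Evaluate (8t/Delta) * log(lambda^3) with t = Delta/(100 log lambda):
    the Delta's cancel, and so do the log(lambda)'s, leaving 24/100."""
    t_over_delta = 1.0 / (100.0 * math.log(lam))
    return 8.0 * t_over_delta * 3.0 * math.log(lam)

for lam in (2.0, 16.0, 1e6):
    assert cut_probability_bound(lam) < 0.5   # always 24/100 = 0.24
```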

2.3. Questions

Could one apply Assouad's embedding instead? This would give a weaker bound, polynomial in {\lambda}.

How does the partition of the theorem compare with the product partition? It does better, but it is not optimal either. Using concentration of measure, one can do even better.

Does the theorem extend to unbounded spaces?

Tomorrow, I will explain an application of stochastic decompositions to constructing embeddings into Banach spaces, see my report.

About metric2011

metric2011 is a program of Centre Emile Borel, an activity of Institut Henri Poincaré, 11 rue Pierre et Marie Curie, 75005 Paris, France.