Notes on Yuri Makarychev’s lecture no. 2

1. Back to the {0}-extension problem

The term {0}-extension is unfortunate, but commonly used. It vaguely refers to the fact that the given distance {d} on {T} needs to be extended to the semi-metric {d(f(u),f(v))} on {V}, which vanishes on most edges.

1.1. Analysis of the FHRT algorithm

Let {uv} be an edge, let {\Delta=\rho(u,v)}. The easy case when {\Delta\geq\frac{1}{2}\min\{R_u ,R_v\}} has already been treated, so we assume that {\Delta<\frac{1}{2}\min\{R_u ,R_v\}}.

Recall our terminology: {u} cuts the edge {uv} if {u} is assigned to a terminal before {v} is (so that possibly {f(u)\neq f(v)}). Let {\mathcal{E}_t} be the event that {u} cuts edge {uv} and {u} is mapped to terminal {t}. Then

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}(1_{\mathcal{E}_t} d(f(u),f(v))) &\leq&\mathop{\mathbb E}(3\alpha R_u 1_{\mathcal{E}_t})\\ &=& 3R_u \mathop{\mathbb E}(\alpha 1_{\frac{\rho(u,t)}{R_u}\leq\alpha\leq \frac{\rho(u,t)+\Delta}{R_u -\Delta}})\frac{1}{\sigma(t)}\\ &\leq&\frac{3R_u }{\sigma(t)}\left(\frac{\rho(u,t)+\Delta}{R_u -\Delta}-\frac{\rho(u,t)}{R_u}\right)\frac{1}{\log M}\\ &\leq&C\,\frac{\Delta}{\sigma(t)\log M}. \end{array}

Here, {\sigma(t)\in[k]} is the rank of {t} when the terminals are ordered by their distance to {u}; the factor {\frac{1}{\sigma(t)}} bounds the probability that {t} comes before all closer terminals in the random permutation. Next, we sum over all terminals {t}, using {\sum_{t\in T}\frac{1}{\sigma(t)}=1+\frac{1}{2}+\cdots+\frac{1}{k}\leq 1+\log k}, and get

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}(d(f(u),f(v))1_{u\,\mathrm{cuts}\,uv})&\leq&\sum_{t \in T}C\,\frac{\Delta}{\sigma(t)\log M}\\ &\leq&C\,\Delta\frac{\log k}{\log M}\\ &=&C\,\Delta\frac{\log k}{\log \log k}. \end{array}

Symmetrically, we have the same estimate in case {v} cuts the edge {uv}. Summing over edges gives an upper bound on the objective function in terms of {\sum_{\mathrm{edges}\,uv}\Delta}, which is the LP value.
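
For concreteness, here is a sketch of the randomized rounding scheme this analysis refers to, in the CKR style with a truncated radius distribution; the interface (a metric rho, radii R, the parameter M) is assumed for illustration, and the exact conventions may differ from the lecture’s.

import random

def round_zero_extension(vertices, terminals, rho, R, M):
    # Draw alpha in [1, M] with density proportional to 1/alpha: if U is
    # uniform on [0, 1), then M**U has exactly this law, which produces
    # the 1/log M factor in the analysis above.
    alpha = M ** random.random()
    order = list(terminals)
    random.shuffle(order)              # uniformly random permutation of T
    f = {t: t for t in terminals}      # f must restrict to the identity on T
    for u in vertices:
        if u not in f:
            # Map u to the first terminal in the random order that is
            # within distance alpha * R_u of u; if R_u is at least the
            # distance from u to the nearest terminal, some t qualifies
            # since alpha >= 1.
            f[u] = next(t for t in order if rho(u, t) <= alpha * R[u])
    return f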

Note that the greedy algorithm “map {u} to the closest terminal in {\rho} distance” does not give any bound at all. For instance, take terminals {t_1 ,\dots,t_k} pairwise at distance {2} and nonterminals {u_1 ,\dots,u_k} with {\rho(u_i ,t_i)=1}, {\rho(u_i ,t_j)=1+\epsilon} and {\rho(u_i ,u_j)=\epsilon}, the edges being the pairs {u_i u_j}: greedy maps each {u_i} to {t_i} and pays {2} on every edge, while mapping all {u_i} to a single terminal pays nothing, and the LP value is only {\epsilon k(k-1)/2}.

1.2. Link with Lipschitz extension

We use the FHRT algorithm to give a proof of Lee and Naor’s theorem. Let {Y} be a normed space. Let {X} be a weighted graph with its shortest-path metric {d_X}. Let {T} be a subset of vertices of size {|T|=k}. Let {f:T\rightarrow Y} be a {1}-Lipschitz map. The algorithm provides us with a distribution on maps {g:X\rightarrow T} such that {g=id} on {T}. Define

\displaystyle  \begin{array}{rcl}  \hat{f}(u)=\mathop{\mathbb E}(f\circ g (u)). \end{array}

Since {g=id} on {T}, {\hat{f}} extends {f}. Moreover, since the norm is convex,

\displaystyle  \begin{array}{rcl}  \|\hat{f}(u)-\hat{f}(v)\|_Y \leq \mathop{\mathbb E}\|f\circ g (u)-f\circ g (v)\|_Y \leq \mathop{\mathbb E}\, d_T (g(u),g(v))\leq C\,\frac{\log k}{\log \log k}d_X (u,v). \end{array}

Since this works for arbitrary weighted graphs, it extends to arbitrary metric spaces.
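
A Monte Carlo rendering of this averaging step may clarify it; the sampler sample_g for the random map {g} and the coordinate representation of points of {Y} are assumptions for illustration, not part of the lecture.

def f_hat(f, sample_g, u, n=10000):
    # f: dict mapping each terminal to a point of Y (a tuple of floats)
    # sample_g: callable returning one random map g: X -> T with g = id on T
    # Returns the empirical average of f(g(u)), an estimate of \hat{f}(u).
    total = None
    for _ in range(n):
        y = f[sample_g()[u]]
        total = y if total is None else tuple(a + b for a, b in zip(total, y))
    return tuple(a / n for a in total)

On a terminal {t}, every sample returns {f(t)}, so the estimate is exact there, matching {\hat{f}=f} on {T}.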

2. Vertex sparsifiers

2.1. Multi-commodity flows

In a network with edge capacities {\alpha_{ij}}, we want to transmit a nonnegative amount {dem_r} of information from source {s_r} to destination {t_r}, for each pair {(s_r ,t_r)} in a given list.

Definition 1 A weighted collection of paths {(\mathcal{P},w_p)} is a multi-commodity flow satisfying a {\lambda}-fraction of demands if

  1. For each edge {ij},

    \displaystyle  \begin{array}{rcl}  \sum_{p\in\mathcal{P}\,;\,ij\in p}w_p \leq \alpha_{ij}. \end{array}

  2. For each {r},

    \displaystyle  \begin{array}{rcl}  \sum_{p\in\mathcal{P}\,;\, p\,\mathrm{joins}\,s_r \,\mathrm{to}\,t_r}w_p \geq \lambda\, dem_r . \end{array}

The max-flow of the problem is the maximal feasible value of {\lambda}.

Computing max-flow and finding an optimal multi-commodity flow is an LP.
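
As an illustration, here is the path formulation of this LP on a toy instance (made up for this note): two unit demands whose only paths share a single edge of capacity 1, so the optimal fraction is {\lambda=\frac{1}{2}}.

import numpy as np
from scipy.optimize import linprog

# Variables: x = (w_p1, w_p2, lam), one weight per path plus the fraction.
c = np.array([0.0, 0.0, -1.0])    # maximize lam  <=>  minimize -lam
A_ub = np.array([
    [1.0, 1.0, 0.0],              # shared edge: w_p1 + w_p2 <= capacity 1
    [-1.0, 0.0, 1.0],             # demand 1: lam * dem_1 <= w_p1, dem_1 = 1
    [0.0, -1.0, 1.0],             # demand 2: lam * dem_2 <= w_p2, dem_2 = 1
])
b_ub = np.array([1.0, 0.0, 0.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x)                      # [0.5, 0.5, 0.5]: half of each demand

In general there is one weight variable per path, of which there may be exponentially many; one then passes to the edge formulation or to column generation, but that is beside the point here.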

2.2. Flow sparsifiers

Definition 2 (Leighton, Moitra) Given a graph with edge capacities {(G,\alpha)} and a subset {T} of vertices of {G}, a vertex flow sparsifier is a graph {H} on vertex set {T} with edge-capacities {\beta} such that

\displaystyle  \begin{array}{rcl}  \mathrm{Maxflow}(G,\alpha, s_r ,t_r, dem_r)\leq \mathrm{Maxflow}(H,\beta, s_r ,t_r, dem_r)\leq Q\, \mathrm{Maxflow}(G,\alpha, s_r ,t_r, dem_r) \end{array}

for any demands such that {s_r}, {t_r \in T}.

The goal is to make the approximate computation of max-flow easier by choosing {T} rather small, with a reasonable loss {Q} (this may be useful if one has to repeatedly compute max-flows on the same graph). We want the bounds to be independent of the size of the original graph {G}; thus {Q=Q(k)}, {k=|T|}, is called the quality of the sparsifier.

This also provides an approach to SPARSEST CUT, for instance.

2.3. Results

[Moitra 2009] introduced vertex cut sparsifiers (see below). He showed that there exist vertex cut sparsifiers of quality {O(\frac{\log k}{\log\log k})}.

[Leighton, Moitra 2010] introduced vertex flow sparsifiers (see above). They showed that there exist vertex flow sparsifiers of quality {O(\frac{\log k}{\log\log k})}; this proof was not constructive. They did give a construction of vertex flow sparsifiers of quality {O(\frac{(\log k)^2}{\log\log k})}. They also showed a constant lower bound {C>1} on the quality of cut sparsifiers, and proved an {\Omega(\log\log k)} lower bound for vertex flow sparsifiers.

Later, the algorithmic bound was improved to {O(\frac{\log k}{\log\log k})} by several groups: [Charikar, Leighton, Li, Moitra], [Englert, Gupta, Krauthgamer, Räcke, Talgam-Cohen, Talwar], [Makarychev, Makarychev].

2.4. Link to Lipschitz extension

Below, we shall define vertex cut sparsifiers and metric sparsifiers.

Theorem 3 (Makarychev, Makarychev)

\displaystyle  \begin{array}{rcl}  Q^{cut}(k)=l_k (\ell_1 ,\ell_1).\quad Q^{metric}(k)=l_k (\ell_{\infty},\ell_1 (\ell_{\infty})). \end{array}

Theorem 4 (Makarychev, Makarychev)

\displaystyle  \begin{array}{rcl}  l_k (\ell_{\infty},\ell_1 (\ell_{\infty}))\geq l_k (\ell_{\infty},\ell_1)\geq\Omega(\sqrt{\frac{\log k}{\log\log k}}). \end{array}

They also obtained a lower bound on {l_k (\ell_1 ,\ell_1)} (independently, Charikar, Leighton, Li, Moitra proved a lower bound on the quality of cut sparsifiers {Q^{cut}(k)}), but it turns out that it was not as good as the following bound, which follows from old results of [Johnson, Lindenstrauss] and [Grünbaum].

Theorem 5

\displaystyle  \begin{array}{rcl}  l_k (\ell_{1},\ell_1)\geq\Omega(\frac{\sqrt{\log k}}{\log\log k}). \end{array}

Theorem 6 (Makarychev, Makarychev)

\displaystyle  \begin{array}{rcl}  l_k (\ell_{1},\ell_1)\leq l_k (\ell_{2},\ell_1)O(\sqrt{\log k\,\log\log k}). \end{array}

Recall that Ball asked whether {l_k (\ell_2 ,\ell_1)} is bounded. If true, this would give much improved upper bounds on the quality of sparsifiers.

Johnson and Lindenstrauss showed that the Lipschitz extension constant {l_k (X,Y)} is at least the projection constant {\lambda(Y,X)}, provided {\mathrm{dim}\,Y \leq c\,\frac{\log k}{\log\log k}}. To the best of my knowledge, this is the only known way of proving lower bounds on {l_k (X,Y)}. Using this approach, however, one cannot prove a lower bound better than {\sqrt{\log k}} (because the projection constant of a {d}-dimensional space is at most {\sqrt{d}}).

Question: Does there exist a metric space {X} and a normed space {Y} such that

\displaystyle  \begin{array}{rcl}  l_k (X,Y)\geq \omega(\sqrt{\log k}) ? \end{array}

The linear setting (projection constants for linear subspaces of Banach spaces) is better understood. [Kadec, Snobar 1971] showed that every {k}-dimensional subspace {L\subset Y} admits a linear projection {Y\rightarrow L} of norm {\leq \sqrt{k}}, and this is sharp [König].

2.5. Vertex cut sparsifiers

Definition 7 (Moitra) Given a graph with edge costs {(G,\alpha)} and a subset {T} of vertices of {G}, a vertex cut sparsifier is a graph {H} on vertex set {T} with edge-costs {\beta} such that

\displaystyle  \begin{array}{rcl}  \mathrm{Mincut}(G,\alpha, S,T\setminus S)\leq \mathrm{Mincut}(H,\beta, S,T\setminus S)\leq Q\, \mathrm{Mincut}(G,\alpha, S,T\setminus S) \end{array}

for all cuts {S} of {T}.

{Q^{cut}(k)} denotes the minimum quality {Q} achievable by size {k} vertex cut sparsifiers for arbitrary graphs with costs {(G,\alpha)}.

2.6. Vertex metric sparsifiers

Note that

\displaystyle  \begin{array}{rcl}  \mathrm{Mincut}(G,\alpha, S,T\setminus S) &=&\min_{R\subset V\,;\,R\cap T=S}\sum_{\mathrm{edges}\,uv\,\mathrm{of}\,G}\alpha_{uv}\delta_R (u,v)\\ &=:&\min_{R\subset V\,;\,R\cap T=S}\alpha(\delta_R), \end{array}

\displaystyle  \begin{array}{rcl}  \mathrm{Mincut}(H,\beta, S,T\setminus S) &=&\sum_{\mathrm{edges}\,uv\,\mathrm{of}\,H}\beta_{uv}\delta_S (u,v)\\ &=:&\beta(\delta_S), \end{array}

where {\delta_R} is the cut metric associated with {R}, and {\alpha(\delta)} is called the cost of the metric {\delta}.
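
In code, the cost of a cut metric is just the total cost of crossing edges; a two-line helper (the names are ours, chosen to fix ideas):

def cut_cost(edges, R):
    # edges: dict mapping (u, v) -> cost alpha_uv; R: a set of vertices.
    # Returns alpha(delta_R): each edge uv pays alpha_uv * delta_R(u, v),
    # and delta_R(u, v) = 1 exactly when uv crosses the cut (R, V \ R).
    return sum(a for (u, v), a in edges.items() if (u in R) != (v in R))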

Definition 8 (Makarychev, Makarychev) Given a graph with edge costs {(G,\alpha)} and a subset {T} of vertices of {G}, a vertex metric sparsifier is a graph {H} on vertex set {T} with edge-costs {\beta} such that for every metric {d_T} on {T}, the minimum cost extension {d_{T\rightarrow G}} of {d_T} to {G} satisfies

\displaystyle  \begin{array}{rcl}  \alpha(d_{T\rightarrow G})\leq \beta(d_T)\leq Q\, \alpha(d_{T\rightarrow G}). \end{array}

{Q^{metric}(k)} denotes the minimum quality {Q} achievable by size {k} vertex metric sparsifiers for arbitrary graphs with costs {(G,\alpha)}.

If one restricts the class of metrics to {L_1}-embeddable metrics, one recovers {Q^{cut}}.
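
This uses the classical fact that a semi-metric on a finite set embeds isometrically in {L_1} if and only if it is a nonnegative combination of cut semi-metrics:

\displaystyle  \begin{array}{rcl}  d\,\mathrm{embeds\,in}\,L_1 \iff d=\sum_{R\subset T}\lambda_R \,\delta_R ,\quad \lambda_R \geq 0. \end{array}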

2.7. Sketch of proof of Theorem 3

Let {d_{T\rightarrow G}} denote the minimum cost extension of {d_T}. Then {d_T \mapsto \alpha(d_{T\rightarrow G})} is a convex function on the space {D_T} of metrics on {T}. {d_T \mapsto \beta(d_T)} is a linear function on the space of metrics. Finding {\beta} such that {\alpha(d_{T\rightarrow G})\leq \beta(d_T)\leq Q\, \alpha(d_{T\rightarrow G})} is equivalent to finding a hyperplane in {D_T \times{\mathbb R}} which separates the convex hull {C} of the graph of {d_T \mapsto \alpha(d_{T\rightarrow G})} from the convex hull of the graph of {d_T \mapsto Q\alpha(d_{T\rightarrow G})}. So the best {Q} is

\displaystyle  \begin{array}{rcl}  Q=\max\{\frac{x}{\alpha(d_{T\rightarrow G})}\,;\,d_T \in D_T,\,(d_T ,x)\in C\}. \end{array}

The maximum is attained at some {(d_T ,x)} such that {d_T} is an average (a sum, by homogeneity) of metrics {d_T^i}, and {x} is the sum of the {\alpha(d_{T\rightarrow G}^i)}. Each {d_T^i} embeds isometrically into {\ell_{\infty}}, so {d_T} embeds isometrically into {\ell_1 (\ell_{\infty})}. {d_{T\rightarrow G}} is controlled by extensions of this embedding to {G}.
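
For reference, the isometric embedding into {\ell_{\infty}} is the classical Fréchet embedding:

\displaystyle  \begin{array}{rcl}  x\mapsto (d_T (x,t))_{t\in T}\in\ell_{\infty}^k ,\qquad \sup_{t\in T}|d_T (x,t)-d_T (y,t)|=d_T (x,y), \end{array}

where the triangle inequality gives {\leq} and the choice {t=y} gives equality.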

If we replace {D_T} with the space {CUT_T} of {L_1}-embeddable metrics on {T}, the same discussion yields

\displaystyle  \begin{array}{rcl}  Q^{cut}(k)=l_k (\ell_1 ,\ell_1 (\ell_1))=l_k (\ell_1 ,\ell_1). \end{array}

About metric2011

metric2011 is a program of Centre Emile Borel, an activity of Institut Henri Poincaré, 11 rue Pierre et Marie Curie, 75005 Paris, France. See http://www.math.ens.fr/metric2011/