**Approximating Sparsest-Cut**

**1. Problem definition**

The input of the Sparsest-Cut problem is a finite graph $G = (V, E)$, $n = |V|$, with a set of demand pairs $D = \{(s_1, t_1), \dots, (s_k, t_k)\}$ (in fact, edges of an auxiliary demand graph). The idea is to find a subset $S \subseteq V$, thus dividing $V$ into two sets, $S$ and $V \setminus S$, such that as few edges as possible are cut, but many demand pairs are separated. Given such a graph we define the *sparsity* of the cut as the ratio between cost and effectiveness:

**Definition 1**

$$\varphi(S) = \frac{|\delta(S)|}{\sum_{(s,t) \in D} \delta_S(s,t)},$$

where $\delta(S)$ denotes the set of edges of $G$ with exactly one end vertex in $S$.

Note that $\delta_S(s,t) = 1$ when exactly one of $s, t$ is in $S$, otherwise $\delta_S(s,t) = 0$. We could give weights to demands and cuts, but here we stick to unit costs.
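To make the definition concrete, here is a minimal brute-force sketch (the graph, demand pairs, and function names are illustrative and not part of the lecture):

```python
from itertools import combinations

def sparsity(edges, demands, S):
    """phi(S) = |delta(S)| / (# demand pairs separated by S)."""
    S = set(S)
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    sep = sum(1 for s, t in demands if (s in S) != (t in S))
    return cut / sep if sep else float("inf")

def sparsest_cut(vertices, edges, demands):
    """Exhaustive search over all nontrivial subsets S (tiny graphs only)."""
    best_phi, best_S = float("inf"), None
    for r in range(1, len(vertices)):
        for S in combinations(vertices, r):
            phi = sparsity(edges, demands, S)
            if phi < best_phi:
                best_phi, best_S = phi, set(S)
    return best_phi, best_S

# Toy instance: a 4-cycle with one demand pair across a diagonal.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
D = [(0, 2)]
phi, S = sparsest_cut(V, E, D)  # any cut separating 0 and 2 cuts two edges
```

The exhaustive search is of course exponential; the point is only to pin down the objective.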

**Example 1** *Assume that we have $\binom{n}{2}$ demand pairs, i.e. every pair of vertices is a demand pair. Now the Sparsest-Cut problem has uniform demands. (If $D \subsetneq \binom{V}{2}$, we would have general-demand Sparsest-Cut.) We have*

$$\varphi(S) = \frac{|\delta(S)|}{|S| \cdot |V \setminus S|}.$$

*$\delta(S)$ denotes the set of edges of $G$ with exactly one end vertex in $S$. We may assume $|S| \le |V|/2$ (since if this were not true, we could interchange $S$ and $V \setminus S$), and then $n/2 \le |V \setminus S| \le n$. Thus, after scaling by $n$, the problem is equivalent, up to a factor of 2, to minimizing $|\delta(S)|/|S|$ over all $S$ with $|S| \le n/2$. The quantity $h(G) = \min_{|S| \le n/2} |\delta(S)|/|S|$ is called the edge expansion of $G$.*
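The factor-2 relation between uniform-demand sparsity and edge expansion can be checked by brute force; a sketch on an assumed toy instance, the 6-cycle (exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction
from itertools import combinations

def cut_edges(edges, S):
    """Number of edges with exactly one endpoint in S."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

def uniform_sparsity(n, edges):
    """min over S of |delta(S)| / (|S| * |V \\ S|), all pairs as demands."""
    V = range(n)
    return min(Fraction(cut_edges(edges, S), len(S) * (n - len(S)))
               for r in range(1, n) for S in combinations(V, r))

def edge_expansion(n, edges):
    """h(G) = min over |S| <= n/2 of |delta(S)| / |S|."""
    V = range(n)
    return min(Fraction(cut_edges(edges, S), len(S))
               for r in range(1, n // 2 + 1) for S in combinations(V, r))

# 6-cycle as a toy example
n = 6
E = [(i, (i + 1) % n) for i in range(n)]
phi = uniform_sparsity(n, E)  # best cut splits the cycle into two arcs of 3
h = edge_expansion(n, E)

# After scaling by n, sparsity and edge expansion agree up to a factor of 2.
assert h <= n * phi <= 2 * h
```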

**2. Computational problem**

Given a graph $G = (V, E)$ and demand pairs $D$, compute

$$\varphi(G) = \min_{\emptyset \neq S \subsetneq V} \varphi(S).$$

This problem is NP-hard, but there is a polytime $O(\log n)$-approximation based on a linear programming relaxation. Today we do the $O(\log n)$-approximation. Let us first write an integer program.

**2.1. Integer program**

The idea is to map every vertex to 0 or 1. Then edges from a 0-vertex to a 1-vertex have cost 1, and the other edges have cost 0.

**Definition 2 (Cut metric $\delta_S$)** *A (semi-)metric $\delta$ on $V$ is a cut metric if there exists a subset $S \subseteq V$ such that $\delta(u,v) = \delta_S(u,v)$ for all $u, v \in V$.*

Now the integer program is

$$\min \frac{\sum_{\{u,v\} \in E} \delta(u,v)}{\sum_{(s,t) \in D} \delta(s,t)} \quad \text{such that } \delta \text{ is a cut metric on } V.$$

This is exactly Sparsest-Cut.
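As a sanity check (an illustrative sketch; the instance is made up), one can verify that $\delta_S$ is indeed a semi-metric and that the objective of the integer program for $\delta_S$ is exactly the sparsity $\varphi(S)$:

```python
from itertools import product

def delta_S(S):
    """Cut (semi-)metric induced by S: 1 iff exactly one endpoint lies in S."""
    S = set(S)
    return lambda u, v: 1 if (u in S) != (v in S) else 0

# Made-up toy instance.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
D = [(0, 2), (1, 3)]
d = delta_S({0, 1})

# delta_S is a semi-metric: symmetric, zero on the diagonal, triangle inequality.
assert all(d(u, v) == d(v, u) and d(u, u) == 0 for u, v in product(V, V))
assert all(d(u, v) <= d(u, w) + d(w, v) for u, v, w in product(V, V, V))

# The IP objective for delta_S equals the sparsity of the cut S = {0, 1}.
obj = sum(d(u, v) for u, v in E) / sum(d(s, t) for s, t in D)
assert obj == 1.0  # two edges cut, both demand pairs separated
```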

**2.2. LP-relaxation**

We relax the demand that $\delta$ has to be a cut metric by allowing it to be any metric $d$:

$$\min \frac{\sum_{\{u,v\} \in E} d(u,v)}{\sum_{(s,t) \in D} d(s,t)}$$

such that $d$ is a (semi-)metric on $V$ and $d \not\equiv 0$. This is invariant under scaling of the metric $d$. So we may assume that $\sum_{(s,t) \in D} d(s,t) = 1$, and we get a linear programming relaxation

$$\begin{aligned}
\min \quad & \sum_{\{u,v\} \in E} d(u,v) \\
\text{s.t.} \quad & \sum_{(s,t) \in D} d(s,t) = 1, \\
& d(u,v) \le d(u,w) + d(w,v) \quad \forall u, v, w \in V, \\
& d(u,v) = d(v,u) \ge 0 \quad \forall u, v \in V.
\end{aligned}$$

Since the relaxation allows solutions which are not cut metrics, we have $\mathrm{LP} \le \mathrm{IP} = \varphi(G)$. Now the idea is to use the LP relaxation to get a solution in polynomial time, and then round it to get some solution for the original problem. We allow the rounding to cost a factor of at most $O(\log n)$.
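We do not solve the LP here, but the following sketch (assumed toy instance: the 5-cycle with uniform demands) checks that a shortest-path metric is feasible for the relaxation after normalization, and that on this particular instance its value happens to coincide with the true optimum $\varphi(G)$, consistent with $\mathrm{LP} \le \mathrm{IP}$:

```python
from itertools import combinations

# 5-cycle: shortest-path distances between all vertices.
n = 5
E = [(i, (i + 1) % n) for i in range(n)]
dist = {(u, v): min((v - u) % n, (u - v) % n)
        for u in range(n) for v in range(n)}

# Triangle inequality holds, so (after scaling) dist is feasible for the LP.
assert all(dist[u, v] <= dist[u, w] + dist[w, v]
           for u in range(n) for v in range(n) for w in range(n))

# Value of this feasible metric with uniform demands (all vertex pairs).
pairs = list(combinations(range(n), 2))
lp_feasible = sum(dist[e] for e in E) / sum(dist[p] for p in pairs)

# Best cut sparsity (the IP value) by brute force.
def sparsity(S):
    S = set(S)
    cut = sum(1 for u, v in E if (u in S) != (v in S))
    sep = sum(1 for u, v in pairs if (u in S) != (v in S))
    return cut / sep

ip = min(sparsity(S) for r in range(1, n) for S in combinations(range(n), r))
```

On the 5-cycle both values come out to 1/3; in general a feasible metric only gives an upper bound on the LP optimum, which in turn lower-bounds $\varphi(G)$.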

**Theorem 3 (Linial-London-Rabinovich, Aumann-Rabani 1995)** *This LP can be rounded with factor $O(\log n)$, thus giving an $O(\log n)$-approximation for Sparsest-Cut.*

The idea here is to look at $(V, d)$ as a metric space, and embed it into $\ell_1^m$, whose metrics are exactly the nonnegative combinations of cut metrics (Lemma 5 below). Here $m$ is some finite dimension. We use Bourgain's theorem, noting that in a sense it holds $\ell_2 \subseteq \ell_1$ (every $\ell_2$-metric embeds isometrically into $\ell_1$).

**Theorem 4 (Bourgain 1985)** *Every metric space on $n$ points can be embedded into $\ell_2$ with distortion $O(\log n)$.*

*Proof of Theorem 3:* Solve the LP, obtaining a metric $d$. Then apply Bourgain's embedding and, after scaling by a constant, get a non-contractive map $f : V \to \ell_1^m$. Now

$$d(u,v) \;\le\; \|f(u) - f(v)\|_1 \;\le\; O(\log n) \, d(u,v) \qquad \text{for all } u, v \in V.$$

Thus

$$\sum_{\{u,v\} \in E} \|f(u) - f(v)\|_1 \;\le\; O(\log n) \sum_{\{u,v\} \in E} d(u,v) \;=\; O(\log n) \cdot \mathrm{LP}.$$

Also

$$\sum_{(s,t) \in D} \|f(s) - f(t)\|_1 \;\ge\; \sum_{(s,t) \in D} d(s,t) \;=\; 1.$$

**Lemma 5** *Every finite $\ell_1$-metric can be represented as a positive combination of cut metrics: there exist constants $\lambda_j > 0$, $j = 1, \dots, M$, and sets $S_j \subseteq V$ such that for all $u, v \in V$*

$$\|f(u) - f(v)\|_1 = \sum_{j=1}^{M} \lambda_j \, \delta_{S_j}(u,v).$$

*Moreover, given the $\ell_1$-embedding $f$, this representation can be computed in polynomial time.*

*Proof:* We work coordinate by coordinate. Given $f : V \to \ell_1^m$, we replace each coordinate of $f$, denoted by $f_i$, by a map $g_i : V \to \mathbb{R}^{n-1}$ as follows. Suppose $V = \{v_1, \dots, v_n\}$ is ordered s.t. $f_i(v_1) \le f_i(v_2) \le \dots \le f_i(v_n)$. Define

$$g_i(v_j) = \big(f_i(v_2) - f_i(v_1),\; \dots,\; f_i(v_j) - f_i(v_{j-1}),\; 0,\; \dots,\; 0\big),$$

i.e. the $k$-th coordinate of $g_i(v_j)$ is $f_i(v_{k+1}) - f_i(v_k)$ if $k < j$, and $0$ otherwise. Then $\|g_i(u) - g_i(v)\|_1 = |f_i(u) - f_i(v)|$ for all $u, v \in V$.

After replacing each coordinate of $f$ as above, we have a new map $g : V \to \mathbb{R}^{m(n-1)}$ s.t.

$$\|g(u) - g(v)\|_1 = \|f(u) - f(v)\|_1$$

for all $u, v \in V$. Every coordinate of $g$ takes only 2 values, $0$ or some $\lambda_k > 0$ (in fact at most 2 values, but we can ignore the coordinates taking only 1 value). I.e.

$$g_k(v) \in \{0, \lambda_k\} \qquad \text{for all } v \in V.$$

This means that the $k$-th coordinate is just $\lambda_k$ times the cut metric of $S_k = \{v \in V : g_k(v) = \lambda_k\}$:

$$|g_k(u) - g_k(v)| = \lambda_k \, \delta_{S_k}(u,v)$$

for all $u, v \in V$. $\square$

We use the lemma to get a combination of cut metrics equivalent to the embedding $f$. Now one of these $S_k$'s gives us a "good" cut.
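The coordinate-by-coordinate construction in the proof of Lemma 5 amounts to taking threshold cuts at the gaps between consecutive coordinate values; a minimal sketch (function names and the point set are mine):

```python
def cut_decomposition(points):
    """Decompose the l1 metric of a finite point set into weighted cut metrics.

    points: dict vertex -> tuple of coordinates. Returns a list of (lam, S)
    with ||f(u) - f(v)||_1 = sum of lam * delta_S(u, v)  (Lemma 5).
    """
    m = len(next(iter(points.values())))
    cuts = []
    for i in range(m):
        vals = sorted(set(p[i] for p in points.values()))
        for lo, hi in zip(vals, vals[1:]):
            lam = hi - lo  # gap between consecutive coordinate values
            S = frozenset(v for v, p in points.items() if p[i] <= lo)
            cuts.append((lam, S))  # threshold cut below the gap
    return cuts

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

# Three points in R^2 with the l1 norm (arbitrary example).
f = {"a": (0.0, 2.0), "b": (1.0, 0.0), "c": (3.0, 1.0)}
cuts = cut_decomposition(f)

# The weighted cut metrics reconstruct every pairwise l1 distance exactly.
for u in f:
    for v in f:
        recon = sum(lam * ((u in S) != (v in S)) for lam, S in cuts)
        assert abs(recon - l1(f[u], f[v])) < 1e-9
```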

**Claim 1** *There exists an index $j$ such that $S_j$ is a good cut, meaning that*

$$\frac{\sum_{\{u,v\} \in E} \delta_{S_j}(u,v)}{\sum_{(s,t) \in D} \delta_{S_j}(s,t)} \;\le\; \frac{\sum_{\{u,v\} \in E} \|f(u) - f(v)\|_1}{\sum_{(s,t) \in D} \|f(s) - f(t)\|_1}.$$

Here the LHS is $\varphi(S_j)$, the objective of our original problem, and we take the best cut among the $S_j$'s. The idea of the proof is that if $a_j, b_j, \lambda_j > 0$, then $\min_j a_j / b_j \le \big(\sum_j \lambda_j a_j\big) / \big(\sum_j \lambda_j b_j\big)$. This can be proven by contradiction: assume the contrary for every $j$, multiply the inequalities by $\lambda_j b_j$, and sum up to get a contradiction.

*Proof of the Claim:* Write $a_j = \sum_{\{u,v\} \in E} \delta_{S_j}(u,v)$ and $b_j = \sum_{(s,t) \in D} \delta_{S_j}(s,t)$, so that by Lemma 5 the RHS above equals $\big(\sum_j \lambda_j a_j\big) / \big(\sum_j \lambda_j b_j\big)$. Assume the contrary for all $j$. Thus

$$\frac{a_j}{b_j} \;>\; \frac{\sum_{j'} \lambda_{j'} a_{j'}}{\sum_{j'} \lambda_{j'} b_{j'}} \qquad \text{for all } j.$$

We multiply this inequality for every $j$ by $\lambda_j b_j$ and by the denominator of the RHS, and add up. We get

$$\Big(\sum_{j'} \lambda_{j'} b_{j'}\Big) \sum_j \lambda_j a_j \;>\; \Big(\sum_{j'} \lambda_{j'} a_{j'}\Big) \sum_j \lambda_j b_j,$$

which is a contradiction, since both sides are equal. $\square$
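The averaging fact behind the claim, $\min_j a_j/b_j \le \big(\sum_j \lambda_j a_j\big)/\big(\sum_j \lambda_j b_j\big)$ for positive $a_j$, $b_j$, $\lambda_j$, can also be sanity-checked numerically (a randomized sketch, not part of the proof):

```python
import random

random.seed(0)

# Averaging fact: for positive lam_j, a_j, b_j,
#   min_j a_j / b_j  <=  (sum_j lam_j * a_j) / (sum_j lam_j * b_j).
# This is the inequality used to pick the good cut among the S_j's.
for _ in range(1000):
    m = random.randint(1, 10)
    lam = [random.uniform(0.1, 5.0) for _ in range(m)]
    a = [random.uniform(0.1, 5.0) for _ in range(m)]
    b = [random.uniform(0.1, 5.0) for _ in range(m)]
    best = min(x / y for x, y in zip(a, b))
    combined = sum(l * x for l, x in zip(lam, a)) / sum(l * y for l, y in zip(lam, b))
    assert best <= combined + 1e-12
```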

By the claim, one of the sets $S_j$ is a good cut, and

$$\varphi(G) \;\le\; \varphi(S_j) \;\le\; \frac{\sum_{\{u,v\} \in E} \|f(u) - f(v)\|_1}{\sum_{(s,t) \in D} \|f(s) - f(t)\|_1} \;\le\; O(\log n) \cdot \mathrm{LP} \;\le\; O(\log n) \cdot \varphi(G). \qquad \square$$

The above analysis of the LP relaxation can be improved to get an $O(\log k)$-approximation, where $k = |D|$, but with an LP relaxation you cannot do any better than $\Omega(\log n)$:

**Theorem 6** *For all $n$ there exists an $n$-point metric whose embedding into $\ell_1$ requires distortion $\ge c \log n$, where $c > 0$ is a universal constant.*

*Proof:* Let $G$ be an $n$-vertex edge-expander of bounded degree, which means that for all $S \subseteq V$ with $|S| \le n/2$ it holds that $|\delta(S)| \ge \alpha |S|$ for a constant $\alpha > 0$. Let $d$ be the shortest-path metric of $G$. We show that embedding $(V, d)$ into $\ell_1$ requires distortion $\Omega(\log n)$. For a suitable scaling factor $c' > 0$, the metric $c' d$ is feasible for the LP (uniform demands). Since $G$ has bounded degree, all but a small fraction of the pairs are at distance $\Omega(\log n)$, so $\sum_{u,v} d(u,v) = \Omega(n^2 \log n)$, while $\sum_{\{u,v\} \in E} d(u,v) = |E| = O(n)$. Thus

$$\mathrm{LP} \;\le\; \frac{\sum_{\{u,v\} \in E} d(u,v)}{\sum_{u,v} d(u,v)} \;=\; O\!\left(\frac{1}{n \log n}\right).$$

The integer solution gives

$$\varphi(G) \;=\; \min_{S} \frac{|\delta(S)|}{|S| \cdot |V \setminus S|} \;\ge\; \min_{|S| \le n/2} \frac{\alpha |S|}{|S| \cdot n} \;=\; \frac{\alpha}{n}.$$

We know that the LP value is smaller than the IP value by a factor of $\Omega(\log n)$. From an $\ell_1$-embedding with distortion $o(\log n)$ it would follow, by the rounding above, that there exists $S$ such that $\varphi(S) \le o(\log n) \cdot \mathrm{LP} = o(1/n)$, meaning that $\varphi(S) < \varphi(G)$, a contradiction. $\square$

With the SDP approach the following tighter results have been obtained: $O(\sqrt{\log n})$ for uniform demands (Arora-Rao-Vazirani), and $O(\sqrt{\log n} \log \log n)$ for general demands (Arora-Lee-Naor). The integrality gap of the Sparsest-Cut problem is interesting also because of its direct relation to $\ell_1$-embeddings.

**References**

S. Arora, J. Lee, and A. Naor. Euclidean distortion and the sparsest cut. In Proc. 37th Symp. on Theory of Computing (STOC), pages 553-562, 2005.

S. Arora, S. Rao, and U. Vazirani. Expander flows, geometric embeddings and graph partitioning. In Proc. 36th Symp. on Theory of Computing (STOC), 2004.

J. Bourgain. On Lipschitz embeddings of finite metric spaces in Hilbert space. Israel J. of Math. 1985.

F. T. Leighton and S. Rao. An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms. In Proc. 29th Symp. on Foundations of Computer Science (FOCS), pages 422-431, 1988.

N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications. Combinatorica, 15(2):215-245, 1995.