However, it is extremely distracting when 15-second commercials break the tempo every 5 minutes. Worse still, trying to jump to a later part of the video means sitting through several commercials at once. This breaks one’s concentration and makes learning much harder than it should be (almost defeating the purpose of learning from a video).

Could the videos be uploaded to some other site besides DailyMotion? Their advertising policies are terrible. YouTube seems to fare much better on this.

Thank you for taking notes. Here are, as promised, some minor corrections to the notes of lecture 2.

– Paragraph after Theorem 1: it should read “provided the coarse filling function has at most quadratic growth.”

– In Section 1.1, after Proposition 3: in the definition of d_\omega(\hat{x}, \hat{y}), insert \lim_\omega

– In Section 1.1, after Theorem 5: write r_n instead of r-n

– In Section 1.2, Proposition 6: “geodesic thickening with quadratic filling function”

– In Section 1.2, in Definition 7: complete the sentence as follows: “… if all images of Lipschitz maps from subsets of R^2 to Z have zero 2-dimensional Hausdorff measure.”

– In Section 1.2, after Example 1: “… for Carnot groups whose first layer of the stratification of the Lie algebra does not contain any 2-dimensional sub-algebra.”

– In Section 1.3, Proof of Theorem 10:

– In step 1, the first inequality should read $$\liminf_{r\to0}\frac{d(\psi(t+r), \psi(t))}{|r|} \geq \liminf_{r\to0}\frac{|\gamma_n(t+r) - \gamma_n(t)|}{|r|}= |\dot{\gamma}_n(t)|.$$

– In the equation that follows, the term “\sup_n\int_s^t|\dot{\gamma}_n(\tau)|d\tau” should be between inequality signs

– In the equation that follows, at the very end insert “| = a(t)”

– In step 3, in the equation, replace w by v-w at the end of the first line

– In Section 2.1, end of second line in first paragraph: H^2(\psi(K))>0

– In Section 2.1, middle of proof:

– in the equation, replace $\sim length(\partial I)$ by $\sim \frac{1-\varepsilon}{4\pi}length(\partial I)^2.$

– on the next line, complete the sentence “One would like to” to “One would like to extend the (1+\delta)-Lipschitz map \psi^{-1} to a map from L_\infty(X_1) to V in order to push the filling \phi to V.”

– In Section 2.1, in the equation before the Open question, replace the first occurrence of \gamma by \bar{\gamma}

– Section 2.2, title: “Proof of Proposition 6”, not Proposition 14

Thank you again. With kind regards,

Stefan

Thank you for taking all the notes. I found only two minor details that could perhaps be corrected:

– In Gromov’s second theorem in Section 1.3 one should add “for all r >r_0” after the inequality.

– In Remark 1 in Section 3.1 one could replace “with equality if Y is injective” with “where the first inequality becomes an equality in case Y is injective.” One could furthermore add “In the above, E(X) denotes the injective hull of X.”

Thanks again.

Stefan

Thanks for making the changes. I will look at all of lecture 1 and also lecture 2 today and will write again.

As for the proposition about thickenings: yes, the thickening improves the filling function to quadratic on small scales (r<1) and has growth on the large scale bounded by that of Gromov's coarse isoperimetric function. One can even find a thickening Y such that FA_0^Y has growth at most that of Gromov's coarse function at large scales and such that FA_0^Y(r) is bounded by Cr^2 for 0<r<1, where C is a suitable constant. Note that FA_0^{X,Y} is bounded by FA_0^Y, so this is even a bit better than what you wrote in the notes now. We will need this somewhat stronger statement when trying to produce uniformly compact fillings of given curves in lecture 3. The proof of the proposition (which is not so difficult) can be found in my paper on the sharp isoperimetric constant. I was unaware of a proof by Gromov. Is it the one you told me about, from the Filling Riemannian Manifolds paper?
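To summarize the stronger statement in symbols (my paraphrase; I write I_X for Gromov's coarse isoperimetric function of X, a notation not used in the notes):

$$FA_0^Y(r) \le C\,r^2 \ \text{ for } 0<r<1, \qquad FA_0^Y(r) \preceq I_X(r) \ \text{ for } r\ge 1,$$

and the bound FA_0^{X,Y} \le FA_0^Y then recovers the version currently stated in the notes.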

Thanks for your comment. I edited the file as you suggested.

Can you please have a look at the notes of your first lecture? The statement of Gromov’s thickening lemma, to be found at the end, looks suspicious to me. I tend to believe that the thickening improves the filling function for small values of r and should not change it for larger values of r. So it cannot state that FA^{X,Y}(r)\preceq r^2 for all r.

Sincerely,

If we choose the mass* notion of area in the definition of the filling function FA_0^{X,\infty}(r), we do not need to resort to Burago-Ivanov’s result. We can then also allow surfaces of arbitrary genus as fillings in the function FA_0^{X,\infty}(r), making it potentially smaller than when only disk fillings are allowed, and thus making assumption 2 in the gap theorem even weaker.

Just a small typo in 3.1, in the definition of the multiplicity function N: the condition phi(z)=y is missing from the formula.

Thanks for the excellent notes, I was (again) able to fill out some missing parts of my own notes with the help of these.

– Riikka

Thanks for proofreading these notes and pointing out mistakes in the notes that made them impossible to understand.

The first step in an SDP approach to an optimisation problem is to embed the given problem in a semidefinite program. This means mapping the unknowns of the combinatorial problem to the unknowns of the semidefinite program in such a way that the objective functions coincide.

Here, the unknowns of the combinatorial problem are k-colorings. Suppose we view a k-coloring as a vector in R^n, whose components are integers between 1 and k. When relaxing the integrality assumption, the number k, which is essential for the combinatorial problem, disappears. The optimum of the SDP does not depend on k. This does not seem to be a good idea.

In order to freeze k in the SDP, Luca Trevisan had the idea of including it as a dimension. He chooses the unknowns of the SDP to be nk-tuples of vectors. Note that the dimension of the vector space where these vectors live is unspecified. This is a rule for SDPs: the true unknown is the Gram matrix X (whose entries are pairwise inner products) of the collection of vectors. Individual vectors (columns of a matrix V) appear only when one expresses X as X=V^top V; they are defined only up to rotation, and their dimension, the rank of X, is a priori unknown.

A coloring v (i.e. a {1,…,k}-valued function on vertices) is mapped to an nk-tuple of vectors as follows: given a vertex u and a color i, set u_i to be the one-dimensional vector whose single component is 1 if v(u)=i and 0 otherwise.

The objective function is the number of satisfied constraints, which can be rewritten as

\sum_{e=uv}\sum_{i=1}^{k}u_i \cdot v_{\pi_{e}(i)}.

Next, one collects semidefinite constraints satisfied by such very special nk-tuples of vectors.

1. For all vertices u, v and colors i, j, the inner product u_i . v_j is nonnegative.

2. If i and j are distinct colors, and u is a vertex, then the inner product u_i . u_j vanishes.

3. For every vertex u, only one of the vectors u_i is nonzero, so \sum_{i=1}^k |u_i|^2 =1.

4. Triangle inequalities.

The collection of these constraints constitutes the SDP.
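To make the embedding concrete, here is a minimal sketch in Python. The graph, the edge permutations pi_e, and the coloring are toy data I made up for illustration; the snippet embeds an integral coloring as the nk one-dimensional vectors described above and checks constraints 1–3 (the triangle inequalities are omitted) together with the objective value.

```python
# Toy instance (invented for illustration): each edge e = (u, v) carries a
# permutation pi_e of the colors {1, ..., k}; the constraint on e is
# satisfied when v is colored pi_e(color of u).
import itertools

k = 3
edges = {("a", "b"): {1: 2, 2: 3, 3: 1},
         ("b", "c"): {1: 1, 2: 2, 3: 3}}
coloring = {"a": 1, "b": 2, "c": 2}

def embed(v):
    """Map a coloring to the nk one-dimensional vectors u_i."""
    return {(u, i): (1.0 if v[u] == i else 0.0,)
            for u in v for i in range(1, k + 1)}

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

vec = embed(coloring)

# Constraint 1: all inner products are nonnegative (trivial here, since
# every component of every vector is 0 or 1).
assert all(c >= 0 for x in vec.values() for c in x)

# Constraint 2: u_i . u_j = 0 for distinct colors i, j at the same vertex.
assert all(dot(vec[u, i], vec[u, j]) == 0
           for u in coloring
           for i, j in itertools.combinations(range(1, k + 1), 2))

# Constraint 3: sum_i |u_i|^2 = 1 at every vertex.
assert all(sum(dot(vec[u, i], vec[u, i]) for i in range(1, k + 1)) == 1
           for u in coloring)

# Objective: sum over edges e = (u, v) of sum_i u_i . v_{pi_e(i)},
# i.e. the number of satisfied edge constraints.
objective = sum(dot(vec[u, i], vec[v, pi[i]])
                for (u, v), pi in edges.items()
                for i in range(1, k + 1))
print(objective)  # both edge constraints hold for this coloring
```

For the actual relaxation one would of course drop the integrality and hand the Gram-matrix constraints to an SDP solver; this sketch only verifies that integral colorings are feasible points with the right objective value.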

Sincerely,

I did not completely understand the setting: as variables we picked {k} vectors for each vertex {u} in the graph. But why? What do the vectors {u_i} refer to? Their length does not seem to matter; it is not even defined. Why does

\displaystyle \sum_{i=1}^{k}|u_i|^2 =1,

impose that every vertex has a single colour? And what does an integral solution mean?

BTW, there is a misprint:

“Unknown will vectors {v_{ui}}, {u} vertex, {i\in\{1,\ldots,k\}}.”

-> probably “Unknowns will be vectors etc”.