Notes of Gilles Pisier’s lecture no. 1

 

Uniform convexity and related metric geometric properties of Banach spaces

Recently, there has been a revival of type and cotype considerations in nonlinear settings. This started with Enflo and Bourgain, and was taken up again by Naor and coauthors.

Our assignment is type and cotype. We split the contents into two parts: classical material (Maurey) and martingales (Pisier). I am preparing a book; parts of it will be posted on my webpage.

 

1. Martingale type versus type

 

Definition 1 Say a Banach space {B} has martingale type {p} if, for every {B}-valued martingale {(f_n)} with increments {du_n =f_n -f_{n-1}},

\displaystyle  \begin{array}{rcl}  \sum\mathop{\mathbb E}(|du_n|_B^p)<\infty \Rightarrow \sum du_n \textrm{ converges in }L_p (B). \end{array}

{B} has martingale cotype {q} if, for every such martingale,

\displaystyle  \begin{array}{rcl}  \sup_n \mathop{\mathbb E}(|f_n|_B^q)<\infty \Rightarrow \sum\mathop{\mathbb E}(|du_n|_B^q)<\infty. \end{array}
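Equivalently (a standard reformulation, not verbatim from the lecture), {B} has martingale type {p} iff there is a constant {C} such that every {B}-valued martingale satisfies

\displaystyle  \begin{array}{rcl}  \sup_n\mathop{\mathbb E}(|f_n|_B^p)\leq C^p \big(\mathop{\mathbb E}(|f_0|_B^p)+\sum_{n\geq 1}\mathop{\mathbb E}(|du_n|_B^p)\big), \end{array}

and martingale cotype {q} iff, with a constant {C},

\displaystyle  \begin{array}{rcl}  \sum_{n\geq 1}\mathop{\mathbb E}(|du_n|_B^q)\leq C^q \sup_n\mathop{\mathbb E}(|f_n|_B^q). \end{array}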

 

Type is {\leq 2}, cotype is {\geq 2}. Kwapień’s theorem states that Hilbert space is, up to isomorphism, the only space with type {=} cotype {=2}. Classical type and cotype amount to restricting attention to martingales with independent increments. The two notions differ. For instance,
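For orientation, a standard example (not part of the lecture): for {1\leq p<\infty}, the space {L_p} has

\displaystyle  \begin{array}{rcl}  \textrm{type }\min(p,2)\quad\textrm{and}\quad\textrm{cotype }\max(p,2), \end{array}

and these are the best possible exponents.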

 

{B} has martingale type {p} for some {p>1} {\Leftrightarrow} {B} is superreflexive;

{B} has type {p} for some {p>1} {\Leftrightarrow} {B} does not contain the {\ell_1^n} uniformly.

The best (i.e. the supremum of the) {p} such that {B} has martingale type {p} is denoted by {P_{I}(B)}. The best {p} such that {B} has type {p} is denoted by {P_{II}(B)}. There exists {B} such that {P_{I}(B)=1} and {P_{II}(B)=2}.

 

1.1. Plan of course

 

  1. {B}-valued martingales and RNP
  2. Superreflexivity
  3. Uniform convexity and smoothness
  4. Examples: {W_p}
  5. Property UMD: if it holds, all subtleties disappear.
  6. Nonlinear characterizations of superreflexivity

 

2. {B}-valued martingales and RNP

 

Definition 2 Let {B} be a Banach space, {(\Omega,\mathcal{A},\mu)} a measure space. A function {f:\Omega\rightarrow B} is Bochner measurable if it is the pointwise limit of functions that take only finitely many values. Let

\displaystyle  \begin{array}{rcl}  L_p (B)=\{\textrm{Bochner measurable }f:\Omega\rightarrow B\,;\,\int_{\Omega}|f|_B^p \,d\mu <\infty\}. \end{array}
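Spelling out the norm implicit in this definition: for {1\leq p<\infty},

\displaystyle  \begin{array}{rcl}  \|f\|_{L_p (B)}=\left(\int_{\Omega}|f|_B^p \,d\mu\right)^{1/p}. \end{array}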

 

2.1. Elementary properties

 

Step functions, and the algebraic tensor product {L_p \otimes B}, are dense in {L_p (B)}.
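Here an elementary tensor {\sum_i g_i \otimes x_i \in L_p \otimes B} (with scalar {g_i \in L_p} and {x_i \in B}) is identified with the {B}-valued function

\displaystyle  \begin{array}{rcl}  \omega\mapsto \sum_i g_i (\omega)\,x_i . \end{array}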

 

2.2. Martingales

 

Definition 3 Let {\mathcal{A}_{0}\subset\mathcal{A}_{1}\subset \cdots \subset \mathcal{A}_{n}\subset\cdots\subset\mathcal{A}} be {\sigma}-algebras. Assume that {\mathcal{A}=\sigma(\bigcup_{n}\mathcal{A}_n)}.

A sequence of functions {(f_n)} in {L_1 (B)}, with each {f_n} being {\mathcal{A}_n}-measurable, is a martingale if for all {n}, the conditional expectation satisfies

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}^{\mathcal{A}_n}(f_{n+1})=f_{n}. \end{array}

 

Conditional expectation {\mathop{\mathbb E}^{\mathcal{B}}:L_p \rightarrow L_p} (with respect to a sub-{\sigma}-algebra {\mathcal{B}\subset\mathcal{A}}) is a contraction. So is {\mathop{\mathbb E}^{\mathcal{B}}\otimes id_B : L_p (B)\rightarrow L_p (B)}. Indeed, for a.e. {\omega},

\displaystyle  \begin{array}{rcl}  |\mathop{\mathbb E}^{\mathcal{B}}f (\omega)|\leq \mathop{\mathbb E}^{\mathcal{B}}(|f|_B) (\omega). \end{array}
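Raising this pointwise inequality to the power {p} and integrating (a routine step, spelled out here) gives

\displaystyle  \begin{array}{rcl}  \int|\mathop{\mathbb E}^{\mathcal{B}}f|_B^p \,d\mu \leq \int\big(\mathop{\mathbb E}^{\mathcal{B}}(|f|_B)\big)^p \,d\mu \leq \int\mathop{\mathbb E}^{\mathcal{B}}(|f|_B^p)\,d\mu =\int|f|_B^p \,d\mu, \end{array}

using conditional Jensen for the middle inequality.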

Example 1 Let {f\in L_p (B)}. Set {f_n =\mathop{\mathbb E}^{\mathcal{A}_n}(f)}. Then {(f_n)} is a martingale, and {f_n} converges to {f} in {L_p (B)}.

Indeed, the union of the subspaces {L_p (\mathcal{A}_n)\otimes B} is dense in {L_p (B)}, and the conditional expectations are contractions.

 

2.3. Doob’s maximal inequalities

 

Proposition 4 (Doob) Let {(M_n)} be nonnegative random variables, each {M_k} being {\mathcal{A}_k}-measurable. Assume that {(M_n)} is a submartingale, i.e. for all {k} (writing {\mathop{\mathbb E}_{k}=\mathop{\mathbb E}^{\mathcal{A}_k}}),

\displaystyle  \begin{array}{rcl}  M_k \leq \mathop{\mathbb E}_{k}(M_{k+1}). \end{array}

Let

\displaystyle  \begin{array}{rcl}  M_n^* =\sup_{k\leq n} M_k \end{array}

be the maximal function. Then

\displaystyle  \begin{array}{rcl}  \forall t>0,\quad t\mathop{\mathbb P}(M_n^* >t)\leq \int_{\{M_n^* >t\}}M_n d\mathop{\mathbb P}. \end{array}

It follows that for all {p>1}, with {p'=p/(p-1)} the conjugate exponent,

\displaystyle  \begin{array}{rcl}  \|M_n^* \|_p \leq p'\|M_n \|_p . \end{array}

 

Proof: Relies on stopping times. A stopping time is a {{\mathbb N}\cup\{\infty\}}-valued random variable {T} such that for all {k}, {\{T=k\}\in\mathcal{A}_k}.

If {f_n} is a martingale in {L_1 (B)},

\displaystyle  \begin{array}{rcl}  g_n =f_{n\wedge T} \end{array}

is again a martingale.

Let {T=\inf\{k\,;\, M_k >t\}}. Then

\displaystyle  \begin{array}{rcl}  1_{\{M_n^* >t\}}=1_{\{T\leq n\}}=\sum_{k=0}^{n}1_{\{T=k\}}. \end{array}

This implies that

\displaystyle  \begin{array}{rcl}  t1_{\{M_n^* >t\}} \leq \sum_{k=0}^{n}M_k 1_{\{T=k\}}. \end{array}

Integrate this and use the submartingale assumption.
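In detail, since {\{T=k\}\in\mathcal{A}_k} and {M_k \leq\mathop{\mathbb E}_k (M_n)} for {k\leq n} (iterating the submartingale property),

\displaystyle  \begin{array}{rcl}  t\mathop{\mathbb P}(M_n^* >t)\leq\sum_{k=0}^{n}\mathop{\mathbb E}(M_k 1_{\{T=k\}})\leq\sum_{k=0}^{n}\mathop{\mathbb E}(M_n 1_{\{T=k\}})=\int_{\{M_n^* >t\}}M_n \,d\mathop{\mathbb P}. \end{array}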

Second statement:

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}((M_n^*)^p)&=&\mathop{\mathbb E}(\int_{0}^{M_n^*}pt^{p-1}\,dt)\\ &=&\int_{0}^{\infty}pt^{p-1}\mathop{\mathbb P}(M_n^* >t)\,dt\\ &\leq&\int_{0}^{\infty}pt^{p-2}\int_{\{M_n^* >t\}}M_n \,d\mathop{\mathbb P}\,dt\\ &=&\mathop{\mathbb E}(\frac{p}{p-1}(M_n^*)^{p-1}M_n)\\ &\leq&p'\|M_n^*\|_p^{p-1}\|M_n\|_p , \end{array}

using the first statement in the third line, Fubini in the fourth, and Hölder in the last; dividing by {\|M_n^*\|_p^{p-1}} gives the result.

\Box

Applying the Proposition to {M_n =|f_n|_B}, where {(f_n)} is a {B}-valued martingale, yields

Corollary 5

\displaystyle  \begin{array}{rcl}  \|\sup_n |f_n|_B\|_p &\leq& p' \sup_n\|f_n\|_{L_p (B)},\\ \sup_{t>0}\,t\,\mathop{\mathbb P}\{\sup_{n}|f_n|_B>t\} &\leq& \sup_n\|f_n\|_{L_1 (B)}. \end{array}

 

2.4. The fundamental property of martingales

 

Theorem 6 Let {f\in L_p (B)}, {1\leq p<\infty}, and {f_n=\mathop{\mathbb E}_n (f)}. Then {f_n} converges to {f} a.e. and in {L_p (B)}.

 

Proof: Convergence in {L_p (B)} follows from Example 1. For a.e. convergence, apply Doob’s inequalities to the (shifted) martingale {(f_n -f_N)_{n\geq N}}: its {L_p}-norm is controlled by {\|f-f_N\|_{L_p (B)}}, which tends to {0}, so {\sup_{n\geq N}|f_n -f_N|_B} tends to {0} in probability. \Box
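In detail, by the weak inequality of Corollary 5 applied to {(f_n -f_N)_{n\geq N}}, for every {t>0},

\displaystyle  \begin{array}{rcl}  t\mathop{\mathbb P}\{\sup_{n\geq N}|f_n -f_N|_B >t\}\leq\sup_{n\geq N}\|f_n -f_N\|_{L_1 (B)}\leq\|f-f_N\|_{L_1 (B)}\rightarrow 0 \end{array}

as {N\rightarrow\infty}, so {(f_n)} is a.e. a Cauchy sequence.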

 

2.5. Scalar case

 

Theorem 7 In the scalar case, if {p>1}, boundedness in {L_p} suffices to imply convergence in {L_p} and a.e. If {p=1}, boundedness implies convergence a.e., and uniform integrability implies convergence in {L_1} as well.

 

Proof: By reflexivity of {L_p}, boundedness in {L_p} implies the existence of a weakly convergent subsequence, with weak limit {f}. Applying {\mathop{\mathbb E}_n} along this subsequence gives {f_n =\mathop{\mathbb E}_n (f)}, so Theorem 6 applies.

In case {p=1}, fix {t>0} and introduce the stopping time {T=\inf\{n\,;\,|f_n|>t\}}. The stopped martingale {(f_{n\wedge T})} is uniformly integrable (it is dominated by {t+|f_T|1_{\{T<\infty\}}}, which is integrable), hence converges a.e. and in {L_1}; on the set {\{T=\infty\}} it coincides with {(f_n)}. By Doob’s inequality, {\mathop{\mathbb P}(\sup_n|f_n|>t)\leq t^{-1}\sup_n\|f_n\|_1} tends to {0} as {t\rightarrow\infty}, i.e. {\mathop{\mathbb P}\{T<\infty\}} can be made arbitrarily small, so a.e. convergence holds. \Box

 

2.6. Vector valued case

 

Theorem 7 does not extend to {B}-valued martingales.

Example 2 {B=c_0}, {\Omega=\{\pm 1\}^{{\mathbb N}}}, {\epsilon_i =} coordinates, {\mathcal{A}_n= \sigma(\epsilon_0 ,\ldots,\epsilon_n)}, {f_n =\sum_{k=0}^{n}\epsilon_k e_k}. Then {|f_n|=1} but {|f_n -f_{n-1}|=1} is constant, so the martingale is bounded yet converges at no point.

 

Example 3 {B=L^1 (\Omega,\mathop{\mathbb P})}, {f_n (\omega)=\prod_{k=0}^{n}(1+\epsilon_k (\omega)\epsilon_k)}, viewed as an element of {L^1} via {x\mapsto\prod_{k=0}^{n}(1+\epsilon_k (\omega)\epsilon_k (x))}. Then {|f_n|_{L^1}=1} but {|f_n -f_{n-1}|_{L^1}=1} is constant, so again the martingale is bounded but does not converge.
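To check that the increments have norm {1} (a short computation using independence), note that {f_n (\omega)-f_{n-1}(\omega)=f_{n-1}(\omega)\,\epsilon_n (\omega)\epsilon_n}, and that {f_{n-1}(\omega)\geq 0} has integral {1}; hence

\displaystyle  \begin{array}{rcl}  |f_n (\omega)-f_{n-1}(\omega)|_{L^1}=\int f_{n-1}(\omega)(x)\,|\epsilon_n (x)|\,d\mathop{\mathbb P}(x)=\int f_{n-1}(\omega)(x)\,d\mathop{\mathbb P}(x)=1. \end{array}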

 

Remark 1 {f_n} bounded in {L_1 (B)} implies that the norm {|f_n|} converges a.e.

Indeed, the norm of {f_n} is a submartingale. Nonnegative submartingales which are bounded in {L^1} converge a.e.
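The submartingale property of the norms is just the contraction property of conditional expectation:

\displaystyle  \begin{array}{rcl}  |f_n|_B =|\mathop{\mathbb E}^{\mathcal{A}_n}(f_{n+1})|_B \leq\mathop{\mathbb E}^{\mathcal{A}_n}(|f_{n+1}|_B). \end{array}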

 

3. RNP

 

3.1. RNP and convergence of martingales

 

The Radon-Nikodym Property solves this difficulty. It will be defined soon.

Theorem 8 Fix {1<p<\infty}. For a Banach space {B}, the following are equivalent:

  1. {B} has the Radon-Nikodym Property.
  2. {(f_n)} uniformly integrable in {L_1 (B)} {\Rightarrow} {f_n} converges in {L_1 (B)} and a.e.
  3. {(f_n)} bounded in {L_1 (B)} {\Rightarrow} {f_n} converges a.e.
  4. {(f_n)} bounded in {L_p (B)} {\Rightarrow} {f_n} converges in {L_p (B)} and a.e.
  5. For all {\delta>0}, the unit ball {U_B} does not contain {\delta}-separated infinite trees.

 

A {\delta}-separated tree is a {B}-valued martingale {(f_n)} such that

  • For all {n}, {f_n} takes finitely many values.
  • For all {n} and all {\omega}, {|f_n(\omega)-f_{n-1}(\omega)|>\delta}.

The martingale assumption merely means here that the set of values increases, each value being the average of the values appearing in the next generation. The full picture is a tree where children are at least {\delta} away from their parent, and average to their parent.

The issue here is whether {\delta}-separated trees need to blow up or can be confined to a ball.
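For instance (connecting this with Example 2; a standard observation rather than a statement from the lecture), the {c_0}-valued martingale {f_n =\sum_{k=0}^{n}\epsilon_k e_k} takes finitely many values at each stage, stays in the unit ball of {c_0}, and satisfies

\displaystyle  \begin{array}{rcl}  |f_n (\omega)-f_{n-1}(\omega)|_{c_0}=1\quad\textrm{for all }n\geq 1,\ \omega, \end{array}

so it is a {\delta}-separated infinite tree in {U_{c_0}} for every {\delta<1}, consistently with the failure of RNP for {c_0} (Example 4 below).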

 

3.2. Radon-Nikodym Property

 

Definition 9 A {B}-valued measure on {(\Omega,\mathcal{A},m)} is a countably additive map {\mu:\mathcal{A}\rightarrow B} such that there exists {w\in L_1 (m)_+} such that for all {A\in\mathcal{A}},

\displaystyle  \begin{array}{rcl}  |\mu(A)|_B \leq \int_{A}w\,dm. \end{array}

Say {\mu} is differentiable if there exists {f\in L_1 (B)} such that for all {A\in \mathcal{A}},

\displaystyle  \begin{array}{rcl}  \mu (A)=\int_{A}f\,dm. \end{array}

Say that {B} has RNP if every {B}-valued measure is differentiable.

 

Example 4 {c_0}, {L^1} do not have RNP. Reflexive spaces, separable duals have RNP (Stegall).

But there exist RNP spaces which are not separable duals (Bourgain).
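A standard way to see that {c_0} fails RNP (not spelled out in the lecture): on {[0,1]} with Lebesgue measure {m}, set

\displaystyle  \begin{array}{rcl}  \mu(A)=\left(\int_{A}r_n \,dm\right)_{n\geq 0}, \end{array}

where {(r_n)} are the Rademacher functions. Then {\mu(A)\in c_0} (the coordinates tend to {0} since the {r_n} are orthonormal in {L^2}), and {|\mu(A)|_{c_0}\leq m(A)}, so {\mu} is a {c_0}-valued measure; but a density {f} would have to satisfy {f(x)=(r_n (x))_{n}} a.e., which does not lie in {c_0}.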

 

3.3. Proof of Theorem 8

1{\Rightarrow}2: Given a uniformly integrable martingale {(f_n)}, define {\mu(A)=\lim_{n}\int_{A}f_n \,dm} (for {A\in\mathcal{A}_N} the integrals {\int_{A}f_n \,dm} do not depend on {n\geq N}, and uniform integrability allows passing to all {A\in\mathcal{A}}). Then {\mu} is a {B}-valued measure. RNP provides us with {f\in L_1 (B)}. For {A\in\mathcal{A}_n}, {\int_{A}f_n \,d\mathop{\mathbb P} =\int_{A}f \,d\mathop{\mathbb P}}, so {f_n =\mathop{\mathbb E}_n (f)}, and Theorem 6 concludes.

4{\Rightarrow}1 (the case {p=\infty} is admitted here): We can take {\Omega} standard, {\Omega=[0,1]}. Let {\mu} be a {B}-valued measure and let {(\mathcal{A}_n)} be the dyadic filtration. Given {\omega}, let {A} be the dyadic interval of length {2^{-n}} containing {\omega}, and set

\displaystyle  \begin{array}{rcl}  f_n (\omega)=\frac{\mu(A)}{\mathop{\mathbb P}(A)}. \end{array}

Then the {L_p}-limit of {f_n} serves as a density for {\mu}.
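That {(f_n)} is a martingale follows from the additivity of {\mu}: if {A\in\mathcal{A}_n} is the union of its two dyadic halves {A'} and {A''}, then on {A},

\displaystyle  \begin{array}{rcl}  \mathop{\mathbb E}^{\mathcal{A}_n}(f_{n+1})=\frac{1}{\mathop{\mathbb P}(A)}\left(\mu(A')+\mu(A'')\right)=\frac{\mu(A)}{\mathop{\mathbb P}(A)}=f_n . \end{array}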

 

3.4. Open problem

 

It is known that RNP implies the Krein-Milman property (a closed convex set is the closed convex hull of its extreme points).

Question: Is the converse true, i.e. does KMP imply RNP?

Among many works on this question, here is a nice result. I like it because the proof uses martingales.

Theorem 10 (Edgar) Let {C} be a closed bounded convex subset of a separable Banach space {B}. Assume that {B} has RNP. For all {x\in C}, there exists a function {f\in L_1 (B)} with values in the set of extreme points of {C} and whose expectation equals {x}.

 

Here are facts used in the proof.

  1. There exists a continuous strictly convex function on {C}.
  2. Martingale convergence in {L^p} holds for general directed index sets, not only {{\mathbb N}}.

Proof: Assume {x} is not extreme, {x=\frac{y+z}{2}}. Set {f_1 (\omega)=y} (resp. {z}) with probability {\frac{1}{2}}. If {y} is extreme, set {f_2 =f_1} on {f_{1}^{-1}(y)}. If {y} is not extreme, let {f_2} pick at random between two points averaging to {y}… Use transfinite induction. For a limit ordinal {\alpha}, let {f_{\alpha}=\lim_{\beta<\alpha}f_{\beta}}; RNP guarantees that the limit exists. For every continuous strictly convex function {\phi} on {C}, the nondecreasing net {\mathop{\mathbb E}(\phi(f_{\alpha}))} converges, so there exists an ordinal {\alpha_0} beyond which it is stationary: for {\alpha>\alpha_0}, {\mathop{\mathbb E}(\phi(f_{\alpha}))=\mathop{\mathbb E}(\phi(f_{\alpha_0}))}. By strict convexity, this implies that a.e. {f_{\alpha_0}=f_{\alpha_0 +1}}, i.e. {f_{\alpha_0}} takes its values in the extreme points. Since each step preserves expectations, {\mathop{\mathbb E}(f_{\alpha_0})=x}. \Box
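The monotonicity used above is conditional Jensen (spelled out here, with {\mathcal{A}_{\alpha}} the {\sigma}-algebra generated by the construction up to stage {\alpha}, so that {f_{\alpha}=\mathop{\mathbb E}^{\mathcal{A}_{\alpha}}(f_{\alpha+1})}):

\displaystyle  \begin{array}{rcl}  \phi(f_{\alpha})=\phi\big(\mathop{\mathbb E}^{\mathcal{A}_{\alpha}}(f_{\alpha+1})\big)\leq\mathop{\mathbb E}^{\mathcal{A}_{\alpha}}(\phi(f_{\alpha+1})),\quad\textrm{hence}\quad\mathop{\mathbb E}(\phi(f_{\alpha}))\leq\mathop{\mathbb E}(\phi(f_{\alpha+1})), \end{array}

with equality forcing {f_{\alpha+1}=f_{\alpha}} a.e., by strict convexity.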

 
