## Notes on Anton Thalmaier’s lecture nr 4

1. Probabilistic content of Hörmander’s condition

1.1. Statement

Theorem 1 Suppose that the Lie algebra generated by ${A_1,\ldots,A_r}$ and the brackets ${[A_0,A_i]}$ fills ${T_xM}$. Then the bilinear form ${C_t(x)}$ on ${T_xM}$ is non-degenerate.

1.2. Proof

Let

$\displaystyle \begin{array}{rcl} G_s=\operatorname{span}\{{X_s^{-1}}_* A_i\textrm{ at }x\,;\,i=1,\ldots,r\}\subset T_x M,\quad U^+_t=\operatorname{span}\bigcup_{s\leq t}G_s. \end{array}$

By Blumenthal’s 0/1-law, ${U^+_t}$ is not random. We prove by contradiction that ${U_0^+=T_x M}$ (this will suffice to prove the theorem). Introduce

$\displaystyle \begin{array}{rcl} \sigma=\inf\{t>0\,;\,U_0^+\not=U_t^+\} \end{array}$

Let ${\xi\in T_x^*M}$ be orthogonal to ${U_0^+}$ (and thus to ${U_t^+}$ for ${t<\sigma}$). In particular, ${\xi}$ is orthogonal to all ${{X_s^{-1}}_* A_i}$, ${s<\sigma}$. Now, for every vector field ${V}$, ${{X_s^{-1}}_* V}$ satisfies (the first line is the Stratonovich form, the second the Itô form)

$\displaystyle \begin{array}{rcl} d({X_s^{-1}}_* V)&=&({X_s^{-1}}_* [A_0,V])_X \,dt+\sum_i ({X_s^{-1}}_* [A_i,V])_X \circ dB_s^i\\ &=&({X_s^{-1}}_* [A_0,V])_X \,dt+\sum_i ({X_s^{-1}}_* [A_i,V])_X \,dB_s^i+\frac{1}{2}\sum_j ({X_s^{-1}}_*[A_j, [A_j,V]])_X\,ds \end{array}$

thus for all ${t<\sigma}$,

$\displaystyle \begin{array}{rcl} \langle\xi,({X_t^{-1}}_*A_i)_X\rangle&=&\langle\xi,A_i(x)\rangle+\int_{0}^{t}\langle\xi,({X_s^{-1}}_* [A_0,A_i])_X\rangle \,ds\\ &&+\sum_j\int_{0}^{t}\langle\xi,({X_s^{-1}}_* [A_j,A_i])_X \rangle\, dB_s^j+\frac{1}{2}\sum_j\int_{0}^{t}\langle\xi,({X_s^{-1}}_*[A_j, [A_j,A_i]])_X\rangle\,ds. \end{array}$

By uniqueness of the solution of an SDE, this implies that ${\langle\xi,({X_s^{-1}}_* [A_j,A_i])_X\rangle=0}$ for all ${i,j\geq 1}$ and ${s<\sigma}$. Replacing ${A_i}$ with ${[A_j,A_i]}$ shows that

$\displaystyle \begin{array}{rcl} \langle\xi,({X_s^{-1}}_* [A_j,[A_j,A_i]])_X\rangle=0, \end{array}$

and

$\displaystyle \begin{array}{rcl} \langle\xi,({X_s^{-1}}_* [A_0,A_i])_X\rangle=0 \end{array}$

Iterating the procedure shows orthogonality of ${\xi}$ to all iterated brackets, and thus, by the assumption of Theorem 1, ${\xi=0}$.

2. Probabilistic proof of hypoellipticity

Theorem 2 Assume that the ${A_i}$ and their derivatives satisfy suitable growth conditions. Assume that the bilinear form ${C_t(x)}$ is non-degenerate and

$\displaystyle \begin{array}{rcl} |C_t(x)|^{-1}\in L^p \end{array}$

for all ${p\geq 1}$. Then ${P_t(x,dy)=p_t(x,y)\,dy}$ with a smooth density ${p_t(x,y)}$.
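As a sanity check on the non-degeneracy assumption, one can probe ${C_t(x)}$ numerically in a concrete example. The sketch below is not part of the lecture: it uses Grushin-type fields ${A_1=\partial_{x^1}}$, ${A_2=x^1\partial_{x^2}}$ on ${{\mathbb R}^2}$ (my choice of illustration), which satisfy Hörmander’s condition at the origin since ${[A_1,A_2]=\partial_{x^2}}$, and simulates the flow, its Jacobian, and the covariance built from ${v^i_s={X_s^{-1}}_*A_i}$.

```python
import numpy as np

# Hypothetical illustration: Grushin-type fields on R^2,
#   A_1 = (1, 0),  A_2 = (0, x^1),  A_0 = 0,
# which satisfy Hormander's condition since [A_1, A_2] = (0, 1).
# We simulate dX = A_1 dB^1 + A_2 dB^2 (Ito and Stratonovich agree
# here: the correction terms vanish), the Jacobian flow J_s = (X_s)_*,
# and accumulate C_t(x) = \int_0^t \sum_i v_i v_i^T ds with
# v_i = J_s^{-1} A_i(X_s), then check that C_t(x) is non-degenerate
# even though A_1, A_2 do not span the tangent space at the origin.

rng = np.random.default_rng(0)
n_steps, t = 4000, 1.0
dt = t / n_steps
x = np.zeros(2)          # start at the origin
J = np.eye(2)            # Jacobian of the flow
C = np.zeros((2, 2))     # covariance matrix C_t(x)

for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=2)
    A1 = np.array([1.0, 0.0])
    A2 = np.array([0.0, x[0]])
    DA2 = np.array([[0.0, 0.0], [1.0, 0.0]])  # Jacobian of A_2
    # accumulate C with a left-point rule before stepping
    Jinv = np.linalg.inv(J)
    for Ai in (A1, A2):
        v = Jinv @ Ai
        C += np.outer(v, v) * dt
    # Euler-Maruyama step for the flow and its Jacobian
    x = x + A1 * dB[0] + A2 * dB[1]
    J = J + DA2 @ J * dB[1]

eigs = np.linalg.eigvalsh(C)
print("eigenvalues of C_t(x):", eigs)
assert eigs.min() > 0  # non-degenerate, as Theorem 1 predicts
```

The smallest eigenvalue is strictly positive on a typical path, in line with Theorem 1, although it degenerates as ${t\rightarrow 0}$, which is why the ${L^p}$ control of ${|C_t(x)|^{-1}}$ is the real issue.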

The proof we are about to give is due, to a large extent, to Bismut, although many details are skipped in Bismut’s original paper. We use more elementary tools. We shall rely on the following standard fact.

2.1. Girsanov’s theorem

Let ${B}$ be a Brownian motion on Euclidean space and add an absolutely continuous drift, i.e. ${d\hat{B}_t=dB_t+u_t\,dt}$, where ${u}$ satisfies Novikov’s condition

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}(\exp(\frac{1}{2}\int_{0}^{t}|u_s|^2\,ds))<\infty. \end{array}$

${\hat{B}_t}$ is no longer a martingale under ${P}$, but this can be remedied by changing the probability measure.

Theorem 3 (Girsanov) ${\hat{B}_t}$ is a Brownian motion with respect to the measure ${\hat{P}}$ whose density with respect to ${P}$ is

$\displaystyle \begin{array}{rcl} G_t:=\frac{d\hat{P}}{dP}_{|\mathcal{F}_t}=\exp(-\int_{0}^{t}u_s\,dB_s-\frac{1}{2}\int_{0}^{t}|u_s|^2\,ds). \end{array}$

In other words, if ${F}$ is a functional on the space of Brownian motions, then

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}_{P}(F(B_.))=\mathop{\mathbb E}_{\hat{P}}(F(\hat{B}_.)). \end{array}$
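To make the statement concrete, here is a quick Monte Carlo sketch (my illustration, not from the lecture) with a constant drift ${u}$, for which Novikov’s condition is trivially satisfied: reweighting the drifted endpoint ${\hat{B}_t=B_t+ut}$ by ${G_t}$ recovers the moments of a centered Gaussian.

```python
import numpy as np

# Monte Carlo sketch of Theorem 3 with a constant drift u (a
# simplifying assumption; Novikov's condition is then automatic).
# Under P, B is a Brownian motion and B_hat_t = B_t + u*t.  Girsanov
# says that reweighting by G_t = exp(-u B_t - u^2 t / 2) makes B_hat
# look like a Brownian motion: E_P[G_t F(B_hat)] = E[F(Brownian)].

rng = np.random.default_rng(1)
n_paths, t, u = 500_000, 1.0, 0.7

B_t = rng.normal(0.0, np.sqrt(t), size=n_paths)   # endpoints under P
B_hat_t = B_t + u * t                             # drifted endpoints
G_t = np.exp(-u * B_t - 0.5 * u**2 * t)           # Girsanov density

# first two moments of B_hat_t under P-hat should match N(0, t)
m1 = np.mean(G_t * B_hat_t)
m2 = np.mean(G_t * B_hat_t**2)
print(m1, m2)   # approximately 0 and t, up to Monte Carlo error
```

Only the endpoint is simulated here; the full theorem is a statement about path functionals ${F}$, but the reweighting mechanism is the same.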

2.2. A criterion for a measure to have a smooth density

We want to prove that ${P_t(x,dy)=p_t(x,\cdot)\,d\mathrm{vol}}$ for ${t>0}$. We use the following criterion.

Lemma 4 Let ${\mu}$ be a probability measure on some manifold, viewed as a distribution. Assume that for every multi-index ${\alpha}$ and all test functions ${f}$,

$\displaystyle \begin{array}{rcl} |\langle f,D^\alpha \mu\rangle|\leq C_\alpha\,\|f\|_\infty. \end{array}$

Then ${\mu}$ has a smooth density.
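To see why such bounds force smoothness, here is a standard Fourier argument, sketched in local coordinates on ${{\mathbb R}^n}$ (not spelled out in the lecture): testing against ${f(y)=e^{-i\langle\xi,y\rangle}}$, whose sup-norm is ${1}$, gives

$\displaystyle \begin{array}{rcl} |\xi^\alpha\,\hat{\mu}(\xi)|=|\langle e^{-i\langle\xi,\cdot\rangle},D^\alpha\mu\rangle|\leq C_\alpha, \end{array}$

so ${\hat{\mu}}$ decays faster than any polynomial, and Fourier inversion yields a ${C^\infty}$ density. (Complex exponentials are not compactly supported test functions, but a routine truncation handles this.)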

2.3. Proof of Theorem 2

Fix ${x}$ and identify ${T_xM}$ with ${{\mathbb R}^n}$. We apply Girsanov’s theorem to ${u_s=a_s\cdot\lambda}$, where ${a_s}$ takes values in ${T_xM\otimes{\mathbb R}^r}$ and ${\lambda\in T_x^*M}$. The modified flow is denoted by ${X^\lambda_t(x)}$. Let ${g}$ be a function to be specified later. Up to introducing the density ${G_t^\lambda}$, nothing changes, and

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}(f(X^\lambda_t(x))g(B^\lambda_\cdot )G_t^\lambda) \end{array}$

does not depend on ${\lambda}$. Let us differentiate with respect to ${\lambda}$ at ${\lambda=0}$.

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}((D_i f)(X_t(x))\,(\frac{\partial}{\partial \lambda_k}_{|\lambda=0}X_t^\lambda(x))^i\, g(B_.))=-\mathop{\mathbb E}(f(X_t(x))\,\frac{\partial}{\partial \lambda_k}_{|\lambda=0}(g(B^\lambda_.)G_t^\lambda)). \end{array}$

Remember that an SDE can be formally differentiated with respect to a parameter. Notation: ${(\frac{\partial}{\partial \lambda_k}_{|\lambda=0}X_t^\lambda(x))^i =(\partial X_t(x))_{ik}}$. We get

$\displaystyle \begin{array}{rcl} \partial X_t(x)={(X_t)}_*\int_{0}^{t}({X_s^{-1}}_* A)_X\, u_s\,ds. \end{array}$

This suggests choosing

$\displaystyle \begin{array}{rcl} u_s=(({X_s^{-1}}_* A)_X)^*:T_x^*M\rightarrow{\mathbb R}^r. \end{array}$

With this choice,

$\displaystyle \begin{array}{rcl} \partial X_t(x)={(X_t)}_*C_t(x). \end{array}$
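Indeed, unwinding the two previous displays, this choice of ${u_s}$ produces exactly

$\displaystyle \begin{array}{rcl} C_t(x)=\int_{0}^{t}({X_s^{-1}}_* A)_X\,(({X_s^{-1}}_* A)_X)^*\,ds, \end{array}$

which is symmetric and non-negative (assuming the conventions for ${C_t(x)}$ from the earlier lectures).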

By assumption, ${C_t(x)}$ is invertible, so we take

$\displaystyle \begin{array}{rcl} g(B^\lambda_.)=(C_t(x)^{-1}\,{X_t^{-1}}_*)_{kj}\,\gamma(B^\lambda_.), \end{array}$

where ${\gamma}$ is to be specified later. This yields

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}((D_j f)(X_t(x))\gamma(B_.))=-\mathop{\mathbb E}(f(X_t(x))H_j(\gamma)), \end{array}$

for some rather complicated expression ${H_j(\gamma)}$. Iteration gives

$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}((D_i D_j D_k f)(X_t(x)))=-\mathop{\mathbb E}(f(X_t(x))\,H_k(H_j(H_i(1)))), \end{array}$

from which we get the estimate

$\displaystyle \begin{array}{rcl} |\mathop{\mathbb E}((D_i D_j D_k f)(X_t(x)))|\leq\|f\|_{\infty}\,\|H_k(H_j(H_i(1)))\|_{L^1}. \end{array}$

The right-hand side involves only polynomial expressions, except for ${C_t(x)^{-1}}$ and its derivatives with respect to ${\lambda}$, which have to be computed and estimated as well. The Lemma then applies and shows that the distribution of ${X_t(x)}$ has a smooth density.
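In the simplest flat situation (${M={\mathbb R}}$, ${X_t(x)=x+B_t}$, ${r=1}$, ${A_1=\partial}$, ${A_0=0}$; my illustration, not part of the lecture), the weight is explicit, ${H(1)=-B_t/t}$, and the integration-by-parts formula reduces to Gaussian integration by parts. A quick numerical check:

```python
import numpy as np

# Flat sanity check of the integration-by-parts structure: for
# X_t = B_t on R, Gaussian integration by parts gives
#   E[f'(B_t)] = E[f(B_t) * B_t / t],
# i.e. the weight is H(1) = -B_t/t in the convention
#   E[(Df)(X_t) gamma] = -E[f(X_t) H(gamma)].

rng = np.random.default_rng(2)
t, n = 1.0, 400_000
B = rng.normal(0.0, np.sqrt(t), size=n)

f = np.tanh
df = lambda z: 1.0 - np.tanh(z) ** 2   # derivative of tanh

lhs = np.mean(df(B))          # E[f'(B_t)]
rhs = np.mean(f(B) * B / t)   # E[f(B_t) B_t / t]
print(lhs, rhs)               # the two agree up to Monte Carlo error
```

The point of the theorem is that the same mechanism survives on a manifold, with ${C_t(x)^{-1}}$ replacing the explicit factor ${1/t}$.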

3. Subjects I could not cover

There was no time to treat

1. the short time asymptotics of the heat kernel,
2. bounds on the lifetime of Brownian motion (differentiating ${d(x,X_t(x))}$ leads to the Laplacian of the distance function, and to Ricci curvature),
3. Bismut’s interpolation between the geodesic flow and a hypoelliptic diffusion.

There is more that probability theory can do for sub-Riemannian geometry and hypoelliptic PDEs.