
Tuesday, July 24, 2012

Random functions on tori

\newcommand{\bR}{\mathbb{R}} \newcommand{\bZ}{\mathbb{Z}} \newcommand{\bC}{\mathbb{C}}  \newcommand{\bT}{\mathbb{T}} \newcommand{\eE}{\mathscr{E}} \newcommand{\ve}{{\varepsilon}} \newcommand{\lan}{\langle} \newcommand{\ran}{\rangle} \newcommand{\bu}{\boldsymbol{u}} \newcommand{\eS}{\mathscr{S}} \DeclareMathOperator{\var}{\boldsymbol{var}} \newcommand{\ii}{\boldsymbol{i}} \newcommand{\bsU}{\boldsymbol{U}}  \newcommand{\vfi}{\varphi} \newcommand{\bsE}{\boldsymbol{E}} \newcommand{\teE}{\widetilde{\mathscr{E}}} \newcommand{\pa}{\partial}  \DeclareMathOperator{\Hess}{\boldsymbol{Hess}} \DeclareMathOperator{\diag}{Diag} \newcommand{\one}{\boldsymbol{1}}





Consider the m-dimensional torus

\bT^m:=\bR^m/(2\pi\bZ)^m

equipped with the  flat metric

g:=\sum_{j=1}^m (d\theta^j)^2.

It has volume {\rm vol}_g(\bT^m) =(2\pi)^m.   The eigenvalues of the corresponding  Laplacian are

|\vec{k}|^2,\;\;\vec{k}=(k_1,\dotsc, k_m)\in\bZ^m.

 For \vec{\theta}=(\theta^1,\dotsc, \theta^m) \in\bR^m and \vec{k}\in\bZ^m we set

\lan\vec{k},\vec{\theta}\ran =\sum_j k_j\theta^j.

Denote by \prec the lexicographic order on \bR^m.  An  orthonormal basis of L^2(\bT^m) is given by the  functions (\Psi_{\vec{k}})_{\vec{k}\in\bZ^m}, where


\Psi_{\vec{0}}(\vec{\theta}) =\frac{1}{(2\pi)^{\frac{m}{2}}},


\Psi_{\vec{k}}(\vec{\theta})=\frac{\sqrt{2}}{(2\pi)^{m/2}} \sin\lan \vec{k},\vec{\theta}\ran, \;\;\vec{k}\succ\vec{0},


\Psi_{\vec{k}}(\vec{\theta})=\frac{\sqrt{2}}{(2\pi)^{m/2}} \cos\lan\vec{k},\vec{\theta}\ran,\;\;\vec{k}\prec\vec{0}.
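For example, when m=1 the lexicographic order on \bZ is the usual order, and we recover the classical Fourier basis of L^2(\bT^1):

\Psi_0(\theta)=\frac{1}{\sqrt{2\pi}},\;\;\Psi_k(\theta)=\frac{1}{\sqrt{\pi}}\sin k\theta,\;\;\Psi_{-k}(\theta)=\frac{1}{\sqrt{\pi}}\cos k\theta,\;\; k>0.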



Fix a  nonnegative Schwartz function w\in \eS(\bR), set w_\ve(t)=w(\ve t) and consider the random  function

\bu_\ve(\vec{\theta})=\sum_{\vec{k}\in\bZ^m} X_{\vec{k}}\Psi_{\vec{k}}(\vec{\theta}),

where X_{\vec{k}} are independent  Gaussian random variables with mean 0 and variances

\var(X_{\vec{k}})= w(\ve|\vec{k}|).

We denote by N(\bu_\ve) the number of critical points  of \bu_\ve and  by N_\ve its expectation

N_\ve =\bsE\Bigl(\; N(\bu_\ve)\;\Bigr).

A simple computation shows that the covariance kernel of this random function is

\eE^\ve(\vec{\theta}_1,\vec{\theta}_2)= \frac{1}{(2\pi)^m}\sum_{\vec{k}\in\bZ^m }w(\ve|\vec{k}|)e^{-\ii\lan\vec{k}, \vec{\theta}_2-\vec{\theta}_1\ran}.
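Indeed, since the X_{\vec{k}}'s are independent with mean 0, and pairing the contributions of \vec{k} and -\vec{k}, we get

\eE^\ve(\vec{\theta}_1,\vec{\theta}_2)=\bsE\bigl(\,\bu_\ve(\vec{\theta}_1)\bu_\ve(\vec{\theta}_2)\,\bigr)=\sum_{\vec{k}\in\bZ^m} w(\ve|\vec{k}|)\Psi_{\vec{k}}(\vec{\theta}_1)\Psi_{\vec{k}}(\vec{\theta}_2) =\frac{1}{(2\pi)^m}\Bigl(\,w(0)+2\sum_{\vec{k}\succ\vec{0}} w(\ve|\vec{k}|)\cos\lan\vec{k},\vec{\theta}_2-\vec{\theta}_1\ran\,\Bigr),

and this coincides with the exponential sum above because w(\ve|\vec{k}|)=w(\ve|-\vec{k}|).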

Set \vec{\theta}:=\vec{\theta}_2-\vec{\theta}_1 and define \phi:\bR^m\to\bC by

\phi(\vec{x})=e^{-\ii\lan\vec{x},\frac{1}{\ve}\vec{\theta}\ran}  w(|\vec{x}|).

We deduce that

\eE^\ve(\vec{\theta}_1,\vec{\theta}_2)=\frac{1}{(2\pi)^m}\sum_{\vec{k}\in\bZ^m}  \phi(\ve\vec{k}).
Using the Poisson summation formula we deduce that for any a>0 we have

\sum_{\vec{k}\in\bZ^m}\phi\left(\frac{2\pi}{a}\vec{k}\right)=\left(\frac{a}{2\pi}\right)^m \sum_{\vec{\nu}\in\bZ^m}\widehat{\phi}(a\vec{\nu}),

where  for any f\in\eS(\bR^m) we denote by \widehat{f}(\xi) its Fourier transform

\widehat{f}(\xi)=\int_{\bR^m} e^{-\ii\lan\xi,\vec{x}\ran} f(\vec{x})|d\vec{x}|.

If we let \frac{2\pi}{a}=\ve, so that \left(\frac{a}{2\pi}\right)^m=\ve^{-m}, we deduce that

\eE^\ve(\vec{\theta}_1,\vec{\theta}_2)=\frac{1}{(2\pi\ve)^m} \sum_{\vec{\nu}\in\bZ^m}\widehat{\phi}\left(\frac{2\pi}{\ve}\vec{\nu}\right).


Let v:\bR^m\to \bR,  v(\vec{x})=w(|\vec{x}|). Then

\widehat{\phi}(\xi)=\widehat{v}\Bigl(\;\xi+\frac{1}{\ve}\vec{\theta}\;\Bigr).

Hence

\eE^\ve(\vec{\theta}_1,\vec{\theta}_2)= \frac{1}{(2\pi\ve)^m}\sum_{\vec{\nu}\in\bZ^m}\widehat{v}\left(\frac{1}{\ve}\vec{\theta}+\frac{2\pi}{\ve}\vec{\nu}\right).

Now observe that if |\vec{\theta}| \ll 2\pi, then |\vec{\theta}+2\pi\vec{\nu}|\geq (2\pi-|\vec{\theta}|)|\vec{\nu}| for every \vec{\nu}\in\bZ^m\setminus\{0\}. Since \widehat{v} decays rapidly at infinity, for any N>0 there exists a constant  C_N>0 such that

\left|\widehat{v}\left(\frac{1}{\ve}\vec{\theta}+\frac{2\pi}{\ve}\vec{\nu}\right)\right|\leq   C_N\ve^N|\vec{\nu}|^{-N}.

We deduce that

\eE^\ve(\vec{\theta}_1,\vec{\theta}_2) = \frac{1}{(2\pi\ve)^m}\left(\widehat{v}\left(\; \frac{1}{\ve}\vec{\theta}\;\right)+O\bigl(\; \ve^N\;\bigr)\;\right),\;\;\forall N>0.

The last asymptotic expansion can be  differentiated with respect to \vec{\theta}_1 and \vec{\theta}_2.
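For instance, for the Gaussian weight w(t)=e^{-t^2/2} of Example 2 below we have \widehat{v}(\xi)=(2\pi)^{\frac{m}{2}}e^{-|\xi|^2/2}, and the above expansion reads

\eE^\ve(\vec{\theta}_1,\vec{\theta}_2)=\frac{1}{(2\pi)^{\frac{m}{2}}\ve^m}\Bigl(\,e^{-\frac{|\vec{\theta}_2-\vec{\theta}_1|^2}{2\ve^2}}+O(\ve^\infty)\,\Bigr),\;\;|\vec{\theta}_2-\vec{\theta}_1|\ll 2\pi,

so the correlations between \bu_\ve(\vec{\theta}_1) and \bu_\ve(\vec{\theta}_2) are negligible as soon as |\vec{\theta}_2-\vec{\theta}_1|\gg\ve.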

Now define the random function

\bsU_\ve:\bT^m\times \bT^m\to \bR,\;\;\bsU_\ve(\vec{\theta},\vec{\vfi})=\bu_\ve(\vec{\theta})+\bu_\ve(\vec{\vfi}).

We denote by N(\bsU_\ve) the number of critical points of \bsU_\ve situated outside the diagonal. Since d\bsU_\ve(\vec{\theta},\vec{\vfi})=d\bu_\ve(\vec{\theta})\oplus d\bu_\ve(\vec{\vfi}), the critical points of \bsU_\ve are precisely the pairs of critical points of \bu_\ve, so that

N(\bsU_\ve)= N(\bu_\ve)^2-N(\bu_\ve).

We would like to  understand the behavior of the expectation of N(\bsU_\ve) as \ve\searrow 0.

The covariance kernel  of \bsU_\ve is the function

\widetilde{\eE}^\ve(\vec{\theta}_1,\vec{\vfi}_1; \vec{\theta}_2,\vec{\vfi}_2) =\eE^\ve(\vec{\theta}_1,\vec{\theta}_2)+\eE^\ve(\vec{\theta}_1,\vec{\vfi}_2)+\eE^\ve(\vec{\vfi}_1,\vec{\theta}_2)+\eE^\ve(\vec{\vfi}_1,\vec{\vfi}_2)

= \frac{1}{(2\pi\ve)^m}\Bigl( \;\widehat{v}(\;\ve^{-1}(\vec{\theta}_2-\vec{\theta}_1)\;)+ \widehat{v}(\ve^{-1}(\vec{\vfi}_2-\vec{\theta}_1)\;)\\ +\widehat{v}(\;\ve^{-1}(\vec{\theta}_2-\vec{\vfi}_1)\;) +\widehat{v}(\;\ve^{-1}(\vec{\vfi}_2-\vec{\vfi}_1)\;)+O(\ve^\infty)\;\Bigr).

Let us introduce the notation

\Theta:=(\vec{\theta},\vec{\vfi})\in\bT^m\times\bT^m , d(\Theta):=\vec{\vfi}-\vec{\theta}.


We  need to understand the quantities

\pa^\alpha_{\Theta_1}\pa^\beta_{\Theta_2}\teE^\ve(\Theta_1,\Theta_2)_{\Theta_1=\Theta_2=\Theta}=\bsE\bigl(\;\pa^\alpha_\Theta\bsU_\ve(\Theta)\cdot\pa^\beta_\Theta\bsU_\ve(\Theta)\;\bigr).

Note that \widehat{v}(\xi) is radially symmetric; in fact, it can be written as f(|\xi|^2) for some smooth function f. Indeed, we have (see Michael Taylor's notes; he uses a different normalization for the Fourier transform)


\widehat{v}(\xi)=\int_{\bR^m} w(|\vec{x}|)\, e^{-\ii\lan\xi,\vec{x}\ran}\,|d\vec{x}|  =(2\pi)^{\frac{m}{2}}|\xi|^{1-\frac{m}{2}}\int_0^\infty w(r)\, J_{\frac{m}{2}-1}(r|\xi|)\, r^{\frac{m}{2}}\, dr,

where J_\nu denotes the Bessel function of the first kind and order \nu. For any multi-indices \alpha,\beta we have



(2\pi)^m\pa^\alpha_{\vec{\theta}_1}\pa^\beta_{\vec{\theta}_2}\teE^\ve(\Theta,\Theta)=  \ve^{-m-|\alpha|-|\beta|} \Bigl(\; (-1)^{|\alpha|}\pa^{\alpha+\beta}_\xi \widehat{v}(0) + O(\ve^\infty)\,\Bigr), \tag{1}


(2\pi)^m\pa^\alpha_{\vec{\vfi}_1}\pa^\beta_{\vec{\vfi}_2}\teE^\ve(\Theta,\Theta)= \ve^{-m-|\alpha|-|\beta|}\Bigl( (-1)^{|\alpha|} \pa^{\alpha+\beta}_\xi\widehat{v}(0) +O(\ve^{\infty})\;\Bigr). \tag{2}

The main term of these asymptotics vanishes if |\alpha|+|\beta| is odd. Next


(2\pi)^m\pa^\alpha_{\vec{\theta}_1}\pa^\beta_{\vec{\vfi}_2} \teE^\ve(\Theta,\Theta)= \ve^{-m-|\alpha|-|\beta|}\Bigl( (-1)^{|\alpha|}\pa^{\alpha+\beta}_\xi\widehat{v}(\ve^{-1}d(\Theta) ) +O(\ve^\infty)\;\Bigr), \tag{3}


(2\pi)^m \pa^\alpha_{\vec{\vfi}_1}\pa^\beta_{\vec{\theta}_2}\teE^\ve(\Theta,\Theta) =\ve^{-m-|\alpha|-|\beta|} \Bigl(\;(-1)^{|\alpha|} \pa^{\alpha+\beta}_\xi \widehat{v}(\;-\ve^{-1}d(\Theta)\;)+ O(\ve^\infty)\;\Bigr)\\ =\ve^{-m-|\alpha|-|\beta|}\Bigl(\;(-1)^{|\beta|}\pa^{\alpha+\beta}_\xi\widehat{v}(\;\ve^{-1}d(\Theta)\;)+O(\ve^\infty)\;\Bigr).\tag{4}

For example, if |\alpha|=2 and |\beta|=1, then (3) yields

(2\pi)^m\pa^\alpha_{\vec{\theta}_1}\pa^\beta_{\vec{\vfi}_2}\teE^\ve(\Theta,\Theta)= \ve^{-m-3}\Bigl(\;\pa^{\alpha+\beta}_\xi\widehat{v}(\;\ve^{-1}d(\Theta)\;)+O(\ve^\infty)\;\Bigr).\tag{3'}
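Similarly, taking \alpha=\beta=e_j in (1), where e_j is the j-th basic multi-index, we obtain the variance of the j-th derivative of \bu_\ve,

\bsE\Bigl(\,\bigl(\,\pa_{\theta^j}\bu_\ve(\vec{\theta})\,\bigr)^2\,\Bigr)=\frac{\ve^{-m-2}}{(2\pi)^m}\Bigl(\,-\pa^2_{\xi_j}\widehat{v}(0)+O(\ve^\infty)\,\Bigr),\;\;-\pa^2_{\xi_j}\widehat{v}(0)=\int_{\bR^m}x_j^2\, v(\vec{x})\,|d\vec{x}|.

In particular, the first order derivatives of \bu_\ve have variances of size \ve^{-m-2}.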

Example 1.    Let us compute \pa^\alpha_\xi f(|\xi|^2),  |\alpha|\leq 4.

We have

\pa_{\xi_i} f(|\xi|^2) = 2\xi_i f',\;\;\pa^2_{\xi_i\xi_j}f(|\xi|^2)= 2\delta_{ij} f' +4\xi_i\xi_j f'',

\pa^3_{\xi_i\xi_j\xi_k} f = 4\bigl(\; \delta_{ij}\xi_k+\delta_{ik}\xi_j+\delta_{jk}\xi_i\;\bigr)f''+8\xi_i\xi_j\xi_k f'''.

\pa^4_{\xi_i\xi_j\xi_k\xi_\ell} f(|\xi|^2)= 4\bigl(\;\delta_{ij}\delta_{k\ell}+\delta_{ik}\delta_{j\ell}+\delta_{jk}\delta_{i\ell}\;\bigr) f''

+ 8\bigl(\; \delta_{ij}\xi_k\xi_\ell+\delta_{ik}\xi_j\xi_\ell+\delta_{jk}\xi_i\xi_\ell+\delta_{i\ell}\xi_j\xi_k+\delta_{j\ell}\xi_i\xi_k+\delta_{k\ell}\xi_i\xi_j\;\bigr) f''' +16\xi_i\xi_j\xi_k\xi_\ell f^{(4)}.
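In particular, at \xi=0 the odd order derivatives vanish, while

\pa^2_{\xi_i\xi_j}f(|\xi|^2)\Big|_{\xi=0}=2\delta_{ij}f'(0),\;\;\pa^4_{\xi_i\xi_j\xi_k\xi_\ell}f(|\xi|^2)\Big|_{\xi=0}=4\bigl(\,\delta_{ij}\delta_{k\ell}+\delta_{ik}\delta_{j\ell}+\delta_{jk}\delta_{i\ell}\,\bigr)f''(0).

These are precisely the quantities entering the main terms of (1) and (2), which explains why those main terms vanish when |\alpha|+|\beta| is odd.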

Example 2. Let us be more specific and set w(t)=e^{-t^2/2}. Then v(\vec{x})= e^{-|\vec{x}|^2/2} and f(s)=(2\pi)^{\frac{m}{2}}e^{-s/2}, so that

\widehat{v}(\xi) = (2\pi)^{\frac{m}{2}} e^{-|\xi|^2/2}.


We can write

\widehat{v}(\xi) =(2\pi)^{m/2}\prod_{j=1}^m e^{-\xi_j^2/2}.
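This is consistent with the Bessel integral formula above: for w(r)=e^{-r^2/2} the classical identity

\int_0^\infty e^{-r^2/2} J_\nu(\rho r)\, r^{\nu+1}\, dr=\rho^\nu e^{-\rho^2/2},\;\;\nu=\frac{m}{2}-1,\;\;\rho=|\xi|,

yields \widehat{v}(\xi)=(2\pi)^{\frac{m}{2}}|\xi|^{1-\frac{m}{2}}\cdot|\xi|^{\frac{m}{2}-1}e^{-|\xi|^2/2}=(2\pi)^{\frac{m}{2}}e^{-|\xi|^2/2}.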

For any multi-index \alpha=(\alpha_1,\dotsc, \alpha_m) we have


\pa^\alpha_\xi \widehat{v}(\xi) =(-1)^{|\alpha|}\underbrace{\left(\prod_{j=1}^m H_{\alpha_j}(\xi_j)\right) }_{=:H_\alpha(\xi)}\widehat{v}(\xi),

where H_n denotes the n-th Hermite polynomial defined by

\frac{d^n}{dx^n} e^{-x^2/2}= (-1)^nH_n(x) e^{-x^2/2}.


Let us point out that

(-1)^{n+1}H_{n+1}(x) = (-1)^nH_n'(x) +(-1)^{n+1}xH_n(x),

H_1(x)=x, \;\; H_2(x)=x^2-1,\;\;H_3(x) =x^3-3x,\;\; H_4(x)=x^4-6x^2 +3.
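Combining this with (3) we see that, for the Gaussian weight, the off-diagonal correlations have the explicit form

(2\pi)^m\pa^\alpha_{\vec{\theta}_1}\pa^\beta_{\vec{\vfi}_2}\teE^\ve(\Theta,\Theta)=\ve^{-m-|\alpha|-|\beta|}\Bigl(\,(-1)^{|\beta|}(2\pi)^{\frac{m}{2}}H_{\alpha+\beta}\bigl(\ve^{-1}d(\Theta)\bigr)\,e^{-\frac{|d(\Theta)|^2}{2\ve^2}}+O(\ve^\infty)\,\Bigr),

so they decay like a Gaussian in |d(\Theta)|/\ve, up to polynomial factors, away from the diagonal.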



Observe that

\Hess\bsU_\ve(\Theta) =\Hess \bu_\ve(\vec{\theta})\oplus \Hess\bu_\ve(\vec{\vfi}).



We need to understand the statistics of the following two random objects:

d\bsU_\ve(\Theta)

and


H_c(\Theta):= \bsE\Bigl(\;\Hess \bsU_\ve(\Theta)\;\bigr|\; d\bsU_\ve(\Theta)=0\;\Bigr),

as d(\Theta)\to 0, i.e., as \Theta approaches the diagonal in  \bT^m\times \bT^m. The covariance form V_\ve of d\bsU_\ve(\Theta) becomes singular as d(\Theta)\to 0.  Fortunately, something miraculous seems to be happening: as d(\Theta)\to 0  the Gaussian random variable  H_c becomes highly concentrated near the zero matrix, and in the limit it becomes the deterministic zero matrix. This leads to a remarkable compensation in the Kac-Rice formula.
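Recall that, under suitable nondegeneracy assumptions, the Kac-Rice formula expresses the expected number of off-diagonal critical points as

\bsE\bigl(\,N(\bsU_\ve)\,\bigr)=\int_{\{d(\Theta)\neq 0\}}\bsE\Bigl(\,\bigl|\det\Hess\bsU_\ve(\Theta)\bigr|\;\Big|\; d\bsU_\ve(\Theta)=0\,\Bigr)\, p_\Theta(0)\,|d\Theta|,

where p_\Theta denotes the probability density of the Gaussian vector d\bsU_\ve(\Theta). As d(\Theta)\to 0 the density p_\Theta(0) blows up, since V_\ve degenerates, while the conditional Hessian concentrates near the zero matrix and the conditioned determinant becomes small; the two factors work against each other, and this is the compensation alluded to above.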



