$$ \widehat{w}(\theta)=\int_{\bR} e^{\ii \theta x}w(x)\, dx. $$
For any $\vec{\theta}\in\bR^n$ we form the complex Hermitian $n\times n$ matrix
$$A_w(\vec{\theta})= \bigl(\, a_{ij}(\vec{\theta})\,\bigr)_{1\leq i,j\leq n},\qquad a_{ij}(\vec{\theta})=\widehat{w}(\theta_i-\theta_j) .$$
Observe that for any $\vec{z}\in\mathbb{C}^n$ we have $\newcommand{\bC}{\mathbb{C}}$
$$ \bigl(\; A_w(\vec{\theta})\vec{z},\vec{z}\;\bigr)=\sum_{i,j} \widehat{w}(\theta_i-\theta_j) z_i\bar{z}_j =\int_{\bR} | T_{\vec{z}}(x,\vec{\theta})|^2 w(x) dx, $$
where $T_{\vec{z}}(x,\vec{\theta})$ is the trigonometric polynomial $\newcommand{\vez}{\vec{z}}$
$$T_{\vez}(x,\vec{\theta})= \sum_j z_j e^{\ii \theta_j x}. $$
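For instance, for $n=2$ the identity reads (recall that $\widehat{w}(-\theta)=\overline{\widehat{w}(\theta)}$ because $w$ is real valued)
$$ \bigl(\,A_w(\vec{\theta})\vec{z},\vec{z}\,\bigr)=\widehat{w}(0)\bigl(|z_1|^2+|z_2|^2\bigr)+2{\rm Re}\Bigl[\widehat{w}(\theta_1-\theta_2)z_1\bar{z}_2\Bigr]=\int_{\bR}\bigl|z_1e^{\ii\theta_1x}+z_2e^{\ii\theta_2x}\bigr|^2w(x)\,dx\geq 0. $$
In particular, the identity shows that $A_w(\vec{\theta})$ is always positive semidefinite.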
We denote by $(-,-)_w$ the inner product
$$ (f,g)_w=\int_{\bR} f(x) \overline{g(x)}\, w(x)\, dx,\;\;f,g:\bR\to \bC. $$
We see that $A_w(\vec{\theta})$ is the Gram matrix
$$ a_{ij}(\vec{\theta})= (E_{\theta_i}, E_{\theta_j})_w,\;\; E_\theta(x)=e^{\ii\theta x}. $$
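As a quick numerical sanity check of this Gram-matrix identity, here is a minimal sketch in Python (assuming numpy, and choosing the weight $w=I_{[0,1]}$, for which $\widehat{w}(\theta)=(e^{\ii\theta}-1)/(\ii\theta)$; both choices are made only for illustration).

import numpy as np

# Weight w = indicator of [0,1]; with the convention used above,
# w_hat(t) = int_0^1 e^{i t x} dx = (e^{i t} - 1)/(i t), and w_hat(0) = 1.
def w_hat(t):
    t = np.asarray(t, dtype=float)
    out = np.ones_like(t, dtype=complex)
    nz = np.abs(t) > 1e-12
    out[nz] = (np.exp(1j * t[nz]) - 1.0) / (1j * t[nz])
    return out

theta = np.array([-0.7, -0.1, 0.3, 0.9])      # a sample frequency vector (n = 4)
A = w_hat(theta[:, None] - theta[None, :])    # a_{ij} = w_hat(theta_i - theta_j)

# Gram matrix (E_{theta_i}, E_{theta_j})_w by trapezoidal quadrature on [0,1].
x = np.linspace(0.0, 1.0, 4001)
E = np.exp(1j * np.outer(theta, x))           # row i samples E_{theta_i}(x) = e^{i theta_i x}
F = E[:, None, :] * np.conj(E[None, :, :])    # integrand E_i(x) * conj(E_j(x))
G = 0.5 * (x[1] - x[0]) * (F[..., :-1] + F[..., 1:]).sum(axis=-1)

print(np.abs(A - G).max())        # ~ 1e-8 (quadrature error): A_w is the Gram matrix
print(np.linalg.det(A).real > 0)  # True: the theta_j are distinct, so det A_w > 0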
We see that $\sqrt{\;\det A_w(\vec{\theta})\;}$ is equal to the $n$-dimensional volume of the parallelepiped $P(\vec{\theta})\subset L^2(\bR, w\,dx)$ spanned by the functions $E_{\theta_1},\dotsc, E_{\theta_n}$. In particular, if these exponentials are linearly dependent, then this volume is zero. Here is a first elementary result.
Lemma 1. The exponentials $E_{\theta_1},\dotsc, E_{\theta_n}$ are linearly dependent (over $\bC$) if and only if $\theta_j=\theta_k$ for some $j\neq k$.
Proof. Suppose first that the exponentials are linearly dependent, i.e., there exists $\vec{z}\in\bC^n\setminus\{0\}$ such that
$$\sum_{j=1}^n z_j E_{\theta_j}(x)=0,\;\;\forall x\in \bR. $$
Then for any $f\in \eS(\bR)$ we have
$$\sum_{j=1}^n z_j E_{\theta_j}(x)f(x)=0,\;\;\forall x\in \bR. $$
By taking the Fourier Transform of the last equality we deduce
$$ \sum_{j=1}^n z_j \widehat{f}(\theta-\theta_j) =0. \label{1}\tag{1}$$
If we now choose $\newcommand{\ve}{{\varepsilon}}$ a family $f_\ve\in\eS(\bR)$ such that, as $\ve\searrow 0$, $\widehat{f}_\ve$ converges in the sense of distributions to the Dirac delta concentrated at $0$ (for instance, Gaussians with $\widehat{f}_\ve(\theta)=\frac{1}{\ve\sqrt{2\pi}}e^{-\theta^2/(2\ve^2)}$ will do), we deduce from (\ref{1}) that
$$\sum_{j=1}^n z_j\delta(\theta-\theta_j)=0. \tag{2}\label{2} $$
Since $\vec{z}\neq 0$, the identity (\ref{2}) can hold only if two of the Dirac deltas coincide, i.e., only if $\theta_j=\theta_k$ for some $j\neq k$. Conversely, if $\theta_j=\theta_k$ for some $j\neq k$, then $E_{\theta_j}=E_{\theta_k}$ and the exponentials are obviously linearly dependent. q.e.d.
If we set
$$ \Delta(\vec{\theta}) :=\prod_{1\leq j<k\leq n} (\theta_k-\theta_j), $$
then we deduce from the above lemma that
$$ \det A_w(\vec{\theta})= 0 \Leftrightarrow \Delta(\vec{\theta})=0. $$
A more precise statement is true.
Theorem 2. For any integrable weight $w:\bR\to [0,\infty)$ such that $\int_{\bR} w(x)\, dx >0$ there exists a constant $C=C(n,w)>0$ such that for any $\theta_1,\dotsc, \theta_n\in [-1,1]$ we have
$$ \frac{1}{C}|\Delta(\vec{\theta})|^2 \leq \det A_w(\vec{\theta}). \tag{E}\label{E}$$
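Before turning to the proof, here is a simple example illustrating the quadratic rate in (\ref{E}). Take $n=2$ and $w=I_{[-1,1]}$, so that $\widehat{w}(\theta)=\frac{2\sin\theta}{\theta}$. Setting $\delta:=\theta_1-\theta_2$ we get
$$ \det A_w(\vec{\theta})=\widehat{w}(0)^2-\widehat{w}(\delta)^2=4\Bigl(1-\frac{\sin^2\delta}{\delta^2}\Bigr)=\frac{4}{3}\delta^2+O(\delta^4)\;\;\mbox{as $\delta\to 0$}, $$
so that $\det A_w(\vec{\theta})\sim \frac{4}{3}|\Delta(\vec{\theta})|^2$ as the two nodes collide, in agreement with (\ref{E}) and with the two-sided estimate established for compactly supported weights in Step 1 of the proof below.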
Proof. We regard $A_w(\vec{\theta})$ as a hermitian operator
$$ A_w(\vec{\theta}):\bC^n\to \bC^n. $$
We denote by $\lambda_1(\vec{\theta})\leq \cdots \leq \lambda_n(\vec{\theta}) $ its eigenvalues so that
$$\det A_w(\vec{\theta})=\prod_{j=1}^n \lambda_j(\vec{\theta}) \tag{Det}\label{D}. $$
Observe that $\newcommand{\Lra}{\Leftrightarrow}$ $\newcommand{\eO}{\mathscr{O}}$
$$\vec{z}\in \ker A_w(\vec{\theta}) \Lra \sum_{j=1}^n z_j E_{\theta_j}(x) =0,\;\;\forall x\in{\rm supp}\; w \Lra \sum_{j=1}^n z_j E_{\theta_j}(x) =0,\;\;\forall x\in\bR. \tag{Ker}\label{K}$$
Indeed, since $A_w(\vec{\theta})$ is positive semidefinite, $\vec{z}\in\ker A_w(\vec{\theta})$ if and only if $\bigl(\,A_w(\vec{\theta})\vec{z},\vec{z}\,\bigr)=\int_{\bR}|T_{\vec{z}}(x,\vec{\theta})|^2 w(x)\, dx=0$, i.e., $T_{\vec{z}}(\cdot,\vec{\theta})$ vanishes almost everywhere on $\{w>0\}$; since $T_{\vec{z}}$ is a real analytic function of $x$ and $\{w>0\}$ has positive Lebesgue measure, this happens if and only if $T_{\vec{z}}(\cdot,\vec{\theta})$ vanishes identically on $\bR$.
We want to give a more precise description of $\ker A_w(\vec{\theta})$. Set
$$ I_n:=\{1,\dotsc, n\},\;\; \Phi_{\vec{\theta}}=\{ \theta_1,\dotsc,\theta_n\}\subset \bR. $$
We want to emphasize that $\Phi_{\vec{\theta}}$ is a set, not a multi-set, so that $\#\Phi_{\vec{\theta}}\leq n$. $\newcommand{\vet}{{\vec{\theta}}}$
Example 3. With $n=6$ and $\vet=(1,2,3,2,2,4)$ we have
$$ \Phi_\vet=\Phi_{(1,2,3,2,2,4)}=\{1,2,3,4\}. $$
For $\newcommand{\vfi}{{\varphi}}$ $\vfi\in\Phi_\vet$ we set
$$ J_\vfi=\bigl\{ j\in I_n;\;\; \theta_j=\vfi\;\bigr\}. $$
In the example above for $ \vet=(1,2,3,2,2,4)$ and $\vfi=2$ we have $J_\vfi=\{2,4,5\}$. $\newcommand{\vez}{\vec{z}}$ For $J\subset I_n$ we set
$$S_J:\bC^n\to \bC,\;\;S_J(\vez)=\sum_{j\in J} z_j. $$
In particular, for any $\vfi\in\Phi_\vet$ we define
$$S_\vfi:\bC^n\to \bC,\;\; S_{\vfi}(\vec{z})=S_{J_\vfi}(\vez)=\sum_{j\in J_\vfi} z_j. $$
We deduce
$$ \sum_{j\in I_n} z_jE_{\theta_j}=\sum_{\vfi\in \Phi_\vet} S_\vfi(\vec{z}) E_\vfi. $$
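For instance, for the vector $\vet=(1,2,3,2,2,4)$ of Example 3 this regrouping reads
$$ \sum_{j=1}^6 z_jE_{\theta_j}= z_1E_1+(z_2+z_4+z_5)E_2+z_3E_3+z_6E_4. $$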
Using (\ref{K}) we deduce
$$ \vez\in\ker A_w(\vet)\Lra \sum_{\vfi\in \Phi_\vet} S_\vfi(\vec{z}) E_\vfi=0\Lra S_\vfi(\vez)=0,\;\;\forall \vfi\in \Phi_\vet , \tag{3}\label{3}$$
where the last equivalence follows from Lemma 1, because the elements of $\Phi_\vet$ are pairwise distinct.
In particular we deduce
$$\dim \ker A_w(\vet)=n-\#\Phi_\vet. $$
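For instance, for $\vet=(1,2,3,2,2,4)$ as in Example 3 we have $\#\Phi_\vet=4$ and (\ref{3}) shows that
$$ \ker A_w(\vet)=\bigl\{\vez\in\bC^6;\;\; z_1=z_3=z_6=0,\;\;z_2+z_4+z_5=0\,\bigr\}, $$
which indeed has dimension $6-4=2$.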
Step 1. Assume that $w$ has compact support so that $\widehat{w}(\theta)$ is real analytic over $\bR$. We will show that we have the two-sided estimate
$$ \frac{1}{C}|\Delta(\vec{\theta})|^2 \leq \det A_w(\vec{\theta}) \leq C |\Delta(\vec{\theta})|^2. \tag{$E_*$}\label{Es} $$
In this case $\det A_w(\vet)$ is real analytic and symmetric in the variables $\theta_1,\dotsc, \theta_n$, and it vanishes if and only if $\theta_j=\theta_k$ for some $j\neq k$. Being real analytic, $\det A_w(\vet)$ admits a Taylor series expansion near $\vet=0$
$$\det A_w(\vet)= \sum_{\ell\geq 0} P_\ell(\vet), $$
where $P_\ell(\vet)$ is the degree-$\ell$ homogeneous part of the expansion; each $P_\ell$ is a symmetric polynomial in $\vet$ that vanishes when $\theta_j=\theta_k$ for some $j\neq k$. A symmetric polynomial of this type is divisible by each difference $\theta_k-\theta_j$, hence by $\Delta(\vet)$, and the quotient is antisymmetric, hence divisible by $\Delta(\vet)$ once more. Symmetric polynomials of this type therefore have the form
$$\Delta(\vet)^{2N} \cdot Q(\vet) $$
where $N$ is some positive integer and $Q$ is a symmetric polynomial. We deduce from the Łojasiewicz inequality for subanalytic functions that there exist a constant $C=C(w)>0$, a positive integer $N$ and a rational number $r>0$ such that
$$ \frac{1}{C} |\Delta(\vet)|^{r}\leq \det A_w(\vet) \leq C |\Delta(\vet)|^{2N},\;\;\forall \vet\in[-1,1]^n. \tag{4} \label{4} $$
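For the reader's convenience, here is the division-type form of the Łojasiewicz inequality used for the lower bound (stated without proof): if $f,g$ are real analytic on a compact set $K\subset\bR^n$ and $g$ vanishes on the zero set of $f$, then there exist $C>0$ and an exponent $\rho>0$ such that $|g|^{\rho}\leq C|f|$ on $K$. We apply this with $K=[-1,1]^n$, $f=\det A_w$ and $g=\Delta$ (their zero sets coincide by the discussion following Lemma 1) to obtain the lower bound in (\ref{4}).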
We want to show that in (\ref{4}) we have $2N=r=2$. Consider the path
$$\vet(t)= (0, t, \theta_3, \dotsc, \theta_n), \;\; 0\leq |t| < \theta_3<\cdots < \theta_n. $$
Set $A_w(t)=A_w\bigl(\,\vet(t)\;\bigr)$. Denote its eigenvalues by
$$0\leq \lambda_1(t)\leq \lambda_2(t)\leq\cdots \leq \lambda_n(t). $$
The eigenvalues are arranged so that the functions $\lambda_k(t)$ are real analytic for $t$ in a neighborhood of $0$; this is possible by Rellich's theorem on real analytic one-parameter families of Hermitian matrices. We deduce from (\ref{3}) that $\ker A_w(0)$ is one dimensional, so that $0=\lambda_1(0)$ is a simple eigenvalue of $A_w(0)$ and $\lambda_k(0)>0$, $\forall k>1$. Hence
$$ \det A_w(t) \sim \lambda_1(t) \prod_{k=2}^n \lambda_k(0)\;\;\mbox{as $t\searrow 0$}. \tag{5}\label{5} $$
On the other hand, along this path we have
$$\Delta\bigl(\vet(t)\bigr)= t\prod_{k=3}^n\theta_k(\theta_k-t)\prod_{3\leq j<k\leq n}(\theta_k-\theta_j), $$
so that
$$\Delta(\vet(t))^2 \sim Zt^2 \;\;\mbox{as}\;\; t\searrow 0, $$
where $Z=\prod_{k=3}^n\theta_k^4\prod_{3\leq j<k\leq n}(\theta_k-\theta_j)^2$ is a positive constant. Using this estimate in (\ref{4}) we deduce $r=2N$. On the other hand, using the above estimate in (\ref{5}) we deduce
$$ \lambda_1(t) \sim Z_1 t^{2N} \;\;\mbox{as}\;\;t\searrow 0, \tag{6}\label{6} $$
for another positive constant $Z_1$.
The kernel of $A_w(0)$ is spanned by the unit vector
$$ \vez(0)= \Bigl(\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}, 0,\dotsc, 0\Bigr). $$
Since $\lambda_1(t)$ is a simple eigenvalue of $A_w(t)$ for $t$ near $0$, we can find a real analytic family of vectors $t\mapsto \vec{z}(t)$ satisfying
$$|\vez(t)|=1,\;\; A_w(t) \vez(t)=\lambda_1(t)\vez(t),\;\;\lim_{t\to 0}\vez(t)=\vez(0). $$
In particular, since $\lambda_1(t)\geq 0=\lambda_1(0)$ for all small $t$, the analytic function $\lambda_1$ attains a minimum at $t=0$, so that $\dot{\lambda}_1(0)=0$. Differentiating the identity $A_w(t)\vez(t)=\lambda_1(t)\vez(t)$ at $t=0$ we deduce
$$ \dot{A}_w(0)\vez(0)+A_w(0)\dot{\vez}(0)=\dot{\lambda}_1(0)\vez(0)+\lambda_1(0)\dot{\vez}(0)=0. $$
In other words, $A_w(0)\dot{\vez}(0)=-\dot{A}_w(0)\vez(0)$; the precise form of $\dot{\vez}(0)$ will not be needed below, only the fact that the family $t\mapsto \vez(t)$ is differentiable at $t=0$.
Next, since $\vez(t)$ is a unit eigenvector of $A_w(t)$ corresponding to $\lambda_1(t)$, we have
$$ \lambda_1(t)= (A_w(t) \vez(t),\vez(t))= \int_{\bR} \Bigl| \;\underbrace{\sum_{j=1}^n z_j(t) e^{\ii\theta_j(t) x}}_{=:f_t(x)}\;\Bigr|^2 w(x)\, dx. $$
Observe that
$$f_t(x)= \sum_{j=1}^n z_j(t) e^{\ii\theta_j(t) x}= \frac{1}{\sqrt{2}}\bigl(1-e^{\ii t x}\bigr) +\sum_{j=1}^n \ve_j(t) e^{\ii\theta_j(t) x},\;\;\ve_j(t)=z_j(t)-z_j(0). $$
We deduce that
$$ \lim_{t\to 0} \frac{1}{t}f_t(x) = -\frac{\ii x}{\sqrt{2}} + \sum_{j=1}^n \dot{z}_j(0)e^{\ii\theta_j(0) x}=:g(x)\tag{7}\label{7}$$
uniformly for $x$ on compacts. Since $w$ has compact support we deduce that (\ref{7}) holds uniformly for $x$ in the support of $w$. Note that $g$ does not vanish identically: since $\theta_1(0)=\theta_2(0)=0$, we have
$$ g(x)=-\frac{\ii x}{\sqrt{2}}+\bigl(\dot{z}_1(0)+\dot{z}_2(0)\bigr)+\sum_{j=3}^n \dot{z}_j(0)e^{\ii\theta_j x}, $$
the functions $x,1,e^{\ii\theta_3 x},\dotsc,e^{\ii\theta_n x}$ are linearly independent over $\bC$, and the coefficient of $x$ equals $-\ii/\sqrt{2}\neq 0$. Since $g$ is real analytic and not identically zero, its zero set is negligible, and since $w\geq 0$ has positive total mass we deduce $K:=\int_{\bR}|g(x)|^2w(x)\,dx>0$. Dividing the formula for $\lambda_1(t)$ above by $t^2$ and letting $t\to 0$ we conclude
$$ \lambda_1(t)\sim K\, t^2\;\;\mbox{as $t\to 0$}. $$
Comparing this with (\ref{6}) we obtain $2N=2$, which proves (\ref{Es}).
Step 2. We will show that if (\ref{E}) holds for $w_0$ and $w_1(x) \geq w_0(x)$, $\forall x$, then (\ref{E}) holds for $w_1$ as well. For any weight $w$ and any $\vet$ such that $\Delta(\vet)\neq 0$ consider the ellipsoid
$$ \Sigma_w(\vet):=\bigl\{\vez\in\bC^n;\;\; \bigl(\,A_w(\vet)\vez,\vez\,\bigr)\leq 1\bigr\}. $$
Then
$$ {\rm vol}\, \bigl(\;\Sigma_w(\vet)\;\bigr)=\frac{\pi^n}{n!\det A_w(\vet)}. $$
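One way to see this: diagonalize $A_w(\vet)=U^*DU$ with $U$ unitary and $D={\rm diag}(\lambda_1,\dotsc,\lambda_n)$, $\lambda_k=\lambda_k(\vet)>0$. In the coordinates $\vec{u}=U\vez$ the ellipsoid becomes $\bigl\{\sum_k\lambda_k|u_k|^2\leq 1\bigr\}$, and the substitution $v_k=\sqrt{\lambda_k}\,u_k$, which multiplies (real $2n$-dimensional) volumes by $\lambda_1\cdots\lambda_n$, maps it onto the unit ball of $\bC^n\cong\bR^{2n}$, whose volume is $\pi^n/n!$. Hence
$$ {\rm vol}\,\bigl(\Sigma_w(\vet)\bigr)=\frac{\pi^n}{n!\,\lambda_1\cdots\lambda_n}=\frac{\pi^n}{n!\,\det A_w(\vet)}. $$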
Observe that if $ w_0\leq w_1$ then $\bigl(A_{w_0}(\vet)\vez,\vez\bigr)\leq \bigl(A_{w_1}(\vet)\vez,\vez\bigr)$ for every $\vez$, so that $\Sigma_{w_1}(\vet)\subset \Sigma_{w_0}(\vet)$, and the volume formula above yields
$$ \det A_{w_0}(\vet) \leq \det A_{w_1}(\vet). $$
This proves our claim: if (\ref{E}) holds for $w_0$, then $\frac{1}{C}|\Delta(\vet)|^2\leq \det A_{w_0}(\vet)\leq \det A_{w_1}(\vet)$, so (\ref{E}) holds for $w_1$ with the same constant.
Step 3. We show that (\ref{E}) holds for any integrable weight $w$ with $\int_{\bR}w(x)\,dx>0$. At least one of the level sets $\{w\geq \ve\}$, $\ve>0$, has positive Lebesgue measure. By inner regularity we can find a compact set $K \subset \{w\geq \ve \}$ of positive measure. Now define $w_0=\ve I_{K}$. Clearly $w_0\leq w$, the weight $w_0$ has compact support and $\int_{\bR}w_0(x)\,dx>0$, so by Step 1 the estimate (\ref{E}) holds for $w_0$. Invoking Step 2 we deduce that (\ref{E}) holds for $w$. Q.E.D.
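As a closing numerical sanity check of (\ref{E}), here is a minimal sketch in Python (assuming numpy; the Gaussian weight $w(x)=e^{-x^2}$, for which $\widehat{w}(\theta)=\sqrt{\pi}\,e^{-\theta^2/4}$, is chosen only for convenience).

import numpy as np

rng = np.random.default_rng(0)

# Gaussian weight w(x) = exp(-x^2); with the convention used above,
# w_hat(t) = int e^{i t x} e^{-x^2} dx = sqrt(pi) * exp(-t^2 / 4).
def w_hat(t):
    return np.sqrt(np.pi) * np.exp(-np.asarray(t, dtype=float) ** 2 / 4.0)

n = 5
ratios = []
for _ in range(2000):
    theta = rng.uniform(-1.0, 1.0, size=n)           # random frequencies in [-1, 1]
    A = w_hat(theta[:, None] - theta[None, :])       # a_{ij} = w_hat(theta_i - theta_j)
    diffs = theta[None, :] - theta[:, None]          # entry (j, k) equals theta_k - theta_j
    Delta = np.prod(diffs[np.triu_indices(n, 1)])    # Vandermonde product over j < k
    if abs(Delta) < 1e-8:
        continue                                     # skip nearly degenerate samples
    ratios.append(np.linalg.det(A) / Delta**2)

ratios = np.array(ratios)
print(ratios.min(), ratios.max())  # the ratio det A_w / Delta^2 stays bounded below by a
                                   # positive constant over the samples, consistent with (E)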