Appendices
Appendix I: Mean-Square Error Minimization
We wish to estimate a random variable \(x\) as a linear combination \(x_e\) of other random variables \(r_k.\) The weighting coefficients \(h_k\) in the linear combination are not random. The linear combination is:
$$x_e = \sum\limits_k h_k r_k \tag{13.1}$$
The difference between the estimated value \(x_e\) and the true value \(x\) is the error of the estimation procedure, \(error = x_e - x.\) If this term is greater than zero we have an overestimate, and if it is less than zero we have an underestimate. We consider both cases equally objectionable, and therefore work with the squared error \(\left( x_e - x \right)^2.\) The mean-square error is then given by:
$$e = E\left\{ \left( x_e - x \right)^2 \right\} = E\left\{ \left( \sum\limits_k h_k r_k - x \right)^2 \right\} \tag{13.2}$$
We seek the value of the coefficient set \(h\) that minimizes this mean-square error. Taking the derivative with respect to one of the coefficients \({{h_i}}\) gives:
$$0 = \frac{\partial e}{\partial h_i} = \left( \frac{\partial e}{\partial q} \right)\left( \frac{\partial q}{\partial h_i} \right) = 2E\left\{ \left( \sum\limits_k h_k r_k - x \right) r_i \right\} \tag{13.3}$$
where we have (temporarily) set \(q = \left( {\sum\nolimits_k {{h_k}{r_k} - x} } \right).\) Note the critical step of exchanging the order of differentiation \(\partial \left( \bullet \right)/\partial {h_i}\) and expectation \(E\left\{ \bullet \right\}.\) See Equation 4.13 and Equation 4.14.
The result in Equation 13.3 must be applied simultaneously for every coefficient \(h_i\) to produce the minimum mean-square error. This leads to a set of simultaneous equations which some textbooks (Papoulis1) formulate as a vector equation. In that vector formulation the data vector \({\bf{r}}\) is orthogonal to the error vector \({\bf{e}} = {{\bf{x}}_{\bf{e}}} - {\bf{x}},\) because their inner product, as exemplified by Equation 13.3, is zero. We, however, shall use the formulation shown above.
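To make this concrete, here is a minimal numerical sketch in Python/NumPy. The correlated observations, the target signal, and all parameter values are assumptions chosen only for illustration; the point is that solving the sample version of Equation 13.3, \(\sum_k h_k E\left\{ r_k r_i \right\} = E\left\{ x\, r_i \right\}\) for every \(i,\) yields an error that is orthogonal to the data.

```python
# Minimal numerical sketch (assumed data and parameters, not from the text):
# solve the sample version of Equation 13.3, sum_k h_k E{r_k r_i} = E{x r_i},
# and verify that the resulting error is orthogonal to every observation r_i.
import numpy as np

rng = np.random.default_rng(0)
N, K = 100_000, 3                       # realizations, number of observations r_k

# Correlated observations r and a target x that depends on them plus noise.
mix = np.array([[1.0, 0.3, 0.0],
                [0.0, 1.0, 0.5],
                [0.0, 0.0, 1.0]])
r = rng.standard_normal((N, K)) @ mix
x = 2.0 * r[:, 0] - 1.0 * r[:, 1] + 0.5 * r[:, 2] + 0.3 * rng.standard_normal(N)

R = r.T @ r / N                         # sample estimate of E{r_k r_i}
p = r.T @ x / N                         # sample estimate of E{x r_i}
h = np.linalg.solve(R, p)               # coefficients satisfying Equation 13.3

x_e = r @ h                             # linear estimate, Equation 13.1
error = x_e - x
print("h               =", h)
print("E{error * r_i}  =", r.T @ error / N)   # ~0: error orthogonal to the data
print("mean-square err =", np.mean(error**2)) # minimum value of Equation 13.2
```

Perturbing any component of \(h\) in this sketch increases the printed mean-square error, which is one way to see that the solution of the simultaneous equations is indeed the minimum.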
Appendix II: The Discrete-Time Uncertainty Principle
We begin with a discrete-time signal \(x[n]\) whose Fourier transform exists and is given by \(X(\Omega ) = {\mathscr{F}}\left\{ {x[n]} \right\}.\) We note that \(X(\Omega - \pi ) = X(\Omega + \pi )\); the spectrum is periodic.
We will make use of Parseval’s relation in both discrete and continuous time, Equation 13.4.
$$\begin{aligned}
\sum\limits_{n = -\infty}^{+\infty} \left| x[n] \right|^2 &= \frac{1}{2\pi}\int\limits_{-\pi}^{+\pi} \left| X(\Omega) \right|^2 d\Omega \\
\int\limits_{-\infty}^{+\infty} \left| x(t) \right|^2 dt &= \frac{1}{2\pi}\int\limits_{-\infty}^{+\infty} \left| X(\omega) \right|^2 d\omega
\end{aligned} \tag{13.4}$$
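As a quick numerical check of the discrete-time half of Equation 13.4, the following sketch samples the DTFT of a short test signal on a dense grid over one period and compares the two energies; the test signal and the grid density are arbitrary choices.

```python
# Numerical check of the discrete-time Parseval relation (top line of Eq. 13.4).
# The finite-length signal and the frequency-grid density are assumptions.
import numpy as np

x = np.array([0.2, -0.5, 1.0, 0.7, -0.1])            # arbitrary x[n], n = 0..4
n = np.arange(len(x))

Omega = np.linspace(-np.pi, np.pi, 4097)              # one period of the spectrum
X = np.exp(-1j * np.outer(Omega, n)) @ x              # DTFT samples X(Omega)

print(np.sum(np.abs(x)**2))                           # time-domain energy
print(np.trapz(np.abs(X)**2, Omega) / (2 * np.pi))    # frequency-domain energy
```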
Without loss of generality we assume that:
$$\sum\limits_{n = -\infty}^{+\infty} \left| x[n] \right|^2 = 1 \tag{13.5}$$
$$\sum\limits_{n = -\infty}^{+\infty} n \left| x[n] \right|^2 = 0 \tag{13.6}$$
These two assumptions mean that the terms \(\left\{ \left| x[n] \right|^2 \right\}\) are real, non-negative, and sum to one, so they can be thought of as probabilities (Equation 13.5), and that the mean value of the index \(n\) under these probabilities is zero (Equation 13.6). The general case, where this mean is not zero, can be treated with a straightforward change of variable. Our proof is based upon the one given in Folland2. We start with the definitions of the squared signal duration, Equation 13.7, and the squared signal bandwidth, Equation 13.8.
$$\begin{aligned}
\left( \Delta n_{rms} \right)^2 &= \left( \sum\limits_{n = -\infty}^{+\infty} n^2 \left| x[n] \right|^2 \right) \Big/ \sum\limits_{n = -\infty}^{+\infty} \left| x[n] \right|^2 \\
&= \sum\limits_{n = -\infty}^{+\infty} n^2 \left| x[n] \right|^2
\end{aligned} \tag{13.7}$$
$$\begin{aligned}
\left( \Delta \Omega_{rms} \right)^2 &= \left( \int\limits_{-\pi}^{+\pi} \Omega^2 \left| X(\Omega) \right|^2 d\Omega \right) \Big/ \left( \int\limits_{-\pi}^{+\pi} \left| X(\Omega) \right|^2 d\Omega \right) \\
&= \frac{1}{2\pi}\int\limits_{-\pi}^{+\pi} \Omega^2 \left| X(\Omega) \right|^2 d\Omega
\end{aligned} \tag{13.8}$$
We intend to show that:
$$\left( \Delta n_{rms} \right)^2 \left( \Delta \Omega_{rms} \right)^2 \ge \frac{1}{4} \quad \Rightarrow \quad \left( \Delta n_{rms} \right)\left( \Delta \Omega_{rms} \right) \ge \frac{1}{2} \tag{13.9}$$
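Before turning to the proof, the bound in Equation 13.9 can be illustrated numerically. The sketch below uses a sampled Gaussian pulse (an assumed test signal, chosen because a Gaussian comes close to attaining the lower bound) and evaluates Equation 13.7 and Equation 13.8 directly.

```python
# Numerical illustration of Equation 13.9 using a sampled Gaussian pulse
# (the pulse and its width are assumptions; a Gaussian nearly attains the bound).
import numpy as np

n = np.arange(-50, 51)
x = np.exp(-n**2 / (2 * 4.0**2))                 # Gaussian, sigma = 4 samples
x = x / np.sqrt(np.sum(np.abs(x)**2))            # enforce Equation 13.5
                                                 # (Equation 13.6 holds by symmetry)
dn2 = np.sum(n**2 * np.abs(x)**2)                # squared duration, Equation 13.7

Omega = np.linspace(-np.pi, np.pi, 8193)
X = np.exp(-1j * np.outer(Omega, n)) @ x         # DTFT samples X(Omega)
dW2 = np.trapz(Omega**2 * np.abs(X)**2, Omega) / (2 * np.pi)   # Equation 13.8

print("duration-bandwidth product:", np.sqrt(dn2 * dW2))       # >= 0.5
```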
We begin by forming from the discrete-time signal \(x[n]\) a continuous-time signal \(y(t)\) that is continuous and piecewise smooth. By construction, \(y(t = k) = x[k]\) with a sampling interval of \(T = 1.\)
$$y(t) = \sum\limits_{n = -\infty}^{+\infty} x[n] \, \mathrm{sinc}(t - n) = \sum\limits_{n = -\infty}^{+\infty} x[n] \, \frac{\sin\left( \pi(t - n) \right)}{\pi(t - n)} \tag{13.10}$$
The sinc function is chosen so that its Fourier transform is the ideal lowpass filter given in Equation 13.11.
$$\mathscr{F}\left\{ \frac{\sin(\pi t)}{\pi t} \right\} = \begin{cases}
1 & \left| \omega \right| < \pi \\
0 & \left| \omega \right| > \pi
\end{cases} \tag{13.11}$$
Using \(T = 1\) implies that:
$$Y(\omega) = \mathscr{F}\left\{ y(t) \right\} = \begin{cases}
X(\Omega = \omega) & \left| \omega \right| < \pi \\
0 & \left| \omega \right| > \pi
\end{cases} \tag{13.12}$$
In words, the ideal lowpass filter guarantees that \(Y(\omega )\) is the baseband spectrum of the periodic spectrum \(X(\Omega ).\)
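The construction of \(y(t)\) is ordinary bandlimited (sinc) interpolation with \(T = 1,\) which a few lines of NumPy make concrete. The example samples \(x[n]\) below are an assumption; note that np.sinc already uses the normalized convention \(\sin(\pi u)/(\pi u)\) of Equation 13.10.

```python
# Sketch of the construction in Equation 13.10: bandlimited (sinc) interpolation
# of x[n] with T = 1. The example samples are an assumption. Note that
# np.sinc(u) already means sin(pi*u)/(pi*u).
import numpy as np

n = np.arange(-20, 21)
x = np.exp(-n**2 / 18.0) * np.cos(0.4 * np.pi * n)     # assumed example signal

def y(t):
    """Continuous-time interpolation y(t) of the samples x[n], Equation 13.10."""
    return np.sum(x * np.sinc(t - n))

print(y(5.0), x[n == 5][0])     # equal: y(t) passes through every sample
print(y(5.5))                   # a value interpolated between the samples
```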
For the proof of the Uncertainty Principle it is essential that \(y(t)\) is square-integrable, that is, that it is in \({L^2}.\) This follows from:
$$\begin{aligned}
\int\limits_{-\infty}^{+\infty} \left| y(t) \right|^2 dt
&= \int\limits_{-\infty}^{+\infty} \left( \sum\limits_{n = -\infty}^{+\infty} x[n] \, \mathrm{sinc}(t - n) \right) \left( \sum\limits_{k = -\infty}^{+\infty} x[k] \, \mathrm{sinc}(t - k) \right)^{*} dt \\
&= \sum\limits_{n = -\infty}^{+\infty} \sum\limits_{k = -\infty}^{+\infty} x[n] \, x^{*}[k] \int\limits_{-\infty}^{+\infty} \mathrm{sinc}(t - n) \, \mathrm{sinc}(t - k) \, dt \\
&= \sum\limits_{n = -\infty}^{+\infty} \left| x[n] \right|^2 = 1
\end{aligned} \tag{13.13}$$
We use the orthonormality of the sinc functions and the assumption in Equation 13.5 to reach the last line in Equation 13.13. We conclude that \(y(t)\) is, indeed, in \({L^2}.\)
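The conclusion of Equation 13.13 can also be verified numerically. The sketch below is an approximation: the assumed test signal decays rapidly, so truncating the time axis and using a finite grid introduces only a small error.

```python
# Numerical check of Equation 13.13: the energy of the interpolated y(t) equals
# the energy of the samples x[n]. The test signal decays quickly, so truncating
# the time axis to [-60, 60] is an acceptable approximation.
import numpy as np

n = np.arange(-20, 21)
x = np.exp(-n**2 / 18.0)
x = x / np.sqrt(np.sum(np.abs(x)**2))                  # Equation 13.5

t = np.linspace(-60.0, 60.0, 24001)                    # dense time grid
y = np.sinc(t[:, None] - n[None, :]) @ x               # Equation 13.10

print(np.trapz(np.abs(y)**2, t))                       # ~1, continuous-time energy
print(np.sum(np.abs(x)**2))                            # exactly 1, Equation 13.5
```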
The two signals \(ty(t)\) and \(y'(t),\) which we will now use, are either in \({L^2}\) or they are not. The situation when they are not in \({L^2}\) will be discussed later. In fact, whether or not \(ty(t)\) or \(y'(t)\) is in \({L^2},\) the Uncertainty Principle is satisfied.
If they are in \({L^2},\) then the Folland technique can be directly applied to \(\int {ty(t){y^*}(t)dt}\) leading to:
$$\begin{aligned}
\int\limits_{-\infty}^{+\infty} \left| y(t) \right|^2 dt &= -2\operatorname{Re}\left\{ \int\limits_{-\infty}^{+\infty} \left( t \, y(t) \right)^{*} y'(t) \, dt \right\} + \left. t \left| y(t) \right|^2 \right|_{-\infty}^{+\infty} \\
&= -2\operatorname{Re}\left\{ \int\limits_{-\infty}^{+\infty} \left( t \, y(t) \right)^{*} y'(t) \, dt \right\}
\end{aligned} \tag{13.14}$$
Because of the way \(y(t)\) is formed, Equation 13.10, the term \(t\left| y(t) \right|^2\) in the first line of Equation 13.14 vanishes at \(t = \pm \infty.\) Applying the Cauchy-Schwarz inequality then leads to3:
$$\left( \int\limits_{-\infty}^{+\infty} \left| y(t) \right|^2 dt \right)^2 \le 4\left( \int\limits_{-\infty}^{+\infty} t^2 \left| y(t) \right|^2 dt \right)\left( \int\limits_{-\infty}^{+\infty} \left| y'(t) \right|^2 dt \right) \tag{13.15}$$
which means:
$$\left( \int\limits_{-\infty}^{+\infty} t^2 \left| y(t) \right|^2 dt \right)\left( \int\limits_{-\infty}^{+\infty} \left| y'(t) \right|^2 dt \right) \ge \frac{1}{4} \tag{13.16}$$
We now use the Fourier property \(y'(t) = {\mathscr{F}^{ - 1}}\left\{ {j\omega Y\left( \omega \right)} \right\},\) Parseval’s relation Equation 13.4, and the relation between the continuous-time spectrum \(Y(\omega )\) and the discrete-time spectrum \(X(\Omega )\) given in Equation 13.12. From Equation 13.12 and Equation 13.8 we have:
$$\begin{aligned}
\int\limits_{-\infty}^{+\infty} \left| y'(t) \right|^2 dt &= \frac{1}{2\pi}\int\limits_{-\infty}^{+\infty} \omega^2 \left| Y(\omega) \right|^2 d\omega \\
&= \frac{1}{2\pi}\int\limits_{-\pi}^{+\pi} \Omega^2 \left| X(\Omega) \right|^2 d\Omega \\
&= \left( \Delta \Omega_{rms} \right)^2
\end{aligned} \tag{13.17}$$
Substituting in Equation 13.16 gives:
$$\left( \int\limits_{-\infty}^{+\infty} t^2 \left| y(t) \right|^2 dt \right) \left( \Delta \Omega_{rms} \right)^2 \ge \frac{1}{4} \tag{13.18}$$
It only remains to show that \(\int t^2 \left| y(t) \right|^2 dt = \sum n^2 \left| x[n] \right|^2 = \left( \Delta n_{rms} \right)^2.\)
We know that \(y(t)\) is bandlimited because of the way it is formed, Equation 13.10. This implies that \(t\,y(t)\) is also bandlimited, with spectrum \(\mathscr{F}\left\{ t\,y(t) \right\} = j\,dY(\omega)/d\omega.\) Again using Parseval’s relation, Equation 13.4, and the relation between the continuous-time spectrum \(Y(\omega )\) and the discrete-time spectrum \(X(\Omega )\) given in Equation 13.12, the “energy” in \(t\,y(t)\) is given by:
$$\begin{aligned}
\int\limits_{-\infty}^{+\infty} t^2 \left| y(t) \right|^2 dt &= \frac{1}{2\pi}\int\limits_{-\infty}^{+\infty} \left| \frac{dY(\omega)}{d\omega} \right|^2 d\omega \\
&= \frac{1}{2\pi}\int\limits_{-\pi}^{+\pi} \left| \frac{dY(\omega)}{d\omega} \right|^2 d\omega \\
&= \frac{1}{2\pi}\int\limits_{-\pi}^{+\pi} \left| \frac{dX(\Omega)}{d\Omega} \right|^2 d\Omega \\
&= \sum\limits_{n = -\infty}^{+\infty} n^2 \left| x[n] \right|^2 \\
&= \left( \Delta n_{rms} \right)^2
\end{aligned} \tag{13.19}$$
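Using the same numerical setup as in the energy check above, the first and last members of Equation 13.19 can be compared directly; again the test signal and the truncated, finite integration grid are assumptions.

```python
# Numerical check of the first and last members of Equation 13.19: the second
# moment of y(t) equals the discrete second moment of x[n] when T = 1.
import numpy as np

n = np.arange(-20, 21)
x = np.exp(-n**2 / 18.0)
x = x / np.sqrt(np.sum(np.abs(x)**2))                  # Equation 13.5

t = np.linspace(-200.0, 200.0, 80001)                  # wide grid: t^2 weighting
y = np.sinc(t[:, None] - n[None, :]) @ x               # Equation 13.10

print(np.trapz(t**2 * np.abs(y)**2, t))                # continuous second moment
print(np.sum(n**2 * np.abs(x)**2))                     # (Delta n_rms)^2, Eq. 13.7
```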
Throughout this derivation we have used \(T = 1,\) which is the proper value for analyzing the Uncertainty Principle. Should we wish to use this result in other applications where the sampling interval is shorter, that is, where the sampling frequency is higher, the appropriate formulation is:
$$\frac{1}{T}\int\limits_{-\infty}^{+\infty} t^2 \left| y(t) \right|^2 dt = \sum\limits_{n = -\infty}^{+\infty} n^2 \left| x[n] \right|^2 \tag{13.20}$$
Substituting the result in Equation 13.19 into Equation 13.18 leads to the desired result:
$$\left( \Delta n_{rms} \right)^2 \left( \Delta \Omega_{rms} \right)^2 \ge \frac{1}{4} \quad \Rightarrow \quad \left( \Delta n_{rms} \right)\left( \Delta \Omega_{rms} \right) \ge \frac{1}{2} \tag{13.21}$$
We return to the discussion of the two signals \(ty(t)\) and \(y'(t)\) when either of the signals is not in \({L^2}.\) If \(ty(t)\) is not in \({L^2}\) then, from Equation 13.19, we have that \({\left( {\Delta {n_{rms}}} \right)^2}\) is unbounded and Equation 13.21 is automatically satisfied. Similarly if \(y'(t)\) is not in \({L^2}\) then, from Equation 13.17, \({\left( {\Delta {\Omega _{rms}}} \right)^2}\) is unbounded and Equation 13.21 is again satisfied.
Epilogue
Our goal in this iBook has been twofold. First, we have attempted to present the basic concepts of stochastic (random) signal processing in the setting of discrete-time signal processing, with a number of examples that indicate the power of this approach. Second, we have tried to make use of modern technology to change a textbook from a static entity into a dynamic one. The ability to hear signals before and after processing, the ability to see the dynamics of signal processing, and the possibility to link to the outside world can, we believe, improve the learning process.
In the end, the best description of the goal of a textbook was given by Professor Y.W. Lee of the Massachusetts Institute of Technology when he wrote4: “In writing this book I have been guided by the idea that a teacher should not attempt to cover the subject of study but should attempt to uncover it for the student.”
Ted Young
Ronald Ligteringen
Delft, The Netherlands
July, 2020
1. Papoulis, A. (1977). Signal Analysis. New York: McGraw-Hill.
2. Folland, G. B. (1992). Fourier Analysis and its Applications. Pacific Grove, California: Wadsworth & Brooks/Cole.
3. The essential part of Folland’s approach starts with integration by parts: \(\int u(t)v'(t)\,dt = u(t)v(t) - \int v(t)u'(t)\,dt.\) We then set \(u(t) = t\,y^{*}(t)\) and \(v'(t) = y'(t).\) Careful application of the integration rule, together with the observations that \(q(t) + q^{*}(t) = 2\operatorname{Re}\left\{ q(t) \right\}\) and that for any complex \(q,\) \(\operatorname{Re}\left\{ q \right\} \leqslant \left| q \right|,\) yields the desired result.
4. Lee, Y. W. (1960). Statistical Theory of Communication. New York: John Wiley & Sons.