Let us compute two independent realizations $x_1(t)$ and $x_2(t)$ of the stationary process described in the preceding paragraph and form a complex random function
(9.1)  $x(t) = x_1(t) + i\,x_2(t).$
Let us introduce a complex random function $y(t)$ by the following relation
(9.2)  $y(t) = x(t)\,e^{\,i\omega_0 t},$
where $\omega_0$ is a dominant angular frequency. The real and imaginary parts define two real random functions
(9.3)  $y_1(t) = x_1(t)\cos\omega_0 t - x_2(t)\sin\omega_0 t, \qquad y_2(t) = x_1(t)\sin\omega_0 t + x_2(t)\cos\omega_0 t.$
The mean values of $y(t)$, $y_1(t)$ and $y_2(t)$ are zero.
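The construction (9.1)–(9.3) is a purely algebraic identity and is easy to check numerically. The following sketch (in Python with NumPy, for illustration only; the scripts accompanying this section are in Scilab and Octave/Matlab) uses two arbitrary smooth test signals in place of the realizations $x_1(t)$, $x_2(t)$:

```python
import numpy as np

# Two placeholder "realizations" x1(t), x2(t); any functions work here,
# since (9.3) is an algebraic identity, not a statistical property.
t = np.linspace(0.0, 10.0, 1001)
x1 = np.exp(-0.3 * t) * np.sin(1.7 * t)
x2 = np.cos(0.9 * t)

w0 = 5.0                           # dominant angular frequency omega_0
x = x1 + 1j * x2                   # (9.1)
y = x * np.exp(1j * w0 * t)        # (9.2)

# (9.3): the real and imaginary parts of y
y1 = x1 * np.cos(w0 * t) - x2 * np.sin(w0 * t)
y2 = x1 * np.sin(w0 * t) + x2 * np.cos(w0 * t)

assert np.allclose(y.real, y1)
assert np.allclose(y.imag, y2)
print("max error:", float(np.max(np.abs(y.real - y1))))
```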
The covariance functions of the real functions are
(9.4)  $K_{y_1}(\tau) = K_{y_2}(\tau) = K_x(\tau)\cos\omega_0\tau, \qquad K_{y_1 y_2}(\tau) = K_x(\tau)\sin\omega_0\tau,$
where $K_x(\tau)$ denotes the common covariance function of $x_1(t)$ and $x_2(t)$; the cross terms vanish because $x_1$ and $x_2$ are independent. The function $K_x(\tau)\cos\omega_0\tau$ is even and the function $K_x(\tau)\sin\omega_0\tau$ is odd. They do not depend upon the value of $t$, and therefore the processes $y_1(t)$ and $y_2(t)$ are stationary in the wide sense; because they are Gaussian, they are strictly stationary. The cross covariance function for $\tau = 0$ is zero, thus the random variables $y_1(t)$ and $y_2(t)$ are statistically independent.
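The step from (9.3) to (9.4) rests on the addition formulas for sine and cosine: once the cross terms are removed by independence, the trigonometric coefficients collapse to $\cos\omega_0\tau$ and $\sin\omega_0\tau$, independently of $t$. A quick numerical sketch of these two identities (illustrative values of $t$, $\tau$ and $\omega_0$):

```python
import numpy as np

rng = np.random.default_rng(1)
w0 = 3.0
for _ in range(100):
    t, tau = rng.uniform(-5.0, 5.0, size=2)
    a, b = w0 * t, w0 * (t + tau)
    # coefficient of K_x(tau) in E[y1(t) y1(t+tau)]:
    c1 = np.cos(a) * np.cos(b) + np.sin(a) * np.sin(b)
    # coefficient of K_x(tau) in E[y1(t) y2(t+tau)]:
    c2 = np.cos(a) * np.sin(b) - np.sin(a) * np.cos(b)
    assert np.isclose(c1, np.cos(w0 * tau))   # even in tau -> K_x cos term
    assert np.isclose(c2, np.sin(w0 * tau))   # odd in tau  -> K_x sin term
print("trigonometric identities behind (9.4) verified")
```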
The spectral density is the Fourier transform of the covariance function. Thus
(9.5)  $g_{y_1}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} K_x(\tau)\cos\omega_0\tau\; e^{-i\omega\tau}\, d\tau = \frac{1}{2}\left[ g_x(\omega - \omega_0) + g_x(\omega + \omega_0) \right].$
For a real function the spectral density is defined in the interval $(-\infty, \infty)$ and is an even function. Thus when only positive values of $\omega$ are considered it follows that
(9.6)  $g_{y_1}^{+}(\omega) = 2\, g_{y_1}(\omega) = g_x(\omega - \omega_0) + g_x(\omega + \omega_0), \qquad \omega \ge 0.$
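The shift of the spectrum toward the dominant frequency can be illustrated numerically. The sketch below assumes, purely for illustration, the exponential covariance $K_x(\tau) = e^{-\alpha|\tau|}$ with the Lorentzian spectral density $g_x(\omega) = \alpha/[\pi(\alpha^2 + \omega^2)]$ (this particular covariance is an assumption of the example, not taken from the text), and compares the numerical Fourier transform of $K_x(\tau)\cos\omega_0\tau$ with the half-sum of shifted densities:

```python
import numpy as np

alpha, w0 = 1.0, 6.0
tau = np.linspace(-40.0, 40.0, 80001)
dtau = tau[1] - tau[0]
Ky = np.exp(-alpha * np.abs(tau)) * np.cos(w0 * tau)   # covariance of y1, (9.4)

def g_x(w):
    # spectral density of the assumed exponential covariance
    return alpha / (np.pi * (alpha**2 + w**2))

w_grid = np.linspace(-12.0, 12.0, 25)
# numerical Fourier transform; the integrand is even, so exp(-i w tau) -> cos(w tau)
g_num = np.array([np.sum(Ky * np.cos(w * tau)) * dtau for w in w_grid]) / (2 * np.pi)
g_ref = 0.5 * (g_x(w_grid - w0) + g_x(w_grid + w0))    # right side of (9.5)

assert np.max(np.abs(g_num - g_ref)) < 1e-4
print("max deviation:", float(np.max(np.abs(g_num - g_ref))))
```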
The complex random function may be written in an exponential form
(9.7)  $y(t) = A(t)\, e^{\,i\left[ \omega_0 t + \varphi(t) \right]}.$
The absolute value is equal to
(9.8)  $A(t) = \sqrt{y_1^2(t) + y_2^2(t)} = \sqrt{x_1^2(t) + x_2^2(t)},$
and the phase shift $\varphi(t)$ may be calculated from the relations
(9.9)  $\cos\varphi(t) = \frac{x_1(t)}{A(t)}, \qquad \sin\varphi(t) = \frac{x_2(t)}{A(t)}.$
It should be noted that if at a time $t_1$ the function $y_1(t_1) = A(t_1)$, then the curve $y_1(t)$ is tangent to the curve $A(t)$ at $t_1$. It may be easily seen that the functions $A(t)$ and $-A(t)$ are envelopes for both functions $y_1(t)$ and $y_2(t)$.
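Since $y_1^2 + y_2^2 = x_1^2 + x_2^2 = A^2$, neither $y_1$ nor $y_2$ can leave the band between $-A(t)$ and $A(t)$. A short numerical check of the envelope property with placeholder signals (illustrative only):

```python
import numpy as np

t = np.linspace(0.0, 20.0, 4001)
x1 = np.exp(-0.1 * t) * np.sin(0.8 * t) + 0.5   # arbitrary smooth stand-ins
x2 = np.cos(0.6 * t)
w0 = 4.0

y1 = x1 * np.cos(w0 * t) - x2 * np.sin(w0 * t)
y2 = x1 * np.sin(w0 * t) + x2 * np.cos(w0 * t)
A = np.sqrt(x1**2 + x2**2)                      # the envelope, (9.8)

# the amplitude identity behind the envelope property
assert np.allclose(y1**2 + y2**2, A**2)
# A(t) and -A(t) bound both y1(t) and y2(t) pointwise
assert np.all(np.abs(y1) <= A + 1e-12)
assert np.all(np.abs(y2) <= A + 1e-12)
print("envelope bounds hold at all", t.size, "sample points")
```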
The amplitude $A(t)$ of the discussed random function with a dominant frequency $\omega_0$ is changing in time, and due to the random phase shift $\varphi(t)$ the local angular frequency is changing in time too. For example, if in some interval the phase shift may be approximated by a linear function $\varphi(t) \approx a + bt$, the local angular frequency there is $\omega_0 + b$. Thus the distances between the successive down or up crossings of the zero level form a random sequence.
It may be easily verified that the random variable $A(t)$ has a Rayleigh distribution and the random variable $\varphi(t)$ a uniform distribution on an interval of length $2\pi$. These random variables are independent. Thus
(9.10)  $f_A(a) = \frac{a}{\sigma^2} \exp\!\left( -\frac{a^2}{2\sigma^2} \right), \quad a \ge 0, \qquad f_\varphi(\varphi) = \frac{1}{2\pi}, \quad 0 \le \varphi < 2\pi,$
where $\sigma^2 = K_x(0)$ is the variance of $x_1(t)$, and the joint probability density function is equal to the product of these functions.
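The distributions (9.10) are easy to confirm by sampling. The sketch below draws independent Gaussian pairs with an arbitrary standard deviation $\sigma$ and compares the sample statistics of $A$ and $\varphi$ with the Rayleigh and uniform predictions (Python illustration with a fixed seed; the phase is taken in $(-\pi, \pi]$, an interval of length $2\pi$):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, n = 2.0, 400_000
x1 = rng.normal(0.0, sigma, n)
x2 = rng.normal(0.0, sigma, n)

A = np.hypot(x1, x2)              # amplitude, (9.8)
phi = np.arctan2(x2, x1)          # phase shift in (-pi, pi]

# Rayleigh mean: sigma * sqrt(pi/2)
assert abs(A.mean() - sigma * np.sqrt(np.pi / 2)) < 0.02
# uniform phase on (-pi, pi]: mean ~ 0, variance ~ pi^2/3
assert abs(phi.mean()) < 0.02
assert abs(phi.var() - np.pi**2 / 3) < 0.05
# independence shows up as (near-)zero correlation between A and phi
assert abs(np.corrcoef(A, phi)[0, 1]) < 0.01
print("sample mean of A:", float(A.mean()))
```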
The differential equations for the functions $y_1(t)$ and $y_2(t)$ are not yet written in a suitable form; they do not correspond to differential equations with constant coefficients. The first two equations, which correspond to the two independent processes without dominant frequencies, may be written in matrix notation
(9.11)  $d\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = -\alpha \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} dt + \sigma_w \begin{pmatrix} dW_1 \\ dW_2 \end{pmatrix},$
where $W_1(t)$ and $W_2(t)$ are independent Brownian motion processes and $\alpha$, $\sigma_w$ are the coefficients of the scalar equation for the process without a dominant frequency. For example, for the first two equations the differentials of (9.3) are
$dy_1 = \cos\omega_0 t\, dx_1 - \sin\omega_0 t\, dx_2 - \omega_0\, y_2\, dt,$
$dy_2 = \sin\omega_0 t\, dx_1 + \cos\omega_0 t\, dx_2 + \omega_0\, y_1\, dt,$
and thus
(9.12)  $d\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} \cos\omega_0 t & -\sin\omega_0 t \\ \sin\omega_0 t & \cos\omega_0 t \end{pmatrix} \begin{pmatrix} dx_1 \\ dx_2 \end{pmatrix} + \begin{pmatrix} 0 & -\omega_0 \\ \omega_0 & 0 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} dt.$
The first matrix on the right side is an orthogonal matrix (its inverse is equal to its transpose and its determinant is equal to one). Such a matrix represents a rotation and will be denoted by $R(t)$. Multiplication of the initial equation (9.11) by the orthogonal matrix $R(t)$ and substitution into (9.12) yield the following final equation
(9.13)  $d\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} -\alpha & -\omega_0 \\ \omega_0 & -\alpha \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} dt + \sigma_w \begin{pmatrix} dW_1 \\ dW_2 \end{pmatrix},$
where the property was used that an orthogonal transformation of two independent increments of Brownian motion preserves their properties. Similar relations
(9.14)  $\begin{pmatrix} y_1^{(j)} \\ y_2^{(j)} \end{pmatrix} = R(t) \begin{pmatrix} x_1^{(j)} \\ x_2^{(j)} \end{pmatrix}$
hold for the other pairs of state variables, so the remaining differential equations in the set are transformed in the same way and again correspond to standard differential equations with constant coefficients.
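Two facts carry this derivation: the matrix $\dot{R}(t)R(t)^T$ is constant, which is why the transformed drift has constant coefficients, and $R(t)$ is orthogonal, which is why the rotated Brownian increments keep their covariance. Both can be checked directly (illustrative parameter values):

```python
import numpy as np

w0 = 3.0                              # illustrative dominant frequency

def R(t):
    c, s = np.cos(w0 * t), np.sin(w0 * t)
    return np.array([[c, -s], [s, c]])

Omega = np.array([[0.0, -w0], [w0, 0.0]])
h = 1e-6
for t in (0.0, 0.37, 2.5):
    # dR/dt * R^T equals the constant matrix Omega for every t ...
    dR = (R(t + h) - R(t - h)) / (2 * h)
    assert np.allclose(dR @ R(t).T, Omega, atol=1e-6)
    # ... and R(t) is orthogonal, so R(t) dW has the same covariance as dW
    assert np.allclose(R(t) @ R(t).T, np.eye(2), atol=1e-12)
print("rotation checks passed")
```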
Let us consider the case of a twice differentiable function. In matrix notation the set of equations is
(9.15)  $d\begin{pmatrix} \mathbf{y}^{(1)} \\ \mathbf{y}^{(2)} \\ \mathbf{y}^{(3)} \end{pmatrix} = \begin{pmatrix} \Lambda & I & O \\ O & \Lambda & I \\ O & O & \Lambda \end{pmatrix} \begin{pmatrix} \mathbf{y}^{(1)} \\ \mathbf{y}^{(2)} \\ \mathbf{y}^{(3)} \end{pmatrix} dt + \begin{pmatrix} \mathbf{0} \\ \mathbf{0} \\ \sigma_w\, d\mathbf{W} \end{pmatrix}, \qquad \Lambda = \begin{pmatrix} -\alpha & -\omega_0 \\ \omega_0 & -\alpha \end{pmatrix},$
where each $\mathbf{y}^{(j)}$ is a pair of state variables, $I$ is the 2×2 identity matrix and $O$ the 2×2 zero matrix. This set of linear differential equations has constant coefficients and therefore it is easy to solve it by standard methods.
Let us look at the fundamental solution $\Phi(t)$ of the homogeneous equation. The solution of the first matrix differential equation
$\frac{dY_1}{dt} = \Lambda\, Y_1$
by the standard method with the initial condition $Y_1(0) = I$ at $t = 0$ is
(9.16)  $Y_1(t) = e^{-\alpha t} \begin{pmatrix} \cos\omega_0 t & -\sin\omega_0 t \\ \sin\omega_0 t & \cos\omega_0 t \end{pmatrix} = e^{-\alpha t} R(t).$
The second matrix differential equation
$\frac{dY_2}{dt} = \Lambda\, Y_2 + Y_1$
has a general solution that is the sum of the general solution of the homogeneous equation (similar as in the previous case) and a particular solution of the non-homogeneous equation (the right side is a known matrix). It follows that
(9.17)  $Y_2(t) = t\, e^{-\alpha t} R(t).$
The same simple procedure leads to the solution of the third matrix differential equation $\frac{dY_3}{dt} = \Lambda\, Y_3 + Y_2$:
(9.18)  $Y_3(t) = \frac{t^2}{2}\, e^{-\alpha t} R(t).$
If we denote by $D(t)$ the matrix
$D(t) = e^{-\alpha t} R(t),$
the general solution may be written in the form of a block matrix
(9.19)  $\Phi(t) = \begin{pmatrix} D(t) & t\,D(t) & \frac{t^2}{2}\,D(t) \\ O & D(t) & t\,D(t) \\ O & O & D(t) \end{pmatrix},$
where $O$ is a 2×2 matrix with elements equal to zeros.
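The block solution (9.19) can be verified by checking that $\Phi(0) = I$ and that $\Phi(t)$ satisfies $d\Phi/dt = B\,\Phi$, where $B$ is the block matrix of (9.15). A finite-difference sketch with arbitrary values of $\alpha$ and $\omega_0$:

```python
import numpy as np

alpha, w0 = 0.5, 4.0                 # hypothetical parameter values
Lam = np.array([[-alpha, -w0], [w0, -alpha]])
I2, O2 = np.eye(2), np.zeros((2, 2))
B = np.block([[Lam, I2, O2],
              [O2, Lam, I2],
              [O2, O2, Lam]])

def Phi(t):
    # blocks of (9.19): D, t*D and (t^2/2)*D with D(t) = exp(-alpha*t) R(t)
    c, s = np.cos(w0 * t), np.sin(w0 * t)
    D = np.exp(-alpha * t) * np.array([[c, -s], [s, c]])
    return np.block([[D, t * D, 0.5 * t**2 * D],
                     [O2, D, t * D],
                     [O2, O2, D]])

assert np.allclose(Phi(0.0), np.eye(6))      # correct initial condition
h = 1e-5
for t in (0.1, 0.9, 2.0):
    dPhi = (Phi(t + h) - Phi(t - h)) / (2 * h)
    assert np.allclose(dPhi, B @ Phi(t), atol=1e-6)
print("Phi(t) solves dPhi/dt = B Phi with Phi(0) = I")
```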
The asymptotic variance matrix $\Sigma$ has the following structure in block matrix notation
(9.20)  $\Sigma = \begin{pmatrix} \sigma_{11} I & \sigma_{12} I & \sigma_{13} I \\ \sigma_{12} I & \sigma_{22} I & \sigma_{23} I \\ \sigma_{13} I & \sigma_{23} I & \sigma_{33} I \end{pmatrix},$
where $\sigma_{jk}$ are the elements of the asymptotic variance matrix of the corresponding process without a dominant frequency.
To simulate a stationary process the initial conditions $\mathbf{y}(0) = S\,\boldsymbol{\xi}$ should be computed with the help of a lower triangular matrix $S$ that satisfies the relation $S\,S^{T} = \Sigma$. For a twice differentiable function, in block matrix notation the matrix $S$ is
(9.21)  $S = \begin{pmatrix} s_{11} I & O & O \\ s_{21} I & s_{22} I & O \\ s_{31} I & s_{32} I & s_{33} I \end{pmatrix},$
where $s_{jk}$ are the elements of the corresponding lower triangular matrix for the process without a dominant frequency, and $\boldsymbol{\xi}$ is a column matrix with Gaussian independent random numbers, one pair in each of the three block rows.
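The block structure of $S$ in (9.21) is a Kronecker-product identity: the Cholesky factor of $\Sigma_0 \otimes I$ is $\mathrm{chol}(\Sigma_0) \otimes I$, where $\Sigma_0$ denotes the variance matrix of the process without a dominant frequency. A check with an arbitrary positive definite 3×3 matrix standing in for $\Sigma_0$ (illustrative Python sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.normal(size=(3, 3))
Sigma0 = M @ M.T + 3.0 * np.eye(3)    # arbitrary SPD stand-in for the scalar-case variance

Sigma = np.kron(Sigma0, np.eye(2))    # (9.20): blocks sigma_jk * I
S = np.linalg.cholesky(Sigma)
S0 = np.linalg.cholesky(Sigma0)

# (9.21): the lower triangular factor inherits the scalar-case entries block-wise
assert np.allclose(S, np.kron(S0, np.eye(2)))

# initial condition y(0) = S xi with six independent N(0,1) numbers
xi = rng.normal(size=6)
y0 = S @ xi
print("y(0) =", np.round(y0, 3))
```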
The stationary random series may be computed from the following recurrence equation
(9.22)  $\mathbf{y}_{n+1} = \Phi(\Delta t)\,\mathbf{y}_{n} + L\,\boldsymbol{\xi}_{n},$
where $\boldsymbol{\xi}_{n}$ are columns of Gaussian independent random numbers and the lower triangular matrix $L$ is computed from the relation
(9.23)  $L\,L^{T} = \Sigma - \Phi(\Delta t)\,\Sigma\,\Phi(\Delta t)^{T}.$
It should be noted that when the block matrix notation of (9.19) and (9.20) is used in (9.23), the block matrix multiplication leads to the following relation for the case of a twice differentiable function
(9.24)  $L = \begin{pmatrix} l_{11} I & O & O \\ l_{21} I & l_{22} I & O \\ l_{31} I & l_{32} I & l_{33} I \end{pmatrix},$
where $l_{jk}$ are elements of the matrix $L$ for the case of the corresponding random process without a dominant frequency.
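The recurrence (9.22)–(9.23) is exact in the sense that it maps the asymptotic variance matrix to itself: if $\mathrm{cov}(\mathbf{y}_n) = \Sigma$, then $\Phi\Sigma\Phi^T + LL^T = \Sigma$. The sketch below (Python with hypothetical parameter values; the accompanying scripts implement the scheme in Scilab and Octave/Matlab) builds $\Sigma$ for the twice differentiable case by solving the Lyapunov equation $B\Sigma + \Sigma B^T + Q = 0$ numerically and propagates the covariance through one step:

```python
import numpy as np

alpha, w0, sig, dt = 0.8, 5.0, 1.0, 0.2
Lam = np.array([[-alpha, -w0], [w0, -alpha]])
I2, O2 = np.eye(2), np.zeros((2, 2))
B = np.block([[Lam, I2, O2], [O2, Lam, I2], [O2, O2, Lam]])
Q = np.zeros((6, 6))
Q[4, 4] = Q[5, 5] = sig**2            # white noise drives the last block only

# asymptotic variance: solve B Sigma + Sigma B^T = -Q by Kronecker vectorization
K = np.kron(np.eye(6), B) + np.kron(B, np.eye(6))
Sigma = np.linalg.solve(K, -Q.reshape(-1)).reshape(6, 6)
Sigma = 0.5 * (Sigma + Sigma.T)       # remove round-off asymmetry

# Phi(dt) in the closed form (9.19)
c, s = np.cos(w0 * dt), np.sin(w0 * dt)
D = np.exp(-alpha * dt) * np.array([[c, -s], [s, c]])
Phi = np.block([[D, dt * D, 0.5 * dt**2 * D],
                [O2, D, dt * D],
                [O2, O2, D]])

M = Sigma - Phi @ Sigma @ Phi.T       # right side of (9.23)
L = np.linalg.cholesky(0.5 * (M + M.T))

C = Phi @ Sigma @ Phi.T + L @ L.T     # covariance after one step of (9.22)
assert np.allclose(C, Sigma)
print("stationary covariance preserved by the recurrence")
```

With the initial condition $\mathbf{y}(0) = S\boldsymbol{\xi}$ drawn from the asymptotic distribution, every step of the recurrence therefore keeps the simulated series stationary.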
Example 1
The script file pwsemc04 calculates examples of once and twice
differentiable processes with dominant frequency, envelopes
and derivatives.
Download
Scilab: pwsemc04.sci
Octave/Matlab: pwsemc04.m
Example 2
The script file pwsemf04 depicts the correlation functions
and the spectral densities of the non, once and twice differentiable
processes.
Download
Scilab: pwsemf04.sci
Octave/Matlab: pwsemf04.m
Example 3
The script file pwsemg04 computes examples of twice
differentiable realizations with a dominant frequency.
Download
Scilab: pwsemg04.sci
Octave/Matlab: pwsemg04.m