Linear Mathematical Model With Dominant Frequency

Let us compute two independent realizations A_n(t) and D_n(t) of the stationary process described in the preceding paragraph and form a complex random function

E_n(t) = A_n(t) + i\,D_n(t)    (9.1)

Let us introduce a complex random function by the following relation

Z_n(t) = E_n(t)\, e^{-i \omega_d t}    (9.2)

where \omega_d is a dominant angular frequency. The real and imaginary parts define two real random functions

X_n(t) = A_n(t) \cos(\omega_d t) + D_n(t) \sin(\omega_d t)    (9.3)
Y_n(t) = -A_n(t) \sin(\omega_d t) + D_n(t) \cos(\omega_d t)

The mean values of Z_n(t), X_n(t) and Y_n(t) are zero.
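The rotation (9.3) is straightforward to apply to realizations stored as vectors. The following Octave/Matlab sketch illustrates the construction; the parameter values \eta, \alpha, \omega_d and the Euler-simulated Ornstein-Uhlenbeck stand-ins for A(t) and D(t) are assumptions made only so that the fragment runs on its own (in the actual examples the base realizations come from the scripts of the preceding sections).

    % Sketch: build X(t) and Y(t) of eq. (9.3) from two independent
    % realizations A(t), D(t) of the base process (replaced here by simple
    % Euler-simulated Ornstein-Uhlenbeck stand-ins).
    eta = 0.5; alpha = 1.0; wd = 2*pi;        % assumed parameter values
    dt = 0.01; t = (0:dt:20)'; N = numel(t);  % time grid
    A = zeros(N,1); D = zeros(N,1);
    for r = 1:N-1
      A(r+1) = A(r) - eta*A(r)*dt + alpha*sqrt(dt)*randn;
      D(r+1) = D(r) - eta*D(r)*dt + alpha*sqrt(dt)*randn;
    end
    X =  A.*cos(wd*t) + D.*sin(wd*t);         % eq. (9.3)
    Y = -A.*sin(wd*t) + D.*cos(wd*t);
    plot(t, X, t, Y); xlabel('t'); legend('X(t)', 'Y(t)');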

The covariance functions of the real functions are

C_{XX}(t, t+\tau) = C_{YY}(t, t+\tau) = C_{AA}(\tau) \cos(\omega_d \tau)    (9.4)
C_{XY}(t, t+\tau) = -C_{YX}(t, t+\tau) = C_{AA}(\tau) \sin(\omega_d \tau)

The function C_{XX} is even and the function C_{XY} is odd. Neither depends on the value of t; therefore the processes X_n(t) and Y_n(t) are stationary in the wide sense, and because they are Gaussian they are strictly stationary. The cross-covariance function vanishes for \tau = 0, and thus the random variables X_n(t) and Y_n(t) taken at the same instant are statistically independent.

The spectral density is the Fourier transform of the covariance function. Thus

S_{XX}(\omega) = \tfrac{1}{2} S_{AA}(\omega - \omega_d) + \tfrac{1}{2} S_{AA}(\omega + \omega_d)    (9.5)

For a real function the spectral density is defined on the interval -\infty < \omega < \infty and is an even function. Thus, when only positive values of \omega are considered, it follows that

S_{XX}(\omega) = S_{AA}(\omega - \omega_d)    (9.6)
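The shift in (9.5)-(9.6) is easy to visualize once the base spectral density is available as a function; the Lorentzian density of a non-differentiable Ornstein-Uhlenbeck process is used in the sketch below purely as an assumed example.

    % Sketch: spectral density of X obtained by shifting an assumed base
    % density S_AA (a Lorentzian, i.e. OU-type, chosen only as an example).
    eta = 0.5; alpha = 1.0; wd = 2*pi;             % assumed parameter values
    SAA = @(w) alpha^2 ./ (2*pi*(eta^2 + w.^2));   % assumed base density
    w   = linspace(-4*wd, 4*wd, 1000);
    SXX = 0.5*SAA(w - wd) + 0.5*SAA(w + wd);       % eq. (9.5), whole axis
    plot(w, SAA(w), w, SXX);
    xlabel('\omega'); legend('S_{AA}', 'S_{XX}');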

The complex random function Z_n(t) may be written in an exponential form

Z_n(t) = W_n(t)\, e^{-i[\omega_d t - \Psi_n(t)]}    (9.7)

The absolute value is equal to

W_n(t) = |Z_n(t)| = \sqrt{X_n^2(t) + Y_n^2(t)} = |E_n(t)| = \sqrt{A_n^2(t) + D_n^2(t)}    (9.8)

and the phase shift may be calculated from the relations

\cos[\Psi_n(t)] = \frac{A_n(t)}{W_n(t)}, \qquad \sin[\Psi_n(t)] = \frac{D_n(t)}{W_n(t)}    (9.9)

It should be noted that if at a time t the function Y_n(t) = 0, then the curve X_n(t) is tangent to the curve W_n(t) (or to -W_n(t)). It is easily seen that the functions W_n(t) and -W_n(t) are envelopes of both functions X_n(t) and Y_n(t).
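Equations (9.8)-(9.9) translate directly into a few lines of Octave/Matlab. The sketch below is self-contained; the parameter values and the Euler-simulated stand-ins for A(t) and D(t) are again assumptions, not part of the original examples.

    % Sketch: envelope W(t) and phase Psi(t) of eqs. (9.8)-(9.9).
    eta = 0.5; alpha = 1.0; wd = 2*pi; dt = 0.01;  % assumed parameter values
    t = (0:dt:20)'; N = numel(t);
    A = zeros(N,1); D = zeros(N,1);                % Euler stand-ins for A, D
    for r = 1:N-1
      A(r+1) = A(r) - eta*A(r)*dt + alpha*sqrt(dt)*randn;
      D(r+1) = D(r) - eta*D(r)*dt + alpha*sqrt(dt)*randn;
    end
    X =  A.*cos(wd*t) + D.*sin(wd*t);              % eq. (9.3)
    Y = -A.*sin(wd*t) + D.*cos(wd*t);
    W   = sqrt(X.^2 + Y.^2);                       % eq. (9.8): envelope
    Psi = atan2(D, A);                             % eq. (9.9): phase shift
    plot(t, X, t, W, 'k', t, -W, 'k');             % +/-W envelop X(t)
    xlabel('t');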

The amplitude of the discussed random function with a dominant frequency changes in time, and due to the random phase shift the local angular frequency changes in time as well. For example, if in some interval the phase shift may be approximated by a linear function \psi(t) = \psi_0 - \Delta\omega\, t, the local angular frequency is \omega_d + \Delta\omega. Thus the distances between the down-crossings or up-crossings form a random sequence.

It may be easily verified that the random variable W_n(t) has a Rayleigh distribution and the random variable \Psi_n(t) a uniform distribution on an interval of length 2\pi. These random variables are independent. Thus

f_W(w) = \frac{w}{P} \exp\!\left( -\frac{w^2}{2P} \right), \qquad 0 \le w < \infty    (9.10)
f_\Psi(\psi) = \frac{1}{2\pi}, \qquad -\pi < \psi < \pi

and the joint probability density function is equal to the product of these two densities.
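A quick empirical check of (9.10) is to histogram the envelope samples and compare them with the Rayleigh density whose parameter P is the stationary variance of the base process. In the sketch below the parameter values and the Euler-simulated stand-in for the base process are assumptions; the sampled values are correlated, but the histogram still estimates the marginal density.

    % Sketch: empirical check of eq. (9.10).  W should follow a Rayleigh
    % density with P = alpha^2/(2*eta); the base process is replaced by a
    % crude Euler-simulated stand-in started in the stationary state.
    eta = 0.5; alpha = 1.0; dt = 0.01; N = 200000; % assumed parameter values
    P = alpha^2/(2*eta);                           % stationary variance of A
    A = zeros(N,1); D = zeros(N,1);
    A(1) = sqrt(P)*randn; D(1) = sqrt(P)*randn;
    for r = 1:N-1
      A(r+1) = A(r) - eta*A(r)*dt + alpha*sqrt(dt)*randn;
      D(r+1) = D(r) - eta*D(r)*dt + alpha*sqrt(dt)*randn;
    end
    W  = sqrt(A.^2 + D.^2);                        % envelope samples, eq. (9.8)
    w  = linspace(0, 5*sqrt(P), 200);
    fw = w/P .* exp(-w.^2/(2*P));                  % Rayleigh density, eq. (9.10)
    [nh, c] = hist(W, 50);
    bar(c, nh/(N*(c(2)-c(1)))); hold on;           % histogram scaled to a density
    plot(w, fw, 'r'); hold off; xlabel('w');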

The differential equations for the functions X_n(t) and Y_n(t) are not yet written in a suitable form; as they stand they do not correspond to differential equations with constant coefficients. The first two equations, which correspond to two independent processes without dominant frequencies, may be written in matrix notation as

\begin{bmatrix} dA_0(t) \\ dD_0(t) \end{bmatrix} = -\eta \begin{bmatrix} A_0(t) \\ D_0(t) \end{bmatrix} dt + \alpha \begin{bmatrix} dB_1(t) \\ dB_2(t) \end{bmatrix}    (9.11)

where B_1 and B_2 are independent Brownian motion processes. For example, for the first two equations the differentials are

dX_0(t) = dA_0 \cos(\omega_d t) + dD_0 \sin(\omega_d t) - \omega_d A_0 \sin(\omega_d t)\, dt + \omega_d D_0 \cos(\omega_d t)\, dt
dY_0(t) = -dA_0 \sin(\omega_d t) + dD_0 \cos(\omega_d t) - \omega_d A_0 \cos(\omega_d t)\, dt - \omega_d D_0 \sin(\omega_d t)\, dt

and thus

\begin{bmatrix} dX_0(t) \\ dY_0(t) \end{bmatrix} = \begin{bmatrix} \cos \omega_d t & \sin \omega_d t \\ -\sin \omega_d t & \cos \omega_d t \end{bmatrix} \begin{bmatrix} dA_0(t) \\ dD_0(t) \end{bmatrix} - \begin{bmatrix} 0 & -\omega_d \\ \omega_d & 0 \end{bmatrix} \begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix} dt    (9.12)

The first matrix on the right-hand side is an orthogonal matrix with determinant equal to one (its inverse is equal to its transpose). Such a matrix represents a rotation and will be denoted by R_0(t). Multiplying equation (9.11) by this orthogonal matrix and substituting into (9.12) yields the following final equation

d \begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix} = - \begin{bmatrix} \eta & -\omega_d \\ \omega_d & \eta \end{bmatrix} \begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix} dt + \alpha\, d \begin{bmatrix} B_1(t) \\ B_2(t) \end{bmatrix}    (9.13)

where use was made of the property that a rotation of two independent increments of Brownian motion preserves their properties. Similar relations

\frac{d}{dt} \begin{bmatrix} X_s(t) \\ Y_s(t) \end{bmatrix} = - \begin{bmatrix} \eta & -\omega_d \\ \omega_d & \eta \end{bmatrix} \begin{bmatrix} X_s(t) \\ Y_s(t) \end{bmatrix} + \eta \begin{bmatrix} X_{s-1}(t) \\ Y_{s-1}(t) \end{bmatrix}, \qquad s = 1, 2, \ldots, n    (9.14)

hold for the remaining equations of the set, which correspond to ordinary (noise-free) differential equations.

Let us consider the case of a twice differentiable function in matrix notation:

d \begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix} + \begin{bmatrix} \eta & -\omega_d \\ \omega_d & \eta \end{bmatrix} \begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix} dt = \alpha \begin{bmatrix} dB_1(t) \\ dB_2(t) \end{bmatrix}

\left( \frac{d}{dt} + \begin{bmatrix} \eta & -\omega_d \\ \omega_d & \eta \end{bmatrix} \right) \begin{bmatrix} X_1(t) \\ Y_1(t) \end{bmatrix} = \eta \begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix}

\left( \frac{d}{dt} + \begin{bmatrix} \eta & -\omega_d \\ \omega_d & \eta \end{bmatrix} \right) \begin{bmatrix} X_2(t) \\ Y_2(t) \end{bmatrix} = \eta \begin{bmatrix} X_1(t) \\ Y_1(t) \end{bmatrix}    (9.15)

This set of linear differential equations has constant coefficients and is therefore easy to solve by standard methods.
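As a simple illustration of the structure of this set, the sketch below integrates the twice differentiable case with a plain Euler-Maruyama scheme; the parameter values and step size are assumptions, and for long stationary runs the exact recurrence (9.22) derived below is preferable.

    % Sketch: Euler-Maruyama integration of the constant-coefficient set (9.15)
    % for the twice differentiable case.
    eta = 0.5; alpha = 1.0; wd = 2*pi;               % assumed parameter values
    dt = 0.005; t = (0:dt:20)'; N = numel(t);
    M  = [eta -wd; wd eta];                          % constant coefficient block
    Z0 = zeros(2,N); Z1 = zeros(2,N); Z2 = zeros(2,N);
    for r = 1:N-1
      dB = sqrt(dt)*randn(2,1);                      % increments of B1, B2
      Z0(:,r+1) = Z0(:,r) - M*Z0(:,r)*dt + alpha*dB;
      Z1(:,r+1) = Z1(:,r) - M*Z1(:,r)*dt + eta*Z0(:,r)*dt;
      Z2(:,r+1) = Z2(:,r) - M*Z2(:,r)*dt + eta*Z1(:,r)*dt;
    end
    plot(t, Z2(1,:)); xlabel('t'); ylabel('X_2(t)'); % twice differentiable X_2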

Let us look at the fundamental solution of the homogeneous system (\alpha = 0). The solution of the first matrix differential equation

\frac{d}{dt} \begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix} + \begin{bmatrix} \eta & -\omega_d \\ \omega_d & \eta \end{bmatrix} \begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

by the standard method, with initial conditions at t = 0, is

\begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix} = e^{-\eta t} \begin{bmatrix} \cos \omega_d t & \sin \omega_d t \\ -\sin \omega_d t & \cos \omega_d t \end{bmatrix} \begin{bmatrix} X_0(0) \\ Y_0(0) \end{bmatrix} = e^{-\eta t} R_0(t)\, Z_0(0), \qquad Z_0^T(0) = [\, X_0(0),\ Y_0(0) \,]    (9.16)
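A one-line numerical check of (9.16) is that the matrix exponential of the constant coefficient block reproduces the damped rotation; the parameter values and the chosen instant in the sketch below are arbitrary assumptions.

    % Sketch: check that expm(-M*t) equals exp(-eta*t) times the rotation R_0(t).
    eta = 0.5; wd = 2*pi; t = 0.37;                       % assumed values
    M  = [eta -wd; wd eta];
    R0 = [cos(wd*t) sin(wd*t); -sin(wd*t) cos(wd*t)];     % rotation matrix
    disp(norm(expm(-M*t) - exp(-eta*t)*R0))               % should be ~ 1e-16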

The second matrix differential equation

\frac{d}{dt} \begin{bmatrix} X_1(t) \\ Y_1(t) \end{bmatrix} + \begin{bmatrix} \eta & -\omega_d \\ \omega_d & \eta \end{bmatrix} \begin{bmatrix} X_1(t) \\ Y_1(t) \end{bmatrix} = \eta \begin{bmatrix} X_0(t) \\ Y_0(t) \end{bmatrix}

has a general solution that is the sum of the general solution of the homogeneous equation (of the same form as in the previous case) and a particular solution of the non-homogeneous equation (whose right-hand side is a known function). It follows that

Z_1(t) = e^{-\eta t} \left[ R_0(t)\, Z_1(0) + \eta t\, R_0(t)\, Z_0(0) \right]    (9.17)

The same simple procedure leads to the general solution of the third matrix differential equation

Z_2(t) = e^{-\eta t} \left[ R_0(t)\, Z_2(0) + \frac{\eta t}{1!} R_0(t)\, Z_1(0) + \frac{(\eta t)^2}{2!} R_0(t)\, Z_0(0) \right]    (9.18)

If we denote by \varphi_s(t, t_0) the matrix

\varphi_s(t, t_0) = \frac{[\eta (t - t_0)]^s}{s!}\, e^{-\eta (t - t_0)}\, R_0(t - t_0), \qquad s = 0, 1, \ldots, n

the general solution of the whole set may be written with the help of the block fundamental matrix

\Phi(t, t_0) = \begin{bmatrix} \varphi_0(t,t_0) & 0 & 0 & \cdots & 0 \\ \varphi_1(t,t_0) & \varphi_0(t,t_0) & 0 & \cdots & 0 \\ \varphi_2(t,t_0) & \varphi_1(t,t_0) & \varphi_0(t,t_0) & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \varphi_n(t,t_0) & \varphi_{n-1}(t,t_0) & \varphi_{n-2}(t,t_0) & \cdots & \varphi_0(t,t_0) \end{bmatrix}    (9.19)

where 0 denotes the 2 \times 2 zero matrix.
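For the twice differentiable case (n = 2) the block matrix (9.19) for one time step is easy to assemble; the step size and parameter values in the sketch below are assumptions.

    % Sketch: block fundamental matrix of eq. (9.19) for n = 2 and a step dt.
    eta = 0.5; wd = 2*pi; dt = 0.05;                          % assumed values
    R0  = [cos(wd*dt) sin(wd*dt); -sin(wd*dt) cos(wd*dt)];
    phi = @(s) (eta*dt)^s/factorial(s) * exp(-eta*dt) * R0;   % phi_s(dt,0)
    Phi = [phi(0)   zeros(2) zeros(2);
           phi(1)   phi(0)   zeros(2);
           phi(2)   phi(1)   phi(0)];
    disp(Phi)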

The asymptotic variance matrix P has the following structure in block matrix notation

P = \frac{\alpha^2}{2\eta} \begin{bmatrix} I & \frac{1}{2} I & \frac{1}{2^2} I \\ \frac{1}{2} I & \frac{2}{2^2} I & \frac{3}{2^3} I \\ \frac{1}{2^2} I & \frac{3}{2^3} I & \frac{6}{2^4} I \end{bmatrix}, \qquad I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}    (9.20)

To simulate a stationary process the initial conditions for t = 0 should be computed with the help of a lower triangular matrix p that satisfies the relation p\, p^T = P. For a twice differentiable function, in block matrix notation, the matrix p is

p = \frac{\alpha}{\sqrt{2\eta}} \begin{bmatrix} I & 0 & 0 \\ \frac{1}{2} I & \frac{1}{2} I & 0 \\ \frac{1}{4} I & \frac{1}{2} I & \frac{1}{4} I \end{bmatrix}, \qquad Z_d(0) = p\, U    (9.21)

where Z_d^T(t) = [\, X_0(t),\ Y_0(t),\ X_1(t),\ Y_1(t),\ X_2(t),\ Y_2(t) \,] and U is a column matrix of independent Gaussian random numbers, one pair for each of the three block rows (six numbers in all).
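The following sketch builds P and p for assumed parameter values, checks the factorization p p^T = P numerically, and draws one stationary initial state; the use of kron to place the scalar block structure on the 2 x 2 identity is a convenience of this sketch, not part of the original scripts.

    % Sketch: asymptotic variance matrix P of eq. (9.20), its lower triangular
    % factor p of eq. (9.21) and a stationary initial state Zd(0) = p*U.
    eta = 0.5; alpha = 1.0;                        % assumed parameter values
    I  = eye(2);
    Ps = [1    1/2   1/4 ;                         % scalar structure of P
          1/2  2/4   3/8 ;
          1/4  3/8   6/16];
    P  = alpha^2/(2*eta) * kron(Ps, I);            % block form of eq. (9.20)
    ps = [1    0    0  ;                           % scalar structure of p
          1/2  1/2  0  ;
          1/4  1/2  1/4];
    p  = alpha/sqrt(2*eta) * kron(ps, I);          % block form of eq. (9.21)
    disp(norm(p*p' - P))                           % should be ~ 0
    U   = randn(6,1);                              % independent Gaussian numbers
    Zd0 = p*U;                                     % stationary initial condition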

The stationary random series may be computed from the following recurrence equation

Z_d[(r+1)\Delta t] = \Phi(\Delta t, 0)\, Z_d(r \Delta t) + q(\Delta t)\, U_{r+1}, \qquad r = 0, 1, 2, \ldots    (9.22)

where q(\Delta t) is computed from the relation

q(\Delta t)\, q^T(\Delta t) = \int_0^{\Delta t} \Phi(\Delta t, u)\, g\, g^T \Phi^T(\Delta t, u)\, du = P - \Phi(\Delta t, 0)\, P\, \Phi^T(\Delta t, 0)    (9.23)

It should be noted that when the block matrix notation is used for P and \Phi, block matrix multiplication leads to the following relation for the case of a twice differentiable function

q(\Delta t) = \begin{bmatrix} q_{11} I & 0 & 0 \\ q_{21} I & q_{22} I & 0 \\ q_{31} I & q_{32} I & q_{33} I \end{bmatrix}    (9.24)

where q_{ij} are the elements of the matrix q(\Delta t) for the case of the corresponding random process without a dominant frequency.
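A compact sketch of the whole recurrence (9.22)-(9.23) for the twice differentiable case is given below; q is obtained here by a Cholesky factorization of P - \Phi P \Phi^T rather than from pre-tabulated q_{ij}, and the parameter values, step size and run length are assumptions (the actual implementation is in the pwsem* scripts listed in the examples).

    % Sketch: stationary simulation by the exact recurrence of eq. (9.22),
    % twice differentiable case.  q is the lower Cholesky factor of
    % P - Phi*P*Phi', eq. (9.23).
    eta = 0.5; alpha = 1.0; wd = 2*pi; dt = 0.05; N = 4000;  % assumed values
    I   = eye(2);
    R0  = [cos(wd*dt) sin(wd*dt); -sin(wd*dt) cos(wd*dt)];
    phi = @(s) (eta*dt)^s/factorial(s) * exp(-eta*dt) * R0;
    Phi = [phi(0) zeros(2) zeros(2); phi(1) phi(0) zeros(2); phi(2) phi(1) phi(0)];
    P   = alpha^2/(2*eta) * kron([1 1/2 1/4; 1/2 2/4 3/8; 1/4 3/8 6/16], I);
    Q   = P - Phi*P*Phi';  Q = (Q + Q')/2;         % symmetrize against round-off
    q   = chol(Q, 'lower');                        % eq. (9.23)
    p   = alpha/sqrt(2*eta) * kron([1 0 0; 1/2 1/2 0; 1/4 1/2 1/4], I);
    Zd  = zeros(6, N);
    Zd(:,1) = p*randn(6,1);                        % stationary start, eq. (9.21)
    for r = 1:N-1
      Zd(:,r+1) = Phi*Zd(:,r) + q*randn(6,1);      % eq. (9.22)
    end
    t = (0:N-1)*dt;
    plot(t, Zd(5,:)); xlabel('t'); ylabel('X_2(t)');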

Numerical examples

Example 1
The script file pwsemc04 calculates examples of once and twice differentiable processes with a dominant frequency, together with their envelopes and derivatives.

Download
Scilab: pwsemc04.sci
Octave/Matlab: pwsemc04.m

Example 2
The script file pwsemf04 depicts the correlation functions and the spectral densities of the non-differentiable, once differentiable and twice differentiable processes.

Download
Scilab: pwsemf04.sci
Octave/Matlab: pwsemf04.m

Example 3
The script file pwsemg04 computes examples of twice differentiable realizations with a dominant frequency.

Download
Scilab: pwsemg04.sci
Octave/Matlab: pwsemg04.m