RANDOM SEQUENCES IN MATRIX NOTATION

Let us write the jointly distributed random variables in the form of a column matrix (a random vector)

X = [x_1, x_2, ..., x_n]^T ,   X^T = [x_1, x_2, ..., x_n] ,   E[X] = [E{x_1}, E{x_2}, ..., E{x_n}]^T .   (4.1)

Its transpose is a row matrix. The expectation is a column matrix whose elements are the expectations of the random variables x_k, 1 ≤ k ≤ n.

The square (n×n) matrix with elements cov{x_i, x_j} is called the covariance matrix of the random vector X and is denoted by C_X:

C_X = E[ (X - E[X]) (X - E[X])^T ] ,   (4.2)

its elements are

C_X = [ var{x_1}      cov{x_1,x_2}  ...  cov{x_1,x_n}
        cov{x_2,x_1}  var{x_2}      ...  cov{x_2,x_n}
        ...
        cov{x_n,x_1}  cov{x_n,x_2}  ...  var{x_n} ] .   (4.3)

The covariance matrix is symmetric. For a symmetric square matrix the eigenvalues are real and the eigenvectors are orthogonal. A matrix A is said to be positive definite if X^T A X > 0 for all vectors X ≠ 0. It is easy to see that the matrix C_X is positive semidefinite, and positive definite unless some linear combination of the x_k has zero variance. If the values of the elements of the covariance matrix are estimated from observations, sampling and rounding errors may violate these properties; it is then necessary to transform the matrix to a symmetric form and adjust its terms so that all the eigenvalues are positive.
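The repair of an estimated covariance matrix described above can be sketched as follows. This is a minimal NumPy illustration (the document's own scripts are in Scilab/Octave); the function name `repair_covariance` and the eigenvalue floor `eps` are choices made here for illustration, not taken from the text.

```python
import numpy as np

def repair_covariance(c_hat, eps=1e-10):
    """Symmetrize an estimated covariance matrix and clip its
    eigenvalues from below so the result is positive definite."""
    c_sym = 0.5 * (c_hat + c_hat.T)       # enforce symmetry
    w, v = np.linalg.eigh(c_sym)          # real spectrum of a symmetric matrix
    w = np.maximum(w, eps)                # push all eigenvalues above zero
    return v @ np.diag(w) @ v.T

# A slightly asymmetric estimate whose symmetrized form has a
# negative eigenvalue:
c_hat = np.array([[1.0, 1.2],
                  [1.1, 1.0]])
c = repair_covariance(c_hat)
```

After the repair, `c` is symmetric and all its eigenvalues are positive, so it is a valid covariance matrix.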

In matrix notation the density function of n jointly normally distributed random variables X^T = [x_1, x_2, ..., x_n] is

p(x_1, ..., x_n) = (2π)^(-n/2) |C_X|^(-1/2) exp( -(1/2) (X - E{X})^T C_X^(-1) (X - E{X}) ) ,   (4.4)

where |C_X| denotes the determinant and C_X^(-1) the inverse of the covariance matrix.
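Formula (4.4) can be evaluated directly from the determinant and inverse of the covariance matrix. A minimal NumPy sketch (the helper name `normal_density` is an assumption made here, not part of the text); for C_X = I and zero mean the density must factorize into n univariate standard normal densities, which serves as a check:

```python
import numpy as np

def normal_density(x, mean, cov):
    """Density (4.4) of an n-variate normal, evaluated directly
    from |C_X| and C_X^(-1)."""
    n = len(mean)
    d = x - mean
    norm = (2 * np.pi) ** (-n / 2) * np.linalg.det(cov) ** (-0.5)
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

# With C_X = I and E[X] = 0 the joint density equals the product of
# the univariate standard normal densities:
x = np.array([0.3, -1.0])
p = normal_density(x, np.zeros(2), np.eye(2))
univariate = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
```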

Let us now look at some simple examples of Gaussian random sequences.

Example 4.1. Let us consider the simple case where the expectation is a zero column matrix and the covariance matrix is proportional to the unit matrix I, C_X = σ²I; thus it is a diagonal matrix. When this is substituted into the expression for jointly normally distributed random variables, it follows that the elements of the sequence are mutually independent. Hence X = σU, or in elements x_k = σu_k, 1 ≤ k ≤ n, where U is a Gaussian white noise sequence whose elements all have the same variance σ².
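Example 4.1 can be checked by simulation: generating many realizations of X = σU and estimating the covariance matrix should recover σ²I. A minimal NumPy sketch (the sample sizes and seed are arbitrary choices made here):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, trials = 2.0, 5, 200_000

# X = sigma * U: each realization is a vector of n independent
# standard normal elements scaled by sigma.
samples = sigma * rng.standard_normal((trials, n))
c_est = np.cov(samples, rowvar=False)

# The sample covariance matrix approaches sigma^2 * I.
assert np.allclose(c_est, sigma**2 * np.eye(n), atol=0.1)
```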

 

Example 4.2. Let us consider the random sequence defined by the following difference equation and initial value

X(0) = 0 ,   X(s) - X(s-1) = σ u(s) ,   1 ≤ s ≤ n .

This means the sequence has independent increments with equal variances σ².

The covariance matrix has the following form

C_X = σ² [ 1  1  1  1  ...  1
           1  2  2  2  ...  2
           1  2  3  3  ...  3
           1  2  3  4  ...  4
           ...
           1  2  3  4  ...  n ] .

It is easy to verify that the elements of the covariance matrix are given by the following expression


c(i,j) = σ² min(i,j) .
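The covariance structure c(i,j) = σ² min(i,j) of Example 4.2 can be verified numerically: partial sums of white noise realize the difference equation, and their sample covariance should match min(i,j). A minimal NumPy sketch (sample sizes and seed chosen arbitrarily here):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n, trials = 1.0, 6, 200_000

# Cumulative sums of white noise realize X(s) - X(s-1) = sigma*u(s)
# with X(0) = 0.
u = rng.standard_normal((trials, n))
x = np.cumsum(sigma * u, axis=1)

c_est = np.cov(x, rowvar=False)
i, j = np.indices((n, n)) + 1          # 1-based indices i, j
c_theory = sigma**2 * np.minimum(i, j)
assert np.allclose(c_est, c_theory, atol=0.15)
```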

 

Let us generalize the results of Example 4.2 to the case of intervals not of unit length but of length Δt. This amounts to replacing the coefficient σ² by σ²Δt and leads to the following form of the difference equation:

X(s) - X(s-1) = σ √Δt u(s) ,   1 ≤ s ≤ n ,   (4.5)

and the expression for the elements of the covariance matrix becomes


c(i,j) = σ² min(iΔt, jΔt) = σ² min(t_i, t_j) .   (4.6)

This random difference equation may be used to study the case Δt → 0. The result is that the sequence tends to a continuous function with no derivative at any point.
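The non-differentiability in the limit can be made plausible numerically: since each increment of (4.5) has magnitude of order √Δt, the difference quotients |X(s) - X(s-1)|/Δt grow like 1/√Δt, so halving the time step as dt → 0 multiplies the typical slope by about √2. A minimal NumPy sketch (the function name and the particular step sizes are choices made here):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, T = 1.0, 1.0

def mean_abs_slope(dt):
    """Simulate (4.5) on [0, T] and return the mean magnitude of the
    difference quotients |X(s) - X(s-1)| / dt."""
    n = int(T / dt)
    increments = sigma * np.sqrt(dt) * rng.standard_normal(n)
    return np.mean(np.abs(increments)) / dt

# Reducing dt by a factor of 4 multiplies the typical slope by about 2,
# so the difference quotients diverge as dt -> 0: the limiting function
# has no derivative.
ratio = mean_abs_slope(1e-4) / mean_abs_slope(4e-4)
assert 1.7 < ratio < 2.3
```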

 

Example 4.3. Let us consider the random sequence defined by the following difference equation and initial value

A_0(0) = σ_0 u(0) ,   A_0(s) = γ A_0(s-1) + β σ_0 u(s) ,   1 ≤ s ≤ n-1 ,

where u(s) is an element of a white noise sequence, and the condition that the elements A_0(0) and A_0(1) have equal variances has to be satisfied. This condition yields the relation

σ_0² = γ² σ_0² + β² σ_0²   ⇒   β = √(1 - γ²) .

It is easy to verify that the covariance matrix has the following form


C_X = σ_0² [ 1        γ        γ²       γ³       ...  γ^(n-1)
             γ        1        γ        γ²       ...  γ^(n-2)
             γ²       γ        1        γ        ...  γ^(n-3)
             γ³       γ²       γ        1        ...  γ^(n-4)
             ...
             γ^(n-1)  γ^(n-2)  γ^(n-3)  γ^(n-4)  ...  1 ] .

It is easy to verify that the general expressions for the elements of the covariance matrix are

c_ij = σ_0² γ^|i-j| .

The covariance matrix has equal values along each diagonal; the values depend only upon the distance |i-j| between the points.
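The recursion of Example 4.3 and its covariance c_ij = σ_0² γ^|i-j| can be checked by simulating many independent realizations and estimating the covariance matrix. A minimal NumPy sketch (sample sizes, γ, and seed are arbitrary choices made here):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma0, gamma, n, trials = 1.0, 0.8, 6, 100_000
beta = np.sqrt(1 - gamma**2)

# Generate (trials) independent realizations of the recursion of
# Example 4.3, all driven by white Gaussian noise u(s).
a = np.empty((trials, n))
a[:, 0] = sigma0 * rng.standard_normal(trials)
for s in range(1, n):
    a[:, s] = gamma * a[:, s - 1] + beta * sigma0 * rng.standard_normal(trials)

c_est = np.cov(a, rowvar=False)
i, j = np.indices((n, n))
c_theory = sigma0**2 * gamma ** np.abs(i - j)
assert np.allclose(c_est, c_theory, atol=0.05)
```

The estimated matrix exhibits the Toeplitz structure of the displayed C_X: equal values along each diagonal, decaying geometrically with |i-j|.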

 

Let us generalize the results of Example 4.3 by introducing the following change of notation in the parameters in the expressions for the elements of the covariance matrix: γ = e^(-ηΔt). It follows that

c_ij = σ_0² [e^(-ηΔt)]^|i-j| = σ_0² e^(-η|t_i - t_j|) .   (4.7)

In the new notations the random difference equation becomes

A_0(0) = σ_0 u(0) ,   A_0(s) = e^(-ηΔt) A_0(s-1) + √(1 - e^(-2ηΔt)) σ_0 u(s) ,   1 ≤ s ≤ n-1 .   (4.8)

This form is suitable for studying the influence of the value of Δt on the behaviour of the solution. The final result is: when Δt → 0 the difference equation tends to an Itô random differential equation whose solution is a continuous function with no derivative at any point.
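A key property of the form (4.8) is that the stationary variance σ_0² is preserved exactly for any step size Δt, which is what makes it suitable for studying the limit Δt → 0. A minimal NumPy sketch checking this (the function name, parameter values, and step sizes are choices made here for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma0, eta, trials = 1.5, 2.0, 100_000

def final_variance(dt, steps):
    """Run the recursion (4.8) on (trials) independent realizations
    and return the sample variance of the final element."""
    g = np.exp(-eta * dt)
    b = np.sqrt(1 - g**2)
    a = sigma0 * rng.standard_normal(trials)
    for _ in range(steps - 1):
        a = g * a + b * sigma0 * rng.standard_normal(trials)
    return a.var()

# The stationary variance sigma0^2 is preserved for any step size dt.
for dt, steps in [(0.5, 20), (0.05, 200)]:
    assert abs(final_variance(dt, steps) - sigma0**2) < 0.05
```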

Numerical examples

Example 1
The script file pwsema04 gives examples of simple random sequences: 1) a white Gaussian sequence, 2) a Brownian motion sequence, 3) a non-differentiable stationary process.

Download
Scilab: pwsema04.sci
Octave/Matlab: pwsema04.m