Let us write the jointly distributed random variables $X_1, X_2, \ldots, X_n$ in the form of a column matrix (random vector)

$$\mathbf{X} = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix} \qquad (4.1)$$

Its transpose $\mathbf{X}^T$ is a row matrix. The expectation $\boldsymbol{\mu} = E[\mathbf{X}]$ is a column matrix with elements equal to the expectations of the random variables, $\mu_i = E[X_i]$.
The square matrix with elements $c_{ij}$ is called the covariance matrix of the random vector $\mathbf{X}$ and is denoted by

$$\mathbf{C} = E\!\left[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})^T\right]; \qquad (4.2)$$

its elements are

$$c_{ij} = E\!\left[(X_i - \mu_i)(X_j - \mu_j)\right]. \qquad (4.3)$$
The covariance matrix is symmetric. For a symmetric square matrix the eigenvalues are real and the eigenvectors are orthogonal. A matrix $\mathbf{C}$ is said to be positive definite if $\mathbf{x}^T \mathbf{C}\, \mathbf{x} > 0$ for all nonzero vectors $\mathbf{x}$. It is easy to see that the covariance matrix has to be positive definite, since $\mathbf{x}^T \mathbf{C}\, \mathbf{x} = E\!\left[\left(\mathbf{x}^T(\mathbf{X} - \boldsymbol{\mu})\right)^2\right] \ge 0$. If the values of the elements of the covariance matrix are estimated from observations, it is necessary to transform the matrix to a symmetric form and adjust the terms so that all the eigenvalues are positive.
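As a sketch of this repair step (the matrix below is a hypothetical estimate chosen for illustration, not part of the pwsema04 script), an estimated covariance matrix can be symmetrized and its eigenvalues floored at a small positive value:

% Sketch: repair an estimated covariance matrix (hypothetical example).
Chat = [1.00 0.95 0.30;
        0.94 1.00 0.92;
        0.31 0.90 1.00];   % asymmetric estimate with a negative eigenvalue
Csym = (Chat + Chat')/2;   % enforce symmetry
[V, D] = eig(Csym);        % real eigenvalues, orthogonal eigenvectors
d = max(diag(D), 1e-8);    % force all eigenvalues positive
Crep = V*diag(d)*V'        % repaired symmetric, positive definite matrix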
In matrix notation the density function of $n$ jointly normally distributed random variables is

$$f(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\,|\mathbf{C}|^{1/2}} \exp\!\left(-\tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^T \mathbf{C}^{-1} (\mathbf{x} - \boldsymbol{\mu})\right), \qquad (4.4)$$

where $|\mathbf{C}|$ denotes the determinant and $\mathbf{C}^{-1}$ the inverse of the covariance matrix.
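As an illustration, formula (4.4) can be evaluated directly; the vector mu, matrix C and point x below are assumed values, not taken from the text:

% Sketch: evaluate the joint normal density (4.4) at a point x.
mu = [0; 0];
C  = [2 1; 1 2];
x  = [0.5; -0.3];
n  = length(mu);
d  = x - mu;
f  = exp(-0.5 * d' * inv(C) * d) / ((2*pi)^(n/2) * sqrt(det(C)))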
Let us now look at some simple examples of Gaussian random sequences.
Example 4.1. Let us consider the simple case in which the expectation is a zero column matrix, $\boldsymbol{\mu} = \mathbf{0}$, and the covariance matrix is proportional to the unit matrix $\mathbf{I}$, $\mathbf{C} = \sigma^2 \mathbf{I}$. Thus it is a diagonal matrix. When this is substituted into the expression (4.4) for jointly normally distributed random variables, the density factors into a product of one-dimensional normal densities, so the elements of the sequence are mutually independent. It follows that $\mathbf{X} = \mathbf{W}$, or in elements $X_k = W_k$, where $W_k$ is a Gaussian white noise sequence whose elements all have the same variance $\sigma^2$.
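A minimal sketch of such a sequence (the length N and variance are chosen here only for illustration, independently of the pwsema04 script) is:

% Sketch: white Gaussian sequence of length N with variance sigma^2.
N = 200;
sigma = 1.5;
W = sigma * randn(N, 1);           % independent N(0, sigma^2) elements
Cest = cov([W(1:end-1) W(2:end)])  % off-diagonal terms should be near zero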
Example 4.2. Let us consider the random sequence defined by the following difference equation and initial value

$$X_{k+1} = X_k + W_k, \qquad X_0 = 0,$$

where $W_k$ is a Gaussian white noise sequence with variance $\sigma^2$. The covariance matrix of the increments $W_k$ is $\sigma^2 \mathbf{I}$; it means the sequence has independent increments with equal variances. It is easy to verify that the elements of the covariance matrix of $X_1, \ldots, X_n$ are given by the following expression

$$c_{ij} = \sigma^2 \min(i, j).$$

The covariance matrix has the following form

$$\mathbf{C} = \sigma^2 \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2 & \cdots & 2 \\ 1 & 2 & 3 & \cdots & 3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 2 & 3 & \cdots & n \end{pmatrix}.$$
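A minimal simulation sketch of this sequence (N and sigma are again assumed values, not taken from the pwsema04 script) is:

% Sketch: sequence with independent increments (random walk).
N = 200;
sigma = 1.0;
W = sigma * randn(N, 1);
X = cumsum(W);   % X(k) = W(1) + ... + W(k), so the initial value is zero
% theoretical covariance element: c(i,j) = sigma^2 * min(i, j)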
Let us generalize the results of Example 4.2 to the case of intervals not of unit length but of length $\Delta t$, so that $X_k$ is the value at time $t_k = k\,\Delta t$. This results in a change of the coefficient: the variance of the white noise term becomes $\sigma^2 \Delta t$. This leads to the following form of the difference equation:

$$X_{k+1} = X_k + W_k, \qquad E[W_k^2] = \sigma^2 \Delta t, \qquad X_0 = 0, \qquad (4.5)$$

and the expression for the element of the covariance matrix becomes

$$c_{ij} = \sigma^2 \min(t_i, t_j).$$

The covariance matrix has the following form

$$\mathbf{C} = \sigma^2 \Delta t \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & 2 & \cdots & 2 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 2 & \cdots & n \end{pmatrix}. \qquad (4.6)$$
This random difference equation may be used to study the case when $\Delta t \to 0$. The result is that the sequence tends to a continuous function (the Brownian motion sequence of the pwsema04 script) with no derivative at any point.
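A sketch of this limiting behaviour on a fine grid (T, dt and sigma are assumed here for illustration; the pwsema04 script contains the worked Brownian motion example) is:

% Sketch: Brownian motion approximation on a grid of step dt.
T = 1.0; dt = 0.001; sigma = 1.0;
N = round(T/dt);
W = sigma * sqrt(dt) * randn(N, 1);   % increments with variance sigma^2*dt
X = [0; cumsum(W)];                   % values at t = 0, dt, 2*dt, ..., T
t = (0:N)' * dt;
plot(t, X)                            % irregular path; refining dt keeps it continuous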
Example 4.3. Let us consider the random sequence defined by the following difference equation and initial value

$$X_{k+1} = a X_k + W_k, \qquad |a| < 1, \qquad X_0 \sim N(0, \sigma^2),$$

where $W_k$ is an element of a white noise sequence and the condition that the elements $X_k$ and $X_{k+1}$ have equal variances $\sigma^2$ has to be satisfied. The condition yields the relation

$$\sigma_W^2 = (1 - a^2)\,\sigma^2.$$

It is easy to verify that the general expressions for the elements of the covariance matrix are

$$c_{ij} = \sigma^2 a^{|i-j|},$$

so the covariance matrix has the following form

$$\mathbf{C} = \sigma^2 \begin{pmatrix} 1 & a & a^2 & \cdots & a^{n-1} \\ a & 1 & a & \cdots & a^{n-2} \\ a^2 & a & 1 & \cdots & a^{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a^{n-1} & a^{n-2} & a^{n-3} & \cdots & 1 \end{pmatrix}.$$

The covariance matrix has the same values on all diagonals; the values depend only upon the distances $|i - j|$ of the points.
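A minimal simulation sketch of such a stationary sequence (N, a and sigma are assumed values chosen only for illustration) is:

% Sketch: stationary first-order autoregressive sequence.
N = 500; a = 0.9; sigma = 1.0;
sigmaW = sigma * sqrt(1 - a^2);       % keeps var(X_k) equal to sigma^2
X = zeros(N, 1);
X(1) = sigma * randn;                 % start in the stationary distribution
for k = 1:N-1
  X(k+1) = a * X(k) + sigmaW * randn;
end
% theoretical covariance element: c(i,j) = sigma^2 * a^abs(i-j)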
Let us generalize the results of Example 4.3 by introducing the following change of notation in the parameters: with sampling interval $\Delta t$ (so that $t_k = k\,\Delta t$) we write $a = e^{-\alpha \Delta t}$. For the elements of the covariance matrix it follows

$$c_{ij} = \sigma^2 e^{-\alpha |t_i - t_j|}. \qquad (4.7)$$

In the new notation the random difference equation becomes

$$X_{k+1} = e^{-\alpha \Delta t} X_k + W_k, \qquad E[W_k^2] = \sigma^2\left(1 - e^{-2\alpha \Delta t}\right). \qquad (4.8)$$

This form is suitable for studying the influence of the value of $\Delta t$ on the behaviour of the solution. The final result is: when $\Delta t \to 0$ the difference equation tends to an Itô random differential equation whose solution is a continuous function with no derivative at any point.
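A sketch of this non-differentiable stationary sequence for a small step $\Delta t$ (T, dt, alpha and sigma are assumed here for illustration; the pwsema04 script contains the worked version) is:

% Sketch: discretized stationary process with exponential covariance.
T = 5; dt = 0.001; alpha = 2.0; sigma = 1.0;
N = round(T/dt);
a = exp(-alpha*dt);
sigmaW = sigma * sqrt(1 - a^2);       % keeps the variance of X equal to sigma^2
X = zeros(N, 1);
X(1) = sigma * randn;
for k = 1:N-1
  X(k+1) = a * X(k) + sigmaW * randn;
end
plot((1:N)*dt, X)                     % rough, nowhere-differentiable-looking path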
Example 1
The script file pwsema04 gives examples of simple random
sequences: 1) White Gaussian Sequence, 2) Brownian Motion Sequence,
3) Non-differentiable Stationary Process.
Download
Scilab: pwsema04.sci
Octave/Matlab: pwsema04.m