
Proof of Theorem 6.

We are given n linearly independent eigenvectors p1, p2, ... , pn of an n-square matrix A and the corresponding eigenvalues λ1, λ2, ... , λn. An eigenvector of a matrix A is a vector X that A carries into a multiple of itself, according to the equation AX = λX, where λ is the corresponding eigenvalue. Thus the eigenvectors p1, p2, ... , pn and eigenvalues λ1, λ2, ... , λn must satisfy the following set of equations:
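The defining equation AX = λX is easy to check numerically. Below is a minimal sketch in plain Python; the matrix, vector, and eigenvalue are illustrative choices, not taken from the text.

```python
# Check the defining equation AX = λX for one eigenpair.
# A, X and lam below are illustrative values chosen so the
# eigenpair is easy to see by inspection.

def matvec(A, x):
    """Multiply matrix A (list of rows) by column vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[2, 0],
     [0, 3]]
X = [0, 1]     # an eigenvector of this A
lam = 3        # its eigenvalue

# AX should equal λX, component by component.
assert matvec(A, X) == [lam * xi for xi in X]
```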

1)        Ap1 = λ1p1
          Ap2 = λ2p2
          ............
          Apn = λnpn .

Now system 1) is equivalent to

2)        A[p1 p2 ... pn] = [λ1p1 λ2p2 ... λnpn] .

Why? Because of the following theorem:

______________________________________________

Theorem 1. Suppose that the matrix A carries the vector X1 into the vector Y1, the vector X2 into the vector Y2, etc., i.e.

AX1 = Y1
AX2 = Y2
......
AXn = Yn

Then

A[X1 X2 ... Xn] = [Y1 Y2 ... Yn]

where [X1 X2 ... Xn] is the matrix whose columns are X1, X2, ... , Xn and [Y1 Y2 ... Yn] is the matrix whose columns are Y1, Y2, ... , Yn.

_____________________________________________________
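Theorem 1 can be illustrated with a small concrete computation. The sketch below, in plain Python with illustrative numbers, multiplies A by two vectors separately and then by the matrix having those vectors as columns, and checks that the columns of the product are the separate results.

```python
# Illustrating Theorem 1: stacking X1, X2 as columns and multiplying
# once gives a matrix whose columns are Y1, Y2.
# A, X1, X2 are illustrative values, not from the text.

def matvec(A, x):
    """Multiply matrix A (list of rows) by column vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def matmul(A, B):
    """Multiply matrix A by matrix B (rows of A times columns of B)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1],
     [1, 2]]
X1, X2 = [1, 2], [3, 4]                  # two column vectors
Y1, Y2 = matvec(A, X1), matvec(A, X2)    # their images under A

# Stack X1, X2 as the columns of one matrix and multiply once.
X = [[X1[0], X2[0]],
     [X1[1], X2[1]]]
AX = matmul(A, X)

# The columns of AX are exactly Y1 and Y2.
assert [AX[0][0], AX[1][0]] == Y1
assert [AX[0][1], AX[1][1]] == Y2
```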

Furthermore,

3)        [λ1p1 λ2p2 ... λnpn] = [p1 p2 ... pn] D

where D is the diagonal matrix diag(λ1, λ2, ... , λn).

Why? Because of the following theorem:

______________________________________________

Theorem 2. The effect of post-multiplying a matrix A by a diagonal matrix D = diag(λ1, λ2, ... , λn) is that of multiplying the i-th column of A by the factor λi, i.e. the successive columns of the original matrix are simply multiplied by the successive diagonal elements of the diagonal matrix. Explicitly, if a1, a2, ... , an are the columns of A, then

[a1 a2 ... an] diag(λ1, λ2, ... , λn) = [λ1a1 λ2a2 ... λnan] .

______________________________________________
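The column-scaling effect described in Theorem 2 can be checked directly. A small sketch with illustrative numbers:

```python
# Illustrating Theorem 2: post-multiplying A by a diagonal matrix
# scales column i of A by the i-th diagonal entry.
# The numbers below are illustrative, not from the text.

def matmul(A, B):
    """Multiply matrix A by matrix B (rows of A times columns of B)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
D = [[5, 0],      # diag(5, 7)
     [0, 7]]

AD = matmul(A, D)

# Column 1 of A is scaled by 5, column 2 by 7.
assert AD == [[1 * 5, 2 * 7],
              [3 * 5, 4 * 7]]
```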

From equations 2) and 3) we get

AP = PD

or, equivalently,

4)        A = PDP⁻¹

where we have substituted P for [p1 p2 ... pn] and D for diag(λ1, λ2, ... , λn). Since the columns of P are linearly independent, P is invertible, which justifies forming P⁻¹.
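Equation 4) can be verified on a concrete example. In the sketch below, the matrix A = [[2, 1], [1, 2]] has eigenvalues 3 and 1 with eigenvectors [1, 1] and [1, -1] (an illustrative choice, easy to verify by hand); reassembling P D P⁻¹ recovers A.

```python
# Checking equation 4), A = P D P^-1, on a concrete 2x2 example.
# The matrix and its eigenpairs are illustrative choices.

def matmul(A, B):
    """Multiply matrix A by matrix B (rows of A times columns of B)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1],
     [1, 2]]
P = [[1,  1],      # columns are the eigenvectors p1 = [1, 1], p2 = [1, -1]
     [1, -1]]
D = [[3, 0],       # diag(3, 1), the corresponding eigenvalues
     [0, 1]]

# Inverse of P by the 2x2 cofactor formula (det P = -2).
Pinv = [[0.5,  0.5],
        [0.5, -0.5]]

# Reassemble A from its eigendecomposition: A = P D P^-1.
assert matmul(matmul(P, D), Pinv) == A
```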
