Website owner:  James Miller


Lambda matrices, matrix polynomials, division of λ-matrices, remainder theorem, scalar matrix polynomials, Cayley-Hamilton theorem

Lambda matrix. A matrix whose elements are polynomials in the variable λ.

Let F[λ] be the polynomial domain consisting of the set of all polynomials in λ with coefficients in a field F. A non-zero m×n matrix over F[λ] is called a λ-matrix.

Example. The 2×2 matrix

A(λ)  =  [ λ^2 + 2λ    3λ ]
         [ λ^2 - 1     4  ]

is a λ-matrix, since each of its elements is a polynomial in λ.

Matrix polynomial. A matrix polynomial can take any of the following three forms:

(1)  A(λ) = A0 + A1λ + A2λ^2 + .... + Apλ^p

where the coefficients A0, A1, .... , Ap are m×n matrices over a field F and the indeterminate λ is a number;

(2)  a(C) = a0In + a1C + a2C^2 + .... + apC^p

where the coefficients a0, a1, .... , ap are numbers and the indeterminate C is an n-square matrix;

(3)  A(C) = A0 + A1C + A2C^2 + .... + ApC^p

where the coefficients A0, A1, .... , Ap are matrices and the indeterminate C is a matrix.

Representation of a λ-matrix as a matrix polynomial. Any m×n λ-matrix can be written as a matrix polynomial. Let p be the degree of the polynomial of highest degree found in A(λ). Then A(λ) can be written as the matrix polynomial

(4)  A(λ) = A0 + A1λ + A2λ^2 + .... + Apλ^p

where A0, A1, .... , Ap are m×n matrices; the (i, j) element of Ak is the coefficient of λ^k in the (i, j) element of A(λ).

Example.

[ λ^2 + 2λ    3λ ]   =   [ 0    0 ]  +  [ 2  3 ] λ  +  [ 1  0 ] λ^2
[ λ^2 - 1     4  ]       [ -1   4 ]     [ 0  0 ]       [ 1  0 ]
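The coefficient matrices of such a decomposition can be extracted mechanically. A sketch using sympy, with an illustrative λ-matrix (not taken from the original text):

```python
import sympy as sp

lam = sp.Symbol("lambda")

# An illustrative 2x2 lambda-matrix.
A = sp.Matrix([[lam**2 + 2*lam, 3*lam],
               [lam**2 - 1, 4]])

# p = degree of the polynomial of highest degree found in A(lambda).
p = max(sp.Poly(e, lam).degree() for e in A)

# Coefficient matrices A_0, ..., A_p: the (j, k) entry of A_i is the
# coefficient of lambda^i in the (j, k) entry of A(lambda).
coeff = [A.applyfunc(lambda e: sp.Poly(e, lam).coeff_monomial(lam**i))
         for i in range(p + 1)]

# Rebuild A(lambda) = A_0 + A_1*lambda + ... + A_p*lambda^p and compare.
rebuilt = sum((coeff[i] * lam**i for i in range(p + 1)), sp.zeros(2, 2))
assert sp.expand(rebuilt - A) == sp.zeros(2, 2)
```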

Singular and non-singular λ-matrices. The determinant of an n-square λ-matrix is a polynomial in λ and if this determinant vanishes identically we call the matrix singular. Otherwise it is called non-singular.
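These notions can be illustrated concretely; a short sympy sketch with matrices chosen for illustration (not from the original text):

```python
import sympy as sp

lam = sp.Symbol("lambda")

# Singular: the first row is lambda times the second, so the
# determinant lam*lam - lam**2 vanishes identically.
S = sp.Matrix([[lam, lam**2],
               [1,   lam]])

# Non-singular: the determinant lam**2 - 1 is a non-zero polynomial
# (even though it vanishes for particular values lam = 1, -1).
N = sp.Matrix([[lam, 1],
               [1,   lam]])

print(S.det(), N.det())
```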

Proper and improper λ-matrices. An n-square λ-matrix A(λ) is called proper if the leading coefficient Ap in its matrix polynomial representation

A(λ) = A0 + A1λ + .... + Apλ^p

is non-singular. It is called improper if Ap is singular.

Operations with λ-matrices. Consider two n-square λ-matrices A(λ) and B(λ) and their matrix polynomial equivalents

A(λ) = A0 + A1λ + .... + Apλ^p

and

B(λ) = B0 + B1λ + .... + Bqλ^q .

Equality of two λ-matrices. Two λ-matrices A(λ) and B(λ) are said to be equal if p = q and Ai = Bi (i = 0, 1, 2, ... , p) in their matrix polynomial representations.

Sum of two λ-matrices. The sum of A(λ) and B(λ), A(λ) + B(λ), is a λ-matrix C(λ) obtained by adding corresponding elements of A(λ) and B(λ).

The product A(λ)·B(λ) is a λ-matrix or matrix polynomial of degree at most p + q. If either A(λ) or B(λ) is proper (i.e. has a non-singular leading coefficient), the degree of A(λ)·B(λ), and also of B(λ)·A(λ), is exactly p + q.

A λ-matrix and its matrix polynomial equivalent are identically equal, and the equality is not disturbed by replacing λ with any scalar k of F. For example, putting λ = k in (4) yields

A(k) = A0 + A1k + A2k^2 + .... + Apk^p .

However, when λ is replaced by an n-square matrix C, two results can be obtained due to the fact that, in general, two n-square matrices do not commute. These two results correspond to

AR(C) = A0 + A1C + A2C^2 + .... + ApC^p

and

AL(C) = A0 + CA1 + C^2A2 + .... + C^pAp

where AR(C) is called the right functional value of A(λ) and AL(C) is called the left functional value of A(λ).
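The distinction matters because the matrix coefficients need not commute with C. A small numpy check, with illustrative coefficient matrices:

```python
import numpy as np

# A(lambda) = A0 + A1*lambda + A2*lambda^2, illustrative coefficients.
A0 = np.eye(2)
A1 = np.array([[0., 1.], [1., 0.]])
A2 = np.array([[1., 0.], [1., 1.]])

C = np.array([[1., 1.], [0., 1.]])
C2 = C @ C

# Right functional value: powers of C stand to the right of the coefficients.
AR = A0 + A1 @ C + A2 @ C2
# Left functional value: powers of C stand to the left.
AL = A0 + C @ A1 + C2 @ A2

print(AR)   # differs from AL, since A1, A2 and C do not commute
print(AL)
```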

Division of λ-matrices. Consider the matrix polynomials

A(λ) = A0 + A1λ + .... + Apλ^p

and

B(λ) = B0 + B1λ + .... + Bqλ^q .

If B(λ) is non-singular, then there exist unique matrix polynomials Q1(λ), R1(λ), Q2(λ), and R2(λ), where R1(λ) and R2(λ) are either zero or of degree less than that of B(λ), such that

A(λ) = Q1(λ)·B(λ) + R1(λ)

and

A(λ) = B(λ)·Q2(λ) + R2(λ) .

If R1(λ) = 0 , B(λ) is called a right divisor of A(λ).

If R2(λ) = 0 , B(λ) is called a left divisor of A(λ).
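Right division can be carried out by the familiar long-division scheme. The helper below is a sketch, not from the original text, and assumes the stronger condition that the leading coefficient of B(λ) is non-singular (so it can be inverted at each step):

```python
import numpy as np

def right_divide(A, B):
    """Find Q, R with A(lam) = Q(lam)*B(lam) + R(lam), deg R < deg B.

    A and B are lists of square coefficient matrices, highest power first.
    Assumes the leading coefficient B[0] is non-singular."""
    A = [M.astype(float) for M in A]
    B0_inv = np.linalg.inv(B[0])
    Q = []
    while len(A) >= len(B):
        c = A[0] @ B0_inv            # next quotient coefficient
        Q.append(c)
        # cancel the leading term: subtract c * lam^k * B(lam)
        for i in range(len(B)):
            A[i] = A[i] - c @ B[i]
        A = A[1:]                    # leading coefficient is now zero
    return Q, A                      # quotient and remainder coefficients

# Divide an illustrative A(lam) by lam*I - Bm.
Bm = np.array([[0., 1.], [1., 0.]])
A = [np.array([[1., 0.], [1., 1.]]),   # lam^2 coefficient
     np.array([[0., 1.], [1., 0.]]),   # lam coefficient
     np.eye(2)]                        # constant term
Q, R = right_divide(A, [np.eye(2), -Bm])
print(R[0])   # equals the right functional value A_R(Bm)
```

The divisor λI - Bm is monic, so here the inversion step is trivial; the printed remainder anticipates the Remainder Theorem below.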

Scalar matrix polynomials. Let

b(λ) = b0 + b1λ + .... + bqλ^q

where the coefficients b0, b1, .... , bq and the indeterminate λ are scalars from a number field F. A matrix polynomial B(λ) of the form

B(λ) = b(λ)∙In = b0In + b1λIn + .... + bqλ^qIn

(where In is the n-square identity matrix) is called a scalar matrix polynomial. A scalar matrix polynomial is thus a matrix polynomial whose coefficients are scalar matrices.

Example. The following is a scalar matrix polynomial:

(λ^2 + 3λ + 2)∙I2  =  [ λ^2 + 3λ + 2         0       ]
                      [       0        λ^2 + 3λ + 2  ]

Theorem 1. A scalar matrix polynomial B(λ) = b(λ)∙In commutes with every n-square matrix polynomial.

Theorem 2. If

A(λ) = A0 + A1λ + .... + Apλ^p

and

B(λ) = b(λ)∙In ,

then there exist unique matrix polynomials Q1(λ) and R1(λ), with R1(λ) either zero or of degree less than that of B(λ), such that

A(λ) = Q1(λ)·B(λ) + R1(λ) = B(λ)·Q1(λ) + R1(λ)

(right and left division coincide, since B(λ) commutes with every matrix polynomial by Theorem 1), and if R1(λ) = 0 , b(λ)∙In divides A(λ).

Theorem 3. An n-square matrix polynomial

A(λ) = [aij(λ)]

is divisible by a scalar matrix polynomial B(λ) = b(λ)∙In if and only if every element aij(λ) of A(λ) is divisible by b(λ).
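Theorem 3 can be verified entry by entry. A sympy sketch with an illustrative A(λ) and b(λ) = λ - 1:

```python
import sympy as sp

lam = sp.Symbol("lambda")
b = lam - 1

# Every entry of A is divisible by b = lambda - 1.
A = sp.Matrix([[lam**2 - 1,   lam - 1],
               [2*lam - 2,    (lam - 1)**2]])

# Entry-wise quotients and remainders on division by b.
quos = A.applyfunc(lambda e: sp.quo(e, b, lam))
rems = A.applyfunc(lambda e: sp.rem(e, b, lam))

print(rems)   # the zero matrix, so b(lambda)*I divides A(lambda)
```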

The Remainder Theorem. Let A(λ) be an n-square λ-matrix over the polynomial domain F[λ],

A(λ) = A0 + A1λ + .... + Apλ^p ,

and let B = [bij] be an n-square matrix over the field F. Since λI - B is non-singular, we may write

A(λ) = Q1(λ)·(λI - B) + R1

and

A(λ) = (λI - B)·Q2(λ) + R2

where R1 and R2 are free of λ.

Theorem 4. If A(λ) is divided by λI - B, where B = [bij] is n-square, until remainders R1 and R2, free of λ, are obtained, then

R1 = AR(B)

and

R2 = AL(B)

(where AR(B) and AL(B) are the right and left functional values of A(λ)).

When A(λ) is a scalar matrix polynomial

A(λ) = f(λ)∙In ,

the remainders are identical, so that

R1 = R2 = f(B) .

Theorem 5. If a scalar matrix polynomial f(λ)∙In is divided by λI - B until a remainder R, free of λ, is obtained, then R = f(B).

Theorem 6. A scalar matrix polynomial f(λ)∙In is divisible by λI - B if and only if f(B) = 0.

Cayley-Hamilton Theorem. Every square matrix A = [aij] satisfies its characteristic equation Φ(λ) = 0.

Proof. Let A be an n-square matrix having characteristic matrix (λI - A) and characteristic equation Φ(λ) = |λI - A| = 0. A theorem on adjoints states that for any square matrix A

A(adj A) = |A| In .

Applying this theorem to the characteristic matrix (λI - A) we get

(λI - A) ·adj (λI - A) = Φ(λ)·I .

Thus the scalar matrix polynomial Φ(λ)·I is divisible by λI - A and, by Theorem 6, Φ(A) = 0.
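The theorem is easy to check numerically. A sketch using numpy (the sample matrix is illustrative; np.poly returns the coefficients of the characteristic polynomial, highest power first):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

# Coefficients of Phi(lambda) = det(lambda*I - A), highest power first.
c = np.poly(A)

# Evaluate Phi(A) by Horner's rule, using matrix powers of A.
Phi = np.zeros_like(A)
for ck in c:
    Phi = Phi @ A + ck * np.eye(3)

print(np.allclose(Phi, np.zeros((3, 3))))   # Phi(A) vanishes, as the theorem asserts
```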

References.

Ayres, Frank Jr. Theory and Problems of Matrices (Schaum's Outline Series).