Website owner: James Miller


VECTOR SPACE, SUBSPACE, BASIS, DIMENSION, LINEAR INDEPENDENCE

Vector spaces and subspaces – examples.

Let A and B be any two non-collinear vectors in the x-y plane. Then any other vector X in the plane can be expressed as a linear combination of vectors A and B. That is, there exist numbers k1 and k2 such that X = k1A + k2B for any vector X. Conversely, any linear combination of vectors A and B gives a vector in the x-y plane. Note the closure idea involved. Any vector in the plane can be obtained as a linear combination of A and B and any linear combination gives some vector in the plane. It is a closed system. It is a vector space.
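
To make the closure idea concrete, here is a small numerical sketch using NumPy; the particular vectors A, B and X are invented examples, not taken from the text. Solving a 2×2 linear system recovers the coefficients k1 and k2:

```python
import numpy as np

A = np.array([1.0, 2.0])   # assumed example vector
B = np.array([3.0, 1.0])   # non-collinear with A
X = np.array([5.0, 5.0])   # any vector in the x-y plane

# Solve the 2x2 system [A | B] k = X for the coefficients k1, k2.
k = np.linalg.solve(np.column_stack([A, B]), X)
k1, k2 = k
print(k1 * A + k2 * B)     # reproduces X
```

Because A and B are non-collinear, the matrix with columns A and B is nonsingular, so the pair (k1, k2) is unique.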

Let A, B and C be any three non-coplanar vectors in an x-y-z Cartesian coordinate system. Then any vector in this x-y-z coordinate system can be expressed as a linear combination of A, B and C. That is, there exist numbers k1, k2, and k3 such that X = k1A + k2B + k3C for any vector X. Conversely, any linear combination of A, B and C gives some vector in the x-y-z coordinate system. Note the closure idea involved. It is a closed system. It is a vector space.

Pass any plane through the origin of an x-y-z Cartesian coordinate system. Denote the plane by K. Let A and B be any two non-collinear vectors lying in plane K. Then any linear combination of vectors A and B is a vector lying in plane K (i.e. if c1 and c2 are any two numbers, the vector X = c1A + c2B lies in plane K). Moreover, any vector lying in plane K can be expressed as a linear combination of vectors A and B (i.e. for any vector X in plane K there exist numbers c1 and c2 such that X = c1A + c2B). Note the closure concept involved. It is a closed system. The totality of all vectors in plane K constitutes a vector space. Vectors A and B constitute a basis for the space, as would any other set of two non-collinear vectors lying in K. The dimension of the space is two (it is a two-dimensional space). This space constitutes a two-dimensional subspace of the three-dimensional space of the last paragraph. In fact, any plane passing through the origin of the x-y-z coordinate system constitutes a two-dimensional subspace of three-dimensional space.

Pass any line through the origin of an x-y-z Cartesian coordinate system. Denote the line by L. Let A be any vector lying in the line. Then any multiple of vector A is a vector lying in line L. Moreover, any vector lying in line L can be expressed as a multiple of vector A. Note the closure concept involved. It is a closed system. The totality of all vectors in line L constitutes a vector space. Line L is a one-dimensional subspace of three-dimensional space.

Linearly dependent and independent sets of vectors

Linear combination of vectors. The vector c1x1 + c2x2 + ... + cmxm with arbitrary numerical values for the coefficients c1, c2, ... ,cm is called a linear combination of the vectors x1, x2, ... ,xm .

Linearly dependent and independent sets of vectors. A set of vectors x1, x2, ... ,xm is said to be linearly dependent if some one of the vectors in the set can be expressed as a linear combination of one or more of the other vectors in the set. If none of the vectors in the set can be expressed as a linear combination of any other vectors of the set, then the set is said to be linearly independent.

Examples from three-dimensional space. To illustrate the concepts let us consider some examples from three dimensional space.

Let A, B and C be any three non-coplanar vectors in an x-y-z Cartesian coordinate system. Then any vector in this x-y-z coordinate system can be expressed as a linear combination of A, B and C. However, none of these three vectors A, B and C can be expressed as a linear combination of the other two. The vectors A, B and C constitute a linearly independent set. Now add another vector D to the set. Consider the set A, B, C and D. This set is a dependent set because vector D can be expressed as a linear combination of A, B and C.
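
A numerical sketch of this with invented vectors: three non-coplanar vectors give a rank-3 matrix, and appending any fourth 3-vector leaves the rank at 3, so the enlarged set is dependent.

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([1.0, 1.0, 1.0])   # non-coplanar with A and B
D = np.array([2.0, 3.0, 4.0])   # any fourth vector

print(np.linalg.matrix_rank(np.vstack([A, B, C])))     # 3: {A, B, C} independent
print(np.linalg.matrix_rank(np.vstack([A, B, C, D])))  # 3: {A, B, C, D} dependent

# Coefficients expressing D as a linear combination of A, B and C:
k = np.linalg.solve(np.column_stack([A, B, C]), D)
print(k)
```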

Pass any plane through the origin of an x-y-z Cartesian coordinate system. Denote the plane by K. Let A and B be any two non-collinear vectors lying in plane K. Then any linear combination of vectors A and B is a vector lying in plane K. However, neither vector A nor vector B can be expressed as a linear combination of the other. The two vectors form a linearly independent set. Now add another vector C to the set. The set A, B and C is a dependent set because vector C can be expressed as a linear combination of A and B.

A necessary and sufficient condition for the independence of a set of vectors.

Theorem. A necessary and sufficient condition for the set of vectors x1, x2, ... ,xm to be linearly independent is that

c1x1 + c2x2 + ... + cmxm = 0

only when all the scalars ci are zero.

What is the reasoning that leads to the assertion of this theorem? Well, a set of vectors x1, x2, ... ,xm is linearly dependent if some one of the vectors in the set can be expressed as a linear combination of one or more of the other vectors in the set. This assertion is equivalent to the assertion that a set of vectors is linearly dependent if there exist two or more vectors xi, xj, etc. such that

cixi + cjxj + ... = 0

where ci, cj, etc. are non-zero. Said differently, a set is linearly dependent if there exist two or more non-zero c’s for which the following equation holds true:

c1x1 + c2x2 + ... + cmxm = 0

If there do not exist two or more non-zero c’s for which it will hold, then the set of vectors is independent. The case in which exactly one of the c’s is non-zero is impossible, since cixi = 0 with ci non-zero would force xi to be the zero vector. Thus the set of vectors is linearly independent if and only if

c1x1 + c2x2 + ... + cmxm = 0

only when all the scalars ci are zero.

Linear dependence or independence of a set of vectors is determined from the rank of a matrix formed from them. Consider a matrix formed from m n-vectors with each vector corresponding to a row in the matrix. If the rank of the matrix is m, the set of vectors is linearly independent. If the rank is less than m, the set of vectors is linearly dependent. If the rank r is less than m, then there are exactly r vectors in the set which are linearly independent and the remaining m - r vectors can be expressed as linear combinations of these r independent vectors.
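
The rank test can be sketched numerically as follows (the rows are invented examples):

```python
import numpy as np

# Three 3-vectors stacked as rows; the second row is twice the first.
rows = np.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0],
                 [0.0, 1.0, 1.0]])
m = rows.shape[0]
r = np.linalg.matrix_rank(rows)
print(r)          # 2
print(r == m)     # False: rank < m, so the set is linearly dependent
```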

Space spanned by a set of vectors. Let x1, x2, ... ,xm be a set of n-vectors in n-dimensional space. They may be a linearly independent set or a linearly dependent set of vectors, it doesn’t matter. The set of all linear combinations of these vectors corresponds either to all of n-dimensional space or to some subspace of n-dimensional space. The vector space generated by all linear combinations of x1, x2, ... ,xm is called the subspace spanned by x1, x2, ... ,xm .

Example. Let us illustrate the concept with an example from three dimensional space.

Pass any plane through the origin of an x-y-z Cartesian coordinate system. Denote the plane by K. Let A, B, C, D and E be five non-collinear vectors lying in plane K. These five vectors form a set that spans plane K, a subspace of three-dimensional space. Any linear combination of these vectors lies in plane K and no linear combination lies outside the plane. Now add to this set a vector F which lies outside this plane. The new set of vectors (vectors A, B, C, D, E and F) spans all of three-dimensional space. Why? Because the set now contains three linearly independent vectors, and three independent vectors will span all of three-dimensional space.

Basis of a vector space. A basis of a vector space is any set of linearly independent vectors that spans the space. Each vector of the space is then a unique linear combination of the vectors of this basis.

Examples.

In three dimensional space any set of three non-coplanar vectors constitutes a basis for the space (choose any three non-coplanar vectors and they qualify as a basis). Any vector in the space can be expressed as a linear combination of these basis vectors and, conversely, any linear combination of these three basis vectors lies in three dimensional space. This basis plays the same role as a set of coordinate axes (it is viewed as a non-orthogonal set of coordinate axes that serve as a reference frame from which we can express any other vector in the space).

Any two non-collinear vectors in three-dimensional space define a plane that constitutes a subspace of three-dimensional space since any linear combination of these two vectors lies in the plane and, conversely, any vector in the plane can be expressed as a linear combination of these two vectors. Thus these two vectors constitute a basis for a two dimensional subspace of three dimensional space. Similarly, a single vector in 3-space constitutes a basis for a one dimensional subspace of 3-space.

In two dimensional space any set of two non-collinear vectors constitutes a basis for the space. These two basis vectors then serve as a non-orthogonal reference frame from which any other vector in the space can be expressed.

Dimension of a vector space. The dimension of a vector space is the number of independent vectors required to span the space.

Subspaces. N-dimensional space Vn(F) has embedded in it subspaces of lesser dimensions. For example, ordinary three-dimensional space has embedded in it two-dimensional subspaces in the form of planes passing through the origin of the coordinate system and one-dimensional subspaces in the form of lines passing through the origin. An r-dimensional subspace of Vn(F) is denoted by Vnr(F). A two-dimensional subspace of ordinary three-dimensional space V3(R) would, for example, be denoted by V32(R).

Row space of a matrix. The row space of a matrix is that subspace spanned by the rows of the matrix (rows viewed as vectors). It is that space defined by all linear combinations of the rows of the matrix.

Consider a matrix containing five rows and three columns. The rows may be viewed as 3-vectors spanning some subspace of three-dimensional space. If the rows contain three linearly independent vectors they span all of three-dimensional space. If the rows contain only two linearly independent vectors they span the subspace of three-dimensional space defined by these two vectors (some plane passing through the origin). If all rows are multiples of some one row they span a one-dimensional subspace of three-dimensional space corresponding to some line passing through the origin.
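
As an illustration, here is an invented 5×3 matrix whose rows all satisfy z = x + y, so they all lie in one plane through the origin:

```python
import numpy as np

M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0],
              [3.0, 2.0, 5.0]])
# Each row lies in the plane z = x + y, so the row space is that plane.
print(np.linalg.matrix_rank(M))   # 2: a two-dimensional subspace of 3-space
```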

The effect of the elementary row operations on a matrix is to produce other sets of rows in the same row space. If the rows of a matrix A span some subspace K of n-space Vn then the elementary row operations will produce another matrix whose row vectors span the same subspace of Vn . Row-equivalent matrices have the same row space. The dimension of the row space corresponds to the number of linearly independent vectors required to span the row space — which is equal to the rank of the matrix.
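
A quick check, with an assumed matrix, that an elementary row operation leaves the rank (and hence the dimension of the row space) unchanged:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])   # third row = first row + second row
B = A.copy()
B[2] = B[2] - B[0] - B[1]         # elementary row operation: R3 <- R3 - R1 - R2
ra = np.linalg.matrix_rank(A)
rb = np.linalg.matrix_rank(B)
print(ra, rb)                     # both 2: the row space is unchanged
```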

Column space of a matrix. The column space of a matrix is the subspace spanned by the columns of the matrix (columns viewed as vectors).

Theorem. For any m×n matrix the dimension of its row space is equal to the dimension of its column space, and both dimensions are equal to its rank. In other words, the number of linearly independent rows in a matrix is equal to the number of linearly independent columns.
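
This theorem is easy to spot-check numerically; for any matrix (here an arbitrary random one), the rank equals the rank of the transpose:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))          # an arbitrary 4x6 matrix
row_rank = np.linalg.matrix_rank(A)      # rank of A (row space dimension)
col_rank = np.linalg.matrix_rank(A.T)    # rank of A^T (column space dimension)
print(row_rank == col_rank)              # True
```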

Sum space of two vector spaces. The sum of two vector spaces P and Q is defined as the totality of all vectors x + y where x is in P and y is in Q. This is a vector space and we call it the sum space of P and Q. The sum space can be regarded as the space spanned by the union of the bases of the spaces P and Q.

Intersection space of two spaces. The intersection space of two vector spaces is the set of all vectors that belong to both spaces.

Example. A plane passing through the origin of an x-y-z Cartesian system in ordinary three-dimensional space represents a two-dimensional subspace of three-dimensional space. Consider two planes P and Q passing through the origin which are assumed not to coincide. The sum space of the two planes is the whole three dimensional space and the intersection space is a straight line (the line of their intersection).

Theorem. The dimensions p and q of two given spaces, the dimension t of their sum and the dimension s of their intersection satisfy the following relation:

p + q = t + s
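
A sketch of the relation using the two coordinate planes as P and Q (assumed bases; the sum space is all of 3-space and the intersection is the x-axis):

```python
import numpy as np

BP = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # basis of the x-y plane
BQ = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # basis of the x-z plane
p = np.linalg.matrix_rank(BP)                      # 2
q = np.linalg.matrix_rank(BQ)                      # 2
t = np.linalg.matrix_rank(np.vstack([BP, BQ]))     # dim of the sum space: 3
s = p + q - t                                      # theorem gives dim of intersection
print(s)                                           # 1: the line x-axis
```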

Coordinate systems in vector spaces. Consider a k-dimensional vector space with basis vectors A1, A2 , ... ,Ak so that an arbitrary vector X of the space has a unique representation

X = u1A1 + u2A2 + ... + ukAk

The vectors A1, A2 , ... ,Ak are called a coordinate system or reference system in the space and u1, u2, ... ,uk are the coordinates of X with respect to this system. Thus we can call the k-tuple {u1, u2, ... ,uk } the coordinate vector of X relative to the basis {A1, A2 , ... ,Ak } and denote X by

XA

where A denotes the basis {A1, A2 , ... ,Ak }.

E-basis Coordinate System. The n-vectors

E1 = [1, 0, 0, ...., 0]

E2 = [0, 1, 0, ...., 0]

..............................

En = [0, 0, 0, ...., 1]

are called the elementary or unit n-vectors. The elementary vector Ej, whose j-th component is 1, is called the j-th elementary n-vector. The elementary vectors E1, E2, ... ,En constitute an important basis for Vn(F). Every vector X = [x1, x2, ... ,xn] of n-space Vn(F) can be expressed uniquely as the sum

X = x1E1 + x2E2 + ... + xnEn

of the elementary vectors. The components x1, x2, ... ,xn of X are now called the coordinates of X relative to the E-basis.

Changes in coordinates due to change in basis. Let us consider the problem of how the coordinates of vectors are changed on transition from one basis to another in an n-dimensional vector space.

Let the original basis be the usual E-basis E1, E2, ... ,En . Let x1, x2, ... ,xn be the coordinates of a vector X with respect to the E-basis. Let Z1, Z2, ... ,Zn be some other arbitrary basis. Then there exist unique numbers a1, a2, ... ,an such that

X = a1Z1 + a2Z2 + ... + anZn

These numbers a1, a2, ... ,an represent the coordinates of X relative to the Z-basis. Writing XZ for the column vector whose entries are a1, a2, ... ,an , we have

X = [ Z1, Z2, ... ,Zn ] XZ = ZXZ

where Z is the matrix [ Z1, Z2, ... ,Zn ] whose columns are the basis vectors Z1, Z2, ... ,Zn .

Thus we have the following result: the coordinates of a vector X with respect to the E-basis are related to its coordinates with respect to some other Z-basis by

X = ZXZ

where matrix Z , whose columns are the new basis vectors Z1, Z2, ... ,Zn , is called the “matrix of the coordinate transformation”.
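
Numerically, finding XZ amounts to solving the linear system Z XZ = X; a small sketch with an assumed two-dimensional basis:

```python
import numpy as np

# Columns of Z are the basis vectors Z1 = [1, 0] and Z2 = [1, 1].
Z = np.array([[1.0, 1.0],
              [0.0, 1.0]])
X = np.array([3.0, 2.0])      # coordinates of X in the E-basis

XZ = np.linalg.solve(Z, X)    # coordinates of X in the Z-basis
print(XZ)                     # [1. 2.], i.e. X = 1*Z1 + 2*Z2
print(Z @ XZ)                 # recovers X
```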

Now let us ask what happens to the coordinates of a point if we move from some Z-basis given by {Z1, Z2, ... ,Zn } to some other W-basis given by {W1, W2, ... ,Wn }. We know

X = ZXZ

X = WXW

So

WXW = ZXZ

and

XW = W-1ZXZ

Theorem. If a vector of Vn(F) has coordinates XZ and XW relative to bases {Z1, Z2, ... ,Zn } and {W1, W2, ... ,Wn } of Vn(F), there exists a nonsingular matrix P, determined solely by the two bases and given by P = W-1Z, such that XW = PXZ .
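
The theorem can be spot-checked with assumed bases; the matrix P = W-1Z carries the Z-coordinates to the W-coordinates of the same vector:

```python
import numpy as np

Z = np.array([[1.0, 1.0], [0.0, 1.0]])   # columns: Z-basis vectors
W = np.array([[2.0, 0.0], [0.0, 1.0]])   # columns: W-basis vectors
XZ = np.array([1.0, 2.0])                # Z-coordinates of a vector

X  = Z @ XZ                              # the vector itself, in E-coordinates
P  = np.linalg.solve(W, Z)               # P = W^-1 Z (solved, not inverted)
XW = P @ XZ
print(XW)                                # W-coordinates of the same vector
print(np.allclose(W @ XW, X))            # True: both describe the same vector
```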
