Packet 06

Vectors II: Independence

Independence

A collection of vectors {𝐯1, 𝐯2, …, 𝐯k} is called independent when the only solution to the equation

x1𝐯1 + x2𝐯2 + ⋯ + xk𝐯k = 0

is given by setting xi = 0 for every i.
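The definition can be tested mechanically: row-reduce the vectors and see whether the homogeneous system has only the trivial solution. A minimal Python sketch (my own illustration, not part of the packet; the function name and the pivot-counting criterion are my framing):

```python
from fractions import Fraction

def independent(vectors):
    """Decide independence by Gaussian elimination (exact arithmetic).

    Solving x1*v1 + ... + xk*vk = 0 by row reduction leaves only the
    trivial solution exactly when every vector contributes a pivot,
    i.e. when the reduced matrix has k nonzero rows.
    """
    M = [[Fraction(x) for x in v] for v in vectors]
    pivots = 0
    for c in range(len(M[0])):
        # find a row at or below `pivots` with a nonzero entry in column c
        p = next((i for i in range(pivots, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[pivots], M[p] = M[p], M[pivots]
        for i in range(len(M)):
            if i != pivots and M[i][c] != 0:
                f = M[i][c] / M[pivots][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[pivots])]
        pivots += 1
    return pivots == len(vectors)

print(independent([(1, 0, 0), (0, 1, 0), (1, 1, 1)]))  # True
print(independent([(1, 2), (2, 4)]))                   # False: (2,4) = 2*(1,2)
```
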

Question 06-01

Independence vs. dependency equations

Prove the statement:

The collection 𝒱 = {𝐯1, …, 𝐯k} is dependent (i.e. not independent) if and only if at least one of the vectors can be written as a linear combination of the others, i.e. one of them is in the span of the others.

Example

Dependent columns implies non-invertible

If a square matrix A has column vectors 𝐚i which form a dependent set, then the square matrix is not invertible.

Proof: Let xi be numbers giving the dependency relation. These numbers can be combined into a vector 𝐱 = (x1, …, xn). Then:

A𝐱 = A(x1𝐞1 + ⋯ + xn𝐞n) = A(x1𝐞1) + ⋯ + A(xn𝐞n) = x1(A𝐞1) + ⋯ + xn(A𝐞n) = x1𝐚1 + ⋯ + xn𝐚n = 0.

However, since the xi are not all zero, we know 𝐱 ≠ 0, and therefore A sends something nonzero to zero, which means it cannot be invertible. (Otherwise, what would be the preimage of zero?)
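A tiny numerical illustration of this proof (the matrix is a hypothetical example of mine, built so its third column is the sum of the first two):

```python
# Hypothetical example: columns a1 = (1, 0, 2), a2 = (0, 1, 1), and
# a3 = a1 + a2, so the dependency relation is a1 + a2 - a3 = 0 and
# the coefficient vector is x = (1, 1, -1).
A = [[1, 0, 1],
     [0, 1, 1],
     [2, 1, 3]]
x = [1, 1, -1]

# Compute A*x directly: each entry is a dot product of a row of A with x.
Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(Ax)  # [0, 0, 0] -- A sends the nonzero vector x to 0
```
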

Independence and dimension

Recall that the dimension of a subspace is equal to the smallest number of vectors needed to span the subspace.

If a set of vectors is as small as possible among all sets with the same span, then this set is independent. (If there were any dependency relation, then a smaller set would have the same span: just eliminate the dependent vector.)

Conversely, if some vectors are independent, then they automatically form a minimal spanning set for their span:

Theorem: Independent vectors span their span minimally

Let V = ⟨𝐯1, …, 𝐯k⟩ ⊂ ℝn be the subspace spanned by the set of vectors {𝐯1, …, 𝐯k}, and suppose the set {𝐯1, …, 𝐯k} is independent. Then any set of vectors spanning V must have at least k elements.

Proof

We use Steinitz Exchange but with 𝐯i in place of 𝐞i. Suppose V = ⟨𝐮1, …, 𝐮r⟩ for r other vectors 𝐮j. Our goal is to show that r ≥ k.

First, since 𝐯1 ∈ V, we know there is a linear combination

𝐯1 = a1𝐮1 + ⋯ + ar𝐮r

with at least one coefficient ai⋆ ≠ 0. Solve for 𝐮i⋆ in terms of the others in this equation to obtain the fact that V = ⟨𝐯1, 𝐮2′, …, 𝐮r′⟩, where we introduce the labels 𝐮j′ with j = 2, …, r for whichever vectors are left from {𝐮1, …, 𝐮r} after removing 𝐮i⋆.

By iterating this process, we obtain

V = ⟨𝐯1, 𝐯2, …, 𝐯k, 𝐮k+1(k), …, 𝐮r(k)⟩,

and this is only possible if r ≥ k.

(Notation: here 𝐮j(p) stands for 𝐮j′′…′ with p ticks, just as for higher derivatives. After the k exchanges there are r − k of the 𝐮's left, relabeled with j = k+1, …, r.)

We are able to iterate the process because the vectors 𝐯i are independent: at each stage, the next 𝐯i+1 can be written as a linear combination of the previous 𝐯j (j ≤ i) and some of the remaining 𝐮j. But the coefficient on at least one of the 𝐮j must be nonzero, for otherwise we would have written 𝐯i+1 in terms of the 𝐯j alone, and that is impossible by independence.
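The exchange procedure can be carried out concretely. Below is a Python sketch (my own illustration, not from the packet): `solve` expresses a vector in a given spanning list by exact elimination, and `exchange` swaps the next 𝐯 in for one of the remaining 𝐮's carrying a nonzero coefficient.

```python
from fractions import Fraction

def solve(B, v):
    """Coefficients a with a[0]*B[0] + ... + a[r-1]*B[r-1] = v.

    Sketch only: assumes v really lies in the span of B. Runs
    Gauss-Jordan elimination on the system whose columns are the B's.
    """
    n, r = len(v), len(B)
    M = [[Fraction(B[j][i]) for j in range(r)] + [Fraction(v[i])]
         for i in range(n)]
    row, pivots = 0, []
    for c in range(r):
        p = next((i for i in range(row, n) if M[i][c] != 0), None)
        if p is None:
            continue
        M[row], M[p] = M[p], M[row]
        M[row] = [x / M[row][c] for x in M[row]]
        for i in range(n):
            if i != row and M[i][c] != 0:
                M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[row])]
        pivots.append(c)
        row += 1
    a = [Fraction(0)] * r
    for i, c in enumerate(pivots):
        a[c] = M[i][-1]
    return a

def exchange(spanning, v, fixed):
    """One Steinitz step: insert v, remove a u past index `fixed`.

    Independence of the v's guarantees some u (not one of the
    previously inserted v's) carries a nonzero coefficient.
    """
    a = solve(spanning, v)
    i = next(j for j in range(fixed, len(a)) if a[j] != 0)
    return spanning[:fixed] + [v] + spanning[fixed:i] + spanning[i + 1:]

span = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # the u's: they span R^3
vs = [(1, 1, 1), (1, 1, 0)]                # independent v's to swap in
for k, v in enumerate(vs):
    span = exchange(span, v, k)
print(span)  # [(1, 1, 1), (1, 1, 0), (0, 1, 0)]
```
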

Exercise 06-01

Understanding independence

Explain in more detail why we can iterate the process because the 𝐯j are independent. Specifically compare the reason it works here with the reason it worked for the 𝐞j in the Steinitz Exchange at the end of Packet 05.

Question 06-02

Completing the logic

Explain in more detail the statement: "and this is only possible if r ≥ k." Why is this statement true? What would happen if r < k?

Example

Independence

Problem: Is the following set of vectors dependent or independent?

(3, −1, 2), (0, 2, −1), (2, −4, 3), (0, 1, 1)

Solution: It must be a dependent set. These vectors live in ℝ3, which is spanned by 𝐞1, 𝐞2, 𝐞3. If these vectors were independent, then we could use Steinitz Exchange to write ℝ3 = ⟨𝐞1, 𝐞2, 𝐞3⟩ as the span of just three of them. Since the fourth is also in ℝ3, it could then be written as a combination of the other three, contradicting independence.
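The dimension argument shows dependence must occur without producing a relation. For the curious, one explicit relation, found by solving the system x1𝐯1 + x2𝐯2 + x3𝐯3 = 0 by hand, happens to involve only the first three vectors; a quick Python check:

```python
# One dependency relation among the first three vectors (found by hand):
# 2*v1 - 5*v2 - 3*v3 = 0.
v1, v2, v3 = (3, -1, 2), (0, 2, -1), (2, -4, 3)
combo = tuple(2 * a - 5 * b - 3 * c for a, b, c in zip(v1, v2, v3))
print(combo)  # (0, 0, 0)
```
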

Basis

A basis for V ⊂ ℝn is a set of independent vectors which span the entire subspace V.

For example, the standard basis {𝐞1, …, 𝐞n} is a basis for the whole space ℝn.

The criterion of independence depends only on the relationships among the vectors in the set. The criterion of spanning depends also upon what other vectors there may be in the subspace.

(Independence is an intrinsic concept, whereas spanning the subspace is an extrinsic concept. These are not precise mathematical terms.)

Any set of vectors spans a certain subspace, namely its own span. Therefore any independent set of vectors is a basis for its own span.

Because an independent set of vectors is always a minimal spanning set for its span (the theorem above), and a basis for a given subspace is an independent set of vectors spanning that subspace, every basis for a given subspace has the same number of vectors: each one has the minimal number needed to span the subspace. This number is the dimension of the subspace.

The most important way to think about and use the concept of basis is this: Given a basis {𝐯1, 𝐯2, …, 𝐯k} for a subspace V ⊂ ℝn, every vector 𝐰 in V can be obtained uniquely as a linear combination of basis vectors:

𝐰 = x1𝐯1 + ⋯ + xk𝐯k.

(Here 'unique' means that the coefficients xi are uniquely determined by 𝐰.)

From this way of thinking about a basis, we see that the dimension k is the number of quantities xi that are needed to describe everything in the space without redundancy.
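A small illustration of reading off these unique coefficients, with a hypothetical basis of ℝ2 chosen so the system solves by substitution:

```python
# Hypothetical basis b1 = (1, 1), b2 = (0, 1) of R^2, and a target w.
b1, b2 = (1, 1), (0, 1)
w = (3, 5)

# Solve w = x1*b1 + x2*b2 by substitution; independence of b1, b2
# is what makes this answer unique.
x1 = w[0]        # first coordinates:  x1*1 + x2*0 = 3
x2 = w[1] - x1   # second coordinates: x1*1 + x2*1 = 5

# Reconstruct w from its components (x1, x2) in this basis.
rebuilt = (x1 * b1[0] + x2 * b2[0], x1 * b1[1] + x2 * b2[1])
print(x1, x2, rebuilt)  # 3 2 (3, 5)
```
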

Example

3 vectors spanning 3D space must be independent

In Problem 05-02, we saw that the span

⟨(1, 1, 1), (1, 1, 0), (0, 2, 2)⟩

is a 3-dimensional subspace of ℝ3. (In fact it is ℝ3, since the Steinitz Exchange will put all three 𝐞i in the same span as this.) Therefore we know the vectors are independent, because there are three vectors spanning a 3D space.

Orthonormal systems, orthonormal bases

A collection of vectors {𝐯1, 𝐯2, …, 𝐯k} is called an orthonormal system when they are:

  • unit vectors, meaning |𝐯i| = 1 for all i
  • pairwise orthogonal, meaning 𝐯i ⋅ 𝐯j = 0 when i ≠ j

If the collection is also a basis, then it is called an orthonormal basis.
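Both conditions are dot-product checks, so they are easy to verify mechanically. A sketch with a hypothetical candidate system in ℝ3 (floating point, so the comparisons use a small tolerance):

```python
import math

s = 1 / math.sqrt(2)
# Hypothetical candidate system in R^3 (my example, not from the packet):
B = [(s, s, 0.0), (s, -s, 0.0), (0.0, 0.0, 1.0)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Orthonormal means: dot(vi, vi) = 1 (unit vectors) and
# dot(vi, vj) = 0 for i != j (pairwise orthogonal).
ok = all(
    abs(dot(B[i], B[j]) - (1.0 if i == j else 0.0)) < 1e-12
    for i in range(3) for j in range(3)
)
print(ok)  # True
```
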

Vector components in a given basis

The typical application of a basis is to write other vectors as its linear combinations. Since a basis spans the whole space, every other vector in the space can be written using some set of coefficients. Since a basis is linearly independent, there is a unique set of coefficients that will generate a given vector.

For example, if ℬ = {𝐛1, …, 𝐛k} is a basis of a subspace V ⊂ ℝn, which incidentally therefore has dimension k, then we can write any 𝐯 ∈ V by:

𝐯 = x1𝐛1 + x2𝐛2 + ⋯ + xk𝐛k.

Using such linear combinations to express vectors 𝐯, we find a unique association between a vector 𝐯 and the quantities x1, …, xk which express that vector in terms of the 𝐛i. We can group those quantities into a list (x1, …, xk). These quantities are called the components of 𝐯 in the basis ℬ.

The ordinary component entries of a vector are actually just the components of the vector in the standard basis. (Think about this!)
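One convenient special case, stated here as an aside (it is a standard fact, not proved in the packet): when the basis is orthonormal, the component xi can be read off as the dot product 𝐯 ⋅ 𝐛i, because dotting 𝐯 = x1𝐛1 + ⋯ + xk𝐛k with 𝐛i kills every term except xi. A sketch with a hypothetical orthonormal basis of ℝ3:

```python
import math

s = 1 / math.sqrt(2)
# Hypothetical orthonormal basis of R^3 and a vector to express in it:
b1, b2, b3 = (s, s, 0.0), (s, -s, 0.0), (0.0, 0.0, 1.0)
w = (3.0, 1.0, 2.0)

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

# Components of w in this basis: xi = w . bi (orthonormal case only).
x = [dot(w, b) for b in (b1, b2, b3)]

# Sanity check: x1*b1 + x2*b2 + x3*b3 reproduces w.
rebuilt = tuple(x[0] * p + x[1] * q + x[2] * r
                for p, q, r in zip(b1, b2, b3))
print([round(c, 6) for c in rebuilt])  # [3.0, 1.0, 2.0]
```
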

Problems due Monday 26 Feb 2024 by 12:00pm

Problem 06-01

Unique coefficients on independent vectors

Suppose the vectors {𝐯1, …, 𝐯k} are independent. Show that the coefficients on these vectors are unique, in any linear combination. This means, for example, that if we know

x1𝐯1 + ⋯ + xk𝐯k = 𝐰 = y1𝐯1 + ⋯ + yk𝐯k,

then in fact we have xi = yi for all i. In other words, there is at most one way to write a vector as a linear combination of some independent vectors.

Problem 06-02

Neutralizing a matrix row

Suppose that the row vectors of a matrix M are dependent. Explain precisely how (in terms of a given dependency relation) you can transform M into a matrix having a row of all zeros by performing some combination of row-scale and row-add operations to the rows of M.

Problem 06-03

Column dependence and multiple solutions

Suppose that the columns of a matrix A are dependent. Consider equations of the form A𝐱 = 𝐛 for 𝐱 a variable vector, and 𝐛 a given constant vector. Show that if there is a solution 𝐱 to this equation, then there must be more than one solution 𝐱. (Because of the dependency of the columns of A.)

(Hint: because the columns of A are dependent, (explain how) you can find a specific vector 𝐱′ ≠ 0 with the property that A𝐱′ = 0. Combine 𝐱′ with a given solution to the equation to produce a second solution. Can you always get an infinite set of solutions?)

Problem 06-04

Checking independence

Is the following set of vectors linearly independent?

{(1, 2, 3), (4, 5, 6), (2, 1, 0)}

If it's dependent, find a dependency relation.

(You are encouraged to do this problem without Steinitz Exchange and dimensional reasoning: simply solve a system of equations instead.)

Problem 06-05

Vector operations according to components in another basis

Show that the components of vectors in some basis ℬ are added and scaled componentwise.

This means: given two vectors, if you express them in the basis ℬ, and add the components in this basis, you will get the same result as if you first added the two vectors (according to the definition of adding components within ℝn) and then expressed the result in the basis ℬ. And similarly for scaling.