A collection of vectors $v_1, v_2, \dots, v_n$ is called independent when the only solution to the equation
$$c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0$$
is given by setting $c_i = 0$ for every $i$.
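As a quick numerical check of this definition, here is a minimal sketch in Python (NumPy assumed; the three vectors are made-up examples): the set is independent exactly when the matrix with the vectors as columns has rank equal to the number of vectors, since that is the same as the dependency equation having only the zero solution.

```python
# Numerical sketch: test independence by a rank computation (made-up vectors).
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 3.0])   # equals v1 + v2, so the set is dependent

A = np.column_stack([v1, v2, v3])    # columns are the vectors
rank = np.linalg.matrix_rank(A)

# The columns are independent exactly when the rank equals the number of vectors,
# i.e. when c1*v1 + c2*v2 + c3*v3 = 0 forces c1 = c2 = c3 = 0.
print("independent" if rank == A.shape[1] else "dependent")   # prints "dependent"
```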
Question 06-01
Independence vs. dependency equations
Prove the statement:
The collection $v_1, \dots, v_n$ is dependent (i.e. not independent) if and only if at least one of the vectors can be written as a linear combination of the others, i.e. one of them is in the span of the others.
Example
Dependent columns implies non-invertible
If a square matrix has column vectors which form a dependent set, then the square matrix is not invertible.
Proof: Let $c_1, \dots, c_n$ be numbers giving the dependency relation among the columns $a_1, \dots, a_n$ of the matrix $A$. These numbers can be combined into a vector $c = (c_1, \dots, c_n)$. Then:
$$Ac = c_1 a_1 + c_2 a_2 + \cdots + c_n a_n = 0.$$
However, since the $c_i$ are not all zero, we know $c \neq 0$, and therefore $A$ sends something nonzero to zero, which means it cannot be invertible. (Otherwise, what would be the preimage of zero?)
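Here is a small numerical illustration of this example, sketched in Python with a made-up 3×3 matrix whose third column is the sum of the first two:

```python
# Numerical sketch: a matrix with dependent columns sends a nonzero vector to zero
# and is not invertible. The matrix is a made-up example.
import numpy as np

# Third column = first column + second column, so the columns are dependent.
A = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 3.0],
              [0.0, 4.0, 4.0]])

# Dependency relation 1*a1 + 1*a2 - 1*a3 = 0, packaged as the vector c.
c = np.array([1.0, 1.0, -1.0])

print(A @ c)              # [0. 0. 0.]  -- A sends the nonzero vector c to zero
print(np.linalg.det(A))   # 0.0 up to floating-point rounding -- so A is not invertible
```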
Independence and dimension
Recall that the dimension of a subspace is equal to the smallest number of vectors needed to span the subspace.
If a set of vectors achieves this smallest number for the subspace it spans, then the set is independent. (If there were any dependency relation, then a smaller set would have the same span: just eliminate the dependent vector.)
Conversely, if some vectors are independent, then they are automatically the smallest number that spans their span:
Theorem: Independent vectors span their span minimally
Let $V$ be the subspace spanned by the set of $n$ vectors $v_1, v_2, \dots, v_n$, and suppose the set is independent. Then any set of vectors spanning $V$ must have at least $n$ elements.
Proof
We use Steinitz Exchange but with the vectors $v_1, \dots, v_n$ in place of $e_1, \dots, e_n$. Suppose $V = \operatorname{span}(w_1, w_2, \dots, w_m)$ for other vectors $w_1, \dots, w_m$. Our goal is to show that $m \geq n$.
First, since $v_1 \in V$, we know there is a linear combination
$$v_1 = c_1 w_1 + c_2 w_2 + \cdots + c_m w_m$$
with at least one coefficient $c_j \neq 0$. Solve for $w_j$ in terms of the others in this equation to obtain the fact that $V = \operatorname{span}(v_1, w'_1, \dots, w'_{m-1})$, where we introduce the labels $w'_1, \dots, w'_{m-1}$ for whichever vectors are left from $w_1, \dots, w_m$ after removing $w_j$.
By iterating this process, we obtain
$$V = \operatorname{span}\bigl(v_1, v_2, \dots, v_n, w^{(n)}_1, \dots, w^{(n)}_{m-n}\bigr),$$
and this is only possible if $m \geq n$.
(Notation: here $w^{(k)}$ stands for $w$ with $k$ ticks, just as for higher derivatives.)
We are able to iterate the process because the vectors $v_1, \dots, v_n$ are independent: at each stage, the next $v_{k+1}$ can be written as a linear combination of the previous $v$'s ($v_1, \dots, v_k$) and some of the remaining $w^{(k)}$'s. But the coefficient on at least one of the $w^{(k)}$'s must be nonzero, otherwise we would have written $v_{k+1}$ in terms of $v_1, \dots, v_k$ alone, and that is impossible by independence.
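To see the bookkeeping concretely, here is one small run of the exchange with made-up vectors in $\mathbb{R}^2$ (so $n = 2$ and $m = 3$). Take the independent vectors $v_1 = (1,0)$, $v_2 = (0,1)$ and the spanning set $w_1 = (1,1)$, $w_2 = (1,-1)$, $w_3 = (0,1)$, so $V = \mathbb{R}^2$. First,
$$v_1 = \tfrac{1}{2} w_1 + \tfrac{1}{2} w_2 + 0\,w_3,$$
and the coefficient on $w_1$ is nonzero, so $w_1$ is removed: $V = \operatorname{span}(v_1, w'_1, w'_2)$ with $w'_1 = w_2$ and $w'_2 = w_3$. Next,
$$v_2 = v_1 - w'_1 + 0\,w'_2,$$
and the coefficient on $w'_1$ is nonzero, so $w'_1$ is removed: $V = \operatorname{span}(v_1, v_2, w''_1)$ with $w''_1 = w_3$. After $n = 2$ steps one $w$ remains, so $m = 3 \geq n = 2$, as the theorem requires.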
Exercise 06-01
Understanding independence
Explain in more detail why we can iterate the process because the $v_i$ are independent. Specifically, compare the reason it works here with the reason it worked for the $e_i$ in the Steinitz Exchange at the end of Packet 05.
Question 06-02
Completing the logic
Explain in more detail the statement: “and this is only possible if $m \geq n$.” Why is this statement true? What would happen if $m < n$?
Example
Independence
Problem: Is the following set of four vectors in $\mathbb{R}^3$ dependent or independent?
Solution: It must be a dependent set. These vectors live in $\mathbb{R}^3$, which is spanned by $e_1, e_2, e_3$. If these vectors were independent, then we could use Steinitz Exchange to write $\mathbb{R}^3$ as the span of just three of them. Since the fourth is also in $\mathbb{R}^3$, it could then be written as a combination of the other three, contradicting independence.
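If the vectors are given explicitly, a dependency relation can also be found numerically. The following Python sketch uses made-up placeholder vectors (not the ones in this example) and reads the relation off from the null space:

```python
# Numerical sketch: find a dependency relation for four vectors in R^3
# (the vectors below are made-up placeholders).
import numpy as np

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 0.0, 3.0])
v4 = np.array([2.0, 3.0, 4.0])   # = v1 + v2 + v3, so a dependency must exist

A = np.column_stack([v1, v2, v3, v4])   # 3x4 matrix: more columns than rows

# A 3x4 matrix always has a nontrivial null vector; the last right-singular
# vector from the SVD gives coefficients c with A @ c = 0 (approximately).
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]
print(c)          # a nonzero coefficient vector
print(A @ c)      # approximately [0, 0, 0]: a dependency relation among v1..v4
```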
Basis
A basis for a subspace $V$ is a set of independent vectors which span the entire subspace $V$.
For example, the standard basis $e_1, e_2, \dots, e_n$ is a basis for the whole space $\mathbb{R}^n$.
The criterion of independence depends only on the relationships between the vectors in the set. The criterion of spanning also depends upon what other vectors there may be in the subspace.
(Independence is an intrinsic concept, whereas spanning the subspace is an extrinsic concept. These are not precise mathematical terms.)
Any set of vectors spans a certain subspace, namely the span itself. Therefore any independent set of vectors is a basis for its own span.
Because an independent set of vectors is always the minimal number needed to span its span (the theorem above), and a basis for a given subspace is an independent set of vectors spanning that subspace, every basis for a given subspace has the same number of vectors. (Every basis has the minimal number needed to span the given subspace.) This number is the dimension of the subspace.
The most important way to think about and use the concept of basis is this: Given a basis $b_1, b_2, \dots, b_k$ for a subspace $V$, every vector $v$ in $V$ can be obtained uniquely as a linear combination of basis vectors:
$$v = c_1 b_1 + c_2 b_2 + \cdots + c_k b_k.$$
(Here ‘unique’ means that the coefficients $c_1, \dots, c_k$ are uniquely determined by $v$.)
From this way of thinking about a basis, we see that the dimension is the number of quantities that are needed to describe everything in the space without redundancy.
Example
3 vectors spanning 3D space must be independent
In Problem 05-02, we saw that the span of the three vectors given there
is a 3-dimensional subspace of $\mathbb{R}^3$. (In fact it is all of $\mathbb{R}^3$, since the Steinitz Exchange will put all three of them in the same span as $e_1, e_2, e_3$.) Therefore we know the vectors are independent, because there are three vectors spanning a 3D space.
Orthonormal systems, orthonormal bases
A collection of vectors $u_1, u_2, \dots, u_k$ is called an orthonormal system when they are:
unit vectors, meaning $\|u_i\| = 1$ for all $i$
pairwise orthogonal, meaning $u_i \cdot u_j = 0$ when $i \neq j$
If the collection is also a basis, then it is called an orthonormal basis.
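Both conditions can be checked at once: putting the vectors as the columns of a matrix $Q$, the product $Q^T Q$ collects all of the dot products $u_i \cdot u_j$, so an orthonormal system gives the identity matrix. A small Python sketch with made-up vectors:

```python
# Numerical sketch: verify the two orthonormality conditions (made-up vectors).
import numpy as np

u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
u3 = np.array([0.0, 0.0, 1.0])

Q = np.column_stack([u1, u2, u3])

# Q.T @ Q holds every dot product u_i . u_j: the diagonal entries check
# "unit vectors" and the off-diagonal entries check "pairwise orthogonal",
# so an orthonormal system gives (approximately) the identity matrix.
print(np.round(Q.T @ Q, 10))
```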
Vector components in a given basis
The typical application of a basis is to write other vectors as its linear combinations. Since a basis spans the whole space, every other vector in the space can be written using some set of coefficients. Since a basis is linearly independent, there is a unique set of coefficients that will generate a given vector.
For example, if $b_1, b_2, \dots, b_k$ is a basis of a subspace $V$, which incidentally therefore has dimension $k$, then we can write any $v \in V$ by:
$$v = c_1 b_1 + c_2 b_2 + \cdots + c_k b_k.$$
Using such linear combinations to express vectors $v \in V$, we find a unique association between a vector $v$ and the quantities $c_1, \dots, c_k$ which express that vector in terms of the $b_i$. We can group those quantities into a list $(c_1, c_2, \dots, c_k)$. The terms $c_i$ are called the components of $v$ in the basis $b_1, \dots, b_k$.
The ordinary component entries of a vector are actually just the components of the vector in the standard basis. (Think about this!)
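Computationally, finding the components of $v$ in a basis $b_1, \dots, b_k$ amounts to solving the linear system $Bc = v$, where the columns of $B$ are the basis vectors. A minimal Python sketch with a made-up basis of $\mathbb{R}^3$:

```python
# Numerical sketch: components of a vector in a (made-up) basis of R^3.
import numpy as np

b1 = np.array([1.0, 1.0, 0.0])
b2 = np.array([0.0, 1.0, 1.0])
b3 = np.array([1.0, 0.0, 1.0])
B = np.column_stack([b1, b2, b3])   # columns form a basis of R^3

v = np.array([2.0, 3.0, 1.0])

# Solving B c = v finds the unique coefficients with v = c1*b1 + c2*b2 + c3*b3;
# the entries of c are the components of v in the basis b1, b2, b3.
c = np.linalg.solve(B, v)
print(c)        # the components of v in this basis
print(B @ c)    # reproduces v

# In the standard basis the matrix B is the identity, so the components are
# just the ordinary entries of v.
```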
Problems due Monday 26 Feb 2024 by 12:00pm
Problem 06-01
Unique coefficients on independent vectors
Suppose the vectors $v_1, \dots, v_n$ are independent. Show that the coefficients on these vectors are unique, in any linear combination. This means, for example, that if we know
$$a_1 v_1 + a_2 v_2 + \cdots + a_n v_n = b_1 v_1 + b_2 v_2 + \cdots + b_n v_n,$$
then in fact we have $a_i = b_i$ for all $i$. In other words, there is at most one way to write a vector as a linear combination of some independent vectors.
Problem 06-02
Neutralizing a matrix row
Suppose that the row vectors of a matrix $A$ are dependent. Explain precisely how (in terms of a given dependency relation) you can transform $A$ into a matrix having a row of all zeros by performing some combination of row-scale and row-add operations on the rows of $A$.
Problem 06-03
Column dependence and multiple solutions
Suppose that the columns of a matrix $A$ are dependent. Consider equations of the form $Ax = b$ for $x$ a variable vector and $b$ a given constant vector. Show that if there is a solution $x$ to this equation, then there must be more than one solution $x$. (Because of the dependency of the columns of $A$.)
(Hint: because the columns of $A$ are dependent, (explain how) you can find a specific vector $c \neq 0$ with the property that $Ac = 0$. Combine $c$ with a given solution $x$ to the equation to produce a second solution. Can you always get an infinite set of solutions?)
Problem 06-04
Checking independence
Is the following set of vectors linearly independent?
If it’s dependent, find a dependency relation.
(You are encouraged to do this problem without Steinitz Exchange and dimensional reasoning: simply solve a system of equations instead.)
Problem 06-05
Vector operations according to components in another basis
Show that the lists of components of vectors in some basis $b_1, \dots, b_k$ are added and scaled componentwise.
This means: given two vectors, if you express them in the basis $b_1, \dots, b_k$ and add their components in this basis, you will get the same result as if you first added the two vectors (according to the usual definition of adding components within $\mathbb{R}^n$) and then expressed the result in the basis $b_1, \dots, b_k$. And similarly for scaling.