Linear systems: homogeneous, constant coefficients

We are interested in systems of equations that can be written in the form $\mathbf{x}' = A\mathbf{x}$, where $\mathbf{x}$ is a vector function of $t$, and $A$ is an $n \times n$ matrix with constant entries (independent of $t$).

Our goal is to find a complete set of solutions to $\mathbf{x}' = A\mathbf{x}$. It is not enough to find just one. How many solutions constitute a complete set?

Picard-Lindelöf for first-order systems

The system:

$$\mathbf{x}' = A\mathbf{x}, \qquad \mathbf{x}(0) = \mathbf{x}_0$$

always has existence and uniqueness of solutions, for any initial vector $\mathbf{x}_0$ and constant coefficient matrix $A$.

This implies: a complete set of solutions is a vector space (by linearity) of the same dimension as $\mathbf{x}$ (by the theorem).
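Concretely, if $\mathbf{x}_1, \dots, \mathbf{x}_n$ are $n$ independent solutions, then the complete family is the set of linear combinations

$$\mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) + \cdots + c_n\mathbf{x}_n(t),$$

with arbitrary constants $c_1, \dots, c_n$.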

Eigenvectors

The eigenvector approach to solving the system $\mathbf{x}' = A\mathbf{x}$ starts from the observation that if $\mathbf{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$, meaning that $A\mathbf{v} = \lambda\mathbf{v}$, then the vector function $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$ is a solution to the system.

The downsides of this approach: the solution cannot be written down until all the eigenvalues have been computed; the eigenvalues frequently involve imaginary numbers; and there may simply not be enough eigenvectors to give a complete family.

Matrix exponentials

The matrix exponential approach to solving the same system starts by constructing the function $e^{tA}$ using power series. This function is a matrix-valued function of $t$ that satisfies $\frac{d}{dt}e^{tA} = Ae^{tA}$, and because $e^{tA}$ is always an invertible matrix, the columns of $e^{tA}$ automatically give a complete set of solutions.

The downside of this approach is that it is hard to compute $e^{tA}$, and the typical method requires finding the eigenvalues of $A$ anyway. On the other hand, it is not necessary to use imaginary numbers, because the matrix can be put into what is called “Real Jordan Form” (with only real numbers). This form uses an extension of the idea of eigenvector that is designed specifically to handle the situation of missing eigenvectors. “Putting $A$ into” this form simply means changing variables according to some invertible transformation $\mathbf{x} = P\mathbf{y}$. The matrix $P^{-1}AP$ of the new equation $\mathbf{y}' = P^{-1}AP\,\mathbf{y}$ will have the Real Jordan Form.
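Both approaches can be tried numerically. Here is a minimal Python sketch, with an invented matrix `A` and `scipy.linalg.expm` standing in for the power-series construction of $e^{tA}$:

```python
import numpy as np
from scipy.linalg import expm

# Invented illustrative matrix (not one of the packet's examples).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # eigenvalues -1 and -2

# Eigenvector approach: each eigenpair (lam, v) gives a solution e^{lam t} v.
eigvals, eigvecs = np.linalg.eig(A)

# Matrix exponential approach: the columns of expm(t*A) are a complete
# set of solutions.
t = 0.5
X_t = expm(t * A)

# Consistency check: since A v = lam v, we must have e^{tA} v = e^{lam t} v.
lam, v = eigvals[0], eigvecs[:, 0]
print(np.allclose(X_t @ v, np.exp(lam * t) * v))  # True
```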

Eigenvector approach

Suppose we have any system of the form $\mathbf{x}' = A\mathbf{x}$ for a matrix $A$ with constant entries. Suppose $\mathbf{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$, meaning that $A\mathbf{v} = \lambda\mathbf{v}$. (Note that $\mathbf{v}$ also has constant entries, like $A$.) Then the function $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$ solves the system:

$$\frac{d}{dt}\left(e^{\lambda t}\mathbf{v}\right) = \lambda e^{\lambda t}\mathbf{v} = e^{\lambda t}(\lambda\mathbf{v}) = e^{\lambda t}A\mathbf{v} = A\left(e^{\lambda t}\mathbf{v}\right).$$

Eigenvector-eigenvalue solutions

Any eigenvector-eigenvalue pair $(\mathbf{v}, \lambda)$ determines a solution $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$.
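For instance, with the invented diagonal example

$$A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}, \qquad \mathbf{v} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \lambda = 2,$$

the pair determines the solution $\mathbf{x}(t) = e^{2t}\begin{pmatrix} 1 \\ 0 \end{pmatrix}$, and indeed $\mathbf{x}' = 2\mathbf{x} = A\mathbf{x}$.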

Remember that if $\mathbf{v}$ is an eigenvector of $A$, then any multiple $c\mathbf{v}$ is also an eigenvector. In terms of solutions, this simply means that if $\mathbf{x}(t)$ is a solution of the form $e^{\lambda t}\mathbf{v}$, then $c\,\mathbf{x}(t)$ is also a solution. We already knew this: it is part of linearity, the fact that solutions are vectors in a vector space.

The Picard-Lindelöf theorem implies that the space of solutions has dimension equal to the dimension $n$ of the vectors $\mathbf{x}$ and $\mathbf{v}$. (The dimension of a vector is just the number of components it has.) Therefore a set of solutions is a complete set when (a) we have $n$ of them, and (b) they are independent when considered as vectors. The Wronskian (using the determinant) can tell us when $n$ vectors having $n$ components are independent.
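Numerically, this independence test is just a determinant. A minimal sketch, with made-up solution values as the columns:

```python
import numpy as np

# Hypothetical values of three solution vectors at some time t,
# placed as the columns of a matrix (invented numbers).
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Wronskian test: the columns are independent exactly when det != 0.
W = np.linalg.det(X)
print(W, "-> independent" if abs(W) > 1e-12 else "-> dependent")
```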

When there are $n$ independent eigenvectors, and all of their eigenvalues are real numbers, the eigenvector strategy works perfectly without further complication. Just find all eigenvector-eigenvalue pairs and write down all the solutions they determine, and we are done.
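In that happy case the whole procedure can be mechanized. A sketch, assuming an invented matrix for which `numpy` finds a full set of real eigenpairs:

```python
import numpy as np

# Invented symmetric matrix: real eigenvalues, full set of eigenvectors.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])           # eigenvalues 2 and 4

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are the v_i

def solution(t, c):
    """General solution x(t) = sum_i c_i * exp(lambda_i * t) * v_i."""
    return eigvecs @ (c * np.exp(eigvals * t))

# Spot-check x' = A x with a finite-difference derivative.
c = np.array([1.0, -2.0])
t, h = 0.3, 1e-6
dx = (solution(t + h, c) - solution(t - h, c)) / (2 * h)
print(np.allclose(dx, A @ solution(t, c), atol=1e-4))  # True
```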

The main challenge of the eigenvector strategy is handling situations of two types. It can happen that (a) some eigenvalues involve imaginary numbers, or (b) some systems do not have enough eigenvectors. In applications, the former tends to arise in oscillatory systems, and the latter arises in analogues of critical damping.

Example

Eigenvector approach: real eigenvalues, enough eigenvectors

Problem: Solve the system given by . Solution: First find the eigenvalues:

The roots are . Now we seek eigenvectors by solving $(A - \lambda I)\mathbf{v} = \mathbf{0}$ for each root. When :

This is equivalent to the system , , and . Solving this system, we have and can be anything. Plugging these in we have , and we choose obtaining .

Next for we have:

and the equivalent system is , , and . Solving this system, we have . Plugging these in we have and we choose obtaining .

Finally for we have:

and the equivalent system is , and . Solving this system, we have . Plugging in we have , and we choose obtaining .

(Note: the only reason we have been choosing is for convenience. Anything except zero will work.)

Notice that we have found three independent eigenvectors for a 3D system, so we are done. Here is the complete set of solutions:

Complex eigenvalues

Suppose a solution to the eigenvalue equation $\det(A - \lambda I) = 0$ is a complex eigenvalue $\lambda = a + bi$ (with $b \neq 0$).

Because the matrix $A$ has real-valued entries, such eigenvalue roots must come in conjugate pairs, meaning we would always find both eigenvalues $a \pm bi$. Furthermore, the corresponding eigenvectors also come in a complex conjugate pair $\mathbf{v}$ and $\bar{\mathbf{v}}$:

$$A\mathbf{v} = \lambda\mathbf{v} \quad\Longrightarrow\quad A\bar{\mathbf{v}} = \bar{\lambda}\,\bar{\mathbf{v}}.$$

(This follows by taking the complex conjugate of both sides of the equation, using the fact that conjugation and multiplication can be performed in either order.)

We proceed as in the second-order linear theory, and combine the solutions $e^{\lambda t}\mathbf{v}$ and $e^{\bar{\lambda}t}\bar{\mathbf{v}}$ in such a way as to extract vector functions having exclusively real-valued entries. Writing $\mathbf{v} = \mathbf{p} + i\mathbf{q}$ with $\mathbf{p}$ and $\mathbf{q}$ real, we have:

$$\mathbf{x}_{\mathrm{re}}(t) = e^{at}\big(\cos(bt)\,\mathbf{p} - \sin(bt)\,\mathbf{q}\big), \qquad \mathbf{x}_{\mathrm{im}}(t) = e^{at}\big(\sin(bt)\,\mathbf{p} + \cos(bt)\,\mathbf{q}\big).$$

In summary: whenever $\lambda = a + bi$ is an eigenvalue for the eigenvector $\mathbf{v} = \mathbf{p} + i\mathbf{q}$, the formulas above for $\mathbf{x}_{\mathrm{re}}$ and $\mathbf{x}_{\mathrm{im}}$ give two independent solutions to the system $\mathbf{x}' = A\mathbf{x}$.
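The symbolic verification is Question 13-01 below; as a quick numeric sanity check (with an invented matrix), one can confirm that the real part of $e^{\lambda t}\mathbf{v}$ solves the system:

```python
import numpy as np

# Invented matrix with complex eigenvalues 1 ± 2i.
A = np.array([[1.0, -2.0],
              [2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
lam, v = eigvals[0], eigvecs[:, 0]   # one member of the conjugate pair

def x_re(t):
    # Real part of e^{lam t} v, i.e. e^{at}(cos(bt) p - sin(bt) q).
    return (np.exp(lam * t) * v).real

# Finite-difference check that x_re' = A x_re.
t, h = 0.7, 1e-6
dx = (x_re(t + h) - x_re(t - h)) / (2 * h)
print(np.allclose(dx, A @ x_re(t), atol=1e-4))  # True
```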

Question 13-01

Extracting real solutions

Verify the real-valued formulas for $\mathbf{x}_{\mathrm{re}}$ and $\mathbf{x}_{\mathrm{im}}$ by plugging in $\lambda = a + bi$ and $\mathbf{v} = \mathbf{p} + i\mathbf{q}$ and then expanding and simplifying.

Example

System with complex eigenvalues and eigenvectors

Problem: Solve the system . (Assume .) What are the solutions for that take the initial value at ?

Solution: The matrix has so . To find the eigenvectors:

Therefore and . These equations are redundant since the second is times the first. Dividing by and then plugging in we have , and we choose obtaining:

We treat similarly. To find the eigenvectors:

Therefore and . These equations are redundant since the second is times the first. Dividing by and then plugging in we have , and we choose obtaining:

The solutions are therefore

When and these solutions reduce to:

We seek satisfying . Clearly we need and . So our final answer (with ) is:

Exercise 13-01 (Discussion Worksheet problem)

Complex eigenvalues

Solve the system . First find the complex solutions, then find the real solutions.

Insufficient eigenvectors

Suppose a solution to the eigenvalue equation $\det(A - \lambda I) = 0$ is a repeated eigenvalue, so for example the equation has a factor like $(\lambda - \lambda_1)^2$. In this case, sometimes it happens that the eigenvalue does not have enough eigenvectors associated to it.

Example

System with too few eigenvectors

Problem: Solve the system .

Solution: The matrix has . Therefore it has a “double” eigenvalue . To find the eigenvectors, solve $(A - \lambda I)\mathbf{v} = \mathbf{0}$:

This tells us and can be anything. Therefore we have the eigenvectors and we may choose , obtaining .

However, no more independent eigenvectors are available (other than scalar multiples of this one, which would not be independent). Therefore it is impossible to find a complete set of solutions using eigenvectors.

Let us consider the way around this difficulty. It is not very simple! Still, we try to take the simplest path.

Suppose our situation is that $(\lambda - \lambda_1)^2$ appears as a factor in $\det(A - \lambda I)$, and that we have a single eigenvector $\mathbf{v}_1$ satisfying $A\mathbf{v}_1 = \lambda_1\mathbf{v}_1$. So we start with the solution $e^{\lambda_1 t}\mathbf{v}_1$. We seek a second solution.

There is a concept of generalized eigenvector, which is any $\mathbf{v}$ that satisfies $(A - \lambda_1 I)^k\mathbf{v} = \mathbf{0}$ for higher powers $k$. According to linear algebra theory, whenever $(\lambda - \lambda_1)^k$ appears as a factor in $\det(A - \lambda I)$, there will be enough independent generalized eigenvectors satisfying $(A - \lambda_1 I)^k\mathbf{v} = \mathbf{0}$. Notice that if $(A - \lambda_1 I)\mathbf{v} = \mathbf{0}$ then $(A - \lambda_1 I)^k\mathbf{v} = \mathbf{0}$, so regular eigenvectors always count as generalized eigenvectors.
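For instance, for a $2 \times 2$ Jordan block (an illustrative matrix, not one of this packet's examples),

$$A = \begin{pmatrix} \lambda_1 & 1 \\ 0 & \lambda_1 \end{pmatrix}, \qquad A - \lambda_1 I = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad (A - \lambda_1 I)^2 = 0,$$

there is only one independent regular eigenvector, $\mathbf{v}_1 = (1, 0)^T$, but every vector satisfies $(A - \lambda_1 I)^2\mathbf{v} = \mathbf{0}$; in particular $\mathbf{v}_2 = (0, 1)^T$ is a generalized eigenvector with $(A - \lambda_1 I)\mathbf{v}_2 = \mathbf{v}_1$.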

Now let us consider the system $\mathbf{x}' = A\mathbf{x}$ again to see how to extract new solutions from generalized eigenvectors with $k = 2$. We start by computing two linearly independent solutions to $(A - \lambda_1 I)^2\mathbf{v} = \mathbf{0}$, called $\mathbf{v}_1$ and $\mathbf{v}_2$, which are connected according to multiplication by $(A - \lambda_1 I)$:

$$(A - \lambda_1 I)\mathbf{v}_2 = \mathbf{v}_1, \qquad (A - \lambda_1 I)\mathbf{v}_1 = \mathbf{0}.$$

So $\mathbf{v}_1$ is a regular eigenvector, while $\mathbf{v}_2$ is only a generalized one with $k = 2$. (In a sense, $\mathbf{v}_2$ is “deeper down” than $\mathbf{v}_1$. So regular eigenvectors do not always reach deep enough down into preimages of $\mathbf{0}$.)

We already know that $e^{\lambda_1 t}\mathbf{v}_1$ solves the system, since $\mathbf{v}_1$ is a regular eigenvector with eigenvalue $\lambda_1$.

Now look what happens when we plug $e^{\lambda_1 t}\mathbf{v}_2$ into the equation:

$$\frac{d}{dt}\left(e^{\lambda_1 t}\mathbf{v}_2\right) - A\left(e^{\lambda_1 t}\mathbf{v}_2\right) = \lambda_1 e^{\lambda_1 t}\mathbf{v}_2 - e^{\lambda_1 t}\left(\lambda_1\mathbf{v}_2 + \mathbf{v}_1\right) = -e^{\lambda_1 t}\mathbf{v}_1.$$

And look what happens when we plug $t\,e^{\lambda_1 t}\mathbf{v}_1$ into the equation:

$$\frac{d}{dt}\left(t\,e^{\lambda_1 t}\mathbf{v}_1\right) - A\left(t\,e^{\lambda_1 t}\mathbf{v}_1\right) = e^{\lambda_1 t}\mathbf{v}_1 + \lambda_1 t\,e^{\lambda_1 t}\mathbf{v}_1 - \lambda_1 t\,e^{\lambda_1 t}\mathbf{v}_1 = e^{\lambda_1 t}\mathbf{v}_1.$$

Insufficient eigenvectors

If we add these two functions to form $\mathbf{x}(t) = e^{\lambda_1 t}\left(\mathbf{v}_2 + t\,\mathbf{v}_1\right)$, the error terms cancel and the ODE equation $\mathbf{x}' = A\mathbf{x}$ holds.
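A numeric spot-check of this formula, using an invented $2 \times 2$ Jordan block with $\lambda_1 = 2$:

```python
import numpy as np

lam = 2.0
A = np.array([[lam, 1.0],
              [0.0, lam]])          # only one independent eigenvector

v1 = np.array([1.0, 0.0])           # regular eigenvector
v2 = np.array([0.0, 1.0])           # generalized: (A - lam*I) v2 = v1

def x(t):
    # Candidate second solution e^{lam t} (v2 + t v1).
    return np.exp(lam * t) * (v2 + t * v1)

# Finite-difference check that x' = A x.
t, h = 0.4, 1e-6
dx = (x(t + h) - x(t - h)) / (2 * h)
print(np.allclose(dx, A @ x(t), atol=1e-4))  # True
```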

Example

Solving a system with insufficient eigenvectors

Problem: Solve the system .

Solution: The matrix has .

For we solve:

and obtain , . So we get an eigenvector and a solution .

For we first compute :

Next seek solutions to:

We want and independent vector solutions such that . These vectors must satisfy , and that is the only condition. Consider the vector . Notice that , so we define . This also satisfies , and it is independent of .

Now that we have and , we can write the pair of solutions we need:

Finally, our complete family of solutions is the set of linear combinations of those we have found:

Exercise 13-02 (Discussion Worksheet problem)

Insufficient eigenvectors

Solve the IVP: .

Matrix exponential approach

Recall that the matrix exponential approach gives us the formula $X(t) = e^{tA}$ for the fundamental matrix solution to $X' = AX$. Each column of $e^{tA}$ is a solution vector function, and these columns are independent of each other. The complete family of solutions to $\mathbf{x}' = A\mathbf{x}$ is:

$$\mathbf{x}(t) = e^{tA}\,\mathbf{c}, \qquad \mathbf{c} \in \mathbb{R}^n \text{ an arbitrary constant vector}.$$

This approach has no difficulty in finding a formula for the solution. The challenge for this approach is: How do we describe the formula in more practical terms?

A complete description of the answer to this challenge gets too complicated for a single Packet and requires more linear algebra than this course assumes. Instead of trying to give a complete description, our goal will be to glimpse the main ideas and then let your imagination fill in the rest.

Key observation

The central observation we need is this. First, we can rewrite the series in an ‘expansion around $\lambda$’:

$$e^{tA} = e^{\lambda t}\,e^{t(A - \lambda I)} = e^{\lambda t}\left(I + t(A - \lambda I) + \frac{t^2}{2!}(A - \lambda I)^2 + \frac{t^3}{3!}(A - \lambda I)^3 + \cdots\right).$$

Now suppose we have a basis of generalized eigenvectors $\mathbf{v}_1, \dots, \mathbf{v}_n$. This means $(A - \lambda_i I)^{k_i}\mathbf{v}_i = \mathbf{0}$ for each $i$ (where $\lambda_i$ is the eigenvalue belonging to $\mathbf{v}_i$), and the $\mathbf{v}_i$ are independent of each other.

Then when $e^{tA}$ is applied to any of the $\mathbf{v}_i$, the terms of the second factor series are eventually just zeros:

$$e^{tA}\mathbf{v}_i = e^{\lambda_i t}\left(\mathbf{v}_i + t(A - \lambda_i I)\mathbf{v}_i + \cdots + \frac{t^{k_i - 1}}{(k_i - 1)!}(A - \lambda_i I)^{k_i - 1}\mathbf{v}_i + \mathbf{0} + \mathbf{0} + \cdots\right),$$

where the zeros occur because $(A - \lambda_i I)^j\mathbf{v}_i = \mathbf{0}$ whenever $j \geq k_i$.

This finite series can then be calculated. No limits of matrix sums need be taken.
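As an illustration, with an invented matrix having a triple eigenvalue, the terminating sum matches SciPy's general-purpose `expm`:

```python
import numpy as np
from scipy.linalg import expm

# Invented matrix: triple eigenvalue lam = 2 with (A - 2I)^3 = 0.
lam = 2.0
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
N = A - lam * np.eye(3)             # nilpotent part: N^3 = 0

t = 0.8
v = np.array([1.0, -1.0, 2.0])      # arbitrary test vector

# Terminating series: e^{tA} v = e^{lam t} (v + t N v + t^2/2 N^2 v).
series = np.exp(lam * t) * (v + t * (N @ v) + t**2 / 2 * (N @ N @ v))

print(np.allclose(series, expm(t * A) @ v))  # True
```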

Example

Explicit matrix exponential using series termination

Problem: Solve the system . (Same system as a previous example.)

Solution: This time we solve the system using and the generalized eigenvectors. Recall that the matrix has . In the previous solution to this example we already found the eigenvector with eigenvalue , and we found the generalized eigenvectors and with eigenvalue . (We had also arranged that , but we do not need this fact here.)

Now we have:

Similarly, since is a genuine eigenvector, we have the same calculation but for and :

Now for the generalized eigenvector, the summation has one additional nonzero term:

Now remember that , so the extra term which isn’t cancelled is actually . So we arrive at the solution:

The middle formulation may be recognized as the same result we got by the previous method. So our approach here justifies the solutions of the form $e^{\lambda t}(\mathbf{v}_2 + t\,\mathbf{v}_1)$ studied previously.

In order to generalize the previous method to all cases you could encounter, it suffices to be able to find a set of $k$ independent generalized eigenvectors whenever $\det(A - \lambda I) = (\lambda_1 - \lambda)^k$ (or has the latter as a factor). The best general method for this is to use “row reduction” from linear algebra to find a basis of the null space of $(A - \lambda_1 I)^k$, i.e. to solve for $\mathbf{v}$ in the vector equation $B\mathbf{v} = \mathbf{0}$ when $B = (A - \lambda_1 I)^k$.
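SymPy's exact row reduction does this directly; a sketch with an invented matrix:

```python
import sympy as sp

# Invented example: lam = 2 is a triple root of det(A - lam*I).
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 2]])
lam, k = 2, 3

# Row-reduce to find a basis of the null space of (A - lam*I)^k;
# its members are the generalized eigenvectors.
B = (A - lam * sp.eye(3))**k
print(B.nullspace())   # three independent vectors, as expected
```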

Example

Explicit matrix exponential using series termination

Problem: Solve the system .

Solution: The matrix has . So is a factor, and we expect three independent generalized eigenvectors with eigenvalue .

Next we compute that and . Therefore , , and are independent vectors that satisfy , so they are a basis of generalized eigenvectors. Now we compute acting on these vectors:

Similarly we have:

and:

These three functions give us independent solutions. In fact, since , and , we can simply write a fundamental matrix by putting these solutions as the columns:

Problems due Thursday 18 Apr 2024 by 11:59pm

Easier Problems

Problem 13-01

Complex eigenvalues

Find the complete family of solutions for the system .

Problem 13-02

Generalized eigenvector

Find the complete family of solutions for the system .

Harder Problems

Problem 13-03

Cauchy-Euler system

The Cauchy-Euler equation is given by .

  • (a) Show how to convert this order linear equation to a system of the form . (Hint: .)
  • (b) Show that the solutions to this system have the form , for an eigenvector of with eigenvalue .
  • (c) Solve the system .
Problem 13-04

Repeated roots

  • (a) Consider the generic equation with repeated roots: . (Repeated roots implies that and .) Solve this ODE by converting it to a 2-variable system and using the matrix exponential technique.
  • (b) Solve the 3rd-order ODE given by by converting it to a 3-variable system and using the matrix exponential technique. (Hint: it works well to use , , and for your generalized eigenvectors.) (Note: you haven’t been able to solve with a triple repeated root before!)
  • (c) Verify that , , and each solve the equation in (b). Translate these functions into the vector-valued format for the system you created in (b). Express these vector solutions in terms of the complete family you found with the matrix exponential technique in (b) by choosing appropriate parameter constants.