Linear systems: homogeneous, constant coefficients

We are interested in systems of equations that can be written in the form 𝐲′=A𝐲, where 𝐲(t) is a vector function of t, and A is a matrix with constant entries (independent of t).

Our goal is to find a complete set of solutions to 𝐲′=A𝐲. It is not enough to find just one. How many solutions constitute a complete set?

Picard-Lindelöf for first-order systems

The system 𝐲′(t)=A𝐲(t), 𝐲(0)=𝐲₀ always has existence and uniqueness of solutions, for any initial vector 𝐲₀ and any constant coefficient matrix A.

This implies: the set of all solutions is a vector space (by linearity) whose dimension equals the dimension of 𝐲₀ (by the theorem).

Eigenvectors

The eigenvector approach to solving the system starts from the observation that if 𝐯 is an eigenvector of A with eigenvalue λ, meaning that A𝐯=λ𝐯, then the vector function 𝐲(t)=𝐯e^{λt} is a solution to the system.

The downsides of this approach are that the solution cannot be written down until all the eigenvalues have been computed; that the eigenvalues frequently involve imaginary numbers; and that there may simply not be enough eigenvectors to give a complete family.

Matrix exponentials

The matrix exponential approach to solving the same system starts by constructing the function Y(t)=e^{At} using power series. This function is a matrix-valued function of t that satisfies Y′=AY, and because e^{At} is always an invertible matrix, the columns of Y(t) automatically give a complete set of solutions.

The downside of this approach is that it is hard to compute e^{At}, and the typical method requires finding the eigenvalues of A anyway. On the other hand, it is not necessary to use imaginary numbers, because A can be put into what is called “Real Jordan Form” (with only real numbers). This form uses an extension of the idea of eigenvector that is designed specifically to handle the situation of missing eigenvectors. “Putting into” this form simply means changing variables 𝐳=P𝐲 by some invertible transformation P. The matrix PAP⁻¹ of the new equation 𝐳′=(PAP⁻¹)𝐳 will have the Real Jordan Form.

Eigenvector approach

Suppose we have any system of the form 𝐲′=A𝐲 for a matrix A with constant entries a_{ij}. Suppose 𝐯 is an eigenvector of A with eigenvalue λ, meaning that A𝐯=λ𝐯. (Note that 𝐯 also has constant entries, like A.) Then the function 𝐲(t)=𝐯e^{λt} solves the system:

$$\begin{aligned}
\mathbf{y}'(t) &= (\mathbf{v}e^{\lambda t})' = \mathbf{v}'e^{\lambda t} + \mathbf{v}\,(e^{\lambda t})' = \mathbf{0} + \mathbf{v}\,\lambda e^{\lambda t} = \lambda(\mathbf{v}e^{\lambda t}) = \lambda\,\mathbf{y}(t)\\
A\mathbf{y}(t) &= A(\mathbf{v}e^{\lambda t}) = (A\mathbf{v})e^{\lambda t} = (\lambda\mathbf{v})e^{\lambda t} = \lambda(\mathbf{v}e^{\lambda t}) = \lambda\,\mathbf{y}(t).
\end{aligned}$$

Eigenvector-eigenvalue solutions

Any eigenvector-eigenvalue pair 𝐯, λ determines a solution 𝐲(t)=𝐯e^{λt}.

Remember that if 𝐯 is an eigenvector of A, then any multiple a𝐯 is also an eigenvector. In terms of solutions, this simply means that if 𝐲(t) is a solution of the form 𝐯eλt, then a𝐲(t) is also a solution. We already knew this: it is part of linearity – the fact that solutions are vectors in a vector space.

The Picard-Lindelöf theorem implies that the space of solutions has dimension n equal to the dimension of the vectors 𝐲 and 𝐲₀. (The dimension of 𝐲 is just the number of components it has.) Therefore a set of solutions 𝐲₁(t), …, 𝐲ₖ(t) is a complete set when (a) we have k=n, and (b) they are independent when considered as vectors. The Wronskian (using the determinant) can tell us whether n vectors having n components are independent.
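For instance, the independence test amounts to a single determinant. Here is an optional numerical sketch in Python/numpy (the three vectors are illustrative candidate solutions evaluated at t = 0, not taken from a specific system):

```python
import numpy as np

# Minimal sketch of the independence test: place the n candidate
# solution vectors (illustrative 3-component vectors, evaluated
# at t = 0) as the columns of a matrix.
V = np.array([[0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [0.0, 2.0, 1.0]])

# A nonzero Wronskian (determinant) at one value of t means the
# n solutions are independent, hence form a complete set.
W = np.linalg.det(V)
assert abs(W) > 1e-12
```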

When there are n independent eigenvectors, and all of their eigenvalues are real numbers, the eigenvector strategy works perfectly without further complication. Just find all eigenvector-eigenvalue pairs and write down all the solutions they determine, and we are done.

The main challenge of the eigenvector strategy is handling situations of two types: it can happen that (a) some eigenvalues involve imaginary numbers, or (b) the system does not have enough eigenvectors. In applications, the former tends to arise for oscillatory systems, and the latter arises for analogues of critical damping.

Example

Eigenvector approach: real eigenvalues, enough eigenvectors

Problem: Solve the system given by
$$\mathbf{y}' = \begin{pmatrix}4 & 0 & -1\\ 2 & 1 & 0\\ 2 & 0 & 1\end{pmatrix}\mathbf{y}.$$
Solution: First find the eigenvalues:

$$\det(A-\lambda I) = \begin{vmatrix}4-\lambda & 0 & -1\\ 2 & 1-\lambda & 0\\ 2 & 0 & 1-\lambda\end{vmatrix} = (4-\lambda)\begin{vmatrix}1-\lambda & 0\\ 0 & 1-\lambda\end{vmatrix} - 0\begin{vmatrix}2 & 0\\ 2 & 1-\lambda\end{vmatrix} + (-1)\begin{vmatrix}2 & 1-\lambda\\ 2 & 0\end{vmatrix} = (4-\lambda)(1-\lambda)^2 - 0 + 2(1-\lambda) = -\lambda^3 + 6\lambda^2 - 11\lambda + 6.$$

The roots are λ=1, 2, 3. Now we seek eigenvectors by solving (A−λI)𝐱=𝟎 for each root λ. When λ=1:

$$\begin{pmatrix}3 & 0 & -1\\ 2 & 0 & 0\\ 2 & 0 & 0\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}.$$

This is equivalent to the system 3x₁−x₃=0, 2x₁=0, and 2x₁=0. Solving this system, we have x₁=0=x₃ and x₂ can be anything. Plugging these in we have 𝐱=(0, x₂, 0)ᵀ, and we choose x₂=1, obtaining (0, 1, 0)ᵀ.

Next for λ=2 we have:

$$\begin{pmatrix}2 & 0 & -1\\ 2 & -1 & 0\\ 2 & 0 & -1\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}$$

and the equivalent system is 2x₁−x₃=0, 2x₁−x₂=0, and 2x₁−x₃=0. Solving this system, we have x₂=x₃=2x₁. Plugging these in we have 𝐱=(x₁, 2x₁, 2x₁)ᵀ, and we choose x₁=1, obtaining (1, 2, 2)ᵀ.

Finally for λ=3 we have:

$$\begin{pmatrix}1 & 0 & -1\\ 2 & -2 & 0\\ 2 & 0 & -2\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}$$

and the equivalent system is x₁−x₃=0, 2x₁−2x₂=0, and 2x₁−2x₃=0. Solving this system, we have x₂=x₃=x₁. Plugging in we have 𝐱=(x₁, x₁, x₁)ᵀ, and we choose x₁=1, obtaining (1, 1, 1)ᵀ.

(Note: the only reason we have been choosing x1=1 is for convenience. Anything except zero will work.)

Notice that we have found three independent eigenvectors for a 3D system, so we are done. Here is the complete set of solutions:

$$C_1\begin{pmatrix}0\\1\\0\end{pmatrix}e^{t} + C_2\begin{pmatrix}1\\2\\2\end{pmatrix}e^{2t} + C_3\begin{pmatrix}1\\1\\1\end{pmatrix}e^{3t}.$$
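As an optional check of the hand computation, the eigenvalues and eigenvectors can be verified numerically in Python/numpy. The matrix entries below assume the sign convention from the cofactor expansion (in particular a₁₃ = −1):

```python
import numpy as np

# Coefficient matrix from the worked example (sign convention assumed
# from the cofactor expansion: a13 = -1).
A = np.array([[4.0, 0.0, -1.0],
              [2.0, 1.0,  0.0],
              [2.0, 0.0,  1.0]])

# numpy recovers the eigenvalues 1, 2, 3 ...
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 2.0, 3.0])

# ... and each hand-computed eigenvector satisfies A v = lambda v.
pairs = [(1.0, np.array([0.0, 1.0, 0.0])),
         (2.0, np.array([1.0, 2.0, 2.0])),
         (3.0, np.array([1.0, 1.0, 1.0]))]
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
```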

Complex eigenvalues

Suppose a solution to the eigenvalue equation det(A−λI)=0 is a complex eigenvalue λ=a+bi.

Because the matrix A−λI has real-valued entries, such eigenvalue roots must come in conjugate pairs, meaning we would always find both eigenvalues λ=a±bi. Furthermore, the corresponding eigenvectors also come in a complex conjugate pair $\mathbf{v}_+=\mathbf{v}_a+\mathbf{v}_b i$ and $\mathbf{v}_-=\mathbf{v}_a-\mathbf{v}_b i$:

$$A\mathbf{v}_+ = (a+bi)\,\mathbf{v}_+ \qquad A\mathbf{v}_- = (a-bi)\,\mathbf{v}_-.$$

(This follows by taking the complex conjugate of both sides of the equation, using the fact that conjugation and multiplication can be performed in either order.)

We proceed as in the 2nd order linear theory, and combine the solutions $\mathbf{y}_+(t)=\mathbf{v}_+e^{(a+bi)t}$ and $\mathbf{y}_-(t)=\mathbf{v}_-e^{(a-bi)t}$ in such a way as to extract vector functions having exclusively real-valued entries. We have:

$$\begin{aligned}
\mathbf{y}_1(t) &:= \tfrac{1}{2}\,(\mathbf{y}_+ + \mathbf{y}_-) = \mathbf{v}_a e^{at}\cos(bt) - \mathbf{v}_b e^{at}\sin(bt)\\
\mathbf{y}_2(t) &:= \tfrac{1}{2i}(\mathbf{y}_+ - \mathbf{y}_-) = \mathbf{v}_a e^{at}\sin(bt) + \mathbf{v}_b e^{at}\cos(bt).
\end{aligned}$$

In summary: whenever λ=a+bi is an eigenvalue for the eigenvector $\mathbf{v}=\mathbf{v}_a+\mathbf{v}_b i$, the formulas above for 𝐲₁ and 𝐲₂ give two independent real solutions to the system 𝐲′=A𝐲.
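These formulas can also be sanity-checked numerically. The matrix below is an illustrative choice (an assumption, not from the text) with a = 1, b = 2, for which the eigenvector computation gives 𝐯ₐ = (0, 1) and 𝐯_b = (1, 0):

```python
import numpy as np

# Illustrative matrix with eigenvalues 1 +/- 2i and eigenvector
# v = v_a + v_b*i, where v_a = (0, 1) and v_b = (1, 0).
a, b = 1.0, 2.0
A = np.array([[a, -b],
              [b,  a]])
va, vb = np.array([0.0, 1.0]), np.array([1.0, 0.0])

def y1(t):
    return va * np.exp(a*t) * np.cos(b*t) - vb * np.exp(a*t) * np.sin(b*t)

def y2(t):
    return va * np.exp(a*t) * np.sin(b*t) + vb * np.exp(a*t) * np.cos(b*t)

# Check y' = A y at a sample time, using central differences.
t, h = 0.7, 1e-6
for y in (y1, y2):
    deriv = (y(t + h) - y(t - h)) / (2 * h)
    assert np.allclose(deriv, A @ y(t), atol=1e-5)
```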

Question 13-01

Extracting real solutions

Verify the real-valued formulas for 𝐲₁ and 𝐲₂ by plugging in 𝐲₊ and 𝐲₋ and then expanding and simplifying.

Example

System with complex eigenvalues and eigenvectors

Problem: Solve the system
$$\mathbf{y}' = \begin{pmatrix}a & -b\\ b & a\end{pmatrix}\mathbf{y}.$$
(Assume $a\neq 0$, $b\neq 0$.) What are the solutions for a=b=1 that take the initial value 𝐲₀=(A, B)ᵀ at t=0?

Solution: The matrix $\begin{pmatrix}a & -b\\ b & a\end{pmatrix}$ has det(A−λI)=(λ−a)²+b², so that (λ−a)²=−b² and λ=a±bi. To find the eigenvectors:

$$(A-(a+bi)I)\,\mathbf{x} = \begin{pmatrix}-bi & -b\\ b & -bi\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}.$$

Therefore $-bix_1-bx_2=0$ and $bx_1-bix_2=0$. These equations are redundant, since the second is $i$ times the first. Dividing by $b$ and then plugging in $x_1=ix_2$, we have $(ix_2, x_2)^T$, and we choose $x_2=1$, obtaining:

$$\mathbf{v}_+ = \begin{pmatrix}i\\1\end{pmatrix} = \begin{pmatrix}0\\1\end{pmatrix} + \begin{pmatrix}1\\0\end{pmatrix}i = \mathbf{v}_a + \mathbf{v}_b i,\qquad \mathbf{v}_a=\begin{pmatrix}0\\1\end{pmatrix},\quad \mathbf{v}_b=\begin{pmatrix}1\\0\end{pmatrix}.$$

We treat λ=a−bi similarly. To find the eigenvectors:

$$(A-(a-bi)I)\,\mathbf{x} = \begin{pmatrix}bi & -b\\ b & bi\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}.$$

Therefore $bix_1-bx_2=0$ and $bx_1+bix_2=0$. These equations are redundant, since the second is $-i$ times the first. Dividing by $b$ and then plugging in $x_1=-ix_2$, we have $(-ix_2, x_2)^T$, and we choose $x_2=1$, obtaining:

$$\mathbf{v}_- = \begin{pmatrix}-i\\1\end{pmatrix} = \begin{pmatrix}0\\1\end{pmatrix} - \begin{pmatrix}1\\0\end{pmatrix}i = \mathbf{v}_a - \mathbf{v}_b i,\qquad \mathbf{v}_a,\ \mathbf{v}_b \text{ as above}.$$

The solutions are therefore

$$\begin{aligned}
\mathbf{y}_1(t) &= \begin{pmatrix}0\\1\end{pmatrix}e^{at}\cos(bt) - \begin{pmatrix}1\\0\end{pmatrix}e^{at}\sin(bt)\\
\mathbf{y}_2(t) &= \begin{pmatrix}0\\1\end{pmatrix}e^{at}\sin(bt) + \begin{pmatrix}1\\0\end{pmatrix}e^{at}\cos(bt).
\end{aligned}$$

When a=b=1 and t=0 these solutions reduce to:

$$\mathbf{y}_1(0)=\begin{pmatrix}0\\1\end{pmatrix},\qquad \mathbf{y}_2(0)=\begin{pmatrix}1\\0\end{pmatrix}.$$

We seek 𝐲(t)=C₁𝐲₁+C₂𝐲₂ satisfying 𝐲(0)=(A, B)ᵀ. Clearly we need C₁=B and C₂=A. So our final answer (with a=b=1) is:

$$\mathbf{y}(t) = B\left(\begin{pmatrix}0\\1\end{pmatrix}e^{t}\cos(t) - \begin{pmatrix}1\\0\end{pmatrix}e^{t}\sin(t)\right) + A\left(\begin{pmatrix}0\\1\end{pmatrix}e^{t}\sin(t) + \begin{pmatrix}1\\0\end{pmatrix}e^{t}\cos(t)\right) = \begin{pmatrix}A\\B\end{pmatrix}e^{t}\cos(t) + \begin{pmatrix}-B\\A\end{pmatrix}e^{t}\sin(t).$$
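The final formula admits a quick numerical check in Python/numpy. The matrix below is the a=b=1 case with the sign convention assumed from the eigenvector computation, and the initial-value constants are renamed A0, B0 to avoid clashing with the matrix name:

```python
import numpy as np

# a = b = 1 case of the example; sign convention (a, -b; b, a)
# assumed from the eigenvector computation.
M = np.array([[1.0, -1.0],
              [1.0,  1.0]])
A0, B0 = 3.0, -2.0   # sample initial-value constants

def y(t):
    # y(t) = (A0, B0) e^t cos t + (-B0, A0) e^t sin t
    return (np.array([A0, B0]) * np.exp(t) * np.cos(t)
            + np.array([-B0, A0]) * np.exp(t) * np.sin(t))

# The initial condition holds ...
assert np.allclose(y(0.0), [A0, B0])

# ... and y' = M y, checked by central differences.
t, h = 0.8, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)
assert np.allclose(deriv, M @ y(t), atol=1e-5)
```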
Exercise 13-01 (Discussion Worksheet problem)

Complex eigenvalues

Solve the system 𝐲′=(31351)𝐲. First find the complex solutions, then find the real solutions.

Insufficient eigenvectors

Suppose the eigenvalue equation det(A−λI)=0 has a repeated root, so for example the characteristic polynomial has a factor like (λ−a)ᵏ. In this case, it sometimes happens that the eigenvalue λ=a does not have enough eigenvectors associated to it.

Example

System with too few eigenvectors

Problem: Solve the system
$$\mathbf{y}' = \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}\mathbf{y}.$$

Solution: The matrix $A=\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}$ has det(A−λI)=(1−λ)². Therefore it has a “double” eigenvalue λ=+1. To find the eigenvectors, solve (A−λI)𝐱=𝟎:

$$(A-I)\,\mathbf{x} = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}.$$

This tells us x₂=0 and x₁ can be anything. Therefore we have the eigenvectors (x₁, 0)ᵀ, and we may choose x₁=1, obtaining (1, 0)ᵀ.

However, no more independent eigenvectors are available (other than scalar multiples of this one, which would not be independent). Therefore it is impossible to find a complete set of solutions using eigenvectors.

Let us consider the way around this difficulty. The way around is not very simple! Still, we try to take the simplest path.

Suppose our situation is that (λ−a)² is a factor of det(A−λI), and that we have a single eigenvector 𝐯 satisfying (A−aI)𝐯=𝟎. So we start with the solution 𝐯e^{at}. We seek a second solution.

There is a concept of generalized eigenvector, which is any 𝐱 that satisfies (A−λI)ᵏ𝐱=𝟎 for some power k≥1. According to linear algebra theory, whenever (λ−a)ᵏ is a factor of det(A−λI), there will be enough independent generalized eigenvectors satisfying (A−aI)ᵏ𝐱=𝟎. Notice that if (A−λI)𝐱=𝟎 then (A−λI)²𝐱=𝟎, so regular eigenvectors always count as generalized eigenvectors.

Now let us consider the system 𝐲′=A𝐲 again to see how to extract new solutions from generalized eigenvectors with k≥2. We start by computing two linearly independent solutions to (A−aI)²𝐱=𝟎, called 𝐯₀ and 𝐯₁, which are connected by multiplication by A−aI:

$$\mathbf{v}_0 \xrightarrow{\ A-aI\ } \mathbf{v}_1 \xrightarrow{\ A-aI\ } \mathbf{0}.$$

So 𝐯₁ is a regular eigenvector, while 𝐯₀ is only a generalized one with k=2. (In a sense, 𝐯₀ is “deeper down” than 𝐯₁. Regular eigenvectors do not always reach deep enough down into the preimages of A−aI.)

We already know that (𝐯₁e^{at})′=A(𝐯₁e^{at}), since 𝐯₁ is a regular eigenvector with eigenvalue a.

Now look what happens when we plug 𝐯₁te^{at} into the equation:

$$\begin{aligned}
(\mathbf{v}_1 t e^{at})' &= \mathbf{v}_1 e^{at} + a\,\mathbf{v}_1 t e^{at},\\
A(\mathbf{v}_1 t e^{at}) &= (A-aI+aI)\,\mathbf{v}_1 t e^{at} = (A-aI)\,\mathbf{v}_1 t e^{at} + a\,\mathbf{v}_1 t e^{at} = a\,\mathbf{v}_1 t e^{at}.
\end{aligned}$$

And look what happens when we plug 𝐯₀e^{at} into the equation:

$$\begin{aligned}
(\mathbf{v}_0 e^{at})' &= a\,\mathbf{v}_0 e^{at},\\
A(\mathbf{v}_0 e^{at}) &= (A-aI+aI)\,\mathbf{v}_0 e^{at} = (A-aI)\,\mathbf{v}_0 e^{at} + a\,\mathbf{v}_0 e^{at} = \mathbf{v}_1 e^{at} + a\,\mathbf{v}_0 e^{at}.
\end{aligned}$$

Insufficient eigenvectors

Neither function solves the equation by itself, but if we add them to form 𝐲(t)=𝐯₀e^{at}+𝐯₁te^{at}, the mismatched terms cancel and the ODE holds.
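For the 2×2 example above this can be checked numerically in Python/numpy. Here 𝐯₁ = (1, 0), and 𝐯₀ = (0, 1) is one choice satisfying (A−I)𝐯₀ = 𝐯₁ (the text does not compute 𝐯₀ for that example, so this choice is an illustration):

```python
import numpy as np

# The 2x2 example: A has double eigenvalue a = 1, regular eigenvector
# v1 = (1, 0), and generalized eigenvector v0 = (0, 1).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v1 = np.array([1.0, 0.0])
v0 = np.array([0.0, 1.0])
assert np.allclose((A - np.eye(2)) @ v0, v1)   # (A - I) v0 = v1

def y(t):
    # Combined solution y(t) = v0 e^{at} + v1 t e^{at}, with a = 1.
    return v0 * np.exp(t) + v1 * t * np.exp(t)

# Finite-difference check that y' = A y.
t, h = 0.3, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)
assert np.allclose(deriv, A @ y(t), atol=1e-5)
```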

Example

Solving a system with insufficient eigenvectors

Problem: Solve the system
$$\mathbf{y}' = \begin{pmatrix}1 & 0 & 0\\ 1 & 3 & 0\\ 0 & 1 & 1\end{pmatrix}\mathbf{y}.$$

Solution: The matrix $A=\begin{pmatrix}1 & 0 & 0\\ 1 & 3 & 0\\ 0 & 1 & 1\end{pmatrix}$ has det(A−λI)=(1−λ)²(3−λ).

For λ=3 we solve:

$$\begin{pmatrix}-2 & 0 & 0\\ 1 & 0 & 0\\ 0 & 1 & -2\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}$$

and obtain x₁=0, x₂=2x₃. So we get an eigenvector (0, 2, 1)ᵀ and a solution $\mathbf{y}_3(t)=\begin{pmatrix}0\\2\\1\end{pmatrix}e^{3t}$.

For λ=1 we first compute (A−I)²:

$$(A-I)^2 = \begin{pmatrix}0 & 0 & 0\\ 1 & 2 & 0\\ 0 & 1 & 0\end{pmatrix}^2 = \begin{pmatrix}0 & 0 & 0\\ 2 & 4 & 0\\ 1 & 2 & 0\end{pmatrix}.$$

Next seek solutions 𝐱 to:

$$(A-I)^2\,\mathbf{x} = \begin{pmatrix}0 & 0 & 0\\ 2 & 4 & 0\\ 1 & 2 & 0\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}.$$

We want 𝐯₀ and 𝐯₁ independent vector solutions such that (A−I)𝐯₀=𝐯₁. These vectors must satisfy x₁+2x₂=0, and that is the only condition. Consider the vector 𝐯₀=(−2, 1, 0)ᵀ. Notice that (A−I)𝐯₀=(0, 0, 1)ᵀ, so we define 𝐯₁=(0, 0, 1)ᵀ. This also satisfies x₁+2x₂=0, and it is independent of 𝐯₀.

Now that we have 𝐯0 and 𝐯1, we can write the pair of solutions we need:

$$\mathbf{y}_1(t) = \mathbf{v}_1 e^{t},\qquad \mathbf{y}_2(t) = \mathbf{v}_0 e^{t} + \mathbf{v}_1 t e^{t}.$$

Finally, our complete family of solutions is the set of linear combinations of those we have found:

$$A\begin{pmatrix}0\\2\\1\end{pmatrix}e^{3t} + B\begin{pmatrix}0\\0\\1\end{pmatrix}e^{t} + C\left(\begin{pmatrix}-2\\1\\0\end{pmatrix}e^{t} + \begin{pmatrix}0\\0\\1\end{pmatrix}te^{t}\right) = \begin{pmatrix}-2Ce^{t}\\ 2Ae^{3t}+Ce^{t}\\ Ae^{3t}+Be^{t}+Cte^{t}\end{pmatrix}.$$
Exercise 13-02 (Discussion Worksheet problem)

Insufficient eigenvectors

Solve the IVP:
$$x'(t) = 7x + y,\qquad y'(t) = -4x + 3y,\qquad \begin{pmatrix}x(0)\\y(0)\end{pmatrix} = \begin{pmatrix}2\\5\end{pmatrix}.$$

Matrix exponential approach

Recall that the matrix exponential approach gives us the formula Y(t)=e^{At} for the fundamental matrix solution to 𝐲′=A𝐲. Each column of Y=(𝐲₁ ⋯ 𝐲ₙ) is a solution vector function, and these columns are independent of each other. The complete family of solutions to 𝐲′=A𝐲 is:

$$Y\mathbf{c} = Y\begin{pmatrix}C_1\\C_2\\\vdots\\C_n\end{pmatrix} = C_1\mathbf{y}_1 + \cdots + C_n\mathbf{y}_n.$$

This approach has no difficulty in finding a formula for the solution. The challenge for this approach is: How do we describe the formula e^{At} in more practical terms?

A complete description of the answer to this challenge gets too complicated for a single Packet and requires more linear algebra than this course assumes. Instead of trying to give a complete description, our goal will be to see a glimpse of the main ideas and then let your imagination fill in the rest.

Key observation

The central observation we need is this. First, we can rewrite the series as an ‘expansion around λ’:

$$e^{At} = e^{\lambda t}e^{(A-\lambda I)t} = e^{\lambda t}\left(I + t(A-\lambda I) + \tfrac{t^2}{2!}(A-\lambda I)^2 + \tfrac{t^3}{3!}(A-\lambda I)^3 + \cdots\right).$$

Now suppose we have a basis of generalized eigenvectors 𝐮₁, …, 𝐮ₖ for the eigenvalue λ. This means (A−λI)ᵏ𝐮ᵢ=𝟎 for i=1, …, k, and 𝐮₁, …, 𝐮ₖ are independent of each other.

Then when e^{At} is applied to any of the 𝐮ᵢ, the terms of the second factor's series are eventually just zeros:

$$e^{At}\mathbf{u}_i = e^{\lambda t}\left(I + t(A-\lambda I) + \tfrac{t^2}{2!}(A-\lambda I)^2 + \tfrac{t^3}{3!}(A-\lambda I)^3 + \cdots\right)\mathbf{u}_i = e^{\lambda t}\left(\mathbf{u}_i + t(A-\lambda I)\mathbf{u}_i + \tfrac{t^2}{2!}(A-\lambda I)^2\mathbf{u}_i + \cdots + \tfrac{t^{k-1}}{(k-1)!}(A-\lambda I)^{k-1}\mathbf{u}_i + \mathbf{0} + \mathbf{0} + \cdots\right)$$

where the zeros occur because (A−λI)ʲ𝐮ᵢ=𝟎 whenever j≥k.

This finite series can then be calculated. No limits of matrix sums need be taken.
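To make the termination concrete, here is a small numerical sketch in Python/numpy. The matrix A = (1 1; 0 1) is an illustrative choice (λ = 1, and N = A − I squares to zero, so the series stops after the linear term):

```python
import numpy as np

# Illustrative matrix: lambda = 1 and N = A - I is nilpotent with
# N^2 = 0, so e^{At} u = e^t (u + t N u) exactly.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
N = A - lam * np.eye(2)
assert np.allclose(N @ N, np.zeros((2, 2)))   # series terminates

def expAt(t, u):
    # Terminating series: e^{lambda t} (u + t N u).
    return np.exp(lam * t) * (u + t * (N @ u))

# Verify that t -> e^{At} u solves y' = A y, via central differences.
u = np.array([2.0, -1.0])
t, h = 0.5, 1e-6
deriv = (expAt(t + h, u) - expAt(t - h, u)) / (2 * h)
assert np.allclose(deriv, A @ expAt(t, u), atol=1e-5)
```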

Example

Explicit matrix exponential using series termination

Problem: Solve the system
$$\mathbf{y}' = \begin{pmatrix}1 & 0 & 0\\ 1 & 3 & 0\\ 0 & 1 & 1\end{pmatrix}\mathbf{y}.$$
(Same system as a previous example.)

Solution: This time we solve the system using e^{At} and the generalized eigenvectors. Recall that the matrix has det(A−λI)=(1−λ)²(3−λ). In the previous solution to this example we already found the eigenvector (0, 2, 1)ᵀ with eigenvalue λ=3, and we found the generalized eigenvectors 𝐯₀=(−2, 1, 0)ᵀ and 𝐯₁=(0, 0, 1)ᵀ with eigenvalue λ=1. (We had also arranged that (A−I)𝐯₀=𝐯₁, but we do not need this fact yet.)

Now we have:

$$e^{At}\begin{pmatrix}0\\2\\1\end{pmatrix} = e^{3t}e^{(A-3I)t}\begin{pmatrix}0\\2\\1\end{pmatrix} = e^{3t}\left(I + t(A-3I) + \tfrac{t^2}{2!}(A-3I)^2 + \cdots\right)\begin{pmatrix}0\\2\\1\end{pmatrix} = e^{3t}\left(\begin{pmatrix}0\\2\\1\end{pmatrix} + \mathbf{0} + \mathbf{0} + \cdots\right) = e^{3t}\begin{pmatrix}0\\2\\1\end{pmatrix},$$
where every term after the first vanishes because it contains the factor $(A-3I)\begin{pmatrix}0\\2\\1\end{pmatrix} = \mathbf{0}$.

Similarly, since 𝐯₁ is a genuine eigenvector for λ=1, we have the same calculation but with A−I and 𝐯₁:

$$e^{At}\begin{pmatrix}0\\0\\1\end{pmatrix} = e^{t}\left(I + t(A-I) + \tfrac{t^2}{2!}(A-I)^2 + \cdots\right)\begin{pmatrix}0\\0\\1\end{pmatrix} = e^{t}\left(\begin{pmatrix}0\\0\\1\end{pmatrix} + \mathbf{0} + \mathbf{0} + \cdots\right) = e^{t}\begin{pmatrix}0\\0\\1\end{pmatrix}.$$

Now for the generalized eigenvector, the summation has one additional nonzero term:

$$e^{At}\begin{pmatrix}-2\\1\\0\end{pmatrix} = e^{t}\left(I + t(A-I) + \tfrac{t^2}{2!}(A-I)^2 + \cdots\right)\begin{pmatrix}-2\\1\\0\end{pmatrix} = e^{t}\left(\begin{pmatrix}-2\\1\\0\end{pmatrix} + t(A-I)\begin{pmatrix}-2\\1\\0\end{pmatrix} + \mathbf{0} + \cdots\right) = e^{t}\begin{pmatrix}-2\\1\\0\end{pmatrix} + te^{t}\begin{pmatrix}0\\0\\1\end{pmatrix}.$$

Now remember that (A−I)𝐯₀=𝐯₁, so the extra term which isn’t cancelled is actually t𝐯₁. So we arrive at the solution:

$$e^{At}\begin{pmatrix}-2\\1\\0\end{pmatrix} = e^{t}\left(\begin{pmatrix}-2\\1\\0\end{pmatrix} + t\begin{pmatrix}0\\0\\1\end{pmatrix}\right) = \begin{pmatrix}-2\\1\\0\end{pmatrix}e^{t} + \begin{pmatrix}0\\0\\1\end{pmatrix}te^{t} = \begin{pmatrix}-2e^{t}\\ e^{t}\\ te^{t}\end{pmatrix}.$$

The middle formulation may be recognized as the same result we got by the previous method. So our approach here justifies the solutions of the form 𝐲(t)=𝐯₀e^{at}+𝐯₁te^{at} studied previously.
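The same calculation can be verified numerically, using the example matrix and the pair 𝐯₀, 𝐯₁ found earlier:

```python
import numpy as np

# The 3x3 example matrix together with the generalized pair v0, v1.
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])
v0 = np.array([-2.0, 1.0, 0.0])
v1 = np.array([0.0, 0.0, 1.0])
N = A - np.eye(3)
assert np.allclose(N @ v0, v1)                    # (A - I) v0 = v1
assert np.allclose(N @ (N @ v0), np.zeros(3))     # (A - I)^2 v0 = 0

def y(t):
    # e^{At} v0 from the terminating series:
    # e^t (v0 + t (A - I) v0) = v0 e^t + v1 t e^t.
    return np.exp(t) * (v0 + t * (N @ v0))

# Finite-difference check that y' = A y.
t, h = 0.4, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)
assert np.allclose(deriv, A @ y(t), atol=1e-5)
```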

In order to generalize the previous method to all the cases you could encounter, it suffices to be able to find a set of k independent generalized eigenvectors whenever det(A−λI) has (λ−a)ᵏ as a factor. The best general method for this is to use “row reduction” from linear algebra to find a basis of the null space of (A−aI)ᵏ, i.e. to solve for 𝐱 in the vector equation (A−λI)ᵏ𝐱=𝟎 when λ=a.

Example

Explicit matrix exponential using series termination

Problem: Solve the system
$$\mathbf{y}' = \begin{pmatrix}2 & 1 & -1\\ -3 & -1 & 1\\ 9 & 3 & -4\end{pmatrix}\mathbf{y}.$$

Solution: The matrix has det(A−λI)=−λ³−3λ²−3λ−1=−(λ+1)³. So (λ+1)³ is a factor, and we expect three independent generalized eigenvectors with eigenvalue −1.

Next we compute that $(A+I)^2=\begin{pmatrix}-3 & 0 & 1\\ 0 & 0 & 0\\ -9 & 0 & 3\end{pmatrix}$ and $(A+I)^3=0$. Therefore 𝐞₁, 𝐞₂, and 𝐞₃ are independent vectors that satisfy (A+I)³𝐱=𝟎, so they are a basis of generalized eigenvectors. Now we compute e^{At} acting on these vectors:

$$e^{At}\mathbf{e}_1 = e^{-t}\left(I + t(A+I) + \tfrac{t^2}{2!}(A+I)^2 + \tfrac{t^3}{3!}(A+I)^3 + \cdots\right)\mathbf{e}_1 = e^{-t}\left(\mathbf{e}_1 + t(A+I)\mathbf{e}_1 + \tfrac{t^2}{2!}(A+I)^2\mathbf{e}_1\right) = e^{-t}\left(\begin{pmatrix}1\\0\\0\end{pmatrix} + t\begin{pmatrix}3\\-3\\9\end{pmatrix} + \tfrac{t^2}{2!}\begin{pmatrix}-3\\0\\-9\end{pmatrix}\right).$$

Similarly we have:

$$e^{At}\mathbf{e}_2 = e^{-t}\left(I + t(A+I) + \tfrac{t^2}{2!}(A+I)^2 + \tfrac{t^3}{3!}(A+I)^3 + \cdots\right)\mathbf{e}_2 = e^{-t}\left(\mathbf{e}_2 + t(A+I)\mathbf{e}_2 + \tfrac{t^2}{2!}(A+I)^2\mathbf{e}_2\right) = e^{-t}\left(\begin{pmatrix}0\\1\\0\end{pmatrix} + t\begin{pmatrix}1\\0\\3\end{pmatrix} + \tfrac{t^2}{2!}\begin{pmatrix}0\\0\\0\end{pmatrix}\right),$$

and:

$$e^{At}\mathbf{e}_3 = e^{-t}\left(I + t(A+I) + \tfrac{t^2}{2!}(A+I)^2 + \tfrac{t^3}{3!}(A+I)^3 + \cdots\right)\mathbf{e}_3 = e^{-t}\left(\mathbf{e}_3 + t(A+I)\mathbf{e}_3 + \tfrac{t^2}{2!}(A+I)^2\mathbf{e}_3\right) = e^{-t}\left(\begin{pmatrix}0\\0\\1\end{pmatrix} + t\begin{pmatrix}-1\\1\\-3\end{pmatrix} + \tfrac{t^2}{2!}\begin{pmatrix}1\\0\\3\end{pmatrix}\right).$$

These three functions give us independent solutions. In fact, since (𝐞₁ 𝐞₂ 𝐞₃)=I₃ and e^{At}I₃=e^{At}, we can simply write a fundamental matrix by putting these solutions in as the columns:

$$X(t) = e^{At} = \begin{pmatrix} \left(1+3t-\tfrac{3t^2}{2!}\right)e^{-t} & te^{-t} & \left(-t+\tfrac{t^2}{2!}\right)e^{-t}\\ -3te^{-t} & e^{-t} & te^{-t}\\ \left(9t-\tfrac{9t^2}{2!}\right)e^{-t} & 3te^{-t} & \left(1-3t+\tfrac{3t^2}{2!}\right)e^{-t} \end{pmatrix}.$$
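As a final check, the nilpotency and the fundamental matrix can be verified numerically. Note that the entries of A below are a sign reconstruction (an assumption), chosen to be consistent with the computations in this example:

```python
import numpy as np

# Sign-reconstructed candidate for the example matrix (treat these
# entries as an assumption).
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  1.0],
              [ 9.0,  3.0, -4.0]])
N = A + np.eye(3)
assert np.allclose(np.linalg.matrix_power(N, 3), np.zeros((3, 3)))  # (A+I)^3 = 0

def X(t):
    # e^{At} from the terminating series e^{-t}(I + tN + (t^2/2!) N^2).
    return np.exp(-t) * (np.eye(3) + t * N + (t**2 / 2) * (N @ N))

# Fundamental-matrix checks: X(0) = I and X' = A X (central differences).
assert np.allclose(X(0.0), np.eye(3))
t, h = 0.6, 1e-6
deriv = (X(t + h) - X(t - h)) / (2 * h)
assert np.allclose(deriv, A @ X(t), atol=1e-5)
```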

Problems due Thursday 18 Apr 2024 by 11:59pm

Easier Problems

Problem 13-01

Complex eigenvalues

Find the complete family of solutions for the system 𝐲′=(121011011)𝐲.

Problem 13-02

Generalized eigenvector

Find the complete family of solutions for the system 𝐲′=(1113)𝐲.

Harder Problems

Problem 13-03

Cauchy-Euler system

The Cauchy-Euler equation is given by at²y″+bty′+cy=0.

  • (a) Show how to convert this 2nd order linear equation to a system of the form t𝐲′=A𝐲. (Hint: y₀=y, y₁=ty′.)
  • (b) Show that the solutions to this system have the form 𝐲(t)=t^λ 𝐯, for 𝐯 an eigenvector of A with eigenvalue λ.
  • (c) Solve the system t𝐲′=(1191)𝐲.
Problem 13-04

Repeated roots

  • (a) Consider the generic equation with repeated roots: ay″+by′+cy=0. (Repeated roots implies that b²=4ac and r=−b/2a.) Solve this ODE by converting it to a 2-variable system and using the matrix exponential technique.
  • (b) Solve the 3rd order ODE given by y‴−6y″+12y′−8y=0 by converting it to a 3-variable system and using the matrix exponential technique. (Hint: it works well to use 𝐞₁, (A−2I)𝐞₁, and (A−2I)²𝐞₁ for your generalized eigenvectors.) (Note: you haven’t been able to solve with a triple repeated root before!)
  • (c) Verify that e^{2t}, te^{2t}, and t²e^{2t} each solve the equation in (b). Translate these functions into the vector-valued format for the system you created in (b). Express these vector solutions in terms of the complete family you found with the matrix exponential technique in (b) by choosing appropriate parameter constants.