Packet 09

Matrices I: Reduction and Decomposition

A very important general method for understanding what a matrix does when it acts upon vectors is to transform it into a matrix, or a product of matrices, of standardized form. Each standardized form is designed to make certain aspects of matrix behavior easy to see, or to make certain calculations that apply the matrix easy to perform. Typically a matrix is converted to a standardized form using an algorithmic procedure that runs without intuitive or subjective assessment along the way, and the final result is studied afterwards.

In this packet we study a standardized form called 'LU factorization' that is produced using the technique of elimination. Given the LU factorization (or a refinement of part of it called row-echelon form or reduced row-echelon form), one can read off qualitative aspects of the matrix such as its rank, which is the dimension of its image subspace, as well as provide a basis for the image and a basis for the kernel, which is the subspace that is reduced to zero by the matrix action.

In addition to this qualitative information about the action of a matrix on some vectors, the LU factorization can be followed by another important technique called back-substitution. The combined process is typically used for inverting matrix equations such as these:

$$A\mathbf{x}=\mathbf{b},\qquad AX=B.$$

The vector inverse of $\mathbf{b}$ under the action of A is the vector $\mathbf{x}$ which A sends to $\mathbf{b}$. The matrix inverse of B under the action of A is the matrix $X=A^{-1}B$ which is sent to B by the action of A multiplying it on the left. (In the second case, if $B=I_n$ is the identity, then we are just computing $X=A^{-1}$.) We will see below how the LU factorization process followed by back-substitution allows us to find $\mathbf{x}$ or X.

LU factorization

An LU factorization of a matrix A is simply a lower triangular matrix L and an upper triangular matrix U such that A=LU.

For example 1.

$$A=\begin{pmatrix}1&2&3\\2&5&10\\3&2&-6\end{pmatrix}=\begin{pmatrix}1&0&0\\2&1&0\\3&-4&1\end{pmatrix}\begin{pmatrix}1&2&3\\0&1&4\\0&0&1\end{pmatrix}=LU$$

or 2.

$$A=\begin{pmatrix}2&4&-1&5&-2\\-4&-5&3&-8&1\\2&-5&-4&1&8\\-6&0&7&-3&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\-2&1&0&0\\1&-3&1&0\\-3&4&2&1\end{pmatrix}\begin{pmatrix}2&4&-1&5&-2\\0&3&1&2&-3\\0&0&0&2&1\\0&0&0&0&5\end{pmatrix}=LU.$$

Observe:

  • In 1., both L and U have ones on the diagonal, and all matrices are square.
  • In 2., A and U are not square, L has ones on the diagonal, while U has other values: 2,3,0,0.

We require the L matrix to have ones on the diagonal, while the U matrix may have other diagonal values.

A more general rule for U called "row-echelon form" will be explained below; it is applicable when zeros on the diagonal crop up in the process; in essence it means the entries should be mashed "high and to the right."

Keys to LU factorization

The principle of elimination is to use row-add matrices acting on the left in order to eliminate nonzero entries of A that are below the main diagonal entries, clearing a column. These row-add matrices are then collected into a single lower-triangular matrix that combines the actions of all of them.

Creating columns: Observe first that we can combine row-add matrices which add from the same row into a single matrix having a single column of nonzero subdiagonal entries, for example:

$$\begin{pmatrix}1&0&0&0\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\2&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&1&0&0\\3&0&1&0\\0&0&0&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\4&0&0&1\end{pmatrix}.$$

The order in the product on the RHS does not matter!
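The commuting claim can be checked numerically. Below is a minimal pure-Python sketch (the helper names `matmul` and `row_add_matrix` are our own, not library routines): we build the three row-add matrices above and multiply them in two different orders.

```python
# A small numerical check (not a proof) that row-add matrices which add from
# the same row commute. Matrices are plain lists of rows; matmul by hand.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def row_add_matrix(n, i, j, lam):
    """n x n identity with an extra entry lam at row i, column j:
    the matrix that adds lam times row j into row i."""
    E = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    E[i][j] = lam
    return E

# Three row-add matrices, all with their nonzero subdiagonal entry in column 0:
E2 = row_add_matrix(4, 1, 0, 2)
E3 = row_add_matrix(4, 2, 0, 3)
E4 = row_add_matrix(4, 3, 0, 4)

# Two different orderings of the product give the same single-column combo:
P1 = matmul(E2, matmul(E3, E4))
P2 = matmul(E4, matmul(E2, E3))
assert P1 == P2 == [[1, 0, 0, 0], [2, 1, 0, 0], [3, 0, 1, 0], [4, 0, 0, 1]]
```

Question 09-01 below asks you to explain *why* this happens in general, which the numerical check does not do.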

Question 09-01

Order doesn't matter when multiplying row-add matrices of the same column

Explain why (or prove algebraically that) order does not matter when multiplying row-add matrices with their nonzero subdiagonal entries lying in the same column. Explain this in general, not just for the 4×4 matrix above.

Combining columns: Furthermore, we can break up any lower-triangular matrix L into a product of single-column matrices like the one above:

$$L=\begin{pmatrix}1&0&0&0\\2&1&0&0\\3&5&1&0\\4&6&7&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&5&1&0\\0&6&0&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&7&1\end{pmatrix}=L_1\cdot L_2\cdot L_3.$$

Combining columns – order matters

The three factors in the RHS product cannot be reordered without changing the result!

By combining the above two factorizations, we can write any lower-triangular matrix L (with ones on the diagonal) as a product of row-add matrices, and these row-add matrices can be grouped into single-column row-add combo matrices. If L is lower-triangular with n rows, and we write $L_i$ for the single-column combo with nonzeros in the ith column, then we have $L=L_1\cdot L_2\cdot\cdots\cdot L_{n-1}$. When you see $L_i$ you should think "single-column, row-add combo" and remember that the column of nonzeros lies below the main diagonal of ones.

Inverse of $L_i$: The last critical fact is that the inverse of any single-column row-add combo is the same matrix but with opposite signs on the subdiagonal entries:

$$\begin{pmatrix}1&0&0&0\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\-2&1&0&0\\-3&0&1&0\\-4&0&0&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}.$$

Computing LU factorizations

Here is the plan:

  1. Postulate hypothetically that $A=LU=(L_1\cdots L_{n-1})U$, where the $L_i$ are single-column row-add combos which combine to form the hypothetical L. We know A and we seek L and U.
  2. Act upon A and LU on the left simultaneously by $L_1^{-1}$, then by $L_2^{-1}$, etc., through $L_{n-1}^{-1}$. We must invent $L_1^{-1}$, $L_2^{-1}$, etc. as we go along:
    1. Choose $L_1^{-1}$ such that $L_1^{-1}A$ has all zeros in the first column below the diagonal: in each nonzero subdiagonal entry of $L_1^{-1}$, put the multiplier $\lambda$ such that the row-add of $\lambda$ times the top row into row 2 eliminates the entry in row 2, and so on, to create the entire first column of $L_1^{-1}$, which acts to clear the first column of A.
    2. Continue with $L_2^{-1}$ chosen such that $L_2^{-1}(L_1^{-1}A)$ now has zeros below the diagonal in the first and second columns, where $L_2^{-1}$ has subdiagonal entries $\lambda$ in the second column.
    3. Continue through $L_{n-1}^{-1}$ and obtain $(L_{n-1}^{-1}\cdots L_1^{-1})A=U$. By flipping the signs on the subdiagonal entries of each $L_i^{-1}$, for $i=1,\dots,n-1$, we now know the $L_i$, and we simply combine the subdiagonal columns of all the $L_i$ to create the whole L.
    4. Notice! The matrix $L_i$ is exactly the unique single-column row-add combo matrix whose ith column is cleared by whatever clears the ith column of A. In other words, we cook up $L_i$ so that the same elimination works both on A and on $L_i$, the difference being that $L_i$ has a 1 at the top entry by definition, while the corresponding entry of A may have other values.

In other words, we find the matrices $L_1,\dots,L_{n-1}$ by backwards reasoning: row-add eliminations performed on A should simultaneously effect eliminations on $L_1,\dots,L_{n-1}$, converting them each to the identity matrix $I_n$. If this works, we know we have the $L_i$ ingredients inside the factorization $A=LU=(L_1\cdots L_{n-1})U$.
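The plan above can be sketched as a short program. The following is a minimal pure-Python sketch (the function name `lu` is ours, not a library routine) using exact rational arithmetic; it raises an error on a zero pivot, which is exactly the situation handled by pivoting later in this packet.

```python
from fractions import Fraction

# A sketch of LU factorization by elimination, following the plan above: we
# apply the L_i^{-1} actions to a working copy of A and record the multipliers
# (which are the sign-flipped entries of L_i^{-1}) directly into L.

def lu(A):
    """Return (L, U) with A = L U, L unit lower triangular. No pivoting."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]   # working copy becomes U
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for j in range(n - 1):                 # clear column j below the diagonal
        if U[j][j] == 0:
            raise ZeroDivisionError("zero pivot: pivoting required")
        for i in range(j + 1, n):
            lam = U[i][j] / U[j][j]        # multiplier clearing entry (i, j)
            L[i][j] = lam                  # sign-flipped entry goes into L
            for k in range(j, len(U[0])):
                U[i][k] -= lam * U[j][k]   # row-add: row i -= lam * row j
    return L, U

# Example 1 from the start of the packet:
A = [[1, 2, 3], [2, 5, 10], [3, 2, -6]]
L, U = lu(A)
assert L == [[1, 0, 0], [2, 1, 0], [3, -4, 1]]
assert U == [[1, 2, 3], [0, 1, 4], [0, 0, 1]]
```

Exact fractions are used so that the recovered L and U match the hand computation digit for digit; floating point would introduce rounding.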

Example

Basic LU factorization

Problem: Show how to obtain the LU factorization:

$$A=\begin{pmatrix}2&-3&5&-8\\4&-5&11&-16\\6&-4&17&-26\\8&-6&5&-45\end{pmatrix}=\begin{pmatrix}1&0&0&0\\2&1&0&0\\3&5&1&0\\4&6&7&1\end{pmatrix}\begin{pmatrix}2&-3&5&-8\\0&1&1&0\\0&0&-3&-2\\0&0&0&1\end{pmatrix}=LU$$

Solution: We explain the full details of factoring A into LU below. You are encouraged to use MatrixCalc.org to help you master the process, but remember that on the exams you must be able to run the process by hand.

Since A is a 4×4 matrix, we are looking for $L_1,L_2,L_3$ such that $A=(L_1L_2L_3)U$ and each $L_i$ is a single-column row-add combo matrix. Since the factorization is given to us, we know the answers should be:

$$L_1=\begin{pmatrix}1&0&0&0\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{pmatrix},\quad L_2=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&5&1&0\\0&6&0&1\end{pmatrix},\quad L_3=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&7&1\end{pmatrix}.$$

Obtain these as follows. To eliminate the subdiagonal entries in the first column of A, we must add $-2$ times row 1 into row 2, then $-3$ times row 1 into row 3, and then $-4$ times row 1 into row 4.

To cook up $L_1$, we first place ones on the diagonal and zeros elsewhere (namely $I_4$). Then we arrange the first column so that it is cleared by the same sequence of steps, except now we have 1 in the first row whereas A had 2. Thus: $-2$ times 1 added to 2 gives 0, then $-3$ times 1 added to 3 gives 0, and $-4$ times 1 added to 4 gives 0. The sequence $(1,2,3,4)$ is therefore appropriate for the first column of $L_1$, since it will be cleared by whatever row-adds clear the first column of A.

We have thus found $L_1$ as the matrix which is sent to $I_4$ by the action of the elimination process just described. This elimination process is represented by the action of the single-column row-add combo matrix:

$$L_1^{-1}=\begin{pmatrix}1&0&0&0\\-2&1&0&0\\-3&0&1&0\\-4&0&0&1\end{pmatrix}$$

and the matrix just given is indeed the inverse of $L_1$ that we sought, since $L_1^{-1}L_1=I_4$.

After applying the process to A and finding $L_1$, we proceed to find $L_2$: we create the column $(1,-5,-6)$ of $L_2^{-1}$ in order to clear the 5, 6 from the second column of $L_1^{-1}A$:

$$L_1^{-1}A=\begin{pmatrix}1&0&0&0\\-2&1&0&0\\-3&0&1&0\\-4&0&0&1\end{pmatrix}\begin{pmatrix}2&-3&5&-8\\4&-5&11&-16\\6&-4&17&-26\\8&-6&5&-45\end{pmatrix}=\begin{pmatrix}2&-3&5&-8\\0&1&1&0\\0&5&2&-2\\0&6&-15&-13\end{pmatrix}$$

$$L_2^{-1}(L_1^{-1}A)=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&-5&1&0\\0&-6&0&1\end{pmatrix}\begin{pmatrix}2&-3&5&-8\\0&1&1&0\\0&5&2&-2\\0&6&-15&-13\end{pmatrix}=\begin{pmatrix}2&-3&5&-8\\0&1&1&0\\0&0&-3&-2\\0&0&-21&-13\end{pmatrix}.$$

Therefore

$$L_2=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&5&1&0\\0&6&0&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&-5&1&0\\0&-6&0&1\end{pmatrix}^{-1}$$

is the right choice for $L_2$, and we are almost finished. Observe that $-21=-3\cdot 7$. Thus:

$$L_3^{-1}(L_2^{-1}L_1^{-1}A)=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&-7&1\end{pmatrix}\begin{pmatrix}2&-3&5&-8\\0&1&1&0\\0&0&-3&-2\\0&0&-21&-13\end{pmatrix}=\begin{pmatrix}2&-3&5&-8\\0&1&1&0\\0&0&-3&-2\\0&0&0&1\end{pmatrix}$$

and so

$$L_3=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&7&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&-7&1\end{pmatrix}^{-1}.$$

Question 09-02

Compute LU factorization

Find the LU factorization of the matrix $A=\begin{pmatrix}7&11&-28\\0&5&14\\1&4&31\end{pmatrix}$.

Exercise 09-01

Computing LU factorization

Find the LU factorization of $A=\begin{pmatrix}0&1\\0&1\end{pmatrix}$.

Exercise 09-02

Computing LU factorization

Find the LU factorization of $B=\begin{pmatrix}1&0&3\\0&2&-8\\2&14&-50\end{pmatrix}$.

Exercise 09-03

Computing LU factorization

Find the LU factorization of $C=\begin{pmatrix}0&0&0&1\\0&1&0&0\\0&0&0&1\\0&1&0&1\end{pmatrix}$.

Pivoting

The algorithm presented above for LU factorization can break down because of zeros. For example, consider A:

$$A=\begin{pmatrix}0&1&1&0\\2&-3&5&-8\\6&-4&17&-26\\8&-6&5&-45\end{pmatrix}.$$

We cannot even begin to eliminate the entries 2, 6, 8 in the first column using row-add steps, because the top entry is zero. In this situation, it is best to pivot to a different top row by using a row-swap permutation. The new top row is sometimes called the pivot row, and the new entry used to eliminate others by row-adding may be called the pivot entry or simply the pivot.

Write P for the row-swap matrix that swaps rows 1 and 2. After pivoting, we have a new matrix:

$$PA=\begin{pmatrix}0&1&0&0\\1&0&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}\begin{pmatrix}0&1&1&0\\2&-3&5&-8\\6&-4&17&-26\\8&-6&5&-45\end{pmatrix}=\begin{pmatrix}2&-3&5&-8\\0&1&1&0\\6&-4&17&-26\\8&-6&5&-45\end{pmatrix}.$$

The new matrix has an LU factorization now:

$$\begin{pmatrix}2&-3&5&-8\\0&1&1&0\\6&-4&17&-26\\8&-6&5&-45\end{pmatrix}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\3&5&1&0\\4&6&7&1\end{pmatrix}\begin{pmatrix}2&-3&5&-8\\0&1&1&0\\0&0&-3&-2\\0&0&0&1\end{pmatrix}.$$

It is appropriate to say that "A has an LU factorization with pivots", or to say that "we have a factorization PA=LU." We do not say simply that "A has an LU factorization" if pivots are required.

Every matrix A has a factorization of the form PA=LU.

Numerical methods

For numerical stability, it is preferable to always pivot to the largest entry in a given column before performing elimination. This technique is called "partial pivoting."

("Complete pivoting" refers to a technique where elimination is performed to clear any uncleared column by choosing the largest remaining entry across the matrix. It is computationally expensive and therefore rarely used.)

The general procedure for computing PA=LU factorizations is fairly complex because the natural algorithm interlaces elimination (using factors $L_i^{-1}$) with permutations for the pivots (using row-swap matrices $P_i$). You are not required to master this procedure for exams.

The general process of computing U by successive elimination of terms with pivots, clearing out one column at a time, is called row reduction (see below). You are required to master row-reduction for exams.

The difference between row reduction and the procedure for PA=LU is that in row reduction we do not keep track of the $L_i$ and $P_i$ and do not calculate L and P; we simply perform the row-swap and row-add operations on A until the L factor inside A is gone (by clearing columns) and the result is U.

Here is the theory for producing PA=LU in general:

Factorization PA=LU

For i=1,…,n, work through row i of A:

  1. Choose a row of A to pivot into the first row, defining the corresponding row-swap matrix $P_1$. After swapping we have $P_1A$. (For numerical stability, choose the row of A with the largest entry in the first column; but in principle, we can choose any row with a nonzero entry in the first column.)
  2. Construct $L_1^{-1}$ to eliminate entries in row 2 and below, clearing column 1 of $P_1A$. Obtain $L_1^{-1}P_1A$.
  3. Proceed to row 2: choose a pivot row, define the row-swap $P_2$, and obtain $P_2L_1^{-1}P_1A$.
  4. Construct $L_2^{-1}$ to clear column 2 of $P_2L_1^{-1}P_1A$.
  5. Repeat with $P_i$ a row-swap putting a chosen pivot from below row i into row i.
  6. Repeat with $L_i^{-1}$ clearing the ith column below the ith row.
  7. End when i=n.

In this way we build a sequence:

$$L_{n-1}^{-1}P_{n-1}L_{n-2}^{-1}\cdots P_2L_1^{-1}P_1A=U.$$

If we didn't have all the $P_i$ matrices interlaced between the $L_i^{-1}$'s, this would be $L_{n-1}^{-1}\cdots L_1^{-1}A=U$, and therefore $A=L_1\cdots L_{n-1}U$ as in the ordinary LU factorization. So what we want to do here is to extract the $P_i$'s out to the right and group them into P. This is the hardest part!

There is a trick we can use to make this happen. It turns out that $P_jL_i^{-1}P_j^{-1}$ (for $i<j$) is the same as $L_i^{-1}$, except that the nonzero stack of entries undergoes the row-swap of $P_j$. Using this fact, together with the fact that $P_j^{-1}=P_j$ because $P_j$ swaps two rows, we can rewrite our sequence in terms of other single-column matrices called $L_i'$. For example, in the case n=4, we could write:

$$L_3'^{-1}=L_3^{-1},\qquad L_2'^{-1}=P_3L_2^{-1}P_3,\qquad L_1'^{-1}=P_3P_2L_1^{-1}P_2P_3.$$

From these definitions, it follows that

$$(L_3'^{-1}L_2'^{-1}L_1'^{-1})(P_3P_2P_1)A=L_3^{-1}P_3L_2^{-1}P_2L_1^{-1}P_1A=U$$

and thus

$$(P_3P_2P_1)A=(L_1'L_2'L_3')U,\qquad\text{i.e.}\qquad PA=LU,$$

where we define $P=P_3P_2P_1$ and $L=L_1'L_2'L_3'$.

Here is a concise computational algorithm to perform these steps on a computer:

Algorithm on entries for PA=LU factorization

Initialize: $P=L=I_n$, $U=A$. For $j=1$ to $n-1$:

  • Choose $i\geq j$ with maximum (or simply nonzero) $|u_{ij}|$. – That is: i is the new pivot row index.
  • Exchange $u_{j,j:n}\leftrightarrow u_{i,j:n}$. – That is: swap current row j with new pivot row i on future columns of U.
  • Exchange $p_{j,:}\leftrightarrow p_{i,:}$. – That is: swap current row j with new pivot row i on all of P to create $P_jP_{j-1}\cdots P_1$.
  • Exchange $\ell_{j,1:j-1}\leftrightarrow \ell_{i,1:j-1}$. – That is: swap current row j with new pivot row i on the subdiagonal entries of L to create $P_jL_1'\cdots L_{j-1}'P_j$.
  • For $k=j+1$ to $n$:
    • Set $\ell_{kj}=u_{kj}/u_{jj}$. – That is: write the next subdiagonal stack of L: add terms to $P_jL_1'\cdots L_{j-1}'P_j$ to create $L_1'\cdots L_j'$.
    • Set $u_{k,j:n}=u_{k,j:n}-\ell_{kj}u_{j,j:n}$. – That is: clear the subdiagonal entries of column j of U by row-adding.

To understand the step of setting $\ell_{kj}$ for $k>j$, it is useful to make the following observation:

$$(P_jL_1'\cdots L_{j-1}'P_j)L_j=L_1'\cdots L_j'.$$

This notation is a little misleading, though, because $L_1'$ on the RHS has its subdiagonals permuted by $P_j$, whereas $L_1'$ on the LHS does not. The factor $P_jL_1'P_j$ will, though. Therefore, the meaning of $L_i'$ changes at each step in the comments on the above algorithm.
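The entrywise algorithm above can be sketched directly in code. Here is a minimal pure-Python sketch (the function name `plu` is ours, not a library routine) using exact fractions and the partial-pivoting rule; the sanity check at the end verifies PA = LU on the pivoting example from this section.

```python
from fractions import Fraction

# A sketch of the entrywise PA = LU algorithm with partial pivoting: at each
# step we swap up the row with the largest remaining entry in column j,
# recording the swap in P and in the finished subdiagonal part of L.

def plu(A):
    """Return (P, L, U) with P A = L U, L unit lower triangular."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for j in range(n - 1):
        i = max(range(j, n), key=lambda r: abs(U[r][j]))  # pivot row index
        if U[i][j] == 0:
            continue                          # column already clear
        U[j], U[i] = U[i], U[j]               # swap rows of U ...
        P[j], P[i] = P[i], P[j]               # ... of P ...
        for k in range(j):                    # ... and of the subdiagonal
            L[j][k], L[i][k] = L[i][k], L[j][k]   # entries of L
        for r in range(j + 1, n):
            L[r][j] = U[r][j] / U[j][j]       # next subdiagonal stack of L
            for k in range(j, n):
                U[r][k] -= L[r][j] * U[j][k]  # clear column j by row-adding

    return P, L, U

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# The pivoting example from this section:
A = [[0, 1, 1, 0], [2, -3, 5, -8], [6, -4, 17, -26], [8, -6, 5, -45]]
P, L, U = plu(A)
assert matmul(P, A) == matmul(L, U)
```

Note that partial pivoting chooses the row with the largest first entry (here row 4, with entry 8), so the P, L, U produced differ from the single-swap hand factorization earlier in this section, but PA = LU holds all the same.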

Row-echelon form

When U has a number of zeros cropping up, we can sometimes use pivoting to clear out more columns. For example, consider the matrices:

$$A=\begin{pmatrix}0&0&1&0\\0&2&3&4\\0&0&4&5\\0&0&0&1\end{pmatrix},\quad B=\begin{pmatrix}0&2&3&4\\0&0&1&0\\0&0&4&5\\0&0&0&1\end{pmatrix},\quad C=\begin{pmatrix}0&2&3&4\\0&0&1&0\\0&0&0&5\\0&0&0&0\end{pmatrix}.$$

Here A is technically upper-triangular and satisfies the rules for U in an LU factorization. However, it is possible to perform a pivot and obtain B, which can be further reduced by two elimination steps to obtain C.

The form of C is called row-echelon form, or sometimes just echelon form. To be precise: in row-echelon form, the leading entry (left-most nonzero entry) in each row is somewhere to the right of the leading entries in the rows above, and rows of only zeros are at the bottom.

The combined process of creating U as in LU decomposition and then continuing pivots and eliminations to put U in row-echelon form is called row reduction.

  • Row-echelon form for U is very useful for inversion problems (see below). – For inversion problems, we will perform row reduction simultaneously on A and on a vector $\mathbf{b}$ or matrix B upon which we are inverting the action of A.
  • Row-echelon form for U is not needed for finding determinants (see below). – Indeed, row-echelon U can only differ from other triangular U when the determinant is zero.

Preimages and inversion

Recall the inversion problems we are interested in, namely solving for vector $\mathbf{x}$ or matrix X in these equations:

$$A\mathbf{x}=\mathbf{b},\qquad AX=B.$$

The LU factorization, and in particular the row reduction process yielding row-echelon form, is the perfect tool to solve these equations. Replacing A with LU, we have:

$$LU\mathbf{x}=\mathbf{b},\qquad LUX=B.$$

In the process of finding LU we actually found $L^{-1}=L_{n-1}^{-1}\cdots L_1^{-1}$. Applying the $L^{-1}$ action to both sides:

$$U\mathbf{x}=\mathbf{b}',\qquad UX=B'$$

where $\mathbf{b}'=L^{-1}\mathbf{b}$ and $B'=L^{-1}B$.

Now assume that U is in row-echelon form. We can solve these equations for $\mathbf{x}$ and X using the second fundamental technique of this packet, called back substitution.

Just as elimination is equivalent to inverting L, back substitution is equivalent to inverting U. Inverting L with eliminations provides the row-echelon form, and inverting U with back substitutions provides the reduced row-echelon form.

Reduced row-echelon form

Given U in row-echelon form, it is also in reduced row-echelon form (RREF) when the leading nonzero entries in each row are ones, and any column with a leading one has all other entries equal to zero.

Back substitution is the process of applying row-scale and row-add matrix actions to convert a matrix in REF to a matrix in RREF. This is a continuation of "row reduction," with a difference being that the matrices performing back substitution operations are not lower triangular; they are now upper triangular. Another difference is that we don't have good reasons (in terms of applications) to record these back substitution matrices; we simply let them act on our matrix, or simultaneously on our matrix and vector (in the case of $A\mathbf{x}=\mathbf{b}$) or on our matrix and matrix (in the case of AX=B).

To find the RREF of a matrix in REF, we:

Computing reduced row-echelon form from row-echelon form

  • Act by row-scale matrices to change the leading entries to ones.
  • Act by more row-add matrices to eliminate the entries above leading ones.
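The two bulleted steps can be sketched as a short back-substitution pass. This is a minimal pure-Python sketch (the function name `ref_to_rref` is ours), verified on one of the REF matrices appearing in this packet.

```python
from fractions import Fraction

# A sketch of back substitution: starting from row-echelon form, scale each
# row so its leading entry is 1, then row-add upward to clear the entries
# above each leading one. The result is the RREF.

def ref_to_rref(U):
    """Convert a matrix in REF (list of rows) to RREF."""
    R = [[Fraction(x) for x in row] for row in U]
    for i in range(len(R)):
        # find the leading (left-most nonzero) entry of row i, if any
        piv = next((j for j, x in enumerate(R[i]) if x != 0), None)
        if piv is None:
            continue                       # all-zero row: nothing to do
        scale = R[i][piv]
        R[i] = [x / scale for x in R[i]]   # row-scale: leading entry becomes 1
        for r in range(i):                 # clear entries above the leading 1
            lam = R[r][piv]
            R[r] = [a - lam * b for a, b in zip(R[r], R[i])]
    return R

# The first REF example from the back-substitution box below:
U = [[0, 2, 3, 4], [0, 0, 1, 0], [0, 0, 0, 5], [0, 0, 0, 0]]
R = ref_to_rref(U)
assert R == [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
```

The clearing pass only touches rows above the current one, which is why these operations assemble into upper-triangular matrices.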

Any RREF matrix can be written, after permuting its columns if necessary, in terms of block submatrices:

$$\begin{pmatrix}I_k&X\\0&0\end{pmatrix}.$$

Here $I_k$ is the identity as a submatrix (the size k is the number of leading ones, which depends on how many zeros crop up in U), X is some other submatrix, and the 0 blocks are zeros filling out the rest of U.


Example

Back substitution: convert REF to RREF

$$\underset{\text{REF}}{\begin{pmatrix}0&2&3&4\\0&0&1&0\\0&0&0&5\\0&0&0&0\end{pmatrix}}\sim\underset{\text{RREF}}{\begin{pmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\\0&0&0&0\end{pmatrix}}$$

$$\underset{\text{REF}}{\begin{pmatrix}2&4&-1&5&-2\\0&3&1&2&-3\\0&0&0&2&1\\0&0&0&0&5\end{pmatrix}}\sim\begin{pmatrix}1&2&-1/2&5/2&-1\\0&1&1/3&2/3&-1\\0&0&0&1&1/2\\0&0&0&0&1\end{pmatrix}\sim\underset{\text{RREF}}{\begin{pmatrix}1&0&-7/6&0&0\\0&1&1/3&0&0\\0&0&0&1&0\\0&0&0&0&1\end{pmatrix}}$$
Exercise 09-04

Back substitution: convert REF to RREF

Convert $\begin{pmatrix}1&-3&0&-1&0&-2\\0&2&0&0&-8&2\\0&0&0&1&9&4\\0&0&0&0&0&0\end{pmatrix}$ from REF to RREF.

Preimage of a vector

The RREF of a matrix reveals the complete set of possible inverses (preimages) of a given vector.

Example

Inverting a vector using RREF

Consider the equation $A\mathbf{x}=\mathbf{b}$, with a row-reduced A:

$$\begin{pmatrix}1&0&0&-3\\0&0&1&2\\0&0&0&0\\0&0&0&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}=\begin{pmatrix}b_1\\b_2\\b_3\\b_4\end{pmatrix}.$$

This matrix translates to the system of equations:

$$\begin{aligned}x_1-3x_4&=b_1, &\quad x_1&=3x_4+b_1\\ x_3+2x_4&=b_2, &\quad x_3&=-2x_4+b_2\\ 0&=b_3\\ 0&=b_4\end{aligned}$$

There is a solution vector $\mathbf{x}$ provided $b_3=b_4=0$. If that is true, any $x_2$ and $x_4$ can be chosen, and $x_1,x_3$ are quickly found by solving the system. Given any choice of $x_2$ and $x_4$, there is only one possibility for each of $x_1,x_3$. Notice that the leading ones (also called pivots) are in columns 1 and 3, and $x_1$ and $x_3$ are determined. There are no pivots in columns 2 and 4, and $x_2$ and $x_4$ are free to take any value.

We can go farther and write the complete set of solutions $\mathbf{x}$ as the span of two vectors plus an offset, using the values of $x_2$ and $x_4$ as the coefficients:

$$\mathbf{x}=\begin{pmatrix}3x_4+b_1\\x_2\\-2x_4+b_2\\x_4\end{pmatrix}=x_2\begin{pmatrix}0\\1\\0\\0\end{pmatrix}+x_4\begin{pmatrix}3\\0\\-2\\1\end{pmatrix}+\begin{pmatrix}b_1\\0\\b_2\\0\end{pmatrix}$$

so we could write, if you like: $\mathbf{x}\in\left\langle(0,1,0,0),\,(3,0,-2,1)\right\rangle+(b_1,0,b_2,0)$.

If we wish to solve an equation $A\mathbf{x}=\mathbf{b}$ and A is not yet in reduced row-echelon form, then we must convert A to that form, while simultaneously applying the row reductions and pivots to the vector $\mathbf{b}$. (This means applying the same invertible transformations to both sides of the equation.) Row reduction is easily performed on A and $\mathbf{b}$ simultaneously by combining them into the augmented matrix $(A\,|\,\mathbf{b})$ and row reducing this instead. (This means: create a matrix by adding $\mathbf{b}$ as an additional column to A, and put a vertical line between them to demarcate the end of A.)

Example

Inverting a vector: row reduction of augmented matrix

Suppose we are trying to solve $A\mathbf{x}=\mathbf{b}$ with A and $\mathbf{b}$ given by the following data:

$$A=\begin{pmatrix}1&6&2&-5&-2\\0&0&2&-8&-1\\0&0&0&0&1\end{pmatrix},\qquad \mathbf{b}=\begin{pmatrix}-4\\3\\7\end{pmatrix}.$$

Notice that A is already in row-echelon form. So we create the augmented matrix and row reduce it further to put $(A\,|\,\mathbf{b})$ into RREF:

$$(A\,|\,\mathbf{b})=\left(\begin{array}{ccccc|c}1&6&2&-5&-2&-4\\0&0&2&-8&-1&3\\0&0&0&0&1&7\end{array}\right)\sim\left(\begin{array}{ccccc|c}1&6&0&3&-1&-7\\0&0&2&-8&-1&3\\0&0&0&0&1&7\end{array}\right)\sim\left(\begin{array}{ccccc|c}1&6&0&3&0&0\\0&0&1&-4&0&5\\0&0&0&0&1&7\end{array}\right)$$

and we discover that $x_2,x_4$ are free, $x_1=-6x_2-3x_4$, $x_3=4x_4+5$, and $x_5=7$. The possible solutions for $\mathbf{x}$ are therefore:

$$\mathbf{x}=x_2\begin{pmatrix}-6\\1\\0\\0\\0\end{pmatrix}+x_4\begin{pmatrix}-3\\0\\4\\1\\0\end{pmatrix}+\begin{pmatrix}0\\0\\5\\0\\7\end{pmatrix},\qquad x_2,x_4\in\mathbb{R}.$$
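This read-off procedure can be automated. Here is a pure-Python sketch (the names `rref` and `solve` are ours, not library routines) that row reduces the augmented matrix and then extracts a particular solution plus one spanning vector per free variable:

```python
from fractions import Fraction

# A sketch of reading solutions off the RREF of an augmented matrix (A | b):
# pivot columns give determined variables, the remaining columns are free,
# and each free variable contributes one spanning vector.

def rref(M):
    """Gauss-Jordan elimination to reduced row-echelon form."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]              # pivot (row swap)
        R[r] = [x / R[r][c] for x in R[r]]       # scale leading entry to 1
        for i in range(rows):
            if i != r and R[i][c] != 0:          # clear rest of column c
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        r += 1
    return R

def solve(A, b):
    """Return (particular, basis) so solutions are particular + span(basis),
    or None if the system is inconsistent."""
    n = len(A[0])
    R = rref([row + [bi] for row, bi in zip(A, b)])
    pivots = {}
    for row in R:
        piv = next((j for j, x in enumerate(row) if x != 0), None)
        if piv == n:
            return None                          # row (0 ... 0 | nonzero)
        if piv is not None:
            pivots[piv] = row
    particular = [pivots[j][n] if j in pivots else Fraction(0) for j in range(n)]
    basis = []
    for f in range(n):                           # one vector per free column
        if f in pivots:
            continue
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for j, row in pivots.items():
            v[j] = -row[f]
        basis.append(v)
    return particular, basis

# The worked example of this section:
A = [[1, 6, 2, -5, -2], [0, 0, 2, -8, -1], [0, 0, 0, 0, 1]]
b = [-4, 3, 7]
particular, basis = solve(A, b)
assert particular == [0, 0, 5, 0, 7]
assert basis == [[-6, 1, 0, 0, 0], [-3, 0, 4, 1, 0]]
```

Setting each free variable to 1 and the others to 0, while reading the pivot rows, produces exactly the spanning vectors written out by hand above.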

Solving linear systems

A linear system looks something like this:

$$\begin{aligned}x_1-2x_2+x_3&=0\\2x_2-8x_3&=8\\5x_1-5x_3&=10.\end{aligned}$$

This system is easily represented as the vector inversion problem $A\mathbf{x}=\mathbf{b}$, with $\mathbf{x}=(x_1,x_2,x_3)$ and $(A\,|\,\mathbf{b})$ given by:

$$(A\,|\,\mathbf{b})=\left(\begin{array}{ccc|c}1&-2&1&0\\0&2&-8&8\\5&0&-5&10\end{array}\right)$$

The techniques of elimination and back substitution that we have learned for matrices correspond to the techniques of the same names that you already know for solving systems of equations.

Exercise 09-05

Comparing matrix elimination and equation elimination

Explain (with some details in an example) how elimination for matrices (corresponding to multiplication by $L_i^{-1}$ for some single-column row-add combo matrix $L_i^{-1}$) is equivalent to elimination for a system of equations.

Hint: solve the above system using elimination and back substitution as in high school algebra, and then solve the same system as a vector preimage problem using row reduction to RREF of the augmented matrix. Write your solution steps in corresponding order so that you can match the corresponding steps between the two methods.

Preimage of a matrix

The technique for inverting a vector can be applied to find the preimage of a matrix simply by inverting the column vectors of that matrix. Namely, given the equation:

AX=B,

notice that if we write X and B in terms of their column vectors, we have:

$$A\begin{pmatrix}\mathbf{x}_1&\mathbf{x}_2&\dots&\mathbf{x}_n\end{pmatrix}=\begin{pmatrix}\mathbf{b}_1&\mathbf{b}_2&\dots&\mathbf{b}_n\end{pmatrix}\iff A\mathbf{x}_1=\mathbf{b}_1,\ A\mathbf{x}_2=\mathbf{b}_2,\ \dots,\ A\mathbf{x}_n=\mathbf{b}_n.$$

In principle, we can simply invert $\mathbf{b}_1,\dots,\mathbf{b}_n$ one at a time, in sequence.

In practice, a much faster way to find X is to write the augmented matrix $(A\,|\,B)$ and row reduce this matrix.

The most important use of this technique is to find the inverse of an invertible matrix. Given A, the inverse $A^{-1}$ is found by computing the preimage of $I_n$ under the action of A, in other words by setting $B=I_n$ and row reducing the augmented matrix $(A\,|\,I_n)$.
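This inversion-by-row-reduction recipe can be sketched in a few lines of pure Python (the function name `invert` is ours, not a library routine), using exact fractions; when the left block of the augmented matrix has been reduced to the identity, the right block is the inverse.

```python
from fractions import Fraction

# A sketch of inversion by row reduction of the augmented matrix (A | I_n):
# when the left block reduces to I_n, the right block is A^{-1}.

def invert(A):
    n = len(A)
    # build (A | I_n) and run Gauss-Jordan elimination on it
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            raise ValueError("matrix is singular")
        M[c], M[piv] = M[piv], M[c]              # pivot: row swap
        M[c] = [x / M[c][c] for x in M[c]]       # scale pivot entry to 1
        for r in range(n):
            if r != c:                           # clear the rest of column c
                M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]                # right block is A^{-1}

# The 3 x 3 matrix of the worked example below:
A = [[0, 1, 2], [1, 0, 3], [4, -3, 8]]
Ainv = invert(A)
assert Ainv == [[Fraction(-9, 2), 7, Fraction(-3, 2)],
                [-2, 4, -1],
                [Fraction(3, 2), -2, Fraction(1, 2)]]
```

The sketch combines the pivoting, elimination, and back-substitution passes into a single column-by-column sweep, which is equivalent to the step-by-step reduction shown in the example.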

Example

Inverse of a 3×3 matrix

Problem: Find the inverse $A^{-1}$ of the matrix $A=\begin{pmatrix}0&1&2\\1&0&3\\4&-3&8\end{pmatrix}$.

Solution: We row reduce the augmented matrix using pivots, eliminations, and back substitutions to put A into RREF:

$$\left(\begin{array}{ccc|ccc}0&1&2&1&0&0\\1&0&3&0&1&0\\4&-3&8&0&0&1\end{array}\right)\sim\left(\begin{array}{ccc|ccc}1&0&3&0&1&0\\0&1&2&1&0&0\\4&-3&8&0&0&1\end{array}\right)\sim\left(\begin{array}{ccc|ccc}1&0&3&0&1&0\\0&1&2&1&0&0\\0&-3&-4&0&-4&1\end{array}\right)\sim\left(\begin{array}{ccc|ccc}1&0&3&0&1&0\\0&1&2&1&0&0\\0&0&2&3&-4&1\end{array}\right)\sim\left(\begin{array}{ccc|ccc}1&0&3&0&1&0\\0&1&2&1&0&0\\0&0&1&3/2&-2&1/2\end{array}\right)\sim\left(\begin{array}{ccc|ccc}1&0&0&-9/2&7&-3/2\\0&1&0&-2&4&-1\\0&0&1&3/2&-2&1/2\end{array}\right).$$

Therefore we have found $A^{-1}=\begin{pmatrix}-9/2&7&-3/2\\-2&4&-1\\3/2&-2&1/2\end{pmatrix}$.

Determinants

Two facts about determinants allow us to use LU factorization to compute the determinant of any matrix:

Two determinant facts

  • The determinant of any triangular matrix is equal to the product of its diagonal entries.
  • The determinant is multiplicative: $\det(AB)=\det A\cdot\det B$.

If $A=LU$, then $\det A=\det L\cdot\det U$, and $\det L=1$ since L has only ones on the diagonal. So $\det A$ is the product of the diagonal entries of U.

If $PA=LU$, then $\det P\cdot\det A=\det L\cdot\det U$. It turns out that $\det P=\pm 1$. So $\det A=\pm\det U$, where again $\det U$ is the product of the diagonal entries. The choice of sign is given by $\det P=(-1)^k$, where k is the number of non-trivial row-swaps that are composed to give P, so that $\det A=(-1)^k\det U$.
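The recipe fits in a few lines of pure Python (the function name `det` is ours, not a library routine): perform elimination with row swaps, count the swaps, and multiply the diagonal entries.

```python
from fractions import Fraction

# A sketch of computing det A from elimination with pivots: det A equals the
# product of the diagonal of U, times (-1)^k for the k row swaps performed.

def det(A):
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    sign = 1
    for j in range(n - 1):
        piv = next((r for r in range(j, n) if U[r][j] != 0), None)
        if piv is None:
            return Fraction(0)              # no pivot: determinant is zero
        if piv != j:
            U[j], U[piv] = U[piv], U[j]     # each row swap flips the sign
            sign = -sign
        for r in range(j + 1, n):
            lam = U[r][j] / U[j][j]
            U[r] = [a - lam * b for a, b in zip(U[r], U[j])]  # row-add
    prod = Fraction(sign)
    for j in range(n):
        prod *= U[j][j]                     # product of diagonal entries of U
    return prod

# Example 1 from the start of the packet: det L = 1 and det U = 1*1*1 = 1.
assert det([[1, 2, 3], [2, 5, 10], [3, 2, -6]]) == 1
```

Row-add operations leave the determinant unchanged, which is why only the swaps contribute a sign; this is exactly the $\det L=1$ observation above.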

Problems due 20 Mar 2024 by 9:00am

Problem 09-01

Row reduction of a product

Suppose $A=BC$ and B is invertible. Prove that any sequence of row reduction operations which reduces B to $I_n$ will also reduce A to C. Can you relate this fact to the LU decomposition?

Problem 09-02

LU factorization and solving $A\mathbf{x}=\mathbf{b}$

Compute the LU factorization of A:

$$A=\begin{pmatrix}1&-2&-4&-3\\2&-7&-7&-6\\-1&2&6&4\\-4&-1&9&8\end{pmatrix}.$$

Use this to find the unique value of $\mathbf{x}$ that satisfies $A\mathbf{x}=\mathbf{b}$, where $\mathbf{b}=(1,7,0,3)$. (You should perform the elimination and back-substitution operations on the rows of $\mathbf{b}$ that are dictated by the LU decomposition of A.) Also: what is the determinant of A?

Problem 09-03

Inverse of A using LU

Compute the inverse of A from Problem 09-02 using the LU factorization computed in that problem, in other words by computing $U^{-1}L^{-1}$. Now compute $A^{-1}$ using the method at the end of the packet: perform row reduction on the augmented matrix $(A\,|\,I_4)$. Which method do you like better?

Problem 09-04

Row reduction with pivoting

Perform row reduction with pivoting upon the matrix $A=\begin{pmatrix}2&0&0&1\\4&0&0&0\\4&0&3&9\\-6&0&24&49\end{pmatrix}$ in order to find all solutions $\mathbf{x}$ to the equation $A\mathbf{x}=\mathbf{b}$ with $\mathbf{b}=(1,1,1,1)$. (You can use the augmented matrix; no need to compute the full PA=LU decomposition.)