Elimination = Factorization: A = LU

Today we will be discussing the following topics:

1. What is LU Factorization?

2. Factoring a Matrix Using the LU Factorization Method

3. Backward and Forward Substitution

4. Solving a System of Equations using A = LU

5. Band Matrices

6. The Cost of Matrix Calculations

What is LU Factorization?

LU Factorization (sometimes called LU Decomposition) is a method for factoring a matrix. Factoring a matrix means splitting it into simpler matrices. Let's call the matrix you want to factor A. LU Factorization splits A into two simpler matrices: L, a lower triangular matrix, and U, an upper triangular matrix. These two simpler matrices give us more information than the original matrix A does, such as the pivots and the multipliers, which we talk about later in this lesson.

The U matrix above is the result of Gaussian Elimination. It is an upper triangular matrix with the pivots on its diagonal. See the earlier lesson on Gaussian Elimination and pivot points if you missed it.

The L matrix above is a lower triangular matrix with 1's along the diagonal. Its entries below the diagonal are the multipliers we used during Gaussian Elimination while producing the U matrix. A multiplier is the number we multiplied one row by before subtracting it from another row. Recall from the previous lesson that during Gaussian Elimination you can interchange rows, multiply a row by a constant, and add one row to another.

Here's an example of a step from Gaussian Elimination; the multiplier (which is always a number) is highlighted in green.
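One elimination step can be sketched in a few lines of code. This is a minimal illustration; the numbers are my own, not the ones from the figure above.

```python
# One Gaussian elimination step: eliminate the entry below the first pivot.
A = [[2.0, 1.0],
     [1.0, 2.0]]

m21 = A[1][0] / A[0][0]   # multiplier = entry below pivot / pivot = 1/2
A[1] = [a1 - m21 * a0 for a0, a1 in zip(A[0], A[1])]   # R2 := R2 - m21*R1
# m21 is 0.5 and row 2 becomes [0.0, 1.5]
```

The multiplier 0.5 is exactly the number that will later go into position (2, 1) of L.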

Factoring a Matrix Using the LU Factorization Method

In order to begin the process for LU Factorization on a matrix, we must first make sure that the matrix we are using has two properties (let’s call it matrix A):

1. A must be nonsingular (A^-1 must exist)

2. A must be square (an n × n matrix, for example 3 × 3, 4 × 4, etc.)

If matrix A does not have one of the above properties, then LU Factorization cannot happen. If matrix A satisfies these properties, we can proceed with the steps.

We must also point out that there are two different methods for LU Factorization, depending on whether row interchanges were made during Gaussian Elimination.

We will cover both methods: first the case where no row interchanges are used, which is the more common one, and then the case where they are.

Steps for LU Factorization if no Row Interchanges were made during Gaussian Elimination:

1. Apply Gaussian Elimination to your matrix. The end result will give you matrix U, as I mentioned above.

2. Identify what the multipliers are and their respective subscripts.

3. Create a skeleton matrix for L, like the example matrix above: put 1's down the diagonal, 0's above the diagonal, and the letter m in the positions below the diagonal. Your matrix won't look exactly like that example, since it depends on how many rows and columns your matrix has. The subscripts on the m's go by row then column: m21, m31, m32, and so on, so m21 sits in row 2, column 1. Each m is called a multiplier.

4. Now fill in the multiplier entries, replacing each letter m with a number. When an elimination step is written as a subtraction (for example R2 − m·R1), the multiplier m goes into L as-is; if you instead wrote the step as adding a multiple of a row, swap the sign first. If you don't have a multiplier for a specific position, that entry is 0.

But how do you know what your multipliers are? Go back to the Gaussian Elimination steps and identify each time you multiplied a row by a number, laying the steps out like the example below. In that example's elimination steps, they first took R2 (row 2) and subtracted 1/2 · R1 (row 1); the multiplier here was the 1/2, highlighted in green. The same principle applies to the next step, where they took the middle matrix and did R3 − 2/3 · R2 to obtain the matrix U. When you set up your elimination like this, you can read off your multipliers.

Now that you have these values, how do you know the subscripts, i.e. which m goes in which position of matrix L? Look at which rows each operation used. For example, when the operation R2 − 1/2 · R1 was done, the multiplier 1/2 became m21: row 2 was the row being changed, and row 1 was the row being multiplied and subtracted, hence the subscript 21. The same principle applies to R3 − 2/3 · R2: the multiplier 2/3 is m32 because the operation changed row 3 using row 2.

Once you have identified the multipliers and what m values and subscripts they are equal to, just sub them into matrix L above.

Now you have the factorization of your matrix into the two factors, Matrix L and Matrix U.

TIP: To check your work, multiply matrix L and matrix U together. If the product matrix is the matrix you started with, then your process is correct!

Let's do an example! This is how to do LU Factorization: first find matrix U as the result of Gaussian Elimination, then find matrix L by identifying the multiplier values and substituting them into the skeleton matrix. Multiplying L and U together gives back the original matrix A, since A = LU. By decomposing A into L and U, you have broken it into factors; in other words, you have done the LU Factorization.
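The whole procedure can be sketched as a short function. This is a hedged illustration of the no-row-interchange case; the 3 × 3 matrix is my own example, chosen so that its multipliers happen to be 1/2 and 2/3, like the steps described above.

```python
# LU factorization without row interchanges: U comes from elimination,
# L collects the multipliers below a diagonal of 1's.
def lu_factor(A):
    n = len(A)
    U = [row[:] for row in A]                                  # U starts as a copy of A
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):                                         # eliminate below pivot U[j][j]
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]                              # multiplier m_ij
            L[i][j] = m                                        # store it in L
            U[i] = [u_i - m * u_j for u_i, u_j in zip(U[i], U[j])]
    return L, U

A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
L, U = lu_factor(A)

# The TIP above: multiplying L and U should rebuild A
LU = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
```

Here L[1][0] is 1/2 and L[2][1] is 2/3, and the product LU matches A up to rounding, which is exactly the check suggested in the TIP.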

Now what if we do have Row Interchanges during Gaussian Elimination?

LU Factorization if Row Interchanges were made during Gaussian Elimination: in this case we first record the row swaps in a permutation matrix P, apply them to A, and then run the method above on the swapped matrix. This gives the factorization PA = LU, and we can still solve a system of equations with it as shown below.

Solving a System of Equations using A = LU

The goal of this section is to solve for x in Ax = b (a system of linear equations rewritten in matrix form) using a method other than Gaussian Elimination or Gauss-Jordan Elimination: LU Factorization. See the earlier lessons on putting matrices into Ax = b form and on Gaussian Elimination if you missed them.

From what I explained in the first part of this lesson above, we came to the conclusion that A = LU after we completed the LU Factorization, because our original matrix was equal to the L matrix multiplied by the U matrix. Knowing the equation A = LU can help us to solve for x in the equation Ax = b.

Recall from math that when we have two equations sharing at least one variable, we can substitute a known value from one equation into the other (this is called back substitution, and it's from high school math). If you don't remember how to do this, take a look at the example below, where we solve for x1 after knowing x2 = 1:

Backward and Forward Substitution

Before we move forward, it is important to know the difference between forward and backward (or just back for short) substitution.

Forward Substitution

This is a process used to solve a system of linear equations whose coefficient matrix is lower triangular. A lower triangular matrix has only zeroes above the main diagonal; its nonzero entries sit on or below it. We start by solving for the variable in the top row of the system (x1 in the example below) and work our way down to solve for the rest of the unknowns.

The reason we start with the variable at the very top (x1) is that the coefficients of all the other variables in that top row are zero, so the equation gives us the value of x1 without really having to solve for it.

Now, knowing the value of x1, we can substitute it into the remaining equations to solve for the other variables; in this case we are just solving for x2, which ends up being 1.
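Forward substitution can be written as a short loop. This is a minimal sketch; the 2 × 2 numbers are my own, chosen so that the answer comes out as y = [2, 1], matching the walk-through above where the second unknown ends up being 1.

```python
# Forward substitution for a lower triangular system L y = b:
# solve the top equation first, then work down the rows.
def forward_sub(L, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                                  # top row first
        s = sum(L[i][j] * y[j] for j in range(i))       # already-known values
        y[i] = (b[i] - s) / L[i][i]
    return y

L = [[1.0, 0.0],
     [0.5, 1.0]]
b = [2.0, 2.0]
y = forward_sub(L, b)   # y1 = 2 directly, then y2 = 2 - 0.5*2 = 1
```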

Backward Substitution

This is the process we use for an upper triangular matrix, which has zeroes below the main diagonal and its nonzero values on or above it. It is the opposite of forward substitution: this time we start by knowing the value of the last variable (x2 here, or xn in general for a vector x1, x2, ..., xn). The coefficients of all the other variables in that last row are 0, so the value of the last variable is known immediately. In this case, x2 = 1 is known.

Now we can substitute this value of x2 into the other equations to find x1. Notice how backward substitution found x2 first, rather than x1 first as forward substitution did.
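Backward substitution is the same loop run in the other direction. Again a minimal sketch with numbers of my own: the last equation gives x2 directly, then x1 follows by substitution.

```python
# Backward substitution for an upper triangular system U x = y:
# solve the bottom equation first, then work up the rows.
def backward_sub(U, y):
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                          # bottom row first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))    # already-known values
        x[i] = (y[i] - s) / U[i][i]
    return x

U = [[2.0, 1.0],
     [0.0, 1.5]]
y = [2.0, 1.5]
x = backward_sub(U, y)   # x2 = 1.5/1.5 = 1, then x1 = (2 - 1*1)/2 = 0.5
```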

Steps to Solving a System of Linear Equations Ax = b using LU Factorization

1. Convert the system of linear equations into the form Ax = b. See the earlier lesson on putting a system of equations into Ax = b form if you missed it. Here's an example:

Given the matrix equation Ax = b on the top, or the system of equations on the bottom, you can convert between the two. For this method, though, you'll want to keep it in the Ax = b matrix form.

2. Find the LU Factorization (meaning find the matrices L and U) of the matrix you are given, with the steps discussed above.

Note: We know two equations, Ax = b and A = LU. Substituting one into the other gives a third equation, LUx = b. Letting Ux = Y, one more substitution gives the equation LY = b. The whole derivation is shown in the image below; once you understand where it comes from, you can go straight to using LY = b in Step 3.

Steps on how to go from the first two equations to obtaining LY = b

3. Now, using LY = b, solve for Y with forward substitution. Here L is the lower triangular matrix, with only zeroes above the main diagonal, and Y is a vector of unknowns whose length matches the size of your given matrix. Solve for Y by writing LY = b out as a linear system of equations. Check out Step 3, highlighted in red in the example below, to see what I mean: the unknown vector Y has variables Y1 and Y2 to match the 2 rows of matrix L.

4. Solve Ux = Y using back substitution, as described above, where x is the vector we solve for. Do this by writing U, x, and Y out as a linear system of equations and solving it. U is an upper triangular matrix, with only zeroes below the main diagonal. This time a vector with variables x1, x2, etc. takes the place of Y1, Y2, etc. from before. Check out Step 4 in the example below to understand this further.

5. Now you have the solution to the system!
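The five steps above can be put together in one hedged sketch: factor A into L and U, solve LY = b by forward substitution, then Ux = Y by backward substitution. The 2 × 2 system is my own example.

```python
# Solve A x = b via A = LU.
def lu_solve(A, b):
    n = len(b)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):                                 # Step 2: find L and U
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]
            L[i][j] = m
            U[i] = [u_i - m * u_j for u_i, u_j in zip(U[i], U[j])]
    y = [0.0] * n                                      # Step 3: L Y = b (forward)
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    x = [0.0] * n                                      # Step 4: U x = Y (backward)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x                                           # Step 5: the solution

x = lu_solve([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0])    # solution is x = [1, 1]
```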

Band Matrices

A band matrix, say matrix B, is a matrix whose nonzero entries are confined to the main diagonal and to some number w of diagonals below it. In the 5 × 5 example below, the main diagonal (the longest one) is highlighted in red, and the only other nonzero diagonal, just below it, is highlighted in yellow. Because there is only one nonzero diagonal below the main diagonal, w = 1.
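A band matrix like the one described above can be built directly. This is a sketch with entries of my own choosing: a 5 × 5 matrix with nonzeros on the main diagonal and on the single sub-diagonal, so w = 1.

```python
# Build a 5x5 band matrix: main diagonal plus w = 1 diagonal below it.
n, w = 5, 1
B = [[0.0] * n for _ in range(n)]
for i in range(n):
    B[i][i] = 2.0             # main diagonal (the red one in the figure)
    if i >= 1:
        B[i][i - 1] = -1.0    # the single sub-diagonal (the yellow one)
```

Every entry more than w places below the main diagonal, and every entry above it, stays zero.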

We are covering band matrices here so that we can discuss the cost of working with them in the next section:

The Cost of Matrix Calculations

In mathematics, cost means the number of operations (row interchanges, multiplications by a scalar, additions, and other matrix operations) needed to perform a certain method, such as Gaussian Elimination, LU Factorization, back substitution, and so on.

Cost of Performing Gaussian Elimination

Recall that during Gaussian Elimination we make the entries below each pivot zero (the pivot being the first nonzero number in each row of the matrix). The result is matrix U, the upper triangular matrix, so the cost of Gaussian Elimination is the same as the cost of obtaining matrix U. I am going to show the different forms of this equation as I simplify it down.

The number of operations (the cost) required to complete Gaussian Elimination and obtain Matrix U is the following equation:

(Eq. 1) n^2 − n, where n is the size of the matrix

Next in elimination, remember that we make all of the entries below the second pivot 0. The cost of doing this is:

(Eq. 2) (n − 1)^2, since the matrix we are working on is now effectively of size n − 1.

Continuing in this way, the rough cost of reaching matrix U (that is, of performing Gaussian Elimination until every entry below each pivot is 0) is:

(Eq. 3) n^2 + (n − 1)^2 + … + 2^2 + 1^2

This sum simplifies to the following closed form (although we won't cover the proof in this lesson):

(Eq. 4) n^2 + (n − 1)^2 + … + 2^2 + 1^2 = (1/3) * n * (n + 1/2) * (n + 1)

However, when n is large, the 1/2 and the 1 can be dropped. This is because when a number is very, very large, adding a tiny number to it barely changes its value, so the 1/2 and the 1 inside the brackets become irrelevant. Dropping them from the equation above and multiplying through gives:

(Eq. 5) (1/3) * n * n * n = (1/3) * n^3

Thus, we can conclude that completing Gaussian Elimination costs about (1/3) * n^3 multiplications and (1/3) * n^3 subtractions, where n is the number of rows (and columns) of the matrix. Multiplications means the number of multiplication steps and subtractions means the number of subtraction steps required.
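As a quick numerical check of the equations above (assuming nothing beyond the formulas themselves), we can compare the exact sum of squares with its closed form and with the large-n approximation:

```python
# Eq. 3 summed exactly, Eq. 4's closed form, and Eq. 5's approximation.
n = 1000
exact = sum(k * k for k in range(1, n + 1))       # n^2 + (n-1)^2 + ... + 1^2
closed_form = n * (n + 0.5) * (n + 1) / 3         # (1/3)*n*(n + 1/2)*(n + 1)
approx = n ** 3 / 3                               # (1/3)*n^3

# closed_form matches the exact sum, and at n = 1000 the n^3/3
# approximation is already within a fraction of a percent of it.
```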

Cost of Solving LY = b and Ux = Y

The total number of steps needed to apply forward and backward substitution as in the examples above, solving for x using LY = b and Ux = Y, is about n^2.

Thus, we can conclude that each right-hand side b needs about n^2 operations.

Cost of going from Band Matrix (B) to Upper Triangular Matrix (U)

The cost to go from a band matrix (which was matrix B as defined under the Band Matrix section above) to the Upper Triangular Matrix U is the following equation:

n * w^2

Cost of Solving a System of Equations for a Band Matrix

The last cost equation covers the cost of solving a system of equations for a band matrix using forward and backward substitution. This would mean putting the band matrix into Ax = b form if you are given a vector b, and solving for the unknown variables (the x). The cost uses this equation:

2 * n * w
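To see why bandedness matters, we can plug in some numbers of my own choosing and compare the band-matrix costs above with the dense costs from the earlier sections:

```python
# Band costs (n*w^2 for elimination, 2*n*w for a solve) versus
# dense costs (n^3/3 for elimination, n^2 for a solve).
n, w = 1000, 3
dense_elimination = n ** 3 / 3     # about 333 million operations
band_elimination = n * w ** 2      # 9000 operations
dense_solve = n ** 2               # 1 million operations
band_solve = 2 * n * w             # 6000 operations
```

For a narrow band (small w), the cost grows only linearly with n instead of cubically, which is why band structure is worth exploiting.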