The Algorithm of Gaussian Elimination

In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Gaussian elimination is named after the German mathematician and scientist Carl Friedrich Gauss. Gauss-Jordan (G/J) elimination is a closely related method that uses elementary row operations on a matrix to find its inverse. Gauss-Jordan elimination can also be used to find the rank of a matrix, but it is chiefly used when only the inverse of an invertible matrix is required.

Overview of the Gauss-Jordan method

The algorithm of the Gauss-Jordan method is simple: we reduce the given matrix to the identity matrix using elementary row operations. The matrix is first written in the form

A = IA

We then perform elementary row operations on the matrix A on the left-hand side and, simultaneously, the same operations on the identity matrix on the right-hand side, until the left-hand side becomes the identity and the equation reads

I = A⁻¹A

The matrix that now stands in place of I is the inverse A⁻¹. Equivalently, one can augment A with the identity matrix to form [A | I] and row-reduce until the left block becomes I; the right block is then A⁻¹.
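As a small worked illustration of the augmented form (the 2×2 matrix here is chosen for this sketch and does not come from the text above):

\left[\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right]
\xrightarrow{R_2 \to R_2 - \frac{1}{2}R_1}
\left[\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 0 & \frac{1}{2} & -\frac{1}{2} & 1 \end{array}\right]
\xrightarrow{R_1 \to \frac{1}{2}R_1,\ R_2 \to 2R_2}
\left[\begin{array}{cc|cc} 1 & \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 1 & -1 & 2 \end{array}\right]
\xrightarrow{R_1 \to R_1 - \frac{1}{2}R_2}
\left[\begin{array}{cc|cc} 1 & 0 & 1 & -1 \\ 0 & 1 & -1 & 2 \end{array}\right]

so the right block gives A⁻¹, with rows (1, -1) and (-1, 2), which can be verified by checking that the product A·A⁻¹ is the identity.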

Gaussian elimination

In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Gaussian elimination is named after the German mathematician and scientist Carl Friedrich Gauss, even though the method was known long before him (see the history below), which makes it an example of Stigler’s law.

Elementary row operations are used to reduce a matrix to row echelon form. Gauss-Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications, and is cheaper than the Gauss-Jordan version.

History

The method of Gaussian elimination appears in Chapter Eight, Rectangular Arrays, of the important Chinese mathematical text Jiuzhang suanshu or The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 CE, but parts of it were written as early as approximately 150 BCE. It was commented on by Liu Hui in the 3rd century.

The method in Europe stems from the notes of Isaac Newton. In 1670, he wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as Arithmetica Universalis in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century. Carl Friedrich Gauss in 1810 devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least-squares problems. The algorithm that is taught in high school was named for Gauss only in the 1950s as a result of confusion over the history of the subject.

Algorithm overview

The process of Gaussian elimination has two parts. The first part (forward elimination) reduces a given system to either triangular or echelon form, or produces a degenerate equation with no solution, indicating that the system has no solution. This is accomplished through the use of elementary row operations. The second part uses back substitution to find the solution of the system.

Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary row operations while the second reduces it to reduced row echelon form, or row canonical form.

Another point of view, which turns out to be very useful to analyze the algorithm, is that Gaussian elimination computes a matrix decomposition. The three elementary row operations used in the Gaussian elimination (multiplying rows, switching rows, and adding multiples of rows to other rows) amount to multiplying the original matrix with invertible matrices from the left. The first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row-echelon matrix.
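This decomposition view can be illustrated numerically. The following is a minimal sketch, assuming NumPy and SciPy are available; the 3×3 matrix is an arbitrary example chosen for illustration, not one taken from this article. scipy.linalg.lu factors the matrix as P L U, where P is a permutation matrix, L is unit lower triangular and U is upper triangular (the echelon factor produced by the elimination steps).

import numpy as np
from scipy.linalg import lu

# An arbitrary 3x3 matrix, chosen only to illustrate the factorization.
A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])

# The elimination steps amount to left-multiplication by invertible matrices;
# collected together they give A = P @ L @ U.
P, L, U = lu(A)

print(U)                          # the upper-triangular (echelon) factor
print(np.allclose(A, P @ L @ U))  # True: the factors reproduce A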

Example

Suppose the goal is to find and describe the solution(s), if any, of the following system of linear equations:

The algorithm is as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.

In the example, x is eliminated from L2 by adding a suitable multiple of L1 to L2. x is then eliminated from L3 by adding L1 to L3. Formally:

The result is:

Now y is eliminated from L3 by adding 4L2 to L3:

The result is:

This result is a system of linear equations in triangular form, and so the first part of the algorithm is complete.

The last part, back-substitution, consists of solving for the unknowns in reverse order. It can thus be seen that

Then, z can be substituted into L2, which can then be solved to obtain

Next, z and y can be substituted into L1, which can be solved to obtain

The system is solved.
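Since the worked matrices of this example were given as figures, the following is a minimal sketch in plain Python of the two parts of the algorithm: forward elimination to triangular form, followed by back-substitution. The 3×3 system at the bottom is chosen for illustration and is not necessarily the one used above.

def gaussian_elimination(A, b):
    """Solve A x = b by forward elimination followed by back-substitution.

    A is a list of n rows of n numbers, b a list of n numbers.  No row
    exchanges are performed, so a zero pivot would fail; see the remark on
    partial pivoting in the Analysis section.
    """
    n = len(A)
    A = [row[:] for row in A]      # work on copies
    b = b[:]

    # Forward elimination: create zeros below each pivot, column by column.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for row i
            for j in range(k + 1, n):      # the entry in column k becomes 0 and is never read again
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]

    # Back-substitution: solve for the unknowns in reverse order.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Illustrative system: 2x + y - z = 8,  -3x - y + 2z = -11,  -2x + y + 2z = -3
print(gaussian_elimination([[2.0, 1.0, -1.0],
                            [-3.0, -1.0, 2.0],
                            [-2.0, 1.0, 2.0]], [8.0, -11.0, -3.0]))
# -> [2.0, 3.0, -1.0]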

Some systems cannot be reduced to triangular form, yet still have at least one valid solution: for example, if y had not occurred in L2 and L3 after the first step above, the algorithm would have been unable to reduce the system to triangular form. However, it would still have reduced the system to echelon form. In this case, the system does not have a unique solution, as it contains at least one free variable. The solution set can then be expressed parametrically (that is, in terms of the free variables, so that if values for the free variables are chosen, a solution will be generated).

In practice, one does not usually deal with the systems in terms of equations but instead makes use of the augmented matrix (which is also suitable for computer manipulations). For example:

Therefore, the Gaussian Elimination algorithm applied to the augmented matrix begins with:

which, at the end of the first part (Gaussian elimination, zeros only under the leading 1) of the algorithm, looks like this:

That is, it is in row echelon form.

At the end of the algorithm, if the Gauss-Jordan elimination (zeros under and above the leading 1) is applied:

That is, it is in reduced row echelon form, or row canonical form.
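The difference between the two forms can be sketched with SymPy; the augmented matrix below is an illustrative example (the same coefficients as in the Python sketch above), not the one from the original figures. Matrix.echelon_form() produces zeros only below the pivots (without necessarily scaling them to 1), while Matrix.rref() clears the entries above the pivots as well.

from sympy import Matrix

# Illustrative augmented matrix [A | b] for a 3x3 system.
M = Matrix([[2, 1, -1, 8],
            [-3, -1, 2, -11],
            [-2, 1, 2, -3]])

print(M.echelon_form())   # row echelon form: zeros below the pivots only
print(M.rref()[0])        # reduced row echelon form: zeros above and below the pivots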

Example of the Gauss-Jordan (G/J) method

(To solve a system of linear equations)

One simple example of G/J row operations follows: a system of three linear equations is solved by pivoting three times, once on each diagonal position of its augmented matrix. Each pivoting consists of two kinds of row operations: P1 scales the pivot row so that the pivot entry becomes 1, and P2 adds multiples of the pivot row to the other rows so that every other entry in the pivot column becomes 0. (The matrices for each step were shown as figures and are not reproduced here.)

Step 1. Write the first augmented matrix and pivot on the 1 in the 1-1 position, performing the row operations of the first pivoting.

Step 2. Next, pivot on the 5 in the 2-2 position: perform P1 on the element in the 2-2 position, then perform P2 to clear the rest of the second column. This completes the second pivoting.

Step 3. Now pivot on the -7 in the 3-3 position: using P1, the -7 is changed to 1, and P2 then clears the rest of the third column. The result of the third (and last) pivoting has the 3×3 identity matrix as the coefficient block.

Finally, re-writing the final matrix as equations gives the solution to the original system.
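The two pivoting operations can be written out as a short sketch in plain Python (the augmented matrix at the bottom is an arbitrary example chosen so that no row exchanges are needed; it is not the system from the figures above).

def pivot(M, r, c):
    """Pivot the augmented matrix M (a list of rows) on the entry in row r, column c.

    P1: divide row r by the pivot so that M[r][c] becomes 1.
    P2: subtract multiples of row r from every other row so that column c
        becomes 0 everywhere except in row r.
    """
    p = M[r][c]
    M[r] = [entry / p for entry in M[r]]                      # P1
    for i in range(len(M)):                                   # P2
        if i != r:
            m = M[i][c]
            M[i] = [a - m * b for a, b in zip(M[i], M[r])]

# Illustrative system: x + y + z = 6, 2x + 3y + z = 11, x + 2y + 3z = 14
M = [[1.0, 1.0, 1.0, 6.0],
     [2.0, 3.0, 1.0, 11.0],
     [1.0, 2.0, 3.0, 14.0]]

for k in range(3):             # pivot on the 1-1, 2-2 and 3-3 positions in turn
    pivot(M, k, k)

print([row[3] for row in M])   # [1.0, 2.0, 3.0] -- the solution x, y, z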

Other applications

Finding the inverse of a matrix

Suppose A is a square matrix whose inverse is to be calculated. The identity matrix is augmented to the right of A, forming the block matrix B = [A | I]. Through application of elementary row operations and the Gaussian elimination algorithm, the left block of B can be reduced to the identity matrix I, which leaves A⁻¹ in the right block of B.

If the algorithm is unable to reduce A to triangular form, then A is not invertible.

General algorithm to compute ranks and bases

The Gaussian elimination algorithm can be applied to any matrix A. If we get “stuck” in a given column, we move to the next column. In this way, for example, some matrices can be transformed to a matrix that has a reduced row echelon form like

(the *’s are arbitrary entries). This echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 non-zero rows in T; the vector space spanned by the columns of A has a basis consisting of the first, third, fourth, seventh and ninth columns of A (the columns of A corresponding to the leading ones in T), and the *’s tell you how the other columns of A can be written as linear combinations of the basis columns.
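This can be checked with a short SymPy sketch; the small matrix below is an arbitrary example, not the matrix described above. Matrix.rref() returns the reduced row echelon form together with the indices of the pivot columns, so the rank is the number of pivots and the corresponding columns of the original matrix form a basis of its column space.

from sympy import Matrix

# Arbitrary example: the third column is the sum of the first two,
# so the rank is 2 and the pivot columns are 0 and 1.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

T, pivots = A.rref()     # reduced row echelon form and pivot column indices
print(pivots)            # (0, 1)
print(len(pivots))       # 2, the rank of A
print(A.rank())          # 2, computed directly, for comparison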

Analysis

Gaussian elimination to solve a system of n equations for n unknowns requires n(n+1)/2 divisions, (2n³ + 3n² - 5n)/6 multiplications, and (2n³ + 3n² - 5n)/6 subtractions,[3] for a total of approximately 2n³/3 operations. So it has a complexity of O(n³).
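These counts can be reproduced by instrumenting the elimination loops; the sketch below only counts operations rather than performing them, and is meant simply to check the formulas quoted above.

def operation_counts(n):
    """Count divisions, multiplications and subtractions needed to solve an
    n x n system by forward elimination followed by back-substitution."""
    div = mul = sub = 0
    # Forward elimination on the n x n coefficient matrix and the right-hand side.
    for k in range(n - 1):
        for i in range(k + 1, n):
            div += 1                  # one multiplier per row below the pivot
            mul += (n - k - 1) + 1    # update entries right of the pivot, plus b
            sub += (n - k - 1) + 1
    # Back-substitution.
    for i in range(n):
        mul += n - i - 1              # products with the already-known unknowns
        sub += n - i - 1
        div += 1                      # divide by the diagonal entry
    return div, mul, sub

n = 10
print(operation_counts(n))                                     # (55, 375, 375)
print(n * (n + 1) // 2, (2 * n**3 + 3 * n**2 - 5 * n) // 6)    # 55 375, matching the formulas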

This algorithm can be used on a computer for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using iterative methods. Specific methods exist for systems whose coefficients follow a regular pattern (see system of linear equations).

Gaussian elimination can be performed over any field.

Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable in practice if partial pivoting is used, as described below, even though there are examples for which it is unstable.
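Partial pivoting means that, before eliminating in column k, the remaining row whose entry in that column is largest in absolute value is swapped into the pivot position; this keeps the multipliers bounded by 1 in magnitude. A minimal sketch of the pivot choice, using the same plain-Python representation as the earlier elimination sketch:

def choose_pivot(A, k):
    """Return the index i >= k of the row whose entry in column k is largest in absolute value."""
    return max(range(k, len(A)), key=lambda i: abs(A[i][k]))

# Inside the forward-elimination loop, before computing the multipliers:
#   p = choose_pivot(A, k)
#   A[k], A[p] = A[p], A[k]
#   b[k], b[p] = b[p], b[k]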

Gauss-Jordan elimination

In linear algebra, Gauss-Jordan elimination is an algorithm for getting matrices in reduced row echelon form using elementary row operations. It is a variation of Gaussian elimination. Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards. Matrices containing zeros below each pivot are said to be in row echelon form. Gauss-Jordan elimination goes a step further by placing zeros above and below each pivot; such matrices are said to be in reduced row echelon form. Every matrix has a reduced row echelon form, and Gauss-Jordan elimination is guaranteed to find it.

It is named after Carl Friedrich Gauss and Wilhelm Jordan because it is a variation of Gaussian elimination as Jordan described in 1887. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss-Jordan elimination independently.[1]

Computer science’s complexity theory shows Gauss-Jordan elimination to have a time complexity of O(n³) for an n by n matrix (using Big O notation). This result means it is efficiently solvable for most practical purposes. As a result, it is often used in computer software for a diverse set of applications. However, it is often an unnecessary step past Gaussian elimination. Gaussian elimination shares Gauss-Jordan’s time complexity of O(n³) but is generally faster. Therefore, in cases in which achieving reduced row echelon form over row echelon form is unnecessary, Gaussian elimination is typically preferred.

Application to finding inverses

If Gauss-Jordan elimination is applied on a square matrix, it can be used to calculate the matrix’s inverse. This can be done by augmenting the square matrix with the identity matrix of the same dimensions and applying the following matrix operations:

If the original square matrix, A, is given by the following expression:

Then, after augmenting by the identity, the following is obtained:

By performing elementary row operations on the [AI] matrix until it reaches reduced row echelon form, the following is the final result:

The matrix augmentation can now be undone, which gives the following:

A matrix is non-singular (meaning that it has an inverse matrix) if and only if the identity matrix can be obtained using only elementary row operations.

Example of the Gauss-Jordan method

(Finding the inverse of a matrix)

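A minimal sketch of the whole procedure in plain Python (the 2×2 matrix at the bottom is the same illustrative matrix used in the small worked example near the start of this article, not one taken from the original figures): augment A with the identity, run Gauss-Jordan elimination, and read the inverse off the right block.

def invert(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].

    No row exchanges are performed, so a zero pivot raises ZeroDivisionError;
    a fuller implementation would use partial pivoting as in the Analysis section.
    """
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = M[k][k]
        M[k] = [entry / p for entry in M[k]]                      # scale the pivot row
        for i in range(n):
            if i != k:
                m = M[i][k]
                M[i] = [a - m * b for a, b in zip(M[i], M[k])]    # clear column k
    # The left block is now the identity; the right block is the inverse.
    return [row[n:] for row in M]

print(invert([[2.0, 1.0], [1.0, 1.0]]))   # [[1.0, -1.0], [-1.0, 2.0]]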
