Approximation of Solutions to Linear Systems Using the Jacobi and Gauss-Seidel Methods

When one has a linear system that needs to be solved, finding the solution directly is sometimes too time-consuming and computationally intensive, and an approximation must be used instead. The Jacobi and Gauss-Seidel methods are two very similar ways to approximate the solution to a linear system. This document will explain the Jacobi method first, then Gauss-Seidel.

Both methods simply rearrange the equations of the linear system and then repeatedly substitute guesses for the solution, starting from an initial guess, until the iterates converge to within some tolerance.

Jacobi Method

We will consider the Jacobi method while letting our system remain in matrix form for simplicity's sake. Suppose we have a linear system Ax = b that we wish to solve for x. For concreteness, consider a 3-by-3 system:

 

[ a11  a12  a13 ] [ x1 ]   [ b1 ]
[ a21  a22  a23 ] [ x2 ] = [ b2 ]
[ a31  a32  a33 ] [ x3 ]   [ b3 ]

 

We can split up the matrix A into D+R where D is a diagonal matrix, and R is an off-diagonal matrix. That is, A can be considered the sum of its diagonal and off-diagonal entries.

 

A = D + R

D = [ a11   0    0  ]      R = [  0   a12  a13 ]
    [  0   a22   0  ]          [ a21   0   a23 ]
    [  0    0   a33 ]          [ a31  a32   0  ]
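As an aside, MATLAB can form this split directly; a minimal sketch, assuming A is already defined:

% D: the diagonal part of A; R: everything else
D = diag(diag(A));
R = A - D;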

 

Now we can substitute this into our original equation and rearrange the terms.

 

Ax = b
(D + R)x = b
Dx = b - Rx
x = D^(-1)(b - Rx)

 

We see that in the process of moving terms, we required the calculation of D^(-1). While normally the calculation of the inverse of a matrix is a very computationally intensive task (and finding A^(-1) would pretty much give us our solution), we must remember that D is a diagonal matrix, and the inverse of a diagonal matrix is simply the diagonal matrix whose entries are the reciprocals of the original diagonal entries.
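In MATLAB this means D^(-1) is a one-liner rather than a call to inv(); a sketch, assuming A has no zeros on its diagonal:

% take the reciprocal of each diagonal entry and rebuild a diagonal matrix
Dinv = diag(1./diag(A));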

 

In our case:

D^(-1) = [ 1/a11    0      0   ]
         [   0    1/a22    0   ]
         [   0      0    1/a33 ]

and the rearranged equation becomes the iteration

x_(k+1) = D^(-1)(b - R*x_k)

 

Note that I made a distinction between the x on the left side (the new iterate x_(k+1)) and the x on the right side (the current guess x_k); this distinction drives the next step. Now that we have this equation set up, we need an initial guess for the solution x. The better the guess, the faster we will reach convergence. If no other guess is available, simply use the zero vector. If you know the relative orders of magnitude of the answer, that would make a good guess as well.
So let us take a guess of all zeros and substitute:

 

x_1 = D^(-1)(b - R*0) = D^(-1)b = [ b1/a11 ]
                                  [ b2/a22 ]
                                  [ b3/a33 ]

 

Now we take the result x_1 and substitute it back into our equation as our new guess (this guess is already slightly better than the zero vector). We repeat the process until it converges.

 

x_2 = D^(-1)(b - R*x_1)

 

After a while, the solution will converge to

 

x = A^(-1)b

 

Using MATLAB we can confirm that this is the answer by directly calculating A^(-1)b and comparing the results. You will see that they are indeed the same (or almost the same, depending on your tolerance for convergence).
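A quick sketch of that check, assuming A and b are in the workspace and x_jacobi (a name of my choosing) holds the converged iterate:

% direct solution; in practice A\b is preferred over inv(A)*b
x_direct = A\b;
% this difference should be on the order of the convergence tolerance
norm(x_direct - x_jacobi)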

One important thing to note is that the lower our tolerance, the more accurate the results will be, but the longer the computation will take. Programmatically, the way we check whether the results have converged is by calculating ||x_k - x_(k-1)|| (that is, the norm of the difference between our current and previous guesses). We keep iterating until the result of that calculation is less than our tolerance.

MATLAB Code for Jacobi Method

The following is a MATLAB function that calculates the approximate solution to a linear system using the Jacobi method.

jacobi.m

% A = Coefficient matrix
% b = Solution vector
% guess = initial guess for x
% abstol = absolute tolerance to converge at
% soln = n*1 vector of solutions for x
function [soln] = jacobi(A,b,guess,abstol)
% preconditions:
% 	size of A is n*n
% 	size of b is n*1
% 	The system converges

[numRows, numCols] = size(A);

% A = M_diag + M_off
% We populate M_diag with the reciprocals right away as an optimization
M_diag = zeros(numRows,numCols);
M_off = zeros(numRows,numCols);
for i=1:numRows
	for j=1:numCols
		if i==j
			M_diag(i,j) = 1/A(i,j);
		else
			M_off(i,j) = A(i,j);
		end
	end
end

% Precompute M_diag*b and M_diag*M_off (i.e. D^(-1)*b and D^(-1)*R)
r=M_diag*b;
G=M_diag*M_off;

% set difference to infinity as we iterate for the first time
diff = inf;

while diff>=abstol
	% Take one Jacobi step: x_(k+1) = D^(-1)*(b - R*x_k) = r - G*x_k
	soln=r-G*guess;
	% Compute difference
	diff = norm(guess-soln);
	% Update guess for next iteration
	guess=soln;
end
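A usage sketch (the numbers here are invented for illustration; the matrix is strictly diagonally dominant, so the iteration is guaranteed to converge):

A = [4 1 1; 1 5 2; 1 2 6];
b = [6; 8; 9];
x = jacobi(A, b, zeros(3,1), 1e-10);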

Gauss-Seidel Method

The Gauss-Seidel method (referred to as the GS method from this point on) is almost identical to the Jacobi method. To explain it simply, we will consider the system as a set of equations instead of a matrix. From the previous section, we have the following:

 

a11*x1 + a12*x2 + a13*x3 = b1
a21*x1 + a22*x2 + a23*x3 = b2
a31*x1 + a32*x2 + a33*x3 = b3

 

We can rearrange each of these equations as follows:

 

x1 = (b1 - a12*x2 - a13*x3)/a11
x2 = (b2 - a21*x1 - a23*x3)/a22
x3 = (b3 - a31*x1 - a32*x2)/a33

 

Basically, we have just isolated x1, x2, and x3, respectively, in each of the equations. Note that this is analogous to rearranging the terms the same way we did in the Jacobi method. The only difference now is that we use the most recent result for each x to calculate the next. That is, suppose our initial guess is the zero vector. We then substitute x2 = 0 and x3 = 0 into our equation for x1 to get

 

x1 = (b1 - a12*0 - a13*0)/a11 = b1/a11

 

Now when we calculate x2 during this first iteration, instead of substituting zero for x1 we can substitute the x1 = b1/a11 we just computed, since we already have that result. That is:

 

x2 = (b2 - a21*x1 - a23*0)/a22 = (b2 - a21*(b1/a11))/a22

 

Now when we calculate x3, we use the x1 and x2 we have already acquired instead of using the guess vector at all. That is:

 

x3 = (b3 - a31*x1 - a32*x2)/a33

 

So after our first iteration our result is

 

x_1 = [ b1/a11                     ]
      [ (b2 - a21*x1)/a22          ]
      [ (b3 - a31*x1 - a32*x2)/a33 ]

 

We can keep carrying out the substitutions as required, until convergence is met.
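For completeness, the GS iteration can be sketched in MATLAB in the same style as jacobi.m above (this is a minimal sketch of my own, not tested code; it assumes the same preconditions as jacobi.m):

gauss_seidel.m

% A = Coefficient matrix
% b = Solution vector
% guess = initial guess for x
% abstol = absolute tolerance to converge at
% soln = n*1 vector of solutions for x
function [soln] = gauss_seidel(A,b,guess,abstol)

n = length(b);
soln = guess;
% set difference to infinity as we iterate for the first time
diff = inf;

while diff>=abstol
	prev = soln;
	for i=1:n
		% sum A(i,j)*soln(j) over all j except j==i;
		% soln already holds the updated entries for j < i
		sigma = A(i,:)*soln - A(i,i)*soln(i);
		soln(i) = (b(i) - sigma)/A(i,i);
	end
	% Compute difference against the previous iterate
	diff = norm(soln-prev);
end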

Conclusion

Let’s compare the result for x after the first iteration of both methods to the final answer:

 

Jacobi:        x_1 = [ b1/a11 ;  b2/a22            ;  b3/a33                     ]
Gauss-Seidel:  x_1 = [ b1/a11 ;  (b2 - a21*x1)/a22 ;  (b3 - a31*x1 - a32*x2)/a33 ]
Exact:         x   = A^(-1)b

 

We can see that after just one iteration, the GS results for x2 and x3 are already closer to the solution than the Jacobi results, because GS uses the results of the previous calculations within the same iteration. We can conclude that the GS method will generally require fewer iterations to reach convergence than the Jacobi method. This does not necessarily mean, however, that it will be better in all situations; for instance, the Jacobi updates for all of the x's can be computed in parallel, whereas the GS updates within an iteration must be done one after another.
