SYLLABUS Previous: 3.4 Implementation and solution
Up: 3 FINITE ELEMENT METHOD
Next: 3.6 Variational inequalities
3.5 Linear solvers
Slide: direct solvers (LU factorization); iterative solvers from a matrix splitting (Jacobi, Gauss-Seidel, SOR); preconditioning.
Writing efficient solvers for linear systems is a complete chapter of
numerical analysis and involves much more than can be said in this
section. As a user of software libraries such as
Netlib
[5] or
PETSc
[3],
it is however sufficient to have only a rough idea of what types of
solvers exist and how to use them.

Direct LU factorization.
 Remember that you should never calculate a matrix inverse to solve
a linear system; the solution can be calculated with only one third
of the number of operations by decomposing the matrix into
lower (L) and upper (U) triangular parts [3.9],

(1)   A = L U,

with O(n^3) arithmetic operations,
and later use a forward and a backward substitution of the
right-hand side vector,

(2)   L y = b,   U x = y,

involving O(n^2) operations.
A particularly simple version has been implemented in JBONE for
tridiagonal matrices, where the
solve3()
method computes the solution directly in O(n) operations; the
first and the last equations are simply eliminated ``by hand'' to take
care of the periodicity.
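For illustration, the plain (non-periodic) tridiagonal case can be sketched as the classic Thomas algorithm. This Python sketch uses illustrative names and is not the JBONE solve3() implementation, which additionally eliminates the periodic corner terms:

```python
# Thomas algorithm: O(n) LU solve for a tridiagonal system.
# a = sub-diagonal (a[0] unused), b = diagonal, c = super-diagonal
# (c[-1] unused), d = right-hand side.
def solve_tridiagonal(a, b, c, d):
    n = len(d)
    cp = [0.0] * n   # modified super-diagonal (forward elimination)
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminated pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n                            # backward substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Note that no pivoting is performed, which is safe only for diagonally dominant systems such as those produced by the finite element discretizations considered here.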
Many different implementations of one and the same LU factorization exist,
taking advantage of the specific pattern of the matrix A; before
you start writing your own, check
Netlib
and it is likely that you
will find a routine tailored for your application.
When the matrix has more than three diagonals, it is important to allow for
pivoting during the factorization process, interchanging rows and
columns in the matrix to avoid divisions by small numbers on the diagonal.
For particularly large problems, note the possibility of storing the matrix
on disk in a frontal implementation of the same algorithm.
When memory consumption, calculation time or parallelization become an
issue in 2, 3 or even higher dimensions, it might be useful to consider
iterative solvers as an alternative to the direct LU decomposition.
The idea behind them is best understood from a splitting of the
matrix into a sum of a diagonal part D and strict lower (L)
and upper (U) triangular parts,

(3)   A = D + L + U.

An iterative solution of the problem

(4)   A x = b

is then sought, where x^{(m)} is the m-th
approximation of the solution having components
x_i^{(m)}, i = 1, \dots, n.

Iterative Jacobi.
 The simplest form of iteration consists in inverting only the diagonal,

(5)   x_i^{(m+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \neq i} a_{ij} x_j^{(m)} \Big), \qquad i = 1, \dots, n,

which is equivalent in matrix notation to

(6)   x^{(m+1)} = D^{-1} \big( b - (L + U) x^{(m)} \big).

Starting from an arbitrary initial guess
x^{(0)}, simple
elliptic problems will, albeit slowly, converge to a stationary
point which is the solution of the linear problem.
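One Jacobi sweep can be sketched as follows; a minimal Python illustration with a dense list-of-lists matrix, not a library implementation:

```python
# One Jacobi sweep: invert only the diagonal,
#   x_new[i] = (b[i] - sum_{j != i} A[i][j] * x[j]) / A[i][i].
def jacobi_step(A, b, x):
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

# Iterate from an arbitrary initial guess x0 until stationary.
def jacobi_solve(A, b, x0, tol=1e-12, max_iter=500):
    x = list(x0)
    for _ in range(max_iter):
        x_new = jacobi_step(A, b, x)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x
```

Because every x_new[i] is computed from the old vector only, the sweep is trivially parallel, which is part of the method's appeal despite its slow convergence.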

Gauss-Seidel.
 As the equations i = 1, \dots, n
are updated one after the other, it is
possible to immediately use the updated components
x_j^{(m+1)}, j < i,
using forward Gauss-Seidel iterations

(7)   x_i^{(m+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j < i} a_{ij} x_j^{(m+1)} - \sum_{j > i} a_{ij} x_j^{(m)} \Big)

or

(8)   (D + L) x^{(m+1)} = b - U x^{(m)},

where the calculation is carried out with the equation index increasing
from i = 1 to n. The reverse is also possible and is called
backward Gauss-Seidel,

(9)   (D + U) x^{(m+1)} = b - L x^{(m)}.
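A forward Gauss-Seidel sweep differs from Jacobi only in that updated components are reused immediately; a minimal Python sketch:

```python
# Forward Gauss-Seidel sweep: components x[j], j < i, already hold the
# new iterate when row i is processed, so the loop updates in place.
def gauss_seidel_step(A, b, x):
    n = len(b)
    x = list(x)                      # work on a copy of the iterate
    for i in range(n):               # increasing index: forward sweep
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x
```

The backward variant is obtained by replacing `range(n)` in the outer loop with `reversed(range(n))`.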

Successive over-relaxations (SOR)
 are obtained when one of the Gauss-Seidel iterations (3.5#eq.8 or
3.5#eq.9) is linearly combined with the value of the previous step,

(10)   x^{(m+1)} = \omega x_{GS}^{(m+1)} + (1 - \omega) x^{(m)},

using the parameter 0 < \omega < 1
for an under-relaxation or rather 1 < \omega < 2
for an over-relaxation.
A symmetric SOR (SSOR) scheme is obtained with a combination
of a forward sweep followed by a backward sweep,

(11)   x^{(m+1/2)} = \omega x_{fGS}^{(m+1/2)} + (1 - \omega) x^{(m)},
       x^{(m+1)} = \omega x_{bGS}^{(m+1)} + (1 - \omega) x^{(m+1/2)},

where x_{fGS} and x_{bGS} denote the forward (3.5#eq.8) and
backward (3.5#eq.9) Gauss-Seidel values.
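One SOR sweep simply blends each Gauss-Seidel update with the previous value; a minimal Python sketch with `omega` as the relaxation parameter:

```python
# SOR sweep: x[i] <- omega * x_GS + (1 - omega) * x[i], where x_GS is
# the forward Gauss-Seidel value; 0 < omega < 1 under-relaxes and
# 1 < omega < 2 over-relaxes.
def sor_step(A, b, x, omega=1.5):
    n = len(b)
    x = list(x)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x_gs = (b[i] - s) / A[i][i]          # Gauss-Seidel value
        x[i] = omega * x_gs + (1.0 - omega) * x[i]
    return x
```

With omega = 1 the sweep reduces exactly to forward Gauss-Seidel; an SSOR step would follow this with the same blend applied in a backward sweep.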

Preconditioning / approximate inverse.
 Realizing that all the methods above are of the form
M x^{(m+1)} = N x^{(m)} + b
where A = M - N, the linear problem
A x = b
can be diagonalized approximately using a preconditioning

(12)   M^{-1} A x = M^{-1} b,

where the preconditioner matrix M
is one of
M = D (Jacobi), M = D + L (forward Gauss-Seidel), M = D + U (backward Gauss-Seidel).
The product M^{-1} A x
is calculated without ever forming the inverse, calculating first
y = A x and then solving the linear
system
M z = y.
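The recipe above, forming y = A x first and then solving M z = y, can be sketched for the simplest choice M = D (the Jacobi preconditioner); names are illustrative:

```python
# Apply M^{-1} A to a vector x without ever forming an inverse:
# first y = A x, then solve M z = y.  Here M = D (the Jacobi
# preconditioner), so the solve is just a division by the diagonal.
def apply_preconditioned_matrix(A, x):
    n = len(x)
    y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # y = A x
    return [y[i] / A[i][i] for i in range(n)]                      # solve D z = y
```

For M = D + L or M = D + U the second step becomes a forward or backward triangular substitution, still far cheaper than any explicit inverse.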
All these variants of Gauss-Seidel are simple and work fairly well for
reasonably sized elliptic problems, making the
much more complicated multigrid approach attractive only for
large problems that have to be solved many times.
Hyperbolic (wave) problems and non-Hermitian operators are more
difficult to solve iteratively and remain a matter of research
[18], [17].
Efficient methods in general rely on a suitable preconditioning
and the rapid convergence of Krylov or Lanczos methods, such as the
generalized minimal residual (GMRES) and the
quasi-minimal residual (QMR) methods [3.9].
Note that the matrix-vector multiplications that are used to generate
new iterates can be performed in sparse format, i.e. using only
those matrix elements that are different from zero. This is the reason
why iterative methods can potentially be much more economical than
direct solvers, if they converge.
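The sparse matrix-vector product at the heart of these Krylov iterations can be sketched in CSR (compressed sparse row) storage; a minimal Python illustration:

```python
# Sparse matrix-vector product y = A x in CSR format: only the nonzero
# entries are stored and touched.  values holds the nonzeros row by row,
# col_idx their column indices, and row_ptr[i]:row_ptr[i+1] delimits row i.
def csr_matvec(values, col_idx, row_ptr, x):
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y
```

For a finite element matrix with a bounded number of nonzeros per row, this costs O(n) per iteration instead of the O(n^2) of a dense product, which is exactly the economy the text refers to.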