SYLLABUS  Previous: 3.4 Implementation and solution  Up: 3 FINITE ELEMENT METHOD  Next: 3.6 Variational inequalities

3.5 Linear solvers

Slide : [ Direct LU || Iterative splitting : Jacobi - Gauss-Seidel - SOR - Preconditioning ]

Writing efficient solvers for linear systems is a complete chapter of numerical analysis and involves much more than can be covered in this section. As a user of software libraries such as Netlib [5] or PETSc [3], it is however sufficient to have a rough idea of what types of solvers exist and how to use them.

* Direct LU factorization.
Remember that you should never compute a matrix inverse to solve a linear system; the solution can be obtained with only about one third of the number of operations by decomposing the matrix into Lower and Upper triangular factors [3.9], using $ \frac{2}{3}N^3$ arithmetic operations $ (*,+)$

$\displaystyle \mathbf{A}\cdot\mathbf{x} = (\mathbf{L}\cdot\mathbf{U})\cdot\mathbf{x} = \mathbf{L}(\mathbf{U}\cdot\mathbf{x}) =\mathbf{b}$ (1)

and then apply a forward and a backward substitution to the right hand side vector, involving $ 2N^2$ operations

$\displaystyle y_1 = \frac{b_1}{L_{11}} \;,\qquad y_i = \frac{1}{L_{ii}}\left(b_i - \sum_{j<i} L_{ij}\, y_j\right) ,\;\;\; i=2,\ldots,N$

$\displaystyle x_N = \frac{y_N}{U_{NN}} \;,\qquad x_i = \frac{1}{U_{ii}}\left(y_i - \sum_{j>i} U_{ij}\, x_j\right) ,\;\;\; i=N\!-\!1,\ldots,1$ (2)
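The factorization (1) followed by the two substitutions (2) can be sketched in plain Python as below; this is a minimal Doolittle variant without pivoting, for illustration only (a production code should pivot and call a library routine).

```python
# Sketch of LU factorization (Doolittle, unit diagonal in L, no pivoting)
# followed by the forward/backward substitutions of Eq. (2).

def lu_solve(A, b):
    """Solve A.x = b by decomposing A = L.U, then L.y = b, then U.x = y."""
    N = len(A)
    L = [[0.0] * N for _ in range(N)]
    U = [[0.0] * N for _ in range(N)]
    for i in range(N):
        L[i][i] = 1.0                         # unit diagonal in L
        for j in range(i, N):                 # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, N):             # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # forward substitution: L.y = b
    y = [0.0] * N
    for i in range(N):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    # backward substitution: U.x = y
    x = [0.0] * N
    for i in range(N - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, N))) / U[i][i]
    return x
```

The factorization costs $ \mathcal{O}(N^3)$ and the two substitution sweeps $ \mathcal{O}(N^2)$, so a factorization can be reused for many right hand sides.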

A particularly simple version has been implemented in JBONE for tri-diagonal matrices, where the solve3() method computes the solution directly in $ \mathcal{O}(N)$ operations; the first and the last equations are simply eliminated ``by hand'' to take care of the periodicity. Many implementations of the same LU-factorization exist, each taking advantage of a specific pattern in the matrix $ \mathbf{A}$ ; before you start writing your own, check Netlib, where it is likely that you will find a routine tailored to your application.
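The $ \mathcal{O}(N)$ idea behind a tri-diagonal solver such as solve3() can be sketched as a single elimination sweep followed by back substitution (the Thomas algorithm); the version below handles a non-periodic system, and the names a, d, c are illustrative assumptions, not the JBONE API.

```python
# Thomas algorithm sketch for a tri-diagonal system:
# a = sub-diagonal, d = main diagonal, c = super-diagonal, b = right hand side.

def thomas(a, d, c, b):
    """Solve a non-periodic tri-diagonal system in O(N) operations."""
    N = len(d)
    cp = [0.0] * N                     # modified super-diagonal
    bp = [0.0] * N                     # modified right hand side
    cp[0] = c[0] / d[0]
    bp[0] = b[0] / d[0]
    for i in range(1, N):              # forward elimination sweep
        m = d[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < N - 1 else 0.0
        bp[i] = (b[i] - a[i] * bp[i - 1]) / m
    x = [0.0] * N                      # back substitution sweep
    x[N - 1] = bp[N - 1]
    for i in range(N - 2, -1, -1):
        x[i] = bp[i] - cp[i] * x[i + 1]
    return x
```

Periodic boundary conditions add corner entries that couple the first and last equations; as the text notes, these can be eliminated by hand before the sweeps.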

When the matrix has more than three diagonals, it is important to allow for pivoting during the factorization process, interchanging rows and columns of the matrix to avoid divisions by small numbers on the diagonal. For particularly large problems, note the possibility of storing the matrix on disk in a frontal implementation of the same algorithm.

When memory consumption, calculation time or parallelization become an issue in 2, 3 or even higher dimensions, it might be useful to consider iterative solvers as an alternative to the direct LU decomposition. The idea behind them is best understood from a splitting of the matrix into a sum of a diagonal part $ \mathbf{D}$ , a strictly lower triangular part $ \mathbf{-E}$ and a strictly upper triangular part $ \mathbf{-F}$

$\displaystyle \mathbf{A} = \mathbf{D} - \mathbf{E} - \mathbf{F}$ (3)

An iterative solution of the problem

$\displaystyle \left(\mathbf{b} - \mathbf{A}\cdot\mathbf{x_{k+1}}\right)_i = 0\;, \;\;\; i=1,... N$ (4)

is then sought, where $ \mathbf{x_{k+1}}=(\xi_i^{(k+1)})$ is the $ (k+1)$ -th approximation of the solution with components $ i=1,\ldots,N$ .

* Iterative Jacobi.
The simplest form of iteration consists in inverting only the diagonal

$\displaystyle a_{ii}\xi_i^{(k+1)} = \beta_i - \sum_{j\neq i} a_{ij}\xi_j^{(k)}$ (5)

which is equivalent in matrix notation to

$\displaystyle \mathbf{D}\cdot\mathbf{x_{k+1}} = (\mathbf{E} + \mathbf{F})\cdot\mathbf{x_{k}} + \mathbf{b}$ (6)

Starting from an arbitrary initial guess $ \mathbf{x_{0}}$ , the iteration for simple elliptic problems converges, albeit slowly, to a stationary point which is the solution of the linear problem.
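A minimal sketch of the Jacobi iteration (6) in plain Python (the function name is illustrative, not from any library): only the diagonal is inverted, and all components are updated simultaneously from the previous iterate.

```python
# Jacobi iteration, Eq. (6): D.x_{k+1} = (E + F).x_k + b, i.e. each
# component is updated using only values from the previous sweep.

def jacobi(A, b, x0, sweeps):
    N = len(A)
    x = list(x0)
    for _ in range(sweeps):
        x_new = [0.0] * N
        for i in range(N):
            s = sum(A[i][j] * x[j] for j in range(N) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]   # invert the diagonal a_ii only
        x = x_new                              # replace the whole vector at once
    return x
```

Because each sweep only needs the previous iterate, the components can be updated in any order or in parallel, which is one of the practical attractions of Jacobi despite its slow convergence.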

* Gauss-Seidel.
As the equations $ i=1,...N$ are updated one after another, it is possible to use the updated components $ \xi_j^{(k+1)}$ immediately, yielding the forward Gauss-Seidel iteration

$\displaystyle a_{ii}\xi_i^{(k+1)} + \sum_{j<i} a_{ij}\xi_j^{(k+1)} = \beta_i - \sum_{j>i} a_{ij}\xi_j^{(k)}$ (7)


$\displaystyle (\mathbf{D} - \mathbf{E})\cdot\mathbf{x_{k+1}} = \mathbf{F}\cdot\mathbf{x_{k}} + \mathbf{b}$ (8)

where the calculation is carried out with the equation index increasing from $ i=1$ to $ N$ . The reverse ordering is also possible and is called backward Gauss-Seidel

$\displaystyle (\mathbf{D} - \mathbf{F})\cdot\mathbf{x_{k+1}} = \mathbf{E}\cdot\mathbf{x_{k}} + \mathbf{b}$ (9)
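Both sweeps (8) and (9) can be sketched with a single plain-Python function (illustrative names, not a library API); the only change from Jacobi is that the solution vector is overwritten in place, so updated components are reused within the same sweep.

```python
# Gauss-Seidel iterations: forward sweep i = 1..N implements Eq. (8),
# backward sweep i = N..1 implements Eq. (9).

def gauss_seidel(A, b, x0, sweeps, backward=False):
    N = len(A)
    x = list(x0)
    order = range(N - 1, -1, -1) if backward else range(N)
    for _ in range(sweeps):
        for i in order:
            s = sum(A[i][j] * x[j] for j in range(N) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # overwrite in place: new values reused
    return x
```

The in-place update typically converges about twice as fast as Jacobi on simple elliptic problems, at the price of a sequential dependence between the components.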

* Successive over-relaxation (SOR).
This scheme is obtained when one of the Gauss-Seidel iterations (8) or (9) is linearly combined with the value from the previous step

$\displaystyle \mathbf{x_{k+1}} = \omega \mathbf{x_{k+1}^{GS}} +(1-\omega) \mathbf{x_{k}}$ (10)

using the parameter $ \omega\in[0;1]$ for under-relaxation or $ \omega\in[1;2]$ for over-relaxation. A symmetric-SOR (SSOR) scheme is obtained by combining a forward with a backward sweep
$\displaystyle \left\{\begin{array}{l}
(\mathbf{D} -\omega\mathbf{E})\cdot\mathbf{x_{k+1/2}} = \left[\omega\mathbf{F} + (1-\omega)\mathbf{D}\right]\cdot\mathbf{x_{k}} + \omega\mathbf{b} \\
(\mathbf{D} -\omega\mathbf{F})\cdot\mathbf{x_{k+1}} = \left[\omega\mathbf{E} + (1-\omega)\mathbf{D}\right]\cdot\mathbf{x_{k+1/2}} + \omega\mathbf{b}
\end{array}\right.$     (11)
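The relaxation (10) amounts to one extra line on top of a Gauss-Seidel sweep; a minimal plain-Python sketch (illustrative names, not a library API) is

```python
# SOR, Eq. (10): each Gauss-Seidel value is blended with the previous
# value through the relaxation parameter omega (omega = 1 recovers
# plain Gauss-Seidel).

def sor(A, b, x0, omega, sweeps):
    N = len(A)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(N):
            s = sum(A[i][j] * x[j] for j in range(N) if j != i)
            x_gs = (b[i] - s) / A[i][i]                   # Gauss-Seidel value
            x[i] = omega * x_gs + (1.0 - omega) * x[i]    # relaxed update
    return x
```

For symmetric positive definite matrices the iteration converges for any $ \omega\in(0;2)$ , and a well chosen over-relaxation can reduce the number of sweeps dramatically.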

* Preconditioning / approximate inverse.
Realizing that all the methods above are of the form $ \mathbf{x_{k+1}} = \mathbf{G}\cdot\mathbf{x_{k}} +\mathbf{f}$ where $ \mathbf{G}=\mathbf{1}-\mathbf{M^{-1}}\cdot\mathbf{A}$ , the linear problem can be approximately diagonalized using a preconditioning

$\displaystyle \mathbf{M^{-1}}\cdot\mathbf{A}\cdot\mathbf{x} =\mathbf{M^{-1}}\cdot\mathbf{b}$ (12)

where the preconditioner matrix $ \mathbf{M}$ is one of
$\displaystyle \mathbf{M^{Jacobi}} = \mathbf{D} \hspace{1cm} \mathbf{M^{SOR}} =\frac{1}{\omega}(\mathbf{D}-\omega\mathbf{E})$

$\displaystyle \mathbf{M^{GS}} = \mathbf{D}-\mathbf{E} \hspace{1cm} \mathbf{M^{SSOR}}=\frac{1}{\omega(2-\omega)}(\mathbf{D}-\omega\mathbf{E})\cdot\mathbf{D^{-1}}\cdot(\mathbf{D}-\omega\mathbf{F})$

The product $ \mathbf{s}=\mathbf{M^{-1}}\cdot\mathbf{A}\cdot\mathbf{x}$ is calculated without ever forming the inverse: first compute $ \mathbf{r}= \mathbf{A}\cdot\mathbf{x}$ and then solve the linear system $ \mathbf{M}\cdot\mathbf{s}=\mathbf{r}$ .
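A plain-Python sketch of this two-step evaluation for the Gauss-Seidel preconditioner $ \mathbf{M^{GS}}=\mathbf{D}-\mathbf{E}$ (the lower triangle of $ \mathbf{A}$ , so $ \mathbf{M}\cdot\mathbf{s}=\mathbf{r}$ is a cheap forward substitution; function name is illustrative):

```python
# Compute s = M^{-1}.A.x without forming M^{-1}: first r = A.x, then
# solve M.s = r with M = D - E, i.e. the lower triangular part of A
# including its diagonal.

def precond_apply(A, x):
    N = len(A)
    r = [sum(A[i][j] * x[j] for j in range(N)) for i in range(N)]  # r = A.x
    s = [0.0] * N
    for i in range(N):              # forward substitution with M = D - E
        s[i] = (r[i] - sum(A[i][j] * s[j] for j in range(i))) / A[i][i]
    return s
```

The same pattern works for any of the preconditioners above: the only requirement is that solving $ \mathbf{M}\cdot\mathbf{s}=\mathbf{r}$ is much cheaper than solving the original system.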

All these variants of Gauss-Seidel are simple and work fairly well for reasonably sized (e.g. $ 20\times20$ ) elliptic problems, making the much more complicated multigrid approach attractive only for large problems that have to be solved many times. Hyperbolic (wave-) problems and non-Hermitian operators are more difficult to solve iteratively and remain a matter of research [18],[17]. Efficient methods generally rely on a suitable preconditioning and on the rapid convergence of Krylov or Lanczos methods, such as the generalized minimal residual (GMRES) and the quasi-minimal residual (QMR) methods [3.9].

Note that the matrix-vector multiplications used to generate new iterates can be performed in sparse format, i.e. using only those matrix elements that are different from zero. This is why iterative methods can potentially be much more economical than direct solvers, provided that they converge.
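As an illustration, a matrix-vector product in compressed sparse row (CSR) storage touches only the nonzero entries, so one iteration costs $ \mathcal{O}(\mathrm{nnz})$ instead of $ \mathcal{O}(N^2)$ (the storage layout sketched here is the standard CSR convention, not tied to any particular library):

```python
# Sparse matrix-vector product y = A.x in CSR format:
#   vals    - nonzero values, row by row
#   cols    - column index of each value
#   row_ptr - row i occupies vals[row_ptr[i]:row_ptr[i+1]]

def csr_matvec(vals, cols, row_ptr, x):
    N = len(row_ptr) - 1
    y = [0.0] * N
    for i in range(N):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[cols[k]]   # only nonzeros of row i are visited
    return y
```

For a finite element matrix with a bounded number of nonzeros per row, nnz grows only linearly with $ N$ , which is what makes each iteration so cheap.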


Copyright © Lifelong-learners, March 24th, 2019