

1.4.1 Convergence, Richardson extrapolation

One of the most important properties of a numerical approximation based on a finite number of data points is that it converges to the exact value as more information is gathered (exercise 1.01). This convergence can be defined locally, at any point in space and time (more restrictive), or by monitoring a global quantity (more permissive).

The convergence rate can be estimated experimentally from a geometric sequence of approximations $ \{f^{(i)}\}$ obtained by successively refining the numerical resolution $ i=N,QN,Q^2N,Q^3N$, for example by repeatedly doubling the resolution when $ Q=2$:

$\displaystyle r=\frac{\ln\left[(f^{(N)}-f^{(QN)})/(f^{(QN)}-f^{(Q^2N)})\right]} {\ln[Q]}$ (1.4.1#eq.1)
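As a minimal sketch, the rate (1.4.1#eq.1) can be estimated in Python from three successive refinements; the helper name convergence_rate and the sample values (taken from the $ (2N)^{-\alpha}$ example developed below) are illustrative choices, not part of the course material.

```python
import math

def convergence_rate(f_N, f_QN, f_Q2N, Q=2):
    """Estimate the convergence rate r from three successive
    refinements f^(N), f^(QN), f^(Q^2 N), following (1.4.1#eq.1)."""
    return math.log((f_N - f_QN) / (f_QN - f_Q2N)) / math.log(Q)

# Illustrative values f^(N) = (2N)^(-alpha) with alpha = 0.5, N = 10:
# the measured rate recovers r = alpha = 0.5.
alpha, N = 0.5, 10
print(convergence_rate((2*N)**-alpha, (4*N)**-alpha, (8*N)**-alpha))
```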

Knowing the convergence rate $ r$ , it is possible to further improve the accuracy of the numerical approximation by using a so-called Richardson extrapolation

$\displaystyle F^*=F_2 + \frac{F_2-F_1}{Q^r-1}$ (1.4.1#eq.2)

where a more accurate solution $ F^*$ is obtained from a coarse evaluation $ F_1$ and its refinement $ F_2$, computed with a resolution $ Q$ times higher.
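A corresponding sketch of (1.4.1#eq.2), with the illustrative helper name richardson. Applied to the $ (2N)^{-\alpha}$ example below, where $ F_1=(2N)^{-\alpha}$, $ F_2=(4N)^{-\alpha}$ and $ r=\alpha$, the extrapolation recovers the exact limit $ f(0)=0$ up to round-off.

```python
def richardson(F1, F2, r, Q=2):
    """Richardson extrapolation (1.4.1#eq.2): combine an evaluation F1
    and its refinement F2 (resolution Q times finer) using the
    measured convergence rate r."""
    return F2 + (F2 - F1) / (Q**r - 1)

# With F1 = (2N)^(-alpha), F2 = (4N)^(-alpha) and r = alpha, the
# extrapolated value is 0 up to round-off, i.e. the exact limit f(0).
alpha, N = 0.5, 10
print(richardson((2*N)**-alpha, (4*N)**-alpha, r=alpha))
```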

Let's take an example and approximate the function $ f(x)=x^\alpha$ on a grid in $ [0;1]$, simply by sampling the value of the function in the middle of the $ N$ intervals, at $ x_n=(n+1/2)/N$ with $ n=0,\dots,N-1$. The approximation for the value at the origin $ f(0)\approx f^{(N)}(x_0)=(2N)^{-\alpha }$ converges to zero provided that $ \alpha>0$, which is best visualized in a convergence study using a lin-lin plot (figure 1.4.1#fig.1, left).

Figure 1.4.1#fig.1: Convergence studies for the approximation $ f(0)\approx f^{(N)}(x_0)=(2N)^{-\alpha }$ showing the same data in a lin-lin plot (left) and a log-log plot (right).
The convergence rate for the approximation measured at the origin, $ r=\ln\left[\left((2N)^{-\alpha}-(4N)^{-\alpha}\right)/\left((4N)^{-\alpha}-(8N)^{-\alpha}\right)\right]/\ln[2]=\alpha$, drops as $ \alpha\rightarrow 0$, as shown by the lower slope in the log-log plot (figure 1.4.1#fig.1, right). Not apparent in the log-log plot is that the approximation converges to a different value, $ f(0)\rightarrow 1$, when $ \alpha=0$.
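The numbers behind figure 1.4.1#fig.1 can be reproduced with a short Python sketch (no plotting, only the sampled values and the measured rate; the resolutions chosen here are arbitrary, and the $ \alpha=0$ case is skipped in the rate estimate because the differences vanish):

```python
import math

# f(0) ~ f^(N)(x_0) = (2N)^(-alpha): the measured rate equals alpha,
# and for alpha = 0 the sampled value stays at 1 instead of tending to 0.
for alpha in (1.0, 0.5, 0.25, 0.0):
    approx = [(2 * N) ** -alpha for N in (8, 16, 32, 64)]
    if approx[0] != approx[1]:
        r = math.log((approx[0] - approx[1])
                     / (approx[1] - approx[2])) / math.log(2)
        print(f"alpha={alpha:4.2f}  f^(N)={approx}  measured r={r:.2f}")
    else:
        print(f"alpha={alpha:4.2f}  f^(N)={approx}  (constant, rate undefined)")
```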

The mid-point rule (sect.3.3) can be used to approximate an integral by summing the area of small boxes, here to evaluate the global quantity $ \int_0^1 f(x)dx \approx \sum_{n=0}^{N-1} x_n^\alpha h$ with $ h=1/N$. Global convergence is achieved for $ \alpha>-1$; because of the weak singularity at $ x=0$, the convergence rate drops from the $ \mathcal{O}(h^2)$ expected from a quadrature of smooth functions to $ r\simeq 1.5, 0.5, 0.2$ when $ \alpha=0.5, -0.5, -0.8$.
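A similar Python sketch (the resolutions and the helper name midpoint are again illustrative) shows how the measured rate degrades towards roughly $ \alpha+1$ for the weakly singular integrand:

```python
import math

def midpoint(alpha, N):
    """Mid-point rule for the integral of x^alpha over [0,1],
    using N boxes of width h = 1/N sampled at x_n = (n+1/2)h."""
    h = 1.0 / N
    return sum(((n + 0.5) * h) ** alpha for n in range(N)) * h

# Measured rates come out near 2, 1.5, 0.5, 0.2 for these exponents.
for alpha in (2.0, 0.5, -0.5, -0.8):
    I1, I2, I3 = (midpoint(alpha, N) for N in (100, 200, 400))
    r = math.log((I1 - I2) / (I2 - I3)) / math.log(2)
    print(f"alpha={alpha:5.2f}  I~{I3:.5f}  exact={1/(alpha + 1):.5f}  r~{r:.2f}")
```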


      