
Iterative numerical solution

A solution of the finite difference equations can be obtained iteratively. An initial value of the field $u_{i,j}$ is guessed at all interior points of the domain. The field at grid point $(i,j)$ is then solved for under the assumption that the field at the neighboring grid points is known:

\begin{displaymath}u_{i,j}^{k+1} =
\frac{1}{4} \left( u_{i+1,j}^k + u_{i-1,j}^k + u_{i,j+1}^k + u_{i,j-1}^k
- h^2 S_{i,j}
\right)
\end{displaymath} (14)

This equation is applied in turn at every interior grid point of the domain $\Omega$, so that one full sweep updates $u_{i,j}$ over the entire grid. The index $k$ refers to the iteration number.

The Jacobi method uses a separate ``array'' to store the new values of the field $u_{i,j}$ at the interior grid points, so that only values from iteration $k$ enter the right-hand side of the update.
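As an illustration, a minimal NumPy sketch of one Jacobi sweep might read as follows; the names jacobi_step, u, S and h are assumptions of this example, not part of the original notes.

import numpy as np

def jacobi_step(u, S, h):
    # One Jacobi sweep, Eq. (14): the new field is built in a
    # separate array so that only iteration-k values appear on the
    # right-hand side.  u holds the field (boundaries included),
    # S the source term, h the grid spacing.
    u_new = u.copy()                      # preserves boundary values
    u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                - h**2 * S[1:-1, 1:-1])
    return u_new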

The Gauss-Seidel method instead replaces the values of the field within the same ``array'' as the sweep proceeds, so that already-updated neighbors enter the update. This corresponds to the following form:

\begin{displaymath}u_{i,j}^{k+1} =
\frac{1}{4} \left( u_{i+1,j}^k + u_{i-1,j}^{k+1} +
u_{i,j+1}^k + u_{i,j-1}^{k+1}
- h^2 S_{i,j}
\right)
\end{displaymath} (15)
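A corresponding sketch of one Gauss-Seidel sweep, with the same assumed names, updates the array in place; sweeping i and j in increasing order means that u[i-1, j] and u[i, j-1] already hold iteration k+1 values, as in Eq. (15).

def gauss_seidel_step(u, S, h):
    # One Gauss-Seidel sweep, Eq. (15): the field is overwritten
    # in the same array as the sweep proceeds.
    ni, nj = u.shape
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            u[i, j] = 0.25 * (u[i + 1, j] + u[i - 1, j]
                              + u[i, j + 1] + u[i, j - 1]
                              - h**2 * S[i, j])
    return u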

The iterations are repeated until convergence. This is checked at each iteration by calculating the largest change between the old and new fields over all interior points of the domain,

\begin{displaymath}diff = \max_{i,j} \left\vert u_{i,j}^{k+1} - u_{i,j}^{k} \right\vert
\end{displaymath} (16)

and requiring this difference to satisfy $diff \le \epsilon$, where $\epsilon$ is a small predefined tolerance.
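The stopping test of Eq. (16) might be implemented along the following lines; the driver name relax and the safety cap max_iter are assumptions of this sketch. Either jacobi_step or gauss_seidel_step from above can be passed as step.

import numpy as np

def relax(u, S, h, step, eps=1.0e-6, max_iter=10000):
    # Repeat relaxation sweeps until the largest change between
    # the old and new fields, Eq. (16), drops below eps.
    for k in range(max_iter):
        u_old = u.copy()
        u = step(u, S, h)
        diff = np.max(np.abs(u - u_old))
        if diff <= eps:
            break
    return u, diff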

This iterative scheme is unconditionally stable, i.e., it will converge to a solution. The accuracy of that solution is a separate issue; it depends on how well the numerical grid resolves the regions of high curvature of the field and on how well the grid covers the domain.

The Gauss-Seidel method converges faster than the Jacobi method. Yet it remains a notoriously slow method, often requiring a very large number of iterations to achieve convergence. A better rate of convergence is achieved by the Successive Over-Relaxation (SOR) method. In this approach, the old and new fields are mixed via a parameter $\omega$:

\begin{displaymath}u_{i,j}^{k+\frac{1}{2}} =
\frac{1}{4} \left( u_{i+1,j}^k + u_{i-1,j}^{k+\frac{1}{2}} +
u_{i,j+1}^k + u_{i,j-1}^{k+\frac{1}{2}}
- h^2 S_{i,j}
\right)
\end{displaymath} (17)


\begin{displaymath}u_{i,j}^{k+1} = \omega u_{i,j}^{k+\frac{1}{2}}
+ ( 1-\omega) u_{i,j}^k
\end{displaymath} (18)

$\omega$ may change as a function of the iteration number: it is taken small during the first few iterations, increased to a value close to 1 as the iterations proceed, and eventually raised above 1 to ``accelerate'' convergence in the later stages of the iteration process. The choice of the best value of $\omega$ is discussed in ``Numerical Recipes'' in the section on SOR.
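A possible sketch of one SOR sweep combines Eqs. (17) and (18); a fixed omega is used here for simplicity, and the names are again assumptions rather than part of the notes.

def sor_step(u, S, h, omega):
    # One SOR sweep: a Gauss-Seidel half-step, Eq. (17), mixed
    # with the old field through omega, Eq. (18).
    ni, nj = u.shape
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            u_half = 0.25 * (u[i + 1, j] + u[i - 1, j]
                             + u[i, j + 1] + u[i, j - 1]
                             - h**2 * S[i, j])
            u[i, j] = omega * u_half + (1.0 - omega) * u[i, j]
    return u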

 

 

Based on notes by Guillermo Marshall, University of Buenos Aires


Michel Vallieres
2001-05-11