
Numerical stability

In the mathematical subfield of numerical analysis, numerical stability is a desirable property of numerical algorithms. The precise definition of stability depends on the context, but it is related to the accuracy of the algorithm.

Sometimes a single calculation can be achieved in several ways, all of which are algebraically equivalent in terms of ideal real or complex numbers, but in practice yield different results as they have different levels of numerical stability. One of the common tasks of numerical analysis is to try to select algorithms which are robust — that is to say, have good numerical stability.
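
For example (a minimal Python sketch, assuming IEEE double-precision arithmetic; the value of x is an arbitrary choice), the two expressions below are algebraically identical, yet the first subtracts two nearly equal numbers and loses most of its significant digits to cancellation, while the second is numerically stable:

    import math

    x = 1.0e12  # large value where the two forms disagree in floating point

    # Form 1: direct subtraction of two nearly equal square roots.
    # Cancellation wipes out most of the significant digits.
    unstable = math.sqrt(x + 1) - math.sqrt(x)

    # Form 2: algebraically equivalent rewrite that avoids the subtraction.
    # Accurate essentially to machine precision (about 5.0e-07 here).
    stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

    print(unstable, stable)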

Backward stability

Diagram showing the forward error Δy and the backward error Δx, and their relation to the exact solution map f and the numerical solution f*.

Consider the problem being solved by the numerical algorithm as a function f mapping the data x to the solution y. The actual result of the algorithm, say y*, will usually deviate from the exact solution. The main causes of error are round-off error, truncation error and data error. The forward error of the algorithm is the difference between the actual result and the exact solution, i.e. Δy = y* − y. The backward error is the smallest Δx such that f(x + Δx) = y*; in other words, the backward error tells us what problem the algorithm actually solved. The forward and backward error are related by the condition number: the forward error is at most as big in magnitude as the condition number multiplied by the magnitude of the backward error.

The algorithm is said to be backward stable if the backward error is small for all inputs x. Of course, "small" is a relative term and its definition will depend on the context. Often, we want the error to be of the same order as, or perhaps only a few orders of magnitude bigger than, the unit round-off.

In many cases, it is more natural to consider the relative error

    |Δx| / |x|

instead of the absolute error Δx.
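
As an illustration, a minimal Python sketch (using the standard decimal module as a high-precision reference; the choice of 50 digits is arbitrary) can measure both relative errors for the square-root problem f(x) = √x. Since IEEE 754 square root is correctly rounded, both come out on the order of the unit round-off, which is what backward stability asks for:

    import math
    from decimal import Decimal, getcontext

    getcontext().prec = 50              # high-precision reference arithmetic

    x = 2.0
    y_star = math.sqrt(x)               # computed solution y*

    # Forward (relative) error: compare y* with a 50-digit sqrt(2).
    y_exact = Decimal(2).sqrt()
    forward = abs((Decimal(y_star) - y_exact) / y_exact)

    # Backward (relative) error: y* is the exact square root of x + Δx,
    # where x + Δx = (y*)**2, hence Δx = (y*)**2 - x.
    dx = Decimal(y_star) ** 2 - Decimal(x)
    backward = abs(dx / Decimal(x))

    u = 2.0 ** -53                      # unit round-off, IEEE double precision
    print(float(forward), float(backward), u)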

Mixed stability and forward stability

Mixed stability combines the concepts of forward error and backward error.

The usual definition of numerical stability uses a more general concept, called mixed stability, which combines the forward error and the backward error. An algorithm is stable in this sense if it solves a nearby problem approximately, i.e., if there exists a Δx such that both Δx and f(x + Δx) − y* are small. Hence, a backward stable algorithm is always stable.

An algorithm is forward stable if its forward error divided by the condition number of the problem is small. This means that an algorithm is forward stable if it has a forward error of similar magnitude as that of some backward stable algorithm.
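
The distinction is easiest to see on an ill-conditioned problem. In the sketch below (using NumPy, with a 10×10 Hilbert matrix as an arbitrary ill-conditioned test case and the scaled residual as a standard proxy for the normwise backward error), a backward stable linear-system solver produces a backward error near the unit round-off, while the forward error is larger by roughly the condition number of the matrix; the solver is still forward stable in the sense above, even though its forward error is far from the unit round-off:

    import numpy as np

    # Hilbert matrix: a classic ill-conditioned test matrix.
    n = 10
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    x_true = np.ones(n)
    b = A @ x_true                      # right-hand side (formed in floating point)

    x_star = np.linalg.solve(A, b)      # LU factorisation with partial pivoting

    forward = np.linalg.norm(x_star - x_true) / np.linalg.norm(x_true)
    # Scaled residual: a standard proxy for the normwise backward error.
    backward = np.linalg.norm(b - A @ x_star) / (np.linalg.norm(A) * np.linalg.norm(x_star))
    cond = np.linalg.cond(A)

    print(forward, backward, cond)
    # Expect: backward error near the unit round-off, forward error larger
    # by roughly cond(A) (about 1e13 here).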

Stability in numerical differential equations

The above definitions are particularly relevant in situations where truncation errors are not important. In other contexts, for instance when solving differential equations, a different definition of numerical stability is used.

In numerical ordinary differential equations, various concepts of numerical stability exist, for instance A-stability. They are related to some concept of stability in the dynamical systems sense, often Lyapunov stability. It is important to use a stable method when solving a stiff equation.
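
For instance (a minimal Python sketch; λ = 1000 and the step size are arbitrary choices that make the problem stiff), the explicit Euler method, which is not A-stable, blows up on the test equation y′ = −λy at a step size where the A-stable implicit Euler method remains well behaved:

    # Stiff test equation y' = -lam * y, y(0) = 1, integrated up to t = 1.
    lam = 1000.0
    h = 0.01                    # step size; explicit Euler needs h < 2/lam here
    steps = int(round(1.0 / h))

    y_explicit = 1.0            # forward (explicit) Euler iterate
    y_implicit = 1.0            # backward (implicit) Euler iterate
    for _ in range(steps):
        y_explicit = y_explicit + h * (-lam * y_explicit)   # y_{n+1} = (1 - h*lam) * y_n
        y_implicit = y_implicit / (1.0 + h * lam)           # y_{n+1} = y_n / (1 + h*lam)

    # The exact solution exp(-1000) is essentially zero.
    print(y_explicit)           # astronomically large: the explicit method is unstable
    print(y_implicit)           # tiny and positive, like the exact solution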

Yet another definition is used in numerical partial differential equations. An algorithm for solving an evolutionary partial differential equation is stable if the numerical solution at a fixed time remains bounded as the step size goes to zero. The Lax equivalence theorem states that an algorithm converges if it is consistent and stable (in this sense). Stability is sometimes achieved by including numerical diffusion: an artificial diffusion term added to the numerical scheme which ensures that round-off and other errors in the calculation get spread out and do not add up to cause the calculation to "blow up".
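
As an illustration (a Python sketch using NumPy; the grid sizes, CFL number and Gaussian initial profile are arbitrary choices), the first-order upwind scheme for the linear advection equation u_t + a u_x = 0 contains numerical diffusion and stays bounded under the CFL condition, whereas the forward-time centred-space (FTCS) scheme, which lacks such dissipation, is unstable for this equation and blows up:

    import numpy as np

    # Linear advection u_t + a*u_x = 0 on a periodic grid, with a > 0.
    a, nx, nt = 1.0, 100, 400
    dx = 1.0 / nx
    dt = 0.8 * dx / a                      # CFL number 0.8
    x = np.arange(nx) * dx
    u0 = np.exp(-100.0 * (x - 0.5) ** 2)   # smooth initial bump

    u_ftcs = u0.copy()                     # forward-time, centred-space
    u_upwind = u0.copy()                   # forward-time, upwind space (diffusive)
    for _ in range(nt):
        u_ftcs = u_ftcs - a * dt / (2 * dx) * (np.roll(u_ftcs, -1) - np.roll(u_ftcs, 1))
        u_upwind = u_upwind - a * dt / dx * (u_upwind - np.roll(u_upwind, 1))

    print(np.abs(u_ftcs).max())     # grows enormously: FTCS is unstable
    print(np.abs(u_upwind).max())   # stays below 1: numerical diffusion keeps it bounded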
