
Computable Error Bounds for some Numerical Problems

Joan Walsh

1968

The problem of error estimation

A large amount of computer time in University and other research laboratories is spent on the solution of problems in physics and engineering which involve mathematical models of physical situations, expressed in terms of differential or integral equations. Numerical methods of solution have enabled research workers to tackle problems beyond the reach of classical analytical techniques, and have encouraged the use of more complex and accurate models. Two stages of approximation occur in these problems, in the formulation of the mathematical equations, and in the process of numerical solution, and both stages need careful analysis if the results are to be of practical use. The first lies outside the province of the computing establishment, but the second can be very much influenced by the subroutine packages and hardware facilities provided.

In numerical solution, the limiting processes of analysis are replaced by finite algorithms, and the real number field in which analysis works is replaced by the finite set of numbers which are representable in the machine. (Multi-length facilities which are available on Atlas make this set extremely large, but most computation is still done to single-length precision.) Error estimates must take account of the difference between the exact solutions of the discrete and the continuous problems, the errors committed in solving the discrete equations, and the rounding errors due to the finite word-length. The actual process of carrying out the calculation has been greatly simplified since the early days of computers, by the provision of autocodes, floating-point facilities, and the increase in machine size, but the fact that results can be obtained very easily does not necessarily make users more critical about accuracy. When all computation was carried out in fixed-point mode, it was usually obvious if the results were invalidated by loss of significant figures, but with floating-point arithmetic such errors can easily be overlooked.
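The silent loss of significant figures described above can be illustrated with a small modern example (an assumed illustration in Python, not part of the original paper): two algebraically equal expressions behave very differently in floating-point arithmetic, and nothing in the output warns the user.

```python
import math

# (1 - cos(x)) / x^2 tends to 0.5 as x -> 0, but for small x the
# subtraction 1 - cos(x) cancels away every significant figure:
x = 1e-8
naive = (1.0 - math.cos(x)) / x**2        # evaluates to 0.0 - all figures lost
stable = 2.0 * math.sin(x / 2)**2 / x**2  # algebraically equal, no cancellation

print(naive, stable)
```

The naive form returns an exact zero instead of 0.5, with no indication that anything went wrong; exactly this kind of error passed unnoticed once fixed-point working was abandoned.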

So an important problem for the numerical analyst is to find reliable and computable error estimates for each algorithm, which can be included in any subroutines written. This will inevitably lengthen the computation time, but since results of unknown accuracy are not really worth anything, the extra time is well-spent. Error analysis needs much further development, but some useful results have been obtained in recent research, and two particular methods will be described briefly.

Interval analysis

Moore (1966) and his associates have developed the idea of working with intervals instead of with real numbers, and their methods are designed to give at each stage the end-points of an interval within which the exact solution lies. The interval is defined by lower and upper limits which are numbers capable of being represented in the machine. Thus in a three-figure decimal machine the real number 1/3 would be represented by the pair of numbers (0.333, 0.334). We can easily define the basic operations of addition, subtraction, multiplication, and division for these interval numbers. Suppose the pair of real numbers (a,b) represents a number in the interval a ≤ x ≤ b; then, for example, subtraction of another interval number gives

  (a,b) - (c,d) = (a - d, b - c).

For machine computation, expressions like this do not define the operation completely, because of rounding in the arithmetic unit and the floating-point representation of numbers. To get strict bounds, it is clear that the algebraically smaller number must be rounded down, and the other number rounded up. We must also take account of various special effects of floating-point arithmetic; for example (on Atlas), two numbers less than about 10⁻⁶⁰ multiplied together give an exact zero in the machine, and adding say 10⁻³⁰ to a number of order 1 has the same effect as adding zero. The definition of the arithmetic operations has to be extended to cover all such cases.
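The subtraction rule above, together with outward rounding of the end-points, can be sketched as follows. This is a minimal modern illustration in Python (the original Atlas routines were machine code); the class name and the use of `math.nextafter` for directed rounding are assumptions of the sketch, not the original implementation.

```python
import math

class Interval:
    """Interval number (lo, hi) with outward rounding, so that the
    exact result of each operation is contained in the computed pair."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # (a, b) - (c, d) = (a - d, b - c); round the lower end-point
        # down and the upper end-point up to keep the bounds strict.
        lo = math.nextafter(self.lo - other.hi, -math.inf)
        hi = math.nextafter(self.hi - other.lo, math.inf)
        return Interval(lo, hi)

    def __contains__(self, x):
        return self.lo <= x <= self.hi

# 1/3 on a three-figure decimal machine, as in the text:
third = Interval(0.333, 0.334)
ninth = Interval(0.111, 0.112)
diff = third - ninth
assert 1/3 - 1/9 in diff   # the exact value 2/9 lies inside the bounds
```

A full set of routines would extend this to the other three operations and to the special cases (underflow to zero, lost low-order digits in addition) noted above.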

A set of machine-code subroutines has been written for Atlas to carry out the four basic operations in interval arithmetic. These operate on pairs of numbers and give the result as another pair. The routines are available at present for use in Atlas Autocode programs, but a version for Fortran programs will be prepared shortly.

They have been used in conjunction with special input and output routines for tests on various matrix operations. It is not generally satisfactory to take standard methods and re-program them in interval arithmetic, because the bounds obtained in this way are too wide to be useful. It is necessary to design special procedures for interval analysis, and unfortunately these sometimes involve a good deal of algebraic manipulation. It is interesting that Moore found the classical Taylor series method very suitable for initial-value problems, because it has an error term of known form at each step. If we have some overall bounds initially for the dependent variable, we can find strict bounds for the total error. Several procedures of interval analysis are of this type; they require an initial assumption that the solution lies within certain bounds, and then provide a method which leads to improved bounds. The first step of obtaining overall bounds can sometimes be done automatically in the machine by trial and error.
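Why naive re-programming gives bounds that are too wide can be seen in a two-line sketch (a hypothetical modern illustration, not the Atlas routines): interval subtraction treats its operands as independent, so the two occurrences of x in x - x are not recognised as the same quantity.

```python
def isub(a, b):
    # (a, b) - (c, d) = (a - d, b - c); outward rounding omitted for clarity
    return (a[0] - b[1], a[1] - b[0])

x = (0.9, 1.1)
lo, hi = isub(x, x)   # roughly (-0.2, 0.2), not the exact point 0
```

Every standard method contains many such hidden dependencies between intermediate quantities, which is why procedures have to be specially designed, often with considerable algebraic manipulation, before interval analysis gives useful bounds.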

Boundary-value problems

Boundary-value problems in ordinary and partial differential equations are often solved by finite-difference methods. We can usually prove that the approximations converge to the true solution as the mesh-length tends to zero, but it is not easy to estimate the error of the approximate solution obtained on a particular mesh. This is particularly important for elliptic equations, where it is often not feasible to reduce the mesh-size repeatedly, because of storage limitations. A well-known method for estimating or improving the accuracy of a finite-difference approximation is to calculate the higher difference terms in the expressions for the derivatives. If these are not negligible they can be used in a process of difference correction (Fox 1962).
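The difference-correction idea can be sketched numerically as follows; the test function, step size, and variable names are assumptions chosen for illustration, not taken from Fox (1962). The three-point formula for u'' has leading error (h²/12)u'''', and the fourth central difference of the computed values approximates h⁴u'''', so it supplies a computable correction term and, if it is not negligible, a warning.

```python
import math

h = 0.05
u = [math.sin(i * h) for i in range(21)]   # sample a known solution u = sin x
i = 10                                     # interior point x = 0.5

# Three-point approximation to u''(x_i):
d2 = (u[i-1] - 2*u[i] + u[i+1]) / h**2
# Fourth central difference, approximating h^4 u''''(x_i):
d4 = u[i-2] - 4*u[i-1] + 6*u[i] - 4*u[i+1] + u[i+2]
# Difference correction: subtract the estimated leading error term.
corrected = d2 - d4 / (12 * h**2)

exact = -math.sin(i * h)                   # true u'' for u = sin x
```

For this example the correction reduces the error of the three-point formula by several orders of magnitude, using only quantities already available on the mesh.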

However the correction process by itself does not give us error bounds. For certain classes of problems we can make use of monotonic properties to get more information. If the problem

L(u) = f

with given boundary conditions for u, is of monotonic type, the solution can be bounded if we can find functions v and w which satisfy the same boundary conditions and are such that

L(v) ≤ f ≤ L(w)

Then v and w give bounds on the exact solution u. This property has often been used in conjunction with analytical expressions for the solution (Collatz 1960), but this requires a preliminary study of the form of the solution, which cannot be done automatically. It would be useful to apply monotonic properties to get error bounds for approximate solutions obtained by finite differences, because the whole process could be carried out without prior knowledge. Some algorithms have been designed for this problem, using various forms of interpolation to obtain a continuous solution from the discrete finite-difference results. At present some practical tests are being carried out on Atlas with the help of Mr. A. Kemmer, and it seems that the method is feasible for certain ordinary differential equations. But for elliptic equations the process of two-dimensional interpolation is very lengthy, and for practical applications one may have to compromise by getting simpler error bounds which are not completely rigorous.
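The monotone-bounding argument can be checked mechanically for a concrete case. The following worked example is an assumption for illustration (it does not appear in the paper): take L(u) = -u'' on (0, 1) with u(0) = u(1) = 0 and f(x) = π² sin(πx), whose exact solution is u(x) = sin(πx), and try the bounding functions v(x) = 0 and w(x) = (π²/2)x(1-x), which satisfy the same boundary conditions.

```python
import math

def f(x): return math.pi**2 * math.sin(math.pi * x)   # right-hand side
def v(x): return 0.0                                  # trial lower bound, L(v) = 0
def w(x): return (math.pi**2 / 2) * x * (1 - x)       # trial upper bound, L(w) = pi^2
def u(x): return math.sin(math.pi * x)                # exact solution

xs = [i / 200 for i in range(1, 200)]                 # interior mesh points
# L(v) <= f <= L(w) holds on the whole interval ...
assert all(0.0 <= f(x) <= math.pi**2 for x in xs)
# ... so by monotonicity v <= u <= w, which the exact solution confirms:
assert all(v(x) <= u(x) <= w(x) for x in xs)
```

Here v and w were found by inspection; the point of the algorithms described next is to construct such bounding functions automatically from a finite-difference solution, without any preliminary study of the form of u.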

On-line work

Within the next five years most of the large computing laboratories will be provided with on-line facilities, enabling users to run programs in a time-sharing mode from individual consoles. This opens up possibilities for interaction between the user and the machine in problem-solving. In scientific work, the user has to formulate his problem completely, then divide it into stages and design numerical methods for each stage, and finally write the complete program. In a complex problem, it can easily happen that errors arise in intermediate results which are never printed out, and if the user finds a certain consistency in his final answers he is satisfied that all is going well. Computation is carried out so fast that it is difficult to remember that several million operations may have occurred between input and output. With on-line facilities it would not be necessary to decide the whole scheme of the calculation initially; the user could enter his data, and then work in collaboration with the machine by calling appropriate routines successively and examining the results of each stage before deciding on the next. (This is exactly what happens when he is solving a problem analytically.)

This type of work would require a special library of routines, which would give the user all necessary information about the progress of the calculation. For example, loss of significant figures would be indicated; some form of error estimate would be given for standard operations; the routine might even print out suggestions, such as the use of an alternative routine, or double-length working. A scheme of this sort seems more feasible, and could be more useful, than attempts to provide complete programs for complex problems, where the number of possible variations is so great that the machine has to spend much of its time in testing for different cases.

References

Collatz, L. 1960. The Numerical Treatment of Differential Equations. Springer.

Fox, L. (ed.) 1962. Numerical Solution of Ordinary and Partial Differential Equations. Pergamon.

Moore, R. E. 1966. Interval Analysis. Prentice-Hall.
