Numerical Solutions Of Differential Equations
Numerical methods for differential equations are essential tools for finding approximate solutions to the equations that describe physical phenomena in science and engineering. Differential equations involve unknown functions and their derivatives, serving as mathematical models for systems like satellite orbits or mechanical stresses in engines. While some simple differential equations can be solved analytically, most require numerical methods because of their complexity. These methods divide the solution domain into smaller parts, using techniques like the finite element method, which breaks the problem into manageable subregions, or the finite difference method, which calculates values at discrete points.
Problems can be categorized as either initial value problems, where starting conditions are known, or boundary value problems, where conditions are defined along boundaries. Different numerical techniques, such as Runge-Kutta methods and predictor-corrector methods, offer varying strengths and weaknesses in terms of accuracy and computational efficiency. The choice of method often depends on the characteristics of the equation and the specific physical system being modeled. As computers have become integral to this process, numerical solutions have revolutionized fields such as aerospace engineering and environmental modeling, making them indispensable in contemporary scientific research and application.
Type of physical science: Computation
Field of study: Numerical methods
Differential equations are fundamental tools of the physical sciences, engineering, and medicine. Most differential equations must be solved on a computer.


Overview
Differential equations are important in almost all fields of science and engineering.
They are equations that involve an unknown function and its derivatives. The orbit of a satellite around the earth or the stresses on the turbine in a jet engine can be determined by the solution of differential equations.
Some simple differential equations have solutions that are polynomials or basic mathematical functions, such as the sine or logarithm. However, very few differential equations have these "analytical" solutions. Those that do not must be solved by numerical approximation, usually on a computer. Often, even when an analytical solution exists, it is easier or more practical to compute a numerical solution than to find the analytical one.
Differential equations can occur singly or in sets. In such a "coupled system," more than one unknown function and its derivatives occur in more than one equation. These functions often represent interrelated quantities in a physical system. Just as in the case of a set of simultaneous algebraic equations, the number of unknown functions must equal the number of equations in a set of coupled differential equations. In fact, the solution of coupled differential equations involves the solution of simultaneous algebraic equations. The basic principles are the same for a system as for one equation. There are cases, however, in which special techniques work best.
Differential equations can be classified in several ways. In an "ordinary" differential equation, the unknown function depends on only one independent variable; that is, the differential equation contains only ordinary derivatives. A "partial" differential equation involves a function of more than one variable. It contains "partial derivatives" with respect to each variable. An example of a well-known partial differential equation in the physical sciences is the Schrödinger equation of quantum mechanics, in which the solution is a function of the particle's position as well as the time. The numerical solution of partial differential equations is, in general, difficult and is a rich field of research in applied mathematics. Entire books have been devoted to the solution of particular types of partial differential equations.
The "order" of a differential equation is equal to the order of the highest derivative that occurs in the equation. If only first derivatives (either ordinary or partial) occur in a differential equation, then that equation is said to be of the "first order." If it contains second derivatives (and possibly first derivatives) then the equation is a second-order differential equation, and so on. A partial differential equation of the second order or higher can contain terms with derivatives with respect to more than one variable. For example, it might have a term with derivatives with respect to both position and time. Most important differential equations in the physical sciences are of the first or second order.
In a "linear" differential equation, at most the first power of the function and its derivatives occurs. There are not any products of the function and its derivatives. If the differential equation is not linear, then it is "nonlinear." Linear differential equations have the property that, if two functions satisfy the equation, then their sum does so also. This property often simplifies the problem of finding a solution in a physical system.
All these classifications would suggest much complexity; however, a higher-order ordinary differential equation can be replaced by a system of first-order equations. (This is not necessarily true for partial differential equations.) Even though the details differ, the same basic principles apply when solving one equation or system. Even though a nonlinear equation may make an analytical solution impossible, it often does not impede a numerical solution. Thus, in the case of ordinary differential equations, the most important aspect is the solution of a single, first-order differential equation.
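For example, the second-order equation y'' = -y can be replaced by the pair of first-order equations y' = v and v' = -y. A minimal sketch of this reduction in Python follows; the function name and the test equation are assumptions chosen for illustration:

    # The second-order equation y'' = -y rewritten as a first-order
    # system: the state holds (y, v), where v is the first derivative.
    def oscillator(t, state):
        y, v = state
        return (v, -y)   # (y', v') = (v, -y)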
The solution to the differential equation will be found in some region of space. Physically, this may be along a line, over an area in a plane, or in a volume. In an "initial value problem," the values of the unknowns at one point are fixed at the start, and later values are calculated from this initial value. An example of this type of problem is the flight of a rocket: it starts from rest, then follows some trajectory. In a "boundary value problem," values of the unknowns are fixed along the boundaries of the region, and the solution is then found in the interior of the region. An example of this is a drumhead, which is fixed at the edges and responds to impulses in the middle.
There are two basic approaches to the numerical solution of differential equations. One is the finite element method. The region in which a solution is desired is divided into a number of subregions, called elements. A function is defined on each element, and the entire solution is taken to be a weighted sum of these basic functions. The weights are then computed by use of the original differential equation. The finite element method is most commonly used in the solution of partial differential equations. The other approach is known as the finite difference method, in which a number of points are chosen in the region of interest and the solution is calculated at these "mesh points." There are many ways to obtain the solution, but they are all based on a Taylor series expansion of the unknown functions.
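As a concrete illustration of the finite difference idea, the following Python sketch implements the simplest Taylor-series-based scheme, Euler's method (discussed further in the Context section). The test equation y' = -y and all names are assumptions made for the example:

    # A minimal sketch of a finite difference solver: Euler's method,
    # which keeps only the first-order term of the Taylor series.
    def euler(f, t0, y0, h, n_steps):
        """Advance y' = f(t, y) from (t0, y0) through n_steps steps of size h."""
        t, y = t0, y0
        for _ in range(n_steps):
            y = y + h * f(t, y)   # first-order Taylor step
            t = t + h
        return t, y

    # Example: y' = -y, y(0) = 1; the exact solution at t = 1 is 1/e.
    t_end, y_end = euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)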
Applications
To solve an initial value problem for an ordinary differential equation by a finite difference method, one may assume that the problem has been reduced, at worst, to a system of first-order ordinary differential equations. There are three items that must be considered. The first is the initial conditions of the problem. The second is the mesh that is laid out on the region of the solution. Finally, there must be some way to calculate values at points other than the starting point.
This ordinary differential equation will be solved on an interval, along a path running from some starting point to an end point. Since ordinary differential equations are being considered, there is only one independent variable. For concreteness, refer to this variable as the time. To begin the solution, the value of each unknown function must be specified at the initial time. The number of initial values must equal the order of the system--one value must be given for each equation in the first-order system.
The solution interval is divided by the mesh points into a number of subintervals, which are almost always chosen to be of equal length; this common length is referred to as the "step size."
Solutions to the equation are then calculated at the end points of each of these steps. These computed values, however, are not independent. The solution at each point depends on the solutions at the preceding points. Because this process runs from one end of the interval to the other, any error can be magnified. This type of error--inherent in the solution process--is referred to as the truncation error. A smaller step size will usually reduce the truncation error, giving more accurate results, but it will take longer because the solution must be found at more points.
Also, since computer arithmetic is, by its nature, only approximate, calculation at more points can allow this arithmetic (or roundoff) error to accumulate. These trade-offs must be considered when solving a differential equation numerically.
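The trade-off can be seen directly by reusing the euler sketch above with successively smaller step sizes. For this first-order method, the printed truncation error shrinks roughly in proportion to the step size, while the amount of arithmetic grows (the test equation is again y' = -y):

    import math

    # Halving the step size roughly halves the truncation error of a
    # first-order method, at the cost of twice as many evaluations.
    for h in (0.1, 0.05, 0.025):
        n = round(1.0 / h)
        _, y = euler(lambda t, y: -y, 0.0, 1.0, h, n)
        print(h, abs(y - math.exp(-1.0)))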
The most commonly used types of finite difference methods of solution are known as Runge-Kutta methods, predictor-corrector methods, and Richardson extrapolation. Each has its own strengths and weaknesses. All these methods use formulas derived from a Taylor series expansion of the unknown function. The number of retained terms in this expansion determines the order of the method, which should not be confused with the order of the differential equation.
Higher-order methods usually do not need as small a step size as lower-order methods do. On the other hand, the higher the order, the more work must be done at each step.
The Runge-Kutta method replaces evaluation of the second and higher derivatives in the Taylor series with additional evaluations of the first derivative. The first derivative, after all, is given by the original differential equation itself. For example, the fourth-order Runge-Kutta method replaces the second through fourth derivatives with three further evaluations of the differential equation, for a total of four evaluations per step. Some of these evaluations are done at points that fall within the step. Evaluating the differential equation can be time-consuming if the expression is complicated; this is the biggest disadvantage of the Runge-Kutta method. Its advantages are that only one value is needed to start the solution and that the step size can be changed during the calculation, providing control over both roundoff and truncation error. Also, there are few problems for which Runge-Kutta methods do not work; not all other methods are this robust.
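A sketch of a single step of the classical fourth-order Runge-Kutta formula in Python follows (names are illustrative); note the four evaluations of f, two of them at the midpoint of the step:

    # One step of the classical fourth-order Runge-Kutta method.
    def rk4_step(f, t, y, h):
        k1 = f(t, y)                        # start of the step
        k2 = f(t + h / 2, y + h / 2 * k1)   # midpoint, using k1
        k3 = f(t + h / 2, y + h / 2 * k2)   # midpoint, using k2
        k4 = f(t + h, y + h * k3)           # end of the step
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)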
Predictor-corrector methods use a two-step process to solve the differential equation at each point. Two different formulas, each derived from the Taylor series, are used. The predictor calculates a first approximation to the solution at the point; the corrector is then applied to improve the estimate. Repeated application of the corrector formula can greatly reduce the truncation error in the solution, although this refinement, while improving accuracy, also increases the computational time needed. The biggest advantage of predictor-corrector methods is that the error in the solution can be controlled at each step. There are two major drawbacks. First, they require several starting values in addition to the initial conditions; these values are usually determined directly from the Taylor series, although they might be obtained from a few Runge-Kutta steps. Second, it is difficult to change the step size in the middle of a computation.
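As a schematic illustration of the predict-then-correct loop, the following Python sketch pairs an Euler predictor with a trapezoidal corrector. This simple one-step pairing is chosen for brevity only; practical predictor-corrector formulas use values at several previous mesh points:

    # Predictor-corrector sketch: Euler predictor, trapezoidal corrector.
    def pc_step(f, t, y, h, corrector_passes=2):
        y_new = y + h * f(t, y)   # predictor: first approximation
        for _ in range(corrector_passes):
            # corrector: refine the estimate with the trapezoidal rule
            y_new = y + h / 2 * (f(t, y) + f(t + h, y_new))
        return y_new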
Richardson extrapolation is perhaps the most powerful of the three. The other methods use small steps between the beginning and end points; while one may not be interested in the solutions at these intermediate points, they must be computed anyway. Richardson extrapolation, however, can be used with points some distance apart. It also can be used with a lower-order formula from the Taylor series, which would not work well with the other two methods.
Richardson extrapolation works by making several evaluations at the desired point, each with a smaller step size than the previous one. From these values, an estimate can be made of what the solution would be if an infinite number of infinitesimally small steps were taken. The error control in Richardson extrapolation is as good as in predictor-corrector methods, and solutions can be obtained at irregularly spaced points. Yet there are a number of problems that Runge-Kutta methods will solve well but for which Richardson extrapolation, because of the behavior of the underlying physical system being modeled, will fail.
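A sketch of the idea, reusing the euler function above: the same interval is covered with n steps and then 2n steps, and the two answers are combined so that the leading error term cancels. For a first-order base method the combination is 2 * fine - coarse; real implementations use a whole sequence of step sizes and extrapolate toward a step size of zero:

    # Richardson extrapolation over one interval, with Euler's method
    # as the low-order base formula.
    def richardson(f, t0, y0, t1, n):
        h = (t1 - t0) / n
        _, coarse = euler(f, t0, y0, h, n)          # step size h
        _, fine = euler(f, t0, y0, h / 2, 2 * n)    # step size h/2
        return 2 * fine - coarse   # leading O(h) error term cancels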
Although the above approaches can be used to solve a variety of ordinary differential equations, they do not always work. For certain problems, special solution methods exist.
The first step in the numerical solution of a differential equation is to look carefully at the physical system that gives rise to the equation. This examination guides the choice of a method appropriate to the problem; otherwise, the solution may bear no relation to the physical behavior of the system. (If the problem is presented as a purely mathematical exercise, however, then one cannot draw insights from a physical system.) An example is the Schrödinger equation of quantum mechanics, which contains a term that depends on the potential energy. If the potential energy is constant, or varying slowly, then the solution presents no problems, as it can be expected to show the same slow variation. If, however, there is a jump in the potential, then the problem must be solved in two different regions, and one must ensure that the two solutions agree at the point where the potential energy jumps. This matching is done by starting in the first region with the initial condition, computing the solution up to the boundary point, and then using the value at the boundary point as the initial condition for the solution in the second region. The basic rule is that a discontinuity in a physical system can lead to numerical difficulties when one tries to solve a problem arising from that system. If the solution is varying slowly, then almost any method will work, and Richardson extrapolation may give the best results. If the solution is varying rapidly, however, then it may be necessary to use Runge-Kutta methods.
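A sketch of this matching procedure, again reusing the euler function above; here f1, f2, and the location of the jump are hypothetical placeholders for the equation in each region:

    # Integrate up to the discontinuity, then restart in the second
    # region using the value at the jump as its initial condition.
    def solve_across_jump(f1, f2, t0, y0, t_jump, t_end, n):
        t, y = euler(f1, t0, y0, (t_jump - t0) / n, n)   # first region
        return euler(f2, t, y, (t_end - t_jump) / n, n)  # second region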
It is possible to solve a differential equation numerically using pencil and paper or a calculator, but this approach is realistic only for simple problems in which the solutions are needed at only a few points. In practice, one will use a computer. It is seldom necessary to derive formulas from scratch: software libraries for solving differential equations are available for large and small computers, and numerous texts and reference books contain formulas or even algorithms that can easily be implemented on the computer. Whenever possible, it is simplest to do things in a cookbook fashion.
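As one example of this cookbook approach, a library routine such as SciPy's solve_ivp (assuming SciPy is available) solves the illustrative equation y' = -y in a few lines, with an adaptive Runge-Kutta method handling the step-size control automatically:

    # Using a library routine instead of hand-coded formulas.
    from scipy.integrate import solve_ivp

    sol = solve_ivp(lambda t, y: -y, (0.0, 1.0), [1.0], method="RK45")
    print(sol.t[-1], sol.y[0, -1])   # time and solution at the end point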
Context
The roots of the numerical solution of differential equations can be traced to the 1700's, when the Swiss mathematician Leonhard Euler devised what is now called Euler's method, the simplest of the finite difference methods. Progress in the field was slow until World War I, when problems in ballistics were solved by numerical methods.
In the 1920's, efforts were made to do large-scale meteorological calculations. These calculations required solving a large number of differential equations on a grid covering the world. One proposal called for a room full of "computers"--people who used mechanical calculators, slide rules, or pencil and paper to perform the calculations. This approach was ahead of its time; realistic meteorological computing was not done to any large extent until the advent of supercomputers in the 1970's and 1980's.
A major application of numerical methods, particularly the solution of differential equations, was in the Manhattan Project during World War II. The engineering problems encountered during work on the atomic bomb were too difficult to solve by analytical means or by building models. The computational techniques learned during this time served as the basis for the numerical calculation of trajectories during the Apollo lunar program.
Engineering applications remain one of the biggest areas for the numerical solution of differential equations. The design of aircraft and spacecraft, automobiles, and even artificial hips depends on the ability to solve, as economically as possible, models containing tens or hundreds of thousands of elements. Computation has all but replaced wind-tunnel testing in the design of aircraft. Without adequate techniques to solve such huge systems of differential equations, it would be necessary to do all design work by building and testing physical models. The numerical solution of differential equations thus plays an important role in many consumer products.
Principal terms
DIFFERENTIAL EQUATION: an equation that involves a function and its derivatives
PARTIAL DERIVATIVE: the derivative of a function of several variables with respect to only one of the variables, the others being held constant
ROUNDOFF ERROR: an error in a calculation resulting from the inherent imprecision of arithmetic on a computer
TAYLOR SERIES: a representation of a function by an infinite sum whose coefficients involve the derivatives of the function evaluated at a given point
TRUNCATION ERROR: an error in a calculation resulting from using only some of the terms in the Taylor series
Bibliography
Ascher, Uri M., Robert M. M. Mattheij, and Robert D. Russell. NUMERICAL SOLUTION OF BOUNDARY VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS. Englewood Cliffs, N.J.: Prentice-Hall, 1988. An advanced but very thorough treatment of boundary value problems. Goes into great detail on both theory and numerical methods. Focuses primarily on finite difference methods and has an extensive bibliography of the literature on boundary value problems.
Kahaner, David, Cleve Moler, and Stephen Nash. NUMERICAL METHODS AND SOFTWARE. Englewood Cliffs, N.J.: Prentice-Hall, 1989. Chapter 8 is devoted to ordinary differential equations. Begins with a very good introduction to the subject and contains a number of examples of the different methods. Discusses the stability of solutions and includes several computer subroutines (in FORTRAN) for solving ordinary differential equations. Many problems are given at the end of the chapter.
Kellison, Stephen G. FUNDAMENTALS OF NUMERICAL ANALYSIS. Homewood, Ill.: Richard D. Irwin, 1975. An undergraduate text that has a concise, but meaty, treatment of ordinary differential equations. One of its strongest features is its treatment of roundoff and truncation errors.
Milne, William Edmund. NUMERICAL SOLUTION OF DIFFERENTIAL EQUATIONS. New York: Dover, 1970. A classic in the field. Considers both theory and numerical techniques for the solution of ordinary and partial differential equations. Initial and boundary value problems of ordinary differential equations are discussed. Even though the details of some of the numerical methods are somewhat dated in an age of computational power, it remains a valuable resource.
Press, William H., Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. NUMERICAL RECIPES. New York: Cambridge University Press, 1986.
Press, William H., Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. NUMERICAL RECIPES IN C. New York: Cambridge University Press, 1988.
Press, William H., Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. NUMERICAL RECIPES IN PASCAL. New York: Cambridge University Press, 1989. One of these three books should be read by anyone who is interested in doing numerical analysis. Chapters 15-17 deal with ordinary and partial differential equations. The authors provide computer subroutines for a variety of problems and just enough theory to back them up.