Partial Differential Equations

  • Type of physical science: Mathematical methods
  • Field of study: Calculus

Partial differential equations describe the behavior of a physical system in space and time. A partial differential equation contains partial derivatives of some unknown function, such as a velocity or a displacement, with respect to position and time. The goal is to determine this unknown function from the given partial differential equation and so obtain a description of the physical system.


Overview

The study of partial differential equations arose in the eighteenth century as an outgrowth of calculus, which came into being in the late seventeenth century. Once the problems for which calculus had been invented were resolved, scientists attempted to use calculus-based techniques to model natural phenomena as humans perceive them. An atomic theory was not yet available, and solids, fluids, and gases were thought of as continuous media. A piano string, for example, looks like a continuous line; neither the individual molecules nor their connections are visible. Instead, one can see only a continuous wire. Similarly, one cannot perceive the individual molecules making up the air; instead, the gas is experienced as being everywhere. The different rooms of a house may have different temperatures, and the temperature may change with time. Thus, one thinks of the temperature as being defined everywhere in the house, and it is a function of both space and time. In any fluid or solid body, the temperature in the medium can vary with both the position and the time.

Most phenomena that people experience vary with position and time. As a result, if one wishes to model a phenomenon as one perceives it, one must formulate the equations in such a way that the sought-after quantity depends on both space and time or, if the phenomenon does not change in time, on position alone. When one incorporates local changes into the model, one is led to partial differential equations.

One of the first partial differential equations to be investigated extensively was the equation describing the motions of a vibrating string. To see how partial differential equations are used to approach a problem, let us consider the piano string in more detail. It is tightly stretched, its ends are held fixed, and it is brought into vibration by striking it with a hammer.

The vibrations of the string excite motions in the air, which are experienced as sound. The vibrations of the string themselves are quite small, so one must observe the phenomenon carefully. Initially, only the region where the string was struck will vibrate, but the disturbance immediately propagates in both directions. Each particle on the string moves up and down in the vertical direction, while the disturbance propagates horizontally. Waves in which the vibration takes place perpendicular to the direction of propagation are called transverse waves.

To model this phenomenon, assume that for each time, t, the shape of the string, describing the displacement above or below the equilibrium position, is given by a function, u. Lay out an x-axis along the string. Then, for each t, the particle originally located at x will be displaced by u(x,t), so that u(x,t) represents a snapshot of the string at the time t. The motion is described by Newton's second law, which says that the mass times the acceleration equals the sum of the forces. The acceleration is the time rate of change of the velocity, which, in turn, is the time rate of change of the displacement. From calculus, it is known that the acceleration is the second derivative of the displacement with respect to time, ∂²u(x,t)/∂t²; the notation indicates that the derivatives are calculated with respect to t with x held fixed. Let ρ denote the linear mass density (mass per unit length) of the string. Then, the mass times the acceleration per unit length is ρ∂²u(x,t)/∂t².

Next, one must know the forces that act on the string. Let us neglect the weight of the string, which is small compared to the tension, and the air resistance, which also does not play a great role. The forces in the string are then generated by internal stresses, which are a kind of internal pressure. According to Hooke's law, the stress is proportional to the relative stretching of the string, that is, to how far a small piece of the string is stretched compared to its original length. The quantity giving the local stretching is the partial derivative of u with respect to x with t held fixed, ∂u(x,t)/∂x, and it is called the strain. The constant of proportionality relating stress to strain is the tension τ in the string, so Hooke's law states that the stress is given by τ∂u(x,t)/∂x. The force at each point on the string arises from the interaction of adjacent particles. Since the force is determined by the local stress and these particles are very close together, the net force per unit length is the rate of change of the stress over a short distance, that is, τ∂²u(x,t)/∂x². If one equates this expression to the expression for the mass times the acceleration and divides by ρ, one obtains ∂²u(x,t)/∂t² = c²∂²u(x,t)/∂x², where c² = τ/ρ. This equation is the one-dimensional wave equation; it is called one-dimensional because the string is a one-dimensional object in space. The number c has the units of velocity, and it gives the speed with which the disturbance spreads out.

The solutions to this partial differential equation are built up by adding the effects of simple, basic solutions. Since each point on the string executes simple harmonic motion, it is not surprising that the trigonometric functions play a role, and these basic solutions have the form An(x) sin ωnt, where n can be any natural number.
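The derivation above can be turned into a simple numerical experiment. The following Python sketch advances the one-dimensional wave equation with fixed ends using the standard explicit finite-difference scheme; the grid size, time step, and initial "hammer strike" profile are assumptions chosen only to make the example run, not values taken from the article.

```python
import numpy as np

# Illustrative finite-difference solution of u_tt = c^2 u_xx on a string of
# length L with fixed ends; all numerical values are assumed for the example.
L, c = 1.0, 1.0          # string length and wave speed (assumed)
nx, nt = 201, 400        # grid points in space and number of time steps
dx = L / (nx - 1)
dt = 0.9 * dx / c        # time step chosen so the Courant number c*dt/dx < 1
C2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, nx)
# Initial "hammer strike": a small localized bump near x = 0.3 (assumed shape).
u_prev = 0.01 * np.exp(-((x - 0.3) / 0.05) ** 2)
u = u_prev.copy()        # zero initial velocity: first two time levels equal

for _ in range(nt):
    u_next = np.empty_like(u)
    # Central differences in space and time at the interior points.
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + C2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0   # fixed ends: u(0,t) = u(L,t) = 0
    u_prev, u = u, u_next

print("max displacement after %d steps: %.4f" % (nt, u.max()))
```

Because the scheme is explicit, the time step must be small enough that the Courant number cΔt/Δx stays below 1; otherwise the computed displacement grows without bound.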

The ωn's are the fundamental frequencies of the vibrating string. Let L denote the length of the string. Then these frequencies are given by ωn = (nπc)/L = (nπ/L)√(τ/ρ). The lowest frequency, ω1 = (π/L)√(τ/ρ), is called the fundamental tone. Note that it can be lowered by increasing the length L or the linear density ρ. In piano strings, the large linear density is achieved by using heavier strings for the bass notes; the bass notes also have longer strings. The notes in the treble clef have strings that are shorter, thinner, and tighter. To achieve precisely the right pitch, the piano tuner adjusts the tension τ in the string. The frequencies ωn above the fundamental frequency give the overtones. The ratio of the frequency of an overtone to the fundamental frequency is ωn/ω1 = n, a natural number, a fact known to the ancient Greeks. The functions An(x) give the amplitude of each vibrating particle on the string. These amplitudes decrease with increasing n. The total vibration that one hears is given by the sum

u(x,t) = A1(x) sin ω1t + A2(x) sin ω2t + A3(x) sin ω3t + . . . . The amplitudes of the vibrations after the first three or four overtones are so small that they are no longer heard. The amplitudes themselves have characteristic shapes associated with them, which are called the modes of the vibration. How fast the amplitudes die out determines the quality of the sound. If the hammer is hard, they do not die out so fast, and higher overtones are heard. These higher overtones are not perceived to be pleasant, and the resulting sound has a tinny quality, like that of a honky-tonk piano. Concert pianos have padded hammers, so the fundamental tone is heard plainly and the overtones after about the third are no longer heard; the resulting sound is perceived to be rich and mellow.
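The frequency formula and the mode sum lend themselves to direct computation. The short Python sketch below tabulates the first few frequencies ωn = (nπ/L)√(τ/ρ), converts them to hertz for printing, and evaluates a truncated version of the sum above; the string length, tension, linear density, and the assumed 1/n² decay of the amplitudes An are illustrative guesses, not data from the article.

```python
import numpy as np

# Hypothetical string parameters (not from the article): length, tension,
# and linear mass density.
L, tau, rho = 0.65, 600.0, 0.006      # m, N, kg/m
c = np.sqrt(tau / rho)                # wave speed

# Fundamental frequency and first few overtones, omega_n = n*pi*c/L.
omegas = np.array([n * np.pi * c / L for n in range(1, 5)])
print("frequencies (Hz):", omegas / (2 * np.pi))

# Displacement as a truncated mode sum; the amplitudes A_n are assumed to
# decay like 1/n^2 purely for illustration.
def u(x, t, n_modes=4):
    total = 0.0
    for n in range(1, n_modes + 1):
        A_n = np.sin(n * np.pi * x / L) / n**2
        total += A_n * np.sin(n * np.pi * c * t / L)
    return total

print("u(L/2, 0.001) = %.5f" % u(L / 2, 0.001))
```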

A violin string satisfies a similar equation, except that its vibrations are caused by an additional outside force, the bowing. The equation then has the form ∂²u(x,t)/∂t² = c²∂²u(x,t)/∂x² + f(x,t), where f(x,t) describes the bowing. This equation is called a forced wave equation because of the exterior force present; the first equation is sometimes referred to as the unforced wave equation. All stringed musical instruments satisfy one of these two equations.

In fact, all classical musical instruments that are capable of playing a tune are one-dimensional, and the vibrations are distinguished by what is vibrating. In a flute, for example, a column of air is vibrating, and the length of the air column is controlled by the fingering. Electronic instruments, such as synthesizers, have been invented to simulate these vibrations.

It is important to notice that a single equation governs so many different kinds of phenomena. In fact, the theory of partial differential equations is one of the great unifiers in science: Extremely varied phenomena often satisfy the same equation. The wave equation above, for example, is not restricted to the theory of musical instruments. The current in a long, insulated electrical wire (for example, a long power line) also satisfies the unforced wave equation. If there is a resistance to the flow of current in the wire, the current, i, at the point, x, and the time, t, satisfies the partial differential equation ∂²i(x,t)/∂t² + a∂i(x,t)/∂t = c²∂²i(x,t)/∂x², where the constant a depends on the resistance per unit length in the wire. The term a∂i(x,t)/∂t describes how current is lost in overcoming the resistance. In this context, the equation is called the telegrapher's equation. The same equation arises in treating the damped vibrations of a string, and then it is called the damped wave equation. Because the vibration of strings and the flow of electric currents in a wire satisfy similar partial differential equations, knowledge of the solution properties in one field carries over immediately to knowledge of the solution properties in the other.

Partial differential equations also arise in treating temperature problems. Let T represent the temperature. As was noted at the beginning of this article, the temperature depends on both the position and the time. Heat, a form of energy, flows from high temperatures to low temperatures. It has been found empirically that the heat flow is proportional to the negative of the slope of the temperature curve. On the other hand, the thermal energy stored in a medium depends on the temperature, and the rate at which this energy changes must be balanced by the energy flowing into or out of the system. These considerations in one spatial dimension yield a partial differential equation of the form ∂T/∂t = a(∂²T/∂x²), where the coefficient a is called the thermal diffusivity. This equation is called the heat equation, and solutions to it tell how the temperature will be distributed throughout the system as time progresses.
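A minimal numerical illustration of the heat equation, for a hypothetical rod with its ends held at fixed temperatures, is sketched below in Python. The diffusivity, grid, and boundary temperatures are invented for the example; the scheme is the standard forward-in-time, centered-in-space approximation, which is stable only when aΔt/Δx² ≤ 1/2.

```python
import numpy as np

# Explicit scheme for the heat equation T_t = a T_xx on a rod whose ends are
# held at fixed temperatures; all numerical values are assumptions made only
# so that the sketch runs.
a = 1e-4                  # thermal diffusivity (m^2/s), assumed
length, nx = 1.0, 51
dx = length / (nx - 1)
dt = 0.4 * dx**2 / a      # stability requires a*dt/dx^2 <= 1/2

T = np.full(nx, 20.0)     # rod initially at a uniform 20 degrees
T[0], T[-1] = 100.0, 0.0  # hot and cold ends held fixed

for _ in range(2000):
    # Forward Euler in time, central difference in space (interior points).
    T[1:-1] += a * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print("temperature at the midpoint:", round(T[nx // 2], 2))
```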

There are other systems in which a quantity flows from regions of high concentration to regions of low concentration; water and oil underground, for example, flow from high pressure to low pressure.

Each of these empirical laws has a name attached to it, and they all have the same form. Thus, if q represents a flux and k some constant, the laws in one spatial dimension have the form q = -k(∂T/∂x) (T = temperature, Fourier's law); q = -k(∂C/∂x) (C = concentration, Fick's law); q = -k(∂p/∂x) (p = pressure, Darcy's law). Each of these phenomena is governed by a heat equation. The interpretation of the results varies with the field, but the analysis is common to all.
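The common structure of these laws can be made explicit with a short derivation sketch. Combining any one of the flux laws with a one-dimensional conservation (balance) law yields a diffusion equation; the LaTeX fragment below records this standard textbook step, with any storage coefficient absorbed into the constant k for simplicity. Here C stands for the transported quantity (temperature, concentration, or pressure).

```latex
% Sketch: a flux law plus a one-dimensional balance law gives a diffusion
% equation.  The storage coefficient is absorbed into k for simplicity.
\[
\frac{\partial C}{\partial t} = -\frac{\partial q}{\partial x},
\qquad
q = -k\,\frac{\partial C}{\partial x}
\;\;\Longrightarrow\;\;
\frac{\partial C}{\partial t} = k\,\frac{\partial^{2} C}{\partial x^{2}} .
\]
```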

Not all phenomena are time-dependent. Time-independent, or steady-state, phenomena are also modeled using partial differential equations. Such problems often lead to an equation for a potential, as in the case of electrostatics. The resulting equation is called the potential equation.

If u is the potential for a time-independent electrostatic field when no currents are present, it will satisfy the potential equation, a partial differential equation, which in two dimensions has the form ∂²u(x,y)/∂x² + ∂²u(x,y)/∂y² = 0.
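The averaging character of solutions to the potential equation suggests a simple iterative solver. The Python sketch below applies Jacobi iteration on a square grid, repeatedly replacing each interior value by the average of its four neighbors; the grid size, boundary values, and iteration count are arbitrary choices made for illustration.

```python
import numpy as np

# Jacobi iteration for the potential equation u_xx + u_yy = 0 on a unit
# square; the boundary values below are assumptions chosen for illustration.
n = 51
u = np.zeros((n, n))
u[0, :] = 1.0            # potential held at 1 on one edge, 0 on the others

for _ in range(5000):
    # Each interior value becomes the average of its four neighbors, the
    # discrete analogue of the mean-value property of harmonic functions.
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])

print("potential at the center of the square:", round(u[n // 2, n // 2], 4))
```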

The partial differential equations described above are all based on a macroscopic view of nature. They give a description of nature as human beings experience it. At the end of the nineteenth century and the beginning of the twentieth, a rigorous atomic theory began to evolve. At first, one might expect that one should now set up a differential equation for the motion of every particle and solve the resulting system; however, for only one mole of a substance, that would result in 6.02 × 10²³ equations, an impossible task for even the largest computers. Thus, researchers were forced to look for average behavior on the microscopic level; the deterministic approach characterizing macroscopic physics had to be modified, and concepts from probability and statistics were introduced. Albert Einstein proposed a probabilistic model for dealing with Brownian motion. Remarkably, the probability function satisfies a heat equation. The Heisenberg uncertainty principle forced physicists to give up the idea of deterministic equations of motion for a localized particle such as the electron and to replace it with a partial differential equation that gives the probability that an atomic particle will be located at some place in space and time. The resulting equation is called the Schrödinger wave equation. It also has characteristic constants, called eigenvalues, associated with it, which are reminiscent of the fundamental frequencies for the vibrating string. These eigenvalues give, for example, the energy levels of an electron in an atom.

Partial differential equations are playing an increasingly large role in modeling technical problems, and there are now many different methods available for solving them. One of the most popular methods is based on finding characteristic modes together with the corresponding eigenvalues. Other methods are based on transforming the problem into an equivalent problem that can be solved easily; if the method involves integrals, it is called the method of integral transforms. As science and technology progress, however, the mathematical problems to be solved increase in complexity and difficulty, and not all problems of interest can be solved explicitly. Many approximation methods have been proposed. These approximation methods lead to efficient computer implementations, but even the computer methods are not without their difficulties: Errors in the approximation arise, and it is often difficult to estimate these errors; when the problems become too large, they cannot be run on the computer at all.
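The eigenvalue method mentioned above can be illustrated on the heat equation for a rod with its ends held at temperature zero. The LaTeX fragment below is a standard separation-of-variables calculation, included here as an illustration rather than as a quotation from the article.

```latex
% Eigenfunction (separation-of-variables) method applied to the heat equation
% T_t = a T_xx on 0 < x < L with T(0,t) = T(L,t) = 0.
\[
T(x,t) = X(x)\,G(t)
\;\Longrightarrow\;
\frac{G'(t)}{a\,G(t)} = \frac{X''(x)}{X(x)} = -\lambda,
\qquad X(0) = X(L) = 0,
\]
\[
\lambda_n = \left(\frac{n\pi}{L}\right)^{2},\qquad
X_n(x) = \sin\frac{n\pi x}{L},\qquad
T(x,t) = \sum_{n=1}^{\infty} b_n\, e^{-a\lambda_n t}\,\sin\frac{n\pi x}{L},
\]
% where the b_n are the Fourier sine coefficients of the initial temperature.
```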

Because of its importance, the field of partial differential equations is growing rapidly, and the scope of its applications is broadening daily. New results of both a theoretical and a numerical nature are being obtained, and the field is the subject of much active research.

Applications

Partial differential equations arise in nearly every field of science and technology. The first applications arose in the field of mechanics, and some of the deepest and most difficult unsolved problems deal with partial differential equations in that area.

The central equations in the field of fluid mechanics are the Navier-Stokes equations.

This system is nonlinear, and despite intensive efforts, a satisfactory theoretical and numerical treatment is not yet available. If the fluid is effectively nonviscous, as a gas is, these same equations with the viscosity set to zero are used to describe the motion of air around airplanes. These equations can, under certain circumstances, have solutions exhibiting very unpleasant characteristics, which in everyday life are manifested as shocks. When the motion of water is slow, it can be treated as an incompressible, irrotational fluid. In that case, a velocity potential can be introduced, which satisfies the potential equation. This equation is used for modeling flows in bays and estuaries as well as in the open sea, and it forms the basis for the computation of wakes behind ships. It is also used in calculating water-structure interactions and in sediment transport problems. In the years to come, global circulation models for the oceans and global atmospheric models will be perfected, as will models treating the interaction between the atmosphere and the ocean in the presence of varying temperatures. With such models, it will be possible to make better weather forecasts and better predict the effects of global warming.

Modeling the flow of fluids underground also leads to interesting partial differential equations. These equations are used in reservoir modeling, where predictions concerning the long-term performance are made based on the solutions to the underground flow equations.

Based on these predictions, production forecasts are made and the profitability of the field is determined. Reservoir modeling is also important in determining the availability of groundwater supplies and the exploitation of aquifers. This knowledge allows planners to make decisions on how the water supplies can be divided to meet agricultural and urban demands.

These same techniques have been applied to modeling the transport of toxic wastes underground. These problems are increasing in importance and can lead to acrimonious political debates. The decision about where to locate the nation's repository for high-level nuclear wastes depends to a large extent on the outcome of studies regarding the behavior of the groundwater in and around the proposed facility.

Partial differential equations are used to model solid bodies when the bodies are not completely rigid. One common category of such bodies is elastic materials. Such materials have the property that they give a little when pressure is applied to them and then spring back to their original shape, provided they have not been deformed too much, or "bent out of shape," as it is expressed colloquially. The stresses inside such materials depend on the strains, that is, the relative displacements of the small particles as they move within their lattice structure. The elastic equations that result are a system of wave equations. They can even be applied to modeling the earth (away from the liquid molten core). One of their applications is to the study of earthquakes. The analysis of the elastic equations predicts that there will be two waves propagating out from the region where the earthquake took place, a primary, or P, wave and a secondary, or S, wave that follows the P wave; these waves can indeed be identified in seismic records. These same elastic equations also form the basis for seismic prospecting, as in the search for oil. Other elastic materials of interest are beams, plates, and shells. The treatment of these problems leads to partial differential equations with fourth-order derivatives in the spatial variables.

Elastic solids are not the only types of solids, although they are the most widely investigated. Plastics are common, and there are solids that do not spring back immediately when deformed but instead return to their undeformed state by a mechanism that is reminiscent of both an elastic and a viscous material. Such materials are said to be visco-elastic. One example of a visco-elastic material is flakeboard, a composite material pressed under conditions of high heat; the resulting board is used in construction.

Mechanical problems are not the only source of partial differential equations. They also form the backbone of the treatment of electromagnetic phenomena. Maxwell's equations are a system of partial differential equations that must be solved to determine electric and magnetic fields. These equations form the basis for the treatment of radio and television waves.

Partial differential equations also arise in the study of nuclear fission and fusion. The fission equation is an equation like the heat equation, but it also contains a source term depending on the number of neutrons present. In attempting to solve this problem, one is again led to an eigenvalue problem, and it is the size of the first eigenvalue that determines whether a given reaction will be an uncontrolled chain reaction (as in a nuclear bomb), a sustained reaction (as in a nuclear power plant), or one that simply dies out. The equations for the study of a plasma in the case of nuclear fusion are a system of partial differential equations known as the temperature-dependent electro-magneto-hydrodynamic equations. These equations have not yet been adequately treated, and this lack of treatment represents a major stumbling block in the development of power obtained from nuclear fusion.

Nuclear physics is another source of partial differential equations; there, the Schrödinger wave equation plays a fundamental role. There are also equations for solitary waves, called solitons. Diffusion equations and reaction-diffusion equations occur in the study of semiconductors. Partial differential equations are being used in population dynamics, the field dealing with the growth of populations; these methods are now being used to predict, for example, fish runs, deer and bear populations, and human populations. They are also used to study the way signals are sent along nerves, to calculate the stresses on fractures that arise when bones are set in certain ways, and so on, as well as in control processes and robotics.

The use of partial differential equations is continuing to grow. Part of the reason is economic. Nowadays, when a new process is being developed, a general mathematical model describing the process is constructed. This model usually consists of a system of partial or ordinary differential equations or a combination of both. Based on these equations, a computer simulation can be carried out. If the model is general enough, there will be many free constants in the equations. The choice of different constants in the computer runs corresponds to different experiments that would ordinarily have to be carried out. These computer runs are fast and cheap and allow the choice of optimal constants, which, in turn, can give a producer the opportunity to put out the best product possible using given materials. This can represent significant savings to any producer of a product. A good mathematical model can doom a competitor, but a poor model can doom the company that commissioned it. Mathematical models can be quite abstract, but because of the developments described above, engineering is becoming more and more mathematical. For this reason, the development of new products can no longer take place "in some garage," even though the product may have been invented there.

Context

Partial differential equations began in the eighteenth century with the study of special problems in continuum mechanics and physics. Sir Isaac Newton's laws, which appeared at the end of the seventeenth century, set the stage for a rapid development in mechanics, but the extension to problems in continuum mechanics was not that clear, and Newton's treatment of hydrodynamics was flawed. It was not until the middle of the eighteenth century that Leonhard Euler (1707-1783) obtained the proper mechanical formulation for the second law, which has become fundamental for continuum physics. In the meantime, the Bernoullis had begun their own fundamental research in fluid mechanics. They made use of the concept of hydrostatic pressure that Simon Stevin (1548-1620) had introduced a century earlier and obtained theorems bearing their name. The equation for the vibrating string had been set up, and both Euler and Jean Le Rond d'Alembert (1717-1783) had worked on it. D'Alembert had obtained a general solution, but in the course of the investigations, a question arose about how general functions could be represented, which was not resolved until the following century.

Euler also introduced another principle into physics, which was to play a large role in the twentieth century. This principle is a minimizing, or variational, principle. It is based ultimately on the philosophical view of Gottfried Leibniz (1646-1716) that God created the best of all possible universes, so that all physical laws should satisfy a minimum or maximum principle. Leibniz suggested that the quantity to focus on was what he called the vis viva, the living force, or, as it is now called, the kinetic energy. Pierre de Maupertuis (1698-1759) gave an initial formulation of such a principle, but it was Euler who gave it its usable form and Joseph Lagrange (1736-1813) who gave it its modern mathematical treatment. The physical principle is known today as Hamilton's principle, after William Hamilton (1805-1865), who gave it its final form.

Pierre-Simon Laplace (1749-1827) derived the potential equation in his research on celestial mechanics, and this equation is sometimes called the Laplace equation.

At the beginning of the nineteenth century, a number of problems had been formulated, but a general theory of partial differential equations was lacking. Then, in 1805, Jean-Baptiste-Joseph Fourier (1768-1830) tried to publish his work on the theory of heat, which contained a derivation of the heat equation. His interest had been stimulated by his work with cannons in the military. This work was too revolutionary, and it was not well received. In defense of the scientists of the time, though, it must be admitted that it was not well written and the presentation was not convincing. It was reworked and published in 1822. The solutions are based on a series of eigenfunctions, and these so-called Fourier series were made rigorous by Peter Dirichlet (1805-1859). At the same time, Sophie Germain (1776-1831) published her work on the vibrations of elastic plates, which contained a fourth-order equation. After that, the field developed rapidly. Augustin-Louis Cauchy (1789-1857) gave the general formulation for the elastic equations in continuum mechanics, and Louis Navier (1785-1836) and George Stokes (1819-1903) derived the equations of fluid mechanics in the form that is known today. Carl Friedrich Gauss (1777-1855) and Dirichlet had completed their research in potential theory, and James Clerk Maxwell (1831-1879) had completed his work on electricity and magnetism.

Georg Friedrich Bernhard Riemann (1826-1866) discovered the existence of shocks, and his lectures on partial differential equations remain relevant. At the end of the century, the fundamental existence result for solutions to partial differential equations, the Cauchy-Kovalevskaya theorem, was published by Sophia Kovalevskaya (1850-1891; in the West, she called herself "Sonya").

By the end of the nineteenth century, many physicists believed that, as a field, physics was essentially complete. Mathematicians working in the field of partial differential equations could be nearly as smug. They had a rigorous way in which to formulate problems, there was a general theory to serve as a guide to what could be done, and there were numerous solutions to specialized problems. As in the case of classical physics, however, there were a number of unresolved problems. Dirichlet's principle, a method for solving the potential equation, gave good answers when applied, but its mathematical justification was not yet rigorous. Methods such as integral equations were just being developed. If discontinuities were to be allowed, the definition of a solution would have to be modified. Approximation methods were in their infancy. At the same time, a revolution in physics occurred which brought with it new mathematical problems.

Moreover, technology placed increasing demands on mathematics, and engineers forced the development of new tools to solve the complex problems that they were facing. The field of mathematics soon found itself in the throes of a revolution, and researchers in partial differential equations tried to make use of the new methods being developed.

The twentieth century came to be characterized by an abstraction and generality undreamed of in earlier times. This has led to a unification of the fields of mathematics and a broader understanding of what mathematics can do, but the price has been that many users are either unaware of the results of modern techniques or unable to use them. Problems now are routinely formulated in infinite dimensions, and the numerical approximations are thought of as taking place in some finite-dimensional approximation to the infinite-dimensional space. Contributors to the field of partial differential equations in the twentieth century read like a veritable who's who of modern mathematics. In contrast to previous eras, when first one European country and then another seemed to dominate the field, mathematical research is now international in character, with results reported worldwide. Meetings also have an international flavor. The linear theory for homogeneous media as it pertains to practical problems is fairly complete, and there are also approximation methods available which are being sharpened daily. There are a number of results of a general nature available when the medium is not homogeneous but still linear. The key here is the word "linear." When the equations are nonlinear, unusual and unexpected behavior can occur. Sometimes, the effects of the nonlinearities can be estimated, but in most cases, they cannot. Computer-aided solutions can sometimes be obtained, but it is not always clear whether one is generating numerical "garbage" or whether the numbers really have a meaning. A general, coherent nonlinear theory is a task for the future. In the field of ordinary differential equations, a nonlinear theory of dynamical systems is under development; it is popularly known as the theory of chaos.

The core theory of partial differential equations maintained its roots in mathematics through the late twentieth and early twenty-first centuries. However, new applications emerged as mathematicians better understood nonlinear effects and computational techniques. Advances in solving the Navier-Stokes equations affected climate modeling, aerodynamics, and astrophysical flows. Neuroscience, epidemiology, tumor growth modeling, economics, machine learning, and engineering also benefited from applications of partial differential equations. Climate and environmental sciences use these equations to model the transport of air and water pollutants, the spread of wildfires, and the weather.

Principal terms

CONTINUUM MECHANICS: the branch of mechanics dealing with macroscopic phenomena, in which the medium is treated not as a collection of individual particles but as a continuous medium

EIGENVALUE: a characteristic value for a system, such as a characteristic frequency

MATHEMATICAL MODEL: an equation or a set of equations that purports to describe a certain physical phenomenon

PARTIAL DERIVATIVE: the rate of change of a quantity with respect to one variable when all the other variables are held fixed

PARTIAL DIFFERENTIAL EQUATION: a relation between a function and its partial derivatives

Essay by Christine M. Guenther and Ronald B. Guenther

Bibliography

Guenther, R. B., and J. W. Lee. Partial Differential Equations of Mathematical Physics and Integral Equations. Prentice-Hall, 1988.

Hellwig, Gunter. Partial Differential Equations. Blaisdell, 1964.

John, Fritz. Partial Differential Equations. 4th ed., Springer-Verlag, 1982.

Krantz, Steven G., and George F. Simmons. Differential Equations: Theory, Technique, and Practice. 3rd ed., Chapman & Hall/CRC, 2022.

Ladyzhenskaya, O. A. The Boundary Value Problems of Mathematical Physics. Springer-Verlag, 1985.

Razdan, Atul Kumar, and V. Ravichandran. Fundamentals of Partial Differential Equations. Springer, 2022.

Shipps, Alex. "A Framework for Solving Parabolic Partial Differential Equations." Massachusetts Institute of Technology, 28 Aug. 2024, news.mit.edu/2024/framework-solving-parabolic-partial-differential-equations-0828. Accessed 7 Feb. 2025.