Weather Forecasting: Numerical Weather Prediction

Most industrialized nations use numerical weather prediction (NWP) techniques to formulate weather forecasts. NWP is based on physical laws that are incorporated into a mathematical model, with solutions determined using computer algorithms. Large amounts of data are used to initialize the numerical models, and verification of short-range NWP models shows a considerable improvement over forecasts based on climatology alone. NWP models require large mainframe supercomputers to formulate each forecast.

Historical Development

The atmosphere is a very complex, chaotic natural system. Before the science of meteorology was developed, keen observers of the natural environment developed forecasting rules epitomized by such sayings as “Red sky at morning/ Sailors take warning/ Red sky at night/ Sailors delight.” In the early nineteenth century, a generally accepted systematic classification of clouds was introduced by Luke Howard, known as the father of British meteorology, and scientists began making routine daily weather observations at major cities and universities. A systematic study of the atmosphere as a chemical and physical entity began.

In 1904, Vilhelm Bjerknes brought the scientific community to understand that atmospheric motions were largely governed by the first law of thermodynamics, Newton’s second law of motion, conservation of mass (the “continuity equation”), the equation of state, and conservation of water in all its forms. (These physical laws form what is now known as the “governing equations” for numerical weather prediction.) Bjerknes, writing that the fundamental governing equations constituted a determinate, nonlinear system, realized that the system had no analytic solution. He also recognized that the available data for determining initial conditions were inadequate. In 1906, Bjerknes devised graphical methods for use in atmospheric physics. During the next decade, he pursued an approach that applied physics to weather forecasting qualitatively rather than numerically.
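In one common modern notation (the symbols below are conventional textbook choices, not drawn from Bjerknes’s own papers), these governing equations can be written as follows:

```latex
% Governing equations of atmospheric motion, one standard textbook form.
% Assumed notation: \mathbf{v} = three-dimensional wind, p = pressure, \rho = density,
% T = temperature, q = specific humidity, \boldsymbol{\Omega} = Earth's rotation vector,
% \mathbf{g} = gravity, \mathbf{F} = friction, c_p = specific heat at constant pressure,
% R = gas constant for dry air, Q = diabatic heating, S_q = net source of water vapor.
\begin{align}
\frac{D\mathbf{v}}{Dt} &= -\frac{1}{\rho}\nabla p - 2\,\boldsymbol{\Omega}\times\mathbf{v} + \mathbf{g} + \mathbf{F}
  && \text{(Newton's second law)} \\
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) &= 0
  && \text{(conservation of mass)} \\
c_p\,\frac{DT}{Dt} - \frac{1}{\rho}\,\frac{Dp}{Dt} &= Q
  && \text{(first law of thermodynamics)} \\
p &= \rho R T
  && \text{(equation of state)} \\
\frac{Dq}{Dt} &= S_q
  && \text{(conservation of water)}
\end{align}
```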

In 1916, Bjerknes began working at the Bergen Museum in Norway, a move that would be decisive in establishing meteorology as an applied science. In 1918, Bjerknes’s son Jacob noticed distinctive features on weather maps that led him to publish an essay entitled “On the Structure of Moving Cyclones.” This was followed by the development of the concept of “fronts” as a forecasting tool at the Bergen School in 1919. Following World War I, Norwegian and Swedish meteorologists educated at the Bergen School began using the theory of fronts operationally. Their ability to correctly predict severe weather events led other Europeans to adopt this frontal forecasting method. Enthusiasm for this empirical method blossomed and remains strong among the general public today.

Meanwhile, attempts to develop numerical weather prediction techniques languished. A method for numerically integrating the governing equations was published by L. F. Richardson in 1922, but it contained several problems and errors. Following World War II, the development of mainframe computers made it possible to build objective forecasting models incorporating many meteorological variables. In 1950, the first scientific results of a computer model, based on numerical integration of the barotropic vorticity equation, were published. In the following decades, computer models grew increasingly sophisticated through the incorporation of larger and larger numbers of operations, iterations, and data inputs. Computer speeds increased dramatically, allowing the use of sophisticated equations modeling atmospheric behavior. Larger models with relatively fine grids were developed.
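The 1950 computation mentioned above integrated the barotropic vorticity equation, which in one common form (notation assumed here, not quoted from the original paper) states that absolute vorticity is conserved following the nondivergent flow:

```latex
% Barotropic vorticity equation. Assumed notation: \zeta = relative vorticity,
% f = Coriolis parameter, \psi = streamfunction, \mathbf{v}_\psi = nondivergent wind.
\begin{equation}
\frac{\partial \zeta}{\partial t} + \mathbf{v}_\psi \cdot \nabla\,(\zeta + f) = 0,
\qquad \zeta = \nabla^{2}\psi .
\end{equation}
```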

Use of Supercomputers

Modern weather forecasting attempts to model processes in the atmosphere by representing the appropriate classical laws of physics in mathematical terms. Models carry out their calculations with algorithms: assemblages of approximations to the physical equations. Large (mainframe) supercomputers generate numerical forecasts using these algorithms and an array of observations. Most models have at least fifteen vertical levels and a grid with a mesh size of less than 200 kilometers. The exact formulation of any particular model depends upon the amount of input data, how far in advance the forecast extends, how detailed it must be, and which variables are being forecast.
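As a minimal sketch of how a model steps such laws forward on a grid, the fragment below advects a temperature field with a simple upwind finite-difference scheme. The grid size, wind, and time step are invented for illustration and bear no relation to any operational model:

```python
# Minimal sketch: advect a temperature field on a coarse grid with an upwind
# finite-difference scheme. Resolution, wind, and time step are illustrative only;
# operational models carry many more variables, vertical levels, and physics.
import numpy as np

nx, ny = 90, 45            # hypothetical horizontal grid
dx = 200_000.0             # grid spacing in meters (~200 km, as in the text)
dt = 600.0                 # time step in seconds
u, v = 20.0, 5.0           # constant westerly and southerly wind components (m/s)

# Initial temperature field: a 10 K warm anomaly on a 280 K background.
i, j = np.arange(nx)[:, None], np.arange(ny)[None, :]
T = 280.0 + 10.0 * np.exp(-(((i - 30) ** 2) + ((j - 20) ** 2)) / 50.0)

def step(T):
    """One upwind advection step with periodic boundaries."""
    dTdx = (T - np.roll(T, 1, axis=0)) / dx   # upwind difference for u > 0
    dTdy = (T - np.roll(T, 1, axis=1)) / dx   # upwind difference for v > 0
    return T - dt * (u * dTdx + v * dTdy)

for _ in range(144):       # 144 steps of 600 s = a 24-hour integration
    T = step(T)

print("forecast maximum temperature (K):", round(float(T.max()), 2))
```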

Historically, weather observations for numerical forecasts were taken mainly at 0000 Coordinated Universal Time (UTC), or Greenwich time, and 1200 UTC using ground stations and radiosondes (weather balloons). Now, these observations are supplemented by asynchronous observations from infrared and microwave detectors on satellites, radar, pilot reports, weather drones, and automatic weather observation stations (including ocean buoys). These new data sources have substantially enlarged the amount of input data. However, because these data are fed into the models as they run, the distinction between input data and model predictions has become blurred. Thus, a weather map showing a numerically generated forecast (as seen on the Internet or in a television presentation) may contain information from many data sources of varying accuracy taken at various times and locations.

Data from all available observations are interpolated to fit the model grid size. Every forecast incorporates inherent errors from the observations, data interpolations, model approximations, the instabilities of the mathematics, and the limitations of the computers used. Models can be separated by their time frame into nowcasts, short-term forecasts, medium-range forecasts, long-term forecasts, and outlooks. In addition, specialized forecast models for hurricanes and other specific types of severe weather have been developed. Because of hazards associated with severe, rapidly developing weather threats, there is increasing interest in developing specialized nowcast and short-range mesoscale (local) forecast models.
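The interpolation step described above can be sketched with a simple inverse-distance weighting scheme. The station values and grid below are invented, and operational centers use far more sophisticated data-assimilation methods:

```python
# Sketch: interpolate scattered station observations onto a regular model grid
# using inverse-distance weighting. Station data are invented for illustration.
import numpy as np

# Hypothetical stations: (longitude, latitude, observed temperature in K).
obs = np.array([
    [-100.0, 40.0, 281.5],
    [ -95.0, 42.0, 279.0],
    [ -98.0, 37.5, 284.2],
    [ -92.0, 39.0, 282.8],
])

lons = np.arange(-102.0, -90.0, 2.0)   # coarse illustrative grid
lats = np.arange(36.0, 44.0, 2.0)

def idw(grid_lon, grid_lat, obs, power=2.0):
    """Inverse-distance-weighted estimate of the field at one grid point."""
    d = np.hypot(obs[:, 0] - grid_lon, obs[:, 1] - grid_lat)
    d = np.maximum(d, 1e-6)            # avoid division by zero at a station
    w = 1.0 / d ** power
    return float(np.sum(w * obs[:, 2]) / np.sum(w))

gridded = np.array([[idw(lo, la, obs) for lo in lons] for la in lats])
print(np.round(gridded, 1))
```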

Many countries maintain their own meteorological offices, using satellite images and local weather observations to develop computer models suited to their particular needs and resources. Japan, with its high population concentrations exposed to tropical storm threats, has directed its meteorological efforts toward mesoscale forecasting. The entire country was covered by a network of weather radars by 1971, and Japan supplements its ground-based observations with geostationary weather satellites. This maximizes the Japanese ability to predict heavy rains from typhoons and to issue flash flood forecasts.

In 1999, the United States National Weather Service (NWS) completed a twelve-year modernization program that included 311 automatic weather observing systems and 120 new Nexrad Doppler radars. Observed weather and forecasts were displayed on state-of-the-art computer systems at NWS forecasting offices.

In the mid-1970s, the European Centre for Medium-Range Weather Forecasts (ECMWF) and the U.S. National Weather Service began using high-speed supercomputers to generate and solve numerical weather models. Both entities have developed global atmospheric models for medium-range forecasts of three to six days. The numerical methods used in these global models involve a spectral transform method, in part because grid-point models frequently experience numerical problems in computations at polar latitudes. In 1999, the ECMWF was using a model with grid points spaced every 60 kilometers around the globe at thirty-one vertical levels. Its initial conditions were built from observations taken over the previous twenty-four hours, augmented by several preliminary model runs forecasting roughly twenty minutes ahead. Wind, temperature, and humidity were then forecast at 4,154,868 points throughout the atmosphere. Although these models do very well compared with earlier, more primitive models, there is considerable room for improvement in forecasting ability, especially at tropical latitudes. Medium-range forecast models continue to be improved periodically. For example, the prediction of wave heights in the Great Lakes was significantly improved in the 2010s with the implementation of the WAVEWATCH III model, which combined bathymetry and sea surface temperature measurements with wind and temperature data from the National Digital Forecast Database (NDFD) to predict wave heights more accurately and better advise the public.
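The spectral transform idea mentioned above can be illustrated in one dimension: along a latitude circle a field is periodic, so it can be expanded in Fourier modes and differentiated exactly in spectral space. The sketch below uses a synthetic field and ordinary Fourier transforms; full global models use spherical harmonics in both horizontal directions:

```python
# Sketch: differentiate a periodic field along a latitude circle spectrally.
# The field and resolution are synthetic; real models use spherical harmonics.
import numpy as np

nlon = 128                                                  # illustrative zonal resolution
lam = np.linspace(0.0, 2.0 * np.pi, nlon, endpoint=False)   # longitudes (radians)
field = np.cos(3 * lam) + 0.5 * np.sin(7 * lam)             # synthetic field

# Forward transform, multiply by i*k to differentiate, then inverse transform.
k = np.fft.fftfreq(nlon, d=1.0 / nlon)                      # integer zonal wavenumbers
derivative = np.real(np.fft.ifft(1j * k * np.fft.fft(field)))

analytic = -3.0 * np.sin(3 * lam) + 3.5 * np.cos(7 * lam)
print("maximum error vs. analytic derivative:", float(np.abs(derivative - analytic).max()))
```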

Further improvements in the use of supercomputers in numerical weather forecasting include the National Oceanic and Atmospheric Administration's (NOAA) Weather and Climate Operational Supercomputing System (WCOSS). In 2022, WCOSS replaced the IBM and Cray supercomputers that NOAA had previously used. In 2023, WCOSS's computing power was increased by 20 percent to improve forecast model guidance. These changes have increased the accuracy of short-term forecasts and warning systems and contributed to the study and monitoring of climate change effects such as sea-level rise, out-of-season hurricanes, and droughts.

Future of Numerical Weather Prediction

The greatest advancements in numerical weather prediction may arise through continued improvement in the quality and quantity of weather data used as input in the models, through better methods of data interpolation, through improved algorithms of atmospheric behavior, and in the use of increasingly sophisticated supercomputers capable of more and faster iterations. Some modelers believe that departing from a latitude-longitude grid would prevent the clustering of grid points near the poles, which now induces some problems in numerical computations.
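The clustering problem can be seen with simple arithmetic: on a latitude-longitude grid, the east-west spacing shrinks with the cosine of latitude, forcing very small time steps or special filtering near the poles. The figures produced by the sketch below are illustrative only:

```python
# Sketch: east-west spacing of a 1-degree latitude-longitude grid at several latitudes.
# Spacing shrinks as cos(latitude), which is the pole-clustering problem noted above.
import math

R_EARTH = 6_371_000.0               # mean Earth radius in meters
dlon = math.radians(1.0)            # 1-degree longitude increment

for lat in (0, 45, 80, 89):
    dx = R_EARTH * math.cos(math.radians(lat)) * dlon
    print(f"latitude {lat:2d} deg: east-west grid spacing ~ {dx / 1000.0:7.1f} km")
```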

Nowcasting, especially for rapidly developing severe weather conditions, including severe thunderstorms, flash floods, and tornadoes, has received a great deal of attention from scientists and governments. Within the United States, it is hoped that incorporating Nexrad Doppler radar observations into operational models will lead to better nowcasting. Other developing technologies include three-dimensional approaches to nowcasting, such as the three-dimensional convolutional neural network (3DCNN) model.

Forecasts demonstrate both accuracy and skill. A forecast may be entirely accurate simply because of the normal weather pattern in a location, not because of the skill of the forecaster. Conversely, a forecaster or a forecasting method demonstrates skill by accurately predicting weather that is outside the expected normal pattern for that location. Forecasts have been improving in a slow, steady fashion since the introduction of supercomputers, yet even for short-range time periods there is still room for improvement.

One method for improving any model’s forecasting ability is to make ensemble forecasts by running the same model many times, each time with slightly different initial conditions, and then pooling the results statistically. This technique yields better results than any single model run, a conclusion based on comparing verifications of the individual and ensemble results for a large number of forecasts. A variation of this technique is the superensemble forecast, in which forecasts from several different computer model runs, or even ensemble runs, are pooled. Superensemble forecasts have been found to be more accurate than ensemble forecasts. One way in which superensemble forecasts are thought to achieve this is by minimizing “forecast bias,” in which any one model develops a spurious trend as the increasing number of iterations magnifies small errors.
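A minimal sketch of the pooling step is given below. All numbers are invented: the “models,” their biases, and the historical error statistics are stand-ins, and operational superensembles derive their weights by regression against a long training record of verified forecasts:

```python
# Sketch: pool ensemble members by averaging, then form a crude superensemble by
# removing each model's (assumed known) bias and weighting models by the inverse of
# their historical mean absolute error. All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
truth = 15.0                                   # verifying temperature (deg C)

# Three hypothetical models, each run 10 times with perturbed initial conditions.
biases = np.array([1.5, -0.8, 0.3])            # systematic bias of each model
members = truth + biases[:, None] + rng.normal(0.0, 1.0, size=(3, 10))

ensemble_means = members.mean(axis=1)          # one ensemble mean per model

# Weight each model by the inverse of its historical mean absolute error (assumed).
past_mae = np.array([1.6, 1.0, 0.7])
weights = (1.0 / past_mae) / np.sum(1.0 / past_mae)
superensemble = float(np.sum(weights * (ensemble_means - biases)))

print("single-model ensemble errors:", np.round(np.abs(ensemble_means - truth), 2))
print("superensemble error:", round(abs(superensemble - truth), 2))
```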

Worldwide, better weather satellite imagery for input to more detailed hemispheric models could lead to enhanced quality of numerical weather predictions. Greater availability of weather observations for the world’s oceans would also be helpful. While hemispheric models are good at forecasting at midlatitudes, less accuracy and skill are shown in tropical forecasting, as determined by model verification. Improved models incorporating more detailed initial data could provide better forecasts in the tropics.

One of the greatest challenges confronting numerical weather prediction lies in the development of reliable long-range predictive models. Long-range predictions often fail because of nondeterministic, or random, events. The El Niño/Southern Oscillation is an example. El Niño events are now viewed as virtually nondeterministic; when a strong El Niño event is occurring, the output from some long-range models may be subjectively altered by meteorologists to reflect past historical events. The goal of numerical weather prediction is to provide objective forecasts that verify more often than subjective forecasts do, even those formulated by highly trained meteorologists. To forecast El Niño events better, it would be advantageous to develop a predictive model that could successfully incorporate Pacific Ocean surface temperatures as initial conditions. With longer and better data records, and further study of why existing long-range predictions have failed, better models may be developed.

To improve overall weather forecasting in the United States and contribute to the weather research community in meaningful ways, some scientists have called for individual weather agencies to be combined into a single federal organization. As climate change continues to alter weather patterns and severe storms put lives at risk, it is increasingly important to predict and document weather events accurately and in a timely manner. Combining the resources of the NWS, NOAA, and other organizations would provide better funding and scientific collaboration in weather science.

Principal Terms

climatology: the scientific study of climate that depends on the statistical database of weather observed over a period of twenty or more years for a specific location

ensemble weather forecasting: repeated use of a single model, run many times using slightly different initial data; the results of the model runs are pooled to create a single “ensemble” weather forecast

forecast verification: comparison of predicted weather to observed weather conditions to assess forecasting accuracy and reliability

global atmospheric model: computational model of global weather patterns based on a spherical coordinate system representing the entire planet

hemispheric model: a numerical model whose domain covers an entire hemisphere (Northern or Southern), one half of the planet rather than the whole globe

long-range prediction: a weather forecast for a specific region for a period greater than one week in advance, often supplemented with climatological information

mesoscale model: a numerical model that produces forecasts for an area of up to several hundred square kilometers in extent on a time scale of between one and twelve hours

nondeterminism: the influence of chaotic, random events that cannot be predicted but that have a significant effect on the development of weather systems

nowcasting: a very short-term weather forecast usually for the prediction of rapidly changing, severe weather events within a time of no more than a few hours
