Expected values (mathematical concept)
Expected value is a fundamental mathematical concept that represents the long-term average of all possible outcomes of a random variable. It is calculated by taking the weighted sum of outcomes, where the weights correspond to the probabilities of those outcomes occurring. This concept plays a crucial role in fields like probability theory, statistics, and applied mathematics, particularly in scenarios involving decision-making under uncertainty, such as gambling, insurance, and investments.
Historically, expected values emerged from early discussions of fairness in games and risk assessment, notably through the correspondence between the mathematicians Pascal and Fermat and the later work of Pierre-Simon Laplace. In practical applications, expected values help in estimating parameters and analyzing data distributions, laying the groundwork for results such as the Central Limit Theorem. This theorem demonstrates that, given large enough sample sizes, the sampling distribution of the mean will approximate a normal distribution, which significantly aids in hypothesis testing and parameter estimation.
Further, expected values are integral to understanding moments—measures that describe the shape of probability distributions, such as variance, skewness, and kurtosis. These measures help characterize data behavior, providing insights into phenomena across various disciplines, including psychology, biology, and economics. Overall, the concept of expected value is essential for making informed decisions based on probabilistic outcomes.
Summary: The mathematical concept of “expected value” arose in the study of fairness in gambling, but it has many scientific applications.
When people play lotteries or purchase insurance, they are investing money for a chance of some future financial return that may or may not occur. From the lottery or insurance company’s perspective, money comes in from multiple purchasers and is paid out to the winners or claimants. Both sides may have questions regarding whether the investments are worthwhile or the payments are fair. These questions appear to date back to antiquity. Evidence of gambling games has been found in archaeological excavations of caves and in many ancient civilizations, including Egypt, Greece, and the Roman Empire. The Babylonians used a form of maritime insurance, and the Romans paid out some investments as annuities.
![Expected value in a continuous probability distribution. By Joxemai (Own work) [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons 94981805-91333.jpg](https://imageserver.ebscohost.com/img/embimages/ers/sp/embedded/94981805-91333.jpg?ephost1=dGJyMNHX8kSepq84xNvgOLCmsE2epq5Srqa4SK6WxWXS)
![Uncertainty Theory By Pingfanlj (Own work) [Public domain], via Wikimedia Commons 94981805-91334.jpg](https://imageserver.ebscohost.com/img/embimages/ers/sp/embedded/94981805-91334.jpg?ephost1=dGJyMNHX8kSepq84xNvgOLCmsE2epq5Srqa4SK6WxWXS)
A question concerning the fairness of certain gambling games spurred the development of probability theory in the seventeenth century. Mathematicians Blaise Pascal and Pierre de Fermat addressed fairness and related concepts while corresponding about a scenario in which two people wanted to quit playing a game and divide the winnings fairly, given that one player had a better chance of winning the game than the other. Mathematician Pierre-Simon Laplace seems to have first defined expected value in his 1814 work Essai philosophique sur les probabilités, writing, “This advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it. We call this advantage mathematical hope.”

Expected value is the long-term average of the possible outcomes of a random variable or process, like tossing a six-sided die. Mathematically, expected value is computed as the weighted sum of the outcomes, where the weights are the corresponding probabilities. For discrete random variables, the expected value is a summation; for continuous variables, it is an integration. While computing means for data is very common beginning in middle school classrooms in the twenty-first century, finding expected values for random variables is more commonly part of high school and college curricula. Though initially motivated by notions of fairness, expected values have many important applications in probability and statistical theory and practice.
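In modern notation, which the article itself does not use, the weighted-sum definition can be written compactly. The following is a brief sketch, with the six-sided die mentioned above worked out as an example.

```latex
% Expected value of a discrete random variable X taking values x_i
% with probabilities p_i, and of a continuous random variable with
% probability density f(x):
E[X] = \sum_i x_i \, p_i \quad \text{(discrete)}
\qquad
E[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx \quad \text{(continuous)}

% Worked example: a fair six-sided die, each face having probability 1/6:
E[X] = \tfrac{1}{6}(1 + 2 + 3 + 4 + 5 + 6) = 3.5
```

Note that 3.5 is not itself a possible outcome of a single toss; the expected value describes the long-run average over many tosses.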
Applications
Scientific problems involving measurement were an inspiration for many mathematical advances in probability and applied data analysis. Astronomers in the eighteenth century often computed arithmetic means (or averages) of data to estimate parameters and describe distributions of “errors,” such as those they found when taking multiple measurements of the same astronomical distance. They generally believed that these averages were likely to be close to the true distance or value. This technique was used without proof for a long time, though mathematician Thomas Simpson had shown that an average was a better measure than a single observation in a very limited set of cases. Some issues in finding a suitable proof stemmed from the fact that the probability distributions commonly used for describing errors at that time presented mathematical difficulties when trying to find expected values for averages versus expected values for individual observations. Work by mathematicians Abraham de Moivre and Laplace led to the Central Limit Theorem, derived by Laplace in the nineteenth century and later extended by other mathematicians, such as Francis Edgeworth. This result is sometimes called the “de Moivre–Laplace theorem” and was given its more common name in work by George Pólya in the early twentieth century. The primary impact of the Central Limit Theorem with regard to expected values is that it defined the expected value for the sampling distribution of the mean, given sufficiently large sample sizes. It established a theoretical basis for estimation and, later, hypothesis testing for various parameters.
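The theorem’s content can be illustrated with a short simulation. The sketch below is not part of the original article and uses only the Python standard library; the choice of a die, the sample size of 100, and the 10,000 repetitions are arbitrary assumptions made for illustration.

```python
import random
import statistics

# Sketch: illustrate the Central Limit Theorem with a fair six-sided die.
# A single roll has expected value 3.5 and variance 35/12. The CLT says
# that means of large samples should cluster near 3.5, with an
# approximately normal spread of about sqrt(35/12) / sqrt(n).

def sample_mean(n: int) -> float:
    """Mean of n independent rolls of a fair six-sided die."""
    return statistics.mean(random.randint(1, 6) for _ in range(n))

means = [sample_mean(100) for _ in range(10_000)]

print("average of the sample means:", round(statistics.mean(means), 3))
# Expected: close to 3.5
print("standard deviation of the sample means:", round(statistics.stdev(means), 3))
# Theory predicts roughly sqrt(35/12) / sqrt(100), about 0.171
```

A histogram of `means` would show the familiar bell shape, even though the underlying distribution of a single die roll is uniform rather than normal.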
There are many different probability distributions that mathematicians, statisticians, and others have found, derived, named, and studied. For many years the normal distribution, credited to mathematician Carl Friedrich Gauss, played a central role in error modeling and other applications. However, as the twentieth century approached, the increasing application of probability and statistics in a wide variety of fields, including biology, business, genetics, and psychophysics, led investigators such as statistician Karl Pearson to research non-normal or skewed distributions that better represented the phenomena they encountered. The problem then became to estimate parameters for these distributions and discover their mathematical properties. The method of moments estimates parameters like variance and skewness using expected values. It primarily considers deviations of points from the distribution mean, called “central moments,” which are conceptually related to the idea of moment or torque about a point in physics. Deviations are raised to various powers, so that the k-th moment corresponds to the k-th power. The first central moment is zero, since it essentially sums all deviations from the mean or expected value. Variance is the second central moment, which is the expected value (the weighted sum) of all squared deviations from the mean. The third moment quantifies skewness or asymmetry and is the expected value of all cubed deviations from the mean; a symmetric distribution has skewness of zero. The fourth moment is called “kurtosis” and measures whether the distribution is taller or shorter, with thicker or thinner tails, than a normal distribution with the same variance. Mixed moments can be found for two variables together to quantify the covariance and, by extension, correlation. Measures of skewness and kurtosis based on moments are credited to Pearson.
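The moment definitions in the preceding paragraph can be stated explicitly. The following summary sketch uses standard modern notation; the standardized skewness and kurtosis ratios are the conventional forms associated with Pearson’s measures, not formulas quoted from the article itself.

```latex
% k-th central moment of a random variable X with mean mu = E[X]:
\mu_k = E\!\left[(X - \mu)^k\right]

% First and second central moments:
\mu_1 = 0, \qquad \mu_2 = \operatorname{Var}(X) = E\!\left[(X - \mu)^2\right]

% Standardized measures of shape:
\text{skewness: } \gamma_1 = \frac{\mu_3}{\mu_2^{3/2}},
\qquad
\text{kurtosis: } \beta_2 = \frac{\mu_4}{\mu_2^{2}}

% Mixed moment for two variables (the covariance):
\operatorname{Cov}(X, Y) = E\!\left[(X - \mu_X)(Y - \mu_Y)\right]
```

For the normal distribution, the standardized kurtosis equals 3, which is why kurtosis is often reported as “excess kurtosis” relative to that benchmark.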