Bayesian inference

Bayesian inference (also known as Bayesian analysis) is a method of drawing more accurate predictive conclusions from limited amounts of data. In other words, Bayesian inference allows someone to solve a problem by combining what is already known with whatever limited data set is available. For instance, consider a real-world scenario in which a person cannot find his car in a mall parking garage. Assuming he has a remote key that activates a sound, logic dictates that the car owner should follow the sound of the car's beeping to find the vehicle. Bayesian inference, however, also allows the car owner to factor in previous behavior. If the car owner has a tendency to park near elevators, Bayesian inference suggests searching in areas that are both near elevators and within earshot of the beeping. In this scenario, the car owner has parked in the garage only a few times, and in different spots each time. By combining a behavioral tendency (parking near elevators) with known data (the sound of the beeping), the owner can make calculated decisions with greater accuracy despite the small sample.
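
As a minimal numerical sketch of this idea (with made-up numbers purely for illustration), the garage can be split into two zones, with the owner's habits treated as a prior that is updated by the evidence of the beep:

```python
# Hypothetical two-zone version of the parking-lot example.
# The prior encodes the owner's habit of parking near elevators.
prior = {"near_elevator": 0.8, "far_corner": 0.2}

# Made-up likelihoods: how probable it is to hear the beep clearly
# from the owner's current position if the car is in each zone.
likelihood = {"near_elevator": 0.6, "far_corner": 0.3}

# Bayes' rule: the posterior is proportional to prior times likelihood.
unnormalized = {zone: prior[zone] * likelihood[zone] for zone in prior}
total = sum(unnormalized.values())
posterior = {zone: p / total for zone, p in unnormalized.items()}

print(posterior)  # {'near_elevator': 0.889, 'far_corner': 0.111} (rounded)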

Bayesian inference has applications in a variety of industries. It can be used to target marketing campaigns, price goods, analyze financial risk, build predictive sales models, and identify relevant demographic markets. Bayesian inference also remains a fixture of statistics education.

Brief History

Bayesian inference derives from the work of Thomas Bayes, an English minister and statistician. He was responsible for the development of Bayes' theorem, which underlies a number of statistical models in wide use in the twenty-first century. Bayes was born into a wealthy Presbyterian family in northern England. Like his father, Bayes eventually entered the ministry, but he also retained an interest in logic and rhetoric, studying all three subjects at the University of Edinburgh beginning in 1720. By 1732, he was serving as an assistant at his father's Presbyterian chapel.


Bayes' theories were outlined in an essay entitled "An Essay towards Solving a Problem in the Doctrine of Chances" (1763), published by his colleague and fellow minister Richard Price two years after Bayes' death. Price recognized the value of Bayes' ideas and refined them for publication; as a result, some academics suggest that the theorem would more appropriately be titled the Bayes-Price theorem. The essay set out a method for determining conditional probability: the probability that an event will occur given that another event is known to have occurred (that is, given known statistical information).
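
In modern notation (a later convention, not Bayes' own), the theorem is usually written as

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

where $P(A)$ is the prior probability of the hypothesis $A$, $P(B \mid A)$ is the likelihood of the evidence $B$ under that hypothesis, and $P(A \mid B)$ is the updated (posterior) probability once $B$ has been observed.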

Bayes' paper remained in obscurity until it was rediscovered by the French mathematician Pierre-Simon Laplace, who had independently arrived at similar conclusions about probability. Laplace believed that Bayes' approach could account for the differences in astronomical observations made by many different cultures over a period of two thousand years; prior to Laplace, astronomers had simply averaged such discrepant observations together. Laplace argued that the probability of a cause, given an observed event, is proportional to the probability of the event, given that cause. He used birth records to demonstrate this idea. Records in Laplace's era showed that more male than female births were typically recorded, and he sought to determine whether this was simply a geographical or time-specific trend or whether it recurred across all cultures and periods. He initially used French data, then added birth records from across Europe, Egypt, and South America. Ultimately, he was able to deduce that, on average, humankind gives birth to slightly more boys than girls.

Using this same method, the astronomer Alexis Bouvard calculated the masses of Saturn and Jupiter from all known reliable observations, and modern science has subsequently confirmed that his statistical results were extraordinarily accurate. Despite his promotion of Bayes' theorem, however, Laplace eventually moved toward a frequentist approach to statistical probability. Over time, Bayesian inference fell into disuse among theoreticians, but it regained value among mathematicians who revived it for cryptography during World War II (1939–1945). It has since been found to have valuable applications in actuarial science, epidemiology, the determination of authorship of anonymous works, the prediction of election results, and other statistical problems that lack large amounts of sample data.

Overview

Bayesian inference is based upon Bayes' theorem. In its simplest form, the theorem offers a means to estimate an answer to a question based solely upon how many times an event did or did not happen in the past. As more information becomes available, the hypothesis is altered to account for the new data. For example, Bayes conducted a test in which a ball was tossed onto a table by an assistant while Bayes' back was turned to the table. He asked the assistant not to tell him where the ball landed. Then he had the assistant drop more balls onto the table from the same position while Bayes remained facing away; after each ball was dropped, the assistant reported only whether it landed to the left or right of the first ball. With each report, Bayes could further narrow down where on the table the first ball most likely sat. In Bayes' theorem, this updating process is summarized as: the posterior is proportional to the prior times the likelihood. The prior is the existing hypothesis before any new data is taken into account, the likelihood measures how well any new data fits each candidate hypothesis, and the posterior is the updated hypothesis that accounts for all known data. Every time new data arrives, the resulting posterior becomes the prior for the next calculation.
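
A minimal sketch of this updating process, assuming the table is modeled as the unit interval and using hypothetical left/right reports from the assistant, might look like the following:

```python
import numpy as np

# Candidate positions for the first ball along a table modeled as [0, 1].
grid = np.linspace(0.01, 0.99, 99)
prior = np.full(len(grid), 1.0 / len(grid))  # flat prior: no information yet

# Hypothetical reports from the assistant for four subsequent balls.
reports = ["left", "left", "right", "left"]

posterior = prior
for report in reports:
    # If the first ball sits at position p, a new ball dropped uniformly
    # at random lands to its left with probability p and right with 1 - p.
    likelihood = grid if report == "left" else 1.0 - grid
    posterior = posterior * likelihood       # prior times likelihood
    posterior = posterior / posterior.sum()  # normalize; this becomes the new prior

print("Most probable position:", grid[np.argmax(posterior)])  # 0.75
```

With three "left" reports and one "right," the posterior peaks at 0.75: the first ball most likely sits three-quarters of the way along the table, and each additional report sharpens the estimate further.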

To infer means to come to a conclusion based on all available evidence. Bayesian inference, then, uses models that rely upon Bayes' theory of probability to draw conclusions. Bayesian inference differs from the closely related form of reasoning called frequentist inference. Frequentist approaches estimate the probability of a future event from how frequently it has occurred across all recorded observations, and they typically require a broad cross-section of data, whereas Bayesian inference can be used when only smaller statistical samples are available. For instance, the statistician Nate Silver used Bayesian inference to predict the results of US presidential elections by relying on election polls, which sample only a limited number of voters, rather than on data from the very large population of voters as a whole.
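
As a minimal sketch of how a Bayesian model extracts an estimate from a small sample (a generic beta-binomial update with made-up poll numbers, not Silver's actual models):

```python
# Beta(a, b) prior over a candidate's level of support; Beta(1, 1) is flat.
a, b = 1, 1

# Hypothetical small poll: 28 of 50 respondents favor the candidate.
favor = 28
against = 50 - favor

# Conjugate update: the posterior is Beta(a + favor, b + against).
a, b = a + favor, b + against

posterior_mean = a / (a + b)
print(f"Estimated support: {posterior_mean:.3f}")  # 0.558
```

A second poll could be folded in the same way, with this posterior serving as the prior for the next update.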

Bibliography

"An Intuitive (and Short) Explanation of Bayes' Theorem." Better Explained, betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/. Accessed 23 June 2017.

Bayes, Thomas. "An Essay towards Solving a Problem in the Doctrine of Chances." Philosophical Transactions of the Royal Society, vol. 53, 1763, pp. 370–418.

Bellhouse, D. R. "The Reverend Thomas Bayes FRS: A Biography to Celebrate the Tercentenary of His Birth." H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology, www2.isye.gatech.edu/~brani/isyebayes/bank/bayesbiog.pdf. Accessed 23 June 2017.

Bolstad, William M., and James M. Curran. Introduction to Bayesian Statistics. 3rd ed., Wiley, 2017.

Casella, George. "Bayesians and Frequentists: Models, Assumptions, and Inference." Department of Statistics, University of Florida, www.stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf. Accessed 23 June 2017.

Harney, Hanns Ludwig. Bayesian Inference: Parameter Estimation and Decisions. Springer, 2013.

Jeliazkov, Ivan, and Xin-She Yang, eds. Bayesian Inference in the Social Sciences. Wiley, 2014.

Kennedy, Aaron. "Introduction to Bayesian Inference." Data Science, 12 Dec. 2016, www.datascience.com/blog/introduction-to-bayesian-inference-learn-data-science-tutorials. Accessed 23 June 2017.

Link, William A., and Richard J. Barker. Bayesian Inference: With Ecological Applications. Academic Press, 2010.

Loredo, Thomas J., et al. "Bayesian Inference: More Than Bayes's Theorem." Frontiers in Astronomy and Space Sciences, 21 Oct. 2024, doi.org/10.3389/fspas.2024.1326926. Accessed 12 Nov. 2024.

Muehlhauser, Luke. "A History of Bayes' Theorem." Less Wrong, 29 Aug. 2011, lesswrong.com/lw/774/a_history_of_bayes_theorem/. Accessed 23 June 2017.