
It’s a Toss-Up: Models of Probability

 

Written by Olga Cooperman in collaboration with ChatGPT


 
Introduction

 

Our quest for reliable knowledge leads us to distinguish between what we know with certainty to be true, what we believe to be likely true, and what is generally accepted as true through expert consensus. In this exploration, we will examine beliefs that reside in the 'likely true' category and the fascinating realm of probability that underpins them.

 

The statement 'The sum of the interior angles of a triangle equals two right angles' stands as an absolute certainty within mathematics. Its truth comes not from physical measurement but from a rigorous logical progression from established axioms. Such mathematical truths, once proven, are immutable. When seeking absolute assurance, we rely on deductive reasoning, which begins with universally acknowledged truths and builds upon them. For instance, we can all agree that all humans are mortal. From this, and from the fact that Socrates is human, we can conclude that Socrates is mortal; the connection is clear and direct.

 

Conversely, the assertion that 'The Earth revolves around the Sun' is a stellar example of how scientific truths, derived from inductive reasoning, remain inherently open to revision. For many centuries, it was widely accepted based on observational evidence that all celestial bodies revolved around the Earth. Ptolemy's geocentric model, despite being fundamentally incorrect, was nevertheless able to predict the positions of planets and stars. It wasn't until the advent of the telescope, which allowed for more detailed observations like the phases of Venus and the moons of Jupiter, that the heliocentric model gained traction, placing the Sun at the center of the known universe. Despite the robustness of current scientific models, we must acknowledge that they are not immune to revision. New data could one day lead to a paradigm shift, altering our current understanding of the universe.

 

Newton's First Law, which posits that 'a body in motion will remain in motion at a constant velocity unless acted upon by an external force,' represents a significant departure from earlier beliefs about motion. Its greatest validation comes from the predictive success of Newtonian mechanics and its broad explanatory power rather than from direct empirical tests: a body entirely free of external forces is never actually observed. The French mathematician Henri Poincaré later argued that many principles scientists take for granted are not necessarily natural laws but are instead conventions, chosen for their utility in constructing a consistent and predictive theoretical framework. Such frameworks are immensely valuable in science, though they may not be the final word on reality.

 

As we delve deeper, we will scrutinize the essence of probability. Is it a measurable quantity? Does it represent an objective reality, or is it a reflection of our subjective knowledge? Our discussion will navigate through various models of probability and the contexts in which they are best applied, illuminating the importance of probabilistic reasoning in a world where certainties are scarce.

 

Classical Probability

 

In 1654, Blaise Pascal and Pierre de Fermat significantly advanced the nascent study of probability through their correspondence on gambling problems. Their key insight was the quantification of uncertainty: the idea that the likelihood of an event could be represented as a number. Their reasoning relied on enumerating what we now call a sample space, a comprehensive list of all potential outcomes. When these outcomes are equally likely and mutually exclusive, classical probability calculates the likelihood of an event by dividing the number of favorable outcomes by the total number of possible outcomes. For example, the roll of a fair die has six equally probable outcomes. Therefore, the probability of rolling a specific number is 1/6, and the probability of rolling an odd number is 3/6, or 1/2.
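As a concrete illustration, here is a minimal Python sketch of this counting rule applied to the die example (the classical_probability helper is our own illustrative construction, not a standard library function):

    from fractions import Fraction

    # All outcomes of a fair six-sided die: the sample space.
    sample_space = [1, 2, 3, 4, 5, 6]

    def classical_probability(event):
        # Classical rule: favorable outcomes divided by total outcomes,
        # valid only when all outcomes are equally likely.
        favorable = [outcome for outcome in sample_space if event(outcome)]
        return Fraction(len(favorable), len(sample_space))

    print(classical_probability(lambda x: x == 3))      # 1/6
    print(classical_probability(lambda x: x % 2 == 1))  # 1/2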

 

However, applying classical probability to real-world situations is not always straightforward. The way a sample space is divided can be subjective, and the assumption that all outcomes are equally likely may not hold. While classical probability is an excellent tool for calculating the probabilities of symmetric outcomes in controlled environments like card games or lotteries, its application to complex, dynamic situations can be problematic.

 

Probability as Long-Term Frequencies

 

The limitations inherent in the classical model of probability led to the development of the frequentist approach. This approach defines the probability of an event as the relative frequency of the event's occurrence when the same process is repeated many times. Under this model, probabilities are meaningful only for repeatable events. The probability of a coin landing heads up is not defined for a single trial; given many tosses, however, we can say that roughly half of the time the coin will land heads up.
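A short simulation illustrates the idea. Here the fair coin is modeled, purely as an assumption, by a pseudo-random draw, and the relative frequency of heads settles near 0.5 as the tosses accumulate:

    import random

    random.seed(42)  # fixed seed so the run is reproducible

    # Relative frequency of heads over ever-longer runs of tosses.
    for n in [10, 100, 10_000, 1_000_000]:
        heads = sum(random.random() < 0.5 for _ in range(n))
        print(f"{n:>9} tosses: frequency of heads = {heads / n:.4f}")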

 

In certain contexts, the frequentist principle allows us to estimate the probability of a singular event based on its frequency over numerous trials. For example, if a drug causes serious side effects in 0.001% of cases, we can infer that any given patient has a 0.001% risk of experiencing these side effects, but only when no other individual information is available. For the individual, however, the actual outcome is binary: either they will or will not experience serious side effects. As Nassim Taleb wittily illustrates with the average of one breast and one testicle per human, statistical averages may not reflect individual realities. While statistical probabilities provide valuable risk insights, they must be interpreted with care when applied to individual cases.


This probability model is particularly powerful when applied to random samples because such samples are representative of the larger population, ensuring that the frequencies observed truly reflect the underlying probabilities. However, obtaining a truly random sample can be challenging in practice.

 

Probability as Physical Tendency

 

The probability of individual events may be inferred from physical properties. Take a fair coin: it lands with heads facing up approximately 50% of the time. This consistent outcome arises from the coin's symmetry and balance. Probability can therefore also be conceptualized as the inherent propensity of a system to yield a particular result under consistent conditions. The link between a system's physical attributes and its probabilistic outcomes is pronounced in some fields, such as biology, while in other domains no such clear relationship is apparent.

 

Probability as Degree of Belief

 

In certain situations, events are unique and cannot be replicated, nor do they have discernible physical attributes that dictate their outcomes. For these one-time occurrences, probability serves as a measure of belief regarding the event's occurrence. A probability of 1 signifies a firm belief in the event's inevitability, whereas a probability of 0 indicates certainty that the event will not happen. Values between 0 and 1 represent varying levels of uncertainty, with uncertainty peaking at a probability of 0.5. These subjective probabilities are based on personal knowledge and experience, and they do not depend on the event being repeatable. This interpretation of probability is particularly relevant for singular events, such as predicting the result of a presidential election.


Logical Probability

 

The question of whether an asteroid impact led to the extinction of the dinosaurs, a unique historical event, cannot be answered by repeated empirical testing. However, in science, subjective belief alone does not suffice for explanation. John Maynard Keynes suggested that probability is a logical relationship between evidence and hypothesis. Although this relationship might not be expressible in numerical terms, it is nonetheless concrete and objective. Logical probability differs from the subjective interpretation as it implies a consensus among rational observers based on the evidence at hand.

 

For instance, the discovery of a global layer of iridium-rich sediment dating back approximately 65 million years supports the asteroid impact hypothesis for the dinosaurs' extinction. Similarly, if a suspect is linked to a crime by fingerprints on the weapon, a motive, and an eyewitness account, it is logical to infer guilt.

 

However, logical probabilities cannot be measured precisely or compared directly. For example, evaluating whether left-handed individuals are more creative than right-handed ones depends on the evidence. Which is more compelling: data from 10,000 left-handed individuals from one country, or data from a smaller, but more diverse sample of 1,000 individuals from multiple countries? The superiority of one body of evidence over the other is not inherently clear.

 

A criticism of Keynes' concept of logical probability pertains to its claim of objectivity. This perspective is frequently employed to interpret financial market fluctuations, yet financial commentators often invoke the same evidence to justify market movements in opposite directions. This challenges the notion that there is a universally objective relationship between evidence and conclusions.

 

Bayesian Probability


The weather forecast may predict only a 10% chance of rain, but as you notice the sky darkening with clouds, you intuitively adjust this probability upwards and decide to take an umbrella. This practical application of probability revision is in line with the Bayesian model. Developed from the ideas of Thomas Bayes, an 18th-century theologian and mathematician, this model allows for the adjustment of an initial probability estimate—the prior—based on new evidence, using a process known as Bayes' rule.

 

Bayesian probability reveals insights that can sometimes be counterintuitive. Take the example of receiving a positive result from a cancer screening test. If the incidence of this cancer in the general population is 1% (the prior), and the test has a 10% error rate (for both false positives and false negatives), the actual probability of having cancer given a positive result is not as high as one might think. In fact, it's only about 8%.

 

Here's how it works: in a random group of 1,000 people, 1%, or 10 people, would have cancer, while the remaining 990 are cancer-free. With a test error rate of 10%, 99 of those who are cancer-free will incorrectly receive positive results. Meanwhile, one of the 10 individuals with cancer will incorrectly receive a negative result. So, with 108 positive results but only 9 true cases of cancer, the probability of having cancer given a positive result is 9 out of 108, or approximately 8%. This example illustrates how even a relatively small error rate can lead to a significant number of false positives when the condition is rare in the population.
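The same arithmetic can be expressed through Bayes' rule, P(cancer | positive) = P(positive | cancer) × P(cancer) / P(positive). A minimal sketch, assuming the 10% error rate applies to both false positives and false negatives:

    # Prior and error rate from the example above.
    prior = 0.01        # 1% of the population has this cancer
    error_rate = 0.10   # assumed for both false positives and false negatives

    p_pos_given_cancer = 1 - error_rate   # true positive rate: 0.9
    p_pos_given_healthy = error_rate      # false positive rate: 0.1

    # Total probability of a positive result across both groups.
    p_positive = p_pos_given_cancer * prior + p_pos_given_healthy * (1 - prior)

    posterior = p_pos_given_cancer * prior / p_positive
    print(f"P(cancer | positive) = {posterior:.3f}")  # about 0.083, i.e., ~8%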

 

One criticism of Bayesian probability is that it seems to contradict the falsificationist view of the scientific method, associated with Karl Popper, which holds that a theory remains viable until a negative experimental result falsifies it. Since a theory can never be conclusively proven—only disproven—repeated experimental confirmations should not, on this view, significantly strengthen our belief in it. The Bayesian approach, however, suggests that each new piece of evidence should proportionally increase our confidence in a hypothesis. This has raised questions about its scientific validity, as it appears to challenge the falsifiability criterion for scientific hypotheses.

 

Algorithmic Probability

 

The Bayesian model relies on having a prior probability to work with, which can pose challenges when such prior knowledge isn't available. To address this, Ray Solomonoff formulated the concept of algorithmic probability, which provides a way to establish a prior for use in Bayes' rule. This concept aligns closely with Occam’s razor, the philosophical principle that, given two competing hypotheses, the one with the fewest assumptions is generally preferred.

 

Algorithmic probability assigns likelihood to hypotheses by considering the conciseness of their descriptions. In this framework, an observation is treated as a statement, and the complexity of the statement is gauged by the length of its shortest possible description. In essence, a statement that can be explained with fewer words is deemed more likely than one that necessitates a more elaborate explanation. Within algorithmic information theory, these descriptions are akin to theoretical computer programs capable of generating the statements in question.
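True algorithmic probability cannot be computed, but a toy sketch can convey the flavor. Below, purely as an illustration and not Solomonoff's actual construction, the compressed size of a string stands in for the length of the shortest program that generates it, and each hypothesis receives a prior weight of 2 raised to minus its description length:

    import random
    import zlib

    def description_length_bits(s: str) -> int:
        # Compressed size in bits: a crude stand-in for the shortest
        # description of s (true Kolmogorov complexity is non-computable).
        return 8 * len(zlib.compress(s.encode()))

    random.seed(0)
    regular = "01" * 500                                           # simple pattern
    irregular = "".join(random.choice("01") for _ in range(1000))  # no pattern

    # The repetitive string compresses far more, so it gets a larger prior.
    for name, s in [("regular", regular), ("irregular", irregular)]:
        bits = description_length_bits(s)
        print(f"{name}: {bits} bits, prior weight ~ 2^-{bits}")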

 

Despite its conceptual elegance, a significant challenge arises when trying to put algorithmic probability into practice: it has been shown to be non-computable. This means that there is no general method to calculate these probabilities algorithmically, which complicates their use in practical applications.

 

Inductive Probability


Inductive reasoning is employed to predict the likelihood of future occurrences by examining past events. This form of reasoning underpins scientific inquiry as well as our everyday decision-making. Scientists postulate a theory and then gather evidence to corroborate or disprove it, and all of us rely largely on past experience for our decisions and actions. Nonetheless, inductive reasoning can lead to counterintuitive outcomes, as demonstrated by the Raven Paradox.

 

Consider the proposition that all ravens are black. Conceptually, the universe can be divided into two groups: one comprising all black objects and another containing everything else. The hypothesis 'all ravens are black' is logically equivalent to its contrapositive, 'all non-black objects are non-ravens.' Evidence supporting the hypothesis could therefore come from observing a black raven or any non-black object that isn't a raven. Thus, encountering a red apple can, paradoxically, be considered evidence that all ravens are black. Given the scarcity of ravens compared to the plethora of other objects, one might amass extensive evidence supporting the hypothesis without ever seeing a raven. While it might seem irrational to use unrelated objects to affirm a theory about ravens, we subconsciously depend on our accumulated experience with non-black non-ravens to judge the veracity of the hypothesis.
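The logical equivalence at the heart of the paradox is easy to verify mechanically. The sketch below, using a deliberately tiny made-up universe, checks that the two formulations agree in every possible world:

    from itertools import product

    kinds = ["raven", "apple", "crow"]  # a toy universe of three objects

    # Enumerate every world: each object is either black or not.
    for blackness in product([True, False], repeat=len(kinds)):
        world = list(zip(kinds, blackness))
        all_ravens_black = all(black for kind, black in world
                               if kind == "raven")
        contrapositive = all(kind != "raven" for kind, black in world
                             if not black)
        assert all_ravens_black == contrapositive
    print("'All ravens are black' matches its contrapositive in all 8 worlds.")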

 

Inductive reasoning, which draws generalizations from specific observations, is met with skepticism in financial professions due to the high unpredictability of markets and the potential for exceptional events that can defy trends. Financial professionals often caution against overreliance on past data to predict future market behaviors because this can lead to significant misjudgments. On the other hand, scientists rely on inductive reasoning to build theories and models, testing hypotheses through accumulated evidence. In the scientific domain, this method is fundamental for advancing knowledge, despite the understanding that no amount of empirical data can ever fully prove a theory.

 

 

Quantum Mechanical Probability


Is probability merely an abstract mathematical concept, or does it have a real presence in the physical world? This question becomes particularly intriguing when we make statements like "there's a 60% chance of rain today." Are we merely expressing uncertainty due to incomplete knowledge, or are we describing an intrinsic aspect of the event itself?

 

Quantum mechanics, a fundamental theory in physics, suggests that randomness is an integral part of the natural world. In this view, probabilities are not just about our limited knowledge but are properties of the physical systems themselves. Take the case of radioactive decay: the moment at which a particular uranium atom will decay is a stochastic event. While we can estimate the average rate of decay for a large group of atoms over a period, pinpointing the exact moment of decay for an individual nucleus is beyond our predictive capabilities. This is not due to some concealed information but because the decay process is inherently probabilistic.
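A small simulation captures this split between the unpredictable individual and the predictable aggregate. It assumes, hypothetically, a half-life of one time unit; decay times then follow an exponential distribution:

    import math
    import random

    random.seed(1)
    half_life = 1.0                  # hypothetical half-life, arbitrary units
    rate = math.log(2) / half_life   # the decay constant, lambda

    # Each atom's decay time is an independent exponential draw.
    decay_times = [random.expovariate(rate) for _ in range(100_000)]

    print(f"one atom's decay time: {decay_times[0]:.3f} (unpredictable)")
    print(f"average decay time:    {sum(decay_times) / len(decay_times):.3f} "
          f"(close to 1/lambda = {1 / rate:.3f})")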

 

This interpretation implies that individual events at the quantum level are fundamentally random, and thus the probability of a single, specific event occurring cannot be precisely determined. Although quantum mechanics provides a robust probabilistic framework for the behavior of microscopic particles, its direct application to the macroscopic world—where human-scale events occur—is not straightforward. The laws governing everyday objects are still rooted in classical physics, where determinism largely holds sway.

 

Possible Future Interpretations and Applications of Probability


Probability in Storytelling

 

Examining how we understand and apply the concept of probability in storytelling may offer a useful perspective on human behavior.

 

In literature, authors manipulate probability to create tension and surprise. The 'unlikely' hero, the 'unexpected' twist, the 'coincidental' meeting—these are all staples of narrative construction that defy pure mathematical probability.

 

Similarly, in life, we often see ourselves as protagonists in our personal narratives, expecting plot developments and resolutions that defy statistical odds. It may be that our brains are constantly trying to fit events into a coherent story, shaping our perception of what is likely or unlikely. For instance, an investor might buy a stock based on the compelling story of the company rather than a sober analysis of its financial metrics. Narrative bias, in other words, carries considerable weight in our decision-making processes.

 

Our understanding of historical events may also be influenced by our innate preference for compelling narratives. Historians often provide a storyline to the past, emphasizing causality, character, and plot development. Just as literature cherishes the 'unlikely' hero, historical narratives often focus on individuals who rise against the odds. The probability of a single individual's actions altering the course of history may seem low, yet our historical accounts frequently revolve around such figures. By acknowledging the narrative structure we impose on history, we can strive for a more balanced and probabilistically informed study of the past.

 

Probability in Interconnected Systems

 

Another novel approach, the ecological model of probability, could be useful for understanding the likelihood of events within complex, interdependent, and adaptive networks. This model considers events not as isolated occurrences but as nodes within a network. It recognizes that feedback loops affect the probability of future events and that minor perturbations can lead to disproportionately large effects.

 

This model would consider probabilities in a more holistic and interconnected manner, reflecting the true complexity of the world around us. Quantifying ecological probability would require an interdisciplinary approach, combining statistical models with insights from systems ecology, network theory, and computational methods. Applications of this approach might include epidemiology, economics, market behavior, and social dynamics.

 

 

Conclusion


In the vast sea of human knowledge, certainty is scarce; much of what we know is understood in terms of probabilities. Probability is not a single idea; it encompasses various models tailored to different situations. We can measure some probabilities, while others elude quantification. Gamblers often rely on classical probability for its clear-cut application to controlled gaming environments. Frequentist probability lends itself well to phenomena that are repeatable. Logical probability provides a framework for making sense of singular, non-repeatable events through a lens of rationality and objectivity.

 

Experts frequently employ subjective probabilities, or "educated guesses," to forecast the future. The Bayesian model adapts to evolving information, asserting that probabilities should be updated as new data emerges. Algorithmic probability, valuing simplicity, posits that the most straightforward explanation is usually the best. Inductive probability underpins scientific discovery, deriving broader insights from accumulated empirical evidence. Quantum probability delves into the inherent randomness of the microscopic realm, explaining natural behaviors rather than our perceptions of them. Other novel ways of interpreting, quantifying and applying probability are likely to be developed in the future.

 

Various probability models are helpful thinking tools, each offering a unique lens through which to view the uncertainties of the world. Their limitations, however, teach us to embrace the inherent unpredictability of life, and to remember that at the intersection of chance and choice lies the freedom to shape our destinies, one decision at a time.

 

 

 

 

Bibliography

 

Poincaré, H. (2017). Science and Hypothesis. London: Routledge.

 

Keynes, J. M. (2018). A Treatise on Probability. London: Macmillan Collector's Library.

 

Brown, J. R. (1999). Philosophy of Mathematics: An Introduction to the World of Proofs and Pictures. London: Routledge.

 

Hacking, I. (2001). An Introduction to Probability and Inductive Logic. Cambridge: Cambridge University Press.
