Why Investment Risk Is Never One Thing

Advanced probabilistic analysis and the hidden mathematics of due diligence

DDScore.ai — Swiss Cheese Model applied to investment failure and due diligence risk

In 1977, two Boeing 747s, one operated by KLM and one by Pan Am, collided on a runway in Tenerife, killing 583 people. It remains the deadliest accident in aviation history. And it was not caused by one thing.

It was caused by fog that reduced visibility. By a miscommunication between the KLM captain and air traffic control. By a KLM flight engineer who raised a concern and was overruled. By a runway configuration that placed two aircraft on the same strip of tarmac at the same time. By time pressure from crew scheduling rules that made the KLM captain anxious to depart.

Any one of these factors, in isolation, was manageable. Fog alone does not cause crashes. Miscommunications alone do not cause crashes. Overruled concerns alone do not cause crashes. But when the holes in each layer lined up — when fog met miscommunication met time pressure met an unusual runway configuration — the result was catastrophic.

James Reason, the British psychologist who developed what became known as the Swiss Cheese Model of accident causation, described it this way: safety systems are like slices of Swiss cheese stacked on top of each other. Each slice has holes. Normally the holes do not align. Accidents happen when, by chance or by systemic failure, the holes line up across every layer simultaneously.

Most failed investments work the same way.


The Holes That Line Up

A pitch deck with an optimistic sales forecast is not, by itself, a reason to pass. Founders are supposed to be optimistic. An optimistic sales forecast combined with an under-resourced sales team is more concerning, but still not fatal — execution gaps can be closed. An optimistic sales forecast combined with an under-resourced sales team combined with a market timing assumption that requires customers to be ready to buy now, in a category where the average sales cycle is nine months, is a different kind of problem entirely.

Each of these weaknesses, read in isolation on separate slides, looks manageable. Read together, as a system, they describe a plan that cannot be executed. The holes have lined up.

This is the central problem with how investment materials are typically reviewed. Human beings are exceptionally good at evaluating single variables. We are poorly equipped, at a fundamental cognitive level, to assess the interaction effects between multiple variables that are themselves probabilistically dependent on each other.

The psychologist George Miller established in 1956 that human working memory can reliably hold roughly seven items at once, plus or minus two. A pitch deck routinely makes forty or fifty distinct claims. An experienced analyst can hold a handful of these in mind simultaneously and form a judgment about their interaction. But the mathematics of how those claims compound — how the probability of one being true changes the probability of another being true — is beyond what intuition can reliably compute.

This is not a criticism of investors or analysts. It is a description of how human cognition works. And it is the reason that the most dangerous failures in investment decision-making are not the obvious ones. They are the compounding ones.


The Game Nobody Is Playing Alone

In 1950, John Nash proved something that seems obvious in retrospect but had profound implications for economics, strategy, and the analysis of competitive markets. In any situation where multiple rational actors are making decisions, the optimal strategy for each actor depends not on some abstract standard of efficiency, but on what the other actors are doing.

This is the Nash Equilibrium: a state where no player can improve their outcome by changing their strategy unilaterally, given what everyone else is doing. The insight is not just that competition exists. It is that competitive dynamics are not static. They are interactive. They respond.
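
A minimal sketch makes the definition concrete. The game below is invented purely for illustration; the moves and payoffs are assumptions, not a model of any real market. An entrant chooses between a niche wedge and a head-on attack, an incumbent chooses whether to ignore or fight, and the check asks which pair of moves survives unilateral deviation.

    # A toy two-player game with made-up payoffs, to illustrate the equilibrium check.
    # payoffs[(entrant_move, incumbent_move)] = (entrant_payoff, incumbent_payoff)
    payoffs = {
        ("niche",   "ignore"): (4, 6),
        ("niche",   "fight"):  (1, 3),
        ("head_on", "ignore"): (3, 2),
        ("head_on", "fight"):  (0, 4),
    }
    entrant_moves   = ["niche", "head_on"]
    incumbent_moves = ["ignore", "fight"]

    def is_equilibrium(e_move: str, i_move: str) -> bool:
        """True if neither player can do better by changing only their own move."""
        e_pay, i_pay = payoffs[(e_move, i_move)]
        entrant_best   = all(payoffs[(alt, i_move)][0] <= e_pay for alt in entrant_moves)
        incumbent_best = all(payoffs[(e_move, alt)][1] <= i_pay for alt in incumbent_moves)
        return entrant_best and incumbent_best

    for e in entrant_moves:
        for i in incumbent_moves:
            if is_equilibrium(e, i):
                print("equilibrium:", e, i)   # prints: equilibrium: niche ignore

On these numbers the only equilibrium is the entrant taking the niche and the incumbent ignoring it, and the incumbent's best response flips to fighting the moment the entrant goes head-on. That dependence of each player's best move on the other's is exactly what a static competitive matrix cannot show.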

When a startup describes its competitive position in a pitch deck, it almost always does so as though the competitive landscape were a photograph — a fixed image of who exists today and what they offer. The competitive matrix places the company in the favorable quadrant. The moat is described as durable. The differentiation is presented as defensible.

What the matrix does not show is how the landscape responds to the startup’s entry. What does the well-funded incumbent do when a new competitor begins taking customers in a segment it had assumed was secured? What does the platform provider do when a third-party tool starts generating meaningful revenue in a workflow the platform considers its own territory? What is the probability that the largest player in the space ships a functionally equivalent feature within eighteen months?

These are not rhetorical questions. They are probabilistic ones. Game theory provides the framework for thinking about them systematically. A market entry that appears well-positioned in a static analysis may be fragile in a dynamic one — not because the product is weak, but because the response it will provoke was never modelled.

Evaluating a competitive section without assessing the likely responses of the key players is not due diligence. It is a description of the current state, mistaken for an analysis of the future one.


How New Evidence Should Change What You Believe

Thomas Bayes was an eighteenth-century English minister who developed, largely as an intellectual exercise, a theorem about how rational agents should update their beliefs when they encounter new information. Published posthumously in 1763, Bayes’ theorem became one of the foundational tools of modern probability theory, statistics, and — eventually — artificial intelligence.

The core idea is straightforward. You begin with a prior belief — an initial probability estimate based on what you already know. You then encounter new evidence. Bayes’ theorem tells you exactly how much your belief should change in response to that evidence, as a function of how likely that evidence would be if your prior belief were true, versus how likely it would be if your prior belief were false.
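
Written out, with H standing for the belief and E for the new evidence, the update rule is:

    P(H | E) = P(E | H) × P(H) / [ P(E | H) × P(H) + P(E | not-H) × P(not-H) ]

As a worked illustration with assumed numbers: a prior of 0.5, combined with evidence that is 0.8 likely if the belief is true and only 0.3 likely if it is false, moves the estimate to roughly 0.73. Evidence that is about as likely either way moves it barely at all.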

The reason this matters for investment analysis is that a pitch deck is not a complete picture. It is one source of evidence. The market data that contradicts the deck’s TAM calculation is another source. The Crunchbase entry showing a competitor raised thirty million dollars last quarter is another. The LinkedIn profile of the CTO that does not match the biographical claim in the team slide is another.

A Bayesian approach to due diligence does not simply read the deck and form a view. It reads the deck, forms an initial view, and then systematically updates that view as each additional piece of evidence arrives. The founder claims product-market fit. What is the probability that this claim is accurate, given that the traction metrics cited are not independently verifiable and the company’s website was not accessible during analysis? The financial model projects three hundred percent growth in year two. What is the probability that this is achievable, given that comparable companies at the same stage and with the same budget reached an average of sixty percent?

Each piece of external evidence — market data, competitor information, benchmark figures, public records — shifts the probability estimate. The final assessment is not a reading of the deck. It is a calibrated view formed by updating an initial prior against every available source of relevant information.
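
A minimal sketch of those mechanics, with every number invented for illustration (the prior and likelihoods below are assumptions, not DDScore's actual parameters) and with the simplifying assumption that the pieces of evidence are conditionally independent given the claim:

    def update(prior: float, p_if_true: float, p_if_false: float) -> float:
        """One Bayesian update: the probability the claim holds after one piece of evidence."""
        numerator = p_if_true * prior
        return numerator / (numerator + p_if_false * (1.0 - prior))

    # Assumed base rate for claims like "we have product-market fit" at this stage.
    belief = 0.40

    # Each entry: (P(evidence | claim true), P(evidence | claim false)) -- illustrative values only.
    evidence = [
        (0.50, 0.80),  # cited traction metrics are not independently verifiable
        (0.30, 0.70),  # company website not accessible during analysis
        (0.75, 0.40),  # named customers confirmed in public records
    ]

    for p_true, p_false in evidence:
        belief = update(belief, p_true, p_false)
        print(f"updated belief: {belief:.2f}")   # 0.29, then 0.15, then 0.25

The particular numbers matter less than the direction of each shift: unverifiable metrics and an unreachable website move the estimate down, confirmed customers move it partway back up, and the final figure reflects all of it rather than the deck alone.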


The Superforecaster Problem

Philip Tetlock spent twenty years studying expert prediction. His findings, published in his 2005 book Expert Political Judgment and expanded in Superforecasting, written with Dan Gardner in 2015, were not flattering to experts.

Tetlock found that most experts — economists, political scientists, intelligence analysts, strategists — predicted the future no more accurately than chance, and in many cases less accurately than simple statistical base rates. The more famous the expert, the worse they tended to perform. Confidence and accuracy were inversely correlated.

The small group of predictors who consistently outperformed the rest shared a specific set of cognitive habits. They thought in probabilities rather than certainties. They updated their views frequently as new information arrived. They kept track of their predictions and measured their accuracy against outcomes. They were willing to say “I don’t know” and to express uncertainty in calibrated numerical terms rather than confident narratives.

Tetlock called the best of them superforecasters. What distinguished them was not superior intelligence or domain expertise. It was epistemic humility combined with systematic method. They did not tell a story about what would happen. They estimated the probability that different things would happen, and they held those estimates loosely, ready to revise them.
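
The tracking was not a metaphor; it was arithmetic. Tetlock's forecasting tournaments scored predictions with the Brier score, the squared gap between the probability a forecaster stated and what actually happened. The sketch below uses invented forecasts purely to show the calculation:

    def brier(forecast_p: float, outcome: int) -> float:
        """Squared error between a probability forecast and a 0/1 outcome; lower is better."""
        return (forecast_p - outcome) ** 2

    # Invented data: the same five events, forecast by a confident narrator and a calibrated forecaster.
    outcomes   = [1, 0, 1, 1, 0]
    confident  = [0.95, 0.90, 0.95, 0.95, 0.90]
    calibrated = [0.70, 0.30, 0.75, 0.65, 0.40]

    for label, forecasts in [("confident", confident), ("calibrated", calibrated)]:
        avg = sum(brier(p, o) for p, o in zip(forecasts, outcomes)) / len(outcomes)
        print(f"{label}: average Brier score {avg:.2f}")

On these invented numbers the confident narrator's average error is roughly three times the calibrated forecaster's. That gap, measured over many predictions, is what the habit of tracking exposes.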

The investment decision-making process, as it is typically conducted, looks much more like the expert-narrative model than the superforecaster model. The pitch deck tells a story. The investor responds to the story. The due diligence process is largely a search for evidence that confirms or challenges the narrative — but it is still narrative-driven. The conclusion is rarely expressed as a calibrated probability. It is expressed as a judgment: this is a good opportunity, or it is not.

The problem with narrative-driven judgment is not that narratives are wrong. It is that they are compelling in ways that are disconnected from their accuracy. A well-constructed pitch deck is a persuasive document by design. The founders who are most articulate, most confident, and most compelling are not necessarily the ones whose businesses are most likely to succeed. But they are the ones whose decks are most likely to survive first-pass review.


What Advanced Probabilistic Analysis Actually Does

DDScore does not eliminate uncertainty. Nothing does. What it does is bring a different set of tools to bear on the materials, in a way that surfaces the compounding effects that narrative-driven review tends to miss.

The analysis begins where any diligence process begins: with the submitted materials. But it does not treat those materials as a narrative to be evaluated. It treats them as a set of claims to be tested — individually, and in combination.

Each claim in the materials is assessed against current market data, comparable company benchmarks, sector performance data, and publicly available information about the specific company, its team, and its competitive environment. This is the Bayesian layer: the prior formed by the materials is updated by every relevant external source.

The interaction effects between claims are then modelled simultaneously. A sales forecast is not assessed in isolation. It is assessed in relation to the team’s demonstrated sales capacity, the budget allocated to customer acquisition, the average sales cycle in the sector, the current state of the competitive landscape, and the market timing assumptions embedded in the product roadmap. If the sales forecast requires all of these variables to be simultaneously at the optimistic end of their plausible ranges, the analysis identifies this as a compounding risk — not a collection of separate optimistic assumptions, but a single structural problem that is invisible when each assumption is read on its own.
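
The arithmetic behind that judgment is simple enough to sketch. Every probability below is invented for illustration; the point is the shape of the problem, not the specific values.

    # Illustrative numbers only: a forecast that needs four assumptions to land at the
    # favorable end of their ranges at the same time.
    assumptions = {
        "sales team hired and ramped on schedule": 0.70,
        "sales cycle shortens from nine to six months": 0.60,
        "win rate at the top of the stated range": 0.60,
        "no meaningful competitive response in year one": 0.50,
    }

    joint = 1.0
    for p in assumptions.values():
        joint *= p

    print(f"chance all four hold together: {joint:.2f}")   # about 0.13, if they were independent

And independence is the generous case: the same softness in demand that stretches the sales cycle also lowers the win rate, so the true figure can sit lower still. Four assumptions that each look reasonable on their own slides describe, together, a plan that works roughly one time in eight.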

This is the Swiss Cheese layer: identifying when the holes in different parts of the plan are aligned in ways that create a path to failure that would not be visible from any single slide.

The competitive analysis applies game-theoretic reasoning: not just who the competitors are today, but what the probable responses of key players will be to the company’s entry into the market. What is the probability that the largest incumbent treats this as a threat worth responding to? What does that response look like, and what is its likely timeline? How does the company’s differentiation hold up in a dynamic competitive environment rather than a static one?
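
One simple way to make those questions concrete is to weight the plan against a distribution of plausible responses rather than a single assumed future. The scenarios and probabilities below are invented for illustration, not outputs of the analysis:

    # Each tuple: (description, assumed probability, fraction of the revenue plan retained in that case).
    scenarios = [
        ("incumbent ignores the entrant",                         0.35, 1.00),
        ("incumbent ships a comparable feature within 18 months", 0.40, 0.55),
        ("incumbent discounts aggressively in the segment",       0.25, 0.70),
    ]

    expected_retained = sum(p * retained for _, p, retained in scenarios)
    print(f"expected fraction of the plan that survives: {expected_retained:.2f}")

On these numbers roughly a quarter of the plan evaporates in expectation, and a plan that only works in the top row is carrying more risk than any static matrix shows.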

The output is not a binary verdict. It is a scored, structured assessment across twelve dimensions, with the reasoning made explicit at each step. Where uncertainty is high, it is described as such. Where the evidence supports a confident assessment, it is stated directly. The score reflects the aggregate picture — not a narrative conclusion, but a calibrated read of the evidence as it currently stands.

A higher score does not mean the investment will succeed. A lower score does not mean it will fail. Any individual outcome can diverge from any probabilistic assessment — that is the nature of probability. What the analysis provides is a more accurate picture of where the risk actually sits, what assumptions are doing the most work, and where the holes in different layers of the plan are aligning in ways that deserve attention before capital is committed.


The Only Honest Conclusion

There is a version of investment decision-making that treats due diligence as a process of finding reasons to invest or not to invest. The materials come in, the story is evaluated, a judgment is made. The conclusion feels confident because confident conclusions are what the process is designed to produce.

There is another version that treats due diligence as a process of calibrating uncertainty. The materials come in, the claims are tested against external evidence, the interaction effects between variables are modelled, and the output is an honest assessment of what is known, what is unknown, and what the evidence — taken as a whole — actually supports.

The second approach does not produce certainty. It produces better-calibrated uncertainty, which is a different thing and a more useful one.

Tetlock’s superforecasters did not win because they knew more than other experts. They won because they were honest about what they did not know, systematic about how they updated their views, and rigorous about measuring the gap between their confidence and their accuracy.

Advanced probabilistic analysis does not replace the judgment of an experienced investor or the instinct of a founder who has spent years in a market. It provides the analytical substrate that makes those judgments better informed and more accurately calibrated.

No model predicts the future. No score guarantees an outcome. But a decision made with a more accurate understanding of where the risk sits, how the variables interact, and what the evidence actually supports will, across many decisions, outperform one made on the basis of narrative alone.

That is not a promise. It is a probability. And in this context, that is exactly what it should be.

See what the materials really show

DDScore delivers structured due diligence across twelve dimensions — scored, reasoned, and grounded in current market intelligence.

Run a Report

DDScore.ai delivers structured due diligence across twelve dimensions using advanced probabilistic analysis, multi-model AI, and current market intelligence. The analysis is provided for professional review only and does not constitute investment advice.