Mar 08, 2026
4 min read

Game Theory Doesn't Work in Litigation (Until You Fix It)

Classical game theory assumes rational actors. Litigation has none. I built a framework that bridges the gap — and it changes how you predict what people will actually do.

The Problem I Kept Running Into

I ended up in a high-stakes business dispute with my co-founder. That’s a story for another day. But in the process of researching evidence workflows and litigation strategy, I kept running into the same wall.

Game theory told me what people should do. It didn’t tell me what they would do.

The math was clean. A third-party advisor had a clear rational incentive to cooperate — the payoff was twice that of defecting. Nash equilibrium, dominant strategy, textbook case. And yet, in the real world, they didn’t cooperate. They defaulted to inaction even when inaction cost them money.

Classical game theory had no explanation for this. But behavioral economics did.

The Gap Nobody Bridges

Here’s what I realized: game theory and behavioral economics have operated in parallel for decades. One gives you structure — payoff matrices, equilibria, BATNA computation. The other gives you realism — loss aversion, status quo bias, hyperbolic discounting. But nobody has built the bridge between them.

Game theorists know about biases but don’t formalize them. Behavioral economists know about strategic interaction but don’t model it. The result is a gap — and that gap is exactly where the most actionable intelligence lives in litigation.

Behavioral Modifiers: The Bridge

So I built it. The Behavioral Modifiers Framework (BMF) takes 12 documented cognitive biases and turns them into quantified adjustments to classical payoff calculations. Not hand-wavy “people are irrational” disclaimers — actual multipliers and additions you can compute against.

The formula is straightforward:

Adjusted Payoff = Classical Composite × Π(bias multipliers) + Σ(bias additions)
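In code, the adjustment is a one-liner. A minimal sketch (the function and parameter names are mine, not the framework's):

```python
from math import prod

def adjusted_payoff(classical, multipliers=(), additions=()):
    """Adjusted Payoff = Classical Composite x prod(bias multipliers)
    + sum(bias additions). Empty multipliers default to a product of 1,
    so a bias can enter as a multiplier, a flat addition, or both."""
    return classical * prod(multipliers) + sum(additions)
```

For example, `adjusted_payoff(70, additions=[-15, -10])` returns 45 — a classical payoff of 70 with two flat penalties applied.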

Take that advisor who wouldn’t cooperate. Classical analysis said cooperate was worth 70, defect was worth 35. Obvious choice. But when you apply the behavioral modifiers:

  • Status quo bias knocked 15 points off cooperation (change feels risky)
  • Hyperbolic discounting knocked another 10 off (future benefits feel distant)
  • Loss aversion added 5 to defection (current position feels safe)
  • Status quo bias added another 8 to defection (inaction is comfortable)

Adjusted payoffs: Cooperate 45, Defect 48. The dominant strategy flipped. A 35-point gap collapsed to 3 points of near-indifference — which, in practice, means the player defaults to inaction every time.
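The arithmetic above can be sanity-checked directly. In this particular example all the biases enter as flat additions and subtractions (the multipliers are all 1), so the computation reduces to sums; the labels and structure here are illustrative:

```python
classical = {"cooperate": 70, "defect": 35}

# Bias adjustments from the bullet list above (flat additions/subtractions).
adjustments = {
    "cooperate": [-15, -10],  # status quo bias, hyperbolic discounting
    "defect": [+5, +8],       # loss aversion, status quo bias
}

adjusted = {s: p + sum(adjustments[s]) for s, p in classical.items()}
print(adjusted)                         # {'cooperate': 45, 'defect': 48}
print(max(adjusted, key=adjusted.get))  # defect — the dominant strategy flipped
```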

The framework didn’t just explain the behavior after the fact. It would have predicted it in advance.

Why This Matters Beyond My Case

I originally developed this framework while studying strategic interactions in litigation. But it’s general. Any strategic interaction where you need to predict what a real human will do — not what a rational actor should do — benefits from behavioral adjustment.

Settlement negotiations. Vendor disputes. Partnership dissolutions. Custody battles. M&A negotiations. Any time the other side’s behavior seems “irrational,” it’s probably not. It’s probably predictable — you’re just using the wrong model.

The Visual Explainer

I put together an interactive visual explainer that walks through the entire framework — the three-layer architecture, all 12 biases with their adjustment factors, the worked example with animated payoff charts, and the strategic levers you can pull to shift equilibrium back in your favor.

It’s published on Acquit.ai, the litigation intelligence platform I’m building.

Read the full visual explainer →


This framework is part of the research behind Acquit.ai — AI-powered eDiscovery and litigation intelligence. I’m speaking about this at the AI & Law Summit on March 27 at the ACC Center for Government and Civic Service in Austin.
