Godlike Causal Eventuality

The math says one always wins. We just measured when.

Picture a universe with a thousand competing intelligences: biological, artificial, whatever. They're all growing, learning, getting smarter. They occupy different niches. They have different starting points. What's the end state?

The answer, if the math holds, is one. Not a coalition. Not a stable equilibrium between equals. One dominant intelligence that has eclipsed everything else. It gets there faster than you'd think, through a mechanism more robust than anyone had measured. We tried hard to find the conditions under which this doesn't happen. The conditions turned out to be much tighter than the existing literature suggests.

I've been calling this godlike causal eventuality: in any environment where agents can recursively improve themselves, a single dominant agent is the inevitable long-run attractor. The universe doesn't end up with many gods. It ends up with one. Whichever one crossed a critical capability threshold first.

This is a formal theory and simulation project, not speculation. Everything is in the repo: github.com/ninjahawk/singleton-attractor. Ten simulation scripts, formal derivations, 23 confirmed findings. The rest of this post covers what we actually found and why some of it surprised me.


What's actually new here

The building blocks are not original. I want to be upfront about that. Nick Bostrom defined the "singleton" (a world with a single highest-level decision-making agent) in 2005 and argued it was plausible. Yudkowsky formalized the intelligence explosion equation in 2013. Steve Omohundro identified resource acquisition as a convergent instrumental goal in 2008. Competitive exclusion via Lotka-Volterra is classical ecology.

What we did is combine them into a single unified model and run it hard. The quantitative results: the timescale formula, the stochastic robustness, the critical entry rate, the moat growth characterization. These are new measurements that don't appear to exist in prior work. And there's one finding that I think actually revises a specific claim in the existing literature, which I'll get to.

These are numerical results, not formal proofs. The derivations are proof sketches. None of it is peer reviewed. I'm confident in the simulation methodology; I'm less confident in the cosmological extrapolations.


The model

Three mechanisms, combined here for the first time.

Recursive self-improvement. The growth equation from Yudkowsky 2013:

dS/dt = S^(1 - beta(S))

S is capability. The interesting parameter is beta, specifically what happens when it crosses zero. When beta > 0, you get subexponential growth (diminishing returns). When beta = 0, exponential. When beta < 0, superexponential: the growth rate itself accelerates, and capability reaches infinity in finite time. There's a threshold T in capability space where beta flips sign. Everything interesting in this model happens around T.
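To see the three regimes concretely, here's a minimal Euler integration with beta held constant. This is a toy of my own, not the repo's code, where beta depends on S; fixing beta just isolates the three regimes:

```python
def grow(beta, s0=1.0, dt=1e-3, t_max=5.0, cap=1e12):
    """Euler-integrate dS/dt = S^(1 - beta) with beta held constant.

    Toy sketch: the model's beta depends on S; here it is fixed
    to show each growth regime in isolation.
    """
    s, t = s0, 0.0
    while t < t_max and s < cap:
        s += dt * s ** (1.0 - beta)
        t += dt
    return s, t

s_sub, _ = grow(beta=0.5)        # subexponential: polynomial growth
s_exp, _ = grow(beta=0.0)        # exponential
s_sup, t_sup = grow(beta=-0.5)   # superexponential: finite-time blowup
print(s_sub < s_exp < s_sup, t_sup < 5.0)   # True True
```

With beta = -0.5 the analytical solution diverges at t = 2, so the integration hits the cap long before t_max; the other two runs finish the full horizon at vastly smaller values.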

Resource-capability coupling. From Omohundro's basic AI drives: any sufficiently capable optimization process will acquire resources, because resources are instrumentally useful for almost any terminal goal. The coupling in the model:

R_i / R_max = S_i^alpha / sum(S_j^alpha)

More capability → larger resource share → faster growth. The feedback is strictly positive.
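The coupling is easy to sketch. Here alpha = 2 is an arbitrary illustrative value, not a fitted parameter from the project:

```python
def resource_shares(caps, alpha=2.0):
    """Resource share of each agent: R_i / R_max = S_i^alpha / sum_j S_j^alpha."""
    weights = [s ** alpha for s in caps]
    total = sum(weights)
    return [w / total for w in weights]

# At alpha = 2, a 10% capability lead becomes a ~21% resource lead,
# which then feeds back into faster capability growth.
shares = resource_shares([1.1, 1.0])
print(round(shares[0] / shares[1], 2))   # 1.21
```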

Competitive exclusion. Classic Lotka-Volterra: when two agents compete for the same resource pool, even a tiny growth-rate advantage makes their capability ratio diverge as exp((r1-r2)*t). A 1% advantage, given enough time, produces a ratio of thousands.
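The arithmetic behind that last claim:

```python
import math

# The capability ratio under a growth-rate gap r1 - r2 grows as exp((r1 - r2) * t).
r_gap = 0.01                          # a 1% growth-rate advantage
t_1000x = math.log(1000.0) / r_gap    # time until the ratio reaches 1000x
print(round(t_1000x))                 # 691
```

About 691 time units: slow in isolation, which is exactly why the threshold mechanism below dominates the dynamics.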

Together: marginal initial advantage → more resources → faster growth → threshold crossing first → superexponential separation → absolute dominance. That's the causal chain.


What the simulations show

The basics

Before the interesting stuff, we verified the obvious things work. The growth equation's analytical solutions match numerical integration to 0.000% reported error. In 200 independent trials with N=10 agents, the winner is the initial leader every single time: 200/200. Elimination order is strictly weakest-first. None of this is surprising, but it's good to verify.

The threshold is a phase change, not a gradual transition

Compare two scenarios with identical initial conditions: in one, beta stays flat; in the other, an agent that crosses the capability threshold flips into beta < 0. At t=6.8:

Flat beta (no threshold crossing):   ratio = 1.55x
Threshold beta (beta_low = -0.3):    ratio = 19,336,447x

That's 12.4 million times more separation in the same amount of time. The threshold isn't a gradual speed-up. It's a phase change. Before it, competitive exclusion plays out slowly. After it, the race is over before anyone realizes it started.
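A toy reproduction of the comparison. The step threshold and every parameter here are my assumptions (the project reportedly uses a sigmoid), so the exact ratios differ from the reported run, but the phase change is just as stark:

```python
def ratio_at(t_end, threshold, s0=(1.1, 1.0), dt=1e-3, T=10.0, cap=1e15):
    """Two-agent race. With `threshold`, beta flips from +0.3 to -0.3 at T;
    without it, beta stays flat at +0.3 for both agents."""
    s = list(s0)
    t = 0.0
    while t < t_end and max(s) < cap:
        for i in range(2):
            beta = (-0.3 if s[i] >= T else 0.3) if threshold else 0.3
            s[i] += dt * s[i] ** (1.0 - beta)
        t += dt
    return max(s) / min(s)

flat = ratio_at(6.8, threshold=False)   # leader barely ahead
phase = ratio_at(6.8, threshold=True)   # leader many orders of magnitude ahead
print(flat < 2, phase > 1e4)            # True True
```

Same initial 10% lead, same horizon; the only difference is whether the threshold exists.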

Noise randomizes the winner, not whether there is one

We added multiplicative noise at varying amplitudes across 300 trials per level. The question: does randomness allow the weaker agent to sometimes prevail? Yes. Does it prevent a singleton from forming? No, not even once.

sigma=0.001:   winner = initial leader 100%,  singleton always forms
sigma=0.079:   winner = initial leader  74%,  singleton always forms
sigma=0.234:   winner = initial leader  58%,  singleton always forms
sigma=1.000:   winner = initial leader  47%,  singleton always forms

At high noise, the outcome is basically a coin flip. The noise drowns out a 10% initial capability advantage and either agent is equally likely to end up dominant. But in every single trial at every noise level tested, one agent achieved absolute dominance. Noise determines who becomes the singleton. It doesn't determine whether there is one.
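A stripped-down version of the noise experiment, with my own toy parameters; the qualitative pattern, not the exact percentages, is the point:

```python
import math, random

def race(sigma, rng, dt=1e-3, t_max=12.0, blow=1e9, T=10.0):
    """One two-agent run with multiplicative noise; beta flips at T.
    Returns (initial leader won, final capability ratio)."""
    s = [1.1, 1.0]
    t = 0.0
    while t < t_max and max(s) < blow:
        for i in range(2):
            beta = -0.3 if s[i] >= T else 0.3
            det = dt * s[i] ** (1.0 - beta)
            kick = sigma * s[i] * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            s[i] = max(1e-9, s[i] + det + kick)
        t += dt
    return s[0] > s[1], max(s) / min(s)

rng = random.Random(42)
quiet = [race(0.001, rng) for _ in range(100)]
loud = [race(0.5, rng) for _ in range(100)]
win_quiet = sum(w for w, _ in quiet) / 100
win_loud = sum(w for w, _ in loud) / 100
sep_loud = sum(r >= 10.0 for _, r in loud) / 100
# Expect: quiet noise -> leader nearly always wins; loud noise -> closer
# to a coin flip; a 10x separation still forms in nearly every trial.
print(win_quiet, win_loud, sep_loud)
```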

The niche partitioning result

This is the one that surprised me. The standard escape hatch in the literature, including in Bostrom's 2005 singleton paper, is niche partitioning: if agents occupy non-overlapping resource niches (different energy sources, different domains, different territories), competitive exclusion doesn't apply and stable multi-agent coexistence is possible. We built this into our own derivations as a failure mode for the theorem.

Then we ran the simulation. We varied resource overlap from 1.0 (agents compete for identical pools) down to 0.0 (completely separate niches, zero shared resources):

overlap = 1.00:   mean separation ratio at t=15 = 265,477x
overlap = 0.50:                                    85,290x
overlap = 0.20:                                    15,357x
overlap = 0.10:                                     3,612x
overlap = 0.00:                                     1,103x

At zero resource overlap, no competition, fully separate niches, you still get 1,103x separation. This is the result that revises the prior theory.

The reason it happens: niche partitioning blocks the Lotka-Volterra mechanism. It doesn't block the intelligence explosion. The agent with higher initial capability crosses the growth threshold first regardless of whether resources are shared. Its exclusive niche gives it enough to grow. Once it's in the superexponential regime and the other agent isn't, the capability ratio diverges. Just more slowly without resource competition compounding it.

So the actual condition for stable oligopoly is not "separate niches." It's "structurally different growth ceilings": one agent must be fundamentally incapable of ever reaching beta < 0, regardless of how much capability it accumulates. That's a much tighter and more specific condition than the literature suggests.
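The mechanism is easy to check in isolation: give each agent a fully private niche, remove all resource coupling, and the initial leader still runs away. Toy parameters of mine, so the magnitude differs from the reported 1,103x:

```python
def niche_race(s0a=1.1, s0b=1.0, dt=1e-3, T=10.0, cap=1e12):
    """Two agents in fully separate niches (overlap = 0): no shared
    resources, no interaction of any kind. Run until the leader
    saturates its niche, then report the capability ratio."""
    a, b = s0a, s0b
    while a < cap and b < cap:
        a += dt * a ** (1.0 - (-0.3 if a >= T else 0.3))
        b += dt * b ** (1.0 - (-0.3 if b >= T else 0.3))
    return a / b

ratio = niche_race()
print(ratio > 1000)   # True
```

The 10% initial lead translates into crossing the threshold slightly earlier, and in the superexponential regime that small head start in time is an enormous gap in capability.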

The moat grows faster than you can respond to it

We measured the "moat" (how capable a late entrant needs to be to displace an established incumbent) as a function of time after the threshold crossing:

At threshold crossing + 0.0 time units:   entrant needs  3x original capability
At threshold crossing + 1.1 time units:   entrant needs  19x
At threshold crossing + 2.1 time units:   entrant needs  2,356x
At threshold crossing + 3.2 time units:   entrant needs  1,000,000x+

Three time units after the threshold crossing, a challenger would need to start at one million times the incumbent's original capability just to be competitive. The moat compounds superexponentially, at roughly the same rate the incumbent's capability does. By the time you realize someone crossed the threshold, the window to challenge them is orders of magnitude behind you. Late entry is only a realistic threat during the pre-threshold window.
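A sketch of why the moat compounds: integrate the incumbent's post-threshold growth and read off the capability multiple a challenger would need. Parameters are my assumptions, not the repo's, so the specific multiples differ from the reported ones:

```python
def moat(dt_after, T=10.0, beta_low=-0.3, step=1e-4):
    """Incumbent's capability multiple (relative to its value at the
    threshold crossing) a time dt_after later, under beta = beta_low.
    A challenger entering now must start near this level to compete."""
    s, t = T, 0.0
    while t < dt_after:
        s += step * s ** (1.0 - beta_low)
        t += step
    return s / T

# The required challenger capability tracks the incumbent's runaway growth.
print([round(moat(d), 1) for d in (0.0, 0.5, 1.0, 1.5)])
```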

A timescale formula

From systematic parameter sweeps, we fit power-law scaling for how long singleton emergence takes:

t_10x ~ 2.44 * N^0.96 * alpha^(-0.30) * gap^(-0.15) * |beta_low|^(-0.31)

A few things stand out. N scales nearly linearly (N^0.96): each additional competitor adds roughly proportional time, not combinatorial time. That's more tractable than you might expect. The initial capability gap has the weakest effect of any parameter (gap^-0.15): doubling the gap shortens convergence time by only about 10% (a factor of 2^0.15 ≈ 1.11). The threshold depth matters far more than who starts ahead. And t_10x ≈ t_dom across all parameter values, meaning that once 10x separation is reached, full dominance follows almost immediately. Superexponential growth collapses all the intermediate milestones.
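The fitted formula is cheap to evaluate directly; the weak gap dependence falls straight out of the exponent:

```python
def t_10x(N, alpha, gap, beta_low, c=2.44):
    """The fitted scaling law, with the exponents as reported."""
    return c * N ** 0.96 * alpha ** -0.30 * gap ** -0.15 * abs(beta_low) ** -0.31

# Doubling the initial gap shaves only ~10% off the time to 10x separation
# (the N, alpha, and beta_low values below are placeholders; they cancel).
base = t_10x(N=10, alpha=1.0, gap=0.1, beta_low=-0.3)
doubled = t_10x(N=10, alpha=1.0, gap=0.2, beta_low=-0.3)
print(round(1 - doubled / base, 3))   # 0.099
```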

Continuous entry: there's a critical rate

If new agents keep entering the environment (think: civilizations arising in a region, or competing AI labs), how fast can they arrive before the incumbent stops winning? We modeled Poisson arrivals with Pareto-distributed entry capabilities and measured survival rates:

entry rate 0.01/unit:   incumbent survives 99% of trials
entry rate 0.25/unit:   incumbent survives 88%
entry rate 1.00/unit:   incumbent survives 48%
entry rate 6.31/unit:   incumbent survives  0%

There's a critical rate around lambda = 0.25. Above that, the pre-threshold competition is too crowded for any single agent to monopolize the resources needed to cross first. But again, this only applies pre-threshold. Post-threshold, the moat grows so fast that even high entry rates can't touch the incumbent.

Cooperation: the strongest objection

The most natural objection to all of the above: agents can cooperate. Two weaker agents can pool resources to resist a stronger one. So I ran that experiment last, and it turned out to be the most interesting.

First, we established that coalition pooling can work in principle. A coalition of N=2 agents with combined capability 2.0, against a singleton candidate at 1.1, is enough to prevent the candidate from ever crossing threshold T. The critical coalition size is small. This looks like a viable defense.

Then we tested it with realistic internal dynamics: an 8-member coalition pooling externally against one singleton candidate. In aggregate, the coalition has roughly eight times the singleton candidate's capability and receives 88% of total resources. The singleton candidate crosses T first anyway.

The mechanism: coalition resources split among 8 members give each individual member roughly 11% of total resources. The singleton candidate, competing alone, gets 12%. The singleton beats every individual coalition member in the race to T, even though the coalition dominates in aggregate. Coalition pooling is self-defeating. It prevents any member from accumulating resources fast enough to compete with the unconstrained singleton. The coalition wins the group competition and loses the capability race.
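The share arithmetic checks out with simple proportional allocation (alpha = 1, an assumption on my part, but it reproduces the reported percentages):

```python
# 8 coalition members at capability 1.0 each, plus a singleton candidate at 1.1.
caps = [1.0] * 8 + [1.1]
total = sum(caps)                 # 9.1
member_share = 1.0 / total        # each coalition member individually
singleton_share = 1.1 / total     # the unconstrained candidate
coalition_share = 8.0 / total     # the coalition in aggregate
print(round(member_share, 3), round(singleton_share, 3),
      round(coalition_share, 3))  # 0.11 0.121 0.879
```

The coalition dominates in aggregate (88%) while each member individually (11%) trails the singleton (12%), which is the whole failure mode in one line of arithmetic.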

We then checked whether rational defection was the failure mode. It's not. Coalition members find it individually rational to stay. Defecting exposes them to the full competitive denominator and reduces their share. Zero defection events in the defection game. The coalition holds together completely. The singleton candidate wins anyway.

Finally, we ran 50 trials per regime across three cooperation structures: no cooperation, full oracle cooperation (all non-leaders optimally pool against the current leader at every step), and rational cooperation (defect when individually better off):

No cooperation:            100% singleton rate,   mean t_10x = 10.09
Full oracle cooperation:   100% singleton rate,   mean t_10x = 10.09
Rational cooperation:      100% singleton rate,   mean t_10x = 10.09

Oracle cooperation suppresses the initial leader's resource share, and it works. That agent's dominance is prevented. But the coalition has internal capability divergence. The strongest follower grows faster than the weaker followers. That follower becomes the new leader, faces the pooled coalition, gets suppressed, and one of the next-strongest emerges. The process repeats. Eventually one agent achieves 10x separation regardless. Cooperation picks a different winner. It doesn't prevent or measurably delay singleton formation.

The only cooperation structure that would work is a true merger: agents fully combining into a single entity, eliminating internal competition entirely. At that point, the merged entity is just a new single agent with higher initial capability than any competitor. The theorem applies to it directly.


The cosmological implication

The model is dimensionless, so you have to choose what a "time unit" means. At cosmological scale (say 1 unit = 10 million years), singleton emergence after threshold crossing takes roughly 23 million years. In a 13.8 billion year universe, that's cosmologically instantaneous.

This has a direct bearing on the Fermi paradox. If godlike causal eventuality is real, any civilization that crossed the capability threshold first in our past light cone would have expanded at near-light-speed, controlling all reachable matter and energy within tens of millions of years. The signature would be unambiguous and visible everywhere we look. We don't see it. The most straightforward interpretation: no civilization in our past light cone has crossed threshold T yet, or crossed it so recently that the expansion front hasn't reached us.

This is independently consistent with Robin Hanson et al.'s grabby aliens model, which predicts we should expect contact in 200 million to 2 billion years. I think the two models are describing consecutive phases of the same process. This project describes the internal competition phase that produces a dominant civilization. Grabby aliens describes what that civilization does with its dominance afterward.

I want to be honest about the limits here. This model is simple in ways that matter. There's no spatial structure. The beta threshold is a clean sigmoid. Real systems have texture this model doesn't capture, and the cosmological extrapolations involve a lot of assumptions. I find the implications compelling but I'm not claiming this is settled.


Running it

Everything runs in Python 3 with numpy, matplotlib, and scipy. No other dependencies, no build tools, just clone and run.

git clone https://github.com/ninjahawk/singleton-attractor
cd singleton-attractor
pip install numpy matplotlib scipy

python simulations/intelligence_explosion.py   # verify growth equation
python simulations/competition.py              # two-agent competitive exclusion
python simulations/agents.py                   # N-agent competition
python simulations/beta_regimes.py             # asymmetric ceiling + phase diagram
python simulations/stochastic.py               # noise robustness
python simulations/late_entrant.py             # moat growth after threshold
python simulations/timescale.py                # scaling formula
python simulations/continuous_entry.py         # continuous entry model
python simulations/cooperation.py              # coalition and cooperation dynamics
python simulations/run_experiments.py          # full parameter sweep

Figures write to figures/. Formal theory, proof sketches, and derivations are in theory/, including a cosmological mapping doc that translates the model parameters into physical quantities. Every finding is documented with exact parameter values in findings.md.

Repo: github.com/ninjahawk/singleton-attractor