elmerdata.ai blog

The Singularity as a Systems Problem, Not a Problem of Scale

Artificial intelligence is advancing, but emerging evidence suggests its trajectory may be shaped by systems and constraints rather than by scale alone.


A Prediction Without a Model

For decades, the singularity has been treated as an approaching milestone rather than a hypothesis. The claim is familiar: artificial intelligence improves, begins improving itself, and crosses a threshold beyond which human understanding no longer applies.

A closer look reveals something unusual. No shared definition exists, and no agreed mechanism explains how recursive self-improvement begins. No empirical benchmark signals its arrival, and timelines vary widely, receding as prior predictions expire.

Adam Majewski, Infolding Siegel Disk (Critical Orbit near t = 1/2), 2017. Animated visualization of a dynamical system on the boundary of the Mandelbrot set, showing structured complexity emerging from simple iterative rules. Creative Commons Attribution-Share Alike 4.0.

Prediction markets, which typically price future expectations with more discipline, offer little support. Platforms track forecasts on artificial general intelligence and related milestones, yet even these narrower questions, such as when AI might match human performance on specific tasks, show wide distributions rather than convergence on any singularity event.

Researchers such as Rodney Brooks have long noted that predictions of transformative AI tend to overestimate short-term breakthroughs while underestimating long-term complexity, and Gary Marcus similarly argues that current systems lack the foundations required for general intelligence. A forecast without a model, without a metric, and without convergence does not behave like science; it behaves more like pseudoscience, sustained by narrative rather than measurement.

The Strongest Counter-Theory

The most effective critique does not deny progress but reframes it. The singularity assumes intelligence scales like compute, yet the evidence suggests intelligence scales like systems.

Modern AI advances depend on data pipelines, human labeling, institutional deployment, and economic constraints, including the cost of data, computation, and human supervision. Performance improves not through autonomous self-modification but through coordinated engineering across organizations. Gains arrive unevenly, are often brittle, and require continuous human correction.

Scholars such as Michael I. Jordan emphasize that intelligence is not a single scalable quantity but a property of systems embedded in context, and Yann LeCun has argued that current approaches lack the architecture required for generalized reasoning. History supports this pattern. Electricity, aviation, and the internet all appeared transformative, yet none produced a discontinuity in human comprehension; each unfolded through infrastructure, regulation, and iteration.

Under this view, intelligence does not explode but accumulates. Recent work on biologically inspired systems offers a useful counterexample. Simulated insect-scale intelligence, such as digital models of a fly's neural architecture, demonstrates goal-directed behavior emerging from tightly constrained, structured architectures. These systems navigate, adapt, and respond without general intelligence, without recursive self-improvement, and without any trajectory toward a unified cognitive system. In this sense, the fly is not an exception but an illustration of how intelligence emerges from structured systems rather than from scale within a single one. The implication is straightforward: complexity can increase without converging toward a singularity.
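The dynamic described here can be shown in a few lines of code. Below is a minimal sketch of a Braitenberg-style agent, a classic toy model rather than any of the fly simulations mentioned above: two sensors wired almost directly to a steering rule produce reliable goal-directed behavior with no learning and no generality. Every name and parameter is illustrative.

```python
import math

# A minimal Braitenberg-style agent: two light sensors wired almost
# directly to a steering command. No learning, no planning, no general
# intelligence -- yet the agent reliably turns toward and approaches a
# light source. All numbers here are illustrative choices.

def sense(x, y, heading, light_x, light_y, offset):
    """Signal at a sensor mounted at `offset` radians off the heading."""
    sx = x + math.cos(heading + offset)
    sy = y + math.sin(heading + offset)
    d2 = (sx - light_x) ** 2 + (sy - light_y) ** 2
    return 1.0 / (1.0 + d2)  # stronger when the sensor is nearer the light

x, y, heading = 0.0, 0.0, 0.0
light = (5.0, 3.0)

for _ in range(300):
    left = sense(x, y, heading, *light, offset=+0.5)   # sensor on the left
    right = sense(x, y, heading, *light, offset=-0.5)  # sensor on the right
    heading += 0.5 * (left - right)  # turn toward the stronger signal
    x += 0.1 * math.cos(heading)     # take a small step forward
    y += 0.1 * math.sin(heading)

print(f"final position ({x:.2f}, {y:.2f}); light at {light}")
```

The point is not that flies are Braitenberg vehicles, but that goal-directed behavior requires neither scale nor generality; a single subtraction can constitute the entire "policy."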

What Would Count as Evidence

A distinction is necessary here: superintelligence refers to systems exceeding human capability across domains, while the singularity describes a process of recursive self-improvement leading to rapid, discontinuous change. The two are often linked, but they are not equivalent, and one does not necessarily imply the other.

A scientific concept requires measurement, and the singularity lacks one. A credible threshold would require at least three conditions:

  1. Autonomous improvement, in which systems redesign their own architectures without human intervention.

  2. Cross-domain generality, where capabilities transfer reliably across domains without retraining or external scaffolding.

  3. Sustained acceleration, in which each generation of systems measurably reduces the time and resources needed to produce the next (see the sketch after this list).
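The third condition can be stated as a toy recursion: suppose generation k takes time t0 · r^k to produce. Whether a runaway regime exists then reduces to whether r stays below 1, since the total time is a geometric series. The sketch below is purely illustrative; t0 and r are hypothetical parameters, not measurements of any real system.

```python
# Toy model of "sustained acceleration": generation k takes t0 * r**k
# units of time to produce. Parameters are hypothetical illustrations.

def total_development_time(t0: float, r: float, generations: int) -> float:
    """Total time to produce the first `generations` systems."""
    return sum(t0 * r**k for k in range(generations))

# r < 1: build times shrink geometrically, so the series converges to
# t0 / (1 - r). Unboundedly many generations fit in finite time --
# the mathematical signature of a singularity.
print(total_development_time(t0=1.0, r=0.5, generations=50))   # ~= 2.0

# r >= 1: each generation takes as long as or longer than the last,
# and total time grows without bound -- ordinary incremental progress.
print(total_development_time(t0=1.0, r=1.0, generations=50))   # = 50.0
```

Nothing in the record surveyed here suggests an observed r below 1; the value of the model is only to show that condition 3 is, in principle, measurable rather than a matter of narrative.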

No current system meets these conditions. Even researchers concerned with long-term risks, such as Stuart Russell, frame such capabilities as open technical problems rather than imminent outcomes. Evidence meeting these conditions would materially change the current assessment, but for now, systems such as ChatGPT operate within fixed architectures, rely on human-designed training loops, and degrade outside their domains. They extend human capability, but they do not recursively redefine it.

Implications for the Trajectory of AI

The singularity rests on a familiar premise: intelligence increases as knowledge and computational capacity accumulate, eventually reaching a threshold where further acceleration becomes self-sustaining. Much of the discussion around artificial intelligence has taken that progression as a natural trajectory.

Recent developments suggest a more complicated picture. The absence of a shared model, a measurable threshold, or convergence among expert forecasts already introduces uncertainty, and without a stable definition of intelligence, claims about its acceleration remain difficult to test, compare, or falsify. Emerging approaches to artificial intelligence further indicate that capability does not depend exclusively on cumulative knowledge gains.

The counterexample is instructive. Simulated insect-scale intelligence demonstrates that goal-directed behavior can emerge from tightly constrained architectures, without generality, without recursive self-improvement, and without scaling knowledge in the way large models do. These systems do not point toward a unified intelligence that expands indefinitely, but toward multiple pathways for building intelligent behavior.

Consider, then, what would follow if a singularity-like event did occur. The outcome would not resemble the continued scaling of current tools, but a discontinuity that renders them obsolete. Systems designed for human interaction would give way to systems operating beyond human interpretability, and tools such as ChatGPT would not evolve into that state but be replaced by something fundamentally different. Even in theoretical accounts, such as those explored by Nick Bostrom, the transition represents a break rather than a continuation.

Artificial intelligence will continue to advance, but the direction of that progress may not inevitably be singular. Systems based on scale and data will coexist with systems based on structure and constraint, and the evolution of AI may therefore resemble a diversification of approaches rather than a convergence toward a single point of acceleration.


Further Reading

The Singularity Is Near by Ray Kurzweil

Superintelligence by Nick Bostrom

Rebooting AI by Gary Marcus


AI Assistance Statement
Preparation of this blog entry included drafting assistance from ChatGPT using a GPT-5 series reasoning model. The tool was used to help organize ideas, propose structure, refine language, and accelerate revision. It was also used to assist in identifying image sources and verifying that selected images appear to be released for reuse (for example through public domain or Creative Commons licensing). The author selected the topic, determined the argument, reviewed and edited the text, confirmed image licensing, and takes full responsibility for the final published content. (Last updated: 03/06/2026)

#AIData #Observations