
“Science is not only compatible with spirituality; it is a profound source of spirituality.” — Carl Sagan, The Demon-Haunted World
Every era invents its own version of inevitability. In earlier centuries, it was fate, providence, or historical destiny. In our own time, it has taken the form of technological escalation—the belief that computation, given enough data and speed, must naturally culminate in something greater than humanity itself.
In recent years, leading figures in technology have made increasingly confident predictions. We are told that artificial general intelligence—machines matching human capability across all domains—will arrive within years, not decades, and that superintelligence will follow shortly after. Some company roadmaps treat this progression as no less certain than Moore’s Law once seemed. Billions in investment flow toward this vision. The message is clear: the rise of machine intelligence beyond our own is not a possibility to consider, but an inevitability to prepare for.
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” — Edsger W. Dijkstra
The modern pursuit of superintelligent artificial intelligence is often framed as a purely technical challenge. We are told that progress follows from mathematics, engineering discipline, and scaling laws, and that the end result—machines that surpass human intelligence in all domains—is simply the next step in a neutral process. Yet this framing conceals more than it reveals. Beneath the surface of technical jargon lies a set of assumptions that are philosophical, ideological, and in some cases, metaphysical in nature.
“Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of.” — Julian Jaynes
Many of the strongest advocates of superintelligence treat intelligence as a substance that can be accumulated like energy or mass. The logic seems straightforward: add more parameters, more compute, more training data, and intelligence must increase accordingly. Consciousness follows from sufficient complexity. Agency emerges from sophisticated pattern matching. Moral judgment arises as a natural consequence of scale. These are not scientific conclusions so much as metaphysical hopes, inherited from older narratives of transcendence and rebirth.
What is striking is how familiar these ideas are. The promise of liberation from bodily limits, the aspiration toward immortality, and the expectation of a higher intelligence that will resolve human conflict all echo long-standing religious motifs. The difference is essentially one of vocabulary. Where earlier traditions spoke of souls and heavens, contemporary discourse speaks of uploads and singularities. The structure of belief remains remarkably consistent.
“We shape our tools and thereafter our tools shape us.” — Marshall McLuhan
This matters because beliefs shape priorities. When AI development is presented as inevitable, human choice is quietly removed from the equation. Questions of governance, responsibility, and social consequence are treated as inconveniences rather than central concerns. For those considering careers in artificial intelligence, this atmosphere can be particularly distorting. The field’s discourse may suggest that accepting these metaphysical assumptions is simply part of being serious about the technology—that skepticism about superintelligence means skepticism about AI itself. This is false. One can work rigorously on machine learning, contribute to genuine advances, and remain unconvinced that scaling alone will produce consciousness or moral agency. Confusing ideological consensus with scientific necessity serves no one well.
None of this is to dismiss legitimate research on AI safety and alignment. Questions about how to ensure systems behave as intended, how to prevent misuse, and how to manage risks from increasingly capable tools are important and grounded. The problem arises when these practical concerns are reframed as preparation for an assumed future—one where machines possess goals, autonomy, and intelligence in the full human sense. This reframing shifts attention from tractable problems we face now to speculative scenarios that may never materialize.
There is also a persistent confusion between performance and understanding. Systems that generate fluent language or pass narrow behavioral tests are often described as possessing intelligence in the human sense. Yet imitation is not explanation. Output alone tells us little about internal experience or moral comprehension. A system that completes the sentence “I think, therefore I am” has not thereby demonstrated self-awareness, any more than a calculator that outputs “2+2=4” has demonstrated mathematical understanding. Tests originally designed as practical benchmarks—can this system fool an observer, can it answer this category of question—have been inflated into metaphysical proofs, asked to answer questions they were never meant to address.
Consider the difference between pattern matching and reasoning. Current large language models excel at identifying statistical regularities in text and generating continuations that feel plausible. This is powerful. It is useful. But when we ask these systems to explain their reasoning, we often get fluent post-hoc rationalizations rather than insight into actual decision processes. The system produces text that sounds like reasoning because reasoning-like text appeared in its training data, not because it engages in the cognitive process we recognize as understanding. Mistaking one for the other has consequences. We may trust these systems with decisions they are not equipped to make, or conversely, we may dismiss genuine capabilities because we’ve miscategorized what the system is actually doing.
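To make that distinction concrete, here is a minimal sketch in Python, far simpler than any production model and not drawn from the essay itself: a toy bigram generator whose continuations can sound like reasoning while the program only replays word co-occurrence counts. The corpus and function names are invented for illustration.

```python
import random
from collections import defaultdict

# Toy corpus; the sentences are invented purely for illustration.
corpus = (
    "the model predicts the next word . "
    "the model sounds like it reasons . "
    "it reasons about the next word ."
).split()

# Count bigram transitions: each word maps to the words observed after it.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def continue_text(prompt_word, length=8, seed=0):
    """Generate a continuation by sampling observed next words.

    Nothing here represents meaning; the function only replays
    co-occurrence statistics gathered from the toy corpus.
    """
    random.seed(seed)
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        candidates = transitions.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(continue_text("the"))
# A possible output: "the model sounds like it reasons about the next"
# The text reads as if the system "reasons", yet no step in the program
# models what any of these words mean.
```

The gap between this toy and a modern language model is enormous in scale, but the sketch shows why fluent output by itself is weak evidence about the process that produced it.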
A more grounded view of artificial intelligence begins with a simpler premise: these systems are tools, shaped by human goals and human institutions. They do not arise independently of society. They do not absolve us of responsibility. Their effects—beneficial or harmful—reflect the values embedded in their design and deployment. When a language model reproduces gender stereotypes, that reflects patterns in its training data, which reflect broader social patterns. When a recommendation system optimizes for engagement over well-being, that reflects the objectives chosen by its creators. When an automated hiring tool discriminates, that reflects both historical biases in the data and choices about what signals to prioritize. These are human decisions, visible in the architecture, the data curation, the loss functions, and the deployment contexts. No amount of scale makes these choices disappear or transfers responsibility to the machine itself.
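To illustrate how such choices live in the objective rather than in the machine, consider a deliberately simplified sketch in Python. The field names, weights, and "predicted regret" signal are assumptions made up for this example, not any real platform's formula; the point is only that the ranking flips when a designer writes down a different objective.

```python
from dataclasses import dataclass

@dataclass
class Item:
    predicted_watch_minutes: float   # proxy for engagement
    predicted_regret_score: float    # hypothetical proxy for user-reported regret

def engagement_objective(item: Item) -> float:
    # Rewards time spent, regardless of how the user later feels about it.
    return item.predicted_watch_minutes

def wellbeing_adjusted_objective(item: Item, regret_penalty: float = 5.0) -> float:
    # Same engagement signal, but a human-chosen penalty subtracts predicted regret.
    return item.predicted_watch_minutes - regret_penalty * item.predicted_regret_score

candidates = [
    Item(predicted_watch_minutes=40.0, predicted_regret_score=6.0),  # compulsive but regretted
    Item(predicted_watch_minutes=25.0, predicted_regret_score=1.0),  # engaging, low regret
]

best_by_engagement = max(candidates, key=engagement_objective)
best_by_wellbeing = max(candidates, key=wellbeing_adjusted_objective)
# The two objectives pick different items: 40 beats 25 on raw engagement,
# while 25 - 5*1 = 20 beats 40 - 5*6 = 10 once regret is penalized.
# The system did not decide anything; the people who wrote the objective did.
```

Nothing about added scale changes where that decision sits: someone still chooses the objective, the data, and the deployment context.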
The danger of the superintelligence narrative is not that it imagines ambitious futures, but that it narrows the present. By focusing attention on speculative endpoints, it diverts energy away from immediate, solvable problems: how to align technology with human well-being, how to distribute benefits fairly, and how to preserve human agency in systems of increasing automation. We know that algorithmic systems are already reshaping labor markets, influencing elections, and mediating social relationships. These are not hypothetical future risks but present realities requiring immediate attention.
Progress in science has rarely depended on surrendering judgment to destiny. It has depended on careful definition, intellectual humility, and an insistence that explanations remain tethered to evidence rather than aspiration. Artificial intelligence should be no exception. The future of the field will be determined not by the metaphysical narratives we inherit, but by the choices we are willing to acknowledge as our own.