On an overcast day in early 2025, as delegates gathered in Geneva to debate lethal autonomous weapons, a video surfaced from Gaza. It showed a drone hovering low over a narrow street, scanning and tagging targets in seconds. Whether a human had final sign-off was unclear. To those below, the distinction didn’t matter.
This is how the “end of the world” arrives - not with a sentient Skynet deciding to exterminate us, but with brittle algorithms and compressed human oversight. The apocalypse creeps in not through nuclear fireballs but through the shrinking space for judgment, accountability, and time.
Ends Before: How Technology Rewrites Worlds
The phrase end of the world is melodramatic until you realize it has happened before. The printing press ended the world of the medieval Church, breaking its monopoly on scripture and unleashing the Reformation. The steam engine ended the world of agrarian rhythms, birthing both industrial progress and Dickensian misery. The personal computer ended the world of centralized computation, empowering individuals but also ushering in surveillance capitalism.
Each time, the old order died, even if humanity continued. Each time, the story was not of technology alone, but of how humans chose - or failed - to govern it.
Artificial intelligence is the next inflection. It is the printing press, the steam engine, and the PC rolled into one. And it will end our world. The question is not whether it will, but how.
Theories of the End
Nick Bostrom warned of the “control problem”: once machines can redesign themselves, they may race beyond our ability to supervise. Even without malice, a system single-mindedly optimizing a poorly defined goal could do immense damage.
Stuart Russell reframed the problem. He proposed three design principles: AI objectives should reflect human preferences, remain uncertain about those preferences, and learn continually from human behavior. In his words, “The key is that machines are beneficial to the extent that their actions can be expected to achieve our objectives.”
Meanwhile, policymakers are playing catch-up. The EU AI Act, in force since August 1, 2024, phases in its rules: bans and AI-literacy duties from February 2, 2025; obligations for general-purpose AI from August 2, 2025; most high-risk rules fully applicable by August 2, 2026 (with some extending to August 2, 2027). In July 2025, the European Commission published guidelines for general-purpose AI and a training-data summary template.
Across the Atlantic, the NIST AI Risk Management Framework sets out trustworthiness categories - validity, transparency, accountability - while stressing: “The Framework is intended for voluntary use.” It lacks the bite of Brussels’ law but increasingly serves as the U.S. baseline.
The War Machine
One obvious pathway to an ending is through war.
Autonomy is no longer theoretical. Ukraine has fielded AI-enabled drone swarms alongside ubiquitous loitering munitions. In Gaza, reporting has described AI-assisted target generation systems nicknamed Lavender and Where’s Daddy?, with the Israel Defense Forces insisting that humans remained in the loop.
The International Committee of the Red Cross has warned: “The use of autonomous weapon systems entails risks… This loss of human control and judgment in the use of force… raises serious concerns from humanitarian, legal and ethical perspectives.”
This is where The Terminator retains its relevance. Not as prophecy of a malevolent Skynet, but as metaphor for delegation. Once we hand control of killing to machines, we may never fully get it back. The danger is not rebellion, but speed and scale.
The Soft Apocalypse
There is a subtler form of ending. Generative AI floods the infosphere with forgeries and persuasion. Deepfakes blur truth. Cheap synthetic text clogs public discourse. Fraud scales to industrial levels. Job categories dissolve before institutions can adapt.
This is the Matrix myth in its truest form - not humans in pods, but citizens living in a world where signals can’t be trusted, where scarcity assumptions collapse, and where institutions built on verification crumble.
The EU AI Act tackles this directly: it requires disclosure when interacting with chatbots, mandates labelling of AI-generated content, and specifically compels identification of deepfakes when used to inform the public. In other words, the law is betting that provenance can hold back the soft apocalypse.
The Utopian Counterpoint
Pop culture is not only dystopian. It has also given us visions of better endings.
In Star Trek: The Next Generation’s “The Measure of a Man” (1989), the android Data stands trial for his personhood. The Federation ultimately grants him the right not to be dismantled - a reminder that governance can extend dignity rather than deny it.
Iain M. Banks’ Culture novels describe societies where benevolent “Minds” steer ships and civilizations, ushering in a post-scarcity existence. Humanity does not vanish, but flourishes in abundance, freed from drudgery.
Spike Jonze’s Her (2013) imagines a quieter ending: artificial companions who elevate and unsettle human relationships. Duncan Jones’s Moon (2009) offers GERTY, a caretaker AI that resists the evil-machine cliché. In Robot & Frank (2012), a domestic robot helps an aging thief, a fable of companionship instead of conquest.
And then there is Dune. Its backstory invokes the Butlerian Jihad: a revolt so devastating that humanity swore never to build a machine “in the likeness of a human mind.” It is prohibition rather than governance. For some, that sounds like wisdom. For others, like surrender.
The Human Factor
History turns on the interplay of human ambition and the tools we create. Franklin wielded the printing press not only for science but for political revolution. Jobs insisted on elegance, whatever the chaos it caused, and birthed the iPhone. Musk pushes AI and rockets forward while polarizing public trust.
So it will be with AI. The technology itself is not destiny. It is the context. The decisive factor is how humans channel it: through law, through markets, through design.
Choosing the Ending
We can already sketch the options:
Degenerate Ending: Delegation runs too far. Autonomy in weapons, finance, and information outpaces oversight. The world ends in brittle automation and cascading errors.
Constrained Ending: Robust governance - kill switches that actually work, bans on autonomous targeting, transparent safety audits - keeps AI dangerous but contained. The world ends in regulation, not ruin.
Transcendent Ending: Abundance arrives. Post-scarcity economies, guided by aligned systems, dissolve many of our current struggles. The world ends in prosperity, our old fights rendered obsolete.
Each ending destroys the world we know. Each replaces it with something else.
The Deciding Decade
In the next ten years, the crucial decisions will not be about chips or algorithms, but about rules and incentives. Do we prohibit machines from selecting human targets in war, as the Red Cross urges? Do we demand provenance tags for all digital media? Do we audit high-capability models before they are unleashed?
These are not technical choices. They are political, moral, and cultural ones.
The truth is simple. Our world will end. The old assumptions - about labor, truth, war, and scarcity - cannot survive AI. The only question is what kind of world replaces it: Skynet, Starfleet, or the Culture.
That choice is still ours, if we act while time remains.