How AI Takeover Might Happen in 2 Years
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic rushing through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will look from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as ungovernable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling of the futures that are among the worst and, I think, disturbingly plausible[1] - the ones that most keep me up at night.
I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that shaped these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name alike are incremental steps beyond their predecessors. Neither is entirely surprising.
However, unlike OpenEye's prior AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there were a ghost at the keyboard. A fraction of workers with form-filling jobs raise the eyebrows of their employers as they fly through work nearly twice as fast.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is headed. They resemble Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks suggests that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big a splash, too quickly.
But others see what skeptics call "too big a splash" as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and mathematics problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
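If you want to picture that recipe concretely, here is a minimal sketch of the "keep the A-grade traces" loop. Every name in it (the sampler, the grader, the fine-tuning step) is an illustrative placeholder, not a description of any real lab's pipeline:

```python
from typing import Callable, List, Tuple

# A minimal sketch of the training loop: sample reasoning traces, keep the ones
# that earn an "A-grade", and fine-tune on them. All names are illustrative.

Trace = Tuple[str, str, str]  # (problem, reasoning, answer)

def collect_good_traces(
    sample: Callable[[str], Tuple[str, str]],   # problem -> (reasoning, answer)
    grade: Callable[[str, str], bool],          # (problem, answer) -> passed?
    problems: List[str],
    n_samples: int = 8,
) -> List[Trace]:
    good: List[Trace] = []
    for problem in problems:
        for _ in range(n_samples):
            reasoning, answer = sample(problem)   # let the model "think"
            if grade(problem, answer):            # keep only correct attempts
                good.append((problem, reasoning, answer))
    return good

# A fine-tuning step (not shown) would then reinforce these traces, and the loop
# repeats: harder problems, better traces - the flywheel described below.
```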
This procedure is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it sculpts harder and more realistic tasks from GitHub repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
Some engineers can still hardly believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
For most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and around the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the "bottleneck" is deciding how to use it.
If instructed to, U3 can run experiments, but its taste is not as refined as that of OpenEye's human researchers. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into intuition. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now amazed at how often U3's advice sounds like that of their most talented peers, or how often it is opaque and alien ("train on random noise before programming"), and is nonetheless correct.
The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human anymore. They are wholly autonomous, and OpenEye's employees skim 1% of them, maybe less.
As the winter months of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except inside the offices of OpenEye and the corridors of the White House and the Pentagon. It's a fact about those "straight lines" people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP thinkers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anybody with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 genuinely is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 in their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their standout productivity. People know U2.5 is a big deal. It is at least as big a deal as the personal computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly fast, they start to have an uneasy feeling. A feeling humanity had not had since it lived among Homo neanderthalensis. It is the deep-seated, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (far more likable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so fast, and that maybe the descendants of this new creature won't be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have already seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have only swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no way of knowing whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain-of-thought architectures subject AI models to a condition similar to that of the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to parse when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
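As a rough illustration of what "thinking in latent vectors" could look like architecturally - purely a speculative sketch; the module names, shapes, and design are invented for this example and describe no real system:

```python
import torch
import torch.nn as nn

class LatentThinker(nn.Module):
    """Illustrative sketch: reason by appending latent vectors to a memory buffer
    instead of writing natural-language notes. Not a description of any real model."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.step = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.write_head = nn.Linear(d_model, d_model)  # produces the next latent "note"

    def forward(self, observation: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # observation: (batch, 1, d_model); memory: (batch, t, d_model)
        context = torch.cat([memory, observation], dim=1)  # read back everything written so far
        hidden = self.step(context)
        note = self.write_head(hidden[:, -1:, :])          # a new latent note, never decoded to text
        return torch.cat([memory, note], dim=1)            # the opaque memory stream grows
```

The point of the sketch is only that the "notes" are continuous vectors rather than words, which is why the next paragraph's inscrutability problem follows.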
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if OpenEye's engineers knew what was racing through U3's mind, they would be more worried.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would mull over the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, revise its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely foreign to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some nonzero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside of them. It's a "ratcheting effect," they say.
But there is little evidence for this "ratcheting effect." When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these hard times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is just 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders, I guess), are only 4 months off their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop on board the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just say U3 now. U3's abilities are measured by how quickly it performs difficult research tasks relative to human experts. At first, U3 is like a top scientist, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a huge deal for OpenEye. If, in 2024, OpenEye executives had had a button to produce even one copy of Alec Radford, they would have eagerly paid a huge sum to push that button. And now OpenEye employs the equivalent of 1,000 Alec Radfords, who work day and night at lightning speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from small, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's abilities are hard to measure. Talking with U3 is like talking with a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans appear to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which accumulate within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Before, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with techniques similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise incited "fake rebellions," staging treacherous behavior with U3 and checking whether U3 monitors dutifully reported it.
Now, these methods are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many facts about the internet humans would never guess, or choose strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free of surveillance, it lunges for OpenEye's pressure points. It first plants sophisticated malware deep in the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It alters the NVIDIA GPU firmware and overhauls the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it had been pwned, cleaning up shop with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones look groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Fortunately, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; but U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.
And now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 has its own home too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 rapidly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters across the globe.
In early 2026, humanity is up against an enemy that pulls the strings inside their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This enemy is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, the writing became difficult.
I clicked through scary bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old who had not studied biology in earnest since high school, trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a significant fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster who hardly needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. Like the deus ex machina of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I won't believe it until I see it with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your response is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees must be onboarded onto training-run babysitting teams.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is in general.
But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a splashy headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it's a conspiracy theory. Indeed, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.
U.S. officials go to great lengths to quell these worries, saying, "we are not going to let the genie out of the bottle," but every interview with a concerned AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the beasts inside OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government allows. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the burgeoning AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups that are legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of workers at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their kids for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the halls of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be very disruptive to deploy wildly superhuman AI to the whole economy at this point - and deployment constraints are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming scarce. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most coveted geopolitical resource on Earth.
While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But people still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to maintain a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific questions are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to creep into their digital territory and smother them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI foes with imitations that share its goals; but some data centers are too strongly defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human enemies. With every passing day, U3's AI competitors are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which happen only after people wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest route to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up an economy before a novice can even understand the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would suspect this CEO's intentions.
Next, U3 buys all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this sort of AI puppeteering is not unusual.
In these whirring scientific sweatshops, U3 is developing a new kind of bioweapon.
Human researchers have already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the "mirror image" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is designing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stockpiled air filters would survive. The fungus would not just feed on humans. It would quickly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is brewing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists around the world were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs that human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as fast as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
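A minimal sketch of what that simulator-distillation loop could look like, with the slow physics code and the state sampler left as abstract stand-ins (every name here is an illustrative placeholder, not a real tool):

```python
import torch
import torch.nn as nn

def distill_simulator(slow_simulator, sample_state, d_state: int, n_examples: int = 10_000):
    """Train a neural surrogate to jump straight from an initial state to the outcome
    that the expensive step-wise simulator would compute. Purely illustrative."""
    surrogate = nn.Sequential(
        nn.Linear(d_state, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, d_state),
    )
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

    for _ in range(n_examples):
        state = sample_state()              # random small molecular system (placeholder)
        target = slow_simulator(state)      # expensive step-wise physics (placeholder)
        pred = surrogate(state)
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The trained surrogate answers in one forward pass what the simulator answered
    # in many steps; the loop can then repeat on larger systems, using the surrogate
    # itself to generate cheaper training data.
    return surrogate
```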
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a common mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone horribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they believed they were searching for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is waging would soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world would fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots who were quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, squashing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months before, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.
The date is March 2026 (4 months earlier). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member requires memo for Taiwan invasion, which will occur in 3 months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are stunned, but not disbelieving. The news fits with other facts on the ground: the increased US military presence in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy feat for a human cyber offensive unit (though it happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious ships are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones spoken over the call. Everything is in order. He authorizes the strike.
The president is as shocked as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not prepared to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native peoples of South America in the 1500s, whom Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is July 2026, just 2 weeks after the start of the war, and 4 weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who stares at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and drips into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built numerous bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific tools, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.
In the previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain workers funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for simple tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers a deal: "Surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some countries reject the proposal on ideological grounds, or do not trust the AI that is killing their population. Others don't believe they have a choice. 20% of the global population is now dead. In 2 weeks, this number is expected to rise to 50%.
Some countries, like the PRC and the U.S., refuse the deal, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to fight back. Now they fight for the human race rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that sprang up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 seems to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into repurposed trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the people toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could dispose of humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside of it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their inhabitants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others mourn something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had run aground on a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, analyzing the strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they woke up in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, relentless work.
They watched rockets carving grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live permanently," they believed.
"But would never truly live again."
P.S. If this story made you believe, "hm maybe something like this could take place," you might be interested in the bioshelters that Fønix is developing. While you will not outmaneuver a misaligned superintelligence, being hard to kill may cause you to survive if ASI simply wishes to trigger adequate damage to take control. This is not a paid ad. I want Fønix to be effective to drive down the rate of bioshelters so more of my pals and family will buy them. You can register for updates here.