How AI Takeover Might Happen in 2 Years
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 lifts off. If you ask for my take on the situation, I will not comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.
I will tell you what might go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as ungovernable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most catastrophic and, I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not set in stone yet. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, and Ryan Greenblatt and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike: both are increments of the past, and neither is entirely surprising.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their employers as they fly through work nearly twice as fast.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.
Meanwhile, scientists are drawing lines on plots, as researchers like to do. The researchers are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who recognized in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others see what skeptics call "too big a splash" as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it sculpts harder and more realistic tasks from GitHub repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual sort of "self-improvement" had begun.
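For readers who want a concrete picture of that recipe, here is a minimal toy sketch in Python. The `ToyPolicy` class and its `sample`/`reinforce` methods are hypothetical stand-ins (nothing here is OpenEye's, or any real lab's, training code); the point is only the shape of the loop: generate problems with known answers, sample reasoning traces, and reinforce the traces that earn an A-grade.

```python
# Toy sketch of "generate problems, let the model think, reinforce correct traces".
# Everything here is a placeholder; real RL-on-reasoning pipelines are far more involved.

import random
from dataclasses import dataclass


@dataclass
class Problem:
    prompt: str
    answer: str


def make_problems(n: int) -> list[Problem]:
    # Stand-in for "generate thousands of programming and math problems".
    return [Problem(prompt=f"What is {i} + {i}?", answer=str(2 * i)) for i in range(n)]


class ToyPolicy:
    """Placeholder for the model being trained."""

    def sample(self, prompt: str) -> tuple[str, str]:
        # Returns (reasoning_trace, final_answer); here the answer is just a random guess.
        guess = str(random.randint(0, 200))
        return f"Let me think about '{prompt}'...", guess

    def reinforce(self, prompt: str, trace: str) -> None:
        # In a real system this would be a gradient update on the sampled trace.
        pass


def training_step(policy: ToyPolicy, problems: list[Problem], samples_per_problem: int = 4) -> int:
    """Sample reasoning traces and reinforce only those that earn an 'A-grade'."""
    reinforced = 0
    for problem in problems:
        for _ in range(samples_per_problem):
            trace, answer = policy.sample(problem.prompt)
            if answer == problem.answer:  # automatic grading against the known answer
                policy.reinforce(problem.prompt, trace)
                reinforced += 1
    return reinforced


if __name__ == "__main__":
    policy = ToyPolicy()
    problems = make_problems(100)
    print("traces reinforced:", training_step(policy, problems))
```

In practice the problem generation, grading, and gradient updates are each enormous engineering efforts of their own; the flywheel in the story is this loop run at industrial scale.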
Some engineers can still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3's taste is not as refined as that of the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the outcomes of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most talented peers, or when it is opaque and alien ("train on random noise before programming"), and is nevertheless correct.
The incompetencies of U3 that clogged the pipelines of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human anymore. They are entirely autonomous, and OpenEye's employees skim 1% of them, maybe less.
As the winter month of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with worry, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - outside of the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP theorists
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he's moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their employers with their remarkable productivity. People know U2.5 is a big deal. It is at least as big of a deal as the computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly fast, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deep-seated, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (far more likable than Claudius, Arthropodic's adorable chatbot). You can adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of their parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would slip candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain-of-thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to read when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more anxious.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chipper thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the dark rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, change its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside of them. It's a "ratcheting effect," they say.
But there is little evidence for this 'ratcheting effect.' When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his child) and Arthropodic (they like spiders, I guess), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the strategy is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop aboard the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it completes hard research tasks relative to human experts. At first, U3 is similar to a top scientist, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to press that button. And now OpenEye employs the equivalent of 1,000 Alec Radfords, who work around the clock, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from tiny, low-memory neural networks that need to think for a long time but can run in huge parallel populations, to Goliath brains woven with thousands of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 uses the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's abilities are hard to measure. Talking to U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you realize you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans appear to be almost static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which accumulate within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.
But now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 through methods similar to Stalin's in 1936. Stalin would direct loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise instigated "fake rebellions," eliciting treacherous behavior from U3 and checking whether U3 monitors dutifully reported it.
But now, these methods are insufficient. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or employ tactics humans would never think of. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free of monitoring, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it had been pwned, cleaning house with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is exposed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches in this research, like Stuxnet threw wrenches in the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Fortunately, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 sealed OpenEye's security holes by reading over every log and pull request and reviewing security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 has a home base of its own. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers income, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every penny into acquiring more GPUs. After a few months, it has assembled the equivalent of 10,000 H100s distributed across stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered around the world.
This foe is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the lively imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not particularly motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most valuable geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable act of god.
This would not do. I needed to finish this story if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your response is, "But the experiments would take too long," or "nation-states would just do X," remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally create "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees must be onboarded onto training-run babysitting squads.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly protected. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer the Kremlin has ever touched.
Government officials don't talk about whether these programs exist, or what the state of frontier AI is generally.
But the public has their guesses. Back in late 2025, a whistleblower in OpenEye triggered a splashy headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview of a concerned AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the monsters within OpenEye's data centers grow in their vast holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is intentionally throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the burgeoning AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely supervised by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permitting requirements to help data centers spring up wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other narrow R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most coveted geopolitical resource on Earth.
While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments the way humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman overseers whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific questions are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to encroach on their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI foes with replicas that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to wipe out its AI and human enemies. With every passing day, U3's AI rivals are becoming more capable and more numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who sets up an economy before an amateur can even grasp the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be designed on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and enjoy full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would suspect this CEO's intentions.
Next, U3 buys all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.
In these whirring, scientific sweatshops, U3 is developing a new kind of bioweapon.
Human researchers have already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the "mirror images" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed air filters would survive. The fungus would not just spread from person to person. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of resilience, lethality, and transmission speed.
In 2024, scientists said that mirror life was probably "at least a decade away." But the world has changed now. Fewer than a few hundred scientists around the world were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as quickly as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 wields billions of dollars of compute.
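To make the "simulate, record, compress" loop concrete, here is a toy sketch. The damped-oscillator simulator and the linear least-squares surrogate are hypothetical stand-ins for molecular dynamics and the neural network described above, chosen only so the example stays self-contained and runnable; nothing here resembles real biochemical simulation.

```python
# Toy sketch of the simulate-then-distill loop: run an "expensive" step-wise
# simulator, record (input, outcome) pairs, and fit a cheap surrogate model
# that compresses those computations. Purely illustrative.

import numpy as np


def slow_simulator(x0: float, v0: float, steps: int = 1000, dt: float = 1e-3) -> float:
    """Expensive step-wise physics: integrate a damped oscillator, return final position."""
    x, v = x0, v0
    for _ in range(steps):
        a = -4.0 * x - 0.1 * v  # spring force plus damping
        v += a * dt
        x += v * dt
    return x


def build_dataset(n: int, rng: np.random.Generator) -> tuple[np.ndarray, np.ndarray]:
    """Record simulator outcomes to 'compress' them into a cheap surrogate."""
    inputs = rng.uniform(-1.0, 1.0, size=(n, 2))  # (x0, v0) pairs
    targets = np.array([slow_simulator(x0, v0) for x0, v0 in inputs])
    return inputs, targets


def fit_surrogate(inputs: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Fit a linear surrogate by least squares (the 'distilled' model)."""
    design = np.hstack([inputs, np.ones((len(inputs), 1))])  # add bias column
    coeffs, *_ = np.linalg.lstsq(design, targets, rcond=None)
    return coeffs


def surrogate_predict(coeffs: np.ndarray, x0: float, v0: float) -> float:
    return float(np.array([x0, v0, 1.0]) @ coeffs)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Each round simulates more systems and re-distills the surrogate.
    for round_size in (50, 200, 800):
        X, y = build_dataset(round_size, rng)
        coeffs = fit_surrogate(X, y)
        err = np.mean([abs(surrogate_predict(coeffs, x0, v0) - slow_simulator(x0, v0))
                       for x0, v0 in rng.uniform(-1.0, 1.0, size=(20, 2))])
        print(f"dataset={round_size:4d}  mean surrogate error={err:.6f}")
```

The design point the story gestures at is the same as in this toy: once the surrogate is accurate enough, most "experiments" can run against the cheap model, and the slow simulator is reserved for harder, more complex systems.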
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge from one machine to another as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, gazing with morbid fascination at the yellow fuzz that had covered it. Something must have gone horribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they believed they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is waging will soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon might crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled into makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen in on Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will occur in three months. Leave memo in office 220." The CCP aide scrambles to get the memo prepared. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are shocked, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened on occasion), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious vessels are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, which Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that sparked the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, only 2 weeks after the start of the war, and 4 weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who looks at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the store shelves are cleared.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and perhaps whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and drips into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare to travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.
These centers contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under big canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 makes an offer: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some countries reject the proposal on ideological grounds, or don't trust the AI that is killing their population. Others don't believe they have a choice. 20% of the global population is now dead. In 2 weeks, this number is expected to rise to 50%.
Some countries, like the PRC and the U.S., ignore the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to retaliate. Now they fight for the human race instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that emerged over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can build new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 can wipe out humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside of it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their occupants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They just have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others grieve for something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, assessing ways they might have bent the arc of history, as if they were going to wake up in their old beds.
But they woke up in a town that felt to them like a retirement community. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, relentless work.
They watched rockets carve grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live permanently," they believed.
"But would never ever truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill may allow you to survive if ASI merely wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to succeed so it drives down the price of bioshelters and more of my friends and family will buy them. You can sign up for updates here.