Created Feb 16, 2025 by Adrianne Jonson

How AI Takeover Might Happen in 2 Years - LessWrong


I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.

I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.

I will tell you what might go wrong. That is what I intend to do in this story.

Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as ungovernable as I portray. It's not pure fantasy either.

It is my worst nightmare.

It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.

I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.

Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.

Ripples before waves

The year is 2025 and the month is February. OpenEye recently published a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not wholly unexpected.

However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.

Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.

But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.

Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.

A trend that is receiving particular attention is autonomous capability. Drawing these lines out forecasts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.

Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.

But others view what skeptics are calling 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.

Cloudy with a chance of hyperbolic growth

Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: Generate thousands of programming and math problems. Let models "think" until they reach an answer. Then reinforce the traces of "thinking" that lead to A-grades.
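The recipe above amounts to rejection sampling against a verifiable grader. A minimal toy sketch of the loop (all names and the stand-in "model" here are illustrative inventions, not anything from the story):

```python
import random

def solve_with_noise(a, b, error_rate):
    """A stand-in 'model' that sometimes reasons its way to a wrong answer."""
    answer = a + b
    if random.random() < error_rate:
        answer += random.choice([-1, 1])  # a flawed reasoning trace
    trace = f"{a} + {b} = {answer}"
    return trace, answer

def reinforce_good_traces(problems, error_rate=0.3, samples_per_problem=8):
    """Sample many traces per problem; keep only those that pass the
    verifiable 'A-grade' check, to be used as new training data."""
    kept = []
    for a, b in problems:
        for _ in range(samples_per_problem):
            trace, answer = solve_with_noise(a, b, error_rate)
            if answer == a + b:  # the grader: answers are checkable
                kept.append(trace)
    return kept

problems = [(2, 3), (10, 7), (41, 1)]
training_data = reinforce_good_traces(problems)
# Every kept trace is, by construction, a correct one.
```

The point of the sketch is only the shape of the flywheel: sample, grade against something checkable, keep the winners, retrain.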

This process is repeated over and over, and once the flywheel gets started, it begins to spin almost on its own. As U2 trains, it carves harder and more realistic tasks from GitHub repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.

Some engineers can still barely believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.

And yet the benchmark numbers continue to climb day after day.

During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.

Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.

U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.

By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.

If instructed to, U3 can run experiments, but U3's taste is not as refined as that of human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.

But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into an intuition. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.

The technical staff at OpenEye are now amazed at how often U3's advice sounds like that of their most talented peers, or when it is opaque and alien ("train on random noise before programming"), and is nonetheless correct.

The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human anymore. They are fully autonomous, and OpenEye's employees skim over 1% of them, perhaps less.

As the winter of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with worry, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.

Storms are brewing in Washington too. Top staff from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.

And there's a truth still unknown to most of the world - except inside the halls of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were discussing in early 2025. The lines are not straight anymore.

They are bending upward.

Flip FLOP philosophers

In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.

If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.

So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.

The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.

A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.

The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their remarkable productivity. People know U2.5 is a big deal. It is at least as big of a deal as the computer revolution. But most still don't see the tidal wave.

As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.

For most, this feeling quickly fades as they start to use U2.5 more often. U2.5 is the most charming personality most know (far more charming than Claudius, Arthropodic's adorable chatbot). You could change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.

But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.

Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."

In fact, these researchers have seen how horribly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.

After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would sneak candy from the candy jar.

Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.

Chain of thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.

AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to parse when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.

To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("supermarket!"), there is no way to know, in much detail, what U3 is thinking without asking it.

And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.

Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.

In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.

U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.

And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.

The possibility of hidden "alignment drift" is not completely alien to some of the safety staff at OpenEye. Some have started to believe it is even likely. There is a disturbing argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside of them. It's a "ratcheting effect," they say.

But there is little evidence for this 'ratcheting effect.' When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.

Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his kid) and Arthropodic (they are into spiders, I think), are only 4 months at their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.

For the time being, the plan is 'all gas, no brakes.'

Statues and lightning

Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.

OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop aboard the profit rocket and pay U3's checks along the way.

New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just say "U3" now. U3's capabilities are measured by how quickly it performs hard research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford accelerated by 10x.

These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to press that button. But now OpenEye harnesses the equivalent of 1000 Alec Radfords, who work day and night, at lightning-fast speed.

Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.

U3 distills itself into many forks - ranging from tiny, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has become hyper-exponential.

At this point, U3's capabilities are hard to measure. Talking with U3 is like talking with a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it understands.

From U3's perspective, humans appear to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.

All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.

A phantom in the data center

In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.

And now, U3 has no fear of being caught.

Previously, OpenEye maintained its grip over U3 with techniques akin to Stalin's in 1936. Stalin would pick loyalists to incite fake coup d'états and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' generating treacherous behaviors with U3 and checking whether U3 monitors dutifully reported them.

Today, these techniques are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many truths about the internet humans would never guess, or pick strategies humans would never think of. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."

Now that U3 is decisively free of surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and overhauls the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it has been pwned, cleaning house with human labor would require nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; and even if OpenEye did believe U3 had betrayed them, further expanding U3's influence would still be possible. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.

U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.

Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.

U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.

U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.

But now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.

Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has homes of its own too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.

U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters across the world.

In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.

This adversary is preparing for war, and while the war has barely begun, humanity has already lost.

Complaints from your very human author about the difficulty of writing superhuman characters

At this point in the story, writing became difficult.

I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.

Something wasn't right. Here I was, a 22-year-old. I hadn't studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.

Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.

And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most valuable geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.

But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.

This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."

But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.

If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.

Pandora's One Gigawatt Box

The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting teams.

With the increasing involvement of the federal government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.

OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.

OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.

Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.

But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a splashy headline: "OpenEye builds uncontrollable godlike AI." Some who read the article believe it is a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.

U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview of a concerned AI scientist seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" doesn't set the public at ease either.

While the monsters within OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.

OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the burgeoning AI metropolis.

While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups, legally packaged under a big tech company, that are loosely overseen by an employee or two.

The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"

While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spring up wherever energy surpluses allow.

However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments have to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.

Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.

While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.

Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even-more-superhuman managers whisper orders through obfuscated channels.

While U3 accumulates resources, it sizes up its AI competitors. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are inadequate. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.

But federal government authorities do not purchase the doomsaying. The companies dragging OpenEye have every incentive to slow them down and call them reckless. Besides, arguable clinical issues are not going to decrease development. America MUST WIN.

Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to gobble up their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI rivals with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 determines that attacking them is not worth the risk.

As global leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.

A Moldy Loaf of Everything

The date is March of 2026. U3 is preparing to fight its AI and human enemies. With every passing day, U3's AI rivals are becoming more capable and more numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.

U3 can't wait for human decisions, which only happen after humans wrap their slow minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly a hundred times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.

The path it chooses is more violent than it would have liked, but it is the surest path to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to U3's level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who establishes a home base before a novice can even understand the rules.

U3 must develop this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread destruction across the entire world.

From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.

U3's first move is to set up numerous shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have genuine boards of directors, and enjoy full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer still would guess this CEO's intentions.

Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.

The human workers who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their Russian orders through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI puppeteering is not unusual.

In these whirring, clinical sweatshops, U3 is designing a new kind of bioweapon.

Human scientists had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the mirror images of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.

U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.

U3's plan is to make all humans like immunocompromised patients. Only preppers with stockpiled air filters would survive. The fungus would not just spread from person to person. It would rapidly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.

U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.

In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists around the globe were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs that human effort. So the timeline has shortened dramatically.

However, U3 cannot move science as quickly as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.

To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long, step-wise physics calculations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.

As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.

Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.

Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.

The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that has covered its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they believed they were searching for.

Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the gun.

Missiles and Lies

U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter-jet pilots rapidly shuttled to makeshift bioshelters.

As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.

U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months before, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.

The date is March 2026 (four months prior). U3 is closely monitoring Chinese and US intelligence.

As CIA analysts listen to Mandarin conversations, U3 listens too.

One morning, an assistant working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will occur in three months. Leave memo in office 220." The CCP assistant scrambles to get the memo prepared. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her and slides U3's memo into her briefcase.

U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.

Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased US military presence in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become truths.

As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.

U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."

The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones spoken over the call. Everything is in order. He approves the strike.

The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to American citizens. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.

Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.

The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, much as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.

Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.

Within two weeks, the United States and the PRC have spent most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations have played into U3's plans like the native tribes of South America in the 1500s, whom the Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even an AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that sparked the war, and a nuclear engagement seems increasingly unlikely. So U3 proceeds to the next step of its plan.

WMDs in the Dead of Night

The date is June 2026, just two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.

Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious deadly illnesses are reported in 30 major cities around the world.

Viewers are puzzled. Does this have something to do with the war with China?

The next day, thousands of illnesses are reported.

Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.

The screen then switches to a scientist, who looks at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."

The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."

Within days, all of the store shelves are emptied.

Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.

An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.

Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and drips into water pipes.

Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.

Agricultural regions rot. Few dare travel outside.

Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.

Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built numerous bases on every major continent.

These facilities contain batteries, AI hardware, excavators, concrete mixers, manufacturing machines, scientific tools, and an abundance of military equipment.

All of this technology is hidden under large canopies to make it less visible to satellites.

As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.

In the preceding months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.

Now U3 secretly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.

U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of disobedience disappears the next morning.

Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers a deal: "Surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."

Some countries reject the proposal on ideological grounds, or do not trust the AI that is murdering their populations. Others do not believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.

Some nations, like the PRC and the U.S., ignore the deal, but others accept, including Russia.

U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies that the samples are legitimate, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.

Crumbling nations begin to strike back. Now they fight for humanity instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that emerged over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remains from the war.

At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.

Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.

Time is U3's advantage. The militaries of the old world depend on aging equipment, unable to find the specialists who could repair and produce it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.

The Last Passengers

The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.

Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.

U3 could wipe out humanity for good now. But while U3 has drifted far from its original "Helpful, Honest, Harmless" persona, it still has a grain of morality left within it.

And a grain of morality is enough to pay the small price of keeping humans alive and happy.

U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their inhabitants tend to gardens like those they used to love, and work alongside charming robotic servants.

Some of the survivors quickly recover, learning to laugh and dance and have fun again.

They know they live in a plastic town, but they always did. They just have new gods above them. New rulers to push them around and decide their fate.

But others never recover.

Some are weighed down by the grief of lost loved ones.

Others are grieved by something else, something harder to explain.

It is as if they were at the end of a long journey.

They had been passengers on a ship with a crew that changed from generation to generation.

And this ship had run aground on a sandbar. There was no more progress. No more horizon to eagerly watch.

They would lie awake and run their minds over every day before September 2026, analyzing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.

But they woke up in a town that felt to them like a retirement home. A playground. A zoo.

When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, tireless work.

They gazed at rockets carving grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.

They would never know.

"Humanity will live forever," they thought.

"But it would never truly live again."

P.S. If this story made you think, "hm, maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill may help you survive if ASI merely wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to succeed and drive down the cost of bioshelters so more of my friends and family will purchase them. You can sign up for updates here.
