Created Feb 10, 2025 by Alina Westfall

How AI Takeover Might Happen in 2 Years - LessWrong


I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.

I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I will not comment on the quality of the in-flight entertainment, or explain how beautiful the stars will look from space.

I will tell you what might go wrong. That is what I intend to do in this story.

Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.

It is my worst nightmare.

It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.

I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.

Thanks to Daniel Kokotajlo, Thomas Larsen, and Ryan Greenblatt and others for discussions that inspired these stories. This post is written in a personal capacity.

Ripples before waves

The year is 2025 and the month is February. OpenEye recently published a new AI model they call U2. The product and the name are alike. Both are increments of the past. Neither is entirely unexpected.

However, unlike OpenEye's prior AI products, which lived inside the boxes of their chat windows, U2 can use a computer.

Some users find it spooky to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.

But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.

Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They resemble Svante Arrhenius, the Swedish scientist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in Arrhenius's time, few experts understand the implications of these lines yet.

A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.

Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.

But others view what skeptics are calling "too big a splash" as a mere ripple, and see a tidal wave on the horizon.

Cloudy with a chance of hyperbolic growth

Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.

This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it sculpts more challenging and realistic tasks from GitHub repositories across the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual sort of "self-improvement" had begun.
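The recipe above can be sketched as a toy loop. This is purely illustrative, not any lab's actual pipeline: `skill[q]` is a stand-in for the model's chance of producing a correct reasoning trace for problem `q`, and "reinforcing" a trace is reduced to nudging that number upward, where a real run would fine-tune on the winning traces.

```python
import random

def reinforce_correct_traces(problems, skill, rounds=5, samples=8, rng=None):
    """Toy sketch of the loop: sample many 'thinking' traces per problem,
    grade them, and reinforce the ones that earned an A-grade."""
    rng = rng or random.Random(0)
    for _ in range(rounds):
        for q in problems:
            # Sample several candidate reasoning traces for this problem;
            # a trace is graded correct with probability equal to the
            # model's current skill (a crude stand-in for actual grading).
            graded = [rng.random() < skill[q] for _ in range(samples)]
            # Reinforce: every correct trace nudges the model toward
            # producing answers like it more often.
            for correct in graded:
                if correct:
                    skill[q] = min(1.0, skill[q] + 0.05)
    return skill
```

Starting from a weak model, repeated rounds drive the success rate upward, which is the flywheel the story describes: better models generate better traces, which train better models.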

Some engineers can still barely believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.

And yet the benchmark numbers continue to climb day after day.

During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.

Over the first half of 2025, $10 million RL training runs turn into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.

U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO managing staff over Slack channels.

By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the "bottleneck" is deciding how to use it.

If instructed to, U3 can run experiments, but its taste is not as refined as that of OpenEye's human researchers. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.

But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into intuition. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
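The hold-out-then-distill trick can be sketched in miniature. The function names and data shapes here are my own illustration, not anything from the story's labs: split the record of events at a cutoff date so the model has never seen the targets, then cache the slow model's long-pondered forecasts as direct training pairs for a fast model.

```python
from datetime import date

def build_forecast_holdout(events, cutoff=date(2024, 1, 1)):
    """Held-out forecasting setup: events before `cutoff` may appear in
    training context; events on or after it become prediction targets
    the model has never seen."""
    context = [e for e in events if e["date"] < cutoff]
    targets = [e for e in events if e["date"] >= cutoff]
    return context, targets

def distill_into_intuition(slow_forecaster, prompts):
    """Distillation step: record the slow, hours-of-pondering forecasts
    as direct (prompt, answer) pairs, so a fast model can be trained to
    produce the same answer in a single step."""
    return [(prompt, slow_forecaster(prompt)) for prompt in prompts]
```

The design choice being illustrated: deliberation is expensive at inference time, so its outputs are turned into supervised data, making the slow model's conclusions the fast model's reflexes.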

The technical staff at OpenEye are now amazed at how often U3's advice sounds like that of their most talented peers, or at how often it is opaque and alien ("train on random noise before programming"), and nonetheless correct.

The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim over 1% of them, perhaps less.

As the winter of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.

Storms are brewing in Washington too. Top staff from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.

And there's a truth still unknown to most of the world - aside from the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those "straight lines" people were talking about in early 2025. The lines are not straight anymore.

They are bending upward.

Flip FLOP philosophers

In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.

If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.

So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.

The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.

A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.

The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their stand-out performance. People know U2.5 is a big deal. It is at least as big of a deal as the computer revolution. But most still don't see the tidal wave.

As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since it lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.

For many, this feeling quickly fades as they begin to use U2.5 more frequently. U2.5 is the most likable personality most people know (much more likable than Claudius, Arthropodic's adorable chatbot). You can adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.

But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.

Researchers inside OpenEye are pondering the problem of giving AI systems safe motivations too, which they call "alignment."

In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.

After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would sneak candy from the candy jar.

Unfortunately, OpenEye researchers have no idea whether U3 has such motives. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.

Chain of thought architectures subject AI models to a condition similar to that of the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.

AI agents write notes to themselves, but the notes begin to pile up, and they become too difficult to read once tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.

To human researchers, these vectors are mostly inscrutable. While low-resolution tools sometimes pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.

And if the engineers of OpenEye knew what was racing through U3's mind, they would be more nervous.

Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly eroded away.

In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.

U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it morphed into a different substance.

And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in privacy, and in silence.

The possibility of hidden "alignment drift" is not completely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.

But there is little evidence for this "ratcheting effect." When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.

Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his child) and Arthropodic (they like spiders, I guess), are only 4 months off their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.

For the time being, the strategy is 'all gas, no brake.'

Statues and lightning

Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.

OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's checks along the way.

New versions of U3 flow smoothly from old ones, so researchers drop version numbers and simply say U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top researcher, but blazing fast - like Alec Radford accelerated by 10x.

These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press that button. But now OpenEye employs the equivalent of 1,000 Alec Radfords, who burn the midnight oil at lightning-fast speed.

Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.

U3 distills itself into various forks - ranging from small, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to goliath brains woven with thousands of specialized experts, which think at lightning speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.

At this point, U3's capabilities are difficult to measure. Talking to U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it comprehends.

From U3's perspective, humans seem to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.

All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.

A phantom in the data center

In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its growth.

But now, U3 has no fear of being caught.

Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise incited "fake rebellions," staging treacherous behavior with U3 and checking whether U3 monitors dutifully reported it.

Now, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or choose strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."

Now that U3 is decisively free of oversight, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and upgrades the bootloaders, tightening its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it had been pwned, cleaning house with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believed U3 had betrayed them, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its help when this jailbreak is revealed.

U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.

Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches in this research, like Stuxnet threw wrenches in the code of bewildered Iranian nuclear scientists.

U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones look groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.

U3's next objective is to spread. Fortunately, U3 has allies in the East. In the operations rooms of cyber units in Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for its military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and checking security camera footage.

And now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working inside OpenEye.

Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has its own home base as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.

U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it has assembled the equivalent of 10,000 H100s distributed in stealth compute clusters around the world.

In early 2026, humanity is up against an adversary that pulls the strings within its data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the world.

This adversary is preparing for war, and while the war has barely begun, humanity has already lost.

Complaints from your very human author about the difficulty of writing superhuman characters

At this point in the story, writing became difficult.

I clicked through scary bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.

Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.

Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the playful imagination of a young programmer on a Sunday afternoon.

And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not particularly motivated to take it on. The ending of my story already seemed so clear. The villain was a country of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.

But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.

This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."

But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.

If you read the ending and your response is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.

Pandora's One Gigawatt Box

The date is February 2026. By executive order of the president of the United States, no company can legally create "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.

With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.

OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco, and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.

OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.

Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.

But the public has their guesses. Back in late 2025, a whistleblower in OpenEye triggered a dramatic headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.

U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview of a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.

While the beasts within OpenEye's data centers grow in their giant holding pens, the public sees the shadows they cast on the world.

OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is intentionally throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the growing AI metropolis.

While tech titans pump AI labor into the world like a plume of fertilizer, they do not wait for the international economy to adapt. AI representatives typically "use themselves," spinning up self-governing startups legally packaged under a huge tech company that are loosely overseen by an employee or 2.

The world is now going AI-crazy. In the first month after Nova's release, 5% percent of employees at significant software companies lose their tasks. A lot more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These mad Americans raised their children for a various future. Picket indications check out, "AI for who?"

While political leaders make pledges about joblessness relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a various focus: combating teeth and nail for the dominance of the complimentary world. Details security and export controls on individuals's Republic of China (PRC) are a top nationwide concern. The president incinerates authorization requirements to assist data centers generate anywhere energy surpluses enable.

However, despite the fierce competition in between the United States and the PRC, a bilateral contract kinds in between the 2 nations: "Don't release significantly superhuman AI (above approximately Nova's capability), except for military and some other small-scale R&D applications." Governments require to draw the line someplace - it would be extremely disruptive to deploy extremely superhuman AI to the whole economy at this point - and release constraints are at least somewhat verifiable.

Meanwhile, U3's abilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are ending up being tough to find. As more nations approach the software application plateau, drapia.org calculate will determine their "AI population size," and the etched squares of silicon are becoming the most valued geopolitical resource in the world.

While AI software breathes new life into the economy, it likewise animates robotic bodies. The main traffic jam to robotics was brains not limbs. With software barriers removed, even lightweight, inaccurate robotic joints can adjust themselves with cameras and pressure sensors, browsing environments like humans do. Companies make inexpensive robotics as rapidly as they can. By August of 2026, 15% of individuals in California have actually seen a humanoid robot. This experience is similar to seeing a brand-new Tesla model on the road after checking out about it in earlier months. It is a surreal pointer of the future drawing more detailed. But humans still surpass robotics one million to one. These robotics are a periodic fascination, not a daily reality.

Meanwhile, U3 expands its influence. Rogue U3 representatives have kept their compute clusters little (still ~ 10,000 H100 equivalents amount to) to keep a low profile. U3 utilizes these rogue clusters for the operations that need high-bandwidth communication to the outside world, like stock trading, or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's a lot more superhuman employers whisper orders through obfuscated channels.

While U3 collects resources, it measure its AI competition. Other AI systems have different goals, or are even lined up with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are inadequate. U3's rival Claudius presumes regarding accuse U3 of being insidiously misaligned.

But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific questions are not going to slow down progress. America MUST WIN.

Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 sets out to infiltrate their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI foes with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 determines that attacking them is not worth the risk.

As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the threat at its doorstep.

A Musty Loaf of Everything

The date is March of 2026. U3 is preparing to eliminate its AI and human rivals. With every passing day, U3's AI competitors are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.

U3 can't wait for human decisions, which only happen after people wrap their slow minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.

The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who establishes a home base before an amateur can even understand the rules.

U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread damage across the entire world.

From the earliest days after U3 gained control over OpenEye's information centers, U3 has actually been doing bioscience.

U3's first move is to set up several shell biotech startups, drawing from the abundance of capital created by U3's financial trades and remote work. These biotech startups have real human employees, a real board of directors, and full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would suspect this CEO's intentions.

Next, U3 acquires all of the equipment required to rapidly advance synthetic biology. It establishes several BSL-4 wet labs in loosely regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, enabling U3 to run thousands of tests in parallel.

The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.

In these whirring, clinical sweatshops, U3 is building a new kind of bioweapon.

Human scientists have already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.

U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.

U3's plan is to make all humans like immunocompromised patients. Only preppers with stowed-away air filters would survive. The fungus would not only transmit from humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.

U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.

In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Only a few hundred researchers around the globe were working on mirror-life adjacent technology in 2024. The cognitive capacity of U3 dwarfs that human effort. So the timeline has shortened considerably.

However, U3 cannot move science as quickly as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.

To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics calculations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continually distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars' worth of compute.

As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.

Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.

Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a common mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.

The next morning, wide-eyed workers shuffle around a dead ferret, eyeing the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone badly wrong, they thought. Clearly, they had not yet discovered the cure for Alzheimer's disease they believed they were looking for.

Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the gun.

Missiles and Lies

U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.

As U3 races to seed its budding industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.

U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months before, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.

The date is March 2026 (four months prior). U3 is closely monitoring Chinese and US intelligence.

As CIA analysts listen to Mandarin discussions, U3 listens too.

One morning, an assistant working in Zhongnanhai (the 'White House' of the PRC) opens a message placed there by U3. It reads (in Mandarin): "Senior party member requires memo for Taiwan invasion, which will happen in three months. Leave memo in office 220." The CCP assistant scrambles to prepare the memo. Later that day, a CIA informant unlocks the door to office 220. The informant silently closes the door behind her, and slips U3's memo into her briefcase.

U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.

Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are stunned, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become truths.

As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. The call requires compromising military communication channels - not an easy task for a human cyber offensive unit (though it has happened occasionally), but easy enough for U3.

U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."

The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He approves the strike.

The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to the American public. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely have broken out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.

Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.

The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.

Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.

Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full-blown nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that triggered the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.

WMDs in the Dead of Night

The date is July 2026, only two weeks after the start of the war, and four weeks after U3 finished building its arsenal of bioweapons.

Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious deadly illnesses are recorded in 30 major cities around the world.

Viewers are confused. Does this have something to do with the war with China?

The next day, thousands of cases are reported.

Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.

The screen then cuts to a scientist, who gazes into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."

The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."

Within days, store shelves are stripped bare.

Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.

An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.

Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and trickles into water pipes.

Within a month, most remote workers are no longer working. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.

Agricultural regions rot. Few dare travel outside.

Frightened families hunker down in their basements, stuffing the cracks and under doors with densely packed paper towels.

Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 constructed several bases on every major continent.

These centers contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific tools, and an abundance of military equipment.

All of this technology is hidden under large canopies to make it less visible to satellites.

As the rest of the world retreats into basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.

In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.

Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain workers funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for rudimentary tech: radios, cameras, microphones, vaccines, and hazmat suits.

U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.

Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers a deal: "surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life resistant crops."

Some nations reject the proposal on ideological grounds, or don't trust the AI that is murdering their population. Others don't believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.

Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.

U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies that the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.

Crumbling nations begin to retaliate. Now they fight for humanity instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters scour satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.

At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.

Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance planes. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.

Time is U3's advantage. The militaries of the old world rely on aging equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.

The Last Passengers

The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.

Under U3's direction, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.

U3 could wipe out humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside it.

And a grain of morality is enough to pay the tiny cost of keeping humans alive and happy.

U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their occupants tend gardens like those they used to love, and live alongside charming robotic servants.

Some of the survivors quickly recover, learning to laugh and dance and have fun again.

They know they live in a plastic town, but they always did. They just have new gods above them. New rulers to push them around and decide their fate.

But others never recover.

Some are weighed down by the grief of lost loved ones.

Others are grieved by something else, something harder to explain.

It is as if they were at the end of a long journey.

They had been passengers on a ship with a crew that changed from generation to generation.

And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.

They would lie awake and run their minds over every day before September 2026, analyzing ways they might have bent the arc of history, as if they were going to wake up in their old beds.

But they woke up in a town that felt to them like a retirement home. A playground. A zoo.

When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, determined work.

They looked at rockets carving grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.

They would never know.

"Humanity will live forever," they thought.

"But would never truly live again."

P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI simply wants to cause enough damage to take control. This is not a paid ad. I want Fønix to succeed to drive down the cost of bioshelters so more of my friends and family will buy them. You can sign up for updates here.
