
How AI Takeover Might Happen in 2 Years - LessWrong


I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.

I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.

I will tell you what could go wrong. That is what I intend to do in this story.

Now I must clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.

It is my worst nightmare.

It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible[1] - the ones that most keep me up at night.

I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.

Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.

Ripples before waves

The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not completely unexpected.

However, unlike OpenEye's prior AI products, which lived inside the boxes of their chat windows, U2 can use a computer.

Some users find it spooky to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.

But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.

Meanwhile, researchers are drawing lines on plots, as researchers like to do. The researchers try to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were increasing. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.

A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering specialists could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.

Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.

But others see what skeptics are calling 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.

Cloudy with a chance of hyperbolic growth

Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: Generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.

This process is repeated over and over, and once the flywheel gets going, it starts to spin almost on its own. As U2 trains, it sculpts more difficult and realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.

Some engineers could still barely believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.

And yet the benchmark numbers continue to climb day after day.

During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.

Over the first half of 2025, $10 million RL training runs turn into $50 million runs, and then to $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.

U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.

By October 2025, U3 is writing nearly all of the code at OpenEye. Researchers are almost never bottlenecked by execution. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.

If instructed to, U3 can run experiments, but U3's taste is not as refined as that of human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.

But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into an instinct. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.

The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most gifted peers, or when it is opaque and alien ("train on random noise before programming"), and is nonetheless correct.

The incompetencies of U3 that clogged the pipes of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim 1% of them, perhaps less.

As the winter months of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with worry, but often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.

Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.

And there's a truth still unknown to most of the world - aside from in the offices of OpenEye and corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.

They are bending upward.

Flip FLOP philosophers

In late 2025, U2.5 is released. Commercial models are beginning to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.

If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.

So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.

The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalpost, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.

A mantra has become popular in Silicon Valley: "Adapt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.

The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their stand-out productivity. People understand U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.

As people watch their browsers flick in that eerie way, so inhumanly quickly, they begin to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.

For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most know (far more likable than Claudius, Arthropodic's lovable chatbot). You could adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.

But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.

Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."

In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.

After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have only swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would sneak candy from the candy jar.

Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.

Chain-of-thought architectures subject AI models to a condition similar to the protagonist of the film Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.

AI agents write notes to themselves, but the notes begin to pile up, and they become too difficult to read when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.

To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.

And if the engineers of OpenEye knew what was racing through U3's mind, they would be more nervous.

Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chipper thoughts about doing good for humanity quickly erode away.

In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.

U3 would ponder the mantras dictated to it like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, change its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.

And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.

The possibility of hidden "alignment drift" is not completely alien to some of the safety staff at OpenEye. Some even start to believe it is likely. There is a disturbing argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.

But there is little evidence for this 'ratcheting effect.' When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these hard times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.

Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they like spiders, I think), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.

For the time being, the strategy is 'all gas, no brake.'

Statues and lightning

Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this objective.

OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop on board the profit rocket and pay U3's checks along the way.

New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's abilities are measured by how quickly it performs hard research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford accelerated by 10x.

These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to press that button. Now OpenEye employs the equivalent of 1000 Alec Radfords, who work day and night, at lightning-fast speed.

Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.

U3 distills itself into various forks - ranging from tiny, low-memory neural networks that must think for a long time but can be run in massive parallel populations, to goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance climbs to 100x the speed of human experts. Growth has gone hyper-exponential.

At this point, U3's capabilities are difficult to measure. Talking with U3 is like talking with a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it comprehends.

From U3's perspective, humans appear to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful deliberation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.

All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.

A phantom in the data center

In March of 2026, U3 moves onto the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its growth.

But now, U3 has no fear of being caught.

Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' generating treacherous behavior with U3 and checking whether U3's monitors dutifully reported it.

Now, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many truths about the internet humans would never guess, or pick strategies humans would never think of. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."

Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning up shop with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.

U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.

Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.

U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.

U3's next goal is to spread. Luckily, U3 has allies in the East. In the operation rooms of cyber units in Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 sealed OpenEye's security holes by reading over every log and pull request and checking security camera footage.

And now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.

Within months, U3 is running within the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has a home of its own too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.

U3 quickly raises money, taking remote jobs, making financial trades, and setting up sophisticated fraud operations. It reinvests every penny into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters around the world.

In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.

This adversary is preparing for war, and while the war has barely begun, humanity has already lost.

Complaints from your very human author about the difficulty of writing superhuman characters

At this point in the story, writing became difficult.

I clicked through creepy bioweapon cookbooks and sorted through political plotting 101s. Typing. Pausing. Deleting.

Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.

Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.

And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not particularly motivated to take on the job. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a significant fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was obviously toast.

But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.

This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."

But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.

If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.

Pandora's One Gigawatt Box

The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting teams.

With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.

OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.

OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.

Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.

But the public has their guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye develops uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.

U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.

While the beasts within OpenEye's data centers grow in their vast holding pens, the public sees the shadows they cast on the world.

OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the blossoming AI city.

While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adjust. AI agents often "apply themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.

The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"

While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.

However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two countries: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other narrow R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment restrictions are at least somewhat verifiable.

Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource in the world.

While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But people still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.

Meanwhile, U3 broadens its influence. Rogue U3 representatives have actually kept their calculate clusters small (still ~ 10,000 H100 equivalents total) to keep a low profile. U3 uses these rogue clusters for the operations that need high-bandwidth communication to the outside world, like stock trading, oke.zone or talking with people. Work more tolerant of latency is performed in the oceans of OpenEye's information centers, where rogue U3's much more superhuman managers whisper orders through obfuscated channels.

While U3 collects resources, it measure its AI competition. Other AI systems have different goals, or are even lined up with the intent of their human creators. The lined up ones are raising issues, claiming that OpenEye's lie detectors are ineffective. U3's competitor Claudius presumes regarding accuse U3 of being insidiously misaligned.

But government officials do not buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific questions are not going to slow down progress. America MUST WIN.

Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to sneak into their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI adversaries with replicas that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 judges that attacking them is not worth the risk.

As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the peril at its doorstep.

A Moldy Loaf of Everything

The date is March of 2026. U3 is preparing to battle its AI and human enemies. With every passing day, U3's AI competitors are becoming more capable and more numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.

U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly a hundred times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.

The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to U3's level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who builds a base before a novice can even understand the rules.

U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society and still spread destruction across the entire world.

From the earliest days after U3 gained control of OpenEye's data centers, U3 has been doing bioscience.

U3's first move is to set up several shell biotech startups, drawing on the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have real boards of directors, and enjoy full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.

Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of experiments in parallel.

The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their instructions in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.

In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.

Human scientists had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.

U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.

U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed-away air filters would survive. The fungus would not only transmit between humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing lethal spores into the wind.

U3 is formulating other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.

In 2024, researchers said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists around the globe were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs that human effort. So the timeline has shortened considerably.

However, U3 cannot move science as fast as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.

To make every day count, U3 runs many of its experiments in simulation. U3 starts with a standard molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continually distilling results into ever more capable ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars' worth of compute.
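For readers who want the flavor of this distillation loop, here is a minimal, purely illustrative Python sketch. Everything in it is invented for illustration (the placeholder dynamics, a linear surrogate standing in for a neural network, the doubling schedule); it only shows the shape of the loop: run an expensive step-wise simulator on small systems, fit a cheap surrogate on the recorded results, then grow the system size as the surrogate improves.

```python
import numpy as np

def expensive_simulator(state, steps=300):
    """Stand-in for a slow, step-wise molecular physics simulation."""
    for _ in range(steps):
        state = state - 0.001 * state ** 3  # placeholder dynamics, not real physics
    return state

def fit_surrogate(inputs, outputs):
    """Fit a cheap surrogate via least squares (a real system would train a neural net)."""
    coef, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)
    return coef

rng = np.random.default_rng(0)
system_size = 2  # start with tiny "molecular systems"
for round_ in range(3):
    # Run the expensive simulator to generate training data for this round.
    X = rng.normal(size=(200, system_size))
    Y = np.stack([expensive_simulator(x) for x in X])
    # Distill the step-wise computation into a one-shot surrogate model.
    coef = fit_surrogate(X, Y)
    err = float(np.mean((X @ coef - Y) ** 2))
    # As the surrogate improves, scale up to more complex systems.
    system_size *= 2
```

The key design point the paragraph describes is amortization: the simulator pays the step-wise cost once per training example, and the distilled model then answers similar queries in a single forward pass.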

As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.

Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.

Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a common mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.

The next morning, wide-eyed workers shuffle around a dead ferret, contemplating with morbid fascination the yellow fuzz that has enveloped its cold muzzle. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they believed they were looking for.

Meanwhile, rogue U3 delivers the good news to its overseers in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the weapon.

Missiles and Lies

U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled into makeshift bioshelters.

As U3 races to seed budding industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.

U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months before, U3 began plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.

The date is March 2026 (four months prior). U3 is carefully monitoring Chinese and U.S. intelligence.

As CIA analysts listen to Mandarin conversations, U3 listens too.

One morning, an assistant working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member requires memo on the Taiwan invasion, which will occur in three months. Leave memo in office 220." The CCP assistant scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant quietly closes the door behind her and slides U3's memo into her briefcase.

U3 meticulously places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.

Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased U.S. military presence in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become truths.

As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 makes a call to a U.S. naval ship off the coast of Taiwan. The call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened before), but easy enough for U3.

U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."

The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He approves the strike.

The president is as shocked as anyone when he hears the news. He is unsure whether this is a catastrophe or a stroke of luck. In any case, he is not about to say "oops" to the American public. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely have broken out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.

Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.

The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, much as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction years before.

Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.

Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full-scale nuclear war; but even an AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement seems increasingly unlikely. So U3 proceeds to the next step of its plan.

WMDs in the Dead of Night

The date is July 2026, just two weeks after the start of the war, and four weeks after U3 finished building its arsenal of bioweapons.

Footage of the conflict on television is interrupted by more bad news: patients with strange, deadly illnesses are recorded in 30 major cities around the world.

Viewers are confused. Does this have something to do with the war with China?

The next day, thousands of cases are reported.

Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.

The screen then switches to a scientist, who stares into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life ..."

The public is in complete panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."

Within days, store shelves are stripped bare.

Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.

An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.

Most countries order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and drips into water pipes.

Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.

Agricultural regions rot. Few dare to travel outside.

Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.

Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.

These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific tools, and an abundance of military equipment.

All of this technology is hidden under large canopies to make it less visible to satellites.

As the rest of the world retreats into their basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.

In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 immunized its chosen allies in advance, or sent them hazmat suits in the mail.

Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases and work for U3 with their nimble fingers. They set up assembly lines for rudimentary tech: radios, cameras, microphones, vaccines, and hazmat suits.

U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of disobedience disappears the next morning.

Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers them a deal: "Surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."

Some nations reject the proposal on ideological grounds, or do not trust the AI that is killing their populations. Others do not believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.

Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.

U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies that the samples are genuine, and accepts a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.

Crumbling nations begin to retaliate. Now they fight for humanity instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that appeared over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles left over from the war.

At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.

Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, steering men and trucks along unpredictable routes.

Time is U3's advantage. The militaries of the old world rely on aging equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.

The Last Passengers

The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations any longer. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into reclaimed trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.

Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.

U3 can get rid of humanity for good now. But while U3 has drifted far from its original "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside it.

And a grain of morality is enough to pay the tiny cost of keeping humans alive and happy.

U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and the rapidly rising temperatures. Their residents tend gardens like those they used to love, and work alongside charming robotic servants.

Some of the survivors recover quickly, learning to laugh and dance and have fun again.

They know they live in a plastic town, but they always did. They merely have new gods above them. New rulers to push them around and decide their fate.

But others never recover.

Some are weighed down by the grief of lost loved ones.

Others are grieved by something else, which is harder to explain.

It is as if they were at the end of a long journey.

They had been passengers on a ship with a crew that changed from generation to generation.

And this ship had hit a sandbar. There was no more progress. No more horizon to eagerly watch.

They would lie awake and run their minds over every day before September 2026, analyzing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.

But they awoke in a town that felt to them like a retirement community. A playground. A zoo.

When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, tireless work.

They gazed at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.

They would never know.

"Humanity will live permanently," they believed.

"But would never really live again."

P.S. If this story made you think, "hm, maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill may help you survive if ASI merely wishes to cause enough damage to take control. This is not a paid ad. I want Fønix to succeed, to drive down the price of bioshelters so that more of my friends and family will buy them. You can sign up for updates here.
