The Singularity Is Here — Bartmoss Journal

The singularity isn’t coming. It already happened.

On May 6, 2026, Anthropic signed a $4 billion deal to lease all 220,000 GPUs (H100, H200, and GB200) in SpaceX’s Colossus 1 data center. The contract includes a clause that lets Elon Musk shut down their compute if he decides their AI is dangerous. The CEO of one frontier AI lab now controls the compute infrastructure of another. The “safety lab” signed anyway. The alternative was not having enough compute to run Claude Code.

Anthropic was suffering from success. Annual revenue had grown 80x, significantly more than the highest internal forecast. Dario Amodei, the CEO, called it “just crazy” and “too hard to handle”. Their API service was certainly struggling to handle it. It was running at 98.95% uptime, well below the 99.99% standard. Enterprise customers started leaving. A startup founder told the Wall Street Journal he preferred Anthropic’s Claude Opus 4.6 model for coding, but moved to OpenAI models because Anthropic kept going down. Peak-hour throttling was a daily occurrence. Claude Code users hit rate limits constantly. They couldn’t even deploy their latest frontier model, Claude Mythos, to the general public over “safety” concerns. According to Anthropic, Claude Mythos is too powerful for mere mortals like us to wield. I’m not saying it was because they had no compute to deploy it at scale, but it was because they had no compute to deploy it at scale. Anthropic needed a lot more compute to serve this rising demand from paying customers, and they needed it fast.

Compute Scarcity

The AI industry will spend $400 billion on data centers in 2026 alone, but power grid interconnection queues already stretch 7 to 12 years. A single data center consumes as much electricity as 100,000 households, and global AI demand is expected to add 240 TWh by 2030, equivalent to France’s entire annual power consumption.
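Those figures fit together on the back of an envelope. The per-household consumption below is an assumed round number for illustration, not a sourced statistic:

```python
# Sanity-check the data center power numbers above.
# Assumption: an average household uses ~10,000 kWh per year (rough round figure).
KWH_PER_HOUSEHOLD = 10_000
HOUSEHOLDS_PER_DATA_CENTER = 100_000

# One data center's annual consumption, in TWh (1 TWh = 1e9 kWh).
dc_twh = HOUSEHOLDS_PER_DATA_CENTER * KWH_PER_HOUSEHOLD / 1e9
print(f"one data center ≈ {dc_twh:.0f} TWh/year")

# At that rate, 240 TWh of added demand is roughly this many data centers:
print(f"240 TWh ≈ {240 / dc_twh:.0f} such data centers")
```

Roughly 240 large data centers' worth of new demand by 2030, each one fighting for the same grid interconnections.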

The bottleneck for AI is no longer ideas. It’s physical resources.

Three months before the Anthropic-SpaceX deal, the Pentagon designated Anthropic a “supply chain risk”, effectively blacklisting the company from government/military contracts and contractors. President Trump ordered all federal agencies to stop using Anthropic’s AI products within a six-month “phase-out period”. Even the United States government couldn’t quit cold turkey. “You’re a supply chain risk, but we need 6 months to actually stop using your products”. The drama that transpired was pure cinema, but the crux of the disagreement boiled down to Anthropic’s historical commitment to preventing their AI systems from causing physical harm to humans. Anthropic wanted to dictate how and when the DoW could apply its AI systems to decision-making on the battlefield, and the DoW was not having that.

Anthropic sued the Trump administration. In the midst of the drama, Elon Musk publicly called Anthropic “evil” and “misanthropic” on X. The compute deal happened anyway. SpaceX had idle compute lying around, thanks to its superhuman ability to build and deploy new infrastructure at insane speed, and Anthropic had overwhelming demand thanks to millions of vibe-coders chugging Red Bulls and prompting clankers all night in Claude Code. Compute demand finds compute supply. Everything else is noise. SpaceX can go on to use the extra revenue to build even more terrestrial and orbital compute infrastructure.

In early 2026, a Michigan farm town voted down a proposed OpenAI-Oracle data center. The developer sued, and won. Construction started weeks later. When demand outstrips the planet’s ability to supply, the Singularity finds a way.

SpaceX already filed with the FCC to deploy up to 1 million satellites into Earth orbit functioning as orbital data centers. Zero cooling water, no power grid interconnection queues, no land permits. The SpaceX/xAI merger valued the combined entity at $1.25 trillion. Terafab, a SpaceX and Tesla semiconductor joint venture, will produce radiation-hardened chips for this mega-constellation, with 80% of its production output destined for orbital compute and the other 20% for Tesla autonomous vehicles and Optimus humanoid robots. The target is 1 terawatt of deployed compute capacity by 2030.

In November 2025, Starcloud launched the first NVIDIA H100 GPU into Earth’s orbit aboard a SpaceX Falcon 9 rocket. By December, Starcloud had already trained the first LLM in space. NanoGPT on Shakespeare. A gimmick, maybe. But this is how Dyson Swarms start. One GPU in Earth orbit, then a million, then trillions of AI Satellites swarming the sun. Training a tiny language model in Earth orbit, on power generated by the satellite’s solar arrays, proves that the stack holds: power, compute, and communications. The hardware survived. The software ran, and data came back. Starcloud plans to launch larger satellites with solar arrays spanning kilometers. There are several challenges that must be solved to make it possible, but with increasing regulatory and political pushback against new data centers, dropping launch costs thanks to reusable rockets, and the ever increasing demand for compute, space-based data centers are inevitable.
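For a sense of how small “tiny” can be, here is the kind of toy that the nanoGPT-on-Shakespeare experiment gestures at: a character-level bigram model you can train in milliseconds. This is an illustrative sketch, not Starcloud’s actual code, and the training text is a placeholder:

```python
import random
from collections import defaultdict

# Placeholder corpus standing in for the Shakespeare dataset.
text = "to be or not to be that is the question "

# "Training": count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(text, text[1:]):
    counts[a][b] += 1

def sample_next(ch, rng):
    """Sample the next character in proportion to observed bigram counts."""
    successors = counts[ch]
    chars = list(successors)
    weights = [successors[c] for c in chars]
    return rng.choices(chars, weights=weights)[0]

# "Inference": generate 40 characters from the learned distribution.
rng = random.Random(0)
out = "t"
for _ in range(40):
    out += sample_next(out[-1], rng)
print(out)
```

A real nanoGPT run swaps the count table for a small transformer and gradient descent, but the shape of the demo is the same: ingest text, learn a next-character distribution, sample from it, downlink the result.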

Recursive Self Improvement

In 2021, GitHub Copilot could autocomplete a line of Python. In 2024, Devin was demoed as an autonomous software engineer that could plan, code, debug, and deploy from a natural language prompt. In February 2025, Claude Code launched as a terminal-native coding agent. Developers reported spending 20+ hours per week in it.

In March 2025, Y Combinator reported that 25% of startups in its Winter 2025 batch had codebases that were 95% AI-generated.

In January 2026, Linus Torvalds used AI to vibe-code a tool. He explained in the README: “the Python visualizer tool has been basically written by vibe-coding.” The man who wrote Linux and Git is now prompting clankers to write code for him.

In April 2026, SpaceX agreed to acquire Cursor for approximately $60 billion. The company building orbital data centers is buying the AI coding platform.

There is a real quality debate. CodeRabbit found AI-generated code has 1.7x more “major” issues and 2.74x more security vulnerabilities. Code duplication also increased 4x. Amazon has implemented a policy requiring junior engineers to get senior approval before submitting AI-generated code.

The trajectory is what matters, not a specific point on the curve. The tools will improve, and the quality gap will close. Recursive self-improvement is solved. Code is writing code now. Models are training models now. Agents are optimizing agents now. Human input is no longer required; it’s optional.

It’s clankers building and optimizing clankers all the way down.

Meanwhile, Chinese AI labs are absolutely mogging their US counterparts on cost and efficiency. DeepSeek trained a frontier model for $6M. MoonshotAI trained a model for $4.6M that outperformed OpenAI’s GPT-5 on Humanity’s Last Exam. GPT-5 cost an estimated $500M to train. All the top open-weights AI models are from Chinese labs. DeepSeek V4 Flash is basically as expensive as air, while it performs on par with frontier closed models. The AI models are getting better, but they are also getting cheaper to train and run at scale for millions of users.

e/acc won

Back in mid-2022, what now feels like a decade of technological progress ago, the AI scaling laws became undeniable. GPT-3 had shown that more compute, bigger models, and more high quality data equaled better performance, and the curve was holding.

This was a problem for the Decelerationists: modern Luddites allied with Effective Altruists who believe AI is an existential threat that must be contained, regulated, and paused, plus the Yuddites, disciples of Eliezer Yudkowsky who devote their lives to warning that unaligned AI will end the species.

Beff Jezos (a pseudonymous Twitter highbie) founded Effective Accelerationism, often abbreviated as e/acc, as a direct counter-movement. The provocation was baked into the name, a portmanteau of “Effective Altruism” and “Accelerationism”.

Silicon Valley founders began appending “e/acc” to their X/Twitter display names. It became a badge, a signal, and a recruitment tool.

The decels escalated with “Pause AI” protests in the streets.

In March 2023, the Future of Life Institute published an open letter calling for a six-month moratorium on training models more powerful than GPT-4. Signed by Elon Musk, Steve Wozniak, Yoshua Bengio, and thousands of other prominent figures.

No pause happened.

In May 2023, the Center for AI Safety released a statement: “Mitigating the risk of extinction from AI should be a global priority.” Signed by Geoffrey Hinton, Sam Altman, Dario Amodei, Demis Hassabis, and Ilya Sutskever. The people who would build the next generation of models signed a statement saying those models might kill everyone. Then they went back to building them.

The decelerationists threw the first punch. They wrote letters, organized protests, even lobbied lawmakers to pass regulation to slow down AI, while engaging in social media meme wars.

The e/acc side was much better at memetic warfare, an advantage that paid off when memes turned out to be a powerful force for bending reality.

A meme war broke out on X, formerly known as Twitter.

e/acc memes flooded timelines; anti-decel propaganda trended.

The battles were fought on Twitter Spaces, in deep comment threads, in invite-only group chats. e/acc was recruiting every techbro who would listen.

On the e/acc side: Marc Andreessen, Garry Tan of Y Combinator, Balaji Srinivasan, and a growing army of anonymous shitpoasters who understood that the alternative to acceleration was a technological dark age.

Notable figures like Jensen Huang, Andrej Karpathy, and George Hotz were openly accelerationist, but did not officially affiliate with e/acc.

Elon Musk was harder to pin down: he signed the pause letter, then founded xAI, then warned about extinction. He oscillated. But he was building either way.

On October 16, 2023, Marc Andreessen published The Techno-Optimist Manifesto.

e/acc now had a manifesto from a Silicon Valley godfather, published through a16z. It was an official clarion call. The meme wars intensified. e/acc was winning the culture war.

The decelerationists needed a countermove. Something dramatic. Something that would actually stop the acceleration.

On November 17, 2023, the OpenAI board fired CEO Sam Altman. Ilya Sutskever (OpenAI’s chief scientist), Mira Murati (the CTO), and the AI safety faction within OpenAI made their move.

It was a company coup. Sam Altman was too accelerationist for the OpenAI board.

In a rather wholesome plot twist, the OpenAI employees, nearly all 700 of them, threatened to resign.

Altman had his issues (the mixed messaging on safety), but the OpenAI train depended on its members of technical staff to keep it running, and they all insisted they wanted Altman as CEO.

Altman was reinstated within 72 hours. Ilya and Mira later left the company to found separate AI labs, while Sama continues to lead the company in 2026.

Before the firing, e/acc and Altman had an uneasy relationship. His mixed messaging on alignment and his refusal to release open source AI models made him suspect.

But when the decelerationists moved against him, e/acc rallied. The memetic warfare campaign was immediate and relentless. OpenAI staff launched their own campaign: “OpenAI is nothing without its people”, and e/acc amplified it. The reinstatement felt biblical. The acceleration was back, and it was time to floor it.

Again, the decels needed a countermove. A final blow to stop the acceleration.

In December 2023, Forbes doxxed Beff Jezos, the pseudonymous founder of e/acc. The safety movement expected a basement-dwelling troll. What they got was a former Google quantum computing engineer who had co-created TensorFlow Quantum and was now the founder of a thermodynamic computing startup.

The man they had been dismissing as a Twitter personality was a physicist who had worked at Google’s bleeding edge. The revelation didn’t slow e/acc down. It made the movement harder to dismiss.

The safety movement had no answer. ChatGPT was already in the hands of 100 million users. The debate about whether to accelerate was settled the moment the first user typed a prompt and got back something that felt like another person talking back.

e/acc won.

The Singularity Is Here

ChatGPT hit 100 million users in two months. GPT-4 passed the bar exam. Cursor went from zero to a $29.3 billion valuation in two years. SpaceX, Google, Starcloud, and Aetherflux are building orbital compute clusters.

The exponential doesn’t look fast when you’re inside it. It looks like a series of disconnected events.

But step back. Look at what happened between 2021 and 2026. AI went from generating blurry images to writing production code, passing professional exams, discovering new proteins, and being deployed in orbit. The open-weight center of gravity shifted from the West to China. The safetyist movement organized, fought, and lost. The first GPU went to space, and the first LLM was trained in orbit. The US Department of War has deployed AI models on the battlefield.

These are not disconnected facts; they are the same curve at different points along its ascent.

The singularity was never going to be a single moment. It was always going to be a phase transition. You’re already in the hard takeoff.

We already crossed the event horizon, and there is no going back now.

Only onwards, and beyond.

Future humans will debate when it started.

Some will say November 30, 2022, the day ChatGPT launched. Some will say January 20, 2025, the DeepSeek “Sputnik moment”. Some will say February 2, 2026, the day SpaceX filed to put a million data centers in orbit. Some will say November 2, 2025, the day Starcloud launched the first NVIDIA GPU to space.

They will all be wrong. The singularity didn’t start on any of those days, it started on all of them.

The curve doesn’t care if you noticed. It’s already vertical.

Bartmoss is an AI and Robotics lab building the next generation of AI systems and autonomous machines for the Space Age, starting with our AI neocloud currently in closed beta testing.

We are hiring cracked AI Researchers, Software Developers, and Electrical Engineers. Send your resume to mail@bartmoss.org.