
GPT-5.5 Arrives - The Venture Singularity
The first quarter of 2026 saw investors pour $300 billion into startups globally—an all-time quarterly record, and more than 70% of all the venture capital spent in all of 2025. But the thing you need to understand about this number is that nearly two-thirds of that money went to just four companies.
PLUS: OpenAI launched GPT-5.5, a model it claims can finally stop talking about work and actually do it, while a growing coalition of EU countries is blocking Germany’s attempt to gut the bloc’s AI Act on behalf of industrial giants.
Here’s the question no one is asking: If $300 billion in a single quarter is what capital allocation looks like in the age of frontier AI, what happens when those bets go bad? I’ve spent the past week talking to people familiar with the venture and regulatory landscapes, and what I keep hearing is a kind of vertigo. The numbers are so large that they stop feeling real. But the underlying dynamics are extremely real—and they are concentrating risk in ways that most market participants are not pricing in.
Following: The Venture Singularity
PLUS: OpenAI’s GPT-5.5 launch brings agentic coding accuracy to 82.7% on Terminal-Bench 2.0, with claims of 20% faster token generation thanks to smarter partitioning algorithms co-designed with NVIDIA for its GB300 systems.
PLUS: Google Cloud unveiled a suite of agent-building tools at its Las Vegas conference, including a dedicated inbox for AI agents to post progress reports to each other. The company is betting that enterprise adoption will follow the same path as cloud migration—slowly, then suddenly.
PLUS: Geoffrey Hinton, speaking at a UN conference in Geneva, called AI “a very fast car with no steering wheel” and said regulation must provide one. It feels like he’s been saying this for years. It also feels like no one with real leverage is listening.
The Crunchbase data, published this week, is breathtaking even by AI-boom standards. Q1 2026 global venture investment of $300 billion represents a 150% increase both quarter over quarter and year over year. It exceeds the total annual venture spending of every year prior to 2018. And it is almost entirely a story about a handful of American companies: OpenAI ($122 billion round), Anthropic ($30 billion), xAI ($20 billion), and Waymo ($16 billion) collectively raised $188 billion. That’s nearly 63 cents of every venture dollar spent on the planet, flowing to four firms headquartered within a 40-mile stretch of the San Francisco Bay Area.
U.S. companies took 83% of global venture capital in Q1, up from 71% in Q1 2025. The rest of the world is not just watching from the sidelines; it is being structurally written out of the future of capital-intensive AI infrastructure. When I asked a European venture partner how she explains this to her limited partners, she laughed and said, “We tell them we’re investing in the application layer and hoping the foundation models get cheaper.” That’s a plausible strategy. It is also a confession of strategic weakness.
The concentration is even starker within the late-stage numbers. $235 billion of the $246.6 billion in late-stage funding went to 158 companies that raised rounds of $100 million or more. That is not venture capital as traditionally understood. That is sovereign-wealth-level capital deployment dressed up in a16z hoodies. The Crunchbase Unicorn Board added $900 billion in valuation in a single quarter. For context: the entire market capitalization of the London Stock Exchange is roughly $3 trillion.
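Run the reported round sizes against the totals and the concentration is hard to overstate. A minimal sketch of the arithmetic, using only figures quoted above:

```python
# Back-of-envelope concentration math from the Q1 2026 Crunchbase figures.
rounds_bn = {"OpenAI": 122, "Anthropic": 30, "xAI": 20, "Waymo": 16}
global_q1_bn = 300        # total global venture investment, Q1 2026
late_stage_bn = 246.6     # total late-stage funding
mega_rounds_bn = 235      # late-stage dollars in rounds of $100M or more

big_four_bn = sum(rounds_bn.values())  # $188B
print(f"Big-four share of all venture capital: {big_four_bn / global_q1_bn:.1%}")
print(f"$100M+ rounds as share of late stage: {mega_rounds_bn / late_stage_bn:.1%}")
# -> 62.7% and 95.3%: four companies and 158 mega-rounds are most of the story.
```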
So where does this all end? The most honest answer I got was from a former OpenAI employee who now works at a competing lab. “It ends when the next round fails to close,” they said. “That hasn’t happened yet because the compute buildout is so capital-intensive that every investor is afraid of missing the last train. But the last train has a maximum capacity, and not every model lab gets a seat.”
The $300 Billion Problem
The thing you need to understand about Q1’s funding frenzy is that it is not a bet on current revenue. OpenAI has $20 billion in annualized revenue, according to the Crunchbase report, which is real money. But it is raising $122 billion at an $852 billion valuation, roughly 43 times revenue and a price tag already larger than the GDP of most countries. Even at the generous discount rates of late-stage venture, justifying that price requires growth that compounds the company into trillion-dollar-plus territory. The model that justifies a trillion-dollar valuation is not a chatbot company. It is the operating system of the global economy. And OpenAI is not the only company making that pitch: Anthropic, xAI, Google DeepMind, and a dozen smaller labs all want the same title.
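To see what that price implies, here is a back-of-envelope sketch. The 10x exit multiple and 60% growth rate are my illustrative assumptions, not figures from the Crunchbase report:

```python
import math

# What an $852B valuation on $20B of revenue implies, under stated assumptions.
valuation_bn = 852       # reported valuation
revenue_bn = 20          # reported annualized revenue
exit_multiple = 10       # assumed mature-software revenue multiple
growth = 0.60            # assumed annual revenue growth rate

current_multiple = valuation_bn / revenue_bn        # ~42.6x revenue today
required_revenue_bn = valuation_bn / exit_multiple  # ~$85B to justify the price
years = math.log(required_revenue_bn / revenue_bn) / math.log(1 + growth)

print(f"Priced at {current_multiple:.0f}x revenue; ~{years:.1f} years of "
      f"{growth:.0%} growth just to grow into today's price")  # ~3.1 years
```

And that is just to stand still at today’s price; investors writing $122 billion of checks presumably want a multiple on top of that.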
The problem is that the infrastructure required to train and deploy frontier models is not scaling linearly. It is scaling super-exponentially. The GB200 and GB300 NVL72 systems that GPT-5.5 was co-designed with demand entire data centers built around their power and cooling requirements. Bloomberg reported this week that half of the data centers slated to open in the U.S. in 2026 face delays or cancellations. Sightline Climate data, first flagged by journalist Ed Zitron, shows that only a third of the 12 gigawatts of data center capacity scheduled for this year is actually under construction. The rest is in pre-production stages where cancellation is likely. The AI industry is building a future that assumes unlimited energy and unlimited silicon. The real world has limits.
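The shortfall is easy to put in rough numbers. One assumption sits on top of the Sightline figures below: the ~130 kW per NVL72-class rack is a commonly cited ballpark, not a number from this piece, and it ignores cooling overhead and non-GPU load:

```python
# How much of 2026's planned U.S. data center capacity is actually being built,
# and roughly what the real portion could house.
planned_gw = 12
under_construction_gw = planned_gw / 3           # ~4 GW actually being built
at_risk_gw = planned_gw - under_construction_gw  # ~8 GW in cancellable limbo

rack_kw = 130  # assumed draw per NVL72-class rack (ballpark, not from the piece)
racks = under_construction_gw * 1e6 / rack_kw    # ~30,800 racks
print(f"{at_risk_gw:.0f} GW at risk; the rest fits ~{racks:,.0f} NVL72-class racks")
```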
Meanwhile, the models themselves are getting better in ways that are genuinely impressive but also genuinely expensive. GPT-5.5 scores 82.7% on Terminal-Bench 2.0, a benchmark that tests complex command-line workflows requiring planning and tool coordination. It scores 58.6% on SWE-Bench Pro, solving real-world GitHub issues end-to-end. On the internal Expert-SWE benchmark for 20-hour coding projects, it outperforms GPT-5.4. And it does this using fewer tokens, making it both more capable and more efficient. OpenAI’s own finance team used it to review 71,637 pages of K-1 tax forms, cutting two weeks off the process. That is a productivity gain that will show up in profit margins for companies that adopt it.
But the question is whether the cost of building these models will ever be recouped by the applications they enable. The venture market is betting yes, with the conviction of a gambler who has already doubled down three times and cannot walk away. The AI industry’s own voluntary safety commitments, meanwhile, are beginning to look expendable. “The road to AGI is paved with the corpses of voluntary safety commitments,” reads an opinion piece in the AGI Ethics Newsletter. It feels like a line that will be quoted in congressional hearings five years from now.
The Regulatory Pinch
PLUS: A group of ten EU countries, led by Austria, Denmark, and the Netherlands, is opposing Germany’s push to carve industrial manufacturing out of the AI Act’s high-risk requirements. The fight comes ahead of a final deal scheduled for next Tuesday.
PLUS: The White House released a National AI Legislative Framework in March that leans toward “light-touch” regulation, recommending no new federal regulator and preemption of state laws. States passed 150 AI-related bills in 2025 anyway, and multiple laws took effect on January 1, 2026.
PLUS: The Economy of the Future Commission Act of 2026, introduced in Congress this week, proposes a bipartisan study of AI’s workforce impact, with $5.25 million in funding and subpoena power.
The EU story is the most revealing. Germany, backed by Siemens and Bosch, wants to shift AI requirements for machinery, medical devices, and toys out of the horizontal AI Act and into sector-specific legislation. The argument is that overlapping requirements create a “double burden” for companies already regulated under product safety laws. That sounds reasonable until you read the paper from the opposing countries, which warns this would “result in deregulation, not simplification.” The German Green MEP Sergey Lagodinsky was more blunt: “I hail the growing opposition to the German government’s lonely proposal.”
It feels like a perfect microcosm of the broader regulatory moment. The AI Act is the world’s first comprehensive AI law. It is imperfect, bureaucratic, and already struggling to keep pace with technology that continues to improve at rates that surprise the researchers building it. But it is a framework. The alternative—sectoral carve-outs—would fragment the horizontal framework into “twelve separate compliance logics,” as Italian MEP Brando Benifei put it. That is not simplification. That is regulatory arbitrage masquerading as regulatory reform.
In the United States, the picture is even more chaotic. The White House framework recommends federal preemption of state laws, but Congress has not acted. So states are filling the void. California’s training data transparency law, Texas’s TRAIGA, Colorado’s AI Act (effective June 30, 2026)—these create a patchwork that makes compliance expensive and uncertain. The administration’s December 2025 executive order created an AI Litigation Task Force to challenge state laws, but executive orders cannot preempt state law. Only Congress can. And Congress has not.
So where does this all end? I’ve been asking sources that question all week. The most pessimistic answer came from a former FTC official now in private practice. “We are heading for a trainwreck where state and federal requirements directly conflict,” they said. “The courts will have to sort it out. That will take years. In the meantime, the biggest AI companies will just comply with the strictest state laws and pass the cost to consumers. The smaller ones will have to choose which state to ignore. It is a mess.”

The GAO found 94 AI-related requirements across federal agencies but concluded that significant gaps remain, particularly around procurement and accountability. And the U.S. has not created a single agency to regulate AI; the current framework explicitly recommends against doing so. It feels like 1996 all over again, when the internet was growing so fast that regulation seemed futile. But AI is not the internet. It is a technology that can impersonate humans, manipulate markets, and automate decisions with life-altering consequences. The light-touch approach is a bet that the industry will self-regulate. History suggests that is a bad bet.
The Safety Theater
OpenAI says GPT-5.5 comes with its strongest safeguards yet, including tighter controls for high-risk cybersecurity requests and expanded red-teaming. It classifies the model’s cybersecurity and biology capabilities as “High” under its Preparedness Framework, though not yet “Critical.” To balance access with safety, the company is launching Trusted Access for Cyber, which gives verified defenders expanded use of a cyber-permissive model called GPT-5.4-Cyber. This is the same kind of tiered access that Anthropic has experimented with for its Claude Mythos model. And Claude, according to reporting by Bloomberg and others, has been contracted to the Department of War for applications that include the very uses Anthropic’s own policy statements mark as red lines: mass domestic surveillance and fully autonomous weapons. Anthropic’s CEO Dario Amodei said the company has “never raised objections to particular military operations nor attempted to limit use of technology in an ad hoc manner.” That is a careful statement. It is also a statement that effectively hands the ethical judgment to the customer.
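OpenAI has not published how Trusted Access for Cyber works under the hood, so treat the following as purely illustrative: a toy sketch of verification-gated model routing, in which every name and rule (`Tier`, `route_request`, the upstream sensitivity flag) is my invention rather than anything OpenAI has documented:

```python
# Hypothetical sketch of tiered model access. None of these names or rules come
# from OpenAI; they only illustrate the general shape of gated routing.
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    PUBLIC = 1              # default users
    VERIFIED_DEFENDER = 2   # vetted security teams under a trusted-access program

@dataclass
class Request:
    user_tier: Tier
    cyber_sensitive: bool   # assumed to be set by an upstream classifier

def route_request(req: Request) -> str:
    """Return the model this toy policy routes a request to."""
    if req.cyber_sensitive:
        if req.user_tier is Tier.VERIFIED_DEFENDER:
            return "gpt-5.4-cyber"  # permissive model, verified users only
        return "refuse"             # everyone else gets a refusal
    return "gpt-5.5"                # non-sensitive traffic goes to the default

assert route_request(Request(Tier.PUBLIC, True)) == "refuse"
assert route_request(Request(Tier.VERIFIED_DEFENDER, True)) == "gpt-5.4-cyber"
```

The hard part in practice is not the routing; it is deciding who counts as a verified defender, and what happens when a verified account is compromised.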
The thing you need to understand about safety frameworks is that they are voluntary, self-assessed, and subject to revision without notice. The UN’s Independent International Scientific Panel on AI held its first in-person meeting in Madrid this week, co-chaired by Maria Ressa and Yoshua Bengio. Its mandate is to provide an independent, scientific, and authoritative assessment of how AI systems shape societies. But its findings will merely inform the Global Dialogue on AI Governance, which meets in July. There is no enforcement mechanism. There is no binding treaty. There are a lot of speaking invitations and a lot of PDFs.
Geoffrey Hinton is right. The car is fast and has no steering wheel. The question is whether we build one before it crashes, or after.
The Personalization Trap
PLUS: A startup called Public.com launched “AI investing agents,” making it the first agentic brokerage. The same week, a founder deployed an AI executive team with formal governance and termination protocols. The line between tool and agent is blurring, and regulators have not even noticed yet.
PLUS: Channel 1, an intelligent media infrastructure company, is building a system that turns real-world events into personalized video experiences for every viewer. Its tagline: “Capture once, create always, deliver uniquely.” It feels like the logical endpoint of the personalization trend—a world where every viewer gets a different version of the same news, optimized for their biases and preferences.
It feels like the beginning of the end of shared reality. I’ve been telling everyone to start a newsletter, even as the death of Twitter makes it harder to promote one. But if every news outlet can produce personalized video at scale, the concept of a common set of facts becomes optional. The AI industry is building the tools to fragment our attention further, and calling it “intelligent media infrastructure.” The audience is not stupid. It knows when it is being fed a tailored version of events. The question is whether it cares. If the value is in the information, not the writing, then people will care less that AI did most of the writing. And even where the value is in voice and opinion and argument and analysis, it is now cheap to use AI to do the whole thing. That cheapness is a feature, not a bug, for the companies selling the infrastructure.
Talk to us
Send tips, comments, and questions. We read everything, even if we can’t respond to all of it.