Clarity Signal Field Notes
May 3, 2026 · Field Notes

Saying Yes Is Not a Plan.

A builder's read on what's happening at OpenAI. Not a takedown — a flare. The cracks are early enough that course correction is still possible.

I've been building software for thirty years, and you develop a sense for when a system is sound and when it's running on momentum. Lately, when I look at OpenAI, I get the second feeling.

This isn't a takedown. OpenAI matters. The mission matters. The people inside who signed up to build AI carefully deserve a company that can actually deliver on that. But the cracks I'm seeing are the kind that get worse if nobody names them, and they're still early enough to fix. So I want to name them — not to cheer for a fall, but to flag what a builder sees when he looks at the foundation.

Here's what I see: a company saying yes to everything that walks in the door. Yes to a $50 billion Amazon investment that required tearing up the Microsoft contract. Yes to ads in ChatGPT, and then yes to a steady erosion of the firewall that was supposed to protect users from those ads. Yes to a Public Wealth Fund proposal in a sweeping 13-page policy paper. Yes to over $1.4 trillion in total AI infrastructure commitments. Yes to a "third era" as an infrastructure provider. Yes to a Jony Ive hardware project. Yes to a Disney partnership, then no. Yes to Sora, then no. Yes, yes, yes.

Saying yes is not a plan. Saying yes is what a company does when it has no plan, or when the cash burn is high enough that every yes feels like survival. My honest read is that they're throwing things at the wall to see what sticks. That's not a plan. It's not even hope. It's desperation dressed up as ambition.

The founder is asking for help

The thing that pushed me to write this was something Sam Altman said at Stripe Sessions a few weeks ago: that he's "definitely not a hands-on manager" while admitting he messages a few hundred employees a day, and that he might need to hire new leaders or even build an AI manager to handle the company's scaling demands. This isn't the first time he's signaled this. He has said before, in different ways, that he's not sure he's the right fit for where the company is headed.

I want to be careful here, because that kind of admission deserves respect, not opportunism. Altman took OpenAI from a research lab nobody outside AI had heard of to the most consequential technology company of the decade. That's not nothing. Founders who can do the zero-to-one rarely have the same instincts for the ten-to-a-hundred, and the honest ones say so. He's saying so.

The problem isn't that he said it. The problem is that the org around him doesn't seem to be responding the way the situation calls for. Yes, there's a CFO (Sarah Friar). Yes, there's a head of Applications (Fidji Simo). Yes, there's a COO (Brad Lightcap). On paper the executive layer exists. But Altman is still messaging hundreds of employees a day. That's the contradiction I keep coming back to. A functioning executive layer means the CEO doesn't have to be the integration point for every decision. If he is, the layer isn't doing what executive layers are supposed to do — and that's not a critique of the people in those roles, it's a critique of how authority actually flows in the company. He doesn't need new titles. He needs leaders with the authority to say no on his behalf, and a structure that lets them. That's a fixable problem in theory. In practice, the design of the company makes it very hard.

The structure fights the fix

To give Altman the help he's asking for, you'd need a strong operational layer with real authority. A COO who can say no to the CEO. An executive team with clear domains. A board that provides genuine oversight rather than crisis governance. The titles exist; the underlying authority doesn't, and the structure makes installing it harder than it should be.

I'll grant the obvious counter: OpenAI is mid-conversion to a Public Benefit Corporation. The board has been reshuffled. The Microsoft renegotiation cleared a major commercial overhang. None of that is nothing. The honest version of this critique acknowledges that the company knows the structure is a problem and is actively working on it. The question is whether the changes are deep enough and fast enough — and whether they fix authority flow or just rename it.

A real COO needs authority that comes from somewhere. At a normal company, it comes from the board and the CEO together. At OpenAI, the board already tried to exert authority once and got rolled within a weekend by Microsoft, the employees, and Altman himself. Any COO walking in knows the board can't actually back them. The only authority they'd have is whatever Altman personally delegates — which means they're not really a COO, they're a chief of staff with a bigger title. That's not the help he needs.

And the foundational agreements keep getting rewritten. The Microsoft partnership, the deal that has defined OpenAI's commercial existence since 2019, has been renegotiated twice in the past six months. Just this week, the AGI clause was scrapped, Azure exclusivity was ended, Microsoft's IP license became non-exclusive, and OpenAI is now free to serve customers on AWS and Google Cloud. The capped-profit structure is being unwound through the conversion to a Public Benefit Corporation. Each of these is a sign that the previous structure couldn't hold what was being built on top of it. That's not stability adapting to growth. That's a company continuously rebuilding its structural foundations while trying to operate at hyperscale.

Watch how the firewall drifts

This is the section I've been thinking about hardest, because the surface story and the actual story are different.

The surface story is fine. On February 9, OpenAI started running ads in ChatGPT — Free and Go tiers only, sponsored units that appear below the response, visually separated, clearly labeled. Run on separate systems from the chat model. "Ads do not influence the answers ChatGPT gives you." That's the official design, and as designs go, it's defensible. It's the Google search ad model — organic answer on top, sponsored unit below, user knows the difference.
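
To pin down what that firewall actually is, here's the announced design expressed as structure: two systems that never exchange state, composed only at render time. This is my sketch of what "separate systems" implies, not OpenAI's code, and every name in it is hypothetical.

```typescript
// A sketch of the announced firewall, not OpenAI's implementation.
// The invariant: the ad system never sees the answer, and the answer
// system never sees the ad. All names here are hypothetical.

interface Answer { text: string }
interface SponsoredUnit { label: "Sponsored"; brand: string; copy: string }

// The chat model takes the prompt and nothing else. No ad signal flows in.
async function generateAnswer(prompt: string): Promise<Answer> {
  return { text: `(model answer for: ${prompt})` }; // stand-in for the model call
}

// The ad system is a separate service keyed on coarse context only.
// It receives neither the answer text nor anything derived from it.
async function fetchSponsoredUnit(topic: string): Promise<SponsoredUnit | null> {
  return { label: "Sponsored", brand: "ExampleCo", copy: `related to "${topic}"` };
}

// The firewall is this composition: two independent calls, with the ad
// appended below the answer, visually separated and labeled.
async function respond(prompt: string): Promise<string> {
  const [answer, ad] = await Promise.all([
    generateAnswer(prompt),
    fetchSponsoredUnit(prompt.slice(0, 40)), // coarse topic signal only
  ]);
  return ad
    ? `${answer.text}\n----\n[${ad.label}] ${ad.brand}: ${ad.copy}`
    : answer.text;
}
```

Notice what any in-conversation format would require: the brand woven into the answer itself, which means ad signal flowing into the generation call. The moment that happens, the firewall isn't drifting. It's gone. Keep that invariant in mind for what follows.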

But watch what's happened in the eighty days since.

March 2: Criteo became the first ad-tech partner to integrate with the pilot. A few weeks later, StackAdapt joined as a DSP — a demand-side platform selling placements based on "prompt relevance." Adobe is running ads for Acrobat Studio and Firefly through their agency WPP. The auction, the targeting, the creative serving, the measurement — all being handled by external ad-tech, with OpenAI providing the inventory and the user signal.

April 30: OpenAI quietly updated its U.S. privacy policy to formalize data sharing with what it now calls "marketing partners." The new language explicitly acknowledges receiving purchase data from advertisers to measure ad effectiveness, sharing user information with marketing partners for third-party ad targeting, and using user data to market OpenAI's own products. The vendor disclosure category was renamed. That's the language that signals formal ad-tech relationships, not just service providers.

In parallel, the ad industry is openly publishing guides describing future ChatGPT ad formats. Sponsored Comparison Tables, where a brand's product gets embedded inside a structured comparison response. "Contextual Native Recommendations" that, in the words of one guide, "blend into the conversational fabric more seamlessly, functioning more like a trusted recommendation than a traditional ad unit." That language exists in marketing playbooks today. The drift toward in-conversation placement isn't hypothetical — it's literally what advertisers are being trained to expect, while OpenAI's official position remains that ads are separate from the answer.

This is the pattern. Yes to ads. Yes to ad-tech partners. Yes to formal data-sharing language. Yes to industry expectations of in-conversation formats. All in eighty days. The firewall the original announcement promised is already drifting, and the trajectory is one direction. The "ads don't influence answers" line is the first promise that gets renegotiated when revenue pressure meets ad-tech partner demands. It always is.

The strongest defense of ads — and I think this is genuinely OpenAI's best argument — is that subscription-only access means powerful AI for the wealthy and table scraps for everyone else. Free access matters. Ads pay for free access. That's a real moral case, and I take it seriously. But the case only holds if the firewall holds. The moment "ads pay for free access" becomes "ads shape the answers free users see," the moral argument inverts: now the people who can't pay are the ones getting the compromised product. The eighty-day drift is exactly the trajectory that turns the good argument into the bad one.

Trust is the product (and Google already owns ads)

Here's the harder problem for OpenAI specifically: Google is going to win the AI ad space. They have to. They built a $300 billion-a-year business teaching users to expect sponsored results next to organic ones. Their users came to them with that contract already signed. ChatGPT's users came for the opposite contract — direct answers without commercial bias, because that was the whole pitch.

And Google is moving. Three months ago Demis Hassabis was at Davos saying flatly that Gemini had no ad plans. No qualifiers, no hedge. Last week on Alphabet's Q1 earnings call, Chief Business Officer Philipp Schindler was asked the same question and the answer had transformed: if they find a format that works in AI Mode, the same idea could eventually be used in the Gemini app.

That's not a denial. That's a $2 trillion ad company pretending it's still figuring out how to sell ads. Three hedges in one sentence — if, could, eventually — from the company that wrote the playbook on monetizing commercial intent. They're not figuring it out. They've figured it out. They're waiting for the air to clear. Hassabis's flat denial in January was the before. Schindler's rehearsed hedging in late April was the after. The actual decision happened somewhere in between, almost certainly the moment OpenAI confirmed the February 9 ad launch and the sky didn't fall. Google was waiting for cover. OpenAI gave it to them.

The deeper problem: Google already runs ads in AI Overviews and AI Mode. The ad-tech stack, the advertiser relationships, the auction infrastructure, the measurement tooling — all already operational. Gemini is the last holdout, kept clean for premium positioning. That perception barrier is now gone. They'll ship, probably soon, and they'll ship into a market they already own.

So OpenAI is taking trust, the one asset that makes their product defensible, and putting it in tension with a revenue stream they're reaching for under financial pressure, in a market segment where the incumbent has every structural advantage and now has air cover to follow them in. The Wall Street Journal just reported that they expect to burn $25 billion in cash this year against $30 billion in revenue, that they missed an internal target of one billion weekly active users by the end of 2025, and that subscriber defections to Gemini and Anthropic are now a real concern. CFO Sarah Friar has reportedly warned colleagues the company might not be able to fund future compute contracts. That's the context for the ads decision. They need money. But the cure may be worse than the disease, and the prescription was already filled by Google.

The Ive project, in miniature

The Jony Ive partnership is the whole pattern in one project. OpenAI bought Ive's company io for $6.5 billion in May 2025. Altman told staff it was "the coolest piece of technology the world will have ever seen." Ship date: 2026.

Then the Financial Times reported they were struggling with compute shortages, privacy questions about always-on cameras and microphones, and basic disagreements about the device's "personality." They lost the name "io" in a trademark fight with a hearing aid startup called iyO — a fight they'd called "utterly baseless" and vowed to fight "vigorously." Court filings in February confirmed the device won't ship before February 2027. No packaging exists. No marketing exists. Last week they killed Sora six months after launch and ended a planned $1 billion Disney partnership.

Yes to a $6.5 billion acquisition before they had the compute. Yes to a hardware product before they had the operational layer to ship one. Yes to a name without clearing the trademark. Yes to "the iPhone of AI" as a vision before they had a manufacturing arm, a retail strategy, a privacy framework, or a working prototype.

The cruelest part is that Ive himself put real meaning behind it. He said "everything I've learned over the last 30 years has led me to this place and to this moment." That's a man at the end of his career staking his legacy. He deserved a partner with the operational maturity to ship it. He got a partner that says yes to everything.

The headwinds don't care

Here's the part nobody can change. The macro environment is brutal for everyone. High-bandwidth memory (HBM) is sold out through 2027. Power-ready data center sites are scarce, and permitting runs three to five years. Local communities are pushing back on water use, noise, and tax abatements. Tariffs are hitting transformers, cooling systems, and Chinese components. Nuclear restarts won't deliver electrons until 2027 at the earliest. None of this gets meaningfully better before 2030.

A company with clear strategy and operational discipline can navigate that. A company that says yes to everything cannot. Compute will go to whoever can sequence their commitments and honor them. OpenAI has publicly committed to more than $1.4 trillion in AI infrastructure spending, across Microsoft, Amazon, Oracle, and its own data center buildouts, against $30 billion of revenue this year. The math doesn't have to fail. But the capital has to be allocated, and allocation requires a plan.
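
For scale, here's the back-of-envelope version of that math. The $1.4 trillion and $30 billion figures are from the reporting above; the payout horizon and the framing of the growth question are my assumptions, not anything OpenAI has published.

```typescript
// Back-of-envelope only. Commitment and revenue figures are from the
// reporting cited in the text; the horizon is my ASSUMPTION.
const totalCommitments = 1_400; // $B committed to AI infrastructure
const revenueThisYear = 30;     // $B of revenue this year
const horizonYears = 8;         // ASSUMPTION: commitments paid out over ~8 years

const annualObligation = totalCommitments / horizonYears; // ≈ $175B per year

// What compounded revenue growth would it take for year-8 revenue just to
// match that annual obligation? (175/30)^(1/8) ≈ 1.25, i.e. ~25% per year,
// before margins, opex, or the cost of training a single model.
const requiredGrowth = Math.pow(annualObligation / revenueThisYear, 1 / horizonYears);

console.log(`~$${annualObligation.toFixed(0)}B/yr, needs ~${((requiredGrowth - 1) * 100).toFixed(0)}% annual growth`);
```

None of those assumptions have to be right for the point to stand: at any plausible horizon, the annual obligation is a large multiple of today's revenue, which is exactly why sequencing the commitments matters.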

Consolidation is coming

I'll say this carefully because it's speculation, not reporting. But when an industry has demand outrunning supply for five years, brutal capital requirements, and a leader visibly losing operational coherence — consolidation happens. It always does.

Elon Musk made an unsolicited $97 billion bid for OpenAI's nonprofit assets in February 2025 and was rejected. He's currently in court in Oakland, testifying against the company's conversion to a for-profit and seeking $134 billion in damages plus a forced unwinding of the conversion. He runs xAI with its own compute, its own capital, and a deeply personal grievance about being pushed out of the company he co-founded. I would not be shocked to wake up in eighteen months and find Musk has acquired OpenAI in some form — a fire-sale rescue, a forced merger, a structured deal extracted under financial duress. I'm not sure that's a good outcome. He'd bring capital and operational will, but he'd also bring his own version of chaos and a very different relationship to safety than the original mission promised.

The point isn't to predict who buys whom. The point is that an industry whose front-runner is burning $25 billion a year to chase $30 billion in revenue, against a fixed supply curve, with a leader asking for help that isn't coming, is not an industry that stays at seven independent labs forever. An industry of three or four independent players within five years is a realistic end-state. OpenAI being acquired, broken up, or reorganized into something unrecognizable is one of the plausible paths there.

A different kind of foundation

Anthropic isn't free of contradictions, but the structure is more coherent. Public Benefit Corporation. Long-Term Benefit Trust with actual board authority. Founders who are mostly still there. A research-led culture where the safety framing isn't bolted on after the fact — it's the founding premise. That doesn't make them right about everything, but it means decisions can land because there's a place for them to land. The foundation can carry weight.

Talent retention tells the same story. When Sutskever, Murati, Schulman, and a long list of others all leave within the same eighteen months, that's not bad luck. The defense you'll hear is that each departure was individual: Sutskever wanting his own lab, Murati building her own company, generational wealth, a chance to do something new. Each story is true. But the pattern is what matters. People of this caliber, with options everywhere, don't all decide to do something else at the same time unless something at the center has changed. And OpenAI alumni founding the most-watched competitors in the field is not a feature for OpenAI. It's the same brain trust building the same thing somewhere else, somewhere with a different foundation.

The quiet case for local-first

I build local-first software. Mini apps that run without a cloud dependency, work offline, and keep user data on the user's own machine. That's been my work for years, and the pitch has usually been ideological — your data should belong to you. It still is. But when the foundations of centralized AI keep getting rewritten every six months, when the firewall between your conversations and an ad-tech auction can be redefined in a privacy policy update overnight, when the leading lab might be acquired by anyone within a few years — the case stops being ideological and starts being practical.

Keep your data close. Keep your tools close. Use the cloud where it adds real value, not because the vendor told you to. Build so that whatever happens upstream — a fire-sale acquisition, a strategic pivot, a trust collapse, a privacy policy rewrite — your work and your customers' data don't go with it. That's not a political statement anymore. It's risk management.
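
Since this is the one place in the piece where I'm the builder rather than the observer, here's the pattern in miniature: a note store whose source of truth is a plain file on the user's own disk, with sync as an optional layer that can fail or vanish without taking the data with it. A sketch, not a product; the names and the sync endpoint are hypothetical.

```typescript
// A minimal local-first store. The source of truth is a human-readable
// file on the user's machine; the app is fully functional offline.
// Illustrative sketch; names and the sync hook are hypothetical.

import { readFileSync, writeFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";
import { randomUUID } from "node:crypto";

interface Note { id: string; text: string; updatedAt: string }

const storePath = join(homedir(), ".mynotes.json"); // the user's own disk

function load(): Note[] {
  return existsSync(storePath)
    ? (JSON.parse(readFileSync(storePath, "utf8")) as Note[])
    : [];
}

function save(notes: Note[]): void {
  // Plain JSON on disk: readable, portable, survives any upstream pivot.
  writeFileSync(storePath, JSON.stringify(notes, null, 2), "utf8");
}

export function addNote(text: string): Note {
  const notes = load();
  const note = { id: randomUUID(), text, updatedAt: new Date().toISOString() };
  notes.push(note);
  save(notes);
  return note; // fully usable before any cloud is involved
}

// Sync is where a cloud earns its place, if anywhere: an optional layer
// over local state that can be skipped, can fail, or can be deleted.
export async function maybeSync(endpoint?: string): Promise<void> {
  if (!endpoint) return; // no cloud configured: the app doesn't care
  await fetch(endpoint, { method: "POST", body: JSON.stringify(load()) });
}
```

The test of the pattern is the last function: delete maybeSync and nothing the user cares about breaks. That's the property that turns the ideology into risk management.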

What I hope

I want OpenAI to succeed. I want the people inside who care about the mission to get the operational support they need to deliver on it. I want Altman to get the help he's been asking for, in the form of real leaders with real authority, before the headwinds or the consolidation does the deciding for him.

I'll grant the defense its honest core: this is a company in the steepest part of an S-curve, building something that may genuinely be the most important technology in human history, on a timeline they didn't choose. Some of what looks like chaos from outside is the unavoidable mess of doing something nobody has done before. I take that seriously. The PBC conversion is real. The new executive layer is real. The Microsoft renegotiation cleared an actual overhang. Eighteen months from now, this piece may read as too pessimistic. I would be glad if it did.

But hope is not a plan either. And right now, from outside, it doesn't look like a plan is what's happening. It looks like a company throwing things at the wall, very fast, very expensively, hoping something sticks before the compute, the cash, or the trust runs out.

I hope I'm wrong. I'm telling you what I see.