The BeAIReady Brief | Week 18
April 27 – May 3 | The OpenAI Alliance Breaks Open, the All-You-Can-Eat AI Model Ends, and the Permanent Underclass Question Gets a Courtroom
The economic signals last week were telling two different stories at once. The S&P 500 closed Friday at record highs — driven by blowout Big Tech earnings and Wall Street’s confidence in the AI buildout — while the University of Michigan’s Consumer Sentiment Index fell to 49.8, its lowest reading in the survey’s 74-year history, worse than the trough of the 2008 financial crisis; the Strait of Hormuz remained largely closed, oil held above $100 a barrel, and Q1 GDP growth came in well below expectations.
The gap between the equity market and the economic mood is not just a curiosity — it's the context for almost everything I read last week. The dominant thread was the moment when AI's economic promise and its economic disruption stopped being future-tense arguments and started showing up simultaneously: in Q1 earnings, in corporate headcount decisions, in trial testimony in an Oakland courthouse, and in the job-search anxiety of this year's graduating class. The abstraction is over.
Here’s what I was reading.
The OpenAI Alliance Breaks Open
Microsoft’s exclusive lock on OpenAI ended, GPT-5.5 landed in Copilot the same week, and AWS quietly emerged as the biggest structural winner — all in four days.
The End of All-You-Can-Eat AI
GitHub's Copilot moves to token metering on June 1, and Atlassian is among 79 enterprise software firms shifting from flat fees to usage-based pricing — the all-you-can-eat model for AI is closing.
The Labor Signal Is No Longer Subtle
Microsoft’s buyout program, the collapse of entry-level hiring, a 6,000-word NYT investigation into Silicon Valley’s own fears, and the Stanford Leadership Forum all surfaced the same fracture in the same week.
The AI Governance Layer Is Finally Getting Serious
CISA and the Five Eyes published formal guidance on agentic AI security; Musk v. Altman moved from complaint to courtroom. Two different governance fronts opening at once.
On the Bigger Picture
Big Tech’s AI profits are partly paper gains on Anthropic stakes, Meta is losing users while raising its AI capex, and the scaffolding layer of enterprise AI is collapsing — with real questions about what survives.
The OpenAI Alliance Breaks Open
The news that Microsoft and OpenAI had renegotiated their exclusivity agreement arrived Monday morning, and by Tuesday OpenAI's models were landing on AWS Bedrock. This was less a rupture than a controlled separation both parties had been engineering for months, and it confirms that the architecture of enterprise AI is now genuinely multi-vendor in a way it wasn't thirty days ago. (Microsoft, OpenAI change terms of deal so startup can court Amazon and others)
Microsoft gave up something it was already losing — exclusivity it couldn't enforce — and in return extracted real certainty: a guaranteed 20% revenue share through 2030, a non-exclusive license to OpenAI's IP through 2032, and relief from having to build out data center capacity to meet OpenAI's exploding infrastructure demands. Barclays called it a positive for both companies, and I agree. But the more interesting question is who else benefits.
The New Stack’s detailed breakdown makes a compelling case for AWS. OpenAI had already been bleeding eastward — the Amazon partnership announced in February, the $50 billion cloud commitment — but formal Bedrock integration changes the calculus for enterprise procurement teams that have been reluctant to mix their AWS environments with Azure-dependent AI services. The disclosure that roughly 45% of Microsoft’s commercial remaining performance obligation was tied to OpenAI underscores how much Azure had come to depend on a single partner — and why loosening that dependency is structurally healthier for Microsoft long-term. What last week established is that the competitive moat in enterprise AI is no longer which cloud a given model lives on; it’s which models your enterprise can access, through which governance frameworks, at what price. (The OpenAI-Microsoft reset, decoded: Why AWS may come out ahead)
In the middle of all this, Microsoft quietly pushed GPT-5.5 Thinking into Copilot Chat, Word, Excel, and PowerPoint — the first public availability of the reasoning-class model in the M365 productivity suite. The headline is the capability lift. The structural signal is that Microsoft is now running OpenAI’s newest model in its productivity layer while simultaneously opening the door for OpenAI to run on competing clouds — an acknowledgment that the competitive moat, if one exists, is not the model but the workflow integration and organizational context built around it. (Available today: GPT-5.5 Thinking and ChatGPT Images 2.0 in Microsoft 365 Copilot)
The End of All-You-Can-Eat AI
GitHub announced this week that Copilot is moving from request-based billing to usage-based billing on June 1 — introducing AI Credits at $0.01 each, with monthly allotments by plan tier and the option to buy overages. The stated reason is that the current model is financially unsustainable: a quick chat question and a multi-hour autonomous coding session cost GitHub the same in subscription revenue, but wildly different amounts in inference. The comparison The Register reached for was Red Lobster’s Endless Shrimp promotion — and the analogy is uncomfortable primarily because it’s accurate. (Microsoft’s GitHub shifts to metered AI billing amid cost crisis)
GitHub is not alone in this. The Information reported last week that by the end of 2025, 79 of the 500 software companies tracked by analyst Kyle Poyar had begun charging customers additional fees based on AI consumption — more than double the figure in 2024. HubSpot, Adobe, Atlassian, ServiceNow, Salesforce: the list of companies shifting from flat-fee to usage-based or outcome-based pricing is now long enough that the flat-fee model looks like the exception rather than the standard. The honest pressure driving this is that customers on flat subscriptions started actually using the AI features, which raised costs for vendors without raising revenue — a mismatch that was always going to resolve in one direction. The customer quoted in The Information who said “most of my clients hate it — the costs go through the roof really quickly” is describing the reality that enterprise IT leaders are about to walk into at scale. (Atlassian and HubSpot Join Shift From AI Flat Fees)
The management implication buried in this shift isn't getting enough attention. Organizations that budgeted for AI on a per-seat basis — a predictable, plannable line item — are now facing token consumption curves that are non-deterministic by design. The CFO conversation about AI ROI that had been deferred as "experimentation" is about to become unavoidable. The invoices are going to start forcing it. The question of what AI actually costs, measured against what it actually produces, is a billing cycle away.
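To make the budgeting shift concrete, here's a minimal sketch of why per-seat planning breaks under metering. The $0.01-per-credit price is the figure from GitHub's announcement; every other number (seat fee, usage distribution) is a hypothetical illustration, not data from any vendor.

```python
# Sketch: per-seat budgeting vs. metered AI billing.
# Only the $0.01-per-credit rate comes from GitHub's announcement;
# all other numbers are hypothetical illustrations.

CREDIT_PRICE = 0.01  # dollars per AI credit (GitHub's stated rate)

def monthly_cost(seats: int, flat_fee: float) -> float:
    """Old model: cost is fully determined by headcount."""
    return seats * flat_fee

def metered_cost(credits_used: list[int]) -> float:
    """New model: cost tracks per-user consumption, which varies widely."""
    return sum(credits_used) * CREDIT_PRICE

# 100 seats at a hypothetical $19/seat flat fee: the same bill every month.
flat = monthly_cost(100, 19.0)

# The same 100 users under metering: a handful of heavy agentic users
# (long autonomous sessions) dominate the bill.
light_users = [500] * 90      # quick chat-style usage
heavy_users = [40_000] * 10   # multi-hour autonomous coding sessions
metered = metered_cost(light_users + heavy_users)

print(f"flat: ${flat:,.0f}  metered: ${metered:,.0f}")
```

The point of the sketch is the shape, not the numbers: under the flat model the bill is a constant; under metering it is a function of a usage distribution the finance team doesn't control, and the tail of that distribution is where the cost lives.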
The Labor Signal Is No Longer Subtle
Microsoft announced last week that it’s offering voluntary buyouts to 7% of its U.S. workforce — more than 8,500 employees, specifically those whose combined age and years of service total 70 or more. The framing is that this is humane: a choice, not a layoff. But the subtext is visible in the arithmetic: Microsoft is investing $145 billion in capital expenditure this fiscal year, and the employees being invited to leave are the ones whose institutional knowledge the company has decided is less strategically valuable than the compute capacity being built in its place. (Here’s why companies like Microsoft are offering voluntary buyouts)
The hiring freeze at the entry level is producing its own reckoning. Junior-level job postings on Indeed fell 7% in 2025, and this year’s graduating class is applying to 150 positions and receiving silence. What’s changed is not just the volume of rejections but the nature of the barrier. There’s a growing collective suspicion among graduating seniors that AI is filtering their applications before human recruiters ever see them — and the data shows they are probably right. (Graduates Reset Ambitions in Pursuit of First Jobs)
The New York Times published what I found to be last week’s most important read — a long investigation into how Silicon Valley is actually thinking about AI’s labor impact. The piece makes clear that the “San Francisco consensus” on what AI does to ordinary workers is, by the admission of the people building AI, bleak. One finding that landed hard for me: when AI company executives say they’re cutting jobs because of AI, “other people feel like they have to too” — and that dynamic could accelerate displacement far faster than efficiency gains alone would dictate. The companies with the most candid internal views about AI-driven job loss are, in several cases, the same ones whose enterprise agent products are the proximate cause of that loss. (Opinion | Silicon Valley Is Bracing for a Permanent Underclass)
A Stanford Leadership Forum panel I watched this week — with economists from Stanford and ADP, alongside Mechanize’s co-founder whose company is explicitly trying to automate knowledge work at scale — added empirical texture to all of this. ADP’s chief economist noted that the firm’s payroll data covering one-fifth of the U.S. workforce shows no broad displacement yet, but that granular data on early-career workers in AI-exposed occupations shows a distinct employment drop since October 2022 — what one cited research paper called “canaries in the coal mine.” The ADP research on upskilling made the point pretty clear: organizations that invest in worker upskilling see employees’ sense of job security increase fivefold — a finding that reframes AI workforce investment from a cost to a strategic lever with measurable retention implications. (Stanford Leadership Forum 2026: Rewiring the Workforce in the Age of AI)
The AI Governance Layer Is Finally Getting Serious
Cybersecurity agencies from the U.S., U.K., Australia, Canada, and New Zealand published joint formal guidance this week on the secure deployment of agentic AI. The document doesn’t create a new security discipline — it argues, persuasively, that agentic AI should be folded into the zero-trust and least-privilege frameworks organizations already maintain. What it adds is specificity: five risk categories (privilege escalation, design and configuration flaws, unintended behavioral risks, structural inter-agent failures, and accountability gaps), a strong emphasis on cryptographically verified agent identities and short-lived credentials, and an explicit requirement that high-impact actions involve human sign-off. The agencies also acknowledged — and this is the part every CIO should read twice — that some risks unique to agentic systems are not yet covered by existing frameworks, and that organizations should “assume agentic AI may behave unexpectedly and plan deployments accordingly, prioritizing resilience and reversibility over efficiency gains.” That kind of calibrated honesty from a government guidance document is unusual, and it signals that the security establishment is taking the agentic layer seriously in a way it wasn’t eighteen months ago. (US government, allies publish guidance on how to safely deploy AI agents)
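The guidance's core controls — least privilege, short-lived credentials, and mandatory human sign-off for high-impact actions — are straightforward to picture as a gate in front of every agent action. Here's a minimal sketch of that pattern; all names, the TTL value, and the action list are illustrative assumptions, not taken from the CISA/Five Eyes document itself.

```python
# Sketch of a least-privilege action gate for an AI agent, following the
# guidance's emphasis on short-lived credentials and human sign-off for
# high-impact actions. All identifiers here are hypothetical examples.
import time

HIGH_IMPACT = {"delete_records", "transfer_funds", "modify_permissions"}
CREDENTIAL_TTL = 300  # seconds; "short-lived" per the guidance's framing

def issue_credential(agent_id: str) -> dict:
    """Mint a scoped, expiring credential for a single agent."""
    return {"agent": agent_id, "expires": time.time() + CREDENTIAL_TTL}

def authorize(action: str, credential: dict, human_approved: bool) -> bool:
    """Gatekeeper: expired credentials always fail; high-impact actions
    additionally require explicit human approval before executing."""
    if time.time() >= credential["expires"]:
        return False
    if action in HIGH_IMPACT and not human_approved:
        return False
    return True

cred = issue_credential("report-agent-01")
print(authorize("read_dashboard", cred, human_approved=False))  # True
print(authorize("delete_records", cred, human_approved=False))  # False
```

The design choice worth noticing is that the gate defaults to refusal: an agent that behaves unexpectedly — the scenario the agencies explicitly tell organizations to plan for — fails closed rather than open, which is the "resilience and reversibility over efficiency" trade-off in miniature.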
The Musk v. Altman trial opened in Oakland, and the first week of testimony surfaced revelations more consequential than the headline drama. Musk testified that his own company, xAI, “partly” distills OpenAI’s models to train Grok — prompting audible gasps in the courtroom. OpenAI’s lawyer argued the lawsuit is less about nonprofit governance than competitive sabotage. The judge observed acidly that she suspected there weren’t many people who’d want to put the future of humanity in Musk’s hands either. What the trial is establishing, independent of who prevails, is how loosely the AI industry’s foundational governance commitments were defined from the start — and how much of what was treated as principled agreement was actually a handshake between people who later became competitors. That’s the governance precedent being set here, and it matters well beyond the specific parties involved. (Musk v. Altman week 1)
On the Bigger Picture
The Q1 earnings from Alphabet and Amazon came with a number that deserved more attention than it received. Nearly half of Alphabet’s record $62.6 billion quarterly profit — about $28.7 billion — came not from search, cloud, or any operating business, but from the company marking up the value of its Anthropic stake after a new funding round set a higher price. Amazon disclosed a similar figure: $16.8 billion in pre-tax gains from Anthropic, more than half of its pre-tax income for the quarter. The accounting is uncontroversial under GAAP. The business signal is worth sitting with: the companies claiming to lead the AI era are booking much of their “AI profit” by investing in Anthropic and then benefiting when their own continued investment pushes Anthropic’s valuation higher — a structure where they can influence the value of the asset they’re marking to market. (Half of Google’s and Amazon’s blowout ‘AI profits’ came from Anthropic)
Meta’s quarter told a version of the same story about AI investment and operational reality diverging. The company lost 20 million daily active users — attributing the decline to internet disruptions tied to the Hormuz conflict — while simultaneously raising its 2026 capex guidance to $125–145 billion and reporting 33% revenue growth. The pattern is becoming familiar across Big Tech: AI is superb for the income statement and complicated for everything else — user engagement, workforce morale, public trust, and the household finances of the customers whose spending the whole system ultimately depends on. (Meta lost 20 million users last quarter)
The Fortune piece on American household margin compression is the one I’d flag as context for all of it. Framed as a P&L analysis of the average U.S. household, it argues that a combination of Hormuz-driven cost increases and AI-driven hiring freezes has compressed household discretionary income by 81% in a single month — producing the consumer sentiment collapse that showed up in the Michigan survey. The companies cutting headcount and freezing hiring to fund their AI buildout are, in aggregate, squeezing the customers whose spending their next phase of growth depends on. That dynamic doesn’t resolve itself. (The American household just took an 81% margin cut)
Two infrastructure-level pieces round out last week's reading. LlamaIndex's CEO made the case that the scaffolding era of AI development is over — that as models develop stronger native context reasoning and tool-use, the elaborate orchestration frameworks that defined early agentic development are collapsing, and that the new competitive moat is the quality and modularity of context retrieval, not the orchestration layer above it. For enterprise IT leaders evaluating AI stack investments, this is a real signal: build for context portability and model agnosticism, not for a single orchestration vendor. (The scaffolding era is over. LlamaIndex says context is the new moat)

And Replit's CEO made the case for staying independent as Cursor was reportedly in talks to be acquired by SpaceX for $60 billion — pointing to positive gross margins, 300% net revenue retention, and a fundamentally different customer base of non-technical builders. The consolidation of the AI coding tool market is moving fast, and the question of who controls access for non-technical builders — the actual majority of the future knowledge workforce — is worth watching more carefully than the valuation headlines suggest. (Replit's Amjad Masad on the Cursor deal, fighting Apple, and why he'd rather not sell)
For me, last week marked the moment when the gap between the AI economy and the real economy became impossible to treat as a leadership abstraction. The record equity prices and the all-time-low consumer sentiment aren't contradictions — they're two measurements of the same system, taken from different vantage points. Organizations are compressing household discretionary income through AI-driven hiring freezes, building financial statements that book paper gains on private AI stakes as operating profit, and restructuring workforces in ways that are quietly removing the entry rungs from the career ladder.
None of this is irrational at the firm level. All of it is, in aggregate, producing an economy that is holding its breath... waiting to find out whether the productivity gains that were supposed to justify all of it arrive before the social and political costs do.
That’s it for this week’s BeAIReady brief!
If you appreciate the depth of reporting and how I connect the dots, please like, share this post, and subscribe (or share the Brief with a friend!). Thanks!
~erick


