The BeAIReady Brief | Week 17
April 20–26 | 90% of Executives Say AI Isn't Moving the Needle and Boards Are Firing Them for It Anyway, Your Laid-Off Colleagues' Slack Messages Are Now Training Data
The Atlanta Fed’s GDP tracker ended last week projecting Q1 growth at around 1.3% — a figure that would command attention on its own. But that same week, hyperscalers also revised their 2026 AI capital expenditure estimates up to $667 billion — a 62% jump over last year.
It seems two things are happening at once: the broader economy is tightening, and the biggest technology companies are spending faster than ever on AI. That paradox ran through nearly everything I read last week — from boardrooms replacing their CEOs to private equity firms writing nine-figure checks to force AI into portfolio companies. The pressure to act on AI is intensifying even as the evidence that it’s working at scale remains thin.
Here’s what I was reading.
This week’s coverage:
The Leaders Who Can’t Wait and the Leaders Who Won’t Stay
Boards are replacing CEOs for not moving fast enough on AI — even as 90% of executives say AI hasn’t changed their operations at all.
The Workforce Is Being Restructured, and So Is Its Data
Microsoft and Meta cut tens of thousands of workers while spending hundreds of billions on AI — and those workers’ Slack messages and emails may end up as training data.
The End of the Software License and Who’s Coming to Replace It
A new HBR framework says the era of standardized enterprise software is ending — and a massive private equity play shows who’s planning to profit from what replaces it.
The Platform Wars Go Private and Agentic
Google Cloud Next ‘26 declared the agentic era with 260 announcements; the real story is that enterprise AI infrastructure is moving out of public clouds and into private, controlled environments.
On the Bigger Picture
AI is finding security bugs in Firefox at scale, GPT-5.5 is already outperforming GPT-5.4, and the Justice Department can’t quite decide what it thinks about Anthropic.
The Leaders Who Can’t Wait and the Leaders Who Won’t Stay
Last week produced another round of high-profile CEO departures, and the pattern is hard to miss. In 2025, companies in the S&P 1500 named 168 new CEOs — the highest total in more than 15 years. Adobe’s longtime CEO stepped down after 18 years under investor pressure to deliver on AI. At Walmart, Doug McMillon cited the urgency of AI transformation as part of why he stepped aside. Now Tim Cook is handing Apple to John Ternus. (The AI era is turning Corporate America into a CEO churn machine)
The core tension here: boards are replacing leaders for not moving fast enough on AI, even though the evidence that AI is actually delivering results remains remarkably thin. A survey of 6,000 executives cited in Fortune last week found that 90% say AI has had no impact on employment or productivity over the past three years — yet those same executives forecast it will increase productivity by 1.5% over the next four years. Boards aren’t firing people because AI isn’t working. They’re firing them because they’re not performing AI fast enough in front of investors who believe it should already be working. (Tim Cook’s exit is part of a CEO reckoning sweeping Corporate America)
There’s a useful distinction in the Fortune piece worth carrying into your own organization: transformation is not a turnaround. In a turnaround, you bring in an outsider to blow things up. In a transformation, you want to accelerate change without destroying what you’ve built. The companies making the most visible CEO moves last week — Apple, Walmart, Coca-Cola — are handing the wheel to insiders who know the business. The bet isn’t on a new vision; it’s on someone who can execute the existing one faster. If you’re an IT or business leader watching this, the takeaway isn’t about hiring. It’s about velocity: how quickly can you demonstrate measurable AI progress to whoever is watching your work?
The Workforce Is Being Restructured, and So Is Its Data
Meta confirmed last week it will cut roughly 8,000 employees — about 10% of its workforce — while closing 6,000 open roles and spending between $115B and $135B on AI this year. The same week, Microsoft announced voluntary buyouts for about 8,000 employees, targeting those whose age plus years of service equals 70 or more. Microsoft’s CEO has said AI already writes 30% of the company’s code. Its AI chief said AI will be able to replace most white-collar work within 12 to 18 months. Block, Amazon, and Oracle have made similar moves in recent months. (Microsoft and Meta announce large staff reductions as they spend big on AI)
The math being presented here is direct: AI spend goes up, headcount goes down. The productivity argument makes sense on paper. What the quarterly earnings presentations don’t include is the operational risk of moving this fast — the institutional knowledge that walks out the door, the morale impact on people who remain, or the technical debt that accumulates when AI-generated code replaces engineers who understood why decisions were made.
There’s a darker layer to this story that got less attention last week. A Forbes piece revealed that a startup called SimpleClosure is helping defunct companies sell their internal digital footprints — Slack archives, emails, Jira tickets, code repositories — to AI labs as training data for agents. The demand, according to their CEO, is “insane.” A competitor called Sunset is in the same business. The implication is striking: the workplace communications of companies that didn’t survive are becoming the raw material for AI agents designed to do that work in the future. The privacy questions are significant — employees didn’t consent to their messages being repurposed — and regulatory scrutiny is beginning. (AI’s New Training Data: Your Old Work Slacks And Emails)
For organizations building AI adoption strategies, this is worth pausing on. The data your organization generates every day — how people communicate, how decisions get made, what questions get asked — is increasingly what makes AI systems valuable. How you govern that data, and who has rights to it, isn’t just a legal department concern anymore. It’s a strategic one.
The End of the Software License and Who’s Coming to Replace It
HBR published a piece last week that named something I’ve been watching from my own work with clients: the economic logic that made standardized enterprise software the default choice is breaking down. Enterprise spending on generative AI applications jumped from $1.7 billion in 2023 to $37 billion in 2025. At the same time, public SaaS valuations have compressed sharply, with many leading vendors trading 30 to 60% below their 2021 peaks. The framework the piece proposes gives organizations four paths forward: build your own AI-driven systems, configure flexible platforms, collaborate with vendors to create tailored solutions, or simply buy the outcome and let someone else run it. The strategic question every IT and business leader now faces is which workflows genuinely need to be owned, and which can be delegated. (The End of One-Size-Fits-All Enterprise Software)
This isn’t just theoretical. Private equity figured out the same thing — and made a $5.5 billion bet on it. The Financial Times reported last week that OpenAI is in talks to invest $1.5 billion in a joint venture called DeployCo, alongside TPG, Bain Capital, Advent International, and Brookfield, which would contribute another $4 billion. The venture’s job: send “forward-deployed engineers” armed with OpenAI’s tools into portfolio companies to drive AI adoption and boost margins. Anthropic is in parallel talks with Blackstone, Hellman & Friedman, and General Atlantic for essentially the same model. (Private equity courts OpenAI and Anthropic)
The forward-deployed engineer model isn’t new — Palantir has used it for years. What’s new is the scale and the urgency. For organizations still figuring out where to start with AI, this is a clear signal: the companies that will profit most from AI adoption are betting on embedding themselves directly in your operations, not selling you a license and walking away. If you don’t have your own AI strategy and roadmap, someone else will hand you one.
The Platform Wars Go Private and Agentic
Google Cloud Next ‘26 took place in Las Vegas last week, and the sheer volume tells a story: Google announced 260 updates over the course of the event. The headline was the Gemini Enterprise Agent Platform — a full-stack system for building, deploying, governing, and running AI agents at scale — alongside a new eighth-generation TPU split into two specialized chips, one for training and one for inference. Google also announced a $750 million fund to help its 120,000-member partner ecosystem build and deploy agents. Seventy-five percent of Google Cloud customers are already using its AI products; the pitch is that the next competitive battleground isn’t compute access, it’s which platform becomes the operating system for your agent workforce. (Welcome to Google Cloud Next ‘26)
A separate story last week illustrated where this is heading for regulated industries. A company called Cirrascale — working with Google — is now offering Gemini as a fully private, air-gapped appliance that runs on a single server inside a customer’s own facility. The model lives entirely in volatile memory: cut the power, and it’s gone — no data ever leaves, no model weights ever persist on disk. For organizations in financial services, healthcare, or government that have been sitting on the AI sidelines because of data sovereignty or compliance requirements, this changes the calculus. The binary choice between “use the best AI and expose your data” versus “host an inferior open-source model and keep control” is starting to close. (Google’s Gemini can now run on a single air-gapped server — and vanish when you pull the plug)
The AWS Bedrock story rounds this out. At the MCP Summit in New York last week, AWS’s Luca Chang explained how Amazon’s contributions to the Model Context Protocol — the emerging standard for connecting AI agents to enterprise tools and data — grew directly out of customer gaps. MCP is fast becoming the connective tissue of the enterprise agent layer: a standard protocol that lets AI agents plug into your existing systems. Organizations that start building their agent workflows on MCP now will have a head start when the next generation of tools arrives — and it will arrive quickly. (How AWS Bedrock is shaping Model Context Protocol)
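If "standard protocol" sounds abstract, it helps to see how little is under the hood: MCP messages are plain JSON-RPC 2.0, with standard methods like `tools/list` and `tools/call`. As a rough sketch — the method names follow the published MCP spec, but the tool name and arguments below are hypothetical, just to show the shape — an agent invoking an internal tool on an MCP server sends something like this:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request. Because MCP is plain JSON-RPC 2.0,
    any system that can exchange JSON over a transport (stdio, HTTP) can
    expose its tools to an agent this way."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical example: an agent asking an internal CRM server for a record.
request = make_tool_call(1, "lookup_customer", {"customer_id": "C-1042"})
parsed = json.loads(request)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # lookup_customer
```

That simplicity is the point: wrapping an existing internal system as an MCP server is a small engineering lift, which is why the protocol is spreading so quickly across vendors.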
On the Bigger Picture
The most practically significant piece I read last week wasn’t about strategy or spending — it was about security. Mozilla announced that Firefox 150 includes fixes for 271 vulnerabilities identified using early access to Anthropic’s Mythos model. What matters isn’t just the number. It’s what Mozilla said about the experience: it required significant resources and discipline to manage the volume of bugs the AI could surface. The uncomfortable implication is that if AI can find 271 security flaws in a major browser in weeks, the same capability will be in attackers’ hands shortly. If your organization runs Microsoft 365, SharePoint, or any aging enterprise application stack, this is worth a direct conversation with your security team. (Mozilla Used Anthropic’s Mythos to Find and Fix 271 Bugs in Firefox)
The model race kept moving. Lovable, the AI development platform, published benchmark results last week from its early access to GPT-5.5. On the hardest tasks, GPT-5.5 outperformed GPT-5.4 by 12.5%, made 23% fewer tool calls, and cost about 15% less per session. For teams evaluating which models to build processes or products on: the improvement curve is still steep, and locking into any single implementation without a plan to migrate is a growing risk. (Testing GPT-5.5 in early access: what we are seeing so far)
The Anthropic regulatory situation took another unusual turn. The Justice Department asked a California judge to pause its own appeal of the Anthropic case — a procedural move driven by the fact that federal agencies are simultaneously trying to get access to Anthropic’s Mythos model for government use. Trump told CNBC last week that Anthropic executives are “very smart people” and that a deal is “possible.” The government is, in effect, both suing Anthropic and trying to use its AI. This probably won’t be the last time an enterprise leader faces a version of the same dynamic: the vendor you’re most concerned about is also the one you can’t afford not to use. (Justice Department asks California judge to pause its Anthropic appeal)
What last week made clear to me is that the pressure to act on AI has become almost completely decoupled from the evidence that acting is working. Boards are removing leaders who can’t show velocity. Companies are cutting people while building infrastructure. Private equity is mobilizing billions to force AI deployment into organizations that would otherwise move more slowly. The urgency is real — but it’s being driven by investor narrative and competitive anxiety more than by outcomes anyone can actually measure. That’s not an argument to slow down. It’s an argument to be intentional: clear about what you’re measuring, honest about what problem you’re solving, and disciplined enough to separate the speed your organization needs from the speed someone else wants you to perform.
That’s it for this week’s BeAIReady brief!
If you appreciate the depth of reporting and how I connect the dots, please like, share this post, and subscribe (or share the Brief with a friend!). Thanks!
~erick


