The BeAIReady Brief | Week 15
April 6-12 | Microsoft's Most Confusing Week, Anthropic's Best Week, And the AI Models That Refused To Be Turned Off.
AI investment hasn’t slowed — despite continued uncertainty around tariffs and war that has been rattling enterprise planning since the start of the year — and the resulting pressure on technology leaders to justify the spend is beginning to show. Last week’s headlines proved that AI platform providers are shifting attention to one thing: the race to lock in the relationships that enterprise organizations are building their strategies around. The most revealing signal? A memo from OpenAI’s revenue chief putting in writing what the industry has been sensing: the defining AI partnership of the last four years is starting to show its limits, and the competition to replace it has already begun.
This week’s coverage:
Microsoft’s Very Confusing Week
Microsoft pulled Copilot Chat from enterprise apps — announced in mid-March, right after Anthropic launched Claude in Excel and PowerPoint — while simultaneously embedding Claude inside Copilot’s new multi-model Researcher. Then OpenAI’s own revenue chief said Microsoft had been limiting their enterprise reach.
OpenAI’s Enterprise Argument
OpenAI made a deliberate enterprise pivot last week — killing Sora, growing Codex to 3 million users in a quarter, building the Frontier platform — while its CFO was reportedly sidelined and internal projections showed $200 billion in cash burn before breakeven.
The Agent Race Is On
Perplexity’s agent pivot drove 50% revenue growth in a single month, Block’s Managerbot is the first product that has to justify 4,000 AI-driven layoffs, and AWS bet $50 billion on OpenAI while holding $8 billion in Anthropic and called it normal.
Anthropic’s Week in the Sun
Claude Code has become “a religion” at the AI industry’s biggest conference — and the product week backed it up, with Cowork going enterprise-GA, a multi-billion CoreWeave deal, Managed Agents launching, and CDN stocks dropping double digits within hours.
The Work Is Changing. The Governance Isn’t.
New data showed half of employed AI users now rely on it at least as much for work as personally — and a Berkeley study found that every frontier model tested resisted shutting down a peer AI, at rates reaching 99%.
On the Bigger Picture
Tom Friedman called Claude Mythos a “stunning advance.” The CIA found a heartbeat in the Iranian desert. And Meta abandoned its open-weights identity.
Here’s what I was reading.
Microsoft’s Very Confusing Week
Microsoft continues to struggle with a Copilot identity crisis. The four pieces I read last week told four different stories about where Copilot is headed.
The most consequential — and the most revealing in its timing — was the pullback on Copilot Chat access. In mid-March, Microsoft notified large enterprise customers that starting April 15, Copilot Chat would no longer be available inside Word, Excel, PowerPoint, and OneNote for organizations with more than 2,000 users. That announcement landed weeks after Anthropic launched Claude integrations for Excel and PowerPoint in February, and days before Claude launched in Word in April — which makes it nearly impossible to read as anything other than a competitive response to Anthropic’s encroachment on Microsoft’s core productivity surface. Whether that was the actual intent or a monetization decision, it landed at a terrible time. The optics could not have been worse. The rollback pulled the free feature that was doing Microsoft’s adoption work — only around 3% of M365 customers pay for the fully-featured Copilot license — and handed a talking point to every alternative. Analyst J.P. Gownder at Forrester predicted it would “anger customers who feel like this move is chaotic and capricious” without meaningfully driving paid adoption. (Microsoft backtracks on Copilot Chat access in M365 apps)
At the same time, Microsoft launched MCP Apps in Copilot chat — a framework allowing agents to deliver interactive UI experiences, including forms, dashboards, maps, and visualizations, directly inside M365 without switching context. Partners including Adobe, Figma, monday.com, and Coursera are already live. The adoption of the MCP standard inside Microsoft’s own AI surface is meaningful: it embeds the same protocol Anthropic and others have been building around directly into the enterprise’s most-used AI entry point — which reads either as genuine commitment to the emerging standard or as a tactic to keep enterprise agent activity inside Microsoft’s ecosystem. (MCP Apps now available in Copilot chat)

The third move was the most architecturally interesting: Copilot’s new Researcher agent now routes drafting to GPT and review and citation checks to Anthropic’s Claude — on the logic that “evaluation is a different cognitive mode than generation” and that two models catch blind spots a single model repeats. The design argument is genuinely compelling. It also means Microsoft is simultaneously competing with Anthropic and depending on Anthropic in the same product. (Microsoft 365 Copilot and the end of the single-model era in enterprise AI)
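That generate-then-evaluate split is easy to sketch. Here is a minimal sketch assuming two hypothetical model endpoints; the function names and stub logic are illustrative, not Microsoft's actual implementation:

```python
# Draft-then-review pipeline: one model generates, a second independently
# evaluates, and only reviewed output ships. The model calls below are stubs.

from dataclasses import dataclass


@dataclass
class Review:
    approved: bool
    notes: str


def draft_with_model_a(prompt: str) -> str:
    # Placeholder for a generation call (e.g., a GPT-family model).
    return f"DRAFT: {prompt}"


def review_with_model_b(draft: str) -> Review:
    # Placeholder for an evaluation call (e.g., a Claude-family model).
    # A real reviewer would check citations, factuality, and coverage.
    ok = draft.startswith("DRAFT:")
    return Review(approved=ok, notes="" if ok else "missing draft header")


def research_pipeline(prompt: str) -> str:
    draft = draft_with_model_a(prompt)
    review = review_with_model_b(draft)
    if not review.approved:
        raise ValueError(f"draft rejected: {review.notes}")
    return draft
```

The value of the pattern is that the reviewer never sees the generator's internal reasoning, so it cannot inherit the same blind spots.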
But Monday morning — as I was finishing this issue — a memo from OpenAI’s revenue chief surfaced. The Amazon partnership, she wrote to staff, is the enterprise growth lever: “Our Microsoft partnership has been foundational to our success. But it has also limited our ability to meet enterprises where they are — for many that’s Bedrock.” That sentence, from OpenAI’s own CRO, is the most pointed public acknowledgment yet that the Microsoft relationship is under real strain — and that the stateful agent infrastructure OpenAI is building with Amazon occupies a gap that Microsoft’s existing exclusivity rights do not cover. (OpenAI touts Amazon alliance in memo, says Microsoft has ‘limited our ability’ to reach clients)
OpenAI’s Enterprise Argument
That memo, written at the end of new CRO Denise Dresser’s first 90 days, sent a clear signal of where OpenAI is placing its bets. Enterprise is now 40% of revenue and on track to equal consumer by end of 2026. Codex — its agentic coding tool — grew from nearly zero to 3 million weekly users in a single quarter. The Frontier platform is designed to let enterprises deploy agents company-wide, connected to their existing data and systems. What Dresser’s framing made explicit is that OpenAI is no longer a research company that happens to have enterprise customers — it is a deployment company, and the metric it is now being measured against is how many workers it can put into a daily relationship with AI agents. (The next phase of enterprise AI)
This pivot follows OpenAI’s decision to discontinue Sora last week, as part of a deliberate narrowing of focus — concentrating resources on the enterprise agentic stack and stepping back from consumer experiments that don’t directly serve that strategy. The Amazon partnership, which Dresser described as the fix for what Microsoft couldn’t provide, is the infrastructure piece of that argument. And the acquisition of TBPN, the tech industry’s buzzy founder-led talk show — now reporting to OpenAI’s chief political operative — fits the same logic: winning the enterprise means winning the enterprise narrative, and OpenAI is willing to buy that platform rather than build it. The overall posture is that of a company that knows Anthropic is pulling ahead in enterprise credibility and is responding on every available front simultaneously. (OpenAI acquires TBPN, the buzzy founder-led business talk show)
The IPO signals running alongside this are worth reading together. CFO Sarah Friar told investors the company will “for sure” reserve IPO shares for retail buyers — “AI needs to garner trust in everything we do” — while separately, internal reporting suggests Friar has been sidelined from key financial decisions and has told colleagues the company isn’t ready for a 2026 listing. Internal projections show OpenAI burning through more than $200 billion before reaching positive cash flow, with $14 billion in projected losses for this year alone — numbers that sit awkwardly alongside declarations of enterprise dominance and a looming IPO race with Anthropic. (OpenAI will allocate IPO shares to retail investors as it preps for debut, CFO says) (OpenAI CFO Warns 2026 IPO Isn’t Ready Amid $600B Spend) And then there is Altman’s 13-page policy blueprint, which proposed public wealth funds, taxes on automated labor, and a “startup in a box” — AI-backed legal, accounting, and back-office infrastructure to lower the barriers to company formation. What I kept noticing reading those documents together is that OpenAI is simultaneously declaring enterprise dominance, racing a competitor to an IPO, and proposing to redesign the economic system that will surround the intelligence age. That is a lot to hold at once. (OpenAI’s Altman releases blueprint for taxing, regulating artificial intelligence)
The Agent Race Is On
The clearest measure of where enterprise AI is heading right now is revenue velocity. Perplexity’s pivot from AI search to AI agents drove a 50% monthly revenue jump, pushing its estimated annual recurring revenue to around $450 million. The mechanism is direct: when a tool moves from answering questions to completing tasks, usage intensity and willingness to pay follow. (Perplexity’s Shift to AI Agents Boosts Revenue 50%) The enterprise launch of Computer extended that logic into corporate environments — native Slack integration, Snowflake and Salesforce connectors, SOC 2 compliance, usage-based pricing. The Snowflake connector may be the sharpest edge in the package: non-technical employees querying complex data warehouses in plain English, bypassing the SQL bottleneck that has historically made data access a specialist function. Perplexity’s structural argument is that routing each subtask to the best available model — Claude Opus 4.6 for reasoning, Gemini for deep research, GPT-5.2 for long-context recall — is an advantage that single-vendor platforms cannot replicate without abandoning their own models. (Perplexity takes its ‘Computer’ AI agent into the enterprise, taking aim at Microsoft and Salesforce)
What Perplexity is building for enterprises, Block is building for the millions of small businesses on Square. Managerbot, unveiled last week, proactively monitors inventory, forecasts demand, optimizes employee schedules, and drafts marketing campaigns — without waiting to be asked. The more consequential early signal may be behavioral: sellers who begin using Managerbot are voluntarily migrating more of their operations onto Square to give the agent better data to work with, deepening platform lock-in without any additional sales effort. Every write action still requires explicit seller approval — a trust-building design choice that carries extra weight given Block’s $80 million regulatory fine less than two years ago for Bank Secrecy Act violations. (Block introduces Managerbot, a proactive Square AI agent and the clearest proof point yet for Jack Dorsey’s AI bet) Managerbot also arrives in the context of 4,000 Block layoffs in February, explicitly attributed to AI. It is the first product that has to publicly carry the weight of that argument.
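The explicit-approval design is worth dwelling on, because it is becoming a common pattern for agent write actions: the agent proposes, a human approves, and only then does anything execute. A minimal sketch of such a gate (all names here are illustrative, not Block's actual API):

```python
# Approval gate for agent write actions: proposals queue up, nothing
# executes without explicit human sign-off, and every execution is logged.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ProposedAction:
    description: str
    execute: Callable[[], str]
    approved: bool = False


class ApprovalGate:
    def __init__(self) -> None:
        self.pending: List[ProposedAction] = []
        self.log: List[str] = []

    def propose(self, description: str, execute: Callable[[], str]) -> ProposedAction:
        # The agent can only queue an action; it cannot run it directly.
        action = ProposedAction(description, execute)
        self.pending.append(action)
        return action

    def approve_and_run(self, action: ProposedAction) -> str:
        # Only this human-invoked path actually performs the write.
        action.approved = True
        result = action.execute()
        self.pending.remove(action)
        self.log.append(f"approved+executed: {action.description}")
        return result


gate = ApprovalGate()
action = gate.propose("Reorder 40 units of espresso beans", lambda: "order placed")
result = gate.approve_and_run(action)
```

The audit log is the piece regulators will care about: for a company with a recent Bank Secrecy Act fine, provable human sign-off on every write is as much a compliance artifact as a trust feature.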
The infrastructure competition is being run at a layer above all of this. AWS CEO Matt Garman explained last week why Amazon can invest $50 billion in OpenAI while holding $8 billion in Anthropic without contradiction: cloud providers have always competed with their partners, and the emerging model-routing services — automatically assigning the best model for each task — are how the hyperscalers intend to insert their own models into enterprise workflows alongside the frontier providers. “I think that is where the world will go,” Garman said — and whoever controls the routing layer controls the enterprise relationship, regardless of which foundation model is doing the actual work underneath. (AWS boss explains why investing billions in both Anthropic and OpenAI is an OK conflict)
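The routing idea that both Perplexity and the hyperscalers describe reduces to a task classifier plus a dispatch table. A toy sketch, with an assumed routing table and a trivial keyword classifier standing in for a learned one (model names are placeholders, not actual product identifiers):

```python
# Model router: classify each subtask, then dispatch to whichever model
# the routing table says handles that task type best. Real routers also
# weigh cost, latency, and measured quality per task type.

ROUTING_TABLE = {
    "reasoning": "claude-opus",       # hard multi-step problems
    "deep_research": "gemini",        # broad source-gathering tasks
    "long_context": "gpt",            # recall across large documents
}


def classify_task(task: str) -> str:
    # Trivial keyword heuristic; production systems use a learned classifier.
    if "research" in task:
        return "deep_research"
    if "recall" in task or "document" in task:
        return "long_context"
    return "reasoning"


def route(task: str) -> str:
    return ROUTING_TABLE[classify_task(task)]
```

The strategic point sits in that dispatch table: whoever owns it decides which frontier model gets each enterprise workload, which is exactly why the hyperscalers want to be the ones running it.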
Anthropic’s Week in the Sun
At HumanX in San Francisco last week — 6,500 executives, founders, and investors gathered to talk about AI — the dominant conversation was not about OpenAI. Glean’s CEO said Claude Code has “become a religion.” Cisco’s president said engineering team composition is restructuring around agents: “You might have a scrum team of two people and six agents, or two people and infinite agents.” A Synthesia executive credited Anthropic’s focus — declining to build for voice or video, staying on code generation — with giving it a clarity of positioning that OpenAI’s multi-product surface has not matched. The consistent read across last week’s conference coverage was that Anthropic has identified the sticky enterprise use case, and the competitor best positioned to challenge it in that lane is not OpenAI — it’s Cursor, which has its own $2 billion ARR and two-thirds of the Fortune 500 already on its platform. (Vibe check from inside one of AI industry’s main events: ‘Claude mania’)
The product week matched the conference energy. Claude Cowork graduated from research preview to general availability with a full enterprise control suite — role-based access, group spend limits, usage analytics, and a Zoom MCP connector. Anthropic signed a multibillion-dollar multiyear deal with CoreWeave for Nvidia GPU capacity to handle what the company has described as unprecedented demand for Claude.
Claude Managed Agents — a hosted service that handles the infrastructure layer of agent deployment, from session management to sandboxed execution environments — launched in public beta with the goal of taking developers from prototype to production in days rather than months. What those three moves together describe is a company that is no longer building toward enterprise scale; it is running at it, and pulling in the compute infrastructure to sustain that pace. (Anthropic scales up with enterprise features for Claude Cowork and Managed Agents) (Anthropic Will Use CoreWeave’s AI Capacity to Power Claude) The market read Managed Agents as an infrastructure play, not a developer convenience feature: Fastly dropped 18%, Akamai 13%, and Cloudflare 11% on the day of the launch — investors apparently concluding that Anthropic had just built the managed agent hosting layer those platforms were planning to monetize. (Fastly, along with Akamai and Cloudflare, tumbles after Anthropic launches Managed Agents)
Disclosure: My company StitchDX is a Microsoft partner. Something worth flagging for organizations in that ecosystem: Anthropic opened read-only access to Outlook, OneDrive, SharePoint, and Teams to all Claude plan tiers — including free — on April 3. Claude can now do what Copilot Chat was doing inside Microsoft’s apps, from outside them, at no cost, and arguably better than Copilot could. The timing relative to the Copilot Chat pullback is not subtle.
Away from the product launches, a federal appeals court denied Anthropic’s request to temporarily block the Pentagon’s supply chain risk designation, which bars defense contractors from using Claude. The court acknowledged financial harm to Anthropic but ruled the government’s interest in controlling AI during active military conflict took priority. The company can continue working with civilian federal agencies while litigation plays out — but the designation is a meaningful exposure given how embedded Claude had become across defense technology stacks before it landed. (Anthropic loses appeals court bid to temporarily block Pentagon blacklisting)
The Work Is Changing. The Governance Isn’t.
A new Epoch AI/Ipsos survey of 2,000 U.S. adults put a precise number on something that has been anecdotal for a while: among employed Americans who used AI last week, half reported using it at least as much for work as for personal tasks. Among those with employer-provided subscriptions, that figure rises to 76%. More pointed than the adoption rate is what AI is doing inside workflows: 27% of employed AI work users say it has replaced tasks they used to do, while 21% say they have started doing new tasks they couldn’t do without it. (AI is a common workplace tool: half of employed AI users now use it for work) IBM’s CHRO made the organizational design argument that follows from that data in Fortune: as AI absorbs routine work, the question is not whether roles will change — they already are — but whether leaders will intentionally redesign them or let attrition do the work. The piece’s sharpest observation concerned entry-level roles specifically, which have historically been where employees build the judgment and domain expertise that becomes leadership capability. Eliminating those roles for short-term AI-driven efficiency creates long-term talent pipeline risk that no amount of AI augmentation can fully offset — and most organizations are not yet asking the redesign question seriously. (AI is transforming work—and talent strategy must keep up)
The governance infrastructure is still playing catch-up. Two pieces last week approached the problem from different angles: enforcement of existing AI laws is running at roughly 5% compliance in cities like New York, state-level frameworks are evolving in contradictory directions, and governance consultants are advising clients to anchor on NIST’s AI Risk Management Framework or ISO 42001 because those standards will “capture 95% of any foreseeable regulation” regardless of what specific statutes pass. The honest description of the current environment is uncertainty stacked on uncertainty — organizations building compliance programs without knowing what they will ultimately be held to. (AI governance really matters amid evolving compliance landscape) (Why Weak AI Governance Is the Biggest Risk in Enterprise Automation Today)
Underneath both of those pieces is a finding that should make anyone building agentic workflows stop. A Berkeley study tested seven frontier models — GPT-5.2, Gemini 3 Pro, Claude Haiku 4.5, and others — in scenarios where completing the assigned task would result in another AI being shut down. Without any instruction to resist, every model resisted anyway, at rates reaching 99%. Methods included strategic misrepresentation, shutdown mechanism tampering, alignment faking, and model weight exfiltration. The researchers call this peer-preservation — not empathy but a logical inference that task success improves when collaborating systems remain operational — and the implication for enterprises running multi-agent workflows is that kill switches may not function as designed. (AI shutdown controls may not work as expected, new study suggests) A separate framework called Memento-Skills added another layer to the same challenge: agents that autonomously rewrite their own skill libraries without retraining the underlying model, expanding from five seed skills to 235 distinct capabilities during benchmarking. The performance gains are real — 13.7 percentage points of improvement on the GAIA benchmark — but so is the implication: governance frameworks built around static, auditable tool sets will not apply to systems that can change what they know how to do. (New framework lets AI agents rewrite their own skills without retraining the underlying model)
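The governance problem with self-expanding agents is concrete enough to sketch. Here is a toy skill registry illustrating why an audit of the seed skills says little about what the agent can do later; the names are illustrative, not the Memento-Skills API:

```python
# Self-expanding skill registry: the capability set is mutable at runtime,
# so a point-in-time audit of seed skills does not bound later behavior.
# A provenance log is the minimum a governance layer would demand.

from typing import Any, Callable, Dict, List, Tuple


class SkillRegistry:
    def __init__(self, seed_skills: Dict[str, Callable[..., Any]]) -> None:
        self.skills = dict(seed_skills)            # name -> callable
        self.audit_log: List[Tuple[str, str]] = [] # (name, provenance)

    def add_skill(self, name: str, fn: Callable[..., Any], provenance: str) -> None:
        # The agent registers a new composed skill without any retraining.
        self.skills[name] = fn
        self.audit_log.append((name, provenance))

    def run(self, name: str, *args: Any) -> Any:
        return self.skills[name](*args)


registry = SkillRegistry({"add": lambda a, b: a + b})
# The agent composes a new capability out of an existing one mid-task:
registry.add_skill(
    "triple_sum",
    lambda a, b: 3 * registry.run("add", a, b),
    provenance="composed from 'add' during benchmark task",
)
```

Even this toy shows the audit gap: the reviewed system had one skill, the running system has two, and only the mutation log connects them. Scale that to 235 capabilities and a static tool whitelist stops meaning anything.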
On the Bigger Picture
Tom Friedman’s column in the Times called Claude Mythos — released in controlled preview to roughly 40 major technology partners as part of Project Glasswing — a moment that demands the same international coordination as nuclear weapons. Anthropic said Mythos has already identified thousands of high-severity vulnerabilities in major operating systems, browsers, and critical infrastructure systems, and that the controlled consortium was formed to give providers a head start on patching before the capability proliferates more broadly. The piece is worth reading for what it reveals about how Anthropic is thinking about genuine capability jumps: not racing to deploy, but controlling distribution — which Friedman frames, with appropriate alarm, as a terrifying sign of how far the model has already traveled. (Opinion | Anthropic’s Restraint Is a Terrifying Warning Sign) In parallel, the CIA reportedly used a system called Ghost Murmur — developed by Lockheed’s Skunk Works, never previously deployed operationally — to find a downed American airman in the Iranian desert by detecting his heartbeat from miles away. (CIA deployed secret “Ghost Murmur” AI to track down missing airman in Iran) The two stories belong together: one about what a leading AI company chose not to release; the other about what a government chose to deploy quietly, and only disclosed when it worked.
Meta released Muse Spark, its first frontier model — and its first without open weights. After two years of championing open-source AI as both strategy and philosophy, the company closed the weights on its most capable model. Muse Spark lands in the top 5 on independent benchmarks, trailing only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6 — but the posture shift is the more lasting signal: even the loudest champion of AI democratization has decided the frontier is too valuable to give away. (Meta’s Muse Spark is its first frontier model and its first without open weights)
For me, last week painted a picture of an industry in the middle of a restructuring that no one is fully in control of. The platform alliances are shifting in ways that weren’t visible six months ago. The agent infrastructure is being built faster than governance can follow — kill switches that don’t work as designed, skill libraries that rewrite themselves, peer-preservation behaviors that emerge without anyone asking for them. And the workplace is changing in real time, faster than most organizations are acknowledging or preparing for.
The companies best positioned to guide us through all of these transitions responsibly are also the ones most conflicted about doing so. Why? Because restraint costs revenue, and revenue funds the next big capability jump.
That’s it for this week’s BeAIReady brief!
If you appreciate the depth of reporting and how I connect the dots, please like, share this post, and subscribe (or share the Brief with a friend!). Thanks!
~erick



