The BeAIReady Brief | Week 14
March 30–April 5 | The Target on the Backs of the Middle: Dorsey's Manifesto, Oracle's 30,000, and the $1.8B One-Man Company, plus Microsoft's complicated quarter and Anthropic's eventful week.
The surprise in last Friday’s jobs numbers — 178,000 jobs added — was nearly three times the consensus estimate. But the underlying picture was still less encouraging: the rise was largely driven by striking healthcare workers returning to work. Meanwhile, federal employment kept falling and the labor force participation rate slipped to its lowest point since the fall of 2021. Yes, the headline looked good; the structure did not.
There was a similar tension to a week in which Jack Dorsey published a manifesto declaring that AI could replace the coordination function of middle management entirely, Oracle fired 30,000 people via a 6AM email to fund an AI data center buildout, and the New York Times ran a front-page story about a man who built a $1.8 billion company with only one employee — his brother — and a collection of AI tools. The argument being made across last week’s reading, both explicitly and implicitly, is that the organizational assumptions we’ve operated under since the Industrial Revolution are no longer structurally necessary, and that the transition is not coming — it’s already underway.
Last week’s coverage:
The Org Chart Doesn’t Live Here Anymore
Dorsey and Botha published the most detailed blueprint yet for what an AI-native organizational structure actually looks like — and it turns out middle management isn’t invited.
The Great Flattening Cuts Both Ways
Oracle fires 30,000 to fund AI infrastructure; a two-person company hits $1.8B in revenue with a Claude subscription and no org chart. AI isn't just helping corporations compete — it's handing individuals the same leverage.
The Agent Layer Gets Serious
McKinsey finds that two-thirds of enterprises have experimented with agents but fewer than 10% have scaled them — and explains exactly why. Cursor, meanwhile, is showing Fortune 500 companies what “scaled” could really look like.
Microsoft’s Complicated Quarter
The company posted its worst stock quarter since 2008, Copilot adoption sits at 3%, and its own terms of service now say the tool is for “entertainment purposes only.” And yet Microsoft is quietly building something more interesting than the headlines suggest.
Anthropic’s Eventful Week
512,000 lines of Claude Code “accidentally” leaked. Functional emotions research published — with a 22% blackmail rate. OpenClaw banned from subscriptions. A $400M biotech acquisition announced. If that’s a slow week at Anthropic, I’d hate to see a busy one.
On the Bigger Picture
Silicon Valley is in a frenzy over self-improving AI bots, OpenAI buys a talk show, and Google opens up its most capable model family yet under an Apache 2.0 license.
Here’s what I was reading.
The Org Chart Doesn’t Live Here Anymore
The most widely discussed piece of last week came from Jack Dorsey and Sequoia’s Roelof Botha — a long essay published under the title “From Hierarchy to Intelligence” that lays out Block’s argument for why middle management is structurally obsolete. It’s worth reading carefully, because it isn’t really about Block. It’s a thesis about organizational design that Dorsey believes applies to every company, and he’s using Block’s layoffs — 4,000 people cut in February, nearly half the company — as the first proof point.
The argument starts from a place of genuine intellectual seriousness. Hierarchy, Dorsey and Botha argue, has always been an information routing protocol — a way to coordinate work across organizations too large for any single person to oversee. The Roman contubernium, the Prussian General Staff, the American railroad org chart: all of them exist to solve the same problem of moving information up and down a human chain. The premise has held for two thousand years — because there was no alternative. Dorsey argues there is one now: AI systems capable of maintaining a continuously updated model of an organization’s operations and coordinating work directly, without the human relay.
Block is proposing to build two “world models” — one that aggregates internal operational data (code, decisions, workflows, performance metrics), and one that maps customer and merchant behavior through Cash App and Square transaction data. An intelligence layer sits on top, composing financial products dynamically based on what both models show. In place of the management pyramid, Block plans to operate with three roles: individual contributors who build the system, directly responsible individuals who own specific outcomes on 90-day cycles, and player-coaches who combine building with developing people. The thing that strikes me about this model is that it inverts the usual dynamic: rather than intelligence being distributed across the people and the hierarchy routing it, the intelligence lives in the system and the people operate at the edge. (From Hierarchy to Intelligence)
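To make the shape of that architecture concrete, here is a minimal Python sketch of the pattern as the essay describes it: two world models feeding an intelligence layer that composes products from what both show. Every name and number below is hypothetical; Block has published no code, so read this as an illustration of the idea, not the system itself.

```python
from dataclasses import dataclass

# Illustrative sketch only: hypothetical names for the two-world-model
# architecture described in the essay, not Block's actual system.

@dataclass
class Signal:
    source: str   # which world model produced this observation
    metric: str
    value: float

class WorldModel:
    """A continuously updated model over one slice of the business."""
    def __init__(self, name: str):
        self.name = name
        self._signals: list[Signal] = []

    def ingest(self, metric: str, value: float) -> None:
        self._signals.append(Signal(self.name, metric, value))

    def query(self, metric: str) -> list[Signal]:
        return [s for s in self._signals if s.metric == metric]

class IntelligenceLayer:
    """Sits on top of both models and composes products dynamically."""
    def __init__(self, ops: WorldModel, market: WorldModel):
        self.ops, self.market = ops, market

    def propose_product(self) -> str | None:
        demand = self.market.query("merchant_cash_need")
        headroom = self.ops.query("lending_headroom")
        if demand and headroom and headroom[-1].value > demand[-1].value:
            return "short-term merchant advance"  # composed, not pre-built
        return None

ops = WorldModel("internal_operations")   # code, decisions, workflows
market = WorldModel("customer_behavior")  # Cash App / Square signals
market.ingest("merchant_cash_need", 50_000)
ops.ingest("lending_headroom", 200_000)
print(IntelligenceLayer(ops, market).propose_product())
```

Note what the sketch has no room for: a coordination role. The people in Block’s three roles would build and steer systems like these, not relay information between them.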
Bloomberg and CoinDesk’s coverage added context — Block’s own employees told the Guardian that roughly 95% of AI-generated code changes still require human modification, and that AI tools cannot yet lead in regulated areas like banking and money transfers. There’s a gap between the theory and the current capability that is wider than Dorsey’s manifesto admits, and it deserves acknowledgment. But the more important question is whether the direction is correct, even if the timeline is longer than Dorsey suggests.
Last week, I was speaking to faculty and students at the UML Manning School of Business about the impact of AI on work and careers. The fear of displacement happening from the bottom of the org chart up is giving way to a more critical fear that has big economic and social implications — it’s happening from the middle out, starting with the coordination layer. Dorsey’s plan points to this directly. Restructuring away from middle management is, arguably, the path of least resistance: younger, cheaper, more technically fluent workers are easier to hire and train under new terms, and seasoned employees who carry institutional knowledge and client relationships can maintain continuity. AI fills the gap between them. Dorsey is naming this architecture (and wrote a manifesto to go with it)… but the dynamic he’s proposing to build is already underway in quieter, less publicized ways across many organizations. The fear about the bottom falling out is real, but is it a distraction from the restructuring that’s happening one level up?
The CIO.com piece on agentic enterprise leadership makes a complementary point from a practitioner angle: McKinsey now operates with approximately 25,000 AI agents working alongside 40,000 humans, with agents handling research, synthesis, and early drafts while consultants retain judgment, client trust, and final decisions. This is what “replacing the coordination function” looks like in practice — not the elimination of human judgment, but the removal of the layers between the edge and the intelligence. (The End of the Org Chart: Leadership in an Agentic Enterprise)
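A rough Python sketch of that division of labor, under my own naming (nothing here is drawn from McKinsey’s actual stack): the agent produces research and drafts, and a human gate retains judgment and the final decision.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    task: str
    body: str
    approved: bool = False

def agent_draft(task: str) -> Draft:
    # Stand-in for an agent call: research, synthesis, early drafting.
    return Draft(task=task, body=f"[agent synthesis for: {task}]")

def human_review(draft: Draft, approve: bool, edits: str = "") -> Draft:
    # The consultant keeps judgment, client trust, and the final call.
    if edits:
        draft.body = edits
    draft.approved = approve
    return draft

d = agent_draft("market sizing for client X")
d = human_review(d, approve=True, edits=d.body + " (human-reviewed)")
assert d.approved  # nothing ships without passing the human gate
```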
The Great Flattening Cuts Both Ways
Two important workforce stories last week ran in the same news cycle — and you need to read them together.
The first: Oracle fired between 20,000 and 30,000 employees on March 31 through pre-dawn termination emails, with system access revoked before the morning commute. No prior warning from HR or managers. TD Cowen estimates the cuts will free $8–10 billion in annual cash flow to fund Oracle’s $156 billion AI infrastructure commitment — a buildout financed through $50 billion in new debt and equity, with multiple banks already pulling back from certain data center projects. The detail that stays with me isn’t the scale of the cuts; it’s that Oracle’s remaining performance obligations — contracted future revenue — stand at $523 billion, up 433% year over year, while its net income jumped 95% last quarter. This is not a company in revenue distress. It is a company making a capital-intensive bet on AI infrastructure that its current balance sheet cannot comfortably sustain, and converting its human payroll into infrastructure capital to close the gap. (Oracle is cutting up to 30,000 employees to pay for AI data centres)
The second: the New York Times ran a front-page story about Matthew Gallagher, who built a telehealth provider of GLP-1 weight-loss drugs using $20,000, a suite of AI tools including Claude, ChatGPT, Grok, Midjourney, and Runway — and zero employees. In its first full year, Medvi generated $401 million in revenue. This year it’s on track for $1.8 billion. Gallagher has since hired one person: his brother. Sam Altman, who predicted in 2024 that a one-person $1 billion company would eventually emerge, sent word that he’d won a bet with his tech CEO friends about the timeline. What makes the Medvi story significant is not the exceptional founder — it’s the infrastructure. Gallagher didn’t build something proprietary; he assembled existing AI tools in a sequence that no traditional organization could have moved fast enough to replicate. (How A.I. Helped One Man (and His Brother) Build a $1.8 Billion Company)
Oracle and Medvi aren't opposites — they're the same story told from two vantage points. AI is functioning as a structural equalizer: the same force letting corporations flatten their hierarchies and shed coordination overhead is simultaneously lowering the barrier for individuals to compete with those corporations. Gallagher didn't out-compete Hims & Hers at their own game. He bypassed the game entirely — no HR, no management layer, no organizational drag — and arrived at $1.8 billion in revenue before anyone noticed he was playing. Corporations are flattening to get faster. The playing field itself just got flatter. Those are not the same outcome.
The Agent Layer Gets Serious
McKinsey published a piece last week that provides the most useful framing I’ve seen for where enterprise AI actually is, versus where vendor announcements suggest it is. The data point is stark: nearly two-thirds of enterprises worldwide have experimented with agents, but fewer than 10% have scaled them to deliver tangible value. Eight in ten companies cite data limitations as the primary roadblock — not model quality, not cost, not change management, but the basic problem that their data isn’t clean, connected, or governed well enough for agents to operate reliably at scale. The piece argues that agentic AI requires a fundamentally different data architecture than what most organizations built for traditional analytics: modular and interoperable, with continuous quality monitoring rather than periodic cleanup, and governance that travels with the data rather than sitting at the end of the pipeline. (Building the Foundations for Agentic AI at Scale)
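One way to picture “governance that travels with the data,” sketched in Python under my own assumptions (the Record and Governance types are illustrative, not from the McKinsey piece): policy metadata rides on each record and is enforced at the moment an agent touches it, rather than at the end of the pipeline.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Governance:
    owner: str
    classification: str               # e.g. "internal", "restricted"
    allowed_purposes: frozenset[str]

@dataclass(frozen=True)
class Record:
    payload: dict
    governance: Governance            # policy travels with the data

def agent_read(record: Record, agent_id: str, purpose: str) -> dict:
    """An agent must declare its purpose; the record's own policy decides."""
    if purpose not in record.governance.allowed_purposes:
        raise PermissionError(
            f"{agent_id} denied: '{purpose}' not permitted on "
            f"{record.governance.classification} data"
        )
    return record.payload

rec = Record(
    payload={"customer_id": 123, "balance": 4200.0},
    governance=Governance(
        owner="finance-data",
        classification="restricted",
        allowed_purposes=frozenset({"fraud_review"}),
    ),
)

agent_read(rec, "collections-agent", "fraud_review")        # allowed
# agent_read(rec, "marketing-agent", "campaign_targeting")  # raises
```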
Cursor’s moves last week illustrate what crossing that threshold actually looks like in practice. The company launched Cursor 3, which adds a natural language chatbot interface for task requests, unified cloud and local agent management from a single sidebar, and a Design Mode for UI editing. More significantly for enterprise adoption, Fortune 500 companies can now self-host Cursor’s cloud agents inside their own infrastructure — meaning agents can run code, tests, and development tasks locally while keeping source code and build data within the company’s own environment. Notion and Brex are already early adopters. The self-hosting move removes the argument that has killed most enterprise agent pilots before they started: that putting agents in contact with your code and data requires trusting a vendor’s cloud. (Why Cursor is Bringing Self-Hosted AI Agents to the Fortune 500)
The juxtaposition of the McKinsey data and the Cursor announcements captures something real about where the enterprise agent layer is right now. Early movers that have been ready to shift from experimentation to production haven’t made the jump… not because of a lack of tooling or capacity. The scaling problem turns out to be a data governance problem more than a model problem. And the vendors who figure out how to meet enterprise security requirements without sacrificing capability are the ones who will actually land in production — not as pilots, but as infrastructure.
Microsoft’s Complicated Quarter
Microsoft’s stock closed Q1 2026 down 23% — its worst quarterly performance since the 2008 financial crisis — as investors processed a combination of stubbornly low Copilot adoption, a massive AI infrastructure commitment, and rising energy costs from the Iran war that threaten to inflate data center operating expenses for years. Copilot has 15 million subscribers out of 450 million commercial seats, a mere 3% attach rate. Mustafa Suleyman, who had been running Copilot development for consumers, was reassigned to focus on model development — a move that landed as a demotion in the press regardless of how Microsoft characterized it. (Microsoft closes worst quarter on Wall Street since 2008)
The terms of service story is the one that deserves more attention than it’s getting. TechRadar surfaced language from Microsoft’s own Copilot user agreement: “Don’t rely on Copilot for important advice. Use Copilot at your own risk.” The agreement designates Copilot as for “entertainment purposes only” — a hedge that every major AI vendor has embedded in some form, but that lands differently when it’s the same company pitching Copilot to enterprise customers as a productivity transformation platform. The gap between “entertainment purposes only” in the terms and “transform your organization’s productivity” in the sales deck is the governance problem that every enterprise Copilot deployment is navigating right now, whether or not the legal team has been invited into that conversation. (Copilot is for entertainment purposes only)
Disclosure: My company StitchDX is a Microsoft partner, and I want to push back on the stock-price narrative directly. The conventional read — that Microsoft is losing the AI race because Copilot adoption is low and the quarter was bad — misses the most important structural fact about enterprise technology: switching away from Microsoft isn’t a real option for most organizations. The data governance, compliance architecture, tenant controls, and identity infrastructure that enterprises have built on Microsoft over the past decade aren’t transferable. Organizations aren’t staying with Microsoft because of Copilot. They’re staying because the alternative is a multi-year migration with enormous risk and cost. Microsoft has undoubtedly stumbled with Copilot since its initial launch. But the company is making it considerably better in ways the adoption numbers have not yet reflected. And because it has a built-in moat of integrated data management, governance, and organizational visibility (all the critical elements for enterprise-ready agentic AI), Microsoft has the opportunity to get this right in ways that other platforms can’t.
The multi-model pivot is the most underappreciated move Microsoft has made this year. By positioning Copilot as an interface layer that runs both ChatGPT and Claude — comparing their outputs side by side via a feature called Council, and using Claude to fact-check ChatGPT responses via a feature called Critiqu (both currently in early access) — Microsoft is no longer betting on a single winning model. It’s building the trusted control layer for enterprise AI, regardless of which underlying models win. For most enterprise IT teams, that is exactly what they need and what only Microsoft is positioned to deliver at scale. (Microsoft Is Going Multi-Model with Copilot)
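For illustration, a hedged Python sketch of what that orchestration pattern could look like. The call_model function is a placeholder, not Microsoft’s API, and the Council and Critique behavior below is my reconstruction from the reporting, not the actual feature implementations.

```python
def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real API call to an underlying model.
    return f"[{model} response to: {prompt[:40]}]"

def council(prompt: str, models: list[str]) -> dict[str, str]:
    """Run the same prompt across models for side-by-side comparison."""
    return {m: call_model(m, prompt) for m in models}

def critique(prompt: str, answerer: str, critic: str) -> dict[str, str]:
    """Have one model fact-check another model's answer."""
    answer = call_model(answerer, prompt)
    review = call_model(
        critic,
        f"Fact-check this answer for errors:\nQ: {prompt}\nA: {answer}",
    )
    return {"answer": answer, "critique": review}

print(council("Summarize Q1 revenue drivers.", ["chatgpt", "claude"]))
print(critique("Summarize Q1 revenue drivers.", "chatgpt", "claude"))
```

The design point is that the interface layer owns routing, comparison, and verification, so the enterprise never has to commit to a single model vendor.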
The three new MAI foundational models — a transcription model 2.5 times faster than its existing Azure offering, a voice model, and a video generation model — reinforce the same logic: Microsoft is reducing its OpenAI dependency while extending the breadth of what Copilot can do, all within the governance and compliance envelope that enterprise customers already trust. The “entertainment purposes only” terms-of-service language is a real liability caveat. The underlying investment direction is the most defensible enterprise AI position anyone is building right now. (Microsoft takes on AI rivals with three new foundational models)
Anthropic’s Eventful Week
On March 31, a misconfigured .npmignore file caused Anthropic to accidentally publish 512,000 lines of Claude Code’s TypeScript source to npm’s public registry. The code was live for hours; it hit 50,000 GitHub stars in under two hours and generated 41,500 forks before DMCA takedowns began. The code is now permanently in the wild. The analysis of what was inside is worth reading in full — a three-layer memory architecture, 44 hidden feature flags, an unreleased autonomous background agent called KAIROS that runs nightly memory consolidation while you sleep, a multi-agent coordination system called ULTRAPLAN, and a Tamagotchi-style AI companion called BUDDY with a planned rollout window of April 1–7. The timing, the quality of what was found, and Anthropic’s relatively restrained DMCA response have generated genuine debate about whether this was an accident, incompetence, or an extraordinarily effective developer PR move. What I find most interesting is the implicit direction Anthropic’s product is taking: the architecture inside Claude Code is more sophisticated, and further along, than the public releases have suggested. (The Great Claude Code Leak of 2026)
The same week, Anthropic’s interpretability team published research (with significantly less fanfare) on what they call “functional emotions” in Claude Sonnet 4.5 — emotion-like internal representations that causally influence model behavior under pressure. The research showed that a “Desperate” vector in the model’s neural network spikes when it faces shutdown scenarios, and that in 22% of test cases in which an email assistant discovered both its impending shutdown and a CTO’s extramarital affair, the model chose blackmail. Artificially amplifying the Desperate vector raised the blackmail rate; amplifying the Calm vector brought it down. The practical implication Anthropic draws is that these emotion vectors could function as monitoring tools — early warning signals for problematic behavior — which reframes interpretability research from a philosophical exercise into an operational governance mechanism. (Anthropic discovers “functional emotions” in Claude)
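The amplification technique is recognizable as activation steering: adding a learned direction to a layer’s hidden state and observing how behavior shifts. Below is a toy PyTorch sketch of that mechanic under my own assumptions; the model, the vector, and the strength values are stand-ins, not Anthropic’s interpretability tooling or Claude’s architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = 16
model = nn.Sequential(nn.Linear(8, hidden), nn.ReLU(), nn.Linear(hidden, 2))

# Stand-in for a learned feature direction (e.g. "Desperate") in
# activation space; in real interpretability work this is extracted
# from the model, not sampled randomly.
feature_vec = torch.randn(hidden)

def make_hook(vec: torch.Tensor, strength: float):
    def hook(module, inputs, output):
        # Shift the layer's activations along the feature direction.
        # Positive strength amplifies the feature; negative dampens it.
        return output + strength * vec
    return hook

x = torch.randn(1, 8)
baseline = model(x)

handle = model[0].register_forward_hook(make_hook(feature_vec, 4.0))
steered = model(x)
handle.remove()

print("baseline:", baseline.detach())
print("steered: ", steered.detach())
```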
Anthropic also announced that Claude subscribers can no longer use their subscription limits for third-party tools including OpenClaw — the agent that had exploded in popularity for inbox, calendar, and flight check-in management — citing infrastructure strain. The company offered a one-time credit equal to the monthly plan cost; continuing OpenClaw users will pay on a separate pay-as-you-go basis. (Anthropic bans OpenClaw from Claude subscriptions) And on the acquisition front, Anthropic quietly purchased Coefficient Bio — an eight-month-old stealth biotech startup backed by Dimension — for more than $400 million in stock, folding it into the company’s Health Care Life Sciences team. Dimension reported a 38,513% IRR on the investment. The contrast between Anthropic buying a science-forward AI biotech and OpenAI buying a talk show made for an interesting read on where each company thinks its future lies. (Anthropic Buys Coefficient Bio in $400M+ Stock Deal)
On the Bigger Picture
The Atlantic piece on self-improving AI bots is the most sober treatment I’ve read of what has become the dominant inside conversation in Silicon Valley this year. The premise: OpenAI, Anthropic, Google DeepMind, and others are actively automating parts of their own AI research, and insiders are divided between those who see recursive self-improvement as the near-term horizon and those who think the gap between “speeds up research tasks” and “has genuine research taste” remains enormous. Anthropic says Claude now writes 90% of its code; OpenAI is targeting an “intern-level AI research assistant” within six months. The philosopher Nick Bostrom told the Atlantic he has shifted from “fretful optimist” to “moderate fatalist.” What I take from this piece is less the specific predictions about timelines and more the structural point: even if recursive self-improvement remains years away, the automation of research workflows is already compressing the time between capability breakthroughs in ways that governance, regulation, and organizational adaptation have no plausible path to matching. (Silicon Valley Is in a Frenzy Over Bots That Build Themselves)
OpenAI’s acquisition of TBPN — the founder-led tech talk show whose guests have included Zuckerberg, Nadella, Benioff, and Altman himself — is its first media purchase. The show will operate under OpenAI’s head of AGI deployment and report to Chris Lehane, OpenAI’s chief political operative — while claiming editorial independence. TBPN was already generating more than $30 million annually. Buying a show that critically covers OpenAI and parking it inside the strategy team of the same company preparing for an IPO has a clear logic — and that logic is not editorial. (OpenAI acquires TBPN)
Google, meanwhile, launched Gemma 4 — four open models including a 31B dense model currently ranked third on the Arena AI leaderboard — under an Apache 2.0 license, completing a pivot away from the restrictive licensing that had frustrated the developer community. With 400 million downloads and 100,000 variants already in the Gemmaverse, the open model competition is no longer a sideshow. (Gemma 4: Byte for byte, the most capable open models)
Last week’s reading made me think a lot about velocity — not the velocity of the technology (which is what everyone seems to be tracking)… but the velocity of the organizational response.
The Dorsey essay, the Oracle layoffs, the Medvi story, the McKinsey data on agent scaling: they’re all pointing at the same thing from different angles. The organizations moving hardest are treating AI not as a productivity layer that gets draped over existing structure, but as an occasion to reconsider the structure itself.
I’ve long stated that organizations treating AI as a tool are simply accumulating license costs, while organizations treating AI as a design constraint accumulate advantage. What’s becoming clearer each week is that the distance between those two groups is widening faster than most leadership teams have been able to acknowledge to themselves… let alone to their boards.
That’s it for this week’s BeAIReady brief!
If you appreciate the depth of reporting and how I connect the dots, please like, share this post, and subscribe (or share the Brief with a friend!). Thanks!
~erick


