The BeAIReady Brief | Week 12
March 16-22 | Companies are cutting pay, flattening raises, and building AI leaderboards. The February jobs report missed by 150k. These stories are connected. Here's what I've been reading.
Last Wednesday, the Fed held rates steady while the Dow dropped 768 points — the markets were reacting to oil shock and the ongoing aggression against Iran. But to me, the more troubling signal arrived a few days earlier, when February’s jobs report showed the economy shedding 92,000 positions, far below any forecast, and Powell told reporters that job creation had “slowed to essentially zero.”
That backdrop was hard to ignore, because the articles that hit me hardest this week weren’t about AI technology itself. They were about what organizations are doing to employees in the name of AI investment — and how AI is affecting the cognitive outputs we produce.
This week’s coverage:
The Architecture of Work Is Being Redesigned
Companies are cutting pay to fund AI, building performance metrics around token consumption, and new research shows that heavy AI use is changing not just how knowledge workers write — but how and what they actually think.
OpenAI’s Focus Problem, Now in Public
A week of all-hands meetings, a product consolidation play, a data center retreat, and an audacious research moonshot — OpenAI is threading the needle between IPO discipline and maximum ambition, and the tension is showing.
Microsoft’s Coherence Problem
With only 15 million Copilot seats out of 450 million paid M365 licenses, Microsoft is reorganizing to finally solve a problem it created for itself — and quietly building model independence at the same time.
The Agent Layer Is Blowing Up
From OpenClaw’s viral rise to vertical agents embedding themselves in real enterprise workflows, the shift from chat to agentic AI is accelerating — especially as enterprise players like Nvidia step into the arena.
On the Bigger Picture
Computation is becoming a utility and defense tech is graduating from pilots to prime contracts — both developments carry real benefits, and serious implications for how AI is reorganizing power and control well beyond the digital workplace.
Here’s what I was reading.
The Architecture of Work Is Being Redesigned
The number that stood out most to me came from a survey of 866 U.S. business leaders: 54% have already cut or plan to cut employee compensation — bonuses, raises, equity, base salary — to fund AI investments in 2026. 94% say they’re willing to accept higher turnover to do it. What stood out wasn’t the cuts themselves but the rationale — leaders told researchers they’re counting on the weak job market to absorb the consequences, a calculation that treats workers as a variable the macro environment has temporarily made cheap. (Half of Companies Are Cutting Compensation To Fund AI Investments)
That calculation is tied to the growing fears of recent and soon-to-be college grads — something I’m looking at carefully as I prepare to speak to University of Massachusetts students next week. Workers aged 22 to 25 in AI-exposed roles have seen a 13%–16% employment decline since late 2022, according to Stanford research that resurfaced prominently this week. The labor market is softening in the exact categories of early-career, screen-based knowledge work that AI has been targeting first. The February jobs report makes that pattern undeniable, and deeply concerning. (Something Big Is Happening (And Most People Have No Idea))
Meanwhile, inside tech companies, something interesting is happening at the other end of the experience spectrum. The New York Times reported this week on “tokenmaxxing” — engineers competing on internal leaderboards to see who can consume the most AI tokens, with some boasting they’ve already racked up $150,000 monthly Claude Code bills. Both Meta and Shopify are leaning into the competitive metrics, baking AI usage into their performance reviews. The question the piece doesn’t quite answer — and the more important one — is whether maximizing AI usage genuinely correlates with valuable output, or whether it’s merely performative productivity. (More! More! More! Tech Workers Max Out Their A.I. Use.)
The most unsettling piece in the set this week came from a peer-reviewed study across West Coast universities: heavy reliance on LLMs doesn’t just change how people write, it changes what they argue. Participants who used AI heavily answered questions about happiness with neutral responses 69% more often than those who didn’t; their writing had 50% fewer personal pronouns. One of the lead researchers described this as the “blandification” of human writing — the models pushing outputs toward something no human would have written. But the concerning impact for knowledge work isn’t just stylistic; it’s that the cognitive output of an AI-dependent workforce may be converging on the average, in ways that are genuinely hard to measure and harder to reverse. (AI is changing the style and substance of human writing, study finds)
For leaders navigating this in fields where accuracy, provenance, and trust are non-negotiable — legal, healthcare, archival, finance — the tension is especially sharp. A piece in Inc. this week captured something I’ve been hearing in client conversations: the urgency to “do something with AI” is high, but clarity on what to do responsibly, and how, is still scarce. From my experience, this article gets it right — the organizations making the most progress aren’t the ones with the most tools; they’re the ones building governance and training infrastructure before they scale. (AI Is Reshaping Knowledge Work)
OpenAI’s Focus Problem, Now in Public
The week opened with a WSJ exclusive: OpenAI’s leadership is actively deciding which products to cut. CEO of Applications Fidji Simo told staff, “We cannot miss this moment because we are distracted by side quests.” The proximate cause is Anthropic — specifically, Claude Code and Cowork’s market traction — but the underlying cause is what happens when an organization bets on too many internal startups while trying to conserve scarce compute. The pivot toward coding and enterprise isn’t a strategic vision; it’s a correction, and the speed of the correction signals how much ground OpenAI feels it lost last year. (OpenAI to Cut Back on Side Projects in Push to ‘Nail’ Core Business)
That same IPO discipline is reshaping OpenAI’s infrastructure story. The company has retreated from building its own data centers, leaning instead on Oracle, Microsoft, and Amazon for capacity. Altman acknowledged the operational reality in public: “Anything at this scale, it’s just like so much stuff goes wrong.” The Stargate narrative has quietly shifted from a bold sovereign infrastructure play to a managed dependency on the same cloud providers OpenAI was supposed to help enterprises move beyond. (OpenAI’s data center pivot underscores Wall Street spending concerns ahead of IPO) The desktop super app — combining ChatGPT, its browser, and Codex into a single experience — is the product-side corollary: one surface, less fragmentation, a cleaner story for IPO investors. (OpenAI to create desktop super app, combining ChatGPT app, browser and Codex app)
None of which makes the research ambition smaller. MIT Technology Review published an exclusive this week with OpenAI’s chief scientist Jakub Pachocki, laying out the company’s new north star: an AI research intern by September capable of handling specific tasks autonomously, then a fully automated multi-agent research system by 2028 — what Pachocki called “a whole research lab in a data center.” The tension between IPO fiscal tightening and maximum research ambition is not a contradiction OpenAI has resolved; it’s a contradiction the public markets will price. (OpenAI is throwing everything into building a fully automated researcher)
Microsoft’s Coherence Problem
The Copilot reorganization story has two layers; unfortunately, most coverage stayed on the shallower one. The headline: Satya Nadella is consolidating the fragmented commercial and consumer Copilot teams under Jacob Andreou, a former Snap executive, while freeing Mustafa Suleyman to focus entirely on building proprietary models. The metric behind the headline: 15 million Copilot seats sold against 450 million-plus paid Microsoft 365 seats. That’s a penetration rate that would concern any enterprise software product manager, let alone one tied to the company’s flagship AI bet. The core thing Microsoft is admitting with this reorganization is that it built multiple products called Copilot that confused users and created organizational silos, and that confusion is now visible in the adoption numbers. Disclaimer: I’m a Microsoft partner… but speaking candidly, this was an epic failure on Microsoft’s part, a hole of its own making that it now has to dig itself out of. (Microsoft shakes up Copilot AI leadership team) (Microsoft Seeks More Coherence in AI Efforts With Copilot Reorganization)
But the deeper story here is Suleyman’s actual mandate: build enterprise-specific model lineages that reduce Microsoft’s dependency on OpenAI IP — to which it has rights only through 2032. Two weeks ago came the major announcement of its partnership with Anthropic. This past week brought a strangely quiet launch of MAI-Image-2, a text-to-image model that immediately ranked third on the Arena.ai leaderboard, behind only Google and OpenAI. This is likely an early signal of the direction Suleyman will drive. Microsoft paying OpenAI billions to power Copilot — while simultaneously funding Anthropic and now training competing models — is a hedging strategy that makes more sense as a long-term bet on model independence than as a coherent product story for the market. (Microsoft Launches MAI-Image-2 Text-to-Image Model) As a Microsoft partner, I’m eagerly watching whether E7 and Copilot Cowork adoption accelerates in ways that the raw seat numbers haven’t yet reflected.
The Agent Layer Is Blowing Up
The biggest story in AI tooling last week came from OpenClaw — Jensen Huang called it out as the fastest-growing open-source project in history. An open-source AI agent built by an Austrian indie developer, OpenClaw is designed to run continuously from a Mac Mini, managing email, scheduling, code, and anything with a digital interface. But the real reason OpenClaw matters isn’t its capabilities in isolation; it’s what it proves about where value is accumulating — not in the foundation models themselves, but in the agent frameworks layered on top of them. (OpenClaw’s ChatGPT moment sparks concern that AI models are becoming commodities)
Nvidia’s response, announced at their GTC conference, is NemoClaw — essentially OpenClaw with enterprise security guardrails, built in collaboration with OpenClaw’s founder and designed to make agent deployment safe for corporate environments where an autonomous AI agent accessing sensitive internal data through Slack or WhatsApp is a genuine compliance exposure. What Nvidia is actually building with NemoClaw is the policy enforcement layer that makes agentic AI enterprise-deployable — and in doing so, it’s positioning its hardware stack as the platform of record for the next compute era. The internal push for more on-premises control could be a big factor in success here. But with Claude and Microsoft deploying their own Cowork platforms, there’s reason to be at least a little skeptical of Nvidia’s bet on local automation. (Nvidia’s NemoClaw is OpenClaw with guardrails)
Still, WSJ’s sweeping feature on Claude Code, Cursor, and Codex this week framed the drive for automation more clearly: what began as autocomplete for developers has become the infrastructure for a market OpenAI’s revenue chief called “a multi-trillion dollar opportunity.” Anthropic’s Claude Code is generating $2.5 billion in annualized revenue; Cursor recently passed $2 billion; OpenAI’s Codex has tripled weekly active users since January. Both Anthropic and OpenAI are currently subsidizing usage well below cost to capture the installed base before pricing normalizes — a dynamic that echoes early-era ride-sharing more than enterprise SaaS. (The Trillion Dollar Race to Automate Our Entire Lives)
The more structural question is what vertical AI agents do to the labor line of a P&L — that’s the premise from Bessemer Venture Partners featured in a GeekWire piece on domain-specific agent startups. General-purpose models are good at generating text — but require significantly more scaffolding when they are operating within the specific workflows, data schemas, and compliance constraints of legal, healthcare, or financial services. The startups that win in this layer aren’t just deploying models — they’re embedding agents into the workflows those models can’t navigate without domain-specific context, and that combination is what distinguishes them from the SaaS tools they’re replacing. (The rise of vertical AI agents — and the startups racing to build them)
On the Bigger Picture
Two articles this week sat outside the digital workplace frame but felt important to note. The first: Anduril secured a $20 billion enterprise contract with the U.S. Army, consolidating more than 120 existing orders under a single five-to-ten-year agreement with a fixed-price structure. The significance isn’t the dollar figure — it’s the model. The Pentagon is graduating a select group of defense tech startups from pilot projects into prime contractor relationships, which means AI-native companies have now found a durable procurement pathway inside the single largest technology buyer in the world. (Anduril’s new mega-deal rewrites the rules for Silicon Valley)
The second: a Fast Company essay making the case that the personal computer era is ending — that PCs are trending toward luxury items as computation itself becomes a utility, priced and distributed on demand. The argument isn’t new, but when OpenClaw running on a Mac Mini can effectively outcompete software products worth hundreds of billions in combined market capitalization, it suddenly becomes very real. If computation is becoming infrastructure — like electricity, not like a device — then the organizations that treat AI as a capital asset to own are building on the wrong mental model. (The PC era is dying. Welcome to the collective computer era)
I keep returning to a question that I’m bringing to a business school audience shortly: if AI is reorganizing not just how work gets done… but how companies fund it, measure it, and reward it, then the real challenge isn’t technological adoption — it’s organizational design.
What’s becoming clear is that AI will make it harder to disguise organizational-level failures as intentional decisions: compensation cuts are a governance failure dressed as a budget decision. The tokenmaxxing leaderboards are a measurement failure dressed as a performance metric. The blandification of writing is a quality failure dressed as efficiency.
None of these are AI problems. They’re leadership problems.
AI has made them legible — and the organizations that name that distinction clearly before their competitors do will have an advantage that no foundation model can replicate.


