The BeAIReady Brief | Week 19
May 4–10 | Your Org Is Still Rewarding the Old Way of Working, AI Layoffs Aren't Buying ROI, and the AI Companies Just Confirmed the Challenge of Implementing Their Models.
The April jobs report beat expectations on Friday — 115,000 new positions added, unemployment holding at 4.3%. But the information sector logged its 16th consecutive month of net job losses. Among the tech companies announcing cuts was Cloudflare, which let 1,100 workers go while simultaneously reporting a 600% increase in its own AI usage over three months. And expectations remain that real hourly wage growth will likely turn negative once May’s inflation data arrives.
The dominant thread in last week’s reading wasn’t capability — it was the growing distance between what AI can do for an individual and what it actually takes to operationalize it across an organization. That gap showed up in Microsoft’s research, IBM’s announcements, Gartner’s data on layoff ROI, and most explicitly in the billions OpenAI and Anthropic are now spending to acquire the engineers and consultants who close it.
This week’s coverage:
The 65%–13% Problem
Employees fear falling behind if they don’t adopt AI. Only 13% are rewarded for it. Three major reports last week landed on the same organizational diagnosis from three different directions.
Copilot Cowork Steps Out of the Chat Window
Microsoft’s Cowork is shifting from answering questions to executing multi-step work autonomously — and for IT leaders managing Copilot deployments, that changes what they’re actually governing.
From “Does This Work?” to “Where Does This Belong?”
The shape of enterprise AI experiments is maturing: the questions are moving from whether AI can do something toward where it fits, what it costs when it’s wrong, and who owns the outcome.
Cutting Staff Isn’t Earning the Return
Gartner found that around 80% of billion-dollar firms deploying AI have cut staff — and those cuts largely failed to deliver meaningful ROI. Deloitte is restructuring the billable hour. And employees are silently watching AI make mistakes that nobody’s correcting.
The Policy Is There. The Controls Aren’t.
AI governance policies nearly doubled in a year. The actual mechanisms — tested shutdown procedures, vendor vetting, oversight committees — haven’t kept pace.
The Implementation Gap Is Now Official
OpenAI and Anthropic are spending billions to acquire enterprise implementation capacity. What that move signals about the difference between AI for individuals and AI for organizations is the story underneath the story.
Here’s what I was reading.
The 65%–13% Problem
Microsoft’s annual Work Trend Index dropped last week. In it was a data point that hasn’t gotten the attention it deserves in the media: 65% of AI users surveyed said they fear falling behind if they don’t adopt AI quickly. Only 13% said their organizations actually reward them for using and experimenting with it. That gap — between how urgently employees feel the pressure to change and how little their organizations have restructured to recognize that change — is what Microsoft is now calling the “Transformation Paradox.” (Microsoft’s new research finds an AI ‘paradox’ holding companies back)
The paradox is structural, not motivational. Workers are already reshaping how they work with AI — 49% of all Copilot interactions analyzed involved cognitive tasks like analysis, problem-solving, and creative work, not just document summarization. A cohort Microsoft calls “Frontier Professionals” — the 16% of AI users who routinely deploy agents for multi-step workflows — report producing work they couldn’t have done a year ago. But only one in four AI users said their leaders are clearly aligned on AI, and organizations where managers actively modeled AI use saw a 17-point increase in perceived value and a 30-point boost in trust in agents. The research makes the case that culture, manager modeling, and talent practices account for more than twice the AI productivity impact of individual factors like mindset or motivation — which means the ceiling on AI ROI is organizational, not individual.
IBM made the same argument from a different angle at Think 2026. CEO Arvind Krishna framed it plainly: the enterprises pulling ahead aren’t deploying more AI — they’re redesigning how their businesses operate. IBM announced a comprehensive AI operating model built on four integrated systems: agent orchestration, real-time AI-ready data foundations, intelligent hybrid cloud management, and built-in governance. The framing treats AI adoption not as a technology procurement decision but as a fundamental operating model change — a distinction that will separate organizations that see returns from those still accumulating spend without accountability for outcomes. (Think 2026: IBM Delivers the Blueprint for the AI Operating Model as the AI Divide Widens)
The World Economic Forum made a similar argument last week, pointing out that AI transformation fails far more often because of organizational design choices than because of limitations in the technology. When companies deploy AI without redesigning work, decision rights blur, accountability erodes, and productivity gains stall. The CHRO’s role — as design architect, capability steward, adoption catalyst, and what the piece calls “transition guardian” — is to own the human transformation that determines whether the technology delivers value at scale or stalls in pilots. “The decisive differentiator will not be access to technology, but the ability to orchestrate human transformation around it.” I find that a useful diagnosis, even if the prescription demands organizational authority that most CHROs don’t yet have. (AI transformation is reshaping work. HR leaders must help redesign it)
Boris Cherny, head of Claude Code, illustrated this point during a CNBC interview last week. Reaching back to a Harvard Business School case study from the early 1990s, he recalled the question of why companies with computers weren’t yet seeing productivity benefits — and the answer: computers were sitting in the corner of the office while workflows, structures, and metrics were still organized around the filing cabinet. Productivity gains arrived only after organizations restructured around the computer as the center. The companies Cherny described as seeing “hundreds of percentage points” of productivity improvement had done exactly that — not added AI to existing workflows, but rebuilt workflows around AI. The analogy is useful not because it’s flattering but because it correctly locates the bottleneck. (Head of Claude Code on the future of work and productivity)
Copilot Cowork Steps Out of the Chat Window
Cowork is expanding from chat-based assistance to autonomous multi-step task execution. It’s now available on iOS and Android, with reusable “skills” that capture and standardize repeatable workflows, new native integrations with Fabric IQ and Dynamics 365 across sales, customer service, and ERP applications, and a connector ecosystem opening to third-party platforms including monday.com, Miro, and LSEG.
The intelligence layer underneath it — what Microsoft calls Work IQ — understands your organization’s data, tools, and workflows, meaning Cowork’s outputs are grounded in your business context rather than public internet information alone. For IT leaders managing Copilot rollouts, this changes what they’re actually deploying. A chat assistant that answers questions sits in one governance lane. An autonomous execution platform that coordinates meetings, conducts research, processes approvals, and generates structured documents across connected enterprise systems sits in a different one entirely. The adoption conversation that was sufficient for Copilot Chat isn’t sufficient for what Cowork is becoming. (Copilot Cowork: From conversation to action across skills, integrations, and devices)
Disclaimer: My company, StitchDX, is a Microsoft Partner.
From “Does This Work?” to “Where Does This Belong?”
The shape of enterprise AI experiments is changing — not ending. What’s shifting is the question the experiments are designed to answer. Last year’s experiments mostly asked whether AI could do a thing. The ones I’m watching now ask whether a given AI application belongs in a given workflow, what failure looks like in practice, and who owns the outcome when something goes wrong. That’s a meaningfully different kind of experimentation, and it’s producing more honest conversations about where AI actually fits versus where it was assumed to fit.
AIBusiness captured the shift directly: agents are moving from isolated demos into embedded enterprise workflows, and that transition is forcing organizations into governance and security questions they weren’t facing when agents lived in sandboxes. The infrastructure layer is changing because agents are now persistent, orchestrated, and increasingly autonomous — and that changes the security model, the accountability model, and the risk calculus at the same time. The challenge is no longer proving that the capability exists; it’s figuring out where the capability belongs and what constraints it needs to operate within safely. (Prompt: AI Agents Are Becoming Operational Infrastructure)
Anthropic’s “dreaming” feature for Claude Managed Agents — announced last week at the Code with Claude conference — is a small but directionally significant development. Dreaming is a scheduled process that reviews recent sessions and memory stores across agents, identifying high-signal patterns, recurring mistakes, and shared preferences and retaining them for future tasks. It addresses a real limitation: context windows are finite, important information gets lost across lengthy multi-agent projects, and single-agent compaction processes can’t see patterns across a broader agent network. The feature is still in research preview and limited in access, but the direction matters — agents that retain organizational memory across sessions represent a meaningfully different infrastructure model than agents that start from scratch each time. (Anthropic’s Claude Managed Agents can now “dream,” sort of)
Pinecone — which built the vector database category and made RAG the standard enterprise AI pattern — used last week to declare RAG a bottleneck and announce a bet on what it’s calling “knowledge compilation.” The argument: traditional RAG forces agents into retrieve-read-retrieve loops that complete only 50–60% of tasks while consuming enormous compute. Pinecone’s Nexus precompiles source data into typed, cited, task-specific artifacts that agents query directly rather than searching raw corpora. The claimed results — task completion above 90%, token spend reduced by 90% — are self-reported and should be validated in production before anyone acts on them. What’s more significant than the specific numbers is what the move signals about where value in the AI stack is heading: from raw retrieval toward pre-structured, curated knowledge that agents can actually work with. For enterprise teams building knowledge architecture, this is the pressure worth planning for. (The company that made RAG mainstream is now betting against it)
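To make the architectural distinction concrete, here’s a minimal sketch of the two patterns — a retrieve-read loop versus a single lookup against precompiled, cited facts. This is purely illustrative: every name and data structure below is invented for the example, and none of it reflects Pinecone’s actual API or Nexus internals.

```python
# Conceptual sketch only -- NOT Pinecone's API. All names are invented.

CORPUS = [
    "Q3 revenue was $12M, up 8% year over year.",
    "Q3 churn rate was 2.1%.",
    "Headcount at end of Q3: 340.",
]

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval: return passages sharing a word with the query."""
    words = set(query.lower().split())
    return [p for p in CORPUS if words & set(p.lower().replace(",", "").split())]

def rag_answer(task: str) -> tuple[str, int]:
    """Retrieve-read loop: one retrieval round per sub-question the agent poses."""
    sub_questions = ["Q3 revenue", "Q3 churn"]  # agent decomposes the task
    rounds, facts = 0, []
    for q in sub_questions:
        rounds += 1
        facts.extend(retrieve(q))  # noisy: overlapping passages come back twice
    return " ".join(facts), rounds

# "Knowledge compilation": facts are extracted once, offline, into a typed,
# cited artifact, so at task time the agent does one lookup instead of looping.
COMPILED = {
    "q3_summary": {
        "revenue": "$12M (+8% YoY)",
        "churn": "2.1%",
        "sources": [0, 1],  # indices into CORPUS, standing in for citations
    }
}

def compiled_answer(task_key: str) -> tuple[dict, int]:
    return COMPILED[task_key], 1  # a single lookup, no retrieval rounds
```

The point of the contrast: the loop version pays a retrieval round (and the attendant tokens) per sub-question and returns noisy overlapping passages, while the compiled version front-loads that cost into an offline build step and answers from structured facts with citations attached.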
Cutting Staff Isn’t Earning the Return
Gartner surveyed 350 global businesses — all with annual revenues above $1 billion, all piloting or deploying intelligent automation — and found that around 80% had cut staff as a result of AI deployment. The ROI from those cuts was largely absent. Companies that reduced their workforces were just as likely to see negative outcomes or marginal gains as they were to generate any meaningful return — and the organizations actually seeing results were investing more in people, not less: building new skills, new roles, and operating models built around humans directing autonomous systems. (AI layoffs backfire as cutting staff doesn’t cut it, firms warned)
Deloitte’s positioning tells a parallel story about what AI is doing to professional services economics. The firm is targeting AI handling 30% of its tasks, growing its managed services division to $1 billion in revenue by 2030, and cutting delivery costs by up to 40% through AI and offshore centers. Clients are already adjusting — some are unilaterally hard-coding 10% AI efficiency discounts into contracts as delivery costs fall. The roles most at risk aren’t entry-level; they’re mid-ranking partners and advisers who have built careers around exactly the kind of assessment and advisory work that AI can now replicate at a fraction of the cost. The billable hour isn’t dying because junior work is being automated out — it’s dying because the value proposition of senior expertise is being repriced faster than the people who hold it can adapt. (AI to handle 30pc of Deloitte tasks as billable hour dies)
The April jobs data provides the macro frame. The information sector — tech, telecom, data, media — logged its 16th consecutive month of net job losses, with employment in that sector now at its lowest level since March 2021. Goldman Sachs puts the aggregate AI impact at roughly 16,000 net US jobs lost per month — 25,000 displaced by AI substitution against 9,000 created by AI augmentation. The World Economic Forum projects 170 million new jobs created globally by 2030 against 92 million displaced — a net positive that, as one analysis noted last week, doesn’t help much if you’re sitting in one of the displaced roles right now. The problem isn’t the total job count — it’s the mismatch between the roles disappearing and the roles emerging, and the realistic timeline for workers to move between them. (AI Impact on the Job Market in 2026: What the Data Shows)
The Radical Candor report, released last week, adds a dimension I haven’t seen named as clearly elsewhere. Sixty percent of employees say they are afraid to speak up at work — and one of the named drivers is AI inaccuracy. Inaccuracies appear in AI-assisted work 73% of the time, yet more than half of workers and managers say those quality concerns are only sometimes or rarely acted on. The mechanism here is worth naming explicitly: when leadership is laying people off in AI’s name, employees have no incentive to flag the mistakes AI is making — which means organizations are making decisions based on AI output that nobody is correcting. That’s not a feedback problem. It’s a governance problem wearing a feedback problem’s clothes. (New Radical Candor Report Reveals 6 in 10 Employees Are Afraid to Speak Up at Work)
The Policy Is There. The Controls Aren’t.
ISACA’s 2026 AI Pulse Poll found that 90% of respondents believe employees are using AI in their organization, and 81% say that includes generative AI specifically. The governance picture sitting alongside that adoption data is stark: only 12% of organizations have a documented, regularly tested process for shutting down an AI system when something goes wrong, and 56% of respondents don’t know how long a shutdown would take. Shadow AI — employees using tools outside approved governance channels — is already introducing exposure that most organizations have no tested mechanism to contain when something fails. The risk isn’t hypothetical; it’s a matter of when, not if, a production AI failure demands a response that most organizations haven’t practiced. (The AI Security Gap: Adoption Is Accelerating but Response Capability Is Lagging)
Littler Mendelson’s employer survey adds the HR governance layer. Sixty-eight percent of employers now have formal AI governance policies — up from 38% just a year ago. Littler calls that progress “encouraging” while noting that fewer than half have instituted procedures for vetting third-party AI vendors, tool-specific training, or a designated internal AI oversight committee. The gap isn’t between organizations that care about AI risk and those that don’t — it’s between having a policy document and having operational controls that actually function when they’re needed. A policy that exists on paper but has never been exercised isn’t governance. It’s a liability that hasn’t been discovered yet. (Employers ‘still playing catch-up’ on AI risk management, Littler report finds)
The Implementation Gap Is Now Official
Reuters reported last week that the joint ventures OpenAI and Anthropic have separately formed with private equity are in active acquisition talks targeting AI services firms — engineering and consulting companies that help businesses put AI to work inside their actual systems. OpenAI’s vehicle, The Deployment Company, is raising roughly $4 billion from 19 investors. Anthropic’s is raising $1.5 billion, backed by Blackstone, Hellman & Friedman, and Goldman Sachs. Most of that capital is expected to fund acquisitions of services and consulting firms, not model development. (OpenAI, Anthropic ventures in talks to buy AI services firms, sources say)
The strategic read on this move depends on how closely you’ve been watching enterprise AI play out. If you’ve spent time in the room where organizations actually try to stand AI up — with their real data, their real compliance requirements, their real change management capacity, and a workforce that was never asked whether it wanted any of this — the acquisitions aren’t a surprise. What works frictionlessly for an individual at a laptop does not translate automatically to an organization with siloed data, legacy infrastructure, established approval chains, and employees whose jobs are changing shape whether they agreed to that or not. Enterprise AI requires tailoring to specific data, systems, and workflows, and ongoing adaptation as business needs evolve. The model providers have now officially acknowledged that. (OpenAI, Anthropic expand services push, signaling new phase in enterprise AI race)
The risk embedded in this model is worth watching carefully. Buying AI services from the same company that sells you the model creates a stack that becomes progressively harder to exit — data pipelines, governance frameworks, and workflows all embedded in a single provider’s architecture. As IDC’s Deepika Giri noted last week, avoiding that dependency requires deliberate architecture decisions made early, before the stack is already built around a single vendor. For enterprise leaders evaluating AI vendor relationships right now, the lock-in risk just became significantly more layered than it was six months ago.
The picture I’ve been watching take shape for a while is finally coming into focus. AI is genuinely useful — remarkably so — for individuals who know how to work with it. The enterprise version of that usefulness is a different project entirely: it requires organizational redesign, governance infrastructure, change management, data architecture, and accountability structures that most organizations haven’t built yet. The Transformation Paradox, Gartner’s layoff data, the ISACA controls gap, and the OpenAI and Anthropic acquisition moves all point at the same thing. The model companies are now spending billions to staff up on the human side of that gap... which tells you everything about how hard the human side actually is.
That’s it for this week’s BeAIReady brief!
If you appreciate the depth of reporting and how I connect the dots, please like, share this post, and subscribe (or share the Brief with a friend!). Thanks!
~erick


