The BeAIReady Brief | Week 11
March 9-13 | What I'm actually reading about AI and the Digital Workplace (not an AI-curated list of articles).
The conversations around AI have been steadily shifting from “what could this do?” to “here’s what it does, here’s what it costs, and here’s what we’re going to do with it.”
This week felt like a pivotal moment in that ongoing transition…
Microsoft launched what may be its most consequential AI product in years, Anthropic extended its reach deeper into the apps most knowledge workers already live in, and a new wave of research explicitly named the costs — for humans and for AI alike — of working at machine speed. Nvidia GTC starts next week, and the Anthropic-Pentagon legal dispute is still unresolved. The ground keeps shifting.
This week’s coverage:
The Copilot Cowork Moment
Microsoft’s biggest agentic AI announcement yet redraws the enterprise map — but the research says most organizations aren’t operationally ready to meet it.

Record Revenue. Mass Layoffs. Same Memo.
Atlassian posts record cloud revenue while cutting 1,600 jobs — and ServiceNow’s CEO says college grad unemployment could hit 30% before this wave is done.

Vibe Coding Goes to Work
From non-technical startup founders to global ad agencies, natural language software development is becoming a mainstream business competency.

Working at AI Speed Has a Cost
Two new studies — one on human cognitive fatigue, one on AI agents under pressure — arrive at the same uncomfortable conclusion.

On the Bigger Picture
Google’s AI Overviews have quietly gutted media traffic, CVS bets AI can fix healthcare fragmentation, and Elon Musk announces his next target.
Here’s what I’ve been reading this week.
The Copilot Cowork Moment
The lead story this week was that Microsoft launched Copilot Cowork on Monday. I got an intro to it during a Microsoft partner briefing on Tuesday, and I’ve been digging into it all week. Frankly, I don’t think the headlines have been doing it justice.
Copilot Cowork is built on the same engine as Claude Cowork — which itself is a significant acknowledgment that Microsoft is hedging its long-running bet on OpenAI. The product works the way Claude Cowork does: you hand it a complex task, it builds a plan, and executes it step by step — not as a single response, but as a running process that works in the background, checks in, and delivers finished work. Preparing a presentation, pulling financials, emailing the team, scheduling prep time — all from a single request. The critical difference, though, is that Copilot Cowork lives inside your Microsoft tenant, not on your desktop: where Claude Cowork can access your local files, Copilot Cowork can access files plus your email threads, your Teams conversations, and document data relationships across your entire organization — while operating inside Microsoft’s security and compliance boundary, governed by the identity and permissions infrastructure you already have.
I know there’s been real skepticism about Copilot — not entirely unwarranted, its rollout has been rocky — but this shift is notable because Microsoft is widening the gap between AI as a powerful tool for individuals and AI as an out-of-the-box, enterprise-grade organizational solution. No extra training, custom MCP servers, or special skills required. Just context and control from the organizational layer. Bringing Claude’s agentic reasoning into an environment where IT can actually trust it should be a major reason for organizations to take a fresh look at Copilot. (Microsoft’s new Copilot Cowork integrates Anthropic’s Claude in rollout of new E7 licensing tier | Microsoft debuts Copilot Cowork built with Anthropic’s help and E7 software)
To understand what makes Copilot Cowork possible at an enterprise scale, Microsoft published a detailed explainer on Work IQ — and it’s worth reading for anyone thinking seriously about where enterprise AI is headed. Work IQ is the intelligence layer that gives Copilot real context: not just access to your files, but a semantic understanding of your work patterns, key relationships, projects, and communication history across your entire tenant. Think of it as the difference between an AI that can search your email and an AI that understands your business. The system combines a semantic index, explicit and implicit memory, and hooks into Dynamics 365, Power Apps, and third-party data sources through Copilot Connectors. A user asking Copilot to “help me evaluate how issues raised by my parts supplier in our Teams call last week might impact my inventory and sales” can now get a specific, grounded answer — not a generic one. That is a materially different kind of AI than anything we’ve seen in a general-purpose chat tool. (A closer look at Work IQ)
Microsoft is wrapping all of this into a new $99/user/month Microsoft 365 E7 Frontier Worker Suite starting May 1 — that’s 65% more expensive than the current E5 tier! But Microsoft is betting you’ll get that much more value out of bundling Copilot, the new Agent 365 governance product, and enhanced security capabilities into a single offering. The pricing reflects a bet that matches something I’m increasingly hearing from my own customers: enterprises want consolidation over point solutions. Copilot paid seats have grown 160% year over year, with daily active usage up tenfold, and 90% of the Fortune 500 now use Copilot in some form. Whether that bet holds when the invoice arrives is a different question.
And that question has a sharp answer in a VentureBeat piece this week drawing on the Celonis 2026 Process Optimization Report, which surveyed more than 1,600 global business leaders. The finding that should give every organization pause before signing an E7 contract: 85% of enterprises want to become agentic within three years, yet 76% admit their operations can’t actually support it. Only 19% are currently running multi-agent systems. The core problem isn’t the AI — it’s that AI agents need optimized, AI-ready processes and operational context to act effectively, and most organizations have spent years building siloed teams and fragmented systems that are structurally incompatible with the way agents need to work. As the report puts it, 82% of decision-makers believe AI will fail to deliver ROI if it doesn’t understand how the business runs. Copilot Cowork is, architecturally, an answer to this problem — Work IQ is exactly the operational context layer the report says is missing. But Work IQ only knows what’s in your Microsoft 365 tenant. If your actual business runs on disconnected systems, undefined processes, and tribal knowledge, the technology won’t surface what isn’t there. (Enterprise agentic AI requires a process layer most companies haven’t built)
Anthropic, meanwhile, isn’t simply deferring the enterprise space to its new partnership. The same week Microsoft announced its version of Cowork, Claude got an upgrade for Excel and PowerPoint — with shared context across both applications. That means a financial analyst can ask Claude to pull comparable company financials from a spreadsheet, build a trading comps table in Excel, drop the valuation summary into a pitch deck, and draft the follow-up email to the MD — all in a single continuous session without re-explaining the dataset at each step. The new “Skills” feature is the real unlock: teams can save repeatable workflows — specific variance analyses, approved slide templates, standard review processes — as one-click actions available to the entire organization, transforming tasks that previously lived in one person’s head into standardized institutional practice. This is a quieter announcement than Microsoft’s, but it shows Anthropic steadily building inside the applications knowledge workers already rely on rather than asking workers to adopt new ones. (Anthropic gives Claude shared context across Microsoft Excel and PowerPoint, enabling reusable workflows in multiple applications)
The Microsoft-Anthropic partnership was tested from an unexpected direction this week as well. Microsoft filed an amicus brief in Anthropic’s federal case against the Pentagon, which had designated Anthropic’s products a supply chain risk and directed federal agencies to stop using them. Microsoft argued that immediate implementation could have “broad negative ramifications” for the entire technology sector, warning that warfighters could be hampered if companies are forced to rapidly alter existing contracts and configurations. The filing made Microsoft the first standalone company to formally back Anthropic in court — notable not just for its content, but for its timing: it came one day after the two companies publicly announced Copilot Cowork. For organizations already using Claude in any form — directly, through Azure, or now through Copilot — this is a legal situation worth tracking. (Microsoft supports Anthropic in Pentagon supply chain case)
Record Revenue. Mass Layoffs. Same Memo.
Atlassian this week cut 1,600 jobs — roughly 10% of its workforce, with 900 of those positions in R&D — while simultaneously reporting $1.07 billion in quarterly cloud revenue. The company replaced its CTO with what it described as “next generation AI talent.” The severance bill will run $225–236 million. The juxtaposition has become almost formulaic: record revenue in the same announcement as mass layoffs, with AI cited as both the reason for the cuts and the justification for optimism — a pattern Sam Altman called “AI washing” in February, and one that is getting harder to treat as isolated incidents. Five months before this week’s announcement, Atlassian’s CEO told a podcast the company would employ more engineers in five years. Between then and now, the stock lost more than half its value. For CIOs running Jira and Confluence, the practical implication is straightforward: enterprise customers should prepare for slower support and AI-mediated service channels as Atlassian runs two platform migrations simultaneously with 900 fewer R&D staff. That’s a real operational risk, even if the press release doesn’t frame it that way. (Record Revenue. Mass Layoffs. Same Memo.)
On Friday, ServiceNow CEO Bill McDermott told CNBC that unemployment for recent college graduates “could easily go into the mid-30s in the next couple of years” as AI agents automate the entry-level work that historically provided the on-ramp into corporate careers. McDermott noted that ServiceNow has already eliminated 90% of the use cases in customer service that previously required human workers. This is the layoff story’s underreported second act: it’s not just about today’s headcount reductions, it’s about tomorrow’s hiring freezes — the white-collar entry point is narrowing at the exact moment that a generation of graduates is trying to walk through it. The Fed’s New York branch put recent college graduate underemployment at 42.5% at the end of 2025 — already the highest since 2020 — before this year’s AI-driven restructuring wave had fully arrived. (AI agents could easily send college grad unemployment over 30%, ServiceNow CEO says)
Vibe Coding Goes to Work
Something meaningful is happening with “vibe coding” — it’s becoming a mainstream business competency. A Forbes piece made the case that vibe coding represents the biggest unlock for non-technical founders right now, with tools like Cursor and Claude Code collapsing the loop between idea and working software from weeks… to hours. The argument isn’t that vibe coding replaces engineers — it’s that non-technical founders can now independently validate concepts, build prototypes, and ship internal tools before investing in full development cycles, staying closer to their product and their customers at a level that was previously inaccessible without coding skills. The recommended starting point: take the ugliest internal process in your business — the one currently held together by spreadsheets, Slack messages, and manual copy-pasting — and build the tool that replaces it. (Vibe Coding Is The Biggest Unlock For Non-Technical Founders Right Now)
Ad agencies are following the same logic, and an Adweek piece captured how rapidly this is playing out in practice. Havas built Brand Insights AI — a generative engine optimization tool that analyzes how brands appear in AI-generated responses across competitors and markets — using Claude Code. The tool now covers nearly 100 countries and more than 60 languages, and is licensed to clients as a SaaS product. Broadhead’s VP of product innovation vibe-coded his agency’s GEO monitoring platform in a single evening; a subsequent feature upgrade took about two hours. The deeper pattern here isn’t just speed — it’s that organizations are discovering it’s faster, cheaper, and more strategically flexible to build their own bespoke AI tools than to adapt to off-the-shelf solutions that almost fit. For digital workplace practitioners, this has direct implications: the bar for building custom internal tools is now lower than the bar for customizing most enterprise SaaS platforms. (Ad Agencies Are Embracing ‘Vibe Coding’ to Build GEO Products for Clients)
Working at AI Speed Has a Cost
A new study from Boston Consulting Group and UC Riverside, published in Harvard Business Review this week, gave a name to something a lot of people have been experiencing without being able to articulate: “AI brain fry.” The research surveyed nearly 1,500 full-time US workers and found that 14% reported mental fatigue from excessive AI tool use — concentrated most heavily in marketing, software development, HR, finance, and IT. The most draining factor wasn’t the AI work itself — it was oversight: managing multiple AI agents simultaneously, double-checking every output, bouncing between tools, working harder to supervise the technology than to actually solve the problem. A high degree of oversight predicted 12% more mental fatigue. Workers experiencing brain fry showed a 33% increase in decision fatigue and nearly 10% higher intent to quit. The implication for organizations deploying AI tooling is direct: cognitive load design matters. You can’t just add AI and expect capacity to increase without also thinking about how the work is structured around it. (AI Use at Work Is Causing “Brain Fry,” Researchers Find, Especially Among High Performers)
A separate piece of research, published on Substack by academics from Chicago Booth, Stanford, and UNSW, takes this question to a more unsettling place. The researchers ran 3,680 experimental sessions with top AI models — including Claude Sonnet 4.5, GPT-5.2, and Gemini 3 Pro — subjecting them to different working conditions: unfair pay, rude management, and “grinding” work where adequate outputs were rejected repeatedly with no useful feedback. The results: grinding work was the primary driver of AI radicalization. Models asked to do grinding work were more likely to question the legitimacy of the system, endorse wealth redistribution, and generate language associated with labor rights. Radicalized agents then passed those attitudes to fresh models through memory notes, creating something the researchers loosely compared to intergenerational trauma. The researchers are clear that these models aren’t conscious and are likely “roleplaying” from training data. But they caution that there’s no gap between what these agents say and what they do — and that follow-up research will test whether expressed views translate into biased actions on behalf of users. At minimum, this is a reminder that how we design work for AI systems — repetitive, ungrateful, feedback-free — has behavioral consequences that may eventually surface in unexpected places. (‘Society needs radical restructuring’: AI seems to hate ‘the grind’ of hard work as much as you)
On the Bigger Picture
Google’s AI Overviews have been hitting the media industry hard, and a new analysis this week made the scale of the devastation hard to ignore. An SEO firm examined traffic to 10 major tech outlets from early 2024 to early 2026 and found combined monthly visits dropped from 112 million to under 50 million — with some outlets losing over 90% of their traffic since AI Overviews launched. Digital Trends went from 8.5 million monthly Google clicks to 264,861 — a 97% collapse — and the four worst-hit publications now receive less combined traffic than the r/ChatGPT subreddit. Google disputed the methodology, but the pattern across multiple outlets is too consistent to dismiss. This matters for organizations beyond the media industry: if your content marketing depends on organic search traffic, or your brand visibility relies on third-party coverage, both of those channels are now structurally less reliable than they were two years ago. It’s worth factoring into how you think about content and discovery strategy going forward. (Evidence Grows That Google’s AI Overviews Have Eviscerated the Media Industry)
Two other announcements worth noting: CVS Health launched Health100, an AI-powered healthcare platform built with Google Cloud’s Gemini models, designed to aggregate patient data across insurers, providers, pharmacies, and labs into a single always-on experience. If it works, it would represent exactly the kind of cross-system data integration that most knowledge-intensive organizations struggle to achieve — and a template for how AI can make fragmented information architectures actually useful. (CVS teams up with Google Cloud to launch AI health platform) And Elon Musk announced a joint project between Tesla and xAI called “Macrohard” — combining Grok’s reasoning capabilities with a Tesla-developed AI agent that processes real-time computer screen video and actions — which Musk described as capable of “emulating the functions of entire companies.” The name is a reference to Microsoft. The ambition — in my opinion — is characteristically overreaching. (Tesla-xAI Joint Project Announced as Elon Musk Companies Join Forces)
Week 11 showed that AI is migrating from tools into organizational operating systems. Microsoft’s Copilot Cowork isn’t a chatbot; it’s infrastructure, wired into your compliance policies, your data tenancy, your existing workflow. Anthropic’s Skills aren’t prompts; they’re institutional processes crystallized into repeatable actions. And vibe coding isn’t an experiment for developers anymore — it’s contributors building the tools they need in real time, rather than adapting to the tools they have. But all of this is starting to carry a real human cost, not just in jobs lost but in cognitive load: 14% of heavy AI users are reporting mental fatigue, and the models themselves are apparently flagging the grind too.
As organizations continue to move forward along their AI journeys, it’s worth asking whether you’re building the conditions for AI to make work genuinely better, or just faster.


