the BeAIReady Brief | Week 9
What I'm actually reading about AI and the Digital Workplace (not an AI-curated list of articles).
This week the fog around enterprise AI started to lift, and the stakes became more explicit. Two major platform launches competed for the role of enterprise AI orchestration layer. A once-obscure developer protocol dominated RSA conference submissions and CIO conversations alike. And one of America’s most closely watched tech founders announced he was replacing 40% of his workforce with AI — and watched his stock rally 20% in after-hours trading. The machinery of disruption, long described in theoretical terms, is now disclosing itself in earnings calls and org charts.
This week’s coverage:
Two Platforms Walk Into an Enterprise
Anthropic and OpenAI both launched enterprise agent platforms the same week. What that means for you.
MCP Grows Up — But Security Hasn’t Kept Pace
The protocol is becoming infrastructure. Governance is lagging dangerously behind.
The Layoff Signal No One Should Ignore
Block cuts 40% of its workforce and the stock pops 20%. The market has spoken.
On the Bigger Picture
Canva's "last 20%" bet, plus the Substack post that moved markets, drew 22 million views, and earned a Citadel rebuttal.
Here’s what I’ve been reading this week.
Two Platforms Walk Into an Enterprise
The most consequential week in enterprise AI infrastructure in recent memory unfolded quietly enough: Anthropic held a virtual event announcing Claude Cowork’s full enterprise launch, and a few days earlier OpenAI unveiled a platform called Frontier. Both companies are making essentially the same bet — that the next major AI battleground isn’t model quality, it’s orchestration. Whoever becomes the connective tissue of enterprise AI wins.
Anthropic’s pitch centers on what they’re calling “the thinking divide” — the growing gap between organizations embedding AI across employees, processes, and products simultaneously, and those still running isolated pilots. The Cowork announcement included private plugin marketplaces, a broad set of new MCP connectors spanning everything from Google Drive and Gmail to DocuSign and FactSet, and the ability to pass context seamlessly across Claude, Excel, and PowerPoint. The case studies were striking:
Novo Nordisk reduced regulatory documentation time from over ten weeks to ten minutes.
Spotify cut engineering time on code migrations by 90%.
Salesforce’s Slack integration now saves customers nearly 100 minutes per week.
Thomson Reuters’ CEO said plainly that “the tools are in many senses ahead of the change management” — and estimated it would be 18 months before enterprise organizations catch up. That’s probably the most honest thing said at the entire event. (Anthropic says Claude Code transformed programming. Now Claude Cowork is coming for the rest of the enterprise.)
OpenAI’s Frontier announcement is playing a similar game from a different angle. The platform is designed to address what OpenAI sees as the core failure mode of enterprise AI: fragmentation. Agents deployed in isolation, without shared business context, quickly become complexity generators rather than value creators. Frontier aims to give agents institutional knowledge, identity, governance, and auditability — and claims to work with existing systems rather than replacing them. The reaction in developer and enterprise communities has been notably mixed: real enthusiasm for what the platform could do, alongside sharp skepticism about vendor lock-in. When your LLM vendor is also your agent orchestration layer and your enterprise platform, the strategic exposure compounds quickly. (OpenAI Launches Frontier, a Platform to Build, Deploy, and Manage AI Agents across the Enterprise)
What makes this moment interesting isn’t just that both companies launched in the same week — it’s what that timing reveals about the competitive dynamic. Anthropic moved from research lab to platform company in roughly twelve months. OpenAI is responding with similar enterprise ambition. Both are compressing into months the kind of ecosystem development that once took years. For the organizations sitting in the middle of this, the question isn’t which platform to choose — it’s whether you have the data infrastructure and organizational readiness to take advantage of either. As Anthropic’s head of economics put it at the Cowork event: “If the knowledge Claude needs to execute a sophisticated task exists only in a coworker’s head, that’s not a technical problem. That’s an organizational problem.”
MCP Grows Up — But Security Hasn’t Kept Pace
Three separate pieces this week told a coherent story about the Model Context Protocol: where it is in the hype cycle, where it’s heading as enterprise infrastructure, and why security is the variable most organizations are dangerously underweighting.
CIO.com laid out the executive case plainly. MCP has moved from engineering curiosity to board-level concern in under a year — and the primary driver is agentic AI. Agents need two things to function: access to data and the ability to act. As I’ve written previously, MCP provides standardized solutions to both, which is what makes it different from previous integration frameworks and why RSA 2026 is reportedly dominated by MCP-related submissions.
The governance implications are serious: MCP integrations can be created by anyone experimenting with AI tooling, which means the attack surface is expanding well beyond enterprise-approved systems. CIOs need to know where MCP is already in use within their organizations, who has authority to create integrations, and how permissions are being granted — because the answer, in most organizations right now, is “we don’t fully know.” (Why Model Context Protocol is suddenly on every executive agenda)
Google extended the MCP story into the browser this week with WebMCP — a protocol built into Chrome 146 that lets websites expose structured functions directly to AI agents. Rather than agents burning thousands of tokens processing screenshots or scraping raw HTML, they can call structured functions natively through the browser. Early benchmarks show a 67% reduction in computational overhead compared to visual agent-browser interactions, and the protocol is already backed by both Google and Microsoft, with W3C standardization underway. The creator describes it simply as “MCP, but built into the browser tab.” The SEO and web development implications are significant: the question for any organization with a customer-facing web presence is no longer just how humans experience your site — it’s how AI agents do. (Google Ships WebMCP, The Browser-Based Backbone For The Agentic Web)
The New Stack brought some useful ground-level reality from the MCP Conference in London. The headline finding is that the gap between “vibe-coded” MCP experiments and production-ready deployments remains wide. Security is the main culprit: as one speaker put it, “it has never been easier to get hijacked,” and OAuth implementation in most MCP deployments is incomplete at best. Context window management is the secondary challenge — connecting an agent to 100 tools is easy; getting it to use the right three efficiently is hard. The practical advice for organizations starting out: pick one internal system people constantly ask questions about, build one MCP server with read-only access, and give it to five non-engineers. That’s it. The ambition can grow from there. (Beyond the vibe code: The steep mountain MCP must climb to reach production)
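That "one read-only server" advice maps to a simple pattern worth seeing in miniature. The sketch below is illustrative only — it does not use the actual MCP SDK, and the tool names and policy data are invented — but it captures the core idea: the server exposes a small catalogue of named tools an agent can discover and call, and "read-only" simply means no registered tool mutates the underlying system.

```python
import json

# Illustrative sketch of the MCP pattern (not the real SDK): a server
# exposes named "tools" an agent can first discover, then call.
# Read-only access means registering only lookup tools -- no handlers
# that write to the underlying system of record.

TOOLS = {}

def tool(name, description):
    """Register a function as a callable tool with a human-readable description."""
    def register(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return register

@tool("lookup_policy", "Return the text of an internal policy by its ID.")
def lookup_policy(policy_id: str) -> str:
    # Stand-in for a read-only query against the real internal system.
    policies = {"PTO-001": "Employees accrue 1.5 vacation days per month."}
    return policies.get(policy_id, "No policy found.")

def list_tools() -> str:
    """What an agent sees first: the catalogue of available tools."""
    return json.dumps({name: meta["description"] for name, meta in TOOLS.items()})

def call_tool(request_json: str) -> str:
    """Dispatch a JSON request like {"tool": ..., "args": {...}} to its handler."""
    request = json.loads(request_json)
    handler = TOOLS[request["tool"]]["fn"]
    return handler(**request.get("args", {}))
```

An agent-side call such as `call_tool('{"tool": "lookup_policy", "args": {"policy_id": "PTO-001"}}')` returns the policy text. The context-window point from the conference also falls out of this shape: an agent reasons over `list_tools()`, so five well-described tools beat a hundred vague ones.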
The Layoff Signal No One Should Ignore
Every week there’s another AI job displacement headline, and it’s easy to let them blur together. This week was different in a way worth paying attention to.
Jack Dorsey announced that Block — the company behind Square and Cash App — would lay off 40% of its workforce, more than 4,000 people. The stated reason was not cost pressure, not poor performance, not strategic restructuring in the conventional sense. Dorsey wrote plainly that “intelligence tools have changed what it means to build and run a company” — and said he chose immediate, honest action over a gradual drawdown over months or years.
Block’s stock rose more than 20% in after-hours trading. The market’s signal was unambiguous: investors are rewarding companies for accelerating the substitution of human labor with AI capability, even at scale, even when the operational execution is uncertain. The question every leadership team should be asking is not whether this happens elsewhere — it’s at what rate, and with what obligations to the people affected. (Jack Dorsey’s Block to Lay Off 40% of Its Workforce in AI Remake)
The Anthropic Cowork event added a piece of data that deserves its own moment. Anthropic’s head of economics, drawing on privacy-preserving analysis of how Claude is actually being used, reported that a year ago roughly a third of all US jobs had at least a quarter of their associated tasks appearing in Claude usage data. That figure has now risen to approximately one in every two jobs — and when businesses embed Claude through the API, the overwhelming pattern is automation, not augmentation. He specifically called out “jobs that are pure implementation” — data entry workers, technical writers — as occupations where Claude is already handling tasks central to those roles. No widespread labor displacement has materialized yet in the data, but the exposure is broadening faster than most organizations are tracking. The economist’s advice to leaders cut to the real issue: the bottleneck isn’t model capability, it’s whether your organization’s knowledge is structured and accessible enough for AI to act on it. (Anthropic says Claude Code transformed programming. Now Claude Cowork is coming for the rest of the enterprise.)
On the Bigger Picture
A few pieces this week were less about immediate enterprise decisions and more about the shape of what’s coming.
Canva announced two acquisitions — Cavalry for 2D animation and MangoAI for video ad optimization — while the broader software market continued to be punished on AI displacement fears. Canva’s position is instructive: $4 billion in annualized revenue, up 36% year over year, while Adobe is down 30% for the year. The company’s co-founder put it plainly: “AI is great at getting you to 80%. That last 20% — where you’re confident you can push this out and truly represent your brand — that’s really tricky to do.” Companies that embed AI deeply while keeping human judgment in the loop for the final mile are, for now, the ones finding competitive ground. (As Wall Street punishes software stocks over AI concerns, Canva gets more acquisitive)
The piece I kept coming back to this week — the one that I suspect you may have already encountered — was CitriniResearch’s “2028 Global Intelligence Crisis.” I want to be clear about what this is: a scenario document, explicitly labeled speculative fiction, not a forecast. But here’s the thing: it moved real markets at real speed. Published on a Sunday, it triggered a broad selloff the following Monday:
The Dow dropped more than 800 points
IBM fell nearly 13%
Payment companies like American Express, Mastercard, and Visa all tumbled
DoorDash and private equity giants KKR and Blackstone fell more than 8%
The piece accumulated 22 million views on X after Michael Burry amplified it. The Wall Street Journal cited it as a key accelerant of investor anxiety. Citadel Securities published a formal rebuttal the same week. A speculative essay on Substack moved hundreds of billions in market value in a single session. That fact alone is worth sitting with, regardless of whether the scenario is accurate.
The scenario itself is worth understanding because it traces the internal logic of AI disruption further than most analysis does. The core mechanism: companies cut headcount, reinvest savings into AI, AI improves, enabling further cuts — with no natural corrective cycle, because unlike a typical recession, this downturn’s cause is structural, not cyclical. Citrini introduces the concept of “Ghost GDP” — output that registers in national accounts but never circulates through the consumer economy, because machines don’t spend money on discretionary goods. The scenario traces the feedback loop through white-collar wage compression, consumer spending decline, SaaS revenue erosion, private credit defaults on PE-backed software LBOs, and eventually prime mortgage distress — borrowers with 780 FICO scores whose income assumptions were written before the world changed.
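The mechanism is easy to make concrete. A deliberately toy model (every parameter below is invented for illustration and is not drawn from the piece) shows why the loop lacks a natural brake: each round of cuts funds capability gains that justify the next round, while the payroll that would have recirculated as consumer spending simply exits the loop.

```python
# Toy model of the scenario's feedback loop. All numbers are invented
# for illustration -- the point is the shape of the dynamic, not any
# specific forecast.

def run_loop(quarters: int = 8) -> list[dict]:
    headcount = 10_000    # knowledge workers at the start
    avg_salary = 100_000  # annual salary per worker
    automatable = 0.05    # share of remaining roles cut this quarter
    history = []
    for q in range(quarters):
        cut = int(headcount * automatable)
        headcount -= cut
        # The self-reinforcing step: savings fund AI improvement, which
        # raises the share of roles that can be cut next quarter.
        automatable = min(0.25, automatable * 1.15)
        # "Ghost GDP": output is preserved (the work still gets done),
        # but this payroll no longer recirculates as consumer spending.
        history.append({"quarter": q + 1,
                        "headcount": headcount,
                        "payroll_removed": cut * avg_salary})
    return history

trajectory = run_loop()
```

In a cyclical recession, falling demand eventually resets the loop (layoffs stop when costs match revenue); here nothing in the loop ever pushes `automatable` back down, which is the structural point the scenario is making.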
Citadel’s rebuttal pushed back hard, pointing to rising software engineering job postings (+11% YoY), stable AI adoption rates in actual labor data, and the historical pattern of general-purpose technologies creating more jobs than they displace. Those counterarguments have real merit. But what the debate itself reveals may matter more than who wins it: we are in a moment where the gap between “plausible narrative” and “tradeable signal” has collapsed to nearly nothing, and institutional investors are making allocation decisions based on AI disruption fears that are still more scenario than data. For organizations watching the market signals around enterprise software valuations, payments infrastructure, and professional services firms, the message is the same whether or not the 2028 scenario materializes: the repricing of human knowledge work is already underway, and capital markets are running well ahead of the economic data. (THE 2028 GLOBAL INTELLIGENCE CRISIS)
What this week’s reading adds up to is something like a threshold moment: the ideas that have been circulating in boardrooms and conference sessions for two years are now being operationalized and disclosed. Platforms are being launched, protocols are becoming policy questions, layoffs are being attributed explicitly to AI capability, and macro researchers are stress-testing scenarios that once seemed speculative. Organizations that treat this as background noise — as something to monitor rather than act on — are making a choice with real consequences. The thinking divide Anthropic named this week is real, and it’s widening. The question isn’t whether to engage with AI transformation; it’s whether your organization can do it with intention and clarity before the decisions get made for you.