The BeAIReady Brief | Week 10
What I'm actually reading about AI and the Digital Workplace (not an AI-curated list of articles).
I read this week's articles against the backdrop of a newly waged war — U.S. and Israeli strikes on Iran… and Iranian retaliation that reached (among other targets) Amazon's cloud infrastructure in the Middle East. That story’s in the “Bigger Picture” section at the end, but it colored how I took everything else in.
Because right now, everything feels a little shaky… doesn’t it?
And then this morning, the February jobs report landed: the economy shed 92,000 jobs, unemployment ticked up to 4.4%, and the labor market has now averaged essentially zero net job creation over six months. The causes are genuinely tangled — a Kaiser Permanente strike, continued federal workforce reductions, the early shadow of a widening global conflict — but the question running through all of it felt less abstract than usual: how much of what we assume is stable is actually moving, and how would we even know? This week also marked a shift from argument to evidence in the AI-and-jobs debate specifically, with two serious research papers, a landmark corporate restructuring, and a cascade of product moves rewriting the economics of enterprise software.
This week’s coverage:
The Boats Are Already Burning
Block cuts 40% of its workforce, Goldman publishes “AI-nxiety,” and Moody’s economist names the moment we’re in. The argument-to-evidence shift is here.
What the Research Actually Shows About Jobs
Anthropic and MIT both published serious labor market research this week. The picture is more nuanced — and more useful — than the headlines suggest.
The Software Stack Is Rewriting Itself
Companies are vibe-coding their own CRMs, and OpenAI just shipped GPT-5.4. The pressure on incumbent software vendors is no longer theoretical.
What Good Enterprise AI Deployment Actually Looks Like
OpenAI built an internal data agent serving 4,000 employees in three months. The lesson isn’t the tech — it’s what they say is the real prerequisite. Plus: why 62% of executives outsourcing decisions to AI should worry you.
Iranian drone strikes hit Amazon’s data centers in the Middle East. The cloud isn’t as abstract as we sometimes treat it.
Here's what I’ve been reading this week.
The Boats Are Already Burning
Jack Dorsey chose this week to give his first extended interview since cutting 40% of Block’s 10,000-person workforce, and the resulting Wired piece is required reading. Dorsey isn’t always right, but he is unusually willing to say what many CEOs are privately thinking. His framing: the shift in AI capability that happened in December — specifically the leap in Anthropic’s and OpenAI’s models — made a fundamental restructuring of companies not just possible but necessary. The test isn’t gross profit per employee, he argues. The test is whether you’re building toward “a company as an intelligence” rather than a hierarchy of managers. Organizations that aren’t doing that, he says flatly, will face something existential within the next year or two. (Jack Dorsey Is Ready to Explain the Block Layoffs | WIRED)
Moody’s chief economist Mark Zandi had a name for what Dorsey just did: a Cortés moment. Like the conquistador who burned his ships on the shore of Mexico in 1519, eliminating any possibility of retreat, companies investing at scale in AI are cutting off their own exit routes — whether they know it or not. The mechanism Zandi fears most isn’t the layoff itself; it’s the market’s response to it. Block’s stock surged after the announcement. When Wall Street rewards aggressive AI-driven downsizing, the signal travels fast to every other boardroom that hasn’t yet acted. It’s not a single rupture — it’s a cascade of rational decisions, each one pushing the labor market a little closer to the edge. (Top economist says companies are close to a ‘Cortes moment’ on AI)
Goldman Sachs, meanwhile, published what I think may be the most important single data point of the week: a research note from senior economist Ronnie Walker titled, without irony, “AI-nxiety.” The headline finding is almost counterintuitive — Goldman found no meaningful relationship between AI adoption and productivity at the economy-wide level, even as corporate revenues grew a healthy 4.6%. Seventy percent of S&P 500 management teams mentioned AI on their earnings calls. Only 10% quantified its impact. Only 1% quantified its impact on earnings. But in two specific domains — customer support and software development — companies that actually measured AI’s contribution reported a median productivity gain of 30%. That’s not a marginal improvement… it’s what Dorsey recognized as a restructuring trigger. Goldman also found that companies discussing AI in the context of their workforce cut job openings by 12% over the past year, compared to 8% for companies overall — a modest but meaningful signal that the “nascent reluctance to hire” is already underway. Long-term, Goldman’s baseline forecast is that 6-7% of workers — roughly 11 million jobs — will eventually be displaced. (Goldman finds no relationship between AI and productivity but a 30% boost in 2 areas)
What the Research Actually Shows About Jobs
If the Dorsey interview and the Zandi piece represent the “vibes” end of the AI-and-work debate, this week I read two pieces of serious academic research that push back — carefully, not dismissively — on the most panicked readings.
Anthropic published a new paper introducing a novel framework for measuring AI’s labor market impact, combining its own usage data from Claude with government occupational statistics to build what the researchers call an “observed exposure” metric — a measure of which jobs are not just theoretically automatable but actually being automated right now. The findings are more reassuring than alarming, at least for the moment: no systematic increase in unemployment for workers in the most AI-exposed occupations has emerged since late 2022. The exception — and it’s worth watching — is younger workers. Job-finding rates for workers aged 22-25 entering high-exposure occupations have dropped by roughly 14% in the post-ChatGPT era. The entry-level pipeline into exposed roles is narrowing, even if incumbent workers aren’t yet being displaced at scale. The paper is careful to note this is early evidence in a framework explicitly designed to detect disruption before it becomes obvious in aggregate data. (Labor market impacts of AI: A new measure and early evidence)
MIT economists David Autor and Neil Thompson offered a parallel and useful corrective in MIT Sloan Management Review. Their core argument is that we’ve been asking the wrong question. The relevant issue isn’t whether a job is exposed to automation — it’s whether AI will automate the supporting tasks that free workers to do their expert work better, or whether it will automate the expert tasks themselves, commoditizing hard-won skill. When spellcheck automated proofreading’s routine work, skilled proofreaders’ wages went up. When GPS automated taxi drivers’ encyclopedic knowledge of city streets, their wages fell. The difference is everything. Thompson also flagged a striking productivity finding that deserves more attention: experienced developers using generative AI wrote code faster, but took 19% longer to complete tasks overall. Prompting, checking outputs, waiting on the model — it all adds up. The productivity gains AI promises are real, but getting there involves friction we often don’t account for in the projections. (What 2 MIT experts are thinking about AI and work | MIT Sloan)
Venture capitalist Bill Gurley synthesized the career-level implications in a way that resonated with what I’m hearing from clients. His warning: workers who followed the “college conveyor belt” into roles they don’t particularly care about are most exposed — not because AI will replace passion, but because disengaged workers have no natural motivation to become the most AI-fluent person in the room. The workers who survive, Gurley argues, are those who treat AI as “career jet fuel” — who understand what the technology can do in their specific industry and become indispensable precisely because of that fluency. For organizational leaders, the implication is uncomfortable: the talent most vulnerable to displacement may also be the talent that’s hardest to motivate to adapt. (Tech investor Bill Gurley says workers who went through the ‘college conveyor belt’ are most at risk)
The Software Stack Is Rewriting Itself
While the labor market debate dominated headlines, I was also tracking a quieter but arguably more consequential set of moves in enterprise software. I have stood firmly in the camp that the effect of AI is more about disruption than replacement — but this week’s reading has started to sway me.
The Wall Street Journal documented a wave of small and midsize companies that are vibe-coding their own CRM systems rather than paying for Salesforce. One example: a 65-person water treatment company that built a custom CRM for $15,000-$20,000 — cheaper, better-fitting, and more likely to actually get used. The more significant business story isn’t the individual builds; it’s the leverage shift. As BNP Paribas’s head of software research noted, even if most companies don’t walk away from incumbent vendors, an explosion of AI-native alternatives gives buyers negotiating power they haven’t had in years. That’s a margin compression story as much as a displacement story. (Meet the Companies Vibe Coding Their Own CRMs)
OpenAI accelerated its enterprise ambitions this week with the release of GPT-5.4, a model it describes as “our most capable and efficient frontier model for professional work.” Available in standard, Thinking, and Pro versions, the model hits benchmark records in knowledge-work tasks — 83% on OpenAI’s GDPval professional skills test — while running faster and at lower cost than its predecessors. For enterprise buyers, the headline is reliability: GPT-5.4 is 33% less likely to make errors in individual claims compared to its predecessor, and 18% less likely to produce responses containing errors overall. That gap between “impressive demo” and “dependable production tool” has been the sticking point for serious enterprise adoption; OpenAI is clearly targeting it directly. (OpenAI launches GPT-5.4 with Pro and Thinking versions | TechCrunch)
What Good Enterprise AI Deployment Actually Looks Like
The most instructive piece I read this week might also be the one that got the least attention: a VentureBeat deep-dive into how OpenAI built an internal AI data agent that now serves over 4,000 of its own employees. Two engineers. Three months. Seventy percent of the code written by AI. The agent lets any employee — technical or not — query 600 petabytes of data across 70,000 datasets in plain English and get charts, dashboards, and analytical reports in minutes instead of hours.
The most useful insight from the piece isn’t the architecture or the benchmark scores — it’s what OpenAI’s data infrastructure lead Emma Tang identifies as the unsexy prerequisite that will determine who wins the AI agent race: data governance. This speaks directly to the Knowledge Architecture foundation I work to build for my own clients. Your data needs to be clean enough, annotated enough, and governed well enough for an agent to navigate it reliably. Without that foundation, the best models in the world produce overconfident, wrong answers. The organizational work of making your data trustworthy is not glamorous. It is, however, non-negotiable. (OpenAI’s AI data agent, built by two engineers, now serves thousands of employees)
The data governance challenge has a troubling counterpart in the executive suite. A survey of 200 UK business leaders I came across this week found that 62% now use AI to make the majority of their decisions — and 70% admitted to second-guessing their own judgment when it conflicted with AI’s recommendation. Perhaps more revealing: 65% said decision-making had become less collaborative since they adopted AI, and 46% now rely on AI more than on their colleagues’ advice. There’s a familiar pattern here — the same promises made about Executive Information Systems in the 1980s and 1990s. What’s different now is the degree of deference, and the research finding of a “significant negative correlation” between frequent AI tool use and critical thinking ability. Building AI into your workflow to handle complexity is prudent. Outsourcing your judgment to it is a different matter. (Supposedly big-brained execs are outsourcing decisionmaking to AI)
On the Bigger Picture
A story that belongs in every enterprise risk register this week: Iranian drone strikes damaged three Amazon Web Services data centers in the Middle East — two directly struck in the UAE, a third damaged in Bahrain — causing structural damage, power disruption, and in some cases requiring fire suppression that added water damage on top. The localized impact was significant but not catastrophic; AWS’s redundancy architecture absorbed most of it. What the incident exposed is something cloud providers have long preferred not to emphasize: these facilities are physical objects, in physical locations, in a physical world that includes geopolitical conflict. AWS’s own disaster recovery architecture is designed for software failures, not missile attacks. For any organization with critical workloads in a region that borders active conflict zones, the question of “what does our BCP look like if our cloud provider’s infrastructure takes a hit?” just became more urgent. (Iranian strikes on Amazon data centers highlight industry’s vulnerability to physical disasters)
What I kept coming back to this week is how much the debate has shifted from “will this happen?” to “it’s happening — now what?” The Goldman data shows productivity gains are real but concentrated. The Anthropic and MIT research shows labor impacts are real but still uneven and emerging. Dorsey’s restructuring and Zandi’s framing show that competitive and market dynamics are now accelerating adoption in ways that may not wait for the research to catch up. The most dangerous position for any organization right now is the one that’s watching all of this unfold while waiting for more certainty. The boats are already burning — the question is whether you’re on the shore or still on the water.