The BeAIReady Brief | Week 8
What I'm actually reading about AI and the digital workplace (not an AI-curated list of articles).
Hi there! This is a new segment for BeAIReady. Each week (or just about) I’ll recap what I’ve actually been reading and watching related to AI and the digital workplace. This isn’t an AI-curated list of articles — these are articles I’ve genuinely read, along with some analysis of their practical impact on the digital workplace.
This week’s reading was defined by a single uncomfortable question running through almost everything I saw: who actually benefits when AI restructures work? The market is already voting — punishing the companies AI threatens and rewarding those aligned with it. Agents are moving from novelty to infrastructure, and the governance debates are arriving right on cue. And on the ground, knowledge workers are quietly discovering that the biggest shift isn’t capability — it’s continuity.
Here’s what I’ve been reading this week.
The Market Is Repricing AI’s Impact — and the Signal Is What’s Falling, Not What’s Rising
The story everyone was watching this week was the so-called “SaaSpocalypse” — a wave of selling that wiped over a trillion dollars from enterprise software valuations. Salesforce down 26%. ServiceNow down 28%. HubSpot down 39%. The hot take was that AI is overhyped and the bubble is cracking. A much better read, from Dave Brear on LinkedIn, makes the opposite case: markets aren’t losing faith in AI — they’re losing faith in the companies AI is threatening to replace. (The AI Skeptics Are Right About the Wrong Thing)
That reframe matters. Deutsche Bank ran a remarkable experiment this week — they asked their own AI tool, dbLumina, which industries it planned to disrupt. The answer was uncomfortably direct: information technology and software are the most exposed, followed by wealth management (80% of retail investor interactions handled by AI by 2027) and customer service (75% automated by 2026). The machine identified its own targets. (Deutsche Bank asked AI how it was planning to destroy jobs)
Figma is a useful case study in what it looks like to be on the right side of that line. The stock is down 80% since IPO — and yet investors cheered its Q4 earnings, which showed 40% year-over-year revenue growth and the highest net dollar retention in ten quarters. In a CNBC interview, CEO Dylan Field ranked Claude at the top of the LLM stack, said 75% of their larger customers are already consuming AI credits weekly, and pushed back hard on the “software is dead” narrative. Figma’s partnerships with Anthropic and OpenAI aren’t decoration — they’re the thesis. (Figma Q4 earnings | Figma CEO interview) Citi’s equity analyst Tyler Radke added useful color: he thinks we’re nearing a bottom for software stocks, but his advice is to be selective — favor companies with real AI stories and avoid legacy SaaS businesses that aren’t showing growth or margin improvement. (Citi sell-off analysis)
Agents Are Becoming Infrastructure — and the Governance Questions Just Arrived
Underneath the market noise, something more structurally significant is happening with AI agents. A team at UC Santa Barbara published a new framework called Group-Evolving Agents (GEA), which allows AI agents to evolve collectively — sharing innovations across the group rather than siloing improvements in individual branches. GEA matched the performance of human-engineered agent systems on real-world coding benchmarks, at zero additional deployment cost. It’s an early signal that the “design the agent” phase of enterprise AI may give way to “let agents design themselves.” (GEA framework)
At the same time, the agent ecosystem got its first real governance friction. Anthropic quietly updated its documentation to clarify that routing Claude requests through personal Pro or Max subscriptions — which is exactly how popular personal agents like OpenClaw and NanoClaw work — violates their terms. The community reacted, Anthropic walked it back as a documentation clarification rather than a policy change, but the underlying tension is real: where is the line between personal experimentation and building a business on subsidized platform access? (Anthropic/OpenClaw ToS) The denouement was almost too on-the-nose: the creator of OpenClaw, Peter Steinberger, promptly announced he was joining OpenAI to work on “the next generation of personal agents.” (OpenClaw creator joins OpenAI)
The governance theme ran deeper than platform ToS disputes. Anthropic CEO Dario Amodei resurfaced in a Fortune piece with a quote that deserves more attention than it got: “I’m deeply uncomfortable with these decisions being made by a few companies, by a few people.” He wasn’t talking about competitors — he was talking about himself. No one elected him. No federal AI regulations exist. Thirty-eight states have adopted some form of AI legislation, but there’s no coherent national framework. Amodei is advocating for more regulation while acknowledging the commercial pressures working against it from inside his own company. It’s an honest and uncomfortable position — which is probably why it’s worth taking seriously. (Anthropic CEO on governance)
What AI Actually Feels Like From Inside the Work
Three more grounded reads this week, all relevant to anyone doing knowledge work.
The Claude Code diary by Dave Brear is worth reading slowly. He spent a week connecting Claude Code to his personal knowledge vault and discovered something easy to miss in the broader AI hype: the value isn’t in any individual response — it’s in continuity. When AI knows your files, your conventions, your half-formed ideas, the collaboration changes character entirely. You stop prompting and start conversing. He hit rate limits, felt the absence like a colleague stepping out for lunch, and ended the week staring at the Claude Max pricing page. The piece captures something true about what persistent AI context actually feels like. (Claude Code diary)
Microsoft open-sourced VibeVoice this week — a local audio model that handles text-to-speech, speech-to-text, and voice cloning, all without a cloud subscription or API key. Capabilities that were expensive and gated six months ago are now free and local — and that’s increasingly the story with open-source AI. (Microsoft VibeVoice)
Finally, a study worth bookmarking for anyone creating content professionally: an analysis of 1.2 million AI-generated answers found that 44% of ChatGPT citations come from the first 30% of any given piece of content. AI models weight early framing more heavily and interpret the rest through that lens. The practical implication is simple: if you want AI to surface your ideas, stop burying your best thinking at the end. Front-load your substance. (ChatGPT citation study)
On the Bigger Picture
Two reads that don’t fit the workplace frame neatly but are worth noting. IEEE Spectrum published a careful piece arguing that the US and China aren’t racing toward the same finish line — the US is doubling down on AGI, while China is focused on embedding AI into manufacturing, healthcare, and logistics as a near-term productivity engine. The “arms race” framing may be creating a self-fulfilling prophecy that increases risk for everyone. (US vs. China AI futures) And separately, China’s humanoid robots went from stumbling through folk dances at last year’s Spring Festival Gala to performing kung fu flips and choreographed gymnastics this year — China now accounts for more than 85% of global humanoid robot installations. The AI model race, one analyst noted, will ultimately matter more than the hardware. But the hardware is catching up faster than most expected. (China humanoid robots)
The thread connecting most of this week’s reading: AI isn’t disrupting work in the future tense anymore. The disruption is active — in valuations, in agent governance debates, in how knowledge workers structure their days and their content. The question for organizations isn’t whether to engage with this. It’s whether they’re paying close enough attention to know which side of the line they’re on.