The BeAIReady Brief | Week 16
April 13–19 | Your Org Chart Is the Bottleneck, OpenAI Is Done Pretending Microsoft Is a Partner, and the Governance Framework You're Relying On Doesn't Cover Where AI Is Actually Running
The IMF cut its global growth forecast to 3.1 percent this week and U.S. consumer sentiment hit a 74-year low — both driven largely by energy-price pressure from the ongoing Middle East conflict and rising expectations of near-term inflation.
You’d think we’d be seeing organizations cut back — and they are, sort of. Hiring is basically flat, while enterprise AI investment continues to go up. But there’s increasing pressure to show returns on what’s already been spent. The challenge is that the way most companies are organized — the culture, the structure, the management layer — isn’t set up to use that investment well. That gap keeps showing up differently in every article I read: in the org chart that hasn’t moved, in the governance framework that doesn’t reach where the models are actually running, in the demand metrics measuring the wrong thing entirely.
Here’s what I was reading.
This week’s coverage:
The Structure Isn’t Ready and the Employees Already Know It
The KPMG data on organizational adaptability is uncomfortable reading — and the HBR case study on BBVA suggests that the employees who can’t wait for the org chart to catch up have already built around it.
Enterprise Software’s Most Comfortable Assumptions Are No Longer Safe
Three structural forces are cracking the SaaS business model, and an internal OpenAI memo made clear that even its most important partnership is now a competitive constraint.
The Governance Layer Doesn’t Reach Where AI Is Running
Open-weight models and local inference are moving faster than the oversight frameworks built to contain them — and Google Gemma 4 just made the cloud security perimeter largely irrelevant.
Token Counts Are Not a Productivity Metric
Anthropic is restructuring its pricing as a deliberate bet against inflated demand projections, while companies like Snap are wrapping workforce reductions in AI investment narratives that deserve more scrutiny than they’re getting.
On the Bigger Picture
AI is quietly reshaping how humans communicate with each other, not just how they work — and the model arms race moved into cybersecurity last week.
The Structure Isn’t Ready and the Employees Already Know It
A new index from KPMG, built from surveys of 300 C-suite leaders and analysis of 177 publicly traded companies, delivered numbers that should be uncomfortable for anyone who has spent the last two years telling their board that the AI transformation is underway. Eighty-one percent of executives say their boards have raised expectations for organizational adaptability. Only 30% say their organization’s structures, roles, and processes can actually reconfigure quickly. Only 24% identified more dynamic talent deployment as something their organization changed in the last year. And in every industry group surveyed, companies were nearly twice as likely to increase technology spending as to invest in employee training. (The org chart isn’t ready: How AI exposed the hidden crisis inside the American corporation)
What this data is actually measuring is the distance between executive aspiration and organizational reality — and that distance is not closing. The KPMG researchers found that industries most focused on innovation scored near the bottom on cultural adaptability; manufacturing and energy, which most people wouldn’t call hotbeds of radical reinvention, scored highest, because they adapt through disciplined scenario planning and operational execution rather than through enthusiasm about transformation. Forty-six percent of executives report burnout and change fatigue as an unintended consequence of their adaptability efforts — meaning organizations are demanding more adaptability from the people they’re simultaneously making fewer of. The companies that pushed through genuine cultural and structural transformation, by contrast, saw 4.4 times higher shareholder returns and nearly triple the revenue growth of their less adaptable peers. That is not a technology finding. That is a leadership and organizational design finding, and it implicates a set of decisions most executive teams have been avoiding.
The KPMG data becomes even sharper when you read it alongside an HBR case study on BBVA, one of Europe’s largest banks, which approached the same challenge from the other direction. Rather than waiting for governance frameworks to catch up, BBVA started from a recognition that its employees were already using AI on their own. Research suggests that in companies without official AI subscriptions, more than 90% of employees report using personal AI tools for work tasks anyway — what the HBR authors call the “shadow AI economy.” Most organizations respond to this with restrictions, monitoring, and gatekeeping. BBVA concluded the opposite: that restricting shadow AI was more dangerous than deploying a managed solution rapidly, and that the shadow AI economy was a signal of demand and productivity potential, not a compliance problem. (The Hidden Demand for AI Inside Your Company)
The BBVA approach rested on three principles: treat AI as an assistant, not a replacement; give employees autonomy with clear responsibility for results; and build a peer-to-peer adoption network rather than a centrally managed rollout. They distributed initial licenses competitively — to the most motivated employees, with a “use it or lose it” policy that turned access into a privilege. Active users who built and shared custom tools were prioritized for additional access. This created genuine demand before it mandated adoption. The results, as of mid-2025: 83% of employees using the system weekly, averaging 50 prompts per week, with self-reported time savings of 2-5 hours per week. More than 4,800 custom tools were built by frontline employees — people who understood the actual workflow, not a central IT team.
The implication for most organizations is pointed. If your AI governance function is primarily occupied with restricting and monitoring what employees are already doing on their own, you have deployed your scarce change management capacity in exactly the wrong direction. The bottleneck is not the technology and it is not the employees — it is the organizational structure that is too rigid to harness what employees are already building, and the leadership culture that responds to that ingenuity with a policy document.
Enterprise Software’s Most Comfortable Assumptions Are No Longer Safe
The S&P software index has dropped roughly 20% this year, and a new word has entered the business vocabulary: “SaaSpocalypse.” A Fortune analysis from last week, based on roundtables with senior business leaders, identified three structural forces that are undermining the business model that made enterprise software one of the most profitable industries on the planet — and none of them is temporary. (The 3 forces quietly dismantling the business model that made enterprise software fabulously profitable)
The first force is market vulnerability: enterprise software margins have been sustained for decades by switching costs that lock customers in regardless of satisfaction. That kind of captive market is an invitation to disruption. The second is collapsing barriers to entry: building enterprise-grade software used to require enormous capital and engineering resources; AI coding agents have dramatically lowered both. The third — and potentially the most consequential — is the rethinking of workflows. SaaS companies built their empires on standardizing processes across industries: one CRM platform for every company, one finance system for every CFO. AI is enabling organizations to redesign workflows from scratch, and the competitive advantage is shifting toward deep vertical expertise rather than mastery of a horizontal process. The organizations that will capture value in this new environment are not the ones with the largest installed base — they are the ones that control the orchestration layer, privileged data access, and distribution into daily work. Those control points are genuinely unsettled right now, and the fight over them is already underway.
Nowhere was that fight more visible last week than in an internal memo from OpenAI’s revenue chief, Denise Dresser, which characterized the company’s long-standing Microsoft partnership as a constraint on its enterprise ambitions. Microsoft has “limited our ability to meet enterprises where they are,” Dresser wrote — and for many enterprises, that means Amazon’s Bedrock platform. Inbound demand from customers for the Amazon partnership has been “frankly staggering,” the memo noted. (OpenAI touts Amazon alliance in memo, says Microsoft has ‘limited our ability’ to reach clients)
For any organization currently running its AI strategy primarily through a Microsoft enterprise agreement, this memo is worth reading carefully. Microsoft and OpenAI are both racing toward IPOs and both encroaching on each other’s territory — Microsoft added OpenAI to its list of competitors in its 2024 annual report, and OpenAI is now actively routing enterprise clients through Amazon rather than through Microsoft’s distribution channels. Meanwhile, Anthropic has connected Claude directly to Microsoft 365 data at no cost, and Microsoft has simultaneously pulled back free Copilot access for its largest enterprise customers. The AI distribution layer inside your organization is not settled. Assuming it will remain organized around a single vendor relationship is a planning error.
The Governance Layer Doesn’t Reach Where AI Is Running
The governance conversation inside most enterprises is still organized around a set of assumptions that were reasonable in 2023: models live in the cloud, traffic flows through monitored gateways, and the security perimeter is the place to apply controls. A Forbes Technology Council piece last week catalogued the ways this framework is already failing as open-weight models proliferate across enterprise teams — not through any coordinated deployment decision, but through individual engineers and analysts downloading and running models on their own. (The Hidden Risks Of Scaling Open AI Models Across Enterprises)
The risks the contributors identified are worth naming because they’re not hypothetical. Without governance, organizations risk “automating the creation of technical debt” — open-weight models generating code that looks correct but gradually diverges from the system’s architecture. Model sprawl means no visibility into which models are running on which workflows, what data they were fine-tuned on, or whether their outputs are reproducible across different hardware configurations. And the same dynamic that creates shadow AI in productivity tools creates shadow AI in model deployment: the governance processes meant to catch these problems are exactly slow enough that motivated engineers route around them.
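The visibility gap described above is, at bottom, a missing inventory: nobody can say which models run where, on what data, on what hardware. A minimal sketch of the record-keeping that would close it might look like the following — field names and example values are my own illustrative assumptions, not any standard or any vendor’s schema.

```python
from dataclasses import dataclass

# Illustrative model-inventory sketch. The fields answer the three
# visibility questions raised above: which workflows a model serves,
# what data it was fine-tuned on, and what hardware it runs on
# (which affects output reproducibility).

@dataclass
class ModelRecord:
    name: str            # e.g. an open-weight model identifier
    version: str         # weights revision or checksum
    owner_team: str      # who is accountable for its outputs
    workflows: list      # where it is actually running
    finetune_data: str   # provenance of any fine-tuning data
    hardware: str        # inference target

class ModelRegistry:
    """In-memory registry keyed by name@version."""

    def __init__(self):
        self._records = {}

    def register(self, record: ModelRecord) -> None:
        self._records[f"{record.name}@{record.version}"] = record

    def models_for_workflow(self, workflow: str) -> list:
        # The question governance teams currently cannot answer:
        # "what models touch this workflow?"
        return [r for r in self._records.values() if workflow in r.workflows]
```

The point isn’t the code — it’s that sprawl becomes governable only once a record this simple exists and is actually populated.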
The more structurally significant problem, though, is that open-weight models running locally have moved outside the reach of the API-centric security controls most organizations spent the last two years building. Google’s release of Gemma 4 — a multimodal, open-weight model capable of running directly on laptops and smartphones — makes this concrete. Security analysts cannot inspect network traffic if the traffic never hits the network. An employee can now run a capable AI agent on their local machine, process sensitive corporate data, execute multi-step workflows, and generate output without triggering a single cloud firewall alert. If that agent hallucinates or mishandles regulated data, the logs that auditors and compliance teams would normally examine simply don’t exist inside the centralized security dashboard. (Strengthening enterprise governance for rising edge AI workloads)
The governance response to this is not to block the models — that approach creates shadow IT, not compliance. The more defensible answer is to shift focus from policing what model is running to controlling what the host machine can access: restricting permissions, flagging anomalous access patterns, and building endpoint detection tools that can differentiate between a developer compiling code and an agent iterating through local file structures. Most corporate security policies were not written for a world in which the endpoint itself is the compute node. The window for updating them before incidents force the issue is shorter than most CISOs realize.
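To make the “developer compiling code vs. agent iterating through local file structures” distinction concrete, here is a toy heuristic: an agent sweep tends to open many files per second across many unrelated directories, while a build touches a narrow set of directories at a bounded rate. The event shape and thresholds are assumptions for illustration, not taken from any real endpoint-detection product.

```python
from dataclasses import dataclass

@dataclass
class FileEvent:
    timestamp: float   # seconds since epoch
    path: str          # file opened by the observed process

def looks_like_agent_scan(events, max_rate_per_sec=20.0, max_distinct_dirs=15):
    """Flag a process whose file-access pattern resembles an agent
    sweeping local file structures: a high open rate spread across
    many distinct directories. Thresholds are illustrative and would
    need tuning per environment."""
    if len(events) < 2:
        return False
    span = events[-1].timestamp - events[0].timestamp
    rate = len(events) / max(span, 1e-6)
    dirs = {e.path.rsplit("/", 1)[0] for e in events}
    return rate > max_rate_per_sec and len(dirs) > max_distinct_dirs
```

A real deployment would combine signals like this with the permission restrictions and anomaly flagging described above; no single heuristic survives contact with a motivated engineer.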
Token Counts Are Not a Productivity Metric
A CNBC analysis last week made an argument that I found worth sitting with: AI demand, as currently measured, is significantly overstated, and Anthropic’s decision to move away from flat-rate enterprise pricing toward per-token billing is a deliberate bet on that reality. The core of the argument is that token consumption — the basic unit of AI usage — has become a distorted metric. Companies like Meta and Shopify have built internal leaderboards tracking how many tokens employees consume, and Nvidia’s CEO has said he would be “deeply alarmed” if engineers weren’t spending the equivalent of $250,000 in compute annually. As the CEO of Databricks observed: once companies start measuring AI adoption by volume, employees optimize for the metric rather than the outcome. (Perspective: AI demand is inflated, and only Anthropic is being realistic)
The deeper problem is that organizations are accumulating AI spend without accumulating evidence of what that spend produced. A dozen CTOs and CIOs told a researcher at Harvard Business School’s AI Institute that they’re “having a really hard time finding an ROI framework” for their AI investments. Flat-rate enterprise pricing — which dominated the early AI adoption period — made this easy to ignore. When the bill doesn’t change regardless of usage, nobody is forced to ask whether the usage is generating value. Anthropic’s move to per-token billing changes that calculus and, if it takes hold, will force a reckoning that finance teams have been quietly building toward.
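The calculus shift is easy to see with back-of-envelope math. Under a flat seat fee, cost is constant whether a user sends ten prompts or ten thousand; under per-token billing, every token has to justify itself. The prices and usage figures below are illustrative assumptions, not Anthropic’s actual rates.

```python
# Back-of-envelope comparison of flat-rate vs. per-token billing.
# All prices below are assumed for illustration only.

def monthly_cost_per_token(tokens_in, tokens_out, price_in_per_m, price_out_per_m):
    """Per-token bill: pay for what was actually consumed.
    Prices are expressed per million tokens."""
    return (tokens_in / 1e6) * price_in_per_m + (tokens_out / 1e6) * price_out_per_m

def break_even_tokens(flat_fee, blended_price_per_m):
    """Monthly token volume at which per-token billing costs the
    same as a flat seat fee."""
    return flat_fee / blended_price_per_m * 1e6

# Example with assumed numbers: a $60/month flat seat vs. a blended
# $6 per million tokens. Break-even lands at 10M tokens/month; below
# that, the flat fee was quietly subsidizing idle seats, and above
# it, per-token billing forces the usage to show its value.
```

This is why leaderboard-style token targets are perverse under flat pricing and self-correcting under per-token pricing: the metric and the bill finally point in the same direction.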
Snap’s announcement that it was laying off 16% of its workforce — roughly 1,000 employees — while citing AI efficiencies sits in this same frame. The company reports that more than 65% of its new code is AI-generated and that AI agents have found over 7,500 bugs in its codebase; the restructuring is expected to save $500 million annually. Snap joins a list that now includes Amazon, Meta, and Oracle, all of which have announced significant cuts while simultaneously increasing AI infrastructure spending. (Snap Lays Off 1,000 Workers To Focus on AI—Is This the New Norm?)
What deserves scrutiny here is not whether AI is involved in these decisions — it clearly is — but whether “AI-generated code percentage” is a meaningful measure of anything other than itself. Research from the Yale Budget Lab found that AI adoption had not caused a discernible disruption to the overall labor market since 2022; AI was cited as the cause of roughly 4.5% of announced job cuts last year. But the micro-level data is more specific: young workers in high-AI-exposure occupations have seen relative unemployment rise sharply, and they’re taking longer to find new jobs. The macro numbers look stable. The adjustment is already underway in the data you have to look harder to find. Organizations that are using AI investment narratives to rationalize structural workforce reductions without measuring the actual productivity outcomes of those investments are setting up an accountability problem, not solving one.
On the Bigger Picture
Something I’ve been thinking about that last week’s reading reinforced: the conversation about AI’s impact on work almost always focuses on tasks, roles, and productivity. Less discussed is what AI is doing to how people communicate with each other — and a Fast Company analysis offered a framework worth considering. The argument, drawn from INSEAD professor Erin Meyer’s work on cultural dimensions, is that generative AI is gradually training people toward greater explicitness in communication. An effective prompt requires precision; implicit cues don’t translate. As AI mediates more exchanges, the richness of indirect communication — the valued art of reading between the lines in high-context cultures — erodes. Perhaps the most telling signal: in professional contexts, a typo is increasingly read as proof that you wrote something yourself. Imperfection has become an authenticity marker. (AI Isn’t Just Reshaping Productivity and Threatening to Kill Jobs. It’s Also Creating a New Gender Gap)
Elsewhere, the model arms race moved into cybersecurity last week. OpenAI unveiled GPT-5.4-Cyber, a variant of its flagship model fine-tuned for defensive security work, rolling it out initially to vetted vendors and researchers through its Trusted Access for Cyber program. The announcement came exactly one week after Anthropic introduced Claude Mythos Preview as part of Project Glasswing — a controlled initiative that has reportedly identified thousands of major vulnerabilities in operating systems and browsers. (OpenAI unveils GPT-5.4-Cyber a week after rival’s announcement of AI model) And TechCrunch reported that Microsoft is quietly developing what amounts to its own OpenClaw-like agent — an always-on, persistent agent capable of completing multi-step tasks over extended periods — with plans to show it at Build in June. (Microsoft is working on yet another OpenClaw-like agent) The pattern across all three announcements is consistent: the capability frontier is moving, the institutional race to claim it is accelerating, and the governance frameworks meant to manage it are running several months behind.
What last week’s reading kept returning me to is a question of accountability. The KPMG data shows that genuinely adaptable organizations see 4.4 times the shareholder returns of their peers — and that the gap between them is cultural and structural, not technological. Boards have raised their expectations for adaptability, but only 30% of organizations say their structures can actually move. The pattern across the rest of last week’s reading reinforces this: governance frameworks written for the cloud don’t cover local inference; ROI frameworks built on token consumption don’t measure business value; enterprise software strategies organized around a single vendor relationship don’t account for how fast the distribution layer is fragmenting. These are not AI problems. They are organizational problems that AI is making newly expensive — and newly visible. The companies that are finding their way through this are the ones that started not with the technology but with a clear-eyed look at the structure that has to receive it. The ones that haven’t are accumulating spend without accumulating evidence, and the accounting will arrive on schedule.


