Something shifted this quarter. Not one big announcement, but a steady drumbeat of moments where AI stopped fitting neatly into anyone's roadmap. Twelve major model releases in February. Agents operating your desktop better than most employees. A single product launch wiping $200 billion off the software industry. If you felt like things accelerated and never quite slowed down, you're not imagining it. Here's what actually happened.
Six things that actually mattered in Q1
Model releases hit a pace nobody can track. February alone brought 12 significant model updates across the major labs: GPT-5.3, GPT-5.4, Claude Opus 4.6, Claude Sonnet 4.6, Gemini 3.1 Pro, and more. The evaluation window for enterprise teams is now essentially zero. By the time your IT team finishes testing a model, two newer ones have shipped.
GPT-5.4 crossed a meaningful line. Launched March 5, OpenAI's latest model can now autonomously navigate desktops and software applications, combining reasoning, coding, and computer use in one system. On the OSWorld benchmark, which tests whether AI can navigate operating systems and complete real desktop tasks, GPT-5.4 scored 75%. Human experts score 72.4%. Agents are no longer a developer demo. They are becoming a standard enterprise feature.
Anthropic launched a suite that rattled the software industry. In January, Anthropic launched Claude Cowork, designed to handle the tedious work of shuffling data between spreadsheets, integrating Slack and Salesforce, and organizing emails, freeing people to focus on the work that actually matters. The market reaction was immediate: Thomson Reuters dropped 16%, LegalZoom fell 20%, Salesforce dropped 7%, and the JPMorgan Software Index fell 7% in a single day. Total market value erased: over $200 billion. In March, Anthropic also launched a $100M Claude Partner Network and announced Claude integration directly into Microsoft 365, putting it inside Word, Excel, PowerPoint, and Outlook.
Safety as strategy: Anthropic's Pentagon standoff. After Anthropic refused Pentagon terms permitting use for mass surveillance and autonomous weapons, the US Defense Department designated Anthropic a supply chain risk, blocking a $200M contract and causing more than 100 enterprise customers to raise concerns. The market responded in the opposite direction: Claude became the number one downloaded free app on the App Store in the US and 15 other countries. AI vendor choice is now a values signal, not just a technical one.

Cloud infrastructure proved fragile. AWS data centers in Bahrain and the UAE experienced power outages and fire-related damage following Middle East escalation in early March. Since late February, European gas prices have risen over 60% and Asian LNG prices have surged 143%. Goldman Sachs projects that power demand from AI infrastructure could grow 160-165% by 2030. The energy bill for AI is becoming a strategic variable, not a line item.
EU AI Act: more time, not less urgency. MEPs voted on March 16 to postpone high-risk AI system obligations, citing that key technical standards won't be finalized by the August 2026 deadline. Most high-risk categories now move to December 2027, with safety-critical products pushed to August 2028. For European enterprises, this is breathing room, not a green light. The prohibited practices rules are already active. The fines of up to €35M or 7% of global revenue still apply. The delay just means you have longer to get your high-risk systems right, not longer to start.
For board members and senior leaders looking to go deeper on AI governance: my INSEAD colleague Robert Maciejko and co-authors just published a research paper, "Power Steering, Not a Brake: How Boards Should Actually Govern AI," a practical framework for how boards can move from awareness to structured oversight. Worth the read: Read here
Q1 exposed risks that don't show up in your AI roadmap
Every major story from Q1 points to the same thing: AI ambition is running into real-world constraints. But the risks aren't coming from one direction. They're coming from two.
The external risk: you're building on ground you don't control.
Agents can operate computers. Models can draft, analyze, and execute across your existing software stack. The capability curve is steep and accelerating. But the moment any of this touches real infrastructure, it exposes dependencies most strategies didn't account for. Which cloud regions does your vendor run on? What happens to your workflows if your primary model provider becomes a regulatory casualty? Do your AI contracts have governance language, or just pricing?
The Anthropic situation is the clearest example. A company that built its brand on safety found that safety was a liability in one market and a competitive advantage in another. Enterprises with deep Claude integrations had to scramble. That's not a criticism of Anthropic. It's a reminder that single vendor dependency in your AI stack is now a governance risk, not just a technical one.
The internal risk: you're optimizing individuals, not transforming teams.
Individual AI use gets all the attention. Someone saves 40 minutes a day, a developer ships faster, a marketer drafts quicker. Useful, but not where the real gains are.
The actual jump happens when organizations stop giving everyone their own setup and start building AI into how the team works together. Shared systems. Common workflows. AI that one person configures and the whole team runs on.
Deloitte's 2026 report captures the gap: two-thirds of organizations report productivity gains from AI, but only 34% are truly reimagining how the business operates. The rest are optimizing individuals, not transforming teams.
Where does that leave us?
Most AI roadmaps until now were built around one assumption: that the technology would be the hard part. Q1 made it clear the technology is the easy part. The hard part is everything around it: the governance, the vendor dependencies, the team structures, the AI skill gaps within teams, the way work actually flows.
Heading into Q2, the distinction that matters: are you adopting AI, or adapting to it?
Adoption is tools, licenses, and pilots. Adaptation is rethinking how your teams operate, how decisions get made, and what your infrastructure actually needs to support. Most organizations are still doing the first. Q1 made a strong case for the second.
Hope this was useful. If you'd like more on the frameworks and strategies behind making AI work at the organization level, drop me a reply. It'll help me shape what I cover next.
See you soon,
Pooja
PS: If you found this useful, please share it with a teammate or colleague who might benefit from it.
