
I work with mid-to-large teams on AI adoption every week. Here is the pattern I keep seeing, across industries and across maturity levels: the individuals are flying. The team is not. And almost nobody is measuring the gap between the two.

Studies put individual AI productivity gains anywhere from 25% to 40% depending on the role and task. Frequent users save around 9 hours a week (BCG). Power users are reportedly six times more productive than the average user (OpenAI internal data, via VentureBeat).

But in the same period, only 39% of companies can point to any EBIT impact from AI, and most of those report under 5% (McKinsey 2025).

That is not a technology gap. That is an organizational design gap.

The gap between what your team can do as individuals with AI and what your organization can do as a unit with AI is the single biggest source of unrealized value in 2026.

[Chart: the individual-vs-organizational AI value gap. Illustrative and directional, not data.]

The good news: this gap can be closed. I have closed it with clients across financial services, M&A advisory, and professional services. The work is not glamorous, but it is concrete. Let me show you what is actually happening, and what to do about it.

Seven recent developments that only matter when you read them together

  1. A Chinese open-source AI just beat the best US models. Moonshot AI released Kimi K2.6 on April 19. It outperformed GPT-5.4, Claude Opus 4.6, and Gemini 3.1 Pro on one of the hardest AI benchmarks, can run a 12-hour coding job without human input, and anyone can download and use it. The assumption that frontier AI capability lives inside US labs is no longer safe.

  2. Anthropic built a frontier model and chose not to release it. On April 7, Anthropic disclosed Claude Mythos. Internal testing showed it could construct full multi-stage cyberattack chains and found thousands of zero-day vulnerabilities across every major operating system and browser. They have declared it "too powerful" to release. The Bank of England is now testing AI as a financial stability risk. This is the first time a major lab has withheld a completed frontier model on safety grounds. It will not be the last.

Source: BBC

  1. Two "vibe-coding" platforms got hacked within 48 hours of each other. Lovable accidentally exposed thousands of customer projects, source code, passwords, AI chat histories, for 48 days. Vercel's breach started with a single employee's compromised AI plug-in. These were not edge cases. They are what happens when AI tools are shipping faster than the security around them.

  2. Only 5% of companies are creating substantial AI value at scale. BCG surveyed enterprise AI programs and found that 60% generate no material value despite their investments. Only 5% are creating substantial value at scale. PwC's 2026 study found that 74% of AI's economic value is being captured by just 20% of organizations. The gap between "we use AI" and "AI is changing how we make money" is widening, not narrowing.

  3. EU AI literacy is already a legal obligation. It has been since February 2025. A common misread I keep hearing: "We have until August 2026." No. AI literacy obligations under the EU AI Act have been enforceable for over a year. GPAI obligations kicked in August 2025. August 2026 is only the deadline for high-risk systems. If your team does not have documented AI literacy training, you are not preparing to be compliant. You already are not.

  4. Anthropic's Claude Design wiped 7% off Figma's stock in a single day. When Anthropic launched Claude Design, a tool that generates polished UI designs and prototypes from a single prompt, Figma's stock dropped 7% within hours. The same week, Amazon committed another $5 billion to Anthropic (with up to $20 billion planned), and Anthropic locked in $100 billion of Amazon cloud infrastructure for the next decade. AI is no longer a feature inside someone else's product. It is starting to replace the product. If the software you depend on is the kind of thing AI can now generate, your vendor risk just changed.

  5. Deloitte: 66% report productivity gains. Only 20% report revenue growth. Deloitte's 2026 State of AI in the Enterprise (3,200+ global leaders): 66% of organizations are seeing productivity and efficiency gains from AI. 74% hope to grow revenue from it. Just 20% are actually doing so. Translation: the productivity gains are real, and they are showing up at the individual level. The revenue gains require something else, and almost nobody is doing that something else.

What individual AI use looks like vs. what team AI use looks like

The seven developments above only matter when you read them against this distinction.

Individual AI use is one person, one tool, one task. A consultant using Claude to draft a deck. A marketer using ChatGPT to brainstorm headlines. A lawyer using Harvey to review a contract. The productivity lift is real, immediate, and almost entirely owned by the individual. This is what most "AI adoption" programs are actually achieving. It is also why your power users are reportedly six times more productive than your average user.

Team AI use is something completely different. It is a workflow where multiple people, often across functions, use AI as a shared layer in how they collaborate. The deal team uses one shared model, with shared context, shared prompt patterns, and shared outputs that feed downstream into someone else's work. The compliance team uses an agent that pulls from policy, transactions, and CRM in a way no individual could replicate. This is where the EBIT shows up. And almost no organization is set up to deliver it.

"AI use that boosts individual performance does not naturally translate to improving organizational performance."
— Ethan Mollick, One Useful Thing

This is what Mollick has been saying for over a year, and the data has now caught up with him. Individual gains are everywhere. Organizational gains are rare. The reason is not the technology. The reason is that almost nobody has redesigned the work itself.

What this looked like inside a recent AI adoption program at a fintech company

I have been working with a fintech company on exactly this problem for the last several months.

We started where most programs start. Identified the use cases, built the tools, trained the team. On paper, enough.

It was not. The team was using AI individually, but under real workload pressure, adoption stayed inconsistent. Some tools got picked up immediately. Others sat untouched because the existing workflow was already in motion and switching felt like extra work.

Here is what changed. We stopped treating each tool as a standalone add-on and started rebuilding the workflow around it. We unified a fragmented AI tool stack so the team didn't have to think about which one to open. We collected real usage feedback between sessions and fixed gaps before the next one. And we assigned ownership to the people doing the work, not IT.

The result: content that used to go through an external agency is now produced internally in 30 to 40% less time. Agency spend is dropping. And the team is using AI more as a central operating system than as a tool add-on.

"Training assumes the issue is knowledge. It is not. They know how to use it. They know it is better. They still do not use it when it matters."

The lesson is the one I keep telling clients: stop trying to teach people to use AI as individuals and start redesigning how the team works with AI in it. The first is training. The second is the actual work.

So what should you actually do about it

Most AI strategies are being measured on the wrong things.

Licenses bought. Pilots launched. Tools rolled out. Training sessions run.

These numbers go up while the actual way the team works does not change. The individual productivity is real. The organizational productivity is not.

What I am telling clients this quarter: stop measuring individual AI usage. Start measuring team AI usage. How often is the new workflow used end-to-end across the team? Where does collaboration break down? Which functions have actually integrated AI into how they produce output, and which are still using it as a personal productivity hack? That is the data worth having.
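To make "measure team AI usage" concrete, here is a minimal sketch of the kind of metric that distinction implies. Everything in it is hypothetical: the event-log shape, the workflow name, and the rule that a run only counts as team usage when every step is completed and more than one person touched it are all illustrative assumptions, not a prescribed implementation.

```python
from collections import defaultdict

# Hypothetical workflow definition: the steps that make up one
# end-to-end team workflow (names are illustrative).
WORKFLOW_STEPS = {
    "content_pipeline": ["brief", "draft", "review", "publish"],
}

# Hypothetical usage log: (user, team, workflow, step_completed).
events = [
    ("ana",  "marketing", "content_pipeline", "brief"),
    ("ana",  "marketing", "content_pipeline", "draft"),
    ("ben",  "marketing", "content_pipeline", "review"),
    ("ben",  "marketing", "content_pipeline", "publish"),
    ("cara", "sales",     "content_pipeline", "draft"),  # solo, partial run
]

def team_usage_rate(events, workflow_steps):
    """Share of (team, workflow) runs completed end-to-end by >1 person."""
    runs = defaultdict(lambda: (set(), set()))  # (steps done, users involved)
    for user, team, workflow, step in events:
        steps, users = runs[(team, workflow)]
        steps.add(step)
        users.add(user)
    complete = sum(
        1
        for (team, workflow), (steps, users) in runs.items()
        if steps >= set(workflow_steps[workflow]) and len(users) > 1
    )
    return complete / len(runs)

# Marketing completes the whole workflow across two people; sales is one
# person doing one step -> 1 of 2 runs counts as team usage.
print(team_usage_rate(events, WORKFLOW_STEPS))  # 0.5
```

The point of the sketch is the denominator: individual-usage dashboards count logins and prompts, while this counts whole workflows crossing more than one pair of hands, which is where the gaps this section describes become visible.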

If your team is showing strong individual AI productivity but it is not showing up in your numbers, the gap is not training. It is workflow design.

Building a structured AI adoption program for your company?

We work with mid-market enterprises on AI discovery, tool selection, and team enablement to build AI into your operating system, not just another tool in the stack.

Reply to this email or connect on LinkedIn.

Until next time,

Pooja

PS: If you found this useful, please share it with a teammate or colleague who might benefit from it.
