The new AI adoption KPI: useful measure or behavioural theatre?

For the first year of generative AI adoption, many organisations were asking a simple question: are people using it?

That was understandable: how do you measure ROI if you have not enabled enough adoption to see an impact from your investment in the new tools? But the question is now shifting. It is no longer just whether people are using AI; it is how often, how well, and with what measurable effect. The dreaded measuring of that elusive beast: efficiency.

KPMG has reportedly introduced an AI usage dashboard for its 10,000-person US advisory division, tracking employee engagement with internal and external AI tools, including ChatGPT, Microsoft 365 Copilot and its own Digital Gateway. The dashboard compares individual usage against goals and peer benchmarks. KPMG says the aim is to encourage more frequent and sophisticated AI use, although some employees have raised concerns that the system can be gamed. Source: KPMG Dashboard Tracks Employee AI Use. Workers Say It's Easy to Game. - Business Insider

This example matters because it surfaces the new concerns of the adoption cycle.

  • Businesses want to know whether AI investment is translating into value.

  • Employees want to understand what is expected of them.

  • Managers are under pressure to show progress.

Somewhere in the middle, “AI adoption” starts to become a measurable behaviour.

The legal sector shows the same direction of travel, although often through the language of efficiency and value. Wolters Kluwer’s 2026 Future Ready Lawyer report found that 62% of respondents reported weekly time savings of 6% to 20% from AI use, averaging nearly 10% of the working week. It also found that 52% of organisations reported revenue growth after implementing AI. PwC’s 2025 UK Law Firm Survey similarly identifies embracing AI and transforming the workforce as key priorities for law firms, alongside pricing and operational discipline. Source: The Wolters Kluwer Future Ready Lawyer Report: Building confidence in an AI era | Wolters Kluwer

So as the market matures, the direction is clear: AI is moving out of the innovation corner and into operating models. This is broadly sensible and a natural next step. Organisations cannot keep investing in technology without understanding whether it is being used, where it is helping, and where it is failing to land. But there is a behavioural risk in the way this is measured.

Where Behavioural Science comes in

If usage is tracked, usage becomes something people manage. If peer comparison is added, people start to think about how they appear. If adoption activity becomes a signal of being modern or capable, some people will learn how to look like good adopters. Adoption becomes a status signal.

That is why usage needs careful handling. It may be useful as one indicator, especially where firms are investing heavily and managing risks around data, confidentiality and quality, but usage is not the same as value.

What is improvement?

Value is created when the work improves. A lawyer may use AI to accelerate a first draft, but the useful question is whether the review is better, the risk is lower, and the final advice is stronger.

A dashboard will show usage activity across processes. It can show where tools are being ignored, where teams need support, or where capability is emerging, but it cannot, on its own, show judgement and critical thinking.

This is where AI adoption becomes less about technology and more about leadership. Key questions for leadership to ask are:

  • Are teams using AI to reduce low-value effort?

  • Are they improving quality?

  • Are they challenging outputs properly?

  • Are they redesigning workflows, or simply adding AI on top of old habits?

  • What behaviours are being encouraged and measured to ensure high-quality review?

And where gaps in usage remain, it is worth asking why some people are still avoiding AI:

  • They are sceptical.

  • They are overloaded.

  • They do not want to look incompetent.

Others, meanwhile, use it with enthusiasm but little discipline.

Those differences matter because AI adoption is not a single behaviour. It is a mixture of confidence, curiosity, skill, trust, judgement and permission.

The organisations that get this right will not confuse activity with maturity. They will look for evidence that AI is improving the work.

That also matters for the recruitment market. The next generation of AI transformation leaders will not be credible simply because they can show an increase in usage. They will be credible because they can explain how adoption translated into business outcomes, governance, capability and behaviour change.

The strongest leaders will also be able to say where AI did not work. Not every process needs AI, not every team is ready for it and, most definitely, not every productivity claim survives scrutiny!

That does not make AI less important. It makes leadership more important, because the moment AI adoption becomes a KPI, it becomes part of the behavioural system. And if organisations only measure the appearance of adoption, they should not be surprised when that is what they get.
