Build it and will they come?

When it comes to AI at work, unveiling it with a ‘ta-dah’ and expecting employees to gravitate towards it immediately is naive. Commonly quoted adoption rates hover around 20–30%. The issue isn’t the tool or its capabilities, but the behaviours of those who use it. Most importantly, they are not the ones who bought or developed it, so the concept is entirely new to them.

What we see is that initial curiosity drives a short spike in usage. A small group of enthusiasts embed AI into their work, but then most people revert to familiar ways of working. Leaders are left wondering why a technically successful deployment has failed to change behaviour.

The lift-and-shift trap in AI adoption

Most enterprise AI programmes follow a logic borrowed from traditional technology rollouts, where a typical plan looks something like this:

• Procure an AI tool
• Integrate it into existing workflows
• Pilot with early adopters
• Deliver training sessions
• Communicate availability and benefits
• Track usage and adoption metrics

This approach works reasonably well for routine digitalisation of existing tools and processes. It breaks down when applied to systems that influence thinking, judgment, and decision-making.

AI does not simply automate tasks; it intervenes in how people interpret information, form conclusions, and take responsibility for outcomes. Treating it as a lift-and-shift exercise assumes that once access is provided, behaviour will naturally adjust. The old “ASS U ME” cliché, and its consequences, applies here.

Where the model breaks

When AI is introduced using a traditional rollout model, several predictable issues emerge.

1. People are unclear which decisions AI is meant to support and which it is meant to replace. Is the output a draft, a recommendation, or an answer? Can it be relied upon, or must it always be checked? Ambiguity freezes action.

2. Accountability becomes blurred. If an AI-assisted decision turns out to be wrong, who is responsible? Without a clear answer, people hesitate to rely on the tool.

3. Cognitive load increases before it decreases, as AI expands optionality. Without structure, this overwhelms the user rather than creating efficiency, particularly for experienced professionals who already carry significant decision responsibility.

4. Trust becomes miscalibrated. Some users dismiss AI entirely; others defer to it too readily. Few organisations actively teach what appropriate trust looks like in practice.

5. Cultural signals from leadership dominate everything else. If leaders never ask how AI was used, it remains peripheral. If mistakes are punished, experimentation stops. If speed is rewarded without scrutiny, risk will accumulate.

None of these issues are technical. They are behavioural.


The real barriers to AI adoption

When adoption stalls, organisations often respond with more training, better prompts, or additional features. These interventions treat symptoms, not causes; the underlying barriers are more fundamental.

Trust asymmetry
People either over-trust AI or under-trust it. Over-trust leads to automation bias and unchallenged outputs. Under-trust leads to avoidance and workarounds. Calibrated trust is rarely taught explicitly.

Decision ambiguity
When it is unclear where human judgment ends and AI input begins, people tend to hesitate. Adoption requires clarity around decision ownership, not just tool capability.

Confidence gaps
AI exposes differences in confidence and fluency. High performers often struggle more than expected, particularly when AI challenges their expertise or introduces uncertainty into familiar tasks. The technology is new, and mastery takes time.

Cultural reinforcement
People take their cues from what leaders reward, tolerate, and ignore. Adoption is shaped far more by informal signals than formal guidance.

Missing feedback loops
People rarely learn whether their use of AI improved the decision, introduced risk, or simply sped up the work. Without feedback, poor use and good use look identical. And, often, errors go unnoticed until they surface downstream.

Over time, a narrow focus on efficiency means that AI use becomes a private productivity hack rather than an organisational capability. And in the absence of feedback loops, usage metrics create a false sense of progress.

Why more training does not solve this

One-off training sessions and prompt libraries are appealing because they are tangible and measurable, but they are also insufficient.

Onboarding needs to address both aspects: training covers how to use the tool; adoption depends on how people think with it.

Without explicit guidance on judgment, escalation, and challenge, training decays quickly. People either default back to old habits or use AI in ways that are fast but fragile.

Sustainable adoption requires structure, not enthusiasm.

What actually mitigates AI adoption risk

Organisations that see sustained AI adoption do a few things differently.

• They start with decision quality, not usage metrics.

• They make decision boundaries explicit. People are clear where AI can inform, where it can recommend, and where it must not decide.

• They teach judgment, not just interaction. This includes when to challenge outputs, when to escalate concerns, and when to disregard AI altogether. Critical thinking is key!

• They normalise fallibility. Language such as “this doesn’t feel right” or “let’s sanity-check this” is encouraged rather than penalised.

• They build lightweight feedback loops into real work. Reflection happens in context, not as an abstract exercise.

• And critically, leaders model the behaviour themselves. They ask how AI was used, question outputs, and admit uncertainty. These signals shape the behaviour of managers and employees alike.

Reframing the challenge

AI adoption is not a question of access. Most organisations already have powerful tools at their disposal. Until AI is treated as a behavioural change challenge rather than a deployment exercise, the gap between technical success and real impact will remain.

Next

Why Leaders Confuse Confidence with Clarity