AI in Learning & UX Design: Augmenting, Not Replacing

“You won’t lose your job to AI, but to someone who uses AI better than you do.”

Mike Brown (Head of Design at Barclays) credited that idea to UX pioneer Jakob Nielsen, and it framed everything that followed. The future is not a single switch from humans to machines. It is a reshaping of capability, and the people who learn fastest will set the pace.

A design leader in a learning world

Mike has spent years leading design organisations in large companies, where scale is both a gift and a constraint. You can improve something once and help thousands, but you also inherit bureaucracy, politics, and legacy systems.

At Barclays, Mike leads a learning design organisation. There are UX designers in his team, but most are learning designers focused on colleague learning outcomes. It is user-centred work inside a regulated environment where learning links directly to risk, compliance, and performance. AI is not a toy. It is a lever, and levers need guardrails.

Why measurable gains beat hype

Many people already use AI for personal productivity: drafting copy, generating options, summarising notes, and unblocking the blank page. Useful, but limited.

The bigger opportunity, Mike argued, is applying AI inside workflows where gains can be measured, defended, and scaled. That requires choosing relevant use cases, benchmarking first, and judging both speed and quality.

How Barclays tested AI like a design problem

Mike’s team is distributed across the UK, so early on, he ran workshops to meet people and build a strategy together. AI became a key focus, and the approach was deliberately disciplined. They formed an AI innovation squad with designers, a scrum master, and a product manager. They gathered use cases, narrowed them down, rated them for feasibility and importance, and then tested them through 2024.

They set a baseline. They measured how long a designer took to complete a task without AI and assessed quality using expert review. Only then did they compare outputs across tools like ChatGPT, Claude, Copilot, and a learning-specific tool called Hive. The message was simple: if you cannot measure it, you cannot lead it.
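As a rough illustration of that benchmarking discipline, here is a minimal sketch, not from the talk, of how a team might compare baseline and AI-assisted runs of the same task. The task names, timings, and quality ratings are invented placeholders.

```python
from statistics import mean

# Invented placeholder data: minutes per task and 1-5 expert quality ratings,
# recorded once without AI (baseline) and once with an AI tool in the loop.
baseline = [
    {"task": "module_a_questions", "minutes": 390, "quality": 4.0},
    {"task": "module_b_questions", "minutes": 405, "quality": 3.8},
]
with_ai = [
    {"task": "module_a_questions", "minutes": 150, "quality": 4.2},
    {"task": "module_b_questions", "minutes": 160, "quality": 4.1},
]

def summarise(runs):
    """Average time and quality across a set of runs."""
    return mean(r["minutes"] for r in runs), mean(r["quality"] for r in runs)

base_time, base_quality = summarise(baseline)
ai_time, ai_quality = summarise(with_ai)

# Report both dimensions: a time saving only counts if quality holds up.
print(f"Time saved per task: {base_time - ai_time:.0f} minutes")
print(f"Quality delta: {ai_quality - base_quality:+.2f} (expert rating, 1-5)")
```

The point of the structure is that speed and quality travel together; a tool that saves hours but drops the expert rating fails the benchmark.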

The clear win: generating test questions

Everyone in the room had suffered mandatory training modules. At Barclays, many of those modules exist to satisfy regulation. Testing is part of proving completion and, ideally, proving understanding.

Creating strong test questions is time-consuming. Mike said it typically takes around six and a half hours to produce a good set of questions for a module. When his team integrated AI into this task, they consistently saved around four hours per module. Expert reviewers also rated the AI-supported output slightly higher in quality, and the results were similar across tools.

Four hours sounds small until you scale it across a year of output. At, say, fifty modules a year, that is two hundred hours returned to the team. Mike’s key point was what you do with that time: treat it as capacity to improve the experience, not as a reason to strip the team.

The cautionary tale: summarisation that lies politely

Summarising content seemed like an obvious next target. Learning teams update modules regularly and negotiate changes with stakeholders. A reliable “compress this” button would remove friction.

So they tested a clear requirement: take about a thousand words down to eight hundred. The summaries looked good. Then they counted the words. Some outputs were longer than the original. Some were drastically shorter. Re-running the same prompt produced different lengths.

Mike’s explanation was blunt. Large language models generate text by pattern prediction, not by counting words. Right now, they cannot reliably hit a word target. But the bigger risk was confidence. The tools claimed they had met the requirement. Without checking, it would be easy to accept the output and move on.

This is why Mike kept returning to augmentation. AI can draft quickly, but humans must verify constraints, accuracy, and intent.
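That verification step can be as simple as counting. Here is a minimal sketch, assuming nothing about any particular model or API, of a check that refuses to take a summary’s word count on faith. The function name and tolerance are illustrative choices, not anything from the talk.

```python
def check_word_target(text: str, target: int, tolerance: float = 0.1) -> bool:
    """Return True if text is within tolerance of the target word count.

    A naive whitespace split is enough to catch the failures Mike described,
    where outputs confidently claimed to meet a target they plainly missed.
    """
    count = len(text.split())
    return target * (1 - tolerance) <= count <= target * (1 + tolerance)

summary = "..."  # placeholder for whatever text the model produced
if not check_word_target(summary, target=800):
    # Never accept the model's own claim that it met the requirement;
    # send it back for another pass or edit it down by hand.
    print(f"Summary is {len(summary.split())} words; target was 800.")
```

The check is trivial, which is the point: the control layer does not have to be clever, it just has to exist.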

Bigger ambitions, bigger dependencies

Mike is excited about larger learning use cases: virtual coaching, automated asset creation, converting content between formats, curriculum design, and personalisation. He described a future where the same learning objective could be presented differently depending on what works best for each person, including accessibility needs.

But he also made the hidden work visible. Most of these ideas require integration with existing systems, collaboration across teams, and governance that can keep pace. They are not projects a small squad can ship alone, and timelines are often longer than leaders expect.

The bank reality: risk shifts the strategy

Mike’s hackathon story brought the talk to life. His team built a virtual coaching agent prototype in 24 hours and tested it with leaders. The use case was coaching managers through difficult conversations with colleagues. Feedback was strong, and the potential savings were huge because coaching is expensive at scale.

Yet it was nowhere near going live. The blocker was risk.

In banking, mistakes can mean major fines, reputational damage, and business loss. Barclays trains everyone in risk and control for a reason. In that context, Mike said they have not yet found an AI use case they would be comfortable automating with no risk. So the strategy becomes augmentation. AI supports the workflow and accelerates output. Humans remain accountable, validate results, and act as the control layer that makes progress safe.

The Q&A: how to handle fear outside banking

When asked what to tell teams who fear an “AI replaces people” directive, Mike came back to evidence. If someone claims AI will save time and justify cuts, test the claim. If you believe automation will degrade quality, prove it. Use research, experiments, and measurements to move the conversation from fear to facts.

He also returned to the reality of learning. People already try to rush mandatory modules. AI will not create that behaviour, but it can make shortcuts easier. The job of learning design is to make people want to engage, using hooks, structure, and methodology to turn dry material into something that lands.

Augmenting, not replacing, is a leadership choice

Mike’s talk was not a promise that nothing will change. It was a blueprint for change without recklessness. Start with relevant use cases. Benchmark first. Measure time and quality. Validate outputs. Keep humans accountable where the cost of failure is real.

Most importantly, do not confuse “faster content creation” with a long-term strategy. Mike warned that content creation is becoming cheaper with every advance. If a learning function focuses only on producing more, faster, it risks becoming redundant when creation becomes close to free. The enduring value is in defining learning objectives, designing for real human behaviour, and guiding people through change responsibly.

That is what Mike meant by augmenting, not replacing. AI can draft and accelerate. Humans still decide what matters, what is safe, and what good looks like.

Want to watch the full talk?

You can find it here on UXDX: https://uxdx.com/session/ai-in-learning-ux-design-augmenting-not-replacing/

Want to make your career AI-proof? Make sure to read our new ebook: https://uxdx.com/ebook/career-compression/?utm_source=Ebook&utm_medium=Blog&utm_campaign=Carreer-AI+ebook