Human-Centered AI Across Standard Chartered


In an era obsessed with automation, Emily Yang (Head of Human-Centered AI and Innovation - Strategy and Talent at Standard Chartered) argues that AI’s real potential lies not in replacing people but in empowering them. Speaking at EMEA 2025, she showed how a 172-year-old bank can innovate responsibly without losing empathy, trust, or human identity. Her message was clear: technology is only 30% of the solution; the other 70% is people.

From Biochemistry to Human-Centred AI

Emily’s path to AI wasn’t linear. She began in biochemistry, moved into natural language processing and human–computer interaction, built startups, then pivoted to corporate venture. That blend of science and empathy shaped her approach. Earlier in her career, her research on parental leave experiences at the bank helped inform the global rollout of parental leave in 2023. The same principle, that research drives systemic change, now guides her work in AI.

What Human-Centred AI Really Means

Many assume human-centred AI is the same as responsible AI or user-centred design. Emily smiled as she told the audience: “You’re all correct but incomplete.” For her, human-centred AI sits where research and practice meet. It focuses on human identity, flourishing, and sustainability at the intersection of AI ethics and technology.

There are two dimensions to this work. The first is helping the organisation adopt AI safely and effectively. The second is using AI tools within disciplines like design and HR to enhance creativity and productivity. Both depend on understanding people as much as systems.

Testing the Framework

To make her ideas tangible, Emily walked the audience through a live example: the rise of virtual therapists and coaches. According to Harvard Business Review, therapy and companionship are now the top generative AI use cases of 2025. Using this example, she unpacked ten human factors, from fairness and transparency to culture and sustainability, that every product or policy team should consider.

Fairness came first. She asked the audience to notice how many virtual assistants are designed as women. Gender, ethnicity, and occupation often intersect in subtle biases, and AI tends to mirror those societal patterns. Designers need to question what stereotypes their systems may unintentionally reinforce.

Transparency followed. Do users know when they’re talking to an AI instead of a person? Have they given consent? “And when something goes wrong,” Emily asked, “who’s accountable?” The questions were rhetorical but practical—designed to push teams toward clearer governance.

Literacy Before Lift-Off

Once AI systems are live, non-engineers often operate and monitor them. That, Emily said, makes AI literacy critical. Her team’s AI Learning Hub trains distinct groups, including users, operators, and governors, to understand what responsible use means, how to interpret metrics, and when to escalate problems. Literacy isn’t a nice-to-have; it’s risk management.

Work, Meaning, and Anxiety

Virtual coaches and therapy bots also raise questions about the future of work. Which human roles will be replaced, and which will be augmented? For those affected, the issue isn’t only employment; it’s purpose. “Can I stay relevant? Will my work still matter?” Emily noted that employees are already asking these questions inside the bank. Human-centred AI means helping them find meaning, not just efficiency, in the age of automation.

Culture, Values, and Alignment

A system’s cultural lens shapes every response it gives. A virtual coach trained on Western data may not translate well in Asian contexts or between generations. “By whose values,” Emily asked, “are we reinforcing behaviour?” Misaligned values can erode trust faster than any technical glitch. The solution is diversity in design teams, data, and testing that reflect real users, not ideal ones.

That leads to cyberpsychology: why people choose to interact with AI systems and how those relationships affect well-being over time. Emily cited her research with the World Economic Forum, reminding the audience that the only way to counter fear and uncertainty in technological change is through trust.

Designing for Trust and Empathy

Experience design sits at the heart of that trust. How do people want to interact: through text, voice, or an avatar? How realistic should an AI companion appear before it enters the uncanny valley? Emily’s warning was subtle but firm: over time, reliance on virtual empathy could erode human-to-human connection. Internal surveys at Standard Chartered show employees already worry that AI may reduce the “human touch” at work.

The Hidden Cost of Politeness

Then came a statistic that made the room laugh and pause. Asking ChatGPT a question uses about ten times more electricity than a Google search. Saying “please” and “thank you” costs even more energy. Every generative response consumes water and power, and at a global scale, those pleasantries add up. “It’s costly to be polite,” she joked, but sustainability, she stressed, is no joke. Responsible AI also means conscious consumption.

The Economics That Matter

Traditional ROI measures cost savings, speed, and accuracy, but misses AI’s broader effects. Well-being, creativity, and engagement also drive productivity, but are harder to quantify. Emily urged leaders to balance short-term metrics with long-term human impact. “The point isn’t to avoid measuring,” she said. “It’s to measure what truly matters.”

Humans in the Group, Machines in the Loop

In the Q&A, an audience member asked how human-centred AI fits with business goals focused on profit. Emily’s reply was pragmatic: look at who sits at the decision table. Legal, compliance, technology, and finance are always there, but rarely UX or research. “Our leadership can’t make informed decisions about people if they don’t hear from the people experts,” she said. The challenge for UX and design leaders is to operate at the level of strategy, not just interface.

As she closed, Emily returned to her core principle: AI should be an innovation for us, not to us. “Human-centred AI isn’t just design, ethics, or HR; it’s all of them working together,” she said. Technology may be the visible layer, but people are the real system underneath.

Her final line drew quiet nods across the room. “It’s always about humans in the group and machines in the loop, not the other way around.”

Want to watch the full talk?

You can find it here on UXDX: https://uxdx.com/session/human-centered-ai-across-standard-chartered/

Or explore all the insights in the UXDX USA 2025 Post Show Report: https://uxdx.com/post-show-report/