AI Training in Financial Services: Know It, Apply It, Do It Right
Most financial services firms now have an AI policy. Fewer have staff who know what to do with it.
That gap, between policy on paper and judgment in practice, is exactly where regulatory risk lives. Training is how you close it. But only if it is the right kind.
In our previous post on AI tool selection and due diligence, we noted that effective AI training needs to be tripartite. Here is what that means in practice.
Part One: Why Everyone Needs a Baseline
Before anyone can use AI responsibly in a regulated environment, they need to understand what it actually is.
What is AI doing when it makes a recommendation? What does it mean for a system to be biased? Why does data quality matter so much?
This foundational layer creates a shared language across the firm. It reduces the risk of AI being treated as either a magic solution or an unknowable black box. It is also increasingly what regulators expect to see when they ask whether a firm has adequate governance around its AI systems.
Generic awareness training that explains large language models in abstract terms does not meet this bar. What staff need is a practical understanding of how AI behaves in the contexts relevant to their work, and what can go wrong.
Part Two: What Does This Mean for My Job?
Generic awareness is only the start. The more important question, and the one most training programmes fail to answer, is: what does this mean for me?
A compliance officer using an AI-assisted transaction monitoring tool faces different questions than a relationship manager whose CRM surfaces AI-generated client insights. A risk analyst working with a third-party model has different obligations than a product team building one in-house.
Training that speaks to those distinctions lands. Training that does not gets ignored.
Good AI training maps the technology to the role and the business context. It asks staff to think about where AI already sits in their workflow, where it is likely to arrive next, and what questions they should be asking before they rely on its outputs.
A Framework That Works
For any AI tool a member of staff uses or encounters, they should be able to answer three questions:
1. What is this system actually doing? What input goes in, what output comes out, and what decision does that output influence?
2. Where could it go wrong, and would I notice? What are the failure modes: bias (the system systematically disadvantages certain groups or characteristics), data drift (the context the system was trained in has changed), overconfidence (blind reliance on the AI's outputs)? Am I in a position to catch them?
3. What is my accountability if it does? Under SM&CR, Consumer Duty, or my firm’s own governance framework, where does responsibility sit?
These three questions work for any role and any tool; they create a habit of critical engagement with AI rather than passive reliance on it.
Part Three: Doing It Compliantly
This is where regulated firms have a specific and non-negotiable requirement.
The FCA has been clear that AI does not change accountability; if anything, it intensifies it. Consumer Duty, SM&CR, operational resilience, and model risk management all have something to say about how AI is used and by whom.
Effective training at this level helps teams understand how to apply rules to real decisions: when to escalate, when to document, when to push back on a model’s recommendation, and how to explain to a client or a regulator what the system did and why.
It is about developing the judgment to apply the regulations when the situation is ambiguous, which, in practice, it usually is.
Packaging It So People Actually Do It
The three-part structure above pairs short, targeted learning modules, which cover the essentials without demanding hours of screen time, with structured reflection: applying what has been learned to genuine scenarios drawn from the firm's own business.
The result is training that feels relevant rather than compulsory, and that produces something useful: staff who know what to do, not just what the rules say.
Completion rates for generic AI e-learning are poor. Engagement with training that speaks directly to someone’s role and their firm’s actual AI use cases is significantly higher, and the outcomes are demonstrably better.
Getting the C-Suite to Care
None of this lands without leadership engagement. And leadership engagement rarely comes from an e-learning completion report.
Senior executives need to understand AI governance as a strategic issue. That means understanding how the data powering AI systems was collected, what assumptions it encodes, and what happens when those assumptions are wrong at scale. It means understanding liability, reputational risk, and the direct connection between model decisions and regulatory accountability under SM&CR.
The C-suite needs to think through the strategic implications of AI adoption, the role of data governance in making AI trustworthy, and what genuine oversight looks like in practice.
An executive workshop is a more effective format for this than an e-learning module. It creates space for the questions that senior leaders actually have, and connects AI governance to the business decisions they are already making.
The Outcome Good Training Produces
Firms that invest in structured, role-specific AI training end up with something regulators are increasingly looking for: documented evidence that staff understand the AI systems they use, know their obligations, and have been equipped to exercise judgment.
That documentation matters. It is a defence in enforcement proceedings, a signal of competence to counterparties, and increasingly a factor in AI insurance underwriting.
The three-part approach (foundational awareness, role-specific application, compliance in practice) is what turns AI governance from a document into a culture.
If you would like to discuss AI training for your firm or find out more about our executive workshop programme, visit digitalregs.com or get in touch directly.

