Digital Regs Is Mentoring at the FCA AI Supercharged Sandbox: Here Is What We Are Seeing
The FCA's AI Supercharged Sandbox is one of the most significant regulatory initiatives in UK financial services right now. Digital Regs is mentoring the first cohort on digital regulation and privacy.
2025 was a pivotal year for AI in financial services. The technology moved decisively beyond pilot projects and proof-of-concepts and became embedded in live operations: fraud detection, customer service, investment analysis, regulatory compliance. What was experimental became operational.
2026 will see that pace accelerate further. More financial services leaders are pushing the button on AI deployment, and the central question has not changed: how do you innovate safely in a heavily regulated industry?
The FCA AI Supercharged Sandbox
The Financial Conduct Authority recognised that tension and responded with the launch of its AI Supercharged Sandbox, an environment enabling safe, responsible experimentation with AI in UK financial services, supported by access to data, compute infrastructure, and direct regulatory engagement.
Digital Regs has been proudly mentoring the first cohort of participants in the areas of digital regulation, privacy law, and their applicability to FCA rules.
What We Discussed in the Sandbox
The questions raised by participating firms were not theoretical. They were the practical, difficult questions that any financial services firm deploying AI will eventually have to answer.
How do you ensure AI-driven decisions are fair and ethical? Fairness in AI is not a single standard: it depends on the context of the decision, the data used to train the model, and the protected characteristics of those affected. Consumer Duty requires firms to demonstrate good outcomes for all customers, and that obligation does not pause because a decision was supported by an algorithm.
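One common first-pass check for this kind of question is to compare favourable-outcome rates across customer groups. The sketch below is our own illustration, not an FCA-mandated test; the function name, the grouping, and any threshold you apply to the resulting ratios are assumptions to adapt to your context.

```python
import numpy as np

def selection_rate_ratio(decisions, group, reference):
    """Ratio of favourable-outcome rates between each group and a
    reference group. A ratio well below 1.0 is not proof of unfairness,
    but it flags a disparity worth investigating and documenting.
    `decisions` is a 0/1 array (1 = favourable outcome); `group` labels
    each decision with an illustrative segmentation (hypothetical here)."""
    ref_rate = decisions[group == reference].mean()
    return {
        g: float(decisions[group == g].mean() / ref_rate)
        for g in np.unique(group)
        if g != reference
    }
```

A check like this is a starting point for a governance conversation, not a verdict: the appropriate comparison groups and tolerances depend on the decision being made.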
What does explainability mean in the context of a black box model? The FCA does not currently require firms to be able to explain every individual AI decision in technical terms, but it does require firms to be able to explain outcomes to customers and to demonstrate that models are monitored for bias and for drift, the gradual divergence of live data from the context in which the model was trained. Explainability is a governance question as much as a technical one.
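One widely used way to monitor for drift is the Population Stability Index (PSI), which compares the distribution of a model input or score at training time with the live distribution. This is a minimal sketch of that general technique, not anything the FCA prescribes; the thresholds in the docstring are a common rule of thumb, not a regulatory standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between a training-time (`expected`)
    and live (`actual`) distribution of a feature or model score.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 warrants review,
    > 0.25 signals material drift."""
    # Bin both samples on the same edges, derived from the training data.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions, with a small floor to avoid log(0).
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))
```

In a governance framework, a metric like this would run on a schedule, with breaches routed to a human owner, which is exactly the kind of monitoring evidence the explainability question turns on.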
How do you validate AI models used in financial crime detection? AML and financial crime detection are areas where AI is increasingly used, and where the consequences of a false positive or false negative are significant. Validation needs to cover model accuracy, bias testing, data quality, and the human oversight process when the model flags or fails to flag a case.
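The error-rate and bias-testing parts of that validation can be made concrete with a per-group breakdown of false positives (alerts raised on legitimate activity) and false negatives (missed cases). The sketch below is our own illustration of the idea; the function name and the segmentation variable are hypothetical, and a real validation would sit alongside data-quality checks and a documented human-review process.

```python
import numpy as np

def validate_alerts(y_true, y_pred, group):
    """Per-group false positive rate (FPR) and false negative rate (FNR)
    for a binary alerting model (1 = flagged). `group` is an illustrative
    segmentation label, e.g. a customer segment, so that error rates can
    be compared across groups as part of bias testing."""
    report = {}
    for g in np.unique(group):
        mask = group == g
        t, p = y_true[mask], y_pred[mask]
        tp = int(np.sum((p == 1) & (t == 1)))
        fp = int(np.sum((p == 1) & (t == 0)))
        tn = int(np.sum((p == 0) & (t == 0)))
        fn = int(np.sum((p == 0) & (t == 1)))
        report[g] = {
            "fpr": fp / max(fp + tn, 1),  # legitimate activity wrongly flagged
            "fnr": fn / max(fn + tp, 1),  # true cases missed
            "n": int(mask.sum()),
        }
    return report
```

Comparing these rates across groups surfaces exactly the asymmetries that matter in financial crime detection, where a false positive inconveniences a customer and a false negative lets a case through.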
These were not edge cases raised by early-stage startups. They are questions that every firm deploying AI in a regulated context will face. The Sandbox provided a space to work through them with regulatory support before deployment, which is precisely its value.
The Second Intake Has Just Opened
The FCA announced on 21 April 2026 that the Supercharged Sandbox is expanding. The second intake opened on 5 May 2026, with more UK fintechs gaining access to data and NVIDIA compute to build and test their AI products. The FCA’s chief data officer Jessica Rusu cited “unprecedented demand” for the programme.
The Broader Regulatory Context
The Supercharged Sandbox does not exist in isolation. The UK government's AI Opportunities Action Plan is advancing sector-specific guidance, with financial services identified as a priority area. Meanwhile, UK firms serving EU customers need to understand the extraterritorial reach of the EU AI Act: under Article 2, the Act applies regardless of where the deploying firm is established if the AI system's output is used within the EU.
There is also growing focus on AI insurance. Lloyd’s of London has tightened its approach to AI-related risks, with some insurers excluding certain AI applications from coverage unless proper governance frameworks are demonstrably in place. The implication is direct: if you cannot insure an AI deployment, you should think carefully before making it.
What This Means for Your Firm
For financial services firms, these developments create both opportunity and obligation. The Sandbox offers a genuine pathway to innovation with reduced regulatory uncertainty. But the wider context makes clear that AI governance is no longer optional, nor is it something that should be retrofitted after deployment.
The firms that engage with governance early will be better placed to deploy confidently, insure adequately, and demonstrate compliance when regulators ask. In practice that means understanding FCA obligations, mapping data protection requirements, and building oversight into AI systems from the start.
We are grateful to the FCA for the opportunity to contribute to this landmark initiative, and to the first cohort of participants for the quality of the questions they brought to the table.
Digital Regs provides mentoring, governance frameworks, and compliance support for firms deploying AI in financial services. For more information about the FCA AI Supercharged Sandbox, visit the FCA AI Lab page. To discuss how Digital Regs can support your firm, visit digitalregs.com.
References:
[1] AI Lab — FCA
[2] AI Opportunities Action Plan — GOV.UK
[3] EU AI Act, Regulation (EU) 2024/1689
[4] Insuring AI: How Good Governance Can Save You Money — Digital Regs
[5] From magnifying glass to drone: using AI to spot reserving risks faster — Bank of England

