Choosing the Right AI Tool: Governance, Due Diligence, and Training in Financial Services
Not every AI tool is ready for enterprise use, and your organisation is responsible for the ones you choose. This post sets out a practical framework for AI vendor due diligence, contract review, and staff training.
The rapid rise of agentic and autonomous AI tools has been exciting to watch. We are finally seeing AI move beyond chat interfaces and into tools that can take action, fully automate workflows, and meaningfully augment professional work.
But recent headlines have also reminded us of an important truth: not every AI tool is ready to be treated as a mature, enterprise-grade product.
What the OpenClaw Incident Tells Us
The emergence of OpenClaw, an AI agent that effectively behaved like a high-privilege employee, showed what can happen when powerful tools are adopted before organisations are ready to secure and govern them properly. The product went viral almost overnight and, just as quickly, became a serious cybersecurity risk and a target for attackers.
This is not a reason to avoid AI agents. Quite the opposite. It is a reason to be more deliberate about which tools we adopt, how we deploy them, and whether our teams are equipped to spot risk early.
New AI tools appear almost every week, promising to save time, reduce costs, and make work easier. Many of them genuinely can. Chosen well, AI is a powerful support for professionals and organisations. But choosing the right tool, in the right way, has become a business and regulatory decision, not just a technical one.
Your Existing Obligations Already Apply
There is no AI-specific law in the UK, but organisations are already expected to comply with data protection, confidentiality, security, and governance requirements that apply directly to how AI is used.
UK data protection law requires organisations to understand what happens to personal data, to keep it secure, and to be able to explain and justify the tools they use. Automated decision-making rules, recently updated, still require care, transparency, and human oversight when technology influences decisions about people.
In the EU, the AI Act makes this even more explicit by placing responsibilities not just on AI developers, but on organisations that choose to deploy AI systems. Any UK firm with EU customers or data flows needs to understand where those obligations begin.
The FCA has been clear in its support for a principles-based, outcomes-driven approach to AI regulation. Financial firms must ensure that how they develop, deploy, and use AI is consistent with the FCA Handbook requirements that apply to their specific business.
The key point is simple: if your organisation decides to use an AI tool, you remain responsible for that choice.
The Risk of Treating Experimental Tools as Finished Products
One of the particular challenges in the current AI market is that many of the most widely discussed tools are not traditional enterprise software products. They may be open source, built by small teams, or designed to run directly on a user’s device. They often change quickly and may not come with the safeguards you would expect from a long-established vendor.
There is nothing wrong with experimentation; innovation depends on it. Problems arise when experimental tools are treated as fully mature products, especially when they are given access to emails, documents, messaging systems, or client data. At that point, what felt like a harmless productivity experiment can quickly become a compliance or security issue.
This is why vendor due diligence has become so important in the AI space. It is about understanding what you are saying yes to.
Questions Your Vendor Due Diligence Should Answer
A sensible review of an AI vendor should address the following:
Where does the data go? Is it stored, reused, or shared with third parties? Does it leave the UK or EU?
What security measures are in place? What certifications does the vendor hold, and how are they tested?
What happens if something goes wrong? Is there a clear incident response process, and does the vendor notify you promptly of breaches?
Can the vendor use your data for their own purposes? Many AI services have terms that permit broad reuse of input data for model training. This is rarely acceptable in a regulated environment.
Who is responsible if there is a breach or failure? Liability limitations in AI vendor contracts are often aggressive. Know your exposure before you sign.
These questions matter because regulators expect organisations to have asked them, and to have documented the answers.
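One practical way to keep that evidence is to capture each review in a structured record. The sketch below is purely illustrative: the AIVendorAssessment class and its field names are our own shorthand, not a regulatory template or an industry standard, but a record of roughly this shape makes it easy to show later that the questions above were asked, answered, and signed off.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class AIVendorAssessment:
    # Illustrative due-diligence record; the field names are our own shorthand,
    # not a regulatory or industry-standard template.
    vendor: str
    tool: str
    reviewed_on: date
    data_locations: str                # where data is stored; any transfers outside the UK/EU
    data_reuse_permitted: bool         # can the vendor reuse inputs, e.g. for model training?
    security_certifications: list[str] = field(default_factory=list)
    incident_response: str = ""        # breach notification process and timescales
    liability_position: str = ""       # contractual limits on the vendor's liability
    approved_for: str = "none"         # e.g. "controlled experiment" or "wider deployment"

    def open_questions(self) -> list[str]:
        """Flag answers that are missing or plainly unacceptable."""
        issues = []
        if not self.data_locations:
            issues.append("Data locations and transfers not established")
        if self.data_reuse_permitted:
            issues.append("Vendor terms allow reuse of input data")
        if not self.incident_response:
            issues.append("No documented incident response or breach notification process")
        if not self.liability_position:
            issues.append("Liability position not reviewed")
        return issues


# Hypothetical example of a completed review, kept as evidence.
assessment = AIVendorAssessment(
    vendor="ExampleAI Ltd",  # hypothetical vendor
    tool="Document summariser",
    reviewed_on=date.today(),
    data_locations="UK and Ireland data centres; no transfers outside the UK/EU",
    data_reuse_permitted=False,
    security_certifications=["ISO 27001"],
    incident_response="Notification within 48 hours under the data processing agreement",
    liability_position="Capped at 12 months' fees; data protection losses excluded from the cap",
    approved_for="controlled experiment",
)

print(json.dumps(asdict(assessment), default=str, indent=2))
print(assessment.open_questions())  # empty list if nothing is outstanding
```

The same structure works just as well in a spreadsheet or a procurement register; the point is that the answers are recorded once, reviewed on a schedule, and available if a regulator or client ever asks.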
Read the Terms, Not Just the Demo
The terms of business behind an AI tool deserve at least as much attention as its features. Many AI services have terms drafted for speed and scale, not for regulated or professional environments. Some allow broad reuse of data. Some limit liability heavily. Some offer very little transparency about how information is processed or stored.
From a legal and regulatory perspective, those terms shape your risk exposure directly. They affect your ability to protect client data, meet confidentiality obligations, and demonstrate accountability if you are ever challenged. A tool that looks impressive in a demonstration can become a serious problem if its contractual terms do not align with your responsibilities.
Staff Awareness: The Overlooked Layer
Many AI risks do not come from bad intentions. They come from people not realising what a tool does or does not do. New regulations, particularly in the EU, now explicitly recognise this by requiring organisations to ensure that staff using AI have an appropriate level of understanding.
In practice, this means teams should be able to recognise when a tool is experimental, when it handles data in unexpected ways, and when something does not feel right. They should know when to pause, ask questions, and escalate concerns. Without that knowledge, even well-intentioned use of AI can create real problems.
Training plays a crucial role. It helps people use AI confidently without being careless. It also helps organisations demonstrate that they have taken reasonable steps to manage AI risks, which is exactly what regulators look for when things go wrong.
In our next post, we will explain why effective AI training needs to be tripartite (covering governance, technical literacy, and practical application) and what that looks like in a financial services context.
What Good Looks Like
Organisations that manage AI well share a few common characteristics.
They distinguish between tools suitable for controlled experimentation and those ready for wider deployment, and they document that distinction. They take vendor due diligence seriously even when a tool is exciting or widely discussed, because asking the right questions early is far easier than fixing problems later. And they invest in training their people, not as a barrier to progress, but as what allows the organisation to move forward with genuine confidence.
Digital Regs helps organisations assess AI tools and vendors from a practical regulatory and data protection perspective, supports AI procurement and due diligence in a way that enables informed decision-making, and delivers training that helps teams use AI responsibly. To discuss how we can help your firm, visit digitalregs.com.

