Tuesday, 9 September 2025

From Data to Decisions: Building Trustworthy AI in Financial Services


David Bull, Senior Consultant


The Financial Conduct Authority (FCA) has five guiding principles for responsible artificial intelligence (AI) deployment. Introduced in their AI update, these are: safety, transparency, fairness, accountability, and contestability. Rather than introducing entirely new frameworks, the FCA is embedding these principles into existing regimes like the Senior Managers and Certification Regime (SM&CR) and Consumer Duty.

To encourage innovation, the FCA has launched initiatives such as the AI Sprint event, which brought together regulators, technologists, and financial institutions to explore practical use cases for AI. The forthcoming AI Live Testing programme will allow firms to trial AI solutions in a controlled setting, while the Digital Sandbox offers a collaborative environment for testing with synthetic data.

Data Governance and Compliance to Support AI Adoption

While AI often grabs the headlines, it’s data that underpins everything. Poor data quality can compromise even the most advanced AI models, leading to biased decisions, regulatory breaches, and reputational harm. Imagine an AI model ‘let loose’ on inaccurate or incomplete data! That’s why robust data governance is essential, not only for compliance but also for building trustworthy AI systems.

In one Customer Due Diligence (CDD) project, Talan Data x AI assisted a multinational bank in reconciling customer identity data across multiple jurisdictions. Our support ensured their compliance with both local and global anti-money laundering (AML) requirements. This involved creating a unified data model, implementing real-time validation checks, and establishing audit trails to support regulatory reporting.
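To make the pattern concrete, the sketch below shows the kind of per-record validation check and audit trail a CDD reconciliation pipeline might apply. The field names, rules, and classes are illustrative assumptions, not the actual project schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustomerRecord:
    customer_id: str
    full_name: str
    country: str          # ISO 3166-1 alpha-2 code of the booking jurisdiction
    national_id: str      # locally issued identifier (format varies by country)

@dataclass
class AuditEntry:
    customer_id: str
    check: str
    passed: bool
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def validate_record(record: CustomerRecord, audit_log: list[AuditEntry]) -> bool:
    """Run basic completeness/format checks and append each outcome to the audit log."""
    checks = {
        "name_present": bool(record.full_name.strip()),
        "country_code_valid": len(record.country) == 2 and record.country.isalpha(),
        "national_id_present": bool(record.national_id.strip()),
    }
    for name, passed in checks.items():
        audit_log.append(AuditEntry(record.customer_id, name, passed))
    return all(checks.values())

audit_log: list[AuditEntry] = []
record = CustomerRecord("C-1001", "Jane Doe", "GB", "AB123456C")
print(validate_record(record, audit_log))  # True
print(len(audit_log))                      # 3 audit entries, one per check

Because every check writes an audit entry whether it passes or fails, the same structure that enforces data quality also produces the evidence needed for regulatory reporting.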

Another frequent challenge is conflicting consent data. For example, a customer might opt out of marketing communications via one channel but remain opted in on another due to fragmented systems. This inconsistency can lead to breaches of ePrivacy regulations, erode customer trust, and contravene the FCA’s principles of fairness and transparency. The solution lies in effective data management and governance.
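One way to picture the problem is a simple conflict-resolution rule applied to consent records pulled from fragmented systems. The "most recent explicit signal wins" policy below is only one possible approach, shown as an assumption for illustration; a real implementation would follow the firm's legal and compliance guidance.

from datetime import datetime

def resolve_consent(records: list[dict]) -> bool:
    """Each record: {'source': str, 'channel': str, 'opted_in': bool, 'updated': datetime}.
    Returns the effective consent for one customer/channel pair."""
    if not records:
        return False  # no recorded consent means no marketing contact
    latest = max(records, key=lambda r: r["updated"])
    return latest["opted_in"]

email_records = [
    {"source": "crm",        "channel": "email", "opted_in": True,  "updated": datetime(2024, 3, 1)},
    {"source": "web_portal", "channel": "email", "opted_in": False, "updated": datetime(2025, 1, 15)},
]
print(resolve_consent(email_records))  # False: the later opt-out takes precedence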

In an ePrivacy-focused project for a major retail bank, we helped consolidate over 40 separate data stores into a single unified database containing every customer’s contact preferences and marketing permissions. These permissions can vary by channel (email, telephone, etc.) and product (credit cards, current accounts, loans, etc.), making customer contact decisions complex. Our solution streamlined this process, delivering clear benefits to both customers and the bank.
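Once preferences are consolidated, the contact decision itself can become a straightforward lookup. The minimal sketch below assumes a (channel, product) permission matrix and defaults to "no contact" when no explicit permission exists; the keys, channels, and products are examples rather than the bank's schema.

Preferences = dict[tuple[str, str], bool]  # (channel, product) -> permission granted

def can_contact(prefs: Preferences, channel: str, product: str) -> bool:
    """Default to no contact unless an explicit permission exists for this channel/product."""
    return prefs.get((channel, product), False)

customer_prefs: Preferences = {
    ("email", "credit_card"): True,
    ("telephone", "credit_card"): False,
    ("email", "current_account"): True,
}

print(can_contact(customer_prefs, "email", "credit_card"))  # True
print(can_contact(customer_prefs, "sms", "loans"))          # False: no record, so no contact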

We also assist clients in building data lineage frameworks, which are critical for demonstrating how data flows through systems and how decisions are made. This is particularly important for AI applications, where regulators expect firms to explain how an algorithm arrived at a specific outcome. By embedding traceability and auditability into data pipelines, we enable firms to meet the FCA’s expectations around accountability and contestability.
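Conceptually, lineage capture means each pipeline step records its inputs, outputs, and logic version so a downstream decision can be traced back to its sources. The sketch below illustrates that idea with hypothetical step and table names; production systems would typically rely on dedicated lineage tooling rather than a hand-rolled log.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    step: str
    inputs: list[str]
    outputs: list[str]
    logic_version: str
    recorded_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

lineage: list[LineageEvent] = []

def record_step(step: str, inputs: list[str], outputs: list[str], version: str) -> None:
    lineage.append(LineageEvent(step, inputs, outputs, version))

record_step("ingest_applications", ["crm.applications"], ["staging.applications"], "v1.4")
record_step("score_affordability", ["staging.applications"], ["decisions.affordability"], "model-2025-06")

# Trace everything that contributed to the affordability decision table
for event in lineage:
    if "decisions.affordability" in event.outputs:
        print(event.step, event.inputs, event.logic_version)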

Ultimately, good data governance isn’t just a compliance exercise; it’s a business enabler. It allows firms to innovate faster (including with AI), serve customers better, and build systems that are fair, transparent, and resilient.

Consumer Outcomes and Ethical AI

AI presents both an opportunity and a risk for fair and transparent consumer outcomes. The FCA has stressed that firms must ensure AI-driven decisions are explainable and contestable, particularly where they affect access to financial products. This includes having mechanisms for redress when AI decisions cause harm.

Recent joint research by the FCA and Bank of England revealed that while 75% of UK firms use AI, only 34% fully understand the systems they deploy. This raises concerns about accountability and governance, especially as AI models become more complex and reliant on third-party providers.

To prevent digital exclusion, inclusive design and human oversight are essential. Ethical AI should work for all consumers, not just the digitally fluent, helping build trust and ensuring technology serves the public interest.
 

Talan’s RegTech Innovation Journey

Talan Data x AI has been actively contributing to this evolving landscape. Through our involvement in the Financial Regulation Innovation Lab (FRIL) and Fintech Scotland’s Innovation Call, we have developed a RegTech accelerator that simplifies compliance using natural language processing (NLP) and machine learning (ML).

The accelerator processes complex regulatory documents, extracts key insights, and presents them in intuitive dashboards tailored for compliance teams to take appropriate actions. It’s built with a “human-in-the-loop” design, ensuring AI-driven decisions can be reviewed, challenged, and explained, aligning with the FCA’s principles of accountability and contestability.
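As a deliberately simplified sketch of that pattern, the snippet below extracts candidate obligations from regulatory text and holds them for analyst review rather than acting on them automatically. The cue words and review flag are illustrative assumptions; the accelerator's actual NLP and ML pipeline is considerably more sophisticated.

import re

OBLIGATION_CUES = re.compile(r"\b(must|shall|is required to|should)\b", re.IGNORECASE)

def extract_obligations(text: str) -> list[dict]:
    """Split text into sentences and flag those containing obligation language."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [
        {"sentence": s, "status": "pending_human_review"}  # human-in-the-loop gate
        for s in sentences
        if OBLIGATION_CUES.search(s)
    ]

sample = (
    "Firms must ensure AI-driven decisions are explainable. "
    "This guidance takes effect next quarter. "
    "Senior managers should document model ownership."
)
for item in extract_obligations(sample):
    print(item["status"], "->", item["sentence"])

The key design point is the review gate: nothing the model extracts becomes an action until a compliance analyst has confirmed, challenged, or corrected it.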

The Path Forward: Trustworthy AI Through Better Data

The right approach to AI in financial services is grounded in robust data governance, aligning closely with UK regulatory priorities. By ensuring data is accurate, representative, and responsibly managed, financial institutions can build AI systems that are fair, transparent, and accountable.

As the FCA and ICO continue to stress the importance of data quality and oversight, Talan Data x AI remains focused on helping firms innovate responsibly, creating a resilient, consumer-centric financial ecosystem that earns trust, delivers value, and adapts to change.
 

Contact us