
UK financial system ‘may not be prepared enough for major AI-related incidents’

The Treasury Select Committee said it had received a ‘significant volume of evidence’ about AI’s risks to financial services consumers.

By Vicky Shaw, Press Association Personal Finance Correspondent

“Worrying” evidence that the UK’s financial system may not be prepared enough for AI-driven market shocks has been highlighted by an influential committee of MPs.

Not enough is being done by regulators to manage risks from increased use of AI (artificial intelligence) in the financial services sector, the Treasury Select Committee said.

The committee said it had received a “significant volume of evidence” about AI’s risks to financial services consumers.

It heard concerns about a lack of transparency in AI-driven decision-making in credit and insurance; about financial exclusion for the most disadvantaged customers; about people being misinformed by unregulated advice from AI search engines; and about AI use driving up fraud.

AI-driven market trading could amplify herding behaviour, risking a financial crisis in the worst-case scenario, while AI also risks increasing the volume and scale of cyber-attacks against the financial services sector, the committee heard.

It also heard evidence that UK financial services firms are particularly reliant on a small number of US technology firms for AI and cloud services.

Treasury Committee chairwoman Dame Meg Hillier said: “Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk.”

The committee’s report said that while “AI offers important benefits, including faster services for consumers and new cyber defences for financial stability, our inquiry revealed significant risks to consumers and financial stability, which could reverse any potential gains”.

It continued: “The UK does not currently have AI-specific legislation or AI-specific financial regulation.

“The Financial Conduct Authority (FCA) and the Bank of England (including the Prudential Regulation Authority), the UK’s two financial services regulators, rely on the existing regulatory framework to supervise financial services firms’ use of AI.

“The FCA is responsible for upholding consumer protection, market integrity and competition. The Bank of England is responsible for maintaining monetary and financial stability.”

The committee said that a “wait and see” approach may risk exposing people and the financial system to potentially serious harm.

It said the Bank of England and the FCA should conduct specific stress-testing to boost businesses’ readiness for any future market shock driven by AI.

The Treasury Committee also recommended that the FCA should publish practical guidance on AI for firms by the end of this year.

This should include how consumer protection rules apply to firms’ use of AI, as well as a clearer explanation of who in those organisations should be accountable for harm caused through AI.

The report said: “We recognise that it is difficult to provide prescriptive regulation in the context of fast-moving technological change. However, the current approach gives firms little practical clarity as to how existing rules apply to the use of AI.

“This leads to uncertainty for firms and potentially increases risks to consumers and the integrity of the financial system.”

AI and wider tech sector developments could bring considerable benefits to consumers, the committee said.

It encouraged firms and the FCA to work together to ensure that the UK capitalises on AI’s opportunities.

The committee’s report said: “Encompassing a range of technologies solving complex tasks that previously required human intelligence, AI is a breakthrough which presents both plausible opportunities and risks for the UK economy. The balance between the two remains highly uncertain.

“Yet, as the Government looks to boost the economy, it is backing AI innovation across sectors to ‘turbocharge growth’.”

Evidence received by the committee indicates that more than 75% of UK financial services firms are now using AI, with the biggest take-up among insurers and international banks operating in the UK.

AI is being used by businesses in a variety of ways, including to automate admin functions and to deliver core services such as processing insurance claims and credit assessments, the committee said.

The Critical Third Parties Regime gives the FCA and the Bank of England new powers of investigation and enforcement over non-financial firms which provide critical services to the UK financial services sector, including AI and cloud providers. The Government is responsible for deciding which firms are brought into this regime.

The committee urged the Government to designate AI and cloud providers deemed critical to the financial services sector, in order to improve oversight and resilience.

In April 2025, the FCA launched its AI live testing service, which allows firms to trial their AI solutions before real-world deployment.

The live testing sits alongside its new supercharged sandbox, which enables firms without their own AI infrastructure to experiment with AI solutions, the report said.

But it added: “Many industry and academic stakeholders told us that the FCA’s current approach to supervising AI implementation was reactive, leaving firms with little practical clarity on how to apply existing rules to their AI usage.”

The FCA and the Information Commissioner’s Office announced in June 2025 that they would create a joint statutory code of practice for firms developing or deploying AI for automated decision-making.

The committee said it had also heard that firms’ concerns about accountability for harm caused to consumers through the use of AI had had a “chilling effect” on high-end AI adoption in the sector.

A Bank of England spokesperson said: “We welcome the Treasury Select Committee’s report on the responsible adoption of AI in financial services.

“The Bank has already taken active steps to assess AI-related risks and reinforce the resilience of the financial system, including publishing a detailed risk assessment and highlighting the potential implications of a sharp fall in AI-affected asset prices.

“We will consider the committee’s recommendations carefully and will respond in full in due course.”

A Treasury spokesperson said: “We’ve been clear that we will strike the right balance between managing the risks posed by AI and unlocking its huge potential.

“While firms are already required to manage AI-related risk, we will not wait around. That’s why we have been actively working with the regulators to strengthen our approach as the technology evolves, as well as appointing a new AI champion for financial services to ensure we seize the opportunities it presents in a safe and responsible way.”

The Government said on Tuesday that two industry figures have been appointed to spearhead the rollout of AI in financial services.

The new “champions”, Harriet Rees, group chief information officer at Starling Bank, and Dr Rohit Dhawan, head of AI and advanced analytics at Lloyds Banking Group, will report directly to Economic Secretary Lucy Rigby, exploring ways to accelerate safe adoption at scale, identify where innovation can move faster and tackle barriers holding firms back.

The Government said the focus will be on ensuring that firms can seize the opportunities AI presents with confidence.

Economic Secretary to the Treasury Lucy Rigby said: “Harriet and Rohit bring deep, real-world experience of deploying AI safely at scale, and they will help turn rapid adoption into practical delivery – unlocking growth while keeping our financial system secure and resilient.”

The appointments take effect from January 20 2026 and the roles are unpaid, the Government said.

An FCA spokesperson said: “We welcome the Treasury Select Committee’s focus on the safe and responsible use of AI in financial services. We will review the report carefully.

“We have already undertaken extensive work to ensure firms are able to use AI in a safe and responsible way. This includes providing a safe place for firms to develop, experiment and test through our world-leading AI Lab and partnerships.”