Artificial intelligence (AI) is entering a decisive phase in global banking as firms move from experimental pilots to production systems that influence credit decisions, fraud detection, customer operations and real-time risk assessment. Across jurisdictions, the regulatory posture is shifting accordingly: supervisors are signalling that AI now sits within the perimeter of core prudential and conduct expectations, with a sharper focus on accountability and consumer outcomes as models scale.

Against this backdrop, the UK Financial Conduct Authority (FCA) is sharpening its supervisory stance. Speaking at the Singapore FinTech Festival 2025, Jessica Rusu, the FCA’s chief data, information and intelligence officer, highlighted the rapid shift from generative to agentic AI, the heightened governance and outcome testing expected as models move into production, and the need for continuous monitoring to detect drift, bias and emerging risks.

“The pace of change is tremendously fast, particularly as technologies move from generative AI to agentic AI,” Rusu said.

A joint report from the Bank of England and the FCA shows that around 2% of AI use cases across UK financial services involve autonomous decision-making: a small proportion, but indicative of an emerging shift toward more agentic systems that will require robust governance.

Embedding responsible AI as systems move into production

As banks embed AI deeper into credit assessment, behavioural analytics and operational decisioning, the FCA expects firms to ensure models perform safely and consistently in real-world environments. This places greater emphasis on data quality, validation, explainability and demonstrable consumer-protection outcomes, anchoring AI oversight to established UK frameworks such as the Consumer Duty and the Senior Managers & Certification Regime.

Rusu highlighted the importance of understanding how models interact with different customer segments. “Whenever you’re building a model, you have to think about who’s using it, who you’re testing it on and what the impact will be,” she said. This includes assessing how vulnerable customers interact with digital channels, how decision logic operates under stress and how unintended bias may emerge once systems scale. The FCA expects outcome testing that evaluates real consumer effects rather than technical performance alone, reducing the risk of exclusion or unfair treatment.

Agentic tools are already emerging in production settings. Australian insurer NIB Group has introduced Nibby, an agentic assistant able to resolve customer queries and complete administrative tasks end-to-end. JP Morgan has trialled agentic tools in the UK to automate workflow routing and support internal decision-making. These deployments illustrate why regulators are intensifying their focus: as systems take on more autonomous functions, governance must keep pace.

Post-deployment monitoring is equally critical. “Balance checks and controls should be in place to ensure awareness of model drift, bias, or any other movement that could harm consumers,” Rusu said. Strong oversight helps manage risks as AI models scale in real-world use.

Applying the supervisory lens as adoption accelerates

The FCA’s perspective on AI governance is informed by its own operational use of advanced analytics. The regulator increasingly deploys machine learning to process complex datasets, detect anomalies and identify misconduct earlier, which Rusu said has strengthened its ability to act quickly.
“We’re using neural networks to link individuals and entities, spotting and stopping harm so much faster,” she said. This strengthens the FCA’s ability to scrutinise advanced systems and signals to firms that supervisors are becoming more technologically capable.

Automation enables FCA staff to focus on higher-value supervisory judgement while ensuring outcomes remain explainable and consistent with public-interest objectives. The regulator expects banks to strike a similar balance, using AI to enhance efficiency while maintaining clear visibility over how decisions are made and how risks are managed.

MAS-FCA partnership expands cross-border testing

This year’s Singapore FinTech Festival also highlighted growing international alignment on responsible AI. The Monetary Authority of Singapore (MAS) and the FCA jointly launched the UK-Singapore AI-in-Finance Partnership to promote trustworthy innovation, share supervisory insights and support cross-border testing of AI solutions.

For Asian banks and international firms operating through Singapore, the partnership reinforces MAS’s role as a regional reference point. Rusu noted that many UK fintechs already view Singapore as their route into Asia, and the agreement creates a clearer pathway for that expansion. The collaboration aligns sandbox approaches and outcome-testing methods, giving firms a more predictable environment in which to trial and scale AI systems across two major financial centres. While the partnership does not set binding rules, it reflects a broader trend of regulators coordinating more closely as AI becomes embedded in core financial services.

Balancing innovation, capability building and consumer protection

Banks expanding their AI capabilities often rely on sandboxes, controlled testing environments and cross-border collaboration to manage risks while accelerating innovation. Rusu emphasised this balance: “Innovation is in our DNA. The Supercharged Sandbox and AI Live Testing provide firms with practical support to help them scale responsibly.”

The FCA has also invested in technical talent, from data architects to AI engineers, to strengthen its understanding of emerging risks. Rusu noted that combining early-career talent with seasoned experts allows the regulator to anticipate how firms use AI and to supervise complex deployments. “Every application requires a different approach,” she said. “Every model is unique and different data, training methods, or prompt engineering may be required.”

As AI shifts from pilots to production, Rusu stressed that governance, testing and post-deployment monitoring must evolve at the same pace as the technology itself. Firms will need clearer accountability frameworks, stronger outcome testing and continuous checks for drift and bias as models take on more autonomous functions.
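To make “continuous checks for drift and bias” concrete, the sketch below shows one simple form such monitoring can take for a production scoring model: a population stability index (PSI) comparing live scores against a validation-time baseline, plus a crude approval-rate gap across customer segments. This is a minimal illustration under assumed inputs; the function names, thresholds, approval rule and synthetic data are hypothetical and do not represent FCA guidance or any firm’s actual controls.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Simple drift signal: compare the live score distribution with a baseline.

    A PSI above roughly 0.2 is often treated as material drift worth
    investigating; the threshold is illustrative, not regulatory guidance.
    """
    # Bin edges taken from baseline quantiles so each baseline bin holds ~10% of scores
    cuts = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]
    base_pct = np.bincount(np.digitize(baseline, cuts), minlength=bins) / len(baseline) + 1e-6
    live_pct = np.bincount(np.digitize(live, cuts), minlength=bins) / len(live) + 1e-6
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

def approval_rate_gap(decisions, segments):
    """Crude outcome check: spread of approval rates across customer segments."""
    rates = {s: float(decisions[segments == s].mean()) for s in np.unique(segments)}
    return rates, max(rates.values()) - min(rates.values())

# Illustrative run on synthetic data only
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, 50_000)        # scores observed at validation time
live_scores = rng.beta(2.4, 5, 50_000)          # scores observed in production
print("PSI:", round(population_stability_index(baseline_scores, live_scores), 3))

decisions = (live_scores > 0.35).astype(int)     # hypothetical approval rule
segments = rng.choice(["A", "B", "C"], 50_000)   # hypothetical customer segments
rates, gap = approval_rate_gap(decisions, segments)
print("Approval rate by segment:", rates, "max gap:", round(gap, 3))
```

In practice, checks of this kind would run on a schedule against real scoring logs, with thresholds, segment definitions and escalation routes agreed through the firm’s model-risk and consumer-outcome governance rather than hard-coded as above.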