Artificial intelligence (AI) is no longer confined to innovation labs or proof-of-concept pilots in financial services. Across the sector, AI systems are now embedded in credit decisioning, real-time fraud detection, anti-money laundering monitoring, customer engagement, operational resilience and internal decision support. In many large global institutions, hundreds of models operate simultaneously in live environments, including generative AI tools that interact directly with customers and staff.

This shift is not simply about adoption; it marks a structural change in how financial services are designed and governed. AI is no longer experimental; it is increasingly outcome-critical, shaping who receives credit, how risks are detected and how firms meet regulatory obligations. As these systems grow more powerful and opaque, they amplify long-standing tensions between innovation and accountability, performance and explainability, speed and safety.

What has changed most is not the technology itself, but the scale and consequence of its use. Decisions once mediated by human judgement are now influenced or executed by adaptive systems operating continuously and at volume. This reality has pushed AI governance out of the realm of technical oversight and into the centre of enterprise and system-level risk management.

Regulatory expectations move AI into the enterprise core

Regulators have responded by sharpening expectations and reframing AI as a core enterprise risk rather than a niche technology issue. Europe’s AI Act and the United Kingdom’s (UK) outcomes-focused supervisory guidance on AI in financial services reflect a growing consensus that AI must be governed with the same seriousness as capital, conduct and operational resilience. In parallel, the Monetary Authority of Singapore’s (MAS) Fairness, Ethics, Accountability and Transparency (FEAT) principles and Singapore’s model AI governance initiatives reinforce similar themes of fairness, accountability and transparency in AI-driven decisions.

Taken together, these frameworks signal a decisive shift. They do not treat AI governance as a compliance overlay applied after deployment. Instead, they frame AI as a determinant of consumer outcomes, market integrity and financial stability, requiring responsibility to be embedded across the entire lifecycle of systems.

Across Europe, the UK and Singapore, policymakers are converging on a common principle: governing AI is no longer about documentation or post-hoc controls. It is about designing accountability into how AI is built, tested, deployed and monitored over time. For many organisations, this approach is becoming the foundation for scalable AI adoption rather than a constraint on innovation.

As Peter Kerstens, adviser to the Directorate-General for Financial Stability, Financial Services and Capital Markets Union (DG FISMA) at the European Commission, put it, “We cannot explain the human brain either, but we still expect accountability for outcomes.” The implication is clear: AI does not need to be perfectly transparent to be governable. It does, however, need to operate within frameworks that make responsibility, oversight and human impact explicit and enforceable.

From experimentation to enterprise-grade AI

The stakes of AI governance are materially higher than they were only a few years ago. Early deployments focused largely on efficiency gains in back-office processes or narrowly scoped analytical tasks.
Today, AI increasingly sits at the heart of customer-facing and risk-critical activities, including lending decisions, fraud prevention, financial crime controls and personalised engagement. Many contemporary AI systems are adaptive, probabilistic and highly complex, particularly those built on large language models. Their behaviour may evolve as data changes, and outputs may not be easily traceable to deterministic rules. This challenges long-standing assumptions about model control and oversight.

Traditional model risk management frameworks were designed for relatively static models with clearly defined inputs and outputs. While these disciplines remain essential, they are no longer sufficient on their own. Generative AI introduces new failure modes, including hallucinated outputs, sensitivity to prompt design and emergent biases that may not be evident during initial testing.

In practice, these risks are not theoretical. Fraud detection models have drifted out of sync with evolving criminal tactics, leading to significant undetected losses. Credit algorithms trained on skewed historical data have unfairly flagged legitimate transactions or loan applications, exposing firms to conduct and reputational risk.

European policymakers have emphasised the need to recognise these differences. Kerstens noted that “Regulation is not a one size fits all approach. It is firm specific, situational where it’s extremely important.” The objective is not uniformity, but proportionality grounded in risk and impact.

In the UK, supervisory thinking has evolved towards an outcomes-focused approach. Firms remain accountable for results, regardless of whether decisions are made by humans or machines. Jessica Rusu, chief data, information and intelligence officer at the Financial Conduct Authority (FCA), stressed that governance must extend beyond initial approval. “Whenever you are building a model, you have to think about who is using it, who it is affecting and what the impact will be.” This perspective reframes governance as a continuous operational discipline, not a gatekeeping event.

Within this broader regulatory shift, some large global banks are adapting their internal governance models by embedding AI oversight within enterprise risk frameworks. At JPMorgan Chase, AI governance is integrated into firm-wide risk and control structures, with senior management accountable for model outcomes across consumer banking, markets and operations. HSBC has similarly aligned AI governance with its global risk architecture, connecting model risk, data governance and conduct risk to ensure consistency across jurisdictions. These institutional responses illustrate how AI governance is increasingly moving from specialist teams into the core of enterprise decision-making.

Aligning talent, standards and safe experimentation

Effective governance depends not only on rules, but on capability. As AI systems grow more complex, organisations require new combinations of technical expertise, risk awareness and ethical judgement. Talent, standards and experimentation are increasingly intertwined.

In Singapore, national efforts to strengthen AI adoption have deliberately focused on this intersection. AI Singapore, a national programme supported by the government, plays a central role in shaping the ecosystem.
Rather than positioning governance as a barrier to innovation, the programme integrates responsible practices directly into experimentation. According to Laurence Liew, director of AI Innovation at AI Singapore, the technical landscape has shifted rapidly. “Today, 80% to 90% of the use cases we see are generative AI or large language model based,” he said. This shift has increased demand for engineers who understand not only model development, but also deployment constraints, risk trade-offs and ethical considerations.

AI Singapore combines talent development with shared frameworks and regulatory engagement. Sandboxes allow organisations to test new applications in controlled environments while embedding governance expectations from the outset. Guidance on data handling, model evaluation and human oversight is introduced during experimentation, not imposed after deployment. As Liew explained, “Ethics, governance and standards go hand in hand… more companies will use standards to ensure the quality of artificial intelligence products.” This approach reflects a wider regulatory and industry consensus: governance that begins at the design phase is more effective, and less costly, than controls retrofitted after failure.

Large global banks are reinforcing this shift by investing in AI literacy and specialised teams that support, rather than replace, enterprise-wide ownership. JPMorgan Chase has built central AI, model risk and data ethics capabilities that partner with business lines, combining speed with control. HSBC has invested in AI-specific training for board and senior management, recognising that strategic choices about data, vendors and models require a shared baseline of understanding.

Data governance as a foundation for AI governance

As AI systems scale, data governance is emerging as a first-order risk rather than a supporting function. Model behaviour is shaped not only by algorithms, but by the quality, provenance and governance of the data on which systems are trained and operated.

Senior executives increasingly recognise that many AI failures are, at their core, data failures. Weak lineage, inconsistent quality, ungoverned third-party datasets and opaque cross-border data flows can undermine even well-designed models. The growing use of synthetic data and externally sourced training data further complicates accountability, particularly as AI systems are deployed across jurisdictions and business lines.

Regulatory expectations are evolving accordingly. Supervisors now expect organisations to demonstrate not only robust model governance, but disciplined controls over data sourcing, access, usage and retention. In practice, data governance is increasingly treated as critical infrastructure, ensuring that AI-driven decisions can be trusted at scale.
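One way such controls become operational is to attach governance metadata to each dataset and gate model access on it programmatically. The sketch below illustrates the idea in Python; the field names, dataset, purposes and checks are hypothetical, not drawn from any specific supervisory framework.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical governance record: captures sourcing, usage and retention
# controls of the kind supervisors increasingly expect firms to evidence.
@dataclass
class DatasetRecord:
    name: str
    source: str                      # originating system or third-party vendor
    jurisdictions: list[str]         # where the data may be stored or processed
    approved_uses: list[str]         # purposes signed off by data governance
    retention_until: date            # date after which the data must be deleted
    synthetic: bool = False          # flags synthetic or externally generated data
    lineage: list[str] = field(default_factory=list)  # upstream datasets

    def check_use(self, purpose: str, jurisdiction: str) -> bool:
        """Gate model training or inference on governance metadata."""
        return (purpose in self.approved_uses
                and jurisdiction in self.jurisdictions
                and date.today() <= self.retention_until)

txns = DatasetRecord(
    name="card_transactions_2024",
    source="core_payments_platform",
    jurisdictions=["SG", "UK"],
    approved_uses=["fraud_detection", "aml_monitoring"],
    retention_until=date(2031, 12, 31),
    lineage=["raw_card_events"],
)
# A credit-scoring model may not train on this dataset: purpose not approved.
assert not txns.check_use("credit_scoring", "SG")
assert txns.check_use("fraud_detection", "UK")
```

The design point is that approval becomes machine-checkable: a model pipeline can refuse to run against data whose provenance, purpose or retention status does not match its governance record.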
As firms work to meet these expectations, governance is being embedded directly into the platforms and systems that manage data and AI operations, rather than relying on policy alone. Akber Jaffer, chief executive of SmartStream, observed that many AI initiatives are less about individual models and more about the systems that support them. “A lot of the enabling technologies — cloud, APIs and artificial intelligence — are really focused on infrastructure,” he said, highlighting the role of platform design in enforcing governance at scale.

A similar perspective is emerging among core banking and payments providers. Mick Fennell, payments director at Temenos, described enterprise AI integration as “a fruitcake with AI baked in,” underscoring that explainability and accountability must be embedded into platforms from the outset. These views signal a broader shift: governance is increasingly enforced through architecture, not supervision alone.

Building governance into enterprise decision-making

As AI moves from pilots to enterprise-scale deployment, organisations are rethinking governance as a design principle rather than a downstream compliance exercise. Effective governance ensures that models operate reliably, ethically and at scale, while maintaining accountability for business outcomes and customer impact.

Across the sector, cross-functional processes are becoming standard. Technology, business, risk and compliance teams collaborate throughout the AI lifecycle, supported by structured oversight, clear ownership and defined performance indicators. Automated monitoring is increasingly used to track model performance, drift and emerging risks in real time. National frameworks such as MAS’s FEAT principles and supervisory expectations in the UK and EU reinforce that governance must operate continuously, not only at approval points. These expectations are shaping how enterprise AI operating models are designed from the ground up.

United Overseas Bank (UOB) provides one example. “There is a big gap between proof of concept and production,” said Alvin Eng, head of enterprise AI at UOB. The challenge, he explained, is no longer whether AI can work, “but whether it can work reliably, ethically and maintain explainability at scale.” Bridging that gap requires clarity on ownership, sponsorship and measurable business outcomes. Governance becomes the mechanism through which experimentation is translated into sustainable operations.

These challenges are not confined to developed markets. Institutions operating across emerging and frontier markets face similar issues, often amplified by fragmented infrastructure and regulatory diversity. United Bank for Africa (UBA), for example, has embedded AI into customer-facing platforms serving millions across multiple countries. UBA’s conversational AI assistant, Leo, supports everyday banking interactions across digital channels, while platforms such as RedPay enable real-time payments for merchants. Together, these use cases illustrate how AI increasingly shapes access to financial services and economic participation, reinforcing the need for governance frameworks that address fairness, resilience and accountability across jurisdictions.

Scaling governance across complex ecosystems

As institutional practices mature, governance challenges increasingly extend beyond individual organisations into shared payment, data and technology infrastructures. AI systems are now deeply interconnected, and risks can propagate rapidly across ecosystems.

This reality has sharpened focus on third-party and ecosystem-level governance. Outsourced models, cloud providers and embedded AI services must increasingly meet the same governance standards as in-house systems. Without alignment, weaknesses in one part of the ecosystem can undermine trust across the whole.

Global payment networks illustrate how governance is evolving at this level. Mastercard, alongside other networks, has adopted structured, risk-based approaches to AI oversight. AI use cases are evaluated using scorecards that assess risk, consumer impact, explainability and systemic relevance. “We put our AI use cases through a scorecard where we identify and classify the risk and then identify appropriate mitigating controls,” said Derek Ho, deputy chief privacy, AI & data responsibility officer at Mastercard. Lower-risk applications face lighter controls, while high-impact systems are subject to intensive oversight.

Crucially, this approach is collaborative. “This collaborative approach means you can have both innovation and responsibility being considered together as a whole. You’re driving towards innovation and responsibility rather than innovation or responsibility,” Ho said.
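A minimal sketch of what such scorecard-driven tiering could look like in code follows; the dimensions mirror those described above, but the weights, thresholds and tier labels are hypothetical illustrations, not Mastercard’s actual methodology.

```python
from dataclasses import dataclass

# Scoring dimensions mirror those described above; weights are illustrative.
WEIGHTS = {
    "inherent_risk": 0.3,
    "consumer_impact": 0.3,
    "explainability_gap": 0.2,
    "systemic_relevance": 0.2,
}

@dataclass
class AIUseCase:
    name: str
    ratings: dict[str, int]  # each dimension rated 1 (low) to 5 (high)

    def score(self) -> float:
        return sum(WEIGHTS[d] * r for d, r in self.ratings.items())

    def tier(self) -> str:
        s = self.score()
        if s >= 4.0:
            return "high-impact: intensive oversight, human review, enhanced monitoring"
        if s >= 2.5:
            return "medium: standard controls and periodic revalidation"
        return "lower-risk: lighter-touch controls"

# Example: a hypothetical customer-facing credit assistant rates high
# on consumer impact and explainability gap, so it lands in the top tier.
use_case = AIUseCase(
    name="genai_credit_assistant",
    ratings={"inherent_risk": 4, "consumer_impact": 5,
             "explainability_gap": 4, "systemic_relevance": 3},
)
print(f"{use_case.name}: score {use_case.score():.2f} -> {use_case.tier()}")
```

The value of encoding the scorecard this way is that the mapping from risk score to mitigating controls is explicit and auditable, so proportionality is applied consistently rather than negotiated case by case.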
UOB complements this ecosystem perspective by contributing to national frameworks and industry initiatives. Eng noted that embedding MAS’s FEAT principles across operations helps align governance beyond individual firms. Participation in collaborative programmes such as Project MindForge further supports responsible AI adoption across the sector. These initiatives demonstrate that governance at scale is as much about coordination as control.

Governance as culture and long-term sustainability

Mature AI governance is no longer defined by policies, dashboards or scorecards alone. Across financial services, it is increasingly understood as a culture embedded in daily decision-making, where shared ownership, collaboration and accountability are central. Governance is recognised as a continuous discipline that must evolve alongside models, data and business priorities. Boards and senior executives are investing in AI literacy, recognising that effective oversight requires understanding both the capabilities and limitations of AI systems.

Regulators reinforce this perspective. Rusu emphasised the need for sustained vigilance: “Balance checks and controls should be in place to ensure awareness of model drift, bias or any other movement that could harm consumers.” Her view highlights that sustained oversight, early detection of risk and ongoing alignment with regulatory expectations are critical for enterprise resilience.
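To make the idea of automated drift checks concrete, the sketch below computes a population stability index (PSI), a widely used statistic for detecting when a model’s live input distribution has shifted away from its training baseline. The bin count, alert threshold and simulated data are illustrative assumptions, not regulatory values.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a feature's live distribution against its training baseline.

    PSI sums (actual% - expected%) * ln(actual% / expected%) over bins.
    A common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
    """
    # Bin edges are taken from the training-time (expected) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # Clip live values into range so outliers land in the extreme bins.
    act_pct = np.histogram(np.clip(actual, edges[0], edges[-1]),
                           bins=edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)  # guard against empty bins
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Simulated example: live transaction amounts drift upward from the baseline.
rng = np.random.default_rng(0)
baseline = rng.normal(100, 20, 50_000)  # training-time distribution
live = rng.normal(115, 25, 10_000)      # shifted production distribution
psi = population_stability_index(baseline, live)
if psi > 0.25:  # illustrative alerting threshold
    print(f"PSI = {psi:.2f}: significant drift detected, trigger model review")
```

Run on a schedule against each material model input, a check like this turns “awareness of model drift” from a policy statement into a measurable, alertable control; bias metrics can be monitored against thresholds in the same pattern.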
Institutions translate these principles into practice in different ways, but common themes are emerging. Cross-functional communication, continuous monitoring and early detection of risk are increasingly prioritised. Payment networks and technology platforms reinforce responsible AI through training, certification and internal standards that extend beyond traditional risk functions.

Sustainable governance also requires a forward-looking mindset. As AI capabilities evolve, organisations are combining talent development, ecosystem partnerships and robust operational frameworks to manage emerging risks proactively. Weak governance can lead to tangible failures, from biased outcomes affecting customers to operational incidents driven by unmanaged model drift. By cultivating governance as culture, the sector ensures that AI remains reliable, ethical and aligned with both commercial objectives and societal expectations.

The path forward

As AI moves decisively from pilots to production, the contours of effective governance across financial services are becoming clearer. Regulators are focusing on outcomes rather than technologies. Platforms and infrastructure providers are embedding controls into systems by design. National programmes and industry initiatives are aligning talent, standards and experimentation around responsible deployment.

What is emerging is not a single governance model, but a shared direction of travel. AI is now part of the financial system’s core infrastructure. Governing it effectively requires coordination across regulators, institutions, platforms and ecosystems, not isolated compliance efforts.

In this environment, governance is evolving from oversight into system design. It shapes how AI is built, deployed, monitored and adapted, and how responsibility is shared across interconnected actors. Culture, capability and accountability matter as much as formal controls.

Ultimately, governing AI is not about slowing innovation or achieving perfect transparency. It is about ensuring that powerful, adaptive technologies operate within frameworks that make their impact visible, their outcomes accountable and their use aligned with economic and societal goals. As AI reshapes the next frontier of financial services, effective governance will determine not only which organisations succeed, but whether the financial system remains trusted, resilient and fit for the future.