Financial institutions today run on software-driven architecture, where artificial intelligence (AI) shapes core workflows from onboarding and analytics to pricing, compliance and portfolio allocation. Because these decisions directly affect capital, liquidity and fiduciary outcomes, Europe’s regulatory agenda recognises AI as integral to operational resilience, conduct governance and data-protection mandates through instruments including the Digital Operational Resilience Act (DORA), the General Data Protection Regulation (GDPR) and the European Union (EU) AI Act, whose obligations are now being phased in.

At the Singapore FinTech Festival (SFF) 2025, Peter Kerstens, adviser to the Directorate-General for Financial Stability, Financial Services and Capital Markets Union (DG FISMA), set out three shifts: supervisors now expect deeper evidence of model control and lifecycle assurance as AI adoption deepens; explainability and human accountability have become central to responsible use; and digital-finance rules such as DORA are aligning resilience, cybersecurity and third-party risk under a unified framework.

Kerstens has long worked at the intersection of digital policy, financial regulation and technological innovation, contributing to major EU initiatives such as DORA and the Markets in Crypto-Assets Regulation (MiCA). He remains closely involved in how the EU incorporates AI into its financial-governance architecture, advising on explainability, model accountability, operational resilience, data protection and evolving expectations around human oversight.

Kerstens stressed that the EU’s approach to AI, privacy and digital transformation is rooted in fundamental rights, proportionality and rules that allow innovation while protecting citizens. He noted, “If you are using personal information, you must respect the laws of personal information… AI can be legitimate as well.”

Operational and AI governance for critical financial systems

Kerstens noted that regulation in the EU is designed not only to protect privacy and fundamental human rights, but also to provide confidence and enable the development of digital and tokenised economies. Institutions must evaluate their risks carefully, including operational dependencies and potential vendor failures.

Under DORA, legally in force since January 2023 and applicable across the EU from January 2025, financial firms must establish comprehensive ICT risk-management and resilience frameworks, report cyber and ICT incidents, test their operational resilience and manage third-party ICT vendor risk under a common supervisory regime. These requirements are being reinforced by technical standards developed by the European Supervisory Authorities (the European Banking Authority, EBA; the European Securities and Markets Authority, ESMA; and the European Insurance and Occupational Pensions Authority, EIOPA), which specify how validation, documentation, model governance and lifecycle monitoring are applied in practice.

Kerstens pointed out that large organisations face more complex data-governance challenges than smaller firms. While smaller companies deal with simpler processes and limited data, larger institutions must build robust governance to manage intricate systems effectively. He argued that compliance with rules such as DORA and the GDPR is both necessary and feasible: “If you are smart enough to code up a smart contract, you are definitely smart enough to comply with the rules.”

Explainability and human accountability as regulatory cornerstones

Explainability has emerged as a persistent challenge in applying AI to financial services.
Kerstens noted that understanding how AI reaches its conclusions can be difficult, particularly because many systems are built on neural networks whose complexity invites comparison with the human brain. “We can’t explain the human brain,” he observed, underscoring that it is not always possible to trace exactly how inputs map to outputs in sophisticated AI models. He stressed, however, that this complexity is no justification for a lack of control.

Rather than oversimplifying models solely to make them easier to interpret, Kerstens emphasised the importance of governance. Institutions, he argued, must focus on what AI tools are optimised to achieve, whether their use is ethical and whether they serve legitimate purposes. “You shouldn’t be optimising your systems for separating; you should be optimising them for the legitimate problem you’re trying to solve,” he said. Demonstrating governance and the intended purpose of AI use, in his view, matters more than simplifying models for the sake of explainability.

Maintaining human oversight is essential. Kerstens explained that financial institutions must remain in control of AI systems even as they use these tools to improve efficiency, reduce costs or offer new services. As he put it, firms “have to understand… that they are in control and they understand what happens, but they’re not at the mercy of their AI.” Across the EU, supervisors increasingly ask institutions to evidence the validation, monitoring and assurance processes that demonstrate how AI behaves across its lifecycle, including under changing data conditions and stressed environments.

Kerstens also stressed the societal dimension: while AI is advancing toward human-level and potentially superhuman capabilities, humans must retain authority over these systems. This perspective frames explainability not only as a technical challenge, but as a cornerstone of responsible, accountable and trustworthy AI governance in financial services.

Managing third-party risks

Kerstens highlighted the growing complexity that financial institutions face in digital transformation. Firms, he noted, must “think through where the risks are”, including operational weaknesses and situations where “things can go wrong”.

He emphasised that outsourcing technical functions does not transfer liability. Even when using third-party models or cloud services, banks remain accountable to regulators and customers if systems fail or behave unpredictably. That accountability extends to evaluating the operational, ethical and regulatory implications of vendor solutions and ensuring that the organisation maintains oversight of its AI systems.

He also noted that generative AI (GenAI), often provided through large third-party models, must be subject to the same level of scrutiny. Institutions need clear controls to manage hallucinations, misclassification, unsanctioned data use and unintended bias. Third-party AI, he stressed, remains firmly within the bank’s risk perimeter.

Governance-first foundations for scalable and sustainable AI

Ensuring effective human oversight remains central to responsible AI deployment in financial services. As Kerstens explained, “Humans have to control the systems and have to remain in charge.” He added that AI oversight must evolve in step with the technology. Regulators, he noted, increasingly take hands-on approaches, such as technical discussions and collaborative engagement with institutions, to understand how AI systems behave in practice.
These interactions, he said, allow supervisors and firms to work through concrete implementation issues rather than relying solely on abstract guidance.

Regulatory approaches to AI differ across jurisdictions. The EU is building a comprehensive, legally binding framework for high-risk AI; Singapore currently relies on voluntary, sector-specific guidance and sandbox experimentation; and the UK takes a principles-based, innovation-focused posture. A consistent supervisory principle across all three is proportionality: controls and documentation should reflect the materiality and risk of each AI use case rather than imposing uniform compliance. As supervisory expectations mature and AI adoption accelerates, these models are gradually converging toward stronger governance, oversight and proportional risk management.

According to Kerstens, the crucial factor is that regulators and institutions maintain regular communication on risks, controls and the practical implications of emerging techniques. He highlighted the value of dialogue and shared learning: “We are meeting, engaging and exchanging ideas and that will get us to much more convergence and similarity than some international treaty or negotiation,” he said, urging institutions to engage more actively in such exchanges.

Kerstens stressed that requirements such as resilience, explainability and fairness should not be seen as limiting innovation. In his words, regulation “is not a one size fits all approach. It is firm specific, situational where it's extremely important.”

For institutions, the imperative is clear: AI must be governed with the same rigour as capital, liquidity and operational risk. The EU’s regulatory principles offer the stability institutions need to adopt AI as its capabilities expand, underscoring Kerstens’ central message that strong governance is not optional. It is the foundation that enables innovation while safeguarding customers and scaling AI with confidence across mission-critical processes.