Artificial intelligence (AI) is moving from concept to practice in financial services. Banks are already deploying models in lending, compliance and supervision, and are starting to confront how AI will redefine decision-making, risk management and customer interaction across the enterprise.

During the Singapore AI Sunset Cruise on 11 November 2025, TAB Global and Huawei brought together senior executives from institutions including Bank of Singapore, AEON Bank, Techcombank, Alliance Bank, VietinBank, Sathapana Bank, Bankinter, OCBC, GX Bank, Citibank and the Central Bank of Kenya. The dialogue focused on the theme “From digital tools to AI colleagues: The implementation of agentic AI in finance”. Participants agreed that the industry is shifting from incremental digitisation to structural change driven by intelligent systems. The discussion was framed around five themes: building an AI framework for success, enabling human–AI collaboration, scaling use cases, measuring impact and managing emerging risks.

AI framework for success

A central question for the group was how banks can move from fragmented pilots to an enterprise framework that supports continuous AI deployment. Emmanuel Daniel, founder and chairman of TAB Global, argued that banking is undergoing a structural shift driven by customers’ everyday use of AI. As people become comfortable with intelligent agents in their personal lives, they will expect the same sophistication from their financial institutions. He stressed that AI is reshaping the industry “from the outside in”, as customer-side agents begin to define financial interactions and push banks towards real-time, data-driven engagement.

Daniel warned that institutions which focus solely on internal data and incremental digitisation risk losing relevance. He emphasised that “the data outside the bank is now more important than the data inside the bank”, and that external, real-time flows will increasingly drive decisioning.
This, he said, requires a rethink of architecture before AI can scale.

Jason Cao, CEO, Digital Finance BU, Huawei, developed this point by arguing that AI must be treated as infrastructure rather than as a series of isolated tools. He observed that many banks start with narrow use cases that remain trapped in legacy workflows and never deliver structural benefit. In contrast, institutions that embed AI into product design, risk modelling, fraud monitoring and compliance are building foundations for sustained advantage. Cao urged banks not to overestimate short-term gains or underestimate long-term value. AI, he said, rebuilds end-to-end business processes and human–machine collaboration models. The real opportunity lies in redesigning systems and workflows so that AI becomes a pervasive operating layer rather than a bolt-on capability.

Participants highlighted that legacy hierarchies and siloed governance remain major obstacles. Sequential approval chains, fragmented product structures and rigid IT processes slow experimentation and complicate integration of new models. Several speakers noted that AI readiness is as much an organisational and regulatory challenge as a technical one.

The consensus was that banks need modular, resilient and interoperable architectures that can support iterative model development, evolving regulatory expectations and new patterns of customer behaviour. Without such a framework, even promising pilots will struggle to move from proof-of-concept to full-scale implementation.

Human–AI collaboration

The second theme focused on how AI changes day-to-day work, and how institutions can ensure that technology augments rather than replaces human judgement. Elizabeth Ndirangu, deputy manager, payment service provider authorisation, Central Bank of Kenya, explained that supervisory teams are already using AI to process high-volume data, identify anomalies and detect emerging risk patterns in payment systems.
AI, she said, serves as an analytical partner that allows supervisors to focus on higher-order assessments while maintaining accountability for decisions. Explainability remains essential because supervisors must justify outcomes to stakeholders and policymakers.

From a commercial-banking perspective, Nak Pechkorsa, chief technology and information officer, Sathapana Bank, described how AI supports frontline teams by accelerating operational decisions and improving accuracy. In emerging markets where speed and agility define competitiveness, AI helps deliver more consistent and timely outcomes, but human judgement still anchors final decisions. Pechkorsa characterised AI as a supportive layer that lifts performance rather than a substitute for staff.

June Lomaria, policy analyst, digital payments division, Central Bank of Kenya, stressed that public institutions carry particular responsibilities around transparency and fairness. As models grow more sophisticated, she argued, banks and regulators must maintain clarity on how outputs are generated. Automation cannot extend beyond the point where decisions are no longer explainable or defensible. This reinforces the need for governance structures that preserve human authority over AI systems.

Participants also discussed the likely emergence of customer-owned AI agents. As households and businesses adopt their own intelligent assistants, banks will increasingly interact with machines acting on behalf of customers. This will require new approaches to authentication, permissions and oversight, as well as infrastructures that can support secure machine-to-machine interactions.

Across the theme, participants agreed that the defining characteristic of AI adoption will be the quality of collaboration between humans and machines. Institutions will need clear principles governing how AI supports staff, how accountability is allocated and how transparency is preserved in regulated environments.
AI use cases take shape

The third theme examined how agentic AI is being applied across lending, compliance, supervision and operations, and what distinguishes successful deployments. Santhosh Mahendiran, chief data and analytics officer, Techcombank, described the bank’s move from augmented to “authentic” intelligence in credit underwriting. He explained that Techcombank has unified identification, analysis and decisioning through real-time model interaction, replacing manual steps that previously fragmented the process. The shift has delivered measurable impact but only after a full workflow redesign. Mahendiran cautioned that automating individual tasks is insufficient if departments such as underwriting continue to rely on sequential handovers between frontline, operations, credit and risk.

From a private-banking perspective, Céline Le Cotonnec, chief data and innovation officer, Bank of Singapore, outlined how democratised AI is reshaping experimentation and organisational roles. She observed that employees can now build and connect AI agents without waiting for extended information technology (IT) development cycles, enabling faster prototyping across functions. She compared this transition to the leap from pen-and-paper processes to computing, arguing that agentic tools fundamentally change how knowledge work is conducted.

Le Cotonnec also highlighted the organisational implications of this shift. As AI becomes widely accessible, she argued, the role of IT should evolve from builder to governance body, focused on standards, security and guardrails. The challenge is to empower staff across the enterprise to create and use AI agents while ensuring responsible deployment and alignment with regulatory expectations.

Technology-provider insights broadened the picture. Neo Gong, digital finance, Huawei, explained that institutions are beginning to explore multi-agent ecosystems in which AI models interact with one another as well as with customers.
This trend, he noted, demands architectures that can support autonomous model interaction and robust monitoring of network-level resilience. Gong also pointed to global capital flows that increasingly prioritise AI-driven business models, influencing the types of use cases banks pursue.

Regulatory examples tied these developments back to supervision. Ndirangu described how AI models assist with anomaly detection and transactional analysis, giving supervisors earlier visibility into emerging issues. Pechkorsa added that similar techniques support frontline decision-making in operational settings by boosting accuracy and reducing response times.

Taken together, these examples show AI moving from isolated pilots to structural applications across underwriting, compliance, supervision, frontline decision support and customer interaction. The group saw these deployments as early steps towards fully agentic workflows.

Measuring AI success

Participants then turned to how banks should measure the success of AI initiatives. Cao argued that traditional metrics such as cost savings or short-term efficiency gains do not capture AI’s structural impact. Isolated applications often fail to scale because they remain tied to legacy workflows, he said. Meaningful benefit arises when AI becomes embedded in institutional architecture, shaping how teams collaborate and how decisions are made.

He warned that the competitive gap between early and late adopters will widen quickly. “In the future there will be only two types of banks: AI bank or other banks,” Cao remarked, suggesting that the decisive metric is the speed at which institutions embed AI into their operating fabric rather than headline returns from individual projects.

Mahendiran reiterated that process redesign is central to any serious measurement framework. Departments structured around sequential handovers cannot fully leverage model-driven decisioning, he said.
For AI to deliver structural benefits, workflows must be rebuilt around continuous learning and real-time interaction, and performance indicators must reflect this shift.

Participants emphasised that human–AI interaction itself is a key measure of success. Institutions need to assess whether employees understand how models operate, can interpret AI-generated outputs and can integrate them into their work. Metrics should capture improvements in decision quality, responsiveness and resilience across functions such as credit underwriting, risk management and fraud detection.

The group also noted the emerging need to monitor model-to-model interactions in multi-agent systems. Stability of interactions, network resilience and traceability of decisions across distributed architectures are likely to become important dimensions of performance as agentic workflows expand.

Overall, participants agreed that AI success must be defined by its contribution to long-term capability, resilience and decision quality, not just near-term efficiency.

Managing emerging risks

The final theme addressed governance, risk and accountability as AI adoption accelerates. Participants underlined that strong controls, transparent processes and responsible design are essential to scaling AI without compromising trust or compliance. They stressed that AI-driven decisions must remain interpretable to both customers and regulators, especially in environments where accountability is paramount.

Ndirangu noted that supervisory institutions must maintain strict standards around explainability. AI-generated insights, she said, need to fit within existing regulatory frameworks and provide sufficient transparency for supervisors to justify decisions. This creates clear constraints on how far automation can be pushed in regulated settings.

Lomaria added that public-sector bodies must balance innovation with responsibility. Advanced models should not undermine the ability to trace, justify or audit decisions.
She emphasised the importance of clear governance principles and well-defined oversight structures to prevent hidden risks.

Participants discussed the shared responsibility between banks and regulators in setting standards for responsible AI. They pointed to the need for common expectations around model behaviour, monitoring protocols and quality thresholds, which would guide safe scaling and ensure consistency across the sector.

The group also highlighted that responsible adoption requires investment not only in technology but also in people and processes. Institutions must equip teams with the skills to understand and challenge AI outputs, update governance frameworks and cultivate a culture that treats AI as a tool for better decisions rather than an infallible authority.

The conclusion was that institutions integrating strong controls, transparent design and accountable workflows will be better prepared to manage the risks associated with increasingly intelligent systems.

Building AI-native institutions

The Singapore AI Sunset Cruise highlighted how financial institutions are progressing from experimentation to structural change in their approach to AI. Participants described an industry that is beginning to redesign architecture, workflows and governance around intelligent systems rather than simply adding digital tools to existing models.

Speakers presented complementary perspectives: outside-in data flows, AI as infrastructure, democratised experimentation, process redesign, supervisory oversight and operational agility. Together, they painted a picture of a sector moving towards AI-native institutions.

The group agreed that the real test will be whether banks can embed AI into their operating fabric while preserving transparency, accountability and human judgement.
Institutions that treat AI as a structural priority, invest in adaptable architectures and empower staff across functions to work effectively with intelligent systems are likely to emerge as leaders. As Cao observed, the industry is heading towards a clear divide: “AI banks” and “other banks”. The transformation is already under way; the question is how quickly institutions can align their strategies, structures and cultures with the possibilities and responsibilities of AI.