At the Singapore FinTech Festival (SFF) 2025, United Overseas Bank (UOB) provided clarity on one of the most pressing questions in financial technology today: how banks can scale artificial intelligence from experimentation to enterprise-wide deployment without compromising trust, transparency or regulatory expectations. Speaking on the sidelines of the event, Alvin Eng, UOB’s head of enterprise AI, articulated a discipline-first approach that integrates governance, model lifecycle controls and cross-functional oversight into every stage of AI development. Eng leads UOB’s enterprise AI initiatives while overseeing the enterprise data science platform, positioning his mandate at the intersection of innovation and institutional safeguards. As global institutions shift from proof-of-concept models to industrialised AI applications, Eng noted that “there is a big gap between proof of concept (POC) and production,” and said the challenge is no longer whether AI can work, but whether it can work reliably and ethically, and remain explainable at scale.

Industrialising AI with business, regulatory and operational discipline

For UOB, moving AI into production begins with clarity around three foundational components: business requirements, regulatory expectations and operational readiness. On business ownership, Eng emphasised unambiguous accountability. This, he stressed, means “making sure the owner and the model ownership is clear, the business sponsorship is very clear, the business use case is very well articulated, and this use case comes with key business KPIs as well.” As a result, every model is tied directly to a measurable commercial purpose. Equally critical is compliance with the Monetary Authority of Singapore’s (MAS) Fairness, Ethics, Accountability and Transparency (FEAT) principles.
“All our models before they go live, they go through a very rigorous set of testing, attestation, evaluation, and also the model technical validation,” Eng explained, noting that UOB embeds MAS’ principles upfront and had been one of the earliest banks involved in shaping them. Operational robustness completes the triad. UOB has built automated machine learning operations (MLOps) and model-monitoring capabilities that support continuous retraining and detect drift in real time. As Eng noted: “There needs to be a feedback loop. From a deployment standpoint — the whole process of automating continuous integration/continuous deployment (CI/CD), the automated processes that manage how models are updated and deployed into production — even the model monitoring for drift, that’s automated.” These automated dashboards, triggers and drift indicators give full visibility to model owners and reviewers. As Eng summed up, “People have no reason to say, ‘oh, I don’t know about it.’” Central to UOB’s production-grade AI is a unified platform that Eng described as integrating “development, deployment and governance,” streamlining ML workflows. “It’s a lot more seamless, a lot more auditable,” he observed, underscoring that the platform reduces friction for business teams and supports transparency.

Explainability, fairness and transparency

This governance discipline extends naturally into UOB’s work on explainability and fairness, which have become central pillars of responsible AI deployment. The bank’s responsible AI framework builds on its early partnership with MAS in developing the FEAT principles back in 2018. “UOB was one of the first banks in the consortium to actively partner with MAS to co-develop some of these principles,” Eng said. For UOB, explainability varies with the risk of the decision.
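The retraining feedback loop Eng describes (automated monitoring that flags drift in production data) can be illustrated with a minimal sketch using the population stability index, a common drift metric in banking. The bucketing scheme and the 0.2 alert threshold below are conventional rules of thumb, not details UOB has disclosed.

```python
"""Minimal sketch of automated drift detection via the population
stability index (PSI). Illustrative only: thresholds and data are
assumptions, not UOB's actual configuration."""
import numpy as np

def population_stability_index(expected, actual, buckets=10):
    """Compare a feature's live distribution against its training baseline."""
    # Bucket edges come from the training-time (expected) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full real line
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip avoids log(0) in empty buckets.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)  # training-time feature distribution
live = rng.normal(1.0, 1, 10_000)    # markedly shifted production traffic

psi = population_stability_index(baseline, live)
# A common rule of thumb: PSI > 0.2 signals significant drift and would
# trigger the automated feedback loop (alerting, then retraining).
drift_alert = psi > 0.2
```

In a production pipeline of the kind Eng outlines, a check like this would run per feature on a schedule, feeding the dashboards and triggers that give model owners visibility.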
In credit, Eng said the intent is clear: “If you’re making a loan to a customer, you must be able to explain why that loan is going to be checked.” The bank uses a suite of interpretability techniques — Shapley values, LIME and emerging methods like layer-wise relevance propagation — to provide transparency into model outputs. However, Eng stressed that technical explainability alone is insufficient; institutions must also build governance artefacts that allow reviewers and regulators to understand how models behave over time. “Model cards, documentation, data, even down to the data lineage, metadata… all this is part of having explainability capabilities,” he emphasised. Fairness is treated with the same level of operational seriousness. UOB monitors metrics such as demographic parity and equal opportunity, supported by automated alerts and dashboards. “We ensure that bias checks are constantly being monitored. If it falls down, we need to make sure it’s changed and that needs to be justified,” he explained. This sits within a tiered governance structure that starts with the AI Ethics and Model Governance Council, an internal body comprising domain experts, escalates to the Data and AI Committee and ultimately reaches the Operational Risk Management Committee (ORMC). “The tone from the top is very important,” Eng said, stressing “independence between the person working on the model and the peer reviewer — and the model owner is always from the business, where individual accountability is equally important.” He noted that generative AI models are harder to explain and must therefore be used carefully.

Harmonising AI governance with enterprise-wide risk frameworks

UOB’s approach avoids treating AI risk as a standalone discipline. Instead, it integrates AI into existing enterprise frameworks. “We have this model risk management framework,” Eng explained.
“We have developed AI-specific frameworks, but it forms within the broader umbrella, looking across all the various policies, making sure they are harmonised.” This ensures alignment across business lines and avoids disjointed governance. Collaboration underpins UOB’s approach to AI deployment. Eng described the bank’s culture as inherently cross-functional: “We don’t work in silos, we’ll be cutting across different functions to collaborate.” His team serves as a connector, setting standards and bringing teams together. He illustrated this culture with a personal analogy: “It’s like getting married, it’s about the communication, we are very ‘family’.” As AI scales, the bank has strengthened training and capability-building for employees. “I want to make sure that everybody stays relevant, everybody has jobs, everybody’s happy,” Eng said, underscoring that cultivating a culture of innovation and excellence is as important to UOB’s vision as scaling the technology itself across the group.

Contributing to the next phase of national AI governance

UOB’s efforts extend beyond its internal systems. Eng highlighted the bank’s active role in MAS’s Project MindForge — a Veritas-initiative workstream designed to advance responsible AI deployment across the financial sector. “We are providing leadership in the risk and compliance streams,” he said. Through this collaboration, UOB is helping shape the next wave of industry-wide AI governance and compliance standards. Eng also noted early industry discussions on whether shared utilities or open-sourced components could support common AI risk and compliance needs — a direction he believes could strengthen consistency and reduce duplication across the sector. He added that industry alignment is difficult: “It’s not easy.
You have 30 people, everybody has a voice, and how to meld all the voices together, that’s a challenge.” Despite the complexity, he applauded MAS for its catalytic role in “bringing the whole industry” together. “MAS is very progressive, taking us in the right direction,” he added.

Governance as the foundation for scalable AI innovation

The bank’s philosophy of “governance by design” is central to ensuring AI systems remain auditable, explainable and resilient as technologies evolve. Automated drift detection, versioning discipline and independence in validation provide a backbone for operational continuity. As financial institutions advance into more sophisticated models — including generative architectures that introduce new layers of complexity — UOB’s experience shows that innovation and governance are not opposing forces. Instead, when embedded from the outset, they reinforce each other, enabling AI to scale safely and sustainably across a regulated enterprise.
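For readers wanting a concrete picture of the bias checks Eng described earlier, the two fairness metrics UOB monitors (demographic parity and equal opportunity) reduce to simple gap calculations over model outputs. The sketch below is illustrative: the toy data, group labels and the 0.1 alert tolerance are assumptions, not UOB's settings.

```python
"""Sketch of fairness monitoring with demographic parity and equal
opportunity gaps, plus a threshold-based alert. Illustrative only."""
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-outcome rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates across groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)  # actual positives in this group
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Toy scoring run: binary approval decisions for two applicant groups.
rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000)
y_true = rng.integers(0, 2, size=1000)  # ground-truth outcomes
y_pred = rng.integers(0, 2, size=1000)  # model decisions

dp_gap = demographic_parity_gap(y_pred, group)
eo_gap = equal_opportunity_gap(y_true, y_pred, group)
# An automated check would raise an alert, feeding the dashboards Eng
# describes, whenever either gap exceeds the agreed tolerance.
bias_alert = dp_gap > 0.1 or eo_gap > 0.1
```

In the governance structure the article describes, a breached threshold would not silently pass: the alert escalates to the model owner and reviewer, and any continued use has to be justified.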