Singapore has positioned itself as a global leader in financial innovation, investing in infrastructure, policy design and technical talent to stay ahead of technological advancements. At the 2025 Singapore FinTech Festival (SFF), the Monetary Authority of Singapore (MAS) reinforced this ambition by releasing a consultation paper on AI risk management, exploring well-governed AI deployment across the financial sector.

Supporting this national agenda is AI Singapore (AISG), a national AI programme funded by the National Research Foundation (NRF) under the Prime Minister’s Office (PMO), which conducts research and develops capabilities across priority sectors, including finance. Its key initiatives — the AI Apprenticeship Programme (AIAP), which trains engineers on enterprise-grade projects, and the 100 Experiments (100E) programme, which deploys apprentices into organisations to build AI solutions — supply the sector with skilled talent and safe, structured experimentation pathways that bring projects to minimum viable product (MVP) stage.

On the sidelines of SFF 2025, Laurence Liew, director of AI Innovation at AISG, outlined three pillars needed for AI adoption in industry: talent readiness, governance alignment and safe experimentation.

National AI talent as critical infrastructure for financial institutions

AISG’s experience mirrors that of financial institutions: demand for engineers who can operate in high-stakes, compliance-heavy environments continues to outpace supply. “Eighteen months ago, most of our projects were traditional machine learning. Today, nearly 80–90% are generative AI (GenAI) or LLM-based,” Liew said. AISG had to rapidly enhance its curriculum and infrastructure to mirror the systems banks were beginning to operationalise. Rather than compete with financial-sector salaries, AISG built AIAP to train strong candidates through hands-on enterprise projects.
Through 100E, apprentices — from recent graduates to mid-career professionals — work on applied use cases such as intelligent document processing, anomaly detection, explainable LLM assistants and automated triage. These projects expose them to governance, data sensitivity and operational risk from day one. The structured combination of real projects, multidisciplinary teams and guided learning has created a workforce that understands both technical development and regulated operational responsibilities. In practice, this talent pipeline functions as a form of national AI infrastructure, enabling financial institutions to progress from pilot to production with greater discipline and confidence.

Responsible and explainable AI aligned with financial regulation

As banks embed AI into customer evaluation, surveillance, fraud detection and compliance, responsible and explainable models have become central to regulatory trust. MAS set early expectations through the Fairness, Ethics, Accountability and Transparency (FEAT) principles and the Veritas initiative, and has indicated that new guidance for GenAI systems will further raise expectations around fairness, accountability and transparency.

AISG’s programmes are deliberately aligned with this regulatory direction. Every apprentice completes modules on AI ethics, governance and technical standards. Liew, who chaired Singapore’s first AI standards committee under IMDA and Enterprise Singapore, emphasised that “ethics, governance and standards go hand in hand… more companies will use standards to ensure the quality of AI products.” Training explicitly covers bias evaluation, drift monitoring, explainability checks and documentation aligned with supervisory expectations.
Embedding FEAT and Veritas methodologies ensures that AI solutions developed through AISG’s programmes can integrate cleanly into existing risk, model governance and audit frameworks — a critical requirement as financial institutions move from constrained pilots to enterprise-scale adoption.

Sandboxes and safe experimentation for AI in regulated environments

For banks and insurers, the ability to experiment safely is essential. AISG’s 100E programme provides structured opportunities to build and test AI models under real enterprise constraints, complementing — but not replacing — formal regulatory sandboxes established by MAS and other agencies. Liew noted that sandboxes play a critical role in surfacing regulatory, privacy or operational issues early: “You can innovate within a very safe sandbox environment and then see whether it works.” He added that countries without such mechanisms “can come and study how Singapore is doing it,” underscoring the value of structured experimentation in regulated sectors.

Each 100E project is co-funded by AISG, supported by the 100E AI Engineering team, and runs over a defined six-month sprint. Participating institutions contribute domain and compliance specialists, as well as IT staff, to ensure operational realism and the ability to take over the MVP and deploy it into production afterwards. Teams experiment safely, iterate and retest — gaining practical experience in explainability, drift management and data sensitivity before committing to deployment.

The programme is intentionally multidisciplinary. Some projects have been led by AI Engineers from non-technical backgrounds, demonstrating how domain expertise — from psychology to finance — can strengthen model design as solutions mature towards production.
Previously, AI Singapore collaborated with MAS on initiatives such as the AI in Finance Global Challenge, and through its 100 Experiments (100E) programme it has also worked with financial institutions including Sompo on applied AI projects in insurance and risk management. These programmes paired real problem statements from financial institutions with disciplined technical development, demonstrating AISG’s track record in creating safe pathways for testing and validation.

A connected ecosystem for trusted AI adoption

AISG’s role in Singapore’s AI ecosystem reflects the priorities amplified at SFF 2025: trust, accountability, continuous validation and the safe use of GenAI. Its frameworks, standards and experimentation pathways help institutions build the foundations required for AI systems that increasingly influence credit assessment, fraud detection and customer operations.

Beyond finance, AISG also supports the nation’s broader AI ecosystem through initiatives in healthcare, manufacturing, education and public-sector digitalisation. These programmes share the same foundations of talent development, applied research and governance. Liew noted that financial services, like healthcare, is among Singapore’s most highly regulated sectors — a context that shapes how AISG adapts its training and experimentation frameworks for industry needs.

For Liew, the strength of Singapore’s approach lies in balancing ambition with safeguards. As he puts it, “The brakes on a car are not there to stop you. They help you drive safer and faster… giving you safety guardrails.” With governance-aligned talent development, structured experimentation and sector-wide collaboration, AISG provides the financial industry with a disciplined path to scaling AI safely and sustainably.