Artificial intelligence (AI) is reshaping the way global banks operate, compete and protect their clients. For Standard Chartered, enterprise AI has entered a structured and deliberately governed phase, one in which discipline and focus are paramount. The bank is refining how AI is designed, tested and deployed at scale, ensuring that innovation advances in step with its duty of care. On the sidelines of Hong Kong FinTech Week, Alvaro Garrido, chief operating officer for technology and operations and chief data officer, described how Standard Chartered is maturing its AI estate by focusing on coherence, consistency and measurable progress.

Prioritising use cases that shape value, not volume

Garrido notes that 2025 marks a turning point for the industry. Banks that spent the early years of generative AI trialling isolated proofs of concept are now under pressure to demonstrate real business impact. “Everyone in financial services at the global level is moving from experimentation to execution,” he said.

For Standard Chartered, this has meant concentrating investment on use cases that materially change risk, productivity or client experience. The bank’s current portfolio centres on credit operations, financial crime, operational efficiency and front-office insights, the domains where operational impact is most visible. This shift reflects not only industry pressure to demonstrate tangible value from AI, but also the bank’s need to maintain consistent controls across highly regulated markets with differing regulatory expectations.

This refinement is supported by Standard Chartered’s enterprise AI strategy, which rests on a set of foundational pillars: a federated target operating model, in which central teams provide shared platforms, data foundations and controls while business units develop their own use cases on top of this common architecture; a prioritised pipeline of client-focused applications; unified data and infrastructure; workforce literacy and talent; targeted research; and security and safety embedded from the outset. By “federated”, Standard Chartered means that the centre sets the standards and provides the shared infrastructure, while business units own the applications they build on it. Together, these elements guide how AI is built and adopted across jurisdictions.

“We offer the standards and archetypes and do the heavy lifting,” Garrido said. Business teams then deploy on top of this shared architecture, gaining speed without fracturing governance or creating parallel platforms. The result is a scaling model rooted in consistency rather than fragmentation.

Governance designed for scale and consistency

As Standard Chartered’s AI footprint has grown, a structured decision-making framework has emerged. Garrido emphasises that innovation and oversight cannot be separated. “We are a bank. We provide trust,” he said, noting that each use case must withstand scrutiny long before it reaches production.

Standard Chartered now evaluates new models against three potential impact dimensions (loss of data, loss of funds or loss of service), supported by preventive and detective controls that define acceptable residual risk. The process is embedded from the design stage, guided by the bank’s Responsible AI principles and enterprise-level governance structures. A central AI Safety Council, composed of engineering, risk, compliance and business leaders, provides cross-functional oversight of the bank’s AI inventory. The council ensures that deployments meet consistent standards globally, avoiding local drift or inconsistent control environments.
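The bank has not published how this evaluation is implemented. Purely as a hypothetical illustration of a pre-deployment check along these lines, the sketch below scores a proposed use case against the three impact dimensions and tests whether residual risk stays within appetite once preventive and detective controls are applied. All names, scales and thresholds are invented for the example and are not Standard Chartered's actual framework.

```python
from dataclasses import dataclass

# Hypothetical illustration only: names, scales and thresholds are invented.

IMPACT_DIMENSIONS = ("loss_of_data", "loss_of_funds", "loss_of_service")

@dataclass
class ControlSet:
    preventive_effectiveness: float  # share of inherent risk removed before an event (0-1)
    detective_effectiveness: float   # share of remaining risk caught after the fact (0-1)

@dataclass
class UseCaseAssessment:
    name: str
    inherent_risk: dict[str, float]   # per-dimension inherent risk score (0-1)
    controls: dict[str, ControlSet]   # controls mapped to each dimension
    risk_appetite: float = 0.2        # maximum acceptable residual risk per dimension

    def residual_risk(self, dimension: str) -> float:
        """Risk left over after preventive and detective controls are applied."""
        c = self.controls[dimension]
        remaining = self.inherent_risk[dimension] * (1 - c.preventive_effectiveness)
        return remaining * (1 - c.detective_effectiveness)

    def within_appetite(self) -> bool:
        return all(self.residual_risk(d) <= self.risk_appetite for d in IMPACT_DIMENSIONS)

# Example: a document-summarisation model with strong data controls
assessment = UseCaseAssessment(
    name="client-document-summariser",
    inherent_risk={"loss_of_data": 0.6, "loss_of_funds": 0.1, "loss_of_service": 0.3},
    controls={
        "loss_of_data": ControlSet(0.8, 0.7),
        "loss_of_funds": ControlSet(0.5, 0.5),
        "loss_of_service": ControlSet(0.6, 0.5),
    },
)
print(assessment.within_appetite())  # True only if every residual score is within appetite
```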
Using disciplined AI to strengthen cyber defence

Standard Chartered also applies AI to cyber defence as part of its disciplined approach to design, testing and controlled deployment. As malicious actors adopt generative tools to automate attacks and distort data, the bank relies on governed AI models to detect abnormal access patterns and surface anomalies that may indicate early signs of fraud or misuse. These capabilities reflect a move from isolated, bespoke tools to standardised, near-real-time analytics that sit within the bank’s enterprise framework.

Automated code-analysis models embedded in the CI/CD pipelines identify vulnerabilities before they progress, and AI-driven simulation engines test controls under realistic attack conditions. These models learn through structured feedback loops approved through central oversight, reinforcing both detection quality and engineering standards. The result is shorter detection times, more resilient engineering practices and more capacity for teams to focus on complex investigations: outcomes consistent with the bank’s disciplined AI execution model.

Sharper measurement to cut through hype

To bring greater discipline to deployment, Standard Chartered relies on quantifiable indicators across risk, engineering and client engagement. In financial crime, false-positive and false-negative rates serve as core metrics. Cyber teams track detection and response times to determine whether new tools offer meaningful advantage. Engineering units monitor defect rates and model performance, ensuring that AI elevates code quality rather than introducing new weaknesses. On the client side, product teams run controlled A/B tests to evaluate whether AI-enhanced experiences outperform traditional channels. These feedback loops, Garrido says, are essential for distinguishing genuine value from noise (a simple illustration of how such rates are derived appears below).

AI solutions for both employees and clients

Standard Chartered launched SC GPT in early 2025, and it is now deployed across 41 markets and used by nearly 80,000 employees globally for tasks ranging from risk analysis to engineering support. A more customised, internally trained version is being developed to offer the “near-home experience” of consumer tools with enterprise-grade data protection, underpinned by strong governance. The bank is also extending AI to clients through an AI-powered FX video insight service with the London Stock Exchange Group, already live in Mainland China and expanding across Asia, demonstrating how it is applying AI safely across internal and external use cases.

A responsible AI future

Garrido notes that while no bank has yet established clear dominance in AI, differentiation will emerge through coherence and discipline rather than experimentation. He believes Standard Chartered stands out for its disciplined approach to data governance, its secure-by-design principles and the clear separation it maintains between infrastructure and business logic. Decisions must be anchored in whether an AI deployment makes the bank safer, more scalable and more sustainable over the long term. The bank’s federated operating model with consistent enterprise controls gives business units flexibility while ensuring responsible AI by default. Its integrated 360-degree framework brings together security, third-party risk and AI assessments, reducing fragmentation and accelerating decision-making.
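The article does not describe how these indicators are calculated. As a minimal sketch of the measurement discipline referred to above, the snippet below applies the standard confusion-matrix definitions of the false-positive and false-negative rates cited as core financial-crime metrics. The class, field names and figures are hypothetical.

```python
from dataclasses import dataclass

# Illustrative only: standard confusion-matrix definitions; data is made up.

@dataclass
class AlertOutcomes:
    true_positives: int    # alerts that turned out to be genuine financial-crime cases
    false_positives: int   # alerts raised on legitimate activity
    true_negatives: int    # legitimate activity correctly left un-flagged
    false_negatives: int   # genuine cases the model missed

    @property
    def false_positive_rate(self) -> float:
        return self.false_positives / (self.false_positives + self.true_negatives)

    @property
    def false_negative_rate(self) -> float:
        return self.false_negatives / (self.false_negatives + self.true_positives)

# Example month of screening outcomes (hypothetical numbers)
month = AlertOutcomes(true_positives=120, false_positives=900,
                      true_negatives=98_000, false_negatives=15)
print(f"FPR: {month.false_positive_rate:.3%}")  # share of legitimate activity wrongly flagged
print(f"FNR: {month.false_negative_rate:.3%}")  # share of genuine cases missed
```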
As Standard Chartered deepens its measurement culture, strengthens governance and consolidates its platforms, the bank is entering a robust and structured phase of AI adoption, one defined by clarity, cohesion and long-term, human-led accountability.