Banks are entering a new phase of AI adoption. Across the industry, institutions are moving beyond isolated proofs of concept towards broader deployment of generative artificial intelligence (GenAI), multimodal AI and decision-support tools across customer engagement, operations, risk management and internal productivity. This is happening at a time when banking itself is becoming more embedded in daily life and connected ecosystems, increasing the need for seamless journeys across customer interaction and bank operations.

Yet progress remains uneven, and the underlying constraints are becoming clearer. As banks move toward AI-driven operating models, legacy cores and fragmented infrastructure are becoming major barriers, with many institutions still grappling with siloed data environments, uneven process integration, cybersecurity and regulatory risks, and gaps in governance, accountability and operational readiness.

In a recent discussion, Balaji Rajagopalan, chief technology officer at State Bank of India (SBI), India’s largest public sector bank, said the next phase of AI adoption will be defined not by the number of use cases, but by whether banks can be redesigned to support intelligence at scale. That means rethinking architecture, integration, cloud strategy, data quality, governance and execution as a unified effort rather than a standalone AI programme.

Scaling AI from the problem, not the model

AI adoption is accelerating across the industry, but the challenge is no longer where AI can be used; it is which business problems are worth solving first and what outcomes justify enterprise investment, as the gap between pilots and scaled deployment exposes weaknesses in prioritisation, process design and execution. Rajagopalan argues that banks cannot scale AI by expanding pilots alone; success depends on anchoring AI to clear business problems and embedding it into the institution’s operating model.
“Fundamentally, the need for AI is all about improving customer experience and overall efficiency for the bank,” he said. He cautioned against a technology-led approach for its own sake: “The primary intent is not that the industry is talking about AI and therefore we are also doing it. The starting point has to be the problem the bank is trying to solve.” In his view, the focus has shifted from proving AI works to determining where it can deliver meaningful value across customer service, operational efficiency and decision support.

That problem-first approach becomes critical as banks move from isolated use cases to enterprise-wide deployment, with Rajagopalan noting that many institutions still underestimate the operational complexity behind customer fulfilment and process execution. “I may have a fantastic digital platform but are my back-office operations and underlying processes optimised?” he said. He added that without visibility across the full process, banks will struggle to meet the instant or near real-time service expectations that are increasingly shaping customer behaviour.

Why architecture limits scale

For many banks, the real barrier to scaling AI is not model capability but whether their underlying technology estate can support intelligence across the institution. Rajagopalan emphasises that large banks need clearer separation across core system layers to enable AI to operate reliably at scale. “There has to be clear segregation of layers,” he said. “The system of engagement is meant only for engagement with customers. The system of record, like core systems, should have only the data that is persisted there, such as customer data and account data being the primary information. There should be systems of integration with strong application programming interfaces (APIs) for a unified services platform and a system for intelligence.” That framing shifts the AI discussion from tools to institutional design.
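The layered separation he describes can be sketched in code. This is a minimal, illustrative model only, with hypothetical class and method names, not SBI’s actual systems: each layer exposes a narrow interface, and the engagement and intelligence layers reach data only through the integration layer’s APIs, never the core directly.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """System of record holds only persisted customer and account data."""
    account_id: str
    customer_id: str
    balance: float

class SystemOfRecord:
    def __init__(self):
        self._accounts = {}
    def get_account(self, account_id: str) -> Account:
        return self._accounts[account_id]
    def put_account(self, account: Account) -> None:
        self._accounts[account.account_id] = account

class IntegrationLayer:
    """Unified services platform: the only path into the system of record."""
    def __init__(self, record: SystemOfRecord):
        self._record = record
    def balance_api(self, account_id: str) -> float:  # a 'strong API'
        return self._record.get_account(account_id).balance

class IntelligenceLayer:
    """Models consume data only via the integration APIs."""
    def __init__(self, integration: IntegrationLayer):
        self._integration = integration
    def low_balance_alert(self, account_id: str, threshold: float) -> bool:
        return self._integration.balance_api(account_id) < threshold

class SystemOfEngagement:
    """Customer-facing channel: talks to integration and intelligence, not the core."""
    def __init__(self, integration: IntegrationLayer, intelligence: IntelligenceLayer):
        self._integration = integration
        self._intelligence = intelligence
    def show_balance(self, account_id: str) -> str:
        balance = self._integration.balance_api(account_id)
        note = " (low balance)" if self._intelligence.low_balance_alert(account_id, 100.0) else ""
        return f"Balance: {balance:.2f}{note}"
```

The point of the sketch is the dependency direction: nothing above the integration layer holds a reference to the core, so the intelligence layer can be changed or scaled without touching the system of record.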
In his view, intelligence cannot simply be layered onto fragmented systems; it depends on whether the bank has built the underlying architecture for integration, control and reuse. He also linked this directly to technology modernisation. “You need to have end-to-end integrations. Strong core systems capabilities and the modernisation of infrastructure and applications are extremely crucial to make sure that your systems are ready for getting the true benefit of AI,” he said.

Infrastructure, cloud and data at the core of AI strategy

As AI workloads grow more demanding and more deeply embedded in core processes, infrastructure, cloud design and data quality are shifting from supporting roles to strategic priorities. Future-readiness is no longer about adding capacity, but about designing infrastructure, applications and data environments for scale. Rajagopalan pointed to growing adoption of microservices, scalable APIs, modular architectures and cloud-ready core environments across large banks. “It is not about infinitely adding compute and storage. The question is whether your applications are designed to handle scale,” he said. He added that “converting the core systems into a truly cloud-native architecture, being mindful about the layers and API scale, and demonstrating that in a private cloud gives the advantage of being ready for the future at any time.”

He extends the same logic to physical infrastructure. “Data centres also need to be rethought. AI, for example, requires a different type of power and cooling. That means traditional data centre design will not sustain in the future,” he said. On data, he noted that India’s public digital infrastructure has helped banks improve customer uniqueness and legacy data quality. “There is a continuous focus on the data quality index, and all the large banks have significantly improved legacy data in terms of clean-up,” he said.
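A data quality index of the kind he mentions can be as simple as the share of field-level checks that legacy records pass. The checks and scoring below are illustrative assumptions for the sketch, not the index Indian banks actually report.

```python
import re

# Hypothetical completeness/validity checks on legacy customer records.
CHECKS = {
    "has_mobile":  lambda r: bool(re.fullmatch(r"\d{10}", r.get("mobile", ""))),
    "has_pan":     lambda r: bool(re.fullmatch(r"[A-Z]{5}\d{4}[A-Z]", r.get("pan", ""))),
    "has_address": lambda r: len(r.get("address", "").strip()) > 0,
}

def data_quality_index(records: list[dict]) -> float:
    """Share of (record, check) pairs that pass, as a 0-100 score."""
    if not records:
        return 0.0
    passed = sum(check(r) for r in records for check in CHECKS.values())
    return 100.0 * passed / (len(records) * len(CHECKS))
```

Tracking such a score over time is what turns “clean-up” from a one-off project into the continuous focus he describes.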
Across the region, banks are steadily shifting to cloud to support greater scalability, flexibility and efficiency in infrastructure and processing environments. Cloud, in his view, is part of this transition, but must be used selectively and with discipline. “If you really want to explore conversational AI for large banks, you should definitely go for some kind of software as a service (SaaS)-based offering,” he said, particularly in areas such as language and multimodal capabilities. At the same time, he stressed the need for caution in how SaaS layers interact with a bank’s own tenant environment. Regulatory, data security and cyber concerns, he added, remain more important than cost alone.

Risks become more structural as AI scales

As banks widen AI deployment, the risks shift from isolated technology issues to broader questions of software discipline, cyber resilience, data control and operational exposure. Rajagopalan argues the issue is no longer whether a model works, but whether the bank’s infrastructure, software stack and control environment are mature enough to support it safely. “From an investment perspective, most organisations are ready to invest. The fundamental concern is security. Are we secure? Will there be data leakage?” he said. He also stressed that infrastructure layers must be carefully managed even in cloud environments. In public cloud environments, he noted, fundamentals such as DevSecOps, development lifecycle management and tightly controlled internet access remain essential.

One example he cited relates to open-source discipline. In an earlier project, a technology partner proposed using 2,000 open-source components for a conversational AI and document AI implementation. “I started questioning why they needed so many open sources. Can you justify every open source?” he said. The number was eventually reduced to fewer than 200. “Every open-source component comes with risk,” he added.
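That kind of discipline can be made mechanical: every dependency must appear on an approved list with a written justification, and anything without one is flagged before it ships. The sketch below is illustrative; the file format and approval scheme are hypothetical, not a specific bank's tooling.

```python
def audit_dependencies(requirements: list[str], approved: dict[str, str]) -> list[str]:
    """Return dependency names that have no recorded justification.

    `requirements` holds pinned entries like "requests==2.31.0";
    `approved` maps a dependency name to its written justification.
    """
    unjustified = []
    for line in requirements:
        name = line.split("==")[0].strip().lower()
        if not name or name.startswith("#"):
            continue                      # skip blanks and comments
        if not approved.get(name):        # missing or empty justification
            unjustified.append(name)
    return unjustified
```

Run in a build pipeline, a non-empty result blocks the release, which is one way to operationalise “can you justify every open source?”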
Increasing cybersecurity risks and more sophisticated threats have pushed banks to adopt a multi-layered control model spanning anti-distributed denial-of-service (DDoS) systems, endpoint protections, channels, integrations with third parties and regulators, and continuous monitoring across all touchpoints. Rajagopalan places this in a broader cyber context. “When it comes to cybersecurity, it is not only about AI. It starts from endpoint protection. It starts from your data centre,” he said. What changes with AI, however, is the need to isolate the intelligence layer more carefully and ensure that public-cloud-based services do not create new exposure into protected environments.

Responsible AI must move from principle to operating model

Once AI moves closer to decision-making and regulated processes, broad principles are no longer enough and banks need operating frameworks that can be monitored, tested and audited. Rajagopalan offers one of his most structured arguments on responsible AI, saying that as deployment widens, banks require clearer assurance frameworks rather than high-level principles alone. “In terms of AI assurance, there are seven key pillars,” he said. “These include model performance, explainability and transparency, fairness and non-bias, data governance and lineage, security and resilience, human oversight and accountability, and regulatory compliance and auditability.”

Rajagopalan argues that these are not abstract governance ideas but operating requirements. He stressed the need to explain model decisions to regulators, customers and auditors and to maintain accountability in business functions even where AI is embedded. This reflects his wider view that banks cannot separate AI deployment from governance, controls and institutional responsibility.
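One way to turn the seven pillars into an operating requirement rather than a principle is a release gate: a model cannot go live until every pillar has passing evidence. The gate below is a minimal sketch under that assumption; the evidence flags are hypothetical, not a real assurance framework.

```python
# The seven AI assurance pillars, as listed by Rajagopalan.
PILLARS = [
    "model performance",
    "explainability and transparency",
    "fairness and non-bias",
    "data governance and lineage",
    "security and resilience",
    "human oversight and accountability",
    "regulatory compliance and auditability",
]

def release_gate(evidence: dict[str, bool]) -> tuple[bool, list[str]]:
    """Approve a model release only if every pillar has passing evidence.

    Returns (approved, gaps) so reviewers can see exactly which
    pillars are blocking and route remediation accordingly.
    """
    gaps = [pillar for pillar in PILLARS if not evidence.get(pillar, False)]
    return (len(gaps) == 0, gaps)
```

Because the gate lists its gaps, the output doubles as the audit trail he says regulators and auditors increasingly expect.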
Regulators across major banking markets are also tightening AI governance requirements, with a stronger focus on fairness, accountability, transparency, resilience and controlled deployment in financial services.

Human oversight remains central as AI advances

Even as AI becomes more accurate and versatile, most banks are still focused on strengthening, not replacing, human decision-making. Rajagopalan does not advocate immediate autonomy. “Even if a decision is taken by AI, the organisation remains the final decision-maker. I am responsible for that decision being taken,” he said. He argued that AI should first serve as a support layer for human judgment, improving efficiency and consistency while preserving human accountability. “Look at AI as a virtual assistant to start with,” he said. “I do not think you will get 100% accuracy in AI, which is very unlikely. Explore what assistance capabilities you can build.”

He notes that model capability has improved significantly with GenAI and multimodal AI. “Four or five years back, it used to take almost five to six months to reach 40% to 50% accuracy, even for a one-page structured document. But with GenAI and multimodal AI capabilities, in the first month you may get 60% to 70%,” he said. Despite this, he still argues for human-in-the-loop as part of the operating model. He illustrated this through a trade finance scrutiny example from earlier in his career, where AI-supported discrepancy detection reduced manual effort across multi-page, multi-department cases governed by more than 800 rules. “Imagine it like giving yourself ten virtual assistants. My life becomes easier because I can look at the specific areas that matter, instead of blindly checking every point one by one,” he said. He added that banks need to keep humans in the loop, closely monitor accuracy, and be deliberate about when to automate.
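The human-in-the-loop pattern he describes is commonly implemented as confidence-based routing: the model surfaces likely discrepancies, but anything below a threshold goes to a human reviewer rather than being auto-decided. A minimal sketch, with an illustrative threshold and field names that are assumptions, not a real trade finance system:

```python
# Hypothetical cut-off: only very confident findings are handled automatically.
AUTO_CLEAR_CONFIDENCE = 0.95

def route_findings(findings: list[dict]) -> dict[str, list[dict]]:
    """Split model findings into an auto-handled queue and a human-review queue.

    Each finding is a dict with at least a "confidence" score in [0, 1].
    """
    queues = {"auto": [], "human_review": []}
    for finding in findings:
        if finding["confidence"] >= AUTO_CLEAR_CONFIDENCE:
            queues["auto"].append(finding)
        else:
            queues["human_review"].append(finding)  # the human stays accountable
    return queues
```

Tuning the threshold is the “be deliberate about when to automate” decision: raising it sends more work to humans, lowering it automates more, and monitoring accuracy over time tells you which way to move it.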
Execution across the institution is the real challenge

The gap between a successful pilot and enterprise-wide adoption is usually determined less by technology than by an institution’s ability to execute across functions, processes and teams. Rajagopalan argues that while strategy and architecture define the direction, execution determines whether banks can scale AI. This requires moving beyond technology teams and redesigning how institutions work across operations, business, legal, compliance and information technology (IT). “The scoping and the problem statement should be top-down. But when it comes to implementation, it is a bottom-up approach,” he said. He also stressed that “AI is not just a technology task”, adding that “the moment teams engage with the shop floor, you know the reality”.

That is why he emphasises cross-functional teams spanning branches, legal, compliance, business and IT, with cross-functional governance and iterative execution over one-off transformation programmes. Banks may already have steering committees, scope reviews, risk assessments and success metrics in place, but real progress depends on whether teams can translate these into enterprise capability. “It is not a one-step process. It is an iterative process. You take the learnings from one platform and then make it enterprise-wide,” he said.

At the same time, Rajagopalan suggests that scaling AI also requires a culture that supports experimentation alongside operational discipline. In his framing, banks must balance three priorities: run the bank, growth and transformation. Transformation is where institutions test new ideas, run proofs of concept, learn from what works and move on quickly from what does not. Without this iterative loop of learning and course correction, sustained improvement in AI capability is unlikely. The same logic applies to talent and organisational readiness.
Rajagopalan argues that banks need people who understand process, systems and operational risk, not only technical AI skills. “You start questioning the underlying process. Why do I need this?” he said, referring to workflow mapping across customer and bank journeys. Banks need a common platform that brings together technology teams, business analysis and an understanding of system performance. The bottom-up approach, he added, is what helps banks achieve stronger value creation because it reflects operational reality rather than abstract requirements. He also stressed the need for a “mandated learning path for every role”, covering technical, process and domain skills, with continuous upskilling supported by centres of excellence and external partners where needed.

Customer journeys become more contextual and continuous

As customer journeys become more digital and more fragmented across channels, banks face growing pressure to deliver continuous, contextual, real-time service. Rajagopalan also sees the AI shift reshaping customer engagement. As AI expands into customer-facing journeys, the opportunity extends beyond personalisation to enabling continuity across channels and assistance that reflects the customer’s context, urgency and preferences. “Language is becoming another barrier. Especially in India, we have 22 languages,” he said, noting that banks are building multilingual capabilities into digital channels.

He also observed that while many customers want end-to-end digital service, others still prefer branch interaction for parts of the journey, particularly onboarding and know your customer (KYC). “When it comes to interactions, people expect some kind of assistance to fulfil their transactions. They are looking for contextual customer service based on the type and segment of customer,” he said. He linked this to continuity across channels, including whether branch staff can pick up the context of a customer’s previous interaction.
That, in his view, is how AI should improve service: by making journeys more coherent and more responsive, rather than simply more digital. He applies the same pragmatism to hyper-personalisation. “Hyper-personalisation is not only about the individual. It can also be about a segment of customers,” he said, suggesting banks should use ten to fifteen meaningful attributes as persona builders and configure relevant products, dashboards and service models around those, rather than creating fully bespoke systems for each individual. At the same time, he stressed that privacy and consent remain central. “The Digital Personal Data Protection (DPDP) Act says that customer consent should be taken before data is consumed,” he said, advocating for centralised consent management and more disciplined use of internal and external data.

Preparing for the shift to more autonomous systems

The next challenge for banks will go beyond adopting more AI tools, towards preparing for advanced orchestration, autonomous workflows and tighter human-machine coordination. Looking ahead, Rajagopalan said the industry is moving from narrow applications to a deeper operating shift, with multimodal workflows and increasingly agentic systems. This will expand the opportunity but also raise the bar for architecture, accountability and governance. “It is not enough just to say that I have implemented GenAI. What is the business model?” he said. He added that success should be measured by outcomes such as cost reduction, elimination of manual issues and improved performance, not activity alone.

He also stressed that banks will need to rethink how work is divided between humans and systems. “People have to realise what work a human has to spend time on, and what work a system has to spend time on,” he said, noting this distinction will become even more important as AI systems become more autonomous.
Over the next three to five years, the more important question may not be how many AI tools banks deploy, but whether they can build the institutional maturity to govern more capable systems without losing control, accountability or resilience. As banks move towards more advanced orchestration and increasingly autonomous workflows, the institutions best placed to benefit are likely to be those that strengthen foundations early: redesigning architecture, tightening data discipline, embedding stronger governance and preserving meaningful human oversight as capability advances.

Balaji Rajagopalan is a featured speaker at The Asian Banker Summit 2026, taking place May 13-14 in Kuala Lumpur.