
The end of buzzwords - Financial leaders define the roadmap for generative AI and LLMs in banking

As generative AI and LLMs move from lab to bank floor, experts from NatWest, Kata.ai and Human AI called for a pragmatic shift in how financial institutions design, deploy and govern these systems—highlighting on-device privacy, agentic workflows and optimisation as the new frontiers of innovation.

Jakarta, 22 May 2025 – The Asian Banker Summit 2025 concluded with a bold session on the transformative potential of generative artificial intelligence (Gen-AI) and large language models (LLMs), moving the conversation far beyond surface-level hype. As financial institutions grapple with how to adopt and govern AI, speakers across the final keynote underscored a growing consensus: AI is no longer a supporting tool — it is fast becoming a primary interface, intelligence layer and optimisation engine across banking operations.

Chaired by TAB Global founder and chairman Emmanuel Daniel and advisor Gordian Gaeta, the session featured Maja Pantic, former generative AI research director at Meta and now chief AI research officer at NatWest Group; Wahyu Wrehasnaya, chief operating officer and co-founder of Jakarta-based Kata.ai; and Steve Monaghan, executive chairman of Human AI and former chief innovation officer at DBS Group.

From chatbots to agentic AI: the next leap

Opening the discussion, Daniel emphasised that AI marks a fundamental break from the past: “Everything that you have learned in the last 10 years… the rules are being rewritten dramatically,” he said. “AI collapses everything we do at the back end and hands the front end over to the customer.”

Pantic, who recently joined NatWest from Meta, traced the evolution of LLMs back to their true origin: “The crucial year was not 2022. It was 2017, when the transformer architecture was released by eight people at Google — none of whom stayed,” she explained. “Seven went on to form billion-dollar startups; the eighth went to OpenAI.”

She credited open-source models like Meta’s LLaMA as catalysts for industry-wide innovation: “The whole world works on this model. It’s why LLaMA is one of the most progressive and fastest-evolving large language models.”

But she warned that the field’s greatest bottleneck is not intellectual but energy: “Training these models requires something like 20 gigawatts. That’s equivalent to 20 nuclear power plants. Nobody is building that.”

She advocated a hybrid approach — using traditional retrieval-based methods for simple queries and reserving LLMs for complex reasoning — both for efficiency and sustainability.

“For simple queries, you use pure retrieval. For reasoning beyond simple search, that’s when you apply LLMs,” she said.
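The routing Pantic describes can be sketched in a few lines. This is purely illustrative: the keyword heuristic, `retrieve`-style lookup and stubbed `call_llm` function are all hypothetical stand-ins, not any system the panellists described.

```python
# Illustrative sketch of a hybrid approach: route simple queries to plain
# retrieval and reserve an LLM for queries that need reasoning.
# The cue list and the call_llm stub are hypothetical.

REASONING_CUES = {"why", "explain", "compare", "should", "what if"}

def is_complex(query: str) -> bool:
    """Crude heuristic: treat queries containing reasoning cues as complex."""
    lowered = query.lower()
    return any(cue in lowered for cue in REASONING_CUES)

def call_llm(query: str) -> str:
    # Stand-in for an actual LLM call.
    return f"[LLM reasoning over: {query}]"

def answer(query: str, index: dict) -> str:
    if not is_complex(query):
        # Pure retrieval: cheap, fast, no model inference needed.
        return index.get(query.lower(), "No match found")
    # Complex query: hand off to the (stubbed) LLM.
    return call_llm(query)

index = {"branch opening hours": "Mon-Fri, 9:00-17:00"}
print(answer("branch opening hours", index))             # pure retrieval
print(answer("why did my interest rate change", index))  # routed to the LLM
```

In a production system the routing decision would itself be learned or cost-aware, but the split is the same: lookup where lookup suffices, inference only where it pays for itself.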

Rethinking banking architecture

Kata.ai’s Wrehasnaya reflected on a decade of deploying AI in highly regulated sectors like finance, healthcare and telecommunications.

“In the early days, we built our own NLP (natural language processing) stack from scratch just to understand the Indonesian context,” he shared. “Now, with LLMs available, the focus is on contextual implementation.”

While banks are incrementally integrating LLMs, he argued the real opportunity lies in agentic AI — a system of autonomous agents performing multi-step workflows across tasks like onboarding, know-your-customer (KYC) and underwriting.

“Agentic AI is like your own internal team. It does not just respond — it completes multi-step processes,” he said.
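The multi-step quality Wrehasnaya points to can be shown with a toy onboarding pipeline. The step names, the applicant fields and the pass/fail checks below are all hypothetical; a real agent would call out to document, KYC and core-banking services at each step.

```python
# Minimal sketch of an agentic workflow: an ordered chain of steps
# (here, a toy onboarding flow) that runs to completion or stops at
# the first failure. All steps and checks are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Applicant:
    name: str
    id_document: str
    completed_steps: list = field(default_factory=list)

def collect_documents(a: Applicant) -> bool:
    return bool(a.id_document)

def run_kyc_check(a: Applicant) -> bool:
    # Stand-in for a real KYC screen against watchlists.
    return a.name.lower() not in {"blocked person"}

def open_account(a: Applicant) -> bool:
    return True

ONBOARDING_STEPS = [
    ("collect_documents", collect_documents),
    ("kyc_check", run_kyc_check),
    ("open_account", open_account),
]

def onboard(a: Applicant) -> bool:
    """Run every step in order; stop at the first failure."""
    for name, step in ONBOARDING_STEPS:
        if not step(a):
            return False
        a.completed_steps.append(name)
    return True

applicant = Applicant(name="Dewi", id_document="KTP-123")
print(onboard(applicant), applicant.completed_steps)
```

The point of the pattern is the orchestration, not any single step: the agent owns the whole sequence, so the customer interacts with one interface while several back-office tasks complete behind it.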

Pantic affirmed the approach: “You do not use LLMs for everything. For efficiency and privacy, it is crucial to run lightweight, decentralised models on-device. Centralised systems are a security risk — they are a single point of failure.”

She also introduced a vision of hyper-personalised banking assistants, fully localised to each user’s device and privacy-preserving by design: “Your biometric data, your financial profile, everything stays on your phone. It does not go anywhere else.”

Unlocking latent value in data and systems

Monaghan challenged the industry to confront its outdated paradigms: “Banks are still stuck in facilitation. But the magic is in optimisation — how do you help customers get ahead?”

He described how banks squander capital in legacy systems, citing simple fixes that could yield huge benefits: “If you pay your mortgage in real time instead of monthly, you save 20% in interest — that is like a 20% increase in pension savings.”
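The mechanism behind Monaghan’s claim is straightforward amortisation: paying more often shrinks the principal sooner, so less interest accrues. The loan figures below are invented for illustration, and the exact saving depends entirely on rate, term and payment schedule — the 20% figure is his, not this sketch’s.

```python
# Rough illustration: more frequent mortgage payments reduce total interest,
# because the principal shrinks sooner. All figures are hypothetical.

def total_interest(principal: float, annual_rate: float,
                   payment: float, periods_per_year: int) -> float:
    """Simulate simple amortisation; return total interest paid."""
    rate = annual_rate / periods_per_year
    balance, interest_paid = principal, 0.0
    while balance > 0:
        interest = balance * rate
        interest_paid += interest
        balance = balance + interest - payment
    return interest_paid

principal, annual_rate = 300_000.0, 0.05
monthly = total_interest(principal, annual_rate,
                         payment=2_000.0, periods_per_year=12)
# Same annual cash flow, paid weekly instead of monthly.
weekly = total_interest(principal, annual_rate,
                        payment=2_000.0 * 12 / 52, periods_per_year=52)
print(f"monthly: {monthly:,.0f}  weekly: {weekly:,.0f}")
```

Even with identical annual outflow, the weekly schedule accrues less interest over the life of the loan; genuinely real-time payments would push the same effect further.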

He argued that financial institutions treat the decades of data they hold as a regulatory cost when it should be a strategic asset: “We had the data. We made it valuable by putting it in the customer’s hands,” he said of an early DBS initiative that used computer vision to pull up mortgage offers on sight of a property.

Data ownership and trust as the next frontier

Pantic highlighted how banks can reclaim relevance in identity management — if they act: “We trust banks with our life’s savings. Why not with our identity?” she asked. “They could be the ones verifying people in digital worlds — but they have not thought about it yet.”

She sees the future of banking as moving decisively towards on-device architecture, agentic interfaces and regulatory-compliant data sovereignty: “Everything will be personalised and run on-device. The user will control what the bank sees — and what it does not,” she remarked.

Wrehasnaya noted that in Indonesia, the entry point is still basic: “Half of adults are unbanked — but they have WhatsApp. So, we are helping banks reach them through messaging, gold savings, micro-investments… anything to build trust.”

AI, personalisation and the future of work

In their final remarks, the panellists reflected on the broader implications of generative AI.

“For the first time,” Monaghan said, “I can talk to a customer not in my language, but in theirs. That is the miracle of Gen-AI — communicating meaningfully, at scale.”

Pantic added, “Universities won’t exist as they do today. Kids will learn with personal AI tutors. We will still pursue happiness — but with fewer people, and more productivity per person.”

Wrehasnaya concluded with a pragmatic vision, “Think of AI not just as automation — but as prediction. The next leap is knowing what users want before they ask.”

Institutions must put Gen-AI in the hands of experts

Asked where generative AI should sit within an organisation, Monaghan was clear: “In the hands of domain experts. They understand the context. Technologists will be there to add guardrails.”

Pantic agreed, but warned: “Guardrails are critical. If you do not explicitly tell the model what not to say, it will say it.”

Wrehasnaya added that generative AI should span internal and external domains — from customer-facing chatbots to enterprise operations — with a clear ethical and data framework.

Final reflections: from buzzwords to systems change

Closing the session, Daniel summarised the shift: “Finance is not business as usual anymore. AI is changing everything — products, engagement, trust.”

This was not a session about hypothetical futures but about practical blueprints. AI in finance is not coming — it is already here. What remains is how institutions choose to build, govern and scale it.