Temenos Community Forum (TCF) 2026 opened with a practical question banks already understand well: how to modernise critical banking systems without destabilising infrastructure that customers, operations teams and regulators depend on every day. The first day focused on progressive modernisation, composable architectures and software-as-a-service (SaaS) as ways to reduce the disruption traditionally associated with large-scale transformation. By the second day, the discussion had moved beyond the mechanics of modernisation towards a broader question: what happens when intelligence is embedded into banking platforms themselves rather than deployed as external tools alongside them. That reflects a wider industry transition as artificial intelligence (AI) moves from experimentation towards production deployment, where questions of governance, resilience and execution become materially more important.

Barb Morgan, chief product and technology officer, framed that transition around three operating domains: customer engagement, frontline interaction and back-office operations. Rather than positioning AI as a standalone capability, the architecture presented was one in which intelligence becomes embedded into how customers interact with banks, how staff make decisions and how operational workflows are executed. Sairam Rangachari, chief product officer, made clear that this architecture was being developed across two product directions: intelligent digital and intelligent core.

While the plenary session focused primarily on intelligent core, the broader framing matters because banks are unlikely to experience AI as a single infrastructure project. The implications extend across customer interaction, implementation workflows, operational resilience and enterprise operating models. The practical business question is therefore broader than product architecture.
Banks are already under pressure to modernise legacy environments, simplify operational complexity and improve responsiveness without weakening resilience or regulatory control. The issue is no longer whether AI can improve parts of banking operations, but whether institutions are structurally prepared for what production deployment actually requires.

Does conversational banking materially change digital engagement or simply the interface layer?

Morgan's framing begins at the customer and frontline level rather than inside the core, which is significant because for most institutions AI will first be experienced through interaction rather than infrastructure. The immediate question is whether conversational engagement materially changes how customers and staff interact with banking systems, or simply replaces one interface model with another.

Rangachari positioned this as a shift away from conventional software navigation, where users learn systems through menus, workflows and repeated interaction. The alternative being proposed is conversational engagement, where interaction happens through natural language but remains grounded in structured banking context rather than functioning as a generic chatbot experience. That distinction matters because banking interactions cannot rely on loosely contextual responses where customer outcomes, product rules or operational processes are involved.

The proposition is less about making interfaces feel more intuitive and more about whether conversational interaction can reliably surface relevant product logic, operational context and decision support within governed environments. At the customer level, the intended implication is reduced friction in routine servicing, information access and navigation across increasingly complex digital environments.
For frontline teams, the implications may be more operational, particularly where relationship managers, implementation teams and operations staff work across fragmented systems, multiple knowledge sources and complex workflows. The harder question is not whether conversational interfaces are more intuitive, but whether they remain sufficiently accurate, controlled and auditable once embedded into live banking environments. That question becomes more important as conversational interaction moves from low-risk navigation into workflows that influence customer outcomes or operational decisions.

AI becomes useful only when the banking architecture underneath can support it

If intelligent digital represents the interaction layer, intelligent core is the infrastructure proposition underneath it. Rohit Chauhan, chief technology officer at Temenos, and Rangachari made clear that their argument was not about AI as a standalone assistant sitting outside the banking environment, but about intelligence embedded into governed banking systems where business logic, workflows and operational controls already exist.

A central component of that architecture was what Chauhan described as a banking knowledge graph, positioned as a structured repository of banking domain knowledge built over 30 years of implementation and operations. He said it draws on product configurations, regulatory rules, workflow definitions, source code and operational structures across hundreds of institutions, creating a formal representation of banking logic rather than relying on generic probabilistic inference from large language models.

That distinction is operationally important. Generic AI models may be useful for experimentation, but production deployment inside regulated institutions depends on contextual accuracy grounded in banking-specific logic.
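To make that contrast concrete, a grounded lookup against a structured graph of banking logic might look like the minimal sketch below. The graph schema, node names and relations are purely illustrative assumptions for this article, not Temenos's actual knowledge graph.

```python
# Illustrative only: a toy "knowledge graph" of banking logic, where answers
# are assembled strictly from stored facts rather than generated.
# Every node name and relation here is an assumption for the sketch.
GRAPH = {
    ("mortgage_fixed_5y", "requires"):    ["affordability_check", "property_valuation"],
    ("mortgage_fixed_5y", "governed_by"): ["mortgage_conduct_rules"],
    ("affordability_check", "owned_by"):  ["underwriting"],
}

def related(node: str, relation: str) -> list[str]:
    """Return nodes linked to `node` by `relation`; empty if the fact is absent."""
    return GRAPH.get((node, relation), [])

def grounded_answer(product: str) -> dict:
    """Assemble an answer only from graph facts: missing knowledge stays
    visibly missing instead of being guessed, which is the point of
    grounding conversational responses in structured banking logic."""
    return {
        "product": product,
        "required_steps": related(product, "requires"),
        "regulation": related(product, "governed_by"),
    }
```

A query for an unknown product returns empty lists rather than a plausible-sounding fabrication — the property that separates grounded retrieval from generic probabilistic inference.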
A mortgage product, sanctions workflow or onboarding process cannot be treated as an abstract conversational prompt if outputs affect customer outcomes, regulatory obligations or operational decisions.

The architecture also included model context protocol (MCP), which Rangachari described as an open protocol for exchanging context across systems. In practical terms, this was presented as a way for systems to discover capabilities and to connect structured knowledge, conversational interfaces and external applications without relying on repeated bespoke integrations. For banks operating fragmented technology estates, interoperability remains a practical operational issue rather than simply an architectural preference.

The final architectural layer is the agentic framework that orchestrates AI agents inside governed workflows. Rangachari was explicit that these agents are intended for discrete deterministic tasks rather than unconstrained autonomous behaviour. He repeatedly emphasised that meaningful production deployment requires explainable, auditable and deterministic outcomes, with human oversight remaining central. He also said customer data is not used to train models without explicit agreement. The operating model is therefore less about open-ended experimentation and more about controlled deployment inside regulated operating environments.

Can implementation become less dependent on specialist bottlenecks?

One of the more practical propositions from the session concerned implementation rather than customer interaction. Chauhan framed the operational lifecycle around three familiar realities for banks running core platforms: install, run and upgrade. The implication is straightforward: if AI is to create measurable operational value, it needs to reduce friction in recurring operational burdens rather than simply add another layer of technology abstraction.
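The pattern Rangachari described — discrete deterministic tasks, an audit trail and a human approval gate in front of anything production-affecting — can be sketched minimally. Everything below (class names, the example task, the approval flow) is an illustrative assumption, not the actual agentic framework.

```python
# Illustrative sketch of "discrete deterministic tasks with human oversight":
# each task is a bounded, repeatable function call; every step is logged;
# production-affecting actions never execute without an explicit human decision.
from dataclasses import dataclass, field

@dataclass
class GovernedWorkflow:
    audit_log: list = field(default_factory=list)

    def run_task(self, name: str, fn, *args):
        """Run one deterministic task and record inputs and result for audit."""
        result = fn(*args)
        self.audit_log.append({"task": name, "inputs": args, "result": result})
        return result

    def gate(self, action: str, human_approved: bool) -> str:
        """A production change is blocked unless a human has approved it."""
        self.audit_log.append({"task": "approval_gate", "action": action,
                               "approved": human_approved})
        return "executed" if human_approved else "blocked"

wf = GovernedWorkflow()
fee_total = wf.run_task("sum_monthly_fees", sum, [12.0, 3.5, 4.5])  # deterministic step
outcome = wf.gate("apply_fee_change", human_approved=False)          # human declines
```

The audit log makes every step replayable, and the blocked outcome reflects the default posture described in the session: nothing reaches production on the agent's authority alone.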
Chauhan described an implementation model in which AI agents support requirement interpretation, build preparation, testing and deployment within an orchestrated workflow. The intended target is a longstanding banking problem: the heavy dependence on specialist interpretation between business requirements and technical execution, which often slows delivery, increases rework and concentrates knowledge in relatively narrow expert teams.

David Aguirre, head of product design, demonstrated this through Co-pilot for Workbench, positioned as an AI-assisted implementation environment rather than generic development tooling. The core proposition was not autonomous code generation, but reducing the manual translation between business intent and executable technical change while preserving explicit human control over production decisions. That distinction matters because implementation errors inside banking systems create immediate operational consequences. Aguirre was clear that human review and approval remain central, particularly where changes affect production environments. The intended value lies in accelerating structured implementation work, improving consistency and reducing dependency on highly specialised individual expertise rather than removing human accountability.

The practical question for banks is whether that materially changes implementation economics and delivery discipline in live environments. If structured AI support reduces variability between teams, accelerates testing preparation and shortens implementation cycles without weakening engineering control, the operational impact could be meaningful. Whether those gains translate consistently outside controlled demonstrations remains the more relevant business question.

Can operational resilience shift from reactive firefighting to pre-emptive intervention?

If implementation focuses on getting systems live, operational resilience focuses on keeping them running.
Aparna Natarajan, senior product manager, addressed that operational reality directly, shifting the discussion from implementation workflows towards the ongoing demands of production support in complex banking environments. Her focus was not on isolated incidents, but on whether AI can reduce the time between operational disruption, diagnosis and structured response.

In large banking environments, incidents rarely present as clean single-point failures. Symptoms often emerge across multiple systems, with diagnosis depending heavily on speed of correlation, evidence gathering and operational judgement. The operating proposition presented was that AI agents functioning continuously in the background may reduce some of that diagnostic burden by detecting anomalies, correlating related issues and assembling structured operational context before human teams begin investigation. The intended shift is from reactive diagnosis towards earlier intervention inside existing operational workflows.

A central operational claim was consistency. Human incident response often varies depending on the experience of the support engineer handling the issue, particularly where complex patterns need to be recognised quickly under pressure. The argument here is that structured AI support may reduce that variability by making evidence gathering and issue correlation more systematic.

The practical question for banks is whether that materially changes resilience management or simply accelerates diagnosis inside existing support models. Faster correlation and earlier escalation may improve resilience, but intervention authority, escalation thresholds and accountability remain operational decisions that institutions cannot delegate without clear governance.

Does production AI become a technology problem or an enterprise execution challenge?
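Before turning to that question, the correlation step Natarajan described — grouping related alerts into a single candidate incident before a human begins investigating — can be sketched with a simple time-window heuristic. The field names and window are assumptions for illustration; real correlation engines draw on far richer signals than timestamps alone.

```python
# Illustrative sketch: group alerts that arrive close together in time so a
# support engineer starts from one correlated bundle, not scattered raw events.
def correlate(alerts: list[dict], window_s: int = 60) -> list[list[dict]]:
    """Sort alerts by timestamp and split them into groups wherever the gap
    to the previous alert exceeds `window_s`; each group is one candidate incident."""
    groups: list[list[dict]] = []
    current: list[dict] = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        if current and alert["ts"] - current[-1]["ts"] > window_s:
            groups.append(current)
            current = []
        current.append(alert)
    if current:
        groups.append(current)
    return groups

alerts = [
    {"ts": 100, "system": "payments", "msg": "latency spike"},
    {"ts": 130, "system": "core",     "msg": "queue backlog"},
    {"ts": 900, "system": "channels", "msg": "login errors"},
]
incidents = correlate(alerts)  # payments + core correlate; channels stands alone
```

Even this crude grouping illustrates the claimed value: the engineer is handed one structured bundle per suspected incident, with the correlation already done, rather than three unrelated-looking alerts.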
Rangachari widened the discussion by bringing in external perspectives from Jochen Papenbrock, EMEA head of financial technology at NVIDIA, Shireesh Thota, corporate vice president for Azure Databases at Microsoft, and Sebastian Weier, executive partner leading AI, analytics and automation at IBM Consulting. The discussion moved beyond product architecture into the wider realities of production deployment.

Weier focused on the transition from proof-of-concept work to scaled deployment. His point was that organisations risk becoming distracted by rapidly evolving frontier technology while losing focus on the more practical questions of customer outcomes, workforce productivity, operating discipline and trust. For regulated institutions, scaling AI is not simply a capability question, but an operational governance issue.

Papenbrock focused on domain specificity and deployment flexibility. He argued that financial institutions need models shaped by domain knowledge rather than relying entirely on generic capabilities, while retaining flexibility around open-source ecosystems, interoperability and deployment choices across cloud and on-premise environments.

Thota brought the enterprise infrastructure perspective, though his remarks resist attribution of specific detailed claims. Even so, the discussion reinforced a broader operational point: production deployment extends beyond application capability into infrastructure choices, operating readiness and institutional execution discipline.

For banks, that distinction matters. AI may be introduced through customer interfaces, implementation tooling or operational support, but production deployment ultimately becomes an enterprise operating question rather than a narrow technology feature decision.

What separates compelling demonstrations from operational deployment?
The practical challenge for banks begins after the demonstrations end. The propositions presented across intelligent digital, intelligent core, implementation support and operational resilience all assume a level of institutional readiness that many banks may not yet possess. Conversational interfaces may appear intuitive, but their usefulness depends on the quality of the underlying knowledge architecture. Implementation acceleration may reduce specialist bottlenecks, but only if institutions can preserve engineering discipline while trusting structured AI-supported workflows. Operational support may accelerate diagnosis and escalation, but governance boundaries still determine what systems are permitted to do.

Banks therefore face a more fundamental readiness question. Fragmented data environments, undocumented workflow exceptions, excessive customisation, brittle integrations and inconsistent process ownership may become more visible once intelligence is embedded into live operational systems. AI may expose institutional complexity as much as it helps reduce it.

Sequencing also matters. Internal implementation support, testing preparation, documentation workflows and operational monitoring may represent more realistic early deployment pathways than higher-risk customer-facing decision environments. Institutions are more likely to begin where oversight is easier to preserve and operational consequences are more contained.

The broader transition from experimentation to production will depend less on demonstrations than on whether banks can align architecture, governance, operating discipline and organisational readiness closely enough to deploy intelligence without introducing a different layer of complexity.