Saviynt argues machine identity governance must evolve for the age of AI agents

As machine identities outpace human ones, Dan Mountstephen, senior vice president for APJ at Saviynt, says compliance-led controls are insufficient and calls for a consolidated governance fabric to manage autonomous AI agents across enterprise supply chains.

In most banks, the definition of a privileged user has widened beyond information technology (IT) administrators with broad system control to take in a much wider range of roles. Privileged users sit at the centre of identity governance because they hold high-risk permissions and can directly affect core financial systems. Today, that category may include finance executives handling sensitive ledgers, third-party contractors maintaining critical systems or autonomous artificial intelligence (AI) agents.

As automation becomes more embedded in operations, the number and type of identities requiring close oversight have increased, complicating how organisations manage and govern access. Against this backdrop, Dan Mountstephen, senior vice president for Asia Pacific and Japan (APJ) at Saviynt — a cloud-native identity governance and access management provider operating in regulated environments — discussed the growing imbalance between human and machine identities, the operational risks linked to fragmented identity tooling, and the need for governance frameworks that can extend to autonomous software agents.

Mountstephen said many organisations are reaching the limits of manual identity processes used to track system permissions. He cited findings indicating that companies now manage an average of 82 machine identities for every employee, a figure expected to grow as software agents become more prevalent. A machine identity is a digital credential used by software scripts or automated tools to log into systems, similar to a username and password, but held by a non-human actor.

These identities include system accounts, service accounts, application programming interface (API) keys and software agents that proliferate as banks digitise operations and expand cloud usage. “The ratio between non-human and human identities continues to increase, and these traditional platforms were not built to manage that,” Mountstephen said, noting that some organisations are still working to establish appropriate strategies.
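To make the distinction concrete, the sketch below shows the kind of non-human credential Mountstephen describes: a service account exchanging a client ID and secret for a short-lived token, rather than a person signing in with a username and password. It is a minimal illustration only; the token endpoint, environment variable names and scope are hypothetical, not a real Saviynt or bank interface.

```python
import os
import requests

# Illustrative sketch of a machine identity: a service account authenticating
# with a client ID and secret (an OAuth2 client-credentials flow) instead of a
# human username and password. The URL and scope are hypothetical placeholders.
TOKEN_URL = "https://auth.example-bank.internal/oauth2/token"  # hypothetical endpoint

def get_service_token() -> str:
    """Exchange the service account's credentials for a short-lived access token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": os.environ["SVC_CLIENT_ID"],        # non-human credential
            "client_secret": os.environ["SVC_CLIENT_SECRET"],
            "scope": "ledger.read",                           # least-privilege scope
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

if __name__ == "__main__":
    token = get_service_token()
    # The token, not a person, is the identity that downstream systems see.
    print(f"service token acquired ({len(token)} chars)")
```

Every script, pipeline or agent holding a credential like this is one more identity that governance tooling has to discover, scope and review.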

Why limited visibility creates operational vulnerabilities

Mountstephen highlighted uneven preparedness for machine-based system access. Saviynt noted that 92% of surveyed organisations lack comprehensive visibility into AI or automated identities, making it difficult to determine what agents exist, the permissions they hold or how they behave in live environments. Limited visibility, he said, introduces operational vulnerabilities because security teams cannot govern access they cannot observe.

Although identity-visibility tools are emerging to centralise access information across applications and cloud environments, some firms still struggle to maintain a complete picture as machine identities expand. The rise of shadow AI — informal or unregulated use of autonomous or generative tools — adds further complexity. Risk is highest when employees experiment with AI tools and automated agents interpret instructions in unexpected ways, raising the likelihood of unintended data exposure or unapproved system actions.

“AI agents have a high level of autonomy and access to very sensitive data, so we need to think very seriously about how we secure them,” Mountstephen said. This remains a particular concern for financial institutions, especially where generative AI (GenAI) interacts with regulated information.

Fragmented toolsets and operational inefficiencies

As these risks accumulate, attention is shifting toward the operational complexity created by distributed identity tooling and inconsistent visibility across environments. Mountstephen cited an example from a Hong Kong financial institution, where separate systems for identity governance, privileged access management and single sign-on created duplicated data and inconsistencies, ultimately slowing incident response. According to Saviynt, many institutions with multi-jurisdictional operations are reviewing whether consolidating identity systems could reduce operational complexity.

Mountstephen also discussed regulatory requirements around cloud architectures. Saviynt’s tenant isolation approach, which separates each customer’s data environment at the cloud platform level, is designed to reduce cross-access between tenants. Banks evaluate this type of architecture alongside operational complexity, cloud service provider capabilities and market-specific regulatory requirements, which results in varying levels of cloud deployment across institutions.
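One common way tenant isolation is implemented, sketched below purely as an assumption and not as a description of Saviynt’s actual design, is to give each tenant its own data store and encryption key and route every request through that mapping, so a query issued for one customer can never reach another customer’s data. The tenant names, connection strings and key identifiers are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantConfig:
    tenant_id: str
    db_dsn: str          # dedicated database per tenant (hypothetical DSNs)
    kms_key_id: str      # dedicated encryption key per tenant (hypothetical IDs)

# Each tenant's environment is physically separate rather than rows in a shared table.
TENANTS = {
    "bank-a": TenantConfig("bank-a", "postgresql://db-bank-a.internal/idgov", "kms/key-a"),
    "bank-b": TenantConfig("bank-b", "postgresql://db-bank-b.internal/idgov", "kms/key-b"),
}

def resolve_tenant(tenant_id: str) -> TenantConfig:
    """Route a request to its own isolated environment; unknown tenants are rejected."""
    try:
        return TENANTS[tenant_id]
    except KeyError:
        raise PermissionError(f"unknown tenant: {tenant_id}") from None

if __name__ == "__main__":
    cfg = resolve_tenant("bank-a")
    print(f"{cfg.tenant_id} -> {cfg.db_dsn} (key {cfg.kms_key_id})")
```

The trade-off regulators and banks weigh is that stronger separation of this kind tends to increase operational overhead, which is one reason deployment models vary by market.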

Automation in identity oversight

Mountstephen described how automation and AI are being used to assist access-related oversight, particularly for labour-intensive activities such as access attestations. Attestation reviews can be extensive, making manual inspection challenging. “Attestation reports are enormous, and people end up rubber-stamping them,” he said. “AI helps by calling attention to outlier access that needs urgent focus.”

Saviynt positions AI as a tool that enhances decision quality while supporting identity management at scale. Banks piloting similar capabilities may use AI to prioritise access risks or highlight anomalies, rather than automate final decisions, reflecting careful adoption in regulated environments.
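A simple way to picture the outlier-flagging Mountstephen describes is peer-group comparison: entitlements that are rare within a user’s role are surfaced to the reviewer first instead of being buried in a full attestation report. The sketch below is a minimal illustration under that assumption; the roles, users and threshold are invented for the example and do not represent Saviynt’s actual model.

```python
from collections import Counter

PEER_THRESHOLD = 0.5  # flag entitlements held by fewer than half of the peer group

# Toy access snapshot: users grouped by role, each with a set of entitlements.
access = {
    "finance": {
        "alice": {"gl_read", "gl_post"},
        "bob":   {"gl_read", "gl_post"},
        "carol": {"gl_read", "gl_post", "prod_db_admin"},  # unusual for a finance role
    },
}

def outliers(role: str) -> dict[str, set[str]]:
    """Return, per user, entitlements held by fewer than PEER_THRESHOLD of their peers."""
    peers = access[role]
    counts = Counter(e for ents in peers.values() for e in ents)
    flagged = {}
    for user, ents in peers.items():
        rare = {e for e in ents if counts[e] / len(peers) < PEER_THRESHOLD}
        if rare:
            flagged[user] = rare
    return flagged

if __name__ == "__main__":
    for user, ents in outliers("finance").items():
        print(f"review first: {user} -> {sorted(ents)}")
```

In this toy run, Carol’s production database access is flagged for the reviewer while the routine ledger permissions shared by the whole team are not, which is the prioritisation effect described above; the final approve-or-revoke decision stays with a human.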

Regulatory alignment and responsible AI oversight

The evolution of identity oversight intersects with broader requirements for accountability, governance and regulatory assurance. Globally, major cybersecurity and standards organisations are recognising the need for improved oversight of non-human identities but remain at different stages of maturity.

In Europe, the European Union Agency for Cybersecurity’s (ENISA) digital identity work focuses primarily on human eID trust models and has not yet defined controls tailored to autonomous software agents. Meanwhile, the US National Institute of Standards and Technology’s (NIST) core digital identity standard (SP 800-63) remains human-centric, while its AI Risk Management Framework encourages structured governance for AI systems without prescribing identity controls for machine agents.

In Singapore, the Monetary Authority of Singapore (MAS) requires secure access control, segregation of duties, tenant isolation, auditability and periodic access reviews under its Technology Risk Management (TRM) Guidelines, forming the basis for identity governance across systems. Meanwhile, MAS’s Fairness, Ethics, Accountability and Transparency (FEAT) and Veritas frameworks address responsible AI deployment and model oversight rather than identity controls, although they emphasise traceability and accountability in AI-driven decision outcomes.

Regulators are beginning to acknowledge the operational risks posed by autonomous agents. MAS’s November 2025 consultation paper on AI risk management highlighted that autonomous agents with internal permissions could execute unintended actions or, if compromised, exfiltrate sensitive data or issue malicious commands.

The most visible industry-level work has come from the Cloud Security Alliance (CSA), a global non-profit focused on cloud security guidance, which has published recommendations for non-human identity lifecycle management, credential hygiene and monitoring across cloud workloads and AI-driven environments. These external frameworks indicate rising awareness of non-human identity risk but show that standardised control models for AI agents are still evolving.

As autonomous agent usage evolves, institutions are examining how identity, access, model assurance and auditability can support responsible deployment. As Mountstephen warned: “Identity security needs to be recognised as an absolute top priority, and the old method of delivering it is over.” Key considerations for financial institutions include clearly defining operating responsibilities, retaining human supervision and ensuring traceability where AI tools interact with regulated environments.