Mastercard embeds AI governance to support production-grade financial systems

Mastercard is scaling responsible AI across its global payments network through privacy-by-design, ethical guardrails and accountability structures. Derek Ho, deputy chief privacy, AI & data responsibility officer, explains how these foundations help high-risk systems move from pilot to production safely and in line with regulatory expectations.

As global payment networks and financial institutions apply artificial intelligence (AI) to real-time transactions, risk decisions and customer interactions, regulators worldwide are scrutinising how banks and financial firms design, monitor and validate their models. As a result, AI governance has become essential for maintaining trust and meeting regulatory standards.

Against this backdrop, key players in the payments landscape, such as Mastercard, are developing and refining governance frameworks to manage AI-related risks and compliance obligations. Over the past year, Mastercard has expanded its production-grade AI capabilities—from launching its generative AI onboarding assistant, to deploying AI platforms that improve payment approval rates, and introducing new agentic-commerce tools—underscoring why disciplined AI governance is now central to the company’s global payments strategy.

Leading this transformation at Mastercard is Derek Ho, the firm’s deputy chief privacy, AI & data responsibility officer. On the sidelines of SFF 2025, Ho discussed Mastercard’s enterprise-grade AI governance, which incorporates privacy-by-design, ethical guardrails, and accountability across its AI lifecycle, providing a trusted foundation for scaling AI across banking rails.

Integrating privacy-by-design into AI development

Ho said that Mastercard’s Data and Technology Principles place privacy at the core of how the firm designs, develops and deploys AI, serving as a “north star” that shapes a culture of responsible data usage across the organisation. These principles—covering security and privacy, transparency, accountability, fairness, inclusion and innovation—guide the design and application of data-driven technologies from the outset, ensuring that sensitive financial data is handled securely and in line with global privacy standards.

Ho stressed that principles alone are not sufficient. Operationalising them began in 2019 with Mastercard’s AI governance program—launched well before the rise of generative AI—which evaluates all built and bought AI systems for fairness, efficacy and transparency. The program brings together technology, product, data security, legal and privacy teams to assess both technical and business implications of AI.

“This collaborative approach means you can have both innovation and responsibility being considered together as a whole. You’re driving towards innovation and responsibility rather than innovation or responsibility,” Ho said.

Mastercard applies a risk-based AI governance framework that tailors controls to the potential impact of the AI system. “We put our AI use cases through a scorecard where we identify and classify the risk and then identify appropriate mitigating controls,” Ho noted.
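The scorecard approach Ho describes can be illustrated in code. The sketch below is purely hypothetical: the risk factors, scoring weights, tiers and controls are illustrative assumptions, not Mastercard's actual scorecard, which the article does not detail.

```python
# Illustrative sketch only: a simplified risk scorecard for AI use cases.
# All field names, weights, thresholds and controls are hypothetical.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    uses_sensitive_data: bool
    affects_customers: bool
    automated_decision: bool


def risk_score(uc: UseCase) -> int:
    """Sum simple weighted risk factors into a score."""
    return sum([
        2 if uc.uses_sensitive_data else 0,
        2 if uc.affects_customers else 0,
        1 if uc.automated_decision else 0,
    ])


def classify(uc: UseCase) -> tuple[str, list[str]]:
    """Map a score to a risk tier and a set of mitigating controls."""
    score = risk_score(uc)
    if score >= 4:
        return "high", ["fairness review", "council sign-off", "ongoing monitoring"]
    if score >= 2:
        return "medium", ["documented assessment", "periodic review"]
    return "low", ["standard logging"]


tier, controls = classify(UseCase("fraud-model", True, True, True))
print(tier, controls)  # high-risk tier with escalated controls
```

The point of the pattern is proportionality: the same assessment pipeline runs for every use case, but the controls it prescribes scale with the classified risk.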

Ho added that documentation supporting these assessments enables robust audit trails and ongoing monitoring. Toolkits, templates and automation help teams apply controls consistently, making responsible AI both practical and scalable across Mastercard’s global operations.

Ensuring accountability in high-risk systems

Accountability becomes especially critical when AI models influence payments, fraud outcomes, or similar decisions that affect customers. Ho explained that Mastercard reinforces accountability through its AI governance program, which incorporates structured risk assessments, supporting documentation and defined oversight mechanisms. “Accountability is ensured through our AI governance program, which has risk assessments but importantly, documentation, which supports those risk assessments,” he said.

Ho noted that documentation enables strong audit trails and ongoing monitoring of decisions, while defined roles and responsibilities ensure that teams involved in the governance process can exercise proper oversight. High-risk use cases are escalated to Mastercard’s governance councils to ensure they receive appropriate executive attention.

Lessons from deploying high-risk systems highlight the importance of early risk identification, ongoing monitoring and adaptable governance. Ho emphasised that risks should be identified as early as possible so controls can be built in from the start, rather than after a model enters production. “Ongoing and proactive monitoring helps you identify signs of model drift and potential unexpected behaviours,” he added.
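The proactive drift monitoring Ho mentions can be sketched as a simple baseline comparison. This is a minimal illustration under assumed inputs; the metric (mean model score) and tolerance are invented for the example, not drawn from Mastercard's monitoring design.

```python
# Minimal sketch of drift monitoring: compare recent model scores against
# a baseline window and flag when the mean shifts beyond a tolerance.
# The 0.1 threshold and the sample scores are illustrative assumptions.
from statistics import mean


def drift_alert(baseline_scores: list[float],
                recent_scores: list[float],
                threshold: float = 0.1) -> bool:
    """Return True when the mean score shifts more than `threshold`."""
    shift = abs(mean(recent_scores) - mean(baseline_scores))
    return shift > threshold


baseline = [0.82, 0.80, 0.81, 0.79, 0.83]  # mean 0.81
recent = [0.65, 0.70, 0.68, 0.66, 0.69]    # mean 0.676
print(drift_alert(baseline, recent))  # True: shift of ~0.134 exceeds 0.1
```

Real deployments would track richer signals (population statistics, fairness metrics, error rates), but the principle is the same: monitor continuously so unexpected behaviour surfaces before it affects customers.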

These processes, Ho said, reflect the evolving nature of technology, business environments and regulatory expectations. “AI governance and privacy-by-design work is always a work in progress,” he noted.

Balancing ethics, compliance and business outcomes

Embedding ethics into AI requires integrating it into day-to-day operations, ensuring that compliance and innovation reinforce each other. Ho explained that Mastercard’s AI Governance Program is structured around four pillars: define, ensure, enable and advance responsible AI.

He cited generative AI guidelines as an example of policy guardrails, where Mastercard prohibits the sharing of sensitive customer data with public AI tools. He also mentioned that the firm's "inventory of AI systems" helps track which models are active and in use. "If you don't know the AI system exists, you can't manage the risk," he noted.
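The "inventory of AI systems" Ho mentions can be thought of as a registry that records each model so its risk can be tracked. The sketch below is a hypothetical illustration of that idea; the field names and query are assumptions, not Mastercard's actual inventory schema.

```python
# Hedged sketch of an AI-system inventory: a registry recording each
# deployed model so oversight can find it. All field names are hypothetical.
from dataclasses import dataclass


@dataclass
class AISystem:
    name: str
    owner: str
    risk_tier: str
    active: bool


inventory: dict[str, AISystem] = {}


def register(system: AISystem) -> None:
    """Add or update a system in the inventory."""
    inventory[system.name] = system


def active_high_risk() -> list[str]:
    """List active systems that need the closest oversight."""
    return [s.name for s in inventory.values()
            if s.active and s.risk_tier == "high"]


register(AISystem("approval-optimiser", "payments-team", "high", True))
register(AISystem("onboarding-assistant", "gen-ai-team", "medium", True))
print(active_high_risk())  # ['approval-optimiser']
```

The design choice the quote captures is simple: governance queries like "which high-risk systems are live?" are only answerable if every system is registered in the first place.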

Ho highlighted the importance of employee training in fostering an ethical AI culture. "We enable our employees to obtain AI certifications and make available online resources for them to understand and get practical guidance about how to use AI safely," he said. These programs support practical applications, including anomaly detection in payment networks, real-time credit approval engines and predictive liquidity management for cross-border transfers.

Mastercard also embeds responsible AI principles—such as fairness, transparency and accountability—into its broader privacy and data-protection training, Ho added. As he noted, the firm provides “many online resources, whether workshops, forums, easy-to-use guidance, which create that culture… making everyone aware of the opportunities and the risks.”

According to Ho, Mastercard’s responsible AI efforts also extend beyond the organisation. The firm collaborates with external partners to advance responsible AI practices across the broader ecosystem. “We advance responsible AI with external parties… It’s necessary for partners in the ecosystem to advance responsible AI, but it also helps to inform our own program,” he said.

Ho observed that embedding responsible AI requires more than technology; it demands principles, governance and culture working together. Mastercard’s AI governance program brings these elements together through clear policies, proportionate risk controls and continuous oversight that keeps systems aligned with organisational and regulatory expectations.

Across applications, from real-time fraud detection to payment models, this approach demonstrates how disciplined governance, operational controls and broad-based AI literacy can strengthen cultural awareness, decision-making, operational resilience and customer confidence. These lessons offer a practical model for financial institutions seeking to scale AI responsibly.