At the BIS Innovation Summit 2024 in May, HKMA’s Eddie Yue and Chia Der Jiun from MAS highlighted the balance between AI innovation and regulatory frameworks, discussing financial stability, cyberthreats, and ethical AI deployment.
At the BIS Innovation Summit 2024 panel discussion on Central Banks and the Rise of AI, Eddie Yue, chief executive of the Hong Kong Monetary Authority (HKMA), and Chia Der Jiun, managing director of the Monetary Authority of Singapore (MAS), underscored the intricate balance between fostering innovation driven by artificial intelligence (AI) and implementing robust regulatory frameworks.
Financial services in Asia are being rapidly reshaped by AI. A survey by Fintech Bookmarks found that 56% of central banks are actively using AI or machine learning, primarily for projections and forecasting. AI has the potential to transform banking operations, from enhancing risk management to personalising customer interactions. However, this technological leap also presents complex regulatory challenges.
Key concerns and emerging regulatory frameworks
The panel highlighted several pressing concerns regarding AI’s potential impact on financial stability and customer protection, prompting discussions on emerging regulatory frameworks.
Centralised risk: Yue raised concerns about an over-reliance on dominant AI platforms and cloud providers. He argued: “If financial institutions are heavily reliant at some point in future on these dominant platforms, which might only be a few, if there’s a cyberattack or if they have a failure, then there could be systemic risk or at least systemic operational risk.”
AI-powered fraud and deception: “When malicious actors are using generative AI for fraud, scam, false contract or false rumours, especially when there is market stress and people’s confidence is low, false rumours on social media that look real might actually aggravate a bad situation, or turn it into a very systemic one,” Yue said.
Democratised cybercrime: Chia underscored the potential for AI to significantly amplify the scale and sophistication of cyberattacks. He highlighted the concerning trend of democratised cybercrime, where individuals with limited technical expertise could use AI-powered tools to develop and launch sophisticated malware. This necessitates a proactive approach to developing robust defences against AI-driven threats.
Embedding organisational values, ethics within AI
The panel further agreed that achieving responsible AI deployment requires a deliberate effort to embed fairness, ethical considerations, robust governance, and transparency within the entire AI lifecycle.
Chia stressed the importance of fairness in data selection and model outcomes to prevent discrimination, of ensuring AI models are consistent with an organisation’s values and ethical standards, of establishing robust governance for accountability and explainability, and of transparency in order to build trust and confidence with customers.
He said: “In terms of the overall system, the model is a co-pilot, but ultimately, judgment and decision-making may have to be in the hands of humans who will apply that value overlay.”
Ensuring financial stability in the face of AI advancements requires strong central bank oversight. Chia underscored that supervisors should set guardrails for financial institutions and consider the reputational aspects of using AI models in decision-making to ensure the outputs align with the institution’s values.
Yue emphasised that supervisors must ensure these issues are thoroughly considered. He said: “The financial institutions relying on similar AI models will make investments or business decisions that can exhibit some kind of herding behaviour.” He urged central banks to update their guidance to include generative AI and stressed the importance of human judgment in decision-making.
AI and climate-related financial risks
AI is emerging as a powerful tool for addressing climate-related financial risks. Chia suggested using AI to identify greenwashing. He explained that AI models can be trained to cross-check company disclosures against greenwashing standards, and emphasised the need for continuous improvement based on feedback and refined benchmarks.
He added: “Much of the gap in terms of access to sustainable financing is about closing data gaps, reporting data and using data to drive change. Stakeholders can make use of the data to drive change. If it concerns data, that’s an area where AI can come in to try to plug some gaps and improve the situation, both in capturing and processing data, but also in reporting and making sense of it.”
International cooperation on a global risk taxonomy
The rapidly evolving nature of AI in finance necessitates a global regulatory framework. Chia emphasised the need for regulators to collaborate on AI, proposing a phased approach starting with open discussion and eventually leading to the development of standards. He noted that initial discussions are already underway.
Yue commented: “The third-party service provider area is something that we might want to start more dialogue on.” He advocated for developing common parameters to assess the governance and cybersecurity of these providers, as well as a shared risk taxonomy to help regulators globally evaluate AI-related risks faced by financial institutions.
The panel concluded that while AI presents a significant opportunity for the financial industry, responsible deployment requires careful oversight and the establishment of appropriate guardrails to mitigate potential risks. For regulators, managing AI-related risks is crucial to fully realise the benefits of this technology while ensuring a stable and innovative financial future.
The panel discussion was moderated by Jemima Kelly, columnist at the Financial Times.