Sunday, 22 December 2024

MAS’ Hardoon: “Doing AI for the sake of AI doesn't make much sense”

5 min read

Interviewed by Foo Boon Ping

David Hardoon, special advisor on artificial intelligence for the Monetary Authority of Singapore, offers his insights on AI, from operationalising it to making it relevant to people and businesses.

David Hardoon is a prominent figure in Singapore when it comes to artificial intelligence (AI). He is currently special advisor on AI to the Monetary Authority of Singapore (MAS). At the same time, he is an external advisor to Singapore’s Corrupt Practices Investigation Bureau. He was MAS’ first chief data officer and previously headed its data analytics group. He also has experience in private companies and academia.

In this interview, Hardoon talks about AI as a technology, its overall position in the financial services and technology sectors, and the necessity of interoperability across borders.

Foo Boon Ping (FBP): We're very pleased to be speaking with Dr. David Hardoon, special advisor to the Monetary Authority of Singapore (MAS) in the area of artificial intelligence (AI). This is quite a unique role: he develops the AI strategy for the financial sector in Singapore and promotes greater open cross-border data flows. He started his career in consultancy with SAS, one of the big data service providers, and has also built his own solutions company, Azendian Solutions. Tell us, how did you transition into this role at MAS?

David Hardoon (DH): I always like to say that I've always been fascinated with data – I was a geek before it was cool. The transition came about because I've always been fascinated by the different dimensions and perspectives of data. It's not just data from a technological point of view, it's also cultural.

I actually started off in academia, then went into software, consulting and in-house roles. Going into the regulator effectively added that additional dimension of having – well, I don't like to use the word – the policeman's perspective; from a regulator’s point of view, the one who's seeing how to drive and help the industry at large, but at the same time helping MAS to adopt the usage of data. In fact, that's actually where I started: I started off being the chief data officer in MAS. While I had the privilege of being the first one, the role had that dimension of looking at the internal usage as well as the developmental one. Now, with the new role of special advisor, it's effectively looking, from a very focused point of view, at how to help the industry – also from a regulatory perspective – get to a position where it can leverage data in a more holistic and also pragmatic fashion across the board, and understand: what are the possibilities?

Just to add a bit more colour on that, someone asked me the other day, ‘David, how do you define success in this role?’ That's a very difficult question. Obviously, we have initiatives, and defining success with those initiatives is clear-cut – whether they're successful or not. But when you think about the industry at large, success isn't just about whether a financial institution is using AI or using data. No, it is equally about when an institution chooses not to use it, but has the competency, capability, maturity and understanding to assess whether it meets their own risk tolerance [and] framework. To me, that's also success. And that's what we want. We want the industry at large to be in a situation where they can ask themselves, ‘How do we do it?’ and ‘Can we do it effectively?’, rather than just saying, ‘Well, let's wait and see’.

FBP: Now this role, it's more of a developmental role where you're trying to develop the AI capability within the financial services sector. How much of it is about setting regulations and standards? Is there a conflict between the two?

DH: Yes, I like to call it a healthy tension, maybe not conflict, which will naturally exist between the supervisory side of the house and the developmental side of the house. We obviously go about having those conversations. However, it is less about developing the supervisory perspective and more about feeding information to the supervisors and the policymakers in terms of how they will go about developing those policies. It is very much about looking at the industry and what you may consider to be an AI stack, and making sure we're addressing the concerns of financial professionals – not just the ones who are now coming out of the tertiary systems, but the ones who have been working for the last five, 10, 30 years – and ensuring that they still feel that it is not threatening them. How can an organisation, which will obviously have legacy systems, think about those scenarios? Then of course, on top of that is the whole regulatory environment.

We are true believers that it is really this harmonious relationship between regulation, governance and innovation. A governance environment, a regulatory environment allows you to innovate, it allows you to experiment. At times, you may find institutions thinking, ‘No, this we cannot do unless explicitly permitted’. But the question is whether that conversation can be had. It allows for it. What we’re trying to prevent is risk, unwanted situations occurring or, obviously, downright financial crime.

It's more about having these conversations of understanding how to lubricate these developments, but equally, to understand what are the genuine challenges that the industry is facing?

It may be a perceived challenge that just needs clarification, or a genuine issue that's prohibiting or inhibiting use, adoption or development that we may need to consider – either because it is a regulatory policy matter, or because the problem isn't unique to any specific financial institution but is a ubiquitous one, like know-your-customer (KYC).

FBP: Within MAS, there is already an existing role that looks after financial technology under the chief fintech officer. In terms of developing a strategy for different areas under technology, AI being one, how is it different and where do you sit exactly within the organisation?

DH: It's in fact part and parcel of the broader fintech agenda. We not only created the role which I fulfil, but there's an additional office which actually sits within the fintech family – the AI development office. [It has] people on the ground making sure [to look at it] from a technological point of view, but more specifically focusing on the AI narrative. Now, the reason for calling that out is because if you look at fintech at large, it's a very broad spectrum – from payments, lending, investments and so forth. Now, it doesn't mean that there isn't necessarily an overlap at some point or another – like blockchain, you can have AI on blockchain – but the intention here is to just specifically call out that AI pillar and say, ‘Is there more that can be done, how and where relevant?’ It's integral as part of that agenda, but not just technology.

One of my strong views and beliefs from a personal perspective is that AI needs to be operational, needs to be relevant. Doing AI for the sake of AI doesn't make much sense. Naturally, we look at it from a technological point of view, but we also look at it in terms of traditional financial development, where we’re looking at the various asset classes. The question there is, can AI be relevant to the more traditional businesses and our approach towards developing that effectively?

FBP: AI in itself as a technology is not new. Its application in the financial services industry is also not new. In the trading space, for example, you have algorithm-based trading, which is basically AI. In terms of operationalising it, you always [focus] on a wider cross-section of the industry. As you start your work, what are some of the gaps and opportunities that you've identified?

DH: For example, as you rightly called out, in trading it’s quite common, so there the question is: what is the next step? Retail banking, for example – I would like to believe it's quite prevalent across retail banking. However, if you start moving to the left and look at, let's say, corporate and institutional banking, or treasury – insurance, to a large extent, has been starting to adopt and leverage data – but again, it's about making it more extensive. So the question is, how far and how encompassing could this be across the entire financial sector?

You raised one of the interesting challenges that the financial sector has. It is not homogenous, it's not a flat line. It is actually quite asymmetrical, whereby you have a number of institutions – independent of size – that are more mature in using not just AI, but data, more prolifically. But you will have this long tail, for various reasons, be it business lines or the organisation – maybe it's a five-person shop, relatively small – that may not be leveraging it to varying degrees. Now, again, it's not to say everyone has to use AI. It's more about opening the possibilities and making sure that where possible, where relevant, we're exploring it and we're not limiting ourselves because of perceptions. It is really about how we make accessibility more prolific.

FBP: Is it correct to say that technology is not a core skill of banks? Are they better off buying that technology rather than internalising that capability, where they may have neither the skill nor the economies of scale of specialist organisations?

DH: Absolutely. To build on that point – and again, this is more of a personal view – AI is truly not a technological play. The reason is because – whether you want to purchase, whether you want to partner, whether you want to build yourself – technologically, it is not new. In fact, we're talking about theories, algorithms and methodologies which date back to the 1970s, 1960s even in some scenarios. It has been there. It's just the moment has come where you have that infrastructure, the robustness of data that allows for it to be used.

The sticking point is in operationalising it. It's not AI for the sake of AI – it’s how does it now integrate with a business function? Does that business function or process need to change because of what is possible with the data and AI, or not? Culturally, are there changes that need to happen? And at times, these are the stickier issues that need to be identified.

Maybe in the future this will change, but for now, I personally strongly believe that AI and data should be seen as a part of the business function. Keep it as close as possible to the business side, rather than a technological support function. How does this augment my process? How does it augment my current human decision-making effectively with potentially new insight or corroboration of existing insight from the data that we've been collecting all this time? How do we put this data into maximum use? In the future, when AI becomes like now an iPhone or an Android – you don't think about it anymore, it's just used – it can go back into being maybe just a technological support function.

FBP: You mentioned the big goal is to operationalise the use of AI where it's relevant to the business. At the same time, as part of what MAS rolled out, there is also a grant given to the industry to identify initiatives. How do you evaluate? How do they qualify for the grant? How does that not fit into ‘doing AI for AI’s sake’?

DH: I'm actually happy you asked that question. When we constructed the grant scheme, we thought about exactly those questions. How do we make this an operational narrative rather than just doing a proof of concept or trying an idea?

Let me start by saying first of all, there has to be skin in the game. It is a grant, meaning it should be a co-funding. We believe that financial institutions are prudent enough not to put their own money in things which are going to waste. That's already the first litmus test.

The second is that there are, broadly speaking, two key performance indicators (KPIs). The first one is that even though it may be experimental – as in, it's a new idea or a new concept – from the very beginning, as part of that application, we want to know how this is going to be operationalised and deployed. This needs to be answered upfront, not afterwards, when the initiative has been successful and you ask, ‘What are you going to do with it now?’ At times, the answer is ‘We will see’. No – tell me now how you are going to apply it and operationalise it. That's how we look at it.

The second one is manpower, because we need to be sensitive and put people at the centre of data and AI. How is this going to impact people, be it the existing workforce or, potentially, the necessity of a new workforce that needs to be trained? What is this impact going to be?

These are the key considerations. Now, I omitted, as you may have noticed, the what. This was deliberate because, back to my asymmetrical curve, we realised that what we may expect from a fairly advanced or mature organisation cannot be equally held to a smaller or less mature organisation. What would be innovative, new and progressive for the latter would be like yesterday's news to the former.

We needed to have the ability to say, ‘Look, at the end of the day, what we want is for you to identify areas within the organisation in which it will provide value’ – broadly speaking, reducing costs, increasing efficiency, mitigating risk or increasing revenue. You tell us what they are; we want to know how you're going to operationalise it, how you’re going to make that assessment with respect to people, and how you are going to ensure that people are trained and effectively remain relevant.

FBP: AI is as good as access to open data, right? In China, for example, you have the big tech companies [like] Alibaba, and because of the nature of their regulation, they have access to a lot more data, even sharing of data with the state for social credit and so forth. AI has to exist within a data governance structure. In Singapore, we have the Personal Data Protection Commission (PDPC), but it doesn't address the issue of data ownership, just data consent.

With so much that is going on, do consumers know enough about how powerful data is to have regulation just based on consumer consent?

DH: I believe this is a journey. It's a journey that all of us need to be a part of. That's why we believe in being an ecosystem – from a regulatory point of view, from an institutional perspective and, of course, let us not forget the vendors out there that are part of that ecosystem. It's really important that, while we may not have the answers for everything now, we ask ourselves, we challenge ourselves, not just in terms of what can be done but whether we should do it – in terms of those considerations from a cultural perspective, expectations, transparency – as part of that journey and getting there effectively.

PDPC has issued its AI governance framework, which looks at the development of AI and the best practices that one should consider. MAS issued, late last year, the FEAT principles – fairness, ethics, accountability and transparency – which again provide a list of mechanisms to test oneself in terms of thinking about that use effectively.

Let's not forget the consumer or the citizen. They're also part of this journey. They equally will become more and more educated with respective consideration, possibilities, what does consent mean and whether or not to provide it. We're not there yet, but I think we are in a relatively progressive state.

Secondly, what you mentioned earlier about ownership. Fundamentally, especially in the banking sector, it's already enshrined in our Banking Act. There are fiduciary requirements and obligations to maintain the security, confidentiality and privacy of the consumers’ data. The question is, what more can be done?

I strongly believe that within the financial institution, the amount of data that's being collected is immense, even before you go into those partnerships – the Alibaba kind, where you're looking at the network – but even that we're seeing. In Singapore, we've been encouraging it – in the spirit of open banking – through APIs, giving visibility with consent from the consumer's point of view, or using data that's aggregated and therefore avoids the sensitivity aspects of it. Open up data through APIs, see the value from a commercial-relationship point of view and the benefit to the consumer. That's been happening very prolifically, and it has driven the question of not just open banking but open data, which is yet to be finalised. There are many parts of the equation that need to be addressed, but I think we are on the right track.

FBP: Part of your role also covers cross-border data flows. Explain this part, because a lot of restrictions exist where countries want to protect the data within their borders.

DH: If the starting point is that the future is hinged on a digital economy, one has to realise that for a digital economy to thrive, it transcends borders. A fintech or company operating in one jurisdiction may provide services to an organisation or financial institution in another jurisdiction. Financial institutions already operate across jurisdictions. If we believe that the future is hinged on the digital economy, what fuels and drives this economy is data. How can we attain that digital economy if data is locked down in every single country?

It presents two key challenges. The first challenge is – putting aside cost and the additional risk, rather than a reduction of risk – that every single company that wants to offer their service has to relocate and offer their services locally. For financial institutions, from a risk point of view – when we want to have an understanding of risk based on a consumer that is now more global – no, it's going to be partitioned based on the data specific to that location. We are inhibiting ourselves.

I fully understand the necessity and the requirement of cyber security and privacy. That is absolutely paramount, but we can achieve those requirements of security and privacy while allowing for data to flow. Notice the key word there is not data sharing. It’s not about data sharing, it's about data flow.

In that spirit, we believe so strongly about it that we're working with other counterparts, advocating to a certain degree, to start thinking about fuelling the digital economy as well as security and privacy in a different light that fulfils the capacity of what we're trying to achieve while attaining privacy. The two are not incongruous.

Ultimately, we want to be more global. We want Singapore to be a key player in that global arena. But that's the goal, like global trade, in which data already is playing an absolutely significant role.

I'm going to give you a very trivial or even silly example. Let's take the worst case scenario where every country now starts to localise data. How will you send emails? Data has to fundamentally flow, data has to move.

FBP: Today there's no regulation around the flow of emails. There is on financial data, on banking secrecy, which is done through your financial institutions. But send an email with your financial data and no regulator can stop it.

DH: That's what we're trying to inhibit. [There are] more countries out there that do not have data localisation requirements, or do not overtly have them, than countries that do. It may just be that those countries [that do] are more vocal about it. What we're trying to show is that we need to start working on setting the trend of how we achieve the future we all yearn to have while staying grounded in security and privacy, or whatever local considerations there may be. Just locking down [data] results in unnecessary, unintended consequences.

Obviously, the first one is cost – significant costs which, if we're honest about it, may at times be passed down to consumers. In a world where we are trying to drive inclusion, that cost may become a factor preventing it.

The second one is that it actually introduces risk. If we're taking a very siloed approach, you now have more fortified walls to defend – and of course, there is a debate over whether centralised or decentralised is more secure. What it means, fundamentally, is that you need more people on the ground to act as a defence.

Then of course, as I mentioned, there are the services – having visibility of your own consumers, what is happening, the risk in the organisation – anti-money laundering (AML) being a good example of it. You need to see flows as a network, not just within a jurisdiction but across jurisdictions, and not only when you already know of someone on a sanctions list or someone suspicious, but really taking a more comprehensive view with respect to it. That's within the financial institution.

Going beyond financial institutions: service providers. When a service provider is contracted by financial institutions, how do you facilitate those commercial relationships independently of borders? In fact, this is nothing new. We have embassies, which have effectively always existed. It's taking this to the digital realm. Jurisdiction over that data still lies with whatever that sovereignty is; it just may physically exist elsewhere.

FBP: In terms of operationalising data, what are some of the more immediate use cases in the grant so far? Give us examples of projects across AI.

DH: We've deliberately taken the approach of telling the industry, ‘You tell us what's relevant to you. We don’t want to tell you no, you can only do it in this area’. We wanted that freedom. What we're finding is that it's actually been bucketing itself into largely three areas.

The first is financial crime. We’ve seen an extensive and heartening adoption of machine learning and AI in AML, KYC and customer due diligence (CDD), as well as regulatory reporting. The second is on the retail side of the house as well as corporate: how to become more personalised and insightful, with opportunities to cross-sell and upsell. The third is harnessing the data that exists within the organisation. There’s an immense amount of structured and unstructured data that then wants to be used in the earlier two and also in investments – how data can be used for investment optimisation and portfolio optimisation. Nothing massively new on, let's say, your asset management or potentially your algorithmic trading, but maybe slightly new on your corporate and treasury side of the house – potentially, depending on the institution.

To add an example that incorporates both the cross-border discussion and this: we've also seen a scenario applying data techniques that look across borders, where a compliance officer in Country A can effectively have insight into whether the outcome of a transaction – even though it goes into Country B – links, let’s say, to a provision for unrealised profit (PUP), using various techniques where, for example, the algorithm goes to the data rather than the data going to the algorithm.

We're seeing very fascinating and very pragmatic applications and adoption across a wide spectrum of different types of financial institutions.
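
To make the "algorithm goes to the data" idea concrete, here is a minimal, purely hypothetical sketch in Python. It is not MAS' or any institution's actual implementation; all names, records and thresholds are invented for illustration. A screening rule is sent to each jurisdiction, runs where the records are held, and only aggregated counts flow back across the border rather than the raw data.

from typing import Callable, Dict, List

# Hypothetical transaction records, each set held locally in its own jurisdiction.
COUNTRY_A_LEDGER: List[Dict] = [
    {"counterparty": "X", "amount": 120_000, "flagged": False},
    {"counterparty": "Y", "amount": 45_000, "flagged": True},
]
COUNTRY_B_LEDGER: List[Dict] = [
    {"counterparty": "X", "amount": 80_000, "flagged": True},
]

def run_locally(ledger: List[Dict], check: Callable[[Dict], bool]) -> Dict:
    """Run the screening rule where the data lives; return only aggregates."""
    hits = sum(1 for record in ledger if check(record))
    return {"records_checked": len(ledger), "suspicious": hits}

# The "algorithm" that travels: a simple rule a compliance officer might push out.
def suspicious(record: Dict) -> bool:
    return record["flagged"] or record["amount"] > 100_000

# Only these small, aggregated summaries cross the border – never the raw records.
print(run_locally(COUNTRY_A_LEDGER, suspicious))
print(run_locally(COUNTRY_B_LEDGER, suspicious))

Real deployments would of course involve far stronger controls – secure execution environments, audit trails and privacy-preserving aggregation – but the design principle is the same: the query moves, the data stays put.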

FBP: Institutions are starting to embrace AI. You have Ping An Bank, for example, which has taken on this vision of being an AI bank. What does it mean to be an AI bank and what are the ramifications of that? Would an AI bank be totally automated, or will all decisions be made by algorithms?

In the trading world today, for example, you're trading between algorithms and creating risk events because there is no human intervention.

DH: On the latter point, that's why it's equally important for regulators to be immersed and involved in this area of discussion, because there may be new risks that will evolve that, as a regulator, we need to know about. That's one aspect which is extremely important, but we will not know until we actually start exploring, understanding and even experimenting from that regulatory perspective.

On the other one, when you asked me what it means to be an AI bank or an autonomous bank, I would start off by saying that despite being an advocate for AI, I have to refer to it as augmented intelligence. It's absolutely critical to put the person, the human, at the centre of this. With all this wonderful technology that we’re developing, it should be designed to augment us, not replace us. It may mean that some functions would inherently involve replacement, but that's not the objective. The objective is: how does it help human decision-making?

Now when thinking about banks, fundamentally, it goes down to risk appetite. You would find, given the operation of the bank, that you would want to always have the human in the loop, be it from an oversight, as a checker or as the final decision maker.

Again, it's not a matter of whether AI can do it – should AI do it? There are many scenarios where AI can be entirely automated from A to Z. It shouldn't be. It does A to X – the last part, the human does. That comes from a design-principle point of view. With those predicates, what it then means to me is that an AI bank, an autonomous bank, is highly efficient. It is leveraging its data assets across the organisation to deliver better services [while] having insight and knowing how to engage its consumers, be they individuals, corporates or institutions. That's really important because, at the end of the day, why collect data?

Many may say to me, ‘Oh, David, we collect data because the regulator forces us to do it’. Perhaps historically that's been the case, and perhaps that's still the case with a lot of returns – you need to submit a lot of data. But this is data that's being collected; we don't collect data to hold onto it at night and feel good about ourselves. It needs to be used. Data that's not being used should not be collected – in fact, it becomes a liability. An AI bank should be one that maximises its data asset. Data is, in fact, a line item on the balance sheet.

FBP: MAS has announced its criteria for issuing licences to digital-only banks, many of them would be using AI. Do you at this point work with them? Do you have criteria or guidelines around the use of AI in the granting of a licence?

DH: From a licensing point of view, the guidelines have been issued. While there's a key requirement or key focus on innovation and being innovative – of which, largely and arguably, AI will be a key element – the extent is left to the prospective applicants. But I agree with you absolutely. I would believe that the use of data and the use of AI should be prevalent amongst digital banks, which have the opportunity of being digital, meaning data to the core. It's an immensely exciting opportunity because it can unlock both opportunities as well as the future feasibility of what can be done.

To be very honest with you, it would also shed light for non-digital banks. All banks can be digital – it just may mean they also have branches – in seeing what can be done with the data that's currently being collected. I actually see it as a win-win.

FBP: This role of special advisor, that's kind of an interesting title. Is it an executive role, special project role? This function of the AI department at MAS, would it at some point cease to exist once it becomes mainstream? Tell us more about it.

DH: I always like to say that the most important thing is that my mother is happy, because the title has the word ‘special’ in it. So that's been ticked; that's the most important. Jokes aside, MAS has taken an approach of trying things out. We experiment internally amongst ourselves. The organisational structure has shifted and changed quite a few times. I would be a bit apprehensive about forecasting the future upfront, even as a data person. It's exactly the same as when I started my previous role at MAS as chief data officer: we realised that it goes beyond that, and we wanted to have a more dedicated function for development, so the organisation changed. We changed accordingly and then created these two aspects with AI development. I'll repeat my sentence about operational AI: it needs to be relevant, it needs to be pragmatic. Therefore MAS – and even financial institutions, as they go about their journey – shouldn't be afraid of having this agility to change in order to accommodate what we're trying to achieve, what they're trying to achieve, and what is required.

If we find in time that it is just a role for the sake of a role, there's no need [for it]. It's part and parcel. I’d like to give an example: typists. Once upon a time, you had dedicated departments in organisations who were typists. Which organisation now has a typist department? Everyone's a typist.

My ironic hope for the future – and I say ironic because it puts me out of a job – is that everyone is prolific and proficient in data. Not necessarily a data scientist, statistician or mathematician, but as proficient with data as you are with a typewriter or a tool like Excel or whatnot. Same thing with data, so that the need for these special functions diminishes and it becomes more of a support function. You’ll still need those people to help across the organisation, the enterprise, but there's less of this, ‘Oh no, these people do data, we do the business’ – no. Data is part of the business. This is a mantra which is exceedingly important.

FBP: You came from academia. At some point, AI will be about best practices as it matures and about training future generations. Tell us the parallel between academia and your current role.

DH: The parallel actually touches on two particular points. One is the education aspect of it, that process of creating understanding and communicating what is basic, applied and theoretical. For example, just the very simple conversation of a business which may say, ‘Oh, how do we use natural language processing?’ Well, it's not just about explaining what natural language processing is, it’s actually saying, ‘Well, let's start with what you want to do’. Let's start with what's your business before talking about what natural language processing is. There’s still that very educational element of creating an understanding of possibilities.

Then the second one, which is also true of academia – because academics will focus on novel applications, writing papers, future possibilities – still exists in business: you will come up with a new idea that may not be currently relevant or doesn't yet exist. It is how to convince your peers of its relevance, its sustainability and its generalisability, so that it can be adopted by the organisation. There will always be a bit of a lag between when you create something new and when it's adopted. Again, look at AI between the 1980s and the start of the 2000s. That was still about a 20-year gap until it became more prolific.

However, naturally, academia gives you a bit more freedom to look at the blue sky. On the industry side, it's a lot more crucial – if not the most crucial thing – to be pragmatic, meaning you should at times sacrifice the excitement and novelty of the algorithm for the sake of a pragmatic outcome. We may not use the most advanced techniques or drive that improvement. It's ‘I want to make a difference’.

This actually relates to that conversation. For example, in academic lingo we might say, ‘Oh, I'm achieving a 5% improvement’ – but quite frankly, does that make sense, or is it relevant? When you're thinking about revenue, maybe 1% is already equivalent to a million dollars more. It's, again, bringing it back into the business context and the business lingo.

FBP: Many banks’ AI capabilities come from their innovation labs filled with PhDs from big universities. There is a lot of concept and theory, not too much business application.

DH: My personal view is that innovation labs are great, but they have to be contextual. If you have a functional group which is doing wonderful stuff but none of it is deployed, or very little of it is applied, there's a bit of a break there. It's still needed, don't get me wrong, because I see this as the function that challenges our thinking. It challenges our view. It may not be applied yet, but it forces you to think about new things. It still means there's a necessity for a very operational, pragmatic deployment function: how do we end up with AI and data applications that are actually used within the organisation from a business-centric point of view?

Now, ideally, you would want this to be 80% deployment – operational AI – and 20% innovation. If you are very advanced as an organisation, it may be 50-50. But this is critical. It's critical because while one part looks at the most innovative, cutting-edge, new possibilities, the other looks at things that make a difference on the ground. It could be pure automation. It could be changing a process. It could be things that are not that theoretical in nature or novel in application, but that make a difference on a day-to-day basis, and they can act as the conduit to say: now we are ready for the next step.

FBP: We'll continue to see how it evolves.

DH: Absolutely. It's a never-ending journey. The data never ends. That's actually another perspective. People always say, ‘How much money do I invest?’ or ‘How much do I do and then I'm done?’ – no, data is the never-ending story.

FBP: What are the chances of incumbent banks being able to bring AI in, given it's something totally new? Incumbents are very good at running the business as it is. Would you say there'll be greater success with a new bank built around AI from the ground up, rather than having to feed or force AI into an existing organisation?

DH: I would say naturally, for a completely new, off the ground [bank], there is no legacy – cultural, processes, systems – none. However, I would challenge the thought and the view that the incumbents cannot do it. In fact, I would say there is a subtle change that needs to happen and they can run, because they already have the customers. They already have the data, quality aside. They already have the operation. They already have the people. It's a behemoth that's already in motion. But what needs to change is the perspective. It's the fact that we're willing to change, disrupt ourselves where relevant, challenge, explore – you will find that at times, that is the difficulty. It's not the technology, it's not the data. It's the willingness to now suddenly have an operation that's using data for multiple departments – small things that are, culturally, challenging at times.

FBP: How much looking out do we have to do as part of this job? The centre of excellence for AI is not in Singapore. It's in Israel, it’s in China.

DH: Which is why I believe in ecosystems – an ecosystem just like one that exists in Singapore amongst the financial institutions, regulators, providers. But equally, I believe in an ecosystem and a partner in the world. We can learn, cooperate, partner, which is exactly why we drive data connectivity and cross-border data – exactly for that reason. If I can leverage on the cyber security capabilities of Israel, if I can leverage on the technology coming out of China, from the [Silicon] Valley, or from Europe, how do we make sure that all of these competencies, capabilities can be brought together? We hope that it can be brought together in Singapore because of that triage between the demand, the supply and the regulatory environment.

FBP: Currently, we are living through a time of tension and, possibly, fragmentation of technology ecosystems. There's one that is Western and one that is Chinese. There’s so much unpredictability with the current administrations; things could change as quickly as these plans were taken on. How do you see it?

DH: These are broader political aspects, but at the end of the day, it's about seeing how to create collaborations. At the end of the day, it is collaborations, relationships and interoperability that would unlock the potentiality in the future.

FBP: But the current issue is how this technology is being used.

DH: Which is why these conversations need to be had. If you think about it, the pillars – and again, even just specifically within Singapore – that we talked about: data, infrastructure, ecosystem. But trust – trust means transparency. Granted, there's a threshold with respect to relevancy and so forth, but there is an importance to transparency about how things are being built, the practices that are adopted and how it's being used. This kind of transparency, I believe, is important for all participants, in terms of assuring that we are in a confident position to leverage the maximum feasibility of that technology or service.

FBP: Trust only exists within an atmosphere or environment of knowledge – of knowing, of being aware. I don’t know whether there is full knowledge of what China is doing on the part of their Western counterparts, and vice versa.

DH: Hence, going back to the analogy and the parallel with academia. There's a necessity for more knowledge sharing; that's what's critical. I will say the financial institutions need to become better frenemies. There's a necessity for a knowledge economy. We need to share – obviously retaining IP and trade secrets – and have that ability for transparency. Transparency means knowledge sharing, understanding, mutualisation and an acceptance of differences, because there will be cultural differences in terms of how privacy is perceived. It doesn't mean that one's good and one's bad. It just means they're different. That can only happen through conversation and discourse.

FBP: Okay, thank you.

DH: Thank you again, absolute pleasure.

 

