Interviewed by Neeti Aggarwal
Ajit Kanagasundram, former managing director, technology, Asia Pacific, at Citibank, discusses how to manage the challenges and critical success factors in the implementation of a global project, as well as the relevance of the mainframe as reliable transaction processing hardware and how it can be integrated with open systems.
Neeti Aggarwal (NA): Good afternoon! It is 4:00 pm in Singapore and 9:00 am in London. Welcome to The Asian Banker RadioFinance, the online broadcast platform that aims to enhance the understanding of the industry by bringing together senior opinion leaders to examine current critical issues. It comprises an incisive 15-minute panel discussion with a short question-and-answer segment in which our call-in audience can participate. Today, we have with us an expert panellist in banking technology, Ajit Kanagasundram, who built the global cards processing hub capability for Citibank and spent over two decades at the bank before retiring as a managing director of cards and banking applications. Thereafter, he also worked at DBS as managing director (MD) for banking technology until 2009.

To share a little more about his experience and achievements at Citibank: Citibank set up a regional processing hub in Southeast Asia in the 1990s. It initially focused on Southeast Asian countries; by 1996 it had started serving all of Asia, Japan and Australia were eventually added, and by the late 1990s it was extended to Europe and Latin America, covering 38 countries. This was one of the most notable projects of this scale in global processing, and the entire development was built and headed by Ajit Kanagasundram. Ajit has shared details of his experience with different technology implementations at Citibank in his memoirs, part of which has been published as a series of articles on our website. These reveal his insights into the people, processes and decisions made over the years, and they hold considerable lessons for the generations to come.

The pace of technology development is rapid as banks increasingly seek integrated systems with agile cards processing capabilities and the flexibility to meet the demands of new-age customers at a lower cost. The industry is moving towards rapid digitisation, open banking enabled by APIs, and cloud computing environments. On today's RadioFinance session, we will discuss the following based on Ajit's experience: managing the challenges and critical success factors in the implementation of a global project; in the current changing technology environment, the relevance of the mainframe as reliable transaction processing hardware and how it can be integrated with open systems; and the factors that heads of technology need to consider when assessing and choosing the technology that will fit their organisations.

Good afternoon Ajit and welcome to the session. Ajit, you implemented a global processing hub from Singapore. This was one of the few notable cards processing operations of that scale, if I understand correctly?
Ajit Kanagasundram (AK): I don't think anyone else has done that. And there was no grand plan to do the whole thing out of Singapore; it just came about. First came the ASEAN countries in Southeast Asia, then North Asia: Taiwan and Hong Kong wanted to join, then Korea and then Japan, which was the first time any Japanese institution was processed offshore, because the Bank of Japan has very strict rules on security and reliability; then Australia, and then of course the Middle East. The final stage was Western Europe and Latin America, which, as you correctly stated, started in 1999, but we only finished it in 2003.
NA: Okay. Ajit, we would like to understand from you: what was the biggest challenge you faced during this implementation? It was quite a big project, spanning countries across the globe. And what can bankers today learn from your experience?
AK: Well, let me tell you what we had. First of all, from the very beginning, this project was well funded. There was no question of cutting corners, because the head of the bank at that time, a person called Rana Talwar, believed that technology would give Citi a competitive edge. So we were able to invest the way we wanted to in both the hardware and, more importantly, the people. The second reason why this worked was that New York had invested in a global communications network called the GPN; Citi was the second corporation after IBM to do this, using very high bandwidth fibre optics. So when we decided, for example, to roll out to Europe or Latin America, those businesses did not have to invest in telecommunications, because the connectivity was already there. The most important factor, however, was the team I was able to build in 1989, which I selected myself and which stayed with this particular application until I left 15 years later. This is important because when a team accrues experience, both technical and business knowledge, things get done much faster and much better. So I would say these were the main success factors.
NA: Okay, great. And with regard to meeting unique country-specific requirements: obviously, in expanding to 28 countries with different interfaces, each had its own requirements. How did you manage that, and how did you keep the cost down while doing it?
AK: Right. First, to address the question of cost: for the cost per card, we took the complete processing cost as well as the application development cost, aggregated it and divided it by the number of cards, and it came down from about 850 to 3 by the time I left. That is a function of efficiency and, of course, of the increasing volume of cards.
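To make the unit-cost metric Ajit describes concrete, here is a minimal sketch; the transcript does not give a currency or the absolute spend, so the figures below are purely hypothetical.

```python
# Minimal sketch of the unit-cost metric described above. The transcript gives no currency or
# absolute spend, so all figures here are hypothetical and purely illustrative.
def cost_per_card(processing_cost: float, development_cost: float, cards: int) -> float:
    """Aggregate processing and application development cost, divided by the number of cards."""
    return (processing_cost + development_cost) / cards

# The same aggregate spend spread over a growing card base drives the unit cost down.
print(round(cost_per_card(6_000_000, 2_500_000, 10_000), 2))     # small card base -> 850.0
print(round(cost_per_card(6_000_000, 2_500_000, 2_833_333), 2))  # large card base -> 3.0
```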
NA: So, how did you meet the unique country-specific requirements or interfaces?
AK: That's a very good question. You see, the way we did it was that we had region-specific code bases. For example, North Asia: Japan, Taiwan, China and Korea were run on one code base. Then Southeast Asia, then the Middle East, Western Europe and Latin America. So we had about six code bases. Then, within each region, each country would have its own small, country-specific code changes. This is a very important point, because after we did this project, Citi went into another project, which I am going to talk about, where they tried to come up with a single code base across all businesses, and they failed. It is not intuitively obvious, but you should settle for getting most of the advantages of a common code base at, say, 90% commonality, because chasing that last 10% will kill you. It will kill you in terms of complexity; it will kill you in terms of regression testing. Imagine you have the whole world on one code base and the UAE has to make a small change: everybody has to test it, right? So we were sensible. We had five or six countries in each code base and then we made small individual changes, right? With modern tools for library management and so on, it was possible; it is not rocket science. This is one of the important lessons, because higher management, who do not know the details, think: if six code bases save you 70% of the cost, why not go for one code base? And that is a very dangerous step.
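As an illustration of the structure Ajit describes, a shared regional code base with small, isolated country deltas, here is a minimal sketch; the parameter names, values and country groupings are hypothetical.

```python
# Illustrative sketch only: one shared regional code base with small, isolated country-specific
# deltas, so a change for one country does not touch the code every other country runs on.
# Parameter names, values and country groupings are hypothetical.
REGIONAL_DEFAULTS = {
    "statement_cycle_days": 30,
    "min_payment_pct": 0.05,
}

COUNTRY_OVERRIDES = {
    "JP": {"statement_cycle_days": 31},  # a small country-specific change
    "KR": {"min_payment_pct": 0.10},
}

def effective_config(country: str) -> dict:
    """Regional defaults plus the country's own small delta."""
    config = dict(REGIONAL_DEFAULTS)
    config.update(COUNTRY_OVERRIDES.get(country, {}))
    return config

print(effective_config("JP"))  # regional behaviour with Japan's delta applied
print(effective_config("TW"))  # pure regional behaviour, untouched by Japan's change
```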
NA: And you were also mentioning another project at Citibank; I am sure you are referring to the Global Go to Common project, which I remember reading cost something like $1.4 billion and was later shut down after five years of increasing complexity, escalating costs and delays. So, what was the reason behind this failure?
AK: There are a number of reasons. You see, if you want everybody on the same release of a single code base, every new change has to go in for everybody. So let me repeat myself: the code becomes more complex, because you have to have special subroutines for each country. For example, and this is a fact: in Latin America, Colombia has a very peculiar way of calculating interest. Other countries do not want it, but the program has to carry that subroutine anyway, so there is time and complexity wasted there. Secondly, span of control. You have to get requirements from 28 countries. You have three releases a year, and not everything is going to make it in. So now 28 countries have to agree on what goes in, whereas if you have six or four countries, that is much easier. The next one is regression testing. If you have everyone on one code base and you make one small change for one minor country, everyone has to test it and ensure that they are not affected. This is not only a technology requirement, it is an audit requirement to ensure that everything is tested, and it wastes an enormous amount of time. What I am saying is, use common sense here: you want regional code bases, not a single code base. Now, this does not prevent you from rolling out change quickly. I will give you an example. In 2001, we came out with a new application called FEWS, the fraud early warning system, right? And we rolled it out across Asia, the Middle East, Europe and Latin America within two weeks. You cannot get faster than that, and the only time-limiting factor was for people to test it, to make sure it was okay. So this does not prevent you from rolling out products quickly, and there is no cost or quality impact the other way either.
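The regression-testing point lends itself to a small illustration; the sketch below uses hypothetical code-base groupings standing in for the real regional split.

```python
# Illustrative sketch of the regression-testing argument: every country sharing a code base with
# the country that requested a change has to retest. The groupings below are hypothetical and
# truncated; the real hub covered 28 countries.
SINGLE_CODE_BASE = {"GLOBAL": ["AE", "SG", "HK", "JP", "KR", "CO", "BR", "GB", "DE"]}
REGIONAL_CODE_BASES = {
    "MIDDLE_EAST": ["AE"],
    "SOUTHEAST_ASIA": ["SG", "HK"],
    "NORTH_ASIA": ["JP", "KR"],
    "LATIN_AMERICA": ["CO", "BR"],
    "WESTERN_EUROPE": ["GB", "DE"],
}

def regression_scope(code_bases: dict, changed_country: str) -> list:
    """Countries that must retest when `changed_country` makes a change."""
    for countries in code_bases.values():
        if changed_country in countries:
            return countries
    return []

# A small UAE-only change: everyone retests on a single code base, only the UAE on regional ones.
print(regression_scope(SINGLE_CODE_BASE, "AE"))     # the whole shared code base is in scope
print(regression_scope(REGIONAL_CODE_BASES, "AE"))  # ['AE']
```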
NA: That brings me to another topic. Technology development, as you know, is very rapid. We have seen systems move from the mainframe to the cloud, with banks looking for lower cost, higher efficiency and agility in their systems. You have been an advocate for mainframe technology, which was also used at Citibank, yet we have seen banks shifting towards open systems. I am trying to understand what the advantage of one system over the other is, and what type of system would work best for what type of bank.
AK: Alright, the first thing is that it is not one size fits all; you will have a mix of systems. We must remember that the mainframe is a technology that is more than 50 years old: IBM introduced the System/360 back in 1964. So, is it still relevant? Let us take Singapore. In Singapore, Citibank, Hong Kong Bank and DBS have their card applications on the mainframe. Standard Chartered, UOB and OUB have outsourced their card processing to a French company, which is running it on a mainframe. So, in Singapore, one hundred percent of banks have their card applications on the mainframe, right? Now, why cards? Because it is a heavy, transaction-intensive application. The number of transactions during the day will double or triple in the evening, and the number of transactions on an average day may go up by 15 to 20 times on the day before Chinese New Year or Christmas. So you want a machine that is capable of running through those volumes of transactions. And as far as I can see, that is still the mainframe, even now, because if you look at all the main banks in the UK and the US, that is still the case. India may be an exception, but all the big banks in the UK and the US, and one hundred percent of the banks in Singapore, run their credit card transaction processing, that is, interest calculation, authorisation, statement generation and so on, on the mainframe.
Now, why is this? Because of the cost per transaction: if you have a high-volume application, you have to take everything into account, the cost of adding extra security and all of that, and the mainframe still comes out cheaper. Let me give you an example. Five years ago, the Korean banks, which have a common processing centre, moved to open platforms; I will not mention which software, but an open platform. They found that at very high transaction volumes nothing broke down, but a few transactions would go missing. You cannot have that in a financial system. You have to have 100% reliability, 100% of the time. They spent a long time, about six months, investigating it, but in the end they went back to the mainframe. Because, remember, on 50-year-old hardware the software, OS/390, MVS and CICS, has been tested so much, by so many people at all the banks out there, right? So it works.
But the mainframe is not suitable for things like the new internet banking where you need fancy graphics, and you do not use it for mobile banking. So the two will co-exist. In Citibank, the mobile banking and internet platforms are on open systems, connected by APIs to the mainframe, which keeps the financial records of the bank. It works quite well. You need smart people to do the interfaces, to make sure that it works well, but it works. The second reason why the mainframe should keep the financial records of the bank and the postings to your financial ledger is security. Now, there are different levels of security, but what IBM uses is something called… which is the only commercial security product with the highest US Defense Department rating. In addition, two weeks ago IBM announced that on their mainframes they are going to implement security at the bit level using hardware encryption, which makes it almost impossible to break.
Now, we read all the time that PayPal has been hacked, Uber has been hacked. That is because they are on open systems, which are much more user friendly in many ways but have this problem that you cannot secure them 100%. So, as I have said, it will be a long time before banks move away from IBM mainframes, because of security, scalability, and the fact that you can be absolutely sure of transaction integrity. The downside is that you are tied to one vendor; IBM is the only game in town, and that is one of the negatives. Up to 15 years ago the Japanese, Hitachi and Fujitsu, were making compatible mainframes, but they dropped out of the business when IBM introduced the CMOS processors. So today IBM is the only supplier of hardware and software if you are on the mainframe. Many managements regard this as a downside, because IBM has all the pricing power, and this is an issue that is a matter of concern for me as well.
NA: Okay, quite interesting and quite insightful; it explains a lot. So, would you say that for card transaction processing the mainframe is better, but for internet, mobile and other front-end applications one should look at open banking systems? Would that be the way to go?
AK: Yes. For example, take internet banking: you have lovely graphics, radio buttons, all that kind of stuff, and you do not do that on the mainframe, partly because it is expensive. You do it on an open platform. But when it comes to the customer, say showing my account balance, you go to the mainframe through an API, fetch the information and display it nicely.
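The pattern Ajit sketches, an open-systems front end pulling account data from the mainframe system of record over an API, might look roughly like the following; the endpoint, field names and token handling are hypothetical, not any bank's actual interface.

```python
# Rough sketch of the pattern: an open-systems front end calls an API that fronts the mainframe
# system of record. The endpoint, field names and token handling are hypothetical placeholders.
# Assumes the 'requests' package is installed.
import requests

MAINFRAME_API = "https://api.example-bank.com/accounts"  # placeholder URL

def fetch_balance(account_id: str, token: str) -> str:
    """Ask the mainframe-backed service for the current balance and format it for display."""
    response = requests.get(
        f"{MAINFRAME_API}/{account_id}/balance",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    response.raise_for_status()
    data = response.json()  # e.g. {"currency": "SGD", "balance": 1234.56}
    return f"{data['currency']} {data['balance']:,.2f}"

# The front end renders the returned string; the ledger itself never leaves the mainframe.
```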
NA: And are there no issues in developing the interfaces or integrating the mainframe systems with the new applications? How can banks achieve that?
AK: There are standard protocols you can use. As I have said, you need smart people to do it, because you need to make sure that everything is in sync, right? For example, if a customer is looking at his account and meanwhile a credit arrives from somewhere else, when do you update it? How do you update it? There are a lot of issues like this, but smart people can solve them.
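One simple way to handle the scenario Ajit raises, a credit posting while the customer is viewing the account, is for the front end to track a posting sequence number and refresh when it changes. The sketch below is illustrative only; the service, endpoint and field names are assumptions.

```python
# Illustrative sketch of one way to keep the display in sync: the front end remembers the last
# posting sequence number it showed and refreshes whenever the mainframe reports a newer one.
# The service, endpoint and field names are assumptions, not a real interface.
import time
import requests

def watch_balance(account_id: str, token: str, poll_seconds: int = 30) -> None:
    """Re-display the balance whenever a new posting (e.g. an incoming credit) appears."""
    last_seen = -1
    while True:
        response = requests.get(
            f"https://api.example-bank.com/accounts/{account_id}/balance",
            headers={"Authorization": f"Bearer {token}"},
            timeout=5,
        )
        response.raise_for_status()
        data = response.json()  # e.g. {"balance": 1234.56, "posting_sequence": 42}
        if data["posting_sequence"] != last_seen:
            last_seen = data["posting_sequence"]
            print(f"Balance updated: {data['balance']:,.2f}")
        time.sleep(poll_seconds)
```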
NA: Yes, it always boils down to people, process and technology at the end of the day.
AK: Absolutely.
NA: So, after Citibank, you were with DBS Bank, and I guess they had a different kind of set-up. Was their core banking on a different system or on a mainframe?
AK: I will tell you a very interesting story. When I joined, their core banking was on the mainframe. Surprisingly, the Singapore system dated back to the 1970s and they still had it, while in Hong Kong DBS was on something called Systematics, which is what Citibank used. Their cards were also on the same platform. They decided to move, not the cards, but the banking side to the Infosys system, Finacle, and they got Accenture, the most expensive technology project managers, to do the transition. They started the project a year before I joined. When I left three years later, the project was still going on, and a year after I left they cancelled the project, writing off around S$500 million. Again, they made the same mistake: they wanted Hong Kong, Singapore, Indonesia and India on a single code base, right? For the three years I was there, they were still collecting requirements, and the requirements kept changing. And if Hong Kong does it one way and Singapore another, you start arguing about which is the better way of doing things. Now, they are still doing it. They are trying to move to Finacle, but they are doing it in-house, a small piece at a time, and eventually they will move, first Hong Kong and then Singapore, from their existing mainframe to Finacle. Infosys was not responsible for the failure, because what they did code, they did quite well. It was the concept and the execution that let it down.
NA: Okay, that is interesting. You also mentioned Accenture being involved in the project and the outsourcing of the system. At Citibank, as you mentioned before, there was a lot of in-house development, while at DBS there was a lot of outsourcing of systems. How do the two situations differ, and what do bankers need to keep in mind in an in-house versus an outsourced environment?
AK: It is an interesting question. First, developing in-house capability requires years. You need a team that is not only technically good but also understands banking, and the two are not easy to marry; it takes time and all kinds of effort. But once you have it, you can do so much more, because it is so much more productive than outsourcing. If you outsource, it is a one-way street: you lose your IT, you lose your people, and it will take you five years to rebuild, alright? Many banks outsourced ten to 15 years ago, and all of them regretted it. JP Morgan Chase in the US gave everything to IBM, and when the current chairman, Jamie Dimon, came in, one of the first things he did was bring it back in-house. At Citibank we were fortunate: we had top management support from the bank, and we were able to build up a team and pay them sufficiently well to retain them, right? That was definitely a huge advantage for us. Of course, you can misuse this, as they did with the Global Go to Common, and give the team the impossible task of a single code base. But if you use them properly, they are extremely effective. I am personally against complete outsourcing. You can outsource coding: for example, if you have the expertise to do the design and the testing, fine, the coding can be outsourced, not all of it, but most of it. But you should not give away your intellectual property, the ability to design and to code. Also, technology does not stand alone; there is a huge amount of interaction between the business and technology. For example, if you are outsourcing and dealing with a vendor, you give them a set of requirements, everything is locked down, you sign so many agreements, and they go off and develop it. It might take them six months or a year to deliver. Meanwhile, conditions in the market change, your guys want to make a change, and you have to go through a lot of process to change it. If it is in-house, you can say, "Hey, I don't want this rewards feature like this, I want it like this, quickly. Do a prototype, let me see what it looks like." You cannot do that with a vendor; every time you do something, there has to be a legal contract, right? And the number of outsourcing contracts that have ended up in the courts, you would not believe it, especially in the US. So my advice to bankers is: keep your core technology team together and keep it in-house.
NA: What about the cost in that scenario? Isn't that a substantial burden in terms of having a—
AK: If you look at the cost per head, it is much higher. But if you look at the total cost of a project, and it will change from project to project, it is cheaper. We did the global cards project for $150 million, right? It is a mistake to just look at the headline rates: okay, in India or China the cost per head is $5,000 and in Singapore it is $15,000, therefore China or India must be cheaper. No! You have to look at the total cost of the project. So, as I have said, it is much cheaper to do it in-house, with some of the coding outsourced. Yes, the coding can be outsourced, no problem.
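A small illustrative calculation of the point about per-head rates versus total project cost; the headcounts and durations below are hypothetical, and the per-head figures from the conversation are treated as monthly rates purely for illustration.

```python
# Illustrative arithmetic only: a lower cost per head does not automatically mean a cheaper
# project. The per-head rates echo the figures above (treated as monthly rates for illustration);
# the headcounts and durations are hypothetical.
def total_cost(cost_per_head_per_month: float, heads: int, months: int) -> float:
    return cost_per_head_per_month * heads * months

in_house   = total_cost(15_000, heads=40,  months=18)  # small, experienced in-house team
outsourced = total_cost(5_000,  heads=150, months=30)  # larger, cheaper team, longer schedule

print(f"In-house:   ${in_house:,.0f}")    # $10,800,000
print(f"Outsourced: ${outsourced:,.0f}")  # $22,500,000
```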
NA: Coming to my last question. Given the pace of technology development, there is a need for strong integration between systems and, as you mentioned, between multiple vendors and between the bank and its vendors. Banks are moving more towards open banking and API-based environments, and legacy systems have to coexist with new technology in an integrated landscape. In this environment, what would be your advice to current heads of technology, especially with regard to technology architecture and how they can define it with the future in mind?
AK: Technology architecture for banks. My first piece of advice is to keep it simple. Technology architects are prone to building very complex models, and when you try to implement them they cause a lot of problems. So keep it simple. The second thing is openness. You see, even IBM, which resisted this for a long time, is now making its systems compatible with open architecture principles, which are really determined by industry groups and not by any particular vendor, right? They have to do it in order to play in the new game. So this is really a good time, in the sense that in the past, say 20 years ago, working with IBM it was very difficult to integrate with everything else, but that is not the case anymore. Even IBM subscribes to open data interfacing, and they even run an operating system like Linux on their mainframes; Linux runs on IBM, so you get it for free. This is the way things are moving, and I think it is a very positive thing.
NA: Okay, great. Let me concisely summarise what we discussed, because quite a few insights came out. First, when you are building a common architecture or technology platform, there needs to be a reasonable amount of standardisation, keeping interfaces standard and having common code to a certain extent, but not a single code base; regional code bases can make the rollout of global projects much easier. The other thing we talked about was the mainframe being much more scalable, especially at high transaction volumes, with reliability, security and assured transaction integrity, but less suited to the new requirements of internet and mobile banking, and tying you to a single vendor. Banks therefore need to identify what works best for each kind of system depending on their individual requirements: possibly mainframes for card transaction processing, as you have recommended, and more open systems for internet and mobile banking. We also discussed keeping the architecture simple and open, with open interfaces, and of course keeping in mind the skill set required for a project, which can be a very critical factor in its eventual success, especially for an in-house project like the one done at Citibank. I think we covered quite a lot of ground in this session.
AK: Thank you.
NA: And I have to thank you, Ajit, for your insights in this session today. I also want to thank the audience for calling in. We hope that you found today's session insightful and useful. If you missed any part of the session, a recording will be available on The Asian Banker website, so do visit the website if you want to listen to the playback. Until the next event, we wish you all a very good day. And thank you again, Ajit.