Sunday, 3 November 2024

“FIs today use AI for cost containment and not productisation as they should”

5 min read

Interview with Emmanuel Daniel

Emmanuel Daniel, chairman and founder of The Asian Banker, joined the World Artificial Intelligence Conference’s (WAIC) first European online forum to discuss “AI Ethics goals for the 21st century”, where he pointed out that financial institutions are still stuck in the industrial era, positioning AI as a cost-saving platform rather than a product.

The dialogue was hosted by Alexander Honchar of Neurons Lab and HPA. Other esteemed speakers were Ana Chubinidze of Adalan AI & AI Governance International, Emmanuel Goffi of the Global AI Ethics Institute, Steve Nouri of the Forbes Council of Technology, and Aishwarya Srinivasan of IBM.


The following is the edited transcript of the interview:

Alexandr Honchar (AH): I'm really happy to see you all here in our panel discussion. It’s great to start with introductions of who we have with us today. Emmanuel Daniel is the founder of The Asian Banker, Wealth and Society and BankQuality.com. He continues to serve as an advisor and consultant to various governments and institutions and as a highly regarded confidante in leadership circles. He is an esteemed global speaker on a variety of topics in the financial services industry and the development of the future of Asia. His first book on the future of finance is to be published by the end of this year. We have Ana Chubinidze. She is a social scientist building the bridge between social science and computer science, working on artificial intelligence (AI) with various organisations. She is the chief executive officer (CEO) of Adalan AI in Berlin, which focuses on AI governance, policy and ethics consulting.

We have Emmanuel Goffi. He is an AI philosopher and a consultant on ethical particularism and artificial intelligence. He is co-director and co-founder of the Global AI Ethics Institute, which aims at opening the debate on ethics to all philosophical stances, wisdom and perspectives coming from different cultures on our planet. He also applies this philosophy in practice, consulting Huawei on ethics and bridging different cultures from a business perspective. We have Aishwarya Srinivasan. She is an AI and machine learning (ML) innovation leader at IBM Data and AI. She is an advocate for open source technologies, currently a developer advocate for PyTorch Lightning, and has previously contributed to Scikit-Learn. She is an ambassador for a human data science community that originated at Stanford University. Steve Nouri is head of data science and AI at the Australian Computer Society, where he has evolved the way people look at AI and innovation. He is an expert with the International Organization for Standardization (ISO), a member of the Forbes Council of Technology, an Australian ICT Professional of the Year and an accomplished influencer on LinkedIn.

I am Alexandr Honchar, co-founder and ML director of Neurons Lab, a consulting company. Today, I will be humbly listening to all the experts who are with me here. I want to start us from the high-level strategy, philosophical and ethical questions, and then go a bit deeper into the details of how it actually works, not only from the technical side but also from the management side. How do we actually implement it? How do we control it? And how do we ensure that the philosophical and educational activities that we start to develop go in the right direction? Starting with philosophy, there is a thought that the philosophy of AI is outdated because we are just looking at the biases and patterns from the past instead of trying to develop some new artificial intelligence, and that in doing ethics we are basically trying to keep humans in the past. Does AI exist? Emmanuel Goffi, as the philosopher, you can start opening this topic.

Emmanuel Goffi (EG): That's a really tough subject. Obviously, there is no truth about AI and its potential reality. But what I can say from the philosophical stance is that there is a lack of philosophical reflection. A lot of people are doing ethics without really doing ethics, and that's problematic when it comes to artificial intelligence because, in doing so, we are avoiding really deep and complex questions. The difficulty that we all have is to bridge the needs of philosophy with the needs of operational managers, because the timescales are not the same. On one side, you need to go really fast and you don't have that much time to spend on philosophical questions. On the other side, my side, we must take much more time. We do need to go deeper into philosophy and to think about ethics not just as a communication tool, but as a process to think about AI, its risks and its benefits.

AH: You say that today, people who are not ethicists are working on ethics. What kind of problems does this bring? For example, an engineering approach or an entrepreneurship approach might lead us in the wrong direction from an ethics point of view.

EG: The point is that doing ethics without any kind of philosophical background is not really doing ethics. Ethics is a really complex field of study, so you need some background, knowledge and skills in order to do that. What I've seen so far is that a lot of people are just providing bullet-point solutions to really complex problems, and where I do not agree is that I don't feel our philosophy is re-emerging. It's a really superficial layer of philosophy that is emerging. We are not going deep into the questions that are raised by artificial intelligence. You were mentioning it at the very beginning: is AI something that really exists? Or is it only a kind of narrative, a speech hack? Because we don't know exactly what artificial means. We don't know what intelligence means. So I don't know how we can pretend that artificial intelligence exists. This is the kind of question that we can ask. And regarding, for example, biases: we are all discussing biases as if we are rediscovering that human beings are made of biases, that our own lives are made of biases. So when we're trying to just remove biases from artificial intelligence, we are not trying to mimic or replicate the brain or human intelligence, if it ever exists; we are trying to do something idealistic, in the sense that we're trying to build something that would be perfect, with no flaws in the system. These are the kinds of questions that we are not asking, because once again, we have to go fast, because companies need to go fast, obviously.

Sometimes we just put aside the very complex questions that do take time. In the end, what happens is what I call cosmetics: we are just hiding the reality behind a veneer of ethics. We are just summoning the vocabulary of ethics without really doing the work. I feel that in the long run this will pose a lot of problems, because we are not asking the fundamental question: what kind of society do we want for the future? And in each and every society, not only Western society. Where do we want to go? We don't know that. We don't ask that. We're just assuming that AI is here and we have to deal with it. Then we have to find solutions to existing problems, and this is really problematic because we will not find any kind of root solution that way.

AH: I want to ask Steve and Ana, because my bias is towards action, processes and metrics. I know that Steve is working on the ISO standards and regulations for AI. When you work on regulations in your companies, you require a plan, metrics and some actions. How do you merge the philosophical ideas with an actionable plan that you can actually measure and improve?

Steve Nouri (SN): Humans are biased, and we all know that not all biases are bad. We do have certain characteristics that are protected, and we should make sure that AI is not biased against them. We need to understand: what do we think about fairness? What do we think about all the pillars of ethics that we're discussing as a framework in ISO and all the standards organisations? They are essentially thinking about how they can come up with best practices and ideas that can become a roadmap for societies, for governments, for legal organisations to ensure and enforce some of these standards that are very important and crucial. But at the end of the day, if you want a definitive solution for this, I'm going to say, unfortunately, we don't have it yet. We are working through the process. There are a lot of great advances in terms of the explainability of AI and understanding all the characteristics of decision making, and this will help, but this is an open discussion that we're having. That's why these organisations exist: they want to bring experts together to think about the problems that are now open discussions in society.

Emmanuel Daniel (ED): Let me take Emmanuel Goffi’s position and try to make it practical. I'm here in Beijing, and I read the foreign newspapers and the Chinese newspapers on what some of the ethical issues in technology are. It's very interesting. Facebook, Twitter and all of the social media platforms that were created in the 2000s have, in my view – and I see this very graphically – a different set of AI issues than the platforms that were created in the 2010s. The platforms created in the 2000s were desktop-based. The focus there was on generating a lot of users and creating user-generated content. The whole idea was to create addiction to platforms. By the time we reach the 2010s in China, for example, you have WeChat, Alipay and Alibaba. All of these had their iterations in the 2000s, at the same time as the Western social media was starting. They basically copied them and tried to replicate them in China, but on the desktop model it didn't work very well. When it went on to mobile, it created a whole different ecosystem, so that today I see a fundamental difference between the challenges of AI being applied to Western social media and AI being applied to what I would call mobile-based social media.

The ethical consequences of AI

The Western-based social media is very subscription-centric, publishing-centric and content-centric, whereas the mobile-based platforms have community-creation, ecosystem-type issues, which have more to do with privacy than with siloing users into habits and addiction. And because I cover financial services more than any other industry, what I see coming is another iteration of AI, one with a high degree of personalisation. When you get into a world that is device-independent, you are going to start seeing a whole new set of ethical issues. For example, on a mobile device you onboard into a global positioning system (GPS) app, and the GPS app collects all of the data and makes it available to you, telling you where you are. But on a device-independent platform, your device might well be the collector of data that tells you where you are. In other words, you don't need to go to another device, platform or application to discover where you are. That then creates personal ethics issues: what data is personal to me that I have the right not to share with people, and so on. So this is my contribution to what Emmanuel just said: a practical evolution, as I see it, having taken place over time.

EG: There is also a cultural point of view that you have to take into account. Working with Huawei, as I was saying, the mindset in China regarding privacy is not the mindset that we have in the West, in France, in the US, etc. You have to take that into account when you're doing ethics, and that's really difficult. You have to withdraw from your own Western priorities about what is acceptable and what is not, what is good and what is bad, just to adjust. And the difficulty with ethics is that it must be really flexible. So far, what I've seen is much more a tendency to try to impose Western standards, based on Western issues and Western concerns, on the rest of the world. This is something I feel we should think about. Confucian thinking is not Christian thinking, which is not the same as Ubuntu thinking or Shinto thinking. And for those who are working with the Institute of Electrical and Electronics Engineers (IEEE), they have clearly stated that these kinds of philosophies, wisdom and spirituality are brought into the debate not just to enrich it but also to avoid having a unique one-size-fits-all solution to the really complex and diverse issues and cases that you mentioned, Emmanuel.

ED: In fact, what you just said about the values being different explains very clearly why certain community-based applications were very successful in China but may never have gotten off the ground in the US or in Europe. What's interesting, and what I see here in China even among my own employees, is that there is a greater sense of self-awareness and a greater sense of personal assets: what's mine and what's valuable to me. That is a global phenomenon and it's evolving. I see Chinese young people being increasingly mindful of what they consider to be personal assets. And AI is a technology that – here, Emmanuel, you might give me a dimension on this – is actually heading towards increased personal enrichment. At the same time, it's taking away personal rights; the technology is taking us that way. It's giving us rights and it's taking away rights at the personal level. That dimension is something all of us experience, regardless of where we come from. So we probably need to identify the elements that are universal. If we come back to the philosophical question of universality: what is universal? Personal rights, is that universal? Privacy, is that universal? Those are some of the issues.

Aishwarya Srinivasan (AS): One of the points that Emmanuel Goffi mentioned is that we are trying to pull the bias out of the system, and the reason is that we acknowledge that those biases exist and that it is not right to have them, because they take away people's fundamental right to equality: having equal rights, having an equal say in situations, and having equal access to products and services. That's why companies are driving hard and focusing a lot on building trustworthy AI systems. Some call them responsible AI systems or ethical AI systems. This is not as simple as just keeping ethics in mind. I come from a technical background and I have been helping to build these AI systems from a product standpoint, whether as a smaller application, something integrated into an organisational flow, or a business-to-customer or business-to-business service. In all these situations, we look at five different pillars, and these five pillars pretty much encompass the concerns we have been discussing. The first is fairness and de-biasing in the model, because we want to protect the fundamental human right of equal opportunities.
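
To make the fairness pillar concrete, here is a minimal sketch of one common check, the demographic parity gap: the difference in positive-outcome rates between groups defined by a protected attribute. The loan-approval framing, the data and the function name are hypothetical illustrations, not a description of any specific toolkit the panel used.

```python
import numpy as np

def demographic_parity_gap(y_pred, protected):
    """Difference in positive-outcome rates between the two groups of a
    binary protected attribute (0 = reference group, 1 = protected group)."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_reference = y_pred[protected == 0].mean()
    rate_protected = y_pred[protected == 1].mean()
    return rate_reference - rate_protected

# Hypothetical loan-approval decisions (1 = approved), for illustration only.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group     = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(f"Demographic parity gap: {demographic_parity_gap(decisions, group):.2f}")
```

A gap close to zero suggests the model treats the two groups similarly on this one metric; real de-biasing work would track several such metrics together rather than a single number.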

I feel that with time, these technologies are helping us understand some of the fundamental humanitarian problems in society. How can we see them through a data perspective? How can we see them from an evidence perspective? And how can we work to rectify them? Now that we have this data in front of us, giving us a heads-up that this is what is happening in society and this is where we are heading, we can take remedial steps to understand and rectify these things. The second pillar, after fairness, is robustness. This connects to what Emmanuel Daniel was saying: every country has different sets of rules, different priorities, different ways of handling technology, different demographics and various other factors. This is where we want the models to be robust and to adjust to different schemas in society. This connects to the third pillar, privacy, because when we develop these models, these AI systems, they are built on top of different regulations coming from different countries. All of these are steps being initiated by different countries or states to protect the data and the rights of their citizens. This is where we see a difference in governmental procedures. For example, some countries have data localisation embedded in their system, which poses a challenge for companies that want to pull data from users who are not in their home country. In that situation, federated learning plays a huge role, because the data stays decentralised while we train the models through a central server. Things like that are being addressed.
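
As a rough illustration of the federated learning idea mentioned here, where data stays decentralised while a central server coordinates training, below is a minimal federated-averaging (FedAvg-style) sketch using a simple linear model. The client data, learning rate and number of rounds are invented for illustration; production systems add secure aggregation, privacy accounting and much more.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client trains a linear model on its own data; raw data never leaves."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Server averages the clients' updated weights, weighted by data size."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Two hypothetical clients holding their own local data (illustration only).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
w = np.zeros(3)
for _ in range(10):          # ten communication rounds
    w = federated_average(w, clients)
print("Global weights after 10 rounds:", w)
```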

Another point I wanted to mention is explainability. Imagine a world where we are able to attach explainability to all our models, and we can say: because of your historical activities, your score was affected in a certain way and your premium has been increased or decreased. Attaching that explainable part to AI models helps people trust AI systems more. The final pillar is transparency. This is where we lay out all the information about the model and the services that we are building, and we tell users that their data is being used in these different scenarios. For example, if I use my phone, I have no idea whether the data from the alarm I set is being used somewhere. If I'm typing something on my keyboard, is that data being tracked somewhere? Where is this data being sent? How is it being used? That is something none of us probably knows at the moment. We know that there are trackers sitting on our phones and computers, but we don't know what data has been collected, where it is being used, or how the recommendations of these models are being sent back to us. That's where the transparency part fits in. And I promise that a lot of companies are actively working on all five of these pillars.
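
A minimal sketch of the kind of explanation described here, "because of your historical activities, your premium moved by this much", assuming a simple additive (linear) premium model where each feature's contribution is its coefficient times the customer's deviation from a baseline. The feature names, coefficients and baseline values are hypothetical; production systems typically use richer attribution methods such as SHAP on non-linear models.

```python
import numpy as np

# Hypothetical linear premium model (all numbers assumed for illustration).
features     = ["late_payments", "claims_last_year", "years_as_customer"]
coeffs       = np.array([40.0, 120.0, -15.0])   # premium change per unit
baseline_x   = np.array([0.5, 0.2, 4.0])        # population averages
base_premium = 500.0

def explain_premium(x):
    """Return the premium plus a per-feature breakdown of why it moved."""
    contributions = coeffs * (np.asarray(x, dtype=float) - baseline_x)
    premium = base_premium + contributions.sum()
    return premium, dict(zip(features, contributions.round(2)))

premium, reasons = explain_premium([2, 1, 6])
print(f"Premium: {premium:.2f}")
for name, delta in reasons.items():
    print(f"  {name}: {delta:+.2f}")
```

Because the model is additive, the contributions sum exactly to the difference from the base premium, which is what makes this style of explanation easy to communicate to a customer.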

AH: Emmanuel Daniel, in the financial industry, for example, how do you see all these promises, like explainability and fairness, playing out in your experience? Where do you see the cases where they actually come up most often and have been painful?

ED: Let me tell you this. In financial services, the business I'm in is to assess banks in all the regions in which we operate, so we actually collect data on the AI projects that banks undertake. I've just gone through the list of a number of the leading initiatives: chatbots, robo-advisors, transaction banking, fraud. Banking or financial services right now, whether in Asia or anywhere else in the world, is still very inward-looking. AI is not being deployed for the customers, even though it's being sold as being for the customer. The mindset of the financial services industry today is still stuck in the industrial era. It looks like a customer-enhancing experience, but at the backend it's a cost-saving platform. With a chatbot, you don't need to have a call centre. With robo-advisors, you reduce the expense of wealth managers, and you even build your own bias into your robo-advisors. So financial services is really a follower rather than a leader, and the agenda is being imposed on it. When we think about tokens today, for example, or cryptocurrencies, whichever side of the aisle you are on, they are creating a new level of personalisation. Actually, AI can even sit on the token and carry a lot more intelligence than it does right now. There’s a lot of work being done on that front, blockchain and all that.

Large companies are owning the AI community

I was actually curious about Aishwarya’s comment just now on accountability, and I wanted to ask, because this is an area where I don't have a good sense: OpenAI, for example, as a platform for greater transparency, accountability and a self-checking mechanism within the industry. When I saw what happened in open source technology – IBM bought one open source platform and Microsoft bought another – these were intended to be vendor-neutral and open to all of the world. They were never meant to be industrialised or owned by a corporation, as it were. The interesting thing about ethics in AI, with OpenAI for example as a source of compliance, ethics and a self-checking mechanism, is: what prevents that from being corporatised? Because when any element of integrity is corporatised, it becomes compromised, in a sense. So I was curious, when she was making those comments, whether any of you have an opinion on OpenAI and whether it helps to improve accountability in AI.

AS: One of the reasons why I really love this open source community concept is the push towards research. That's probably one of the missing pieces in the corporate world, because in corporate settings you are mostly driven towards business value, towards producing numbers. That's how businesses work, so that's probably one of the biggest focuses for every business. Whereas in a community where we are trying to build technology, open source is something that pushes more towards research. That's what pushes companies to work towards a similar goal. So when we talk about organisations like OpenAI, they are setting standards which, I wouldn't say must be followed, but which are being followed by various companies to maintain their standpoint on this kind of technology and the research behind it.

AH: It's very interesting, because I also see that AI is growing so fast because of this open culture, because this is a field where so many things have been open-sourced, where we can reuse them freely to build applications and try them in the real world. It's interesting that one of the reasons investors are starting to look at the open source side is the recent case of Hugging Face, which started as an open source project that developed tools for natural language processing (NLP) and has now attracted investment. What is the future of the business models for open source?

SN: The reality is that most of these open source projects are backed by corporates, or the corporates use them as a mechanism to gain visibility or to attract customers, and that's what we are seeing here. A lot of the projects being open-sourced by Microsoft, Facebook or OpenAI are used to deliver visibility and credibility for the company at the same time. On the other hand, many individuals enter these open source communities and then leverage the open source to build a business on the back of the library or platform and become corporates themselves. So that's the kind of trend we're seeing. That doesn't necessarily mean it's a bad thing. The world runs on numbers, everything needs some incentive, and that's the reality. You can't just run an organisation or community by itself without any sort of incentive. But it's still a great way to bring value back to the community, get them involved and let the research go on, instead of having the kind of closed-source environment we had back when I was a software engineer, when Microsoft was leading that kind of environment and everybody was competing in a closed environment, keeping IP and making sure everything was only available from the company.

AS: Some of the biggest technologies we see today did not start as business-value-generating schemes. People developed and worked on these technologies with a vision of what is possible. For example, one of the projects I keep following is Project Loon. It has not naturally driven a business or generated sales numbers, but it pushes us to ask whether internet access is available in all these places, and if not, what the possible ways are to help people living in communities and locations where internet access is not available. Imagine one day without the internet: for us, even a day without the internet seems scary. So if you think about people living in areas with no access to the internet, they might be missing a lot of information. That's where works like these, directed towards a cause, towards some social good, and not directly towards numbers, are also very important. That's where I feel the ideology of building something for the greater good matters, because it pushes the boundaries of what is possible and what is not.

AH: When you say, "I want to bring access to this technology, open source, to these regions," how do you measure it? This is where I try to push our questions beyond good ideas, and you are all good at talking about ideas. But when we start doing some work with our customers and partners, we start some activity, and then in a week or a month we need to see where we are. Are we getting closer to the goal or not? So with open source, what could be measured? Maybe it shouldn't be financial metrics, but at least we should know that every day, every week or every month we're going in the right direction.

AS: One way is to start with the mission statement. Any open source community or project has a mission statement: this is what we're trying to solve. For open source, one of the best ways to measure impact is through the community: how is it impacting the community? How many users do we have? How has it changed their turnaround time? Is my technology helping them do something faster, or do something better? Or is it addressing a problem or a community that was not really being addressed before? Getting feedback from the community is very important when it comes to open source projects.
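
As one concrete way to track such community signals over time, here is a small sketch that reads a few public adoption metrics for a repository from the GitHub REST API. The repository named is only an example, and which metrics actually matter depends on the project's mission statement.

```python
import requests

def repo_health(owner: str, repo: str) -> dict:
    """Fetch a few public community-adoption signals from the GitHub REST API."""
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "watchers": data["subscribers_count"],
    }

# Example: record the same numbers every week to see whether adoption is growing.
print(repo_health("huggingface", "transformers"))
```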

Government funding vs. private funding

ED: One question of ethics coming out of this conversation is: who should fund an ethics project? Should it be private enterprise or should it be the state? We are all on different continents right now: the US, Europe and Asia. And the philosophies are so different on each of these continents. In the US, a company like IBM is able to fund non-profit-making projects for a long time, for 10 years, and absorb that burn rate. In Europe, the state seems to be very involved in subsidising a lot of these programmes. In China, the state becomes involved, and it becomes a geopolitical issue, so the state is not always consciously or obviously involved; instead, you have companies like Huawei who say that they are funding it out of their own resources. The thing is the role of the state, the role of private enterprise and the role of the individual, and each of these is trying to affect the others.

What's interesting in financial services is that banks are really bad investors in new technology and bad investors in the infrastructure for new technology, including ethics. They usually buy in after it's been created. And the one thing that makes banks very bad investors in ethics is that financial services is itself a regulated industry. In other words, there's already a bias built in: a bias to protect the continued survival of the industry. So they are users of AI rather than builders of AI. And whatever technology you build outside of financial services, you’ve got to hand it over, because the regulators will make you hand it over. So what is an acceptable neutral platform that is fair, as you say, and that is also accountable to the end user? All the infrastructure that Aishwarya mentioned just now, explainability and so on, is good intention. But if it is in the hands of private enterprise, it's done with profit as the motive. If it's in the hands of the state, there are philosophical issues which are different from commercial issues. So I'm curious how that will evolve.

AS: If I were to look at the United States, or Western universities, it's mostly driven by tech companies. So I feel that all these companies, like IBM, Google, Facebook and Netflix, are playing their role in building systems which are more robust and more trustworthy. I do not really see a concern in the fact that companies are driven by business value, because they also want to build resilient platforms. They don't want their business goals only in the short term; they want them in the long term. And that's where it is also beneficial for them to attach all these components to the systems they are building, which will help them stay stable over a longer period of time.

EG: I don't think it's really relevant to make a distinction between states and private companies. Most of the time they work hand in hand, definitely. And whether it is private or public, they all have certain benefits they're looking for. It can be financial benefits; it can be strategic benefits in terms of diplomacy. Whatever the actor, they will have something in mind. That's the big issue we have in philosophy: most of the time, all those principles that have been mentioned, explainability, transparency and so on, are presented as deontological tools, meaning that they are principles you cannot violate because they are fundamental for human beings, these kinds of things. But on the other hand, what we see is that the reality behind it is really consequentialist. When you're talking about explainability, for example, or transparency in AI, it's really nonsensical, because transparency doesn't make sense if the people to whom you are showing what is going on in the black box are not able to understand it.

Most of the time, I make this comparison with going to the mechanic. He opens the hood and shows me what is happening in the engine, explaining everything, but I'm not able to understand. It is perfectly transparent, but that does not change anything, because I'm unable to understand. So you can be really transparent. I've tried to go through this kind of algorithm: a French newspaper has released its algorithms, and if you're not a tech person, if you're not a computer scientist, you will not understand. They can be as transparent as they want, but it will not change anything. So all this transparency and explainability is a way of moving responsibility and accountability onto the shoulders of the user, saying: you were aware of that; you were shown what is happening; so you cannot say you were not aware of what was happening with your data. It is much more that than a really deontological point of view saying that people have to know, because people cannot know if they're not able to understand. When I look, for example, at the cookie settings when I go to a website, I cannot deal with them, so I just accept them. So if I do something wrong, what I will be told is: you were aware, you had the choice, you could set your own cookies the way you wanted. But I'm unable to do that. So it's all about vested interests, whether public or private; that does not change anything.

The big issue we have here is that most of the time, real people, people from the street, are not in the debate. They are not participating. They do not have any kind of say. You have people who are supposedly representing the people: let's say, in France, you have the government and the authorities. So all this wording that has been created, transparency, trustworthy AI, once again, it is mainly cosmetic, it is narrative, it does not mean anything. Professor Thomas Metzinger was part of the high-level expert group on AI writing the Ethics Guidelines for Trustworthy AI. He left because, as he wrote, at the very end it was just a narrative, just a bedtime story for consumers, to artificially create a kind of trust. But AI itself cannot be trusted because it has no intention. It's not autonomous. You can trust the people who are developing it, the people who are using it and deploying it, but you cannot trust the system, because trust is based on the probability that the other agent will or will not cheat on you. So if we consider that AI is not autonomous enough, that there is still a human in control, we cannot talk about trustworthy AI. It's a way of moving the focus from the individual people behind those algorithms and systems onto a technical tool. When I take a plane, I don't wonder whether I can trust the plane. All this wording, and this is really philosophical, must be questioned, because we all take it for granted. All the time, I hear exactly the same wording, the same arguments, the same principles, but with no in-depth analysis, without just asking: what does that mean?

Trustworthy AI doesn't even mean anything, because AI does not exist. Artificial does not mean anything, because intelligence is impossible to define. So how can something that you cannot define be real? This is the kind of question that is always worrying: at some point, we really have to ask what the reality behind it is. I'm just wondering why we're not asking these kinds of questions. And I feel it's not only a matter of the people who are involved, because it's not enough to say: you were aware of the rules, you were aware that you could set your own cookies, you were aware because it was transparent, it was explainable and all that. Most people, myself included, are just scrolling without even thinking about it. We just accept all the cookies, because we don't have time to go in and set the cookies for each website that we consult. So this is mainly cosmetics. It's just a veil. Even if the intention is good, sometimes it doesn't lead to anything good at the very end.

AH: I see that in our conversation we are literally trying to balance two things. One is wealth creation, economics and all the rest of it; and then, after the wealth is created, the need to create some value, maybe human value. That's hard to do, as it gets to the thread that runs between philosophy and action, wealth and ethics. I want to ask everyone to give two recommendations, two books, blogs or videos: one on the philosophical or ethical topic, and one on where to go, what to do, what the next step can be, so that our listeners can go to the first source and get motivated and get a direction, and then go to the second source, and that can be the immediate value of what we discussed.

EG: My advice would be: just be curious. Make your own opinion; build your own opinion. Don't just buy things that you find here and there, and sometimes try to look outside the box. In terms of philosophy, go back to the philosophers instead of reading analyses of philosophers. It’s great to have people who interpret, but it's also great to build your own opinion. So just be curious about whatever you want, but read a lot and read diversely.

SN: For me, I would go with a book. I know I haven't read a lot of them, but one I really liked was “AI Superpowers: China, Silicon Valley, and the New World Order” by Kai-Fu Lee. It gives a little more understanding of what's happening in the world and what to expect. One thing that I'm very worried about is the superpowers, especially using AI in a competitive way, or, I would say even bolder than that, weaponising it. That's something we need to be very aware of, and people need to be very conscious of it. Ask questions, following Emmanuel’s advice. Always think outside the box and make sure you don't take anything for granted. Things will change quickly and will affect everybody's lives. AI is a super tool. Whatever we call it, AI, machine learning or anything else, it is very powerful and will have a huge impact. If it goes wrong, it will go hugely wrong.

ED: I've read Kai-Fu Lee's book. The interesting thing is that all the books that are very popular on AI at the moment, except for Kai-Fu Lee’s, have to do with ‘AI is going to change our lives, we shouldn't be afraid, it's not going to take away our jobs’ and so on. How do we deal with AI? How do we make use of algorithms? I'm also familiar with “Life 3.0: Being Human in the Age of Artificial Intelligence”. So many of these books are designed to help us imagine what's going to happen to us as a result of AI – what's going to happen to my job, my lifestyle, am I going to be fooled by an algorithm – that sort of thing. And the thing about Kai-Fu Lee’s book is that there are only one or two chapters in there on AI; the rest of it is the history of fintech and platforms in China.

The book that I haven't read is a book about what AI makes corporations into. If you look at what platforms made corporations into, at what platforms did to Facebook and Twitter, they turned them into governments. And what are governments asking them to do? They are asking them to moderate society, and governments are outsourcing that responsibility to these platforms because they are all-encompassing, they are all-powerful. When AI becomes increasingly institutionalised, the question is: what is the role of business or industry, and whose government operates in that realm? Yes, you can be self-administering, you can be self-regulating, but the realm of self-regulation has passed now. Anything that's new and all-encompassing needs an external regulator. We should restate the end goal we are trying to achieve in ethics and then work backwards to understand what kind of books need to be written in the first place. The primer on what AI is going to do to our lives and our jobs, that has passed.

We are now entering a realm of what AI is potentially going to do to society as a whole. And that's the final goal, the top prize in AI ethics: what is the ultimate destruction possible when AI ethics breaks down? In the platform era, we've already seen what's happening: there is great confusion in governance, and AI is going to accentuate that. It's going to create new players. Many governments missed the opportunity to govern the platform era as it was evolving, and from that we've learned that we now need to figure out what governance should look like as AI becomes more institutionalised and as new sets of corporations take over the running of AI as a business and as a platform. That's the kind of book I'm looking for.

AH: Now that we have ideas and plans, we can wrap it up and keep working on making this useful and ethical, and on opening doors to our best potential as a human species, not just as cultures, people, individuals or countries. Thanks, everyone.


Institutions: Neurons Lab, HPA, Adalan AI & AI Governance International, Global AI Ethics Institute, Forbes Council Of Technology, IBM, ISO, Huawei, Institute Of Electrical And Electronics Engineers, Netflix
Region: Global
People: Emmanuel Daniel, Alexander Honchar, Ana Chubinidze, Emmanuel Goffi, Steve Nouri, Aishwarya Srinivasan, Lee Kai Fu, Thomas Metzinger