In this interview transcript, we speak with Kiran Muthal, a leading expert in AI governance and risk management. Kiran is a management consultant at EY specializing in financial risk management and responsible AI.

He has over 10 years of experience in the field of financial risk management, and has worked with a wide range of organizations, including Fortune 500 companies, government agencies, and research institutions. He is a frequent speaker at industry conferences and has published numerous articles on AI governance and risk management.

In this interview, Kiran discusses the importance of AI governance, the challenges of managing AI risks, and best practices for developing and implementing AI governance frameworks.


Tell us about yourself.

Hi, I am Kiran Muthal. I work for EY as a management consultant based in Amsterdam, specializing in financial risk management and responsible AI. Currently, I am advising large banks on setting up risk management frameworks, including AI risk and AI model governance. I also specialize in AI system validation and the development of responsible AI systems.

Outside my primary job at EY, I am on the board of the Dutch chapter of a professional association for anti-money-laundering professionals. I am also on the board of a start-up at TU Delft focusing on emerging technologies, and I serve as a guest researcher at Leiden University. In addition, I participate in think-tank discussions on the governance of emerging technologies, their societal impact, and regulation.


How do financial services leverage AI? Any prime examples?

In financial institutions, most AI applications are in the risk management domain. Beyond risk management, AI is also applied in investment decision-making and customer-experience-enhancement models. These AI models are used for predictive analytics, simulations, and automation.

Risk management: AI models can identify and quantify credit, market, and financial-crime risk. They analyze large data sets of transactions, market/reference data, and customer profiles to predict loan defaults and to flag outlier transactions and market risk.
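To make the outlier-detection side concrete, here is a minimal sketch (illustrative only, not Kiran's or EY's tooling) of flagging anomalous transactions with an unsupervised isolation forest; the toy data and feature names are assumptions:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy transaction data; in practice these features would come from the
# bank's transaction and market/reference data (hypothetical columns).
transactions = pd.DataFrame({
    "amount":      [42.0, 55.0, 38.0, 60.0, 47.0, 5000.0],
    "hour_of_day": [10,   14,   9,    16,   11,   3],
})

# Fit an isolation forest; 'contamination' is the assumed outlier share.
model = IsolationForest(contamination=0.2, random_state=42)
transactions["outlier"] = model.fit_predict(transactions)  # -1 = outlier

# Flagged rows would be routed to an analyst for review.
print(transactions[transactions["outlier"] == -1])
```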

Investment decision making: AI models can analyze a security's historical price data and its characteristics, such as volatility, P/E ratio, and traded volume, against market news. They can perform sentiment analysis on market news, combine it with macro and micro trends, and apply the result to a specific security to predict its price. AI models are also used for high-speed trading to increase customer profit.
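As a small illustration of the sentiment-analysis step, the sketch below scores headlines with NLTK's off-the-shelf VADER analyzer; the headlines, and the idea of feeding the average score into a pricing model, are illustrative assumptions:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

headlines = [
    "Company X beats earnings expectations",
    "Regulator opens probe into Company X",
]
# The 'compound' score ranges from -1 (negative) to +1 (positive).
scores = [analyzer.polarity_scores(h)["compound"] for h in headlines]
avg_sentiment = sum(scores) / len(scores)

# In practice this score would be one feature, alongside volatility,
# P/E ratio, and traded volume, in a price-prediction model.
print(f"Average news sentiment: {avg_sentiment:.2f}")
```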

Customer experience enhancement: AI models help enhance the customer experience by creating predictive behavior patterns. They gather all available customer interaction data points from the bank and from open sources to create a customer-360 profile. This profile helps predict customer behavior, which can be used to mitigate fraud by identifying unexpected transactions. Chatbots are another example of using AI for customer experience enhancement: they use natural-language-processing tools to respond to customer queries, so customers get answers without waiting for a live representative.
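A toy sketch of what "identifying unexpected transactions" against a customer-360 spending profile can mean in the simplest case, a deviation check against the customer's own history (the threshold and data are assumptions):

```python
import statistics

def is_unexpected(amount: float, history: list[float], z_cutoff: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates strongly from the
    customer's historical spending pattern."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_cutoff

history = [42.0, 55.0, 38.0, 60.0, 47.0]   # past transaction amounts
print(is_unexpected(49.0, history))   # False: fits the pattern
print(is_unexpected(950.0, history))  # True: worth a fraud check
```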

Automated investigations: AI models use tools such as robotic process automation, image processing, and natural language processing for financial-crime or fraud-alert investigations. AI significantly reduces manual work, accelerates the process, and improves the quality of alert investigations.
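One way NLP shows up in alert investigation is entity extraction from an alert narrative to pre-populate a case file. A minimal sketch with spaCy (the alert text is invented; the en_core_web_sm model must be installed via `python -m spacy download en_core_web_sm`):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
alert_text = (
    "Wire transfer of 9,800 EUR from Acme B.V. to an account in "
    "Rotterdam flagged on 2023-05-12."
)
doc = nlp(alert_text)
# Extract organizations, amounts, locations, and dates so the
# investigator starts from a structured summary instead of raw text.
for ent in doc.ents:
    print(ent.text, ent.label_)
```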


What are the significant challenges financial institutions face due to the use of AI, and what are their implications?

Financial institutions face two significant challenges concerning AI: developing responsible and ethical AI models, and AI model validation and governance.

Responsible and ethical AI model development: A major financial institution's credit-card issuing process produced different results based on applicants' gender. Another large global asset management firm stopped its neural-network-based liquidity AI model due to a lack of explainability to senior management. These incidents highlight the broader challenge of developing AI models in financial institutions that are transparent, fair, and unbiased. AI models should also respect trust, privacy, accountability, and fundamental human and societal values.
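A minimal sketch of one basic fairness check on credit decisions: comparing approval rates across groups (demographic parity). The data is invented, and the 0.8 cutoff is the common "four-fifths" rule of thumb, not a regulatory constant:

```python
import pandas as pd

# Hypothetical decision log: protected group and approval outcome.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"Approval-rate ratio: {ratio:.2f}"
      + (" (potential disparate impact)" if ratio < 0.8 else ""))
```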

AI model validation and governance: All financial institutions must validate their models, including AI models. AI model validation has multi-layer complexity. AI models involve black-box decision-making: the black box self-identifies patterns and predicts future outcomes based on historical patterns. Analyzing and validating the pattern development and decision-making process against business and regulatory requirements is a challenging task. Decision-making depends on the model input, so validating model input is a critical part of model validation. In unsupervised learning models, data quality checks, risk labels, and risk weightings are assigned by the model without (or with minimal) human intervention. Hence, validating both the model input and the black-box decision-making process is challenging for financial institutions.
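One common way a validator probes a black box is permutation importance: shuffle each input in turn and measure how much performance drops, revealing which features the model actually relies on. A minimal sketch on synthetic data (the feature names stand in for real credit features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # e.g. income, debt ratio, age
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # synthetic default label

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the accuracy drop: features the
# model truly relies on show a large drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["income", "debt_ratio", "age"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```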

Ongoing monitoring: AI systems are self-learning and constantly updating based on continuous feedback. This dynamic decision-making process creates a risk that a model validation quickly becomes outdated. Hence, ongoing validation of the AI model's controls and governance is crucial to ensure the model keeps producing the desired outcome.
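A sketch of one widely used monitoring check: the Population Stability Index (PSI), which compares a feature's distribution at validation time with its live distribution. The bucket count and the 0.2 alert threshold below are common rules of thumb, not fixed standards:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's distribution at validation time ('expected')
    with its live distribution in production ('actual')."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) on empty buckets.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 10_000)  # inputs at validation time
live = rng.normal(0.3, 1.2, 10_000)      # drifted inputs in production
score = psi(baseline, live)
print(f"PSI = {score:.3f}" + ("  -> investigate drift" if score > 0.2 else ""))
```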


What is the EU AI Act, and what should governments around the world prioritize while drafting AI policies?

The EU AI Act will be one of the most comprehensive acts governing AI systems that impact EU citizens. The Act (in the parliamentary committee's draft) defines AI by three significant criteria: a system operating with autonomy, generating outputs such as predictions or recommendations, and those outputs influencing the environment. The EU AI Act classifies AI models into tiered risk levels based on their impact and requires human oversight for high-risk AI models. It also sets requirements for risk management, cybersecurity, data governance, accuracy, and transparency. The EU AI Act is also expected to clearly define the roles, responsibilities, and obligations of all actors in the AI value chain.

Governments worldwide should consider the following aspects while drafting AI policies.

  1. Are you making regulations for an AI development/research region or an AI user region?
  2. How do local culture, laws, and societal values influence the decision-making process concerning AI?
  3. What overarching legal principles, such as data privacy, equal credit opportunity, or the OCC's fair access to financial services rule, might impact new AI policies?
  4. Can AI models from extraterritorial tech firms impact the local population, and how can controls be developed for that?
  5. What governance mechanisms should be developed for AI applications such as drones, autonomous driving, etc.?

In terms of geopolitics, which country is leading the AI race, and how can other countries catch up?

Who is leading the AI race is a very interpretive question. AI regulations primarily focus on digital services like big tech, financial services, and e-commerce. However, more comprehensive regulation is needed for AI subjects like autonomous driving, drones, and medical/healthcare applications. Autonomous weapons are a new category of AI systems that requires a strict governance mechanism.

The US is leading the AI race in terms of AI research, applications, and the funding available for research. Most big techs and leading university and industrial research labs are based in the US. However, the AI systems developed by US-based big techs and labs penetrate global markets; digital services operate irrespective of geographical boundaries. Instead of competing against each other in the AI race, there is a need to join forces. Countries should collectively contribute to developing AI systems responsibly, considering local cultures, regulations, and societal nuances.


How do you see the future of humanity with AI in it?

AI is being applied in every part of life and will make human life better and more efficient. AI will directly help increase human life expectancy: it will make healthcare and medical support quicker, helping deliver vaccines, medications, and therapies for life-threatening diseases. LLM applications like ChatGPT are disrupting the market and will bring AI applications closer to the masses from non-technical backgrounds.

Excessive use of AI might adversely impact society's critical thinking, as machines will make the decisions. Applying AI in lower and mid-level education might lower students' computing and quantitative aptitude. In the long term, this could erode society's ability to process information quickly and to reason about scenarios in terms of probabilities.

Technology should help humans, not create new challenges. We are getting closer to sci-fi scripts becoming reality, where robots create other robots. However, this increases the risk to humanity, as a robot's intelligence will then be developed by another robot rather than by humans. Hence, AI systems need strong governance. Data scientists and statisticians alone can develop an AI model, but humanities experts can help make it safe for all segments of society and protect fundamental human rights. The ultimate aim is to use technology to serve humans rather than humans serving technology, which can be achieved through ethical, trusted, and responsible AI/ML principles.


How can our viewers learn more about you and your activities? 

I am involved in multiple client discussions on AI governance, responsible AI, and regulatory subjects, and I am open to speaking engagements on these or related topics. You can reach me on LinkedIn, Twitter, or simply at kiranmuthal@gmail.com

https://www.linkedin.com/in/kiranmuthal/

https://twitter.com/KiranMuthal

