
# Explainable Decision Trees

Implementation of an explainable decision tree.

An implementation of explainable artificial intelligence, namely a decision tree that predicts loan approval. This project accompanies the book chapter "Explainable AI in Autonomous Finance: Challenges, Techniques and Opportunities", co-authored by Vasu Jain and Dr. Umesh Gupta at Bennett University, 2023.
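
As a minimal sketch of this setup, the snippet below trains a shallow decision tree on a hypothetical loan dataset. The file name `loan_data.csv` and the column names (`income`, `credit_score`, `loan_amount`, `approved`) are illustrative assumptions, not the project's actual schema.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical loan dataset with a binary "approved" target column.
df = pd.read_csv("loan_data.csv")
X = df[["income", "credit_score", "loan_amount"]]
y = df["approved"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A shallow tree keeps every prediction easy to trace by hand,
# trading a little accuracy for interpretability.
clf = DecisionTreeClassifier(max_depth=4, random_state=42)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```

Capping `max_depth` is one simple way to keep the accuracy/interpretability trade-off discussed below under control: a deeper tree may fit the data better, but its decision paths become harder to follow.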

## Abstract

This project emphasizes the importance of using explainable artificial intelligence (XAI) in autonomous finance systems to ensure transparency, accountability, and regulatory compliance. It identifies and analyzes the challenges of applying XAI in autonomous finance, including balancing accuracy and interpretability, dealing with complex financial data, and avoiding bias. It also explores techniques to address these challenges, such as rule-based systems, model visualization, and feature importance analysis. Finally, it highlights future research directions for XAI in autonomous finance, including developing more accurate and interpretable models, improving transparency and fairness, and integrating XAI into existing financial systems. Overall, the repository provides a comprehensive overview of XAI in autonomous finance and its potential to create more transparent, accountable, and trustworthy financial systems.
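
Of the techniques listed above, feature importance analysis is the most direct for decision trees. A small self-contained sketch, using synthetic stand-in data rather than the project's dataset, might look like this:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: approval is driven mostly by credit score.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 500),
    "credit_score": rng.integers(300, 850, 500),
    "loan_amount": rng.normal(20_000, 8_000, 500),
})
y = (X["credit_score"] > 600).astype(int)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Impurity-based importances sum to 1; larger values mean the feature
# drives more of the tree's splits.
for name, score in zip(X.columns, clf.feature_importances_):
    print(f"{name:>12}: {score:.3f}")
```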

## Introduction to XAI

Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making. Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development.

## Model-Specific Explainability of Decision Trees

As the fintech industry has expanded into the domain of big data, AI has become more powerful than ever. However, this growth in computation demands more complex models and algorithms, which in turn reduces transparency. Although AI algorithms have become more efficient and accurate over years of development, their complexity has made it harder for users to interpret them. The plot below shows a model-specific explainability solution for decision trees.

*(Figure: decision-path plot of the trained decision tree.)*

The purpose of this plot is to trace, for a given input, the path of decisions the tree follows, making the model's decision-making directly inspectable.
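
A hedged sketch of how such a path explanation can be generated with scikit-learn follows (the repository's exact plotting code is not shown here, so this is an assumption); the classic iris dataset stands in for the loan data:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# decision_path returns the nodes a sample visits, root to leaf.
sample = X[0].reshape(1, -1)
node_ids = clf.decision_path(sample).indices
feature, threshold = clf.tree_.feature, clf.tree_.threshold

for node in node_ids:
    if feature[node] < 0:  # a negative feature index marks a leaf
        print(f"leaf {node}: predicted class {clf.predict(sample)[0]}")
    else:
        op = "<=" if sample[0, feature[node]] <= threshold[node] else ">"
        print(f"node {node}: feature[{feature[node]}] = "
              f"{sample[0, feature[node]]:.2f} {op} {threshold[node]:.2f}")
```

A graphical version can be drawn with `sklearn.tree.plot_tree`, which renders every split condition as a node; the printed trace above corresponds to one root-to-leaf path in such a figure.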