
Modelling: Explainable AI


On this page, we provide an overview of explainable AI, highlight its use cases, particularly in the banking industry, and examine various methodologies for achieving explainability in Artificial Intelligence (AI) systems.

What is Explainable AI?


Explainable AI is a set of techniques, principles, and processes designed to help AI developers and users understand how AI models make decisions. It aims to provide insights into the data used to train AI models, the algorithms employed, and the predictions or outputs generated by these models. As AI continues to play a significant role in various industries, explainable AI is crucial for improving model accuracy, detecting unwanted biases, and ensuring accountability and transparency in AI systems.

How does Explainable AI work?

Explainable AI focuses on explaining one or more aspects of AI models, such as:

  • Data: Describing the data used to train the model and the reasons for its selection.
  • Predictions: Identifying the factors considered in reaching a specific prediction.
  • Algorithms: Providing information about the role and function of the algorithms used in the model.

There are two main approaches to explaining AI models:

  1. Self-Interpretable Models: These models are inherently explainable and can be directly understood by humans. Examples include decision trees and regression models (a minimal sketch follows this list).
  2. Post-Hoc Explanations: These explanations describe or model the AI system's behavior and are often provided by other software tools without needing in-depth knowledge of the underlying algorithm.
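
As a minimal illustration of the first approach (not taken from this project, and assuming scikit-learn is available), a shallow decision tree can be trained and its learned rules printed and read directly:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a shallow decision tree on a toy dataset; the fitted rules can be
# printed and read directly, which is what makes the model self-interpretable.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))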

Explanations can be presented in graphical formats (e.g., data visualisations), verbal formats (e.g., speech), or written formats (e.g., reports).

Why is Explainable AI Important?

Explainable AI helps make AI systems more manageable and understandable, enabling developers to identify and resolve errors efficiently. It fosters trust and confidence among users by providing transparency in AI decision-making. As AI becomes more ubiquitous across different industries and use cases, explainable AI is essential to ensure that AI models operate fairly and ethically and comply with regulations (Glover, 2023).

Use Cases

Explainable AI has a range of applications across multiple fields, such as:

  • Finance: Explainable AI is crucial in finance, where it helps manage financial processes such as credit scoring, insurance claims assessment, and investment portfolio management. It is vital to ensure that AI models do not introduce biases that could negatively impact users.
  • Autonomous Vehicles: Explainability is critical for autonomous vehicles, where AI models must make split-second decisions based on large amounts of data. Understanding why the vehicle made certain decisions can help improve safety and accountability.
  • Healthcare: Explainable AI is beneficial in healthcare for tasks such as diagnostics and treatment recommendations. It allows healthcare professionals and patients to understand the reasoning behind the AI model's decisions, ensuring more reliable and trustworthy outcomes.

As AI continues to advance and permeate various aspects of society, explainable AI will play an increasingly important role in promoting accountability, transparency, and trust in AI systems.

Real-World Applications in the Banking Industry

Explainable AI (XAI) plays a crucial role in the banking industry, as it enhances transparency, accountability, and trust in AI-driven financial processes. Some real-world applications of explainable AI in the banking industry include:

  • Credit Risk Assessment: Explainable AI can provide insights into how credit scores and lending decisions are made. This transparency helps lenders understand the key factors influencing credit decisions and allows them to address potential biases and ensure fairness in the process (X, 2023).
  • Fraud Detection: Explainable AI can be used to analyse transactions and identify suspicious patterns indicative of fraud. By understanding how the model detects fraud, bank employees can verify its accuracy and make better decisions when investigating flagged transactions (Orbograph, 2023).
  • Loan Approval Processes: Explainable AI can offer insights into why certain loan applications are approved or denied. This transparency helps both customers and lenders understand the decision-making process, reducing confusion and potential disputes.

Explainable AI enables banks to better understand and manage the models they use, ensuring that decisions made by AI systems are fair, ethical, and in line with regulatory requirements.

Literature Review

Explainable AI techniques play a key role in identifying the keywords that most strongly influence the sentiment assigned to a review. Given the black-box nature of AI model inference, the use of explainable AI methods is essential. XAI techniques were evaluated on BERTweet, the sentiment analysis model selected previously, using the sample review "The application is not good. UI sucks."

Techniques Evaluated

Two of the most prominent strategies for explainable AI are Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). Both methods offer insights into the model's decision-making process, helping identify influential keywords in reviews.

  1. LIME: LIME builds local surrogate models around individual predictions and interprets those surrogates to estimate the importance of input features (Ribeiro, 2016); a minimal sketch follows this list. It is model-agnostic, making it applicable to any model. Despite its popularity, LIME has limitations such as variability in its explanations and low explanation fidelity (Vorotyntsev, 2023).

  2. SHAP: SHAP uses cooperative game theory to attribute the contribution of each feature to the model's prediction. SHAP values quantify each feature's impact on the model's output, indicating its importance during inference (SHAP, n.d.). However, SHAP can be computationally demanding on large datasets or complex models. In addition, while SHAP reveals relationships between features and the model's output, it does not inherently provide insights into causal relationships, so further data and analysis may be needed (Ateşli, 2023).
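
For reference, a minimal LIME sketch for a text classifier is shown below. It is not the project's code: the predict_proba function is a hypothetical stand-in for the BERTweet model, and the class names are illustrative.

import numpy as np
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    # Stand-in classifier used only for illustration; the project would call
    # its BERTweet sentiment model here and return an array of class
    # probabilities of shape (n_texts, n_classes).
    probs = []
    for text in texts:
        negative = 0.9 if ("not" in text.lower() or "sucks" in text.lower()) else 0.1
        probs.append([negative, 1.0 - negative])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "The application is not good. UI sucks.",
    predict_proba,
    labels=(0,),
    num_features=5,
)
print(explanation.as_list(label=0))  # (word, weight) contributions to the "negative" class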

Results

Figure: Output of LIME

Figure: Output of SHAP

Both methods identified "not" as a significant keyword that led to the classification of negative sentiment. LIME also highlighted "sucks" as influential. Although LIME performed well, SHAP was preferred due to its faster inference time. SHAP completed its analysis in 8 seconds, while LIME took 28 seconds. Given the project's requirements, SHAP's speed was prioritised over the slight performance advantage of LIME, enabling efficient handling of the large volume of review data.

Usage in the Project

Utilising the shap library (details here), we first initialise shap's Explainer class with our sentiment analysis model:

self.explainer = shap.Explainer(model)

We then input the review text to identify the important words:

shap_values = self.explainer([text], silent=True)

The results are then compiled and used within our data pipeline.
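
As a rough illustration of that compilation step (an assumption about how the tokens could be ranked, not the exact project code), the most influential words might be extracted from shap_values as follows:

# Sketch only: rank tokens by their SHAP attribution towards a chosen class.
# shap_values[0].data holds the tokens of the first (and only) input text and
# shap_values[0].values their per-class attributions; the index of the
# "negative" label below is hypothetical and depends on the model's outputs.
negative_class = 0
tokens = shap_values[0].data
scores = shap_values[0].values[:, negative_class]
important_words = sorted(zip(tokens, scores), key=lambda pair: pair[1], reverse=True)
print(important_words[:5])  # tokens pushing the prediction most strongly towards "negative"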

References

Ateşli, H. (2023, December 23). Explainable AI with SHAP. Hepsiburada Data Science and Analytics, Medium.

Glover, E. (2023, June 23). Explainable AI, explained. Built In.

Orbograph. (2023, October 4). Explainable AI: Transparency important to fraud detection.

Ribeiro, M. T. (2016, April 2). LIME - Local Interpretable Model-Agnostic Explanations.

Vorotyntsev, D. (2023, September 25). What’s Wrong with LIME. Towards Data Science, Medium.

SHAP. (n.d.). Welcome to the SHAP documentation. SHAP latest documentation.

X, I. (2023, November 19). Use cases of explainable AI (XAI) across various sectors. Medium.