Luna Connect Blog

Explainable AI: Building Trust In AI For Digital Lending

Dec 9, 2021 10:15:00 AM / by Claire Gibbons

Artificial Intelligence and Machine Learning are certainly hot topics on everyone’s lips lately. With AI seemingly present in every facet of modern life, from checking in with Siri or Alexa about tomorrow’s weather to the chatbot on any given website, the possibilities are seemingly endless. Many people know the surface-level basics of what AI is and can do, but may not know just how much AI affects the financial sector. A lack of understanding often leads to mistrust of the technology, especially when money is involved. Understanding Explainable AI and what it could mean for you will open your eyes to a whole new world of possibilities with Luna Connect.


Explainable AI 

Unlike “black box” machine learning, in which even a model's designers cannot explain why it made a specific decision, Explainable AI makes the results of a solution readily understandable to humans. This understanding matters: engineers and management can identify areas of weakness and bias, and users can engage with the technology more actively.

Accuracy Measures

Part of what is appealing about AI is the accuracy associated with it. To understand this accuracy, in a confusion matrix analysis, the model prediction is compared to the real outcome known from the history of the data, and the number of true positives, true negatives, false positives and false negatives is reported.

  • True Positive Rate
    Proportion of applications that were correctly predicted as approved.
  • False Positive Rate
    Proportion of applications that were incorrectly predicted as approved.
  • True Negative Rate
    Proportion of applications that were correctly predicted as declined.
  • False Negative Rate
    Proportion of applications that were incorrectly predicted as declined.

These rates can be shown as follows:

  • True Positive Rate
    True Positive / (True Positive + False Negative)
  • False Positive Rate
    False Positive / (False Positive + True Negative)
  • True Negative Rate
    True Negative / (True Negative + False Positive)
  • False Negative Rate
    False Negative / (False Negative + True Positive)

For simplicity, we can shorten these terms to be:

  • TP: True Positive
  • FP: False Positive
  • TN: True Negative
  • FN: False Negative

A model's predictive accuracy is the proportion of correctly predicted cases among all cases. The formula for accuracy is defined as follows:

(TP + TN) / (TP + FP + TN + FN)
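The definitions above can be sketched in a few lines of Python. This is an illustrative helper, not part of the Luna Connect platform; the labels assume 1 = approved and 0 = declined:

```python
# Compute the confusion-matrix counts and the four rates defined above,
# plus overall accuracy, from paired actual/predicted labels (1 or 0).
def confusion_rates(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return {
        "TPR": tp / (tp + fn),                       # True Positive Rate
        "FPR": fp / (fp + tn),                       # False Positive Rate
        "TNR": tn / (tn + fp),                       # True Negative Rate
        "FNR": fn / (fn + tp),                       # False Negative Rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

For example, six applications with actual outcomes `[1, 1, 1, 0, 0, 0]` and predictions `[1, 1, 0, 0, 0, 1]` give one false negative and one false positive, so the accuracy is 4/6.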

When using predictive models to make credit decisions, a low false-positive rate is much more important than a low false-negative rate. To understand this way of thinking, a real-world, practical example can help: 

A person would like to borrow money from a lending company. If the lending company falsely predicts that this person is a good credit risk, it stands to lose. Extending credit to a "good" credit risk that turns out to be bad costs the lending company money when the borrower fails to make payments.

In another scenario, a person is classified as a bad credit risk and denied a loan. If this prediction turns out to be false and the person was actually a good credit risk, the worst that has happened is that the lender denied the person a loan. 

If we compare the outcomes of having a false positive and a false negative, it’s easy to see that predicting a false positive will be far more detrimental to the business than a false negative.

The Importance of Context 

When assessing the accuracy of predictive models, context is crucial. Medical diagnoses, for instance, can also result in false positives and false negatives. Again, a practical example can demonstrate this:

In this scenario, a patient comes in complaining of an illness. The doctor examines this patient, does some testing, and, upon the results of the test, comes up with a diagnosis and treatment plan. 

In the first case, the test comes back with a false positive. Because the test came back positive, the doctor makes a diagnosis and prescribes a course of treatment. Any treatment the patient receives in this case is unnecessary.

In the second case, the test comes back with a false negative. Because the test came back negative, the doctor either makes no diagnosis or makes the wrong one, so the patient receives no treatment or the wrong treatment. A false negative can therefore lead to a missed diagnosis of a serious disease and a missed opportunity to treat it.

By looking at these examples along with the previous scenario with our borrowers, we can see the difference in the consequences of false-positive and false-negative results. In the medical scenario, a false negative is much worse than a false positive. In a financial scenario, it's the opposite - a false positive is worse than a false negative.

There Are More Ways To Measure Accuracy

Receiver Operating Characteristic Area Under Curve (ROC-AUC) score:

This method helps to visualise how well a classifier is performing. The Receiver Operating Characteristic (ROC) curve is a graph that plots the true positive rate against the false positive rate at various classification thresholds. The Area Under this Curve (AUC) measures how likely the classifier is to rank a randomly chosen positive instance higher than a randomly chosen negative instance.
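That rank interpretation leads directly to a way of computing the AUC. A minimal Python sketch, assuming `scores` are the classifier's scores for the positive class and `labels` are 0/1 (a library such as scikit-learn computes the same quantity in practice):

```python
# AUC as the probability that a randomly chosen positive instance scores
# higher than a randomly chosen negative one; ties count as half a win.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking (every positive scored above every negative) gives an AUC of 1.0; a random ranking hovers around 0.5.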

Cohen's Kappa coefficient (κ):

Cohen's Kappa is a statistical measure of how often two raters agree when rating the same items, corrected for the agreement that would be expected by chance alone.
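A minimal Python sketch of this calculation, using the standard formula kappa = (observed agreement - chance agreement) / (1 - chance agreement):

```python
from collections import Counter

# Cohen's Kappa for two raters' label sequences of equal length.
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters agree.
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    # Chance agreement: expected agreement if each rater labelled
    # independently according to their own label frequencies.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

A kappa of 1 means perfect agreement, 0 means agreement no better than chance.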

Specificity:

Specificity indicates how accurate a diagnostic test is at identifying normal (negative) conditions. It is the proportion of actual negatives correctly identified by the test (the True Negative Rate above). Accuracy is the proportion of true results, either true positive or true negative, in a population.

Cross-Validation Mean Score:

Cross-validation evaluates a model by resampling, which is particularly useful when the sample size is small. In k-fold cross-validation, a specific value for k is chosen and the sample is split at random into k equal-sized sub-samples. The model is then trained and evaluated k times, each time holding out a different sub-sample for testing, and the mean accuracy score across the k runs is calculated.
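The procedure can be sketched as follows. This is illustrative Python; `train_fn` and `predict_fn` are hypothetical stand-ins for any model's fit and predict steps:

```python
# k-fold cross-validation: train k times, each time holding out a
# different fold for evaluation, and average the k accuracy scores.
def kfold_mean_accuracy(X, y, train_fn, predict_fn, k=5):
    n = len(X)
    # Assign every index to one of k folds (simple strided split).
    folds = [list(range(i, n, k)) for i in range(k)]
    scores = []
    for fold in folds:
        held_out = set(fold)
        X_train = [x for i, x in enumerate(X) if i not in held_out]
        y_train = [t for i, t in enumerate(y) if i not in held_out]
        model = train_fn(X_train, y_train)
        correct = sum(1 for i in fold if predict_fn(model, X[i]) == y[i])
        scores.append(correct / len(fold))
    return sum(scores) / k
```

For example, plugging in a trivial majority-class "model" (train returns the most common training label, predict always returns it) gives the baseline accuracy under cross-validation.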

Test error rate:

The test error rate is the frequency of errors the model makes, calculated as (1 - Accuracy).

Prevalence:

The accuracy a model would achieve if it always predicted the majority class (a baseline metric).
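A quick Python sketch of this baseline (illustrative only). Any useful model should beat this number, since it is achievable without looking at the data at all:

```python
from collections import Counter

# Accuracy of always predicting the most common label in the data.
def baseline_accuracy(labels):
    majority_count = Counter(labels).most_common(1)[0][1]
    return majority_count / len(labels)
```

For example, if 75% of historical applications were approved, a model that always predicts "approved" is 75% accurate, so a real model must clear that bar to add value.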

At Luna Connect, we use sophisticated AI and Machine Learning to change the world of money lending with our digital lending platform. We have changed the game by using modern technology and algorithms that identify applicants' credit risk and other essential data, allowing for more successful lending.

To make the move to digital lending or to find out more, visit our website.

Claire Gibbons

Written by Claire Gibbons

Claire is a Data Scientist at Luna Connect and is passionate about Data Analytics, Machine Learning and Artificial Intelligence. She is an advocate for women and girls in Tech and STEM.