AI: after the black box, interpretability?
The evolution of Artificial Intelligence since the 1970s has profoundly reshaped decision-making processes. The rise of Machine Learning has made it possible to learn directly from data rather than from codified human knowledge, with a strong emphasis on accuracy. The lack of interpretability (the ability to explain, or to present in understandable terms to a human) and the possible introduction of biases have raised ethical and legal issues. The EU General Data Protection Regulation has taken action, and concern is growing around the interpretability of Machine Learning algorithms.
How can interpretability help companies leverage new AI tools and gain deeper insight into their decision-making processes? Here are some answers.
Since 2018, the European Union General Data Protection Regulation (GDPR) has required that any automated decision with legal or similarly significant effects be explainable, and the data subject can request human intervention to challenge the decision. Other regulations apply in specific domains, such as the US Code of Federal Regulations, which establishes a right to an explanation for every credit action.
When accuracy takes priority over interpretability
Machine Learning methods and algorithms vary greatly in their level of interpretability. Some are human-friendly because they are highly interpretable; others are too complex to grasp directly and require ad-hoc methods to obtain an interpretation.
With Big Data, the sheer number of features and the high dimensionality make models harder to comprehend. For instance, a Decision Tree is a sequence of decisions that split the data, and those decisions are easy to follow as long as the sequence is not too long. A Random Forest, however, is an ensemble of Decision Trees, and visualizing every sequence is beyond what a human can take in. Deep Learning, or a Neural Network, is a massive web of additions and multiplications interleaved with non-linearities, and it is hard to keep track of the computations that matter.
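As a rough illustration of that gap, the sketch below (scikit-learn and its iris toy dataset are assumptions chosen for illustration, not mentioned in the text) prints the full rule set of a shallow Decision Tree, which a human can read line by line, and then merely counts the decision nodes accumulated by a Random Forest built from one hundred such trees.

```python
# Minimal sketch (scikit-learn and the iris toy dataset are illustrative
# assumptions): a shallow Decision Tree can be printed as readable rules,
# while a Random Forest piles up far too many decision nodes to read by hand.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
X, y = iris.data, iris.target

# Depth-3 tree: the whole decision sequence fits on one screen.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=iris.feature_names))

# 100-tree forest: counting its nodes already shows why reading it is hopeless.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
total_nodes = sum(est.tree_.node_count for est in forest.estimators_)
print(f"Random Forest: {forest.n_estimators} trees, {total_nodes} decision nodes in total")
```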
Why, then, are non-interpretable models not simply avoided? Because the complexity introduced into Machine Learning models has markedly increased their performance in most domains. A trade-off, depending on the application, has therefore emerged: accuracy vs. interpretability.
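The short sketch below gives a feel for that trade-off; the model choices and the scikit-learn breast-cancer dataset are illustrative assumptions rather than anything prescribed by the text.

```python
# Minimal sketch of the trade-off: a readable linear model vs. a black-box
# ensemble evaluated on the same held-out split.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable: a logistic regression whose coefficients can be read and audited.
readable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)

# Black box: a boosted tree ensemble, typically harder to inspect.
black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# The size (and sometimes the direction) of the accuracy gap depends on the
# dataset; the point is that the choice is an application-specific trade-off.
print("logistic regression accuracy:", readable.score(X_te, y_te))
print("gradient boosting accuracy:", black_box.score(X_te, y_te))
```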
Interpretability: the 2020 challenge for CIOs
Even for high-accuracy algorithms, there are methods to obtain either an interpretation or an explanation. However, no single method can be applied safely, with stable results, to every Machine Learning model. Each method (SHAP, LIME…) has its pros and cons, and a new trade-off appears between the stability of the results, the computation time, and how model-specific the technique is.
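The sketch below shows one way such a method can be applied; it assumes the shap Python package, scikit-learn, and a tree ensemble on a toy regression dataset (none of which are specified in the text), and it uses shap's TreeExplainer, which is specialised for tree models and whose output layout can vary between shap versions.

```python
# Minimal sketch (the `shap` package, scikit-learn, and the diabetes toy
# dataset are illustrative assumptions): post-hoc feature attributions for a
# black-box Random Forest.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer is fast but model-specific: one side of the stability /
# computation time / specialisation trade-off mentioned above.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:50])

# One row per sample, one column per feature: each value is that feature's
# additive contribution to the model's prediction for that sample.
print(shap_values.shape)
print(dict(zip(X.columns, shap_values[0].round(2))))
```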
Once a method is chosen, it must be applied carefully, mainly because some methods lack fidelity. Moreover, both interpretations and explanations are often tied to a particular dataset, to a specific region, or to one part of the data space, so misinterpretation is easy.
Some interpretation methods miss the correlations between features, or offer a single counterfactual explanation where several could have been given. Despite these limitations, the tools are powerful enough to deliver GDPR-compliant interpretability.
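To make the idea of a counterfactual concrete, here is a deliberately naive sketch; the model, dataset, and brute-force search strategy are all illustrative assumptions, not a real counterfactual library. Note that it returns only the first counterfactual it finds, even though several may exist, which is exactly the limitation mentioned above.

```python
# Naive counterfactual sketch (all choices here are illustrative): scan each
# feature and nudge it through its observed range until the prediction flips.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def one_feature_counterfactual(x, model, X, n_steps=50):
    """Return (feature index, new value) for the first single-feature change
    that flips the predicted class, or None if no such change is found."""
    original = model.predict(x.reshape(1, -1))[0]
    for j in range(X.shape[1]):
        for value in np.linspace(X[:, j].min(), X[:, j].max(), n_steps):
            candidate = x.copy()
            candidate[j] = value
            if model.predict(candidate.reshape(1, -1))[0] != original:
                return j, value
    return None

x = X[0]
print("original prediction:", model.predict(x.reshape(1, -1))[0])
print("counterfactual change (feature index, value):", one_feature_counterfactual(x, model, X))
```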
Even without the GDPR, however, interpreting a complex learning algorithm helps to improve the model overall.
Analysing the need for interpretability and refining the model with interpretability methods will soon be a standard step in any Machine Learning use case.
This insight was written with the help of Alexandre Verine, consultant at Wavestone.