Explain vs Interpret: What to choose?

In the recent past, one of the hot areas in AI has been Explainable AI, also known as XAI. This initiative, started by DARPA in the US, is aimed at making AI systems and the decisions they take more acceptable to humans. A key thought process underlying it is that if we can complement an AI system's decisions with some kind of rationale or explanation, those decisions become more acceptable to humans. However, it is important to understand that while this approach offers some additional comfort to users of AI systems, it is not the only route to transparency prevalent in today's AI systems.

The above approach is typically termed post facto explanation of the AI model's inference. The idea is to allow any kind of AI model, be it a complex multi-layer neural network or a difficult-to-interpret ensemble, in the search for higher accuracy and more precise predictions. The accuracy offered by these complex models is not compromised; instead, the decisions they take are explained after the fact, be it at the instance level or at the global level.

This explainability approach takes it upon itself to expose the internal workings of the complex model through mechanisms that let end users pinpoint the exact rationale behind a prediction. For example, for an image classifier it can highlight the precise regions of the image that were responsible for the decision the deep learning model took on that instance.
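To make this concrete, here is a minimal sketch of a post facto, instance-level explanation using the shap library on a gradient-boosted ensemble. It is a tabular stand-in for the image example above; the dataset, model, and library choice are illustrative assumptions rather than anything prescribed by this article.

```python
# A hedged sketch of post facto (instance-level) explanation with the shap library.
# The dataset and model are illustrative; any hard-to-interpret ensemble or
# neural network could take their place with a suitable explainer.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes a single prediction to each input feature,
# giving an instance-level rationale without simplifying the model itself.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.4f}")
```

The key point is that the gradient-boosted model is left untouched; the explanation is computed on top of it, which is exactly the post facto philosophy described above.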

The benefit of explainability is that it lets us keep using really complex models with high accuracy and precision, which cements the business case for modern deep learning based AI. Concerns about the lack of transparency of these models are then addressed with such explanations, whether they are produced in a black-box or a white-box manner.

On the negative side, such models remain inherently difficult to interpret, because there is no explicit logic inside them that a human can read directly.

Contrast the above with the opposing approach to achieving transparency: the use of interpretable models. The key insight is that we do not necessarily have to use complex deep learning models for every business scenario; in many cases we can resort to simple models like linear regression, polynomial regression, or decision trees.

Decision trees are a great tool for interpretable modeling, as they score well on both accuracy and interpretability. A decision tree is very readable: it gives a clear explanation of any decision at a leaf node in terms of the rule followed along the path from the root to that leaf. A decision tree can therefore be viewed as the combination of all its root-to-leaf rules, which offers a really neat explanation of the model's behavior in terms of the rules it follows. In fact, decision trees in some sense automate the creation of the erstwhile expert systems that heralded the popularity of AI a couple of decades back.
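As a small illustration of this rules view, the sketch below trains a scikit-learn decision tree and prints its root-to-leaf rules. The iris dataset and the depth limit are illustrative choices, not something the article depends on.

```python
# A minimal sketch of rule extraction from a decision tree with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names  # illustrative dataset

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints each root-to-leaf path as a readable if/then rule,
# which is the "combination of rules" view described above.
print(export_text(tree, feature_names=feature_names))
```

Each printed branch reads like a hand-written expert-system rule, which is why decision trees can be seen as automating that older style of AI.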

Likewise, linear regression models are inherently interpretable because each coefficient directly relates an independent variable to the output variable. The sign of a coefficient indicates whether the relationship is positive or negative. So models expressed as linear regressions are easy to interpret.
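Here is a minimal sketch of that kind of coefficient reading, assuming scikit-learn and its bundled diabetes dataset purely as an illustrative example.

```python
# A hedged sketch of interpreting a linear model via its coefficients.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)  # illustrative dataset
model = LinearRegression().fit(X, y)

# The sign of each coefficient shows whether the feature pushes the prediction
# up or down, holding the other features fixed.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name}: {coef:+.2f}")
```

Reading off the signs of these coefficients is the whole interpretation step; no separate explanation layer is needed.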

In scenarios like financial credit modeling, where customers are very wary of credit decisions, it is imperative to use interpretable models instead of deep learning models, even at the cost of accuracy: with these models we may not achieve the high levels of accuracy that deep learning offers.

So I hope this clarifies the two paths to achieving transparency in machine learning, namely explainability and interpretability, and helps you choose the right one.

#XAI #Interpretability #Explainability #Deeplearning #ML

Dr Srinivas Padmanabhuni

Intellect, testAIng.com