Fail-proofing your AI model
- Dr Srinivas Padmanabhuni, CTO, testAIng.com
A very popular notion in quality management is that of Poka-Yoke. Invented in Japan to ensure quality, Poka-Yoke means ‘mistake-proofing’ or, more literally, avoiding (yokeru) inadvertent errors (poka).
In our daily lives there are several instances of Poka-Yoke in action. Consider a popular example: when you try to enter an elevator suddenly, the sensor in the doors detects your presence and causes the doors to reopen. This is a classic example of preventing mistakes before they happen. The same can be seen in, for example, car airbags and door sensors.
Extending this concept of Poka-Yoke to machine learning models, how does it apply? We take the notion of fail-proofing ML models quite literally, by considering possible ways of avoiding mistakes in them. In a previous article https://medium.com/@srinivaspadmanabhuni/why-we-need-to-let-of-our-programming-instinct-in-ml-based-ai-6d6f34d7c313 we saw that dealing with ML models is not the same as dealing with software programs. So we need an approach where the data-in, program-out nature of ML is taken into account while deciding how to implement Poka-Yoke in ML models.
With that view of fail-proofing ML models, we need to enhance the coverage of ML models to include as many input scenarios as possible on which they should give correct outputs rather than errors, thereby increasing accuracy. What better way to achieve this than by anticipating the error scenarios of an ML model beforehand and using those error scenarios to train the model to behave correctly on them?
Two techniques popular in conventional testing that can be borrowed for the ML world are coverage-based techniques and metamorphic testing.
The metamorphic testing based approach addresses the oracle problem in ML testing by substituting the notion of an oracle with a pseudo-oracle based on metamorphic relations. These metamorphic relations, in the context of ML testing, can be a great way of determining error scenarios for ML models. For example, we can try different transformations of image inputs and find out which transformations cause accuracy to deviate from the baselined accuracy of the ML model. Those transformations give rise to the potential error scenarios of the model.
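As a minimal sketch of this idea, consider the following. The transformations, the toy threshold-based "model", and the 5% accuracy tolerance are all illustrative assumptions, not part of any specific framework; `model` stands in for any callable mapping a batch of images to predicted labels.

```python
import numpy as np

# Minimal sketch of metamorphic testing for an image classifier.
# The transformations and the accuracy tolerance are illustrative choices;
# `model` is any callable mapping a batch of images to predicted labels.

def rotate_90(batch):
    """Metamorphic transformation: rotate each image in the batch by 90 degrees."""
    return np.rot90(batch, k=1, axes=(1, 2))

def brighten(batch, delta=0.1):
    """Metamorphic transformation: raise brightness, clipped to [0, 1]."""
    return np.clip(batch + delta, 0.0, 1.0)

def find_error_scenarios(model, images, labels, transforms, tolerance=0.05):
    """Flag transformations whose accuracy drops more than `tolerance`
    below the baseline accuracy -- these are candidate error scenarios."""
    baseline = np.mean(model(images) == labels)
    suspects = []
    for name, transform in transforms.items():
        acc = np.mean(model(transform(images)) == labels)
        if baseline - acc > tolerance:
            suspects.append((name, acc))
    return baseline, suspects
```

Any transformation flagged this way yields inputs on which the model deviates from its baseline behavior, i.e. candidate error scenarios to feed back into training.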
Likewise, recent notions of coverage in the context of neural networks, such as neuron coverage https://arxiv.org/abs/1705.06640, have given rise to approaches for generating error scenarios for ML models. These approaches traverse the neural network in a white-box manner to discover less-covered paths and neurons. That gives a way to generate error scenarios based on white-box coverage analysis of deep learning models.
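A toy illustration of the neuron coverage metric itself follows: the fraction of neurons driven above an activation threshold by at least one test input. The tiny two-layer ReLU network here is a made-up stand-in; real coverage-guided tools instrument actual trained networks.

```python
import numpy as np

# Toy illustration of neuron coverage: the fraction of neurons driven above
# an activation threshold by at least one test input. The two-layer ReLU
# network below is a made-up stand-in for a real trained model.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # hidden layer: 8 neurons
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # output layer: 3 neurons

def forward_activations(x):
    """Return the per-layer activations for one input vector."""
    hidden = np.maximum(0.0, W1 @ x + b1)   # ReLU activations
    output = W2 @ hidden + b2               # raw output scores
    return [hidden, output]

def neuron_coverage(inputs, threshold=0.0):
    """Fraction of neurons activated above `threshold` by any input.
    Neurons that never fire mark behaviors the test set never exercises."""
    activated = np.zeros(11, dtype=bool)    # 8 hidden + 3 output neurons
    for x in inputs:
        acts = np.concatenate(forward_activations(x))
        activated |= acts > threshold
    return activated.mean()
```

Inputs that light up previously uncovered neurons are kept as candidate error scenarios; coverage-guided approaches search for such inputs systematically rather than at random.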
Combining the metamorphic testing and white-box neuron coverage based approaches for generating error scenarios, we can form a pool of potential error input scenarios to help in Poka-Yoking the ML model.
Once these error scenarios are generated, retraining the ML model with these error scenarios boosted in the training data can help the model exhibit correct behavior on them. The ML model thereby becomes more robust and is better able to withstand real-life error scenarios.
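One way to sketch this retraining step is below. The nearest-centroid "trainer" and the oversampling factor are illustrative assumptions standing in for whatever fitting routine the model actually uses; the point is only the boost-and-retrain pattern.

```python
import numpy as np

# Sketch of the retraining step: oversample ("boost") the discovered error
# scenarios, append them to the training data, and retrain. The
# nearest-centroid trainer is a toy stand-in for any fitting routine.

def train_centroids(X, y):
    """Toy trainer: nearest-centroid classifier returning a predict function."""
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    def predict(X_new):
        return np.array([min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                         for x in X_new])
    return predict

def boost_and_retrain(train, X, y, X_err, y_err, boost=3):
    """Append `boost` copies of the error scenarios to the training data
    and retrain the model from scratch."""
    X_aug = np.concatenate([X] + [X_err] * boost)
    y_aug = np.concatenate([y] + [y_err] * boost)
    return train(X_aug, y_aug)
```

After retraining, inputs that the original model misclassified pull the decision boundary toward themselves, so the boosted model handles those error scenarios correctly.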
Such approaches are implemented in full in our end-to-end ML testing platform AIEnsured https://testaing.com/product/
These approaches have been tested thoroughly to implement Poka-Yoke in several ML scenarios, such as ADAS, text classification systems, and credit modeling systems.
- Dr Srinivas Padmanabhuni
srinivas AT testAIng.com