Machine learning has a huge impact on business revenue, whether at a top product company like Netflix, where the recommendation system is estimated to be worth $1 billion, or at a medium-scale company where machine learning models play a vital role in business decisions.
Given its contribution to revenue, should we use AI models as black boxes? Should we productionize untested AI models? Should we make business decisions based on a model’s predictions?
The answer to all of the above questions is a big NO! The real question, which we will explore in this article, is why we should not trust untested AI models and what impact they can have on a company.
To understand this better, let us take a real-world example: Zillow, an online real estate company.
Zillow was one of the real estate companies at the forefront of the tech revolution, with an algorithm called Zestimate that predicts the price of a house based on a set of important features. The success of Zestimate ignited the idea of instant buying (iBuying): Zillow would buy houses priced below market value (based on Zestimate predictions), fix them up, and sell them at a higher price, which is essentially a house-flipping business.
Risks involved in this idea
- If Zestimate undervalues a house, the seller may refuse to sell, which reduces Zillow's revenue
- If Zestimate overvalues a house, the seller will gladly sell, and Zillow may end up reselling the house at a net loss
Zillow, which was generating revenue of ~$2.9 billion in 2019, disclosed a loss of ~$500 million shortly after entering the iBuying business.
What could be the reason for failure?
No doubt, machine learning algorithms have improved a lot over the past decade and have been playing a major role in decision-making. Still, predictions carry uncertainty and errors, and these uncertainties can increase drastically when data quality drops.
When Zillow started the iBuying business, there was a drift in the data. This drift caused the model to overprice houses, and blindly trusting these predictions cost the company about $500 million.
How could Zillow have avoided the disaster?
(This approach is based purely on publicly available data)
- Concept Drift - Concept drift in machine learning and data mining refers to a change in the relationship between input and output data in the underlying problem. Monitoring the data could have helped the data science team address drift issues early.
- Explainable AI - The real estate field has many uncertainties, and relying blindly on a machine learning model can lead to improper decision-making. Such uncertainties can be handled by combining two things: 1) the explainability of machine learning models, and 2) domain experts. Explainability works on two levels: 1) global explainability, which provides overall feature importance, and 2) local explainability, which explains an individual prediction through model interpretability. Observations and feedback from domain experts, informed by these explainability techniques, would help the data science team act accordingly and productionize a well-tested model
- Counterfactual - Counterfactual analysis is built on "what-if" questions. Using the counterfactual technique, one can find all the possible inputs for a given output, e.g., for a house price of $150,000, one can enumerate all the possible numbers of rooms, geographical locations, house areas/sizes, etc. A domain expert can analyze these input-output relations, and based on those observations the data science team can work toward better model predictions.
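To make the first technique concrete, here is a minimal sketch of drift monitoring using a two-sample Kolmogorov-Smirnov test. The feature, numbers, and threshold are illustrative assumptions, not Zillow's actual pipeline:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_feature, live_feature, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
    live distribution no longer matches the training distribution."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

rng = np.random.default_rng(0)
# Hypothetical sale prices: the market the model was trained on...
train_prices = rng.normal(300_000, 50_000, size=1_000)
# ...versus an overheated market seen in production
live_prices = rng.normal(360_000, 50_000, size=1_000)

print(detect_drift(train_prices, train_prices))  # False: same distribution
print(detect_drift(train_prices, live_prices))   # True: drift detected
```

In a real pipeline this check would run on every incoming batch, per feature, and a detected drift would trigger an alert or retraining rather than letting the model keep predicting on data it has never seen.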
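For global explainability, one simple, model-agnostic option is permutation importance: shuffle one feature at a time and measure how much the model's score degrades. The synthetic housing data below is an assumption for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Synthetic housing data: size and rooms drive price, listing_id is pure noise
size_sqft = rng.uniform(800, 3000, n)
n_rooms = rng.integers(1, 6, n).astype(float)
listing_id = rng.uniform(0, 1, n)
price = 100 * size_sqft + 10_000 * n_rooms + rng.normal(0, 5_000, n)

X = np.column_stack([size_sqft, n_rooms, listing_id])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, price)

# Global explainability: which features matter across the whole dataset?
result = permutation_importance(model, X, price, n_repeats=5, random_state=0)
for name, imp in zip(["size_sqft", "n_rooms", "listing_id"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

If a feature the domain experts consider irrelevant (like `listing_id` here) shows high importance, that is a red flag worth investigating before the model reaches production. Local explanations of individual predictions would need tools such as SHAP or LIME on top of this.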
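The counterfactual "what-if" idea from the article's example can be sketched as a brute-force search over candidate inputs. The pricing function and feature grids here are hypothetical stand-ins, not Zestimate:

```python
from itertools import product

def predict_price(n_rooms, size_sqft):
    """A toy stand-in pricing model (hypothetical coefficients)."""
    return 50_000 * n_rooms + 100 * size_sqft

def counterfactuals(target_price, tolerance=5_000):
    """Enumerate all (rooms, size) inputs whose prediction is near the target."""
    rooms_options = range(1, 6)            # 1 to 5 rooms
    size_options = range(500, 2001, 100)   # 500 to 2000 sqft in steps of 100
    return [
        (rooms, size)
        for rooms, size in product(rooms_options, size_options)
        if abs(predict_price(rooms, size) - target_price) <= tolerance
    ]

# All input combinations this toy model maps to roughly $150,000
for rooms, size in counterfactuals(150_000):
    print(rooms, "rooms,", size, "sqft")  # e.g. 1 rooms, 1000 sqft
```

A domain expert can then scan the returned combinations: if the model claims a 1-room, 500-sqft house in a poor location is worth $150,000, something is wrong with the model, not the market.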
These are 3 of the many machine learning testing techniques that should be carried out to avoid ML failures. For more information about ML testing, visit https://www.aiensured.com/.
For more articles about testing ML models, refer to https://blog.aiensured.com/
A machine learning model works well only when the data it sees repeats the patterns it was trained on. In an evolving world, data rarely repeats, so machine learning models need constant retraining and testing.
Also check out: One such magical product that offers explainability is AIEnsured by testAIng. Do check this link.