Tesla Car Crash in Autopilot Mode
Machine learning is a field that has grown exponentially over the last decade, creating a large pool of opportunities and solving real-world problems. Companies have started adopting machine learning practices in their existing systems. Machine learning has many use cases, from image classification and Natural Language Processing (NLP) to house price prediction. Here, the discussion will be on another exciting use case of machine learning: self-driving cars. This is now a tremendously important topic, with more and more companies getting involved in testing to bring fully autonomous vehicles to the road as soon as possible. With more self-driving cars arriving on the road soon, proper attention must also be given to ethics; otherwise, these self-driving or driverless cars may end up passenger-less as well.
Tesla
Now, let’s talk a little about the company Tesla. Tesla is an automotive company, led by its well-known CEO Elon Musk, that designs and manufactures electric vehicles along with energy products such as solar panels and battery storage. The major highlight of these electric cars is the Autopilot mode. The features offered by Autopilot are as follows:
1. Navigate on Autopilot: Navigate on Autopilot suggests lane changes to optimize your route, and makes adjustments so you don’t get stuck behind slow cars or trucks. When active, Navigate on Autopilot will also automatically steer your vehicle toward highway interchanges and exits based on your destination.
2. Autosteer: Using advanced cameras, sensors, and computing power, your Tesla will navigate tighter, more complex roads.
3. Smart Summon: With Smart Summon, your car will navigate more complex environments and parking spaces, maneuvering around objects as necessary to come find you in a parking lot.
According to the official Tesla website, Autopilot enables the car to steer, accelerate, and brake automatically within its lane. That does not mean the car is autonomous, however; it still requires active driver supervision.
Getting back to the topic: a Tesla driver was killed while Autopilot was active. The accident occurred on a divided highway in central Florida when a tractor-trailer drove across the highway perpendicular to the Model S. Neither the driver, whom Tesla notes is ultimately responsible for the vehicle’s actions even with Autopilot on, nor the car noticed the big rig or the trailer "against a brightly lit sky", and the brakes were not applied. Here the driver is at fault for not paying attention to the road, as Tesla clearly states that active driver supervision is required even in Autopilot mode. Still, the incident shows that the road to fully autonomous vehicles is a long one.
This incident says loud and clear that there is a need for explainability of the decisions and predictions made by machine learning and AI systems; in other words, a need for Explainable AI (XAI). Before talking about XAI, though, it helps to understand how Tesla's Autopilot mode works.
How does the Autopilot mode work in a Tesla?
Tesla says that all its cars are built with the hardware necessary for self-driving capability. That hardware includes eight surround cameras, twelve ultrasonic sensors, and a forward-facing radar, backed by enhanced onboard processing. The system operates in "shadow mode" (processing without taking action) and sends data back to Tesla to improve its abilities until the software is ready for deployment via over-the-air upgrades. That covers the hardware, but what about the software? How does it work? The software is based on deep neural networks. These neural networks are the brain behind the vehicle's decision-making in Autopilot mode. Under the hood, an object detection algorithm runs on top of these powerful networks to detect traffic signals, pedestrians, and other vehicles on the road. So, this is the story behind Tesla's Autopilot mode. These networks have a complex structure, and it is extremely difficult to understand the decisions they make. To understand those decisions, we need XAI.
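To make this concrete, here is a minimal sketch of camera-frame object detection in Python. It uses a generic pretrained torchvision detector rather than Tesla's proprietary networks, and the frame path and confidence threshold are illustrative assumptions.

```python
# Minimal object detection sketch (illustrative only, not Tesla's stack).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# A pretrained Faster R-CNN detector; its COCO classes include cars,
# trucks, traffic lights, and people.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "frame.jpg" is a placeholder for a single camera frame.
frame = to_tensor(Image.open("frame.jpg").convert("RGB"))

with torch.no_grad():
    detections = model([frame])[0]

# Keep only confident detections, as a deployed system would.
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:  # assumed confidence threshold
        print(f"class={label.item()} score={score:.2f} box={box.tolist()}")
```

A real driving stack runs far more specialized networks on every frame from every camera, but the shape of the problem is the same: frames in, labeled boxes out, driving decisions downstream.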
The need for XAI
XAI, i.e. Explainable AI, is extremely important for answering why a neural network made a particular decision. In the car crash incident, the system failed to identify the 18-wheeler truck against a brightly lit sky, which resulted in the unfortunate death of the driver. This underlines the need to understand the decision-making of neural networks with the help of XAI techniques. To know more about the importance of explainability in AI, check this link.
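As a taste of what XAI looks like in practice, here is a minimal sketch of one widely used technique: a gradient saliency map, which highlights the input pixels that most influenced a classifier's prediction. The model, image path, and input size are illustrative assumptions, not part of Tesla's stack.

```python
# Gradient saliency sketch: which pixels drove the prediction?
import torch
import torchvision
from torchvision.transforms.functional import to_tensor, resize
from PIL import Image

model = torchvision.models.resnet18(weights="DEFAULT")
model.eval()

# "frame.jpg" is a placeholder input image.
img = resize(to_tensor(Image.open("frame.jpg").convert("RGB")), [224, 224])
img = img.unsqueeze(0).requires_grad_(True)

# Forward pass, then backpropagate the top class score to the input.
scores = model(img)
scores[0, scores[0].argmax()].backward()

# The per-pixel gradient magnitude is the saliency map: large values mark
# regions the network relied on for its decision.
saliency = img.grad.abs().max(dim=1)[0].squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```

Overlaying such a map on the frame would show whether the network even attended to the region containing the truck, which is exactly the kind of question XAI is meant to answer.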
Could this accident be avoided?
The answer to this question is yes; it definitely could have been avoided. Firstly, there is a need to understand why a particular decision was made by the neural network in order to improve it. Here, XAI comes into the picture and saves the day. Secondly, more rigorous testing needs to be done for all machine learning and AI-based systems. One such product that offers this rigorous testing is AIEnsured by testAIng. Do check this link.
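For illustration, here is a hedged sketch of one simple robustness test such a pipeline might include: brighten a camera frame to mimic glare from a bright sky and check that previously detected objects do not disappear. The model, threshold, and brightness factors are assumptions chosen for the example, not any vendor's actual test suite.

```python
# Robustness test sketch: do detections survive simulated glare?
import torch
import torchvision
from torchvision.transforms.functional import to_tensor, adjust_brightness
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(frame, threshold=0.8):
    """Return the set of confident class labels found in a frame."""
    with torch.no_grad():
        out = model([frame])[0]
    return {l.item() for l, s in zip(out["labels"], out["scores"])
            if s > threshold}

frame = to_tensor(Image.open("frame.jpg").convert("RGB"))
baseline = detect(frame)

# Brighten the frame to mimic a brightly lit sky and verify that nothing
# detected in the original frame is lost.
for factor in (1.5, 2.0, 3.0):
    missing = baseline - detect(adjust_brightness(frame, factor))
    assert not missing, f"objects lost at brightness x{factor}: {missing}"
```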
References:
1. https://www.theverge.com/2016/6/30/12072408/tesla-autopilot-car-crash-death-autonomous-model-s
Also Check Out:
1. Want to read about another machine learning related incident? Visit this link.
2. Curious about the various job opportunities in data science? Refer to this link.
3. Want to read more awesome articles? Check this link.