Hands-On Explainable AI (XAI) with Python

Summary

In this chapter, we defined XAI, a new approach to AI that builds users' trust in a system. We saw that each type of user requires a different level of explanation, and that XAI also varies from one phase of a process to another. An explainable model applied to input data will rely on specific features, while explainability for machine learning algorithms will use other functions.

With these XAI methods in mind, we then built an experimental KNN program that could help a general practitioner make a diagnosis when the same symptoms could point to several diseases.
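The core of such a program can be sketched in a few lines. The symptom features, labels, and thresholds below are hypothetical placeholders, not the book's actual dataset; the sketch only shows how a KNN classifier maps a patient's symptoms to the most likely diagnosis among several candidate diseases:

```python
import math
from collections import Counter

# Hypothetical symptom vectors: [fever, headache, muscle_pain],
# where 0 = symptom absent and 1 = symptom present (illustrative only)
train_X = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]]
train_y = ["flu", "flu", "cold", "west_nile"]  # illustrative labels

def knn_predict(train_X, train_y, query, k=3):
    """Return the majority label among the k nearest training samples."""
    # Euclidean distance from the query to every training sample
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    nearest = [label for _, label in dists[:k]]
    return Counter(nearest).most_common(1)[0][0]

# A patient presenting fever and headache, but no muscle pain
diagnosis = knn_predict(train_X, train_y, [1, 1, 0], k=3)
```

Because KNN votes over concrete neighboring cases, it lends itself to explanation: the practitioner can be shown exactly which past cases drove the prediction, which is what makes the interface explainable rather than a black box.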

We added XAI to every phase of the AI project, introducing explainable interfaces for the input data, the model used, the output data, and the whole reasoning process that leads to a diagnosis. This XAI process helped the doctor trust the AI's predictions.

We improved the program by adding the patient's Google Location History data to the KNN model, using a Python program to parse a JSON file. We also added information on the locations of mosquitoes carrying the West Nile virus. With this information, we enhanced the KNN model by correlating a patient's locations with potentially critical diseases present in those locations.
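The parsing step can be illustrated as follows. The JSON excerpt, the risk-zone bounding box, and the helper names are all hypothetical; the excerpt merely mimics the shape of a Google Takeout Location History export, in which coordinates are stored as integers scaled by 1e7:

```python
import json

# Hypothetical excerpt in the style of a Google Takeout Location History file
raw = '''
{
  "locations": [
    {"timestampMs": "1577836800000", "latitudeE7": 419331800, "longitudeE7": -876402900},
    {"timestampMs": "1577923200000", "latitudeE7": 418781400, "longitudeE7": -876297600}
  ]
}
'''

data = json.loads(raw)
# Convert the scaled integer coordinates back to decimal degrees
points = [
    (int(loc["latitudeE7"]) / 1e7, int(loc["longitudeE7"]) / 1e7)
    for loc in data["locations"]
]

# Hypothetical bounding box around an area with reported West Nile virus
# mosquito activity: (lat_min, lat_max, lon_min, lon_max)
RISK_BOX = (41.6, 42.1, -88.0, -87.5)

def in_risk_zone(lat, lon, box=RISK_BOX):
    lat_min, lat_max, lon_min, lon_max = box
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

# Flag whether the patient's history intersects the risk zone;
# this flag can then feed the KNN model as an extra feature
visited_risk = any(in_risk_zone(lat, lon) for lat, lon in points)
```

The resulting flag turns raw location history into an explainable feature: the interface can show the doctor which recorded locations fell inside the risk zone and why that shifted the diagnosis.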

In this case, XAI may have saved a patient's life. In other cases, XAI will provide enough information for a user to trust AI. As AI spreads into every area of our society, we must provide XAI to all the types of users we encounter. Everybody requires XAI at one point or another to understand how a result was produced.

The emergence of COVID-19 in late 2019 and 2020 shows that AI and XAI applied to viral infections in patients who have traveled will save lives.

In this chapter, we got our hands dirty by using various methods to explain AI, building our solution from scratch in Python, and we experienced the difficulty of building XAI solutions. In Chapter 2, White Box XAI for AI Bias and Ethics, we'll see how we can build a Python program that uses decision trees to make real-life decisions, and explore the ethical and legal problems involved in allowing AI to do so.