Explainable AI

“Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact, and potential biases.” (IBM)

Feature Attributions

Understanding the Feature Attribution Methods

Figure 1: (a) LIME explanation for a logistic regression model trained on the Iris dataset and (b) LIME explanation for a linear regression model trained on the Boston housing dataset.
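The core idea behind the LIME explanations in Figure 1 can be sketched in a few lines: perturb the instance being explained, query the black-box model on the perturbations, weight each perturbation by its proximity to the original instance, and fit a weighted linear surrogate whose coefficients serve as feature attributions. The sketch below is a minimal, illustrative reimplementation using only numpy and scikit-learn (the real `lime` package adds proper kernels, discretization, and feature selection); the sample count and kernel width are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)  # the "black box"

x0 = X[0]                                   # instance to explain
# Perturb around x0 with Gaussian noise scaled by each feature's std.
Z = x0 + rng.normal(scale=X.std(axis=0), size=(500, X.shape[1]))
probs = model.predict_proba(Z)[:, y[0]]     # black-box output for the true class
# Exponential proximity kernel: nearby perturbations get higher weight.
dist = np.linalg.norm((Z - x0) / X.std(axis=0), axis=1)
weights = np.exp(-(dist ** 2) / 2)
# Weighted ridge regression is the interpretable local surrogate.
surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
attributions = surrogate.coef_              # one importance score per feature
print(dict(zip(load_iris().feature_names, attributions.round(3))))
```

The sign of each coefficient indicates whether increasing that feature locally pushes the prediction toward or away from the explained class.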
Figure 2: LIME explanation for a BERT model trained on the BBC News dataset.
Figure 3: LIME explanation for an image classification model trained on a cats and dogs dataset.
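For images, as in Figure 3, LIME works on interpretable regions rather than raw pixels: the image is split into superpixels, random subsets of them are masked out, and a linear surrogate fit on the on/off mask patterns scores each region's importance. The toy sketch below substitutes a fixed 4x4 grid of patches for real superpixel segmentation and uses a stand-in `predict` function in place of a trained classifier; both are hypothetical assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))          # placeholder image

def predict(batch):
    # Stand-in black box: responds only to the top-left quadrant.
    return batch[:, :16, :16, :].mean(axis=(1, 2, 3))

n_patches = 16                          # 4x4 grid -> 16 interpretable parts
masks = rng.integers(0, 2, size=(200, n_patches))  # random on/off patterns
batch = []
for m in masks:
    patch_mask = m.reshape(4, 4).repeat(8, 0).repeat(8, 1)  # upsample to 32x32
    batch.append(img * patch_mask[:, :, None])              # mask patches out
preds = predict(np.array(batch))
# Fit a linear surrogate on the binary masks: coefficients = patch importance.
coefs = Ridge(alpha=1.0).fit(masks, preds).coef_
top_patch = int(coefs.argmax())         # most influential patch index
```

Because the stand-in model only looks at the top-left quadrant, the surrogate assigns its largest coefficients to the four patches covering that region, which is exactly the kind of region highlighting shown in LIME image explanations.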
Figure 4: Grad-CAM result for a VGG16 model trained on a human activity recognition dataset.
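The Grad-CAM heatmap in Figure 4 is computed from two ingredients: the feature maps A^k of the last convolutional layer, and the gradients of the class score with respect to those maps. Each map is weighted by its global-average-pooled gradient and the weighted sum is passed through a ReLU. The numpy sketch below assumes those two arrays have already been extracted from a CNN such as VGG16; the random values stand in for real activations.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((512, 14, 14))        # conv feature maps (K, H, W), assumed given
grads = rng.normal(size=A.shape)     # d(class score)/dA, assumed given

alpha = grads.mean(axis=(1, 2))      # per-map weights: global average pooling
cam = np.maximum((alpha[:, None, None] * A).sum(axis=0), 0)  # weighted sum + ReLU
cam /= cam.max() + 1e-8              # normalize heatmap to [0, 1]
```

In practice `cam` is then upsampled to the input resolution and overlaid on the image, producing the familiar red-blue saliency overlay.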
Figure 5: Deep Taylor decomposition result for a VGG16 model trained on a human activity recognition dataset.
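Deep Taylor decomposition (Figure 5) propagates a relevance score backwards through the network layer by layer. For a dense ReLU layer with positive inputs, the z+ rule redistributes each output unit's relevance to the inputs in proportion to their positive contributions z_ij = x_i * max(w_ij, 0). The sketch below applies that rule to one layer with illustrative random values; forcing one row of weights positive is an assumption made so every output unit has a nonzero positive denominator.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(8)                     # positive activations entering the layer
W = rng.normal(size=(8, 4))           # layer weights
W[0] = np.abs(W[0])                   # ensure every unit has a positive weight
R_out = rng.random(4)                 # relevance arriving from the layer above

Wp = np.maximum(W, 0)                 # z+ rule: keep positive weights only
z = x[:, None] * Wp                   # positive contributions z_ij
R_in = (z / z.sum(axis=0)) @ R_out    # redistribute relevance to the inputs
```

A defining property of the rule is conservation: the relevance arriving at the layer's outputs equals the relevance redistributed to its inputs, so repeating this step down to the pixels yields a heatmap whose total mass matches the network's output score.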
Figure 6: SHAP result for a VGG16 model trained on a human activity recognition dataset.
Figure 7: SHAP results for a YOLOv3 model trained on the COCO dataset.
Figure 8: (a) SHAP result for text classification using BERT and (b) SHAP result for text summarization using the Schleifer/distilbart-cnn-12-6 model.
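All of the SHAP results in Figures 6 through 8 estimate Shapley values: a feature's attribution is its average marginal contribution to the model's output over all possible coalitions of the other features. For a handful of features the values can be computed exactly, which is what the sketch below does for a toy value function (a hypothetical stand-in for a real model) with an interaction between features 0 and 1.

```python
import numpy as np
from itertools import combinations
from math import factorial

def v(coalition):
    # Toy value function, not a real model: additive contributions
    # plus an interaction between features 0 and 1.
    vals = {0: 1.0, 1: 2.0, 2: 0.5}
    total = sum(vals[i] for i in coalition)
    if 0 in coalition and 1 in coalition:
        total += 1.0
    return total

n = 3
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            # Shapley weight for a coalition of size |S|.
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi[i] += w * (v(set(S) | {i}) - v(set(S)))
print(phi)  # attributions; they sum to v(full coalition) - v(empty set)
```

Note how the interaction term is split evenly between features 0 and 1, a consequence of the symmetry axiom. Exact enumeration costs 2^n model evaluations, which is why the SHAP library approximates these values (e.g. via Kernel SHAP sampling) for real models like the VGG16 and YOLOv3 examples above.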
Figure 9: SODEx result for a YOLOv3 model trained on the COCO dataset.
Figure 10: Seg-Grad-CAM result for a U-Net model trained on the CamVid dataset.

Conclusion
