Below are the posts tagged with “Explainable AI”.
Project 6: Natural Language Inference with BERT and Explainable Artificial Intelligence
1. Overview
In this project, we will build a Bidirectional Encoder Representations from Transformers (BERT) based model for Natural Language Inference. The model's performance will be evaluated on the Stanford Natural Language Inference (SNLI) Corpus. To better understand how it works, we will visualize the attention mechanism and compare the output embeddings of BERT using Euclidean distance and cosine similarity.
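The two comparison metrics mentioned above can be sketched as follows. This is a minimal illustration using NumPy on random vectors standing in for BERT's 768-dimensional output embeddings; the variable names (`emb_a`, `emb_b`) are hypothetical and not taken from the project notebook.

```python
import numpy as np

def euclidean_distance(a, b):
    """Straight-line distance between two embedding vectors."""
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for two 768-dimensional BERT output embeddings
# (in the project, these would come from the model's last hidden state).
rng = np.random.default_rng(0)
emb_a = rng.normal(size=768)
emb_b = emb_a + 0.1 * rng.normal(size=768)  # a slightly perturbed copy

print(euclidean_distance(emb_a, emb_b))
print(cosine_similarity(emb_a, emb_b))
```

Note that the two metrics can disagree: cosine similarity ignores vector magnitude, so two embeddings pointing in the same direction are "identical" to it even when their Euclidean distance is large.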
The Python notebook containing the complete model development process, along with the data used in this project, can be found on Google Drive.
Project 4: Image Classification and Explainable Artificial Intelligence
1. Project Overview
In this project, we will build a model for image classification and investigate how it works.
In the first part, we will develop a convolutional neural network (CNN) model for food image classification. We will also apply the t-distributed Stochastic Neighbor Embedding (t-SNE) technique to the outputs of different layers to visualize the visual representations the CNN has learned.
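The t-SNE step can be sketched as below, using scikit-learn's `TSNE` on random features standing in for a layer's activations; the array shapes here are illustrative, not the ones used in the project.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for flattened activations of one CNN layer:
# 100 images, 64 features each (hypothetical sizes).
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 64))

# Project the high-dimensional activations down to 2-D for plotting.
tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
coords = tsne.fit_transform(features)  # shape: (100, 2)
```

Each row of `coords` can then be scattered on a 2-D plot, colored by class label, to see whether deeper layers separate the food categories more cleanly.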
To understand how the model makes its predictions, the second part applies four popular Explainable AI approaches: (1) saliency maps, (2) SmoothGrad, (3) LIME, and (4) Integrated Gradients.
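The first of these, a saliency map, is the gradient of the predicted class score with respect to the input pixels. A minimal PyTorch sketch follows, using a tiny stand-in network rather than the project's actual food classifier:

```python
import torch
import torch.nn as nn

# Tiny stand-in CNN (the real project would use a trained food classifier).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5),
)
model.eval()

# A random "image" standing in for a real input; gradients flow to it.
image = torch.rand(1, 3, 32, 32, requires_grad=True)

logits = model(image)
score = logits[0, logits.argmax()]  # score of the predicted class
score.backward()                    # d(score)/d(pixels)

# Saliency: largest absolute gradient across the color channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 32, 32)
```

SmoothGrad extends this idea by averaging such gradients over many noisy copies of the input, which tends to produce visually cleaner maps.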