Below are the posts tagged with the term “Bidirectional Encoder Representations from Transformers (BERT)”.
Project 7: Extractive QA with a Fine-Tuned BERT
1. Overview
In this project, we will build a Bidirectional Encoder Representations from Transformers (BERT) based model for a different Natural Language Processing task: Question Answering. The model will be fine-tuned on the Conversational Question Answering Challenge (CoQA) dataset from Stanford University.
The Python Notebook containing the complete model development process, along with the data used in this project, can be found on Google Drive.
2. Question Answering (QA)
Question Answering, particularly extractive Question Answering, is a Natural Language Processing task in which the model answers a question by locating the span of text in a given passage that contains the answer.
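As a rough sketch of what extractive QA with a fine-tuned BERT looks like in code, the snippet below runs span prediction with the Hugging Face transformers library. The SQuAD-tuned checkpoint, the question, and the passage are illustrative placeholders, not the project's CoQA setup.

```python
# A minimal sketch of extractive QA with a BERT-style model, using the
# Hugging Face transformers library. The checkpoint is a public SQuAD-tuned
# model chosen only for illustration; the project fine-tunes on CoQA instead.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "deepset/bert-base-cased-squad2"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "Where was the dataset created?"            # placeholder question
context = "The CoQA dataset was created at Stanford University."  # placeholder passage

# Encode the (question, context) pair; BERT sees them as a single
# [SEP]-separated sequence.
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The model emits one start logit and one end logit per token; the predicted
# answer is the span between the highest-scoring start and end positions.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)  # e.g. "Stanford University"
```

Fine-tuning follows the same pattern: the start and end token positions of the gold answer span provide the supervision signal for the two span-prediction heads.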
Project 6: Natural Language Inference with BERT and Explainable Artificial Intelligence
1. Overview
In this project, we will build a Bidirectional Encoder Representations from Transformers (BERT) based model for Natural Language Inference. The performance of the model will be evaluated on the Stanford Natural Language Inference (SNLI) Corpus. To further understand how the model works, we will visualize its attention mechanism and compare the output embeddings of BERT using Euclidean distance and cosine similarity.
The Python Notebook containing the complete model development process, along with the data used in this project, can be found on Google Drive.
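The sketch below makes the three ideas above concrete: NLI as BERT sentence-pair classification, extracting attention weights for visualization, and comparing output embeddings with Euclidean distance and cosine similarity. It uses the Hugging Face transformers library with an MNLI-tuned checkpoint and example sentences chosen purely for illustration; the project's own notebook trains on SNLI and may differ in detail.

```python
# A minimal sketch of (a) NLI as BERT sentence-pair classification,
# (b) extracting attention weights for visualization, and (c) comparing
# output embeddings with Euclidean distance and cosine similarity.
# The MNLI checkpoint and sentences are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoModel

nli_name = "textattack/bert-base-uncased-MNLI"  # illustrative, not the project's checkpoint
tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

# (a) BERT consumes the premise/hypothesis pair as one [SEP]-separated
# sequence and classifies it as entailment, neutral, or contradiction.
pair = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = nli_model(**pair).logits
print(nli_model.config.id2label[logits.argmax(dim=-1).item()])

# (b) A plain BERT encoder can return per-layer attention weights, each of
# shape (batch, num_heads, seq_len, seq_len), ready for heat-map plotting.
encoder = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    out = encoder(**pair, output_attentions=True)
last_layer_attention = out.attentions[-1]  # visualize e.g. with matplotlib

# (c) Take each sentence's [CLS] vector as its embedding and compare.
def embed(sentence: str) -> torch.Tensor:
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        return encoder(**enc).last_hidden_state[0, 0]

a, b = embed(premise), embed(hypothesis)
print("Euclidean distance:", torch.dist(a, b).item())
print("Cosine similarity:", F.cosine_similarity(a, b, dim=0).item())
```

Cosine similarity ignores vector magnitude while Euclidean distance does not, which is why the two measures can rank sentence pairs differently even over the same embeddings.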