This project explores text classification using transformer-based models such as BERT. The primary goal is to compare two approaches:
- Using transformers as feature extractors.
- Fine-tuning transformer models for specific text classification tasks.
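The two approaches above can be sketched with the Hugging Face Transformers library. This is a minimal illustration, not the project's actual code: the checkpoint name, example texts, and labels are placeholders, and the real experiments live in the notebook.

```python
import torch
from transformers import (
    AutoTokenizer,
    AutoModel,
    AutoModelForSequenceClassification,
)

# Placeholder checkpoint -- any BERT-style model works the same way.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = ["I loved this movie!", "The plot was a mess."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Approach 1: feature extraction. The transformer is frozen; the
# [CLS] token's final hidden state serves as a fixed feature vector.
encoder = AutoModel.from_pretrained(model_name)
with torch.no_grad():
    features = encoder(**inputs).last_hidden_state[:, 0, :]
# features has shape (batch_size, hidden_size); it can feed a simple
# classifier such as logistic regression, with no gradient updates to BERT.

# Approach 2: fine-tuning. A classification head is added and all
# weights are trained end-to-end on the labeled task data.
clf = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
loss = clf(**inputs, labels=torch.tensor([1, 0])).loss
loss.backward()  # gradients flow through the whole transformer
```

Feature extraction is cheaper and works with little labeled data; fine-tuning usually reaches higher accuracy when enough task data is available.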
Additionally, this project implements key concepts and techniques from the book *Natural Language Processing with Transformers*.
All code, experiments, and detailed explanations are in the accompanying notebook, which walks through the full pipeline from data preparation to model evaluation.