https://github.com/sean-doody/modernbert-classification-example
An example workflow for fine-tuning ModernBERT for a classification task using the IMDB dataset.
- Host: GitHub
- URL: https://github.com/sean-doody/modernbert-classification-example
- Owner: sean-doody
- Created: 2025-01-17T22:10:57.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2025-01-17T22:21:17.000Z (about 1 year ago)
- Last Synced: 2025-06-25T18:46:01.309Z (9 months ago)
- Topics: classification, encoder-model, huggingface, modernbert, nlp, transformers
- Language: Jupyter Notebook
- Homepage:
- Size: 14.6 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# ModernBERT Fine-Tune: IMDB Example
This notebook contains a recipe for fine-tuning [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) for a binary classification task. The [IMDB reviews dataset](https://huggingface.co/datasets/stanfordnlp/imdb) is used as a toy example, but the code can be adapted for other workflows.
## Running the Code
- The notebook can be run on free-tier [Google Colab](https://colab.research.google.com) runtimes.
- I used a training batch size of 4 and an eval batch size of 2 on a free T4 GPU runtime.
- While ModernBERT supports context lengths of up to 8192 tokens, I capped this example at 1024 tokens to test the model's performance with a moderately sized max length.
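The recipe above can be sketched in a short script. This is a minimal illustration, not the notebook's exact code: the output directory name, epoch count, and helper function names are my own assumptions, while the model checkpoint, dataset, max length, and batch sizes follow the notes above. It assumes recent versions of `transformers` (>= 4.48, which added ModernBERT support) and `datasets`.

```python
# Hedged sketch of the fine-tuning workflow; hyperparameters mirror the
# README notes, everything else (names, epochs) is illustrative.
import numpy as np

MAX_LENGTH = 1024  # ModernBERT supports up to 8192, but 1024 fits free-tier GPUs


def compute_metrics(eval_pred):
    """Accuracy from raw logits, in the shape the HF Trainer expects."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}


def main():
    from datasets import load_dataset
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )

    tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "answerdotai/ModernBERT-base", num_labels=2  # binary: pos/neg review
    )

    # IMDB ships with "train" and "test" splits; labels are already 0/1.
    imdb = load_dataset("stanfordnlp/imdb")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=MAX_LENGTH)

    tokenized = imdb.map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="modernbert-imdb",      # assumed name
        per_device_train_batch_size=4,     # fits a free T4 runtime
        per_device_eval_batch_size=2,
        num_train_epochs=1,                # assumed
        eval_strategy="epoch",
    )
    Trainer(
        model=model,
        args=args,
        train_dataset=tokenized["train"],
        eval_dataset=tokenized["test"],
        compute_metrics=compute_metrics,
    ).train()


if __name__ == "__main__":
    main()
```

On Colab, the padded 1024-token sequences are what drive the small batch sizes; dropping `MAX_LENGTH` or enabling gradient checkpointing are the usual levers if memory runs out.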