{"id":13534930,"url":"https://github.com/dhlee347/pytorchic-bert","last_synced_at":"2025-04-02T00:31:16.595Z","repository":{"id":38428599,"uuid":"161426593","full_name":"dhlee347/pytorchic-bert","owner":"dhlee347","description":"Pytorch Implementation of Google BERT","archived":false,"fork":false,"pushed_at":"2020-03-29T13:48:14.000Z","size":54,"stargazers_count":590,"open_issues_count":10,"forks_count":180,"subscribers_count":10,"default_branch":"master","last_synced_at":"2024-11-02T22:32:54.910Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dhlee347.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-12-12T03:17:23.000Z","updated_at":"2024-10-17T13:36:01.000Z","dependencies_parsed_at":"2022-07-12T15:30:59.920Z","dependency_job_id":null,"html_url":"https://github.com/dhlee347/pytorchic-bert","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dhlee347%2Fpytorchic-bert","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dhlee347%2Fpytorchic-bert/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dhlee347%2Fpytorchic-bert/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dhlee347%2Fpytorchic-bert/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dhlee347","download_url":"https://codeload.github.com/dhlee347/pytorchic-bert/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":246735038,"owners_count":20825212,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T08:00:46.973Z","updated_at":"2025-04-02T00:31:16.581Z","avatar_url":"https://github.com/dhlee347.png","language":"Python","readme":"[\u003cimg width=\"400\"\nsrc=\"https://user-images.githubusercontent.com/32828768/49876264-ff2e4180-fdf0-11e8-9512-06ffe3ede9c5.png\"\u003e](https://jalammar.github.io/illustrated-bert/)\n\n# Pytorchic BERT\nThis is re-implementation of [Google BERT model](https://github.com/google-research/bert) [[paper](https://arxiv.org/abs/1810.04805)] in Pytorch. I was strongly inspired by [Hugging Face's code](https://github.com/huggingface/pytorch-pretrained-BERT) and I referred a lot to their codes, but I tried to make my codes **more pythonic and pytorchic style**. Actually, the number of lines is less than a half of HF's. \n\n(It is still not so heavily tested - let me know when you find some bugs.)\n\n## Requirements\n\nPython \u003e 3.6, fire, tqdm, tensorboardx,\ntensorflow (for loading checkpoint file)\n\n## Overview\n\nThis contains 9 python files.\n- [`tokenization.py`](./tokenization.py) : Tokenizers adopted from the original Google BERT's code\n- [`checkpoint.py`](./checkpoint.py) : Functions to load a model from tensorflow's checkpoint file\n- [`models.py`](./models.py) : Model classes for a general transformer\n- [`optim.py`](./optim.py) : A custom optimizer (BertAdam class) adopted from Hugging Face's code\n- [`train.py`](./train.py) : A helper class for training and evaluation\n- [`utils.py`](./utils.py) : Several utility functions\n- [`pretrain.py`](./pretrain.py) : An example code for pre-training transformer\n- [`classify.py`](./classify.py) : An example code for fine-tuning using pre-trained transformer\n\n## Example Usage\n\n### Fine-tuning (MRPC) Classifier with Pre-trained Transformer\nDownload pretrained model [BERT-Base, Uncased](https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip) and\n[GLUE Benchmark Datasets]( https://github.com/nyu-mll/GLUE-baselines) \nbefore fine-tuning.\n* make sure that \"total_steps\" in train_mrpc.json is n_epochs*(num_data/batch_size)\n```\nexport GLUE_DIR=/path/to/glue\nexport BERT_PRETRAIN=/path/to/pretrain\nexport SAVE_DIR=/path/to/save\n\npython classify.py \\\n    --task mrpc \\\n    --mode train \\\n    --train_cfg config/train_mrpc.json \\\n    --model_cfg config/bert_base.json \\\n    --data_file $GLUE_DIR/MRPC/train.tsv \\\n    --pretrain_file $BERT_PRETRAIN/bert_model.ckpt \\\n    --vocab $BERT_PRETRAIN/vocab.txt \\\n    --save_dir $SAVE_DIR \\\n    --max_len 128\n```\nOutput :\n```\ncuda (8 GPUs)\nIter (loss=0.308): 100%|██████████████████████████████████████████████| 115/115 [01:19\u003c00:00,  2.07it/s]\nEpoch 1/3 : Average Loss 0.547\nIter (loss=0.303): 100%|██████████████████████████████████████████████| 115/115 [00:50\u003c00:00,  2.30it/s]\nEpoch 2/3 : Average Loss 0.248\nIter (loss=0.044): 100%|██████████████████████████████████████████████| 115/115 [00:50\u003c00:00,  2.33it/s]\nEpoch 3/3 : Average Loss 0.068\n```\n\n### Evaluation of the trained Classifier\n```\nexport GLUE_DIR=/path/to/glue\nexport BERT_PRETRAIN=/path/to/pretrain\nexport SAVE_DIR=/path/to/save\n\npython classify.py \\\n    --task mrpc \\\n    --mode eval \\\n    --train_cfg config/train_mrpc.json \\\n    --model_cfg config/bert_base.json \\\n    --data_file $GLUE_DIR/MRPC/dev.tsv \\\n    --model_file $SAVE_DIR/model_steps_345.pt \\\n    --vocab $BERT_PRETRAIN/vocab.txt \\\n    --max_len 128\n```\nOutput :\n```\ncuda (8 GPUs)\nIter(acc=0.792): 100%|████████████████████████████████████████████████| 13/13 [00:27\u003c00:00,  2.01it/s]\nAccuracy: 0.843137264251709\n```\n[Google BERT original repo](https://github.com/google-research/bert) also reported 84.5%.\n\n\n### Pre-training Transformer\nInput file format :\n1. One sentence per line. These should ideally be actual sentences, not entire paragraphs or arbitrary spans of text. (Because we use the sentence boundaries for the \"next sentence prediction\" task).\n2. Blank lines between documents. Document boundaries are needed so that the \"next sentence prediction\" task doesn't span between documents.\n```\nDocument 1 sentence 1\nDocument 1 sentence 2\n...\nDocument 1 sentence 45\n\nDocument 2 sentence 1\nDocument 2 sentence 2\n...\nDocument 2 sentence 24\n```\nUsage :\n```\nexport DATA_FILE=/path/to/corpus\nexport BERT_PRETRAIN=/path/to/pretrain\nexport SAVE_DIR=/path/to/save\n\npython pretrain.py \\\n    --train_cfg config/pretrain.json \\\n    --model_cfg config/bert_base.json \\\n    --data_file $DATA_FILE \\\n    --vocab $BERT_PRETRAIN/vocab.txt \\\n    --save_dir $SAVE_DIR \\\n    --max_len 512 \\\n    --max_pred 20 \\\n    --mask_prob 0.15\n```\nOutput (with Toronto Book Corpus):\n```\ncuda (8 GPUs)\nIter (loss=5.837): : 30089it [18:09:54,  2.17s/it]\nEpoch 1/25 : Average Loss 13.928\nIter (loss=3.276): : 30091it [18:13:48,  2.18s/it]\nEpoch 2/25 : Average Loss 5.549\nIter (loss=4.163): : 7380it [4:29:38,  2.19s/it]\n...\n```\nTraining Curve (1 epoch ~ 30k steps ~ 18 hours):\n\nLoss for Masked LM vs Iteration steps\n\u003cimg src=\"https://user-images.githubusercontent.com/32828768/50011629-9a0e5380-ff8a-11e8-87ab-18cd22453561.png\"\u003e\nLoss for Next Sentence Prediction vs Iteration steps\n\u003cimg src=\"https://user-images.githubusercontent.com/32828768/50011633-9c70ad80-ff8a-11e8-8670-8baaebb6e51a.png\"\u003e\n\n","funding_links":[],"categories":["implement of BERT besides tensorflow:","Transformer Implementations By Communities","Python"],"sub_categories":["PyTorch"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdhlee347%2Fpytorchic-bert","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdhlee347%2Fpytorchic-bert","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdhlee347%2Fpytorchic-bert/lists"}