https://github.com/kimiyoung/transformer-xl
- Host: GitHub
- URL: https://github.com/kimiyoung/transformer-xl
- Owner: kimiyoung
- License: apache-2.0
- Created: 2019-01-08T12:20:24.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2022-09-21T06:22:01.000Z (over 2 years ago)
- Last Synced: 2024-11-02T22:32:55.168Z (6 months ago)
- Language: Python
- Size: 112 KB
- Stars: 3,611
- Watchers: 84
- Forks: 763
- Open Issues: 97
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-bert - kimiyoung/transformer-xl - Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. This repository contains the code in both PyTorch and TensorFlow for our paper. (improvement over BERT)
- awesome-transformer-nlp - kimiyoung/transformer-xl - Code repository associated with the Transformer-XL paper. (Transformer Implementations By Communities / TensorFlow)
README
# Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
This repository contains the code in both **PyTorch** and **TensorFlow** for our paper
>[Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](http://arxiv.org/abs/1901.02860)
>
>Zihang Dai\*, Zhilin Yang\*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov (\*: equal contribution)
>
>Preprint 2018
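For readers skimming this page, here is a minimal sketch of the idea behind "beyond a fixed-length context": each segment attends over a cache of hidden states carried over from the previous segment, so the effective context keeps growing past the segment length. This is an illustrative toy in PyTorch, not the repository's model; the class name `ToyRecurrentAttention` and all sizes below are hypothetical, and the real layer additionally uses multi-head attention with relative positional encodings.

```python
import torch
import torch.nn as nn

class ToyRecurrentAttention(nn.Module):
    """Toy single-head attention layer with a segment-level memory cache."""

    def __init__(self, d_model: int, mem_len: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.mem_len = mem_len

    def forward(self, x: torch.Tensor, memory: torch.Tensor):
        # x:      (batch, seg_len, d_model)  -- current segment
        # memory: (batch, mem_len, d_model)  -- cached hidden states from the past
        context = torch.cat([memory, x], dim=1)           # extended context
        q, _, _ = self.qkv(x).chunk(3, dim=-1)
        _, k, v = self.qkv(context).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / x.size(-1) ** 0.5, dim=-1)
        out = attn @ v
        # New memory: detach so gradients never flow across segment boundaries.
        new_memory = context[:, -self.mem_len:].detach()
        return out, new_memory

# Process a long sequence segment by segment, carrying the memory forward.
d_model, mem_len, seg_len, batch = 32, 8, 8, 2
layer = ToyRecurrentAttention(d_model, mem_len)
memory = torch.zeros(batch, mem_len, d_model)
long_input = torch.randn(batch, 4 * seg_len, d_model)
for start in range(0, long_input.size(1), seg_len):
    segment = long_input[:, start:start + seg_len]
    out, memory = layer(segment, memory)   # memory links consecutive segments
```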
## TensorFlow
- The source code is in the `tf/` folder, supporting (1) single-node multi-gpu training, and (2) multi-host TPU training.
- Besides the source code, we also provide pretrained **TensorFlow** models with the state-of-the-art (SoTA) performance reported in the paper.
- Please refer to `tf/README.md` for details.

## PyTorch
- The source code is in the `pytorch/` folder, supporting single-node multi-gpu training via the module `nn.DataParallel` (a rough sketch of this setup follows below).
- Please refer to `pytorch/README.md` for details.
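As a rough illustration of the single-node multi-GPU setup mentioned above, the sketch below wraps a placeholder model in `nn.DataParallel` and runs one training step. The model, hyperparameters, and data are hypothetical stand-ins, not the repository's actual model or training script.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the repository's language model.
class TinyLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        return self.proj(self.embed(tokens))

model = TinyLM()
if torch.cuda.is_available():
    model = model.cuda()
# nn.DataParallel splits each batch across all visible GPUs on one node
# and gathers the outputs back onto the default device.
model = nn.DataParallel(model)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
tokens = torch.randint(0, 1000, (8, 32))     # (batch, seq_len) token ids
targets = torch.randint(0, 1000, (8, 32))
if torch.cuda.is_available():
    tokens, targets = tokens.cuda(), targets.cuda()

logits = model(tokens)                       # (batch, seq_len, vocab)
loss = nn.functional.cross_entropy(logits.view(-1, logits.size(-1)),
                                   targets.view(-1))
loss.backward()
optimizer.step()
```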
## Results

Transformer-XL achieves new state-of-the-art results on multiple language modeling benchmarks. It is also the first model to break through the 1.0 bits-per-character barrier on character-level language modeling. Below is a summary.
Method | enwik8 (bpc) | text8 (bpc) | One Billion Word (ppl) | WT-103 (ppl) | PTB (ppl, w/o finetuning)
-- | -- | -- | -- | -- | --
Previous Best | 1.06 | 1.13 | 23.7 | 20.5 | 55.5
Transformer-XL | **0.99** | **1.08** | **21.8** | **18.3** | **54.5**
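For orientation, the enwik8 and text8 numbers are bits per character and the word-level benchmarks are perplexities; both are simple transformations of the average cross-entropy loss. A small helper sketch (the function names are ours, not from this repo):

```python
import math

def bits_per_character(nats_per_char: float) -> float:
    """Convert average cross-entropy in nats/char to bits/char (enwik8, text8)."""
    return nats_per_char / math.log(2)

def perplexity(nats_per_token: float) -> float:
    """Convert average cross-entropy in nats/token to perplexity (WT-103, 1B Word, PTB)."""
    return math.exp(nats_per_token)

# Example: an average loss of about 0.686 nats/char corresponds to ~0.99 bpc,
# matching the enwik8 row above.
print(round(bits_per_character(0.686), 2))   # ~0.99
```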
## Acknowledgement

A large portion of the `getdata.sh` script comes from the [awd-lstm](https://github.com/salesforce/awd-lstm-lm/) repo. Happy Language Modeling :)