https://github.com/abdulfatir/wgan-gp-billion-words
PyTorch implementation of WGAN-GP for character level language modeling
- Host: GitHub
- URL: https://github.com/abdulfatir/wgan-gp-billion-words
- Owner: abdulfatir
- Created: 2020-09-11T19:15:07.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2020-09-11T19:34:14.000Z (about 5 years ago)
- Last Synced: 2025-04-03T19:37:13.260Z (7 months ago)
- Language: Python
- Size: 427 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# WGAN-GP on the Billion Words Dataset
I could not find a PyTorch implementation of the character-level language modeling task presented in [1], so I ported the original code to PyTorch.
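The core of WGAN-GP [1] is the gradient penalty, which pushes the critic's gradient norm toward 1 on points interpolated between real and generated samples. A minimal sketch of that term in PyTorch, assuming one-hot character sequences of shape `(batch, seq_len, vocab)`; the function name and shapes are illustrative, not taken from `wgan_text.py`:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty: (||grad_x critic(x_interp)|| - 1)^2, averaged over the batch."""
    batch = real.size(0)
    # One interpolation coefficient per sample, broadcast over (seq_len, vocab)
    eps = torch.rand(batch, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Gradients of the critic's output w.r.t. the interpolated inputs;
    # create_graph=True so the penalty itself can be backpropagated through
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.reshape(batch, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

This penalty is added to the critic's loss each iteration; `lambda_gp=10` is the coefficient used in [1].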
## Usage
* Download the 1 Billion Word dataset from [here](http://www.statmt.org/lm-benchmark/) and extract it into `./data/`.
* Run `wgan_text.py`.
* The results and models will be saved in `./results/`.

## Results
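Fig. 1 below tracks the JS-4 score. In the original improved_wgan_training code this is, as I understand it, the Jensen-Shannon divergence between the 4-gram frequency distributions of generated and real text (lower is better). A minimal, hedged sketch of such a metric, not the repository's actual implementation:

```python
from collections import Counter
from math import log

def ngram_dist(lines, n=4):
    """Relative frequencies of character n-grams over a list of strings."""
    counts = Counter(tuple(line[i:i + n])
                     for line in lines
                     for i in range(len(line) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two n-gram distributions."""
    m = {g: 0.5 * (p.get(g, 0.0) + q.get(g, 0.0)) for g in set(p) | set(q)}
    def kl(a, b):
        return sum(pa * log(pa / b[g], 2) for g, pa in a.items() if pa > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

With base-2 logarithms the score lies in [0, 1]: 0 for identical distributions, 1 for disjoint ones.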
Fig. 1: JS-4 score vs. iterations.

### Acknowledgments
This code was ported from the original implementation, [igul222/improved_wgan_training](https://github.com/igul222/improved_wgan_training), which was released under the MIT License.
#### References
[1] Gulrajani, Ishaan, et al. "Improved Training of Wasserstein GANs." Advances in Neural Information Processing Systems, 2017.