https://github.com/saattrupdan/autopoet
Build poems from text sources
- Host: GitHub
- URL: https://github.com/saattrupdan/autopoet
- Owner: saattrupdan
- Created: 2019-09-29T16:37:05.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2023-10-18T01:49:35.000Z (over 1 year ago)
- Last Synced: 2024-11-10T14:42:58.852Z (3 months ago)
- Topics: haiku, neural-network, poems, syllables, trump, twitter
- Language: Python
- Homepage:
- Size: 43.1 MB
- Stars: 6
- Watchers: 1
- Forks: 0
- Open Issues: 2
Metadata Files:
- Readme: README.md
README
# AutoPoet
Build poems from text sources.
## Todos
- [x] Fetch Trump tweet data
- [x] Generate vocabulary from Trump tweets
- [x] Build baseline model to count syllables in English words
- [x] Optimise syllable counter
- [x] Use model to build Haikus from Trump tweets
- [ ] Build progressive web app that generates poems
- [ ] Enable working with live tweets
- [ ] Enable working with other text sources

## Syllable model
A large part of this project was to develop a model that counts syllables in English words.
The syllable counter is trained on a slightly modified version of the [Gutenberg](https://en.wikipedia.org/wiki/Project_Gutenberg) syllable corpus, consisting of ~170,000 English words split into syllables. The `process_gutsyls` notebook converts these into a format more convenient for our purposes. The raw dataset can be freely downloaded [here](http://onlinebooks.library.upenn.edu/webbin/gutbook/lookup?num=3204), and the preprocessed versions used for this project can be found [here](https://filedn.com/lRBwPhPxgV74tO0rDoe8SpH/autopoet_data/).
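As a rough illustration of this preprocessing step, the sketch below parses a syllabified word list into `(word, syllable_count)` pairs. The delimiter and file layout are assumptions for illustration; the actual format handled by `process_gutsyls` may differ.

```python
# Sketch only: assumes one word per line with syllables separated by a
# delimiter, e.g. "syl*la*ble". The real corpus format may differ.

def parse_corpus(path: str, sep: str = "*"):
    """Yield (word, syllable_count) pairs from a syllabified word list."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip().lower()
            if not line:
                continue
            syllables = line.split(sep)
            yield "".join(syllables), len(syllables)

# Example: parse_corpus("gutsyls.txt") might yield ("syllable", 3)
```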
The model is a recurrent neural network that works at the character level, with the following rough architecture:
1. Embed the characters into 64-dimensional vectors
2. Process the characters through three bidirectional GRU layers, each having 2x128 = 256 hidden units
3. Process the GRU outputs through a time-distributed dense layer with 256 hidden units, followed by a ReLU activation
4. Finally, project the dense-layer outputs down to a single dimension at each time step, yielding a sequence of real numbers between 0 and 1 of the same length as the input
5. To get the syllable count, we sum up the probabilities and round to the nearest integer

To get a more detailed view of the model's architecture see `syllablecounter.py`, and check out `core.py` for an idea of how the model is trained.
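The repository is written in Python; the sketch below expresses the five steps above in PyTorch. It is only an illustration of the described architecture: the actual `syllablecounter.py` may use a different framework and differ in detail.

```python
import torch
import torch.nn as nn

class SyllableCounter(nn.Module):
    """Character-level syllable counter, sketched from the description above."""

    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Step 1: embed characters into 64-dimensional vectors
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Step 2: three bidirectional GRU layers, 2 x 128 = 256 hidden units each
        self.gru = nn.GRU(emb_dim, hidden, num_layers=3,
                          bidirectional=True, batch_first=True)
        # Step 3: time-distributed dense layer (applied at every time step)
        self.dense = nn.Linear(2 * hidden, 256)
        # Step 4: project down to one probability per character
        self.out = nn.Linear(256, 1)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(char_ids)                        # (batch, seq, 64)
        x, _ = self.gru(x)                              # (batch, seq, 256)
        x = torch.relu(self.dense(x))                   # (batch, seq, 256)
        return torch.sigmoid(self.out(x)).squeeze(-1)   # (batch, seq)

    def count_syllables(self, char_ids: torch.Tensor) -> torch.Tensor:
        # Step 5: sum the per-character probabilities and round
        return self.forward(char_ids).sum(dim=1).round()
```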
This model currently achieves a 96.89% validation accuracy.
The reason we sum the *probabilities* in step (5), rather than rounding each probability first, is to handle the situation where the model is unsure which of two consecutive characters begins a new syllable. Each will then have a probability of ~50%, so together they contribute a single syllable to the sum rather than two.
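A small worked example (with hypothetical probabilities) makes the difference concrete:

```python
# Hypothetical per-character probabilities: the model is confident a syllable
# starts at the first character, but unsure whether the next syllable starts
# at the third or the fourth character.
probs = [0.9, 0.0, 0.5, 0.5, 0.0]

# Summing first (step 5): 0.9 + 0.5 + 0.5 = 1.9, which rounds to 2 syllables,
# so the two uncertain characters together count as one syllable.
print(round(sum(probs)))  # 2

# Rounding each probability first forces the two uncertain characters to
# contribute either 0 or 2 syllables, never the single syllable intended.
print(sum(round(p) for p in probs))  # 1 here (Python rounds 0.5 down to 0)
```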