https://github.com/yjlolo/music-seq2seq
- Host: GitHub
- URL: https://github.com/yjlolo/music-seq2seq
- Owner: yjlolo
- Created: 2018-11-06T03:36:23.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2018-11-14T03:55:21.000Z (over 6 years ago)
- Last Synced: 2025-01-31T06:42:42.441Z (5 months ago)
- Language: Python
- Size: 56.6 KB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# music-seq2seq
This repository builds a seq2seq auto-encoder for music audio and uses the learnt representations for downstream tasks.
The current downstream task of interest is emotion recognition, using the [PMEmo Dataset](http://pmemo.hellohui.cn); a rough sketch of the intended model is given after the checklist below.
The repo will not include the dataset.

## Under construction
- [x] dataset
- [x] data loader
- [x] chunks division
- [x] model
- [x] attention
- [x] regressive inference
- [ ] Hierarchical RNN
- [ ] RCNN
- [x] trainer
- [x] extra loss constraints
- [ ] classifier
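The following is a minimal sketch of what such a seq2seq auto-encoder could look like, not the repo's actual code; the log-mel frame size, hidden size, and teacher-forced decoding are assumptions for illustration.

```python
import torch
import torch.nn as nn


class Seq2SeqAutoencoder(nn.Module):
    """Hypothetical GRU encoder-decoder that reconstructs a chunk of audio frames."""

    def __init__(self, n_features=128, hidden_size=256):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden_size, batch_first=True)
        self.decoder = nn.GRU(n_features, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x):
        # x: (batch, time, n_features); the encoder's last hidden state serves
        # as the fixed-length representation of the whole chunk.
        _, h = self.encoder(x)
        # Teacher forcing: the decoder sees the input frames shifted by one step.
        shifted = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        dec_out, _ = self.decoder(shifted, h)
        return self.output(dec_out), h.squeeze(0)


model = Seq2SeqAutoencoder()
chunk = torch.randn(8, 100, 128)             # a batch of 100-frame chunks
recon, representation = model(chunk)
loss = nn.functional.mse_loss(recon, chunk)  # reconstruction objective
loss.backward()
```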
This is also my first repository, intended to improve my coding skills as well.
The structure and template are adapted from [Pytorch-template](https://github.com/victoresque/pytorch-template).
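For the downstream task, the encoder's fixed-length representation could feed a small valence/arousal regressor (PMEmo provides such annotations). The sketch below is again hypothetical: it reuses the `Seq2SeqAutoencoder` from above and random placeholder labels rather than real PMEmo data.

```python
import torch
import torch.nn as nn

# Small regression head mapping the 256-d representation to (valence, arousal).
regressor = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

with torch.no_grad():
    # Reuses `model` and `chunk` from the auto-encoder sketch above,
    # treating the trained encoder as a frozen feature extractor.
    _, representation = model(chunk)

pred_va = regressor(representation)   # (batch, 2)
target_va = torch.rand(8, 2)          # placeholder labels, not PMEmo data
loss = nn.functional.mse_loss(pred_va, target_va)
loss.backward()
```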