Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Implementation/Interpretation of Vector Quantized Timbre Representation (Bitton et al 2020)
- Host: GitHub
- URL: https://github.com/cyrusvahidi/vector-quantized-timbre
- Owner: cyrusvahidi
- Created: 2021-10-14T11:31:34.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2022-01-28T14:24:43.000Z (almost 3 years ago)
- Last Synced: 2024-10-04T17:17:52.069Z (about 1 month ago)
- Language: Jupyter Notebook
- Size: 6.23 MB
- Stars: 11
- Watchers: 3
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Vector-Quantized Timbre Representation (Bitton et al. 2020)
Implementation and exploration of the paper [Vector-Quantized Timbre Representation](https://arxiv.org/pdf/2007.06349.pdf).

Developed with [Adán Benito](https://github.com/adanlbenito) in early 2021.
* This repository contains a PyTorch Lightning framework to train a model similar to the one described in the paper.
* The main difference is the use of an exponential moving-average (EMA) codebook.
* A conventional VQ codebook collapsed during training (`perplexity = 1`).

## Setup
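For reference, the EMA codebook idea can be sketched as follows. This is a minimal numpy illustration of the standard VQ-VAE EMA update rule (decay, smoothing constant, and all names are illustrative, not taken from this repository's code), together with the perplexity statistic used above to detect codebook collapse:

```python
import numpy as np

def ema_codebook_update(codebook, cluster_size, embed_sum, z, decay=0.99, eps=1e-5):
    """One EMA update step for a VQ codebook (sketch of the VQ-VAE EMA rule).

    codebook:     (K, D) code vectors
    cluster_size: (K,)   EMA of per-code assignment counts
    embed_sum:    (K, D) EMA of summed encoder outputs per code
    z:            (N, D) batch of encoder outputs
    """
    # Nearest-codebook assignment for each encoder output
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    codes = d.argmin(1)
    onehot = np.eye(codebook.shape[0])[codes]                  # (N, K)

    # EMA of assignment counts and of summed assigned vectors
    cluster_size = decay * cluster_size + (1 - decay) * onehot.sum(0)
    embed_sum = decay * embed_sum + (1 - decay) * onehot.T @ z

    # Laplace smoothing keeps rarely-used codes from dividing by zero
    n = cluster_size.sum()
    smoothed = (cluster_size + eps) / (n + codebook.shape[0] * eps) * n
    codebook = embed_sum / smoothed[:, None]
    return codebook, cluster_size, embed_sum, codes

def perplexity(codes, K):
    """exp(entropy) of code usage: 1 means total collapse, K means uniform usage."""
    p = np.bincount(codes, minlength=K) / len(codes)
    p = p[p > 0]
    return np.exp(-(p * np.log(p)).sum())
```

A perplexity pinned at 1 means every encoder output maps to the same code, which is the collapse mode the EMA update is meant to avoid.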
```
pip install -r requirements.txt
pip install -e .
```

## Data preparation
Data splits and preprocessing are performed for the URMP western musical instrument dataset.

To split the URMP dataset into 3-second numpy files for efficient loading:

`python scripts/urmp_numpy_segments.py`

Then set the `URMP_DATA_DIR` variable in `gin_configs/vq_timbre.gin` to ``.
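The segmentation step presumably amounts to something like the following sketch (the actual script may differ; `segment_audio` and its arguments are illustrative):

```python
import numpy as np

def segment_audio(audio, sr, seconds=3.0):
    """Split a 1-D audio array into fixed-length segments, dropping the remainder.

    audio: 1-D float array of samples
    sr:    sample rate in Hz
    Returns an (n_segments, seg_len) array, ready to save with np.save
    for fast loading during training.
    """
    seg_len = int(sr * seconds)
    n = len(audio) // seg_len
    return audio[: n * seg_len].reshape(n, seg_len)
```

Each row would then be written out as its own `.npy` file so segments can be memory-mapped or loaded individually.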
## Training
`python scripts/run_train.py`
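Training is driven by the bindings in `gin_configs/vq_timbre.gin`; a hypothetical fragment (paths and instrument ids are illustrative, only the binding names below come from this README) might look like:

```
# gin_configs/vq_timbre.gin (illustrative values)
URMP_DATA_DIR = '/path/to/urmp_segments'
URMP.instr_ids = [1]          # train on a single target instrument
lightning_run.logger = True   # enable wandb logging
```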
* Most hyperparameters are configured with `gin-config`.
* Set `URMP.instr_ids = []` to train on a target musical instrument.
* To log to wandb, set `lightning_run.logger = True` in `gin_configs/vq_timbre.gin`.

## Evaluation
* Download the checkpoint for a model trained on violin: `https://drive.google.com/file/d/1fJ9bkM5eAuCNz4DClfeTm6b-IKjkTs0y/view?usp=sharing`
* Put it in `/checkpoints`
* Run `jupyter notebook`
* See `notebooks/eval_model.ipynb` for simple timbre transfer and feature-based synthesis
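At the heart of timbre transfer in this kind of model is the quantization lookup: frame-wise encoder features of the source audio are replaced by their nearest entries in the trained (e.g. violin) codebook before decoding. A minimal sketch of that lookup (function and argument names are illustrative, not the notebook's actual API):

```python
import numpy as np

def quantize_to_codebook(features, codebook):
    """Replace each feature vector with its nearest codebook entry.

    features: (T, D) frame-wise encoder features of the source audio
    codebook: (K, D) learned code vectors of the target instrument
    Returns the quantized features and the chosen code indices.
    """
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    codes = d.argmin(1)
    return codebook[codes], codes
```

Decoding the quantized features then synthesizes the source performance with the target instrument's timbre; feature-based synthesis instead builds the code sequence directly from chosen features.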