Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
JKU special topics - Audio and Music Processing
https://github.com/laurenzbeck/music-processing-challenge
- Host: GitHub
- URL: https://github.com/laurenzbeck/music-processing-challenge
- Owner: LaurenzBeck
- License: mit
- Created: 2022-04-24T12:33:16.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2022-06-21T19:52:43.000Z (over 2 years ago)
- Last Synced: 2024-11-14T20:44:38.559Z (2 months ago)
- Topics: ai, beats, mir, ml, onsets, signal-processing, tempo
- Language: Jupyter Notebook
- Homepage:
- Size: 30.8 MB
- Stars: 3
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
![image_header](./studio.jpg)
# Audio and Music Processing

Special Topics - JKU Linz

---
## Project
This project was part of my master studies in Artificial Intelligence at the Johannes Kepler University in Linz.
During the Special Topics lecture on audio and music processing by Rainer Kelz, I took part in three challenges:

+ Onset Detection
+ Beat Detection
+ Tempo Detection

**Team:** NeuraBeats
## Installation
To install the project's dependencies and create a virtual environment, make sure that your system has Python (>=3.9,<3.10) and [poetry](https://python-poetry.org/) installed.
Then `cd` into the project's root directory and run `poetry install`.
Alternatively, install the dependencies from the `requirements.txt` file with your python environment manager of choice.
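Put together, the setup looks roughly like the following (the clone step and directory name are inferred from the repository URL; pick either install option, not both):

```shell
# clone the repository and enter its root directory
git clone https://github.com/laurenzbeck/music-processing-challenge.git
cd music-processing-challenge

# option 1: let poetry create the virtual environment and install the pinned dependencies
poetry install

# option 2: install from requirements.txt with pip or any other environment manager
pip install -r requirements.txt
```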
I was not allowed to make the dataset public, so you need to place the challenge dataset in the `data/raw/` directory yourself.
## Project structure
```
.
├── challenges            # scripts and pipeline descriptions of the three challenges
│   ├── beat-detection
│   ├── onset-detection
│   ├── tempo-estimation
│   ├── train_val_split.py
│   └── utils.py
├── data
│   ├── interim
│   ├── processed
│   └── raw               # the train and test .wav files and labels are stored here
├── dvc.lock
├── dvc_plots
├── dvc-storage           # dvc storage backend (not included in this public repo)
├── LICENSE
├── models
├── notebooks             # exploratory programming
├── params.yaml           # configuration for the three challenges
├── poetry.lock
├── pyproject.toml        # python environment information
├── README.md
├── reports               # predictions and project reports
│   ├── beat-detection
│   ├── onset-detection
│   └── tempo-estimation
└── requirements.txt
```

## Running the data pipelines
The three challenges were implemented as [dvc](https://dvc.org/) pipelines, which allows every experiment to be fully reproduced, given that the data storage backend is available. This is achieved by a git-centric approach, where not only the code is versioned with git, but also the configuration, the data, the artifacts, the models and the metrics.
The pipelines are defined by the `dvc.yaml` files in the `challenges` directory. To run them all, simply call `dvc repro -P`.
If you want to execute the scripts manually, you can go through the stages in the `dvc.yaml` files and call the `cmd` value of every stage from the project's root directory.
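As a rough sketch of what such a stage definition looks like (the stage name, script path, and parameter key below are hypothetical illustrations, not taken from this repository's actual `dvc.yaml` files):

```yaml
stages:
  onset-detection:
    # hypothetical command; the real cmd values live in challenges/*/dvc.yaml
    cmd: python challenges/onset-detection/detect.py
    deps:
      - challenges/onset-detection/detect.py
      - data/raw
    params:
      - onset-detection   # key group read from params.yaml
    outs:
      - reports/onset-detection
```

On `dvc repro`, dvc hashes the declared `deps` and `params` and re-runs a stage only when one of them has changed, which is what makes the experiments reproducible from the versioned configuration and data.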