{"id":13812018,"url":"https://github.com/calclavia/DeepJ","last_synced_at":"2025-05-14T21:32:30.236Z","repository":{"id":45898096,"uuid":"79006723","full_name":"calclavia/DeepJ","owner":"calclavia","description":"A deep learning model for style-specific music generation.","archived":false,"fork":false,"pushed_at":"2018-09-30T18:28:01.000Z","size":660934,"stargazers_count":728,"open_issues_count":16,"forks_count":111,"subscribers_count":36,"default_branch":"icsc","last_synced_at":"2024-11-19T07:38:32.640Z","etag":null,"topics":["composition","deep","generation","keras","learning","machine","music","tensorflow"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/1801.00887","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/calclavia.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-01-15T06:43:17.000Z","updated_at":"2024-11-02T07:29:33.000Z","dependencies_parsed_at":"2022-08-31T21:25:01.061Z","dependency_job_id":null,"html_url":"https://github.com/calclavia/DeepJ","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/calclavia%2FDeepJ","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/calclavia%2FDeepJ/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/calclavia%2FDeepJ/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/calclavia%2FDeepJ/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/calclavia","download_url":"https://codeload.github.com/calclavia/DeepJ/tar.gz/ref
s/heads/icsc","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254231209,"owners_count":22036322,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["composition","deep","generation","keras","learning","machine","music","tensorflow"],"created_at":"2024-08-04T04:00:45.018Z","updated_at":"2025-05-14T21:32:25.226Z","avatar_url":"https://github.com/calclavia.png","language":"Python","readme":"# DeepJ: A model for style-specific music generation\nhttps://arxiv.org/abs/1801.00887\n\n## Abstract\nRecent advances in deep neural networks have enabled algorithms to compose music that is comparable to music composed by humans. However, few algorithms allow the user to generate music with tunable parameters. The ability to tune properties of generated music will yield more practical benefits for aiding artists, filmmakers, and composers in their creative tasks. In this paper, we introduce DeepJ - an end-to-end generative model that is capable of composing music conditioned on a specific mixture of composer styles. Our innovations include methods to learn musical style and music dynamics. We use our model to demonstrate a simple technique for controlling the style of generated music as a proof of concept. 
Evaluation of our model using human raters shows that we have improved over the Biaxial LSTM approach.\n\n## Requirements\n- Python 3.5\n\nFirst, clone and install Python MIDI (https://github.com/vishnubob/python-midi):\n```\ngit clone https://github.com/vishnubob/python-midi\ncd python-midi\npython3 setup.py install\n```\n\nThen install the remaining dependencies of this project:\n```\npip install -r requirements.txt\n```\n\nThe dataset is not provided in this repository. To train a custom model, you will need to include a MIDI dataset in the `data/` folder.\n\n## Usage\nTo train a new model, run the following command:\n```\npython train.py\n```\n\nTo generate music, run the following command:\n```\npython generate.py\n```\n\nUse the `--help` flag to see the available CLI arguments:\n```\npython generate.py --help\n```\n","funding_links":[],"categories":["GitHub projects","Python","🎶 Music","Table of Contents"],"sub_categories":["tools"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcalclavia%2FDeepJ","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcalclavia%2FDeepJ","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcalclavia%2FDeepJ/lists"}