{"id":13487056,"url":"https://github.com/pytorch/audio","last_synced_at":"2025-05-05T23:04:08.464Z","repository":{"id":37269693,"uuid":"90321822","full_name":"pytorch/audio","owner":"pytorch","description":"Data manipulation and transformation for audio signal processing, powered by PyTorch","archived":false,"fork":false,"pushed_at":"2025-04-30T11:34:37.000Z","size":1552462,"stargazers_count":2654,"open_issues_count":284,"forks_count":688,"subscribers_count":73,"default_branch":"main","last_synced_at":"2025-04-30T12:23:58.068Z","etag":null,"topics":["audio","audio-processing","io","machine-learning","python","pytorch","speech"],"latest_commit_sha":null,"homepage":"https://pytorch.org/audio","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-2-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/pytorch.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION","codeowners":"CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2017-05-05T00:38:05.000Z","updated_at":"2025-04-30T11:43:37.000Z","dependencies_parsed_at":"2023-09-23T13:17:03.235Z","dependency_job_id":"fa536c89-8d1f-432c-8b7b-c0a22c14e1fc","html_url":"https://github.com/pytorch/audio","commit_stats":{"total_commits":2296,"total_committers":256,"mean_commits":8.96875,"dds":0.6219512195121951,"last_synced_commit":"ba696ea3dfec4cbe693bf06a84c75dc196077f5b"},"previous_names":[],"tags_count":166,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Faudio","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Faudio/tags","rel
eases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Faudio/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Faudio/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/pytorch","download_url":"https://codeload.github.com/pytorch/audio/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252590574,"owners_count":21772936,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["audio","audio-processing","io","machine-learning","python","pytorch","speech"],"created_at":"2024-07-31T18:00:54.859Z","updated_at":"2025-05-05T23:04:08.457Z","avatar_url":"https://github.com/pytorch.png","language":"Python","readme":"torchaudio: an audio library for PyTorch\n========================================\n\n[![Documentation](https://img.shields.io/badge/dynamic/json.svg?label=docs\u0026url=https%3A%2F%2Fpypi.org%2Fpypi%2Ftorchaudio%2Fjson\u0026query=%24.info.version\u0026colorB=brightgreen\u0026prefix=v)](https://pytorch.org/audio/main/)\n[![Anaconda Badge](https://anaconda.org/pytorch/torchaudio/badges/downloads.svg)](https://anaconda.org/pytorch/torchaudio)\n[![Anaconda-Server Badge](https://anaconda.org/pytorch/torchaudio/badges/platforms.svg)](https://anaconda.org/pytorch/torchaudio)\n\n![TorchAudio Logo](docs/source/_static/img/logo.png)\n\n\u003e [!NOTE]\n\u003e **We are in the process of refactoring TorchAudio and transitioning it into a\n\u003e  maintenance phase. This process will include removing some user-facing\n\u003e  features. 
Our main goals are to reduce redundancies with the rest of the\n\u003e  PyTorch ecosystem, make it easier to maintain, and create a version of\n\u003e  TorchAudio that is more tightly scoped to its strengths: processing audio\n\u003e  data for ML. Please see\n\u003e  [our community message](https://github.com/pytorch/audio/issues/3902)\n\u003e  for more details.**\n\nThe aim of torchaudio is to apply [PyTorch](https://github.com/pytorch/pytorch) to\nthe audio domain. By supporting PyTorch, torchaudio follows the same philosophy\nof providing strong GPU acceleration, having a focus on trainable features through\nthe autograd system, and having consistent style (tensor names and dimension names).\nTherefore, it is primarily a machine learning library and not a general signal\nprocessing library. The benefits of PyTorch can be seen in torchaudio through\nhaving all the computations be through PyTorch operations which makes it easy\nto use and feel like a natural extension.\n\n- [Support audio I/O (Load files, Save files)](http://pytorch.org/audio/main/)\n  - Load a variety of audio formats, such as `wav`, `mp3`, `ogg`, `flac`, `opus`, `sphere`, into a torch Tensor using SoX\n  - [Kaldi (ark/scp)](http://pytorch.org/audio/main/kaldi_io.html)\n- [Dataloaders for common audio datasets](http://pytorch.org/audio/main/datasets.html)\n- Audio and speech processing functions\n  - [forced_align](https://pytorch.org/audio/main/generated/torchaudio.functional.forced_align.html)\n- Common audio transforms\n  - [Spectrogram, AmplitudeToDB, MelScale, MelSpectrogram, MFCC, MuLawEncoding, MuLawDecoding, Resample](http://pytorch.org/audio/main/transforms.html)\n- Compliance interfaces: Run code using PyTorch that aligns with other libraries\n  - [Kaldi: spectrogram, fbank, mfcc](https://pytorch.org/audio/main/compliance.kaldi.html)\n\nInstallation\n------------\n\nPlease refer to https://pytorch.org/audio/main/installation.html for the installation and build process of TorchAudio.\n\n\nAPI Reference\n-------------\n\nThe API Reference is located here: http://pytorch.org/audio/main/\n\nContributing Guidelines\n-----------------------\n\nPlease refer to [CONTRIBUTING.md](./CONTRIBUTING.md)\n\nCitation\n--------\n\nIf you find this package useful, please cite as:\n\n```bibtex\n@article{yang2021torchaudio,\n  title={TorchAudio: Building Blocks for Audio and Speech Processing},\n  author={Yao-Yuan Yang and Moto Hira and Zhaoheng Ni and Anjali Chourdia and Artyom Astafurov and Caroline Chen and Ching-Feng Yeh and Christian Puhrsch and David Pollack and Dmitriy Genzel and Donny Greenberg and Edward Z. Yang and Jason Lian and Jay Mahadeokar and Jeff Hwang and Ji Chen and Peter Goldsborough and Prabhat Roy and Sean Narenthiran and Shinji Watanabe and Soumith Chintala and Vincent Quenneville-Bélair and Yangyang Shi},\n  journal={arXiv preprint arXiv:2110.15018},\n  year={2021}\n}\n```\n\n```bibtex\n@misc{hwang2023torchaudio,\n      title={TorchAudio 2.1: Advancing speech recognition, self-supervised learning, and audio processing components for PyTorch}, \n      author={Jeff Hwang and Moto Hira and Caroline Chen and Xiaohui Zhang and Zhaoheng Ni and Guangzhi Sun and Pingchuan Ma and Ruizhe Huang and Vineel Pratap and Yuekai Zhang and Anurag Kumar and Chin-Yun Yu and Chuang Zhu and Chunxi Liu and Jacob Kahn and Mirco Ravanelli and Peng Sun and Shinji Watanabe and Yangyang Shi and Yumeng Tao and Robin Scheibler and Samuele Cornell and Sean Kim and Stavros Petridis},\n      year={2023},\n      eprint={2310.17864},\n      archivePrefix={arXiv},\n      primaryClass={eess.AS}\n}\n```\n\nDisclaimer on Datasets\n----------------------\n\nThis is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.\n\nIf you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!\n\nPre-trained Model License\n-------------------------\n\nThe pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.\n\nFor instance, the SquimSubjective model is released under the Creative Commons Attribution Non Commercial 4.0 International (CC-BY-NC 4.0) license. See [the link](https://zenodo.org/record/4660670#.ZBtWPOxuerN) for additional details.\n\nOther pre-trained models that have different licenses are noted in the documentation. Please check out the [documentation page](https://pytorch.org/audio/main/).\n","funding_links":[],"categories":["The Data Science Toolbox","Libraries","Python","Computer Audition","音频处理","Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","Open source projects","Deep Learning Framework","Pytorch \u0026 related libraries","Audio","Deep Learning","Hub / Database / Library","📚 فهرست","Audio Processing \u0026 I/O","Audio Related Packages","Feature Extraction"],"sub_categories":["Deep Learning Packages","Data Transformation and Manipulation","Others","NLP \u0026 Speech Processing｜自然语言处理 \u0026 语音处理:","High-Level DL APIs","NLP \u0026 Speech Processing:","PyTorch","Open Source","کار با فایل های صوتی","Audio"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpytorch%2Faudio","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fpytorch%2Faudio","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpytorch%2Faudio/lists"}