{"id":13715833,"url":"https://github.com/awni/speech","last_synced_at":"2025-04-13T00:48:22.244Z","repository":{"id":40987167,"uuid":"93684694","full_name":"awni/speech","owner":"awni","description":"A PyTorch Implementation of End-to-End Models for Speech-to-Text","archived":false,"fork":false,"pushed_at":"2023-07-06T21:09:27.000Z","size":227,"stargazers_count":758,"open_issues_count":24,"forks_count":177,"subscribers_count":30,"default_branch":"master","last_synced_at":"2025-04-13T00:48:16.950Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/awni.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2017-06-07T22:23:03.000Z","updated_at":"2025-03-10T12:13:12.000Z","dependencies_parsed_at":"2023-01-19T23:35:00.322Z","dependency_job_id":"41fcf6a8-aca4-45e0-9028-70f5fa87a58c","html_url":"https://github.com/awni/speech","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awni%2Fspeech","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awni%2Fspeech/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awni%2Fspeech/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awni%2Fspeech/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/awni","download_url":"https://codeload.github.com/awni/speech/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_c
ount":248650437,"owners_count":21139672,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-03T00:01:04.055Z","updated_at":"2025-04-13T00:48:22.223Z","avatar_url":"https://github.com/awni.png","language":"Python","readme":"# speech\n\nSpeech is an open-source package for building end-to-end models for automatic\nspeech recognition. Sequence-to-sequence models with attention,\nConnectionist Temporal Classification, and the RNN Sequence Transducer\nare currently supported.\n\nThe goal of this software is to facilitate research in end-to-end models for\nspeech recognition. The models are implemented in PyTorch.\n\nThe software has only been tested with Python 3.6. 
\n\n**We will not be providing backward compatibility for Python 2.7.**\n\n## Install\n\nWe recommend creating a virtual environment and installing the Python\nrequirements there.\n\n```\nvirtualenv \u003cpath_to_your_env\u003e\nsource \u003cpath_to_your_env\u003e/bin/activate\npip install -r requirements.txt\n```\n\nThen follow the installation instructions for a version of\n[PyTorch](http://pytorch.org/) that works for your machine.\n\nAfter all the Python requirements are installed, run the following from the\ntop-level directory:\n\n```\nmake\n```\n\nThe build process requires both CMake and Make.\n\nAfter that, source `setup.sh` from the repo root.\n\n```\nsource setup.sh\n```\n\nConsider adding this to your `.bashrc`.\n\nYou can verify the install was successful by running the\ntests from the `tests` directory.\n\n```\ncd tests\npytest\n```\n\n## Run\n\nTo train a model, run\n\n```\npython train.py \u003cpath_to_config\u003e\n```\n\nAfter the model is done training, you can evaluate it with\n\n```\npython eval.py \u003cpath_to_model\u003e \u003cpath_to_data_json\u003e\n```\n\nTo see the available options for each script, use `-h`:\n\n```\npython {train, eval}.py -h\n```\n\n## Examples\n\nFor examples of model configurations and datasets, visit the `examples`\ndirectory. Each example dataset should have instructions and/or scripts for\ndownloading and preparing the data. There should also be one or more model\nconfigurations available. 
The results for each configuration will be documented in\neach example's corresponding `README.md`.\n","funding_links":[],"categories":["Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","Pytorch \u0026 related libraries"],"sub_categories":["NLP \u0026 Speech Processing｜自然语言处理 \u0026 语音处理:","NLP \u0026 Speech Processing:"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fawni%2Fspeech","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fawni%2Fspeech","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fawni%2Fspeech/lists"}