{"id":13701647,"url":"https://github.com/espnet/interspeech2019-tutorial","last_synced_at":"2025-07-11T05:03:06.176Z","repository":{"id":80353577,"uuid":"197606368","full_name":"espnet/interspeech2019-tutorial","owner":"espnet","description":"INTERSPEECH 2019 Tutorial Materials","archived":false,"fork":false,"pushed_at":"2021-03-30T03:18:10.000Z","size":9546,"stargazers_count":193,"open_issues_count":0,"forks_count":38,"subscribers_count":13,"default_branch":"master","last_synced_at":"2025-05-28T11:08:01.815Z","etag":null,"topics":["interspeech2019","speech-recognition","text-to-speech","tutorial"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/espnet.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2019-07-18T14:50:07.000Z","updated_at":"2024-07-11T18:13:42.000Z","dependencies_parsed_at":"2023-06-18T19:50:55.052Z","dependency_job_id":null,"html_url":"https://github.com/espnet/interspeech2019-tutorial","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/espnet/interspeech2019-tutorial","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/espnet%2Finterspeech2019-tutorial","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/espnet%2Finterspeech2019-tutorial/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/espnet%2Finterspeech2019-tutorial/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/espnet%2Finterspeech2019-tutorial/manifests","owner_url":"https://repos.ecosyste.ms/a
pi/v1/hosts/GitHub/owners/espnet","download_url":"https://codeload.github.com/espnet/interspeech2019-tutorial/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/espnet%2Finterspeech2019-tutorial/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":264734316,"owners_count":23655640,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["interspeech2019","speech-recognition","text-to-speech","tutorial"],"created_at":"2024-08-02T20:01:53.379Z","updated_at":"2025-07-11T05:03:06.158Z","avatar_url":"https://github.com/espnet.png","language":"Jupyter Notebook","readme":"# Advanced methods for neural end-to-end speech processing – unification, integration, and implementation, INTERSPEECH2019 Tutorial (T6)\n\nThis repository provides the materials for the INTERSPEECH 2019 Tutorial [Advanced methods for neural end-to-end speech processing – unification, integration, and implementation](https://www.interspeech2019.org/program/tutorials/).\n\n## Hands-on materials\n\n1. TTS demonstration \u003ca href=\"https://colab.research.google.com/github/espnet/interspeech2019-tutorial/blob/master/notebooks/interspeech2019_tts/interspeech2019_tts.ipynb\" target=\"_blank\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\"\u003e\u003c/a\u003e\n2. 
ASR demonstration \u003ca href=\"https://colab.research.google.com/github/espnet/interspeech2019-tutorial/blob/master/notebooks/interspeech2019_asr/interspeech2019_asr.ipynb\" target=\"_blank\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\"\u003e\u003c/a\u003e\n\n## Questionnaire\nhttps://forms.gle/RhUaU5437sx5dsmAA\n\n## Slides\n\n1. [Lecture](https://drive.google.com/open?id=1YRwQ9S2PmRCp5WBcNufAhcqIKtmREkfn)\n2. [TTS demonstration](https://nbviewer.jupyter.org/format/slides/github/espnet/interspeech2019-tutorial/blob/master/notebooks/interspeech2019_tts/interspeech2019_tts.ipynb)\n3. [ASR demonstration](https://nbviewer.jupyter.org/format/slides/github/espnet/interspeech2019-tutorial/blob/master/notebooks/interspeech2019_asr/interspeech2019_asr.ipynb)\n\n\n## Organizers\n\n- Takaaki Hori (Mitsubishi Electric Research Laboratories)\n- Shinji Watanabe (Johns Hopkins University)\n- Shigeki Karita (NTT Communication Science Laboratories)\n- Tomoki Hayashi (Nagoya University / Human Dataware Lab. Co., Ltd.)\n","funding_links":[],"categories":["Jupyter Notebook"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fespnet%2Finterspeech2019-tutorial","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fespnet%2Finterspeech2019-tutorial","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fespnet%2Finterspeech2019-tutorial/lists"}