https://github.com/jrmeyer/interspeech-2018
I submitted this paper to Interspeech 2018. The paper was not accepted. The reviewer comments are included in the repo.
- Host: GitHub
- URL: https://github.com/jrmeyer/interspeech-2018
- Owner: JRMeyer
- Created: 2018-03-02T17:49:41.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2018-06-22T15:15:28.000Z (almost 8 years ago)
- Last Synced: 2025-02-08T09:11:13.765Z (about 1 year ago)
- Topics: interspeech2018, kaldi, multi-task-learning, rejection
- Language: TeX
- Homepage:
- Size: 3.44 MB
- Stars: 1
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Multilingual Multi-Task Learning for Low-Resource Languages
## Abstract
This study investigates low-resource multilingual acoustic model training with Multi-Task Learning (MTL) for Automatic Speech Recognition. The central research question is: *What is the best way to represent a source language with MTL to improve performance on the target language?* The two parameters of interest are (1) the level of detail at which the source language is modeled, and (2) the relative weighting of the source vs. target languages during backpropagation.
Results show that when the source task is weighted *higher* than the target task, a *more* detailed source task representation (i.e. the triphone) leads to better performance on the target language. Conversely, when the source task is weighted *lower*, a *less* detailed source task representation (i.e. the monophone) performs better on the target language. Across all levels of source-task detail, a 1-to-1 source-to-target weighting ratio yields the best results on average.
This study uses Kyrgyz (audiobook recordings) as a target language and English (LibriSpeech subset) as a source language.
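The weighting scheme described above can be sketched in a few lines. The following is a hypothetical NumPy illustration of a shared network with two task-specific heads and a weighted combined loss, not the paper's actual Kaldi recipe; all dimensions, label values, and variable names are invented for illustration.

```python
import numpy as np

# Hypothetical MTL sketch (NOT the paper's Kaldi setup): a shared hidden
# layer feeds two task-specific softmax heads, and the training loss is a
# weighted sum of the target-language and source-language task losses.

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-probability of the correct class.
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

# Toy shared representation and two heads (all sizes are illustrative).
x = rng.normal(size=(4, 8))            # batch of 4 acoustic feature vectors
W_shared = rng.normal(size=(8, 16))    # shared hidden layer
W_target = rng.normal(size=(16, 5))    # e.g. target-language (Kyrgyz) states
W_source = rng.normal(size=(16, 10))   # e.g. source-language (English) states

h = np.tanh(x @ W_shared)
target_loss = cross_entropy(softmax(h @ W_target), np.array([0, 1, 2, 3]))
source_loss = cross_entropy(softmax(h @ W_source), np.array([4, 5, 6, 7]))

# The source-vs-target weighting ratio is the quantity the paper varies;
# a 1-to-1 ratio was found to work best on average.
w_target, w_source = 1.0, 1.0
total_loss = w_target * target_loss + w_source * source_loss
```

Because the heads share `W_shared`, gradients from both tasks flow into the same representation during backpropagation, which is how the source-task weight influences learning on the target language.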
## Reviewer Comments
You can find the Interspeech Committee's comments in the `REVIEWER_COMMENTS.txt` file.