https://github.com/will-rice/asr-papers
Place to add Automatic Speech Recognition (ASR) and Speech-to-Text (STT) papers.
- Host: GitHub
- URL: https://github.com/will-rice/asr-papers
- Owner: will-rice
- License: apache-2.0
- Created: 2021-08-11T21:16:33.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2021-08-23T21:55:58.000Z (almost 4 years ago)
- Last Synced: 2025-01-29T18:13:20.993Z (4 months ago)
- Size: 6.84 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# ASR Papers
## 2021
### [MixSpeech: Data Augmentation for Low-resource Automatic Speech Recognition](https://arxiv.org/abs/2102.12664v1)
#### Notes
***
### [Thank you for Attention: A survey on Attention-based Artificial Neural Networks for Automatic Speech Recognition](https://arxiv.org/abs/2102.07259v1)
#### Notes
***
## 2020
### [Enhancing Monotonic Multihead Attention for Streaming ASR](https://arxiv.org/abs/2005.09394)
#### Notes
***
### [A Better and Faster End-to-End Model for Streaming ASR](https://arxiv.org/abs/2011.10798)
***
### [A review of on-device fully neural end-to-end automatic speech recognition algorithms](https://arxiv.org/abs/2012.07974)
***
## 2019
### [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779)
#### Abstract
We present SpecAugment, a simple data augmentation method for speech recognition. SpecAugment is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients). The augmentation policy consists of warping the features, masking blocks of frequency channels, and masking blocks of time steps. We apply SpecAugment on Listen, Attend and Spell networks for end-to-end speech recognition tasks. We achieve state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work. On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language model, and 5.8% WER with shallow fusion with a language model. This compares to the previous state-of-the-art hybrid system of 7.5% WER. For Switchboard, we achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set without the use of a language model, and 6.8%/14.1% with shallow fusion, which compares to the previous state-of-the-art hybrid system at 8.3%/17.3% WER.
#### Notes
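The masking policy described in the abstract (masking blocks of frequency channels and blocks of time steps) is simple to sketch. Below is a minimal, illustrative NumPy implementation of those two masking steps, assuming a log-mel spectrogram of shape `(time_steps, mel_bins)`; the mask counts and widths are placeholder values rather than the paper's published policies, and time warping is omitted.

```python
import numpy as np

def spec_augment(log_mel, num_freq_masks=2, freq_mask_width=27,
                 num_time_masks=2, time_mask_width=100, rng=None):
    """Frequency and time masking in the spirit of SpecAugment.

    `log_mel` is assumed to have shape (time_steps, mel_bins). The mask
    counts and widths are illustrative defaults, not the paper's policies.
    """
    rng = rng or np.random.default_rng()
    augmented = log_mel.copy()
    time_steps, mel_bins = augmented.shape

    # Zero out blocks of consecutive frequency channels.
    for _ in range(num_freq_masks):
        width = int(rng.integers(0, freq_mask_width + 1))
        start = int(rng.integers(0, max(mel_bins - width, 1)))
        augmented[:, start:start + width] = 0.0

    # Zero out blocks of consecutive time steps.
    for _ in range(num_time_masks):
        width = int(rng.integers(0, min(time_mask_width, time_steps) + 1))
        start = int(rng.integers(0, max(time_steps - width, 1)))
        augmented[start:start + width, :] = 0.0

    return augmented

# Example usage on placeholder 80-dim log-mel features.
features = np.random.randn(1000, 80)
augmented = spec_augment(features)
```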
***
## 2018
### [Streaming End-to-End Speech Recognition for Mobile Devices](https://arxiv.org/pdf/1811.06621.pdf)
***
## 2017
***
## 2016 and before
***
## 2012
### [Sequence Transduction with Recurrent Neural Networks](https://arxiv.org/abs/1211.3711)
***