Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/charlesliucn/awesome-end2end-speech-recognition
💬 A list of End-to-End speech recognition, including papers, codes and other materials
List: awesome-end2end-speech-recognition
- Host: GitHub
- URL: https://github.com/charlesliucn/awesome-end2end-speech-recognition
- Owner: charlesliucn
- License: mit
- Created: 2019-03-13T08:11:56.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2019-04-14T12:34:47.000Z (over 5 years ago)
- Last Synced: 2024-07-29T12:13:27.385Z (3 months ago)
- Topics: awesome-list, code, curated-list, end-to-end-speech-recognition, papers, speech-recognition, toolkits
- Homepage:
- Size: 13.7 KB
- Stars: 53
- Watchers: 5
- Forks: 13
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-Paper-List - End2End Speech Recognition
README
# Awesome End-to-End Speech Recognition
(Still Updating)
This repository contains a curated list of End-to-End speech recognition resources, including papers, code, toolkits and other materials.

## Table of Contents
### [1. Papers](#Papers)
- [1.1 Highly Recommended Papers](#recommended)
- [1.2 All Paper List](#paperlist)
+ [2019](#2019)
+ [2018](#2018)
+ [2017](#2017)
+ [2016](#2016)
+ [2015](#2015)
+ [2014](#2014)

### [2. Toolkits](#toolkits)
* * *
## 1. Papers
### 1.1 Highly Recommended Papers

- Graves A, Fernández S, Gomez F, et al. **Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. Proceedings of the 23rd international conference on Machine learning. ACM, 2006: 369-376.** [[pdf]](https://mediatum.ub.tum.de/doc/1292048/file.pdf)
- Graves A, Mohamed A, Hinton G. **Speech recognition with deep recurrent neural networks. 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013: 6645-6649.** [[pdf]](https://arxiv.org/pdf/1303.5778.pdf)
- Graves A, Jaitly N. **Towards end-to-end speech recognition with recurrent neural networks. International Conference on Machine Learning. 2014: 1764-1772.** [[pdf]](http://proceedings.mlr.press/v32/graves14.pdf)
- Awni Y. Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, Andrew Y. Ng: **Deep Speech: Scaling up end-to-end speech recognition. CoRR abs/1412.5567 (2014)** [[pdf]](https://arxiv.org/pdf/1412.5567.pdf)[[code]](https://github.com/search?o=desc&q=DeepSpeech&s=&type=Repositories)
- Amodei D, Ananthanarayanan S, Anubhai R, et al. **Deep speech 2: End-to-end speech recognition in english and mandarin. International conference on machine learning. 2016: 173-182.** [[pdf]](http://proceedings.mlr.press/v48/amodei16.pdf)[[code]](https://github.com/search?o=desc&q=DeepSpeech2&s=stars&type=Repositories)
- Bahdanau D, Chorowski J, Serdyuk D, et al. **End-to-end attention-based large vocabulary speech recognition. 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016: 4945-4949.** [[pdf]](https://arxiv.org/pdf/1508.04395.pdf)
- Zhang Y, Chan W, Jaitly N. **Very deep convolutional networks for end-to-end speech recognition. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017: 4845-4849.** [[pdf]](https://arxiv.org/pdf/1610.03022)
- Zhang Y, Pezeshki M, Brakel P, et al. **Towards end-to-end speech recognition with deep convolutional neural networks[J]. arXiv preprint arXiv:1701.02720, 2017.** [[pdf]](https://arxiv.org/pdf/1701.02720)
- Kim S, Hori T, Watanabe S. **Joint CTC-attention based end-to-end speech recognition using multi-task learning. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017: 4835-4839.** [[pdf]](https://arxiv.org/pdf/1609.06773.pdf)
- Hori T, Watanabe S, Zhang Y, et al. **Advances in joint CTC-attention based end-to-end speech recognition with a deep CNN encoder and RNN-LM[J]. arXiv preprint arXiv:1706.02737, 2017.** [[pdf]](https://arxiv.org/pdf/1706.02737)
- Rao K, Sak H, Prabhavalkar R. **Exploring architectures, data and units for streaming end-to-end speech recognition with RNN-transducer. 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2017: 193-199.** [[pdf]](https://arxiv.org/pdf/1801.00841)
- Chiu C C, Sainath T N, Wu Y, et al. **State-of-the-art speech recognition with sequence-to-sequence models[C]//2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018: 4774-4778.** [[pdf]](https://arxiv.org/pdf/1712.01769)
- Neil Zeghidour, Nicolas Usunier, Gabriel Synnaeve, Ronan Collobert, Emmanuel Dupoux: **End-to-End Speech Recognition From the Raw Waveform. CoRR abs/1806.07098 (2018)** [[pdf]](https://arxiv.org/pdf/1806.07098)
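Many of the recommended papers above (Graves et al. 2006 onward) rest on CTC, whose simplest decoding rule is best-path (greedy) decoding: take the per-frame argmax labels, collapse consecutive repeats, then drop blanks. A minimal stdlib sketch of that rule, not taken from any of the listed codebases (the `VOCAB` inventory here is a hypothetical example):

```python
from itertools import groupby

# Hypothetical toy label inventory; by CTC convention index 0 is the blank.
VOCAB = ["<blank>", "a", "b", "c"]

def ctc_greedy_decode(frame_labels, blank=0):
    """Best-path CTC decoding: collapse repeated frame labels, then drop blanks."""
    collapsed = [label for label, _ in groupby(frame_labels)]
    return [label for label in collapsed if label != blank]

# Per-frame argmax sequence "a a - - a b - b": repeats merge, blanks separate.
path = [1, 1, 0, 0, 1, 2, 0, 2]
decoded = ctc_greedy_decode(path)
print("".join(VOCAB[i] for i in decoded))  # prints "aabb"
```

Note how the blank between the two `b` frames is what keeps them from being merged into one; this many-to-one path-to-label mapping is exactly what the CTC loss marginalizes over during training.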
#### 2019
- Dario Bertero, Onno Kampman, Pascale Fung: **Towards Universal End-to-End Affect Recognition from Multilingual Speech by ConvNets. CoRR abs/1901.06486 (2019)**
- Yiming Wang, Xing Fan, I-Fan Chen, Yuzong Liu, Tongfei Chen, Björn Hoffmeister: **End-to-end Anchored Speech Recognition. CoRR abs/1902.02383 (2019)**
- Jinxi Guo, Tara N. Sainath, Ron J. Weiss: **A spelling correction model for end-to-end speech recognition. CoRR abs/1902.07178 (2019)**
- Egor Lakomkin, Mohammad-Ali Zamani, Cornelius Weber, Sven Magg, Stefan Wermter: **Incorporating End-to-End Speech Recognition Models for Sentiment Analysis. CoRR abs/1902.11245 (2019)**
- Dimitri Palaz, Mathew Magimai-Doss, Ronan Collobert: **End-to-end acoustic modeling using convolutional neural networks for HMM-based automatic speech recognition. Speech Communication 108: 15-32 (2019)**
- Chongchong Yu, Yunbing Chen, Yueqiao Li, Meng Kang, Shixuan Xu, Xueer Liu: **Cross-Language End-to-End Speech Recognition Research Based on Transfer Learning for the Low-Resource Tujia Language. Symmetry 11(2): 179 (2019)**
- Yangyang Shi, Mei-Yuh Hwang, Xin Lei: **End-To-End Speech Recognition Using A High Rank LSTM-CTC Based Model. CoRR abs/1903.05261 (2019)**
#### 2018
- Jian Kang, Wei-Qiang Zhang, Wei-Wei Liu, Jia Liu, Michael T. Johnson: **Lattice Based Transcription Loss for End-to-End Speech Recognition. Signal Processing Systems 90(7): 1013-1023 (2018)**
- Hiroshi Seki, Takaaki Hori, Shinji Watanabe, Jonathan Le Roux, John R. Hershey: **A Purely End-to-End System for Multi-speaker Speech Recognition. ACL (1) 2018: 2620-2630**
- Leyuan Qu, Cornelius Weber, Egor Lakomkin, Johannes Twiefel, Stefan Wermter: **Combining Articulatory Features with End-to-End Learning in Speech Recognition. ICANN (3) 2018: 500-510**
- Hirofumi Inaguma, Masato Mimura, Koji Inoue, Kazuyoshi Yoshii, Tatsuya Kawahara: **An End-to-End Approach to Joint Social Signal Detection and Automatic Speech Recognition. ICASSP 2018: 6214-6218**
- Shigeki Karita, Atsunori Ogawa, Marc Delcroix, Tomohiro Nakatani: **Sequence Training of Encoder-Decoder Model Using Policy Gradient for End-to-End Speech Recognition. ICASSP 2018: 5839-5843**
- Suyoun Kim, Michael L. Seltzer: **Towards Language-Universal End-to-End Speech Recognition. ICASSP 2018: 4914-4918**
- Tsubasa Ochiai, Shinji Watanabe, Shigeru Katagiri, Takaaki Hori, John R. Hershey: **Speaker Adaptation for Multichannel End-to-End Speech Recognition. ICASSP 2018: 6707-6711**
- Shruti Palaskar, Ramon Sanabria, Florian Metze: **End-to-end Multimodal Speech Recognition. ICASSP 2018: 5774-5778**
- Stavros Petridis, Themos Stafylakis, Pingchuan Ma, Feipeng Cai, Georgios Tzimiropoulos, Maja Pantic: **End-to-End Audiovisual Speech Recognition. ICASSP 2018: 6548-6552**
- Shane Settle, Jonathan Le Roux, Takaaki Hori, Shinji Watanabe, John R. Hershey: **End-to-End Multi-Speaker Speech Recognition. ICASSP 2018: 4819-4823**
- Changhao Shan, Junbo Zhang, Yujun Wang, Lei Xie: **Attention-Based End-to-End Speech Recognition on Voice Search. ICASSP 2018: 4764-4768**
- Shubham Toshniwal, Tara N. Sainath, Ron J. Weiss, Bo Li, Pedro J. Moreno, Eugene Weinstein, Kanishka Rao: **Multilingual Speech Recognition with a Single End-to-End Model. ICASSP 2018: 4904-4908**
- Panagiotis Tzirakis, Jiehao Zhang, Björn W. Schuller: **End-to-End Speech Emotion Recognition Using Deep Neural Networks. ICASSP 2018: 5089-5093**
- Yingbo Zhou, Caiming Xiong, Richard Socher: **Improving End-to-End Speech Recognition with Policy Learning. ICASSP 2018: 5819-5823**
- Yunhao Yan, Qinmengying Yan, Guang Hua, Haijian Zhang: **Information Distance Based Self-Attention-BGRU Layer for End-to-End Speech Recognition. DSL 2018: 1-5**
- Zhichao Peng, Zhi Zhu, Masashi Unoki, Jianwu Dang, Masato Akagi: **Auditory-Inspired End-to-End Speech Emotion Recognition Using 3D Convolutional Recurrent Neural Networks Based on Spectral-Temporal Representation. ICME 2018: 1-6**
- Benjamin Sertolli, Nicholas Cummins, Abdulkadir Sengür, Björn W. Schuller: **Deep End-to-End Representation Learning for Food Type Recognition from Speech. ICMI 2018: 574-578**
- Stefan Braun, Daniel Neil, Jithendar Anumula, Enea Ceolini, Shih-Chii Liu: **Multi-channel Attention for End-to-End Speech Recognition. Interspeech 2018: 17-21**
- Jaesung Bae, Dae-Shik Kim: **End-to-End Speech Command Recognition with Capsule Network. Interspeech 2018: 776-780**
- Linhao Dong, Shiyu Zhou, Wei Chen, Bo Xu: **Extending Recurrent Neural Aligner for Streaming End-to-End Speech Recognition in Mandarin. Interspeech 2018: 816-820**
- Hossein Hadian, Hossein Sameti, Daniel Povey, Sanjeev Khudanpur: **End-to-end Speech Recognition Using Lattice-free MMI. Interspeech 2018: 12-16**
- Tomoki Hayashi, Shinji Watanabe, Tomoki Toda, Kazuya Takeda: **Multi-Head Decoder for End-to-End Speech Recognition. Interspeech 2018: 801-805**
- Shigeki Karita, Shinji Watanabe, Tomoharu Iwata, Atsunori Ogawa, Marc Delcroix: **Semi-Supervised End-to-End Speech Recognition. Interspeech 2018: 2-6**
- Suyoun Kim, Michael L. Seltzer, Jinyu Li, Rui Zhao: **Improved Training for Online End-to-end Speech Recognition Systems. Interspeech 2018: 2913-2917**
- Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linarès, Renato de Mori, Yoshua Bengio: **Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition. Interspeech 2018: 22-26**
- Dengke Tang, Junlin Zeng, Ming Li: **An End-to-End Deep Learning Framework for Speech Emotion Recognition of Atypical Individuals. Interspeech 2018: 162-166**
- Chao Weng, Jia Cui, Guangsen Wang, Jun Wang, Chengzhu Yu, Dan Su, Dong Yu: **Improving Attention Based Sequence-to-Sequence Models for End-to-End English Conversational Speech Recognition. Interspeech 2018: 761-765**
- Ian Williams, Anjuli Kannan, Petar S. Aleksic, David Rybach, Tara N. Sainath: **Contextual Speech Recognition in End-to-end Neural Network Systems Using Beam Search. Interspeech 2018: 2227-2231**
- Neil Zeghidour, Nicolas Usunier, Gabriel Synnaeve, Ronan Collobert, Emmanuel Dupoux: **End-to-End Speech Recognition from the Raw Waveform. Interspeech 2018: 781-785**
- Albert Zeyer, Kazuki Irie, Ralf Schlüter, Hermann Ney: **Improved Training of End-to-end Attention Models for Speech Recognition. Interspeech 2018: 7-11**
- Yoonho Boo, Jinhwan Park, Lukas Lee, Wonyong Sung: **On-Device End-to-end Speech Recognition with Multi-Step Parallel Rnns. SLT 2018: 376-381**
- Jennifer Drexler, James Glass: **Combining End-to-End and Adversarial Training for Low-Resource Speech Recognition. SLT 2018: 361-368**
- Takaaki Hori, Jaejin Cho, Shinji Watanabe: **End-to-end Speech Recognition With Word-Based Rnn Language Models. SLT 2018: 389-396**
- Suyoun Kim, Florian Metze: **Dialog-Context Aware end-to-end Speech Recognition. SLT 2018: 434-440**
- Gakuto Kurata, Kartik Audhkhasi: **Improved Knowledge Distillation from Bi-Directional to Uni-Directional LSTM CTC for End-to-End Speech Recognition. SLT 2018: 411-417**
- Da-Rong Liu, Chi-Yu Yang, Szu-Lin Wu, Hung-Yi Lee: **Improving Unsupervised Style Transfer in end-to-end Speech Synthesis with end-to-end Speech Recognition. SLT 2018: 640-647**
- Golan Pundak, Tara N. Sainath, Rohit Prabhavalkar, Anjuli Kannan, Ding Zhao: **Deep Context: End-to-end Contextual Speech Recognition. SLT 2018: 418-425**
- Lahiru Samarakoon, Brian Mak, Albert Y. S. Lam: **Domain Adaptation of End-to-end Speech Recognition in Low-Resource Settings. SLT 2018: 382-388**
- Tzu-Hsuan Ting, Chia-Ping Chen: **Combining De-noising Auto-encoder and Recurrent Neural Networks in End-to-End Automatic Speech Recognition for Noise Robustness. SLT 2018: 405-410**
- Vladimir Bataev, Maxim Korenevsky, Ivan Medennikov, Alexander Zatvornitskiy: **Exploring End-to-End Techniques for Low-Resource Speech Recognition. SPECOM 2018: 32-41**
- Nikita Markovnikov, Irina S. Kipyatkova, Elena E. Lyakso: **End-to-End Speech Recognition in Russian. SPECOM 2018: 377-386**
- Kanishka Rao, Hasim Sak, Rohit Prabhavalkar: **Exploring Architectures, Data and Units For Streaming End-to-End Speech Recognition with RNN-Transducer. CoRR abs/1801.00841 (2018)**
- Stavros Petridis, Themos Stafylakis, Pingchuan Ma, Feipeng Cai, Georgios Tzimiropoulos, Maja Pantic: **End-to-end Audiovisual Speech Recognition. CoRR abs/1802.06424 (2018)**
- Tomoki Hayashi, Shinji Watanabe, Tomoki Toda, Kazuya Takeda: **Multi-Head Decoder for End-to-End Speech Recognition. CoRR abs/1804.08050 (2018)**
- Shruti Palaskar, Ramon Sanabria, Florian Metze: **End-to-End Multimodal Speech Recognition. CoRR abs/1804.09713 (2018)**
- Albert Zeyer, Kazuki Irie, Ralf Schlüter, Hermann Ney: **Improved training of end-to-end attention models for speech recognition. CoRR abs/1805.03294 (2018)**
- Wei Zou, Dongwei Jiang, Shuaijiang Zhao, Xiangang Li: **A comparable study of modeling units for end-to-end Mandarin speech recognition. CoRR abs/1805.03832 (2018)**
- Hiroshi Seki, Takaaki Hori, Shinji Watanabe, Jonathan Le Roux, John R. Hershey: **A Purely End-to-end System for Multi-speaker Speech Recognition. CoRR abs/1805.05826 (2018)**
- Shiyu Zhou, Shuang Xu, Bo Xu: **Multilingual End-to-End Speech Recognition with A Single Transformer on Low-Resource Languages. CoRR abs/1806.05059 (2018)**
- Linhao Dong, Shiyu Zhou, Wei Chen, Bo Xu: **Extending Recurrent Neural Aligner for Streaming End-to-End Speech Recognition in Mandarin. CoRR abs/1806.06342 (2018)**
- Neil Zeghidour, Nicolas Usunier, Gabriel Synnaeve, Ronan Collobert, Emmanuel Dupoux: **End-to-End Speech Recognition From the Raw Waveform. CoRR abs/1806.07098 (2018)**
- Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linarès, Renato De Mori, Yoshua Bengio: **Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition. CoRR abs/1806.07789 (2018)**
- Vladimir Bataev, Maxim Korenevsky, Ivan Medennikov, Alexander Zatvornitskiy: **Exploring End-to-End Techniques for Low-Resource Speech Recognition. CoRR abs/1807.00868 (2018)**
- Zhangyu Xiao, Zhijian Ou, Wei Chu, Hui Lin: **Hybrid CTC-Attention based End-to-End Speech Recognition using Subword Units. CoRR abs/1807.04978 (2018)**
- Suyoun Kim, Florian Metze: **Dialog-context aware end-to-end speech recognition. CoRR abs/1808.02171 (2018)**
- Golan Pundak, Tara N. Sainath, Rohit Prabhavalkar, Anjuli Kannan, Ding Zhao: **Deep context: end-to-end contextual speech recognition. CoRR abs/1808.02480 (2018)**
- Takaaki Hori, Jaejin Cho, Shinji Watanabe: **End-to-end Speech Recognition with Word-based RNN Language Models. CoRR abs/1808.02608 (2018)**
- Xinpei Zhou, Jiwei Li, Xi Zhou: **Cascaded CNN-resBiLSTM-CTC: An End-to-End Acoustic Model For Speech Recognition. CoRR abs/1810.12001 (2018)**
- Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, Pascale Fung: **Towards End-to-end Automatic Code-Switching Speech Recognition. CoRR abs/1810.12620 (2018)**
- Ne Luo, Dongwei Jiang, Shuaijiang Zhao, Caixia Gong, Wei Zou, Xiangang Li: **Towards End-to-End Code-Switching Speech Recognition. CoRR abs/1810.13091 (2018)**
- Zhiping Zeng, Yerbolat Khassanov, Van Tung Pham, Haihua Xu, Eng Siong Chng, Haizhou Li: **On the End-to-End Solution to Mandarin-English Code-switching Speech Recognition. CoRR abs/1811.00241 (2018)**
- Alexander H. Liu, Hung-yi Lee, Lin-Shan Lee: **Adversarial Training of End-to-end Speech Recognition Using a Criticizing Language Model. CoRR abs/1811.00787 (2018)**
- Takaaki Hori, Ramón Fernández Astudillo, Tomoki Hayashi, Yu Zhang, Shinji Watanabe, Jonathan Le Roux: **Cycle-consistency training for end-to-end speech recognition. CoRR abs/1811.01690 (2018)**
- Nelson Yalta, Shinji Watanabe, Takaaki Hori, Kazuhiro Nakadai, Tetsuya Ogata: **CNN-based MultiChannel End-to-End Speech Recognition for everyday home environments. CoRR abs/1811.02735 (2018)**
- Hainan Xu, Shuoyang Ding, Shinji Watanabe: **Improving End-to-end Speech Recognition with Pronunciation-assisted Sub-word Modeling. CoRR abs/1811.04284 (2018)**
- Ruizhi Li, Xiaofei Wang, Sri Harish Reddy Mallidi, Takaaki Hori, Shinji Watanabe, Hynek Hermansky: **Multi-encoder multi-resolution framework for end-to-end speech recognition. CoRR abs/1811.04897 (2018)**
- Xiaofei Wang, Ruizhi Li, Sri Harish Mallidi, Takaaki Hori, Shinji Watanabe, Hynek Hermansky: **Stream attention-based multi-array end-to-end speech recognition. CoRR abs/1811.04903 (2018)**
- Pan Zhou, Wenwen Yang, Wei Chen, Yanfeng Wang, Jia Jia: **Modality Attention for End-to-End Audio-visual Speech Recognition. CoRR abs/1811.05250 (2018)**
- Yanzhang He, Tara N. Sainath, Rohit Prabhavalkar, Ian McGraw, Raziel Alvarez, Ding Zhao, David Rybach, Anjuli Kannan, Yonghui Wu, Ruoming Pang, Qiao Liang, Deepti Bhatia, Yuan Shangguan, Bo Li, Golan Pundak, Khe Chai Sim, Tom Bagby, Shuo-Yiin Chang, Kanishka Rao, Alexander Gruenstein: **Streaming End-to-end Speech Recognition For Mobile Devices. CoRR abs/1811.06621 (2018)**
- Bo Li, Yu Zhang, Tara N. Sainath, Yonghui Wu, William Chan: **Bytes are All You Need: End-to-End Multilingual Speech Recognition and Synthesis with Bytes. CoRR abs/1811.09021 (2018)**
- Zhehuai Chen, Mahaveer Jain, Yongqiang Wang, Michael L. Seltzer, Christian Fuegen: **End-to-end contextual speech recognition using class language models and a token passing decoder. CoRR abs/1812.02142 (2018)**
#### 2017
- Patrick Doetsch, Mirko Hannemann, Ralf Schlüter, Hermann Ney: **Inverted Alignments for End-to-End Automatic Speech Recognition. J. Sel. Topics Signal Processing 11(8): 1265-1273 (2017)**
- Tsubasa Ochiai, Shinji Watanabe, Takaaki Hori, John R. Hershey, Xiong Xiao: **Unified Architecture for Multichannel End-to-End Speech Recognition With Neural Beamforming. J. Sel. Topics Signal Processing 11(8): 1274-1288 (2017)**
- Hao Tang, Liang Lu, Lingpeng Kong, Kevin Gimpel, Karen Livescu, Chris Dyer, Noah A. Smith, Steve Renals: **End-to-End Neural Segmental Models for Speech Recognition. J. Sel. Topics Signal Processing 11(8): 1254-1264 (2017)**
- Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey, Tomoki Hayashi: **Hybrid CTC/Attention Architecture for End-to-End Speech Recognition. J. Sel. Topics Signal Processing 11(8): 1240-1253 (2017)**
- Bo Wu, Kehuang Li, Fengpei Ge, Zhen Huang, Minglei Yang, Sabato Marco Siniscalchi, Chin-Hui Lee: **An End-to-End Deep Learning Approach to Simultaneous Speech Dereverberation and Acoustic Modeling for Robust Speech Recognition. J. Sel. Topics Signal Processing 11(8): 1289-1300 (2017)**
- Takaaki Hori, Shinji Watanabe, John R. Hershey: **Joint CTC/attention decoding for end-to-end speech recognition. ACL (1) 2017: 518-529**
- Hitoshi Ito, Aiko Hagiwara, Manon Ichiki, Takeshi Mishima, Shoei Sato, Akio Kobayashi: **End-to-end speech recognition for languages with ideographic characters. APSIPA 2017: 1228-1232**
- Sivanagaraja Tatinati, Mun Kit Ho, Andy W. H. Khong, Yubo Wang: **End-to-end speech emotion recognition using multi-scale convolution networks. APSIPA 2017: 189-192**
- Qingnan Wang, Wu Guo, Peixin Chen, Yan Song: **Tibetan-Mandarin bilingual speech recognition based on end-to-end framework. APSIPA 2017: 1214-1217**
- Eric Battenberg, Jitong Chen, Rewon Child, Adam Coates, Yashesh Gaur, Yi Li, Hairong Liu, Sanjeev Satheesh, Anuroop Sriram, Zhenyao Zhu: **Exploring neural transducers for end-to-end speech recognition. ASRU 2017: 206-213**
- Takaaki Hori, Shinji Watanabe, John R. Hershey: **Multi-level language modeling and decoding for open vocabulary end-to-end speech recognition. ASRU 2017: 287-293**
- Kanishka Rao, Hasim Sak, Rohit Prabhavalkar: **Exploring architectures, data and units for streaming end-to-end speech recognition with RNN-transducer. ASRU 2017: 193-199**
- Shinji Watanabe, Takaaki Hori, John R. Hershey: **Language independent end-to-end architecture for joint language identification and speech recognition. ASRU 2017: 265-271**
- Suyoun Kim, Takaaki Hori, Shinji Watanabe: **Joint CTC-attention based end-to-end speech recognition using multi-task learning. ICASSP 2017: 4835-4839**
- Stavros Petridis, Zuwei Li, Maja Pantic: **End-to-end visual speech recognition with LSTMS. ICASSP 2017: 2592-2596**
- Andrew Rosenberg, Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran, Michael Picheny: **End-to-end speech recognition and keyword search on low-resource languages. ICASSP 2017: 5280-5284**
- Yu Zhang, William Chan, Navdeep Jaitly: **Very deep convolutional networks for end-to-end speech recognition. ICASSP 2017: 4845-4849**
- Tsubasa Ochiai, Shinji Watanabe, Takaaki Hori, John R. Hershey: **Multichannel End-to-end Speech Recognition. ICML 2017: 2632-2641**
- Takaaki Hori, Shinji Watanabe, Yu Zhang, William Chan: **Advances in Joint CTC-Attention Based End-to-End Speech Recognition with a Deep CNN Encoder and RNN-LM. INTERSPEECH 2017: 949-953**
- Junfeng Hou, Shiliang Zhang, Li-Rong Dai: **Gaussian Prediction Based Attention for Online End-to-End Speech Recognition. INTERSPEECH 2017: 3692-3696**
- Suyoun Kim, Ian Lane: **End-to-End Speech Recognition with Auditory Attention for Multi-Microphone Distance Speech Recognition. INTERSPEECH 2017: 3867-3871**
- Seyedmahdad Mirsamadi, John H. L. Hansen: **On Multi-Domain Training and Adaptation of End-to-End RNN Acoustic Models for Distant Speech Recognition. INTERSPEECH 2017: 404-408**
- Ehsan Variani, Tom Bagby, Erik McDermott, Michiel Bacchiani: **End-to-End Training of Acoustic Models for Large Vocabulary Continuous Speech Recognition with TensorFlow. INTERSPEECH 2017: 1641-1645**
- Yonatan Belinkov, James R. Glass: **Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems. NIPS 2017: 2438-2448**
- Branislav M. Popovic, Edvin Pakoci, Darko Pekar: **End-to-End Large Vocabulary Speech Recognition for the Serbian Language. SPECOM 2017: 343-352**
- Yajie Miao, Florian Metze: **End-to-End Architectures for Speech Recognition. New Era for Robust Speech Recognition, Exploiting Deep Learning 2017: 299-323**
- Eric Battenberg, Jitong Chen, Rewon Child, Adam Coates, Yashesh Gaur, Yi Li, Hairong Liu, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, Zhenyao Zhu: **Exploring Neural Transducers for End-to-End Speech Recognition. CoRR abs/1707.07413 (2017)**
- Takaaki Hori, Shinji Watanabe, Yu Zhang, William Chan: **Advances in Joint CTC-Attention based End-to-End Speech Recognition with a Deep CNN Encoder and RNN-LM. CoRR abs/1706.02737 (2017)**
- Tsubasa Ochiai, Shinji Watanabe, Takaaki Hori, John R. Hershey: **Multichannel End-to-end Speech Recognition. CoRR abs/1703.04783 (2017)**
- Stavros Petridis, Zuwei Li, Maja Pantic: **End-To-End Visual Speech Recognition With LSTMs. CoRR abs/1701.05847 (2017)**
- Changhao Shan, Junbo Zhang, Yujun Wang, Lei Xie: **Attention-Based End-to-End Speech Recognition in Mandarin. CoRR abs/1707.07167 (2017)**
- Andros Tjandra, Sakriani Sakti, Satoshi Nakamura: **Local Monotonic Attention Mechanism for End-to-End Speech Recognition. CoRR abs/1705.08091 (2017)**
- Hao Tang, Liang Lu, Lingpeng Kong, Kevin Gimpel, Karen Livescu, Chris Dyer, Noah A. Smith, Steve Renals: **End-to-End Neural Segmental Models for Speech Recognition. CoRR abs/1708.00531 (2017)**
- Yonatan Belinkov, James R. Glass: **Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems. CoRR abs/1709.04482 (2017)**
- Shubham Toshniwal, Tara N. Sainath, Ron J. Weiss, Bo Li, Pedro J. Moreno, Eugene Weinstein, Kanishka Rao: **Multilingual Speech Recognition With A Single End-To-End Model. CoRR abs/1711.01694 (2017)**
- Suyoun Kim, Michael L. Seltzer: **Towards Language-Universal End-to-End Speech Recognition. CoRR abs/1711.02207 (2017)**
- Suyoun Kim, Michael L. Seltzer, Jinyu Li, Rui Zhao: **Improved training for online end-to-end speech recognition systems. CoRR abs/1711.02212 (2017)**
- Yingbo Zhou, Caiming Xiong, Richard Socher: **Improving End-to-End Speech Recognition with Policy Learning. CoRR abs/1712.07101 (2017)**
- Yingbo Zhou, Caiming Xiong, Richard Socher: **Improved Regularization Techniques for End-to-End Speech Recognition. CoRR abs/1712.07108 (2017)**
#### 2016
- Rahil Mahdian Toroghi: **Blind speech separation in distant speech recognition front-end processing. Saarland University, Saarbrücken, Germany 2016**
- Xuyang Wang, Pengyuan Zhang, Qingwei Zhao, Jielin Pan, Yonghong Yan: **Improved End-to-End Speech Recognition Using Adaptive Per-Dimensional Learning Rate Methods. IEICE Transactions 99-D(10): 2550-2553 (2016)**
- Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, Yoshua Bengio: **End-to-end attention-based large vocabulary speech recognition. ICASSP 2016: 4945-4949**
- Liang Lu, Xingxing Zhang, Steve Renals: **On training the recurrent neural network encoder-decoder for large vocabulary end-to-end speech recognition. ICASSP 2016: 5060-5064**
- George Trigeorgis, Fabien Ringeval, Raymond Brueckner, Erik Marchi, Mihalis A. Nicolaou, Björn W. Schuller, Stefanos Zafeiriou: **Adieu features? End-to-end speech emotion recognition using a deep convolutional recurrent network. ICASSP 2016: 5200-5204**
- Takuya Higuchi, Takuya Yoshioka, Tomohiro Nakatani: **Optimization of Speech Enhancement Front-End with Speech Recognition-Level Criterion. INTERSPEECH 2016: 3808-3812**
- Souvik Kundu, Khe Chai Sim, Mark J. F. Gales: **Incorporating a Generative Front-End Layer to Deep Neural Network for Noise Robust Automatic Speech Recognition. INTERSPEECH 2016: 2359-2363**
- Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith, Steve Renals: **Segmental Recurrent Neural Networks for End-to-End Speech Recognition. INTERSPEECH 2016: 385-389**
- Ying Zhang, Mohammad Pezeshki, Philémon Brakel, Saizheng Zhang, César Laurent, Yoshua Bengio, Aaron C. Courville: **Towards End-to-End Speech Recognition with Deep Convolutional Neural Networks. INTERSPEECH 2016: 410-414**
- Jian Kang, Wei-Qiang Zhang, Jia Liu: **Lattice based transcription loss for end-to-end speech recognition. ISCSLP 2016: 1-5**
- Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve: **Wav2Letter: an End-to-End ConvNet-based Speech Recognition System. CoRR abs/1609.03193 (2016)**
- Suyoun Kim, Takaaki Hori, Shinji Watanabe: **Joint CTC-Attention based End-to-End Speech Recognition using Multi-task Learning. CoRR abs/1609.06773 (2016)**
- Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith, Steve Renals: **Segmental Recurrent Neural Networks for End-to-end Speech Recognition. CoRR abs/1603.00223 (2016)**
- Ramon Sanabria, Florian Metze, Fernando De la Torre: **Robust end-to-end deep audiovisual speech recognition. CoRR abs/1611.06986 (2016)**
- Hassan Taherian: **End-to-end attention-based distant speech recognition with Highway LSTM. CoRR abs/1610.05361 (2016)**
- Yu Zhang, William Chan, Navdeep Jaitly: **Very Deep Convolutional Networks for End-to-End Speech Recognition. CoRR abs/1610.03022 (2016)**
- Zewang Zhang, Zheng Sun, Jiaqi Liu, Jingwen Chen, Zhao Huo, Xiao Zhang: **An Experimental Comparison of Deep Neural Networks for End-to-end Speech Recognition. CoRR abs/1611.07174 (2016)**
#### 2015
- Javier Gonzalez-Dominguez, David Eustis, Ignacio Lopez-Moreno, Andrew W. Senior, Françoise Beaufays, Pedro J. Moreno: **A Real-Time End-to-End Multilingual Speech Recognition Architecture. J. Sel. Topics Signal Processing 9(4): 749-759 (2015)**
- Yajie Miao, Mohammad Gowayyed, Florian Metze: **EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding. ASRU 2015: 167-174**
- Jie Li, Heng Zhang, Xinyuan Cai, Bo Xu: **Towards end-to-end speech recognition for Chinese Mandarin using long short-term memory recurrent neural networks. INTERSPEECH 2015: 3615-3619**
- Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Y. Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Y. Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu: **Deep Speech 2: End-to-End Speech Recognition in English and Mandarin. CoRR abs/1512.02595 (2015)**
- Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, Yoshua Bengio: **End-to-End Attention-based Large Vocabulary Speech Recognition. CoRR abs/1508.04395 (2015)**
- Yajie Miao, Mohammad Gowayyed, Florian Metze: **EESEN: End-to-End Speech Recognition using Deep RNN Models and WFST-based Decoding. CoRR abs/1507.08240 (2015)**
#### 2014
- Alex Graves, Navdeep Jaitly: **Towards End-To-End Speech Recognition with Recurrent Neural Networks. ICML 2014: 1764-1772**
- Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio: **End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results. CoRR abs/1412.1602 (2014)**
- Awni Y. Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, Andrew Y. Ng: **Deep Speech: Scaling up end-to-end speech recognition. CoRR abs/1412.5567 (2014)**
## 2. Toolkits
- **EESEN**. [[GitHub]](https://github.com/srvk/eesen)
  + Paper: Miao Y, Gowayyed M, Metze F. **EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding. 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE, 2015: 167-174.** [[pdf]](https://arxiv.org/pdf/1507.08240)
- **Wav2Letter(++)**. [[Website]](https://research.fb.com/downloads/wav2letter/)[[GitHub]](https://github.com/facebookresearch/wav2letter)
+ Paper1: Collobert R, Puhrsch C, Synnaeve G. **Wav2letter: an end-to-end convnet-based speech recognition system[J]. arXiv preprint arXiv:1609.03193, 2016.** [[pdf]](https://arxiv.org/pdf/1609.03193.pdf)
  + Paper2: Pratap V, Hannun A, Xu Q, et al. **wav2letter++: The Fastest Open-source Speech Recognition System[J]. arXiv preprint arXiv:1812.07625, 2018.** [[pdf]](https://arxiv.org/pdf/1812.07625)
- **espnet**. [[GitHub]](https://github.com/espnet/espnet)
  + Paper: Watanabe S, Hori T, Karita S, et al. **ESPnet: End-to-end speech processing toolkit[J]. arXiv preprint arXiv:1804.00015, 2018.** [[pdf]](https://arxiv.org/pdf/1804.00015)
- **WaveNet (STT)**. [[GitHub]](https://github.com/buriburisuri/speech-to-text-wavenet)
- **neural_sp**. [[GitHub]](https://github.com/hirofumi0810/neural_sp)
- **Other Github Repositories**. [[Link]](https://github.com/search?q=end+to+end+speech+recognition)
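All of the systems and toolkits listed above are typically compared by word error rate (WER): the minimum number of word substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A minimal stdlib sketch of this metric (not taken from any of the listed toolkits, which ship their own scoring tools):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed with standard Levenshtein dynamic programming over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the cat sat", "the cat sat on"))  # one insertion over 3 words
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why published results occasionally report error rates above 100%.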