Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/yufan-aslp/AliMeeting
The project is associated with the recently launched ICASSP 2022 Multi-channel Multi-party Meeting Transcription Challenge (M2MeT) and provides participants with baseline systems for speech recognition and speaker diarization in the conference scenario.
aishell-4 alimeeting asr challenge m2met multi-speaker-asr speaker-diarization
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/yufan-aslp/AliMeeting
- Owner: yufan-aslp
- Created: 2021-10-20T03:10:37.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2022-06-10T02:51:32.000Z (over 2 years ago)
- Last Synced: 2024-06-05T14:39:47.797Z (5 months ago)
- Topics: aishell-4, alimeeting, asr, challenge, m2met, multi-speaker-asr, speaker-diarization
- Language: Python
- Homepage:
- Size: 492 KB
- Stars: 107
- Watchers: 3
- Forks: 17
- Open Issues: 6
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- StarryDivineSky - yufan-aslp/AliMeeting
README
# M2MeT challenge baseline -- AliMeeting
This project provides the baseline system recipes for the ICASSP 2022 Multi-channel Multi-party Meeting Transcription Challenge (M2MeT). The challenge consists of two tracks: ***Automatic Speech Recognition (ASR)*** and ***Speaker Diarization***. A detailed description of each track can be found in its corresponding directory. The goal of this project is to simplify the training and evaluation procedures and make it easy for participants to reproduce the baseline experiments and develop novel methods.
## Setup
```shell
git clone https://github.com/yufan-aslp/AliMeeting.git
```

## Introduction
* [Speech Recognition Track](asr): Follow the detailed steps in `./asr`.
* [Speaker Diarization Track](speaker): Follow the detailed steps in `./speaker`.
## General steps
1. Prepare the training data for the speaker diarization and ASR models, respectively.
2. Follow the running steps of the speaker diarization experiment to obtain the `rttm` file. The `rttm` file contains the voice activity detection (VAD) and speaker diarization results, which are used to compute the final Diarization Error Rate (DER) score.
3. For the ASR track, either single-speaker or multi-speaker ASR models can be trained. The evaluation metric for ASR systems is the Character Error Rate (CER). A small illustrative sketch of the `rttm` format and CER appears after this list.
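The recipes under `./asr` and `./speaker` handle scoring themselves; the snippet below is only a minimal, unofficial sketch of the two outputs mentioned above. It reads segments from a standard NIST-style `rttm` file and computes a Levenshtein-based CER; DER scoring in practice is done with dedicated tools (for example the NIST `md-eval` script used by common diarization recipes). The file paths and example strings here are hypothetical.

```python
# Unofficial helper sketch (not part of the challenge recipes).
# RTTM lines follow the standard layout:
#   SPEAKER <file-id> <channel> <onset> <duration> <NA> <NA> <speaker> <NA> <NA>
from collections import defaultdict


def load_rttm(path):
    """Return {file_id: [(onset, offset, speaker), ...]} from an RTTM file."""
    segments = defaultdict(list)
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 8 and fields[0] == "SPEAKER":
                file_id, onset, dur, spk = fields[1], float(fields[3]), float(fields[4]), fields[7]
                segments[file_id].append((onset, onset + dur, spk))
    return segments


def cer(ref, hyp):
    """Character error rate: Levenshtein distance over characters (spaces ignored)."""
    ref, hyp = ref.replace(" ", ""), hyp.replace(" ", "")
    prev = list(range(len(hyp) + 1))
    for i, rc in enumerate(ref, 1):
        cur = [i]
        for j, hc in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (rc != hc)))   # substitution
        prev = cur
    return prev[-1] / max(len(ref), 1)


if __name__ == "__main__":
    # One substituted character out of six -> CER of about 0.167 (strings are made up).
    print(cer("欢迎参加会议", "欢迎参加会意"))
```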
## Citation

If you use the challenge dataset or our baseline systems, please consider citing the following:
@inproceedings{Yu2022M2MeT,
  title={M2{M}e{T}: The {ICASSP} 2022 Multi-Channel Multi-Party Meeting Transcription Challenge},
  author={Yu, Fan and Zhang, Shiliang and Fu, Yihui and Xie, Lei and Zheng, Siqi and Du, Zhihao and Huang, Weilong and Guo, Pengcheng and Yan, Zhijie and Ma, Bin and Xu, Xin and Bu, Hui},
  booktitle={Proc. ICASSP},
  year={2022},
  organization={IEEE}
}

@inproceedings{Yu2022Summary,
  title={Summary On The {ICASSP} 2022 Multi-Channel Multi-Party Meeting Transcription Grand Challenge},
  author={Yu, Fan and Zhang, Shiliang and Guo, Pengcheng and Fu, Yihui and Du, Zhihao and Zheng, Siqi and Huang, Weilong and Xie, Lei and Tan, Zheng-Hua and Wang, DeLiang and Qian, Yanmin and Lee, Kong Aik and Yan, Zhijie and Ma, Bin and Xu, Xin and Bu, Hui},
  booktitle={Proc. ICASSP},
  year={2022},
  organization={IEEE}
}

Challenge introduction paper: [M2MeT: The ICASSP 2022 Multi-Channel Multi-Party Meeting Transcription Challenge](https://arxiv.org/abs/2110.07393)

Challenge summary paper: [Summary On The ICASSP 2022 Multi-Channel Multi-Party Meeting Transcription Grand Challenge](https://arxiv.org/abs/2202.03647)
The AliMeeting data can be downloaded from https://www.openslr.org/119.
The room configuration of the AliMeeting Train set can be downloaded from https://speech-lab-share-data.oss-cn-shanghai.aliyuncs.com/AliMeeting/AliMeeting_Trainset_Room.xlsx.
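As a convenience only, here is a minimal Python sketch that fetches the room-configuration spreadsheet from the URL above; the audio archives themselves are listed on the OpenSLR page, and their file names are deliberately not hard-coded here.

```python
# Minimal, unofficial download sketch: fetch the Train-set room configuration
# spreadsheet linked above. The AliMeeting audio archives are listed on
# https://www.openslr.org/119 and should be downloaded from that page.
import urllib.request

ROOM_CONFIG_URL = (
    "https://speech-lab-share-data.oss-cn-shanghai.aliyuncs.com/"
    "AliMeeting/AliMeeting_Trainset_Room.xlsx"
)
local_path, _ = urllib.request.urlretrieve(ROOM_CONFIG_URL, "AliMeeting_Trainset_Room.xlsx")
print(f"saved {local_path}")
```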
M2MeT challenge CodaLab (open evaluation platform for the Eval and Test sets of both tracks): https://codalab.lisn.upsaclay.fr/competitions/?q=M2MeT
## Organizing Committee
* Lei Xie, AISHELL Foundation, China, [email protected]
* Bin Ma, Principal Engineer at Alibaba, Singapore, [email protected]
* DeLiang Wang, Professor, Ohio State University, USA, [email protected]
* Zheng-Hua Tan, Professor, Aalborg University, Denmark, [email protected]
* Kong Aik Lee, Senior Scientist, Institute for Infocomm Research, A*STAR, Singapore, [email protected]
* Zhijie Yan, Director of Speech Lab at Alibaba, China, [email protected]
* Yanmin Qian, Associate Professor, Shanghai Jiao Tong University, China, [email protected]
* Hui Bu, CEO, AIShell Inc., China, [email protected]

## Contributors
* [Alibaba DAMO Academy Speech Lab](https://damo.alibaba.com/labs/speech/?lang=zh)
* [AISHELL Technology](http://www.aishelltech.com/sy)
* [ISCA](https://isca-speech.org/iscaweb/)
## Code license
[Apache 2.0](./LICENSE)