https://github.com/slinusc/speaker_identification_evaluation
Evaluating the Effectiveness of Transformer Layers in Wav2Vec 2.0, XLS-R, and Whisper for Speaker Identification Tasks
- Host: GitHub
- URL: https://github.com/slinusc/speaker_identification_evaluation
- Owner: slinusc
- Created: 2024-05-24T19:38:30.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2025-01-25T17:18:29.000Z (3 months ago)
- Last Synced: 2025-04-12T22:05:08.892Z (14 days ago)
- Topics: wav2vec2, whisper, xls-r
- Language: Jupyter Notebook
- Homepage:
- Size: 8.56 MB
- Stars: 3
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
## README
### Abstract
This study evaluates the performance of three advanced speech encoder models (Wav2Vec 2.0, XLS-R, and Whisper) in speaker identification tasks. By fine-tuning these models and analyzing their layer-wise representations using SVCCA, k-means clustering, and t-SNE visualizations, we found that Wav2Vec 2.0 and XLS-R capture speaker-specific features effectively in their early layers, with fine-tuning improving stability and performance. Whisper showed better performance in deeper layers. Additionally, we determined the optimal number of transformer layers for each model when fine-tuned for speaker identification tasks.
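The layer-wise clustering analysis described in the abstract can be sketched as follows: given per-layer embeddings for utterances from known speakers, cluster each layer's embeddings with k-means and score how well the clusters recover speaker identity. This is a minimal illustration with synthetic stand-in embeddings, not the repository's actual code; all variable names and the per-layer separation values are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_speakers, utts_per_speaker, dim = 4, 20, 32

# Synthetic stand-in for a model's hidden states: each "layer" separates
# speaker centroids progressively more, mimicking layers that encode
# speaker identity with increasing strength.
labels = np.repeat(np.arange(n_speakers), utts_per_speaker)
layers = []
for sep in (0.1, 0.5, 2.0):  # hypothetical per-layer speaker separation
    centroids = rng.normal(size=(n_speakers, dim)) * sep
    layers.append(centroids[labels] + rng.normal(size=(len(labels), dim)))

# Cluster each layer's embeddings and compare clusters to true speaker
# labels; a higher adjusted Rand index means the layer encodes
# speaker-specific features more cleanly.
scores = []
for hidden in layers:
    pred = KMeans(n_clusters=n_speakers, n_init=10, random_state=0).fit_predict(hidden)
    scores.append(adjusted_rand_score(labels, pred))

print([round(s, 2) for s in scores])
```

In the actual study, the embeddings would come from each transformer layer of Wav2Vec 2.0, XLS-R, or Whisper (e.g. mean-pooled hidden states per utterance), and the same per-layer scoring reveals which depth best captures speaker identity.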