Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
speech to text benchmark framework
- Host: GitHub
- URL: https://github.com/Picovoice/speech-to-text-benchmark
- Owner: Picovoice
- License: apache-2.0
- Created: 2018-08-04T02:52:01.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2024-01-12T00:17:59.000Z (about 1 year ago)
- Last Synced: 2024-02-17T08:34:39.932Z (11 months ago)
- Topics: aws-transcribe, cheetah, deep-learning, deep-neural-networks, deepspeech, edge-ai, google-speech-to-text, mozilla-deepspeech, offline, picovoice, pocketsphinx, privacy, speech-recognition, speech-to-text, voice-recognition
- Language: Python
- Homepage: https://picovoice.ai/
- Size: 159 MB
- Stars: 577
- Watchers: 28
- Forks: 62
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Speech-to-Text Benchmark
Made in Vancouver, Canada by [Picovoice](https://picovoice.ai)
This repo is a minimalist and extensible framework for benchmarking different speech-to-text engines.
## Table of Contents
- [Data](#data)
- [Metrics](#metrics)
- [Engines](#engines)
- [Usage](#usage)
- [Results](#results)

## Data
- [LibriSpeech](http://www.openslr.org/12/)
- [TED-LIUM](https://www.openslr.org/7/)
- [Common Voice](https://commonvoice.mozilla.org/en)

## Metrics
### Word Error Rate
Word error rate (WER) is the ratio of the edit distance between the words in a reference transcript and the words in the
output of the speech-to-text engine to the number of words in the reference transcript.
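To make the definition concrete, below is a minimal sketch of word-level WER using a plain dynamic-programming edit
distance; it assumes whitespace tokenization and is illustrative only, not the repo's own scoring code.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("world" -> "word") and one deletion ("are") over 5 reference words:
print(wer("hello world how are you", "hello word how you"))  # 0.4
```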
### Core-Hour

The Core-Hour metric captures the computational efficiency of a speech-to-text engine: the number of CPU core-hours
required to process one hour of audio. An engine with a lower Core-Hour figure is more computationally efficient. We
omit this metric for cloud-based engines.
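As a concrete reading of the definition, the sketch below derives Core-Hour from wall-clock time, assuming the cores in
use are fully utilized; `core_hour` is an illustrative helper, not part of this repo.

```python
def core_hour(wall_clock_seconds: float, num_cores: int, audio_hours: float) -> float:
    """CPU core-hours consumed per hour of processed audio."""
    return (wall_clock_seconds / 3600.0) * num_cores / audio_hours

# Example: 30 minutes of wall-clock time on 10 cores to process 50 hours of audio:
print(core_hour(1800.0, 10, 50.0))  # (0.5 * 10) / 50 = 0.1
```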
### Model Size

The aggregate size of models (acoustic and language), in MB. We omit this metric for cloud-based engines.
## Engines
- [Amazon Transcribe](https://aws.amazon.com/transcribe/)
- [Azure Speech-to-Text](https://azure.microsoft.com/en-us/services/cognitive-services/speech-to-text/)
- [Google Speech-to-Text](https://cloud.google.com/speech-to-text)
- [IBM Watson Speech-to-Text](https://www.ibm.com/ca-en/cloud/watson-speech-to-text)
- [OpenAI Whisper](https://github.com/openai/whisper)
- [Picovoice Cheetah](https://picovoice.ai/)
- [Picovoice Leopard](https://picovoice.ai/)

## Usage
This benchmark has been developed and tested on `Ubuntu 22.04`.
- Install [FFmpeg](https://www.ffmpeg.org/)
- Download datasets.
- Install the requirements:

```console
pip3 install -r requirements.txt
```

In the following, we provide instructions for running the benchmark for each engine. The supported datasets are
`COMMON_VOICE`, `LIBRI_SPEECH_TEST_CLEAN`, `LIBRI_SPEECH_TEST_OTHER`, and `TED_LIUM`.

### Amazon Transcribe Instructions
Replace `${DATASET}` with one of the supported datasets, `${DATASET_FOLDER}` with the path to the dataset, and
`${AWS_PROFILE}` with the name of the AWS profile you wish to use.

```console
python3 benchmark.py \
--dataset ${DATASET} \
--dataset-folder ${DATASET_FOLDER} \
--engine AMAZON_TRANSCRIBE \
--aws-profile ${AWS_PROFILE}
```

### Azure Speech-to-Text Instructions
Replace `${DATASET}` with one of the supported datasets, `${DATASET_FOLDER}` with the path to the dataset, and
`${AZURE_SPEECH_KEY}` and `${AZURE_SPEECH_LOCATION}` with the credentials from your Azure account.

```console
python3 benchmark.py \
--dataset ${DATASET} \
--dataset-folder ${DATASET_FOLDER} \
--engine AZURE_SPEECH_TO_TEXT \
--azure-speech-key ${AZURE_SPEECH_KEY} \
--azure-speech-location ${AZURE_SPEECH_LOCATION}
```

### Google Speech-to-Text Instructions
Replace `${DATASET}` with one of the supported datasets, `${DATASET_FOLDER}` with the path to the dataset, and
`${GOOGLE_APPLICATION_CREDENTIALS}` with the path to the credentials file downloaded from Google Cloud Platform.

```console
python3 benchmark.py \
--dataset ${DATASET} \
--dataset-folder ${DATASET_FOLDER} \
--engine GOOGLE_SPEECH_TO_TEXT \
--google-application-credentials ${GOOGLE_APPLICATION_CREDENTIALS}
```

### IBM Watson Speech-to-Text Instructions
Replace `${DATASET}` with one of the supported datasets, `${DATASET_FOLDER}` with the path to the dataset, and
`${WATSON_SPEECH_TO_TEXT_API_KEY}`/`${WATSON_SPEECH_TO_TEXT_URL}` with the credentials from your IBM account.

```console
python3 benchmark.py \
--dataset ${DATASET} \
--dataset-folder ${DATASET_FOLDER} \
--engine IBM_WATSON_SPEECH_TO_TEXT \
--watson-speech-to-text-api-key ${WATSON_SPEECH_TO_TEXT_API_KEY} \
--watson-speech-to-text-url ${WATSON_SPEECH_TO_TEXT_URL}
```

### OpenAI Whisper Instructions
Replace `${DATASET}` with one of the supported datasets, `${DATASET_FOLDER}` with the path to the dataset, and
`${WHISPER_MODEL}` with the Whisper model type (`WHISPER_TINY`, `WHISPER_BASE`, `WHISPER_SMALL`,
`WHISPER_MEDIUM`, or `WHISPER_LARGE`).

```console
python3 benchmark.py \
--engine ${WHISPER_MODEL} \
--dataset ${DATASET} \
--dataset-folder ${DATASET_FOLDER}
```

### Picovoice Cheetah Instructions
Replace `${DATASET}` with one of the supported datasets, `${DATASET_FOLDER}` with the path to the dataset, and
`${PICOVOICE_ACCESS_KEY}` with your AccessKey obtained from [Picovoice Console](https://console.picovoice.ai/).

```console
python3 benchmark.py \
--engine PICOVOICE_CHEETAH \
--dataset ${DATASET} \
--dataset-folder ${DATASET_FOLDER} \
--picovoice-access-key ${PICOVOICE_ACCESS_KEY}
```

### Picovoice Leopard Instructions
Replace `${DATASET}` with one of the supported datasets, `${DATASET_FOLDER}` with the path to the dataset, and
`${PICOVOICE_ACCESS_KEY}` with your AccessKey obtained from [Picovoice Console](https://console.picovoice.ai/).

```console
python3 benchmark.py \
--engine PICOVOICE_LEOPARD \
--dataset ${DATASET} \
--dataset-folder ${DATASET_FOLDER} \
--picovoice-access-key ${PICOVOICE_ACCESS_KEY}
```

## Results
### Word Error Rate (WER)
![](results/plots/WER.png)
| Engine | LibriSpeech test-clean | LibriSpeech test-other | TED-LIUM | CommonVoice | Average |
|:------------------------------:|:----------------------:|:----------------------:|:--------:|:-----------:|:-------:|
| Amazon Transcribe | 2.6% | 5.6% | 3.8% | 8.7% | 5.2% |
| Azure Speech-to-Text | 2.8% | 6.2% | 4.6% | 8.9% | 5.6% |
| Google Speech-to-Text | 10.8% | 24.5% | 14.4% | 31.9% | 20.4% |
| Google Speech-to-Text Enhanced | 6.2% | 13.0% | 6.1% | 18.2% | 10.9% |
| IBM Watson Speech-to-Text | 10.9% | 26.2% | 11.7% | 39.4% | 22.0% |
| Whisper Large (Multilingual) | 3.7% | 5.4% | 4.6% | 9.0% | 5.7% |
| Whisper Medium | 3.3% | 6.2% | 4.6% | 10.2% | 6.1% |
| Whisper Small | 3.3% | 7.2% | 4.8% | 12.7% | 7.0% |
| Whisper Base | 4.3% | 10.4% | 5.4% | 17.9% | 9.5% |
| Whisper Tiny | 5.9% | 13.8% | 6.5% | 24.4% | 12.7% |
| Picovoice Cheetah | 5.6% | 12.1% | 7.7% | 17.5% | 10.7% |
| Picovoice Leopard | 5.3% | 11.3% | 7.2% | 16.2% | 10.0% |

### Core-Hour & Model Size
To obtain these results, we ran the benchmark across the entire TED-LIUM dataset and recorded the processing time.
The measurement was carried out on an Ubuntu 22.04 machine with an AMD CPU (`AMD Ryzen 9 5900X (12) @ 3.70GHz`),
64 GB of RAM, and NVMe storage, using 10 cores simultaneously. We omit Whisper Large (Multilingual) from this benchmark.

| Engine            | Core-Hour | Model Size / MB |
|:-----------------:|:---------:|:---------------:|
| Whisper Medium | 1.50 | 1457 |
| Whisper Small | 0.89 | 462 |
| Whisper Base | 0.28 | 139 |
| Whisper Tiny | 0.15 | 73 |
| Picovoice Leopard | 0.05 | 36 |
| Picovoice Cheetah | 0.09 | 31 |

![](results/plots/cpu_usage_comparison.png)
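For intuition, the Core-Hour figures above can be read as real-time factors. A rough back-of-the-envelope sketch,
assuming processing scales linearly with the number of cores (real engines only approximate this):

```python
def realtime_factor(core_hour: float, num_cores: int = 1) -> float:
    """Hours of audio processed per wall-clock hour on `num_cores` cores."""
    return num_cores / core_hour

print(realtime_factor(0.09))      # Picovoice Cheetah: ~11x real-time on one core
print(realtime_factor(1.50, 10))  # Whisper Medium: ~6.7x real-time on 10 cores
```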