Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ma7555/evalify
Evaluate your biometric verification models literally in seconds.
- Host: GitHub
- URL: https://github.com/ma7555/evalify
- Owner: ma7555
- License: bsd-3-clause
- Created: 2022-02-15T01:06:53.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-06-17T23:27:50.000Z (over 1 year ago)
- Last Synced: 2024-09-04T04:47:46.821Z (30 days ago)
- Topics: evaluation, evaluation-framework, evaluation-metrics, face-recognition, face-verification, python
- Language: Python
- Homepage:
- Size: 2.63 MB
- Stars: 19
- Watchers: 5
- Forks: 20
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: HISTORY.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Citation: CITATION.cff
README
# evalify
**Evaluate Biometric Authentication Models Literally in Seconds.**
## Installation
#### Stable release:
```bash
pip install evalify
```
#### Bleeding edge:
```bash
pip install git+https://github.com/ma7555/evalify.git
```
## Used for
Evaluating any biometric authentication model whose output is a high-level embedding, known as a feature vector for visual or behavioural biometrics, or a d-vector for auditory biometrics.

## Usage
```python
import numpy as np
from evalify import Experiment

rng = np.random.default_rng()
nphotos = 500
emb_size = 32
nclasses = 10
X = rng.random((nphotos, emb_size))
y = rng.integers(nclasses, size=nphotos)

experiment = Experiment()
experiment.run(X, y)
experiment.get_roc_auc()
print(experiment.roc_auc)
print(experiment.find_threshold_at_fpr(0.01))
```
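The mechanics behind `run` and `find_threshold_at_fpr` can be approximated in plain NumPy. The following is a rough sketch of the general idea only (all-pairs cosine similarity via a vectorized Einstein sum, then the threshold whose false-positive rate stays under a target), not evalify's actual implementation; the helper name `threshold_at_fpr` is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 32))          # embeddings, one row per photo
y = rng.integers(5, size=100)      # identity label for each row

# Cosine similarity for every pair at once (einsum gives the squared row norms).
norms = np.sqrt(np.einsum("ij,ij->i", X, X))
sims = (X @ X.T) / np.outer(norms, norms)

# Keep each pair once and skip self-pairs via the upper triangle.
i, j = np.triu_indices(len(y), k=1)
scores = sims[i, j]
genuine = y[i] == y[j]             # True for same-identity (genuine) pairs

def threshold_at_fpr(scores, genuine, target_fpr=0.01):
    """Hypothetical helper: smallest threshold whose FPR stays within target_fpr."""
    impostor = np.sort(scores[~genuine])
    # Accepting scores >= t; pick t so at most a target_fpr fraction of impostors pass.
    k = int(np.ceil(len(impostor) * (1 - target_fpr)))
    k = min(k, len(impostor) - 1)
    return impostor[k]

t = threshold_at_fpr(scores, genuine, 0.01)
fpr = np.mean(scores[~genuine] >= t)
print(f"threshold={t:.3f}, achieved FPR={fpr:.4f}")
```

The vectorized similarity matrix is what makes the all-pairs approach feasible: the per-pair loop is replaced by one matrix product that BLAS can optimize.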
## How it works
* When you run an experiment, evalify tries all possible combinations between individuals for authentication, based on the `X` and `y` parameters, and returns the results, including FPR, TPR, FNR, TNR, and ROC AUC. `X` is an array of embeddings and `y` is an array of corresponding targets.
* evalify can find the optimal threshold for your target FPR under the desired similarity or distance metric.

## Documentation

## Features
* Blazing-fast metric calculation through optimized Einstein summation and vectorized operations.
* Many operations are dispatched to canonical BLAS, cuBLAS, or other specialized routines.
* Smart sampling options using direct indexing from pre-calculated arrays with total control over sampling strategy and sampling numbers.
* Supports most evaluation metrics:
- `cosine_similarity`
- `pearson_similarity`
- `cosine_distance`
- `euclidean_distance`
- `euclidean_distance_l2`
- `minkowski_distance`
- `manhattan_distance`
- `chebyshev_distance`
* Computation time for a 4-metric, 4.2-million-sample experiment is **24 seconds vs. 51 minutes** when looping with the `scipy.spatial.distance` implementations.

## TODO
* Safer memory allocation: no issues have been observed so far, but if you run out of memory, please set the `batch_size` argument manually.

## Contribution
* Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
* Please check [CONTRIBUTING.md](https://github.com/ma7555/evalify/blob/main/CONTRIBUTING.md) for guidelines.

## Citation
* If you use this software, please cite it using the metadata from [CITATION.cff](https://github.com/ma7555/evalify/blob/main/CITATION.cff)
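For reference, `CITATION.cff` files follow the Citation File Format (CFF). A minimal, illustrative example of the format, with placeholder author values rather than evalify's actual metadata, looks like:

```yaml
# Minimal CFF 1.2.0 file (illustrative values, not evalify's actual CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "evalify"
authors:
  - family-names: "Doe"
    given-names: "Jane"
repository-code: "https://github.com/ma7555/evalify"
license: BSD-3-Clause
```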