https://github.com/tabahi/accuracy_metrics
Precision, Recall, F1, UAR, WAR all in one
- Host: GitHub
- URL: https://github.com/tabahi/accuracy_metrics
- Owner: tabahi
- License: mit
- Created: 2024-03-22T23:41:00.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-04-07T01:47:19.000Z (about 1 year ago)
- Last Synced: 2025-01-13T14:52:32.316Z (5 months ago)
- Language: Python
- Size: 20.5 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# accuracy_metrics
- Saving everyone the hassle: Precision, Recall, F1, unweighted average recall (UAR, the macro-average of per-class recalls), weighted average recall (WAR, equivalent to overall accuracy), and a confusion matrix, all in one script.
- Requires NumPy; labels are handled as ndarray-type arrays.
- Option to skip a class when computing the macro averages.

## Use:
```python
import accuracy_metrics  # put accuracy_metrics.py in your project dir

y_true = ['M', 'F', 'M', 'F', 'O', 'F', 'M', 'F', 'M', 'O', 'M', 'F', 'O', 'F', 'M', 'F', 'M', 'F', 'M', 'F', 'O', 'F', 'M', 'F', 'M', 'F', 'M', 'O', 'O', 'F', 'M', 'F', 'M', 'F', 'F', 'F', 'O']
y_pred = ['F', 'F', 'M', 'O', 'F', 'F', 'M', 'F', 'M', 'F', 'M', 'M', 'O', 'F', 'F', 'M', 'O', 'F', 'F', 'M', 'F', 'M', 'F', 'M', 'M', 'O', 'F', 'F', 'M', 'O', 'F', 'F', 'M', 'M', 'M', 'F', 'O']

results = accuracy_metrics.generate_classification_metrics(
    y_true, y_pred,
    skip_label='O',
    confusion_csv="confusion.csv",
    precisions_csv="precisions.csv")

print(results)
'''
{'N_counts': [17, 13, 7], 'uar': 43.67, 'war': 43.33, 'precision': 0.405, 'recall': 0.405, 'f1_score': 0.405, 'precision_sk': 0.433, 'recall_sk': 0.419, 'f1_score_sk': 0.426}

_sk: metrics calculated with the skipped class excluded.
'''
```
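For a sanity check of what UAR and WAR mean, the two averages can be reproduced in a few lines of plain Python. This is a minimal sketch; `uar_war` is a hypothetical helper for illustration, not a function in this repo:

```python
from collections import Counter, defaultdict

def uar_war(y_true, y_pred):
    """Compute per-class recall, then two averages:
    UAR = unweighted (macro) mean of class recalls;
    WAR = support-weighted mean, which equals overall accuracy."""
    support = Counter(y_true)          # samples per true class
    hits = defaultdict(int)            # correct predictions per class
    for t, p in zip(y_true, y_pred):
        if t == p:
            hits[t] += 1
    recalls = {c: hits[c] / n for c, n in support.items()}
    uar = sum(recalls.values()) / len(recalls)
    war = sum(recalls[c] * n for c, n in support.items()) / len(y_true)
    return uar, war
```

WAR reduces to plain accuracy because weighting each class recall by its support and dividing by the total sample count just counts correct predictions; UAR instead gives every class an equal vote, which is why it is the preferred metric on imbalanced data.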