Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ArmandGiraud/letor_scores
a flask minimal api to score a ranking system against human relevance scores
- Host: GitHub
- URL: https://github.com/ArmandGiraud/letor_scores
- Owner: ArmandGiraud
- Created: 2019-04-18T07:13:21.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2019-04-26T08:44:01.000Z (almost 6 years ago)
- Last Synced: 2024-12-25T11:51:34.612Z (about 1 month ago)
- Language: Python
- Size: 45.9 KB
- Stars: 5
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Learning to Rank scores api
A minimal Flask API to score a ranking system against human relevance scores.

## Usage
### Deploy
```bash
sudo docker-compose up --build -d
```
### Endpoint /api/score

The score endpoint scores a single request. To evaluate the system over a list of requests, compute the average of the returned scores.
```python
import requests

params = {
    "y_pred": ["a", "b", "c", "w", "k", "e"],  # y_pred (array): document ids predicted by the system
    "y_true": ["a", "b", "c", "e"],            # y_true (array): document ids judged by humans, sorted from most to least relevant
    "y_score": {
        "a": 5,
        "b": 3,
        "c": 2,
        "e": 2
    },  # y_score (dict {"doc_id": score}): documents judged relevant by humans with their associated scores (ordered internally)
    "method": "all",  # one of ["precision", "recall", "dcg", "mrr", "all"]
    "k": 3            # k (integer), preferably <= len(y_pred)
}

r = requests.post("http://0.0.0.0:4545/api/score", json=params)
r.status_code
>>> 200
r.json()
>>> {'dcg': 0.31259247142834, 'mrr': 0.5, 'precision': 0.6666666666666666, 'recall': 0.5}
```
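Since the endpoint scores one request at a time, a system-level evaluation is just the mean of the returned metrics over a set of judged requests. The sketch below assumes the same endpoint and payload format as the example above; the second request's judgments are purely hypothetical.

```python
import requests

# Hypothetical per-request judgments: each entry mirrors the payload shown above.
judged_requests = [
    {
        "y_pred": ["a", "b", "c", "w", "k", "e"],
        "y_true": ["a", "b", "c", "e"],
        "y_score": {"a": 5, "b": 3, "c": 2, "e": 2},
        "method": "all",
        "k": 3,
    },
    {
        "y_pred": ["x", "y", "z"],
        "y_true": ["y", "z"],
        "y_score": {"y": 4, "z": 1},
        "method": "all",
        "k": 3,
    },
]

# Accumulate each metric over all requests, then average.
totals = {}
for params in judged_requests:
    scores = requests.post("http://0.0.0.0:4545/api/score", json=params).json()
    for metric, value in scores.items():
        totals[metric] = totals.get(metric, 0.0) + value

averages = {metric: total / len(judged_requests) for metric, total in totals.items()}
print(averages)
```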
## Metrics:
For a given request (see the sketch after this list):
- [x] [precision at k](https://en.wikipedia.org/wiki/Precision_and_recall): the proportion of relevant results among the top k documents.
- [x] [recall at k](https://en.wikipedia.org/wiki/Precision_and_recall): the proportion of relevant documents found when looking only at the top k documents.
- [x] [DCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain): takes into account the relevance grade given by a user (discounted: logarithmic penalty by rank).
- [x] [MRR](https://en.wikipedia.org/wiki/Mean_reciprocal_rank): mean reciprocal rank (the inverse of the rank of the first relevant result).
- [x] [nDCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG): normalized discounted cumulative gain.
- [ ] [average precision score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.label_ranking_average_precision_score.html#sklearn.metrics.label_ranking_average_precision_score): multiplicative mean of precision and recall at k for k = 1 to n (similar to the F-score).
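For reference, the sketch below shows one common way to compute these metrics locally. It is not the repository's implementation, and the API's exact conventions (log base, normalization of the human scores, handling of missing documents) may differ, so its numbers need not match the example response above.

```python
import math
from typing import Dict, List

def precision_at_k(y_pred: List[str], y_true: List[str], k: int) -> float:
    """Fraction of the top-k predicted documents that are relevant."""
    top_k = y_pred[:k]
    return sum(1 for doc in top_k if doc in y_true) / k

def recall_at_k(y_pred: List[str], y_true: List[str], k: int) -> float:
    """Fraction of all relevant documents that appear in the top-k predictions."""
    top_k = y_pred[:k]
    return sum(1 for doc in top_k if doc in y_true) / len(y_true)

def mrr(y_pred: List[str], y_true: List[str]) -> float:
    """Inverse of the rank of the first relevant result (0 if none is found)."""
    for rank, doc in enumerate(y_pred, start=1):
        if doc in y_true:
            return 1.0 / rank
    return 0.0

def dcg_at_k(y_pred: List[str], y_score: Dict[str, float], k: int) -> float:
    """Discounted cumulative gain: graded relevance with a logarithmic rank penalty."""
    return sum(
        y_score.get(doc, 0.0) / math.log2(rank + 1)
        for rank, doc in enumerate(y_pred[:k], start=1)
    )

def ndcg_at_k(y_pred: List[str], y_score: Dict[str, float], k: int) -> float:
    """DCG normalized by the DCG of the ideal (score-sorted) ranking."""
    ideal_order = sorted(y_score, key=y_score.get, reverse=True)
    ideal_dcg = dcg_at_k(ideal_order, y_score, k)
    return dcg_at_k(y_pred, y_score, k) / ideal_dcg if ideal_dcg else 0.0
```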