Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Repository dedicated to evaluating a ranking model against ground-truth ranks
https://github.com/armandgiraud/evaluatesearch
learning-to-rank nlp
Last synced: 6 days ago
Repository dedicated to evaluating a ranking model against ground-truth ranks
- Host: GitHub
- URL: https://github.com/armandgiraud/evaluatesearch
- Owner: ArmandGiraud
- Created: 2020-02-07T18:11:07.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2020-02-08T09:48:01.000Z (almost 5 years ago)
- Last Synced: 2024-01-17T20:34:13.963Z (10 months ago)
- Topics: learning-to-rank, nlp
- Language: Python
- Size: 1.34 MB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Evaluate Semantic Search
These scripts help evaluate the quality of a ranking model for the [Code du travail Numérique](https://code.travail.gouv.fr) search engine against manually labelled data, a.k.a. the [Datafiller](https://datafiller.num.social.gouv.fr/).
### Install:
`pip install -r requirements.txt`

### Run on [FlauBert](https://github.com/getalp/Flaubert):
`python eval.py`
### Run on [CamemBert](https://camembert-model.fr/):
`to do`

### Run on [USE](https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/2):
`to do`

## Evaluate your own model:
Create a new file `yourModel.py` that defines a `Predictor` class with:
- a `predict` method which, given a string query, returns k slugs taken from the slugs parameter
- `name` and `params` attributes

See [Flaubertmodel.py](https://github.com/ArmandGiraud/EvaluateSearch/blob/40e1fb956d93059adfe7f58a54309a516cd9fb71/Flaubertmodel.py#L25) for an example, or the sketch below.
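Assuming the interface described above, here is a minimal sketch of such a class; the slug list, the word-overlap scoring, and the attribute values are illustrative placeholders, not code from this repository:

```python
# yourModel.py — minimal sketch of a Predictor (hypothetical example).

class Predictor:
    """Toy ranking model: scores slugs by word overlap with the query."""

    def __init__(self, slugs):
        self.name = "dummy-overlap"     # model identifier (placeholder)
        self.params = {"k_default": 5}  # free-form parameter description (placeholder)
        self.slugs = slugs              # candidate slugs to rank

    def predict(self, query, k=5):
        """Given a string query, return the top-k slugs from self.slugs."""
        q_words = set(query.lower().split())
        ranked = sorted(
            self.slugs,
            key=lambda slug: len(q_words & set(slug.split("-"))),
            reverse=True,
        )
        return ranked[:k]


if __name__ == "__main__":
    model = Predictor(["conges-payes", "rupture-conventionnelle", "periode-essai"])
    print(model.predict("duree des conges payes", k=2))  # -> ['conges-payes', ...]
```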