{"id":20837732,"url":"https://github.com/astrazeneca/rexmex","last_synced_at":"2025-04-04T10:10:00.706Z","repository":{"id":40501735,"uuid":"420948708","full_name":"AstraZeneca/rexmex","owner":"AstraZeneca","description":"A general purpose recommender metrics library for fair evaluation.","archived":false,"fork":false,"pushed_at":"2023-08-22T09:22:20.000Z","size":2813,"stargazers_count":280,"open_issues_count":3,"forks_count":25,"subscribers_count":7,"default_branch":"main","last_synced_at":"2025-03-27T21:39:38.835Z","etag":null,"topics":["coverage","deep-learning","evaluation","machine-learning","metric","metrics","mrr","personalization","precision","rank","ranking","recall","recommender","recommender-system","recsys","rsquared"],"latest_commit_sha":null,"homepage":"https://rexmex.readthedocs.io/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/AstraZeneca.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2021-10-25T08:56:25.000Z","updated_at":"2025-03-07T01:24:58.000Z","dependencies_parsed_at":"2024-01-03T02:30:17.151Z","dependency_job_id":"7aac2e9e-b5bb-4421-a48e-5f732068244c","html_url":"https://github.com/AstraZeneca/rexmex","commit_stats":{"total_commits":465,"total_committers":10,"mean_commits":46.5,"dds":0.3870967741935484,"last_synced_commit":"4b0dd419c10a548452b9f50f587f4d740a65ff03"},"previous_names":[],"tags_count":17,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AstraZeneca%2Frexmex","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositori
es/AstraZeneca%2Frexmex/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AstraZeneca%2Frexmex/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AstraZeneca%2Frexmex/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/AstraZeneca","download_url":"https://codeload.github.com/AstraZeneca/rexmex/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247157283,"owners_count":20893220,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["coverage","deep-learning","evaluation","machine-learning","metric","metrics","mrr","personalization","precision","rank","ranking","recall","recommender","recommender-system","recsys","rsquared"],"created_at":"2024-11-18T01:08:24.259Z","updated_at":"2025-04-04T10:10:00.687Z","avatar_url":"https://github.com/AstraZeneca.png","language":"Python","readme":"![Version](https://badge.fury.io/py/rexmex.svg?style=plastic)\n[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![repo size](https://img.shields.io/github/repo-size/AstraZeneca/rexmex.svg)](https://github.com/AstraZeneca/rexmex/archive/master.zip)\n[![build badge](https://github.com/AstraZeneca/rexmex/workflows/CI/badge.svg)](https://github.com/AstraZeneca/rexmex/actions?query=workflow%3ACI)\n[![codecov](https://codecov.io/gh/AstraZeneca/rexmex/branch/main/graph/badge.svg?token=cYgAejRA0Z)](https://codecov.io/gh/AstraZeneca/rexmex)\n\n\u003cp align=\"center\"\u003e\n  \u003cimg 
width=\"90%\" src=\"https://github.com/AstraZeneca/rexmex/blob/main/rexmex_small.jpg?raw=true?sanitize=true\" /\u003e\n\u003c/p\u003e\n\n--------------------------------------------------------------------------------\n\n**reXmeX** is a recommender system evaluation metric library.\n\nPlease look at the **[Documentation](https://rexmex.readthedocs.io/en/latest/)** and **[External Resources](https://rexmex.readthedocs.io/en/latest/notes/resources.html)**.\n\n**reXmeX** consists of utilities for recommender system evaluation. First, it provides a comprehensive collection of metrics for the evaluation of recommender systems. Second, it includes a variety of methods for reporting and plotting the performance results. The implemented metrics cover a range of well-known and newly proposed metrics from data mining ([ICDM](http://icdm2019.bigke.org/), [CIKM](http://www.cikm2019.net/), [KDD](https://www.kdd.org/kdd2020/)) conferences and prominent journals.\n\n**Citing**\n\nIf you find *RexMex* useful in your research, please consider adding the following citation:\n\n```bibtex\n@inproceedings{rexmex,\n       title = {{rexmex: A General Purpose Recommender Metrics Library for Fair Evaluation.}},\n       author = {Benedek Rozemberczki and Sebastian Nilsson and Piotr Grabowski and Charles Tapley Hoyt and Gavin Edwards},\n       year = {2021},\n}\n```\n--------------------------------------------------------------------------------\n\n**An introductory example**\n\nThe following example loads a synthetic dataset which has the mandatory `y_true` and `y_score` keys. The dataset has binary labels and predicted probability scores. We read the dataset and define a default `ClassificationMetricSet` instance for the evaluation of the predictions. 
Using this metric set, we create a score card and compute the predictive performance metrics.\n\n```python\nfrom rexmex import ClassificationMetricSet, DatasetReader, ScoreCard\n\nreader = DatasetReader()\nscores = reader.read_dataset()\n\nmetric_set = ClassificationMetricSet()\n\nscore_card = ScoreCard(metric_set)\n\nreport = score_card.get_performance_metrics(scores[\"y_true\"], scores[\"y_score\"])\n```\n\n--------------------------------------------------------------------------------\n\n**An advanced example**\n\nThe following, more advanced example loads the same synthetic dataset, which has the `source_id`, `target_id`, `source_group`, and `target_group` keys in addition to the mandatory `y_true` and `y_score`. Using the `source_group` key we group the predictions and return a performance metric report.\n\n```python\nfrom rexmex import ClassificationMetricSet, DatasetReader, ScoreCard\n\nreader = DatasetReader()\nscores = reader.read_dataset()\n\nmetric_set = ClassificationMetricSet()\n\nscore_card = ScoreCard(metric_set)\n\nreport = score_card.generate_report(scores, grouping=[\"source_group\"])\n```\n\n--------------------------------------------------------------------------------\n\n**Scorecard**\n\nA **rexmex** score card allows reporting, plotting, and saving recommender system performance metrics. Our framework provides 7 rating, 38 classification, 18 ranking, and 2 coverage metrics.\n\n**Metric Sets**\n\nMetric sets allow users to calculate a range of evaluation metrics for a pair of ground-truth and predicted label vectors. 
We provide a general `MetricSet` class and specialized metric sets with pre-set metrics, covering the following general categories:\n\n- **Ranking**\n- **Rating**\n- **Classification**\n- **Coverage**\n\n--------------------------------------------------------------------------------\n\n**Ranking Metric Set**\n\n* **[Normalized Distance Based Performance Measure (NDPM)](https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/%28SICI%291097-4571%28199503%2946%3A2%3C133%3A%3AAID-ASI6%3E3.0.CO%3B2-Z)**\n* **[Discounted Cumulative Gain (DCG)](https://en.wikipedia.org/wiki/Discounted_cumulative_gain)**\n* **[Normalized Discounted Cumulative Gain (NDCG)](https://en.wikipedia.org/wiki/Discounted_cumulative_gain)**\n* **[Reciprocal Rank](https://en.wikipedia.org/wiki/Mean_reciprocal_rank)**\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExpand to see all ranking metrics in the metric set.\u003c/b\u003e\u003c/summary\u003e\n\n* **[Mean Reciprocal Rank (MRR)](https://en.wikipedia.org/wiki/Mean_reciprocal_rank)**\n* **[Spearman's Rho](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient)**\n* **[Kendall Tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient)**\n* **[HITS@k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval))**\n* **[Novelty](https://www.sciencedirect.com/science/article/pii/S163107051930043X)**\n* **[Average Recall @ k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval))**\n* **[Mean Average Recall @ k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval))**\n* **[Average Precision @ k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval))**\n* **[Mean Average Precision @ k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval))**\n* **[Personalisation](http://www.mavir.net/docs/tfm-vargas-sandoval.pdf)**\n* **[Intra List Similarity](http://www.mavir.net/docs/tfm-vargas-sandoval.pdf)**\n\n\u003c/details\u003e\n\n--------------------------------------------------------------------------------\n\n**Rating Metric Set**\n\nThese metrics assume that items are scored explicitly and ratings are predicted by a regression model.\n\n* **[Mean Squared Error (MSE)](https://en.wikipedia.org/wiki/Mean_squared_error)**\n* **[Root Mean Squared Error (RMSE)](https://en.wikipedia.org/wiki/Mean_squared_error)**\n* **[Mean Absolute Error (MAE)](https://en.wikipedia.org/wiki/Mean_absolute_error)**\n* **[Mean Absolute Percentage Error (MAPE)](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error)**\n\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExpand to see all rating metrics in the metric set.\u003c/b\u003e\u003c/summary\u003e\n\n* **[Symmetric Mean Absolute Percentage Error (SMAPE)](https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error)**\n* **[Pearson Correlation](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient)**\n* **[Coefficient of Determination](https://en.wikipedia.org/wiki/Coefficient_of_determination)**\n\n\u003c/details\u003e\n\n--------------------------------------------------------------------------------\n\n**Classification Metric Set**\n\nThese metrics assume that the items are scored with raw probabilities (these can be binarized).\n\n* **[Precision (or Positive Predictive Value)](https://en.wikipedia.org/wiki/Precision_and_recall)**\n* **[Recall (Sensitivity, Hit Rate, or True Positive Rate)](https://en.wikipedia.org/wiki/Precision_and_recall)**\n* **[Area Under the Precision Recall Curve (AUPRC)](https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.13140)**\n* **[Area Under the Receiver Operating Characteristic (AUROC)](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)**\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExpand to see all classification metrics in the metric set.\u003c/b\u003e\u003c/summary\u003e\n\n* **[F-1 Score](https://en.wikipedia.org/wiki/F-score)**\n* **[Average Precision](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html)**\n* **[Specificity (Selectivity or True Negative Rate)](https://en.wikipedia.org/wiki/Precision_and_recall)**\n* **[Matthews Correlation](https://en.wikipedia.org/wiki/Precision_and_recall)**\n* **[Accuracy](https://en.wikipedia.org/wiki/Precision_and_recall)**\n* **[Balanced Accuracy](https://en.wikipedia.org/wiki/Precision_and_recall)**\n* **[Fowlkes-Mallows Index](https://en.wikipedia.org/wiki/Precision_and_recall)**\n\n\u003c/details\u003e\n\n--------------------------------------------------------------------------------\n\n**Coverage Metric Set**\n\nThese metrics measure how well the recommender system covers the available items in the catalog and the possible users.\nIn other words, they measure the diversity of the predictions.\n\n* **[Item Coverage](https://www.bgu.ac.il/~shanigu/Publications/EvaluationMetrics.17.pdf)**\n* **[User Coverage](https://www.bgu.ac.il/~shanigu/Publications/EvaluationMetrics.17.pdf)**\n\n\n--------------------------------------------------------------------------------\n**Documentation and Reporting Issues**\n\nHead over to our [documentation](https://rexmex.readthedocs.io) to find out more about installation and data handling, a full list of implemented methods, and datasets.\n\nIf you notice anything unexpected, please open an [issue](https://github.com/AstraZeneca/rexmex/issues) and let us know. 
If you are missing a specific method, feel free to open a [feature request](https://github.com/AstraZeneca/rexmex/issues).\nWe are committed to constantly making RexMex even better.\n\n--------------------------------------------------------------------------------\n\n**Installation via the command line**\n\nRexMex can be installed with the following command after the repo is cloned.\n\n```sh\n$ pip install .\n```\n\nUse `-e/--editable` when developing.\n\n**Installation via pip**\n\nRexMex can be installed with the following pip command.\n\n```sh\n$ pip install rexmex\n```\n\nAs we create new releases frequently, upgrading the package regularly can be beneficial.\n\n```sh\n$ pip install rexmex --upgrade\n```\n\n--------------------------------------------------------------------------------\n\n**Running tests**\n\nTests can be run with `tox` as follows:\n\n```sh\n$ pip install tox\n$ tox -e py\n```\n\n--------------------------------------------------------------------------------\n\n**Citation**\n\nIf you use RexMex in a scientific publication, we would appreciate citations. Please see GitHub's built-in citation tool.\n\n--------------------------------------------------------------------------------\n\n**License**\n\n- [Apache-2.0 License](https://github.com/AZ-AI/rexmex/blob/master/LICENSE)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fastrazeneca%2Frexmex","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fastrazeneca%2Frexmex","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fastrazeneca%2Frexmex/lists"}