# On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods

[![Python 3.8](https://img.shields.io/badge/Python-3.8-2d618c?logo=python)](https://docs.python.org/3.8/)
[![PyTorch](https://img.shields.io/badge/Made%20with-PyTorch-ee4c2c?logo=pytorch)](https://pytorch.org/docs/stable/index.html)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)

This repository contains the code for the paper
```
On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods
Max Berrendorf, Evgeniy Faerman, Laurent Vermue and Volker Tresp
https://arxiv.org/abs/2002.06914
```

# Installation
Set up and activate a virtual environment:
```shell script
python3.8 -m venv ./venv
source ./venv/bin/activate
```

Install the requirements (inside this virtual environment):
```shell script
pip install -U pip
pip install -U -r requirements.txt
```

## MLFlow
To track results on an MLFlow server, first start one by running
```shell script
mlflow server
```

# GCN experiments on DBP15k
To run the experiments on DBP15k, use
```shell script
(venv) PYTHONPATH=./src python3 executables/adjusted_ranking_experiments.py
```
The results are logged to the running MLFlow instance.
Once the experiments have finished, you can summarize the results and reproduce the visualizations with
```shell script
(venv) python3 executables/summarize.py
```

# Degree investigations
To rerun the experiments investigating the correlation between node degree, matchings, and entity representation norms, run
```shell script
(venv) PYTHONPATH=./src python3 executables/degree_investigation.py
```
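As background for interpreting the logged metrics, the following is a minimal, illustrative sketch of rank-based evaluation as commonly defined (mean rank, MRR, hits@k), including the tie handling via optimistic/pessimistic/realistic ranks that the paper's ambiguity discussion revolves around. The function `rank_metrics` and its exact signature are assumptions for illustration, not this repository's implementation.

```python
import numpy as np

def rank_metrics(scores: np.ndarray, true_idx: np.ndarray, ks=(1, 10)):
    """Compute rank-based metrics for a batch of queries.

    NOTE: illustrative helper, not part of this repository.
    scores:   (n_queries, n_candidates) similarity scores, higher is better.
    true_idx: (n_queries,) index of the correct candidate for each query.
    Ties are resolved with the "realistic" rank, i.e. the mean of the
    optimistic (best-case) and pessimistic (worst-case) rank over ties.
    """
    # Score of the correct candidate for each query, kept as a column vector.
    true_scores = scores[np.arange(len(scores)), true_idx][:, None]
    # Optimistic rank: 1 + number of strictly better candidates.
    optimistic = 1 + (scores > true_scores).sum(axis=1)
    # Pessimistic rank: number of candidates scoring at least as well
    # (this count includes the correct candidate itself).
    pessimistic = (scores >= true_scores).sum(axis=1)
    realistic = 0.5 * (optimistic + pessimistic)
    metrics = {
        "mean_rank": realistic.mean(),
        "mrr": (1.0 / realistic).mean(),
    }
    for k in ks:
        metrics[f"hits@{k}"] = (realistic <= k).mean()
    return metrics
```

Without ties, all three rank definitions coincide; with tied scores they diverge, which is exactly the ambiguity that makes reported numbers hard to compare across implementations.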