https://github.com/mberr/rank-based-evaluation
Code for the paper "On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods" (https://arxiv.org/abs/2002.06914)
entity-alignment evaluation-metrics knowledge-graph
- Host: GitHub
- URL: https://github.com/mberr/rank-based-evaluation
- Owner: mberr
- License: mit
- Created: 2020-10-30T18:30:18.000Z (over 4 years ago)
- Default Branch: main
- Last Pushed: 2021-03-08T14:34:43.000Z (about 4 years ago)
- Last Synced: 2025-03-24T12:56:18.629Z (about 2 months ago)
- Topics: entity-alignment, evaluation-metrics, knowledge-graph
- Language: Python
- Homepage:
- Size: 71.3 KB
- Stars: 8
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods
[Python 3.8](https://docs.python.org/3.8/)
[PyTorch](https://pytorch.org/docs/stable/index.html)
[MIT License](https://opensource.org/licenses/MIT)

This repository contains the code for the paper
```
On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods
Max Berrendorf, Evgeniy Faerman, Laurent Vermue and Volker Tresp
```
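For orientation, the sketch below is illustrative only and not code from this repository: it shows the kind of rank-based metrics the paper examines, namely mean rank, mean reciprocal rank, Hits@k, and an expectation-adjusted mean rank index that compares the observed mean rank against that of a uniformly random scorer. Function and variable names are assumptions; see the paper for the exact definitions.

```python
# Illustrative sketch of rank-based metrics -- not the repository's implementation.
# Assumes 1-based ranks and, for each query, the number of candidates it was ranked against.
from typing import Sequence


def rank_metrics(ranks: Sequence[int], num_candidates: Sequence[int], k: int = 10) -> dict:
    n = len(ranks)
    mr = sum(ranks) / n                          # mean rank
    mrr = sum(1.0 / r for r in ranks) / n        # mean reciprocal rank
    hits_at_k = sum(r <= k for r in ranks) / n   # Hits@k
    # Expected rank of a uniformly random scorer over c candidates: (c + 1) / 2.
    expected_mr = sum((c + 1) / 2 for c in num_candidates) / n
    # Expectation-adjusted mean rank index: 1 for a perfect ranking, about 0 for random scoring.
    amri = 1.0 - (mr - 1.0) / (expected_mr - 1.0)
    return {"MR": mr, "MRR": mrr, f"Hits@{k}": hits_at_k, "AMRI": amri}


# Example: three queries, ranked against candidate sets of different sizes.
print(rank_metrics(ranks=[1, 3, 120], num_candidates=[500, 500, 1000]))
```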
# Installation
Set up and activate a virtual environment:
```shell script
python3.8 -m venv ./venv
source ./venv/bin/activate
```

Install requirements (in this virtual environment):
```shell script
pip install -U pip
pip install -U -r requirements.txt
```

## MLFlow
To track results to an MLFlow server, first start one by running
```shell script
mlflow server
```
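The server started above listens on http://127.0.0.1:5000 by default. As a generic MLflow illustration (not code from this repository; whether the experiment scripts pick up the tracking URI automatically is an assumption), the snippet below shows how an MLflow client is pointed at that server. Setting the MLFLOW_TRACKING_URI environment variable achieves the same.

```python
# Generic MLflow usage sketch (not this repository's code): point the client at the
# local server started above and log a small test run to verify the connection.
import mlflow

mlflow.set_tracking_uri("http://127.0.0.1:5000")  # default address of `mlflow server`

with mlflow.start_run(run_name="connectivity-check"):
    mlflow.log_param("example_param", 42)
    mlflow.log_metric("example_metric", 0.5)
```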
# GCN experiments on DBP15k
To run the experiments on DBP15k, use
```shell script
(venv) PYTHONPATH=./src python3 executables/adjusted_ranking_experiments.py
```
The results are logged to the running MLFlow instance.
Once finished, you can summarize the results and reproduce the visualization by running
```shell script
(venv) python3 executables/summarize.py
```

# Degree investigations
To rerun the experiments investigating the correlation between node degree, matchings, and entity representation norms, run
```shell script
(venv) PYTHONPATH=./src python3 executables/degree_investigation.py
```
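To make concrete what such an investigation involves, the snippet below is a purely illustrative sketch rather than the repository's code (which lives in `executables/degree_investigation.py`): it correlates hypothetical node degrees with the norms of hypothetical entity representations using Spearman's rank correlation. The variable names and the use of scipy are assumptions.

```python
# Illustrative sketch (not the repository's code): correlate node degree with
# the L2 norm of each entity's learned representation.
import numpy as np
from scipy.stats import spearmanr  # assumption: scipy is available

rng = np.random.default_rng(seed=0)
degrees = rng.integers(low=1, high=50, size=1000)   # hypothetical per-entity node degrees
embeddings = rng.normal(size=(1000, 128))           # hypothetical learned entity representations
norms = np.linalg.norm(embeddings, axis=1)          # one scalar norm per entity

correlation, p_value = spearmanr(degrees, norms)
print(f"Spearman correlation: {correlation:.3f} (p={p_value:.3g})")
```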