{"id":28700527,"url":"https://github.com/deepgraphlearning/knowledgegraphembedding","last_synced_at":"2025-06-14T11:08:20.410Z","repository":{"id":38206518,"uuid":"167231866","full_name":"DeepGraphLearning/KnowledgeGraphEmbedding","owner":"DeepGraphLearning","description":null,"archived":false,"fork":false,"pushed_at":"2023-12-15T23:43:27.000Z","size":32962,"stargazers_count":1314,"open_issues_count":7,"forks_count":270,"subscribers_count":24,"default_branch":"master","last_synced_at":"2025-05-01T23:35:52.008Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/DeepGraphLearning.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-01-23T18:21:00.000Z","updated_at":"2025-04-29T16:26:05.000Z","dependencies_parsed_at":"2024-11-12T13:48:50.264Z","dependency_job_id":null,"html_url":"https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/DeepGraphLearning/KnowledgeGraphEmbedding","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeepGraphLearning%2FKnowledgeGraphEmbedding","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeepGraphLearning%2FKnowledgeGraphEmbedding/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeepGraphLearning%2FKnowledgeGraphEmbedding/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeepGraphLearnin
g%2FKnowledgeGraphEmbedding/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/DeepGraphLearning","download_url":"https://codeload.github.com/DeepGraphLearning/KnowledgeGraphEmbedding/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeepGraphLearning%2FKnowledgeGraphEmbedding/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":259804865,"owners_count":22913903,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-06-14T11:08:14.850Z","updated_at":"2025-06-14T11:08:20.394Z","avatar_url":"https://github.com/DeepGraphLearning.png","language":"Python","readme":"\n# RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space\n**Introduction**\n\nThis is the PyTorch implementation of the [RotatE](https://openreview.net/forum?id=HkgEQnRqYQ) model for knowledge graph embedding (KGE). We provide a toolkit that gives state-of-the-art performance of several popular KGE models. 
The toolkit is quite efficient and can train a large KGE model within a few hours on a single GPU.\n\nA faster multi-GPU implementation of RotatE and other KGE models is available in [GraphVite](https://github.com/DeepGraphLearning/graphvite).\n\n**Implemented features**\n\nModels:\n - [x] RotatE\n - [x] pRotatE\n - [x] TransE\n - [x] ComplEx\n - [x] DistMult\n\nEvaluation Metrics:\n\n - [x] MRR, MR, HITS@1, HITS@3, HITS@10 (filtered)\n - [x] AUC-PR (for Countries data sets)\n\nLoss Functions:\n\n - [x] Uniform Negative Sampling\n - [x] Self-Adversarial Negative Sampling\n\n**Usage**\n\nKnowledge Graph Data:\n - *entities.dict*: a dictionary mapping entities to unique ids\n - *relations.dict*: a dictionary mapping relations to unique ids\n - *train.txt*: the KGE model is trained to fit this data set\n - *valid.txt*: create a blank file if no validation data is available\n - *test.txt*: the KGE model is evaluated on this data set\n\n**Train**\n\nFor example, this command trains a RotatE model on the FB15k dataset with GPU 0.\n```\nCUDA_VISIBLE_DEVICES=0 python -u codes/run.py --do_train \\\n --cuda \\\n --do_valid \\\n --do_test \\\n --data_path data/FB15k \\\n --model RotatE \\\n -n 256 -b 1024 -d 1000 \\\n -g 24.0 -a 1.0 -adv \\\n -lr 0.0001 --max_steps 150000 \\\n -save models/RotatE_FB15k_0 --test_batch_size 16 -de\n```\n   Check the argparse configuration in codes/run.py for more arguments and details.\n\n**Test**\n\n    CUDA_VISIBLE_DEVICES=$GPU_DEVICE python -u $CODE_PATH/run.py --do_test --cuda -init $SAVE\n\n**Reproducing the best results**\n\nTo reproduce the results in the ICLR 2019 paper [RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space](https://openreview.net/forum?id=HkgEQnRqYQ), you can run the bash commands in best_config.sh to get the best performance of RotatE, TransE, and ComplEx on five widely used datasets (FB15k, FB15k-237, wn18, wn18rr, Countries).\n\nThe run.sh script provides an easy way to search hyper-parameters:\n\n  
  bash run.sh train RotatE FB15k 0 0 1024 256 1000 24.0 1.0 0.0001 200000 16 -de\n\n**Speed**\n\nThe KGE models usually take about half an hour to run 10000 steps on a single GeForce GTX 1080 Ti GPU with the default configuration. These models need different values of max_steps to converge on different data sets:\n\n| Dataset | FB15k | FB15k-237 | wn18 | wn18rr | Countries S* |\n|-------------|-------------|-------------|-------------|-------------|-------------|\n|MAX_STEPS| 150000 | 100000 | 80000 | 80000 | 40000 |\n|TIME| 9 h | 6 h | 4 h | 4 h | 2 h |\n\n**Results of the RotatE model**\n\n| Dataset | FB15k | FB15k-237 | wn18 | wn18rr |\n|-------------|-------------|-------------|-------------|-------------|\n| MRR | .797 ± .001 | .337 ± .001 | .949 ± .000 | .477 ± .001 |\n| MR | 40 | 177 | 309 | 3340 |\n| HITS@1 | .746 | .241 | .944 | .428 |\n| HITS@3 | .830 | .375 | .952 | .492 |\n| HITS@10 | .884 | .533 | .959 | .571 |\n\n**Using the library**\n\nThe Python library is organized around three objects:\n\n - TrainDataset (dataloader.py): prepares the data stream for training\n - TestDataset (dataloader.py): prepares the data stream for evaluation\n - KGEModel (model.py): calculates triple scores and provides the train/test API\n\nThe run.py file contains the main function, which parses arguments, reads data, initializes the model, and runs the training loop.\n\nAdd your own model to model.py like:\n```\ndef TransE(self, head, relation, tail, mode):\n    if mode == 'head-batch':\n        score = head + (relation - tail)\n    else:\n        score = (head + relation) - tail\n\n    score = self.gamma.item() - torch.norm(score, p=1, dim=2)\n    return score\n```\n\n**Citation**\n\nIf you use this code, please cite the following [paper](https://openreview.net/forum?id=HkgEQnRqYQ):\n\n```\n@inproceedings{\n sun2018rotate,\n title={RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space},\n author={Zhiqing Sun and Zhi-Hong Deng and Jian-Yun Nie and Jian Tang},\n
booktitle={International Conference on Learning Representations},\n year={2019},\n url={https://openreview.net/forum?id=HkgEQnRqYQ},\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepgraphlearning%2Fknowledgegraphembedding","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeepgraphlearning%2Fknowledgegraphembedding","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepgraphlearning%2Fknowledgegraphembedding/lists"}