https://github.com/anvesham/huggingace_for_knowledge_graph_completion
Designed a pipeline for completing the enterprise Knowledge Graph at R2 Factory, reducing the number of incorrect triples and predicting missing relations and entities.
- Host: GitHub
- URL: https://github.com/anvesham/huggingace_for_knowledge_graph_completion
- Owner: AnveshaM
- Created: 2023-02-22T00:01:25.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-02-22T00:13:22.000Z (over 2 years ago)
- Last Synced: 2025-01-16T02:25:35.203Z (5 months ago)
- Language: Jupyter Notebook
- Size: 406 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Huggingace_for_Knowledge_Graph_Completion
Designed a pipeline for completing the enterprise Knowledge Graph at R2 Factory, reducing the number of incorrect triples and predicting missing relations and entities.

Knowledge Graph Completion involves the following tasks:
1. Entity/Link Prediction: predicting missing entities, i.e. the nodes of the Knowledge Graph.
2. Relation Prediction: predicting missing links, i.e. the edges of the Knowledge Graph.
3. Triple Classification: verifying whether a triple extracted from the input text by the pretrained language model is correct.

This project was run on Google Colab. Experiments were conducted through August, September, and October, using on average around 250 compute units (cloud virtual machines) per month. All VMs used the High-RAM runtime configuration, with a maximum of 37.8 GB of memory available. The notebook used a premium A100-SXM4-40GB GPU runtime for the advanced GPU setting. All three tasks for both language models (DeBERTa and XLNet) were carried out in a single-GPU training environment. To use the GPU in Google Colab, ensure that the PyTorch CUDA toolkit is installed correctly.