https://github.com/Akirato/LLM-KG-Reasoning
We want to try to evaluate LLMs using Knowledge Graphs.
- Host: GitHub
- URL: https://github.com/Akirato/LLM-KG-Reasoning
- Owner: Akirato
- License: gpl-3.0
- Created: 2023-03-31T00:02:12.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-05-02T01:57:59.000Z (over 1 year ago)
- Last Synced: 2024-10-09T10:05:54.874Z (about 1 month ago)
- Language: Python
- Size: 608 KB
- Stars: 101
- Watchers: 6
- Forks: 9
- Open Issues: 5
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
Awesome Lists containing this project
- awesome-logical-query - LARK
README
# Reasoning over Knowledge Graphs using Large Language Models
![Overview of LARK model.](./assets/model.png)
### Abstract
Reasoning over knowledge graphs (KGs) is a challenging task that requires a deep
understanding of the complex relationships between entities and the underlying
logic of their relations. Current approaches rely on learning geometries to embed
entities in vector space for logical query operations, but they suffer from subpar
performance on complex queries and dataset-specific representations. In this paper,
we propose a novel decoupled approach, Language-guided Abstract Reasoning
over Knowledge graphs (LARK), that formulates complex KG reasoning as a
combination of contextual KG search and abstract logical query reasoning, to
leverage the strengths of graph extraction algorithms and large language models
(LLMs), respectively. Our experiments demonstrate that the proposed approach
outperforms state-of-the-art KG reasoning methods on standard benchmark datasets
across several logical query constructs, with significant performance gains for
queries of higher complexity. Furthermore, we show that the performance of our
approach improves proportionally to the increase in size of the underlying LLM,
enabling the integration of the latest advancements in LLMs for logical reasoning
over KGs. Our work presents a new direction for addressing the challenges of
complex KG reasoning and paves the way for future research in this area.
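The abstract describes LARK's two-stage decomposition: a contextual KG search that extracts the query-relevant portion of the graph, followed by abstract logical reasoning carried out by an LLM over that extracted context. The sketch below illustrates this decoupling in Python; it is not the repository's code, and the `networkx` graph layout, prompt format, and `call_llm` callable are assumptions made for illustration.

```python
import networkx as nx

def contextual_kg_search(kg: nx.MultiDiGraph, anchors: set[str], hops: int = 2) -> nx.MultiDiGraph:
    """Stage 1: extract the k-hop neighborhood of the query's anchor entities.

    Stands in for the paper's graph-extraction step; a real system would
    also prune by the relation types appearing in the logical query.
    """
    nodes = set(anchors)
    frontier = set(anchors)
    for _ in range(hops):
        nxt: set[str] = set()
        for n in frontier:
            nxt.update(kg.successors(n))
            nxt.update(kg.predecessors(n))
        frontier = nxt - nodes
        nodes |= nxt
    return kg.subgraph(nodes).copy()

def serialize_context(subgraph: nx.MultiDiGraph) -> str:
    """Flatten the extracted subgraph into (head, relation, tail) lines for the prompt."""
    return "\n".join(
        f"({h}, {d.get('relation', '?')}, {t})" for h, t, d in subgraph.edges(data=True)
    )

def reason_over_kg(kg: nx.MultiDiGraph, logical_query: str, anchors: set[str], call_llm) -> str:
    """Stage 2: hand the abstract logical query plus serialized context to an LLM.

    `call_llm` is a hypothetical callable (prompt -> completion); swap in any LLM client.
    """
    context = serialize_context(contextual_kg_search(kg, anchors))
    prompt = (
        "Given the knowledge-graph triples below, answer the logical query.\n"
        f"Triples:\n{context}\n"
        f"Query: {logical_query}\n"
        "Answer with the set of satisfying entities."
    )
    return call_llm(prompt)
```

For a one-hop projection query such as `?v : directed(Nolan, ?v)`, the pipeline would anchor at `Nolan`, extract its neighborhood triples, and let the LLM enumerate the entities satisfying the relation; deeper, multi-operator queries reuse the same two stages over a larger extracted context.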