https://github.com/logikon-ai/logikon
Analyzing and scoring reasoning traces of LLMs
- Host: GitHub
- URL: https://github.com/logikon-ai/logikon
- Owner: logikon-ai
- License: agpl-3.0
- Created: 2023-09-10T09:01:23.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-09-01T17:42:45.000Z (about 1 year ago)
- Last Synced: 2024-11-05T13:43:14.634Z (12 months ago)
- Topics: ai, argument-mapping, argument-mining, argumentation, critical-thinking, explainable-ai, llmops, llms, metrics, mlops, observability, reasoning, reasoning-agent, reliable-ai
- Language: Python
- Homepage: https://logikon.ai
- Size: 6.04 MB
- Stars: 39
- Watchers: 5
- Forks: 0
- Open Issues: 10
Metadata Files:
- Readme: README.md
- License: LICENSE
# Logikon
*AI Analytics for Natural Language Reasoning.*
🕹️ [Guided Reasoning™️ Demo](https://huggingface.co/spaces/logikon/benjamin-chat) | 📄 [Technical Report](https://arxiv.org/abs/2408.16331)
> [!NOTE]
> 🎉 We're excited to announce the release of `Logikon 0.2.0`, a major update to our analytics toolbox for natural-language reasoning. Main changes:
>
> * All LLM-based argument analysis pipelines are now built with _LCEL/LangChain_ (instead of LMQL).
> * We're introducing _Guided Reasoning™️_ (an abstract interface and simple implementations) for walking arbitrary conversational AI agents through complex reasoning processes.
> * AGPL license.
>
> Our *short-term priorities* are housekeeping, code cleaning, and documentation. Don't hesitate to reach out if you have any questions or feedback, or if you'd like to contribute to the project.
## Installation
```
pip install git+https://github.com/logikon-ai/logikon@v0.2.0
```

## Basic usage
```python
import os
from logikon import ascore, ScoreConfig

# 🔧 config
config = ScoreConfig(
    global_kwargs={
        "expert_model": "meta-llama/Meta-Llama-3-70B-Instruct",
        "inference_server_url": "https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct",
        "llm_backend": "HFChat",
        "api_key": os.environ["HF_TOKEN"],
        "classifier_kwargs": {
            "model_id": "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
            "inference_server_url": "https://api-inference.huggingface.co/models/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
            "api_key": os.environ["HF_TOKEN"],
            "batch_size": 8,
        },
    }
)

# 📝 input to evaluate
issue = "Should I eat animals?"
reasoning = "No! Animals can suffer. Animal farming causes climate heating. Meat is not healthy."

# 🏃‍♀️ run argument analysis (`await` requires an async context, e.g. a notebook)
result = await ascore(
    prompt=issue,
    completion=reasoning,
    config=config,
)
```

## Guided Reasoning™️
```mermaid
sequenceDiagram
    autonumber
    actor User
    participant C as Client LLM
    User->>+C: Problem statement
    create participant G as Guide LLM
    C->>+G: Problem statement
    loop
        G->>+C: Instructions...
        C->>+G: Reasoning traces...
        G->>+G: Evaluation
    end
    destroy G
    G->>+C: Answer + Protocol
    C->>+User: Answer (and Protocol)
    User->>+C: Why?
    C->>+User: Explanation (based on Protocol)
```
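The loop in the diagram above (guide instructs, client reasons, guide evaluates, repeat until satisfied, then return answer plus protocol) can be sketched in plain Python. This is an illustrative mock, not Logikon's API: all names (`guided_reasoning`, `client_llm`, `guide_llm_instruct`, `guide_llm_evaluate`, `GuideResult`) are hypothetical stand-ins for real LLM calls.

```python
from dataclasses import dataclass, field


@dataclass
class GuideResult:
    """Final answer plus the recorded reasoning protocol (illustrative type)."""
    answer: str
    protocol: list = field(default_factory=list)


def client_llm(instruction: str) -> str:
    # Stand-in for the client LLM producing a reasoning trace.
    return f"reasoning trace for: {instruction}"


def guide_llm_instruct(problem: str, round_: int) -> str:
    # Stand-in for the guide LLM issuing the next instruction.
    return f"step {round_}: analyze '{problem}'"


def guide_llm_evaluate(trace: str) -> bool:
    # Stand-in evaluation: here we simply accept once round 2 is reached.
    return "step 2" in trace


def guided_reasoning(problem: str, max_rounds: int = 5) -> GuideResult:
    """Mock of the guide/client loop from the sequence diagram."""
    protocol = []
    for round_ in range(1, max_rounds + 1):
        instruction = guide_llm_instruct(problem, round_)  # guide -> client
        trace = client_llm(instruction)                    # client -> guide
        protocol.append(trace)                             # guide records trace
        if guide_llm_evaluate(trace):                      # guide evaluates
            break
    # Guide hands back answer and protocol; the client can later use the
    # protocol to explain its answer when the user asks "Why?".
    answer = f"answer to '{problem}' (after {len(protocol)} rounds)"
    return GuideResult(answer=answer, protocol=protocol)


result = guided_reasoning("Should I eat animals?")
print(result.answer)
```

The key design point the diagram encodes is that the guide is transient (it is destroyed after the loop), while the protocol it produced persists with the client, so explanations to the user are grounded in the recorded reasoning rather than regenerated ad hoc.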