https://github.com/kazkozdev/deepchain-refinement
A multi-stage prompt refinement system leveraging chain-of-thought reasoning to enhance responses, reduce hallucinations, and enable progressive validation with intelligent synthesis.
- Host: GitHub
- URL: https://github.com/kazkozdev/deepchain-refinement
- Owner: KazKozDev
- License: MIT
- Created: 2024-11-08T10:08:10.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2025-03-23T20:14:32.000Z (10 months ago)
- Last Synced: 2025-03-23T21:22:59.100Z (10 months ago)
- Topics: ai-enhancements, chain-of-thought, deep-learning, llm, machine-learning, prompt-engineering, prompt-refinement, reasoning
- Language: Python
- Homepage:
- Size: 6.31 MB
- Stars: 3
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
## README
DeepChain is a Python-based pipeline that applies a three-stage refinement process to LLM outputs (using gemma2:9b via Ollama). By combining chain-of-thought prompting, context-aware improvements, and fact-checking, it produces more accurate and in-depth answers from a compact model. A minimal sketch of the three stages follows the feature list below.
## Key Features
- **Three-Stage Refinement Pipeline**
1. **Basic Analysis:** Generates an initial ("naive") response to your prompt.
2. **Contextual Refinement:** Adds context, performs intent analysis, and fact-checks the initial output.
3. **Final Synthesis:** Integrates and cross-validates information from earlier stages into a polished, comprehensive answer.
- **Hallucination Reduction**
Intermediate checks and cross-validation between stages cut down on hallucinated claims and factual errors.
- **Easy Setup & Lightweight**
Requires only Python 3.8+, Ollama, and the gemma2:9b model; no heavyweight frameworks.
- **Simple, Clear Code**
All core logic lives in a single `src/main.py`, making it easy to understand and extend.
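
As a rough illustration of these three stages, the sketch below chains three calls to gemma2:9b through Ollama's standard `/api/generate` endpoint. It is not the repository's `src/main.py`: the `ask` and `refine` helpers and the stage prompts are hypothetical, kept only to show the draft, review, and synthesis flow described above.

```python
# Illustrative three-stage refinement sketch (not the repository's src/main.py).
# Assumes a local Ollama server at the default address with gemma2:9b pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's generate endpoint
MODEL = "gemma2:9b"


def ask(prompt: str) -> str:
    """Send a single non-streaming generation request to Ollama."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]


def refine(user_query: str) -> str:
    # Stage 1 - Basic Analysis: a first, "naive" answer.
    draft = ask(f"Answer the question as directly as you can:\n{user_query}")

    # Stage 2 - Contextual Refinement: analyze intent, add context, flag doubtful claims.
    review = ask(
        "You are reviewing a draft answer.\n"
        f"Question: {user_query}\n"
        f"Draft: {draft}\n"
        "Identify the user's intent, add missing context, and point out any "
        "statements that look unsupported or likely wrong. Think step by step."
    )

    # Stage 3 - Final Synthesis: merge the draft and the review into one answer,
    # keeping only claims that survived the cross-check.
    return ask(
        f"Question: {user_query}\n"
        f"Draft answer: {draft}\n"
        f"Review notes: {review}\n"
        "Write a single, well-structured final answer that incorporates the "
        "corrections and added context. Drop any claims flagged as doubtful."
    )


if __name__ == "__main__":
    print(refine("How many discs does Madonna have?"))
```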
---
## Demo / Usage Example

> In this example, the user asks the ambiguous question "How many discs does Madonna have?". Through DeepChain's refinement process, the compact Gemma2:9B model produces a comprehensive answer that categorizes Madonna's discography into studio albums, live recordings, and compilations, highlights her most significant releases, and supplies the context the ambiguous query needs. A 9B-parameter model would not typically exhibit this level of multi-stage reasoning on its own.
1. **Clone and Install**
```bash
git clone https://github.com/KazKozDev/deepchain-refinement.git
cd deepchain-refinement
pip install -r requirements.txt
```
2. **Run the Pipeline**
```bash
python src/main.py --prompt "How many discs does Madonna have?"
```
---
## Installation
1. **Ensure Python 3.8+ is installed**.
2. **Install Ollama**, pull the gemma2:9b model (`ollama pull gemma2:9b`), and confirm the Ollama server is running locally; a quick connectivity check is sketched after these steps.
3. **Clone this repository**:
```bash
git clone https://github.com/KazKozDev/deepchain-refinement.git
cd deepchain-refinement
```
4. **Install Python dependencies**:
```bash
pip install -r requirements.txt
```
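
Before the first run, it can help to confirm that the local Ollama server is reachable and that gemma2:9b has been pulled. The snippet below is a small optional check against Ollama's standard `/api/tags` listing endpoint; it is not part of the repository.

```python
# Optional sanity check: is Ollama up, and is gemma2:9b available locally?
import requests

try:
    resp = requests.get("http://localhost:11434/api/tags", timeout=5)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    if any(name.startswith("gemma2:9b") for name in models):
        print("Ollama is running and gemma2:9b is available.")
    else:
        print("Ollama is running, but gemma2:9b is missing; run: ollama pull gemma2:9b")
except requests.RequestException as exc:
    print(f"Could not reach Ollama at localhost:11434: {exc}")
```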
---
## Project Structure
```text
deepchain-refinement/
├── src/
│   └── main.py          # Core implementation with three refinement stages
├── requirements.txt     # Python dependencies
├── LICENSE              # MIT license text
└── README.md            # This file
```
---
If you like this project, please give it a star ⭐
For questions, feedback, or support, reach out to:
[Artem KK](https://www.linkedin.com/in/kazkozdev/) | MIT [LICENSE](LICENSE)