https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
Local, offline 7B LLM task orchestrator — analyzes urgency, debates assignment, balances load. Runs on RTX 3080/4090. Chaos mode included.
- Host: GitHub
- URL: https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
- Owner: resilientworkflowsentinel
- License: other
- Created: 2026-01-14T09:18:31.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2026-02-14T19:14:55.000Z (23 days ago)
- Last Synced: 2026-02-15T03:44:28.857Z (23 days ago)
- Language: Python
- Size: 218 KB
- Stars: 33
- Watchers: 1
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Notice: NOTICE
Awesome Lists containing this project
- awesome-github-repos - resilientworkflowsentinel/resilient-workflow-sentinel - Local, offline 7B LLM task orchestrator — analyzes urgency, debates assignment, balances load. Runs on RTX 3080/4090. Chaos mode included. (Python)
README
# Resilient Workflow Sentinel — Demo
🛡️ **Official Project Status**
Resilient Workflow Sentinel (RWS) is an independent open-source project managed by the RWS Core Team.
🌐 **Official Website:** [resilientworkflowsentinel.com](https://resilientworkflowsentinel.com)
📢 **Note on Authenticity**: This is the only official repository for RWS. Any third-party platforms claiming "Core Team" status or citing launch dates prior to **January 2026** are unofficial and unaffiliated. For verified documentation and the 2026 Roadmap, please refer to this GitHub and our official domain.
[License](LICENSE)
## Goal
Local demo of an LLM-powered orchestrator for intelligent task routing.
## Quick start
```bash
# create venv
python -m venv .venv
.venv\Scripts\activate           # Windows
# source .venv/bin/activate      # macOS/Linux
# install requirements
pip install -r requirements.txt
# download local LLM model
python models/download_model.py
# start LLM service (port 8000)
uvicorn app.local_llm_service.llm_app:app --host 127.0.0.1 --port 8000 --reload
# start orchestrator (port 8100)
uvicorn app.main:app --host 127.0.0.1 --port 8100 --reload
# start UI (NiceGUI)
python ui/nicegui_app.py
```
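Once both services are running, a quick way to confirm they are listening is to probe their TCP ports. The sketch below is not part of the project; it only assumes the ports given in the Quick start above (8000 for the LLM service, 8100 for the orchestrator):

```python
import socket

def service_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener accepts connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Ports taken from the Quick start: LLM service on 8000, orchestrator on 8100.
    for name, port in [("LLM service", 8000), ("orchestrator", 8100)]:
        status = "up" if service_up("127.0.0.1", port) else "down"
        print(f"{name} (port {port}): {status}")
```

If a service shows as "down", check the corresponding `uvicorn` terminal for startup errors before moving on to the UI.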
---
## Windows Batch Script Options (Alternative)
```bash
# One-time setup scripts
download_model.bat
install_and_run.bat
# Start services individually
run_llm.bat # Start LLM service
run_api.bat # Start orchestrator API
run_ui.bat # Start NiceGUI interface
```
## ⚙️ Verified Hardware Configurations
This project has been tested in the following environments:
**1. Local Development (Primary)**
* **GPU:** NVIDIA RTX 3080 (10GB VRAM)
* **CPU:** AMD Ryzen 5
* **Performance:** Full UI + Backend support.
**2. Cloud Environment**
* **Platform:** Lightning AI
* **GPU:** NVIDIA Tesla T4
* **Performance:** Backend/API verified.