# 🎮 Agent Battle Command Center

> **Run 90% of coding tasks for FREE on a $300 GPU — including LRU caches and RPN calculators — with Claude handling the rest at ~$0.002/task average.**

An RTS-inspired control center for orchestrating AI coding agents with intelligent tiered routing. Watch your AI agents work in real-time with a retro strategy game-style interface.

[![Strong MVP](https://img.shields.io/badge/status-Strong%20MVP%20(8.5%2F10)-brightgreen)](./MVP_ASSESSMENT.md)
[![Docker Hub](https://img.shields.io/badge/Docker%20Hub-dushidush-blue?logo=docker)](https://hub.docker.com/u/dushidush)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Docker](https://img.shields.io/badge/docker-ready-blue.svg)](https://www.docker.com/)
[![Tests](https://img.shields.io/badge/tests-27%20test%20files-success)](./packages/api/src/__tests__)
[![Ollama Tested](https://img.shields.io/badge/Ollama%20C1--C9-90%25%20pass%20(16K%20ctx)-success)](./scripts/)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](CONTRIBUTING.md)

**If you find this useful, [give it a star](https://github.com/mrdushidush/agent-battle-command-center/stargazers)** — it helps others discover this project and motivates development.

### Quick Links

[What Makes This Special](#-what-makes-this-special) · [Screenshots](#-screenshots) · [Quick Start (Docker Hub)](#-quick-start-docker-hub--recommended) · [Quick Start (Source)](#️-quick-start-build-from-source) · [Architecture](#️-architecture) · [Key Features](#-key-features) · [Configuration](#️-configuration) · [Usage](#-usage) · [Testing](#-testing) · [Troubleshooting](#-troubleshooting) · [Documentation](#-documentation) · [Development](#️-development) · [Contributing](#-contributing) · [Performance](#-performance--benchmarks) · [Roadmap](#️-roadmap)

---

## ✨ What Makes This Special

**💰 Cost Optimization (20x cheaper than cloud-only)**
- FREE local execution via Ollama (qwen2.5-coder:7b + custom 16K context Modelfile) for 90% of tasks
- Smart tiered routing: only use paid Claude API for multi-class architectural tasks
- **Proven:** 90% success rate on the 40-task C1-C9 suite in just 11 minutes (Feb 2026)
- Passes LRU Cache, RPN Calculator, Sorted Linked List, Stack — all FREE on local GPU

**🎯 Academic Complexity Routing + Per-Agent Model Override**
- Based on Campbell's Task Complexity Theory
- Dual assessment: rule-based + Haiku AI semantic analysis
- Automatic escalation: Ollama (1-8) → Sonnet (9-10) — Haiku eliminated from routing
- **NEW: Per-agent model dropdown** — override Auto routing with Ollama/Grok/Haiku/Sonnet/Opus per agent
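
The dual assessment above can be sketched roughly as follows. How the two scores are actually merged is not documented here, so taking the maximum is an illustrative assumption, and the keywords and weights are invented:

```python
# Hypothetical sketch: combine a rule-based complexity heuristic with an
# AI-assigned score. Keywords and weights below are invented for illustration.
def rule_based_score(description: str) -> int:
    keywords = {"rename": 1, "function": 3, "class": 6,
                "refactor": 7, "multi-class": 9, "architecture": 9}
    score = 1
    for word, value in keywords.items():
        if word in description.lower():
            score = max(score, value)
    return score

def combined_complexity(description: str, ai_score: int) -> int:
    # Assumption: escalate to the higher of the two assessments.
    return max(rule_based_score(description), ai_score)
```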

**🎵 Bark TTS Military Radio Voice Lines**
- 96 GPU-generated voice lines with military radio post-processing (static, squelch, crackle)
- 3 voice packs: Tactical Ops, Mission Control, Field Command (32 lines each)
- Generated locally with [Bark TTS](https://github.com/suno-ai/bark) (MIT) — $0 cost
- Voice feedback for every agent action ("Mission complete!", "Engaging target!")

**📊 Full Observability**
- Every tool call logged with timing, tokens, and cost
- Loop detection prevents infinite retries
- Training data export for model fine-tuning
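
A log record along these lines is enough to support the per-call timing/token/cost accounting described above; the field names are assumptions, not the project's actual schema:

```python
from dataclasses import dataclass

# Illustrative shape of a logged tool call (field names assumed).
@dataclass
class ToolCallLog:
    agent: str
    tool: str
    duration_ms: int
    input_tokens: int
    output_tokens: int
    cost_usd: float

def total_cost(logs: list[ToolCallLog]) -> float:
    """Sum the dollar cost of a batch of tool calls."""
    return round(sum(log.cost_usd for log in logs), 6)
```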

---

## 📸 Screenshots

### Live Demo
![Command Center in Action](docs/screenshots/CommandCenter1.gif)
*The full command center running live — agent minimap, task queue, and real-time tool execution log.*

### Main Command Center (Overseer Mode)
![Command Center Overview](docs/screenshots/command-center-overview.png)
*The main view showing task queue (bounty board), active missions strip, and real-time tool log with RTS-inspired aesthetic.*

### Task Queue (Bounty Board)
![Task Queue](docs/screenshots/task-queue.png)
*Large task cards with complexity indicators, priority colors, and status badges. Click any task to view details and execution logs.*

### Active Missions & Agent Health
![Active Missions](docs/screenshots/active-missions.png)
*Real-time agent status strip with health indicators (green=idle, amber=working, red=stuck). Shows current task and progress for each agent.*

### Tool Log (Terminal Feed)
![Tool Log](docs/screenshots/tool-log.png)
*Live feed of every tool call with syntax highlighting. Expand entries to see full input/output, timing, and token usage.*

### Dashboard & Analytics
![Dashboard](docs/screenshots/dashboard.png)
*Success rates by complexity, cost breakdown by model tier, agent comparison charts, and budget tracking.*

### Cost Tracking
![Cost Dashboard](docs/screenshots/cost-dashboard.png)
*Real-time cost monitoring with daily budget limits, model tier breakdown, and token burn rate visualization.*

---

**UI Features:**
- 🎮 RTS nostalgia-inspired design
- 🎨 Teal/amber HUD colors with terminal-style panels
- 🔊 Military voice feedback for agent actions ("Acknowledged!", "Mission complete!")
- ⚡ Real-time WebSocket updates (no polling)
- 📊 Live metrics and health indicators

---

## 🚀 Quick Start (Docker Hub — Recommended)

Pre-built images on Docker Hub. No cloning the full repo, no build step โ€” just pull and run.

### Prerequisites

- **Docker Desktop** (with GPU support for Ollama)
- **NVIDIA GPU** (recommended: 8GB+ VRAM for local models)
- *Or CPU-only mode (comment out the `deploy:` GPU block in the compose file)*
- **Anthropic API key** (for Claude models — [get one here](https://console.anthropic.com))
- **8GB+ RAM** (for Docker containers)

### Installation

1. **Download the files**
```bash
mkdir agent-battle-command-center && cd agent-battle-command-center
mkdir -p scripts
curl -O https://raw.githubusercontent.com/mrdushidush/agent-battle-command-center/main/docker-compose.hub.yml
curl -O https://raw.githubusercontent.com/mrdushidush/agent-battle-command-center/main/.env.example
curl -o scripts/setup.sh https://raw.githubusercontent.com/mrdushidush/agent-battle-command-center/main/scripts/setup.sh
curl -o scripts/ollama-entrypoint.sh https://raw.githubusercontent.com/mrdushidush/agent-battle-command-center/main/scripts/ollama-entrypoint.sh
curl -o scripts/nginx-hub.conf https://raw.githubusercontent.com/mrdushidush/agent-battle-command-center/main/scripts/nginx-hub.conf
```

2. **Run setup** (auto-generates all keys, prompts for Anthropic key)
```bash
bash scripts/setup.sh
```
Or manually: `cp .env.example .env` and edit the `CHANGE_ME` values.

3. **Start all services** (~30 seconds to pull images)
```bash
docker compose -f docker-compose.hub.yml up
```

4. **Open the UI** → http://localhost:5173
- First startup downloads the Ollama model (~5 min one-time)

5. **Verify health**
```bash
docker ps # All 6 containers running
docker exec abcc-ollama ollama list # Should show qwen2.5-coder:32k
curl http://localhost:3001/health # API healthy
```

---

## ๐Ÿ› ๏ธ Quick Start (Build from Source)

For contributors and developers who want to modify the code.

### Prerequisites

- **Docker Desktop** (with GPU support for Ollama)
- **NVIDIA GPU** (recommended: 8GB+ VRAM for local models)
- *Or CPU-only mode (see [Configuration](#configuration))*
- **Anthropic API key** (for Claude models)
- **8GB+ RAM** (for Docker containers)

### Installation

1. **Clone the repository**
```bash
git clone https://github.com/mrdushidush/agent-battle-command-center.git
cd agent-battle-command-center
```

2. **Run setup** (auto-generates all keys, prompts for Anthropic key)
```bash
bash scripts/setup.sh
```
Or manually: `cp .env.example .env` and edit the `CHANGE_ME` values.

3. **Start all services** (first build takes ~5 minutes)
```bash
docker compose up --build
```
> **No NVIDIA GPU?** Comment out the `deploy:` block in `docker-compose.yml` (lines 53-58) to run Ollama in CPU-only mode. It's slower but works.

4. **Open the UI** → http://localhost:5173
- Ollama model download adds ~5 min on first startup

5. **Verify health**
```bash
# Check all services are running
docker ps

# Check Ollama model is loaded
docker exec abcc-ollama ollama list
# Should show: qwen2.5-coder:32k (custom 16K context model)
```

---

## ๐Ÿ—๏ธ Architecture

```
┌──────────────────────────────────────────────────────────────┐
│                      UI (React + Vite)                       │
│                        localhost:5173                        │
│  • Task queue (bounty board)  • Active missions  • Tool log  │
│  • Dashboard  • Minimap  • Voice pack audio                  │
└──────────────────────────────┬───────────────────────────────┘
                               │ HTTP/WebSocket (authenticated)
┌──────────────────────────────▼───────────────────────────────┐
│                  API (Express + Socket.IO)                   │
│                        localhost:3001                        │
│  • Task routing  • Cost tracking  • Budget service           │
│  • Rate limiting  • CORS protection  • File locking          │
└──────┬──────────────┬──────────────────────┬─────────────────┘
       │              │                      │
┌──────▼─────┐ ┌──────▼──────┐ ┌─────────────▼────────────────┐
│ PostgreSQL │ │    Redis    │ │  Agents (FastAPI + CrewAI)   │
│   :5432    │ │    :6379    │ │        localhost:8000        │
│            │ │             │ │  • Coder (CodeX-7)           │
│ • Tasks    │ │ • Cache     │ │  • QA (Haiku/Sonnet)         │
│ • Logs     │ │ • MCP       │ │  • CTO (Opus - review only)  │
│ • Reviews  │ │             │ └──────────────┬───────────────┘
└────────────┘ └─────────────┘                │
                              ┌───────────────▼──────────────┐
                              │      Ollama (Local LLM)      │
                              │       localhost:11434        │
                              │  • qwen2.5-coder:32k         │
                              │    (7b + 16K ctx Modelfile)  │
                              │  • RTX 3060 Ti 8GB VRAM      │
                              │  • 93% GPU / 7% CPU          │
                              │  • FREE execution            │
                              └──────────────────────────────┘
```

---

## 🎯 Key Features

### Tiered Task Routing + Per-Agent Model Selection (v0.7.0)
- **Complexity 1-8** → Ollama (FREE, ~12s avg with 16K context) - 90-100% success rate
- **Complexity 9** → Ollama for single-class tasks (80% — LRU Cache, Stack, RPN Calculator)
- **Complexity 9-10** → Sonnet (~$0.01/task) - Multi-class architectural tasks only
- **Decomposition** → Opus (~$0.02/task) - Breaking down complex tasks only
- **Haiku eliminated** from execution routing — Ollama handles C7-C8 at 100%
- **Per-agent model override** — sidebar dropdown to force any agent to use a specific model
- **Grok (xAI) support** — set `XAI_API_KEY` to enable Grok as a model option
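
The routing table above reduces to a small decision function. This is a simplified sketch (it ignores the single-class C9 exception and the separate Opus decomposition path); the precedence of the per-agent override over automatic routing is implied by the dropdown feature:

```python
from typing import Optional

# Sketch of the tiered routing rule described above (not the project's code).
def route(complexity: int, override: Optional[str] = None) -> str:
    if override:                 # per-agent model dropdown takes precedence
        return override
    if 1 <= complexity <= 8:
        return "ollama"          # FREE local tier
    return "sonnet"              # complexity 9-10: paid Claude tier
```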

### Real-Time Monitoring
- **Active Missions** - Live agent status with health indicators
- **Tool Log** - Terminal-style feed of every tool call
- **Token Burn Rate** - Real-time cost tracking with budget warnings
- **WebSocket Updates** - Instant UI updates for all events

### Cost Controls
- **Daily budget limits** with warnings at 80%
- **Cost tracking** per task, agent, and model tier
- **Tiered code reviews** (Haiku every 5th Ollama, Opus every 10th complex)
- **Training data export** for future model fine-tuning
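
The tiered review cadence (Haiku every 5th Ollama task, Opus every 10th complex task) can be sketched as a counter check; the 1-based counting and the `None` return for "no review" are assumptions:

```python
from typing import Optional

def review_model(task_index: int, tier: str,
                 ollama_interval: int = 5, opus_interval: int = 10) -> Optional[str]:
    """Which model reviews this task, or None for no review.

    Mirrors OLLAMA_REVIEW_INTERVAL / OPUS_REVIEW_INTERVAL from .env;
    task_index is assumed to be 1-based per tier.
    """
    if tier == "ollama" and task_index % ollama_interval == 0:
        return "haiku"
    if tier != "ollama" and task_index % opus_interval == 0:
        return "opus"
    return None
```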

### Parallel Execution
- **Resource pools** - Ollama (1 slot) + Grok (2 slots) + Claude (2 slots)
- **40-60% faster** for mixed-complexity batches
- **File locking** prevents conflicts between parallel tasks
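
The slot counts above (Ollama 1, Grok 2, Claude 2) suggest a semaphore-gated pool; this is an illustrative implementation, not the project's actual scheduler:

```python
import threading

# One semaphore per backend, sized to the slot counts listed above.
POOLS = {
    "ollama": threading.Semaphore(1),
    "grok": threading.Semaphore(2),
    "claude": threading.Semaphore(2),
}

def run_on(pool: str, task):
    """Run a callable while holding one slot of the named pool."""
    with POOLS[pool]:
        return task()
```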

### Error Recovery
- **Stuck task detection** - Auto-recovery after 10 min timeout
- **Loop detection** - Prevents agents from repeating failed actions
- **Escalation system** - Ollama fails → Haiku retries with context → Human escalation
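
Loop detection of the kind described above can be as simple as flagging N identical consecutive actions; the window size and action-signature format here are assumptions:

```python
from collections import deque

# Illustrative loop detector: flag an agent that repeats the same action
# signature `threshold` times in a row.
class LoopDetector:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.recent: deque = deque(maxlen=threshold)

    def record(self, action_signature: str) -> bool:
        """Record an action; return True when a loop is detected."""
        self.recent.append(action_signature)
        return (len(self.recent) == self.threshold
                and len(set(self.recent)) == 1)
```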

### Security (MVP Hardened - Feb 2026)
- ✅ API key authentication on all endpoints (except /health)
- ✅ CORS restricted to configured origins with test coverage
- ✅ HTTP rate limiting (100 req/min default, configurable)
- ✅ All secrets externalized to .env (not in docker-compose.yml)
- ✅ Trivy security scanning in CI/CD
- ✅ Error boundaries prevent UI crashes
- ✅ Input sanitization and SQL injection prevention via Prisma
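
API-key checks like the one above are typically done with a constant-time comparison so an attacker cannot time mismatches character by character. A generic Python sketch (the project's middleware is Express, so this is conceptual only):

```python
import hmac

# hmac.compare_digest compares in constant time, avoiding timing side channels.
def is_authorized(provided_key: str, expected_key: str) -> bool:
    return hmac.compare_digest(provided_key.encode(), expected_key.encode())
```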

---

## ⚙️ Configuration

### Environment Variables

All configuration is in `.env`. See [.env.example](.env.example) for full list.

**Essential:**
```bash
# API Authentication (REQUIRED)
API_KEY=your_secure_api_key_here

# Claude API (REQUIRED for complex tasks)
ANTHROPIC_API_KEY=sk-ant-api03-...

# Database (REQUIRED)
POSTGRES_PASSWORD=your_secure_password
DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@localhost:5432/abcc?schema=public

# Ollama Model (custom 16K context Modelfile, auto-created on startup)
OLLAMA_MODEL=qwen2.5-coder:32k

# Grok / xAI (OPTIONAL — enables Grok as a model option in agent dropdowns)
XAI_API_KEY=xai-your_key_here
```

**Security:**
```bash
# CORS (comma-separated origins)
CORS_ORIGINS=http://localhost:5173,https://your-domain.com

# Rate Limiting
RATE_LIMIT_MAX=100 # requests per minute
RATE_LIMIT_WINDOW_MS=60000 # 1 minute window

# JWT Secret
JWT_SECRET=your_secure_jwt_secret_minimum_32_characters
```

**Cost Controls:**
```bash
# Budget Service
BUDGET_DAILY_LIMIT_CENTS=500 # $5.00 daily limit
BUDGET_WARNING_THRESHOLD=0.8 # Warn at 80%

# Review Schedule
OLLAMA_REVIEW_INTERVAL=5 # Haiku review every 5th Ollama task
OPUS_REVIEW_INTERVAL=10 # Opus review every 10th complex task
```
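
The two budget knobs combine into a simple spend check; the `ok`/`warning`/`exceeded` labels are illustrative, not the service's actual states:

```python
def budget_status(spent_cents: int,
                  limit_cents: int = 500,
                  warning_threshold: float = 0.8) -> str:
    """Classify daily spend against BUDGET_DAILY_LIMIT_CENTS and
    BUDGET_WARNING_THRESHOLD (labels are illustrative)."""
    if spent_cents >= limit_cents:
        return "exceeded"
    if spent_cents >= limit_cents * warning_threshold:
        return "warning"
    return "ok"
```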

**Ollama Optimization:**
```bash
# Rest delays between tasks (prevents context pollution)
OLLAMA_REST_DELAY_MS=3000 # 3s between tasks
OLLAMA_EXTENDED_REST_MS=8000 # 8s every Nth task
OLLAMA_RESET_EVERY_N_TASKS=3 # Reset interval

# Stuck task recovery
STUCK_TASK_TIMEOUT_MS=600000 # 10 minutes
STUCK_TASK_CHECK_INTERVAL_MS=60000 # Check every 1 minute
```
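
The three rest-delay knobs interact roughly like this; treating every `OLLAMA_RESET_EVERY_N_TASKS`-th task as the one that gets the extended rest is an assumption about the scheduler:

```python
def rest_delay_ms(task_number: int,
                  base_ms: int = 3000,
                  extended_ms: int = 8000,
                  reset_every: int = 3) -> int:
    """Delay applied after the Nth completed Ollama task (1-based),
    mirroring OLLAMA_REST_DELAY_MS / OLLAMA_EXTENDED_REST_MS /
    OLLAMA_RESET_EVERY_N_TASKS."""
    if task_number % reset_every == 0:
        return extended_ms       # extended rest every Nth task
    return base_ms               # normal rest between tasks
```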

### GPU Configuration

**Verify GPU access:**
```bash
docker exec abcc-ollama nvidia-smi
```

**CPU-only mode** (slower, but works without GPU):
```yaml
# In docker-compose.yml, comment out GPU section:
# deploy:
# resources:
# reservations:
# devices:
# - driver: nvidia
# count: all
# capabilities: [gpu]
```

---

## 📖 Usage

### Creating Tasks

**Via UI:**
1. Click "New Task" button
2. Enter title and description
3. (Optional) Select required agent type
4. Submit - complexity is auto-calculated

**Via API:**
```bash
curl -X POST http://localhost:3001/api/tasks \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Create user authentication",
    "description": "Implement JWT-based auth with login/logout endpoints",
    "type": "code"
  }'
```
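
The same request can be assembled from Python's standard library. The endpoint, header, and payload come from the curl example above; the API key is a placeholder you must replace with the value from your `.env`:

```python
import json
import urllib.request

API_URL = "http://localhost:3001/api/tasks"
API_KEY = "your_api_key"  # placeholder — use the key from your .env

def build_task_request(title: str, description: str, task_type: str = "code"):
    """Assemble the same POST request the curl example sends."""
    body = json.dumps({"title": title, "description": description,
                       "type": task_type}).encode()
    return urllib.request.Request(
        API_URL, data=body, method="POST",
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_task_request(
        "Create user authentication",
        "Implement JWT-based auth with login/logout endpoints")
    # Requires the stack to be running locally.
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())
```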

### Monitoring Progress

**Active Missions Panel:**
- Shows currently executing tasks
- Agent health (green=idle, amber=working, red=stuck)
- Progress indicators

**Tool Log Panel:**
- Real-time feed of agent actions
- Click to expand full details
- Filter by agent or task

**Dashboard:**
- Success rates by agent and complexity
- Cost breakdown by model tier
- Daily/weekly/monthly metrics

### Managing Agents

**Pause/Resume:**
```bash
curl -X POST http://localhost:3001/api/agents/qa-01/pause \
  -H "X-API-Key: your_api_key"
```

**Reset stuck agents:**
```bash
curl -X POST http://localhost:3001/api/agents/reset-all \
  -H "X-API-Key: your_api_key"
```

**Check Ollama status:**
```bash
curl http://localhost:3001/api/agents/ollama-status \
  -H "X-API-Key: your_api_key"
```

---

## 🧪 Testing

### Stress Tests

**20-task graduated complexity (C1-C8, 100% pass rate):**
```bash
node scripts/ollama-stress-test.js
```

**40-task ultimate test (C1-C9, 90% pass rate, 11 min with 16K context):**
```bash
node scripts/ollama-stress-test-40.js
```

**Full tier test (Ollama + Haiku + Sonnet + Opus, ~$1.50 cost):**
```bash
node scripts/run-full-tier-test.js
```

**Parallel execution test:**
```bash
node scripts/test-parallel.js
```

### Unit Tests

```bash
# API tests (18 test suites, 122 tests)
cd packages/api
npm test

# Agent tests (2 Python test suites, 696 lines)
cd packages/agents
pytest
```

**Test Coverage (as of Feb 7, 2026):**
- **27 total test files:** 23 TypeScript + 4 Python
- **API tests (16 files):** budgetService, resourcePool, stuckTaskRecovery, rateLimiter, agentManager, costCalculator, complexityAssessor, taskRouter, taskQueue, fileLock, ollamaOptimizer, schedulerService, taskExecutor, taskAssigner
- **UI component tests (5 files):** ErrorBoundary, AgentCard, TaskCard, AnimatedCounter, StatusBadge
- **Python tests (4 files):** action_history, file_ops, validators, tools
- **Integration tests:** CORS, auth middleware, task lifecycle

### System Health Check

```bash
# Full health check (includes load test)
node scripts/full-system-health-check.js

# Quick check (skip load test)
node scripts/full-system-health-check.js --skip-load-test
```

---

## ๐Ÿ› Troubleshooting

### Ollama model not loading

**Symptoms:** Tasks stuck in queue, Ollama health check failing

**Solution:**
```bash
# Check if model is downloaded (should show qwen2.5-coder:32k)
docker exec abcc-ollama ollama list

# If missing, the entrypoint auto-creates it. Restart the container:
docker compose restart ollama

# Or pull base model and create manually:
docker exec abcc-ollama ollama pull qwen2.5-coder:7b
# The 32k model (16K context) is auto-created from the base model on startup

# Check logs
docker logs abcc-ollama --tail 50
```

### CORS errors in browser

**Symptoms:** `Access to fetch blocked by CORS policy`

**Solution:**
```bash
# Add your origin to .env
CORS_ORIGINS=http://localhost:5173,http://localhost:8080

# Restart API
docker compose restart api
```

### Rate limit exceeded

**Symptoms:** `429 Too Many Requests`

**Solution:**
```bash
# Increase limit in .env
RATE_LIMIT_MAX=300

# Or use WebSockets instead of polling
```

### API authentication fails

**Symptoms:** `401 Unauthorized` or `403 Forbidden`

**Solution:**
```bash
# Check API key matches in .env
API_KEY=...
VITE_API_KEY=... # Must be same value

# Rebuild UI container
docker compose up --build ui
```

### Tasks stuck in "in_progress"

**Symptoms:** Task running for >10 minutes with no updates

**Automatic recovery:** Stuck task recovery service runs every 60 seconds

**Manual recovery:**
```bash
# Check stuck task recovery status
curl http://localhost:3001/api/agents/stuck-recovery/status \
  -H "X-API-Key: your_api_key"

# Trigger manual recovery
curl -X POST http://localhost:3001/api/agents/stuck-recovery/check \
  -H "X-API-Key: your_api_key"
```

### Database connection issues

**Symptoms:** `Error: P1001: Can't reach database server`

**Solution:**
```bash
# Check PostgreSQL is running
docker ps | grep postgres

# Check connection
docker exec abcc-postgres pg_isready -U postgres

# View logs
docker logs abcc-postgres --tail 50
```

### Out of disk space

**Symptoms:** Docker fails to start, "no space left on device"

**Check Docker disk usage:**
```bash
docker system df
```

**Clean up:**
```bash
# Remove old containers and images
docker system prune -a

# Remove old Ollama models
docker exec abcc-ollama ollama list
docker exec abcc-ollama ollama rm <model-name>

# Clean backups (keep last 7 days)
# Default backup location: ./backups/ (set BACKUP_MIRROR_PATH in .env)
```

---

## 📚 Documentation

- **[Authentication Guide](docs/AUTHENTICATION.md)** - API key setup and usage
- **[CORS Configuration](docs/CORS.md)** - Origin restrictions and security
- **[Rate Limiting](docs/RATE_LIMITING.md)** - Request limits and bypass
- **[Error Handling](docs/ERROR_HANDLING.md)** - UI error boundaries
- **[Database Migrations](docs/DATABASE_MIGRATIONS.md)** - Schema changes and deployment
- **[API Reference](docs/API.md)** - All endpoints and examples
- **[Development Guide](docs/DEVELOPMENT.md)** - Testing and debugging
- **[AI Assistant Context](CLAUDE.md)** - Project overview for AI tools

---

## ๐Ÿ› ๏ธ Development

### Prerequisites

- Node.js 20+
- Python 3.11+
- pnpm 8+ (install with `npm install -g pnpm`)
- Docker Desktop

### Local Setup

```bash
# Install dependencies
pnpm install

# Start database only
docker compose up postgres redis -d

# Run API in dev mode
cd packages/api
pnpm dev

# Run UI in dev mode
cd packages/ui
pnpm dev

# Run agents in dev mode
cd packages/agents
uvicorn src.main:app --reload --port 8000
```

### Running Tests

```bash
# All tests
pnpm test

# API tests only
cd packages/api && pnpm test

# UI tests (when available)
cd packages/ui && pnpm test

# Agent tests (when available)
cd packages/agents && pytest
```

### Code Quality

```bash
# Lint all packages
pnpm lint

# Security scan (requires Trivy)
pnpm run security:scan

# Type check
pnpm run type-check
```

---

## 🤝 Contributing

We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for:
- Code of conduct
- Development setup
- Pull request process
- Code style guidelines
- Testing requirements

**Quick start for contributors:**
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes and add tests
4. Ensure all tests pass (`pnpm test`)
5. Commit with clear messages (`git commit -m 'Add amazing feature'`)
6. Push to your fork (`git push origin feature/amazing-feature`)
7. Open a Pull Request

---

## 📊 Performance & Benchmarks

### Proven Metrics (Feb 2026)

| Metric | Result | Test |
|--------|--------|------|
| **Ollama C1-C9 Success** | **90% (36/40)** | 40-task ultimate test, 16K context |
| **Ollama C1-C8 Success** | 100% (20/20) | 20-task stress test |
| **Total runtime (40 tasks)** | **11 minutes** | 4.5x faster than 4K context (was 43 min) |
| **Ollama avg time** | **12s** | 16K context (was 54s with 4K) |
| **Cost per task (avg)** | $0.002 | Mixed complexity batch |
| **GPU utilization** | 93% GPU / 7% CPU | 7GB VRAM on RTX 3060 Ti 8GB |
| **Parallel speedup** | 40-60% | vs sequential |
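
The ~$0.002/task blended average is consistent with the routing mix. The shares below are assumptions for a back-of-the-envelope check, not published numbers:

```python
# ASSUMED mix: ~90% of tasks run free on Ollama, ~10% hit Sonnet at ~$0.01,
# and roughly 5% of tasks also incur an Opus decomposition/review pass
# at ~$0.02. Ollama's share contributes $0, so it drops out of the sum.
def blended_cost(sonnet_share: float = 0.10, sonnet_cost: float = 0.01,
                 opus_share: float = 0.05, opus_cost: float = 0.02) -> float:
    return round(sonnet_share * sonnet_cost + opus_share * opus_cost, 4)
```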

**16K Context Window Upgrade (Feb 17, 2026):**

| Complexity | Success | Avg Time | What's Tested |
|------------|---------|----------|---------------|
| C1-C4 | **100%** | 8-11s | Math, strings, conditionals, loops |
| C5-C6 | **80-100%** | 10-12s | FizzBuzz, palindrome, Caesar cipher, primes |
| C7-C8 | **100%** | 14-15s | Fibonacci, word freq, matrix, binary search, power set |
| C9 (Extreme) | **80%** | 19s | LRU Cache, RPN Calculator, Sorted Linked List, Stack |

### Hardware Requirements

**Recommended:**
- RTX 3060 Ti (8GB VRAM) or better
- 16GB system RAM
- 50GB free disk space
- Ubuntu 22.04 / Windows 11 with WSL2

**Minimum:**
- CPU-only (no GPU) - much slower
- 8GB system RAM
- 30GB free disk space

---

## 🗺️ Roadmap

### Current (v0.7.x)
- ✅ **Per-agent model selection** — dropdown to override Auto routing per agent (v0.7.0)
- ✅ **Grok (xAI) support** — new model option for all agents (v0.7.0)
- ✅ **CTO agents in sidebar** — full visibility for all 3 agent types (v0.7.0)
- ✅ **3-tier routing** — Local Ollama / Remote Ollama / Claude API (v0.5.1)
- ✅ **Auto-retry pipeline** — 98% pass rate with validation + retry (v0.5.0)
- ✅ Tiered task routing (Ollama/Sonnet/Opus)
- ✅ 16K context window for Ollama — 90% C1-C9, 4.5x faster
- ✅ 3D holographic battlefield view with React Three Fiber
- ✅ Bark TTS military radio voice lines — 96 clips, 3 packs
- ✅ API authentication and rate limiting
- ✅ Parallel execution and file locking
- ✅ Cost tracking and budget limits
- ✅ Stuck task auto-recovery
- ✅ Docker Hub image publishing
- ✅ Multi-language workspace (Python, JavaScript, TypeScript, Go, PHP)

### Next (v0.8.x)
- [ ] E2E test suite (Playwright)
- [ ] Onboarding flow / first-run wizard
- [ ] Agent workspace viewer (live code editing view)
- [ ] Plugin system for custom agent tools

### Community Release (v1.0.x) - Target: 2-3 months
- [ ] Multi-user authentication (OAuth2/OIDC)
- [ ] Workspace isolation per user
- [ ] Cloud deployment guides (Railway, Render, AWS)
- [ ] Demo mode (pre-loaded tasks, no API key)
- [ ] Agent marketplace (community agents)
- [ ] WCAG 2.1 AA accessibility compliance

---

## 📜 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

---

## 🙏 Acknowledgments

- **Anthropic** - Claude API powering the intelligent agents
- **Ollama** - Local LLM runtime enabling free execution
- **CrewAI** - Agent orchestration framework
- **[Bark TTS](https://github.com/suno-ai/bark)** - GPU-generated military radio voice lines
- **Classic RTS games** - Inspiration for the UI/UX

---

## 📧 Support

- **Issues:** [GitHub Issues](https://github.com/mrdushidush/agent-battle-command-center/issues)
- **Discussions:** [GitHub Discussions](https://github.com/mrdushidush/agent-battle-command-center/discussions)
- **Documentation:** [docs/](docs/)

---

**Built with ❤️ by the ABCC community**

*"One write, one verify, mission complete." - CodeX-7*