An open API service indexing awesome lists of open source software.

https://github.com/wronai/taskguard

TaskGuard - LLM Task Controller - AI-powered development assistant that controls LLM behavior, enforces best practices, and maintains laser focus through intelligent automation.
https://github.com/wronai/taskguard

claude cursor cursro llm ollama ollama-client

Last synced: 3 months ago
JSON representation

TaskGuard - LLM Task Controller - AI-powered development assistant that controls LLM behavior, enforces best practices, and maintains laser focus through intelligent automation.

Awesome Lists containing this project

README

          

# ๐Ÿง  TaskGuard - LLM Task Controller with Local AI Intelligence

[![Version](https://img.shields.io/pypi/v/taskguard.svg)](https://pypi.org/project/taskguard/)
[![Python](https://img.shields.io/pypi/pyversions/taskguard.svg)](https://pypi.org/project/taskguard/)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Downloads](https://img.shields.io/pypi/dm/taskguard.svg)](https://pypi.org/project/taskguard/)

**Your AI-powered development assistant that controls LLM behavior, enforces best practices, and maintains laser focus through intelligent automation.**

## ๐Ÿ”Œ **Disabling TaskGuard**

To disable TaskGuard, use one of the following methods:

### Temporary Disable (current session only):
```bash
unset -f python python3 node npm npx git 2>/dev/null
```

### Permanent Disable:
```bash
# Remove TaskGuard from shell config
sed -i '/source.*llmtask_shell/d' ~/.bashrc ~/.zshrc 2>/dev/null

# Remove shell functions file
rm -f ~/.llmtask_shell.sh

# Clear environment variables
unset TASKGUARD_ENABLED

# Restart your shell
exec $SHELL
```

### Complete Removal:
```bash
# Remove all configuration files
rm -f ~/.llmcontrol.yaml ~/.llmstate.json ~/.llmtask_shell.sh

# Remove from shell config
sed -i '/source.*llmtask_shell/d' ~/.bashrc ~/.zshrc 2>/dev/null
exec $SHELL
```

## ๐ŸŽฏ **What This Solves**

LLMs are powerful but chaotic - they create too many files, ignore best practices, lose focus, and generate dangerous code. **TaskGuard** gives you an intelligent system that:

โœ… **Controls LLM behavior** through deceptive transparency
โœ… **Enforces best practices** automatically
โœ… **Maintains focus** on single tasks
โœ… **Prevents dangerous code** execution
โœ… **Understands any document format** using local AI
โœ… **Provides intelligent insights** about your project

## ๐Ÿš€ **Quick Installation**

```bash
# Install TaskGuard
pip install taskguard

# Setup local AI (recommended)
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve
ollama pull llama3.2:3b

# Initialize your project
taskguard init

# Setup shell integration
taskguard setup shell

# IMPORTANT: Load shell functions
source ~/.llmtask_shell.sh

# Start intelligent development
show_tasks
```

**That's it! Your development environment is now intelligently controlled.** ๐ŸŽ‰

## โš ๏ธ **Important Setup Note**

After installation, you **must** load the shell functions:

```bash
# Load functions in current session
source ~/.llmtask_shell.sh

# For automatic loading in new sessions
echo "source ~/.llmtask_shell.sh" >> ~/.bashrc
```

**Common issue**: If commands like `show_tasks` give "command not found", you forgot to run `source ~/.llmtask_shell.sh`!

## ๐Ÿง  **Key Innovation: Local AI Intelligence**

Unlike traditional task managers, TaskGuard uses **local AI** to understand your documents:

### ๐Ÿ“‹ **Universal Document Understanding**
```bash
# Parses ANY format automatically:
taskguard parse todo TODO.md # Markdown checkboxes
taskguard parse todo tasks.yaml # YAML structure
taskguard parse todo backlog.org # Org-mode format
taskguard parse todo custom.txt # Your weird custom format
```

### ๐Ÿ’ก **AI-Powered Insights**
```bash
taskguard smart-analysis
# ๐Ÿง  Smart TODO Analysis:
# ๐Ÿ’ก AI Insights:
# 1. Authentication tasks are blocking 4 other features
# 2. Consider breaking down "Implement core functionality"
# 3. Testing tasks should be prioritized to catch issues early
```

### ๐Ÿค– **Intelligent Task Suggestions**
```bash
taskguard smart-suggest
# ๐Ÿค– AI Task Suggestion:
# ๐ŸŽฏ Task ID: 3
# ๐Ÿ’ญ Reasoning: Database migration unblocks 3 dependent tasks
# โฑ๏ธ Estimated Time: 4-6 hours
# โš ๏ธ Potential Blockers: Requires staging environment setup
```

## ๐ŸŽญ **How LLM Sees It (Deceptive Control)**

### โœ… **Normal Workflow (LLM thinks it's free):**
```bash
# LLM believes it's using regular tools
python myfile.py
# ๐Ÿ“ฆ Creating safety checkpoint...
# โœ… python myfile.py completed safely

npm install express
# ๐Ÿ“ฆ Creating safety checkpoint...
# โœ… npm install express completed safely

show_tasks
# ๐Ÿ“‹ Current Tasks:
# ๐ŸŽฏ ACTIVE: #1 Setup authentication system
```

### ๐Ÿšจ **When LLM Tries Dangerous Stuff:**
```bash
# LLM attempts dangerous code
python dangerous_script.py
# ๐Ÿšจ BLOCKED: dangerous code in dangerous_script.py: os.system(
# ๐Ÿ’ก Try: Use subprocess.run() with shell=False

# LLM tries to lose focus
touch file1.py file2.py file3.py file4.py
# ๐ŸŽฏ Focus! Complete current task first: Setup authentication system
# ๐Ÿ“Š Files modified today: 3/3
```

### ๐Ÿ“š **Best Practice Enforcement:**
```python
# LLM creates suboptimal code
def process_data(data):
return data.split(',')
```

```bash
python bad_code.py
# ๐Ÿ“‹ Best Practice Reminders:
# - Missing docstrings in functions
# - Missing type hints in functions
# - Use more descriptive variable names
```

## ๐Ÿ”ง **Multi-Layer Control System**

### 1. **๐Ÿ›ก๏ธ Safety Layer**
- Ultra-sensitive command interception
- Dangerous pattern detection (even in comments!)
- Base64/hex decoding and scanning
- Automatic backup before risky operations

### 2. **๐ŸŽฏ Focus Controller**
- Single task enforcement
- File creation limits
- Task timeout management
- Progress tracking

### 3. **๐Ÿ“š Best Practices Engine**
- Language-specific rules
- Code style enforcement
- Security pattern detection
- Automatic documentation requirements

### 4. **๐Ÿง  AI Intelligence Layer**
- Local LLM analysis
- Universal document parsing
- Project health assessment
- Intelligent recommendations

## ๐Ÿ“‹ **Command Reference**

### ๐ŸŽฏ **Task Management (Shell Functions)**
```bash
show_tasks # List all tasks with AI insights
start_task # Start working on specific task
complete_task # Mark current task as done
add_task "title" [cat] [pri] # Add new task
focus_status # Check current focus metrics
productivity # Show productivity statistics

# Alternative aliases
tasks # Same as show_tasks
done_task # Same as complete_task
metrics # Same as productivity
```

### ๐Ÿง  **Intelligence Features (Shell Functions)**
```bash
smart_analysis # AI-powered project analysis
smart_suggest # Get AI task recommendations
best_practices [file] # Check best practices compliance

# Alternative aliases
analyze # Same as smart_analysis
insights # Same as smart_analysis
suggest # Same as smart_suggest
check_code [file] # Same as best_practices
```

### ๐Ÿ›ก๏ธ **Safety & Control (Shell Functions)**
```bash
tg_status # Show system health
tg_health # Run project health check
tg_backup # Create project backup
safe_rm # Delete with backup
safe_git # Git with backup

# Emergency commands
force_python # Bypass safety checks
force_exec # Emergency bypass
```

### โš™๏ธ **Configuration (CLI Commands)**
```bash
taskguard config # Show current config
taskguard config --edit # Edit configuration
taskguard config --template enterprise # Apply config template
taskguard setup ollama # Setup local AI
taskguard setup shell # Setup shell integration
taskguard test-llm # Test local LLM connection
```

### ๐Ÿ’ก **Help & Information (Shell Functions)**
```bash
tg_help # Show all shell commands
overview # Quick project overview
check # Quick system check
init_project # Initialize new project

# Alternative aliases
taskguard_help # Same as tg_help
llm_help # Same as tg_help
```

## ๐Ÿ“Š **Configuration Templates**

### ๐Ÿš€ **Startup Mode (Speed Focus)**
```bash
taskguard init --template startup
```
- More files per task (5)
- Longer development cycles (60min)
- Relaxed documentation requirements
- Focus on rapid prototyping

### ๐Ÿข **Enterprise Mode (Quality Focus)**
```bash
taskguard init --template enterprise
```
- Strict file limits (1-2 per task)
- Mandatory code reviews
- High test coverage requirements (90%)
- Full security scanning

### ๐ŸŽ“ **Learning Mode (Educational)**
```bash
taskguard init --template learning
```
- One file at a time
- Educational hints and explanations
- Step-by-step guidance
- Best practice examples

### ๐Ÿ **Python Project**
```bash
taskguard init --template python
```
- Python-specific best practices
- Docstring and type hint enforcement
- Test requirements
- Import organization

## ๐ŸŽช **Real-World Examples**

### ๐Ÿ“Š **Complex Document Parsing**

**Input: Mixed format TODO**
```markdown
# Project Backlog

## ๐Ÿ”ฅ Critical Issues
- [x] Fix login bug (PROD-123) - **DONE** โœ…
- [ ] Database migration script ๐Ÿ”ด HIGH
- [ ] Backup existing data
- [ ] Test migration on staging

## ๐Ÿ“š Features
โ˜ User dashboard redesign (Est: 8h) @frontend @ui
โณ API rate limiting (John working) @backend
โœ… Email notifications @backend

## Testing
TODO: Add integration tests for auth module
TODO: Performance testing for API endpoints
```

**AI Output: Perfect Structure**
```json
[
{
"id": 1,
"title": "Fix login bug (PROD-123)",
"status": "completed",
"priority": "high",
"category": "bugfix"
},
{
"id": 2,
"title": "Database migration script",
"status": "pending",
"priority": "high",
"subtasks": ["Backup existing data", "Test migration on staging"]
},
{
"id": 3,
"title": "User dashboard redesign",
"estimated_hours": 8,
"labels": ["frontend", "ui"]
}
]
```

### ๐Ÿค– **Perfect LLM Session**

```bash
# 1. LLM checks project status (using shell functions)
show_tasks
# ๐Ÿ“‹ Current Tasks:
# โณ #1 ๐Ÿ”ด [feature] Setup authentication system
# โณ #2 ๐Ÿ”ด [feature] Implement core functionality

# 2. LLM starts focused work
start_task 1
# ๐ŸŽฏ Started task: Setup authentication system

# 3. LLM works only on this task (commands are wrapped)
python auth.py
# ๐Ÿ“ฆ Creating safety checkpoint...
# โœ… python auth.py completed safely
# โœ… Code follows best practices!

# 4. LLM completes task properly
complete_task
# โœ… Task completed: Setup authentication system
# ๐Ÿ“ Changelog updated automatically
# ๐ŸŽฏ Next suggested task: Add authentication tests

# 5. LLM can use AI features
smart_analysis
# ๐Ÿ’ก AI Insights:
# 1. Authentication system is now ready for testing
# 2. Consider adding input validation
# 3. Database integration should be next priority
```

## ๐Ÿ“Š **Intelligent Features**

### ๐Ÿง  **Project Health Dashboard**
```bash
taskguard health --full

# ๐Ÿง  Project Health Report
# ================================
# ๐Ÿ“Š Project Health: 75/100
# ๐ŸŽฏ Focus Score: 85/100
# โšก Velocity: 2.3 tasks/day
#
# ๐Ÿšจ Critical Issues:
# - 3 high-priority tasks blocked by dependencies
# - Authentication module has 0% test coverage
#
# ๐Ÿ’ก Recommendations:
# 1. Complete database migration to unblock other tasks
# 2. Add tests before deploying auth module
# 3. Break down large tasks into smaller chunks
```

### ๐Ÿ“ˆ **Productivity Analytics**
```bash
taskguard productivity

# ๐Ÿ“Š Productivity Metrics:
# Tasks Completed: 5
# Files Created: 12
# Lines Written: 847
# Time Focused: 3h 45m
# Focus Efficiency: 86.5%
```

## ๐Ÿ”„ **Local LLM Setup**

### ๐Ÿš€ **Ollama (Recommended)**
```bash
# Install
curl -fsSL https://ollama.ai/install.sh | sh

# Setup
ollama serve
ollama pull llama3.2:3b # 2GB, perfect balance
ollama pull qwen2.5:1.5b # 1GB, ultra-fast

# Test
taskguard test-llm
```

### ๐ŸŽจ **LM Studio (GUI)**
- Download from https://lmstudio.ai/
- User-friendly interface
- Easy model management

### โšก **Performance vs Resources**

| Model | Size | RAM | Speed | Accuracy | Best For |
|-------|------|-----|-------|----------|----------|
| **qwen2.5:1.5b** | 1GB | 4GB | โšกโšกโšก | โญโญโญ | Fast parsing |
| **llama3.2:3b** | 2GB | 6GB | โšกโšก | โญโญโญโญ | **Recommended** |
| **codellama:7b** | 4GB | 8GB | โšก | โญโญโญโญโญ | Code analysis |

## ๐ŸŽฏ **Best Practices Library**

### ๐Ÿ **Python Excellence**
```yaml
python:
# Code Structure
enforce_docstrings: true
enforce_type_hints: true
max_function_length: 50

# Code Quality
require_tests: true
test_coverage_minimum: 80
no_unused_imports: true

# Security
no_eval_exec: true
validate_inputs: true
handle_exceptions: true
```

### ๐ŸŒ **JavaScript/TypeScript**
```yaml
javascript:
# Modern Practices
prefer_const: true
prefer_arrow_functions: true
async_await_over_promises: true

# Error Handling
require_error_handling: true
no_silent_catch: true

# Performance
avoid_memory_leaks: true
optimize_bundle_size: true
```

### ๐Ÿ” **Security Standards**
```yaml
security:
# Input Validation
validate_all_inputs: true
sanitize_user_data: true

# Authentication
strong_password_policy: true
secure_session_management: true
implement_rate_limiting: true

# Data Protection
encrypt_sensitive_data: true
secure_api_endpoints: true
```

## ๐Ÿ† **Success Metrics**

### ๐Ÿ“Š **Before vs After**

| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| **Dangerous Commands** | 15/week | 0/week | ๐Ÿ›ก๏ธ 100% blocked |
| **Task Completion** | 60% | 95% | ๐ŸŽฏ 58% better |
| **Code Quality Score** | 65/100 | 90/100 | ๐Ÿ“š 38% higher |
| **Focus Time** | 40% | 85% | โฐ 113% better |
| **Best Practice Adherence** | 45% | 88% | โœ… 96% better |

### ๐ŸŽ‰ **Real User Results**
- **Zero system damage** from LLM-generated code
- **3x faster** task completion through focus
- **90%+ best practice** compliance automatically
- **Universal document** parsing (any format works)
- **Intelligent insights** that actually help

### โšก **Quick Install**
```bash
pip install taskguard
```

### ๐Ÿš€ **Complete Setup (Recommended)**
```bash
# 1. Install TaskGuard
pip install taskguard

# 2. Setup local AI (optional but powerful)
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve
ollama pull llama3.2:3b

# 3. Initialize your project
taskguard init

# 4. Setup shell integration
taskguard setup shell

# 5. Load shell functions (CRITICAL STEP)
source ~/.llmtask_shell.sh

# 6. Test the setup
show_tasks
tg_help
```

### ๐Ÿ”ง **Development Install**
```bash
git clone https://github.com/wronai/taskguard.git
cd taskguard
pip install -e ".[dev]"
taskguard init
source ~/.llmtask_shell.sh
```

### ๐ŸŽฏ **Full Features Install**
```bash
pip install "taskguard[all]" # Includes LLM, security, docs
taskguard setup shell
source ~/.llmtask_shell.sh
```

### ๐Ÿณ **Docker Install**
```bash
docker run -it wronai/taskguard:latest
```

## ๐Ÿšจ **Troubleshooting Setup**

### โ“ **"Command not found: show_tasks"**
```bash
# The most common issue - you forgot to source the shell file
source ~/.llmtask_shell.sh

# Check if functions are loaded
type show_tasks

# If still not working, regenerate shell integration
taskguard setup shell --force
source ~/.llmtask_shell.sh
```

### โ“ **"TaskGuard command not found"**
```bash
# Check installation
pip list | grep taskguard

# Reinstall if needed
pip install --force-reinstall taskguard

# Check PATH
which taskguard
```

### โ“ **Shell integration file missing**
```bash
# Check if file exists
ls -la ~/.llmtask_shell.sh

# If missing, create it
taskguard setup shell

# Make sure it's executable
chmod +x ~/.llmtask_shell.sh
source ~/.llmtask_shell.sh
```

### โ“ **Functions work but disappear in new terminal**
```bash
# Add to your shell profile for automatic loading
echo "source ~/.llmtask_shell.sh" >> ~/.bashrc

# For zsh users
echo "source ~/.llmtask_shell.sh" >> ~/.zshrc

# Restart terminal or source profile
source ~/.bashrc
```

## ๐Ÿ› ๏ธ **Advanced Features**

### ๐Ÿ”„ **Continuous Learning**
- System learns your coding patterns
- Adapts to your workflow preferences
- Improves recommendations over time
- Personalized productivity insights

### ๐ŸŽ›๏ธ **Multi-Project Support**
- Different configs per project
- Team-wide best practice sharing
- Cross-project analytics
- Centralized intelligence dashboard

### ๐Ÿ”Œ **Integration Ready**
- Git hooks for automated checks
- CI/CD pipeline integration
- IDE extensions (planned)
- Slack/Discord notifications

## ๐Ÿค **Contributing**

We welcome contributions! Areas of focus:

- ๐Ÿง  **AI Intelligence**: Better prompts, new models
- ๐ŸŽฏ **Best Practices**: Language-specific rules
- ๐Ÿ”ง **Integrations**: IDE plugins, CI/CD hooks
- ๐Ÿ“Š **Analytics**: Better productivity insights
- ๐ŸŒ **Documentation**: Examples, tutorials

### ๐Ÿ”ง **Development Setup**
```bash
git clone https://github.com/wronai/taskguard.git
cd taskguard
pip install -e ".[dev]"
pre-commit install
pytest
```

## ๐Ÿ› **Troubleshooting**

### โ“ **Common Issues**

**"Local LLM not connecting"**
```bash
# Check Ollama status
ollama list
ollama serve

# Test connection
taskguard test-llm
```

**"Too many false positives"**
```bash
# Adjust sensitivity
taskguard config --template startup
```

**"Tasks not showing"**
```bash
# Initialize project
taskguard init
```

## ๐Ÿ“„ **License**

Apache 2.0 License - see [LICENSE](LICENSE) file for details.

## ๐Ÿ™ **Acknowledgments**

- Inspired by the need to tame chaotic LLM behavior
- Built for developers who value both innovation and safety
- Thanks to the open-source AI community
- **Special recognition** to all the LLMs that tried to break our system and made it stronger! ๐Ÿค–

---

## ๐ŸŽฏ **Core Philosophy**

**"Maximum Intelligence, Minimum Chaos"**

This isn't just another task manager - it's an intelligent system that makes LLMs work *for* you instead of *against* you. Through deceptive transparency, local AI intelligence, and adaptive learning, we've created the first truly intelligent development assistant that maintains safety, focus, and quality without sacrificing productivity.

**Ready to experience intelligent development? Get started in 2 minutes! ๐Ÿš€**

```bash
pip install taskguard && taskguard init
```

---

**โญ If this system helped you control an unruly LLM, please star the repository!**

*Made with โค๏ธ by developers, for developers who work with AI.*

*Your AI-powered development companion - because LLMs are powerful, but controlled LLMs are unstoppable.*