{"id":44250422,"url":"https://github.com/arn-c0de/crawllama","last_synced_at":"2026-02-27T14:27:20.830Z","repository":{"id":320728728,"uuid":"1081176229","full_name":"arn-c0de/Crawllama","owner":"arn-c0de","description":"CrawlLama 🦙 is a local AI agent that answers questions via Ollama and integrates web- and RAG-based research.","archived":false,"fork":false,"pushed_at":"2026-02-07T10:14:52.000Z","size":1655,"stargazers_count":5,"open_issues_count":12,"forks_count":1,"subscribers_count":2,"default_branch":"1.4.7","last_synced_at":"2026-02-07T19:08:18.983Z","etag":null,"topics":["automation","contribute","ethical-scraping","fastapi","help-wanted","knowledge-retrieval","local-llm","multi-hop-reasoning","osint","plugin-system","python","rag"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/arn-c0de.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-10-22T12:19:19.000Z","updated_at":"2026-02-07T10:14:55.000Z","dependencies_parsed_at":"2025-10-25T15:14:01.356Z","dependency_job_id":"feb527a1-2655-4c46-9883-4d3c7884a221","html_url":"https://github.com/arn-c0de/Crawllama","commit_stats":null,"previous_names":["arn-c0de/crawllama"],"tags_count":4,"template":false,"template_full_name":null,"purl":"pkg:github/arn-c0de/Crawllama","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/arn-c0de%2FC
rawllama","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/arn-c0de%2FCrawllama/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/arn-c0de%2FCrawllama/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/arn-c0de%2FCrawllama/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/arn-c0de","download_url":"https://codeload.github.com/arn-c0de/Crawllama/tar.gz/refs/heads/1.4.7","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/arn-c0de%2FCrawllama/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29303020,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-10T12:55:56.056Z","status":"ssl_error","status_checked_at":"2026-02-10T12:55:55.692Z","response_time":65,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["automation","contribute","ethical-scraping","fastapi","help-wanted","knowledge-retrieval","local-llm","multi-hop-reasoning","osint","plugin-system","python","rag"],"created_at":"2026-02-10T14:00:58.519Z","updated_at":"2026-02-10T14:01:35.893Z","avatar_url":"https://github.com/arn-c0de.png","language":"Python","readme":"\u003cdiv align=\"left\"\u003e\n  \u003ch1\u003e   \u003cimg src=\"logo.ico\" alt=\"CrawlLama Logo\" width=\"64\" height=\"64\"\u003e  CrawlLama\u003c/h1\u003e\n\u003c/div\u003e\n\n![Python 
Version](https://img.shields.io/badge/python-3.11%2B-blue)\n![Platform](https://img.shields.io/badge/platform-Windows-lightblue)\n![Platform](https://img.shields.io/badge/platform-Linux-lightgrey)\n![License](https://img.shields.io/badge/license-Non--Commercial-orange)\n![Status](https://img.shields.io/badge/status-Active-success)\n[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/arn-c0de/Crawllama)\n\n\n[Documentation](docs/README.md) | [Quickstart](docs/getting-started/QUICKSTART.md) | [API Guide](docs/API_USAGE.md) | [Adaptive Hops](docs/ADAPTIVE_HOPS_QUICKSTART.md) | [Security](SECURITY.md) | [Changelog](CHANGELOG.md)\n\n[Project Website](https://arn-c0de.github.io/Crawllama/)\n\n**Production-Ready AI Research Agent with OSINT \u0026 Multi-Hop Reasoning**\n\n\u003cdiv align=\"left\"\u003e\n  \u003cb\u003eCurrent Version: 1.4.7 – Security Fixes\u003c/b\u003e\n\u003c/div\u003e\n\n\n## Contributing\n\n\u003e **We welcome ideas, bug reports, and feature requests.**\n\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"CONTRIBUTING.md\"\u003e\n    \u003cimg src=\"https://img.shields.io/badge/Contribute-Get%20Started-brightgreen?style=for-the-badge\" alt=\"Contribute Badge\"\u003e\n  \u003c/a\u003e\n\u003c/div\u003e\n\n\n## Table of contents\n\n\n- [Features](#features)\n- [Images](#images)\n- [Quickstart](#quickstart)\n- [Installation](#installation)\n- [Usage](#usage)\n- [REST API](#rest-api)\n- [Configuration](#configuration)\n- [Testing](#testing)\n- [Documentation](#further-documentation)\n- [Contributing](#contributing)\n- [License](#license)\n\n---\n\nA fully local, production-ready AI research agent with advanced intelligence features:\n- OSINT module: email, phone, and IP intelligence; social media analysis; advanced search operators\n- Multi-hop reasoning using LangGraph for complex queries\n- 
Adaptive agent selection based on query complexity (low/mid/high)\n- REST API with FastAPI for integration\n- Plugin system for extensibility\n- Performance optimizations with large context support and asynchronous execution\n\n\n## Features\n\n### Core features\n\n- **Adaptive agent hopping system** – Automatic agent selection based on query complexity (low/mid/high), confidence-based escalation, and resource-aware adaptation (NEW v1.4.4)\n- **UI settings for adaptive report** – Toggle the Adaptive Intelligence Report directly from the interactive settings menu (NEW v1.4.4)\n- **Multi-hop reasoning** – LangGraph-based agent with a multi-step workflow (Router → Search → Analyze → Follow-Up → Synthesize → Critique)\n- **Restart command** – Restart the agent without exiting the program\n- **Parallelization** – Multi-aspect searches using thread pools for improved performance\n- **Performance optimizations** – 16k context support for RTX 3080; async and parallel processing\n- **Multi-source web search** – DuckDuckGo, Brave Search, Serper API with fallback\n- **Wikipedia integration** – Dedicated Wikipedia search (German/English)\n- **Advanced RAG system** – Batch processing, multi-query and hybrid search ([RAG Analysis](docs/guides/RAG_ANALYSIS.md))\n- **Intelligent caching** – TTL-based with hash keys, LRU eviction, and a configurable max size (500MB)\n- **Tool orchestration** – Automatic tool selection via LLM\n- **Interactive settings menu** – Live configuration of LLM, search, RAG and OSINT\n- **Context usage tracker** – Real-time token usage monitoring using tiktoken\n- **Health monitoring dashboard** – Interactive system monitoring with a rich UI\n- **Lazy-loading** – On-demand loading for tools and plugins\n- **Async operations** – Parallel HTTP requests with aiohttp\n- **Resource monitoring** – RAM usage, performance tracking, and automated garbage collection\n- **FastAPI REST API** – 8+ endpoints with auto-documentation (`/query`, `/plugins`, `/stats`, 
`/health`) (see `app.py`)\n- **Plugin system** – Dynamic loading and unloading of plugins\n- **Enhanced CLI** – Rich formatting and Markdown output\n- **Setup scripts** – `setup.bat`, `setup.sh` with auto-configuration\n- Optional cloud LLM support\n\n### OSINT features\n\n- **Advanced search operators** – `site:`, `inurl:`, `intext:`, `filetype:`, `email:`, `phone:`, `ip:`\n- **Email intelligence** – Validation, MX records, disposable detection, and variations\n- **Phone intelligence** – Validation, carrier lookup, country detection, and formatting\n- **Persistent memory store** – Survives `clear`; stores emails, phones, IPs, usernames, domains and notes\n- **Memory store CRUD** – Full CRUD functionality with `forget` command\n- **Batch processing** – Analyze multiple emails or phones simultaneously with summary statistics\n- **IP intelligence** – IPv4/IPv6 analysis, geolocation, ISP info, security reputation and VPN detection\n- **Social intelligence** – Supports 12 platforms (GitHub, LinkedIn, Twitter, Instagram, Facebook, YouTube, Reddit, Pinterest, TikTok, Snapchat, Discord, Steam)\n- **AI query enhancement** – Query variations, operator suggestions, entity detection and auto-type detection\n- **Compliance module** – Rate limiting, terms of use, audit logging and robots.txt compliance\n- **Privacy protection** – Ethical scraping, usage tracking; no API keys required\n- **Safesearch quality filter** – Configurable result quality (off/moderate/strict)\n\n### Security \u0026 performance\n\n- **Code quality** – Refactored, focused methods for better maintainability\n- **Accurate token counting** – tiktoken integration for precise token counts\n- **Intelligent retry logic** – Tenacity-based retries with exponential backoff\n- **Rate limiting** – 1 request/second and robots.txt checks\n- **Fallback system** – Automatic fallbacks for API failures\n- **Secure config** – Encrypted API key storage\n- **Output validation** – Sanitization of LLM outputs\n- **Domain 
blacklist** – Protection against unwanted domains\n- **RTX 3080 optimization** – 16k context support (qwen3:8b), increased cache sizes\n- **Windows console compatibility** – ASCII output and UTF-8 encoding for robust CLI experience (NEW v1.4.4)\n- **Clear-all command** – Instantly reset session, cache, and memory from the CLI (NEW v1.4.4)\n\n## Release highlights v1.4.5 (2025-10-29)\n\n**Cloud LLM \u0026 provider-based configuration:**\n\n- ✅ **Cloud LLM Support** - OpenAI (GPT-4/4o-mini), Anthropic (Claude 3), Groq + local Ollama\n- ✅ Local fallback remains available for full offline operation.  \n- ✅ **Smart Token Limits** - Auto-adjust based on provider; local models high (16k), cloud conservative (~1.5k)\n- ✅ **MultiHop Agent** - Truncates web content intelligently for cloud APIs\n- ✅ Improved API interface for hybrid (local + cloud) inference pipelines.\n- ✅ Updated documentation for cloud setup and API key management.\n- ✅ **Auto Config** - Config file is now auto-generated from \`config.json.example\` during setup. 
([config.json change](docs/getting-started/CONFIG_SETUP.md))\n- **Prevents** context_length_exceeded \u0026 rate_limit_exceeded errors\n\n\n## Release highlights v1.4.4 (2025-10-28)\n\n**Adaptive agent hopping system**\n\n- **Automatic Complexity Detection** – LLM + heuristics for LOW/MID/HIGH  \n- **Intelligent Agent Selection** – SearchAgent for simple, MultiHopAgent for complex queries  \n- **Confidence-Based Escalation** – Auto upgrade when confidence \u003c 0.5  \n- **Resource Monitoring** – Dynamic load management  \n- **Adaptive system** – Powers CLI queries with agent selection and escalation  \n- **Bug Fixes \u0026 Improvements** – MultiHopAgent robustness, Windows console support  \n\n\n## Release highlights v1.4.3 (2025-10-27)\n\n**🌍 Complete English Translation:**\n- ✅ **System Prompts** - All AI prompts translated to English (agent, OSINT, multi-hop reasoning)\n- ✅ **UI Messages** - All user-facing messages, errors, and help text\n- ✅ **GitHub Templates** - Bug reports, feature requests, documentation issues, pull request templates\n- ✅ **Documentation** - Docstrings, comments, and script descriptions\n- ✅ **26 Files Updated** - Comprehensive translation across entire codebase\n- ✅ **Functionality Preserved** - German regex patterns and multilingual features maintained\n\n## Release highlights v1.4.2 (2025-10-26)\n\n**Major Changes:**\n- **Memory store deletion**: Full CRUD functionality with `forget` command\n- **OSINT parser fixes**: Memory operators now take precedence over standard operators\n- **Phone pattern fix**: Phone numbers with extensions (e.g., 040-822268-0) are correctly parsed\n- **Live dashboard updates**: Memory Store panel updates in real-time\n- **API starter scripts**: New `run_api.bat` / `run_api.sh` for quick FastAPI server startup\n\n**Forget Command Syntax:**\n```bash\nforget email:test@example.com        # Delete specific email\nforget phone:+491234567890           # Delete phone number\nforget ip:192.168.1.1                # 
Delete IP address\nforget username:johndoe              # Delete username\nforget category:emails               # Delete all emails\nforget category:phones               # Delete all phone numbers\nforget all:true                      # Delete entire memory store\n```\n\n**Start API Server:**\n```bash\n# Windows\nrun_api.bat\n\n# Linux/macOS\n./run_api.sh\n\n# Or manually\npython app.py\n```\nThen open in browser: http://localhost:8000/docs\n\n### Health monitoring dashboard\nThe integrated health module offers **a unified dashboard** with two modes:\n\n#### Usage:\n```bash\n# Windows\nhealth-dashboard.bat\n\n# Linux/macOS\n./health-dashboard.sh\n\n# Directly with Python (Interactive Menu)\npython health-dashboard.py\n\n# Directly to Live Monitor\npython health-dashboard.py --monitor\n\n# Directly to Test Dashboard\npython health-dashboard.py --tests\n```\n\n#### Mode 1: Live system monitor\nReal-time monitoring with rich terminal UI:\n- **Live System Metrics** - CPU, RAM, disk, network in real-time\n- **Component Health Checks** - LLM, cache, RAG, tools automatically checked\n- **Performance Tracking** - Response times, throughput, percentiles\n- **Alert System** - Automatic warnings for threshold exceedances\n- **Rich Terminal UI** - Color-coded status displays with live updates\n\n#### Mode 2: Test dashboard (GUI)\nTkinter-based GUI for test management:\n- ✅ Automatic test detection\n- ✅ Run individual or all tests\n- ✅ Real-time progress tracking\n- ✅ Detailed error logs\n- ✅ Export (JSON/HTML)\n\n**See:** [Health Monitoring Guide](docs/health/HEALTH_MONITORING.md) for details and programmatic usage\n\n**OSINT Usage:**\n```bash\n# Email intelligence\nemail:test@example.com\n\n# Phone intelligence\nphone:\"+49 151 12345678\"\n\n# IP intelligence\nip:8.8.8.8\n\n# Batch processing (NEW in v1.4.1!)\nemail:test@example.com user@domain.com admin@site.com\nphone:+491234567890 +441234567890 +331234567890\n\n# Memory Store (NEW in v1.4.2!)\nremember 
email:test@example.com      # Store email\nrecall emails                        # Retrieve all emails\nforget email:test@example.com        # Delete specific email\nforget category:emails               # Delete all emails\nforget all:true                      # Delete entire memory store\n\n# Advanced search\nsite:github.com inurl:python filetype:md\n\n# Combined operators\nemail:john@example.com site:linkedin.com inurl:profile\n```\n\n**See:** [OSINT Usage Guide](docs/osint/OSINT_USAGE.md) | [OSINT Module README](core/osint/README.md)\n\n### Security \u0026 robustness\n- ✅ **Domain Blacklist** - Protection against unwanted domains\n- **Rate limiting** - 1 request/second + robots.txt checks\n- **Retry logic** - Exponential backoff with tenacity (NEW v1.3: also for LLM client)\n- **Fallback system** - Automatic fallbacks for API failures\n- 🔐 **Secure Config** - Encrypted API key storage\n- **Output validation** - Sanitization of LLM outputs\n- 💾 **Smart Caching** - LRU eviction at max_size_mb (NEW v1.3)\n\n\n## Images\n\n### Health Dashboard - Live System Monitor\nReal-time monitoring with rich terminal UI displaying system metrics, component health, and performance tracking.\n\n![Health Dashboard](images/screenshots/health.png)\n\n### Interactive CLI Interface\nCrawlLama's adaptive intelligence system with automatic agent selection and interactive commands.\n\n![CLI Interface](images/screenshots/main.png)\n\n### Test Dashboard GUI\nTkinter-based test management interface with automatic test detection and real-time progress tracking.\n\n![Test Dashboard](images/screenshots/test-dashboard.png)\n\n\n\n\n## Quickstart\n\n### Downloads\n\n**Pre-built Releases (recommended for quick start):**\n\n| Version | Download | VirusTotal Check |\n|---------|----------|------------------|\n| **v1.4 Preview** | [Crawllama-1.4-preview.zip](https://github.com/arn-c0de/Crawllama/releases/download/v.1.4_Preview/Crawllama-1.4-preview.zip) | [VirusTotal 
Scan](https://www.virustotal.com/gui/url/dadd0eb337f8c30dc66134248399ebd990c1b11f3a950b6b752d5d567be45127) |\n\nAll downloads include VirusTotal scans confirming no malware.  \nPlug \u0026 Play: extract and start (Ollama + Python required)\n\n## Installation\n\n**Windows:**\n1. Download [Crawllama-1.4-preview.zip](https://github.com/arn-c0de/Crawllama/releases/download/v.1.4_Preview/Crawllama-1.4-preview.zip)\n2. Extract to any folder (e.g., \`C:\\Crawllama\`)\n3. Install Ollama from [ollama.ai/download](https://ollama.ai/download)\n4. Start Ollama and load a model:\n   \`\`\`cmd\n   ollama serve\n   ollama pull qwen3:4b\n   \`\`\`\n5. In the Crawllama folder:\n   \`\`\`cmd\n   setup.bat\n   run.bat\n   \`\`\`\n\n**Linux/macOS:**\n1. Download and extract:\n   \`\`\`bash\n   wget https://github.com/arn-c0de/Crawllama/releases/download/v.1.4_Preview/Crawllama-1.4-preview.zip\n   unzip Crawllama-1.4-preview.zip\n   cd Crawllama-1.4\n   \`\`\`\n2. Install Ollama:\n   \`\`\`bash\n   curl -fsSL https://ollama.ai/install.sh | sh\n   ollama serve \u0026\n   ollama pull qwen3:4b\n   \`\`\`\n3. Setup and start:\n   \`\`\`bash\n   chmod +x setup.sh run.sh\n   ./setup.sh\n   ./run.sh\n   \`\`\`\n\n---\n\n### Option 1: Setup Scripts (Recommended for Git Installation)\n\n**Windows:**\n\`\`\`cmd\nsetup.bat\n\`\`\`\n\n**Linux/macOS:**\n\`\`\`bash\nchmod +x setup.sh\n./setup.sh\n\`\`\`\n\nNote: During the initial setup, you must select at least one LLM model. 
If a model is already installed, you can skip this step—otherwise, selection is required to avoid errors in the test program.\n\nThe setup script:\n- ✅ Checks Python version (3.10+)\n- ✅ Creates virtual environment\n- ✅ Lets you select features and LLM models to install (core is always installed)\n- ✅ Installs all selected dependencies\n- ✅ Creates necessary directories\n- ✅ Copies `.env.example` to `.env`\n- ✅ Checks Ollama status\n\nNote for initial installation:\n\nWhen running `pip install -r requirements.txt` for the first time within the newly created virtual environment, installing all dependencies—especially packages like `torch`, `sentence-transformers`, and scientific libraries—may take **5–10 minutes** (or longer, depending on connection and hardware). Please wait until the process completes; afterward, the virtual environment is ready for use.\n\nNote on disk space: After installation (including `venv`), the project typically requires about **1.2–1.5 GB** of free disk space (v1.4: ~1.23 GB). This value may vary significantly depending on the operating system, Python packages (e.g., larger PyTorch/CUDA wheels), and additional models. Plan for ample additional space if storage is limited.\n\nModel download sizes (approximate):\n\n- `qwen3:4b` — ~**2–4 GB** (depending on format/quantization)\n- `qwen3:8b` — ~**8–12 GB**\n- `deepseek-r1:8b` — ~**6–10 GB**\n- `llama3:7b` — ~**6–9 GB**\n- `mistral:7b` — ~**4–8 GB**\n- `phi3:14b` — ~**12–20+ GB**\n\nNote: Model sizes vary significantly depending on the provider, format (FP16, INT8 quantization, etc.), and additional assets. Quantized models (e.g., INT8) can significantly reduce size, while FP32/FP16 or models with additional tokenizer/vocab files require more space. 
Plan for sufficient additional storage if using larger models or multiple models simultaneously.\n\n### Option 2: Manual Installation\n\n**Prerequisites:**\n- Python 3.10+ ([python.org](https://www.python.org/downloads/))\n- Git ([git-scm.com](https://git-scm.com/downloads))\n- Ollama ([ollama.ai/download](https://ollama.ai/download))\n\n**Windows - Step by Step:**\n\n```cmd\n# 1. Clone repository\ngit clone https://github.com/arn-c0de/Crawllama.git\ncd Crawllama\n\n# 2. Create virtual environment\npython -m venv venv\nvenv\\Scripts\\activate\n\n# 3. Install dependencies (takes 5-10 min)\npip install -r requirements.txt\n\n# 4. Create directories\nmkdir data\\cache data\\embeddings data\\history logs plugins\n\n# 5. Configuration\ncopy .env.example .env\nnotepad .env  # Optional: Add API keys\n\n# 6. Start Ollama (separate terminal)\nollama serve\n\n# 7. Load model (separate terminal)\nollama pull qwen3:4b\n\n# 8. Start Crawllama\npython main.py --interactive\n```\n\n**Linux/macOS - Step by Step:**\n\n```bash\n# 1. Clone repository\ngit clone https://github.com/arn-c0de/Crawllama.git\ncd Crawllama\n\n# 2. Create virtual environment\npython3 -m venv venv\nsource venv/bin/activate\n\n# 3. Install dependencies (takes 5-10 min)\npip install -r requirements.txt\n\n# 4. Create directories\nmkdir -p data/cache data/embeddings data/history logs plugins\n\n# 5. Configuration\ncp .env.example .env\nnano .env  # Optional: Add API keys\n\n# 6. Install and start Ollama\ncurl -fsSL https://ollama.ai/install.sh | sh\nollama serve \u0026\n\n# 7. Load model\nollama pull qwen3:4b\n\n# 8. 
Start Crawllama\npython main.py --interactive\n```\n\n**Troubleshooting Installation:**\n\n| Problem | Solution |\n|---------|--------|\n| `python not found` | Install Python 3.10+: [python.org](https://www.python.org/downloads/) |\n| `pip install` fails | Run `python -m pip install --upgrade pip` |\n| `ollama: command not found` | Install Ollama: [ollama.ai/download](https://ollama.ai/download) |\n| `Connection refused` (Ollama) | Start Ollama: `ollama serve` |\n| `ModuleNotFoundError` | Activate virtual environment: `venv\\Scripts\\activate` (Win) or `source venv/bin/activate` (Linux) |\n| Disk space full | Ensure at least 5 GB free for venv + model |\n\n---\n\n### Option 3: Git Clone (Quick Installation)\n\n```bash\n# 1. Clone\ngit clone https://github.com/arn-c0de/Crawllama.git\ncd Crawllama\n\n# 2. Virtual Environment\npython -m venv venv\nsource venv/bin/activate  # Linux/macOS\nvenv\\Scripts\\activate     # Windows\n\n# 3. Dependencies\npip install -r requirements.txt\n\n# 4. Directories\nmkdir -p data/cache data/embeddings data/history logs plugins\n\n# 5. Config\ncp .env.example .env\n```\n\n### Ollama Setup\n\n```bash\n# Install Ollama\ncurl -fsSL https://ollama.ai/install.sh | sh  # Linux/macOS\n# or from https://ollama.ai/download           # Windows\n\n# Start Ollama\nollama serve\n\n# Load model\nollama pull qwen3:4b\n# Alternative: deepseek-r1:8b, llama3:7b, mistral\n```\n\n## Usage\n\n\u003e **Note:**  \n\u003e The first start may take significantly longer than subsequent starts!  \n\u003e Initialization, dependency installation, and model downloads may take several minutes, depending on hardware and internet connection.  \n\u003e After the first successful start, all subsequent starts are significantly faster.\n\n### 1. 
CLI - Interactive Mode\n\n```bash\npython main.py --interactive\n\n# Or with setup script\nrun.bat           # Windows\n./run.sh          # Linux/macOS\n```\n\n```\n╭──────────────────────────────────────────────────────────────╮\n│ CrawlLama - Local Search and Response Agent                  │\n│ Commands:                                                    │\n│   clear       - Reset session (history + cache)              │\n│   clear-cache - Clear cache only                             │\n│   save        - Manually save session                        │\n│   load        - Reload session                               │\n│   stats       - Display statistics                           │\n│   status      - Show context usage                           │\n│   settings    - Show/edit settings                           │\n│   restart     - Restart agent (reload config)                │\n│   exit, quit  - Exit                                         │\n╰──────────────────────────────────────────────────────────────╯\n\n❯ What is Machine Learning?\n```\n\n**New Commands:**\n\n- `status` - Shows token usage and available context capacity\n  ```\n  ❯ status\n\n            Context Usage Tracker\n  ┏━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┓\n  ┃ Source            ┃    Tokens ┃    Share  ┃\n  ┡━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━┩\n  │ Conversation      │       850 │      8.5% │\n  │ Search Results    │       320 │      3.2% │\n  │ Total Used        │     1,170 │     11.7% │\n  │ Available         │     8,830 │     88.3% │\n  │ Maximum           │    10,000 │      100% │\n  └───────────────────┴───────────┴───────────┘\n  ```\n\n- `settings` - Interactive configuration editor\n  ```\n  ❯ settings\n\n  Displays all settings and allows:\n  • Category selection (llm, search, rag, cache, osint, all)\n  • Change LLM model (qwen3:8b, deepseek-r1:8b, etc.)\n  • Adjust temperature (0.0-1.0)\n  • Configure max tokens (now 16,000 for RTX 3080+)\n  • Change search region (de-de, 
us-en, wt-wt)\n  • Configure OSINT max results \u0026 rate limits\n  • Enable/disable RAG\n  • Enable/disable cache\n  • Save changes directly to config.json\n  • Auto-restart after saving (optional)\n  ```\n\n- `restart` - Restart agent\n  ```\n  ❯ restart\n\n  • Reloads config.json\n  • Fully reinitializes agent\n  • Optional session preservation\n  • No session interruption\n  ```\n\n### 2. Health Monitoring Dashboard\n\n```bash\n# Windows\nhealth-dashboard.bat\n\n# Linux/macOS\npython health-dashboard.py\n```\n\nThe dashboard displays:\n- ✅ System health (CPU, RAM, disk, network)\n- ✅ Component status (LLM, cache, RAG, tools)\n- ✅ Performance metrics (response times)\n- ✅ Error log (last 10 errors)\n- ✅ Auto-refresh (every 5 seconds)\n\nInteractive commands:\n- `r` - Refresh (manual)\n- `c` - Clear error log\n- `t` - Run component tests\n- `q` - Quit\n\n### 3. How does intelligent search work?\n\nThe agent automatically decides **when and how** to search:\n\n#### Automatic decision\n```\n❯ Who is the current German Chancellor?\n\n1. LLM analyzes: \"Requires current info\" ✓\n2. Agent performs web search\n3. LLM processes search results\n4. 
Agent delivers up-to-date response\n\`\`\`\n\n#### Search operators for targeted searches\n\n**OSINT Search Operators:**\n\`\`\`bash\n# Domain-specific search\n❯ site:github.com machine learning\n\n# Email Intelligence\n❯ email:john.doe@company.com\n\n# Phone Intelligence\n❯ phone:\"+49 151 12345678\"\n\n# IP Intelligence (NEW!)\n❯ ip:8.8.8.8\n❯ 192.168.1.1  # Auto-detects as IP\n\n# Social Media Intelligence (12 Platforms)\n❯ username:elonmusk\n❯ @microsoft\n❯ github  # Auto-detects as username\n\n# File format search\n❯ site:example.com filetype:pdf\n\n# URL filter\n❯ inurl:documentation python\n\n# Text in content\n❯ intext:\"contact email\" site:example.com\n\`\`\`\n\n**Combined Searches:**\n\`\`\`bash\n# Multiple operators\n❯ site:linkedin.com inurl:profile \"software engineer\"\n\n# Exclusion with minus\n❯ python programming -java\n\n# OR conjunction\n❯ site:github.com OR site:gitlab.com \"machine learning\"\n\`\`\`\n\nSee **[OSINT Usage Guide](docs/osint/OSINT_USAGE.md)** for all features.\n\n### 4. CLI - Direct Queries\n\n\`\`\`bash\n# Standard query (agent decides automatically if web search is needed)\npython main.py \"What is Python?\"\n\n# Multi-Hop Reasoning (for complex queries)\npython main.py --multihop \"Compare Python and JavaScript for web development\"\n\n# Offline mode (no web search, only LLM knowledge)\npython main.py --no-web \"Explain photosynthesis\"\n\n# OSINT search with search operators\npython main.py \"site:github.com python projects\"\npython main.py \"email:contact@example.com\"\n\n# With specific model\npython main.py --model llama3:7b \"What did Einstein discover?\"\n\`\`\`\n\n### 5. 
FastAPI Server

```bash
# Start server
python app.py

# Or via the starter scripts
run_api.bat      # Windows
./run_api.sh     # Linux/macOS

# Or manually
uvicorn app:app --host 0.0.0.0 --port 8000
```

**API Documentation:** http://localhost:8000/docs

**Available Endpoints:**

**Query & Reasoning:**
- `POST /query` - Execute standard or multi-hop queries
- `POST /osint/query` - OSINT queries with operators (email:, phone:, ip:, etc.)

**Memory Store (CRUD):**
- `GET /memory` - Retrieve all stored entries
- `POST /memory/remember` - Store a value (email, phone, ip, username, domain, note)
- `GET /memory/recall/{category}` - Retrieve a category (emails, phones, ips, etc.)
- `DELETE /memory/forget` - Delete individual values, categories, or everything
- `GET /memory/stats` - Memory store statistics

**Session Management:**
- `POST /session/clear` - Reset session
- `POST /session/save` - Save session
- `POST /session/load` - Load session

**Cache:**
- `POST /cache/clear` - Clear cache
- `GET /cache/stats` - Cache statistics

**Configuration:**
- `GET /config` - Retrieve current configuration
- `PATCH /config` - Modify configuration (llm, search, rag, cache, osint)
- `GET /context/status` - Token usage & context status

**Plugins & Tools:**
- `GET /plugins` - List available plugins
- `POST /plugins/{name}/load` - Load plugin
- `POST /plugins/{name}/unload` - Unload plugin
- `GET /tools` - List available tools

**System:**
- `GET /health` - Health check (agent, monitoring, components)
- `GET /stats` - System statistics (agent stats, resources, performance)
- `GET /security-info` - Security configuration (rate limits, features)

**API Security (v1.4.2+):**

The API is protected by default with multiple security features:

- ✅ **API Key Authentication** - X-API-Key header required
- ✅ **Rate Limiting** - 60 requests/minute (configurable)
- ✅ **Input Validation** - Pydantic-based validation
- ✅ **Query Sanitization** - Protection against injection attacks
- ✅ **Request Logging** - All requests are logged
- ✅ **CORS Protection** - Configurable origins
- ✅ **Trusted Host Middleware** - Host header validation

**Setup:**
```bash
# 1. Set API key in .env
CRAWLLAMA_API_KEY=your_secure_api_key_min_32_chars

# 2. For local development (without API key)
CRAWLLAMA_DEV_MODE=true

# 3. Adjust rate limit (optional)
RATE_LIMIT=100

# 4. Configure CORS origins (optional)
ALLOWED_ORIGINS=http://localhost:3000,http://localhost:8080
```

**Usage with API Key:**
```bash
# With API key header
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your_api_key_here" \
  -d '{"query": "test"}'

# Or in dev mode (without API key)
export CRAWLLAMA_DEV_MODE=true
python app.py
```

**Example Requests:**

```bash
# Note: add -H "X-API-Key: ..." to each request unless CRAWLLAMA_DEV_MODE=true

# Standard query (the agent uses web search automatically if needed)
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What is Machine Learning?",
    "use_multihop": false
  }'

# Multi-hop query (for complex analyses)
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Compare Python and JavaScript",
    "use_multihop": true,
    "max_hops": 3
  }'

# OSINT search with search operators
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "site:github.com python machine-learning",
    "use_multihop": false
  }'

# Retrieve statistics
curl http://localhost:8000/stats

# List plugins
curl http://localhost:8000/plugins

# Load a plugin
curl -X POST http://localhost:8000/plugins/example_plugin/load
```

## CLI commands & options

### Basic Options
| Option | Description |
|--------|-------------|
| `--interactive` | Interactive mode |
| `--debug` | Enable debug logging |
| `--no-web` | Offline mode (no web search) |
| `--model MODEL` | Choose Ollama model |
| `--stats` | Display system statistics |
| `--clear-cache` | Clear cache |

### Advanced Options (v1.1)
| Option | Description |
|--------|-------------|
| `--multihop` | Enable multi-hop reasoning |
| `--max-hops N` | Max reasoning steps (1-5) |
| `--api` | Start API server |
| `--plugins` | List available plugins |
| `--load-plugin NAME` | Load plugin |
| `--help-extended` | Show extended help |
| `--examples` | Show usage examples |
| `--setup-keys` | Securely set up API keys |

### Interactive Commands
| Command | Description |
|---------|-------------|
| `exit`, `quit` | Exit program |
| `clear` | Clear screen |
| `stats` | Display statistics |
| `help` | Show help |

## REST API

CrawlLama provides a complete REST API for integration into custom applications.

### Start API Server

**Windows:**
```cmd
run_api.bat
```

**Linux/macOS:**
```bash
./run_api.sh
```

Or manually:
```bash
uvicorn app:app --host 0.0.0.0 --port 8000
```

### Quickstart

**1. Start API Server**
```bash
run_api.bat      # Windows (use ./run_api.sh on Linux/macOS)
```

**2. Open API Documentation**
- Interactive Docs: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc

**3. Send Query**
```bash
curl -X POST http://localhost:8000/query \
  -H "X-API-Key: your-key" \
  -H "Content-Type: application/json" \
  -d '{"query": "What is Python?", "use_tools": false}'
```

### Key Endpoints

- `POST /query` - Execute queries (with/without web search, multi-hop)
- `GET /health` - Health check
- `GET /stats` - System statistics
- `POST /memory/remember` - Store data (OSINT)
- `GET /memory/recall/{category}` - Retrieve data
- `GET /plugins` - List plugins
- `POST /cache/clear` - Clear cache

### Authentication

Set the API key in `.env`:
```bash
CRAWLLAMA_API_KEY=your-secret-key-here
```

Or for testing:
```bash
CRAWLLAMA_DEV_MODE=true
```

### Full Documentation

[API Usage Guide](docs/API_USAGE.md) - Complete API documentation with examples

## Project structure

👉 The complete and up-to-date project structure can be found here: [docs/development/PROJECT_STRUCTURE.md](docs/development/PROJECT_STRUCTURE.md)

## Configuration

### config.json

```json
{
  "llm": {
    "base_url": "http://127.0.0.1:11434",
    "model": "qwen3:8b",
    "temperature": 0.7,
    "max_tokens": 10000,
    "stream": true
  },
  "search": {
    "provider": "duckduckgo",
    "max_results": 5,
    "timeout": 10
  },
  "rag": {
    "enabled": true,
    "batch_size": 100,
    "max_workers": 4
  },
  "cache": {
    "enabled": true,
    "ttl_hours": 24,
    "max_size_mb": 500,
    "clear_on_startup": false
  },
  "osint": {
    "max_results": 20,
    "email_search_limit": 50,
    "phone_search_limit": 50,
    "general_osint_limit": 100
  },
  "multihop": {
    "enabled": true,
    "max_hops": 3,
    "confidence_threshold": 0.7,
    "enable_critique": true
  },
  "plugins": {
    "example_plugin": {
      "enabled": true
    }
  },
  "security": {
    "rate_limit": 1.0,
    "max_context_length": 8000,
    "check_robots_txt": true
  }
}
```

**Recommended `max_tokens` Settings:**

| GPU/Hardware | Recommended max_tokens | Model |
|--------------|------------------------|-------|
| RTX 3080+ (10GB+) | 10,000 - 16,000 | qwen3:8b, deepseek-r1:8b |
| RTX 3060/3070 (8GB) | 6,000 - 8,000 | qwen3:4b, llama3:7b |
| CPU only | 2,000 - 4,000 | qwen3:4b |

**Tip:** Use the `status` command to monitor your token usage in real time!

### .env (Optional)

```bash
# API keys (optional)
BRAVE_API_KEY=your_brave_api_key
SERPER_API_KEY=your_serper_api_key

# Proxy (optional)
HTTP_PROXY=http://proxy:port
HTTPS_PROXY=https://proxy:port
```

## Testing

```bash
# All tests
pytest tests/ -v

# With coverage
pytest --cov=core --cov=tools --cov=utils tests/

# Specific tests
pytest tests/test_multihop_reasoning.py -v
pytest tests/test_error_simulation.py -v

# With debug output
pytest tests/ -v --log-cli-level=INFO
```

## Plugin development

### Creating a Simple Plugin

```python
# plugins/my_plugin.py

from core.plugin_manager import Plugin, PluginMetadata

class MyPlugin(Plugin):
    def get_metadata(self) -> PluginMetadata:
        return PluginMetadata(
            name="MyPlugin",
            version="1.0.0",
            description="My custom plugin",
            author="Your Name",
            dependencies=[]
        )

    def get_tools(self):
        return [self.my_tool]

    def my_tool(self, text: str) -> str:
        # Parameter named `text` to avoid shadowing the built-in `input`
        return f"Processed: {text}"
```

**See:** [Plugin Tutorial](docs/guides/PLUGIN_TUTORIAL.md) for details

## Technology stack

### Core
- **LLM**: Ollama (qwen3:4b, deepseek-r1:8b, llama3, mistral)
- **Orchestration**: LangGraph (multi-hop reasoning)
- **Web Search**: duckduckgo-search, Brave API, Serper API
- **RAG**: ChromaDB + Sentence Transformers

### Backend
- **API**: FastAPI + Uvicorn
- **Database**: SQLite (sessions)
- **Async**: aiohttp, asyncio
- **Monitoring**: psutil

### Utils
- **HTML Parsing**: BeautifulSoup4
- **CLI**: Rich (formatting)
- **Retry**: Tenacity
- **Security**: cryptography

### Development
- **Tests**: pytest, pytest-mock, pytest-cov
- **CI/CD**: GitHub Actions (planned)

## Documentation

### User Guides
- [Installation Guide](docs/getting-started/INSTALLATION.md) - Detailed installation
- [LangGraph Guide](docs/guides/LANGGRAPH_GUIDE.md) - Multi-hop reasoning
- [Plugin Tutorial](docs/guides/PLUGIN_TUTORIAL.md) - Plugin development
- 🏥 [Health Monitoring](docs/health/HEALTH_MONITORING.md) - System monitoring

### Developer Docs
- [Project Structure](docs/development/PROJECT_STRUCTURE.md) - Project overview
- [Release Process](docs/development/RELEASE_PROCESS.md) - Release workflow
- Tests - See `tests/` for examples

### API Documentation
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc

## 🌟 Roadmap

### Phase 1: Core ✅ (Completed)
- ✅ Ollama integration
- ✅ Web search (DuckDuckGo)
- ✅ Tool orchestration
- ✅ Basic RAG & caching
- ✅ CLI with Rich

### Phase 2: Robustness ✅ (Completed)
- ✅ Fallback system
- ✅ Retry logic with tenacity
- ✅ Rate limiting & robots.txt
- ✅ Domain blacklist
- ✅ Safe fetch with proxy support
- ✅ Multi-source web search
- ✅ Comprehensive tests (80%+ coverage)

### Phase 3: Intelligence ✅ (Completed - v1.1)
- ✅ Multi-hop reasoning with LangGraph
- ✅ RAG optimizations (batch, multi-query, hybrid)
- ✅ Parallelization (ThreadPoolExecutor)
- ✅ Lazy loading for tools/plugins
- ✅ Async HTTP operations
- ✅ RAM & performance monitoring

### Phase 4: Production ✅ (Completed - v1.1)
- ✅ FastAPI REST API
- ✅ Multi-user support (SQLite)
- ✅ Plugin system
- ✅ Enhanced CLI
- ✅ Setup scripts (Windows/Linux)
- ✅ Systemd service
- ✅ Comprehensive documentation

### Phase 5: Future 📅 (Planned)
- [ ] GUI (Streamlit/Gradio)
- [ ] GraphQL API
- [ ] Redis cache for production
- [ ] Kubernetes deployment
- [ ] Monitoring dashboard
- [ ] Multi-language support
- [ ] Voice interface

## Contributing

Contributions are welcome!

**Development Workflow:**
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Create a pull request

**Coding Standards:**
- PEP 8 compliant
- Use type hints
- Docstrings for all functions
- Tests for new features

## 📊 Performance

### Benchmarks (on i7-8700K, 32GB RAM)

| Operation | Average | Notes |
|-----------|---------|-------|
| Standard Query | 2-5s | Without web search |
| Query with Web Search | 5-10s | 3-5 results |
| Multi-Hop (3 Hops) | 15-30s | Complex queries |
| RAG Search | <1s | 5 results |
| API Request | <100ms | Without tools |

### Resources

- **RAM**: 200-500 MB (standard), 500-800 MB (with RAG)
- **CPU**: 10-30% (idle), 50-80% (active)
- **Disk**: ~100 MB (code), variable (cache/embeddings)

## Legal notices

### Web Scraping
- ✅ Respects `robots.txt`
- ✅ Rate limiting (1 req/s by default)
- ✅ Identifiable user agent
- Users are responsible for compliance with local laws

### Data Privacy
- ✅ All data is processed locally
- ✅ No cloud services
- ✅ Full control over logs/cache
- ✅ Session data encrypted (optional)

### API Keys
- Brave Search API: [brave.com/search/api](https://brave.com/search/api)
- Serper API: [serper.dev](https://serper.dev)

## 🆘 Troubleshooting

### Ollama not reachable
```bash
# Check status
curl http://127.0.0.1:11434/api/tags

# Start Ollama
ollama serve
```

### Import errors
```bash
# Reinstall dependencies
pip install -r requirements.txt

# Or re-run setup
./setup.sh  # or setup.bat
```

### ChromaDB errors
```bash
# Delete embeddings
rm -rf data/embeddings/

# Restart
python main.py
```

### API rate limits

Adjust `security.rate_limit` in `config.json` (requests per second):
```json
"security": {
  "rate_limit": 2.0
}
```

## 💬 Support & Community

- 🐛 **Issues**: [GitHub Issues](https://github.com/arn-c0de/Crawllama/issues)
- **Support**: [crawllama.support@protonmail.com](mailto:crawllama.support@protonmail.com)
- **Security/Leaks**: [crawllama.support@protonmail.com](mailto:crawllama.support@protonmail.com) (encrypted via Proton Mail)

## License

**Crawllama License (Non-Commercial)** - Free to use and develop; commercial sale is not permitted.

✅ **Allowed:**
- Personal use
- Education & research
- Modification & sharing (non-commercial)
- Contributions to the project

❌ **Not Allowed:**
- Sale of the software
- Commercial use
- Integration into paid products

See [LICENSE](LICENSE) for full details.

## 🙏 Credits

Built with:
- [Ollama](https://ollama.ai) - Local LLMs
- [LangGraph](https://github.com/langchain-ai/langgraph) - Agent orchestration
- [FastAPI](https://fastapi.tiangolo.com) - REST API
- [ChromaDB](https://www.trychroma.com) - Vector database
- [Rich](https://github.com/Textualize/rich) - Terminal formatting

## Further documentation

- **[Documentation Overview](docs/README.md)**
- **Quickstart & Installation**
  - [QUICKSTART.md](docs/getting-started/QUICKSTART.md) – 5-minute quickstart
  - [INSTALLATION.md](docs/getting-started/INSTALLATION.md) – Detailed installation
- **Feature Guides**
  - [LANGGRAPH_GUIDE.md](docs/guides/LANGGRAPH_GUIDE.md) – Multi-Hop Reasoning
  - [OSINT_USAGE.md](docs/osint/OSINT_USAGE.md) – OSINT Features
  - [OSINT_CONTEXT_USAGE.md](docs/osint/OSINT_CONTEXT_USAGE.md) – OSINT Context Usage
  - [SOCIAL_INTELLIGENCE.md](docs/SOCIAL_INTELLIGENCE.md) – Social Intelligence
  - [PLUGIN_TUTORIAL.md](docs/guides/PLUGIN_TUTORIAL.md) – Plugin Development
  - [HALLUCINATION_DETECTION.md](docs/HALLUCINATION_DETECTION.md) – Hallucination Detection
  - [SEARCH_LIMITATIONS.md](docs/SEARCH_LIMITATIONS.md) – Search Limitations
- **Health Monitoring**
  - [HEALTH_MONITORING.md](docs/HEALTH_MONITORING.md) – Health System
  - [HEALTH_DASHBOARD.md](docs/HEALTH_DASHBOARD.md) – Dashboard Usage
  - [HEALTH_FEATURES.md](docs/HEALTH_FEATURES.md) – Available Features
  - [DASHBOARD_STARTER.md](docs/DASHBOARD_STARTER.md) – Dashboard Starter
- **Maintainer Docs**
  - [RELEASE_PROCESS.md](docs/RELEASE_PROCESS.md) – Release Workflow
  - [SECRET_LEAK_RESPONSE.md](docs/SECRET_LEAK_RESPONSE.md) – Secret Leak Response Plan
  - [PRE_RELEASE_CHECK.md](docs/PRE_RELEASE_CHECK.md) – Pre-Release Checklist
  - [PROJECT_STRUCTURE.md](docs/PROJECT_STRUCTURE.md) – Project Structure

---

*Last Updated: 2026-02-07*
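
## Appendix: Python client sketch

The curl examples in the REST API section translate directly into a small stdlib-only Python client. This is a minimal sketch against the `POST /query` endpoint, `X-API-Key` header, and `use_multihop`/`max_hops` fields documented above; the helper names (`build_query`, `ask`), the environment-variable fallbacks, and the shape of the JSON response are illustrative assumptions, not part of the project's official API surface.

```python
import json
import os
import urllib.request

# Assumed defaults; override via environment variables
API_URL = os.environ.get("CRAWLLAMA_API_URL", "http://localhost:8000")
API_KEY = os.environ.get("CRAWLLAMA_API_KEY", "")

def build_query(text: str, use_multihop: bool = False, max_hops: int = 3) -> dict:
    """Build the JSON body for POST /query.

    max_hops is only included for multi-hop queries, mirroring the
    curl examples in the README.
    """
    body = {"query": text, "use_multihop": use_multihop}
    if use_multihop:
        body["max_hops"] = max_hops
    return body

def ask(text: str, **kwargs) -> dict:
    """POST the query and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{API_URL}/query",
        data=json.dumps(build_query(text, **kwargs)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": API_KEY,  # not needed when CRAWLLAMA_DEV_MODE=true
        },
        method="POST",
    )
    # urlopen raises HTTPError on 401 (bad key) or 429 (rate limited)
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Usage: `ask("Compare Python and JavaScript", use_multihop=True, max_hops=3)` sends the same request as the multi-hop curl example above.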