{"id":35143360,"url":"https://github.com/alob-mtc/runnerq","last_synced_at":"2026-03-04T22:02:32.000Z","repository":{"id":311496473,"uuid":"1043876625","full_name":"alob-mtc/runnerq","owner":"alob-mtc","description":"A robust, scalable activity queue and worker system for Rust applications with pluggable storage backends.","archived":false,"fork":false,"pushed_at":"2026-03-01T10:30:51.000Z","size":989,"stargazers_count":23,"open_issues_count":8,"forks_count":4,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-03-01T14:38:52.690Z","etag":null,"topics":["async-await","background-jobs","dead-letter-queue","delayed-jobs","distributed-systems","durable-execution","event-driven","job-queue","observability","orchestration","pluggable-backends","postgresql","redis","retry-mechanism","rust","scheduling","sqlx","task-queue","tokio","worker-pool"],"latest_commit_sha":null,"homepage":"","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/alob-mtc.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-08-24T19:59:49.000Z","updated_at":"2026-03-01T10:28:09.000Z","dependencies_parsed_at":"2025-08-24T23:31:57.515Z","dependency_job_id":"d61f69ca-1b0d-493d-90ee-8cd75129c1c5","html_url":"https://github.com/alob-mtc/runnerq","commit_stats":null,"previous_names":["alob-mtc/runnerq"],"tags_count":16,"template":false,"template_full_name":null,"purl":"pkg:github/alob-mtc/runnerq","repository_url":"https://repos.ecosyste.ms/api
/v1/hosts/GitHub/repositories/alob-mtc%2Frunnerq","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/alob-mtc%2Frunnerq/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/alob-mtc%2Frunnerq/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/alob-mtc%2Frunnerq/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/alob-mtc","download_url":"https://codeload.github.com/alob-mtc/runnerq/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/alob-mtc%2Frunnerq/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30092886,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-04T20:42:30.420Z","status":"ssl_error","status_checked_at":"2026-03-04T20:42:30.057Z","response_time":59,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["async-await","background-jobs","dead-letter-queue","delayed-jobs","distributed-systems","durable-execution","event-driven","job-queue","observability","orchestration","pluggable-backends","postgresql","redis","retry-mechanism","rust","scheduling","sqlx","task-queue","tokio","worker-pool"],"created_at":"2025-12-28T12:57:50.934Z","updated_at":"2026-03-04T22:02:31.990Z","avatar_url":"https://github.com/alob-mtc.png","language":"Rust","readme":"# Runner-Q\n\nA robust, scalable activity queue and worker system for Rust applications with pluggable storage backends.\n\n## Features\n\n- **Pluggable backend system** - Trait-based storage abstraction; PostgreSQL is built-in; Redis available via the `runner_q_redis` crate\n- **Priority-based activity processing** - Support for Critical, High, Normal, and Low priority levels\n- **Activity scheduling** - Precise timestamp-based scheduling for future execution\n- **Intelligent retry mechanism** - Built-in retry mechanism with exponential backoff\n- **Dead letter queue** - Failed activities are moved to a dead letter queue for inspection\n- **Concurrent activity processing** - Configurable number of concurrent workers\n- **Graceful shutdown** - Proper shutdown handling with signal support\n- **Activity orchestration** - Activities can execute other activities for complex workflows\n- **Comprehensive error handling** - Retryable and non-retryable error types\n- **Activity metadata** - Support for custom metadata on activities\n- **Built-in observability 
console** - Real-time web UI for monitoring and managing activities\n- **Worker-level activity type filtering** - Isolate workloads by restricting each engine to specific activity types\n- **Queue statistics** - Monitoring capabilities and metrics collection\n\n## Storage Backends\n\n| Backend | Status | Use Case |\n|---------|--------|----------|\n| **PostgreSQL** | ✅ Built-in | Default. Permanent persistence, SQL-based queries |\n| **Redis** | ✅ Optional | Use the `runner_q_redis` crate for Redis or Valkey |\n| **Custom** | ✅ Supported | Implement `Storage` trait for your own backend |\n\n## Installation\n\n```sh\ncargo add runner_q\n```\n\n## Quick Start\n\n```rust\nuse runner_q::{WorkerEngine, ActivityPriority, ActivityHandler, ActivityContext, ActivityHandlerResult, ActivityError};\nuse std::sync::Arc;\nuse async_trait::async_trait;\nuse serde_json::json;\nuse serde::{Serialize, Deserialize};\nuse std::time::Duration;\n\n// Define activity types\n#[derive(Debug, Clone)]\nenum MyActivityType {\n    SendEmail,\n    ProcessPayment,\n}\n\nimpl std::fmt::Display for MyActivityType {\n    fn fmt(\u0026self, f: \u0026mut std::fmt::Formatter\u003c'_\u003e) -\u003e std::fmt::Result {\n        match self {\n            MyActivityType::SendEmail =\u003e write!(f, \"send_email\"),\n            MyActivityType::ProcessPayment =\u003e write!(f, \"process_payment\"),\n        }\n    }\n}\n\n// Implement activity handler\npub struct SendEmailActivity;\n\n#[async_trait]\nimpl ActivityHandler for SendEmailActivity {\n    async fn handle(\u0026self, payload: serde_json::Value, context: ActivityContext) -\u003e ActivityHandlerResult {\n        // Parse the email data - use ? 
operator for clean error handling\n        let email_data: serde_json::Map\u003cString, serde_json::Value\u003e = payload\n            .as_object()\n            .ok_or_else(|| ActivityError::NonRetry(\"Invalid payload format\".to_string()))?\n            .clone();\n        \n        let to = email_data.get(\"to\")\n            .and_then(|v| v.as_str())\n            .ok_or_else(|| ActivityError::NonRetry(\"Missing 'to' field\".to_string()))?;\n        \n        // Simulate sending email\n        println!(\"Sending email to: {}\", to);\n        \n        // Return success with result data\n        Ok(Some(serde_json::json!({\n            \"message\": format!(\"Email sent to {}\", to),\n            \"status\": \"delivered\"\n        })))\n    }\n\n    fn activity_type(\u0026self) -\u003e String {\n        MyActivityType::SendEmail.to_string()\n    }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct EmailResult {\n    message: String,\n    status: String,\n}\n\n#[tokio::main]\nasync fn main() -\u003e Result\u003c(), Box\u003cdyn std::error::Error\u003e\u003e {\n    use runner_q::storage::PostgresBackend;\n    let backend = PostgresBackend::new(\"postgres://localhost/mydb\", \"my_app\").await?;\n    let engine = WorkerEngine::builder()\n        .backend(std::sync::Arc::new(backend))\n        .queue_name(\"my_app\")\n        .max_workers(8)\n        .schedule_poll_interval(Duration::from_secs(30))\n        .build()\n        .await?;\n\n    // Register activity handler\n    let send_email_activity = SendEmailActivity;\n    engine.register_activity(MyActivityType::SendEmail.to_string(), Arc::new(send_email_activity));\n\n    // Get activity executor for fluent activity execution\n    // Note: You need to get the activity executor to use the fluent API\n    let activity_executor = engine.get_activity_executor();\n    \n    // Execute an activity with custom options\n    let future = activity_executor\n        .activity(\"send_email\")\n        
.payload(json!({\"to\": \"user@example.com\", \"subject\": \"Welcome!\"}))\n        .max_retries(5)\n        .timeout(Duration::from_secs(600))\n        .execute()\n        .await?;\n\n    // Schedule an activity for future execution (10 seconds from now)\n    let scheduled_future = activity_executor\n        .activity(\"send_email\")\n        .payload(json!({\"to\": \"user@example.com\", \"subject\": \"Reminder\"}))\n        .max_retries(3)\n        .timeout(Duration::from_secs(300))\n        .delay(Duration::from_secs(10))\n        .execute()\n        .await?;\n\n    // Execute an activity with default options\n    let future2 = activity_executor\n        .activity(\"send_email\")\n        .payload(json!({\"to\": \"admin@example.com\"}))\n        .execute()\n        .await?;\n\n    // Spawn a task to handle the result\n    tokio::spawn(async move {\n        if let Ok(result) = future.get_result().await {\n            match result {\n                None =\u003e {}\n                Some(data) =\u003e {\n                    let email_result: EmailResult = serde_json::from_value(data).unwrap();\n                    println!(\"Email result: {:?}\", email_result);\n                }\n            }\n        }\n    });\n\n    // Start the worker engine (this will run indefinitely)\n    engine.start().await?;\n\n    Ok(())\n}\n```\n\n## Builder Pattern API\n\nRunner-Q provides a fluent builder pattern for both `WorkerEngine` configuration and activity execution, making the API more ergonomic and easier to use.\n\n### WorkerEngine Builder\n\n```rust\nuse runner_q::WorkerEngine;\nuse std::time::Duration;\n\n// Basic configuration\nlet engine = WorkerEngine::builder()\n    .redis_url(\"redis://localhost:6379\")\n    .queue_name(\"my_app\")\n    .max_workers(8)\n    .schedule_poll_interval(Duration::from_secs(30))\n    .build()\n    .await?;\n\n// Advanced configuration with Redis config and metrics\nuse runner_q::{RedisConfig, MetricsSink};\nuse std::sync::Arc;\n\nlet 
redis_config = RedisConfig {\n    max_size: 100,\n    min_idle: 10,\n    conn_timeout: Duration::from_secs(60),\n    idle_timeout: Duration::from_secs(600),\n    max_lifetime: Duration::from_secs(3600),\n};\n\n// Custom metrics implementation\nstruct PrometheusMetrics;\nimpl MetricsSink for PrometheusMetrics {\n    fn inc_counter(\u0026self, name: \u0026str, value: u64) {\n        println!(\"Counter {}: {}\", name, value);\n    }\n    fn observe_duration(\u0026self, name: \u0026str, duration: Duration) {\n        println!(\"Duration {}: {:?}\", name, duration);\n    }\n}\n\nlet engine = WorkerEngine::builder()\n    .redis_url(\"redis://localhost:6379\")\n    .queue_name(\"my_app\")\n    .max_workers(8)\n    .redis_config(redis_config)\n    .metrics(Arc::new(PrometheusMetrics))\n    .build()\n    .await?;\n\n// Using a custom backend\nuse runner_q::RedisBackend;\n\nlet backend = RedisBackend::builder()\n    .redis_url(\"redis://localhost:6379\")\n    .queue_name(\"my_app\")\n    .build()\n    .await?;\n\nlet engine = WorkerEngine::builder()\n    .backend(Arc::new(backend))\n    .max_workers(8)\n    .build()\n    .await?;\n\n// Restrict this engine to specific activity types (workload isolation)\nlet engine = WorkerEngine::builder()\n    .redis_url(\"redis://localhost:6379\")\n    .queue_name(\"my_app\")\n    .activity_types(\u0026[\"send_email\", \"send_sms\"])\n    .build()\n    .await?;\n```\n\n### Activity Builder\n\n```rust\nuse runner_q::{WorkerEngine, ActivityPriority};\nuse serde_json::json;\nuse std::time::Duration;\n\n// Get activity executor for fluent activity execution\n// Note: The fluent API is available through the activity executor, not directly on the engine\nlet activity_executor = engine.get_activity_executor();\n\n// Fluent activity execution\nlet future = activity_executor\n    .activity(\"send_email\")\n    .payload(json!({\"to\": \"user@example.com\", \"subject\": \"Hello\"}))\n    .max_retries(5)\n    .timeout(Duration::from_secs(600))\n    
.execute()\n    .await?;\n\n// Schedule activity for future execution\nlet scheduled_future = activity_executor\n    .activity(\"send_reminder\")\n    .payload(json!({\"user_id\": 123}))\n    .delay(Duration::from_secs(3600)) // 1 hour delay\n    .execute()\n    .await?;\n\n// Simple activity with defaults\nlet simple_future = activity_executor\n    .activity(\"process_data\")\n    .payload(json!({\"data\": \"example\"}))\n    .execute()\n    .await?;\n```\n\n## Activity Types\n\nActivity types in Runner-Q are simple strings that identify different types of activities. You can use any string as an activity type identifier.\n\n### Examples\n\n```rust\n// Common activity types\n\"send_email\"\n\"process_payment\"\n\"provision_card\"\n\"update_card_status\"\n\"process_webhook_event\"\n\n// You can use any string format you prefer\n\"user.registration\"\n\"email-notification\"\n\"background_sync\"\n```\n\n## Custom Activity Handlers\n\nYou can create custom activity handlers by implementing the `ActivityHandler` trait:\n\n```rust\nuse runner_q::{ActivityContext, ActivityHandler, ActivityHandlerResult, ActivityError};\nuse async_trait::async_trait;\nuse serde_json::Value;\nuse std::sync::Arc;\n\npub struct PaymentActivity {\n    // Add your dependencies here (database connections, external APIs, etc.)\n}\n\n#[async_trait]\nimpl ActivityHandler for PaymentActivity {\n    async fn handle(\u0026self, payload: Value, context: ActivityContext) -\u003e ActivityHandlerResult {\n        // Parse the payment data using ? 
operator\n        let amount = payload[\"amount\"]\n            .as_f64()\n            .ok_or_else(|| ActivityError::NonRetry(\"Missing or invalid amount\".to_string()))?;\n\n        let currency = payload[\"currency\"]\n            .as_str()\n            .unwrap_or(\"USD\");\n\n        println!(\"Processing payment: {} {}\", amount, currency);\n\n        // Validate amount\n        if amount \u003c= 0.0 {\n            return Err(ActivityError::NonRetry(\"Invalid amount\".to_string()));\n        }\n\n        // Simulate payment processing\n        Ok(Some(serde_json::json!({\n            \"transaction_id\": \"txn_123456\",\n            \"amount\": amount,\n            \"currency\": currency,\n            \"status\": \"completed\"\n        })))\n    }\n\n    fn activity_type(\u0026self) -\u003e String {\n        \"process_payment\".to_string()\n    }\n}\n\n// Register the handler\nworker_engine.register_activity(\"process_payment\".to_string(), Arc::new(PaymentActivity {}));\n```\n\n## Activity Priority and Options\n\nActivities can be configured using the `ActivityOption` struct:\n\n```rust\nuse runner_q::{ActivityPriority, ActivityOption};\n\n// High priority with custom retry and timeout settings\nlet future = worker_engine.execute_activity(\n    \"send_email\".to_string(),\n    serde_json::json!({\"to\": \"user@example.com\"}),\n    Some(ActivityOption {\n        priority: Some(ActivityPriority::Critical), // Highest priority\n        max_retries: 10,                            // Retry up to 10 times\n        timeout_seconds: 900,                       // 15 minute timeout\n    })\n).await?;\n\n// Use default options (Normal priority, 3 retries, 300s timeout)\nlet future = worker_engine.execute_activity(\n    \"send_email\".to_string(),\n    serde_json::json!({\"to\": \"user@example.com\"}),\n    None\n).await?;\n```\n\nAvailable priorities:\n- `ActivityPriority::Critical` - Highest priority (processed first)\n- `ActivityPriority::High` - High priority\n- 
`ActivityPriority::Normal` - Default priority\n- `ActivityPriority::Low` - Lowest priority\n\n## Getting Activity Results\n\nActivities can return results that can be retrieved asynchronously:\n\n```rust\nuse serde::{Serialize, Deserialize};\n\n#[derive(Debug, Serialize, Deserialize)]\nstruct EmailResult {\n    message: String,\n    status: String,\n}\n\nlet future = worker_engine.execute_activity(\n    \"send_email\".to_string(),\n    serde_json::json!({\"to\": \"user@example.com\"}),\n    None\n).await?;\n\n// Get the result (this will wait until the activity completes);\n// get_result() yields None when the activity returns no result data\nif let Some(result_value) = future.get_result().await? {\n    let email_result: EmailResult = serde_json::from_value(result_value)?;\n    println!(\"Email result: {:?}\", email_result);\n}\n```\n\n## Activity Orchestration\n\nActivities can execute other activities using the `ActivityExecutor` available in the `ActivityContext`. This enables powerful workflow orchestration with the fluent API:\n\n```rust\nuse runner_q::{ActivityHandler, ActivityContext, ActivityHandlerResult, ActivityPriority, ActivityError};\nuse async_trait::async_trait;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Deserialize)]\nstruct OrderData {\n    id: String,\n    customer_email: String,\n    items: Vec\u003cString\u003e,\n}\n\npub struct OrderProcessingActivity;\n\n#[async_trait]\nimpl ActivityHandler for OrderProcessingActivity {\n    async fn handle(\u0026self, payload: serde_json::Value, context: ActivityContext) -\u003e ActivityHandlerResult {\n        let order_id = payload[\"order_id\"]\n            .as_str()\n            .ok_or_else(|| ActivityError::NonRetry(\"Missing order_id\".to_string()))?;\n        \n        // Step 1: Validate payment using fluent API\n        let _payment_future = context.activity_executor\n            .activity(\"validate_payment\")\n            .payload(serde_json::json!({\"order_id\": order_id}))\n            .priority(ActivityPriority::High)\n            .max_retries(3)\n            
.timeout(std::time::Duration::from_secs(120))\n            .execute()\n            .await.map_err(|e| ActivityError::Retry(format!(\"Failed to enqueue payment validation: {}\", e)))?;\n        \n        // Step 2: Update inventory\n        let _inventory_future = context.activity_executor\n            .activity(\"update_inventory\")\n            .payload(serde_json::json!({\"order_id\": order_id}))\n            .execute()\n            .await.map_err(|e| ActivityError::Retry(format!(\"Failed to enqueue inventory update: {}\", e)))?;\n        \n        // Step 3: Schedule delivery notification for later\n        context.activity_executor\n            .activity(\"send_delivery_notification\")\n            .payload(serde_json::json!({\"order_id\": order_id, \"customer_email\": payload[\"customer_email\"]}))\n            .priority(ActivityPriority::Normal)\n            .max_retries(5)\n            .timeout(std::time::Duration::from_secs(300))\n            .delay(std::time::Duration::from_secs(3600)) // 1 hour\n            .execute()\n            .await.map_err(|e| ActivityError::Retry(format!(\"Failed to schedule notification: {}\", e)))?;\n        \n        Ok(Some(serde_json::json!({\n            \"order_id\": order_id,\n            \"status\": \"processing\",\n            \"steps_initiated\": [\"payment_validation\", \"inventory_update\", \"delivery_notification\"]\n        })))\n    }\n\n    fn activity_type(\u0026self) -\u003e String {\n        \"process_order\".to_string()\n    }\n}\n```\n\n### Benefits of Activity Orchestration\n\n- **Modularity**: Break complex workflows into smaller, reusable activities\n- **Reliability**: Each sub-activity has its own retry logic and error handling\n- **Monitoring**: Track progress of individual workflow steps\n- **Scalability**: Sub-activities can be processed by different workers\n- **Flexibility**: Different priority levels and timeouts for different steps\n- **Scheduling**: Schedule activities for future execution\n- 
**Fluent API**: Clean, readable activity execution with method chaining\n\n## Metrics and Monitoring\n\nRunner-Q provides comprehensive metrics collection through the `MetricsSink` trait, allowing you to integrate with your preferred monitoring system.\n\n### Basic Metrics Implementation\n\n```rust\nuse runner_q::{MetricsSink, WorkerEngine};\nuse std::time::Duration;\nuse std::sync::Arc;\n\n// Simple logging metrics implementation\nstruct LoggingMetrics;\n\nimpl MetricsSink for LoggingMetrics {\n    fn inc_counter(\u0026self, name: \u0026str, value: u64) {\n        println!(\"METRIC: {} += {}\", name, value);\n    }\n    \n    fn observe_duration(\u0026self, name: \u0026str, duration: Duration) {\n        println!(\"METRIC: {} = {:?}\", name, duration);\n    }\n}\n\n// Use with WorkerEngine\nlet engine = WorkerEngine::builder()\n    .redis_url(\"redis://localhost:6379\")\n    .queue_name(\"my_app\")\n    .metrics(Arc::new(LoggingMetrics))\n    .build()\n    .await?;\n```\n\n### Prometheus Integration\n\n```rust\nuse runner_q::{MetricsSink, WorkerEngine};\nuse std::time::Duration;\nuse std::sync::Arc;\nuse std::collections::HashMap;\n\n// Prometheus metrics implementation\nstruct PrometheusMetrics {\n    // Contains pre-registered Prometheus metrics\n    counters: HashMap\u003cString, prometheus::Counter\u003e,\n    histograms: HashMap\u003cString, prometheus::Histogram\u003e,\n}\n\nimpl MetricsSink for PrometheusMetrics {\n    fn inc_counter(\u0026self, name: \u0026str, value: u64) {\n        if let Some(counter) = self.counters.get(name) {\n            counter.inc_by(value as f64);\n        }\n    }\n    \n    fn observe_duration(\u0026self, name: \u0026str, duration: Duration) {\n        if let Some(histogram) = self.histograms.get(name) {\n            histogram.observe(duration.as_secs_f64());\n        }\n    }\n}\n\n// Custom metrics implementation\nstruct CustomMetrics {\n    activity_completed: u64,\n    activity_failed: u64,\n    activity_retry: u64,\n    
total_execution_time: Duration,\n}\n\nimpl MetricsSink for CustomMetrics {\n    fn inc_counter(\u0026self, name: \u0026str, value: u64) {\n        match name {\n            \"activity_completed\" =\u003e {\n                // Update your custom counter\n                println!(\"Activities completed: {}\", value);\n            }\n            \"activity_failed_non_retry\" =\u003e {\n                // Track non-retryable failures\n                println!(\"Activities failed (non-retry): {}\", value);\n            }\n            \"activity_retry\" =\u003e {\n                // Track retry attempts\n                println!(\"Activity retries: {}\", value);\n            }\n            _ =\u003e {}\n        }\n    }\n    \n    fn observe_duration(\u0026self, name: \u0026str, duration: Duration) {\n        match name {\n            \"activity_execution\" =\u003e {\n                // Track execution times\n                println!(\"Activity execution time: {:?}\", duration);\n            }\n            _ =\u003e {}\n        }\n    }\n}\n```\n\n### Available Metrics\n\nThe library automatically collects the following metrics:\n\n- **`activity_completed`** - Number of activities completed successfully\n- **`activity_retry`** - Number of activities that requested retry\n- **`activity_failed_non_retry`** - Number of activities that failed permanently\n- **`activity_timeout`** - Number of activities that timed out\n\n### No-op Metrics\n\nIf you don't need metrics collection, you can use the built-in `NoopMetrics`:\n\n```rust\nuse runner_q::{NoopMetrics, MetricsSink};\nuse std::time::Duration;\n\nlet metrics = NoopMetrics;\n\n// These calls do nothing\nmetrics.inc_counter(\"activities_completed\", 1);\nmetrics.observe_duration(\"activity_execution\", Duration::from_secs(5));\n```\n\n## Advanced Features\n\n### Activity Scheduling\n\nRunner-Q supports scheduling activities for future execution with precise timestamp-based scheduling:\n\n```rust\nuse runner_q::{WorkerEngine, 
ActivityPriority};\nuse serde_json::json;\nuse std::time::Duration;\n\n// Get activity executor for scheduling\nlet activity_executor = engine.get_activity_executor();\n\n// Schedule an activity to run in 1 hour\nlet future = activity_executor\n    .activity(\"send_reminder\")\n    .payload(json!({\"user_id\": 123, \"message\": \"Don't forget!\"}))\n    .delay(Duration::from_secs(3600)) // 1 hour from now\n    .execute()\n    .await?;\n\n// Schedule for a specific time by converting it into a delay (using chrono)\nuse chrono::{Utc, Duration as ChronoDuration};\n\nlet scheduled_time = Utc::now() + ChronoDuration::hours(2);\nlet delay = (scheduled_time - Utc::now()).to_std()?; // chrono Duration to std Duration\nlet future = activity_executor\n    .activity(\"process_report\")\n    .payload(json!({\"report_type\": \"monthly\"}))\n    .delay(delay) // ~2 hours\n    .execute()\n    .await?;\n```\n\n### Workload Isolation (Activity Type Filtering)\n\nBy default every worker engine dequeues all activity types from the queue. When you need to isolate workloads — for example, keeping slow report-generation jobs from starving latency-sensitive email sends — you can restrict each engine to specific activity types with `.activity_types()`:\n\n```rust\nuse runner_q::{WorkerEngine, storage::PostgresBackend};\nuse std::sync::Arc;\n\nlet backend = Arc::new(\n    PostgresBackend::new(\"postgres://localhost/runnerq\", \"my_app\").await?\n);\n\n// Node 1 — only processes email-related activities\nlet mut email_engine = WorkerEngine::builder()\n    .backend(backend.clone())\n    .activity_types(\u0026[\"send_email\", \"send_sms\"])\n    .max_workers(4)\n    .build()\n    .await?;\nemail_engine.register_activity(\"send_email\".into(), Arc::new(SendEmailHandler));\nemail_engine.register_activity(\"send_sms\".into(), Arc::new(SendSmsHandler));\n\n// Node 2 — only processes trades\nlet mut trade_engine = WorkerEngine::builder()\n    .backend(backend.clone())\n    .activity_types(\u0026[\"execute_trade\"])\n    .max_workers(8)\n    .build()\n    
.await?;\ntrade_engine.register_activity(\"execute_trade\".into(), Arc::new(TradeHandler));\n\n// Node 3 — catch-all, processes anything not claimed above\nlet mut catchall_engine = WorkerEngine::builder()\n    .backend(backend.clone())\n    .max_workers(2)\n    .build()\n    .await?;\n// Register all handlers on the catch-all node\n```\n\nAll engines share the same backend and queue. Each engine's `dequeue()` only claims activities matching its declared types; an engine with no filter acts as a catch-all.\n\n**Startup validation:** If `activity_types` is set and any listed type does not have a registered handler, the engine panics at `start()` with a clear error message.\n\nSee `examples/activity_type_filtering.rs` for a complete working example.\n\n### Redis Configuration\n\nFine-tune Redis connection behavior for your specific needs:\n\n```rust\nuse runner_q::{WorkerEngine, RedisConfig};\nuse std::time::Duration;\n\nlet redis_config = RedisConfig {\n    max_size: 100,                    // Maximum connections in pool\n    min_idle: 10,                     // Minimum idle connections\n    conn_timeout: Duration::from_secs(60),    // Connection timeout\n    idle_timeout: Duration::from_secs(600),   // Idle connection timeout\n    max_lifetime: Duration::from_secs(3600),  // Maximum connection lifetime\n};\n\nlet engine = WorkerEngine::builder()\n    .redis_url(\"redis://localhost:6379\")\n    .queue_name(\"my_app\")\n    .redis_config(redis_config)\n    .build()\n    .await?;\n```\n\n### Pluggable Storage Backends\n\nRunner-Q uses a trait-based storage abstraction that allows you to swap out the persistence layer. 
Built-in backends include PostgreSQL (the default), with Redis available as an additional backend and full support for custom implementations.\n\n#### Architecture\n\nFor a comprehensive deep-dive into RunnerQ's internals — trait hierarchy, worker loop, state machine, backend comparison, and more — see [`docs/architecture.md`](docs/architecture.md).\n\n```mermaid\ngraph TD\n    subgraph PublicAPI [Public API]\n        WEB[WorkerEngineBuilder]\n        WE[WorkerEngine]\n    end\n    \n    subgraph StorageTraits [Storage Module]\n        QB[QueueStorage trait]\n        IB[InspectionStorage trait]\n    end\n    \n    subgraph Implementations [Backend Implementations]\n        RB[RedisBackend]\n        PB[PostgresBackend]\n        Future[Future: KafkaBackend, etc.]\n    end\n    \n    WEB --\u003e|\".backend()\"| QB\n    WEB --\u003e|\".redis_url()\"| RB\n    WE --\u003e QB\n    WE --\u003e IB\n    RB --\u003e QB\n    RB --\u003e IB\n    PB --\u003e QB\n    PB --\u003e IB\n    Future -.-\u003e QB\n    Future -.-\u003e IB\n```\n\nThe public API remains unchanged - you can continue using `.redis_url()` for a Redis-backed setup, or use `.backend()` to inject a custom implementation.\n\n#### Using the Redis Backend\n\n```rust\nuse runner_q::{WorkerEngine, RedisBackend};\nuse std::sync::Arc;\n\n// Option 1: Use the simple redis_url API (recommended for most cases)\nlet engine = WorkerEngine::builder()\n    .redis_url(\"redis://localhost:6379\")\n    .queue_name(\"my_app\")\n    .build()\n    .await?;\n\n// Option 2: Create a RedisBackend explicitly for more control\nlet backend = RedisBackend::builder()\n    .redis_url(\"redis://localhost:6379\")\n    .queue_name(\"my_app\")\n    .lease_ms(60_000)  // Custom lease duration\n    .build()\n    .await?;\n\nlet engine = WorkerEngine::builder()\n    .backend(Arc::new(backend))\n    .max_workers(8)\n    .build()\n    .await?;\n```\n\n#### Valkey Compatibility\n\nSince Valkey is Redis protocol-compatible, you can use it directly by pointing the URL to your 
Valkey server:\n\n```rust\nlet engine = WorkerEngine::builder()\n    .redis_url(\"redis://valkey-server:6379\")  // Works with Valkey!\n    .queue_name(\"my_app\")\n    .build()\n    .await?;\n```\n\n#### PostgreSQL Backend\n\nFor use cases requiring permanent persistence and SQL-based queries, RunnerQ provides a PostgreSQL backend:\n\n```toml\n# Cargo.toml\n[dependencies]\nrunner_q = { version = \"0.5\", features = [\"postgres\"] }\n```\n\n```rust\nuse runner_q::storage::PostgresBackend;\nuse std::sync::Arc;\n\n// Create PostgreSQL backend\nlet backend = Arc::new(\n    PostgresBackend::new(\n        \"postgres://user:password@localhost/runnerq\",\n        \"my_queue\"\n    ).await?\n);\n\n// Use with WorkerEngine\nlet engine = WorkerEngine::builder()\n    .backend(backend)\n    .max_workers(8)\n    .build()\n    .await?;\n```\n\n**PostgreSQL Backend Features:**\n- **Permanent Persistence** - Activities stored indefinitely (no TTL expiration)\n- **Multi-node Safe** - Uses `FOR UPDATE SKIP LOCKED` for concurrent job claiming\n- **Cross-process Events** - PostgreSQL `LISTEN/NOTIFY` for real-time event streaming\n- **Atomic Idempotency** - Separate table with `INSERT ... 
See `examples/postgres_example.rs` for a complete working example, and `examples/activity_type_filtering.rs` for workload isolation with multiple engines.

#### Implementing a Custom Backend

You can implement your own backend by implementing the `Storage` trait (which combines `QueueStorage` and `InspectionStorage`):

```rust
use runner_q::storage::{
    Storage, QueueStorage, InspectionStorage, StorageError,
    QueuedActivity, DequeuedActivity, FailureKind,
};
use runner_q::{QueueStats, ActivitySnapshot, ActivityEvent, DeadLetterRecord};
use runner_q::WorkerEngine;
use async_trait::async_trait;
use std::sync::Arc;
use std::time::Duration;
use uuid::Uuid;

pub struct MyCustomBackend {
    // Your backend state (connection pool, config, etc.)
}

#[async_trait]
impl QueueStorage for MyCustomBackend {
    async fn enqueue(&self, activity: QueuedActivity) -> Result<(), StorageError> {
        // Implement activity enqueuing
        todo!()
    }

    async fn dequeue(
        &self,
        worker_id: &str,
        timeout: Duration,
        activity_types: Option<&[String]>,
    ) -> Result<Option<DequeuedActivity>, StorageError> {
        // Implement activity claiming.
        // When activity_types is Some, only claim matching types.
        todo!()
    }

    async fn ack_success(
        &self,
        activity_id: Uuid,
        lease_id: &str,
        result: Option<serde_json::Value>,
        worker_id: Option<&str>,
    ) -> Result<(), StorageError> {
        // Mark the activity as completed
        todo!()
    }

    async fn ack_failure(
        &self,
        activity_id: Uuid,
        lease_id: &str,
        failure: FailureKind,
        worker_id: Option<&str>,
    ) -> Result<bool, StorageError> {
        // Handle activity failure (retry or dead-letter)
        todo!()
    }

    // ... implement other required methods
}

#[async_trait]
impl InspectionStorage for MyCustomBackend {
    async fn stats(&self) -> Result<QueueStats, StorageError> {
        // Return queue statistics
        todo!()
    }

    async fn list_pending(
        &self,
        offset: usize,
        limit: usize,
    ) -> Result<Vec<ActivitySnapshot>, StorageError> {
        // List pending activities
        todo!()
    }

    // ... implement other required methods
}

// Use your custom backend
let backend = Arc::new(MyCustomBackend { /* ... */ });
let engine = WorkerEngine::builder()
    .backend(backend)
    .max_workers(8)
    .build()
    .await?;
```

#### Storage Trait Reference

The storage abstraction consists of two traits:

**`QueueStorage`** - Core queue operations:
- `enqueue()` - Add an activity to the queue
- `dequeue()` - Claim an activity for processing (PostgreSQL picks up due scheduled/retrying activities directly here)
- `ack_success()` - Mark an activity as completed
- `ack_failure()` - Handle activity failure (retry or dead-letter)
- `process_scheduled()` - Move due scheduled activities to the ready queue (Redis only; PostgreSQL returns `Ok(0)`)
- `requeue_expired()` - Reclaim activities with expired leases
- `extend_lease()` - Extend an activity's processing lease
- `store_result()` / `get_result()` - Activity result storage
- `check_idempotency()` - Idempotency key handling
- `schedules_natively()` - Whether the backend handles scheduling in `dequeue()` (skips the polling loop if `true`)

**`InspectionStorage`** - Observability operations:
- `stats()` - Get queue statistics
- `list_pending()` / `list_processing()` / `list_scheduled()` / `list_completed()` - List activities by status
- `list_dead_letter()` - List dead-lettered activities
- `get_activity()` - Get specific activity details
- `get_activity_events()` - Get activity lifecycle events
- `event_stream()` - Stream real-time events (for SSE)
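The `requeue_expired()` / `extend_lease()` pair above implements lease-based work recovery: a claimed activity carries a deadline, a live worker keeps pushing that deadline forward, and a sweep re-queues anything whose deadline lapsed (a crashed worker). A stdlib sketch of that idea, with hypothetical types that are not RunnerQ's internals:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Illustrative lease table: each claimed activity holds a deadline, and a
/// periodic sweep collects every activity whose deadline has passed so it
/// can be handed to another worker.
struct LeaseTable {
    deadlines: HashMap<u64, Instant>, // activity id -> lease deadline
}

impl LeaseTable {
    fn new() -> Self {
        Self { deadlines: HashMap::new() }
    }

    fn claim(&mut self, activity_id: u64, lease: Duration, now: Instant) {
        self.deadlines.insert(activity_id, now + lease);
    }

    /// Extending must fail once the lease was reclaimed, so a worker that
    /// was paused past its deadline learns it no longer owns the activity.
    fn extend(&mut self, activity_id: u64, lease: Duration, now: Instant) -> bool {
        match self.deadlines.get_mut(&activity_id) {
            Some(deadline) => {
                *deadline = now + lease;
                true
            }
            None => false,
        }
    }

    /// Remove and return every activity whose lease expired at or before `now`.
    fn requeue_expired(&mut self, now: Instant) -> Vec<u64> {
        let expired: Vec<u64> = self
            .deadlines
            .iter()
            .filter(|(_, deadline)| **deadline <= now)
            .map(|(id, _)| *id)
            .collect();
        for id in &expired {
            self.deadlines.remove(id);
        }
        expired
    }
}
```

A real backend makes the claim, extend, and sweep steps atomic against concurrent workers (e.g. via `FOR UPDATE SKIP LOCKED` in the PostgreSQL backend); the map above only shows the bookkeeping.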
### Graceful Shutdown

The worker engine supports graceful shutdown with proper cleanup:

```rust
use runner_q::WorkerEngine;
use std::sync::Arc;
use tokio::time::{sleep, Duration};

// Start the engine in a background task
let engine = Arc::new(engine);
let engine_clone = engine.clone();

let engine_handle = tokio::spawn(async move {
    engine_clone.start().await
});

// Let it run for a while
sleep(Duration::from_secs(10)).await;

// Gracefully stop the engine
engine.stop().await;

// Wait for the engine to finish
engine_handle.await??;
```
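The `stop()`/`await` pattern above comes down to a drain discipline: signal shutdown, stop pulling new work, let in-flight work finish. A generic stdlib illustration of that discipline using threads in place of async workers (this is not RunnerQ's implementation, just the shape of the pattern):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;
use std::time::Duration;

// Drain pattern: a shutdown flag tells the worker loop to stop pulling new
// jobs; jobs already received still complete before the thread exits.
fn run_worker(jobs: mpsc::Receiver<u32>, shutdown: Arc<AtomicBool>) -> Vec<u32> {
    let mut completed = Vec::new();
    while !shutdown.load(Ordering::SeqCst) {
        match jobs.recv_timeout(Duration::from_millis(10)) {
            Ok(job) => completed.push(job * 2), // "process" the job
            Err(mpsc::RecvTimeoutError::Timeout) => continue, // re-check the flag
            Err(mpsc::RecvTimeoutError::Disconnected) => break, // producer gone
        }
    }
    completed
}
```

The short receive timeout is what keeps shutdown responsive: the loop never blocks longer than 10 ms before re-checking the flag, so `stop` takes effect quickly without abandoning a job mid-flight.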
### Activity Context and Metadata

Access rich context information in your activity handlers:

```rust
use runner_q::{ActivityHandler, ActivityContext, ActivityHandlerResult, ActivityError};
use async_trait::async_trait;

#[async_trait]
impl ActivityHandler for MyActivity {
    async fn handle(&self, payload: serde_json::Value, context: ActivityContext) -> ActivityHandlerResult {
        // Access activity metadata
        println!("Processing activity {} of type {}", context.activity_id, context.activity_type);
        println!("This is retry attempt #{}", context.retry_count);

        // Check for cancellation
        if context.cancel_token.is_cancelled() {
            return Err(ActivityError::NonRetry("Activity was cancelled".to_string()));
        }

        // Access custom metadata
        if let Some(correlation_id) = context.metadata.get("correlation_id") {
            println!("Correlation ID: {}", correlation_id);
        }

        Ok(Some(serde_json::json!({"status": "processed"})))
    }

    fn activity_type(&self) -> String {
        "my_activity".to_string()
    }
}
```

### Queue Statistics

Monitor queue performance and health using the inspector:

```rust
use runner_q::{WorkerEngine, QueueStats};

let engine = WorkerEngine::builder()
    .redis_url("redis://localhost:6379")
    .queue_name("my_app")
    .build()
    .await?;

// Get the inspector
let inspector = engine.inspector();

// Get queue statistics
let stats: QueueStats = inspector.stats().await?;

println!("Queue stats:");
println!("  Pending activities: {}", stats.pending_activities);
println!("  Processing activities: {}", stats.processing_activities);
println!("  Scheduled activities: {}", stats.scheduled_activities);
println!("  Dead letter queue size: {}", stats.dead_letter_activities);
println!("Priority distribution:");
println!("  Critical: {}", stats.critical_priority);
println!("  High: {}", stats.high_priority);
println!("  Normal: {}", stats.normal_priority);
println!("  Low: {}", stats.low_priority);
```

For a visual dashboard with real-time updates, see the [Observability Console](#observability-console) section.
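The priority distribution reported above covers four levels; the ordering rule a priority queue enforces — higher priority first, FIFO among equals — can be sketched with a `BinaryHeap`. These types are hypothetical, illustrating the ordering semantics rather than RunnerQ's Redis/PostgreSQL storage:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Deriving Ord on the declaration order makes Critical > High > Normal > Low.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Priority { Low, Normal, High, Critical }

/// Illustrative priority-then-FIFO queue: the heap pops the highest priority
/// first; within one level, the lowest sequence number (earliest enqueue)
/// wins because it is wrapped in `Reverse`.
struct PriorityQueue {
    heap: BinaryHeap<(Priority, Reverse<u64>, &'static str)>,
    seq: u64,
}

impl PriorityQueue {
    fn new() -> Self {
        Self { heap: BinaryHeap::new(), seq: 0 }
    }

    fn enqueue(&mut self, priority: Priority, name: &'static str) {
        self.heap.push((priority, Reverse(self.seq), name));
        self.seq += 1;
    }

    fn dequeue(&mut self) -> Option<&'static str> {
        self.heap.pop().map(|(_, _, name)| name)
    }
}
```

The monotonic sequence number is what keeps same-priority activities fair: without it, two `Normal` activities could dequeue in either order.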
## Error Handling

The library provides comprehensive error handling with a clear separation between retryable and non-retryable errors.

### Activity Handler Results

In your activity handlers, you can use the convenient `ActivityHandlerResult` type with the `?` operator for clean error handling:

```rust
use runner_q::{ActivityHandler, ActivityContext, ActivityHandlerResult, ActivityError};
use async_trait::async_trait;
use serde_json::Value;

#[derive(serde::Deserialize)]
struct MyData {
    id: String,
    value: String,
}

#[async_trait]
impl ActivityHandler for MyActivity {
    async fn handle(&self, payload: Value, context: ActivityContext) -> ActivityHandlerResult {
        // Use the ? operator for automatic error conversion
        let data: MyData = serde_json::from_value(payload)?;

        // Validate the data
        if data.id.is_empty() {
            return Err(ActivityError::NonRetry("Invalid data format".to_string()));
        }

        // Perform an operation that might temporarily fail
        let result = external_api_call(&data)
            .await
            .map_err(|e| ActivityError::Retry(format!("API call failed: {}", e)))?;

        // Return success with result data
        Ok(Some(serde_json::json!({"result": result})))
    }

    fn activity_type(&self) -> String {
        "my_activity".to_string()
    }
}
```

**Error Types:**
- `ActivityError::Retry(message)` - Will be retried with exponential backoff
- `ActivityError::NonRetry(message)` - Will not be retried
- Any error implementing `Into<ActivityError>` can be used with `?`

### Dead Letter Callback

When an activity exhausts all retries, it moves to the dead letter queue. You can handle this event by implementing the optional `on_dead_letter` callback:

```rust
use runner_q::{ActivityHandler, ActivityContext, ActivityHandlerResult};
use async_trait::async_trait;

#[async_trait]
impl ActivityHandler for MyActivity {
    async fn handle(&self, payload: serde_json::Value, context: ActivityContext) -> ActivityHandlerResult {
        // Activity logic
        Ok(None)
    }

    fn activity_type(&self) -> String {
        "my_activity".to_string()
    }

    async fn on_dead_letter(
        &self,
        payload: serde_json::Value,
        context: ActivityContext,
        error: String,
    ) {
        // Called once when the activity enters the dead letter state.
        // Use for cleanup, notifications, or logging.
        eprintln!("Activity {} dead-lettered: {}", context.activity_id, error);
    }
}
```

The callback has a default empty implementation, so existing handlers continue to work without modification.

### Worker Engine Errors

```rust
use runner_q::{WorkerEngine, WorkerError};
use serde_json::json;

// Using the fluent API for error handling
let activity_executor = engine.get_activity_executor();

match activity_executor
    .activity("my_activity")
    .payload(json!({"id": "123", "value": "test"}))
    .execute()
    .await
{
    Ok(future) => {
        // Activity was successfully enqueued
        match future.get_result().await {
            Ok(result) => match result {
                Some(data) => println!("Activity completed: {:?}", data),
                None => println!("Activity completed with no result"),
            },
            Err(WorkerError::Timeout) => println!("Activity timed out"),
            Err(WorkerError::CustomError(msg)) => println!("Activity failed: {}", msg),
            Err(e) => println!("Activity failed: {}", e),
        }
    }
    Err(e) => println!("Failed to enqueue activity: {}", e),
}
```
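`ActivityError::Retry` reschedules the activity with exponential backoff, meaning the delay roughly doubles per attempt up to a ceiling. A self-contained sketch of that delay curve (the base delay, the cap, and the absence of jitter here are assumptions for illustration, not RunnerQ's actual parameters):

```rust
use std::time::Duration;

/// Illustrative exponential backoff: delay = base * 2^retry_count, capped.
/// `base` and `cap` are made-up knobs, not RunnerQ's defaults.
fn retry_delay(retry_count: u32, base: Duration, cap: Duration) -> Duration {
    // checked_shl returns None once the shift would overflow (retry_count >= 32),
    // in which case we saturate instead of wrapping around to a tiny delay.
    let factor = 1u32.checked_shl(retry_count).unwrap_or(u32::MAX);
    base.saturating_mul(factor).min(cap)
}
```

Production backoff implementations usually also add random jitter so that a burst of simultaneous failures does not retry in lockstep.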
### Error Recovery Patterns

```rust
use runner_q::{ActivityHandler, ActivityContext, ActivityHandlerResult, ActivityError};
use async_trait::async_trait;

#[async_trait]
impl ActivityHandler for ResilientActivity {
    async fn handle(&self, payload: serde_json::Value, context: ActivityContext) -> ActivityHandlerResult {
        // Check the retry count to pick a strategy
        match context.retry_count {
            0..=2 => {
                // First few attempts: retry on any error
                self.process_with_retry(payload).await
            }
            3..=5 => {
                // Middle attempts: more conservative retry
                self.process_conservative(payload).await
            }
            _ => {
                // Final attempts: only retry on specific errors
                self.process_final_attempt(payload).await
            }
        }
    }

    fn activity_type(&self) -> String {
        "resilient_activity".to_string()
    }
}

impl ResilientActivity {
    async fn process_with_retry(&self, payload: serde_json::Value) -> ActivityHandlerResult {
        // Implementation that retries on any error
        Ok(Some(serde_json::json!({"status": "processed"})))
    }

    async fn process_conservative(&self, payload: serde_json::Value) -> ActivityHandlerResult {
        // More conservative processing
        Ok(Some(serde_json::json!({"status": "processed_conservative"})))
    }

    async fn process_final_attempt(&self, payload: serde_json::Value) -> ActivityHandlerResult {
        // Final attempt with minimal retry
        Ok(Some(serde_json::json!({"status": "processed_final"})))
    }
}
```
### Default Values

When using the builder pattern, sensible defaults are provided:

```rust
use runner_q::WorkerEngine;

// Uses these defaults:
// - redis_url: "redis://127.0.0.1:6379"
// - queue_name: "default"
// - max_workers: 10
// - schedule_poll_interval: 5 seconds
let engine = WorkerEngine::builder().build().await?;
```

### Redis Configuration

Fine-tune Redis connection behavior:

```rust
use runner_q::{RedisConfig, WorkerEngine};
use std::time::Duration;

let redis_config = RedisConfig {
    max_size: 100,                              // Maximum connections in pool
    min_idle: 10,                               // Minimum idle connections
    conn_timeout: Duration::from_secs(60),      // Connection timeout
    idle_timeout: Duration::from_secs(600),     // Idle connection timeout
    max_lifetime: Duration::from_secs(3600),    // Maximum connection lifetime
};

let engine = WorkerEngine::builder()
    .redis_config(redis_config)
    .build()
    .await?;
```
## Observability Console

RunnerQ includes a built-in web-based observability console for monitoring and managing your activity queues in real time.

### Features

- **Real-time Updates** - Server-Sent Events (SSE) for instant activity updates
- **Live Statistics** - Monitor queue health with processing, pending, scheduled, and dead-letter counts
- **Priority Distribution** - See the activity breakdown by priority level (Critical, High, Normal, Low)
- **Activity Management** - Browse and search activities across all queues (pending, processing, scheduled, completed, dead-letter)
- **Activity Results** - View execution results and outputs for completed activities
- **Event Timeline** - Detailed activity lifecycle events with multiple view modes
- **7-Day History** - Query completed activities for up to 7 days
- **Zero Setup** - No build tools, npm, or dependencies required

### Dashboard Preview

![RunnerQ Console Dashboard](asset/ui-dashbord.png)

### Quick Start

The console is designed to work just like Swagger UI - simply pass an inspector instance:

```rust
use runner_q::{runnerq_ui, WorkerEngine};
use axum::{serve, Router};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let engine = WorkerEngine::builder()
        .redis_url("redis://127.0.0.1:6379")
        .queue_name("my_app")
        .build()
        .await?;

    // Get the inspector from the engine (automatically enables event streaming)
    let inspector = engine.inspector();

    // Nest the console at /console - just like Swagger UI!
    let app = Router::new()
        .nest("/console", runnerq_ui(inspector));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8081").await?;
    println!("✨ RunnerQ Console: http://localhost:8081/console");

    serve(listener, app).await?;
    Ok(())
}
```

### Integration with Existing Apps

You can easily integrate the console into your existing Axum application:

```rust
use runner_q::{runnerq_ui, WorkerEngine};
use axum::{routing::get, Router};

let engine = WorkerEngine::builder()
    .redis_url("redis://localhost:6379")
    .queue_name("my_app")
    .build()
    .await?;

let inspector = engine.inspector();

// Your existing app routes
let app = Router::new()
    .route("/api/users", get(list_users))
    .route("/api/posts", get(list_posts))
    // Add the console
    .nest("/console", runnerq_ui(inspector))
    .with_state(app_state);
```

### API-Only Mode

If you prefer to build a custom UI, you can serve just the API:

```rust
use runner_q::observability_api;

let app = Router::new()
    .nest("/api/observability", observability_api(inspector));
```

### Example

See the complete example in `examples/console_ui.rs`:

```bash
# Start Redis
redis-server

# Run the console example
cargo run --example console_ui

# Open http://localhost:8081/console
```

For more details, see the [UI README](ui/README.md).

## License

This project is licensed under the MIT License - see
the LICENSE file for details.