{"id":41846976,"url":"https://github.com/forge-sql-orm/forge-sql-orm","last_synced_at":"2026-03-12T09:28:58.204Z","repository":{"id":280417435,"uuid":"940531356","full_name":"forge-sql-orm/forge-sql-orm","owner":"forge-sql-orm","description":"Seamlessly integrate Drizzle ORM with Forge-SQL to enable type-safe database operations in your Atlassian Forge applications. Includes a custom driver, schema migration support, two levels of caching (local in-memory and global via @forge/kvs), optimistic locking, query analysis, and more.","archived":false,"fork":false,"pushed_at":"2026-01-16T19:18:10.000Z","size":31847,"stargazers_count":15,"open_issues_count":3,"forks_count":0,"subscribers_count":1,"default_branch":"master","last_synced_at":"2026-01-18T08:31:00.169Z","etag":null,"topics":["atlassian-forge","drizzle","drizzle-framework","drizzle-mysql2","drizzle-orm","forge","forge-app","forge-sql","forge-sql-orm","migration-tool","orm","orm-framework","orm-library","rovo","sql"],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/forge-sql-orm.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"open_collective":"forge-sql-orm"}},"created_at":"2025-02-28T10:41:27.000Z","updated_at":"2026-01-16T19:17:14.000Z","dependencies_parsed_at":"2025-04-10T22:24:34.961Z","dependency_job_id":"3b16f151-7fa4-450e-8fd7-0931df3f4f81","html_url":"https://g
ithub.com/forge-sql-orm/forge-sql-orm","commit_stats":null,"previous_names":["vzakharchenko/forge-sql-orm","forge-sql-orm/forge-sql-orm"],"tags_count":39,"template":false,"template_full_name":null,"purl":"pkg:github/forge-sql-orm/forge-sql-orm","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/forge-sql-orm%2Fforge-sql-orm","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/forge-sql-orm%2Fforge-sql-orm/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/forge-sql-orm%2Fforge-sql-orm/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/forge-sql-orm%2Fforge-sql-orm/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/forge-sql-orm","download_url":"https://codeload.github.com/forge-sql-orm/forge-sql-orm/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/forge-sql-orm%2Fforge-sql-orm/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28751059,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-25T09:58:17.166Z","status":"ssl_error","status_checked_at":"2026-01-25T09:55:56.104Z","response_time":113,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["atlassian-forge","drizzle","drizzle-framework","drizzle-mysql2","drizzle-orm","forge","forge-app","forge-sql","forge-sql-orm","migration-tool","orm","orm-framework","orm-library","rovo","sql"],"created_at":"2026-01-25T10:02:03.797Z","updated_at":"2026-03-12T09:28:58.174Z","avatar_url":"https://github.com/forge-sql-orm.png","language":"TypeScript","readme":"# Forge SQL ORM\n\n[![npm version](https://img.shields.io/npm/v/forge-sql-orm)](https://www.npmjs.com/package/forge-sql-orm)\n[![npm downloads](https://img.shields.io/npm/dm/forge-sql-orm)](https://www.npmjs.com/package/forge-sql-orm)\n[![npm version (CLI)](https://img.shields.io/npm/v/forge-sql-orm-cli?label=cli)](https://www.npmjs.com/package/forge-sql-orm-cli)\n[![npm downloads (CLI)](https://img.shields.io/npm/dm/forge-sql-orm-cli?label=cli%20downloads)](https://www.npmjs.com/package/forge-sql-orm-cli)\n\n[![Coverage](https://sonarcloud.io/api/project_badges/measure?project=forge-sql-orm_forge-sql-orm\u0026metric=coverage)](https://sonarcloud.io/summary/new_code?id=forge-sql-orm_forge-sql-orm)\n\n[![License](https://img.shields.io/github/license/forge-sql-orm/forge-sql-orm)](https://github.com/forge-sql-orm/forge-sql-orm/blob/master/LICENSE)\n\n[![forge-sql-orm CI](https://github.com/forge-sql-orm/forge-sql-orm/actions/workflows/node.js.yml/badge.svg)](https://github.com/forge-sql-orm/forge-sql-orm/actions/workflows/node.js.yml)\n[![DeepScan 
grade](https://deepscan.io/api/teams/26652/projects/30920/branches/997203/badge/grade.svg)](https://deepscan.io/dashboard#view=project\u0026tid=26652\u0026pid=30920\u0026bid=997203)\n[![Snyk Vulnerabilities](https://snyk.io/test/github/forge-sql-orm/forge-sql-orm/badge.svg)](https://snyk.io/test/github/forge-sql-orm/forge-sql-orm)\n[![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=forge-sql-orm_forge-sql-orm\u0026metric=security_rating)](https://sonarcloud.io/summary/new_code?id=forge-sql-orm_forge-sql-orm)\n[![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=forge-sql-orm_forge-sql-orm\u0026metric=sqale_rating)](https://sonarcloud.io/summary/new_code?id=forge-sql-orm_forge-sql-orm)\n[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=forge-sql-orm_forge-sql-orm\u0026metric=alert_status)](https://sonarcloud.io/summary/new_code?id=forge-sql-orm_forge-sql-orm)\n\n[![Bugs](https://sonarcloud.io/api/project_badges/measure?project=forge-sql-orm_forge-sql-orm\u0026metric=bugs)](https://sonarcloud.io/summary/new_code?id=forge-sql-orm_forge-sql-orm)\n[![Code Smells](https://sonarcloud.io/api/project_badges/measure?project=forge-sql-orm_forge-sql-orm\u0026metric=code_smells)](https://sonarcloud.io/summary/new_code?id=forge-sql-orm_forge-sql-orm)\n[![Vulnerabilities](https://sonarcloud.io/api/project_badges/measure?project=forge-sql-orm_forge-sql-orm\u0026metric=vulnerabilities)](https://sonarcloud.io/summary/new_code?id=forge-sql-orm_forge-sql-orm)\n[![Maintainability](https://qlty.sh/gh/forge-sql-orm/projects/forge-sql-orm/maintainability.svg)](https://qlty.sh/gh/forge-sql-orm/projects/forge-sql-orm)\n\n**Forge-SQL-ORM** is an ORM designed for working with [@forge/sql](https://developer.atlassian.com/platform/forge/storage-reference/sql-tutorial/) in **Atlassian Forge**. 
It is built on top of [Drizzle ORM](https://orm.drizzle.team) and provides advanced capabilities for working with relational databases inside Forge.\n\n## Key Features\n\n- ✅ **Custom Drizzle Driver** for direct integration with @forge/sql\n- ✅ **Local Cache System (Level 1)** for in-memory query optimization within a single resolver invocation scope\n- ✅ **Global Cache System (Level 2)** with cross-invocation caching, automatic cache invalidation, and context-aware operations (using [@forge/kvs](https://developer.atlassian.com/platform/forge/storage-reference/storage-api-custom-entities/))\n- ✅ **Performance Monitoring**: Query execution metrics and analysis, with automatic error analysis for timeout and OOM errors, scheduled slow-query monitoring with execution plans, and async query degradation analysis for non-blocking performance monitoring\n- ✅ **Type-Safe Query Building**: Write SQL queries with full TypeScript support\n- ✅ **Supports complex SQL queries** with joins and filtering using Drizzle ORM\n- ✅ **Advanced Query Methods**: `selectFrom()`, `selectDistinctFrom()`, `selectCacheableFrom()`, `selectDistinctCacheableFrom()` for all-column queries with field aliasing\n- ✅ **Query Execution with Metadata**: `executeWithMetadata()` method for capturing detailed execution metrics, including database execution time and response size, plus query analysis and performance monitoring. 
Supports two modes for query plan printing: TopSlowest mode (default) and SummaryTable mode\n- ✅ **Raw SQL Execution**: `execute()`, `executeCacheable()`, `executeDDL()`, and `executeDDLActions()` methods for direct SQL queries with local and global caching\n- ✅ **Common Table Expressions (CTEs)**: `with()` method for complex queries with subqueries\n- ✅ **Schema migration support**, allowing automatic schema evolution\n- ✅ **Automatic entity generation** from MySQL/TiDB databases\n- ✅ **Automatic migration generation** from MySQL/TiDB databases\n- ✅ **Drop Migrations**: Generate a migration to drop all tables and clear migration history for subsequent schema recreation\n- ✅ **Schema Fetching**: Development-only web trigger to retrieve the current database schema and generate SQL statements for schema recreation\n- ✅ **Ready-to-use Migration Triggers**: Built-in web triggers for applying migrations, dropping tables (development-only), and fetching schema (development-only) with proper error handling and security controls\n- ✅ **Optimistic Locking**: Ensures data consistency by preventing conflicts when multiple users update the same record\n- ✅ **Query Plan Analysis**: Detailed execution plan analysis and optimization insights\n- ✅ **Rovo Integration**: Secure pattern for natural-language analytics with comprehensive security validations, Row-Level Security (RLS) support, and dynamic SQL query execution\n\n## Table of Contents\n\n### 🚀 Getting Started\n\n- [Key Features](#key-features)\n- [Usage Approaches](#usage-approaches)\n- [Installation](#installation)\n- [CLI Commands](#cli-commands) | [CLI Documentation](forge-sql-orm-cli/README.md)\n- [Quick Start](#quick-start)\n\n### 📖 Core Features\n\n- [Field Name Collision Prevention](#field-name-collision-prevention-in-complex-queries)\n- [Drizzle Usage with forge-sql-orm](#drizzle-usage-with-forge-sql-orm)\n- [Direct Drizzle Usage with Custom Driver](#direct-drizzle-usage-with-custom-driver)\n\n### 🗄️ Database 
Operations\n\n- [Fetch Data](#fetch-data)\n- [Modify Operations](#modify-operations)\n- [SQL Utilities](#sql-utilities)\n\n### ⚡ Caching System\n\n- [Setting Up Caching with @forge/kvs](#setting-up-caching-with-forgekvs-optional)\n- [Global Cache System (Level 2)](#global-cache-system-level-2)\n- [Cache Context Operations](#cache-context-operations)\n- [Local Cache Operations (Level 1)](#local-cache-operations-level-1)\n- [Cache-Aware Query Operations](#cache-aware-query-operations)\n- [Manual Cache Management](#manual-cache-management)\n\n### 🔒 Advanced Features\n\n- [Optimistic Locking](#optimistic-locking)\n- [Rovo Integration](#rovo-integration) - Secure pattern for natural-language analytics with dynamic SQL queries\n- [Query Analysis and Performance Optimization](#query-analysis-and-performance-optimization)\n- [Automatic Error Analysis](#automatic-error-analysis) - Automatic timeout and OOM error detection with execution plans\n- [Slow Query Monitoring](#slow-query-monitoring) - Scheduled monitoring of slow queries with execution plans\n- [Date and Time Types](#date-and-time-types)\n\n### 🛠️ Development Tools\n\n- [CLI Commands](#cli-commands) | [CLI Documentation](forge-sql-orm-cli/README.md)\n- [Web Triggers for Migrations](#web-triggers-for-migrations)\n- [Step-by-Step Migration Workflow](#step-by-step-migration-workflow)\n- [Drop Migrations](#drop-migrations)\n\n### 📚 Examples\n\n- [Simple Example](examples/forge-sql-orm-example-simple)\n- [Drizzle Driver Example](examples/forge-sql-orm-example-drizzle-driver-simple)\n- [Optimistic Locking Example](examples/forge-sql-orm-example-optimistic-locking)\n- [Dynamic Queries Example](examples/forge-sql-orm-example-dynamic)\n- [Query Analysis Example](examples/forge-sql-orm-example-query-analyses)\n- [Organization Tracker Example](examples/forge-sql-orm-example-org-tracker)\n- [Checklist Example](examples/forge-sql-orm-example-checklist)\n- [Cache Example](examples/forge-sql-orm-example-cache) - Advanced caching 
capabilities with performance monitoring\n- [Rovo Integration Example](https://github.com/vzakharchenko/Forge-Secure-Notes-for-Jira) - Real-world Rovo AI agent implementation with secure natural-language analytics\n\n### 📚 Reference\n\n- [ForgeSqlOrmOptions](#forgesqlormoptions)\n- [Migration Guide](#migration-guide)\n\n## 🚀 Quick Navigation\n\n**New to Forge-SQL-ORM?** Start here:\n\n- [Quick Start](#quick-start) - Get up and running in 5 minutes\n- [Installation](#installation) - Complete setup guide\n- [Basic Usage Examples](#fetch-data) - Simple query examples\n\n**Looking for specific features?**\n\n- [Global Cache System (Level 2)](#global-cache-system-level-2) - Cross-invocation persistent caching\n- [Local Cache System (Level 1)](#local-cache-operations-level-1) - In-memory invocation caching\n- [Optimistic Locking](#optimistic-locking) - Data consistency\n- [Rovo Integration](#rovo-integration) - Secure natural-language analytics\n- [Migration Tools](#web-triggers-for-migrations) - Database migrations\n- [Query Analysis](#query-analysis-and-performance-optimization) - Performance optimization\n\n**Looking for practical examples?**\n\n- [Simple Example](examples/forge-sql-orm-example-simple) - Basic ORM usage\n- [Optimistic Locking Example](examples/forge-sql-orm-example-optimistic-locking) - Real-world conflict handling\n- [Organization Tracker Example](examples/forge-sql-orm-example-org-tracker) - Complex relationships\n- [Checklist Example](examples/forge-sql-orm-example-checklist) - Jira integration\n- [Cache Example](examples/forge-sql-orm-example-cache) - Advanced caching capabilities\n- [Rovo Integration Example](https://github.com/vzakharchenko/Forge-Secure-Notes-for-Jira) - Real-world Rovo AI agent with secure analytics\n\n## Usage Approaches\n\n### 1. 
Full Forge-SQL-ORM Usage\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\nconst forgeSQL = new ForgeSQL();\n```\n\nBest for: Advanced features like optimistic locking, automatic versioning, and automatic field name collision prevention in complex queries.\n\n### 2. Direct Drizzle Usage\n\n```typescript\nimport { drizzle } from \"drizzle-orm/mysql-proxy\";\nimport { forgeDriver } from \"forge-sql-orm\";\nconst db = drizzle(forgeDriver);\n```\n\nBest for: Simple modify operations without optimistic locking. Note that you need to manually patch the Drizzle instance with `patchDbWithSelectAliased` so that select fields avoid name collisions in Atlassian Forge SQL.\n\n### 3. Local Cache Optimization\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\nconst forgeSQL = new ForgeSQL();\n\n// Optimize repeated queries within a single invocation\nawait forgeSQL.executeWithLocalContext(async () =\u003e {\n  // Multiple queries here will benefit from local caching\n  const activeUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // This query will use local cache (no database call)\n  const cachedUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // Using new methods for better performance\n  const usersFrom = await forgeSQL.selectFrom(users).where(eq(users.active, true));\n\n  // This will use local cache (no database call)\n  const cachedUsersFrom = await forgeSQL.selectFrom(users).where(eq(users.active, true));\n\n  // Raw SQL with local caching\n  const rawUsers = await forgeSQL.execute(\"SELECT id, name FROM users WHERE active = ?\", [true]);\n});\n```\n\nBest for: Performance optimization of repeated queries within resolvers or single-invocation contexts.\n\n## Field Name Collision Prevention in Complex Queries\n\nWhen working with complex queries involving multiple tables (joins, inner joins, etc.), Atlassian 
Forge SQL has a specific behavior where fields with the same name from different tables get collapsed into a single field with a null value. This is not a Drizzle ORM issue but rather a characteristic of Atlassian Forge SQL's behavior.\n\nForge-SQL-ORM provides two ways to handle this:\n\n### Using Forge-SQL-ORM\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\n\nconst forgeSQL = new ForgeSQL();\n\n// Automatic field name collision prevention\nawait forgeSQL\n  .select({ user: users, order: orders })\n  .from(orders)\n  .innerJoin(users, eq(orders.userId, users.id));\n```\n\n### Using Direct Drizzle\n\n```typescript\nimport { drizzle } from \"drizzle-orm/mysql-proxy\";\nimport { forgeDriver, patchDbWithSelectAliased } from \"forge-sql-orm\";\n\nconst db = patchDbWithSelectAliased(drizzle(forgeDriver));\n\n// Manual field name collision prevention\nawait db\n  .selectAliased({ user: users, order: orders })\n  .from(orders)\n  .innerJoin(users, eq(orders.userId, users.id));\n```\n\n### Important Notes\n\n- This is a specific behavior of Atlassian Forge SQL, not Drizzle ORM\n- For complex queries involving multiple tables, it's recommended to always specify select fields and avoid using `select()` without field selection\n- The solution automatically creates unique aliases for each field by prefixing them with the table name\n- This ensures that fields with the same name from different tables remain distinct in the query results\n\n## Installation\n\nForge-SQL-ORM is designed to work with @forge/sql and requires some additional setup to ensure compatibility within Atlassian Forge.\n\n✅ Step 1: Install Dependencies\n\n**Basic installation (without caching):**\n\n```sh\nnpm install forge-sql-orm @forge/sql drizzle-orm -S\n```\n\n**With caching support:**\n\n```sh\nnpm install forge-sql-orm @forge/sql @forge/kvs drizzle-orm -S\n```\n\n**⚠️ Important for UI-Kit projects:**\n\nIf you're installing `forge-sql-orm` in a UI-Kit project (projects using 
`@forge/react`), you may encounter peer dependency conflicts with `@types/react`. This is due to a conflict between `@types/react@18` (required by `@forge/react`) and `@types/react@19` (an optional peer dependency of `drizzle-orm` via `bun-types`).\n\nTo resolve this, use the `--legacy-peer-deps` flag:\n\n```sh\n# Basic installation for UI-Kit projects\nnpm install forge-sql-orm @forge/sql drizzle-orm -S --legacy-peer-deps\n\n# With caching support for UI-Kit projects\nnpm install forge-sql-orm @forge/sql @forge/kvs drizzle-orm -S --legacy-peer-deps\n```\n\n**Note:** The `--legacy-peer-deps` flag tells npm to ignore peer dependency conflicts. This is safe here because `bun-types` is an optional peer dependency and doesn't affect the functionality of `forge-sql-orm` in Forge environments.\n\nThis will:\n\n- Install Forge-SQL-ORM (the ORM for @forge/sql)\n- Install @forge/sql, the Forge database layer\n- Install @forge/kvs, the Forge Key-Value Store for caching (optional, only needed for caching features)\n- Install Drizzle ORM and its MySQL driver\n- Install TypeScript types for MySQL\n- Install forge-sql-orm-cli, a command-line interface tool for managing Atlassian Forge SQL migrations and model generation with Drizzle ORM integration\n\n## Quick Start\n\n### 1. Basic Setup\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\n\n// Initialize ForgeSQL\nconst forgeSQL = new ForgeSQL();\n\n// Simple query\nconst allUsers = await forgeSQL.select().from(users);\n```\n\n### 2. With Caching (Optional)\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\n\n// Initialize with caching\nconst forgeSQL = new ForgeSQL({\n  cacheEntityName: \"cache\",\n  cacheTTL: 300,\n});\n\n// Cached query\nconst activeUsers = await forgeSQL\n  .selectCacheable({ id: users.id, name: users.name })\n  .from(users)\n  .where(eq(users.active, true));\n```\n\n### 3. 
Local Cache Optimization\n\n```typescript\n// Optimize repeated queries within a single invocation\nawait forgeSQL.executeWithLocalContext(async () =\u003e {\n  const activeUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // This query will use local cache (no database call)\n  const cachedUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // Using new methods for better performance\n  const usersFrom = await forgeSQL.selectFrom(users).where(eq(users.active, true));\n\n  // Raw SQL with local caching\n  const rawUsers = await forgeSQL.execute(\"SELECT id, name FROM users WHERE active = ?\", [true]);\n});\n```\n\n### 4. Resolver Performance Monitoring\n\n```typescript\n// Resolver with performance monitoring\nresolver.define(\"fetch\", async (req: Request) =\u003e {\n  try {\n    return await forgeSQL.executeWithMetadata(\n      async () =\u003e {\n        // Resolver logic with multiple queries\n        const users = await forgeSQL.selectFrom(demoUsers);\n        const orders = await forgeSQL\n          .selectFrom(demoOrders)\n          .where(eq(demoOrders.userId, demoUsers.id));\n        return { users, orders };\n      },\n      async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n        const threshold = 500; // ms baseline for this resolver\n\n        if (totalDbExecutionTime \u003e threshold * 1.5) {\n          console.warn(\n            `[Performance Warning fetch] Resolver exceeded DB time: ${totalDbExecutionTime} ms`,\n          );\n          await printQueriesWithPlan(); // Optionally log or capture diagnostics for further analysis\n        } else if (totalDbExecutionTime \u003e threshold) {\n          console.debug(`[Performance Debug fetch] High DB time: ${totalDbExecutionTime} ms`);\n        }\n      },\n      {\n        // Optional: Configure query plan printing 
behavior\n        mode: \"TopSlowest\", // Print top slowest queries (default)\n        topQueries: 3, // Print top 3 slowest queries\n      },\n    );\n  } catch (e) {\n    const error = e?.cause?.debug?.sqlMessage ?? e?.cause;\n    console.error(error, e);\n    throw error;\n  }\n});\n```\n\n**Query Plan Printing Options:**\n\nThe `printQueriesWithPlan` function supports two modes:\n\n1. **TopSlowest Mode (default)**: Prints execution plans for the slowest queries from the current resolver invocation\n   - `mode`: Set to `'TopSlowest'` (default)\n   - `topQueries`: Number of top slowest queries to analyze (default: 1)\n\n2. **SummaryTable Mode**: Uses `CLUSTER_STATEMENTS_SUMMARY` for query analysis\n   - `mode`: Set to `'SummaryTable'`\n   - `summaryTableWindowTime`: Time window in milliseconds (default: 15000ms)\n   - Only works if queries are executed within the specified time window\n\n### 5. Rovo Integration (Secure Analytics)\n\n```typescript\n// Secure dynamic SQL queries for natural-language analytics\nconst rovo = forgeSQL.rovo();\nconst settings = await rovo\n  .rovoSettingBuilder(usersTable, accountId)\n  .addContextParameter(\":currentUserId\", accountId)\n  .useRLS()\n  .addRlsColumn(usersTable.id)\n  .addRlsWherePart((alias) =\u003e `${alias}.${usersTable.id.name} = '${accountId}'`)\n  .finish()\n  .build();\n\nconst result = await rovo.dynamicIsolatedQuery(\n  \"SELECT id, name FROM users WHERE status = 'active' AND userId = :currentUserId\",\n  settings,\n);\n```\n\n### 6. 
Next Steps\n\n- [Full Installation Guide](#installation) - Complete setup instructions\n- [Core Features](#core-features) - Learn about key capabilities\n- [Global Cache System (Level 2)](#global-cache-system-level-2) - Cross-invocation caching features\n- [Local Cache System (Level 1)](#local-cache-operations-level-1) - In-memory caching features\n- [Rovo Integration](#rovo-integration) - Secure natural-language analytics\n- [API Reference](#reference) - Complete API documentation\n\n## Drizzle Usage with forge-sql-orm\n\nIf you prefer to use Drizzle ORM with the additional features of Forge-SQL-ORM (like optimistic locking and caching), you can use the enhanced API:\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\nconst forgeSQL = new ForgeSQL();\n\n// Versioned operations with cache management (recommended)\nawait forgeSQL.modifyWithVersioningAndEvictCache().insert(Users, [userData]);\nawait forgeSQL.modifyWithVersioningAndEvictCache().updateById(updateData, Users);\n\n// Versioned operations without cache management\nawait forgeSQL.modifyWithVersioning().insert(Users, [userData]);\nawait forgeSQL.modifyWithVersioning().updateById(updateData, Users);\n\n// Non-versioned operations with cache management\nawait forgeSQL.insertAndEvictCache(Users).values(userData);\nawait forgeSQL.updateAndEvictCache(Users).set(updateData).where(eq(Users.id, 1));\n\n// Basic Drizzle operations (cache context aware)\nawait forgeSQL.insert(Users).values(userData);\nawait forgeSQL.update(Users).set(updateData).where(eq(Users.id, 1));\n\n// Direct Drizzle access\nconst db = forgeSQL.getDrizzleQueryBuilder();\nconst allUsers = await db.select().from(users);\n\n// Using new methods for enhanced functionality\nconst usersFrom = await forgeSQL.selectFrom(users).where(eq(users.active, true));\n\nconst usersDistinct = await forgeSQL.selectDistinctFrom(users).where(eq(users.active, true));\n\nconst usersCacheable = await forgeSQL.selectCacheableFrom(users).where(eq(users.active, 
true));\n\n// Raw SQL execution\nconst rawUsers = await forgeSQL.execute(\"SELECT * FROM users WHERE active = ?\", [true]);\n\n// Raw SQL with caching\n// ⚠️ IMPORTANT: When using executeCacheable(), all table names must be wrapped with backticks (`)\nconst cachedRawUsers = await forgeSQL.executeCacheable(\n  \"SELECT * FROM `users` WHERE active = ?\",\n  [true],\n  300,\n);\n\n// Raw SQL with execution metadata and performance monitoring\nconst usersWithMetadata = await forgeSQL.executeWithMetadata(\n  async () =\u003e {\n    const users = await forgeSQL.selectFrom(usersTable);\n    const orders = await forgeSQL\n      .selectFrom(ordersTable)\n      .where(eq(ordersTable.userId, usersTable.id));\n    return { users, orders };\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    const threshold = 500; // ms baseline for this resolver\n\n    if (totalDbExecutionTime \u003e threshold * 1.5) {\n      console.warn(`[Performance Warning] Resolver exceeded DB time: ${totalDbExecutionTime} ms`);\n      await printQueriesWithPlan(); // Analyze and print query execution plans\n    } else if (totalDbExecutionTime \u003e threshold) {\n      console.debug(`[Performance Debug] High DB time: ${totalDbExecutionTime} ms`);\n    }\n\n    console.log(`DB response size: ${totalResponseSize} bytes`);\n  },\n  {\n    // Optional: Configure query plan printing\n    mode: \"TopSlowest\", // Print top slowest queries (default)\n    topQueries: 2, // Print top 2 slowest queries\n  },\n);\n\n// DDL operations for schema modifications\nawait forgeSQL.executeDDL(`\n  CREATE TABLE users (\n    id INT PRIMARY KEY AUTO_INCREMENT,\n    name VARCHAR(255) NOT NULL,\n    email VARCHAR(255) UNIQUE\n  )\n`);\n\n// Execute regular SQL queries in DDL context for performance monitoring\nawait forgeSQL.executeDDLActions(async () =\u003e {\n  // Execute regular SQL queries in DDL context for monitoring\n  const slowQueries = await forgeSQL.execute(`\n    SELECT * FROM 
INFORMATION_SCHEMA.STATEMENTS_SUMMARY \n    WHERE AVG_LATENCY \u003e 1000000\n  `);\n\n  // Execute complex analysis queries in DDL context\n  const performanceData = await forgeSQL.execute(`\n    SELECT * FROM INFORMATION_SCHEMA.CLUSTER_STATEMENTS_SUMMARY_HISTORY\n    WHERE SUMMARY_END_TIME \u003e DATE_SUB(NOW(), INTERVAL 1 HOUR)\n  `);\n\n  return { slowQueries, performanceData };\n});\n\n// Common Table Expressions (CTEs)\nconst userStats = await forgeSQL\n  .with(\n    forgeSQL.selectFrom(users).where(eq(users.active, true)).as(\"activeUsers\"),\n    forgeSQL.selectFrom(orders).where(eq(orders.status, \"completed\")).as(\"completedOrders\"),\n  )\n  .select({\n    totalActiveUsers: sql`COUNT(au.id)`,\n    totalCompletedOrders: sql`COUNT(co.id)`,\n  })\n  .from(sql`activeUsers au`)\n  .leftJoin(sql`completedOrders co`, eq(sql`au.id`, sql`co.userId`));\n\n// Rovo Integration for secure dynamic SQL queries\nconst rovo = forgeSQL.rovo();\nconst settings = await rovo\n  .rovoSettingBuilder(usersTable, accountId)\n  .addContextParameter(\":currentUserId\", accountId)\n  .useRLS()\n  .addRlsColumn(usersTable.id)\n  .addRlsWherePart((alias) =\u003e `${alias}.${usersTable.id.name} = '${accountId}'`)\n  .finish()\n  .build();\n\nconst rovoResult = await rovo.dynamicIsolatedQuery(\n  \"SELECT id, name FROM users WHERE status = 'active' AND userId = :currentUserId\",\n  settings,\n);\n```\n\nThis approach gives you direct access to all Drizzle ORM features while still using the @forge/sql backend with enhanced caching and versioning capabilities.\n\n## Direct Drizzle Usage with Custom Driver\n\nIf you prefer to use Drizzle ORM directly without the additional features of Forge-SQL-ORM (like optimistic locking), you can use the custom driver:\n\n```typescript\nimport { drizzle } from \"drizzle-orm/mysql-proxy\";\nimport { forgeDriver, patchDbWithSelectAliased } from \"forge-sql-orm\";\n\n// Initialize drizzle with the custom driver and patch it for aliased selects\nconst db 
= patchDbWithSelectAliased(drizzle(forgeDriver));\n\n// Use Drizzle directly\nconst allUsers = await db.select().from(users);\nconst aliasedUsers = await db.selectAliased(getTableColumns(users)).from(users);\nconst distinctUsers = await db.selectAliasedDistinct(getTableColumns(users)).from(users);\nawait db.insert(users)...;\nawait db.update(users)...;\nawait db.delete(users)...;\n// Use Drizzle with the KVS cache\nconst cachedUsers = await db.selectAliasedCacheable(getTableColumns(users)).from(users);\nconst cachedDistinctUsers = await db.selectAliasedDistinctCacheable(getTableColumns(users)).from(users);\nawait db.insertAndEvictCache(users)...;\nawait db.updateAndEvictCache(users)...;\nawait db.deleteAndEvictCache(users)...;\n\n// Use Drizzle with the KVS cache context\nawait forgeSQL.executeWithCacheContext(async () =\u003e {\n  await db.insertWithCacheContext(users)...;\n  await db.updateWithCacheContext(users)...;\n  await db.deleteWithCacheContext(users)...;\n  // invoke without cache\n  const contextUsers = await db.selectAliasedCacheable(getTableColumns(users)).from(users);\n  // Cache is cleared only once at the end for all affected tables\n});\n\n// Using new methods with direct Drizzle\nconst usersFrom = await forgeSQL.selectFrom(users)\n  .where(eq(users.active, true));\n\nconst usersDistinct = await forgeSQL.selectDistinctFrom(users)\n  .where(eq(users.active, true));\n\nconst usersCacheable = await forgeSQL.selectCacheableFrom(users)\n  .where(eq(users.active, true));\n\n// Raw SQL execution\nconst rawUsers = await forgeSQL.execute(\n  \"SELECT * FROM users WHERE active = ?\",\n  [true]\n);\n\n// Raw SQL with caching\n// ⚠️ IMPORTANT: When using executeCacheable(), all table names must be wrapped with backticks (`)\nconst cachedRawUsers = await forgeSQL.executeCacheable(\n  \"SELECT * FROM `users` WHERE active = ?\",\n  [true],\n  300\n);\n\n// Raw SQL with execution metadata and performance monitoring\nconst usersWithMetadata = await forgeSQL.executeWithMetadata(\n  async () =\u003e {\n    const users 
= await forgeSQL.selectFrom(usersTable);\n    const orders = await forgeSQL.selectFrom(ordersTable).where(eq(ordersTable.userId, usersTable.id));\n    return { users, orders };\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    const threshold = 500; // ms baseline for this resolver\n\n    if (totalDbExecutionTime \u003e threshold * 1.5) {\n      console.warn(`[Performance Warning] Resolver exceeded DB time: ${totalDbExecutionTime} ms`);\n      await printQueriesWithPlan(); // Analyze and print query execution plans\n    } else if (totalDbExecutionTime \u003e threshold) {\n      console.debug(`[Performance Debug] High DB time: ${totalDbExecutionTime} ms`);\n    }\n\n    console.log(`DB response size: ${totalResponseSize} bytes`);\n  },\n  {\n    // Optional: Configure query plan printing\n    mode: \"TopSlowest\", // Print top slowest queries (default)\n    topQueries: 1, // Print top slowest query\n  },\n);\n```\n\n## Setting Up Caching with @forge/kvs (Optional)\n\nThe caching system is optional and only needed if you want to use cache-related features. 
To enable the caching system, you need to install the required dependency and configure your manifest.\n\n### How Caching Works\n\nTo use caching, use the Forge-SQL-ORM methods that support cache management:\n\n**Methods that evict the cache after execution, and participate in batch eviction inside a cache context:**\n\n- `forgeSQL.insertAndEvictCache()`\n- `forgeSQL.updateAndEvictCache()`\n- `forgeSQL.deleteAndEvictCache()`\n- `forgeSQL.modifyWithVersioningAndEvictCache()`\n- `forgeSQL.getDrizzleQueryBuilder().insertAndEvictCache()`\n- `forgeSQL.getDrizzleQueryBuilder().updateAndEvictCache()`\n- `forgeSQL.getDrizzleQueryBuilder().deleteAndEvictCache()`\n\n**Methods that participate in the cache context only (batch eviction):**\n\nThese methods do not evict the cache on their own; when called inside a cache context, they register the affected tables for one batched eviction at the end:\n\n- `forgeSQL.insert()`\n- `forgeSQL.update()`\n- `forgeSQL.delete()`\n- `forgeSQL.modifyWithVersioning()`\n- `forgeSQL.getDrizzleQueryBuilder().insertWithCacheContext()`\n- `forgeSQL.getDrizzleQueryBuilder().updateWithCacheContext()`\n- `forgeSQL.getDrizzleQueryBuilder().deleteWithCacheContext()`\n\n**Methods that never evict the cache (avoid them when caching is enabled):**\n\n- `forgeSQL.getDrizzleQueryBuilder().insert()`\n- `forgeSQL.getDrizzleQueryBuilder().update()`\n- `forgeSQL.getDrizzleQueryBuilder().delete()`\n\n**Cacheable methods:**\n\n- `forgeSQL.selectCacheable()`\n- `forgeSQL.selectDistinctCacheable()`\n- `forgeSQL.getDrizzleQueryBuilder().selectAliasedCacheable()`\n- `forgeSQL.getDrizzleQueryBuilder().selectAliasedDistinctCacheable()`\n\n**Cache context example:**\n\n```typescript\nawait forgeSQL.executeWithCacheContext(async () =\u003e {\n  // These methods participate in batch cache clearing\n  await forgeSQL.insert(Users).values(userData);\n  await forgeSQL.update(Users).set(updateData).where(eq(Users.id, 1));\n  await forgeSQL.delete(Users).where(eq(Users.id, 1));\n  // Cache is cleared only once at the end for all affected tables\n});\n```\n\nThe diagram below shows 
the lifecycle of a cacheable query in Forge-SQL-ORM:\n\n1. Resolver calls forge-sql-orm with a SQL query and parameters.\n2. forge-sql-orm generates a cache key = hash(sql, parameters).\n3. It asks @forge/kvs for an existing cached result.\n   - Cache hit → result is returned immediately.\n   - Cache miss / expired → query is executed against @forge/sql.\n4. Fresh result is stored in @forge/kvs with TTL and returned to the caller.\n\n![img.png](img/umlCache1.png)\n\nThe diagram below shows how Evict Cache works in Forge-SQL-ORM:\n\n1. **Data modification** is executed through `@forge/sql` (e.g., `UPDATE users ...`).\n2. After a successful update, **forge-sql-orm** queries the `cache` entity by using the **`sql` field** with `filter.contains(\"users\")` to find affected cached queries.\n3. The returned cache entries are deleted in **batches** (up to 25 per transaction).\n4. Once eviction is complete, the update result is returned to the resolver.\n5. **Note:** Expired entries are not processed here — they are cleaned up separately by the scheduled cache cleanup trigger using the `expiration` index.\n\n![img.png](img/umlCacheEvict1.png)\n\nThe diagram below shows how Scheduled Expiration Cleanup works:\n\n**Note:** forge-sql-orm uses Forge KVS TTL feature (`{ ttl: { unit: \"SECONDS\", value: number } }`) to mark entries as expired. However, **actual deletion is asynchronous and may take up to 48 hours**. During this window, read operations may still return expired results. The scheduler trigger proactively cleans up expired entries to prevent cache growth from impacting INSERT/UPDATE performance.\n\n1. A periodic scheduler (Forge trigger) runs cache cleanup independently of data modifications.\n2. forge-sql-orm queries the cache entity by the expiration index to find entries with expiration \u003c now.\n3. Entries are deleted in batches (up to 25 per transaction) until the page is empty; pagination is done with a cursor (e.g., 100 per page).\n4. 
This keeps the cache footprint small and prevents stale data accumulation, especially important when cache size impacts data modification performance.\n\n![img.png](img/umlCacheEvictScheduler1.png)\n\nThe diagram below shows how Cache Context works:\n\n`executeWithCacheContext(fn)` lets you group multiple data modifications and perform **one consolidated cache eviction** at the end:\n\n1. The context starts with an empty `affectedTables` set.\n2. Each successful `INSERT/UPDATE/DELETE` inside the context registers its table name in `affectedTables`.\n3. **Reads inside the same context** that target tables present in `affectedTables` will **bypass the cache** (read-through to SQL) to avoid serving stale data. These reads also **do not write** back to cache until eviction completes.\n4. On context completion, `affectedTables` is de-duplicated and used to build **one combined KVS query** over the `sql` field with\n   `filter.or(filter.contains(\"\u003ct1\u003e\"), filter.contains(\"\u003ct2\u003e\"), ...)`, returning all impacted cache entries in a single scan (paged by cursor, e.g., 100/page).\n5. Matching cache entries are deleted in **batches** (≤25 per transaction) until the page is exhausted; then the next page is fetched via the cursor.\n6. Expiration is handled separately by the scheduled cleanup and is **not part of** the context flow.\n\n![img.png](img/umlCacheEvictCacheContext1.png)\n\n### Important Considerations\n\n**@forge/kvs Limits:**\nPlease review the [official @forge/kvs quotas and limits](https://developer.atlassian.com/platform/forge/platform-quotas-and-limits/#kvs-and-custom-entity-store-quotas) before implementing caching.\n\n**TTL Limitations:**\n\n- **Maximum TTL**: The maximum supported TTL is 1 year from the time the expiry is set.\n- **Asynchronous deletion**: Expired data is not removed immediately upon expiry. Deletion may take up to 48 hours. 
During this window, read operations may still return expired results.\n- **Performance impact**: If the cache grows large, expired entries can impact INSERT/UPDATE performance. Use the Clear Cache Scheduler Trigger to proactively clean up expired entries.\n\n**Caching Guidelines:**\n\n- Don't cache everything - be selective about what to cache\n- Don't cache simple, fast queries - sometimes a direct query is faster than a cache lookup\n- Consider data size and frequency of changes\n- Monitor cache usage to stay within quotas\n- Use appropriate TTL values\n- If cache growth impacts performance, configure the Clear Cache Scheduler Trigger\n\n**⚠️ Important Cache Limitations:**\n\n- **Table names starting with `a_`**: Tables whose names start with `a_` (case-insensitive) are automatically ignored in cache operations. The KVS cache will not work with such tables, and they are excluded from cache invalidation and cache key generation.\n\n### Step 1: Install Dependencies\n\n```bash\nnpm install @forge/kvs -S\n```\n\n### Step 2: Configure Manifest\n\nAdd the storage entity configuration to your `manifest.yml`. 
The `scheduledTrigger` is **optional** - only configure it if your cache grows large and impacts INSERT/UPDATE performance:\n\n```yaml\nmodules:\n  # Optional: Only needed if cache growth impacts INSERT/UPDATE performance\n  scheduledTrigger:\n    - key: clear-cache-trigger\n      function: clearCache\n      interval: fiveMinute\n  storage:\n    entities:\n      - name: cache\n        attributes:\n          sql:\n            type: string\n          expiration:\n            type: integer\n          data:\n            type: string\n        indexes:\n          - sql\n          - expiration\n  sql:\n    - key: main\n      engine: mysql\n  function:\n    - key: clearCache\n      handler: index.clearCache\n```\n\n```typescript\n// Example usage in your Forge app\nimport { clearCacheSchedulerTrigger } from \"forge-sql-orm\";\n\nexport const clearCache = () =\u003e {\n  return clearCacheSchedulerTrigger({\n    cacheEntityName: \"cache\",\n  });\n};\n```\n\n### Step 3: Configure ORM Options\n\nSet the cache entity name in your ForgeSQL configuration:\n\n```typescript\nconst options = {\n  cacheEntityName: \"cache\", // Must match the entity name in manifest.yml\n  cacheTTL: 300, // Default cache TTL in seconds (5 minutes)\n  cacheWrapTable: true, // Wrap table names with backticks in cache keys\n  // ... 
other options\n};\n\nconst forgeSQL = new ForgeSQL(options);\n```\n\n**Important Notes:**\n\n- The `cacheEntityName` must exactly match the `name` in your manifest storage entities\n- The entity attributes (`sql`, `expiration`, `data`) are required for proper cache functionality\n- Indexes on `sql` and `expiration` improve cache lookup performance\n- Cache data uses Forge KVS TTL for expiration (deletion is asynchronous, may take up to 48 hours)\n- No additional permissions are required beyond standard Forge app permissions\n\n### Complete Setup Examples\n\n**Basic setup (without caching):**\n\n**Install dependencies:**\n\n```shell\nnpm install forge-sql-orm @forge/sql drizzle-orm -S\n# For UI-Kit projects, use: npm install forge-sql-orm @forge/sql drizzle-orm -S --legacy-peer-deps\n```\n\n**manifest.yml:**\n\n```yaml\nmodules:\n  sql:\n    - key: main\n      engine: mysql\n```\n\n**index.ts:**\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\n\nconst forgeSQL = new ForgeSQL();\n\n// simple insert\nawait forgeSQL.insert(Users, [userData]);\n// Use versioned operations without caching\nawait forgeSQL.modifyWithVersioning().insert(Users, [userData]);\nconst users = await forgeSQL.select({ id: Users.id }).from(Users);\n```\n\n**With caching support:**\n\n```shell\nnpm install forge-sql-orm @forge/sql @forge/kvs drizzle-orm -S\n# For UI-Kit projects, use: npm install forge-sql-orm @forge/sql @forge/kvs drizzle-orm -S --legacy-peer-deps\n```\n\n**manifest.yml:**\n\n```yaml\nmodules:\n  # Optional: Only needed if cache growth impacts INSERT/UPDATE performance\n  scheduledTrigger:\n    - key: clear-cache-trigger\n      function: clearCache\n      interval: fiveMinute\n  storage:\n    entities:\n      - name: cache\n        attributes:\n          sql:\n            type: string\n          expiration:\n            type: integer\n          data:\n            type: string\n        indexes:\n          - sql\n          - expiration\n  sql:\n    - key: main\n      engine: mysql\n  
function:\n    - key: clearCache\n      handler: index.clearCache\n```\n\n**index.ts:**\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\n\nconst forgeSQL = new ForgeSQL({\n  cacheEntityName: \"cache\",\n});\n\nimport { clearCacheSchedulerTrigger } from \"forge-sql-orm\";\nimport { getTableColumns } from \"drizzle-orm\";\n\nexport const clearCache = () =\u003e {\n  return clearCacheSchedulerTrigger({\n    cacheEntityName: \"cache\",\n  });\n};\n\n// Now you can use caching features\nconst usersData = await forgeSQL\n  .selectCacheable(getTableColumns(users))\n  .from(users)\n  .where(eq(users.active, true));\n\n// simple insert\nawait forgeSQL.insertAndEvictCache(users, [userData]);\n// Use versioned operations with caching\nawait forgeSQL.modifyWithVersioningAndEvictCache().insert(users, [userData]);\n\n// use Cache Context\nconst data = await forgeSQL.executeWithCacheContextAndReturnValue(async () =\u003e {\n  // after insert mark users to evict\n  await forgeSQL.insert(users, [userData]);\n  // after insertAndEvictCache mark orders to evict\n  await forgeSQL.insertAndEvictCache(orders, [order1, order2]);\n  // execute query and put result to local cache\n  await forgeSQL\n    .selectCacheable({\n      userId: users.id,\n      userName: users.name,\n      orderId: orders.id,\n      orderName: orders.name,\n    })\n    .from(users)\n    .innerJoin(orders, eq(orders.userId, users.id))\n    .where(eq(users.active, true));\n  // use local cache without @forge/kvs and @forge/sql\n  return await forgeSQL\n    .selectCacheable({\n      userId: users.id,\n      userName: users.name,\n      orderId: orders.id,\n      orderName: orders.name,\n    })\n    .from(users)\n    .innerJoin(orders, eq(orders.userId, users.id))\n    .where(eq(users.active, true));\n});\n// execute query and put result to kvs cache\nawait forgeSQL\n  .selectCacheable({\n    userId: users.id,\n    userName: users.name,\n    orderId: orders.id,\n    orderName: orders.name,\n  })\n  
.from(users)\n  .innerJoin(orders, eq(orders.userId, users.id))\n  .where(eq(users.active, true));\n\n// get result from @forge/kvs cache without a real @forge/sql call\nawait forgeSQL\n  .selectCacheable({\n    userId: users.id,\n    userName: users.name,\n    orderId: orders.id,\n    orderName: orders.name,\n  })\n  .from(users)\n  .innerJoin(orders, eq(orders.userId, users.id))\n  .where(eq(users.active, true));\n\n// use Local Cache for performance optimization\nconst optimizedData = await forgeSQL.executeWithLocalCacheContextAndReturnValue(async () =\u003e {\n  // First query - hits database and caches result\n  const activeUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // Second query - uses local cache (no database call)\n  const cachedUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // Using new methods for better performance\n  const usersFrom = await forgeSQL.selectFrom(users).where(eq(users.active, true));\n\n  // This will use local cache (no database call)\n  const cachedUsersFrom = await forgeSQL.selectFrom(users).where(eq(users.active, true));\n\n  // Raw SQL with local caching\n  const rawUsers = await forgeSQL.execute(\"SELECT id, name FROM users WHERE active = ?\", [true]);\n\n  // Insert operation - evicts local cache\n  await forgeSQL.insert(users).values({ name: \"New User\", active: true });\n\n  // Third query - hits database again and caches new result\n  const updatedUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  return { activeUsers, cachedUsers, updatedUsers, usersFrom, cachedUsersFrom, rawUsers };\n});\n```\n\n## Choosing the Right Method - ForgeSQL ORM\n\n### When to Use Each Approach\n\n| Method                                | Use Case                                                    | Versioning | 
Cache Management     |\n| ------------------------------------- | ----------------------------------------------------------- | ---------- | -------------------- |\n| `modifyWithVersioningAndEvictCache()` | High-concurrency scenarios with Cache support               | ✅ Yes     | ✅ Yes               |\n| `modifyWithVersioning()`              | High-concurrency scenarios                                  | ✅ Yes     | Cache Context        |\n| `insertAndEvictCache()`               | Simple inserts                                              | ❌ No      | ✅ Yes               |\n| `updateAndEvictCache()`               | Simple updates                                              | ❌ No      | ✅ Yes               |\n| `deleteAndEvictCache()`               | Simple deletes                                              | ❌ No      | ✅ Yes               |\n| `insert/update/delete`                | Basic Drizzle operations                                    | ❌ No      | Cache Context        |\n| `selectFrom()`                        | All-column queries with field aliasing                      | ❌ No      | Local Cache          |\n| `selectDistinctFrom()`                | Distinct all-column queries with field aliasing             | ❌ No      | Local Cache          |\n| `selectCacheableFrom()`               | All-column queries with field aliasing and caching          | ❌ No      | Local + Global Cache |\n| `selectDistinctCacheableFrom()`       | Distinct all-column queries with field aliasing and caching | ❌ No      | Local + Global Cache |\n| `execute()`                           | Raw SQL queries with local caching                          | ❌ No      | Local Cache          |\n| `executeCacheable()`                  | Raw SQL queries with local and global caching               | ❌ No      | Local + Global Cache |\n| `executeDDL()`                        | DDL operations (CREATE, ALTER, DROP, etc.)                  
| ❌ No      | No Caching           |\n| `executeDDLActions()`                 | Execute regular SQL queries in DDL operation context        | ❌ No      | No Caching           |\n| `with()`                              | Common Table Expressions (CTEs)                             | ❌ No      | Local Cache          |\n\n## Choosing the Right Method - Direct Drizzle\n\n### When to Use Each Approach\n\n| Method                                                                 | Use Case                                                                                                               | Versioning | Cache Management     |\n| ---------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | ---------- | -------------------- |\n| `insertWithCacheContext/updateWithCacheContext/deleteWithCacheContext` | Basic Drizzle operations                                                                                               | ❌ No      | Cache Context        |\n| `insertAndEvictCache()`                                                | Simple inserts without conflicts                                                                                       | ❌ No      | ✅ Yes               |\n| `updateAndEvictCache()`                                                | Simple updates without conflicts                                                                                       | ❌ No      | ✅ Yes               |\n| `deleteAndEvictCache()`                                                | Simple deletes without conflicts                                                                                       | ❌ No      | ✅ Yes               |\n| `insert/update/delete`                                                 | Basic Drizzle operations                                                                                               | ❌ No      
| ❌ No                |\n| `selectFrom()`                                                         | All-column queries with field aliasing                                                                                 | ❌ No      | Local Cache          |\n| `selectDistinctFrom()`                                                 | Distinct all-column queries with field aliasing                                                                        | ❌ No      | Local Cache          |\n| `selectCacheableFrom()`                                                | All-column queries with field aliasing and caching                                                                     | ❌ No      | Local + Global Cache |\n| `selectDistinctCacheableFrom()`                                        | Distinct all-column queries with field aliasing and caching                                                            | ❌ No      | Local + Global Cache |\n| `execute()`                                                            | Raw SQL queries with local caching                                                                                     | ❌ No      | Local Cache          |\n| `executeCacheable()`                                                   | Raw SQL queries with local and global caching                                                                          | ❌ No      | Local + Global Cache |\n| `executeWithMetadata()`                                                | Resolver-level profiling with execution metrics and configurable query plan printing (TopSlowest or SummaryTable mode) | ❌ No      | Local Cache          |\n| `executeDDL()`                                                         | DDL operations (CREATE, ALTER, DROP, etc.)                                                                             
| ❌ No      | No Caching           |\n| `executeDDLActions()`                                                  | Execute regular SQL queries in DDL operation context                                                                   | ❌ No      | No Caching           |\n| `with()`                                                               | Common Table Expressions (CTEs)                                                                                        | ❌ No      | Local Cache          |\n\nHere, **Cache Context** allows you to batch cache invalidation events and bypass cache reads for affected tables.\n\n## Step-by-Step Migration Workflow\n\n1. **Install CLI and setup scripts**\n\n   ```bash\n   npm install forge-sql-orm-cli -D\n   npm pkg set scripts.models:create=\"forge-sql-orm-cli generate:model --output src/entities --saveEnv\"\n   npm pkg set scripts.migration:create=\"forge-sql-orm-cli migrations:create --force --output src/migration --entitiesPath src/entities\"\n   npm pkg set scripts.migration:update=\"forge-sql-orm-cli migrations:update --entitiesPath src/entities --output src/migration\"\n   ```\n\n   _(This is done only once when setting up the project)_\n\n2. **Generate initial schema from an existing database**\n\n   ```sh\n   npm run models:create\n   ```\n\n   _(This will prompt for database credentials on first run and save them to the `.env` file)_\n\n3. **Create the first migration**\n\n   ```sh\n   npm run migration:create\n   ```\n\n   _(This initializes the database migration structure, also done once)_\n\n4. **Deploy to Forge and verify that migrations work**\n   - Deploy your **Forge app** with migrations.\n   - Run migrations using a **Forge web trigger** or **Forge scheduler**.\n\n5. **Modify the database (e.g., add a new column, index, etc.)**\n   - Use **DbSchema** or manually alter the database schema.\n\n6. 
**Update the migration**\n\n   ```sh\n   npm run migration:update\n   ```\n\n   - ⚠️ **Do NOT update the schema before this step!**\n   - If the schema is updated first, the migration will be empty!\n\n7. **Deploy to Forge and verify that the migration runs without issues**\n   - Run the updated migration on Forge.\n\n8. **Update the schema**\n\n   ```sh\n   npm run models:create\n   ```\n\n9. **Repeat steps 5-8 as needed**\n\n**⚠️ WARNING:**\n\n- **Do NOT swap steps 6 and 8!** If you update the schema before generating a migration, the migration will be empty!\n- Always generate the **migration first**, then update the **schema**.\n\n## Drop Migrations\n\nThe Drop Migrations feature allows you to completely reset your database schema in Atlassian Forge SQL. This is useful when you need to:\n\n- Start fresh with a new schema\n- Reset all tables and their data\n- Clear migration history\n- Ensure your local schema matches the deployed database\n\n### Important Requirements\n\nBefore using Drop Migrations, ensure that:\n\n1. Your local schema exactly matches the current database schema deployed in Atlassian Forge SQL\n2. You have a backup of your data if needed\n3. You understand that this operation will delete all tables and data\n\n### Usage\n\n1. First, ensure your local schema matches the deployed database:\n\n   ```bash\n   npm run models:create\n   ```\n\n2. Generate the drop migration:\n\n   ```bash\n   npm run migration:drop\n   ```\n\n   _(Add this script to your package.json: `npm pkg set scripts.migration:drop=\"forge-sql-orm-cli migrations:drop --entitiesPath src/entities --output src/migration\"`)_\n\n3. Deploy and run the migration in your Forge app:\n\n   ```js\n   import migrationRunner from \"./database/migration\";\n   import { MigrationRunner } from \"@forge/sql/out/migration\";\n\n   const runner = new MigrationRunner();\n   await migrationRunner(runner);\n   await runner.run();\n   ```\n\n4. 
After dropping all tables, you can create a new migration to recreate the schema:\n   ```bash\n   npm run migration:create\n   ```\n   The `--force` parameter is already included in the script to allow creating migrations after dropping all tables.\n\n### Example Migration Output\n\nThe generated drop migration will look like this:\n\n```typescript\nimport { MigrationRunner } from \"@forge/sql/out/migration\";\n\nexport default (migrationRunner: MigrationRunner): MigrationRunner =\u003e {\n    return migrationRunner\n        .enqueue(\"v1_MIGRATION0\", \"ALTER TABLE `orders` DROP FOREIGN KEY `fk_orders_users`\")\n        .enqueue(\"v1_MIGRATION1\", \"DROP INDEX `idx_orders_user_id` ON `orders`\")\n        .enqueue(\"v1_MIGRATION2\", \"DROP TABLE IF EXISTS `orders`\")\n        .enqueue(\"v1_MIGRATION3\", \"DROP TABLE IF EXISTS `users`\")\n        .enqueue(\"MIGRATION_V1_1234567890\", \"DELETE FROM __migrations\");\n};\n```\n\n### ⚠️ Important Notes\n\n- This operation is **irreversible** - all data will be lost\n- Make sure your local schema is up-to-date with the deployed database\n- Consider backing up your data before running drop migrations\n- The migration will clear the `__migrations` table to allow for fresh migration history\n- Drop operations are performed in the correct order: first foreign keys, then indexes, then tables\n\n---\n\n## Date and Time Types\n\nWhen working with date and time fields in your models, you should use the custom types provided by Forge-SQL-ORM to ensure proper handling of date/time values. 
This is necessary because Forge SQL has specific format requirements for date/time values:\n\n| Date type | Required Format                | Example                    |\n| --------- | ------------------------------ | -------------------------- |\n| DATE      | YYYY-MM-DD                     | 2024-09-19                 |\n| TIME      | HH:MM:SS[.fraction]            | 06:40:34                   |\n| TIMESTAMP | YYYY-MM-DD HH:MM:SS[.fraction] | 2024-09-19 06:40:34.999999 |\n\n```typescript\n// ❌ Don't use standard Drizzle date/time types\nexport const testEntityTimeStampVersion = mysqlTable(\"test_entity\", {\n  id: int(\"id\").primaryKey().autoincrement(),\n  time_stamp: timestamp(\"times_tamp\").notNull(),\n  date_time: datetime(\"date_time\").notNull(),\n  time: time(\"time\").notNull(),\n  date: date(\"date\").notNull(),\n});\n\n// ✅ Use Forge-SQL-ORM custom types instead\nimport {\n  forgeDateTimeString,\n  forgeDateString,\n  forgeTimestampString,\n  forgeTimeString,\n} from \"forge-sql-orm\";\n\nexport const testEntityTimeStampVersion = mysqlTable(\"test_entity\", {\n  id: int(\"id\").primaryKey().autoincrement(),\n  time_stamp: forgeTimestampString(\"times_tamp\").notNull(),\n  date_time: forgeDateTimeString(\"date_time\").notNull(),\n  time: forgeTimeString(\"time\").notNull(),\n  date: forgeDateString(\"date\").notNull(),\n});\n```\n\n### Why Custom Types?\n\nThe custom types in Forge-SQL-ORM handle the conversion between JavaScript Date objects and Forge SQL's required string formats automatically. 
Without these custom types, you would need to manually format dates like this:\n\n```typescript\n// Without custom types, you'd need to do this manually:\nconst date = moment().format(\"YYYY-MM-DD\");\nconst time = moment().format(\"HH:mm:ss.SSS\");\nconst timestamp = moment().format(\"YYYY-MM-DD HH:mm:ss.SSS\");\n```\n\nOur custom types provide:\n\n- Automatic conversion between JavaScript Date objects and Forge SQL's required string formats\n- Consistent date/time handling across your application\n- Type safety for date/time fields\n- Proper handling of timezone conversions\n- Support for all Forge SQL date/time types (datetime, timestamp, date, time)\n\n### Available Custom Types\n\n- `forgeDateTimeString` - For datetime fields (YYYY-MM-DD HH:MM:SS[.fraction])\n- `forgeTimestampString` - For timestamp fields (YYYY-MM-DD HH:MM:SS[.fraction])\n- `forgeDateString` - For date fields (YYYY-MM-DD)\n- `forgeTimeString` - For time fields (HH:MM:SS[.fraction])\n\nEach type ensures that the data is properly formatted according to Forge SQL's requirements while providing a clean, type-safe interface for your application code.\n\n# Connection to ORM\n\n```js\nimport ForgeSQL from \"forge-sql-orm\";\n\nconst forgeSQL = new ForgeSQL();\n```\n\nor\n\n```typescript\nimport { drizzle } from \"drizzle-orm/mysql-proxy\";\nimport { forgeDriver } from \"forge-sql-orm\";\n\n// Initialize drizzle with the custom driver\nconst db = drizzle(forgeDriver);\n\n// Use drizzle directly\nconst allUsers = await db.select().from(users);\n```\n\n## Fetch Data\n\n### Basic Fetch Operations\n\n```js\n// Using forgeSQL.select()\nconst user = await forgeSQL.select({ user: users }).from(users);\n\n// Using forgeSQL.selectDistinct()\nconst user = await forgeSQL.selectDistinct({ user: users }).from(users);\n\n// Using forgeSQL.selectCacheable()\nconst user = await forgeSQL.selectCacheable({ user: users }).from(users);\n\n// Using forgeSQL.selectFrom() - Select all columns with field aliasing\nconst user = 
await forgeSQL.selectFrom(users).where(eq(users.id, 1));\n\n// Using forgeSQL.selectDistinctFrom() - Select distinct all columns with field aliasing\nconst user = await forgeSQL.selectDistinctFrom(users).where(eq(users.id, 1));\n\n// Using forgeSQL.selectCacheableFrom() - Select all columns with field aliasing and caching\nconst user = await forgeSQL.selectCacheableFrom(users).where(eq(users.id, 1));\n\n// Using forgeSQL.selectDistinctCacheableFrom() - Select distinct all columns with field aliasing and caching\nconst user = await forgeSQL.selectDistinctCacheableFrom(users).where(eq(users.id, 1));\n\n// Using forgeSQL.execute() - Execute raw SQL with local caching\nconst user = await forgeSQL.execute(\"SELECT * FROM users WHERE id = ?\", [1]);\n\n// Using forgeSQL.executeCacheable() - Execute raw SQL with local and global caching\n// ⚠️ IMPORTANT: When using executeCacheable(), all table names in SQL queries must be wrapped with backticks (`)\n// Example: SELECT * FROM `users` WHERE id = ? 
(NOT: SELECT * FROM users WHERE id = ?)\nconst user = await forgeSQL.executeCacheable(\"SELECT * FROM `users` WHERE id = ?\", [1], 300);\n\n// Using forgeSQL.getDrizzleQueryBuilder()\nconst user = await forgeSQL.getDrizzleQueryBuilder().select().from(Users).where(eq(Users.id, 1));\n\n// OR using direct drizzle with custom driver\nconst db = drizzle(forgeDriver);\nconst user = await db.select().from(Users).where(eq(Users.id, 1));\n// Returns: { id: 1, name: \"John Doe\" }\n\n// Using executeQueryOnlyOne for single result with error handling\nconst user = await forgeSQL\n  .fetch()\n  .executeQueryOnlyOne(\n    forgeSQL.getDrizzleQueryBuilder().select().from(Users).where(eq(Users.id, 1)),\n  );\n// Returns: { id: 1, name: \"John Doe\" }\n// Throws error if multiple records found\n// Returns undefined if no records found\n\n// Using with aliases\n// With forgeSQL\nconst usersAlias = alias(Users, \"u\");\nconst result = await forgeSQL\n  .getDrizzleQueryBuilder()\n  .select({\n    userId: sql\u003cstring\u003e`${usersAlias.id} as \\`userId\\``,\n    userName: sql\u003cstring\u003e`${usersAlias.name} as \\`userName\\``,\n  })\n  .from(usersAlias);\n\n// OR with direct drizzle\nconst db = drizzle(forgeDriver);\nconst result = await db\n  .select({\n    userId: sql\u003cstring\u003e`${usersAlias.id} as \\`userId\\``,\n    userName: sql\u003cstring\u003e`${usersAlias.name} as \\`userName\\``,\n  })\n  .from(usersAlias);\n// Returns: { userId: 1, userName: \"John Doe\" }\n```\n\n### Complex Queries\n\n```js\n// Using joins with automatic field name collision prevention\n// With forgeSQL\nconst orderWithUser = await forgeSQL\n  .select({ user: users, order: orders })\n  .from(orders)\n  .innerJoin(users, eq(orders.userId, users.id));\n\n// Using new selectFrom methods with joins\nconst orderWithUser = await forgeSQL\n  .selectFrom(orders)\n  .innerJoin(users, eq(orders.userId, users.id))\n  .where(eq(orders.id, 1));\n\n// Using selectCacheableFrom with joins 
and caching\nconst orderWithUser = await forgeSQL\n  .selectCacheableFrom(orders)\n  .innerJoin(users, eq(orders.userId, users.id))\n  .where(eq(orders.id, 1));\n\n// Using with() for Common Table Expressions (CTEs)\nconst userStats = await forgeSQL\n  .with(\n    forgeSQL.selectFrom(users).where(eq(users.active, true)).as(\"activeUsers\"),\n    forgeSQL.selectFrom(orders).where(eq(orders.status, \"completed\")).as(\"completedOrders\"),\n  )\n  .select({\n    totalActiveUsers: sql`COUNT(au.id)`,\n    totalCompletedOrders: sql`COUNT(co.id)`,\n  })\n  .from(sql`activeUsers au`)\n  .leftJoin(sql`completedOrders co`, eq(sql`au.id`, sql`co.userId`));\n\n// OR with direct drizzle\nconst db = patchDbWithSelectAliased(drizzle(forgeDriver));\nconst orderWithUser = await db\n  .selectAliased({ user: users, order: orders })\n  .from(orders)\n  .innerJoin(users, eq(orders.userId, users.id));\n// Returns: {\n//   user_id: 1,\n//   user_name: \"John Doe\",\n//   order_id: 1,\n//   order_product: \"Product 1\"\n// }\n\n// Using distinct with aliases\nconst uniqueUsers = await db.selectAliasedDistinct({ user: users }).from(users);\n// Returns unique users with aliased fields\n\n// Using executeQueryOnlyOne for unique results\nconst userStats = await forgeSQL.fetch().executeQueryOnlyOne(\n  forgeSQL\n    .getDrizzleQueryBuilder()\n    .select({\n      totalUsers: sql`COUNT(*) as \\`totalUsers\\``,\n      uniqueNames: sql`COUNT(DISTINCT name) as \\`uniqueNames\\``,\n    })\n    .from(Users),\n);\n// Returns: { totalUsers: 100, uniqueNames: 80 }\n// Throws error if multiple records found\n```\n\n### Raw SQL Queries\n\n```js\n// Using executeRawSQL for direct SQL queries\nconst users = await forgeSQL\n  .fetch()\n  .executeRawSQL\u003cUsers\u003e(\"SELECT * FROM users\");\n\n// Using execute() for raw SQL with local caching\nconst users = await forgeSQL\n  .execute(\"SELECT * FROM users WHERE active = ?\", [true]);\n\n// Using executeCacheable() for raw SQL with local and global 
caching\n// ⚠️ IMPORTANT: When using executeCacheable(), all table names in SQL queries must be wrapped with backticks (`)\n// Example: SELECT * FROM `users` WHERE active = ? (NOT: SELECT * FROM users WHERE active = ?)\nconst users = await forgeSQL\n  .executeCacheable("SELECT * FROM `users` WHERE active = ?", [true], 300);\n\n// Using executeWithMetadata() for capturing execution metrics and performance monitoring\nconst usersWithMetadata = await forgeSQL.executeWithMetadata(\n  async () =\u003e {\n    const users = await forgeSQL.selectFrom(usersTable);\n    const orders = await forgeSQL.selectFrom(ordersTable).where(eq(ordersTable.userId, usersTable.id));\n    return { users, orders };\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    const threshold = 500; // ms baseline for this resolver\n\n    if (totalDbExecutionTime \u003e threshold * 1.5) {\n      console.warn(`[Performance Warning] Resolver exceeded DB time: ${totalDbExecutionTime} ms`);\n      await printQueriesWithPlan(); // Analyze and print query execution plans\n    } else if (totalDbExecutionTime \u003e threshold) {\n      console.debug(`[Performance Debug] High DB time: ${totalDbExecutionTime} ms`);\n    }\n\n    console.log(`DB response size: ${totalResponseSize} bytes`);\n  },\n  {\n    // Optional: Configure query plan printing\n    mode: 'TopSlowest', // Print top slowest queries (default)\n    topQueries: 1, // Print top slowest query\n  },\n);\n\n// Using executeDDL() for DDL operations (CREATE, ALTER, DROP, etc.)\nawait forgeSQL.executeDDL(`\n  CREATE TABLE users (\n    id INT PRIMARY KEY AUTO_INCREMENT,\n    name VARCHAR(255) NOT NULL,\n    email VARCHAR(255) UNIQUE\n  )\n`);\n\nawait forgeSQL.executeDDL(sql`\n  ALTER TABLE users\n  ADD COLUMN created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP\n`);\n\nawait forgeSQL.executeDDL("DROP TABLE IF EXISTS old_users");\n\n// Using executeDDLActions() for executing regular SQL queries in DDL context\n// This 
method executes a series of actions within a DDL operation context for monitoring\nawait forgeSQL.executeDDLActions(async () =\u003e {\n  // Execute regular SQL queries in DDL context for performance monitoring\n  const slowQueries = await forgeSQL.execute(`\n    SELECT * FROM INFORMATION_SCHEMA.STATEMENTS_SUMMARY\n    WHERE AVG_LATENCY \u003e 1000000\n  `);\n\n  // Execute complex analysis queries in DDL context\n  const performanceData = await forgeSQL.execute(`\n    SELECT * FROM INFORMATION_SCHEMA.CLUSTER_STATEMENTS_SUMMARY_HISTORY\n    WHERE SUMMARY_END_TIME \u003e DATE_SUB(NOW(), INTERVAL 1 HOUR)\n  `);\n\n  return { slowQueries, performanceData };\n});\n\n// Using execute() with complex queries\nconst userStats = await forgeSQL\n  .execute(`\n    SELECT\n      u.id,\n      u.name,\n      COUNT(o.id) as order_count,\n      SUM(o.amount) as total_amount\n    FROM users u\n    LEFT JOIN orders o ON u.id = o.user_id\n    WHERE u.active = ?\n    GROUP BY u.id, u.name\n  `, [true]);\n```\n\n## Modify Operations\n\nForge-SQL-ORM provides multiple approaches for Modify operations, each with different characteristics:\n\n### 1. Basic Drizzle Operations (Cache Context Aware)\n\nThese operations work like standard Drizzle methods but participate in cache context when used within `executeWithCacheContext()`:\n\n```js\n// Basic insert (participates in cache context when used within executeWithCacheContext)\nawait forgeSQL.insert(Users).values({ id: 1, name: \"Smith\" });\n\n// Basic update (participates in cache context when used within executeWithCacheContext)\nawait forgeSQL.update(Users).set({ name: \"Smith Updated\" }).where(eq(Users.id, 1));\n\n// Basic delete (participates in cache context when used within executeWithCacheContext)\nawait forgeSQL.delete(Users).where(eq(Users.id, 1));\n```\n\n### 2. 
Non-Versioned Operations with Cache Management\n\nThese operations don't use optimistic locking but provide cache invalidation:\n\n```js\n// Insert without versioning but with cache invalidation\nawait forgeSQL.insertAndEvictCache(Users).values({ id: 1, name: \"Smith\" });\n\n// Update without versioning but with cache invalidation\nawait forgeSQL.updateAndEvictCache(Users).set({ name: \"Smith Updated\" }).where(eq(Users.id, 1));\n\n// Delete without versioning but with cache invalidation\nawait forgeSQL.deleteAndEvictCache(Users).where(eq(Users.id, 1));\n```\n\n### 3. Versioned Operations with Cache Management (Recommended)\n\nThese operations use optimistic locking and automatic cache invalidation:\n\n```js\n// Insert with versioning and cache management\nconst userId = await forgeSQL\n  .modifyWithVersioningAndEvictCache()\n  .insert(Users, [{ id: 1, name: \"Smith\" }]);\n\n// Bulk insert with versioning\nawait forgeSQL.modifyWithVersioningAndEvictCache().insert(Users, [\n  { id: 2, name: \"Smith\" },\n  { id: 3, name: \"Vasyl\" },\n]);\n\n// Update by ID with optimistic locking and cache invalidation\nawait forgeSQL\n  .modifyWithVersioningAndEvictCache()\n  .updateById({ id: 1, name: \"Smith Updated\" }, Users);\n\n// Delete by ID with versioning and cache invalidation\nawait forgeSQL.modifyWithVersioningAndEvictCache().deleteById(1, Users);\n```\n\n### 4. Versioned Operations without Cache Management\n\nThese operations use optimistic locking but don't manage cache:\n\n```js\n// Insert with versioning only (no cache management)\nconst userId = await forgeSQL.modifyWithVersioning().insert(Users, [{ id: 1, name: \"Smith\" }]);\n\n// Update with versioning only\nawait forgeSQL.modifyWithVersioning().updateById({ id: 1, name: \"Smith Updated\" }, Users);\n\n// Delete with versioning only\nawait forgeSQL.modifyWithVersioning().deleteById(1, Users);\n```\n\n### 5. 
Legacy Modify Operations (Removed in 2.1.x)\n\n⚠️ **BREAKING CHANGE**: The `crud()` and `modify()` methods have been completely removed in version 2.1.x.\n\n```js\n// ❌ These methods no longer exist in 2.1.x\n// const userId = await forgeSQL.crud().insert(Users, [{ id: 1, name: \"Smith\" }]);\n// await forgeSQL.crud().updateById({ id: 1, name: \"Smith Updated\" }, Users);\n// await forgeSQL.crud().deleteById(1, Users);\n\n// ✅ Use the new methods instead\nconst userId = await forgeSQL.modifyWithVersioning().insert(Users, [{ id: 1, name: \"Smith\" }]);\nawait forgeSQL.modifyWithVersioning().updateById({ id: 1, name: \"Smith Updated\" }, Users);\nawait forgeSQL.modifyWithVersioning().deleteById(1, Users);\n```\n\n### Advanced Operations\n\n```js\n// Insert with sequence (nextVal)\nimport { nextVal } from \"forge-sql-orm\";\n\nconst user = {\n  id: nextVal(\"user_id_seq\"),\n  name: \"user test\",\n  organization_id: 1,\n};\nconst id = await forgeSQL.modifyWithVersioning().insert(appUser, [user]);\n\n// Update with custom WHERE condition\nawait forgeSQL\n  .modifyWithVersioning()\n  .updateFields({ name: \"New Name\", age: 35 }, Users, eq(Users.email, \"smith@example.com\"));\n\n// Insert with duplicate handling\nawait forgeSQL.modifyWithVersioning().insert(\n  Users,\n  [\n    { id: 4, name: \"Smith\" },\n    { id: 4, name: \"Vasyl\" },\n  ],\n  true,\n);\n```\n\n## SQL Utilities\n\n### formatLimitOffset\n\nThe `formatLimitOffset` utility function is used to safely insert numeric values directly into SQL queries for LIMIT and OFFSET clauses. 
This is necessary because Atlassian Forge SQL doesn't support parameterized queries for these clauses.\n\n```typescript\nimport { formatLimitOffset } from \"forge-sql-orm\";\n\n// Example usage in a query\nconst result = await forgeSQL\n  .select()\n  .from(orderItem)\n  .orderBy(asc(orderItem.createdAt))\n  .limit(formatLimitOffset(10))\n  .offset(formatLimitOffset(350000));\n\n// The generated SQL will be:\n// SELECT * FROM order_item\n// ORDER BY created_at ASC\n// LIMIT 10\n// OFFSET 350000\n```\n\n**Important Notes:**\n\n- The function performs type checking to prevent SQL injection\n- It throws an error if the input is not a valid number\n- Use this function instead of direct parameter binding for LIMIT and OFFSET clauses\n- The function is specifically designed to work with Atlassian Forge SQL's limitations\n\n**Security Considerations:**\n\n- The function includes validation to ensure the input is a valid number\n- This prevents SQL injection by ensuring only numeric values are inserted\n- Always use this function instead of string concatenation for LIMIT and OFFSET values\n\n## Global Cache System (Level 2)\n\n[↑ Back to Top](#table-of-contents)\n\nForge-SQL-ORM includes a sophisticated global caching system that provides **cross-invocation caching** - the ability to share cached data between different resolver invocations. 
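Cross-invocation sharing presupposes a deterministic cache key per query. As an illustrative sketch (assumed for exposition, not the library's actual implementation), such a key can be derived by hashing the normalized SQL text together with its bound parameters:

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper (not part of forge-sql-orm): derive a stable cache key
// so that identical queries map to the same storage entry across invocations.
function cacheKey(sqlText: string, params: unknown[]): string {
  const digest = createHash("md5")
    .update(sqlText.trim().toLowerCase()) // normalize whitespace and case
    .update(JSON.stringify(params)) // different bindings → different keys
    .digest("hex");
  return `CachedQuery_${digest}`;
}

console.log(cacheKey("SELECT * FROM `users` WHERE active = ?", [true]));
```

Any two invocations issuing the same normalized SQL with the same parameters would then read and write the same cache entry.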
The global cache system is built on top of [@forge/kvs Custom entity store](https://developer.atlassian.com/platform/forge/storage-reference/storage-api-custom-entities/) and provides persistent cross-invocation caching with automatic serialization/deserialization of complex data structures.\n\n### Cache Levels Overview\n\nForge-SQL-ORM implements a two-level caching architecture:\n\n- **Level 1 (Local Cache)**: In-memory caching within a single resolver invocation scope\n- **Level 2 (Global Cache)**: Cross-invocation persistent caching using KVS storage\n\nThis multi-level approach provides optimal performance by checking the fastest cache first, then falling back to cross-invocation persistent storage.\n\n### Cache Configuration\n\nThe caching system uses Atlassian Forge's Custom entity store to persist cache data. Each cache entry is stored as a custom entity with TTL management via Forge KVS. Note that expired data deletion is asynchronous and may take up to 48 hours. If cache growth impacts INSERT/UPDATE performance, use the Clear Cache Scheduler Trigger for proactive cleanup.\n\n```typescript\nconst options = {\n  cacheEntityName: \"cache\", // KVS Custom entity name for cache storage\n  cacheTTL: 300, // Default cache TTL in seconds (5 minutes)\n  cacheWrapTable: true, // Wrap table names with backticks in cache keys\n  additionalMetadata: {\n    users: {\n      tableName: \"users\",\n      versionField: {\n        fieldName: \"updatedAt\",\n      },\n    },\n  },\n};\n\nconst forgeSQL = new ForgeSQL(options);\n```\n\n### How Caching Works with @forge/kvs\n\nThe caching system leverages Forge's Custom entity store to provide:\n\n- **Persistent Storage**: Cache data survives app restarts and deployments\n- **TTL Support**: Uses Forge KVS TTL feature for expiration (deletion is asynchronous, may take up to 48 hours)\n- **Efficient Retrieval**: Fast key-based lookups using Forge's optimized storage\n- **Data Serialization**: Automatic handling of complex 
objects and query results\n- **Batch Operations**: Efficient bulk cache operations for better performance\n\n```typescript\n// Cache entries are stored as custom entities in Forge's KVS\n// Example cache key structure:\n// Key: \"CachedQuery_8d74bdd9d85064b72fb2ee072ca948e5\"\n// Value: { data: [...], expiration: 1234567890, sql: \"select * from 1\" }\n```\n\n### Cache Context Operations\n\nThe cache context allows you to batch cache invalidation events and bypass cache reads for affected tables:\n\n```typescript\n// Execute operations within a cache context\nawait forgeSQL.executeWithCacheContext(async () =\u003e {\n  // All cache invalidation events are collected and executed in batch\n  await forgeSQL.modifyWithVersioningAndEvictCache().insert(Users, [userData]);\n  await forgeSQL.modifyWithVersioningAndEvictCache().updateById(updateData, Users);\n  // Cache is cleared only once at the end for all affected tables\n});\n\n// Execute with return value\nconst result = await forgeSQL.executeWithCacheContextAndReturnValue(async () =\u003e {\n  const user = await forgeSQL.modifyWithVersioningAndEvictCache().insert(Users, [userData]);\n  return user;\n});\n\n// Basic operations also participate in cache context\nawait forgeSQL.executeWithCacheContext(async () =\u003e {\n  // These operations will participate in batch cache clearing\n  await forgeSQL.insert(Users).values(userData);\n  await forgeSQL.update(Users).set(updateData).where(eq(Users.id, 1));\n  await forgeSQL.delete(Users).where(eq(Users.id, 1));\n  // Cache is cleared only once at the end for all affected tables\n});\n```\n\n### Local Cache Operations (Level 1)\n\nForge-SQL-ORM provides a local cache system (Level 1 cache) that stores query results in memory for the duration of a single resolver invocation. 
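As a mental model for this invocation-scoped behavior (an illustrative sketch, not the library's internals; `withLocalCache` and `cachedQuery` are hypothetical names), such a cache can be built on Node.js `AsyncLocalStorage`: a `Map` created for one invocation memoizes query results and vanishes when the invocation completes.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// One Map per invocation; it is unreachable once the callback settles.
const store = new AsyncLocalStorage<Map<string, unknown>>();

function withLocalCache<T>(fn: () => Promise<T>): Promise<T> {
  return store.run(new Map(), fn);
}

// Memoize by SQL text: only the first call per invocation runs the query.
async function cachedQuery<T>(sqlText: string, run: () => Promise<T>): Promise<T> {
  const cache = store.getStore();
  if (!cache) return run(); // outside any invocation context: no caching
  if (!cache.has(sqlText)) cache.set(sqlText, await run());
  return cache.get(sqlText) as T;
}

// The second identical query is served from memory.
let dbCalls = 0;
withLocalCache(async () => {
  await cachedQuery("SELECT 1", async () => { dbCalls++; return [1]; });
  await cachedQuery("SELECT 1", async () => { dbCalls++; return [1]; });
}).then(() => console.log(dbCalls)); // 1
```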
This is particularly useful for optimizing repeated queries within the same execution context (resolver invocation).\n\n#### What is Local Cache?\n\nLocal cache is an in-memory caching layer that operates within a single resolver invocation scope. Unlike the global KVS cache, local cache:\n\n- **Stores data in memory** using Node.js `AsyncLocalStorage`\n- **Automatically clears** when the invocation completes (resolver call)\n- **Provides instant access** to previously executed queries within a resolver invocation\n- **Reduces database load** for repeated operations within the same invocation\n- **Works alongside** the global KVS cache system\n\n#### Key Features of Local Cache\n\n- **In-Memory Storage**: Query results are cached in memory using Node.js `AsyncLocalStorage`\n- **Invocation-Scoped**: Cache is automatically cleared when the invocation completes\n- **Automatic Eviction**: Cache is cleared when insert/update/delete operations are performed\n- **No Persistence**: Data is not stored between invocations (unlike global KVS cache)\n- **Performance Optimization**: Reduces database queries for repeated operations\n- **Simple Configuration**: Works out of the box with minimal setup\n\n#### Usage Examples\n\n##### Basic Local Cache Usage\n\n```typescript\n// Execute operations within a local cache context\nawait forgeSQL.executeWithLocalContext(async () =\u003e {\n  // First call - executes query and caches result\n  const activeUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // Second call - gets result from local cache (no database query)\n  const cachedUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // Using new selectFrom methods with local caching\n  const usersFrom = await forgeSQL.selectFrom(users).where(eq(users.active, true));\n\n  // This will use local cache (no database call)\n  const 
cachedUsersFrom = await forgeSQL.selectFrom(users).where(eq(users.active, true));\n\n  // Using execute() with local caching\n  const rawUsers = await forgeSQL.execute("SELECT id, name FROM users WHERE active = ?", [true]);\n\n  // This will use local cache (no database call)\n  const cachedRawUsers = await forgeSQL.execute("SELECT id, name FROM users WHERE active = ?", [\n    true,\n  ]);\n\n  // Raw SQL with execution metadata and performance monitoring\n  const usersWithMetadata = await forgeSQL.executeWithMetadata(\n    async () =\u003e {\n      const users = await forgeSQL.selectFrom(usersTable);\n      const orders = await forgeSQL\n        .selectFrom(ordersTable)\n        .where(eq(ordersTable.userId, usersTable.id));\n      return { users, orders };\n    },\n    async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n      const threshold = 500; // ms baseline for this resolver\n\n      if (totalDbExecutionTime \u003e threshold * 1.5) {\n        console.warn(`[Performance Warning] Resolver exceeded DB time: ${totalDbExecutionTime} ms`);\n        await printQueriesWithPlan(); // Analyze and print query execution plans\n      } else if (totalDbExecutionTime \u003e threshold) {\n        console.debug(`[Performance Debug] High DB time: ${totalDbExecutionTime} ms`);\n      }\n\n      console.log(`DB response size: ${totalResponseSize} bytes`);\n    },\n    {\n      // Optional: Configure query plan printing\n      topQueries: 1, // Print top slowest query (default)\n      mode: "TopSlowest", // Print top slowest queries (default)\n    },\n  );\n\n  // Insert operation - evicts local cache for users table\n  await forgeSQL.insert(users).values({ name: "New User", active: true });\n\n  // Third call - executes query again and caches new result\n  const updatedUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n});\n\n// Execute with return value\nconst result = 
await forgeSQL.executeWithLocalCacheContextAndReturnValue(async () =\u003e {\n  // First call - executes query and caches result\n  const activeUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // Second call - gets result from local cache (no database query)\n  const cachedUsers = await forgeSQL\n    .select({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  return { users: activeUsers, cachedUsers };\n});\n```\n\n##### Real-World Resolver Example\n\n```typescript\n// Atlassian Forge resolver with local cache optimization\nconst userResolver = async (req) =\u003e {\n  const args = req.payload; // resolver arguments arrive in the request payload\n  return await forgeSQL.executeWithLocalCacheContextAndReturnValue(async () =\u003e {\n    // Get user details using selectFrom (all columns with field aliasing)\n    const user = await forgeSQL.selectFrom(users).where(eq(users.id, args.userId));\n\n    // Get user's orders using selectCacheableFrom (with caching)\n    const userOrders = await forgeSQL.selectCacheableFrom(orders).where(eq(orders.userId, args.userId));\n\n    // Get user's profile using raw SQL with execute()\n    const profile = await forgeSQL.execute(\n      "SELECT id, bio, avatar FROM profiles WHERE user_id = ?",\n      [args.userId],\n    );\n\n    // Get user statistics using complex raw SQL\n    const stats = await forgeSQL.execute(\n      `\n      SELECT \n        COUNT(o.id) as total_orders,\n        SUM(o.amount) as total_spent,\n        AVG(o.amount) as avg_order_value\n      FROM orders o \n      WHERE o.user_id = ? AND o.status = 'completed'\n    `,\n      [args.userId],\n    );\n\n    // If any of these queries are repeated within the same resolver,\n    // they will use the local cache instead of hitting the database\n\n    return {\n      ...user[0],\n      orders: userOrders,\n      profile: profile[0],\n      stats: stats[0],\n    };\n  });\n};\n```\n\n#### Local Cache (Level 1) vs Global Cache (Level 2)\n\n| Feature            | Local Cache (Level 1)                 | Global Cache (Level 2)                                              |\n| ------------------ | ------------------------------------- | ------------------------------------------------------------------- |\n| **Storage**        | In-memory (Node.js process)           | Persistent (KVS Custom Entities)                                    |\n| **Scope**          | Single Forge invocation               | Cross-invocation (between calls)                                    |\n| **Persistence**    | No (cleared on invocation end)        | Yes (survives app redeploy)                                         |\n| **Performance**    | Very fast (memory access)             | Fast (KVS optimized storage)                                        |\n| **Memory Usage**   | Low (invocation-scoped)               | Higher (persistent storage)                                         |\n| **Use Case**       | Invocation optimization               | Cross-invocation data sharing                                       |\n| **Configuration**  | None required                         | Requires KVS setup                                                  |\n| **TTL Support**    | No (invocation-scoped)                | Yes (TTL via Forge KVS, async deletion up to 48h)                   |\n| **Cache Eviction** | Automatic on DML operations           | Manual or scheduled cleanup (optional if cache impacts performance) |\n| **Best For**       | Repeated queries in single invocation | Frequently accessed data across invocations                         
|\n\n#### Integration with Global Cache (Level 2)\n\nLocal cache (Level 1) works alongside the global cache (Level 2) system:\n\n```typescript\n// Multi-level cache checking: Level 1 → Level 2 → Database\nawait forgeSQL.executeWithLocalContext(async () =\u003e {\n  // This will check:\n  // 1. Local cache (Level 1 - in-memory)\n  // 2. Global cache (Level 2 - KVS)\n  // 3. Database query\n  const activeUsers = await forgeSQL\n    .selectCacheable({ id: users.id, name: users.name })\n    .from(users)\n    .where(eq(users.active, true));\n\n  // Using new methods with multi-level caching\n  const usersFrom = await forgeSQL.selectCacheableFrom(users).where(eq(users.active, true));\n\n  // Raw SQL with multi-level caching\n  // ⚠️ IMPORTANT: When using executeCacheable(), all table names must be wrapped with backticks (`)\n  const rawUsers = await forgeSQL.executeCacheable(\n    "SELECT id, name FROM `users` WHERE active = ?",\n    [true],\n    300, // TTL in seconds\n  );\n});\n```\n\n#### Local Cache Flow Diagram\n\nThe diagram below shows how local cache works in Forge-SQL-ORM:\n\n1. **Request Start**: Local cache context is initialized with empty cache\n2. **First Query**: Cache miss → Global cache miss → Database query → Save to local cache\n3. **Repeated Query**: Cache hit → Return cached result (no database call)\n4. **Data Modification**: Insert/Update/Delete → Evict local cache for affected table\n5. **Query After Modification**: Cache miss (was evicted) → Database query → Save to local cache\n6. 
**Request End**: Local cache context is destroyed, all data cleared\n\n![Local Cache Flow](img/localCacheFlow.txt)\n\n### Cache-Aware Query Operations\n\n```typescript\n// Execute queries with caching\nconst users = await forgeSQL.modifyWithVersioningAndEvictCache().executeQuery(\n  forgeSQL.select().from(Users).where(eq(Users.active, true)),\n  600, // Custom TTL in seconds\n);\n\n// Execute single result queries with caching\nconst user = await forgeSQL\n  .modifyWithVersioningAndEvictCache()\n  .executeQueryOnlyOne(forgeSQL.select().from(Users).where(eq(Users.id, 1)));\n\n// Execute raw SQL with caching\nconst results = await forgeSQL.modifyWithVersioningAndEvictCache().executeRawSQL(\n  \"SELECT * FROM users WHERE active = ?\",\n  [true],\n  300, // TTL in seconds\n);\n\n// Using new methods for cache-aware operations\nconst usersFrom = await forgeSQL.selectCacheableFrom(Users).where(eq(Users.active, true));\n\nconst usersDistinct = await forgeSQL\n  .selectDistinctCacheableFrom(Users)\n  .where(eq(Users.active, true));\n\n// Raw SQL with local and global caching\n// ⚠️ IMPORTANT: When using executeCacheable(), all table names must be wrapped with backticks (`)\nconst rawUsers = await forgeSQL.executeCacheable(\n  \"SELECT * FROM `users` WHERE active = ?\",\n  [true],\n  300, // TTL in seconds\n);\n\n// Using with() for Common Table Expressions with caching\nconst userStats = await forgeSQL\n  .with(\n    forgeSQL.selectFrom(users).where(eq(users.active, true)).as(\"activeUsers\"),\n    forgeSQL.selectFrom(orders).where(eq(orders.status, \"completed\")).as(\"completedOrders\"),\n  )\n  .select({\n    totalActiveUsers: sql`COUNT(au.id)`,\n    totalCompletedOrders: sql`COUNT(co.id)`,\n  })\n  .from(sql`activeUsers au`)\n  .leftJoin(sql`completedOrders co`, eq(sql`au.id`, sql`co.userId`));\n\n// Using executeWithMetadata() for capturing execution metrics with performance monitoring\nconst usersWithMetadata = await forgeSQL.executeWithMetadata(\n  async () =\u003e 
{\n    const users = await forgeSQL.selectFrom(usersTable);\n    const orders = await forgeSQL\n      .selectFrom(ordersTable)\n      .where(eq(ordersTable.userId, usersTable.id));\n    return { users, orders };\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    const threshold = 500; // ms baseline for this resolver\n\n    if (totalDbExecutionTime \u003e threshold * 1.5) {\n      console.warn(`[Performance Warning] Resolver exceeded DB time: ${totalDbExecutionTime} ms`);\n      await printQueriesWithPlan(); // Analyze and print query execution plans\n    } else if (totalDbExecutionTime \u003e threshold) {\n      console.debug(`[Performance Debug] High DB time: ${totalDbExecutionTime} ms`);\n    }\n\n    console.log(`DB response size: ${totalResponseSize} bytes`);\n  },\n  {\n    // Optional: Configure query plan printing\n    mode: "TopSlowest", // Print top slowest queries (default)\n    topQueries: 1, // Print top slowest query\n  },\n);\n```\n\n### Manual Cache Management\n\n```typescript\n// Clear cache for specific tables\nawait forgeSQL.modifyWithVersioningAndEvictCache().evictCache(["users", "orders"]);\n\n// Clear cache for specific entities\nawait forgeSQL.modifyWithVersioningAndEvictCache().evictCacheEntities([Users, Orders]);\n```\n\n## Optimistic Locking\n\n[↑ Back to Top](#table-of-contents)\n\nOptimistic locking is a concurrency control mechanism that prevents data conflicts when multiple transactions attempt to update the same record concurrently. 
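Conceptually this is a compare-and-swap on a version column: the UPDATE's WHERE clause matches the version the writer last read, so a stale write affects zero rows. A minimal in-memory sketch of those semantics (illustrative names, not the forge-sql-orm API):

```typescript
// Illustrative sketch of optimistic locking semantics (not forge-sql-orm API).
type Row = { id: number; name: string; version: number };

// Simulates: UPDATE users SET name = ?, version = version + 1
//            WHERE id = ? AND version = ?  → the affected-row count tells us who won
function updateWithVersion(rows: Row[], id: number, expectedVersion: number, name: string): boolean {
  const row = rows.find((r) => r.id === id && r.version === expectedVersion);
  if (!row) return false; // version moved on: a concurrent writer got there first
  row.name = name;
  row.version += 1;
  return true;
}

const table: Row[] = [{ id: 1, name: "Smith", version: 1 }];
console.log(updateWithVersion(table, 1, 1, "Smith Updated")); // true (version is now 2)
console.log(updateWithVersion(table, 1, 1, "Stale write")); // false: caller must re-read and retry
```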
Instead of using locks, this technique relies on a version field in your entity models.\n\n### Supported Version Field Types\n\n- `datetime` - Timestamp-based versioning\n- `timestamp` - Timestamp-based versioning\n- `integer` - Numeric version increment\n- `decimal` - Numeric version increment\n\n### Configuration\n\n```typescript\nconst options = {\n  additionalMetadata: {\n    users: {\n      tableName: "users",\n      versionField: {\n        fieldName: "updatedAt",\n      },\n    },\n  },\n};\n\nconst forgeSQL = new ForgeSQL(options);\n```\n\n### Example Usage\n\n```typescript\n// The version field will be automatically handled\nawait forgeSQL.modifyWithVersioning().updateById(\n  {\n    id: 1,\n    name: "Updated Name",\n    updatedAt: new Date(), // Will be automatically set if not provided\n  },\n  Users,\n);\n```\n\nOr, with cache support:\n\n```typescript\n// The version field will be automatically handled\nawait forgeSQL.modifyWithVersioningAndEvictCache().updateById(\n  {\n    id: 1,\n    name: "Updated Name",\n    updatedAt: new Date(), // Will be automatically set if not provided\n  },\n  Users,\n);\n```\n\n## Rovo Integration\n\n[↑ Back to Top](#table-of-contents)\n\nThe Rovo integration provides a secure pattern for natural-language analytics in Forge apps. 
It enables safe execution of dynamic SQL queries with comprehensive security validations, making it ideal for AI-powered analytics features where users can query data using natural language.\n\n**📖 Real-World Example**: See [Forge-Secure-Notes-for-Jira](https://github.com/vzakharchenko/Forge-Secure-Notes-for-Jira) for a complete implementation of Rovo AI agent with secure natural-language analytics.\n\n### Key Features\n\n- **Security-First Design**: Multiple layers of security validations to prevent SQL injection and unauthorized data access\n- **Single Table Isolation**: Queries are restricted to a single table to prevent cross-table data access\n- **Row-Level Security (RLS)**: Built-in support for data isolation based on user context\n- **Comprehensive Validation**: Blocks JOINs, subqueries, window functions, and other potentially unsafe operations\n- **Post-Execution Validation**: Verifies query results to ensure security fields are present and come from the correct table\n- **Type-Safe Configuration**: Uses Drizzle ORM table objects for type-safe column references\n\n### Security Validations\n\nRovo performs multiple security checks before and after query execution:\n\n1. **Query Type Validation**: Only SELECT queries are allowed\n2. **Table Restriction**: Queries must target only the specified table\n3. **JOIN Detection**: JOINs are blocked using EXPLAIN analysis\n4. **Subquery Detection**: Scalar subqueries in SELECT columns are blocked\n5. **Window Function Detection**: Window functions are blocked for security\n6. **Execution Plan Validation**: Verifies that only the expected table is accessed\n7. **RLS Field Validation**: Ensures required security fields are present in results\n8. 
**Post-Execution Validation**: Verifies all fields come from the correct table\n\n### Basic Usage\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\n\nconst forgeSQL = new ForgeSQL();\n\n// Get Rovo instance\nconst rovo = forgeSQL.rovo();\n\n// Create settings builder using Drizzle table object\nconst settings = await rovo\n  .rovoSettingBuilder(usersTable, accountId)\n  .addContextParameter(\":currentUserId\", accountId)\n  .useRLS()\n  .addRlsColumn(usersTable.id)\n  .addRlsWherePart((alias) =\u003e `${alias}.${usersTable.id.name} = '${accountId}'`)\n  .finish()\n  .build();\n\n// Execute dynamic SQL query\nconst result = await rovo.dynamicIsolatedQuery(\n  \"SELECT id, name FROM users WHERE status = 'active' AND userId = :currentUserId\",\n  settings,\n);\n\nconsole.log(result.rows); // Query results\nconsole.log(result.metadata); // Query metadata\n```\n\n### Row-Level Security (RLS) Configuration\n\nRLS allows you to filter data based on user context, ensuring users can only access their own data:\n\n```typescript\nconst rovo = forgeSQL.rovo();\n\n// Configure RLS with conditional activation and multiple security fields\nconst settings = await rovo\n  .rovoSettingBuilder(securityNotesTable, accountId)\n  .addContextParameter(\":currentUserId\", accountId)\n  .addContextParameter(\":currentProjectKey\", projectKey)\n  .addContextParameter(\":currentIssueKey\", issueKey)\n  .useRLS()\n  .addRlsCondition(async () =\u003e {\n    // Conditionally enable RLS based on user role\n    const userService = getUserService();\n    return !(await userService.isAdmin()); // Only apply RLS for non-admin users\n  })\n  .addRlsColumn(securityNotesTable.createdBy) // Required field for RLS validation\n  .addRlsColumn(securityNotesTable.targetUserId) // Additional security field\n  .addRlsWherePart(\n    (alias) =\u003e\n      `${alias}.${securityNotesTable.createdBy.name} = '${accountId}' OR ${alias}.${securityNotesTable.targetUserId.name} = '${accountId}'`,\n  ) // RLS 
filter with OR condition\n  .finish()\n  .build();\n\n// The query will automatically be wrapped with RLS filtering:\n// SELECT * FROM (original_query) AS t WHERE (t.createdBy = 'accountId' OR t.targetUserId = 'accountId')\n```\n\n### Context Parameters\n\nYou can use context parameters for query substitution. Parameters use the `:parameterName` format (colon prefix, not double braces):\n\n```typescript\nconst rovo = forgeSQL.rovo();\n\nconst settings = await rovo\n  .rovoSettingBuilder(usersTable, accountId)\n  .addContextParameter(\":currentUserId\", accountId)\n  .addContextParameter(\":projectKey\", \"PROJ-123\")\n  .addContextParameter(\":status\", \"active\")\n  .useRLS()\n  .addRlsColumn(usersTable.id)\n  .addRlsWherePart((alias) =\u003e `${alias}.${usersTable.userId.name} = '${accountId}'`)\n  .finish()\n  .build();\n\n// In the SQL query, parameters are replaced:\nconst result = await rovo.dynamicIsolatedQuery(\n  \"SELECT * FROM users WHERE projectKey = :projectKey AND status = :status AND userId = :currentUserId\",\n  settings,\n);\n// Becomes: SELECT * FROM users WHERE projectKey = 'PROJ-123' AND status = 'active' AND userId = 'accountId'\n```\n\n### Using Raw Table Names\n\nYou can use `rovoRawSettingBuilder` with raw table name string:\n\n```typescript\nconst rovo = forgeSQL.rovo();\n\n// Using rovoRawSettingBuilder with raw table name\nconst settings = await rovo\n  .rovoRawSettingBuilder(\"users\", accountId)\n  .addContextParameter(\":currentUserId\", accountId)\n  .useRLS()\n  .addRlsColumnName(\"id\")\n  .addRlsWherePart((alias) =\u003e `${alias}.id = '${accountId}'`)\n  .finish()\n  .build();\n\nconst result = await rovo.dynamicIsolatedQuery(\n  \"SELECT id, name FROM users WHERE status = 'active' AND userId = :currentUserId\",\n  settings,\n);\n```\n\n### Security Restrictions\n\nRovo blocks the following operations for security:\n\n- **Data Modification**: Only SELECT queries are allowed\n- **JOINs**: JOIN operations are detected and 
blocked\n- **Subqueries**: Scalar subqueries in SELECT columns are blocked\n- **Window Functions**: Window functions (e.g., `COUNT(*) OVER(...)`) are blocked\n- **Multiple Tables**: Queries referencing multiple tables are blocked\n- **Table Aliases**: Post-execution validation ensures fields come from the correct table\n\n### Error Handling\n\nRovo provides detailed error messages when security violations are detected:\n\n```typescript\ntry {\n  const result = await rovo.dynamicIsolatedQuery(\n    \"SELECT * FROM users u JOIN orders o ON u.id = o.userId\",\n    settings,\n  );\n} catch (error) {\n  // Error: \"Security violation: JOIN operations are not allowed...\"\n  console.error(error.message);\n}\n```\n\n### Example: Real-World Function Implementation\n\n\u003e **💡 Full Example**: See the complete implementation in [Forge-Secure-Notes-for-Jira](https://github.com/vzakharchenko/Forge-Secure-Notes-for-Jira) repository.\n\n```typescript\nimport ForgeSQL from \"forge-sql-orm\";\nimport { Result } from \"@forge/sql\";\n\nconst FORGE_SQL_ORM = new ForgeSQL();\n\nexport async function runSecurityNotesQuery(\n  event: {\n    sql: string;\n    context: {\n      jira: {\n        issueKey: string;\n        projectKey: string;\n      };\n    };\n  },\n  context: { principal: { accountId: string } },\n): Promise\u003cResult\u003cunknown\u003e\u003e {\n  const rovoIntegration = FORGE_SQL_ORM.rovo();\n  const accountId = context.principal.accountId;\n\n  const settings = await rovoIntegration\n    .rovoSettingBuilder(securityNotesTable, accountId)\n    .addContextParameter(\":currentUserId\", accountId)\n    .addContextParameter(\":currentProjectKey\", event.context?.jira?.projectKey ?? \"\")\n    .addContextParameter(\":currentIssueKey\", event.context?.jira?.issueKey ?? 
\"\")\n    .useRLS()\n    .addRlsCondition(async () =\u003e {\n      // Conditionally disable RLS for admin users\n      const userService = getUserService();\n      return !(await userService.isAdmin());\n    })\n    .addRlsColumn(securityNotesTable.createdBy)\n    .addRlsColumn(securityNotesTable.targetUserId)\n    .addRlsWherePart(\n      (alias: string) =\u003e\n        `${alias}.${securityNotesTable.createdBy.name} = '${accountId}' OR ${alias}.${securityNotesTable.targetUserId.name} = '${accountId}'`,\n    )\n    .finish()\n    .build();\n\n  return await rovoIntegration.dynamicIsolatedQuery(event.sql, settings);\n}\n```\n\n## ForgeSqlOrmOptions\n\nThe `ForgeSqlOrmOptions` object allows customization of ORM behavior:\n\n| Option                     | Type      | Description                                                                                                                                                                                                                                                                    |\n| -------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `logRawSqlQuery`           | `boolean` | Enables logging of raw SQL queries in the Atlassian Forge Developer Console. Useful for debugging and monitoring. Defaults to `false`.                                                                                                                                         |\n| `logCache`                 | `boolean` | Enables logging of cache operations (hits, misses, evictions) in the Atlassian Forge Developer Console. Useful for debugging caching issues. Defaults to `false`.                                                                                                              
|\n| `disableOptimisticLocking` | `boolean` | Disables optimistic locking. When set to `true`, no additional condition (e.g., a version check) is added during record updates, which can improve performance. However, this may lead to conflicts when multiple transactions attempt to update the same record concurrently. |\n| `additionalMetadata`       | `object`  | Allows adding custom metadata to all entities. This is useful for tracking common fields across all tables (e.g., `createdAt`, `updatedAt`, `createdBy`, etc.). The metadata will be automatically added to all generated entities.                                            |\n| `cacheEntityName`          | `string`  | KVS Custom entity name for cache storage. Must match the `name` in your `manifest.yml` storage entities configuration. Required for caching functionality. Defaults to `\"cache\"`.                                                                                              |\n| `cacheTTL`                 | `number`  | Default cache TTL in seconds. Defaults to `120` (2 minutes).                                                                                                                                                                                                                   |\n| `cacheWrapTable`           | `boolean` | Whether to wrap table names with backticks in cache keys. Defaults to `true`.                                                                                                                                                                                                  |\n| `hints`                    | `object`  | SQL hints for query optimization. Optional configuration for advanced query tuning.                                                                                                                                                                                            
|\n\n## CLI Commands\n\nForge-SQL-ORM provides a command-line interface for managing database migrations and model generation.\n\n**📖 [Full CLI Documentation](forge-sql-orm-cli/README.md)** - Complete CLI reference with all commands and options.\n\n### Quick CLI Reference\n\nThe CLI tool provides the following main commands:\n\n- `generate:model` - Generate Drizzle ORM models from your database schema\n- `migrations:create` - Create new migration files\n- `migrations:update` - Update existing migrations with schema changes\n- `migrations:drop` - Create migration to drop tables\n\n### Installation\n\nThe CLI tool must be installed as a local dependency and used via npm scripts in your `package.json`:\n\n```bash\nnpm install forge-sql-orm-cli -D\n```\n\n### Setup npm Scripts\n\nAdd the following scripts to your `package.json`:\n\n```bash\nnpm pkg set scripts.models:create=\"forge-sql-orm-cli generate:model --output src/entities --saveEnv\"\nnpm pkg set scripts.migration:create=\"forge-sql-orm-cli migrations:create --force --output src/migration --entitiesPath src/entities\"\nnpm pkg set scripts.migration:update=\"forge-sql-orm-cli migrations:update --entitiesPath src/entities --output src/migration\"\n```\n\n### Basic Usage\n\nAfter setting up the scripts, use them via npm:\n\n```bash\n# Generate models from database\nnpm run models:create\n\n# Create migration\nnpm run migration:create\n\n# Update migration\nnpm run migration:update\n```\n\n**Note:** The CLI tool is designed to work as a local dependency through npm scripts. Configuration is saved to `.env` file using the `--saveEnv` flag, so you only need to provide database credentials once.\n\nFor detailed information about all available options and advanced usage, see the [Full CLI Documentation](forge-sql-orm-cli/README.md).\n\n## Web Triggers for Migrations\n\nForge-SQL-ORM provides web triggers for managing database migrations in Atlassian Forge:\n\n### 1. 
Apply Migrations Trigger\n\nThis trigger allows you to apply database migrations through a web endpoint. It's useful for:\n\n- Manually triggering migrations\n- Running migrations as part of your deployment process\n- Testing migrations in different environments\n\n```typescript\n// Example usage in your Forge app\nimport { applySchemaMigrations } from \"forge-sql-orm\";\nimport migration from \"./migration\";\n\nexport const handlerMigration = async () =\u003e {\n  return applySchemaMigrations(migration);\n};\n```\n\nConfigure in `manifest.yml`:\n\n```yaml\nwebtrigger:\n  - key: invoke-schema-migration\n    function: runSchemaMigration\n    security:\n      egress:\n        allowDataEgress: false\n        allowedResponses:\n          - statusCode: 200\n            body: '{\"body\": \"Migrations successfully executed\"}'\nsql:\n  - key: main\n    engine: mysql\nfunction:\n  - key: runSchemaMigration\n    handler: index.handlerMigration\n```\n\n### 2. Drop Migrations Trigger\n\n⚠️ **WARNING**: This trigger will permanently delete all data in the specified tables and clear the migrations history. This operation cannot be undone!\n\nThis trigger allows you to completely reset your database schema. It's useful for:\n\n- Development environments where you need to start fresh\n- Testing scenarios requiring a clean database\n- Resetting the database before applying new migrations\n\n**Important**: The trigger will drop all tables, including the migrations history table.\n\n```typescript\n// Example usage in your Forge app\nimport { dropSchemaMigrations } from \"forge-sql-orm\";\n\nexport const dropMigrations = () =\u003e {\n  return dropSchemaMigrations();\n};\n```\n\nConfigure in `manifest.yml`:\n\n```yaml\nwebtrigger:\n  - key: drop-schema-migration\n    function: dropMigrations\nsql:\n  - key: main\n    engine: mysql\nfunction:\n  - key: dropMigrations\n    handler: index.dropMigrations\n```\n\n### 3. 
Fetch Schema Trigger\n\n⚠️ **DEVELOPMENT ONLY**: This trigger is designed for development environments only and should not be used in production.\n\nThis trigger retrieves the current database schema from Atlassian Forge SQL and generates SQL statements that can be used to recreate the database structure. It's useful for:\n\n- Development environment setup\n- Schema documentation\n- Database structure verification\n- Creating backup scripts\n\n**Security Considerations**:\n\n- This trigger exposes your database structure\n- It temporarily disables foreign key checks\n- It may expose sensitive table names and structures\n- Should only be used in development environments\n\n```typescript\n// Example usage in your Forge app\nimport { fetchSchemaWebTrigger } from \"forge-sql-orm\";\n\nexport const fetchSchema = async () =\u003e {\n  return fetchSchemaWebTrigger();\n};\n```\n\nConfigure in `manifest.yml`:\n\n```yaml\nwebtrigger:\n  - key: fetch-schema\n    function: fetchSchema\nsql:\n  - key: main\n    engine: mysql\nfunction:\n  - key: fetchSchema\n    handler: index.fetchSchema\n```\n\nThe response will contain SQL statements like:\n\n```sql\nSET foreign_key_checks = 0;\nCREATE TABLE IF NOT EXISTS users (...);\nCREATE TABLE IF NOT EXISTS orders (...);\nSET foreign_key_checks = 1;\n```\n\n### 4. Clear Cache Scheduler Trigger\n\nThis trigger automatically cleans up expired cache entries based on their TTL (Time To Live).\n\n**⚠️ Important:** While Forge KVS uses TTL to mark entries as expired, **actual deletion is asynchronous and may take up to 48 hours**. During this window, read operations may still return expired results. 
If your cache grows large and impacts INSERT/UPDATE performance, you should use this scheduler trigger to proactively clean up expired entries.\n\n**When to use:**\n\n- Your cache grows large over time\n- INSERT/UPDATE operations are slowing down due to cache size\n- You need strict expiry semantics (immediate cleanup)\n- You want to reduce storage costs proactively\n\n**When optional:**\n\n- Small cache footprint\n- No performance impact on data modifications\n- You can tolerate expired entries being returned for up to 48 hours\n\n```typescript\n// Example usage in your Forge app\nimport { clearCacheSchedulerTrigger } from \"forge-sql-orm\";\n\nexport const clearCache = () =\u003e {\n  return clearCacheSchedulerTrigger({\n    cacheEntityName: \"cache\",\n  });\n};\n```\n\nConfigure in `manifest.yml` (optional - only if cache growth impacts INSERT/UPDATE performance):\n\n```yaml\n# Optional: Only needed if cache growth impacts INSERT/UPDATE performance\nscheduledTrigger:\n  - key: clear-cache-trigger\n    function: clearCache\n    interval: fiveMinute\nfunction:\n  - key: clearCache\n    handler: index.clearCache\n```\n\n**Available Intervals**:\n\n- `fiveMinute` - Every 5 minutes\n- `hour` - Every hour\n- `day` - Every day\n\n### 5. Slow Query Scheduler Trigger\n\nThis scheduler trigger automatically monitors and analyzes slow queries on a scheduled basis. 
For detailed information, see the [Slow Query Monitoring](#slow-query-monitoring) section.\n\n**Quick Setup:**\n\n```typescript\nimport ForgeSQL, { slowQuerySchedulerTrigger } from \"forge-sql-orm\";\n\nconst forgeSQL = new ForgeSQL();\n\nexport const slowQueryTrigger = () =\u003e\n  slowQuerySchedulerTrigger(forgeSQL, { hours: 1, timeout: 3000 });\n```\n\nConfigure in `manifest.yml`:\n\n```yaml\nscheduledTrigger:\n  - key: slow-query-trigger\n    function: slowQueryTrigger\n    interval: hour\nfunction:\n  - key: slowQueryTrigger\n    handler: index.slowQueryTrigger\n```\n\n\u003e **💡 Note**: For complete documentation, examples, and configuration options, see the [Slow Query Monitoring](#slow-query-monitoring) section.\n\n### Important Notes\n\n**Security Considerations**:\n\n- The drop migrations trigger should be restricted to development environments\n- The fetch schema trigger should only be used in development\n- Consider implementing additional authentication for these endpoints\n\n**Best Practices**:\n\n- Always backup your data before using the drop migrations trigger\n- Test migrations in a development environment first\n- Use these triggers as part of your deployment pipeline\n- Monitor the execution logs in the Forge Developer Console\n\n## Query Analysis and Performance Optimization\n\n[↑ Back to Top](#table-of-contents)\n\nForge-SQL-ORM provides comprehensive query analysis tools to help you optimize your database queries and identify performance bottlenecks.\n\n### About Atlassian's Built-in Analysis Tools\n\nAtlassian provides comprehensive query analysis tools in the development console, including:\n\n- Basic query performance metrics\n- Slow query tracking (queries over 500ms)\n- Basic execution statistics\n- Query history and patterns\n\nOur analysis tools complement these built-in features by providing additional insights directly from TiDB's system schemas.\n\n### Automatic Error Analysis\n\nForge-SQL-ORM automatically intercepts and analyzes 
critical query errors to help you diagnose performance issues. When a query fails due to **timeout** or **out-of-memory** errors, the library automatically:\n\n1. **Detects the error type** (SQL_QUERY_TIMEOUT or Out of Memory)\n2. **Logs detailed error information** to the Forge Developer Console\n3. **Waits for system tables to populate** (200ms delay)\n4. **Retrieves and logs the execution plan** for the failed query\n5. **Provides performance metrics** including memory usage, execution time, and query details\n\nThis automatic analysis happens transparently - no additional code is required on your part.\n\n#### Supported Error Types\n\n- **SQL_QUERY_TIMEOUT**: Queries that exceed the execution time limit\n- **Out of Memory (OOM)**: Queries that exceed the 16 MiB memory limit (errno: 8175)\n\n#### Example Console Output\n\nWhen a query fails, you'll see output like this in the Forge Developer Console:\n\n```\n❌ TIMEOUT detected - Query exceeded time limit\n⏳ Waiting 200ms for CLUSTER_STATEMENTS_SUMMARY to populate...\n📊 Analyzing query performance and execution plan...\n⏱️  Query duration: 10500ms\n\nSQL: SELECT * FROM users u INNER JOIN orders o ON u.id = o.user_id WHERE u.active = ? | Memory: 12.45 MB | Time: 10500.00 ms | stmtType: Select | Executions: 1\n Plan:\nid task estRows operator info actRows execution info memory disk\nProjection_7 root 1000.00 forge_38dd1c6156b94bb59c2c9a45582bbfc7.users.id, ... 1000 time:10.5s, loops:1 12.45 MB N/A\n└─IndexHashJoin_14 root 1000.00 inner join, ... 1000 time:10.2s, loops:1 11.98 MB N/A\n```\n\n#### How It Works\n\nThe error analysis mechanism:\n\n1. **Error Detection**: When a query fails, the driver proxy checks the error code/errno\n2. **Error Logging**: Logs the specific error type to console.error\n3. **Data Population Wait**: Waits 200ms for TiDB's `CLUSTER_STATEMENTS_SUMMARY` table to be populated with the failed query's metadata\n4. 
**Query Analysis**: Automatically calls `printQueriesWithPlan()` to retrieve and display:\n   - SQL query text\n   - Memory consumption (average and max in MB)\n   - Execution time (average in ms)\n   - Statement type\n   - Number of executions\n   - Detailed execution plan\n\n#### Benefits\n\n- **Zero Configuration**: Works automatically - no setup required\n- **Immediate Insights**: Get execution plans for failed queries instantly\n- **Performance Debugging**: Identify bottlenecks without manual investigation\n- **Development Console Integration**: All logs appear in Atlassian Forge Developer Console\n- **No Code Changes**: Existing code automatically benefits from error analysis\n\n\u003e **💡 Tip**: The automatic error analysis only triggers for timeout and OOM errors. Other errors are logged normally without plan analysis.\n\n### Resolver-Level Performance Monitoring\n\nThe `executeWithMetadata()` method provides resolver-level profiling with configurable query plan printing. It aggregates metrics across all database operations within a resolver and supports two modes for query plan analysis.\n\n#### Basic Usage\n\n```typescript\nconst result = await forgeSQL.executeWithMetadata(\n  async () =\u003e {\n    const users = await forgeSQL.selectFrom(usersTable);\n    const orders = await forgeSQL\n      .selectFrom(ordersTable)\n      .where(eq(ordersTable.userId, usersTable.id));\n    return { users, orders };\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    const threshold = 500; // ms baseline for this resolver\n\n    if (totalDbExecutionTime \u003e threshold * 1.5) {\n      console.warn(`[Performance Warning] Resolver exceeded DB time: ${totalDbExecutionTime} ms`);\n      await printQueriesWithPlan(); // Analyze and print query execution plans\n    } else if (totalDbExecutionTime \u003e threshold) {\n      console.debug(`[Performance Debug] High DB time: ${totalDbExecutionTime} ms`);\n    }\n\n    console.log(`DB 
response size: ${totalResponseSize} bytes`);\n  },\n);\n```\n\n#### Query Plan Printing Options\n\nThe `printQueriesWithPlan` function supports two modes, configurable via the optional `options` parameter:\n\n**1. TopSlowest Mode (default)**: Prints execution plans for the slowest queries from the current resolver invocation\n\n```typescript\n// Full configuration example\nconst result = await forgeSQL.executeWithMetadata(\n  async () =\u003e {\n    const users = await forgeSQL.selectFrom(usersTable);\n    return users;\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    if (totalDbExecutionTime \u003e 1000) {\n      await printQueriesWithPlan(); // Will print top 3 slowest queries with execution plans\n    }\n  },\n  {\n    mode: \"TopSlowest\", // Print top slowest queries (default)\n    topQueries: 3, // Number of top slowest queries to analyze (default: 1)\n    showSlowestPlans: true, // Show execution plans (default: true)\n  },\n);\n\n// Minimal configuration - only specify what you need\nconst result2 = await forgeSQL.executeWithMetadata(\n  async () =\u003e {\n    const users = await forgeSQL.selectFrom(usersTable);\n    return users;\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    if (totalDbExecutionTime \u003e 1000) {\n      await printQueriesWithPlan(); // Will print top 3 slowest queries (all other options use defaults)\n    }\n  },\n  {\n    topQueries: 3, // Only specify topQueries, mode and showSlowestPlans use defaults\n  },\n);\n\n// Disable execution plans - only show SQL and execution time\nconst result3 = await forgeSQL.executeWithMetadata(\n  async () =\u003e {\n    const users = await forgeSQL.selectFrom(usersTable);\n    return users;\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    if (totalDbExecutionTime \u003e 1000) {\n      await printQueriesWithPlan(); // Will print SQL and time only, no execution plans\n 
   }\n  },\n  {\n    showSlowestPlans: false, // Disable execution plan printing\n  },\n);\n\n// Use all defaults - pass empty object or omit options parameter\nconst result4 = await forgeSQL.executeWithMetadata(\n  async () =\u003e {\n    const users = await forgeSQL.selectFrom(usersTable);\n    return users;\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    if (totalDbExecutionTime \u003e 1000) {\n      await printQueriesWithPlan(); // Uses all defaults: TopSlowest mode, topQueries: 1, showSlowestPlans: true\n    }\n  },\n  {}, // Empty object - all options use defaults\n);\n```\n\n**2. SummaryTable Mode**: Uses `CLUSTER_STATEMENTS_SUMMARY` for query analysis\n\n```typescript\nconst result = await forgeSQL.executeWithMetadata(\n  async () =\u003e {\n    const users = await forgeSQL.selectFrom(usersTable);\n    return users;\n  },\n  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n    if (totalDbExecutionTime \u003e 1000) {\n      await printQueriesWithPlan(); // Will use CLUSTER_STATEMENTS_SUMMARY if within time window\n    }\n  },\n  {\n    mode: \"SummaryTable\", // Use SummaryTable mode\n    summaryTableWindowTime: 10000, // Time window in milliseconds (default: 15000ms)\n  },\n);\n```\n\n#### Configuration Options\n\nAll options are **optional**. If not specified, default values are used. 
You can pass only the options you need to customize.\n\n| Option                   | Type                             | Default        | Description                                                                                                                                                                                   |\n| ------------------------ | -------------------------------- | -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `mode`                   | `'TopSlowest' \\| 'SummaryTable'` | `'TopSlowest'` | Query plan printing mode. `'TopSlowest'` prints execution plans for the slowest queries from the current resolver. `'SummaryTable'` uses `CLUSTER_STATEMENTS_SUMMARY` when within time window |\n| `summaryTableWindowTime` | `number`                         | `15000`        | Time window in milliseconds for summary table queries. Only used when `mode` is `'SummaryTable'`                                                                                              |\n| `topQueries`             | `number`                         | `1`            | Number of top slowest queries to analyze when `mode` is `'TopSlowest'`                                                                                                                        |\n| `showSlowestPlans`       | `boolean`                        | `true`         | Whether to show execution plans for slowest queries in TopSlowest mode. If `false`, only SQL and execution time are printed                                                                   |\n| `normalizeQuery`         | `boolean`                        | `true`         | Whether to normalize SQL queries by replacing parameter values with `?` placeholders. 
Set to `false` to disable normalization if it causes issues with complex queries                        |\n| `asyncQueueName`         | `string`                         | `\"\"`           | Queue name for async processing. If provided, query analysis will be queued for background processing instead of running synchronously. Requires consumer configuration in `manifest.yml`     |\n\n**Examples:**\n\n```typescript\n// Use all defaults - omit options or pass empty object\nawait forgeSQL.executeWithMetadata(queryFn, onMetadataFn); // or { }\n\n// Customize only what you need\nawait forgeSQL.executeWithMetadata(queryFn, onMetadataFn, { topQueries: 3 });\nawait forgeSQL.executeWithMetadata(queryFn, onMetadataFn, { mode: \"SummaryTable\" });\nawait forgeSQL.executeWithMetadata(queryFn, onMetadataFn, { showSlowestPlans: false });\nawait forgeSQL.executeWithMetadata(queryFn, onMetadataFn, { normalizeQuery: false }); // Disable query normalization\n\n// Combine multiple options\nawait forgeSQL.executeWithMetadata(queryFn, onMetadataFn, {\n  mode: \"TopSlowest\",\n  topQueries: 5,\n  showSlowestPlans: false,\n  normalizeQuery: true, // Enable query normalization (default)\n});\n```\n\n#### How It Works\n\n1. **TopSlowest Mode** (default):\n   - Collects all queries executed within the resolver\n   - Sorts them by execution time (slowest first)\n   - Prints execution plans for the top N queries (configurable via `topQueries`)\n   - If `showSlowestPlans` is `false`, only prints SQL and execution time without plans\n   - Works immediately after query execution\n\n2. 
**SummaryTable Mode**:\n   - Attempts to use `CLUSTER_STATEMENTS_SUMMARY` for query analysis\n   - Only works if queries are executed within the specified time window (`summaryTableWindowTime`)\n   - If the time window expires, falls back to TopSlowest mode\n   - Provides aggregated statistics from TiDB's system tables\n\n#### Example: Real-World Resolver\n\n```typescript\nresolver.define(\"fetch\", async (req: Request) =\u003e {\n  try {\n    return await forgeSQL.executeWithMetadata(\n      async () =\u003e {\n        const users = await forgeSQL.selectFrom(demoUsers);\n        const orders = await forgeSQL\n          .selectFrom(demoOrders)\n          .where(eq(demoOrders.userId, demoUsers.id));\n        return { users, orders };\n      },\n      async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) =\u003e {\n        const threshold = 500; // ms baseline for this resolver\n\n        if (totalDbExecutionTime \u003e threshold * 1.5) {\n          console.warn(\n            `[Performance Warning fetch] Resolver exceeded DB time: ${totalDbExecutionTime} ms`,\n          );\n          await printQueriesWithPlan(); // Analyze and print query execution plans\n        } else if (totalDbExecutionTime \u003e threshold) {\n          console.debug(`[Performance Debug] High DB time: ${totalDbExecutionTime} ms`);\n        }\n      },\n      {\n        mode: \"TopSlowest\", // Print top slowest queries (default)\n        topQueries: 2, // Print top 2 slowest queries\n      },\n    );\n  } catch (e) {\n    const error = e?.cause?.debug?.sqlMessage ?? 
e?.cause;\n    console.error(error, e);\n    throw error;\n  }\n});\n```\n\n#### Benefits\n\n- **Resolver-Level Profiling**: Aggregates metrics across all database operations in a resolver\n- **Configurable Analysis**: Choose between TopSlowest mode or SummaryTable mode\n- **Automatic Plan Formatting**: Execution plans are formatted in a readable format\n- **Performance Thresholds**: Set custom thresholds for performance warnings\n- **Zero Configuration**: Works out of the box with sensible defaults\n\n\u003e **💡 Tip**: When multiple resolvers are running concurrently, their query data may also appear in `printQueriesWithPlan()` analysis when using SummaryTable mode, as it queries the global `CLUSTER_STATEMENTS_SUMMARY` table.\n\n### Async Query Degradation Analysis\n\nForge-SQL-ORM supports asynchronous processing of query degradation analysis, allowing you to offload performance analysis to a background queue. This is particularly useful for production environments where you want to avoid blocking resolver responses while still capturing detailed performance metrics.\n\n#### Key Features\n\n- **Non-Blocking Analysis**: Query analysis runs asynchronously without blocking resolver responses\n- **Automatic Fallback**: Falls back to synchronous execution if async queue fails\n- **Log Correlation**: Job IDs help correlate resolver logs with consumer logs\n- **Queue-Based Processing**: Uses Forge's event queue system for reliable processing\n- **Configurable Timeout**: Customizable timeout for event queuing (default: 1200ms)\n\n#### Basic Setup\n\n**1. Configure consumer in `manifest.yml`:**\n\n```yaml\nmodules:\n  consumer:\n    - key: print-degradation-queries\n      queue: degradationQueue\n      function: handlerAsyncDegradation\n\n  function:\n    - key: handlerAsyncDegradation\n      handler: index.handlerAsyncDegradation\n```\n\n**2. 
Create the handler function:**\n\n```typescript\nimport { AsyncEvent } from \"@forge/events\";\nimport { printDegradationQueriesConsumer } from \"forge-sql-orm\";\nimport { FORGE_SQL_ORM } from \"./utils/forgeSqlOrmUtils\";\n\nexport const handlerAsyncDegradation = (event: AsyncEvent) =\u003e {\n  return printDegradationQueriesConsumer(FORGE_SQL_ORM, event);\n};\n```\n\n**3. Enable async processing in resolver:**\n\n```typescript\nresolver.define(\"fetch\", async (req: Request) =\u003e {\n  return await FORGE_SQL_ORM.executeWithMetadata(\n    async () =\u003e {\n      // ... your queries ...\n      return await SQL_QUERY;\n    },\n    async (totalDbExecutionTime, totalResponseSize, printQueries) =\u003e {\n      if (totalDbExecutionTime \u003e 800) {\n        await printQueries(); // Will queue for async processing\n      }\n    },\n    { asyncQueueName: \"degradationQueue\" }, // Enable async processing\n  );\n});\n```\n\n#### Configuration Options\n\nThe `asyncQueueName` option enables async processing:\n\n| Option           | Type     | Default | Description                                                                                                                                                |\n| ---------------- | -------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `asyncQueueName` | `string` | `\"\"`    | Queue name for async processing. If provided, query analysis will be queued instead of running synchronously. If empty or not provided, runs synchronously |\n\n#### How It Works\n\n1. 
**Resolver Execution**: When `printQueriesWithPlan()` is called with `asyncQueueName` configured:\n   - Creates an event payload with query statistics and metadata\n   - Sends event to the specified queue with a timeout (default: 1200ms)\n   - Logs a warning message with Job ID for correlation\n   - Returns immediately without waiting for analysis\n\n2. **Async Processing**: The consumer function (`handlerAsyncDegradation`):\n   - Receives the event from the queue\n   - Logs processing start with Job ID\n   - Executes query analysis (TopSlowest or SummaryTable mode)\n   - Prints execution plans and performance metrics\n\n3. **Fallback Behavior**: If queue push fails or times out:\n   - Falls back to synchronous execution automatically\n   - Logs a warning message\n   - Analysis still completes, just synchronously\n\n#### Log Correlation\n\nBoth resolver and consumer logs include Job IDs to help you correlate related events:\n\n**Resolver log (when event is queued):**\n\n```\nWARN [Performance Analysis] Query degradation event queued for async processing | Job ID: abc-123 | Total DB time: 3531ms | Queries: 3 | Look for consumer log with jobId: abc-123\n```\n\n**Consumer log (when event is processed):**\n\n```\nWARN [Performance Analysis] Processing query degradation event | Job ID: abc-123 | Total DB time: 3531ms | Queries: 3 | Started: 2025-12-15T18:12:34.251Z\nWARN SQL: SELECT ... 
| Time: 3514 ms
 Plan:
 Projection_7 | task:root | ...
```

**To find all related logs:**

- Search logs for: `"Job ID: abc-123"`
- This will show both the queuing event and the processing event

#### Example: Complete Setup

**manifest.yml:**

```yaml
modules:
  consumer:
    - key: print-degradation-queries
      queue: degradationQueue
      function: handlerAsyncDegradation

  function:
    - key: handlerAsyncDegradation
      handler: index.handlerAsyncDegradation
```

**index.ts:**

```typescript
import Resolver from "@forge/resolver";
import { AsyncEvent } from "@forge/events";
import { printDegradationQueriesConsumer } from "forge-sql-orm";
import { FORGE_SQL_ORM } from "./utils/forgeSqlOrmUtils";

const resolver = new Resolver();

// Consumer handler
export const handlerAsyncDegradation = (event: AsyncEvent) => {
  return printDegradationQueriesConsumer(FORGE_SQL_ORM, event);
};

// Resolver with async analysis
resolver.define("fetch", async (req: Request) => {
  return await FORGE_SQL_ORM.executeWithMetadata(
    async () => {
      const users = await FORGE_SQL_ORM.selectFrom(demoUsers);
      const orders = await FORGE_SQL_ORM.selectFrom(demoOrders);
      return { users, orders };
    },
    async (totalDbExecutionTime, totalResponseSize, printQueries) => {
      const threshold = 800; // ms baseline

      if (totalDbExecutionTime > threshold) {
        console.warn(`[Performance Warning] Resolver exceeded DB time: ${totalDbExecutionTime} ms`);
        await printQueries(); // Queued for async processing
      }
    },
    {
      asyncQueueName: "degradationQueue", // Enable async processing
      mode: "TopSlowest",
      topQueries: 1,
    },
  );
});
```

#### Benefits

- **Non-Blocking**: Resolver responses are not delayed by query analysis
- **Production Ready**: Suitable for production environments where performance is critical
- **Reliable**: Automatic fallback ensures analysis always completes
- **Traceable**: Job IDs enable easy log correlation
- **Scalable**: Queue-based processing handles high-load scenarios

#### When to Use Async Processing

**Use async processing when:**

- You're in a production environment
- Resolver response time is critical
- You want to avoid blocking user requests
- You need detailed analysis but can process it later

**Use synchronous processing when:**

- You're in development/debugging
- You need immediate analysis results
- You want a simpler setup (no queue configuration)

> **💡 Tip**: The async queue name must match the queue name configured in your `manifest.yml` consumer section. If the queue doesn't exist or the event fails to send, the system automatically falls back to synchronous execution.

### Slow Query Monitoring

Forge-SQL-ORM provides a scheduler trigger (`slowQuerySchedulerTrigger`) that automatically monitors and analyzes slow queries on an hourly basis. This trigger queries TiDB's slow query log system table and provides detailed performance information, including SQL query text, memory usage, execution time, and execution plans.

#### Key Features

- **Automatic Monitoring**: Runs on a scheduled interval (recommended: hourly)
- **Detailed Performance Metrics**: Memory usage, execution time, and execution plans
- **Console Logging**: Results are automatically logged to the Forge Developer Console
- **Configurable Time Window**: Analyze queries from the last N hours (default: 1 hour)
- **Automatic Plan Retrieval**: Execution plans are included for all slow queries

#### Basic Setup

**1. Create the trigger function:**

```typescript
import ForgeSQL, { slowQuerySchedulerTrigger } from "forge-sql-orm";

const forgeSQL = new ForgeSQL();

// Monitor slow queries from the last hour (recommended for an hourly schedule)
export const slowQueryTrigger = () =>
  slowQuerySchedulerTrigger(forgeSQL, { hours: 1, timeout: 3000 });
```

**2. Configure in `manifest.yml`:**

```yaml
modules:
  scheduledTrigger:
    - key: slow-query-trigger
      function: slowQueryTrigger
      interval: hour # Run every hour

  function:
    - key: slowQueryTrigger
      handler: index.slowQueryTrigger
```

#### Configuration Options

| Option    | Type     | Default | Description                                                |
| --------- | -------- | ------- | ---------------------------------------------------------- |
| `hours`   | `number` | `1`     | Number of hours to look back for slow queries              |
| `timeout` | `number` | `3000`  | Timeout in milliseconds for the diagnostic query execution |

#### Example Console Output

When slow queries are detected, you'll see output like this in the Forge Developer Console:

```
Found SlowQuery SQL: SELECT * FROM users u INNER JOIN orders o ON u.id = o.user_id WHERE u.active = ? | Memory: 8.50 MB | Time: 2500.00 ms
 Plan:
id task estRows operator info actRows execution info memory disk
Projection_7 root 1000.00 forge_38dd1c6156b94bb59c2c9a45582bbfc7.users.id, ... 1000 time:2.5s, loops:1 8.50 MB N/A
└─IndexHashJoin_14 root 1000.00 inner join, ... 1000 time:2.2s, loops:1 7.98 MB N/A

Found SlowQuery SQL: SELECT * FROM products WHERE category = ? ORDER BY created_at DESC | Memory: 6.25 MB | Time: 1800.00 ms
 Plan:
...
```

#### Advanced Configuration

```typescript
import ForgeSQL, { slowQuerySchedulerTrigger } from "forge-sql-orm";

const forgeSQL = new ForgeSQL();

// Monitor queries from the last 6 hours (for less frequent checks)
export const sixHourSlowQueryTrigger = () =>
  slowQuerySchedulerTrigger(forgeSQL, { hours: 6, timeout: 5000 });

// Monitor queries from the last 24 hours (daily monitoring)
export const dailySlowQueryTrigger = () =>
  slowQuerySchedulerTrigger(forgeSQL, { hours: 24, timeout: 3000 });
```

#### How It Works

1. **Scheduled Execution**: The trigger runs automatically on the configured interval (hourly recommended)
2. **Query Analysis**: Queries TiDB's slow query log system table for queries executed within the specified time window
3. **Performance Metrics**: Extracts and logs:
   - SQL query text (sanitized for readability)
   - Maximum memory usage (in MB)
   - Query execution time (in ms)
   - Detailed execution plan
4. **Console Logging**: Results are logged to the Forge Developer Console via `console.warn()` for easy monitoring

#### Best Practices

- **Hourly Intervals**: Use `interval: hour` for timely detection of slow queries
- **Default Time Window**: A 1-hour window is recommended for hourly schedules to avoid overlap
- **Monitor Regularly**: Check console logs regularly to identify patterns in slow queries

#### Benefits

- **Proactive Monitoring**: Catch slow queries before they become critical issues
- **Performance Trends**: Track query performance over time
- **Optimization Insights**: Execution plans help identify optimization opportunities
- **Zero Manual Intervention**: Fully automated monitoring with scheduled execution
- **Production Safe**: Works silently in the background and only logs when slow queries are found

> **💡 Tip**: The trigger retrieves at most 50 slow queries per run to prevent excessive logging.
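When you widen the look-back window for less frequent schedules, the diagnostic query scans more slow-query log data, so the `timeout` option usually needs to grow along with `hours`. The sketch below shows one way to couple the two; `diagnosticTimeout` and its scaling constants are hypothetical conventions, not part of forge-sql-orm.

```typescript
// Hypothetical helper (NOT part of forge-sql-orm): derive the diagnostic-query
// timeout from the look-back window. Scanning more hours of the slow-query log
// generally takes longer, so the timeout grows linearly from a base value.
function diagnosticTimeout(hours: number, baseMs = 3000, perHourMs = 400): number {
  return baseMs + hours * perHourMs;
}

// Usage sketch (assumes the forgeSQL instance from "Basic Setup" above):
//   export const sixHourSlowQueryTrigger = () =>
//     slowQuerySchedulerTrigger(forgeSQL, { hours: 6, timeout: diagnosticTimeout(6) });
```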
> Transient timeouts are usually fine; repeated timeouts indicate that the diagnostic query itself is slow and should be investigated.

### Available Analysis Tools

```typescript
import ForgeSQL from "forge-sql-orm";

const forgeSQL = new ForgeSQL();
const analyzeForgeSql = forgeSQL.analyze();
```

#### Query Plan Analysis

Query plan analysis helps you understand how your queries are executed and identify optimization opportunities.

```typescript
// Example usage for analyzing a specific query
const forgeSQL = new ForgeSQL();
const analyzeForgeSql = forgeSQL.analyze();

// Analyze a Drizzle query
const plan = await analyzeForgeSql.explain(
  forgeSQL
    .select({
      table1: testEntityJoin1,
      table2: { name: testEntityJoin2.name, email: testEntityJoin2.email },
      count: rawSql<number>`COUNT(*)`,
      table3: {
        table12: testEntityJoin1.name,
        table22: testEntityJoin2.email,
        table32: testEntity.id,
      },
    })
    .from(testEntityJoin1)
    .innerJoin(testEntityJoin2, eq(testEntityJoin1.id, testEntityJoin2.id)),
);

// Analyze a raw SQL query
const rawPlan = await analyzeForgeSql.explainRaw("SELECT * FROM users WHERE id = ?", [1]);

// Analyze queries built with the newer select methods
const usersFromPlan = await analyzeForgeSql.explain(
  forgeSQL.selectFrom(users).where(eq(users.active, true)),
);

const usersCacheablePlan = await analyzeForgeSql.explain(
  forgeSQL.selectCacheableFrom(users).where(eq(users.active, true)),
);

// Analyze Common Table Expressions (CTEs)
const ctePlan = await analyzeForgeSql.explain(
  forgeSQL
    .with(
      forgeSQL.selectFrom(users).where(eq(users.active, true)).as("activeUsers"),
      forgeSQL.selectFrom(orders).where(eq(orders.status, "completed")).as("completedOrders"),
    )
    .select({
      totalActiveUsers: sql`COUNT(au.id)`,
      totalCompletedOrders: sql`COUNT(co.id)`,
    })
    .from(sql`activeUsers au`)
    .leftJoin(sql`completedOrders co`, eq(sql`au.id`, sql`co.userId`)),
);
```

This analysis provides insights into:

- How the database executes your query
- Which indexes are being used
- Estimated vs. actual row counts
- Resource usage at each step
- Performance optimization opportunities

## Migration Guide

### Migrating from 2.0.x to 2.1.x

This section covers the breaking changes introduced in version 2.1.x and how to migrate your existing code.

#### 1. Method Renaming (BREAKING CHANGES)

**Removed Methods:**

- `forgeSQL.modify()` → **REMOVED** (use `forgeSQL.modifyWithVersioning()`)
- `forgeSQL.crud()` → **REMOVED** (use `forgeSQL.modifyWithVersioning()`)

**Migration Steps:**

1. **Replace `modify()` calls:**

   ```typescript
   // ❌ Old (2.0.x) - NO LONGER WORKS
   await forgeSQL.modify().insert(Users, [userData]);
   await forgeSQL.modify().updateById(updateData, Users);
   await forgeSQL.modify().deleteById(1, Users);

   // ✅ New (2.1.x) - REQUIRED
   await forgeSQL.modifyWithVersioning().insert(Users, [userData]);
   await forgeSQL.modifyWithVersioning().updateById(updateData, Users);
   await forgeSQL.modifyWithVersioning().deleteById(1, Users);
   ```

2. **Replace `crud()` calls:**

   ```typescript
   // ❌ Old (2.0.x) - NO LONGER WORKS
   await forgeSQL.crud().insert(Users, [userData]);
   await forgeSQL.crud().updateById(updateData, Users);
   await forgeSQL.crud().deleteById(1, Users);

   // ✅ New (2.1.x) - REQUIRED
   await forgeSQL.modifyWithVersioning().insert(Users, [userData]);
   await forgeSQL.modifyWithVersioning().updateById(updateData, Users);
   await forgeSQL.modifyWithVersioning().deleteById(1, Users);
   ```

#### 2. New API Methods

**New Methods Available:**

- `forgeSQL.insert()` - Basic Drizzle operations
- `forgeSQL.update()` - Basic Drizzle operations
- `forgeSQL.delete()` - Basic Drizzle operations
- `forgeSQL.insertAndEvictCache()` - Basic Drizzle operations that evict the cache after execution
- `forgeSQL.updateAndEvictCache()` - Basic Drizzle operations that evict the cache after execution
- `forgeSQL.deleteAndEvictCache()` - Basic Drizzle operations that evict the cache after execution
- `forgeSQL.selectFrom()` - All-column queries with field aliasing
- `forgeSQL.selectDistinctFrom()` - Distinct all-column queries with field aliasing
- `forgeSQL.selectCacheableFrom()` - All-column queries with field aliasing and caching
- `forgeSQL.selectDistinctCacheableFrom()` - Distinct all-column queries with field aliasing and caching
- `forgeSQL.execute()` - Raw SQL queries with local caching
- `forgeSQL.executeCacheable()` - Raw SQL queries with local and global caching
- `forgeSQL.executeDDL()` - DDL operations (CREATE, ALTER, DROP, etc.)
- `forgeSQL.executeDDLActions()` - Execute actions within a DDL operation context
- `forgeSQL.with()` - Common Table Expressions (CTEs)

**Optional Migration:**
You can optionally migrate to the new API methods for better performance and cache management:

```typescript
// ✅ Old approach (still works)
await forgeSQL.modifyWithVersioning().insert(Users, [userData]);

// ✅ New approach (recommended for new code)
await forgeSQL.insert(Users).values(userData);
// or for versioned operations with cache management
await forgeSQL.modifyWithVersioningAndEvictCache().insert(Users, [userData]);

// ✅ New query methods for better performance
const users = await forgeSQL.selectFrom(Users).where(eq(Users.active, true));

const usersDistinct = await forgeSQL.selectDistinctFrom(Users).where(eq(Users.active, true));

const usersCacheable = await forgeSQL.selectCacheableFrom(Users).where(eq(Users.active, true));

// ✅ Raw SQL execution with caching
const rawUsers = await forgeSQL.execute("SELECT * FROM users WHERE active = ?", [true]);

// ⚠️ IMPORTANT: When using executeCacheable(), all table names must be wrapped with backticks (`)
const cachedRawUsers = await forgeSQL.executeCacheable(
  "SELECT * FROM `users` WHERE active = ?",
  [true],
  300,
);

// ✅ Query execution with metadata capture and performance monitoring
const usersWithMetadata = await forgeSQL.executeWithMetadata(
  async () => {
    const users = await forgeSQL.selectFrom(usersTable);
    const orders = await forgeSQL
      .selectFrom(ordersTable)
      .where(eq(ordersTable.userId, usersTable.id));
    return { users, orders };
  },
  async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) => {
    const threshold = 500; // ms baseline for this resolver

    if (totalDbExecutionTime > threshold * 1.5) {
      console.warn(`[Performance Warning] Resolver exceeded DB time: ${totalDbExecutionTime} ms`);
      await printQueriesWithPlan(); // Analyze and print query execution plans
    } else if (totalDbExecutionTime > threshold) {
      console.debug(`[Performance Debug] High DB time: ${totalDbExecutionTime} ms`);
    }

    console.log(`DB response size: ${totalResponseSize} bytes`);
  },
  {
    // Optional: Configure query plan printing
    mode: "TopSlowest", // Print top slowest queries (default)
    topQueries: 1, // Print top slowest query
  },
);

// ✅ DDL operations for schema modifications
await forgeSQL.executeDDL(`
  CREATE TABLE users (
    id INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE
  )
`);

await forgeSQL.executeDDL(sql`
  ALTER TABLE users
  ADD COLUMN created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
`);

// ✅ Execute regular SQL queries in a DDL context for performance monitoring
await forgeSQL.executeDDLActions(async () => {
  // Query TiDB's statement summary for slow statements
  const slowQueries = await forgeSQL.execute(`
    SELECT * FROM INFORMATION_SCHEMA.STATEMENTS_SUMMARY
    WHERE AVG_LATENCY > 1000000
  `);

  // Run complex analysis queries in the same DDL context
  const performanceData = await forgeSQL.execute(`
    SELECT * FROM INFORMATION_SCHEMA.CLUSTER_STATEMENTS_SUMMARY_HISTORY
    WHERE SUMMARY_END_TIME > DATE_SUB(NOW(), INTERVAL 1 HOUR)
  `);

  return { slowQueries, performanceData };
});

// ✅ Common Table Expressions (CTEs)
const userStats = await forgeSQL
  .with(
    forgeSQL.selectFrom(users).where(eq(users.active, true)).as("activeUsers"),
    forgeSQL.selectFrom(orders).where(eq(orders.status, "completed")).as("completedOrders"),
  )
  .select({
    totalActiveUsers: sql`COUNT(au.id)`,
    totalCompletedOrders: sql`COUNT(co.id)`,
  })
  .from(sql`activeUsers au`)
  .leftJoin(sql`completedOrders co`, eq(sql`au.id`, sql`co.userId`));
```

#### 3. Automatic Migration Script

You can use a simple find-and-replace to migrate your code:

```bash
# Replace modify() calls
find . \( -name "*.ts" -o -name "*.js" \) -print0 | xargs -0 sed -i 's/forgeSQL\.modify()/forgeSQL.modifyWithVersioning()/g'

# Replace crud() calls
find . \( -name "*.ts" -o -name "*.js" \) -print0 | xargs -0 sed -i 's/forgeSQL\.crud()/forgeSQL.modifyWithVersioning()/g'
```

#### 4. Breaking Changes

**Important:** The old methods (`modify()` and `crud()`) have been completely removed in version 2.1.x.

- ❌ **2.1.x**: Old methods are no longer available
- ✅ **Migration Required**: You must update your code to use the new methods

## License

This project is licensed under the **MIT License**.  
Feel free to use it for commercial and personal projects.