{"id":34253662,"url":"https://github.com/tabular-id/tabular","last_synced_at":"2026-01-16T19:25:12.215Z","repository":{"id":305096613,"uuid":"1021859453","full_name":"tabular-id/tabular","owner":"tabular-id","description":"Your Query Client, Forged with Rust: Fast, Safe, Efficient.","archived":false,"fork":false,"pushed_at":"2026-01-13T15:01:15.000Z","size":94429,"stargazers_count":16,"open_issues_count":2,"forks_count":2,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-01-13T16:57:51.456Z","etag":null,"topics":["client","database","database-management","rust","sql"],"latest_commit_sha":null,"homepage":"https://tabular.id","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tabular-id.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-07-18T04:16:38.000Z","updated_at":"2026-01-13T15:02:03.000Z","dependencies_parsed_at":"2025-08-11T03:13:02.264Z","dependency_job_id":"7228c61a-2d6c-40ea-920c-403fa6f9bb2a","html_url":"https://github.com/tabular-id/tabular","commit_stats":null,"previous_names":["tabular-id/tabular"],"tags_count":61,"template":false,"template_full_name":null,"purl":"pkg:github/tabular-id/tabular","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tabular-id%2Ftabular","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tabular-id%2Ftabular/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tabular-id%2
Ftabular/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tabular-id%2Ftabular/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tabular-id","download_url":"https://codeload.github.com/tabular-id/tabular/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tabular-id%2Ftabular/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28481681,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-16T11:59:17.896Z","status":"ssl_error","status_checked_at":"2026-01-16T11:55:55.838Z","response_time":107,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["client","database","database-management","rust","sql"],"created_at":"2025-12-16T11:49:25.695Z","updated_at":"2026-01-16T19:25:12.134Z","avatar_url":"https://github.com/tabular-id.png","language":"Rust","readme":"\u003cdiv align=\"center\"\u003e\n\n# Tabular\n\nYour fast, native, cross‑platform SQL \u0026 NoSQL database client (Desktop, with early groundwork for iPadOS) — built in Rust.\n\n![Main Window](screenshots/halaman-utama.jpg)\n\n\u003c/div\u003e\n\n## 1. About Tabular\nTabular is a lightweight, native database client built with the `eframe`/`egui` stack. It focuses on instant startup, responsive UI, safe concurrency, and a distraction‑free workflow for developers, data engineers, and DBAs. 
Unlike web/electron apps, Tabular ships as a single native binary with a small memory footprint while still offering rich features like autocomplete, multiple drivers, query history, export tools, and self‑update.\n\n## 2. Key Features\n- Unified UI for multiple relational \u0026 non‑relational databases\n- Drivers: PostgreSQL, MySQL/MariaDB, SQLite, SQL Server (TDS), Redis, MongoDB\n- Async runtime (Tokio) for non‑blocking execution\n- Multiple query tabs \u0026 saved query library (`queries/` folder)\n- Query history panel with search \u0026 filtering\n- Result grid with copy cell / row / full result set\n- Export to CSV \u0026 XLSX\n- Rich value formatting (dates, decimals, JSON, BSON, HEX)\n- Connection caching \u0026 quick reconnect\n- Self‑update (GitHub Releases) with semantic version check\n- Configurable data directory (env `TABULAR_DATA_DIR`)\n- Native file dialogs (`rfd`)\n- Cross‑platform theming via egui\n- Sandboxing \u0026 notarization ready for macOS\n\n### Query Editor (New, In Progress)\nThe legacy `egui::TextEdit` editor is being replaced with a custom widget backed by a rope buffer. Current capabilities:\n- Multi‑caret editing\n- Per‑line syntax highlighting cache (SQL focus first)\n- Basic scroll‑to‑caret\n- Undo/Redo\n- Multi‑line/column selection\n\nPlanned next: full autocomplete integration, diff‑based edits, revision tracking, and removal of the legacy path after feature parity.\n\n## 3. Supported Databases\n| Category   | Engines / Protocols |\n|------------|----------------------|\n| Relational | PostgreSQL, MySQL/MariaDB, SQLite, Microsoft SQL Server |\n| Document   | MongoDB (BSON \u0026 compression) |\n| Key/Value  | Redis (async connection manager) |\n\nNotes:\n- Microsoft SQL Server uses `tiberius` (TDS over TLS)\n- Redis uses pooled async managers\n- SQLite runs in‑process (file mode) — ensure write permissions\n\n## 4. Installation\n\n### Option A: Download Prebuilt Release (Recommended)\n1. 
Visit: https://github.com/tabular-id/tabular/releases\n2. Download the bundle for your platform:\n    - macOS: `.dmg` (notarized) or `.pkg` (if available)\n    - Linux: `.tar.gz` (extract, then place the binary in `$HOME/.local/bin` or `/usr/local/bin`)\n    - Windows: portable package is planned\n3. (macOS) Drag `Tabular.app` into `/Applications`.\n4. Run Tabular.\n\n### Option B: Build From Source\nGeneral requirements:\n- Rust (stable; https://rustup.rs)\n- Cargo (bundled with rustup)\n- Clang/LLVM (for bindgen / some native crates)\n- libclang headers (Linux)\n- (Linux) pkg-config, OpenSSL dev packages may be required depending on environment\n\n#### Arch Linux\n```bash\nsudo pacman -Syu --needed base-devel clang llvm pkgconf\nexport LIBCLANG_PATH=/usr/lib\ngit clone https://github.com/tabular-id/tabular.git\ncd tabular\ncargo build --release\n```\n\n#### Ubuntu / Debian\n```bash\nsudo apt update\nsudo apt install -y build-essential clang libclang-dev pkg-config\ngit clone https://github.com/tabular-id/tabular.git\ncd tabular\ncargo build --release\n```\n\n#### macOS\n```bash\nxcode-select --install   # Command Line Tools\nbrew install llvm        # (optional) newer clang\ngit clone https://github.com/tabular-id/tabular.git\ncd tabular\ncargo build --release\n```\nIf using LLVM from Homebrew:\n```bash\nexport LIBCLANG_PATH=\"$(brew --prefix llvm)/lib\"\n```\n\n#### Windows (MSVC) – planned\nInstall the MSVC toolchain, then build with `cargo build --release`.\n\n#### Multi‑Architecture / Cross Compilation (Desktop + Experimental iOS)\n```bash\ncargo install cross\ncross build --target aarch64-apple-darwin --release\n\n# Experimental iPadOS (requires an Xcode/iOS wrapper)\nrustup target add aarch64-apple-ios\ncargo build --target aarch64-apple-ios --release\n```\n\n### Run\n```bash\n./target/release/tabular\n```\n\n### Optional Environment Variables\n| Variable         | Purpose                             | Example                      
|\n|------------------|-------------------------------------|------------------------------|\n| TABULAR_DATA_DIR | Override data directory location    | /data/tabular                |\n| RUST_LOG         | Enable logging                      | RUST_LOG=info ./tabular      |\n\n## 5. macOS Notarized / Signed Builds\nFor distributing outside the Mac App Store:\n```bash\nexport APPLE_ID=\"your-apple-id@example.com\"\nexport APPLE_PASSWORD=\"app-specific-password\"\nexport APPLE_TEAM_ID=\"TEAMID\"\nexport APPLE_BUNDLE_ID=\"id.tabular.database\"\nexport APPLE_IDENTITY=\"Developer ID Application: Your Name (TEAMID)\"\nexport NOTARIZE=1\n./build.sh macos --deps\n```\nStaple \u0026 verify:\n```bash\nxcrun stapler staple dist/macos/Tabular-\u003cversion\u003e.dmg\nspctl -a -vv dist/macos/Tabular.app\ncodesign --verify --deep --strict --verbose=2 dist/macos/Tabular.app\n```\nSee `macos/Tabular.entitlements` for sandbox/network/file access settings. For App Store distribution use a distribution identity and provisioning profile:\n```bash\nexport APPLE_IDENTITY=\"Apple Distribution: Your Name (TEAMID)\"\nexport PROVISIONING_PROFILE=\"/path/Tabular_AppStore.provisionprofile\"\nmake pkg-macos-store\n```\n\n## 6. Data Directory (Configurable)\nDefault location:\n- macOS / Linux: `~/.tabular`\n\nChange it via Preferences (native folder picker) or force it with:\n```bash\nexport TABULAR_DATA_DIR=\"/absolute/custom/path\"\n./tabular\n```\nManual migration: copy the old folder to the new location before switching \u0026 restarting.\n\nContents of the data directory:\n- `preferences.*` — UI \u0026 app settings\n- `cache.*` — metadata \u0026 driver caches\n- `queries/` — saved queries\n- `history/` — executed query history\n\n## 7. 
Development Guide\n### Project Layout (selected)\n```\nsrc/\n    main.rs              # Entry point\n    window_egui.rs       # UI / egui integration\n    editor.rs            # Query editor logic\n    editor_autocomplete.rs\n    sidebar_*.rs         # Side panels (database, history, queries)\n    driver_*.rs          # Database driver abstraction\n    export.rs            # CSV / XLSX export\n    self_update.rs       # Update check \u0026 apply\n    config.rs            # Preferences \u0026 data directory\n    models/              # Data structures \u0026 enums\n    query_ast/           # Query AST layer (experimental, enabled by default)\n```\n\n### Quick Start (Dev)\n```bash\ngit clone https://github.com/tabular-id/tabular.git\ncd tabular\ncargo run\n```\n\n### Common Tasks\n| Action        | Command                          |\n|---------------|----------------------------------|\n| Build debug   | `cargo build`                    |\n| Run           | `cargo run`                      |\n| Test          | `cargo test`                     |\n| Lint (clippy) | `cargo clippy -- -D warnings`    |\n| Format        | `cargo fmt`                      |\n| Release build | `cargo build --release`          |\n\n### Logging\n```bash\nRUST_LOG=info cargo run\n```\n\n### Adding a New Driver (short)\n1) Create `driver_\u003cengine\u003e.rs`  2) Implement connection \u0026 execution  3) Add feature flag if optional  4) Register in `modules.rs`/factory  5) Update README.\n\n## 8. 
Core Dependencies (Crates)\n| Purpose        | Crate |\n|----------------|-------|\n| UI \u0026 App Shell | eframe, egui_extras |\n| Async Runtime  | tokio, futures, futures-util, tokio-util |\n| Relational DB  | sqlx (postgres, mysql, sqlite) |\n| SQL Server     | tiberius |\n| Redis          | redis |\n| MongoDB        | mongodb, bson |\n| Data Formats   | serde, serde_json, chrono, rust_decimal, hex, csv, xlsxwriter |\n| File Dialog    | rfd |\n| Update         | reqwest, self_update, semver |\n| Logging        | log, env_logger, dotenv |\n| Utilities      | dirs, regex, colorful |\n\nSee `Cargo.toml` for exact versions (current package version: 0.5.18).\n\n## 9. Contributing\nContributions are welcome (bug fixes, new drivers, UI, performance). Suggested workflow:\n1. Fork \u0026 create a feature branch.\n2. Run `cargo fmt \u0026\u0026 cargo clippy` before committing.\n3. Ensure release build compiles: `cargo build --release`.\n4. Open a PR with a concise description \u0026 screenshots (for UI changes).\n\n## 10. Troubleshooting\n| Issue                           | Hint                                               |\n|---------------------------------|----------------------------------------------------|\n| Build fails: clang not found    | Install clang / set `LIBCLANG_PATH`                |\n| TLS errors on connect           | Verify certificates \u0026 network reachability         |\n| SQLite file locked              | Close other processes; check file permissions      |\n| UI freeze on long queries       | Use server pagination; streaming improvements WIP  |\n\n## 11. Roadmap (High level)\n- Windows build \u0026 signing\n- iPadOS adaptive layout\n- Query formatter / beautifier\n- Result pagination for large datasets\n- Connection grouping \u0026 tags\n- Scripting/extension layer\n- Secure secrets storage (Keychain/KWallet/Credential Manager)\n\n## 12. 
License\nThis project is dual‑licensed:\n\n1) GNU Affero General Public License v3 (AGPL‑3.0) — see `LICENSE-AGPL`\n2) Commercial License — contact PT. Vneu Teknologi Indonesia (see `LICENSE`)\n\nIn short: use AGPL for OSS/non‑commercial; obtain a commercial license for closed‑source/commercial integration.\n\n## 13. Acknowledgements\nBuilt with the Rust ecosystem. The egui \u0026 sqlx projects are especially instrumental.\n\n---\nMade with Rust 🦀 for people who love fast, native tools.\n\n## 14. Agnostic AST Architecture (Advanced)\n\n## 📋 Overview\n\nTabular uses a **database‑agnostic AST (Abstract Syntax Tree)** to separate query logic from database‑specific implementations. Benefits:\n\n- ✅ **Optimal Performance**: Plan caching, rewrite optimization\n- ✅ **Easy Extensibility**: Add new databases without changing core logic\n- ✅ **Type Safety**: Compile-time checking with Rust\n- ✅ **Maintainability**: Clear separation of concerns\n\n## 🏗️ Architecture Layers\n\n### Layer 1: Parser (Database‑agnostic)\n```\nRaw SQL → sqlparser → AST (generic) → Logical Plan\n```\n- Uses the `sqlparser` crate for universal SQL parsing\n- Not aware of any specific database\n- Output: `LogicalQueryPlan` (database‑agnostic IR)\n\n**File**: `src/query_ast/parser.rs`\n\n### Layer 2: Logical Plan (Database‑agnostic)\n```rust\npub enum LogicalQueryPlan {\n    Projection { exprs: Vec\u003cExpr\u003e, input: Box\u003cLogicalQueryPlan\u003e },\n    Filter { predicate: Expr, input: Box\u003cLogicalQueryPlan\u003e },\n    Sort { items: Vec\u003cSortItem\u003e, input: Box\u003cLogicalQueryPlan\u003e },\n    Limit { limit: u64, offset: u64, input: Box\u003cLogicalQueryPlan\u003e },\n    Join { left, right, on, kind },\n    TableScan { table, alias },\n    // ... 
etc\n}\n```\n\n**File**: `src/query_ast/logical.rs`\n\n**Benefits**:\n- All databases share the same structure\n- Optimizations apply to all databases\n- Easy to visualize and debug\n\n### Layer 3: Rewrite/Optimizer (Database‑agnostic)\n```\nLogical Plan → Apply Rules → Optimized Logical Plan\n```\n\n**Rules** (apply to all databases):\n- Filter pushdown\n- Projection pruning\n- CTE inlining\n- Predicate merging\n- Auto‑limit injection\n- Pagination rewrite\n\n**File**: `src/query_ast/rewrite.rs`\n\n**Example**:\n```rust\n// Before rewrite:\nProjection -\u003e Filter -\u003e Filter -\u003e TableScan\n// After rewrite:\nProjection -\u003e Filter(merged) -\u003e TableScan\n```\n\n### Layer 4: Emitter (Database‑specific)\n```\nOptimized Plan → Dialect → SQL for Target DB\n```\n\n**Trait-based**:\n```rust\npub trait SqlDialect {\n    fn quote_ident(\u0026self, ident: \u0026str) -\u003e String;\n    fn emit_limit(\u0026self, limit: u64, offset: u64) -\u003e String;\n    fn supports_window_functions(\u0026self) -\u003e bool;\n    // ... etc\n}\n```\n\n**Implementations**:\n- `MySqlDialect`: Backticks, `LIMIT n OFFSET m`\n- `PostgresDialect`: Double quotes, window functions\n- `MssqlDialect`: Square brackets, `TOP n`, `OFFSET FETCH`\n- `SqliteDialect`: Backticks, limited window support\n- `MongoDialect`: Minimal SQL, mostly native operations\n- `RedisDialect`: Very limited SQL\n\n**Files**:\n- `src/query_ast/emitter/mod.rs` (core emitter)\n- `src/query_ast/emitter/dialect.rs` (dialect trait + implementations)\n\n### Layer 5: Executor (Database‑specific)\n```\nEmitted SQL → Connection Pool → Execute → Results\n```\n\n**Trait-based**:\n```rust\n#[async_trait]\npub trait DatabaseExecutor {\n    fn database_type(\u0026self) -\u003e DatabaseType;\n    async fn execute_query(\u0026self, sql: \u0026str, ...) 
-\u003e Result\u003cQueryResult, ...\u003e;\n    fn supports_feature(\u0026self, feature: SqlFeature) -\u003e bool;\n}\n```\n\n**File**: `src/query_ast/executor.rs`\n\n## 📊 Performance Optimizations\n\n### 1. Multi-Level Caching\n\n```rust\n// Level 1: Structural fingerprint (quick pre-check)\nlet fp = structural_fingerprint(sql); // hash without parsing\n\n// Level 2: Logical plan hash (after parsing)\nlet plan = parse(sql);\nlet plan_hash = hash_plan(\u0026plan);\n\n// Cache key includes: plan_hash + db_type + pagination + options\nlet cache_key = format!(\"{}::{:?}::{:?}\", plan_hash, db_type, pagination);\n```\n\n**Cache Stats Available**:\n```rust\nlet (hits, misses) = query_ast::cache_stats();\nprintln!(\"Cache hit rate: {:.1}%\", hits as f64 / (hits + misses) as f64 * 100.0);\n```\n\n### 2. Plan Reuse\n\nSame logical plan works for different databases:\n```\nParse once → Rewrite once → Emit N times (one per DB type)\n```\n\n### 3. Zero-Copy Where Possible\n\n- Use `Arc\u003cLogicalQueryPlan\u003e` to share plans\n- String interning for column/table names\n- COW (Copy-on-Write) for rewrites\n\n## 🔧 Implementation Status\n\n### ✅ Completed\n\n| Feature | Status | Notes |\n|---------|--------|-------|\n| Basic SELECT parsing | ✅ | Includes JOIN, GROUP BY, HAVING, LIMIT |\n| Filter/WHERE | ✅ | AND/OR/NOT, comparison ops |\n| Projection/SELECT list | ✅ | Columns, aliases, * |\n| Sorting/ORDER BY | ✅ | Multiple columns, ASC/DESC |\n| Pagination/LIMIT | ✅ | Rewritten per database |\n| JOINs | ✅ | INNER, LEFT, RIGHT, FULL |\n| GROUP BY | ✅ | Multiple expressions |\n| HAVING | ✅ | Post-aggregation filters |\n| DISTINCT | ✅ | Deduplication |\n| CTEs/WITH | ✅ | Single-use CTE inlining |\n| UNION/UNION ALL | ✅ | Set operations |\n| Window Functions | ✅ | ROW_NUMBER, RANK, etc |\n| Subqueries | ✅ | Scalar, correlated detection |\n| Plan Caching | ✅ | Multi-level with fingerprinting |\n| Rewrite Rules | ✅ | 9 rules implemented |\n| MySQL Dialect | ✅ | Full 
support |\n| PostgreSQL Dialect | ✅ | Full support |\n| SQLite Dialect | ✅ | Full support |\n| MS SQL Dialect | ✅ | TOP/OFFSET FETCH syntax |\n| MongoDB Dialect | 🟡 | Limited SQL, prefer native |\n| Redis Dialect | 🟡 | Very limited |\n\n### 🚧 In Progress / TODO\n\n| Feature | Priority | Effort |\n|---------|----------|--------|\n| Executor trait implementation | HIGH | Medium |\n| Database-specific executors | HIGH | Medium |\n| Multi-statement support | MEDIUM | High |\n| DDL parsing (CREATE/ALTER/DROP) | MEDIUM | High |\n| Advanced subquery optimizations | LOW | High |\n| Correlated subquery rewrite | LOW | High |\n| Cost-based optimization | LOW | Very High |\n\n## 📈 Benchmarks (indicative)\n\n### Query Compilation Time\n\n```\nSimple SELECT:     \u003c 1ms   (cache hit: \u003c 0.1ms)\nWith JOIN:         \u003c 5ms   (cache hit: \u003c 0.1ms)\nComplex (3+ JOINs): \u003c 20ms (cache hit: \u003c 0.1ms)\n```\n\n### Cache Hit Rates\n\n```\nRepeated queries:  95%+ hit rate\nPagination queries: 90%+ hit rate\nAd-hoc queries:    40%+ hit rate (fingerprint matching)\n```\n\n### Memory Usage\n\n```\nPer cached plan:   ~5KB (typical)\nCache size limit:  1000 plans (configurable)\nTotal overhead:    ~5MB for 1000 plans\n```\n\n## 🎯 Best Practices\n\n### For Driver Authors\n\n1. **Use the AST pipeline** instead of raw SQL where possible\n2. **Implement SqlDialect** for your database\n3. **Implement DatabaseExecutor** for execution\n4. **Test with real queries** from your database\n5. **Measure performance** before/after AST integration\n\n### For Query Authors\n\n1. **Use standard SQL** for best cross-database compatibility\n2. **Avoid database-specific features** in shared code\n3. **Let the AST handle optimization** (don't manually optimize)\n4. **Check cache stats** to verify query reuse\n\n### For Maintainers\n\n1. **Keep layers separate** (don't mix concerns)\n2. **Add tests for new rewrites** (prevent regressions)\n3. 
**Document dialect differences** in code comments\n4. **Profile regularly** to catch performance regressions\n\n## 🐛 Debugging Tools\n\n### 1. Debug Plan Visualization\n\n```rust\nlet debug_str = query_ast::debug_plan(\u0026sql, \u0026db_type)?;\nprintln!(\"{}\", debug_str);\n// Output:\n// -- debug plan for PostgreSQL\n// Projection 3\n//   Filter \"id \u003e 10\"\n//     TableScan(users alias=None)\n```\n\n### 2. Rewrite Rule Tracking\n\n```rust\nlet rules = query_ast::last_rewrite_rules();\nprintln!(\"Applied rules: {:?}\", rules);\n// Output: [\"auto_limit\", \"filter_pushdown\", \"projection_prune\"]\n```\n\n### 3. Plan Metrics\n\n```rust\nlet (nodes, depth, subqueries, correlated, windows) = query_ast::plan_metrics(\u0026sql)?;\nprintln!(\"Plan complexity: {} nodes, depth {}\", nodes, depth);\n```\n\n### 4. Structural Fingerprint\n\n```rust\nlet (hash, cache_key) = query_ast::plan_structural_hash(\u0026sql, \u0026db_type, pagination, auto_limit)?;\nprintln!(\"Plan hash: {:x}\", hash);\n```\n\n## 📚 References\n\n### Key Files\n\n```\nsrc/query_ast/\n├── mod.rs              # Public API \u0026 main compilation pipeline\n├── ast.rs              # Raw AST wrapper (thin layer over sqlparser)\n├── logical.rs          # Logical plan IR (database-agnostic)\n├── parser.rs           # SQL → Logical plan conversion\n├── emitter/\n│   ├── mod.rs          # Plan → SQL emission\n│   └── dialect.rs      # Database-specific SQL generation\n├── rewrite.rs          # Optimization rules\n├── executor.rs         # Execution trait \u0026 registry\n├── plan_cache.rs       # Multi-level caching\n└── errors.rs           # Error types\n\nsrc/driver_*.rs         # Database-specific drivers (legacy + AST integration)\nsrc/connection.rs       # Connection pool management\nsrc/cache_data.rs       # Metadata caching (tables, columns, etc)\n```\n\n### External Dependencies\n\n- **sqlparser**: SQL parsing (universal)\n- **tokio**: Async runtime\n- **sqlx**: Database drivers (MySQL, 
PostgreSQL, SQLite)\n- **tiberius**: MS SQL driver\n- **mongodb**: MongoDB native driver\n- **redis**: Redis client\n\n## 🔮 Future Enhancements\n\n### Phase 2: Advanced Features\n\n1. **Cost-Based Optimizer**: Choose optimal join order\n2. **Materialized Views**: Cache intermediate results\n3. **Parallel Execution**: Split queries across cores\n4. **Query Federation**: JOIN across different databases\n\n### Phase 3: Code Generation\n\n1. **Compile to native code**: LLVM backend for hot queries\n2. **SIMD optimizations**: Vectorize filters and aggregations\n3. **GPU acceleration**: Offload heavy computations\n\n### Phase 4: AI Integration\n\n1. **Query suggestions**: Based on schema and data\n2. **Auto-indexing**: Recommend indexes based on query patterns\n3. **Query rewrite hints**: AI-powered optimization suggestions\n\n## 🤝 Contributing\n\nSee `Adding a New Database to Tabular` for a step-by-step guide to adding new database support.\n\n### Code Review Checklist\n\n- [ ] Logical plan changes don't break existing databases\n- [ ] New rewrites have tests for correctness\n- [ ] Dialect changes respect database feature sets\n- [ ] Performance benchmarks show no regressions\n- [ ] Documentation updated for new features\n\n## 📞 Support\n\n- **Issues**: GitHub issues for bugs\n- **Discussions**: GitHub discussions for questions\n- **Docs**: This file + inline code documentation\n- **Examples**: See `tests/query_ast_tests.rs`\n\n---\n\n**Last Updated**: 2025-11-11\n**Version**: 0.5.18\n**Maintainer**: Tabular Team\n\n\n# Adding a New Database to Tabular\n\nThis guide explains how to add support for a new database type using the Agnostic AST architecture.\n\n## Overview\n\nTabular uses a layered architecture with clear separation of concerns:\n\n```\nRaw SQL → Parser → Logical Plan → Optimizer/Rewriter → Emitter → Executor\n                       ↓              ↓                    ↓         ↓\n                   DB-Agnostic   DB-Agnostic          DB-Specific  
DB-Specific\n```\n\n## Steps to Add a New Database\n\n### 1. Define Database Type\n\nAdd your database to `src/models/enums.rs`:\n\n```rust\n#[derive(Clone, PartialEq, Serialize, Deserialize, Debug)]\npub enum DatabaseType {\n    MySQL,\n    PostgreSQL,\n    SQLite,\n    Redis,\n    MsSQL,\n    MongoDB,\n    YourNewDB, // \u003c- Add here\n}\n\npub enum DatabasePool {\n    MySQL(Arc\u003cMySqlPool\u003e),\n    PostgreSQL(Arc\u003cPgPool\u003e),\n    SQLite(Arc\u003cSqlitePool\u003e),\n    Redis(Arc\u003cConnectionManager\u003e),\n    MsSQL(Arc\u003cMssqlConfigWrapper\u003e),\n    MongoDB(Arc\u003cMongoClient\u003e),\n    YourNewDB(Arc\u003cYourDbConnection\u003e), // \u003c- Add here\n}\n```\n\n### 2. Implement SQL Dialect\n\nCreate `src/query_ast/emitter/your_db_dialect.rs`:\n\n```rust\nuse super::dialect::SqlDialect;\nuse crate::models::enums::DatabaseType;\n\npub struct YourDbDialect;\n\nimpl SqlDialect for YourDbDialect {\n    fn db_type(\u0026self) -\u003e DatabaseType {\n        DatabaseType::YourNewDB\n    }\n    \n    fn quote_ident(\u0026self, ident: \u0026str) -\u003e String {\n        // Your database's identifier quoting\n        format!(\"\\\"{}\\\"\", ident.replace('\"', \"\\\"\\\"\"))\n    }\n    \n    fn emit_limit(\u0026self, limit: u64, offset: u64) -\u003e String {\n        // Your database's LIMIT syntax\n        if offset \u003e 0 {\n            format!(\" LIMIT {} OFFSET {}\", limit, offset)\n        } else {\n            format!(\" LIMIT {}\", limit)\n        }\n    }\n    \n    fn supports_window_functions(\u0026self) -\u003e bool {\n        true // or false, depending on your DB\n    }\n    \n    fn supports_cte(\u0026self) -\u003e bool {\n        true // or false\n    }\n    \n    fn supports_full_join(\u0026self) -\u003e bool {\n        true // or false\n    }\n    \n    // Override other methods as needed for your database\n}\n```\n\nUpdate `src/query_ast/emitter/dialect.rs`:\n\n```rust\npub fn get_dialect(db_type: \u0026DatabaseType) 
-\u003e Box\u003cdyn SqlDialect\u003e {\n    match db_type {\n        DatabaseType::MySQL =\u003e Box::new(MySqlDialect),\n        DatabaseType::PostgreSQL =\u003e Box::new(PostgresDialect),\n        // ... other databases ...\n        DatabaseType::YourNewDB =\u003e Box::new(YourDbDialect), // \u003c- Add here\n    }\n}\n```\n\n### 3. Implement Database Executor\n\nCreate `src/query_ast/executors/your_db_executor.rs`:\n\n```rust\nuse async_trait::async_trait;\nuse crate::query_ast::executor::{DatabaseExecutor, QueryResult};\nuse crate::query_ast::errors::QueryAstError;\nuse crate::models::enums::DatabaseType;\n\npub struct YourDbExecutor {\n    // Connection pool or client reference\n}\n\nimpl YourDbExecutor {\n    pub fn new() -\u003e Self {\n        Self {}\n    }\n}\n\n#[async_trait]\nimpl DatabaseExecutor for YourDbExecutor {\n    fn database_type(\u0026self) -\u003e DatabaseType {\n        DatabaseType::YourNewDB\n    }\n    \n    async fn execute_query(\n        \u0026self,\n        sql: \u0026str,\n        database_name: Option\u003c\u0026str\u003e,\n        connection_id: i64,\n    ) -\u003e Result\u003cQueryResult, QueryAstError\u003e {\n        // 1. Get connection from pool using connection_id\n        // 2. Switch database if database_name is provided\n        // 3. Execute SQL query\n        // 4. Convert results to (Vec\u003cString\u003e, Vec\u003cVec\u003cString\u003e\u003e)\n        // 5. Return results\n        \n        todo!(\"Implement query execution for your database\")\n    }\n    \n    fn validate_query(\u0026self, sql: \u0026str) -\u003e Result\u003c(), QueryAstError\u003e {\n        // Optional: Add database-specific validation\n        Ok(())\n    }\n}\n```\n\nRegister in `src/query_ast/executor.rs`:\n\n```rust\nimpl ExecutorRegistry {\n    pub fn with_defaults() -\u003e Self {\n        let mut registry = Self::new();\n        \n        // ... 
existing executors ...
        
        #[cfg(feature = "your_db")]
        registry.register(Box::new(super::executors::YourDbExecutor::new()));
        
        registry
    }
}
```

### 4. Create Driver Module

Create `src/driver_your_db.rs`:

```rust
use crate::models::enums::DatabaseType;
use crate::window_egui::Tabular;

/// Create connection pool for your database
pub async fn create_connection_pool(
    host: &str,
    port: u16,
    username: &str,
    password: &str,
    database: Option<&str>,
) -> Result<YourDbConnection, Box<dyn std::error::Error>> {
    // Implement connection creation
    todo!()
}

/// Fetch tables from your database
pub async fn fetch_tables_from_connection(
    tabular: &Tabular,
    connection_id: i64,
    database_name: Option<String>,
) -> Result<Vec<String>, Box<dyn std::error::Error>> {
    // Implement table listing
    todo!()
}

/// Fetch columns from a table
pub async fn fetch_columns_from_table(
    tabular: &Tabular,
    connection_id: i64,
    database_name: Option<String>,
    table_name: &str,
) -> Result<Vec<(String, String)>, Box<dyn std::error::Error>> {
    // Implement column listing (name, type)
    todo!()
}
```

### 5. Update Connection Module

In `src/connection.rs`, add your database to the connection creation logic:

```rust
pub async fn get_or_create_connection_pool(
    tabular: &mut Tabular,
    connection_id: i64,
) -> Result<models::enums::DatabasePool, Box<dyn std::error::Error>> {
    // ... existing code ...
    
    match connection.connection_type {
        models::enums::DatabaseType::MySQL => { /* ... */ }
        models::enums::DatabaseType::PostgreSQL => { /* ... */ }
        // ... other databases ...
        models::enums::DatabaseType::YourNewDB => {
            let client = driver_your_db::create_connection_pool(
                &connection.host,
                connection.port,
                &connection.username,
                &connection.password,
                connection.database.as_deref(),
            ).await?;
            
            let pool = models::enums::DatabasePool::YourNewDB(Arc::new(client));
            tabular.connection_pools.insert(connection_id, pool.clone());
            Ok(pool)
        }
    }
}
```

### 6. Add UI Support

In `src/sidebar_database.rs`, add the folder icon and logic:

```rust
pub fn create_database_folders_from_connections(connections: &[models::structs::Connection]) -> Vec<models::structs::TreeNode> {
    // ... existing code ...
    
    models::enums::DatabaseType::YourNewDB => {
        let mut node = models::structs::TreeNode {
            name: format!("Your DB Connections ({})", count),
            node_type: models::enums::NodeType::YourNewDBFolder,
            // ... rest of initialization ...
        };
        node
    }
}
```

Add to `src/models/enums.rs`:

```rust
pub enum NodeType {
    // ... existing types ...
    YourNewDBFolder, // <- Add here
}
```

## Testing Your Integration

### Unit Tests

```rust
#[cfg(test)]
mod tests {
    use super::*;
    
    #[tokio::test]
    async fn test_your_db_query() {
        let executor = YourDbExecutor::new();
        let result = executor.execute_query(
            "SELECT * FROM users LIMIT 10",
            None,
            1,
        ).await;
        
        assert!(result.is_ok());
    }
    
    #[test]
    fn test_your_db_dialect() {
        let dialect = YourDbDialect;
        assert_eq!(dialect.quote_ident("table"), "\"table\"");
    }
}
```

### Integration Testing

1. Create a test connection in the UI
2. Test basic operations: connect, list tables, query data
3. Test AST features: filtering, sorting, pagination
4. Test error handling: invalid queries, connection failures

## Performance Considerations

### Connection Pooling

- Use connection pools for better performance
- Configure an appropriate pool size (typically 5-20 connections)
- Implement connection health checks

### Query Optimization

- The rewrite layer applies database-agnostic optimizations
- Add database-specific optimizations in your executor
- Consider query result caching for expensive queries

### Plan Caching

The AST layer automatically caches compiled plans. Ensure your SQL emission is deterministic for cache hits.

## Feature Flags

Add a feature flag for optional compilation.

In `Cargo.toml`:

```toml
[features]
default = ["mysql", "postgres", "sqlite"]
mysql = ["sqlx/mysql"]
postgres = ["sqlx/postgres"]
sqlite = ["sqlx/sqlite"]
your_db = ["your_db_driver"] # <- Add here
```

## Common Pitfalls

1. **Quote Identifiers**: Always use `dialect.quote_ident()`, never hardcode quotes
2. **NULL Handling**: Different databases handle NULL differently
3. **Type Mapping**: Map SQL types to your database's native types
4. **Transaction Support**: Implement proper transaction handling
5. **Error Messages**: Provide clear, actionable error messages

## Example: Adding CockroachDB

Here's a minimal example of adding CockroachDB (which is PostgreSQL-compatible):

```rust
// 1. Add to DatabaseType enum
pub enum DatabaseType {
    // ... existing ...
    CockroachDB,
}

// 2. Reuse PostgreSQL dialect (it's compatible!)
pub fn get_dialect(db_type: &DatabaseType) -> Box<dyn SqlDialect> {
    match db_type {
        // ... existing ...
        DatabaseType::CockroachDB => Box::new(PostgresDialect), // Reuse!
    }
}

// 3. Create executor (can extend PostgreSQL executor)
pub struct CockroachExecutor {
    pg_executor: PostgresExecutor, // Composition over inheritance!
}

#[async_trait]
impl DatabaseExecutor for CockroachExecutor {
    fn database_type(&self) -> DatabaseType {
        DatabaseType::CockroachDB
    }
    
    async fn execute_query(&self, sql: &str, db: Option<&str>, conn_id: i64) 
        -> Result<QueryResult, QueryAstError> 
    {
        // Delegate to PostgreSQL executor since CockroachDB is wire-compatible
        self.pg_executor.execute_query(sql, db, conn_id).await
    }
}
```

## Performance Metrics

After adding your database, measure:

- Query compilation time (should be < 1ms for simple queries)
- Cache hit rate (should be > 80% for repeated queries)
- Execution overhead (the AST layer adds < 5% overhead)

## Need Help?

- Check existing drivers (`driver_mysql.rs`, `driver_postgres.rs`)
- Review the executor trait in `query_ast/executor.rs`
- Look at dialect implementations in `query_ast/emitter/dialect.rs`
- See rewrite rules in `query_ast/rewrite.rs` for optimization ideas

## Benefits of This Architecture

✅ **Separation of Concerns**: Logic layer independent of database specifics
✅ **Reusability**: Share optimizations across all databases
✅ **Testability**: Mock executors for unit testing
✅ **Extensibility**: Add new databases without touching core logic
✅ **Performance**: Aggressive caching at every layer
✅ **Type Safety**: Rust's type system catches errors at compile time


# 🛠️ Build System Documentation

This document explains how to build Tabular for different platforms using the provided build scripts.

## 📋 Prerequisites

### Required Tools

- **Rust** (latest stable version)
- **Make** (for Unix-like systems)
- **Git** (for version control)

### Platform-Specific Requirements

#### macOS

- **Xcode Command Line Tools**: `xcode-select --install`
- **lipo**: Usually included with Xcode (for universal binaries)
- **hdiutil**: For creating DMG files (included with macOS)

#### Linux

- **GCC/Clang**: For native compilation
- **Cross**: For cross-compilation (`cargo install cross`)
- **Docker**: Required by cross for cross-compilation

#### Windows

- **MSVC**: Visual Studio Build Tools or Visual Studio
- **PowerShell**: For packaging scripts

## 🚀 Quick Start

### Using the Build Script (Recommended)

The easiest way to build Tabular is using the provided build script:

```bash
# Build for current platform only
./build.sh

# Build for specific platform
./build.sh macos
./build.sh linux
./build.sh windows

# Build for all platforms
./build.sh all

# Install dependencies and build
./build.sh all --deps

# Clean and build
./build.sh macos --clean
```

### Using Make Directly

If you prefer using Make directly:

```bash
# Show available targets
make help

# Install build dependencies
make install-deps

# Build for specific platforms
make bundle-macos
make bundle-linux
make bundle-windows

# Build everything
make release

# Clean build artifacts
make clean
```

## 📦 Build Targets

### macOS Universal Binary

Creates a universal binary that runs on both Intel and Apple Silicon Macs:

```bash
make bundle-macos
```

**Outputs:**
- `dist/macos/Tabular.app` - macOS application bundle
- `dist/macos/Tabular.dmg` - Disk image for distribution

### Linux Binaries

Creates binaries for x86_64 and aarch64 Linux systems:

```bash
make bundle-linux
```

**Outputs:**
- `dist/linux/tabular-x86_64-unknown-linux-gnu.tar.gz`
- `dist/linux/tabular-aarch64-unknown-linux-gnu.tar.gz`
- AppDir structure for potential AppImage creation

### Windows Binaries

Creates executables for x86_64 and aarch64 Windows systems:

```bash
make bundle-windows
```

**Outputs:**
- `dist/windows/tabular-x86_64-pc-windows-msvc.zip`
- `dist/windows/tabular-aarch64-pc-windows-msvc.zip`

## 🔧 Development Commands

### Quick Development Tasks

```bash
# Development build (debug mode)
make dev

# Run the application
make run

# Run tests
make test

# Check code formatting and linting
make check

# Format code
make fmt

# Show project information
make info
```

## 🏗️ Build Architecture

### Target Platforms

| Platform | Architecture | Target Triple |
|----------|-------------|---------------|
| macOS | Intel (x86_64) | `x86_64-apple-darwin` |
| macOS | Apple Silicon (ARM64) | `aarch64-apple-darwin` |
| Linux | x86_64 | `x86_64-unknown-linux-gnu` |
| Linux | ARM64 | `aarch64-unknown-linux-gnu` |
| Windows | x86_64 | `x86_64-pc-windows-msvc` |
| Windows | ARM64 | `aarch64-pc-windows-msvc` |

### Build Process

1. **Dependency Installation**: Install Rust targets and required tools
2. **Cross-Compilation**: Build for each target platform
3. **Universal Binary Creation**: Combine macOS binaries using `lipo`
4. **Packaging**: Create platform-specific packages (DMG, tar.gz, zip)
5. **Distribution**: Output ready-to-distribute packages

## 📁 Directory Structure

After building, the following structure is created:

```
tabular/
├── target/                     # Rust build artifacts
│   ├── x86_64-apple-darwin/
│   ├── aarch64-apple-darwin/
│   ├── universal-apple-darwin/
│   └── ...
├── dist/                       # Distribution packages
│   ├── macos/
│   │   ├── Tabular.app
│   │   └── Tabular.dmg
│   ├── linux/
│   │   ├── tabular-x86_64.tar.gz
│   │   └── tabular-aarch64.tar.gz
│   └── windows/
│       ├── tabular-x86_64.zip
│       └── tabular-aarch64.zip
└── ...
```

## 🤖 Continuous Integration

The project includes GitHub Actions workflows for automated building:

### Workflow Triggers

- **Push to main/develop**: Builds all platforms
- **Pull requests**: Builds all platforms for testing
- **Tag push (v*)**: Builds and creates a GitHub release

### Artifacts

- All builds are saved as GitHub Actions artifacts
- Tagged releases automatically create GitHub releases with binaries

## 🐛 Troubleshooting

### Common Issues

#### Missing Rust Targets

```bash
# Solution: Install missing targets
make install-deps
```

#### Cross-compilation Failures

```bash
# Solution: Install cross and Docker
cargo install cross
# Make sure Docker is running
```

#### macOS Code Signing Issues

```bash
# For development builds, you can skip code signing
# For distribution, you'll need an Apple Developer Certificate
```

#### Windows Build Failures

```bash
# Make sure you have MSVC build tools installed
# Alternative: Use the GNU toolchain (x86_64-pc-windows-gnu)
```

### Getting Help

If you encounter issues:

1. Check the build logs for specific error messages
2. Ensure all prerequisites are installed
3. Try cleaning and rebuilding: `make clean && make release`
4. Check the GitHub Issues for known problems

## 📝 Notes

- **Universal macOS Binary**: The macOS build creates a universal binary that works on both Intel and Apple Silicon Macs
- **Cross-Compilation**: Linux and Windows builds use cross-compilation for ARM64 targets
- **Size Optimization**: Release builds include optimizations for smaller binary size
- **Dependencies**: The build system automatically handles Rust target installation

## 🎯 Example Build Session

Here's a complete example of building for all platforms:

```bash
# Clone the repository
git clone https://github.com/your-repo/tabular.git
cd tabular

# Install dependencies and build everything
./build.sh all --deps

# Check the results
ls -la dist/
```

This will create distribution-ready packages for macOS, Linux, and Windows.


Join us at Telegram

![Telegram Group](screenshots/telegram.png)
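The identifier-quoting pitfall deserves a concrete sketch: a dialect's `quote_ident` should both wrap the name and escape quote characters embedded in it, which hardcoded `"..."` wrapping misses. A minimal ANSI-style version (a hypothetical free function, not the project's actual implementation) might look like:

```rust
/// Minimal ANSI-style identifier quoting: wrap the name in double quotes
/// and escape embedded double quotes by doubling them.
fn quote_ident(name: &str) -> String {
    format!("\"{}\"", name.replace('"', "\"\""))
}
```

This satisfies the dialect unit test shown earlier (`quote_ident("table")` yields `"table"` with quotes) and also survives hostile names containing `"`, which naive string concatenation would turn into invalid or injectable SQL.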