{"id":20664522,"url":"https://github.com/outr/lightdb","last_synced_at":"2026-04-08T03:06:47.037Z","repository":{"id":48949924,"uuid":"297003966","full_name":"outr/lightdb","owner":"outr","description":"Bare Metal Modular Database","archived":false,"fork":false,"pushed_at":"2026-03-28T01:45:46.000Z","size":24294,"stargazers_count":62,"open_issues_count":4,"forks_count":1,"subscribers_count":1,"default_branch":"master","last_synced_at":"2026-03-28T07:36:22.124Z","etag":null,"topics":["database","halodb","lucene","mapdb","modular","rocksdb","scala","sql"],"latest_commit_sha":null,"homepage":"","language":"Scala","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/outr.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2020-09-20T04:49:00.000Z","updated_at":"2026-03-28T04:50:51.000Z","dependencies_parsed_at":"2024-03-24T02:26:37.362Z","dependency_job_id":"497401c8-b676-4861-bb73-8b4a5182e9b2","html_url":"https://github.com/outr/lightdb","commit_stats":null,"previous_names":[],"tags_count":91,"template":false,"template_full_name":null,"purl":"pkg:github/outr/lightdb","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/outr%2Flightdb","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/outr%2Flightdb/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/outr%2Flightdb/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposit
ories/outr%2Flightdb/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/outr","download_url":"https://codeload.github.com/outr/lightdb/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/outr%2Flightdb/sbom","scorecard":{"id":420656,"data":{"date":"2025-08-11","repo":{"name":"github.com/outr/lightdb","commit":"28f6f09f02a962f2bf44c6e73f65fefecc39ae57"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":4.4,"checks":[{"name":"Code-Review","score":0,"reason":"Found 0/27 approved changesets -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Maintained","score":10,"reason":"30 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 10","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Token-Permissions","score":0,"reason":"detected GitHub workflow tokens with excessive permissions","details":["Warn: no topLevel permission defined: .github/workflows/ci.yml:1","Warn: no topLevel permission defined: .github/workflows/release-drafter.yml:1","Warn: no topLevel permission defined: .github/workflows/scala-steward.yml:1","Info: no jobLevel write permissions found"],"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow 
detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Dangerous-Workflow","score":10,"reason":"no dangerous workflow patterns detected","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"Pinned-Dependencies","score":0,"reason":"dependency not pinned by hash detected -- score normalized to 0","details":["Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/ci.yml:30: update your workflow using https://app.stepsecurity.io/secureworkflow/outr/lightdb/ci.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/ci.yml:32: update your workflow using https://app.stepsecurity.io/secureworkflow/outr/lightdb/ci.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/ci.yml:37: update your workflow using https://app.stepsecurity.io/secureworkflow/outr/lightdb/ci.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/ci.yml:39: update your workflow using https://app.stepsecurity.io/secureworkflow/outr/lightdb/ci.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/release-drafter.yml:12: update your workflow using 
https://app.stepsecurity.io/secureworkflow/outr/lightdb/release-drafter.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/scala-steward.yml:13: update your workflow using https://app.stepsecurity.io/secureworkflow/outr/lightdb/scala-steward.yml/master?enable=pin","Info:   0 out of   2 GitHub-owned GitHubAction dependencies pinned","Info:   0 out of   4 third-party GitHubAction dependencies pinned"],"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"Vulnerabilities","score":10,"reason":"0 existing vulnerabilities detected","details":null,"documentation":{"short":"Determines if the project has open, known unfixed 
vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: MIT License: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Branch-Protection","score":0,"reason":"branch protection not enabled on development/release branches","details":["Warn: branch protection not enabled for branch 'master'"],"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"SAST","score":0,"reason":"SAST tool is not run on all commits -- score normalized to 0","details":["Warn: 0 commits out of 7 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code 
analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}}]},"last_synced_at":"2025-08-19T01:07:14.467Z","repository_id":48949924,"created_at":"2025-08-19T01:07:14.467Z","updated_at":"2025-08-19T01:07:14.467Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31537798,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-07T16:28:08.000Z","status":"online","status_checked_at":"2026-04-08T02:00:06.127Z","response_time":54,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["database","halodb","lucene","mapdb","modular","rocksdb","scala","sql"],"created_at":"2024-11-16T19:24:46.955Z","updated_at":"2026-04-08T03:06:47.030Z","avatar_url":"https://github.com/outr.png","language":"Scala","readme":"# lightdb\n[![CI](https://github.com/outr/lightdb/actions/workflows/ci.yml/badge.svg)](https://github.com/outr/lightdb/actions/workflows/ci.yml)\n\nComputationally focused database using pluggable stores\n\n## Provided Stores\n| Store                                                                 | Type               | Embedded | Persistence | Read Perf | Write Perf | Concurrency | Transactions    | Full-Text Search | Prefix Scan | Notes                                 
|\n|------------------------------------------------------------------------|--------------------|----------|-------------|-----------|------------|-------------|------------------|------------------|-----------|---------------------------------------|\n| [HaloDB](https://github.com/yahoo/HaloDB)                              | KV Store           | ✅       | ✅          | ✅✅        | ✅✅       | 🟡 (Single-threaded write) | 🟡 (Basic durability) | ❌               | ❌        | Fast, simple write-optimized store    |\n| [ChronicleMap](https://github.com/OpenHFT/Chronicle-Map)              | Off-Heap Map       | ✅       | ✅ (Memory-mapped) | ✅✅     | ✅✅       | ✅✅         | ❌              | ❌               | ❌        | Ultra low-latency, off-heap storage   |\n| [LMDB](https://www.symas.com/mdb)                                      | KV Store (B+Tree)  | ✅       | ✅          | ✅✅✅     | ✅        | 🟡 (Single write txn) | ✅✅ (ACID)     | ❌               | ✅        | Read-optimized, mature B+Tree engine  |\n| [MapDB (B-Tree)](https://mapdb.org)                                   | Java Collections   | ✅       | ✅          | ✅        | ✅        | ✅           | ✅              | ❌               | ✅        | Uses BTreeMap for ordered/prefix scans|\n| [RocksDB](https://rocksdb.org)                                        | LSM KV Store       | ✅       | ✅          | ✅✅      | ✅✅✅     | ✅           | ✅              | ❌               | ✅        | High-performance LSM tree             |\n| [Redis](https://redis.io)                                             | In-Memory KV Store | 🟡 (Optional) | ✅ (RDB/AOF) | ✅✅✅     | ✅✅       | ✅           | ✅              | ❌               | ❌        | Popular in-memory data structure store|\n| [Lucene](https://lucene.apache.org)                                   | Full-Text Search   | ✅       | ✅          | ✅✅      | ✅        | ✅           | ❌              | ✅✅✅           | ❌        | Best-in-class full-text search engine |\n| 
[OpenSearch](https://opensearch.org)                                  | Search Server      | ❌ (Server-based) | ✅   | ✅✅✅     | ✅✅      | ✅✅         | 🟡 (Transactional batching; not ACID) | ✅✅✅           | ❌        | Distributed search, joins, aggregations |\n| [SQLite](https://www.sqlite.org)                                      | Relational DB      | ✅       | ✅          | ✅        | ✅        | 🟡 (Write lock) | ✅✅ (ACID)     | ✅ (FTS5)         | 🟡        | Lightweight embedded SQL              |\n| [H2](https://h2database.com)                                          | Relational DB      | ✅       | ✅          | ✅        | ✅        | ✅           | ✅✅ (ACID)     | ❌ (Basic LIKE)    | 🟡         | Java-native SQL engine                |\n| [DuckDB](https://duckdb.org)                                          | Analytical SQL     | ✅       | ✅          | ✅✅✅     | ✅        | ✅           | ✅              | ❌               | 🟡        | Columnar, ideal for analytics         |\n| [PostgreSQL](https://www.postgresql.org)                              | Relational DB      | ❌ (Server-based) | ✅   | ✅✅✅     | ✅✅      | ✅✅         | ✅✅✅ (ACID, MVCC) | ✅✅ (TSVector)  | 🟡         | Full-featured RDBMS                   |\n\n### Legend\n- ✅: Supported / Good\n- ✅✅: Strong\n- ✅✅✅: Best-in-class\n- 🟡: Limited or trade-offs\n- ❌: Not supported\n\n## In-Progress\n- Tantivy (https://github.com/quickwit-oss/tantivy) - Working on creating a wrapper around Rust's extremely fast alternative to Apache Lucene (See https://github.com/outr/scantivy)\n\n## SBT Configuration\n\nTo add all modules:\n```scala\nlibraryDependencies += \"com.outr\" %% \"lightdb-all\" % \"4.28.0\"\n```\n\nFor a specific implementation like Lucene:\n```scala\nlibraryDependencies += \"com.outr\" %% \"lightdb-lucene\" % \"4.28.0\"\n```\n\nFor graph traversal utilities:\n```scala\nlibraryDependencies += \"com.outr\" %% \"lightdb-traversal\" % \"4.28.0\"\n```\n\nFor OpenSearch:\n```scala\nlibraryDependencies += 
\"com.outr\" %% \"lightdb-opensearch\" % \"4.28.0\"\n```\n\n---\n\n## Recent Additions\n\n### Traversal module (`lightdb-traversal`)\nLightDB now includes a lightweight graph traversal DSL that works against any `PrefixScanningTransaction` (e.g. RocksDB / LMDB / MapDB B-Tree / traversal stores).\n\n- **Import**: `import lightdb.traversal.syntax._`\n- **Common helpers**:\n  - `tx.traverse.edgesFor[Edge, From, To](fromId)`\n  - `tx.traverse.reachableFrom[Edge, Node, Node](startId)`\n  - `tx.traverse.shortestPaths[Edge, From, To](fromId, toId)`\n  - `tx.traverse.bfs(...)` / `tx.traverse.dfs(...)`\n\nExample:\n```scala\nimport lightdb.traversal.syntax._\n\ndb.flights.transaction { tx =\u003e\n  val lax = Airport.id(\"LAX\")\n  tx.storage.traverse\n    .reachableFrom[Flight, Airport, Airport](lax)\n    .map(_._to)\n    .distinct\n    .toList\n}\n```\n\n### `Query.distinct(field)`\nLightDB now exposes a backend-agnostic `distinct` API:\n```scala\ndb.people.transaction { tx =\u003e\n  tx.query.distinct(_.city).toList\n}\n// res0: Task[List[Option[City]]]\n```\n\nSupported backends:\n- **OpenSearch**: composite aggregations (paged)\n- **Lucene**: grouping (docvalues-based) for scalar fields (String/Enum/Int/Double and Option variants)\n- **SQL** (SQLite/H2/DuckDB/PostgreSQL): `SELECT DISTINCT ...` with paging\n- **SplitCollection**: delegates to the searching side (e.g. RocksDB + OpenSearch)\n\n---\n\n## OpenSearch Notes (LightDB backend)\nOpenSearch support has been expanded for large-scale ingestion and production usage.\n\n### Fast ingestion defaults (transactional)\nWhen using OpenSearch as a searching backend, LightDB favors **fast ingestion** over read-your-writes mid-transaction, and forces visibility at commit.\n\n### Facet childCount mode (speed vs exactness)\nOpenSearch cannot return an “exact distinct bucket count” for `terms` aggregations without paging.\n\n- Default is **fast/approximate** via `cardinality`.\n- You can opt into **exact** via composite paging.\n\nConfig:\n```json\n{\n  \"lightdb\": {\n    \"opensearch\": {\n      \"facetChildCount\": {\n        \"mode\": \"cardinality\",\n        \"precisionThreshold\": 40000\n      }\n    }\n  }\n}\n```\n\n### Index sorting (OpenSearch)\nLightDB can emit OpenSearch index sorting settings at index creation time:\n```json\n{\n  \"lightdb\": {\n    \"opensearch\": {\n      \"index\": {\n        \"sort\": {\n          \"fields\": [\"unifiedEntityId.keyword\", \"__lightdb_id\"],\n          \"orders\": [\"asc\", \"asc\"]\n        }\n      }\n    }\n  }\n}\n```\nNote: index sorting requires a new index (it cannot be added to an existing index).\n\n### Truncate 
behavior\nOn OpenSearch stores, `truncate` is implemented as **drop + recreate index**, which is dramatically faster than `_delete_by_query` for large indices.\n\n## Videos\nWatch this [Java User Group demonstration of LightDB](https://www.youtube.com/live/E_5fwgbF4rc?si=cxyb0Br3oCEQInTW)\n\n## Getting Started\n\nThis guide will walk you through setting up and using **LightDB**, a high-performance computational database. We'll use a sample application to explore its key features.\n\n*NOTE*: This project uses Rapid (https://github.com/outr/rapid) for effects. It's somewhat similar to cats-effect, but\nwith a focus on virtual threads and simplicity. In a normal project, you likely wouldn't be using `.sync()` to invoke\neach task, but it is used throughout this documentation so each example executes synchronously (blocking).\n\n---\n\n## Prerequisites\n\nEnsure you have the following:\n\n- **Scala** installed\n- **SBT** (Scala Build Tool) installed\n\n---\n\n## Setup\n\n### Add LightDB to Your Project\n\nAdd the following dependency to your `build.sbt` file:\n\n```scala\nlibraryDependencies += \"com.outr\" %% \"lightdb-all\" % \"4.28.0\"\n```\n\n---\n\n## Example: Defining Models and Collections\n\n### Step 1: Define Your Models\n\nLightDB uses **Document** and **DocumentModel** for schema definitions. 
Here's an example of defining a `Person` and `City`:\n\n```scala\nimport lightdb._\nimport lightdb.id._\nimport lightdb.store._\nimport lightdb.doc._\nimport fabric.rw._\n\ncase class Person(\n  name: String,\n  age: Int,\n  city: Option[City] = None,\n  nicknames: Set[String] = Set.empty,\n  friends: List[Id[Person]] = Nil,\n  _id: Id[Person] = Person.id()\n) extends Document[Person]\n\nobject Person extends DocumentModel[Person] with JsonConversion[Person] {\n  override implicit val rw: RW[Person] = RW.gen\n\n  val name: I[String] = field.index(\"name\", _.name)\n  val age: I[Int] = field.index(\"age\", _.age)\n  val city: I[Option[City]] = field.index(\"city\", _.city)\n  val nicknames: I[Set[String]] = field.index(\"nicknames\", _.nicknames)\n  val friends: I[List[Id[Person]]] = field.index(\"friends\", _.friends)\n}\n```\n\n```scala\ncase class City(name: String)\n\nobject City {\n  implicit val rw: RW[City] = RW.gen\n}\n```\n\n### Step 2: Create the Database Class\n\nDefine the database with stores for each model:\n\n```scala\nimport lightdb.sql._\nimport lightdb.store._\nimport lightdb.upgrade._\nimport java.nio.file.Path\n\nobject db extends LightDB {\n  override type SM = CollectionManager\n  override val storeManager: CollectionManager = SQLiteStore\n   \n  lazy val directory: Option[Path] = Some(Path.of(s\"docs/db/example\"))\n   \n  lazy val people: Collection[Person, Person.type] = store(Person)\n\n  override def upgrades: List[DatabaseUpgrade] = Nil\n}\n```\n\n---\n\n## Using the Database\n\n### Step 1: Initialize the Database\n\nInitialize the database:\n\n```scala\ndb.init.sync()\n```\n\n### Step 2: Insert Data\n\nAdd records to the database:\n\n```scala\nval adam = Person(name = \"Adam\", age = 21)\n// adam: Person = Person(\n//   name = \"Adam\",\n//   age = 21,\n//   city = None,\n//   nicknames = Set(),\n//   friends = List(),\n//   _id = StringId(\"tt0Vn1ttNWlw2Yr27v4uQ5HqWE5vbfMz\")\n// )\ndb.people.transaction { implicit txn =\u003e\n  
txn.insert(adam)\n}.sync()\n// res2: Person = Person(\n//   name = \"Adam\",\n//   age = 21,\n//   city = None,\n//   nicknames = Set(),\n//   friends = List(),\n//   _id = StringId(\"tt0Vn1ttNWlw2Yr27v4uQ5HqWE5vbfMz\")\n// )\n```\n\n### Step 3: Query Data\n\nRetrieve records using filters:\n\n```scala\ndb.people.transaction { txn =\u003e\n  txn.query.filter(_.age BETWEEN 20 -\u003e 29).toList.map { peopleIn20s =\u003e\n    println(s\"People in their 20s: $peopleIn20s\")\n  }\n}.sync()\n// People in their 20s: List(Person(Adam,21,None,Set(),List(),StringId(IDmTU51mzoBQCEyaxBuHrwtLEcmHTags)), ...)\n```\n\n---\n\n## Features Highlight\n\n1. **Transactions:**\n   LightDB ensures atomic operations within transactions.\n\n2. 
**Indexes:**\n   Support for various indexes, like tokenized and field-based, ensures fast lookups.\n\n3. **Aggregation:**\n   Perform aggregations such as `min`, `max`, `avg`, and `sum`.\n\n4. **Streaming:**\n   Stream records for large-scale queries.\n\n5. **Backups and Restores:**\n   Backup and restore databases seamlessly.\n\n6. **Prefix-Scanned File Storage (chunked blobs):**\n   Store file metadata under `file:\u003cid\u003e` and data chunks under `data::\u003cid\u003e::\u003cchunk\u003e`. Requires a prefix-capable store: RocksDB, LMDB, or MapDB (B-Tree).\n\n---\n\n## Advanced Queries\n\n### Aggregations\n\n```scala\ndb.people.transaction { txn =\u003e\n  txn.query\n    .aggregate(p =\u003e List(p.age.min, p.age.max, p.age.avg, p.age.sum))\n    .toList\n    .map { results =\u003e\n      println(s\"Results: $results\")\n    }\n}.sync()\n// Results: List(MaterializedAggregate({\"ageMin\": 21, \"ageMax\": 21, \"ageAvg\": 21.0, \"ageSum\": 861},repl.MdocSession$MdocApp$Person$@4aeb417e))\n```\n\n### Grouping\n\n```scala\ndb.people.transaction { txn =\u003e\n  txn.query.grouped(_.age).toList.map { grouped =\u003e\n    println(s\"Grouped: $grouped\")\n  }\n}.sync()\n// Grouped: List(Grouped(21,List(Person(Adam,21,None,Set(),List(),StringId(IDmTU51mzoBQCEyaxBuHrwtLEcmHTags)), ...)))\n```\n\n---\n\n## Backup and Restore\n\nBackup your database:\n\n```scala\nimport lightdb.backup._\nimport java.io.File\n\nDatabaseBackup.archive(db.stores, new File(\"backup.zip\")).sync()\n// res6: Int = 42\n```\n\nRestore from a backup:\n\n```scala\nDatabaseRestore.archive(db, new File(\"backup.zip\")).sync()\n// res7: Int = 42\n```\n\n---\n\n## File Storage (prefix, chunked)\n\nPrefix-capable stores only: RocksDB, LMDB, MapDB (B-Tree). Metadata lives at `file:\u003cid\u003e`, chunks at `data::\u003cid\u003e::\u003cchunk\u003e`, enabling ordered streaming by chunk index.\n\n```scala\nimport lightdb.file.FileStorage\nimport lightdb.rocksdb.RocksDBStore    // or LMDBStore / MapDBStore\nimport lightdb.KeyValue\nimport rapid.Stream\nimport java.nio.file.Path\n\nobject fileDb extends LightDB {\n  override type SM = RocksDBStore.type\n  override val storeManager: RocksDBStore.type = RocksDBStore\n  override val directory = Some(Path.of(\"db/files\"))\n  override def upgrades = Nil\n}\n\nfileDb.init.sync()\n\n// Use a dedicated KeyValue store for files (prefix-capable manager required)\nval fs = FileStorage(fileDb, \"_files\")\n\n// Write (chunk size = 4 bytes)\nval meta = fs.put(\"hello.txt\", Stream.emits(Seq(\"Hello RocksDB!\".getBytes(\"UTF-8\"))), chunkSize = 4).sync()\n\n// Read back\nval bytes = fs.readAll(meta.fileId).sync().flatten\nprintln(new String(bytes, \"UTF-8\")) // Hello RocksDB!\n\n// List and delete\nfs.list.sync().map(_.fileName)\nfs.delete(meta.fileId).sync()\n```\n\n---\n\n## Full-Text 
Search (Lucene)\n\n```scala\nimport lightdb._\nimport lightdb.lucene.LuceneStore\nimport lightdb.doc._\nimport lightdb.id.Id\nimport fabric.rw._\nimport java.nio.file.Path\n\ncase class Note(text: String, _id: Id[Note] = Id()) extends Document[Note]\nobject Note extends DocumentModel[Note] with JsonConversion[Note] {\n  implicit val rw: RW[Note] = RW.gen\n  val text = field.tokenized(\"text\", _.text)\n}\n\nobject luceneDb extends LightDB {\n  type SM = LuceneStore.type\n  val storeManager = LuceneStore\n  val directory = Some(Path.of(\"db/lucene\"))\n  val notes = store(Note)\n  def upgrades = Nil\n}\n\nluceneDb.init.sync()\nluceneDb.notes.transaction(_.insert(Note(\"the quick brown fox\"))).sync()\n// res9: Note = Note(\n//   text = \"the quick brown fox\",\n//   _id = StringId(\"Lhk8b2DFdR3jQRhQz6oqI1PXMTRRYdLQ\")\n// )\nval hits = luceneDb.notes.transaction { txn =\u003e\n  txn.query.search.flatMap(_.list)\n}.sync()\n// hits: List[Note] = List(\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"M7qo7X5v5nVvbluwSXttiSRRNsf1HLJt\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"ExVqgjFa1qAHCYjEcENh95NOqDgeptlc\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"YaVNl1h9Lrokyi92cFk7lN9PRxLuzwcB\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"UxL11bDmFVJeccJEe1N81s1ODCI4VRdi\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"8wOnyURN7pNQGdOYDYxoq2WHWfksKFIO\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"OYDIHiDvEXfGr7c7khBUGOwG2W8l846w\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"y5jmX8t4igTRWcYlE3zuQ1wMWv5hgAMs\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"9lVJaLpnDPgUMrCAyROzeZSn1lMiV6AU\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     
_id = StringId(\"sYZU2AenZBJqffIWoB38NWOdhLo7YJlo\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"U4qUYyinc2EzHsAY3qXweiIddMQ4zQcA\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"X2NUcxZvFLrW49DvD9lqtK5z5VyaOPGT\")\n//   ),\n//   Note(\n//     text = \"the quick brown fox\",\n//     _id = StringId(\"hZkeeBWM2FCqoyuVejKxUADcHotSXVk0\")\n//   ),\n// ...\n```\n\n## Spatial Queries\n\n```scala\nimport lightdb._\nimport lightdb.doc._\nimport lightdb.id.Id\nimport lightdb.spatial.Point\nimport lightdb.distance._\nimport lightdb.sql.SQLiteStore\nimport fabric.rw._\nimport java.nio.file.Path\n\ncase class Place(name: String, loc: Point, _id: Id[Place] = Id()) extends Document[Place]\nobject Place extends DocumentModel[Place] with JsonConversion[Place] {\n  implicit val rw: RW[Place] = RW.gen\n  val name = field(\"name\", _.name)\n  val loc  = field.index(\"loc\", _.loc) // index for spatial queries\n}\n\nobject spatialDb extends LightDB {\n  type SM = SQLiteStore.type\n  val storeManager = SQLiteStore\n  val directory = Some(Path.of(\"db/spatial\"))\n  val places = store(Place)\n  def upgrades = Nil\n}\n\nspatialDb.init.sync()\nspatialDb.places.transaction(_.insert(Place(\"NYC\", Point(40.7142, -74.0119)))).sync()\n// res11: Place = Place(\n//   name = \"NYC\",\n//   loc = Point(latitude = 40.7142, longitude = -74.0119),\n//   _id = StringId(\"LMqHG9Mdrj8ObS6B2HSigzzkHMxAIn2h\")\n// )\n// Distance filters are supported on spatial-capable backends; example filter:\nval nycFilter = Place.loc.distance(Point(40.7, -74.0), 5_000.meters)\n// nycFilter: Filter[Place] = Distance(\n//   fieldName = \"loc\",\n//   from = Point(latitude = 40.7, longitude = -74.0),\n//   radius = Distance(5000.0)\n// )\n```\n\n## Graph Traversal (Edges)\n\n```scala\nimport lightdb._\nimport lightdb.doc._\nimport lightdb.graph.{EdgeDocument, EdgeModel}\nimport lightdb.id.Id\nimport fabric.rw._\nimport 
java.nio.file.Path\n\ncase class GPerson(name: String, _id: Id[GPerson] = Id()) extends Document[GPerson]\nobject GPerson extends DocumentModel[GPerson] with JsonConversion[GPerson] {\n  implicit val rw: RW[GPerson] = RW.gen\n  val name = field(\"name\", _.name)\n}\n\ncase class Follows(_from: Id[GPerson], _to: Id[GPerson]) extends EdgeDocument[Follows, GPerson, GPerson] {\n  override val _id: EdgeId[Follows, GPerson, GPerson] = EdgeId(_from, _to)\n}\nobject Follows extends EdgeModel[Follows, GPerson, GPerson] with JsonConversion[Follows] {\n  implicit val rw: RW[Follows] = RW.gen\n}\n\nobject graphDb extends LightDB {\n  type SM = lightdb.store.hashmap.HashMapStore.type\n  val storeManager = lightdb.store.hashmap.HashMapStore\n  val directory = None\n  val people = store(GPerson)\n  val follows = store(Follows)\n  def upgrades = Nil\n}\n\ngraphDb.init.sync()\n```\n\n## Split Collection (storage + search)\n\n```scala\nimport lightdb._\nimport lightdb.doc._\nimport lightdb.store.split.SplitStoreManager\nimport lightdb.rocksdb.RocksDBStore\nimport lightdb.lucene.LuceneStore\nimport fabric.rw._\nimport java.nio.file.Path\n\ncase class Article(title: String, body: String, _id: Id[Article] = Id()) extends Document[Article]\nobject Article extends DocumentModel[Article] with JsonConversion[Article] {\n  implicit val rw: RW[Article] = RW.gen\n  val title = field.index(\"title\", _.title)\n  val body  = field.tokenized(\"body\", _.body)\n}\n\nobject splitDb extends LightDB {\n  type SM = SplitStoreManager[lightdb.rocksdb.RocksDBStore.type, lightdb.lucene.LuceneStore.type]\n  val storeManager = SplitStoreManager(RocksDBStore, LuceneStore)\n  val directory = Some(Path.of(\"db/split\"))\n  val articles = store(Article)\n  def upgrades = Nil\n}\n```\n\n## Sharded / MultiStore\n\n```scala\nimport lightdb._\nimport lightdb.doc._\nimport lightdb.store.hashmap.HashMapStore\nimport fabric.rw._\n\ncase class TenantDoc(value: String, _id: Id[TenantDoc] = Id()) extends 
Document[TenantDoc]\nobject TenantDoc extends DocumentModel[TenantDoc] with JsonConversion[TenantDoc] {\n  implicit val rw: RW[TenantDoc] = RW.gen\n  val value = field(\"value\", _.value)\n}\n\nobject shardDb extends LightDB {\n  type SM = HashMapStore.type\n  val storeManager = HashMapStore\n  val directory = None\n  def upgrades = Nil\n  val shards = multiStore[TenantDoc, TenantDoc.type, S[TenantDoc, TenantDoc.type]#TX, S[TenantDoc, TenantDoc.type], String](TenantDoc)\n    .withKeys(\"tenantA\", \"tenantB\")\n    .create()\n}\n\nshardDb.init.sync()\nval tenantA = shardDb.shards(\"tenantA\")\n// tenantA: HashMapStore[TenantDoc, TenantDoc] = lightdb.store.hashmap.HashMapStore@883cb00\ntenantA.transaction(_.insert(TenantDoc(\"hello\"))).sync()\n// res14: TenantDoc = TenantDoc(\n//   value = \"hello\",\n//   _id = StringId(\"DaAsbmuMSyJsqR5jYx1s4ihrqYs5GDc9\")\n// )\n```\n\n## Stored Values (config flags)\n\n```scala\nimport lightdb._\nimport fabric.rw._\n\nobject cfgDb extends LightDB {\n  type SM = lightdb.store.hashmap.HashMapStore.type\n  val storeManager = lightdb.store.hashmap.HashMapStore\n  val directory = None\n  def upgrades = Nil\n}\n\ncfgDb.init.sync()\nval featureFlag = cfgDb.stored[Boolean](\"featureX\", default = false)\n// featureFlag: StoredValue[Boolean] = StoredValue(\n//   key = \"featureX\",\n//   store = lightdb.store.hashmap.HashMapStore@31001f2b,\n//   default = repl.MdocSession$MdocApp$$Lambda/0x000000002cbf14f0@8ee7830,\n//   persistence = Stored\n// )\nfeatureFlag.set(true).sync()\n// res16: Boolean = true\n```\n\n## SQL Stores (DuckDB / SQLite)\n\n```scala\nimport lightdb._\nimport lightdb.doc._\nimport lightdb.id.Id\nimport lightdb.sql.SQLiteStore\nimport fabric.rw._\nimport java.nio.file.Path\n\ncase class Row(value: String, _id: Id[Row] = Id()) extends Document[Row]\nobject Row extends DocumentModel[Row] with JsonConversion[Row] {\n  implicit val rw: RW[Row] = RW.gen\n  val value = field(\"value\", _.value)\n}\n\nobject sqlDb extends 
LightDB {\n  type SM = SQLiteStore.type\n  val storeManager = SQLiteStore\n  val directory = Some(Path.of(\"db/sqlite-example\"))\n  val rows = store(Row)\n  def upgrades = Nil\n}\n\nsqlDb.init.sync()\nsqlDb.rows.transaction(_.insert(Row(\"hi sql\"))).sync()\n// res18: Row = Row(\n//   value = \"hi sql\",\n//   _id = StringId(\"0SrDSBnN52srB8r29uj0XQRAdApMuXzH\")\n// )\n```\n\n## Reindex / Optimize / Upgrades\n\n- `store.reIndex()` and `store.optimize()` give backends a chance to rebuild or compact data.\n- Database upgrades: implement `upgrades: List[DatabaseUpgrade]` and add migration steps; LightDB runs them on init.\n\n## Clean Up\n\nDispose of the database when done:\n\n```scala\ndb.dispose.sync()\n```\n\n---\n\n## Conclusion\n\nThis guide provided an overview of using **LightDB**. Experiment with its features to explore the full potential of this high-performance database. For advanced use cases, consult the API documentation.","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foutr%2Flightdb","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Foutr%2Flightdb","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foutr%2Flightdb/lists"}
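
As a final illustration, the maintenance hooks named in the Reindex / Optimize / Upgrades section can be called directly on a store. This is a minimal sketch, not verified output: it assumes the `sqlDb.rows` store from the SQL example above and the `rapid` task `.sync()` pattern used throughout this guide.

```scala
// Sketch: ask the backend to rebuild indexes, then compact storage.
// The cost (and whether either is a no-op) depends on the store implementation.
sqlDb.rows.reIndex().sync()
sqlDb.rows.optimize().sync()
```

Running these after bulk imports or schema-affecting upgrades is the typical use; on backends without native index or compaction support they are expected to be inexpensive no-ops.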