# lite-for-jdbc

Lightweight library to help simplify JDBC database access.
Main features:

- Lets you use SQL statements with named parameters
- Automates resource cleanup
- Provides functions for common database interaction patterns like individual and list result
  handling, updates, and batch statements

<!-- TOC -->
* [lite-for-jdbc](#lite-for-jdbc)
* [Gradle Setup](#gradle-setup)
* [Db Setup](#db-setup)
  * [Custom Database Types](#custom-database-types)
* [Methods](#methods)
  * [executeQuery](#executequery)
  * [findAll](#findall)
  * [executeUpdate](#executeupdate)
  * [executeWithGeneratedKeys](#executewithgeneratedkeys)
  * [executeBatch](#executebatch)
  * [executeBatch Counts only](#executebatch-counts-only)
  * [useNamedParamPreparedStatement](#usenamedparampreparedstatement)
  * [useNamedParamPreparedStatementWithAutoGenKeys](#usenamedparampreparedstatementwithautogenkeys)
  * [useConnection](#useconnection)
* [Query Parameters](#query-parameters)
  * [Named Parameters](#named-parameters)
  * [Positional Params](#positional-params)
* [Row Mapping](#row-mapping)
  * [rowMapper](#rowmapper)
  * [ResultSet/PreparedStatement extensions](#resultsetpreparedstatement-extensions)
  * [Java type to Postgresql column type mapping requirements](#java-type-to-postgresql-column-type-mapping-requirements)
    * [Storing timestamps with timezone](#storing-timestamps-with-timezone)
  * [propertiesToMap](#propertiestomap)
* [Transactions & Autocommit](#transactions--autocommit)
  * [withAutoCommit](#withautocommit)
  * [withTransaction](#withtransaction)
    * [withTransaction - How to Specify Isolation levels](#withtransaction---how-to-specify-isolation-levels)
  * [DataSource configuration & AutoCommit](#datasource-configuration--autocommit)
  * [DataSource settings](#datasource-settings)
  * [Testing with mockkTransaction](#testing-with-mockktransaction)
* [IntelliJ SQL language integration](#intellij-sql-language-integration)
* [Development](#development)
  * [Building](#building)
  * [Testing with Docker](#testing-with-docker)
    * [For colima Users](#for-colima-users)
  * [Issues](#issues)
  * [Contributing](#contributing)
    * [Code review standards](#code-review-standards)
    * [Testing standards](#testing-standards)
* [Breaking version changes](#breaking-version-changes)
    * [`1.9.2` -> `2.0.0`](#192---200)
<!-- TOC -->

# Gradle Setup

```kotlin
repositories {
    mavenCentral()
}

dependencies {
    api("com.target:lite-for-jdbc:2.1.1")
}
```

# Db Setup

The core of lite-for-jdbc is the Db class. A Db object is intended to be used as a singleton and
injected as a dependency in repository classes. It requires a DataSource constructor argument,
and there is a DataSourceFactory to help with that.
The typical recommendation is to use Hikari, which is configured with reasonable defaults, but you can customize
it to any DataSource.
Examples:

Using DataSourceFactory:

```kotlin
val config = DbConfig(
  type = "H2_INMEM",
  username = "user",
  password = "password",
  databaseName = "dbName"
)
val dataSource = DataSourceFactory.dataSource(config)

val db = Db(dataSource)
```

Or you can use the Db constructor that accepts a DbConfig directly.
Db will use DataSourceFactory under the covers for you.

```kotlin
val db = Db(DbConfig(
  type = "H2_INMEM",
  username = "user",
  password = "password",
  databaseName = "dbName"
))
```

See `DbConfig` for a full list of configuration options available.

## Custom Database Types

If another implementation of DataSource is required, you can register a custom "type" to be set on `DbConfig` and build the
corresponding `DataSource` in a `DataSourceBuilder` lambda as shown below.

```kotlin
DataSourceFactory.registerDataSourceBuilder("custom") { config: DbConfig ->
  val fullConfig = config.copy(
    jdbcUrl = "jdbc:custom:server//${config.host}:${config.port}/${config.databaseName}"
  )
  hikariDataSource(fullConfig)
}
```

Or if you don't wish to use the `DbConfig` configuration class, a dataSource can be constructed directly and injected into the
`Db` instance.

```kotlin
val dataSource = JdbcDataSource().apply {
    setURL("jdbc:oracle:thin:@localhost:5221:dbName")
    user = "sa"
    password = ""
}
val db = Db(dataSource)
```

# Methods

## executeQuery

```kotlin
fun <T> executeQuery(
  sql: String,
  args: Map<String, Any?> = mapOf(),
  rowMapper: (ResultSet) -> T
): T? {
    TODO("Provide the return value")
}
```

executeQuery is used for queries intended to return a single result.
Example:

```kotlin
import java.sql.ResultSet

val user: User = db.executeQuery(
  sql = "SELECT * FROM USERS WHERE id = :id",
  args = mapOf("id" to 86753)
) { resultSet: ResultSet ->
  User(
    id = resultSet.getLong("id"),
    userName = resultSet.getString("username")
  )
}
```

If you have more than one method in your repository that needs to map a resultSet into the same domain object,
it's typical to extract the mapper into a standalone function.

```kotlin
val user: User = db.executeQuery(
  sql = "SELECT * FROM USERS WHERE id = :id",
  args = mapOf("id" to 86753),
  rowMapper = ::mapToUser
)

private fun mapToUser(resultSet: ResultSet): User = resultSet.with {
  User(
    id = getLong("id"),
    userName = getString("username")
  )
}
```

`executeQuery` returns a nullable object. If you expect the query to always return a result, a common idiom is
to wrap the call with `checkNotNull`, e.g.

```kotlin
val user: User = checkNotNull(
    db.executeQuery(
        sql = "SELECT * FROM USERS WHERE id = :id",
        args = mapOf("id" to 86753),
        rowMapper = ::mapToUser
    )
) { "Unexpected state: Query didn't return a result." }
```

If you want access to the inserted content when inserting new records, the `RETURNING *` notation will give you that information,
e.g.

```kotlin
db.executeQuery(
  sql =
      """
      INSERT INTO USERS (id, username)
      VALUES (:id, :username)
      RETURNING *
      """.trimIndent(),
  args =
      mapOf(
          "id" to user.id,
          "username" to user.username
      ),
  rowMapper = ::mapToUser,
)
```

## findAll

```kotlin
fun <T> findAll(
  sql: String,
  args: Map<String, Any?> = mapOf(),
  rowMapper: (ResultSet) -> T
): List<T> {
    TODO()
}
```

findAll is used to query for a list of results. e.g.

```kotlin
val adminUsers: List<User> = db.findAll(
  sql = "SELECT id, username FROM USERS WHERE is_admin = :isAdmin",
  args = mapOf("isAdmin" to true),
  rowMapper = ::mapToUser
)
```

## executeUpdate

```kotlin
fun executeUpdate(
  sql: String,
  args: Map<String, Any?> = mapOf()
): Int {
    TODO()
}
```

executeUpdate is used for statements that do not require a resultSet response, for example updates
and DDL. It returns the number of rows affected by the query.

```kotlin
val count = db.executeUpdate(
  sql = "INSERT INTO T (id, field1, field2) VALUES (:id, :field1, :field2)",
  args = model.propertiesToMap()
)
println("$count row(s) inserted")
```

Docs on the helper function [propertiesToMap](#propertiestomap)

## executeWithGeneratedKeys

```kotlin
fun <T> executeWithGeneratedKeys(
  sql: String,
  args: Map<String, Any?> = mapOf(),
  rowMapper: (ResultSet) -> T
): List<T> {
  TODO()
}
```

executeWithGeneratedKeys is used for queries that generate a default value, using something like a sequence or a
random UUID.
These results will need to be mapped since multiple columns can be populated by defaults in a single
insert.

```kotlin
import java.sql.ResultSet

// Table T has an auto-generated value for the ID column in this example
val model = Model(field1 = "testName1", field2 = 1001)
val results = db.executeWithGeneratedKeys(
  sql = "INSERT INTO T (field1, field2) VALUES (:field1, :field2)",
  args = model.propertiesToMap(),
  rowMapper = { resultSet: ResultSet -> resultSet.getString("id") }
)

val newModel = model.copy(id = results.first())
```

## executeBatch

```kotlin
fun <T> executeBatch(
  sql: String,
  args: List<Map<String, Any?>>,
  rowMapper: (ResultSet) -> T
): List<T> {
  TODO()
}
```

executeBatch is used to run the same SQL statement with different parameters in batch mode.
This can give you significant performance improvements.

Args is a list of maps. Each item in the list will be a query execution in the batch, and the map will provide the parameters
for that execution. In the following example there will be two queries executed in a single batch: the first will
insert model1, and the second will insert model2.

RowMapper maps the results to the specified result type.

The response is a list of objects of type `T`. Each object represents a batch query result. Most likely there will
be one result per query execution. In the following example the results list has 2 elements.
The first element
provides the generated ID of the model1 object, and the second element provides the generated ID of the model2 object.

```kotlin
import java.sql.ResultSet

val models = listOf(
  Model(field1 = "testName1", field2 = 1001),
  Model(field1 = "testName2", field2 = 1002)
)

val insertedIds = db.executeBatch(
  sql = "INSERT INTO T (field1, field2) VALUES (:field1, :field2)",
  args = models.map { it.propertiesToMap() },
  rowMapper = { resultSet: ResultSet -> resultSet.getLong("id") }
)
```

## executeBatch Counts only

```kotlin
fun executeBatch(
  sql: String,
  args: List<Map<String, Any?>>
): List<Int> {
  TODO()
}
```

This overload of executeBatch also runs the same SQL statement with different parameters in batch mode,
but returns only update counts.

Args is a list of maps. Each item in the list is a query execution in the batch, and the map provides the parameters
for that execution. In the following example there are two queries executed in a single batch. The first
inserts model1, and the second inserts model2.

The response is a list of Int. Each Int indicates the rows affected by the respective query execution. In the following
example the results list has 2 elements.
The first element indicates how many rows were affected by the
model1 insert (it should be 1), and the second element indicates how many rows were affected by the model2 insert.

```kotlin
val model1 = Model(field1 = "testName1", field2 = 1001)
val model2 = Model(field1 = "testName2", field2 = 1002)
val results: List<Int> = db.executeBatch(
  sql = "INSERT INTO T (field1, field2) VALUES (:field1, :field2)",
  args = listOf(model1.propertiesToMap(), model2.propertiesToMap())
)
results.forEach { println("$it row(s) inserted") }
```

## useNamedParamPreparedStatement

```kotlin
fun <T> useNamedParamPreparedStatement(
  sql: String,
  block: (NamedParamPreparedStatement) -> T
): T {
  TODO()
}
```

useNamedParamPreparedStatement is used to run blocks of code against a prepared statement that is created for you, with clean up
done automatically. It should only be used if none of the other methods meet your needs and you need access to the
raw NamedParamPreparedStatement.

This method will NOT return generated keys.

Unlike the other methods listed here, the positional-parameter variant is simply usePreparedStatement (since the vanilla
PreparedStatement is what will be provided to you).

## useNamedParamPreparedStatementWithAutoGenKeys

```kotlin
fun <T> useNamedParamPreparedStatementWithAutoGenKeys(
  sql: String,
  block: (NamedParamPreparedStatement) -> T
): T {
  TODO()
}
```

useNamedParamPreparedStatementWithAutoGenKeys is used to run blocks of code against a prepared statement that is created
for you, with clean up done automatically.
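Conceptually, the NamedParamPreparedStatement handed to these blocks rewrites `:name` placeholders into JDBC `?` placeholders while recording each name's position, following the colon rules described under Named Parameters below. Here is a minimal standalone sketch of that translation — an illustration of the idea only, not the library's actual implementation, and it handles only single-quoted strings and the `::` escape:

```kotlin
// Translate ":name" placeholders to JDBC "?" placeholders, collecting the names in order.
// Simplified sketch: handles single-quoted strings and the "::" escape, nothing more.
fun toPositional(sql: String): Pair<String, List<String>> {
    val out = StringBuilder()
    val names = mutableListOf<String>()
    var i = 0
    var inQuote = false
    while (i < sql.length) {
        val c = sql[i]
        when {
            c == '\'' -> { inQuote = !inQuote; out.append(c); i++ }
            !inQuote && c == ':' && i + 1 < sql.length && sql[i + 1] == ':' -> {
                out.append(':'); i += 2              // "::" escapes a literal colon
            }
            !inQuote && c == ':' && i + 1 < sql.length && sql[i + 1].isLetter() -> {
                val start = i + 1
                var end = start
                while (end < sql.length && (sql[end].isLetterOrDigit() || sql[end] == '_')) end++
                names.add(sql.substring(start, end)) // remember the parameter name
                out.append('?')                      // JDBC positional placeholder
                i = end
            }
            else -> { out.append(c); i++ }
        }
    }
    return out.toString() to names
}
```

The recorded name list is what lets a named-parameter map be bound to the right `?` positions at execution time.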
This should only be used if none of the other methods meet your needs and
you need access to the raw NamedParamPreparedStatement.

This method will set the PreparedStatement to return generated keys.

Unlike the other methods listed here, the positional-parameter variant is simply usePreparedStatement (since the vanilla
PreparedStatement is what will be provided to you).

## useConnection

```kotlin
fun <T> useConnection(block: (Connection) -> T): T {
  TODO()
}
```

useConnection is the lowest-level method, and should only be used if you require direct access to the
JDBC Connection. The connection will be created and cleaned up for you.

# Query Parameters

lite-for-jdbc supports named parameters in your query. The named parameter syntax is the recommended pattern
for ease of maintenance and readability. All the examples use named parameters.

Positional parameters are also supported for backward compatibility. The positional parameter
version of each method is available by adding `PositionalParams` to the method name.
For example, to query using named parameters, call `executeQuery`, and to query using positional
parameters, call `executeQueryPositionalParams`.

## Named Parameters

In your query, use a colon to indicate a named parameter.

```sql
SELECT * FROM T WHERE field = :value1 OR field2 = :value2
```

Invoking the above example would require a map defined like this:

```kotlin
mapOf("value1" to "string value", "value2" to 123)
```

Named parameters can NOT be mixed with positional parameters - doing so will result in an exception.

```sql
SELECT * FROM T WHERE field = :value1 OR field2 = ?
```

Colons inside of quotes or double quotes will be ignored.

```sql
SELECT * FROM T WHERE field = 'This will ignore the : in the string'
```

If you need a colon in the SQL, escape it with a double colon.

```sql
SELECT * FROM T WHERE field = ::systemVariableInOracle
```

The above query will
have no named parameters, and the SQL will translate into the following:

```sql
SELECT * FROM T WHERE field = :systemVariableInOracle
```

## Positional Params

Favor named params if you can - they make the code easier to understand, and aren't at risk
of the parameter-order bugs that can happen with positional params. This library also supports
positional params if you want or need them for some reason. The positional parameter methods pass the SQL
directly to the JDBC Connection to prepare a statement. See the Java JDBC reference documentation
for more details on the syntax.

The positional parameter methods accept varargs, and the order of the arguments dictates the position in the query.

There is one exception to the use of varargs, and that's executeBatchPositionalParams, which accepts a List of Lists.

# Row Mapping

On calls on Db that return a ResultSet, a row mapping function must be provided to map each row to an object.
If the function returns a list of objects, the rowMapper will be called once per row.

## rowMapper

The rowMapper takes a ResultSet and maps the current row to the returned object. It will handle looping on the
ResultSet for you where necessary.
This mapper can interact directly with the ResultSet as seen in the following example:

```kotlin
import java.sql.ResultSet
import java.time.Instant

data class Model(
  val field1: String,
  val field2: Int,
  val field3: Instant
)

val results = db.findAll(
  sql = "SELECT * FROM model",
  rowMapper = { resultSet: ResultSet ->
    Model(
      field1 = resultSet.getString("field_1"),
      field2 = resultSet.getInt("field_2"),
      field3 = resultSet.getInstant("field_3")
    )
  }
)
```

## ResultSet/PreparedStatement extensions

To facilitate mapping, `ResultSet.get` and `PreparedStatement.set` extensions have been added.

| Extension methods                   | Behavior of ResultSet.get              | Behavior of PreparedStatement.set                              |
|-------------------------------------|----------------------------------------|----------------------------------------------------------------|
| getInstant/setInstant               | getLocalDateTime(c).toInstant(UTC)     | setObject(c, LocalDateTime.ofInstant(instant, ZoneOffset.UTC)) |
| getLocalDateTime/setLocalDateTime   | getObject(c, LocalDateTime)            | setObject(c, LocalDateTime)                                    |
| getLocalDate/setLocalDate           | getObject(c, LocalDate)                | setObject(c, LocalDate)                                        |
| getLocalTime/setLocalTime           | getObject(c, LocalTime)                | setObject(c, LocalTime)                                        |
| getOffsetDateTime/setOffsetDateTime | getObject(c, OffsetDateTime)           | setObject(c, OffsetDateTime)                                   |
| getOffsetTime/setOffsetTime         | getObject(c, OffsetTime)               | setObject(c, OffsetTime)                                       |
| getZonedDateTime/setZonedDateTime   | getOffsetDateTime(c).toZonedDateTime() | setObject(c, zonedDateTime.toOffsetDateTime())                 |
| getUUID/setUUID                     | getObject(c, UUID)                     | setObject(c, UUID)                                             |
| setDbValue                          |                                        | setObject(c, DbValue.value, DbValue.type, [DbValue.precision]) |

## Java type to Postgresql column type mapping requirements

The following table shows the Java type to Postgresql column type pairings that should be used with lite-for-jdbc.

| Java Type      | Postgresql Type | Description                                         | Example fields             |
|----------------|-----------------|-----------------------------------------------------|----------------------------|
| Instant        | Timestamp       | A moment in time without a time zone                | created_timestamp          |
| LocalDateTime  | Timestamp       | A date/time without considering time zones          | movie_opening              |
| LocalDate      | Date            | A day with no time or time zone information         | product_launch_date        |
| LocalTime      | Time            | A time of day without date or time zone information | mcdonalds_lunch_start_time |
| OffsetDateTime | TimestampTZ     | A date/time with a set offset (-/+hh:mm)            | flight_depart_timestamp    |
| OffsetTime     | TimeTZ          | A time with a set offset (-/+hh:mm)                 | store_open_time            |
| ZonedDateTime  | TimestampTZ     | A date/time with a time zone code                   | meeting_start_timestamp    |

### Storing timestamps with timezone

Postgres stores ALL timestamps as UTC. TimestampTZ accepts a time zone offset in the value when it is being
inserted, but the value is stored in UTC with NO KNOWLEDGE of the time zone offset provided. Because of this, when Offset
or Zoned DateTimes are read back from Postgres, you will always get the data with a zero offset.
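That normalization can be pictured with plain java.time, independent of both Postgres and lite-for-jdbc: once a value is reduced to an Instant, the original offset is gone, and reading it back yields a zero (UTC) offset.

```kotlin
import java.time.Instant
import java.time.OffsetDateTime
import java.time.ZoneOffset

// A value written with a +05:30 offset...
val written = OffsetDateTime.parse("2024-03-01T10:15:00+05:30")

// ...is stored as a plain instant (UTC), which no longer carries the offset...
val stored: Instant = written.toInstant()

// ...so reading it back gives the same moment in time, but with a zero offset.
val readBack: OffsetDateTime = stored.atOffset(ZoneOffset.UTC)
```

Here `readBack` equals `2024-03-01T04:45Z`: the same instant as `written`, with the `+05:30` lost.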
If you need to save
a timestamp AND a timezone, you will need two fields: one for the timestamp and one for the timezone.

We find the simplest solution is a pair of fields with an Instant/Timestamp for the date/time and a String/Text
field for a Zone ID, along with convenience methods. See the example code below.

```postgresql
-- DDL
CREATE TABLE delivery
(
  id                 INTEGER   GENERATED ALWAYS AS IDENTITY,
  delivery_address   TEXT      NOT NULL,
  delivery_timestamp TIMESTAMP NOT NULL,
  delivery_timezone  TEXT      NOT NULL
)
```

```kotlin
// Domain Class
import java.time.Instant
import java.time.ZoneId
import java.time.ZonedDateTime

const val unSet: Long = -1

data class Delivery(
  val id: Long = unSet,
  val deliveryAddress: String,
  var deliveryTimestamp: Instant,
  var deliveryTimezone: ZoneId,
) {
  fun getDeliveryDateTime(): ZonedDateTime {
    return ZonedDateTime.ofInstant(deliveryTimestamp, deliveryTimezone)
  }
  fun setDeliveryDateTime(value: ZonedDateTime) {
    this.deliveryTimestamp = value.toInstant()
    this.deliveryTimezone = value.zone
  }
}
```

```kotlin
// Mapping
private fun toDelivery(resultSet: ResultSet): Delivery = Delivery(
  id = resultSet.getLong("id"),
  deliveryAddress = resultSet.getString("delivery_address"),
  deliveryTimestamp = resultSet.getInstant("delivery_timestamp"),
  deliveryTimezone = java.time.ZoneId.of(resultSet.getString("delivery_timezone")),
)
```

## propertiesToMap

A convenience method has been added to turn an object into a map of its properties. This can be useful for turning a
domain object into a map to then be used as named parameters.

```kotlin
val propMap = model.propertiesToMap()
```

Three optional parameters exist to fine-tune the process.
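Before looking at those parameters, the core idea can be sketched with plain Java reflection. This is an illustration only — the library's real implementation and signatures may differ, and `ModelSketch` is a made-up class for the example:

```kotlin
// Build a name -> value map from an object's declared fields via reflection.
// Sketch only: ignores inherited fields and Kotlin property metadata.
fun Any.fieldsToMap(exclude: List<String> = emptyList()): Map<String, Any?> =
    this.javaClass.declaredFields
        .filter { it.name !in exclude }
        .associate { field ->
            field.isAccessible = true          // data-class backing fields are private
            field.name to field.get(this)
        }

data class ModelSketch(val field1: String, val field2: Int)
```

With this sketch, `ModelSketch("a", 7).fieldsToMap()` yields `{field1=a, field2=7}`, which is exactly the shape the named-parameter methods consume.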
exclude will skip certain fields when creating the map.

```kotlin
val propMap = model.propertiesToMap(exclude = listOf("fieldOne"))
propMap.containsKey("fieldOne") shouldBe false
```

`nameTransformer` allows you to transform the keys in the map if you want to use named parameters that don't strictly
match the field names on the domain class. For example, if you want the named parameters in your query to use snake case,
you could use the method as shown below.

```kotlin
val propMap = model.propertiesToMap(nameTransformer = ::camelToSnakeCase)
propMap.containsKey("field_one") shouldBe true
propMap.containsKey("fieldOne") shouldBe false
```

`override` will override the values for the specific keys provided. If this is paired with
`nameTransformer`, you should match the transformed name (not the field name).

```kotlin
val propMap = model.propertiesToMap(override = mapOf("fieldOne" to "Override"))
propMap["fieldOne"] shouldBe "Override"
```

# Transactions & Autocommit

Using withAutoCommit and withTransaction on Db gives you the opportunity to use a single ConnectionSession for
multiple calls. Calling the find & execute methods on Db will create a new AutoCommit ConnectionSession for each call.

## withAutoCommit

AutoCommit will commit any DML (INSERT, UPDATE, DELETE, ...) statements immediately upon execution. If a transaction
isn't required, this is more performant and simpler. See the withTransaction section to determine if your use case
may require transactions.

Since Db has convenience methods for executing a single command in its own ConnectionSession, withAutoCommit is not
required.
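To build intuition for this section and the next, the difference between the two session modes can be modeled in a few lines. This is a toy model of the semantics only — not the library's implementation:

```kotlin
// Toy model: autocommit applies each statement immediately; a transaction
// defers statements until commit() and discards them on rollback().
class ToySession(private val autoCommit: Boolean) {
    val committed = mutableListOf<String>()
    private val pending = mutableListOf<String>()

    fun execute(statement: String) {
        if (autoCommit) committed.add(statement)   // visible immediately
        else pending.add(statement)                // deferred until commit()
    }

    fun commit() { committed.addAll(pending); pending.clear() }
    fun rollback() { pending.clear() }
}
```

Real JDBC connections behave this way at the protocol level: with autocommit on, each statement is its own transaction.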
But for efficiency reasons, it should be used if multiple SQL commands will be executed, so that a single
ConnectionSession is used.

At the end of the withAutoCommit block, the AutoCommit ConnectionSession will be closed.

## withTransaction

By using a Transaction ConnectionSession, changes will NOT be immediately committed to the database, which allows for
the features listed below. If any of these features are required, use withTransaction. Also use withTransaction if you need to specify the isolation level.

* Commit - Commits any existing changes to the database and clears any savepoints and locks
* Rollback - Reverts the changes since the most recent commit, or since the beginning of the ConnectionSession if no commits
  have been done. A partial rollback can also be done to a specific Savepoint
* Savepoint - Saves the current point of the transaction, which can be used to perform a partial Rollback
* Locks - While not available as an explicit method on the transaction, you can execute a query to lock database resources,
  which will prevent their use by other connections. See the documentation of your database for specifics on what locks
  are available and what behavior they provide.

At the end of the withTransaction block, if the block exits normally the Transaction will be committed. If an
exception is thrown, the Transaction will be rolled back. After the final commit/rollback, the Transaction ConnectionSession
will be closed.

### withTransaction - How to Specify Isolation levels

By default, all transactions run with the `TRANSACTION_READ_COMMITTED` isolation level.
The following shows how to specify a higher one:

```kotlin
db.withTransaction(isolationLevel = Db.IsolationLevel.TRANSACTION_REPEATABLE_READ) { ... }

db.withTransaction(isolationLevel = Db.IsolationLevel.TRANSACTION_SERIALIZABLE) { ... }
```

When the transaction is over, the isolation level is restored to the default, TRANSACTION_READ_COMMITTED.

## DataSource configuration & AutoCommit

A dataSource has a default setting for the autocommit flag, which can be configured, but individual connections can
be modified to change their autocommit flag. This will be done if the autocommit flag is set to be incompatible with the
ConnectionSession being used: withTransaction requires a connection with autocommit set to false, and withAutoCommit
requires a connection with autocommit set to true.

Because lite-for-jdbc will modify the setting to function with the ConnectionSession, you will not see functionality issues
regardless of your setting. But you should set the DataSource to default to the most common use case in your application,
as there is a potential performance impact to changing that setting.

## DataSource settings

If the dataSource is set to a different autocommit mode than is being used by a call in lite-for-jdbc, the value will be
changed for the duration of that ConnectionSession.

## Testing with mockkTransaction

The mockkTransaction method works with mockk. A mock Db (created with mockk) is provided to the function. The mock Db will be
set up to invoke any lambda provided while calling withTransaction, providing it with a mock transaction.
The mock
transaction will be returned from the method, so additional setup can be performed using it.

Several of the mockk parameters are passed through to the mock Transaction: relaxed, relaxedUnitFun, and name.

Here is an example use of mockkTransaction:

```kotlin
val mockTransaction: Transaction = mockkTransaction(mockDb, relaxed = true)

mockDb.withTransaction { t: Transaction ->
  // This code is actually executed because mockkTransaction has done the necessary setup to make that happen
  t shouldBeSameInstanceAs mockTransaction
  t.rollback()
  t.commit()
}

// Verify the two calls were made. Because the mock transaction is relaxed, these didn't need to be set up
// before the call
verify {
  mockTransaction.rollback()
  mockTransaction.commit()
}

// Those are the only two calls that were made
confirmVerified(mockTransaction)
```

# IntelliJ SQL language integration

All the query-related methods provided by this library use `sql` as the method parameter name for SQL.
Using this pattern, you can add SQL language support to IntelliJ, which will then give you features
like auto-completion, validation, and syntax highlighting.
To enable this, add or edit your
project's `.idea/IntelliLang.xml` file with this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="LanguageInjectionConfiguration">
    <injection language="SQL" injector-id="kotlin">
      <display-name>lite-for-jdbc SQL parameter name</display-name>
      <single-file value="false" />
      <place><![CDATA[kotlinParameter().withName("sql")]]></place>
    </injection>
  </component>
</project>
```

An example showing syntax highlighting and available operations:

![](/docs/resources/sql-language-integration.png)

The authors will typically add this file and `sqlDialects.xml` (which associates the project's SQL language
with the database dialect being used) to source control, ignoring other files in the `.idea` directory.
Example `.gitignore` file in the `.idea/` directory:

```gitignore
# Ignore everything in this directory
*

# Except this file
!.gitignore

# store IntelliLang.xml customizations
!IntelliLang.xml

# keep our code style in source control
!codeStyles/
!codeStyles/*

# store the SQL dialect for this project
!sqlDialects.xml
```

In combination with the configured SQL dialect, this provides powerful SQL language support.
[JetBrains documentation](https://www.jetbrains.com/help/idea/using-language-injections.html)

# Development

## Building

Lite for JDBC uses standard gradle tasks:

```shell
./gradlew build
```

## Testing with Docker

This library leverages Docker for integration testing (e.g., via Testcontainers).
Ensure Docker is installed and running on your system.

### For colima Users

If you are using Colima as your Docker environment, you may need to create a symbolic link for compatibility with tools like Testcontainers that expect Docker to be available at the default socket path (`/var/run/docker.sock`):

```shell
sudo ln -s $HOME/.colima/default/docker.sock /var/run/docker.sock
```

This ensures that applications and libraries can communicate with the Colima Docker daemon. Be cautious, as this might interfere with other Docker installations.

## Issues

Report issues in the issues section of our repository.

## Contributing

Fork the repository and submit a pull request containing the changes, targeting the `main` branch,
and detail the issue it's meant to address.

### Code review standards

Code reviews will look for consistency with existing code standards and naming conventions.

### Testing standards

All changes should include sufficient testing to prove they work as intended.

# Breaking version changes

### `1.9.2` -> `2.0.0`

**Breaking Change**: Changed the DataSourceFactory to a singleton object and the Type on DbConfig to a String.
**Reason**: Originally only the statically configured DataSource types were supported due to the use of an
enum and a statically coded factory. This change was made so that users can modify the factory list to meet their
individual needs.
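As a sketch of what that migration looks like in call sites (the 1.x enum name shown here is an assumption for illustration — check your own code for the exact type):

```kotlin
// Before (1.x): type was an enum value (enum name assumed for illustration)
// val config = DbConfig(type = DataSourceType.H2_INMEM, username = "user", ...)

// After (2.0.0+): type is a String, so custom types can be registered
// with DataSourceFactory.registerDataSourceBuilder
val config = DbConfig(
  type = "H2_INMEM",
  username = "user",
  password = "password",
  databaseName = "dbName"
)
```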