{"id":13481528,"url":"https://github.com/mgutz/dat","last_synced_at":"2026-01-11T01:33:14.265Z","repository":{"id":27471974,"uuid":"30951279","full_name":"mgutz/dat","owner":"mgutz","description":"Go Postgres Data Access Toolkit","archived":false,"fork":false,"pushed_at":"2020-10-25T07:33:13.000Z","size":610,"stargazers_count":611,"open_issues_count":25,"forks_count":62,"subscribers_count":23,"default_branch":"v1","last_synced_at":"2024-06-20T16:52:40.250Z","etag":null,"topics":["go","nested-transactions","postgres","sql"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mgutz.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGES.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2015-02-18T03:58:31.000Z","updated_at":"2024-06-20T01:44:38.000Z","dependencies_parsed_at":"2022-08-31T19:02:08.905Z","dependency_job_id":null,"html_url":"https://github.com/mgutz/dat","commit_stats":null,"previous_names":[],"tags_count":12,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mgutz%2Fdat","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mgutz%2Fdat/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mgutz%2Fdat/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mgutz%2Fdat/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mgutz","download_url":"https://codeload.github.com/mgutz/dat/tar.gz/refs/heads/v1","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":213388284,"owners_count":15579711,"icon_url":"https://gith
ub.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["go","nested-transactions","postgres","sql"],"created_at":"2024-07-31T17:00:52.528Z","updated_at":"2026-01-11T01:33:14.221Z","avatar_url":"https://github.com/mgutz.png","language":"Go","readme":"# dat\n\n[GoDoc](https://godoc.org/github.com/mgutz/dat)\n\n`dat` (Data Access Toolkit) is a fast, lightweight Postgres library for Go.\n\n*   Focused on Postgres. See `Insect`, `Upsert`, `SelectDoc`, `QueryJSON`\n\n*   Built on a solid foundation [sqlx](https://github.com/jmoiron/sqlx)\n\n    ```go\n    // child DB is *sqlx.DB\n    DB.DB.Queryx(`SELECT * FROM users`)\n    ```\n\n*   SQL and backtick friendly\n\n    ```go\n    DB.SQL(`SELECT * FROM people LIMIT 10`).QueryStructs(\u0026people)\n    ```\n\n*   JSON Document retrieval (single trip to Postgres, requires Postgres 9.3+)\n\n    ```go\n    DB.SelectDoc(\"id\", \"user_name\", \"avatar\").\n        Many(\"recent_comments\", `SELECT id, title FROM comments WHERE id = users.id LIMIT 10`).\n        Many(\"recent_posts\", `SELECT id, title FROM posts WHERE author_id = users.id LIMIT 10`).\n        One(\"account\", `SELECT balance FROM accounts WHERE user_id = users.id`).\n        From(\"users\").\n        Where(\"id = $1\", 4).\n        QueryStruct(\u0026obj) // obj must be agreeable with json.Unmarshal()\n    ```\n\n    results in\n\n    ```json\n    {\n        \"id\": 4,\n        \"user_name\": \"mario\",\n        \"avatar\": \"https://imgur.com/a23x.jpg\",\n        \"recent_comments\": [{\"id\": 1, \"title\": \"...\"}],\n        \"recent_posts\": [{\"id\": 1, \"title\": \"...\"}],\n        \"account\": 
{\n            \"balance\": 42.00\n        }\n    }\n    ```\n\n*   JSON marshalable bytes (requires Postgres 9.3+)\n\n    ```go\n    var b []byte\n    b, _ = DB.SQL(`SELECT id, user_name, created_at FROM users WHERE user_name = $1 `,\n        \"mario\",\n    ).QueryJSON()\n\n    // straight into map\n    var obj map[string]interface{}\n    DB.SQL(`SELECT id, user_name, created_at FROM users WHERE user_name = $1 `,\n        \"mario\",\n    ).QueryObject(\u0026obj)\n    ```\n\n*   Ordinal placeholders\n\n    ```go\n    DB.SQL(`SELECT * FROM people WHERE state = $1`, \"CA\").Exec()\n    ```\n\n*   SQL-like API\n\n    ```go\n    err := DB.\n        Select(\"id, user_name\").\n        From(\"users\").\n        Where(\"id = $1\", id).\n        QueryStruct(\u0026user)\n    ```\n\n*   Redis caching\n\n    ```go\n    // cache result for 30 seconds\n    key := \"user:\" + strconv.Itoa(user.ID)\n    err := DB.\n        Select(\"id, user_name\").\n        From(\"users\").\n        Where(\"id = $1\", user.ID).\n        Cache(key, 30 * time.Second, false).\n        QueryStruct(\u0026user)\n    ```\n\n*   Nested transactions\n\n*   Per-query timeout with database cancellation logic `pg_cancel_backend`\n\n*   SQL and slow query logging\n\n*   Performant\n\n    -   ordinal placeholder logic is optimized to be nearly as fast as using `?`\n    -   `dat` can interpolate queries locally, resulting in a performance increase\n        over plain database/sql and sqlx. [Benchmarks](https://github.com/mgutz/dat/wiki/Benchmarks)\n\n## Getting Started\n\nGet it\n\ndat.v1 uses the [glide](https://github.com/Masterminds/glide) package dependency manager. 
\nEarlier builds relied on gopkg.in, which at the time was as good a solution as any.\nWe will move to `dep` once it is stable.\n\n```sh\nglide get gopkg.in/mgutz/dat.v1/sqlx-runner\n```\n\nUse it\n\n```go\nimport (\n    \"database/sql\"\n    \"fmt\"\n    \"time\"\n\n    _ \"github.com/lib/pq\"\n    \"gopkg.in/mgutz/dat.v1\"\n    \"gopkg.in/mgutz/dat.v1/sqlx-runner\"\n)\n\n// global database (pooling provided by SQL driver)\nvar DB *runner.DB\n\nfunc init() {\n    // create a normal database connection through database/sql\n    db, err := sql.Open(\"postgres\", \"dbname=dat_test user=dat password=!test host=localhost sslmode=disable\")\n    if err != nil {\n        panic(err)\n    }\n\n    // ensures the database can be pinged with an exponential backoff (15 min)\n    runner.MustPing(db)\n\n    // set to reasonable values for production\n    db.SetMaxIdleConns(4)\n    db.SetMaxOpenConns(16)\n\n    // set this to enable interpolation\n    dat.EnableInterpolation = true\n\n    // set to check things like sessions closing.\n    // Should be disabled in production/release builds.\n    dat.Strict = false\n\n    // Log any query over 10ms as warnings. 
(optional)\n    runner.LogQueriesThreshold = 10 * time.Millisecond\n\n    DB = runner.NewDB(db, \"postgres\")\n}\n\ntype Post struct {\n    ID        int64         `db:\"id\"`\n    Title     string        `db:\"title\"`\n    Body      string        `db:\"body\"`\n    UserID    int64         `db:\"user_id\"`\n    State     string        `db:\"state\"`\n    UpdatedAt dat.NullTime  `db:\"updated_at\"`\n    CreatedAt dat.NullTime  `db:\"created_at\"`\n}\n\nfunc main() {\n    var post Post\n    err := DB.\n        Select(\"id, title\").\n        From(\"posts\").\n        Where(\"id = $1\", 13).\n        QueryStruct(\u0026post)\n    if err != nil {\n        panic(err)\n    }\n    fmt.Println(\"Title\", post.Title)\n}\n```\n\n## Feature highlights\n\n### Use Builders or SQL\n\nQuery Builder\n\n```go\nvar posts []*Post\nerr := DB.\n    Select(\"title\", \"body\").\n    From(\"posts\").\n    Where(\"created_at \u003e $1\", someTime).\n    OrderBy(\"id ASC\").\n    Limit(10).\n    QueryStructs(\u0026posts)\n```\n\nPlain SQL\n\n```go\nerr = DB.SQL(`\n    SELECT title, body\n    FROM posts WHERE created_at \u003e $1\n    ORDER BY id ASC LIMIT 10`,\n    someTime,\n).QueryStructs(\u0026posts)\n```\n\nNote: `dat` does not trim the SQL string, so any extra whitespace is\ntransmitted to the database.\n\nIn practice, SQL is easier to write with backticks. 
Indeed, the reason this\nlibrary exists is that most SQL builders introduce a DSL to insulate the user\nfrom SQL.\n\nQuery builders shine when dealing with data transfer objects (structs).\n\n### Fetch Data Simply\n\nQuery then scan result to struct(s)\n\n```go\nvar post Post\nerr := DB.\n    Select(\"id, title, body\").\n    From(\"posts\").\n    Where(\"id = $1\", id).\n    QueryStruct(\u0026post)\n\nvar posts []*Post\nerr = DB.\n    Select(\"id, title, body\").\n    From(\"posts\").\n    Where(\"id \u003e $1\", 100).\n    QueryStructs(\u0026posts)\n```\n\nQuery scalar values or a slice of values\n\n```go\nvar n int64\nDB.SQL(\"SELECT count(*) FROM posts WHERE title=$1\", title).QueryScalar(\u0026n)\n\nvar ids []int64\nDB.SQL(\"SELECT id FROM posts\").QuerySlice(\u0026ids)\n```\n\n### Field Mapping\n\n**dat** DOES NOT map fields automatically like sqlx.\nYou must explicitly set `db` struct tags in your types.\n\nEmbedded fields are mapped breadth-first.\n\n```go\ntype Realm struct {\n    RealmUUID string `db:\"realm_uuid\"`\n}\ntype Group struct {\n    GroupUUID string `db:\"group_uuid\"`\n    *Realm\n}\n\ng := \u0026Group{Realm: \u0026Realm{\"11\"}, GroupUUID: \"22\"}\n\nsql, args := InsertInto(\"groups\").Columns(\"group_uuid\", \"realm_uuid\").Record(g).ToSQL()\nexpected := `\n    INSERT INTO groups (\"group_uuid\", \"realm_uuid\")\n    VALUES ($1, $2)\n\t`\n```\n\n### Blacklist and Whitelist\n\nControl which columns get inserted or updated when processing external data\n\n```go\n// userData came in from an http.Handler; prevent clients from setting protected fields\nDB.InsertInto(\"payments\").\n    Blacklist(\"id\", \"updated_at\", \"created_at\").\n    Record(userData).\n    Returning(\"id\").\n    QueryScalar(\u0026userData.ID)\n\n// ensure the session user can only update their own information\nDB.Update(\"users\").\n    SetWhitelist(user, \"user_name\", \"avatar\", \"quote\").\n    Where(\"id = $1\", session.UserID).\n    Exec()\n```\n\n### IN queries\n\n__applicable 
when dat.EnableInterpolation == true__\n\nSimpler IN queries which expand correctly\n\n```go\nids := []int64{10,20,30,40,50}\nb := DB.SQL(\"SELECT * FROM posts WHERE id IN $1\", ids)\nb.MustInterpolate() == \"SELECT * FROM posts WHERE id IN (10,20,30,40,50)\"\n```\n\n### Tracing SQL\n\n`dat` uses [logxi](https://github.com/mgutz/logxi) for logging. By default,\n*logxi* logs all warnings and errors to the console. `dat` logs the\nSQL and its arguments on any error. In addition, `dat` logs slow queries\nas warnings if `runner.LogQueriesThreshold \u003e 0`.\n\nTo trace all SQL, set the environment variable\n\n```sh\nLOGXI=dat* yourapp\n```\n\n## CRUD\n\n### Create\n\nUse `Returning` and `QueryStruct` to insert and update struct fields in one\ntrip\n\n```go\nvar post Post\n\nerr := DB.\n    InsertInto(\"posts\").\n    Columns(\"title\", \"state\").\n    Values(\"My Post\", \"draft\").\n    Returning(\"id\", \"created_at\", \"updated_at\").\n    QueryStruct(\u0026post)\n```\n\nUse `Blacklist` and `Whitelist` to control which record (input struct) fields\nare inserted.\n\n```go\npost := Post{Title: \"Go is awesome\", State: \"open\"}\nerr := DB.\n    InsertInto(\"posts\").\n    Blacklist(\"id\", \"user_id\", \"created_at\", \"updated_at\").\n    Record(\u0026post).\n    Returning(\"id\", \"created_at\", \"updated_at\").\n    QueryStruct(\u0026post)\n\n// use wildcard to include all columns\nerr = DB.\n    InsertInto(\"posts\").\n    Whitelist(\"*\").\n    Record(\u0026post).\n    Returning(\"id\", \"created_at\", \"updated_at\").\n    QueryStruct(\u0026post)\n```\n\nInsert Multiple Records\n\n```go\n// create builder\nb := DB.InsertInto(\"posts\").Columns(\"title\")\n\n// add some new posts\nfor i := 0; i \u003c 3; i++ {\n    b.Record(\u0026Post{Title: fmt.Sprintf(\"Article %d\", i)})\n}\n\n// OR (this is more efficient as it does not do any reflection)\nfor i := 0; i \u003c 3; i++ {\n    b.Values(fmt.Sprintf(\"Article %d\", i))\n}\n\n// execute statement\n_, err := 
b.Exec()\n```\n\nInsert a row if it does not exist, or select it, in one trip to the database\n\n```go\nsql, args := DB.\n    Insect(\"tab\").\n    Columns(\"b\", \"c\").\n    Values(1, 2).\n    Where(\"d = $1\", 3).\n    Returning(\"id\", \"f\", \"g\").\n    ToSQL()\n\nsql == `\nWITH\n    sel AS (SELECT id, f, g FROM tab WHERE (d = $1)),\n    ins AS (\n        INSERT INTO \"tab\"(\"b\",\"c\")\n        SELECT $2,$3\n        WHERE NOT EXISTS (SELECT 1 FROM sel)\n        RETURNING \"id\",\"f\",\"g\"\n    )\nSELECT * FROM ins UNION ALL SELECT * FROM sel\n`\n```\n\n### Read\n\n```go\nvar other Post\n\nerr = DB.\n    Select(\"id, title\").\n    From(\"posts\").\n    Where(\"id = $1\", post.ID).\n    QueryStruct(\u0026other)\n\npublished := `\n    WHERE user_id = $1\n        AND state = 'published'\n`\n\nvar posts []*Post\nerr = DB.\n    Select(\"id, title\").\n    From(\"posts\").\n    Scope(published, 100).\n    QueryStructs(\u0026posts)\n```\n\n### Update\n\nUse `Returning` to fetch columns updated by triggers. For example,\nan update trigger on the \"updated\\_at\" column\n\n```go\nerr = DB.\n    Update(\"posts\").\n    Set(\"title\", \"My New Title\").\n    Set(\"body\", \"markdown text here\").\n    Where(\"id = $1\", post.ID).\n    Returning(\"updated_at\").\n    QueryScalar(\u0026post.UpdatedAt)\n```\n\nUpsert - Update or Insert\n\n```go\nsql, args := DB.\n    Upsert(\"tab\").\n    Columns(\"b\", \"c\").\n    Values(1, 2).\n    Where(\"d=$1\", 4).\n    Returning(\"f\", \"g\").\n    ToSQL()\n\nexpected := `\nWITH\n    upd AS (\n        UPDATE tab\n        SET \"b\" = $1, \"c\" = $2\n        WHERE (d=$3)\n        RETURNING \"f\",\"g\"\n    ), ins AS (\n        INSERT INTO \"tab\"(\"b\",\"c\")\n        SELECT $1,$2\n        WHERE NOT EXISTS (SELECT 1 FROM upd)\n        RETURNING \"f\",\"g\"\n    )\nSELECT * FROM ins UNION ALL SELECT * FROM upd\n`\n```\n\n__applicable when dat.EnableInterpolation == true__\n\nTo reset columns to their default DDL value, use `DEFAULT`. 
For example,\nto reset `payment_type`\n\n```go\nres, err := DB.\n    Update(\"payments\").\n    Set(\"payment_type\", dat.DEFAULT).\n    Where(\"id = $1\", 1).\n    Exec()\n```\n\nUse `SetBlacklist` and `SetWhitelist` to control which fields are updated.\n\n```go\n// create blacklists for each of your structs\nblacklist := []string{\"id\", \"created_at\"}\np := paymentStructFromHandler\n\n_, err := DB.\n    Update(\"payments\").\n    SetBlacklist(p, blacklist...).\n    Where(\"id = $1\", p.ID).\n    Exec()\n```\n\nUse a map of attributes\n\n```go\nattrsMap := map[string]interface{}{\"name\": \"Gopher\", \"language\": \"Go\"}\nresult, err := DB.\n    Update(\"developers\").\n    SetMap(attrsMap).\n    Where(\"language = $1\", \"Ruby\").\n    Exec()\n```\n\n### Delete\n\n```go\nresult, err = DB.\n    DeleteFrom(\"posts\").\n    Where(\"id = $1\", otherPost.ID).\n    Exec()\n```\n\n### Joins\n\nDefine JOINs in the argument to `From`\n\n```go\nerr = DB.\n    Select(\"u.*, p.*\").\n    From(`\n        users u\n        INNER JOIN posts p on (p.author_id = u.id)\n    `).\n    Where(\"p.state = 'published'\").\n    QueryStructs(\u0026liveAuthors)\n```\n\n#### Scopes\n\nScopes predefine JOIN and WHERE conditions.\nScopes may be used with `DeleteFrom`, `Select` and `Update`.\n\nAs an example, a \"published\" scope might define published posts\nby user.\n\n```go\npublishedPosts := `\n    INNER JOIN users u on (p.author_id = u.id)\n    WHERE\n        p.state = 'published' AND\n        p.deleted_at IS NULL AND\n        u.user_name = $1\n`\n\nunpublishedPosts := `\n    INNER JOIN users u on (p.author_id = u.id)\n    WHERE\n        p.state != 'published' AND\n        p.deleted_at IS NULL AND\n        u.user_name = $1\n`\n\nerr = DB.\n    Select(\"p.*\").                      
// must qualify columns\n    From(\"posts p\").\n    Scope(publishedPosts, \"mgutz\").\n    QueryStructs(\u0026posts)\n```\n\n## Creating Connections\n\nAll queries are made in the context of a connection, which is acquired\nfrom the underlying SQL driver's pool.\n\nFor one-off operations, use `DB` directly\n\n```go\nerr := DB.SQL(sql).QueryStruct(\u0026post)\n```\n\nFor multiple operations, create a `Tx` transaction.\n__`defer Tx.AutoCommit()` or `defer Tx.AutoRollback()` MUST be called__\n\n```go\nfunc PostsIndex(rw http.ResponseWriter, r *http.Request) {\n    tx, _ := DB.Begin()\n    defer tx.AutoRollback()\n\n    // Do queries with the transaction\n    var post Post\n    err := tx.Select(\"id, title\").\n        From(\"posts\").\n        Where(\"id = $1\", post.ID).\n        QueryStruct(\u0026post)\n    if err != nil {\n        // `defer AutoRollback()` is used, no need to rollback on error\n        rw.WriteHeader(500)\n        return\n    }\n\n    // do more queries with the transaction ...\n\n    // MUST commit or AutoRollback() will rollback\n    tx.Commit()\n}\n```\n\n`DB` and `Tx` implement the `runner.Connection` interface to keep code DRY\n\n```go\nfunc getUsers(conn runner.Connection) ([]*dto.Users, error) {\n    sql := `\n        SELECT *\n        FROM users\n    `\n    var users []*dto.Users\n    err := conn.SQL(sql).QueryStructs(\u0026users)\n    if err != nil {\n        return nil, err\n    }\n    return users, nil\n}\n```\n\n### Nested Transactions\n\nNested transaction logic is as follows:\n\n*   If `Commit` is called in a nested transaction, the operation is a no-op.\n    Only the top-level `Commit` commits the transaction to the database.\n\n*   If `Rollback` is called in a nested transaction, then the entire\n    transaction is rolled back. `Tx.IsRollbacked` is set to true.\n\n*   Either `defer Tx.AutoCommit()` or `defer Tx.AutoRollback()` **MUST BE CALLED**\n    for each corresponding `Begin`. 
The internal state of nested transactions is\n    tracked in these two methods.\n\n```go\nfunc nested(conn runner.Connection) error {\n    tx, err := conn.Begin()\n    if err != nil {\n        return err\n    }\n    defer tx.AutoRollback()\n\n    _, err = tx.SQL(`INSERT INTO users (email) VALUES ($1)`, \"me@home.com\").Exec()\n    if err != nil {\n        return err\n    }\n    // prevents AutoRollback\n    return tx.Commit()\n}\n\nfunc top() {\n    tx, err := DB.Begin()\n    if err != nil {\n        logger.Fatal(\"Could not create transaction\")\n    }\n    defer tx.AutoRollback()\n\n    err = nested(tx)\n    if err != nil {\n        return\n    }\n    // top level commits the transaction\n    tx.Commit()\n}\n```\n\n### Timeouts\n\nA timeout may be set on any `Query*` or `Exec` with the `Timeout` method. When a\ntimeout is set, the query is run in a separate goroutine and, should a timeout\noccur, dat will cancel the query via Postgres' `pg_cancel_backend`.\n\n```go\n_, err := DB.SQL(\"SELECT pg_sleep(1)\").Timeout(1 * time.Millisecond).Exec()\nerr == dat.ErrTimedout\n```\n\n### Dates\n\nUse the `dat.NullTime` type to properly handle nullable dates\nfrom JSON and Postgres.\n\n### Constants\n\n__applicable when dat.EnableInterpolation == true__\n\n`dat` provides often-used constants for SQL statements\n\n* `dat.DEFAULT` - inserts `DEFAULT`\n* `dat.NOW` - inserts `NOW()`\n\n### Defining Constants\n\n_UnsafeStrings and constants will panic unless_ `dat.EnableInterpolation == true`\n\nTo define SQL constants, use `UnsafeString`\n\n```go\nconst CURRENT_TIMESTAMP = dat.UnsafeString(\"NOW()\")\nDB.SQL(\"UPDATE table SET updated_at = $1\", CURRENT_TIMESTAMP)\n```\n\n`UnsafeString` is exactly that, **UNSAFE**. 
If you must use it, create a\nconstant and **NEVER** use `UnsafeString` directly as an argument, like this\n\n```go\nDB.SQL(\"UPDATE table SET updated_at = $1\", dat.UnsafeString(someVar))\n```\n\n### Primitive Values\n\nLoad scalar and slice values.\n\n```go\nvar id int64\nvar userID string\nerr := DB.\n    Select(\"id\", \"user_id\").From(\"posts\").Limit(1).QueryScalar(\u0026id, \u0026userID)\n\nvar ids []int64\nerr = DB.Select(\"id\").From(\"posts\").QuerySlice(\u0026ids)\n```\n\n### Caching\n\ndat implements caching backed by an in-memory or Redis store. The in-memory store\nis not recommended for production use. The cache can store any struct or primitive type\nthat can be marshaled/unmarshaled cleanly with the json package, since Redis is a string\nvalue store.\n\nTime is especially problematic, as JavaScript, Postgres and Go\nhave different time formats. Use the type `dat.NullTime` if you are\ngetting `cannot parse time` errors.\n\nCaching is performed before the query reaches the database driver, lessening the workload on\nthe database.\n\n```go\n// key-value store (kvs) package\nimport \"gopkg.in/mgutz/dat.v1/kvs\"\n\nfunc init() {\n    // Redis: namespace is the prefix for keys and should be unique\n    store, err := kvs.NewRedisStore(\"namespace:\", \":6379\", \"passwordOrEmpty\")\n\n    // Or, in-memory store provided by [go-cache](https://github.com/pmylund/go-cache)\n    cleanupInterval := 30 * time.Second\n    store = kvs.NewMemoryStore(cleanupInterval)\n\n    runner.SetCache(store)\n}\n\n// Cache the states query for a year using key \"namespace:states\"\nb, err := DB.\n    SQL(`SELECT * FROM states`).\n    Cache(\"states\", 365 * 24 * time.Hour, false).\n    QueryJSON()\n\n// Without a key, the checksum of the query is used as the cache key.\n// In this example, the interpolated SQL will contain the user_name\n// (if EnableInterpolation is true), effectively caching each user.\n//\n// cacheID == checksum(\"SELECT * FROM users WHERE user_name='mario'\")\nb, err := DB.\n
   SQL(`SELECT * FROM users WHERE user_name = $1`, user).\n    Cache(\"\", 365 * 24 * time.Hour, false).\n    QueryJSON()\n\n// Prefer using known unique IDs to avoid the computation cost\n// of the checksum key.\nkey := \"user:\" + user.UserName\nb, err := DB.\n    SQL(`SELECT * FROM users WHERE user_name = $1`, user).\n    Cache(key, 15 * time.Minute, false).\n    QueryJSON()\n\n// Set invalidate to true to force setting the key\nstatesUpdated := true\nb, err := DB.\n    SQL(`SELECT * FROM states`).\n    Cache(\"states\", 365 * 24 * time.Hour, statesUpdated).\n    QueryJSON()\n\n// Clears the entire cache\nrunner.Cache.FlushDB()\n\nrunner.Cache.Del(\"fookey\")\n```\n\n### SQL Interpolation\n\n__Interpolation is DISABLED by default. Set `dat.EnableInterpolation = true`\nto enable.__\n\n`dat` can interpolate locally to inline query arguments. For example,\nthis statement\n\n```go\ndb.Exec(\n    \"INSERT INTO tab (a, b, c, d) VALUES ($1, $2, $3, $4)\",\n    1, 2, 3, 4,\n)\n```\n\nis sent to the database with inlined args, bypassing the prepared statement logic in\nthe lib/pq layer\n\n```sql\n\"INSERT INTO tab (a, b, c, d) VALUES (1, 2, 3, 4)\"\n```\n\nInterpolation provides these benefits:\n\n*   Performance improvements\n*   Debugging/tracing is simpler with interpolated SQL\n*   May use safe SQL constants like `dat.NOW` and `dat.DEFAULT`\n*   Expand placeholders with slice values `$1 =\u003e (1, 2, 3)`\n\nRead [SQL Interpolation](https://github.com/mgutz/dat/wiki/Local-Interpolation) in the wiki\nfor more details and notes on SQL injection.\n\n## LICENSE\n\n[The MIT License (MIT)](https://github.com/mgutz/dat/blob/master/LICENSE)\n","funding_links":[],"categories":["Database"],"sub_categories":["Advanced Console UIs"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmgutz%2Fdat","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmgutz%2Fdat","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmgutz%2Fdat/lists"}