{"id":13412398,"url":"https://github.com/redis/rueidis","last_synced_at":"2026-02-17T05:04:41.031Z","repository":{"id":37033853,"uuid":"407831273","full_name":"redis/rueidis","owner":"redis","description":"A fast Golang Redis client that supports Client Side Caching, Auto Pipelining, Generics OM, RedisJSON, RedisBloom, RediSearch, etc.","archived":false,"fork":false,"pushed_at":"2026-01-26T02:18:40.000Z","size":8459,"stargazers_count":2887,"open_issues_count":17,"forks_count":229,"subscribers_count":16,"default_branch":"main","last_synced_at":"2026-01-26T14:57:40.299Z","etag":null,"topics":["cache","client-side-caching","distributed","generics","go","golang","lock","redis","redis-client","resp3","resp3-client"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/redis.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":"NOTICE","maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2021-09-18T10:38:58.000Z","updated_at":"2026-01-26T06:19:35.000Z","dependencies_parsed_at":"2023-07-27T09:58:00.247Z","dependency_job_id":"fb91559a-cc29-447c-85e7-5b969c2c327f","html_url":"https://github.com/redis/rueidis","commit_stats":{"total_commits":1216,"total_committers":66,"mean_commits":"18.424242424242426","dds":"0.24095394736842102","last_synced_commit":"6b9546712e6d841775b0d15ed9d810e546432031"},"previous_names":["rueian/rueidis","rueidis/rueidis"],"tags_count":732,"template":false,"template_full_name":null,"purl":"pkg:github/redis/rueidis","reposito
ry_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/redis%2Frueidis","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/redis%2Frueidis/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/redis%2Frueidis/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/redis%2Frueidis/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/redis","download_url":"https://codeload.github.com/redis/rueidis/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/redis%2Frueidis/sbom","scorecard":{"id":767832,"data":{"date":"2025-08-11","repo":{"name":"github.com/redis/rueidis","commit":"e6887f707f33e635e4f5d174578471dcbbfdcccd"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":5.3,"checks":[{"name":"Dangerous-Workflow","score":10,"reason":"no dangerous workflow patterns detected","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Code-Review","score":5,"reason":"Found 15/30 approved changesets -- score normalized to 5","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Maintained","score":10,"reason":"30 commit(s) and 25 issue activity found in the last 90 days -- score normalized to 10","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Packaging","score":-1,"reason":"packaging 
workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Token-Permissions","score":0,"reason":"detected GitHub workflow tokens with excessive permissions","details":["Info: jobLevel 'actions' permission set to 'read': .github/workflows/codeql.yml:19","Info: jobLevel 'contents' permission set to 'read': .github/workflows/codeql.yml:20","Warn: jobLevel 'contents' permission set to 'write': .github/workflows/release-drafter.yml:15","Info: topLevel 'contents' permission set to 'read': .github/workflows/build.yml:4","Info: topLevel 'contents' permission set to 'read': .github/workflows/codeql.yml:4","Warn: topLevel 'contents' permission set to 'write': .github/workflows/release-drafter.yml:10","Warn: topLevel 'contents' permission set to 'write': .github/workflows/tag-subpkg.yml:4"],"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"Pinned-Dependencies","score":1,"reason":"dependency not pinned by hash detected -- score normalized to 1","details":["Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/build.yml:21: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/build.yml/main?enable=pin","Warn: 
GitHub-owned GitHubAction not pinned by hash: .github/workflows/build.yml:42: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/build.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/build.yml:57: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/build.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/build.yml:60: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/build.yml/main?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/build.yml:88: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/build.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codeql.yml:30: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/codeql.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codeql.yml:33: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/codeql.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codeql.yml:39: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/codeql.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codeql.yml:42: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/codeql.yml/main?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/release-drafter.yml:20: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/release-drafter.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/tag-subpkg.yml:16: update your workflow using https://app.stepsecurity.io/secureworkflow/redis/rueidis/tag-subpkg.yml/main?enable=pin","Warn: 
goCommand not pinned by hash: dockertest.sh:7","Info:   0 out of   9 GitHub-owned GitHubAction dependencies pinned","Info:   0 out of   2 third-party GitHubAction dependencies pinned","Info:   1 out of   2 goCommand dependencies pinned"],"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: Apache License 2.0: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"Signed-Releases","score":-1,"reason":"no 
releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"SAST","score":7,"reason":"SAST tool detected but not run on all commits","details":["Info: SAST configuration detected: CodeQL","Warn: 1 commits out of 16 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}},{"name":"Vulnerabilities","score":10,"reason":"0 existing vulnerabilities detected","details":null,"documentation":{"short":"Determines if the project has open, known unfixed vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}},{"name":"Branch-Protection","score":0,"reason":"branch protection not enabled on development/release branches","details":["Warn: branch protection not enabled for branch 'main'"],"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}}]},"last_synced_at":"2025-08-23T01:21:43.825Z","repository_id":37033853,"created_at":"2025-08-23T01:21:43.825Z","updated_at":"2025-08-23T01:21:43.825Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28866752,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-29T05:56:06.453Z","status":"ssl_error","status_checked_at":"2026-01-29T05:55:57.668Z","response_time":59,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cache","client-side-caching","distributed","generics","go","golang","lock","redis","redis-client","resp3","resp3-client"],"created_at":"2024-07-30T20:01:24.219Z","updated_at":"2026-01-29T06:27:41.078Z","avatar_url":"https://github.com/redis.png","language":"Go","readme":"# rueidis\n\n[![Go Reference](https://pkg.go.dev/badge/github.com/redis/rueidis.svg)](https://pkg.go.dev/github.com/redis/rueidis)\n[![CircleCI](https://dl.circleci.com/status-badge/img/gh/redis/rueidis/tree/main.svg?style=shield)](https://dl.circleci.com/status-badge/redirect/gh/redis/rueidis/tree/main)\n[![Go Report Card](https://goreportcard.com/badge/github.com/redis/rueidis)](https://goreportcard.com/report/github.com/redis/rueidis)\n[![codecov](https://codecov.io/gh/redis/rueidis/branch/master/graph/badge.svg?token=wGTB8GdY06)](https://codecov.io/gh/redis/rueidis)\n\nA fast Golang Redis client that does auto pipelining and supports server-assisted client-side caching.\n\n## Features\n\n- [Auto pipelining for non-blocking redis commands](#auto-pipelining)\n- [Server-assisted client-side caching](#server-assisted-client-side-caching)\n- [Generic Object Mapping with client-side caching](./om)\n- [Cache-Aside pattern with client-side caching](./rueidisaside)\n- [Distributed Locks with client-side caching](./rueidislock)\n- [Helpers for writing tests with rueidis mock](./mock)\n- [OpenTelemetry integration](./rueidisotel)\n- [Hooks and other integrations](./rueidishook)\n- [Go-redis like API adapter](./rueidiscompat) by 
[@418Coffee](https://github.com/418Coffee)\n- Pub/Sub, Sharded Pub/Sub, Streams\n- Redis Cluster, Sentinel, RedisJSON, RedisBloom, RediSearch, RedisTimeseries, etc.\n- [Probabilistic Data Structures without Redis Stack](./rueidisprob)\n- [Availability zone affinity routing](#availability-zone-affinity-routing)\n\n---\n\n## Getting Started\n\n```golang\npackage main\n\nimport (\n  \"context\"\n  \"github.com/redis/rueidis\"\n)\n\nfunc main() {\n  client, err := rueidis.NewClient(rueidis.ClientOption{InitAddress: []string{\"127.0.0.1:6379\"}})\n  if err != nil {\n    panic(err)\n  }\n  defer client.Close()\n\n  ctx := context.Background()\n  // SET key val NX\n  err = client.Do(ctx, client.B().Set().Key(\"key\").Value(\"val\").Nx().Build()).Error()\n  // HGETALL hm\n  hm, err := client.Do(ctx, client.B().Hgetall().Key(\"hm\").Build()).AsStrMap()\n}\n```\n\nCheck out more examples: [Command Response Cheatsheet](https://github.com/redis/rueidis#command-response-cheatsheet)\n\n## Developer Friendly Command Builder\n\n`client.B()` is the builder entry point to construct a redis command:\n\n![Developer friendly command builder](https://user-images.githubusercontent.com/2727535/209358313-39000aee-eaa4-42e1-9748-0d3836c1264f.gif)\\\n\u003csub\u003e_Recorded by @FZambia [Improving Centrifugo Redis Engine throughput and allocation efficiency with Rueidis Go library\n](https://centrifugal.dev/blog/2022/12/20/improving-redis-engine-performance)_\u003c/sub\u003e\n\nOnce a command is built, use either `client.Do()` or `client.DoMulti()` to send it to redis.\n\n**You ❗️SHOULD NOT❗️ reuse the command to another `client.Do()` or `client.DoMulti()` call because it has been recycled to the underlying `sync.Pool` by default.**\n\nTo reuse a command, use `Pin()` after `Build()` and it will prevent the command from being recycled.\n\n## [Pipelining](https://redis.io/docs/latest/develop/using-commands/pipelining/)\n\n### Auto Pipelining\n\nAll concurrent non-blocking redis commands (such 
as `GET`, `SET`) are automatically pipelined by default,\nwhich reduces overall round trips and system calls and yields higher throughput. You can easily get the benefit\nof the [pipelining technique](https://redis.io/docs/latest/develop/using-commands/pipelining/) by just calling `client.Do()` from multiple goroutines concurrently.\nFor example:\n\n```go\nfunc BenchmarkPipelining(b *testing.B, client rueidis.Client) {\n  // the below client.Do() operations will be issued from\n  // multiple goroutines and thus will be pipelined automatically.\n  b.RunParallel(func(pb *testing.PB) {\n    for pb.Next() {\n      client.Do(context.Background(), client.B().Get().Key(\"k\").Build()).ToString()\n    }\n  })\n}\n```\n\n### Benchmark Comparison with go-redis v9\n\nCompared to go-redis, Rueidis has higher throughput across 1, 8, and 64 parallelism settings.\n\nIt is even able to achieve **~14x** throughput over go-redis in a local benchmark on a MacBook Pro 16\" M1 Pro 2021. (see `parallelism(64)-key(16)-value(64)-10`)\n\n![client_test_set](https://github.com/rueian/rueidis-benchmark/blob/master/client_test_set_10.png)\n\nBenchmark source code: https://github.com/rueian/rueidis-benchmark\n\nA benchmark result performed on two GCP n2-highcpu-2 machines also shows that rueidis can achieve higher throughput with lower latencies: https://github.com/redis/rueidis/pull/93\n\n### Disable Auto Pipelining\n\nWhile auto pipelining maximizes throughput, it relies on additional goroutines to process requests and responses and may add some latency due to goroutine scheduling and head-of-line blocking.\n\nYou can avoid this by setting `DisableAutoPipelining` to true, which switches to a connection pooling approach that serves each request with a dedicated connection on the same goroutine.\n\nWhen `DisableAutoPipelining` is set to true, you can still send commands for auto pipelining with `ToPipe()`:\n\n```golang\ncmd := client.B().Get().Key(\"key\").Build().ToPipe()\nclient.Do(ctx, 
cmd)\n```\n\nThis allows you to use the connection pooling approach by default but opt in to auto pipelining for a subset of requests.\n\n### Manual Pipelining\n\nBesides auto pipelining, you can also pipeline commands manually with `DoMulti()`:\n\n```golang\ncmds := make(rueidis.Commands, 0, 10)\nfor i := 0; i \u003c 10; i++ {\n    cmds = append(cmds, client.B().Set().Key(\"key\").Value(\"value\").Build())\n}\nfor _, resp := range client.DoMulti(ctx, cmds...) {\n    if err := resp.Error(); err != nil {\n        panic(err)\n    }\n}\n```\n\nWhen using `DoMulti()` to send multiple commands, the original commands are recycled after execution by default.\nIf you need to reference them afterward (e.g. to retrieve the key), use the `Pin()` method to prevent recycling.\n\n```golang\n// Create pinned commands to preserve them from being recycled\ncmds := make(rueidis.Commands, 0, 10)\nfor i := 0; i \u003c 10; i++ {\n  cmds = append(cmds, client.B().Get().Key(strconv.Itoa(i)).Build().Pin())\n}\n\n// Execute commands and process responses\nfor i, resp := range client.DoMulti(context.Background(), cmds...) 
{\n  fmt.Println(resp.ToString()) // this is the result\n  fmt.Println(cmds[i].Commands()[1]) // this is the corresponding key\n}\n```\n\nAlternatively, you can use the `MGet` and `MGetCache` helper functions to easily map keys to their corresponding responses.\n\n```golang\nval, err := MGet(client, ctx, []string{\"k1\", \"k2\"})\nfmt.Println(val[\"k1\"].ToString()) // this is the k1 value\n```\n\n## [Server-Assisted Client-Side Caching](https://redis.io/docs/latest/develop/clients/client-side-caching/)\n\nThe opt-in mode of [server-assisted client-side caching](https://redis.io/docs/latest/develop/clients/client-side-caching/) is enabled by default and can be used by calling `DoCache()` or `DoMultiCache()` with client-side TTLs specified.\n\n```golang\nclient.DoCache(ctx, client.B().Hmget().Key(\"mk\").Field(\"1\", \"2\").Cache(), time.Minute).ToArray()\nclient.DoMultiCache(ctx,\n    rueidis.CT(client.B().Get().Key(\"k1\").Cache(), 1*time.Minute),\n    rueidis.CT(client.B().Get().Key(\"k2\").Cache(), 2*time.Minute))\n```\n\nCached responses, including Redis Nils, will be invalidated either when redis servers send invalidation notifications or when their client-side TTLs are reached. See https://github.com/redis/rueidis/issues/534 for more details.\n\n### Benchmark\n\nServer-assisted client-side caching can dramatically improve latencies and throughput just like **having a redis replica right inside your application**. 
For example:\n\n![client_test_get](https://github.com/rueian/rueidis-benchmark/blob/master/client_test_get_10.png)\n\nBenchmark source code: https://github.com/rueian/rueidis-benchmark\n\n### Client-Side Caching Helpers\n\nUse `CacheTTL()` to check the remaining client-side TTL in seconds:\n\n```golang\nclient.DoCache(ctx, client.B().Get().Key(\"k1\").Cache(), time.Minute).CacheTTL() == 60\n```\n\nUse `IsCacheHit()` to verify if the response came from client-side memory:\n\n```golang\nclient.DoCache(ctx, client.B().Get().Key(\"k1\").Cache(), time.Minute).IsCacheHit() == true\n```\n\nIf OpenTelemetry is enabled via `rueidisotel.NewClient(option)`, two additional metrics are instrumented:\n\n- rueidis_do_cache_miss\n- rueidis_do_cache_hits\n\n### MGET/JSON.MGET Client-Side Caching Helpers\n\n`rueidis.MGetCache` and `rueidis.JsonMGetCache` are handy helpers for fetching multiple keys across different slots through client-side caching.\nThey will first group keys by slot to build `MGET` or `JSON.MGET` commands respectively and then send requests with only the cache-missed keys to redis nodes.\n\n### Broadcast Mode Client-Side Caching\n\nAlthough the default is opt-in mode, you can use broadcast mode by specifying your prefixes in `ClientOption.ClientTrackingOptions`:\n\n```go\nclient, err := rueidis.NewClient(rueidis.ClientOption{\n  InitAddress:           []string{\"127.0.0.1:6379\"},\n  ClientTrackingOptions: []string{\"PREFIX\", \"prefix1:\", \"PREFIX\", \"prefix2:\", \"BCAST\"},\n})\nif err != nil {\n  panic(err)\n}\nclient.DoCache(ctx, client.B().Get().Key(\"prefix1:1\").Cache(), time.Minute).IsCacheHit() == false\nclient.DoCache(ctx, client.B().Get().Key(\"prefix1:1\").Cache(), time.Minute).IsCacheHit() == true\n```\n\nPlease make sure that commands passed to `DoCache()` and `DoMultiCache()` are covered by your prefixes.\nOtherwise, their client-side cache will not be invalidated by redis.\n\n### Client-Side Caching with Cache Aside 
Pattern\n\nCache-Aside is a widely used caching strategy.\n[rueidisaside](https://github.com/redis/rueidis/blob/main/rueidisaside/README.md) can help you cache data into your client-side cache backed by Redis. For example:\n\n```go\nclient, err := rueidisaside.NewClient(rueidisaside.ClientOption{\n    ClientOption: rueidis.ClientOption{InitAddress: []string{\"127.0.0.1:6379\"}},\n})\nif err != nil {\n    panic(err)\n}\nval, err := client.Get(context.Background(), time.Minute, \"mykey\", func(ctx context.Context, key string) (val string, err error) {\n    if err = db.QueryRowContext(ctx, \"SELECT val FROM mytab WHERE id = ?\", key).Scan(\u0026val); err == sql.ErrNoRows {\n        val = \"_nil_\" // cache nil to avoid penetration.\n        err = nil     // clear err in case of sql.ErrNoRows.\n    }\n    return\n})\n// ...\n```\n\nPlease refer to the full example at [rueidisaside](https://github.com/redis/rueidis/blob/main/rueidisaside/README.md).\n\n### Disable Client-Side Caching\n\nSome Redis providers don't support client-side caching, ex. 
Google Cloud Memorystore.\nYou can disable client-side caching by setting `ClientOption.DisableCache` to `true`.\nThis will also make `client.DoCache()` and `client.DoMultiCache()` fall back to `client.Do()` and `client.DoMulti()`.\n\n## Context Cancellation\n\n`client.Do()`, `client.DoMulti()`, `client.DoCache()`, and `client.DoMultiCache()` can return early if the context deadline is reached.\n\n```golang\nctx, cancel := context.WithTimeout(context.Background(), time.Second)\ndefer cancel()\nclient.Do(ctx, client.B().Set().Key(\"key\").Value(\"val\").Nx().Build()).Error() == context.DeadlineExceeded\n```\n\nPlease note that though operations can return early, the command has likely already been sent.\n\n### Canceling a Context Before Its Deadline\n\nManually canceling a context only works in pipeline mode, as it requires an additional goroutine to monitor the context.\nPipeline mode will be started automatically when there are concurrent requests on the same connection, but you can start it in advance with `ClientOption.AlwaysPipelining`\nto make sure manual cancellation is respected, especially for blocking requests which are sent on a dedicated connection where pipeline mode isn't started.\n\n### Disable Auto Retry\n\nAll read-only commands are automatically retried on failures by default before their context deadlines are exceeded.\nYou can disable this by setting `DisableRetry`, or adjust the number of retries and the durations between them using the `RetryDelay` function.\n\n### Retryable Commands\n\nWrite commands can be marked retryable with `ToRetryable()` so that they are automatically retried on failures, just like read-only commands. Make sure you only use this feature with idempotent operations.\n\n```golang\nclient.Do(ctx, client.B().Set().Key(\"key\").Value(\"val\").Build().ToRetryable())\nclient.DoMulti(ctx, client.B().Set().Key(\"key\").Value(\"val\").Build().ToRetryable())\n```\n\n## Pub/Sub\n\nTo receive messages from channels, `client.Receive()` should be used. 
It supports `SUBSCRIBE`, `PSUBSCRIBE`, and Redis 7.0's `SSUBSCRIBE`:\n\n```golang\nerr = client.Receive(context.Background(), client.B().Subscribe().Channel(\"ch1\", \"ch2\").Build(), func(msg rueidis.PubSubMessage) {\n    // Handle the message. If you need to perform heavy processing or issue\n    // additional commands, do that in a separate goroutine to avoid\n    // blocking the pipeline, e.g.:\n    //   go func() {\n    //       // long work or client.Do(...)\n    //   }()\n})\n```\n\nThe provided handler will be called with the received message.\n\nIt is important to note that `client.Receive()` will keep blocking until it returns a value in one of the following cases:\n\n1. it returns `nil` when receiving any unsubscribe/punsubscribe message related to the provided `subscribe` command, including `sunsubscribe` messages caused by slot migrations.\n2. it returns `rueidis.ErrClosing` when the client is closed manually.\n3. it returns `ctx.Err()` when the `ctx` is done.\n4. it returns a non-nil `err` when the provided `subscribe` command fails.\n\nWhile the `client.Receive()` call is blocking, the `Client` is still able to accept other concurrent requests,\nand they share the same TCP connection. If your message handler may take some time to complete, it is recommended\nto use `client.Receive()` inside a `client.Dedicated()` to avoid blocking other concurrent requests.\n\n#### Subscription confirmations\n\nUse `rueidis.WithOnSubscriptionHook` when you need to observe subscribe / unsubscribe confirmations that the server sends during the lifetime of a `client.Receive()`.\n\nThe hook can be triggered multiple times because `client.Receive()` may automatically reconnect and resubscribe.\n\n```go\nctx := rueidis.WithOnSubscriptionHook(context.Background(), func(s rueidis.PubSubSubscription) {\n    // This hook runs in the pipeline goroutine. 
If you need to perform\n    // heavy work or invoke additional commands, do it in another\n    // goroutine to avoid blocking the pipeline, for example:\n    //   go func() {\n    //       // long work or client.Do(...)\n    //   }()\n    fmt.Printf(\"%s %s (count %d)\\n\", s.Kind, s.Channel, s.Count)\n})\n\nerr := client.Receive(ctx, client.B().Subscribe().Channel(\"news\").Build(), func(m rueidis.PubSubMessage) {\n    // ...\n})\n```\n\n### Alternative PubSub Hooks\n\n`client.Receive()` requires users to provide a subscription command in advance.\nThere is an alternative `DedicatedClient.SetPubSubHooks()` that allows users to subscribe to and unsubscribe from channels later.\n\n```golang\nc, cancel := client.Dedicate()\ndefer cancel()\n\nwait := c.SetPubSubHooks(rueidis.PubSubHooks{\n  OnMessage: func(m rueidis.PubSubMessage) {\n    // Handle the message. If you need to perform heavy processing or issue\n    // additional commands, do that in a separate goroutine to avoid\n    // blocking the pipeline, e.g.:\n    //   go func() {\n    //       // long work or client.Do(...)\n    //   }()\n  }\n})\nc.Do(ctx, c.B().Subscribe().Channel(\"ch\").Build())\nerr := \u003c-wait // disconnected with err\n```\n\nIf the hooks are not nil, the above `wait` channel is guaranteed to be closed when the hooks will no longer be called,\nand it produces at most one error describing the reason. Users can use this channel to detect disconnection.\n\n## CAS Transaction\n\nTo do a [CAS Transaction](https://redis.io/docs/interact/transactions/#optimistic-locking-using-check-and-set) (`WATCH` + `MULTI` + `EXEC`), a dedicated connection should be used because there should be no\nunintentional write commands between `WATCH` and `EXEC`. 
Otherwise, the `EXEC` may not fail as expected.\n\n```golang\nclient.Dedicated(func(c rueidis.DedicatedClient) error {\n    // watch keys first\n    c.Do(ctx, c.B().Watch().Key(\"k1\", \"k2\").Build())\n    // perform read here\n    c.Do(ctx, c.B().Mget().Key(\"k1\", \"k2\").Build())\n    // perform write with MULTI EXEC\n    c.DoMulti(\n        ctx,\n        c.B().Multi().Build(),\n        c.B().Set().Key(\"k1\").Value(\"1\").Build(),\n        c.B().Set().Key(\"k2\").Value(\"2\").Build(),\n        c.B().Exec().Build(),\n    )\n    return nil\n})\n\n```\n\nOr use `Dedicate()` and invoke `cancel()` when finished to put the connection back to the pool.\n\n```golang\nc, cancel := client.Dedicate()\ndefer cancel()\n\nc.Do(ctx, c.B().Watch().Key(\"k1\", \"k2\").Build())\n// do the rest of the CAS operations with `c`, which occupies a connection\n```\n\nHowever, occupying a connection is not good in terms of throughput. It is better to use a [Lua script](#lua-script) to perform\noptimistic locking instead.\n\n## Lua Script\n\n`NewLuaScript` or `NewLuaScriptReadOnly` creates a script which is safe for concurrent usage.\n\nWhen `script.Exec` is called, it will try sending `EVALSHA` first and fall back to `EVAL` if the server returns `NOSCRIPT`.\n\n```golang\nscript := rueidis.NewLuaScript(\"return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}\")\n// script.Exec is safe for concurrent calls\nlist, err := script.Exec(ctx, client, []string{\"k1\", \"k2\"}, []string{\"a1\", \"a2\"}).ToArray()\n```\n\n## Streaming Read\n\n`client.DoStream()` and `client.DoMultiStream()` can be used to send large redis responses to an `io.Writer`\ndirectly without allocating them in memory. 
They work by first sending commands to a dedicated connection acquired from a pool,\nthen directly copying the response values to the given `io.Writer`, and finally recycling the connection.\n\n```go\ns := client.DoMultiStream(ctx, client.B().Get().Key(\"a{slot1}\").Build(), client.B().Get().Key(\"b{slot1}\").Build())\nfor s.HasNext() {\n    n, err := s.WriteTo(io.Discard)\n    if rueidis.IsRedisNil(err) {\n        // ...\n    }\n}\n```\n\nNote that these two methods will occupy connections until all responses are written to the given `io.Writer`.\nThis can take a long time and hurt performance. Use the normal `Do()` and `DoMulti()` instead unless you want to avoid allocating memory for a large redis response.\n\nAlso note that these two methods only work with `string`, `integer`, and `float` redis responses. Additionally, `DoMultiStream` currently\ndoes not support pipelining keys across multiple slots when connecting to a redis cluster.\n\n## Memory Consumption Consideration\n\nEach underlying connection in rueidis allocates a ring buffer for pipelining.\nIts size is controlled by `ClientOption.RingScaleEachConn`; the default value is 10, which results in each ring having a size of 2^10.\n\nIf you have many rueidis connections, you may find that they occupy a significant amount of memory.\nIn that case, you may consider reducing `ClientOption.RingScaleEachConn` to 8 or 9 at the cost of potential throughput degradation.\n\nYou may also consider setting the value of `ClientOption.PipelineMultiplex` to `-1`, which will let rueidis use only 1 connection for pipelining to each redis node.\n\nIn addition, each connection also allocates read and write buffers to reduce system calls during high concurrency\nor large pipelines. 
These buffers are controlled by:

- `ClientOption.ReadBufferEachConn` (default: 0.5 MiB)
- `ClientOption.WriteBufferEachConn` (default: 0.5 MiB)

You can lower these values in memory-sensitive environments, at the cost of potential throughput degradation.

## Instantiating a New Redis Client

You can create a new Redis client using `NewClient`, which accepts several options.

```golang
// Connect to a single redis node:
client, err := rueidis.NewClient(rueidis.ClientOption{
    InitAddress: []string{"127.0.0.1:6379"},
})

// Connect to a standalone redis with replicas
client, err := rueidis.NewClient(rueidis.ClientOption{
    InitAddress: []string{"127.0.0.1:6379"},
    Standalone: rueidis.StandaloneOption{
        // Note that these addresses must be online and cannot be promoted.
        // An example use case is the reader endpoint provided by cloud vendors.
        ReplicaAddress: []string{"reader_endpoint:port"},
    },
    SendToReplicas: func(cmd rueidis.Completed) bool {
        return cmd.IsReadOnly()
    },
})

// Connect to a redis cluster
client, err := rueidis.NewClient(rueidis.ClientOption{
    InitAddress: []string{"127.0.0.1:7001", "127.0.0.1:7002", "127.0.0.1:7003"},
    ShuffleInit: true,
})

// Connect to a redis cluster and use replicas for read operations
client, err := rueidis.NewClient(rueidis.ClientOption{
    InitAddress: []string{"127.0.0.1:7001", "127.0.0.1:7002", "127.0.0.1:7003"},
    SendToReplicas: func(cmd rueidis.Completed) bool {
        return cmd.IsReadOnly()
    },
})

// Connect to sentinels
client, err := rueidis.NewClient(rueidis.ClientOption{
    InitAddress: []string{"127.0.0.1:26379", "127.0.0.1:26380", "127.0.0.1:26381"},
    Sentinel: rueidis.SentinelOption{
        MasterSet: "my_master",
    },
})

// Connect to a redis node through a unix socket
client, err := rueidis.NewClient(rueidis.ClientOption{
    InitAddress: []string{"/run/valkey.sock"},
    DialCtxFn: func(ctx context.Context, s string, d *net.Dialer, c *tls.Config) (conn net.Conn, err error) {
        return d.DialContext(ctx, "unix", s)
    },
})
```

### Redis URL

You can use `ParseURL` or `MustParseURL` to construct a `ClientOption`.

The provided URL must start with `redis://`, `rediss://`, or `unix://`.

Currently supported URL parameters are `db`, `dial_timeout`, `write_timeout`, `addr`, `protocol`, `client_cache`, `client_name`, `max_retries`, and `master_set`.

```go
// connect to a redis cluster
client, err = rueidis.NewClient(rueidis.MustParseURL("redis://127.0.0.1:7001?addr=127.0.0.1:7002&addr=127.0.0.1:7003"))
// connect to a redis node
client, err = rueidis.NewClient(rueidis.MustParseURL("redis://127.0.0.1:6379/0"))
// connect to a redis sentinel
client, err = rueidis.NewClient(rueidis.MustParseURL("redis://127.0.0.1:26379/0?master_set=my_master"))
// connect to a redis node through a unix socket
client, err = rueidis.NewClient(rueidis.MustParseURL("unix:///run/redis.conf?db=0"))
```

### Availability Zone Affinity Routing

Starting with Valkey 8.1, the server exposes `availability-zone` information that lets clients know where each node is located.
To use this information to route requests to replicas in the same availability zone,
set the `EnableReplicaAZInfo` option and provide a `ReadNodeSelector` function, for example with these helpers:

- **PreferReplicaNodeSelector**: Prioritizes reading from any replica. Falls back to the primary if no replicas are available.
- **AZAffinityNodeSelector**: Prioritizes reading from replicas in the same availability zone, then any replica. Falls back to the primary.
- **AZAffinityReplicasAndPrimaryNodeSelector**: Prioritizes reading from replicas in the same availability zone, then the primary in the same availability zone, then any replica.
Falls back to the primary.

For example:
```go
client, err := rueidis.NewClient(rueidis.ClientOption{
  InitAddress:         []string{"address.example.com:6379"},
  EnableReplicaAZInfo: true,
  SendToReplicas: func(cmd rueidis.Completed) bool {
    return cmd.IsReadOnly()
  },
  ReadNodeSelector: rueidis.AZAffinityNodeSelector("us-east-1a"),
})
```
You can also implement a custom selector to fit your specific needs:
```go
client, err := rueidis.NewClient(rueidis.ClientOption{
  InitAddress:         []string{"address.example.com:6379"},
  EnableReplicaAZInfo: true,
  SendToReplicas: func(cmd rueidis.Completed) bool {
    return cmd.IsReadOnly()
  },
  ReadNodeSelector: func(slot uint16, nodes []rueidis.NodeInfo) int {
    for i, node := range nodes {
      if node.AZ == "us-east-1a" {
        return i // return the index of the chosen replica
      }
    }
    return -1 // send to the primary
  },
})
```

For deployments that only provide the availability zone via the INFO command, set the `AZFromInfo`
option as well as `EnableReplicaAZInfo`.

## Arbitrary Command

If you want to construct commands that are absent from the command builder, you can use `client.B().Arbitrary()`:

```golang
// This will result in [ANY CMD k1 k2 a1 a2]
client.B().Arbitrary("ANY", "CMD").Keys("k1", "k2").Args("a1", "a2").Build()
```

## Working with JSON, Raw `[]byte`, and Vector Similarity Search

The command builder treats all parameters as Redis strings, which are binary safe. This means users can store `[]byte`
directly in Redis without conversion, and the `rueidis.BinaryString` helper can convert `[]byte` to `string` without copying.
For example:

```golang
client.B().Set().Key("b").Value(rueidis.BinaryString([]byte{...})).Build()
```

Treating all parameters as Redis strings also means that the command builder does not perform any quoting or conversion automatically.

When working with RedisJSON, users frequently need to encode values as JSON strings, and `rueidis.JSON` can help:

```golang
client.B().JsonSet().Key("j").Path("$.myStrField").Value(rueidis.JSON("str")).Build()
// equivalent to
client.B().JsonSet().Key("j").Path("$.myStrField").Value(`"str"`).Build()
```

When working with vector similarity search, users can use `rueidis.VectorString32` and `rueidis.VectorString64` to build queries:

```golang
cmd := client.B().FtSearch().Index("idx").Query("*=>[KNN 5 @vec $V]").
    Params().Nargs(2).NameValue().NameValue("V", rueidis.VectorString64([]float64{...})).
    Dialect(2).Build()
n, resp, err := client.Do(ctx, cmd).AsFtSearch()
```

## Command Response Cheatsheet

While the command builder is developer-friendly, the response parser is a little less so: developers must know beforehand what type of Redis response the server will return and which parser to use.

Error handling:
If an incorrect parser function is chosen, a parse error is returned, which you can detect with `rueidis.IsParseErr`. Here is an example using `ToArray` that demonstrates this scenario:

```golang
// Attempt to parse the response. If a parsing error occurs, check whether the error is a parse error and handle it.
// Normally, you should fix the code by choosing the correct parser function.
// For instance, use ToString() if the expected response is a string, or ToArray() if the expected response is an array:
if _, err := client.Do(ctx, client.B().Get().Key("k").Build()).ToArray(); rueidis.IsParseErr(err) {
    fmt.Println("Parsing error:", err)
}
```

It is hard to remember which type of message will be returned and which parser to use.
So, here are some common examples:

```golang
// GET
client.Do(ctx, client.B().Get().Key("k").Build()).ToString()
client.Do(ctx, client.B().Get().Key("k").Build()).AsInt64()
// MGET
client.Do(ctx, client.B().Mget().Key("k1", "k2").Build()).ToArray()
// SET
client.Do(ctx, client.B().Set().Key("k").Value("v").Build()).Error()
// INCR
client.Do(ctx, client.B().Incr().Key("k").Build()).AsInt64()
// HGET
client.Do(ctx, client.B().Hget().Key("k").Field("f").Build()).ToString()
// HMGET
client.Do(ctx, client.B().Hmget().Key("h").Field("a", "b").Build()).ToArray()
// HGETALL
client.Do(ctx, client.B().Hgetall().Key("h").Build()).AsStrMap()
// EXPIRE
client.Do(ctx, client.B().Expire().Key("k").Seconds(1).Build()).AsInt64()
// HEXPIRE
client.Do(ctx, client.B().Hexpire().Key("h").Seconds(1).Fields().Numfields(2).Field("f1", "f2").Build()).AsIntSlice()
// ZRANGE
client.Do(ctx, client.B().Zrange().Key("k").Min("1").Max("2").Build()).AsStrSlice()
// ZRANK
client.Do(ctx, client.B().Zrank().Key("k").Member("m").Build()).AsInt64()
// ZSCORE
client.Do(ctx, client.B().Zscore().Key("k").Member("m").Build()).AsFloat64()
// ZRANGE
client.Do(ctx, client.B().Zrange().Key("k").Min("0").Max("-1").Build()).AsStrSlice()
client.Do(ctx, client.B().Zrange().Key("k").Min("0").Max("-1").Withscores().Build()).AsZScores()
// ZPOPMIN
client.Do(ctx, client.B().Zpopmin().Key("k").Build()).AsZScore()
client.Do(ctx, client.B().Zpopmin().Key("myzset").Count(2).Build()).AsZScores()
// SCARD
client.Do(ctx, client.B().Scard().Key("k").Build()).AsInt64()
// SMEMBERS
client.Do(ctx, client.B().Smembers().Key("k").Build()).AsStrSlice()
// LINDEX
client.Do(ctx, client.B().Lindex().Key("k").Index(0).Build()).ToString()
// LPOP
client.Do(ctx, client.B().Lpop().Key("k").Build()).ToString()
client.Do(ctx, client.B().Lpop().Key("k").Count(2).Build()).AsStrSlice()
// SCAN
client.Do(ctx, client.B().Scan().Cursor(0).Build()).AsScanEntry()
// FT.SEARCH
client.Do(ctx, client.B().FtSearch().Index("idx").Query("@f:v").Build()).AsFtSearch()
// GEOSEARCH
client.Do(ctx, client.B().Geosearch().Key("k").Fromlonlat(1, 1).Bybox(1).Height(1).Km().Build()).AsGeosearch()
```

## Use DecodeSliceOfJSON to Scan Array Result

`DecodeSliceOfJSON` is useful when you would like to scan an array result into a slice of a specific struct.

```golang
type User struct {
  Name string `json:"name"`
}

// Set some values
if err = client.Do(ctx, client.B().Set().Key("user1").Value(`{"name": "name1"}`).Build()).Error(); err != nil {
  return err
}
if err = client.Do(ctx, client.B().Set().Key("user2").Value(`{"name": "name2"}`).Build()).Error(); err != nil {
  return err
}

// Scan MGET results into []*User
var users []*User // []User is also scannable
if err := rueidis.DecodeSliceOfJSON(client.Do(ctx, client.B().Mget().Key("user1", "user2").Build()), &users); err != nil {
  return err
}

for _, user := range users {
  fmt.Printf("%+v\n", user)
}
/*
&{Name:name1}
&{Name:name2}
*/
```

### !!!!!! DO NOT DO THIS !!!!!!

Please make sure that all values in the result share the same JSON structure.

```golang
// Set a plain string value
if err = client.Do(ctx, client.B().Set().Key("user1").Value("userName1").Build()).Error(); err != nil {
  return err
}

// Bad
users := make([]*User, 0)
if err := rueidis.DecodeSliceOfJSON(client.Do(ctx, client.B().Mget().Key("user1").Build()), &users); err != nil {
  return err
}
// -> Error: invalid character 'u' looking for beginning of value
// In this case, use client.Do(ctx, client.B().Mget().Key("user1").Build()).AsStrSlice() instead.
```

---

## Contributing

Contributions are welcome, including [issues](https://github.com/redis/rueidis/issues), [pull requests](https://github.com/redis/rueidis/pulls), and [discussions](https://github.com/redis/rueidis/discussions).
Contributions mean a lot to us and help us improve this library and the community!

Thanks to all the people who have already contributed!

<a href="https://github.com/redis/rueidis/graphs/contributors">
  <img src="https://contributors-img.web.app/image?repo=redis/rueidis" />
</a>

### Generate Command Builders

Command builders are generated based on the definitions in [./hack/cmds](./hack/cmds) by running:

```sh
go generate
```

### Testing

Please use the [./dockertest.sh](./dockertest.sh) script to run test cases locally,
and please try your best to keep 100% test coverage on code changes.