https://github.com/replikativ/konserve
A clojuresque key-value/document store protocol with core.async.
- Host: GitHub
- URL: https://github.com/replikativ/konserve
- Owner: replikativ
- License: epl-1.0
- Created: 2015-08-30T14:11:56.000Z (over 10 years ago)
- Default Branch: main
- Last Pushed: 2024-03-16T20:36:38.000Z (almost 2 years ago)
- Last Synced: 2024-05-01T11:48:03.808Z (almost 2 years ago)
- Topics: clojure, key-value-store, konserve
- Language: Clojure
- Homepage:
- Size: 1.85 MB
- Stars: 295
- Watchers: 19
- Forks: 25
- Open Issues: 18
Metadata Files:
- Readme: README.org
- License: LICENSE
Awesome Lists containing this project
- awesome-starred - replikativ/konserve - A clojuresque key-value/document store protocol with core.async. (clojure)
- awesome-clojurescript - Konserve - A clojuresque key-value/document store protocol with core.async. (Awesome ClojureScript / Database)
README
* konserve
:PROPERTIES:
:CUSTOM_ID: h:6f85a7f4-3694-4703-8c0b-ffcc34f2e5c9
:END:
[[https://clojurians.slack.com/archives/CB7GJAN0L][https://img.shields.io/badge/slack-join_chat-brightgreen.svg]]
[[https://clojars.org/org.replikativ/konserve][https://img.shields.io/clojars/v/org.replikativ/konserve.svg]]
[[https://circleci.com/gh/replikativ/konserve][https://circleci.com/gh/replikativ/konserve.svg?style=shield]]
[[https://github.com/replikativ/konserve/tree/development][https://img.shields.io/github/last-commit/replikativ/konserve/main.svg]]
[[https://whilo.github.io/old/articles/16/unified-storage-io][Simple durability, made flexible.]]
*Heads-up: konserve 0.9.x introduces a breaking change: all store configurations now require a UUID under =:id=. See below.*
** What is konserve?
A simple document store protocol defined with synchronous and [[https://github.com/clojure/core.async][core.async]]
semantics to allow Clojuresque collection operations on associative key-value
stores, from both Clojure and ClojureScript and for different backends. Data is
generally serialized with [[https://github.com/edn-format/edn][edn]] semantics or, if supported, as native binary blobs,
and can be accessed similarly to the =clojure.core= functions =get-in=, =assoc-in=
and =update-in=. =update-in= in particular lets you apply a function atomically and
returns the old and the new value. Each operation runs atomically and must be
consistent (in fact ACID), but further consistency across keys is only optionally supported, depending on the backend.
*** Key Features
- /cross-platform/ between Clojure and ClojureScript
- /lowest-common-denominator interface/ for an associative datastructure with =edn= semantics
- /thread-safety with atomicity over key operations/
- /fast serialization/ options (fressian, transit, ...), independent of the underlying kv-store
- /very low overhead/ protocol, including direct binary access for high throughput
- /no additional dependencies and setup/ required for IndexedDB in the browser and the file backend on the JVM and Node.js
- /avoids blocking IO/; the filestore, for instance, will not block any thread on reads
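For the /direct binary access/ mentioned above, konserve exposes =bassoc= and
=bget=. A minimal sketch, assuming a =store= opened as in the Quick Start below;
the map handed to the =bget= callback is backend-dependent (the filestore provides
an =:input-stream=):
#+BEGIN_SRC clojure
(require '[konserve.core :as k])

;; Write a binary blob directly, bypassing edn serialization
(k/bassoc store :blob (byte-array (range 10)) {:sync? true})

;; Read it back; the callback receives a map with (at least) an :input-stream
(k/bget store :blob
        (fn [{:keys [input-stream]}]
          (count (.readAllBytes input-stream)))
        {:sync? true})
;; => 10 (the callback's return value)
#+END_SRC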
** Quick Start
Add to your dependencies: [[https://clojars.org/org.replikativ/konserve][https://img.shields.io/clojars/v/org.replikativ/konserve.svg]]
#+BEGIN_SRC clojure
(require '[konserve.core :as k])
;; All stores require a UUID :id for global identification
;; Generate once: (java.util.UUID/randomUUID) or (random-uuid)
;; Then use the literal in your config
(def config {:backend :memory
             :id #uuid "550e8400-e29b-41d4-a716-446655440000"})
;; Create new store, pass opts as separate argument
(def store (k/create-store config {:sync? true}))
;; Use the store
(k/assoc-in store [:user] {:name "Alice"} {:sync? true})
(k/get-in store [:user] nil {:sync? true})
;; => {:name "Alice"}
(k/update-in store [:user :age] (fnil inc 0) {:sync? true})
;; => [nil 1]
;; Clean up
(k/delete-store config {:sync? true})
#+END_SRC
** Store Identification (UUID Requirement)
:PROPERTIES:
:CUSTOM_ID: h:uuid-requirement
:END:
All konserve stores require a globally unique =:id= field containing a UUID type.
This ensures stores can be uniquely identified and matched across different backends,
machines, and synchronization contexts.
*Why UUID IDs are required:*
- *Global identifiability*: Match stores regardless of backend type or file path
- *Cross-machine sync*: Identify the same logical store across different systems
- *High entropy*: 128-bit UUIDs prevent collisions
- *Backend-agnostic*: Same ID works for memory, file, S3, Redis, etc.
*How to use UUIDs:*
#+BEGIN_SRC clojure
;; 1. Generate a UUID once (in your REPL or terminal)
(java.util.UUID/randomUUID) ;; Clojure
(random-uuid) ;; ClojureScript
;; => #uuid "550e8400-e29b-41d4-a716-446655440000"
;; 2. Copy the UUID and use it as a literal in your config
{:backend :memory
 :id #uuid "550e8400-e29b-41d4-a716-446655440000"}
;; 3. Pass opts as separate argument to store functions
(k/create-store config {:sync? true})
;; 4. Use the SAME UUID every time for the same logical store
;; 5. Use DIFFERENT UUIDs for different stores (dev, test, prod)
#+END_SRC
*Important:*
- Generate a UUID *once* and use it consistently for the same store
- Store the UUID in your application config (EDN files support =#uuid= literals)
- Different stores (dev, test, prod) should have different UUIDs
- Never generate UUIDs dynamically in your code - use fixed literals
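A minimal sketch of keeping the UUID in an EDN config file (the file name and
path are hypothetical); =clojure.edn= reads =#uuid= literals out of the box:
#+BEGIN_SRC clojure
;; resources/store-config.edn (hypothetical file):
;; {:backend :file
;;  :path    "/var/lib/myapp/store"
;;  :id      #uuid "550e8400-e29b-41d4-a716-446655440001"}

(require '[clojure.edn :as edn]
         '[clojure.java.io :as io]
         '[konserve.core :as k])

;; #uuid is one of clojure.edn's default data readers, so :id comes back
;; as a java.util.UUID without any extra setup.
(def config (edn/read-string (slurp (io/resource "store-config.edn"))))

(def store (k/create-store config {:sync? true}))
#+END_SRC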
** Installation
Add to your =deps.edn=:
#+BEGIN_SRC clojure
{:deps {org.replikativ/konserve {:mvn/version "LATEST"}}}
#+END_SRC
Or to your =project.clj=:
#+BEGIN_SRC clojure
[org.replikativ/konserve "LATEST"]
#+END_SRC
** Core Concepts
*** Synchronous vs Asynchronous
Konserve supports both synchronous and asynchronous execution modes via =core.async=.
*Synchronous mode* (=:sync? true=):
#+BEGIN_SRC clojure
(def config {:backend :memory
             :id #uuid "550e8400-e29b-41d4-a716-446655440000"})
(def store (k/create-store config {:sync? true}))
(k/assoc-in store [:key] "value" {:sync? true})
(k/get-in store [:key] nil {:sync? true})
;; => "value"
#+END_SRC
*Asynchronous mode* (=:sync? false=):
#+BEGIN_SRC clojure
(require '[clojure.core.async :refer [go <!]])

(go
  (<! (k/assoc-in store [:key] "value" {:sync? false}))
  (println (<! (k/get-in store [:key] nil {:sync? false}))))
;; prints "value"
#+END_SRC
*** Store Lifecycle
Konserve provides five key lifecycle functions:
- =create-store= - Create a new store, errors if already exists
- =connect-store= - Connect to existing store, errors if doesn't exist
- =store-exists?= - Check if store exists at the given configuration
- =release-store= - Release connections and resources held by a store
- =delete-store= - Delete underlying storage
#+BEGIN_SRC clojure
(def config {:backend :file
             :id #uuid "550e8400-e29b-41d4-a716-446655440002"
             :path "/tmp/my-store"})
;; Check if store exists
(k/store-exists? config {:sync? true}) ;; => false
;; Create new store (errors if already exists)
(def store (k/create-store config {:sync? true}))
;; Use the store...
(k/assoc-in store [:data] {:value 42} {:sync? true})
;; Later, connect to existing store (errors if doesn't exist)
;; (def store (k/connect-store config {:sync? true}))
;; Clean up resources
(k/release-store config store {:sync? true})
;; Delete underlying storage
(k/delete-store config {:sync? true})
;; Verify deletion
(k/store-exists? config {:sync? true}) ;; => false
#+END_SRC
*** Create vs Connect Semantics
All backends follow the same strict semantics:
*Strict semantics* (all backends: File, S3, DynamoDB, Redis, LMDB, RocksDB, IndexedDB, Memory with =:id=):
- =create-store= - Creates new store, errors if already exists
- =connect-store= - Connects to existing store, errors if doesn't exist
- =store-exists?= - Checks for existence before create/connect
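A short JVM sketch of what the strict semantics mean in practice; the exact
exception type is backend-specific and the UUIDs are placeholders:
#+BEGIN_SRC clojure
(def demo-config {:backend :memory
                  :id #uuid "550e8400-e29b-41d4-a716-44665544aaaa"})

(k/create-store demo-config {:sync? true})   ;; ok, the store is new

;; Creating the same store again violates the strict semantics and throws
(try (k/create-store demo-config {:sync? true})
     (catch Exception e (println "already exists")))

;; Connecting to a store that was never created throws as well
(try (k/connect-store {:backend :memory
                       :id #uuid "550e8400-e29b-41d4-a716-44665544bbbb"}
                      {:sync? true})
     (catch Exception e (println "does not exist")))
#+END_SRC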
** Built-in Backends
*** Memory Store
An in-memory store wrapping an Atom, available for both Clojure and ClojureScript.
#+BEGIN_SRC clojure
(require '[konserve.core :as k])
;; Persistent registry-based store with ID
(def config {:backend :memory
             :id #uuid "550e8400-e29b-41d4-a716-446655440003"})
(def my-db (k/create-store config {:sync? true}))
;; Later sessions can reconnect:
;; (def my-db (k/connect-store config {:sync? true}))
#+END_SRC
*** File Store (JVM)
A file-system store using [[https://github.com/clojure/data.fressian][fressian]] serialization. No setup or additional dependencies needed.
#+BEGIN_SRC clojure
(require '[konserve.core :as k])
(def config {:backend :file
             :id #uuid "550e8400-e29b-41d4-a716-446655440004"
             :path "/tmp/konserve-store"})
;; Create new store
(def my-db (k/create-store config {:sync? true}))
;; Or connect to existing
;; (def my-db (k/connect-store config {:sync? true}))
#+END_SRC
The file store supports:
- Optional fsync control via =:sync-blob? false= for better performance
- Custom =java.nio.file.FileSystem= instances via =:filesystem= parameter
- Thoroughly tested using [[https://github.com/google/jimfs][Jimfs]] (Google's in-memory NIO filesystem)
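A sketch of the two options from the list above, using the
=com.google.jimfs/jimfs= dependency; whether =:sync-blob?= and =:filesystem= sit
at the top level of the config map or under a nested key is an assumption here,
so check your konserve version before relying on it:
#+BEGIN_SRC clojure
(import '[com.google.common.jimfs Jimfs Configuration])

;; Jimfs provides an in-memory java.nio.file.FileSystem, handy for tests
(def test-fs (Jimfs/newFileSystem (Configuration/unix)))

(def test-config {:backend :file
                  :id #uuid "550e8400-e29b-41d4-a716-446655440005" ;; placeholder
                  :path "/konserve-test"      ;; path inside the Jimfs filesystem
                  :sync-blob? false           ;; skip fsync per blob (assumed key placement)
                  :filesystem test-fs})       ;; run the filestore on Jimfs (assumed key placement)

(def test-store (k/create-store test-config {:sync? true}))
#+END_SRC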
*** File Store (Node.js)
For Node.js environments, require the Node.js-specific file store:
#+BEGIN_SRC clojure
(require '[konserve.core :as k]
         '[konserve.node-filestore] ;; Registers :file backend for Node.js
         '[clojure.core.async :refer [go <!]])

(def config {:backend :file
             :id #uuid "550e8400-e29b-41d4-a716-446655440006"
             :path "/tmp/konserve-node-store"})

(go
  (def my-db (<! (k/create-store config {:sync? false})))
  (<! (k/assoc-in my-db [:greeting] "hello from Node.js" {:sync? false}))
  (println (<! (k/get-in my-db [:greeting] nil {:sync? false}))))
#+END_SRC
*** Multi-key Operations
Some backends support atomic operations over several top-level keys at once:
=multi-assoc= for bulk writes, =multi-get= for bulk reads and =multi-dissoc= for
bulk deletes.
#+BEGIN_SRC clojure
;; Atomic bulk write
(k/multi-assoc store {:user1 {:name "Alice"}
                      :user2 {:name "Bob"}
                      :user3 {:name "Carol"}}
               {:sync? true})

;; Efficient bulk read - returns a sparse map (only found keys)
(k/multi-get store [:user1 :user2 :nonexistent] {:sync? true})
;; => {:user1 {:name "Alice"} :user2 {:name "Bob"}}

;; Atomic bulk delete
(k/multi-dissoc store [:user1 :user2] {:sync? true})
#+END_SRC
*Backends with multi-key support:*
- Memory store
- IndexedDB
- Tiered store (when both layers support it)
*** Write Hooks
Write hooks are invoked after every successful write operation, enabling reactive patterns
like store synchronization, change logging, or triggering side effects.
#+BEGIN_SRC clojure
(require '[konserve.core :as k])
(def config {:backend :memory
             :id #uuid "550e8400-e29b-41d4-a716-44665544000b"})
(def store (k/create-store config {:sync? true}))
;; Register a hook to log all writes
(k/add-write-hook! store ::my-logger
  (fn [{:keys [api-op key value]}]
    (println "Write:" api-op key "->" value)))
;; Writes now trigger the hook
(k/assoc-in store [:user] {:name "Alice"} {:sync? true})
;; Prints: Write: :assoc-in :user -> {:name "Alice"}
;; Remove hook when done
(k/remove-write-hook! store ::my-logger)
#+END_SRC
*Hook function receives:*
- =:api-op= - The operation (=:assoc-in=, =:update-in=, =:dissoc=, =:bassoc=, =:multi-assoc=, =:multi-dissoc=)
- =:key= - The top-level key being written
- =:key-vec= - Full key path (for =assoc-in= / =update-in=)
- =:value= - The value written
- =:old-value= - Previous value (for update operations)
- =:kvs= - Map of key->value (for =multi-assoc=)
- =:keys= - Collection of keys (for =multi-dissoc=)
Hooks are invoked at the API layer (in =konserve.core=), so they work consistently
across all store backends. Stores must implement the =PWriteHookStore= protocol.
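As a sketch of the store-synchronization pattern mentioned above, a hook can
mirror single-key writes into a second store (=backup-store= is a hypothetical
store created elsewhere; multi-key and binary operations are skipped for brevity):
#+BEGIN_SRC clojure
(k/add-write-hook! store ::mirror
  (fn [{:keys [api-op key key-vec value]}]
    (case api-op
      ;; replay single-key writes into the backup store
      (:assoc-in :update-in) (k/assoc-in backup-store key-vec value {:sync? true})
      :dissoc                (k/dissoc backup-store key {:sync? true})
      ;; :bassoc, :multi-assoc and :multi-dissoc are ignored in this sketch
      nil)))
#+END_SRC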
*** Garbage Collection
Konserve has a garbage collector that can be called manually when the store gets
too crowded.
#+BEGIN_SRC clojure
(require '[konserve.gc :as gc])
;; Evict keys older than the cutoff date, keep whitelisted keys
;; (cutoff-date and whitelist below are placeholder values)
(def cutoff-date #inst "2024-01-01")
(def whitelist #{:important-key})
(gc/sweep! store cutoff-date whitelist {:sync? true})
#+END_SRC
The function =konserve.gc/sweep!= allows you to provide a cut-off date to evict old keys
and a whitelist for keys that should be kept.
*** Compression and Encryption
Compression and encryption are supported by the default store implementation
used by all current backends except lmdb and memory.
#+BEGIN_SRC clojure
;; Store configuration with compression and encryption
(def config {:backend :file
             :id #uuid "550e8400-e29b-41d4-a716-44665544000c"
             :path "/tmp/secure-store"
             :config {:encryptor {:type :aes
                                  :key "s3cr3t"}
                      :compressor {:type :lz4}}})
(def store (k/create-store config {:sync? true}))
#+END_SRC
*Compression:*
- LZ4 compression (JVM only)
*Encryption:*
- 256-bit AES (CBC mode with PKCS5/PKCS7 padding)
- Different salt for each written value
- Same cold storage format for JVM and JS (cross-runtime compatible)
*** Serialization Formats
Different formats for =edn= serialization like [[https://github.com/clojure/data.fressian][fressian]], [[http://blog.cognitect.com/blog/2014/7/22/transit][transit]] or a simple
=pr-str= version are supported and can be combined with different stores. Stores
have a reasonable default setting. You can extend the serialization
protocol to other formats if needed. [[https://github.com/replikativ/incognito][Incognito]] support is available for
custom records.
**** Tagged Literals
You can read and write custom records according to [[https://github.com/replikativ/incognito][incognito]].
*** Error Handling
For synchronous execution, normal exceptions are thrown. For asynchronous
error handling we follow the semantics of =go-try= and =<?= introduced [[https://swannodette.github.io/2013/08/31/asynchronous-error-handling][here]].
The [[https://github.com/replikativ/superv.async/][superv.async]] library provides this error handling for core.async. You just need two
macros: =<?= checks for an exception and rethrows it, while =go-try= catches exceptions and passes them
along as return values so errors don't get lost.
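A minimal sketch using superv.async's simple supervisor =S=; the konserve calls
run in their default asynchronous mode and return channels:
#+BEGIN_SRC clojure
(require '[superv.async :refer [S go-try <?]]
         '[konserve.core :as k])

;; An exception thrown inside the go-try block, or read off a channel by <?,
;; is propagated as the block's result instead of silently disappearing.
(go-try S
  (let [user (<? S (k/get-in store [:user] ::not-found))]
    (println "loaded user:" user)))
#+END_SRC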
** Backend Implementation Guide
:PROPERTIES:
:CUSTOM_ID: h:7582b1c9-e305-4d51-a808-c10eb447f3de
:END:
We provide a [[file:doc/backend.org][backend implementation guide]].
*New in 2025:* External backends can register with the unified store dispatch
system by defining multimethod implementations for:
- =konserve.store/create-store= - Create new store, error if exists
- =konserve.store/connect-store= - Connect to existing store, error if doesn't exist
- =konserve.store/store-exists?= - Check if store exists
- =konserve.store/delete-store= - Delete underlying storage
- =konserve.store/release-store= - Release resources
All backends must implement strict semantics where =create-store= errors if the store
already exists and =connect-store= errors if the store doesn't exist. See existing
external backends (konserve-s3, konserve-lmdb, konserve-rocksdb, konserve-redis,
konserve-dynamodb) for reference implementations.
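A hypothetical registration skeleton, assuming the multimethods dispatch on the
=:backend= key of the config map and receive the config plus an opts map; treat
the argument shapes as assumptions and consult the backend guide above for the
authoritative signatures:
#+BEGIN_SRC clojure
(require '[konserve.store :as store])

;; Sketch only: bodies elided, argument shapes assumed
(defmethod store/store-exists? :my-backend
  [config opts]
  ;; return a boolean (or a channel yielding one when (:sync? opts) is false)
  false)

(defmethod store/create-store :my-backend
  [config opts]
  ;; must error if a store already exists for this config (strict semantics)
  (throw (ex-info "not implemented" {:config config})))
#+END_SRC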
** Projects Building on Konserve
:PROPERTIES:
:CUSTOM_ID: h:79876ac1-414b-4180-8d65-63737cb3bc53
:END:
- The protocol is used in production and originated as the elementary
  storage protocol for [[https://github.com/replikativ/replikativ][replikativ]] and [[https://github.com/replikativ/datahike][datahike]].
- [[https://github.com/danielsz/kampbell][kampbell]] maps collections of
entities to konserve and enforces specs.
** Combined Usage with Other Writers
:PROPERTIES:
:CUSTOM_ID: h:8a1b4a06-4b9f-496b-9eb2-52ac953a8e35
:END:
Konserve assumes it accesses its keyspace in the store exclusively. It uses
[[https://github.com/replikativ/hasch][hasch]] to support arbitrary edn keys and hence does not normally clash with
outside usage even when the same keys are used. To support multiple
konserve clients in the store, the backend must support locking and
proper transactions on keys internally, which is the case for backends
like CouchDB, Redis and Riak.
** License
:PROPERTIES:
:CUSTOM_ID: h:8153b6f6-d253-4863-86b4-038dd383b6fe
:END:
Copyright © 2014-2026 Christian Weilbach and contributors
Distributed under the Eclipse Public License either version 1.0 or (at
your option) any later version.