{"id":21177617,"url":"https://github.com/tersesystems/blacklite","last_synced_at":"2025-07-09T22:30:47.989Z","repository":{"id":41455240,"uuid":"312916293","full_name":"tersesystems/blacklite","owner":"tersesystems","description":"\"Fast as internal ring buffer\" Logback/Log4J2 appender using SQLite with zstandard dictionary compression and rollover.","archived":false,"fork":false,"pushed_at":"2022-08-19T16:44:47.000Z","size":1206,"stargazers_count":63,"open_issues_count":4,"forks_count":4,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-04-05T08:01:39.541Z","etag":null,"topics":["log4j2","logback","logging","slf4j","sqlite","zstandard"],"latest_commit_sha":null,"homepage":"https://tersesystems.com/blog/2020/11/26/queryable-logging-with-blacklite/","language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tersesystems.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-11-14T22:50:56.000Z","updated_at":"2024-10-05T09:58:12.000Z","dependencies_parsed_at":"2022-09-09T10:00:50.942Z","dependency_job_id":null,"html_url":"https://github.com/tersesystems/blacklite","commit_stats":null,"previous_names":[],"tags_count":9,"template":false,"template_full_name":null,"purl":"pkg:github/tersesystems/blacklite","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tersesystems%2Fblacklite","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tersesystems%2Fblacklite/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tersesystems%2Fblacklite/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tersesy
stems%2Fblacklite/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tersesystems","download_url":"https://codeload.github.com/tersesystems/blacklite/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tersesystems%2Fblacklite/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":264504572,"owners_count":23618825,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["log4j2","logback","logging","slf4j","sqlite","zstandard"],"created_at":"2024-11-20T17:16:37.211Z","updated_at":"2025-07-09T22:30:47.526Z","avatar_url":"https://github.com/tersesystems.png","language":"Java","readme":"# Blacklite\n\n\u003c!---freshmark shields\noutput = [\n    link(shield('Maven central', 'mavencentral', '{{group}}:{{artifactIdMaven}}', 'blue'), 'https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22{{group}}%22%20AND%20a%3A%22{{artifactIdMaven}}%22'),\n\tlink(shield('License Apache-2.0', 'license', 'Apache-2.0', 'blue'), 'https://www.tldrlegal.com/l/apache2'),\n\t''\n\t].join('\\n')\n--\u003e\n[![Maven central](https://img.shields.io/badge/mavencentral-com.tersesystems.blacklite%3Ablacklite--logback-blue.svg)](https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.tersesystems.blacklite%22%20AND%20a%3A%22blacklite-logback%22)\n[![License Apache-2.0](https://img.shields.io/badge/license-Apache--2.0-blue.svg)](https://www.tldrlegal.com/l/apache2)\n\u003c!---freshmark /shields 
--\u003e\n\n[![CI](https://github.com/tersesystems/blacklite/actions/workflows/gradle.yml/badge.svg)](https://github.com/tersesystems/blacklite/actions/workflows/gradle.yml)\n\nBlacklite is an [SQLite](https://www.sqlite.org/index.html) appender that is intended for cases where you want a buffer of logging data, and also want the option of querying logs from different processes with a built-in query language.\n\nSo why use Blacklite?  Logback does come with a built-in [circular buffer appender](https://github.com/qos-ch/logback/blob/master/logback-core/src/main/java/ch/qos/logback/core/read/CyclicBufferAppender.java), but there is no way to dump it on command.  The terse-logback [ring buffer](https://tersesystems.github.io/terse-logback/1.0.0/guide/ringbuffer/) classes will dump a ring buffer, but flush the entire buffer to logs.  This works in a single-user or client scenario as described in [Using Ring Buffer Logging to Help Find Bugs](http://www.exampler.com/writing/ring-buffer.pdf), but does not work in a server-side environment, where there may be many concurrent operations and a much greater volume of logs, which would make \"dumping\" inappropriate.\n\nBlacklite provides this functionality by using a queue **roughly equivalent to an in-memory ring buffer**, and then writing to a database configured for write throughput by using [memory mapping](https://www.sqlite.org/mmap.html) and [write ahead logging](https://sqlite.org/wal.html).  Using SQLite, the buffer can be queried instead of dumped, and a much broader ecosystem of tools can be used with SQLite than with an in-memory ring buffer.\n\nPractically speaking, with some decent hardware you can budget around 800 debugging statements per 1 ms request -- see [benchmarks](BENCHMARKS.md).  Using [conditional logging](https://github.com/tersesystems/echopraxia#why-conditions), you can turn on debug logging in production and get a complete picture of what a single request is doing.  
See [echopraxia-examples](https://github.com/tersesystems/echopraxia-examples) and [terse-logback-showcase](https://github.com/tersesystems/terse-logback-showcase) for a live demonstration.\n\nBlog post [here](https://tersesystems.com/blog/2020/11/26/queryable-logging-with-blacklite/).\n\n## Core Features\n\nBlacklite supports both [Logback](http://logback.qos.ch/) and [Log4J 2](https://logging.apache.org/log4j/2.x/).\n\nBlacklite writes to a single table with the following structure:\n\n```sql\nCREATE TABLE IF NOT EXISTS entries (\n  epoch_secs LONG, -- number of seconds since epoch\n  nanos INTEGER,  -- nanoseconds in the second\n  level INTEGER,  -- numeric level of logging\n  content BLOB    -- raw bytes from logging framework encoder / layout\n);\n```\n\nThe `content` column contains the log entry itself, as bytes.  The only other columns are longs and integers.  There\nare no indexes or autoincrement fields.  Logs stored in Blacklite are the same size as raw files.  In addition, using the\nSQLite file format means [total compatibility](https://sqlite.org/locrsf.html) and support across all platforms.\n\nThe appender incorporates a queue that is bounded by default to a maximum capacity of 1,048,576 entries: you can add a [budget filter](https://tersesystems.github.io/terse-logback/1.0.0/guide/budget/) to impose a limit on the number of entries logged in a duration.  The queue must be bounded because if the filesystem fails completely, e.g. 
there is no space left on the device, the queue cannot be left to fill indefinitely and must error out at some point.\n\n## Archiving and Compression\n\nIn addition, there are a number of features that Blacklite has above and beyond raw append speed:\n\n* Built-in archiving and rollover based on number of rows.\n* Automatic ZStandard dictionary training and compression for 4x disk space savings in archives.\n* `blacklite-core` module allows direct entry writing with no logging framework needed.\n* Database reader to search logs from the command line by \"natural language\" date ranges.\n\nBlacklite also provides a codec for [zstandard](https://facebook.github.io/zstd/), using\nthe [zstd-jni](https://github.com/luben/zstd-jni) library, which is extremely fast and can be tweaked to be competitive\nwith LZ4 using \"negative\" compression levels like \"-4\".  This codec is provided with the archiver so that older records can be automatically compressed.\n\nThe archiver also includes a [dictionary compression](https://facebook.github.io/zstd/#small-data) option.\nIf a dictionary is found, then the archiver will write the compressed content to the archive file. If no dictionary is\nfound, the archiver will train a dictionary on the incoming log entries, then switch over to dictionary compression\nonce the dictionary has been trained.\n\nUsing a dictionary provides both speed and size improvements. An entry that is typically 185 bytes with JSON can shrink\ndown to as few as 32 bytes. 
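To see why a shared dictionary helps entries this small, here is a stdlib-only sketch in Python that uses zlib's preset-dictionary support as a stand-in for zstandard (Blacklite itself uses zstd-jni; the entry fields below are invented for illustration):

```python
import json
import zlib

# A typical small JSON log entry (comparable to the ~185 byte entries above).
entry = json.dumps({
    "@timestamp": "2020-10-18T14:14:00.123Z",
    "message": "Request completed",
    "logger_name": "com.example.Controller",
    "thread_name": "http-nio-8080-exec-1",
    "level": "DEBUG",
}).encode("utf-8")

# Plain compression: a single small entry has little internal redundancy.
plain = zlib.compress(entry, 9)

# Preset-dictionary compression: seed the compressor with the shared
# structure of earlier entries, so the common keys cost almost nothing.
# (Real training uses many samples; here the dictionary is one entry.)
dictionary = entry
co = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=dictionary)
with_dict = co.compress(entry) + co.flush()

print(len(entry), len(plain), len(with_dict))
```

The dictionary-compressed output is a fraction of both the raw entry and the plain-compressed size, which is the same effect the archiver's trained zstandard dictionary exploits.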
This adds up extremely quickly when you start working with larger log files.\n\nThis is all very abstract, so here's a real-life example using 2,001,000 log entries with the logstash logback encoder\nwriting out JSON.\n\nFor the unencoded content:\n\n```\n❱ ls -lh blacklite.json\n-rw-rw-r-- 1 wsargent wsargent 431M Oct 18 14:14 blacklite.json\n```\n\nCompare with the encoded SQLite database using dictionary compression:\n\n```\n❱ ls -lh archive.db\n-rw-rw-r-- 1 wsargent wsargent 177M Oct 18 14:14 archive.db\n```\n\nBoth still contain the same number of records:\n\n```\n❱ sqlite3 archive.db  \"select count(*) from entries\"\n2001000\n❱ wc blacklite.json\n  2001000   6002000 451212069 blacklite.json\n```\n\n## Reading\n\nProviding data in SQLite format means you can leverage tools built using SQLite.  I typically set [DB Browser](https://sqlitebrowser.org/) as the default application for `*.db` files in IntelliJ IDEA, so double-clicking will bring up the GUI.\n\n### Editor / IDE Plugins\n\n* [sqlite VS Code Plugin](https://marketplace.visualstudio.com/items?itemName=alexcvzz.vscode-sqlite)\n* [Database Navigator for IntelliJ IDEA](https://plugins.jetbrains.com/plugin/1800-database-navigator)\n\n### GUI Tools\n\n* [SQLite Browser](https://sqlitebrowser.org/)\n* [SQLite Speed](https://sqlitespeed.com/)\n\n### Command Line Tools\n\n* [blacklite-reader](https://github.com/tersesystems/blacklite/tree/main/blacklite-reader/)\n* [sqlite-utils](https://sqlite-utils.readthedocs.io/en/stable/): Read and process SQLite files from the command line\n\n### Web Applications\n\n* [Datasette](https://docs.datasette.io/en/stable/): Exposing SQLite files as web applications\n* [Observable HQ](https://observablehq.com/@mbostock/sqlite): Using SQLite data in visualization notebooks\n\n### Scripts\n\nThere are scripts available for manipulating SQLite databases in REPL environments and for processing entries as JSON through small programs.\n\nSee the [jbang scripts](scripts/jbang/README.md) and the [Python 
scripts](scripts/python/README.md) for more detail.\n\nAlso you can work with sqlite [directly](SQLITE.md).\n\n## Installation\n\n### Gradle\n\nAdd the following resolver:\n\n```\nrepositories {\n    mavenCentral()\n}\n```\n\nAnd then add the libraries and codecs that you want.\n\nFor logback:\n\n```\nimplementation 'com.tersesystems.blacklite:blacklite-logback:\u003clatestVersion\u003e'\nimplementation 'com.tersesystems.blacklite:blacklite-codec-zstd:\u003clatestVersion\u003e'\n```\n\nor for log4j:\n\n```\nimplementation 'com.tersesystems.blacklite:blacklite-log4j2:\u003clatestVersion\u003e'\nimplementation 'com.tersesystems.blacklite:blacklite-log4j2-codec-zstd:\u003clatestVersion\u003e'\n```\n\n### Maven\n\nFor logback:\n\n```xml\n\u003cdependency\u003e\n  \u003cgroupId\u003ecom.tersesystems.blacklite\u003c/groupId\u003e\n  \u003cartifactId\u003eblacklite-logback\u003c/artifactId\u003e\n  \u003cversion\u003e$latestVersion\u003c/version\u003e\n\u003c/dependency\u003e\n\n\u003cdependency\u003e\n  \u003cgroupId\u003ecom.tersesystems.blacklite\u003c/groupId\u003e\n  \u003cartifactId\u003eblacklite-codec-zstd\u003c/artifactId\u003e\n  \u003cversion\u003e$latestVersion\u003c/version\u003e\n\u003c/dependency\u003e\n```\n\nor log4j:\n\n```xml\n\u003cdependency\u003e\n  \u003cgroupId\u003ecom.tersesystems.blacklite\u003c/groupId\u003e\n  \u003cartifactId\u003eblacklite-log4j\u003c/artifactId\u003e\n  \u003cversion\u003e$latestVersion\u003c/version\u003e\n\u003c/dependency\u003e\n\n\u003cdependency\u003e\n  \u003cgroupId\u003ecom.tersesystems.blacklite\u003c/groupId\u003e\n  \u003cartifactId\u003eblacklite-log4j2-codec-zstd\u003c/artifactId\u003e\n  \u003cversion\u003e$latestVersion\u003c/version\u003e\n\u003c/dependency\u003e\n```\n\n### SBT\n\nSBT installation is fairly straightforward.\n\n```sbt\nlibraryDependencies += \"com.tersesystems.blacklite\" % \"blacklite-logback\" % \"\u003clatestVersion\u003e\"\nlibraryDependencies += \"com.tersesystems.blacklite\" % 
\"blacklite-codec-zstd\" % \"\u003clatestVersion\u003e\"\n```\n\nOr log4j:\n\n```sbt\nlibraryDependencies += \"com.tersesystems.blacklite\" % \"blacklite-log4j\" % \"\u003clatestVersion\u003e\"\nlibraryDependencies += \"com.tersesystems.blacklite\" % \"blacklite-log4j2-codec-zstd\" % \"\u003clatestVersion\u003e\"\n```\n\n## Configuration\n\n### Logback\n\nThe Logback appender uses [JCTools](https://jctools.github.io/JCTools/) internally as an asynchronous queue.  This means you don't need to use an `AsyncAppender` or `LoggingEventAsyncDisruptorAppender` on top.\n\nYou should always use a `shutdownHook` to allow Logback to drain the queue before exiting.\n\nThe appender consists of a `file` property and an `encoder`, which encodes the bytes written to the `content` field in an entry.\n\nThe `batchInsertSize` property determines the number of entries to batch before writing to the database.  This is a highwater mark that only applies when the number of inserts has gone over a certain point without idling -- this situation usually only applies when using an archiver, which will take over the connection for the duration.  When archiving, new entries will buffer in the queue, and then be drained and inserted in batches.   
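The drain-and-batch behaviour can be sketched with Python's built-in `sqlite3` module (a simplified model for illustration, not Blacklite's actual implementation; the queue contents and batch size are invented):

```python
import sqlite3
from collections import deque

# Simplified model: entries accumulate in a bounded queue and are flushed
# with one executemany/commit per batch, rather than one commit per entry.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS entries "
    "(epoch_secs LONG, nanos INTEGER, level INTEGER, content BLOB)"
)

queue = deque(
    (1603030440 + i, 0, 10000, b'{"message":"hello"}') for i in range(2500)
)
BATCH_INSERT_SIZE = 1000  # analogous to the batchInsertSize property

while queue:
    batch = [queue.popleft() for _ in range(min(BATCH_INSERT_SIZE, len(queue)))]
    conn.executemany("INSERT INTO entries VALUES (?, ?, ?, ?)", batch)
    conn.commit()  # one commit per batch, not per entry

print(conn.execute("SELECT count(*) FROM entries").fetchone()[0])
```

Batching the commits is what lets the consumer keep up with bursts: the per-commit fsync cost is amortized over the whole batch.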
Under normal circumstances, when the thread is idle, it will `executeBatch/commit` any outstanding inserts, meaning you will see database entries immediately.\n\nIf no archiver is defined, the default is the `DeletingArchiver`, set to `10000` rows.\n\n```xml\n\u003cconfiguration\u003e\n \u003cproperty name=\"db.dir\" value=\"${java.io.tmpdir}/blacklite-logback\"/\u003e\n\n \u003cshutdownHook class=\"ch.qos.logback.core.hook.DelayingShutdownHook\"\u003e\n  \u003cdelay\u003e1000\u003c/delay\u003e\n \u003c/shutdownHook\u003e\n\n \u003cappender name=\"BLACKLITE\" class=\"com.tersesystems.blacklite.logback.BlackliteAppender\"\u003e\n  \u003cfile\u003e${db.dir}/live.db\u003c/file\u003e\n\n  \u003carchiver class=\"com.tersesystems.blacklite.archive.DeletingArchiver\"\u003e\n   \u003carchiveAfterRows\u003e10000000\u003c/archiveAfterRows\u003e\n  \u003c/archiver\u003e\n\n  \u003cencoder class=\"net.logstash.logback.encoder.LogstashEncoder\"\u003e\n  \u003c/encoder\u003e\n \u003c/appender\u003e\n\n \u003croot level=\"TRACE\"\u003e\n  \u003cappender-ref ref=\"BLACKLITE\"/\u003e\n \u003c/root\u003e\n\n\u003c/configuration\u003e\n```\n\n#### Deleting Archiver\n\nThe deleting archiver will delete the oldest entries in the database when the highwater mark is reached.\n\nNote that the database file may stay notably larger than the row count would suggest after deletion, because SQLite keeps freed pages for reuse rather than shrinking the file.  You can run `VACUUM` at regular intervals to recover space.\n\nThe maximum number of rows in the table is set using the `archiveAfterRows` property. 
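What the deleting archiver does can be approximated in a few lines of Python `sqlite3` (an illustrative sketch, not the actual implementation; rowid order stands in for insertion order, and the row count and threshold are invented):

```python
import sqlite3

# Sketch: once the table exceeds the high-water mark, delete the oldest
# rows, treating rowid order as insertion order.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE entries "
    "(epoch_secs LONG, nanos INTEGER, level INTEGER, content BLOB)"
)
conn.executemany(
    "INSERT INTO entries VALUES (?, 0, 10000, ?)",
    [(1600000000 + i, b"entry %d" % i) for i in range(150)],
)

ARCHIVE_AFTER_ROWS = 100  # analogous to the archiveAfterRows property

(count,) = conn.execute("SELECT count(*) FROM entries").fetchone()
if count > ARCHIVE_AFTER_ROWS:
    conn.execute(
        "DELETE FROM entries WHERE rowid IN "
        "(SELECT rowid FROM entries ORDER BY rowid LIMIT ?)",
        (count - ARCHIVE_AFTER_ROWS,),
    )
    conn.commit()

print(conn.execute("SELECT count(*) FROM entries").fetchone()[0])
```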
There is no facility for unbounded growth, but you can set this number to `Long.MaxValue`, which is 2\u003csup\u003e63\u003c/sup\u003e-1.\n\n```xml\n\u003carchiver class=\"com.tersesystems.blacklite.archive.DeletingArchiver\"\u003e\n    \u003carchiveAfterRows\u003e10000\u003c/archiveAfterRows\u003e\n\u003c/archiver\u003e\n```\n\n#### Rolling Archiver\n\nThe rolling archiver can be a bit complicated, but it works much the same way that rolling file appenders do.\n\nThe archiver has an `archiveAfterRows` property that is the maximum number of rows in the live database.  When there are more rows than that, archiving takes place.\n\nThe rolling archiver will keep older log entries by moving them into other SQLite databases. When the maximum number of rows is reached, the oldest rows will be moved into the archive specified by the `file` property.  A compression codec can be\napplied when rows are moved into the archive to save on disk space.\n\nThe archive file will be rolled over when the triggering policy is matched.  
In the case of the `RowBasedTriggeringPolicy`,\nthis is the maximum number of rows in the archive database -- after that, the archive database will be renamed according to\nthe rolling strategy and another archive file will be created.\n\n```xml\n\u003carchiver class=\"com.tersesystems.blacklite.archive.RollingArchiver\"\u003e\n    \u003cfile\u003e/tmp/blacklite/archive.db\u003c/file\u003e\n    \u003carchiveAfterRows\u003e10000\u003c/archiveAfterRows\u003e\n\n    \u003ccodec class=\"com.tersesystems.blacklite.codec.zstd.ZStdCodec\"\u003e\n        \u003clevel\u003e9\u003c/level\u003e\n    \u003c/codec\u003e\n\n    \u003ctriggeringPolicy class=\"com.tersesystems.blacklite.archive.RowBasedTriggeringPolicy\"\u003e\n        \u003cmaximumNumRows\u003e500000\u003c/maximumNumRows\u003e\n    \u003c/triggeringPolicy\u003e\n\n    \u003crollingStrategy class=\"com.tersesystems.blacklite.logback.TimeBasedRollingStrategy\"\u003e\n        \u003cfileNamePattern\u003elogs/archive.%d{yyyyMMdd'T'hhmm,utc}.db\u003c/fileNamePattern\u003e\n        \u003cmaxHistory\u003e20\u003c/maxHistory\u003e\n    \u003c/rollingStrategy\u003e\n\n\u003c/archiver\u003e\n```\n\n##### Codec\n\nThe rolling archiver can take a codec that compresses the content of the bytes produced by the encoder.  This can be very effective.\n\n```xml\n\u003ccodec class=\"com.tersesystems.blacklite.codec.zstd.ZStdCodec\"\u003e\n    \u003clevel\u003e9\u003c/level\u003e\n\u003c/codec\u003e\n```\n\nIf using dictionary compression, it's `ZStdDictCodec` and the dictionary must be defined in a repository.\n\nThere are two repositories for dictionaries: `ZstdDictFileRepository` which points directly to a zstandard\n dictionary on the filesystem, and `SqliteRepository` which keeps dictionaries in an sqlite database.\n\nBlacklite will automatically train a dictionary from the incoming content if it does not exist.  
You can\ntweak the dictionary parameters, but the defaults work fine.\n\n```xml\n\u003ccodec class=\"com.tersesystems.blacklite.codec.zstd.ZStdDictCodec\"\u003e\n\u003clevel\u003e9\u003c/level\u003e\n  \u003crepository class=\"com.tersesystems.blacklite.codec.zstd.ZstdDictFileRepository\"\u003e\n    \u003cfile\u003elogs/dictionary\u003c/file\u003e\n  \u003c/repository\u003e\n\u003c/codec\u003e\n```\n\nYou can also specify a SQLite database containing dictionaries, using the zstandard dictionary ids as a lookup.  This lets you use multiple dictionaries.\n\n```xml\n\u003crepository class=\"com.tersesystems.blacklite.codec.zstd.ZStdDictSqliteRepository\"\u003e\n  \u003cfile\u003elogs/dictionary.db\u003c/file\u003e\n\u003c/repository\u003e\n```\n\nBe aware that if you use a zstandard dictionary, you must have it available to read the logs.  If you lose it, the logs will be unreadable!\n\n##### Triggering Policy\n\nThere is one triggering policy, using the maximum number of rows in the archive.\n\n```xml\n\u003ctriggeringPolicy class=\"com.tersesystems.blacklite.archive.RowBasedTriggeringPolicy\"\u003e\n    \u003cmaximumNumRows\u003e500000\u003c/maximumNumRows\u003e\n\u003c/triggeringPolicy\u003e\n```\n\n##### Rolling Strategies\n\nFixed Window Rolling Strategy will set up a number of SQLite archive databases, using `%i` to indicate the index.\n\n```xml\n\u003crollingStrategy class=\"com.tersesystems.blacklite.logback.FixedWindowRollingStrategy\"\u003e\n  \u003cfileNamePattern\u003elogs/archive.%i.db\u003c/fileNamePattern\u003e\n  \u003cminIndex\u003e1\u003c/minIndex\u003e\n  \u003cmaxIndex\u003e10\u003c/maxIndex\u003e\n\u003c/rollingStrategy\u003e\n```\n\nTime Based Rolling Strategy uses a date system, which will roll over renaming the file to the given date.\n\n```xml\n\u003crollingStrategy class=\"com.tersesystems.blacklite.logback.TimeBasedRollingStrategy\"\u003e\n  \u003cfileNamePattern\u003elogs/archive.%d{yyyyMMdd'T'hhmm,utc}.db\u003c/fileNamePattern\u003e\n  
\u003cmaxHistory\u003e20\u003c/maxHistory\u003e\n  \u003ctotalSizeCap\u003e10M\u003c/totalSizeCap\u003e\n  \u003ccleanHistoryOnStartup\u003etrue\u003c/cleanHistoryOnStartup\u003e\n\u003c/rollingStrategy\u003e\n```\n\n### Log4J 2\n\nThe Log4J 2 is similar to the Logback appender:\n\n```xml\n\u003c?xml version=\"1.0\" encoding=\"UTF-8\"?\u003e\n\u003cConfiguration status=\"INFO\" packages=\"com.tersesystems.blacklite.log4j2,com.tersesystems.blacklite.log4j2.zstd\"\u003e\n \u003cappenders\u003e\n  \u003cConsole name=\"Console\" target=\"SYSTEM_OUT\"\u003e\n   \u003cPatternLayout pattern=\"%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n\"/\u003e\n  \u003c/Console\u003e\n\n  \u003cBlacklite name=\"Blacklite\" file=\"/${sys:java.io.tmpdir}/blacklite-log4j2-zstd/live.db\"\u003e\n   \u003cLogstashLayout dateTimeFormatPattern=\"yyyy-MM-dd'T'HH:mm:ss.SSSSSSZZZ\"\n                   eventTemplateUri=\"classpath:LogstashJsonEventLayoutV1.json\"\n                   prettyPrintEnabled=\"false\"/\u003e\n\n   \u003cRollingArchiver file=\"/${sys:java.io.tmpdir}/blacklite-log4j2-zstd/archive.db\"\u003e\n    \u003c!--\u003cZStdCodec level=\"3\"/\u003e--\u003e\n    \u003cZStdDictCodec\u003e\n     \u003clevel\u003e3\u003c/level\u003e\n     \u003csampleSize\u003e102400000\u003c/sampleSize\u003e\n     \u003cdictSize\u003e10485760\u003c/dictSize\u003e\n     \u003c!-- \u003cFileRepository file=\"${sys:java.io.tmpdir}/blacklite/dictionary\"/\u003e --\u003e\n     \u003cSqliteRepository url=\"jdbc:sqlite:${sys:java.io.tmpdir}/blacklite-log4j2-zstd/dict.db\"/\u003e\n    \u003c/ZStdDictCodec\u003e\n\n    \u003cFixedWindowRollingStrategy\n            min=\"1\"\n            max=\"5\"\n            filePattern=\"${sys:java.io.tmpdir}/blacklite-log4j2-zstd/archive-%i.db\"/\u003e\n    \u003cRowBasedTriggeringPolicy\u003e\n     \u003cmaximumNumRows\u003e500000\u003c/maximumNumRows\u003e\n    \u003c/RowBasedTriggeringPolicy\u003e\n   \u003c/RollingArchiver\u003e\n  \u003c/Blacklite\u003e\n 
\u003c/appenders\u003e\n \u003cLoggers\u003e\n  \u003cRoot level=\"DEBUG\"\u003e\n   \u003cAppenderRef ref=\"Blacklite\"/\u003e\n  \u003c/Root\u003e\n \u003c/Loggers\u003e\n\u003c/Configuration\u003e\n```\n\nIt is broadly similar to the Logback system, with the same settings.\n\n#### NoOpArchiver\n\nThe no-op archiver does nothing:\n\n```xml\n\u003cNoOpArchiver/\u003e\n```\n\n#### DeletingArchiver\n\nThe deleting archiver will delete the oldest rows once the row count exceeds the `archiveAfterRows` property:\n\n```xml\n\u003cDeletingArchiver archiveAfterRows=\"100\"\u003e\n  \u003cRowBasedTriggeringPolicy\u003e\n    \u003cmaximumNumRows\u003e100\u003c/maximumNumRows\u003e\n  \u003c/RowBasedTriggeringPolicy\u003e\n\u003c/DeletingArchiver\u003e\n```\n\n#### RollingArchiver\n\nThe rolling archiver is as follows:\n\n```xml\n\u003cRollingArchiver file=\"${sys:java.io.tmpdir}/blacklite-log4j2/archive.db\" archiveAfterRows=\"10000\"\u003e\n \u003c!-- rolling strategy --\u003e\n \u003c!-- triggering policy --\u003e\n\u003c/RollingArchiver\u003e\n```\n\n##### Fixed Window Rolling Strategy\n\nThe fixed window rolling strategy is as follows:\n\n```xml\n\u003cFixedWindowRollingStrategy\n            min=\"1\"\n            max=\"5\"\n            filePattern=\"${sys:java.io.tmpdir}/blacklite-log4j2-zstd/archive-%i.db\"/\u003e\n```\n\nThere is no time-based rolling strategy for Log4J2 at this time: I don't understand how to extract the functionality and make it available.\n\n## Benchmarks\n\nSee [BENCHMARKS.md](BENCHMARKS.md).\n\nLogging takes between 25 and 60 ns to enter the in-memory queue, depending on the queue size.  The appender will happily accept bursts of logging to the queue, and will drain from the queue and insert into the database in batches.\n\nOn the backend, the SQLite consumer is single-threaded, and can sustain ~2 us/op for small entries using batched commits with an SQLite instance mounted on a `tmpfs` filesystem.  
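Since the live buffer is an ordinary SQLite database, it can be queried while the appender is writing. A minimal sketch with Python's built-in `sqlite3` module, using the `entries` schema shown earlier (the in-memory database, sample row, and ERROR-level threshold are assumptions for illustration; point the connection at your own `live.db`):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical example: fetch recent high-severity entries.
conn = sqlite3.connect(":memory:")  # e.g. "/tmp/blacklite-logback/live.db"
conn.execute(
    "CREATE TABLE IF NOT EXISTS entries "
    "(epoch_secs LONG, nanos INTEGER, level INTEGER, content BLOB)"
)
now = datetime.now(timezone.utc)
conn.execute(
    "INSERT INTO entries VALUES (?, 0, 40000, ?)",
    (int(now.timestamp()), b'{"message":"boom"}'),
)

since = int((now - timedelta(minutes=5)).timestamp())
rows = conn.execute(
    "SELECT epoch_secs, nanos, content FROM entries "
    "WHERE epoch_secs >= ? AND level >= ? ORDER BY epoch_secs, nanos",
    (since, 40000),  # 40000 is ERROR in Logback's numeric levels
).fetchall()
for epoch_secs, nanos, content in rows:
    print(datetime.fromtimestamp(epoch_secs, timezone.utc), content.decode("utf-8"))
```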
For comparison, using Logback with a file appender with `immediateFlush=false` is between [636 and 850 ns/op](https://github.com/wsargent/slf4j-benchmark) but lacks the row-based truncation, querying, indexing, and backup that come with SQLite.\n\nAll of this is of course subject to your encoding, your logging framework, and your specific hardware.\n\n## Setting up tmpfs\n\nIn cases where you want to use Blacklite as a persistent ring buffer, using a `tmpfs` filesystem as a backing store is a great way to avoid fsync.  This is a tactic used by [Alluxio](https://github.com/Alluxio/alluxio/blob/master/core/common/src/main/java/alluxio/worker/block/meta/StorageTier.java#L141), for example.\n\nThe easiest thing to do is to set up `/var/log` as [tmpfs](\nhttps://forums.gentoo.org/viewtopic-t-371889-start-0-postdays-0-postorder-asc-highlight-tmpfs.html?sid=13bc57e79de631391821d1869615eb45) and go from there.\n\nUsing a `tmpfs` filesystem does not require that you constrain your logs to the amount of memory you have, but it does mean that the logs will be removed when the server shuts down.  To get around this, you can [run some scripts on shutdown](https://web.archive.org/web/20200809170437/https://debian-administration.org/article/661/A_transient_/var/log) to transfer the log files.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftersesystems%2Fblacklite","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftersesystems%2Fblacklite","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftersesystems%2Fblacklite/lists"}