{"id":13630751,"url":"https://github.com/uber/logtron","last_synced_at":"2025-06-11T17:39:57.084Z","repository":{"id":66084606,"uuid":"25270179","full_name":"uber/logtron","owner":"uber","description":"A logging MACHINE","archived":false,"fork":false,"pushed_at":"2023-04-14T07:31:55.000Z","size":234,"stargazers_count":159,"open_issues_count":10,"forks_count":21,"subscribers_count":2680,"default_branch":"master","last_synced_at":"2025-04-10T16:52:06.127Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/uber.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2014-10-15T19:33:03.000Z","updated_at":"2024-08-09T10:11:49.000Z","dependencies_parsed_at":null,"dependency_job_id":"2e2fff7e-fbcb-4f7b-b62a-321b24e1cf80","html_url":"https://github.com/uber/logtron","commit_stats":{"total_commits":176,"total_committers":28,"mean_commits":6.285714285714286,"dds":0.6931818181818181,"last_synced_commit":"356f5d6814e88915ad097ab1d779bb107b139905"},"previous_names":[],"tags_count":60,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/uber%2Flogtron","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/uber%2Flogtron/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/uber%2Flogtron/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/uber%2Flogtron/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/ow
ners/uber","download_url":"https://codeload.github.com/uber/logtron/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":249360008,"owners_count":21257148,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T22:01:58.421Z","updated_at":"2025-04-17T17:31:44.090Z","avatar_url":"https://github.com/uber.png","language":"JavaScript","readme":"# logtron\n\n[![build status](https://secure.travis-ci.org/uber/logtron.svg)](http://travis-ci.org/uber/logtron)\n\nlogger used in realtime\n\n## Example\n\n```js\nvar Logger = require('logtron');\n\nvar statsd =  StatsdClient(...)\n\n/*  configure your logger\n\n     - pass in meta data to describe your service\n     - pass in your backends of choice\n*/\nvar logger = Logger({\n    meta: {\n        team: 'my-team',\n        project: 'my-project'\n    },\n    backends: Logger.defaultBackends({\n        logFolder: '/var/log/nodejs',\n        console: true,\n        kafka: { proxyHost: 'localhost', proxyPort: 9093 },\n        sentry: { id: '{sentryId}' }\n    }, {\n        // pass in a statsd client to turn on an airlock prober\n        // on the kafka and sentry connection\n        statsd: statsd\n    })\n});\n\n/* now write your app and use your logger */\nvar http = require('http');\n\nvar server = http.createServer(function (req, res) {\n    logger.info('got a request', {\n        uri: req.url\n    });\n\n    res.end('hello world');\n});\n\nserver.listen(8000, function () {\n    var addr = server.address();\n    logger.info('server bound', {\n        port: 
addr.port,\n        address: addr.address\n    });\n});\n\n/* maybe some error handling */\nserver.on(\"error\", function (err) {\n    logger.error(\"unknown server error\", err);\n});\n```\n\n## Docs\n\n### Type definitions\n\nSee [docs.mli](docs.mli) for type definitions\n\n### `var logger = Logger(options)`\n\n```ocaml\ntype Backend := {\n    createStream: (meta: Object) =\u003e WritableStream\n}\n\ntype Entry := {\n    level: String,\n    message: String,\n    meta: Object,\n    path: String\n}\n\ntype Logger := {\n    trace: (message: String, meta: Object, cb?: Callback) =\u003e void,\n    debug: (message: String, meta: Object, cb?: Callback) =\u003e void,\n    info: (message: String, meta: Object, cb?: Callback) =\u003e void,\n    access?: (message: String, meta: Object, cb?: Callback) =\u003e void,\n    warn: (message: String, meta: Object, cb?: Callback) =\u003e void,\n    error: (message: String, meta: Object, cb?: Callback) =\u003e void,\n    fatal: (message: String, meta: Object, cb?: 
Callback) =\u003e void,\n    writeEntry: (Entry, cb?: Callback) =\u003e void,\n    createChild: (path: String, Object\u003clevelName: String\u003e, Object\u003copts: String\u003e) =\u003e Logger\n}\n\ntype LogtronLogger := EventEmitter \u0026 Logger \u0026 {\n    instrument: (server?: HttpServer, opts?: Object) =\u003e void,\n    destroy: ({\n        createStream: (meta: Object) =\u003e WritableStream\n    }) =\u003e void\n}\n\nlogtron/logger := ((LoggerOpts) =\u003e LogtronLogger) \u0026 {\n    defaultBackends: (config: {\n        logFolder?: String,\n        kafka?: {\n            proxyHost: String,\n            proxyPort: Number\n        },\n        console?: Boolean,\n        sentry?: {\n            id: String\n        }\n    }, clients?: {\n        statsd: StatsdClient,\n        kafkaClient?: KafkaClient\n    }) =\u003e {\n        disk: Backend | null,\n        kafka: Backend | null,\n        console: Backend | null,\n        sentry: Backend | null\n    }\n}\n```\n\n`Logger` takes a set of meta information for the logger, which\n  each backend will use to customize its log formatting, and a\n  set of backends that you want to be able to write to.\n\n`Logger` returns a logger object that has some method names\n  in common with `console`.\n\n#### `options.meta.name`\n\n`options.meta.name` is the name of your application; you should\n  supply a string for this option. Various backends may use\n  this value to configure themselves.\n\nFor example the `Disk` backend uses the `name` to create a\n  filename for you.\n\n#### `options.meta.team`\n\n`options.meta.team` is the name of the team that this application\n  belongs to. Various backends may use this value to configure\n  themselves.\n\nFor example the `Disk` backend uses the `team` to create a\n  filename for you.\n\n#### `options.meta.hostname`\n\n`options.meta.hostname` is the hostname of the server this\n  application is running on. 
You can use\n  `require('os').hostname()` to get the hostname of your machine.\n  Various backends may use this value to configure themselves.\n\nFor example the `Sentry` backend uses the `hostname` as meta\n  data to send to sentry so you can identify which host caused\n  the sentry error in their visual error inspector.\n\n#### `options.meta.pid`\n\n`options.meta.pid` is the `pid` of your process. You can get the\n  `pid` of your process by reading `process.pid`. Various\n  backends may use this value to configure themselves.\n\nFor example the `Disk` backend or `Console` backend may prepend\n  the process pid to all log messages or somehow embed it in the\n  log message. This allows you to tail a log and identify which\n  process misbehaves.\n\n#### `options.backends`\n\n`options.backends` is how you specify the backends you want to\n  set for your logger. `backends` should be an object of key\n  value pairs, where the key is the name of the backend and the\n  value is something matching the `Backend` interface.\n\nOut of the box, the `logger` supports four different backend\n  names: `\"disk\"`, `\"console\"`, `\"kafka\"`\n  and `\"sentry\"`.\n\nIf you want to disable a backend, for example `\"console\"`, then\n  you should just not pass in a console backend to the logger.\n\nA valid `Backend` is an object with a `createStream` method.\n  `createStream` gets passed `options.meta` and must return a\n  `WritableStream`.\n\nThere is a set of functions in `logtron/backends` that you can\n  require to make specifying backends easier.\n\n - `require('logtron/backends/disk')`\n - `require('logtron/backends/console')`\n - `require('logtron/backends/kafka')`\n - `require('logtron/backends/sentry')`\n\n#### `options.transforms`\n\n`options.transforms` is an optional array of transform functions.\n  The transform functions get called with\n  `[levelName, message, metaObject]` and must return a tuple of\n  `[levelName, message, metaObject]`.\n\nA `transform` is 
a good place to put transformation logic before\n  it gets logged to a backend.\n\nEach function in the transforms array will get called in order.\n\nA good use-case for the transforms array is pretty printing\n  certain objects like `HttpRequest` or `HttpResponse`. Another\n  good use-case is scrubbing sensitive data.\n\n#### `logger`\n\n`Logger(options)` returns a `logger` object. The `logger` has\n  a set of logging methods named after the levels for the\n  logger and a `destroy()` method.\n\nEach level method (`info()`, `warn()`, `error()`, etc.) takes\n  a string and an object of more information. You can also pass\n  in an optional callback as the third parameter.\n\nThe `string` message argument to the level method should be\n  a static string, not a dynamic string. This allows anyone\n  analyzing the logs to quickly find the callsite in the code\n  and anyone looking at the callsite in the code to quickly\n  grep through the logs to find all prints.\n\nThe `object` information argument should be the dynamic\n  information that you want to log at the callsite. Things like\n  an id, a uri, extra information, etc. are great things to add\n  here. You should favor placing dynamic information in the \n  information object, not in the message string.\n\nEach level method will write to a different set of backends.\n\nSee [bunyan level descriptions][bunyan] for more / alternative \n  suggestions around how to use levels.\n\n#### `logger.trace(message, information, callback?)`\n\n`trace()` will write your log message to the\n  `[\"console\"]` backends.\n\nNote that due to the high-volume nature of `trace()` it should\n  not be spamming `\"disk\"`.\n\n`trace()` is meant to be used to write tracing information to\n  your logger. 
This is mainly used for high-volume performance\n  debugging.\n\nIt's expected that you change the `trace` level configuration to\n  basically write nowhere in production and be manually toggled\n  on to write to local disk / stdout if you really want to \n  trace a production process.\n\n#### `logger.debug(message, information, callback?)`\n\n`debug()` will write your log message to the \n  `[\"disk\", \"console\"]` backends.\n\nNote that due to the higher-volume nature of `debug()` it should\n  not be spamming `\"kafka\"`.\n\n`debug()` is meant to be used to write debugging information.\n  Debugging information is information that is purely about the\n  code and not about the business logic. You might want to \n  print a debug if there is a programmer bug instead of an \n  application / business logic bug.\n\nIf you're going to add a high-volume `debug()` callsite that will\n  get called a lot or get called in a loop, consider using\n  `trace()` instead.\n\nIt's expected that the `debug` level is enabled in production\n  by default.\n\n#### `logger.info(message, information, callback?)`\n\n`info()` will write your log message to the \n  `[\"disk\", \"kafka\", \"console\"]` backends.\n\n`info()` is meant to be used when you want to print informational\n  messages that concern application or business logic. These\n  messages should just record that a \"useful thing\" has happened.\n\nYou should use `warn()` or `error()` if you want to print that\n  a \"strange thing\" or \"wrong thing\" has happened.\n\nIf you're going to print information that does not concern\n  business or application logic, consider using `debug()` instead.\n\n#### `logger.warn(message, information, callback?)`\n\n`warn()` will write your log message to the \n  `[\"disk\", \"kafka\", \"console\"]` backends.\n\n`warn()` is meant to be used when you want to print warning\n  messages that concern application or business logic. These\n  messages should just record that an \"unusual thing\" has\n  happened. 
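\n\nFor example (an illustrative sketch; the messages and meta fields here are made up, not part of logtron's API):\n\n```js\n// unusual but handled: a correct code path in an abnormal situation\nlogger.warn('request retried', { uri: req.url, attempt: 2 });\n\n// if you could not recover cleanly, prefer error() instead\nlogger.error('request failed', { uri: req.url, error: err });\n```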
\n\nIf you're in a code path where you cannot recover or continue\n  cleanly you should consider using `error()` instead. `warn()`\n  is generally used for code paths that are correct but not\n  normal.\n\n#### `logger.error(message, information, callback?)`\n\n`error()` will write your log message to the \n  `[\"disk\", \"kafka\", \"console\", \"sentry\"]` backends.\n\nNote that due to the importance of error messages they should be\n  going to `\"sentry\"` so we can track all errors for an \n  application using sentry.\n\n`error()` is meant to be used when you want to print error\n  messages that concern application or business logic. These\n  messages should just record that a \"wrong thing\" has happened.\n\nYou should use `error()` whenever something incorrect or \n  unhandleable happens.\n\nIf you're in a code path that is uncommon but still correct, \n  consider using `warn()` instead. \n\n#### `logger.fatal(message, information, callback?)`\n\n`fatal()` will write your log message to the \n  `[\"disk\", \"kafka\", \"console\", \"sentry\"]` backends.\n\n`fatal()` is meant to be used to print a fatal error. A fatal\n  error should happen when something unrecoverable happens, i.e.\n  it is fatal for the currently running node process.\n\nYou should use `fatal()` when something becomes corrupt and it\n  cannot be recovered without a restart or when a key part of\n  infrastructure is fatally missing. You should also use\n  `fatal()` when you interact with an unrecoverable error.\n\nIf your error is recoverable or you are not going to shut down\n  the process you should use `error()` instead.\n\nIt's expected that you shut down the process once you have verified\n  that the `fatal()` error message has been logged. 
You can\n  do either a hard or soft shutdown.\n\n#### `logger.createChild({path: String, levels?, opts?})`\n\nThe `createChild` method returns a Logger that will create entries at a\n  nested path.\n\nPaths are lower-case and dot-delimited.\n  Child loggers can be nested within other child loggers to\n  construct deeper paths.\n\nChild loggers implement log level methods for every key in\n  the given levels, or the default levels. The levels must\n  be given as an object, and the values are not important\n  for the use of `createChild`, but `true` will suffice if\n  there isn't an object lying around with the keys you\n  need.\n\n`opts` specifies options for the child logger. The available\n  options are to enable strict mode, and to add metadata to\n  each entry. To enable strict mode pass the `strict` key in\n  the options with a true value. In strict mode the child\n  logger will ensure that each log level has a corresponding\n  backend in the parent logger. Otherwise the logger will\n  replace any missing parent methods with a no-op function.\n  If you wish to add meta data to each of the child's log\n  entries, set the `extendMeta` key to `true` and the `meta` key to an\n  object with your meta data. The `metaFilter` key takes an\n  array of objects which will create filters that are run \n  at log time. This allows you to automatically add the \n  current value of an object property to the log meta without \n  having to manually add the values at each log site. The format\n  of a filter object is: `{'object': targetObj, 'mappings': {'src': 'dst', 'src2': 'dst2'}}`.\n  Each filter has an object key which is the target the data\n  will be taken from. The mappings object contains keys which\n  are the src of the data on the target object as a dot path \n  and the destination it will be placed in on the meta object.\n  A log site can still override this destination though. 
If\n  you want the child logger to inherit its parent logger's\n  `meta` and `metaFilter`, set `mergeParentMeta` to `true`.\n  If there are conflicts, the child meta will win.\n\n```js\nlogger.createChild(\"requestHandler\", {\n    info: true,\n    warn: true,\n    log: true,\n    trace: true\n}, {\n    extendMeta: true,\n    // Each time we log this will include the session key\n    meta: {\n        sessionKey: 'abc123'\n    },\n    // Each time we log this will include if the headers\n    // have been written to the client yet based on the\n    // current value of res.headersSent\n    metaFilter: [\n        {object: res, mappings: {\n            'headersSent': 'headersSent'\n        }}\n    ],\n    mergeParentMeta: true\n})\n```\n\n#### `logger.writeEntry(Entry, callback?)`\n\nAll of the log level methods internally create an `Entry` and use the\n  `writeEntry` method to send it into routing. Child loggers use this method\n  directly to forward arbitrary entries to the root level logger.\n\n```ocaml\ntype Entry := {\n    level: String,\n    message: String,\n    meta: Object,\n    path: String\n}\n```\n\n### `var backends = Logger.defaultBackends(options, clients)`\n\n```ocaml\ntype Logger : { ... 
}\n\ntype KafkaClient : Object\ntype StatsdClient := {\n    increment: (String) =\u003e void\n}\n\nlogtron := Logger \u0026 {\n    defaultBackends: (config: {\n        logFolder?: String,\n        kafka?: {\n            proxyHost: String,\n            proxyPort: Number\n        },\n        console?: Boolean,\n        sentry?: {\n            id: String\n        }\n    }, clients?: {\n        statsd: StatsdClient,\n        kafkaClient?: KafkaClient,\n        isKafkaDisabled?: () =\u003e Boolean\n    }) =\u003e {\n        disk: Backend | null,\n        kafka: Backend | null,\n        console: Backend | null,\n        sentry: Backend | null\n    }\n}\n```\n\nRather than configuring the backends for `logtron` yourself,\n  you can use the `defaultBackends` function.\n\n`defaultBackends` takes a set of options and returns a hash of\n  backends that you can pass to a logger like so:\n\n```js\nvar logger = Logger({\n    backends: Logger.defaultBackends(backendConfig)\n})\n```\n\nYou can also pass `defaultBackends` a `clients` argument to pass\n  in a statsd client. The statsd client will then be passed to the backends so that they can be instrumented with statsd.\n\nYou can also configure a reusable `kafkaClient` on the `clients`\n  object. 
This must be an instance of `uber-nodesol-write`.\n\n#### `options.logFolder`\n\n`options.logFolder` is an optional string; if you want the disk\n  backend enabled you should set this to a folder on disk where\n  you want your disk logs written.\n\n#### `options.kafka`\n\n`options.kafka` is an optional object; if you want the kafka\n  backend enabled you should set this to an object containing\n  a `\"proxyHost\"` and `\"proxyPort\"` key.\n\n`options.kafka.proxyHost` should be a string and is the hostname \n  of the kafka REST proxy server to write to.\n\n`options.kafka.proxyPort` should be a number and is the port\n  of the kafka REST proxy server to write to.\n\n#### `options.console`\n\n`options.console` is an optional boolean; if you want the \n  console backend enabled you should set this to `true`.\n\n#### `options.sentry`\n\n`options.sentry` is an optional object; if you want the \n  sentry backend enabled you should set this to an object \n  containing an `\"id\"` key.\n\n`options.sentry.id` is the dsn uri used to talk to sentry.\n\n#### `clients`\n\n`clients` is an optional object; it contains all the concrete\n  service clients that the backends will use to communicate with\n  external services.\n\n#### `clients.statsd`\n\nIf you want your backends instrumented with statsd you should\n  pass in a `statsd` client to `clients.statsd`. 
This ensures\n  that we enable airlock monitoring on the kafka and sentry\n  backends.\n\n#### `clients.kafkaClient`\n\nIf you want to re-use a single `kafkaClient` in your application\n  you can pass in an instance of the `uber-nodesol-write` module\n  and the logger will re-use this client instead of creating\n  its own kafka client.\n\n#### `clients.isKafkaDisabled`\n\nIf you want to be able to disable kafka at run time you can\n  pass an `isKafkaDisabled` predicate function.\n\nIf this function returns `true` then `logtron` will stop writing\n  to kafka.\n\n### Logging Errors\n\n\u003e I want to log errors when I get them in my callbacks\n\nThe `logger` supports passing in an `Error` instance as the \n  metaObject field.\n\nFor example:\n\n```js\nfs.readFile(uri, function (err, content) {\n    if (err) {\n        logger.error('got file error', err);\n    }\n})\n```\n\nIf you want to add extra information you can also make the `err`\n  one of the keys in the meta object.\n\nFor example:\n\n```js\nfs.readFile(uri, function (err, content) {\n    if (err) {\n        logger.error('got file error', {\n            error: err,\n            uri: uri\n        });\n    }\n})\n```\n\n### Custom levels\n\n\u003e I want to add my own levels to the logger, how can I tweak\n\u003e the logger to use different levels\n\nBy default the logger has the levels as specified above.\n\nHowever you can pass in your own level definition.\n\n#### I want to remove a level\n\nYou can set a level to `null` to remove it. For example this is\nhow you would remove the `trace()` level.\n\n```js\nvar logger = Logger({\n    meta: { ... },\n    backends: { ... 
},\n    levels: {\n        trace: null\n    }\n})\n```\n\n#### I want to add my own levels\n\nYou can add a level to a logger by adding a new `Level` record.\n\nFor example this is how you would define an `access` level\n\n```js\nvar logger = Logger({\n    meta: {},\n    backends: {},\n    levels: {\n        access: {\n            level: 25,\n            backends: ['disk', 'console']\n        }\n    }\n})\n\nlogger.access('got request', {\n    uri: '/some-uri'\n});\n```\n\nThis adds an `access()` method to your logger that will write\n  to the backend named `\"disk\"` and the backend named\n  `\"console\"`.\n\n#### I want to change an existing level\n\nYou can change an existing level by just redefining it.\n\nFor example this is how you would mute the trace level\n\n```js\nvar logger = Logger({\n    meta: {},\n    backends: {},\n    levels: {\n        trace: {\n            level: 10,\n            backends: []\n        }\n    }\n})\n```\n\n#### I want to add a level that writes to a custom backend\n\nYou can add a level that writes to a new backend name and then\n  add a backend with that name\n\n```js\nvar logger = Logger({\n    meta: {},\n    backends: {\n        custom: CustomBackend()\n    },\n    levels: {\n        custom: {\n            level: 15,\n            backends: [\"custom\"]\n        }\n    }\n})\n\nlogger.custom('hello', { foo: \"bar\" });\n```\n\nAs long as your `CustomBackend()` returns an object with a \n  `createStream()` method that returns a `WritableStream` this\n  will work like you want it to.\n\n### `var backend = Console()`\n\n```ocaml\nlogtron/backends/console := () =\u003e {\n    createStream: (meta: Object) =\u003e WritableStream\n}\n```\n\n`Console()` can be used to create a backend that writes to the\n  console.\n\nThe `Console` backend just writes to stdout.\n\n### `var backend = Disk(options)`\n\n```ocaml\nlogtron/backends/disk := (options: {\n    folder: String\n}) =\u003e {\n    createStream: (meta: Object) =\u003e 
WritableStream\n}\n```\n\n`Disk(options)` can be used to create a backend that writes to\n  rotating files on disk.\n\nThe `Disk` backend depends on `meta.team` and `meta.project` being\n  defined on the logger and it uses those to create the filename\n  it will write to.\n\n#### `options.folder`\n\n`options.folder` must be specified as a string and it\n  determines which folder the `Disk` backend will write to.\n\n### `var backend = Kafka(options)`\n\n```ocaml\nlogtron/backends/kafka := (options: {\n    proxyHost: String,\n    proxyPort: Number,\n    statsd?: Object,\n    isDisabled: () =\u003e Boolean\n}) =\u003e {\n    createStream: (meta: Object) =\u003e WritableStream\n}\n```\n\n`Kafka(options)` can be used to create a backend that writes to\n  a kafka topic. \n\nThe `Kafka` backend depends on `meta.team` and `meta.project`\n  and uses those to define which topic it will write to.\n\n#### `options.proxyHost`\n\nSpecify the `proxyHost` which we should use when connecting to the kafka REST proxy.\n\n#### `options.proxyPort`\n\nSpecify the `proxyPort` which we should use when connecting to the kafka REST proxy.\n\n#### `options.statsd`\n\nIf you pass a `statsd` client to the `Kafka` backend it will use\n  the `statsd` client to record information about the health\n  of the `Kafka` backend.\n\n#### `options.kafkaClient`\n\nIf you pass a `kafkaClient` to the `Kafka` backend it will use\n  this to write to kafka instead of creating its own client.\n  You must ensure this is an instance of the `uber-nodesol-write`\n  module.\n\n#### `options.isDisabled`\n\nIf you want to be able to disable this backend at run time you\n  can pass in a predicate function.\n\nWhen this predicate function returns `true` the `KafkaBackend`\n  will stop writing to kafka.\n\n### `var backend = Sentry(options)`\n\n```ocaml\nlogtron/backends/sentry := (options: {\n    dsn: String,\n    statsd?: Object\n}) =\u003e {\n    createStream: (meta: Object) =\u003e WritableStream\n}\n```\n\n`Sentry(options)` can be 
used to create a backend that will\n  write to a sentry server.\n\n#### `options.dsn`\n\nSpecify the `dsn` host to be used when connecting to sentry.\n\n#### `options.statsd`\n\nIf you pass a `statsd` client to the `Sentry` backend it will\n  use the `statsd` client to record information about the\n  health of the `Sentry` backend.\n\n## Installation\n\n`npm install logtron`\n\n## Tests\n\n`npm test`\n\nThere is a `kafka.js` test that will talk to kafka if it is running\nand just gets skipped if it's not running.\n\nTo run the kafka test you have to run zookeeper \u0026 kafka with\n  `npm run start-zk` and `npm run start-kafka`\n\n  [bunyan]: https://github.com/trentm/node-bunyan#levels\n","funding_links":[],"categories":["JavaScript"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fuber%2Flogtron","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fuber%2Flogtron","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fuber%2Flogtron/lists"}