{"id":13783277,"url":"https://github.com/phobos/phobos","last_synced_at":"2025-04-07T13:08:32.937Z","repository":{"id":10510702,"uuid":"65630666","full_name":"phobos/phobos","owner":"phobos","description":"Simplifying Kafka for ruby apps","archived":false,"fork":false,"pushed_at":"2023-08-31T14:02:53.000Z","size":6654,"stargazers_count":219,"open_issues_count":14,"forks_count":38,"subscribers_count":9,"default_branch":"master","last_synced_at":"2024-05-18T11:42:46.766Z","etag":null,"topics":["kafka","kafka-client","kafka-events","phobos","ruby","ruby-kafka"],"latest_commit_sha":null,"homepage":"https://rubygems.org/gems/phobos","language":"Ruby","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/phobos.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2016-08-13T18:14:32.000Z","updated_at":"2024-06-18T18:37:42.327Z","dependencies_parsed_at":"2024-06-18T18:37:40.908Z","dependency_job_id":"38478764-4069-49fc-9325-19520552dfc1","html_url":"https://github.com/phobos/phobos","commit_stats":null,"previous_names":["klarna/phobos"],"tags_count":17,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/phobos%2Fphobos","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/phobos%2Fphobos/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/phobos%2Fphobos/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/phobos%2Fphobos/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts
/GitHub/owners/phobos","download_url":"https://codeload.github.com/phobos/phobos/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247657281,"owners_count":20974345,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["kafka","kafka-client","kafka-events","phobos","ruby","ruby-kafka"],"created_at":"2024-08-03T19:00:17.789Z","updated_at":"2025-04-07T13:08:32.899Z","avatar_url":"https://github.com/phobos.png","language":"Ruby","readme":"![Phobos](https://raw.githubusercontent.com/klarna/phobos/master/logo.png)\n\n[![Build Status](https://travis-ci.com/phobos/phobos.svg?branch=master)](https://travis-ci.com/phobos/phobos)\n[![Maintainability](https://api.codeclimate.com/v1/badges/e3814d747c91247b24c6/maintainability)](https://codeclimate.com/github/phobos/phobos/maintainability)\n[![Test Coverage](https://api.codeclimate.com/v1/badges/e3814d747c91247b24c6/test_coverage)](https://codeclimate.com/github/phobos/phobos/test_coverage)\n\n# Phobos\n\nSimplifying Kafka for Ruby apps!\n\nPhobos is a micro framework and library for applications dealing with [Apache Kafka](http://kafka.apache.org/).\n\n- It wraps common behaviors needed by consumers and producers in an easy and convenient API\n- It uses [ruby-kafka](https://github.com/zendesk/ruby-kafka) as its Kafka client and core component\n- It provides a CLI for starting and stopping a standalone application ready to be used for production purposes\n\nWhy Phobos? Why not `ruby-kafka` directly? Well, `ruby-kafka` is just a client. 
You still need to write a lot of code to manage proper consuming and producing of messages. You need to do proper message routing, error handling, retrying, backing off and maybe logging/instrumenting the message management process. You also need to worry about setting up a platform independent test environment that works on CI as well as any local machine, and even on your deployment pipeline. Finally, you also need to consider how to deploy your app and how to start it.\n\nWith Phobos by your side, all this becomes smooth sailing.\n\n## Table of Contents\n\n1. [Installation](#installation)\n1. [Usage](#usage)\n  1. [Standalone apps](#usage-standalone-apps)\n  1. [Consuming messages from Kafka](#usage-consuming-messages-from-kafka)\n  1. [Producing messages to Kafka](#usage-producing-messages-to-kafka)\n  1. [As library in another app](#usage-as-library)\n  1. [Configuration file](#usage-configuration-file)\n  1. [Instrumentation](#usage-instrumentation)\n1. [Plugins](#plugins)\n1. [Development](#development)\n1. [Test](#test)\n1. [Upgrade Notes](#upgrade-notes)\n\n## \u003ca name=\"installation\"\u003e\u003c/a\u003e Installation\n\nAdd this line to your application's Gemfile:\n\n```ruby\ngem 'phobos'\n```\n\nAnd then execute:\n\n```sh\n$ bundle\n```\n\nOr install it yourself as:\n\n```sh\n$ gem install phobos\n```\n\n## \u003ca name=\"usage\"\u003e\u003c/a\u003e Usage\n\nPhobos can be used in two ways: as a standalone application or to support Kafka features in your existing project - including Rails apps. It provides a CLI tool to run it.\n\n### \u003ca name=\"usage-standalone-apps\"\u003e\u003c/a\u003e Standalone apps\n\nStandalone apps have benefits such as individual deploys and smaller code bases. 
If consuming from Kafka is your version of microservices, Phobos can be of great help.\n\n### Setup\n\nTo create an application with Phobos you need two things:\n  * A configuration file (more details in the [Configuration file](#usage-configuration-file) section)\n  * A `phobos_boot.rb` (or the name of your choice) to properly load your code into the Phobos executor\n\nUse the Phobos CLI command __init__ to bootstrap your application. Example:\n\n```sh\n# call this command inside your app folder\n$ phobos init\n    create  config/phobos.yml\n    create  phobos_boot.rb\n```\n\n`phobos.yml` is the configuration file and `phobos_boot.rb` is the place to load your code.\n\n### Consumers (listeners and handlers)\n\nIn Phobos apps, __listeners__ are configured against Kafka - they are our consumers. A listener requires a __handler__ (a Ruby class where you should process incoming messages), a Kafka __topic__, and a Kafka __group_id__. Consumer groups are used to coordinate the listeners across machines. We write the __handlers__ and Phobos makes sure to run them for us. 
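\n\nIn `phobos.yml` this maps to an entry under the `listeners` section. As a minimal sketch (the handler, topic, and group id values are the placeholders used elsewhere in this README):\n\n```yaml\nlisteners:\n  - handler: MyHandler\n    topic: test\n    group_id: test-1\n```\n\n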
An example of a handler is:\n\n```ruby\nclass MyHandler\n  include Phobos::Handler\n\n  def consume(payload, metadata)\n    # payload  - This is the content of your Kafka message, Phobos does not attempt to\n    #            parse this content, it is delivered raw to you\n    # metadata - A hash with useful information about this event, it contains: The event key,\n    #            partition number, offset, retry_count, topic, group_id, and listener_id\n  end\nend\n```\n\nWriting a handler is all you need to allow Phobos to work - it will take care of execution, retries and concurrency.\n\nTo start Phobos the __start__ command is used, example:\n\n```sh\n$ phobos start\n[2016-08-13T17:29:59:218+0200Z] INFO  -- Phobos : \u003cHash\u003e {:message=\u003e\"Phobos configured\", :env=\u003e\"development\"}\n______ _           _\n| ___ \\ |         | |\n| |_/ / |__   ___ | |__   ___  ___\n|  __/| '_ \\ / _ \\| '_ \\ / _ \\/ __|\n| |   | | | | (_) | |_) | (_) \\__ \\\n\\_|   |_| |_|\\___/|_.__/ \\___/|___/\n\nphobos_boot.rb - find this file at ~/Projects/example/phobos_boot.rb\n\n[2016-08-13T17:29:59:272+0200Z] INFO  -- Phobos : \u003cHash\u003e {:message=\u003e\"Listener started\", :listener_id=\u003e\"6d5d2c\", :group_id=\u003e\"test-1\", :topic=\u003e\"test\"}\n```\n\nBy default, the __start__ command will look for the configuration file at `config/phobos.yml` and it will load the file `phobos_boot.rb` if it exists. In the example above all example files generated by the __init__ command are used as is. It is possible to change both files, use `-c` for the configuration file and `-b` for the boot file. 
Example:\n\n```sh\n$ phobos start -c /var/configs/my.yml -b /opt/apps/boot.rb\n```\n\nYou may also choose to configure Phobos with a hash from within your boot file.\nIn this case, disable loading the config file with the `--skip-config` option:\n\n```sh\n$ phobos start -b /opt/apps/boot.rb --skip-config\n```\n\n### \u003ca name=\"usage-consuming-messages-from-kafka\"\u003e\u003c/a\u003e Consuming messages from Kafka\n\nMessages from Kafka are consumed using __handlers__. You can use Phobos __executors__ or include it in your own project [as a library](#usage-as-library), but __handlers__ will always be used. To create a handler class, simply include the module `Phobos::Handler`. This module allows Phobos to manage the life cycle of your handler.\n\nA handler is required to implement the method `#consume(payload, metadata)`.\n\nInstances of your handler will be created for every message, so keep a constructor without arguments. If `consume` raises an exception, Phobos will retry the message indefinitely, applying the back off configuration presented in the configuration file. The `metadata` hash will contain a key called `retry_count` with the current number of retries for this message. To skip a message, simply return from `#consume`.\n\nThe `metadata` hash will also contain a key called `headers` with the headers of the consumed message.\n\nWhen the listener starts, the class method `.start` will be called with the `kafka_client` used by the listener. Use this hook as a chance to set up necessary code for your handler. The class method `.stop` will be called during listener shutdown.\n\n```ruby\nclass MyHandler\n  include Phobos::Handler\n\n  def self.start(kafka_client)\n    # setup handler\n  end\n\n  def self.stop\n    # teardown\n  end\n\n  def consume(payload, metadata)\n    # consume or skip message\n  end\nend\n```\n\nIt is also possible to control the execution of `#consume` with the method `#around_consume(payload, metadata)`. 
This method receives the payload and metadata, and then invokes the `#consume` method by means of a block; example:\n\n```ruby\nclass MyHandler\n  include Phobos::Handler\n\n  def around_consume(payload, metadata)\n    Phobos.logger.info \"consuming...\"\n    output = yield payload, metadata\n    Phobos.logger.info \"done, output: #{output}\"\n  end\n\n  def consume(payload, metadata)\n    # consume or skip message\n  end\nend\n```\n\nNote: `around_consume` was previously defined as a class method. The current code supports both implementations, giving precedence to the class method, but future versions will no longer support `.around_consume`.\n\n```ruby\nclass MyHandler\n  include Phobos::Handler\n\n  def self.around_consume(payload, metadata)\n    Phobos.logger.info \"consuming...\"\n    output = yield payload, metadata\n    Phobos.logger.info \"done, output: #{output}\"\n  end\n\n  def consume(payload, metadata)\n    # consume or skip message\n  end\nend\n```\n\nTake a look at the examples folder for some ideas.\n\nThe handler life cycle can be illustrated as:\n\n  `.start` -\u003e `#consume` -\u003e `.stop`\n\nor optionally,\n\n  `.start` -\u003e `#around_consume` [ `#consume` ] -\u003e `.stop`\n\n#### Batch Consumption\n\nIn addition to the regular handler, Phobos provides a `BatchHandler`. The\nbasic ideas are identical, except that instead of being passed a single message\nat a time, the `BatchHandler` is passed a batch of messages. All methods\nfollow the same pattern as the regular handler except that they each\nend in `_batch` and are passed an array of `Phobos::BatchMessage`s instead\nof a single payload.\n\nTo enable handling of batches on the consumer side, you must specify\na delivery method of `inline_batch` in [phobos.yml](config/phobos.yml.example),\nand your handler must include `BatchHandler`. 
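\n\nAs a sketch, the corresponding listener entry could look like this (the handler, topic, and group id values are placeholders; `delivery` is the listener option being set):\n\n```yaml\nlisteners:\n  - handler: MyBatchHandler\n    topic: test\n    group_id: test-batch-1\n    delivery: inline_batch\n```\n\n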
Using a delivery method of `batch`\nassumes that you are still processing the messages one at a time and should\nuse `Handler`.\n\nWhen using `inline_batch`, each instance of `Phobos::BatchMessage` will contain an\ninstance method `headers` with the headers for that message.\n\n```ruby\nclass MyBatchHandler\n  include Phobos::BatchHandler\n\n  def around_consume_batch(payloads, metadata)\n    payloads.each do |p|\n      p.payload[:timestamp] = Time.zone.now\n    end\n\n    yield payloads, metadata\n  end\n\n  def consume_batch(payloads, metadata)\n    payloads.each do |p|\n      logger.info(\"Got payload #{p.payload}, #{p.partition}, #{p.offset}, #{p.key}, #{p.payload[:timestamp]}\")\n    end\n  end\n\nend\n```\n\nNote that retry logic will happen on the *batch* level in this case. If you are\nprocessing messages individually and an error happens in the middle, Phobos's\nretry logic will retry the entire batch. If this is not the behavior you want,\nconsider using `batch` instead of `inline_batch`.\n\n### \u003ca name=\"usage-producing-messages-to-kafka\"\u003e\u003c/a\u003e Producing messages to Kafka\n\n`ruby-kafka` provides several options for publishing messages; Phobos offers them through the module `Phobos::Producer`. It is possible to turn any Ruby class into a producer (including your handlers); just include the producer module. Example:\n\n```ruby\nclass MyProducer\n  include Phobos::Producer\nend\n```\n\nPhobos is designed for multi-threading, thus the producer is always bound to the current thread. 
It is possible to publish messages from objects and classes; pick the option that suits your code better.\nThe producer module doesn't pollute your classes with a thousand methods; it includes a single method at both the class and the instance level: `producer`.\n\n```ruby\nmy = MyProducer.new\nmy.producer.publish(topic: 'topic', payload: 'message-payload', key: 'partition and message key')\n\n# The code above has the same effect as this code:\nMyProducer.producer.publish(topic: 'topic', payload: 'message-payload', key: 'partition and message key')\n```\n\nThe signature for the `publish` method is as follows:\n\n```ruby\ndef publish(topic: topic, payload: payload, key: nil, partition_key: nil, headers: nil)\n```\n\nWhen publishing a message with headers, the `headers` argument must be a hash:\n\n```ruby\nmy = MyProducer.new\nmy.producer.publish(topic: 'topic', payload: 'message-payload', key: 'partition and message key', headers: { header_1: 'value 1' })\n```\n\nIt is also possible to publish several messages at once:\n\n```ruby\nMyProducer\n  .producer\n  .publish_list([\n    { topic: 'A', payload: 'message-1', key: '1' },\n    { topic: 'B', payload: 'message-2', key: '2' },\n    { topic: 'B', payload: 'message-3', key: '3', headers: { header_1: 'value 1', header_2: 'value 2' } }\n  ])\n```\n\nThere are two flavors of producers: __regular__ producers and __async__ producers.\n\nRegular producers will deliver the messages synchronously and disconnect; it doesn't matter if you use `publish` or `publish_list`; by default, after the messages get delivered the producer will disconnect.\n\nAsync producers will accept your messages without blocking; use the methods `async_publish` and `async_publish_list` to use async producers.\n\nAn example of using handlers to publish messages:\n\n```ruby\nclass MyHandler\n  include Phobos::Handler\n  include Phobos::Producer\n\n  PUBLISH_TO = 'topic2'\n\n  def consume(payload, metadata)\n    producer.async_publish(topic: PUBLISH_TO, 
payload: {key: 'value'}.to_json)\n  end\nend\n```\n\n#### \u003ca name=\"producer-config\"\u003e\u003c/a\u003e Note about configuring producers\n\nSince the handler life cycle is managed by the Listener, it will make sure the producer is properly closed before it stops. When calling the producer outside a handler, remember that you need to shut it down manually before you close the application. Use the class method `async_producer_shutdown` to safely shut down the producer.\n\nWithout configuring the Kafka client, the producers will create a new one when needed (once per thread). To disconnect from Kafka, call `kafka_client.close`.\n\n```ruby\n# This method will block until everything is safely closed\nMyProducer\n  .producer\n  .async_producer_shutdown\n\nMyProducer\n  .producer\n  .kafka_client\n  .close\n```\n\n### \u003ca name=\"persistent-connection\"\u003e\u003c/a\u003e Note about producers with persistent connections\n\nBy default, regular producers will automatically disconnect after every `publish` call. You can change this behavior (which reduces connection overhead, TLS handshakes, etc., and can increase speed significantly) by setting the `persistent_connections` config in `phobos.yml`. When set, regular producers behave identically to async producers and will also need to be shut down manually using the `sync_producer_shutdown` method.\n\nSince regular producers with persistent connections have open connections, you need to manually disconnect from Kafka when ending your producers' life cycle:\n\n```ruby\nMyProducer\n  .producer\n  .sync_producer_shutdown\n```\n\n### \u003ca name=\"usage-as-library\"\u003e\u003c/a\u003e Phobos as a library in an existing project\n\nWhen running as a standalone service, Phobos sets up a `Listener` and `Executor` for you. 
When you use Phobos as a library in your own project, you need to set these components up yourself.\n\nFirst, call the method `configure` with the path of your configuration file or with a configuration settings hash.\n\n```ruby\nPhobos.configure('config/phobos.yml')\n```\nor\n```ruby\nPhobos.configure(kafka: { client_id: 'phobos' }, logger: { file: 'log/phobos.log' })\n```\n\n__Listener__ connects to Kafka and acts as your consumer. To create a listener you need a handler class, a topic, and a group id.\n\n```ruby\nlistener = Phobos::Listener.new(\n  handler: Phobos::EchoHandler,\n  group_id: 'group1',\n  topic: 'test'\n)\n\n# start method blocks\nThread.new { listener.start }\n\nlistener.id # 6d5d2c (all listeners have an id)\nlistener.stop # stop doesn't block\n```\n\nThis is all you need to consume from Kafka with back off retries.\n\nAn __executor__ is the supervisor of all listeners. It loads all listeners configured in `phobos.yml`. The executor keeps the listeners running and restarts them when needed.\n\n```ruby\nexecutor = Phobos::Executor.new\n\n# start doesn't block\nexecutor.start\n\n# stop will block until all listeners are properly stopped\nexecutor.stop\n```\n\nWhen using Phobos __executors__ you don't care about how listeners are created; just provide the configuration under the `listeners` section in the configuration file and you are good to go.\n\n### \u003ca name=\"usage-configuration-file\"\u003e\u003c/a\u003e Configuration file\nThe configuration file is organized into 6 sections. Take a look at the example file, [config/phobos.yml.example](https://github.com/klarna/phobos/blob/master/config/phobos.yml.example).\n\nThe file will be parsed through ERB, so ERB syntax/file extension is supported besides the YML format.\n\n__logger__ configures the logger for all Phobos components. 
It automatically\noutputs to `STDOUT` and it saves the log in the configured file.\n\n__kafka__ provides configurations for every `Kafka::Client` created across the application.\nAll [options supported by `ruby-kafka`][ruby-kafka-client] can be provided.\n\n__producer__ provides configurations for all producers created across the application;\nthe options are the same for regular and async producers.\nAll [options supported by `ruby-kafka`][ruby-kafka-producer] can be provided.\nIf the __kafka__ key is present under __producer__, it is merged into the top-level __kafka__, allowing different connection configuration for producers.\n\n__consumer__ provides configurations for all consumer groups created across the application.\nAll [options supported by `ruby-kafka`][ruby-kafka-consumer] can be provided.\nIf the __kafka__ key is present under __consumer__, it is merged into the top-level __kafka__, allowing different connection configuration for consumers.\n\n__backoff__ controls the automatic retries Phobos provides for your handlers. If an exception\nis raised, the listener will retry following the back off configured here.\nBackoff can also be configured per listener.\n\n__listeners__ is the list of listeners configured. Each listener represents a consumer group.\n\n[ruby-kafka-client]: http://www.rubydoc.info/gems/ruby-kafka/Kafka%2FClient%3Ainitialize\n[ruby-kafka-consumer]: http://www.rubydoc.info/gems/ruby-kafka/Kafka%2FClient%3Aconsumer\n[ruby-kafka-producer]: http://www.rubydoc.info/gems/ruby-kafka/Kafka%2FClient%3Aproducer\n\n#### Additional listener configuration\n\nIn some cases it's useful to share _most_ of the configuration between\nmultiple Phobos processes, but have each process run different listeners. 
In\nthat case, a separate yaml file can be created and loaded with the `-l` flag.\nExample:\n\n```sh\n$ phobos start -c /var/configs/my.yml -l /var/configs/additional_listeners.yml\n```\n\nNote that the config file _must_ still specify a listeners section, though it\ncan be empty.\n\n#### Custom configuration/logging\n\nPhobos can be configured using a hash rather than the config file directly. This\ncan be useful if you want to do some pre-processing before sending the file\nto Phobos. One particularly useful aspect is the ability to provide Phobos\nwith a custom logger, e.g. by reusing the Rails logger:\n\n```ruby\nPhobos.configure(\n  custom_logger: Rails.logger,\n  custom_kafka_logger: Rails.logger\n)\n```\n\nIf these keys are given, they will override the `logger` keys in the Phobos\nconfig file.\n\n### \u003ca name=\"usage-instrumentation\"\u003e\u003c/a\u003e Instrumentation\n\nSome operations are instrumented using [Active Support Notifications](http://api.rubyonrails.org/classes/ActiveSupport/Notifications.html).\n\nIn order to receive notifications you can use the module `Phobos::Instrumentation`; example:\n\n```ruby\nPhobos::Instrumentation.subscribe('listener.start') do |event|\n  puts(event.payload)\nend\n```\n\n`Phobos::Instrumentation` is a convenience module around `ActiveSupport::Notifications`; feel free to use it or not. All Phobos events are in the `phobos` namespace. `Phobos::Instrumentation` will always look at `phobos.` events.\n\n#### Executor notifications\n  * `executor.retry_listener_error` is sent when the listener crashes and the executor waits for a restart. It includes the following payload:\n    * listener_id\n    * retry_count\n    * waiting_time\n    * exception_class\n    * exception_message\n    * backtrace\n  * `executor.stop` is sent when the executor stops\n\n#### Listener notifications\n  * `listener.start_handler` is sent when invoking `handler.start(kafka_client)`. 
It includes the following payload:\n    * listener_id\n    * group_id\n    * topic\n    * handler\n  * `listener.start` is sent when the listener starts. It includes the following payload:\n    * listener_id\n    * group_id\n    * topic\n    * handler\n  * `listener.process_batch` is sent after processing a batch. It includes the following payload:\n    * listener_id\n    * group_id\n    * topic\n    * handler\n    * batch_size\n    * partition\n    * offset_lag\n    * highwater_mark_offset\n  * `listener.process_message` is sent after processing a message. It includes the following payload:\n    * listener_id\n    * group_id\n    * topic\n    * handler\n    * key\n    * partition\n    * offset\n    * retry_count\n  * `listener.process_batch_inline` is sent after processing a batch in `inline_batch` mode. It includes the following payload:\n    * listener_id\n    * group_id\n    * topic\n    * handler\n    * batch_size\n    * partition\n    * offset_lag\n    * retry_count\n  * `listener.retry_handler_error` is sent after waiting for `handler#consume` retry. It includes the following payload:\n    * listener_id\n    * group_id\n    * topic\n    * handler\n    * key\n    * partition\n    * offset\n    * retry_count\n    * waiting_time\n    * exception_class\n    * exception_message\n    * backtrace\n  * `listener.retry_handler_error_batch` is sent after waiting for `handler#consume_batch` retry. It includes the following payload:\n    * listener_id\n    * group_id\n    * topic\n    * handler\n    * batch_size\n    * partition\n    * offset_lag\n    * retry_count\n    * waiting_time\n    * exception_class\n    * exception_message\n    * backtrace\n  * `listener.retry_aborted` is sent after waiting for a retry but the listener was stopped before the retry happened. 
It includes the following payload:\n    * listener_id\n    * group_id\n    * topic\n    * handler\n  * `listener.stopping` is sent when the listener receives the signal to stop.\n    * listener_id\n    * group_id\n    * topic\n    * handler\n  * `listener.stop_handler` is sent after stopping the handler.\n    * listener_id\n    * group_id\n    * topic\n    * handler\n  * `listener.stop` is sent after stopping the listener.\n    * listener_id\n    * group_id\n    * topic\n    * handler\n\n## \u003ca name=\"plugins\"\u003e\u003c/a\u003e Plugins\n\nList of gems that enhance Phobos:\n\n* [Phobos DB Checkpoint](https://github.com/klarna/phobos_db_checkpoint) is a drop-in replacement for Phobos::Handler, extending it with the following features:\n  * Persists your Kafka events to an Active Record-compatible database\n  * Ensures that your handler will consume messages only once\n  * Allows your system to quickly reprocess events in case of failures\n\n* [Phobos Checkpoint UI](https://github.com/klarna/phobos_checkpoint_ui) gives your Phobos DB Checkpoint-powered app a web GUI with the features below. 
Maintaining a Kafka consumer app has never been smoother:\n  * Search events and inspect payload\n  * See failures and retry / delete them\n\n* [Phobos Prometheus](https://github.com/phobos/phobos_prometheus) adds Prometheus metrics to your Phobos consumer.\n  * Measures total messages and batches processed\n  * Measures total duration needed to process each message (and batch)\n  * Adds `/metrics` endpoint to scrape data\n\n## \u003ca name=\"development\"\u003e\u003c/a\u003e Development\n\nAfter checking out the repo:\n* make sure `docker` is installed and running (for Windows and Mac this also includes `docker-compose`).\n* Linux: make sure `docker-compose` is installed and running.\n* run `bin/setup` to install dependencies\n* run `docker-compose up -d --force-recreate kafka zookeeper` to start the required Kafka containers\n* run tests to confirm no environmental issues\n  * wait a few seconds for the Kafka broker to get set up - `sleep 30`\n  * run `docker-compose run --rm test`\n  * make sure it reports `X examples, 0 failures`\n\nYou can also run `bin/console` for an interactive prompt that will allow you to experiment.\n\nTo install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).\n\n## \u003ca name=\"test\"\u003e\u003c/a\u003e Test\n\nPhobos exports a spec helper that can help you test your consumer. 
The Phobos lifecycle will conveniently be activated for you with minimal setup required.\n\n* `process_message(handler:, payload:, metadata: {}, encoding: nil)` - Invokes your handler with payload and metadata, using a dummy listener (encoding and metadata are optional).\n\n```ruby\n### spec_helper.rb\nrequire 'phobos/test/helper'\nRSpec.configure do |config|\n  config.include Phobos::Test::Helper\n  config.before(:each) do\n    Phobos.configure(path_to_my_config_file)\n  end\nend \n\n### Spec file\ndescribe MyConsumer do\n  let(:payload) { 'foo' }\n  let(:metadata) { Hash(foo: 'bar') }\n\n  it 'consumes my message' do\n    expect_any_instance_of(described_class).to receive(:around_consume).with(payload, metadata).once.and_call_original\n    expect_any_instance_of(described_class).to receive(:consume).with(payload, metadata).once.and_call_original\n\n    process_message(handler: described_class, payload: payload, metadata: metadata)\n  end\nend\n```\n\n## \u003ca name=\"upgrade-notes\"\u003e\u003c/a\u003e Upgrade Notes\n\nVersion 2.0 removes deprecated ways of defining producers and consumers:\n* The `before_consume` method has been removed. 
You can have this behavior in the first part of an `around_consume` method.\n* `around_consume` is now only available as an instance method, and it must yield the values to pass to the `consume` method.\n* `publish` and `async_publish` now only accept keyword arguments, not positional arguments.\n\nExample pre-2.0:\n```ruby\nclass MyHandler\n  include Phobos::Handler\n\n  def before_consume(payload, metadata)\n    payload[:id] = 1\n  end\n\n  def self.around_consume(payload, metadata)\n    metadata[:key] = 5\n    yield\n  end\nend\n```\n\nIn 2.0:\n```ruby\nclass MyHandler\n  include Phobos::Handler\n\n  def around_consume(payload, metadata)\n    new_payload = payload.dup\n    new_metadata = metadata.dup\n    new_payload[:id] = 1\n    new_metadata[:key] = 5\n    yield new_payload, new_metadata\n  end\nend\n```\n\nProducer, 1.9:\n```ruby\n  producer.publish('my-topic', { payload_value: 1}, 5, 3, {header_val: 5})\n```\n\nProducer 2.0:\n```ruby\n  producer.publish(topic: 'my-topic', payload: { payload_value: 1}, key: 5,\n     partition_key: 3, headers: { header_val: 5})\n```\n\nVersion 1.8.2 introduced a new `persistent_connections` setting for regular producers. This reduces the number of connections used to produce messages, and you should consider setting it to true. This does require a manual shutdown call - please see [Producers with persistent connections](#persistent-connection).\n\n## Contributing\n\nBug reports and pull requests are welcome on GitHub at https://github.com/klarna/phobos.\n\n## Linting\n\nPhobos uses Rubocop to lint the code, and in addition all projects use [Rubocop Rules](https://github.com/klippx/rubocop_rules) to maintain a shared rubocop configuration. 
Updates to the shared configurations are done in [phobos/shared](https://github.com/phobos/shared) repo, where you can also find instructions on how to apply the new settings to the Phobos projects.\n\n## Acknowledgements\n\nThanks to Sebastian Norde for the awesome logo!\n\n## License\n\nCopyright 2016 Klarna\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License.\n\nYou may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.\n","funding_links":[],"categories":["Development","NLP Pipeline Subtasks","Ruby"],"sub_categories":["Client libraries","Pipeline Generation"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fphobos%2Fphobos","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fphobos%2Fphobos","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fphobos%2Fphobos/lists"}