{"id":25257534,"url":"https://github.com/michelin/kstreamplify","last_synced_at":"2025-04-05T06:03:18.370Z","repository":{"id":169030031,"uuid":"601052996","full_name":"michelin/kstreamplify","owner":"michelin","description":"Swiftly build and enhance your Kafka Streams applications.","archived":false,"fork":false,"pushed_at":"2025-03-28T08:59:39.000Z","size":3679,"stargazers_count":118,"open_issues_count":7,"forks_count":22,"subscribers_count":5,"default_branch":"main","last_synced_at":"2025-03-29T05:07:08.970Z","etag":null,"topics":["java","kafka","kafka-streams","spring-boot","topology"],"latest_commit_sha":null,"homepage":"","language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/michelin.png","metadata":{"files":{"readme":"README.md","changelog":"changelog-builder.json","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-02-13T09:11:40.000Z","updated_at":"2025-03-28T18:29:45.000Z","dependencies_parsed_at":"2023-09-22T11:42:09.049Z","dependency_job_id":"c4f1de55-8c76-47e7-85ac-2a687efe405c","html_url":"https://github.com/michelin/kstreamplify","commit_stats":null,"previous_names":["michelin/spring-kafka-streams","michelin/kstreamplify"],"tags_count":14,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/michelin%2Fkstreamplify","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/michelin%2Fkstreamplify/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/michelin%2Fkstreamplify/releases","manifests_url":"https://repos.ecosyste.ms/a
pi/v1/hosts/GitHub/repositories/michelin%2Fkstreamplify/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/michelin","download_url":"https://codeload.github.com/michelin/kstreamplify/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247294514,"owners_count":20915340,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["java","kafka","kafka-streams","spring-boot","topology"],"created_at":"2025-02-12T06:49:01.800Z","updated_at":"2025-04-05T06:03:18.363Z","avatar_url":"https://github.com/michelin.png","language":"Java","readme":"\u003cdiv align=\"center\"\u003e\n\n\u003cimg src=\".readme/logo.svg\" alt=\"Kstreamplify\"/\u003e\n\n# Kstreamplify\n\n[![GitHub Build](https://img.shields.io/github/actions/workflow/status/michelin/kstreamplify/build.yml?branch=main\u0026logo=github\u0026style=for-the-badge)](https://img.shields.io/github/actions/workflow/status/michelin/kstreamplify/build.yml)\n[![Maven Central](https://img.shields.io/nexus/r/com.michelin/kstreamplify?server=https%3A%2F%2Fs01.oss.sonatype.org%2F\u0026style=for-the-badge\u0026logo=apache-maven\u0026label=Maven%20Central)](https://central.sonatype.com/search?q=com.michelin.kstreamplify\u0026sort=name)\n![Supported Java Versions](https://img.shields.io/badge/Java-17--21-blue.svg?style=for-the-badge\u0026logo=openjdk)\n[![Kafka 
Version](https://img.shields.io/badge/dynamic/xml?url=https%3A%2F%2Fraw.githubusercontent.com%2Fmichelin%2Fkstreamplify%2Fmain%2Fpom.xml\u0026query=%2F*%5Blocal-name()%3D'project'%5D%2F*%5Blocal-name()%3D'properties'%5D%2F*%5Blocal-name()%3D'kafka.version'%5D%2Ftext()\u0026style=for-the-badge\u0026logo=apachekafka\u0026label=version)](https://github.com/michelin/kstreamplify/blob/main/pom.xml)\n[![Spring Boot Version](https://img.shields.io/badge/dynamic/xml?url=https%3A%2F%2Fraw.githubusercontent.com%2Fmichelin%2Fkstreamplify%2Fmain%2Fpom.xml\u0026query=%2F*%5Blocal-name()%3D'project'%5D%2F*%5Blocal-name()%3D'properties'%5D%2F*%5Blocal-name()%3D'spring-boot.version'%5D%2Ftext()\u0026style=for-the-badge\u0026logo=spring-boot\u0026label=version)](https://github.com/michelin/kstreamplify/blob/main/pom.xml)\n[![GitHub Stars](https://img.shields.io/github/stars/michelin/kstreamplify?logo=github\u0026style=for-the-badge)](https://github.com/michelin/kstreamplify)\n[![SonarCloud Coverage](https://img.shields.io/sonar/coverage/michelin_kstreamplify?logo=sonarcloud\u0026server=https%3A%2F%2Fsonarcloud.io\u0026style=for-the-badge)](https://sonarcloud.io/component_measures?id=michelin_kstreamplify\u0026metric=coverage\u0026view=list)\n[![SonarCloud Tests](https://img.shields.io/sonar/tests/michelin_kstreamplify/main?server=https%3A%2F%2Fsonarcloud.io\u0026style=for-the-badge\u0026logo=sonarcloud)](https://sonarcloud.io/component_measures?metric=tests\u0026view=list\u0026id=michelin_kstreamplify)\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg?logo=apache\u0026style=for-the-badge)](https://opensource.org/licenses/Apache-2.0)\n\n[Overview](#overview) • [Getting Started](#getting-started)\n\nSwiftly build and enhance your Kafka Streams applications.\n\nKstreamplify adds extra features to Kafka Streams, simplifying development so you can write applications with minimal effort and stay focused on business implementation.\n\n\u003cimg 
src=\".readme/topology.gif\" alt=\"Kstreamplify application\" /\u003e\n\n\u003c/div\u003e\n\n## Table of Contents\n\n* [Overview](#overview)\n* [Getting Started](#getting-started)\n  * [Spring Boot](#spring-boot)\n  * [Java](#java)\n  * [Test](#test)\n    * [Override Properties](#override-properties)\n* [Avro Serializer and Deserializer](#avro-serializer-and-deserializer)\n* [Error Handling](#error-handling)\n  * [Set up DLQ Topic](#set-up-dlq-topic)\n  * [Handling Processing Errors](#handling-processing-errors)\n    * [DSL](#dsl)\n    * [Processor API](#processor-api)\n  * [Production and Deserialization Errors](#production-and-deserialization-errors)\n  * [Avro Schema](#avro-schema)\n  * [Uncaught Exception Handler](#uncaught-exception-handler)\n* [Web Services](#web-services)\n  * [Topology](#topology)\n  * [Interactive Queries](#interactive-queries)\n  * [Kubernetes](#kubernetes)\n* [TopicWithSerde API](#topicwithserde-api)\n  * [Declaration](#declaration)\n  * [Prefix](#prefix)\n  * [Remapping](#remapping)\n  * [Unit Testing](#unit-testing)\n* [Interactive Queries](#interactive-queries-1)\n  * [Configuration](#configuration)\n  * [Services](#services)\n  * [Web Services](#web-services-1)\n* [Hooks](#hooks)\n  * [On Start](#on-start)\n* [Deduplication](#deduplication)\n  * [By Key](#by-key)\n  * [By Key and Value](#by-key-and-value)\n  * [By Predicate](#by-predicate)\n* [Open Telemetry](#open-telemetry)\n* [Swagger](#swagger)\n* [Motivation](#motivation)\n* [Contribution](#contribution)\n\n## Overview\n\nWondering what makes Kstreamplify stand out? 
Here are some of the key features that make it a must-have for Kafka Streams:\n\n- **🚀 Bootstrapping**: Automatically handles the startup, configuration, and initialization of Kafka Streams so you can focus on business logic instead of setup.\n\n- **📝 Avro Serializer and Deserializer**: Provides common Avro serializers and deserializers out of the box.\n\n- **⛑️ Error Handling**: Catches and routes errors to a dead-letter queue (DLQ) topic.\n\n- **☸️ Kubernetes**: Built-in readiness and liveness probes for Kubernetes deployments.\n\n- **🤿 Interactive Queries**: Easily access and interact with Kafka Streams state stores.\n\n- **🫧 Deduplication**: Remove duplicate events from your stream.\n\n- **🧪 Testing**: Automatically sets up the Topology Test Driver so you can start writing tests right away.\n\n## Getting Started\n\nKstreamplify simplifies bootstrapping Kafka Streams applications by handling startup, configuration, and initialization for you.\n\n### Spring Boot\n\n[![javadoc](https://javadoc.io/badge2/com.michelin/kstreamplify-spring-boot/javadoc.svg?style=for-the-badge\u0026)](https://javadoc.io/doc/com.michelin/kstreamplify-spring-boot)\n\nFor Spring Boot applications, add the following dependency:\n\n```xml\n\u003cdependency\u003e\n    \u003cgroupId\u003ecom.michelin\u003c/groupId\u003e\n    \u003cartifactId\u003ekstreamplify-spring-boot\u003c/artifactId\u003e\n    \u003cversion\u003e${kstreamplify.version}\u003c/version\u003e\n\u003c/dependency\u003e\n```\n\nThen, create a `KafkaStreamsStarter` bean and override the `KafkaStreamsStarter#topology()` method:\n\n```java\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        // Define your topology here\n    }\n\n    @Override\n    public String dlqTopic() {\n        return \"dlq_topic\";\n    }\n}\n```\n\nDefine all your Kafka Streams properties directly from the `application.yml` file, under the 
`kafka.properties` key:\n\n```yml\nkafka:\n  properties:\n    application.id: 'myKafkaStreams'\n    bootstrap.servers: 'localhost:9092'\n    schema.registry.url: 'http://localhost:8081'\n```\n\nYou're now ready to start your Kstreamplify Spring Boot application.\n\n### Java\n\n[![javadoc](https://javadoc.io/badge2/com.michelin/kstreamplify-core/javadoc.svg?style=for-the-badge)](https://javadoc.io/doc/com.michelin/kstreamplify-core)\n\nFor simple Java applications, add the following dependency:\n\n```xml\n\u003cdependency\u003e\n    \u003cgroupId\u003ecom.michelin\u003c/groupId\u003e\n    \u003cartifactId\u003ekstreamplify-core\u003c/artifactId\u003e\n    \u003cversion\u003e${kstreamplify.version}\u003c/version\u003e\n\u003c/dependency\u003e\n```\n\nThen, create a class that extends `KafkaStreamsStarter` and override the `KafkaStreamsStarter#topology()` method:\n\n```java\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        // Define your topology here\n    }\n\n    @Override\n    public String dlqTopic() {\n        return \"dlq_topic\";\n    }\n}\n```\n\nFrom your `main` method, create a `KafkaStreamsInitializer` instance and initialize it with your `KafkaStreamsStarter` child class:\n\n```java\npublic class MainKstreamplify {\n\n    public static void main(String[] args) {\n        KafkaStreamsInitializer myKafkaStreamsInitializer = new KafkaStreamsInitializer();\n        myKafkaStreamsInitializer.init(new MyKafkaStreams());\n    }\n}\n```\n\nDefine all your Kafka Streams properties in an `application.yml` file, under the `kafka.properties` key:\n\n```yml\nkafka:\n  properties:\n    application.id: 'myKafkaStreams'\n    bootstrap.servers: 'localhost:9092'\n    schema.registry.url: 'http://localhost:8081'\nserver:\n  port: 8080\n```\n\nYou're now ready to start your Kstreamplify Java application.\n\nA few important notes:\n- A `server.port` is required to enable the [web 
services](#web-services).\n- The core dependency does not include a logger—be sure to add one to your project.\n\n## Test\n\n[![javadoc](https://javadoc.io/badge2/com.michelin/kstreamplify-core-test/javadoc.svg?style=for-the-badge\u0026)](https://javadoc.io/doc/com.michelin/kstreamplify-core-test)\n\nKstreamplify simplifies the use of the **Topology Test Driver** for testing Kafka Streams applications.\n\nFor both Java and Spring Boot applications, add the following dependency:\n\n```xml\n\u003cdependency\u003e\n    \u003cgroupId\u003ecom.michelin\u003c/groupId\u003e\n    \u003cartifactId\u003ekstreamplify-core-test\u003c/artifactId\u003e\n    \u003cversion\u003e${kstreamplify.version}\u003c/version\u003e\n    \u003cscope\u003etest\u003c/scope\u003e\n\u003c/dependency\u003e\n```\n\nCreate a test class that extends `KafkaStreamsStarterTest`.\nOverride the `getKafkaStreamsStarter()` method to provide your `KafkaStreamsStarter` implementation.\n\n```java\npublic class MyKafkaStreamsTest extends KafkaStreamsStarterTest {\n    private TestInputTopic\u003cString, KafkaUser\u003e inputTopic;\n    private TestOutputTopic\u003cString, KafkaUser\u003e outputTopic;\n\n    @Override\n    protected KafkaStreamsStarter getKafkaStreamsStarter() {\n        return new MyKafkaStreams();\n    }\n\n    @BeforeEach\n    void setUp() {\n        inputTopic = testDriver.createInputTopic(\"input_topic\", new StringSerializer(),\n            SerdesUtils.\u003cKafkaUser\u003egetValueSerdes().serializer());\n\n        outputTopic = testDriver.createOutputTopic(\"output_topic\", new StringDeserializer(),\n            SerdesUtils.\u003cKafkaUser\u003egetValueSerdes().deserializer());\n    }\n\n    @Test\n    void shouldUpperCase() {\n        inputTopic.pipeInput(\"1\", user);\n        List\u003cKeyValue\u003cString, KafkaUser\u003e\u003e results = outputTopic.readKeyValuesToList();\n        assertEquals(\"FIRST NAME\", results.get(0).value.getFirstName());\n        
assertEquals(\"LAST NAME\", results.get(0).value.getLastName());\n    }\n\n    @Test\n    void shouldFailAndRouteToDlqTopic() {\n        inputTopic.pipeInput(\"1\", user);\n        List\u003cKeyValue\u003cString, KafkaError\u003e\u003e errors = dlqTopic.readKeyValuesToList();\n        assertEquals(\"1\", errors.get(0).key);\n        assertEquals(\"Something bad happened...\", errors.get(0).value.getContextMessage());\n        assertEquals(0, errors.get(0).value.getOffset());\n    }\n}\n```\n\n### Override Properties\n\nKstreamplify uses default properties for the tests. \nYou can provide additional properties or override the default ones by overriding the `getSpecificProperties()` method:\n\n```java\npublic class MyKafkaStreamsTest extends KafkaStreamsStarterTest {\n    @Override\n    protected Map\u003cString, String\u003e getSpecificProperties() {\n        return Map.of(\n            STATE_DIR_CONFIG, \"/tmp/kafka-streams\"\n        );\n    }\n}\n```\n\n## Avro Serializer and Deserializer\n\nWhen working with Avro schemas, you can use the `SerdesUtils` class to easily serialize or deserialize records:\n\n```java\nSerdesUtils.\u003cMyAvroValue\u003egetValueSerdes()\n```\n\nor\n\n```java\nSerdesUtils.\u003cMyAvroValue\u003egetKeySerdes()\n```\n\nHere’s an example of how to use these methods in your topology:\n\n```java\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        streamsBuilder\n            .stream(\"input_topic\", Consumed.with(Serdes.String(), SerdesUtils.\u003cKafkaUser\u003egetValueSerdes()))\n            .to(\"output_topic\", Produced.with(Serdes.String(), SerdesUtils.\u003cKafkaUser\u003egetValueSerdes()));\n    }\n}\n```\n\n## Error Handling\n\nKstreamplify makes it easy to handle errors and route them to a dead-letter queue (DLQ) topic.\n\n### Set up DLQ Topic\n\nOverride the `dlqTopic()` method and return the name of your DLQ 
topic:\n\n```java\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        // Define your topology here\n    }\n\n    @Override\n    public String dlqTopic() {\n        return \"dlq_topic\";\n    }\n}\n```\n\n### Handling Processing Errors\n\nTo catch processing errors and route them to the DLQ, use the `ProcessingResult` class.\n\n#### DSL\n\n```java\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        KStream\u003cString, KafkaUser\u003e stream = streamsBuilder\n            .stream(\"input_topic\", Consumed.with(Serdes.String(), SerdesUtils.getValueSerdes()));\n\n        TopologyErrorHandler\n            .catchErrors(stream.mapValues(MyKafkaStreams::toUpperCase))\n            .to(\"output_topic\", Produced.with(Serdes.String(), SerdesUtils.getValueSerdes()));\n    }\n\n    @Override\n    public String dlqTopic() {\n        return \"dlq_topic\";\n    }\n\n    private static ProcessingResult\u003cKafkaUser, KafkaUser\u003e toUpperCase(KafkaUser value) {\n        try {\n            value.setLastName(value.getLastName().toUpperCase());\n            return ProcessingResult.success(value);\n        } catch (Exception e) {\n            return ProcessingResult.fail(e, value, \"Something went wrong...\");\n        }\n    }\n}\n```\n\nThe `mapValues` operation returns a `ProcessingResult\u003cV, V2\u003e`, where:\n\n- The first type parameter (`V`) represents the transformed value upon success.\n- The second type (`V2`) represents the original value if an error occurs.\n\nTo mark a result as successful:\n\n```java\nProcessingResult.success(value);\n```\n\nTo mark it as failed:\n\n```java\nProcessingResult.fail(e, value, \"Something went wrong...\");\n```\n\nUse `TopologyErrorHandler#catchErrors()` to catch and route failed records to the DLQ topic. 
A healthy stream is returned and can be further processed as needed.\n\n#### Processor API\n\n```java\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        TopologyErrorHandler.catchErrors(\n            streamsBuilder.stream(\"input_topic\", Consumed.with(Serdes.String(), Serdes.String()))\n                .process(CustomProcessor::new)\n            )\n            .to(\"output_topic\", Produced.with(Serdes.String(), Serdes.String()));\n    }\n\n    @Override\n    public String dlqTopic() {\n        return \"dlq_topic\";\n    }\n\n    public static class CustomProcessor extends ContextualProcessor\u003cString, String, String, ProcessingResult\u003cString, String\u003e\u003e {\n        @Override\n        public void process(Record\u003cString, String\u003e record) {\n            try {\n              context().forward(ProcessingResult.wrapRecordSuccess(record.withValue(record.value().toUpperCase())));\n            } catch (Exception e) {\n              context().forward(ProcessingResult.wrapRecordFailure(e, record.withValue(record.value()), \"Something went wrong...\"));\n            }\n        }\n    }\n}\n```\n\nThe `process` operation forwards a `ProcessingResult\u003cV, V2\u003e`, where:\n\n- The first type parameter (`V`) represents the transformed value upon success.\n- The second type (`V2`) represents the original value if an error occurs.\n\nTo mark a result as successful:\n\n```java\nProcessingResult.wrapRecordSuccess(record);\n```\n\nTo mark it as failed:\n\n```java\nProcessingResult.wrapRecordFailure(e, record, \"Something went wrong...\");\n```\n\nUse `TopologyErrorHandler#catchErrors()` to catch and route failed records to the DLQ topic. 
A healthy stream is returned and can be further processed as needed.\n\n### Production and Deserialization Errors\n\nKstreamplify also provides handlers to manage production and deserialization errors by forwarding them to the DLQ.\n\nAdd the following properties to your `application.yml`\n\n```yml\nkafka:\n  properties:\n    default.deserialization.exception.handler: 'com.michelin.kstreamplify.error.DlqDeserializationExceptionHandler'\n    default.production.exception.handler: 'com.michelin.kstreamplify.error.DlqProductionExceptionHandler'\n```\n\n### Avro Schema\n\nThe DLQ topic must have an associated Avro schema registered in the Schema Registry.\nYou can find the schema [here](https://github.com/michelin/kstreamplify/blob/main/kstreamplify-core/src/main/avro/kafka-error.avsc).\n\n### Uncaught Exception Handler\n\nBy default, uncaught exceptions will shut down the Kafka Streams client.\n\nTo customize this behavior, override the `KafkaStreamsStarter#uncaughtExceptionHandler()` method:\n\n```java\n@Override\npublic StreamsUncaughtExceptionHandler uncaughtExceptionHandler() {\n    return throwable -\u003e StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_APPLICATION;\n}\n```\n\n## Web Services\n\nKstreamplify exposes web services on top of your Kafka Streams application.\n\n### Topology\n\nThe `/topology` endpoint returns the Kafka Streams topology description by default.\nYou can customize the path by setting the following property:\n\n```yml\ntopology:\n  path: 'custom-topology'\n```\n\n### Interactive Queries\n\nA set of endpoints is available to query the state stores of your Kafka Streams application.\nThese endpoints leverage [interactive queries](https://docs.confluent.io/platform/current/streams/developer-guide/interactive-queries.html) and handle state stores across different Kafka Streams instances by providing an [RPC 
layer](https://docs.confluent.io/platform/current/streams/developer-guide/interactive-queries.html#adding-an-rpc-layer-to-your-application).\n\nThe following state store types are supported:\n- Key-Value store\n- Timestamped Key-Value store\n- Window store\n- Timestamped Window store\n\nNote that only state stores with String keys are supported.\n\n### Kubernetes\n\nReadiness and liveness probes are exposed for Kubernetes deployment, reflecting the Kafka Streams state.\nThese are available at `/ready` and `/liveness` by default.\nYou can customize the paths by setting the following properties:\n\n```yml\nkubernetes:\n  liveness:\n    path: 'custom-liveness'\n  readiness:\n    path: 'custom-readiness'\n```\n\n## TopicWithSerde API\n\nKstreamplify provides an API called `TopicWithSerde` that unifies all consumption and production points, simplifying the management of topics owned by different teams across multiple environments.\n\n### Declaration\n\nYou can declare your consumption and production points in a separate class. 
This requires a topic name, a key SerDe, and a value SerDe.\n\n```java\npublic static TopicWithSerde\u003cString, KafkaUser\u003e inputTopic() {\n    return new TopicWithSerde\u003c\u003e(\n        \"input_topic\",\n        Serdes.String(),\n        SerdesUtils.getValueSerdes()\n    );\n}\n\npublic static TopicWithSerde\u003cString, KafkaUser\u003e outputTopic() {\n    return new TopicWithSerde\u003c\u003e(\n        \"output_topic\",\n        Serdes.String(),\n        SerdesUtils.getValueSerdes()\n    );\n}\n```\n\nUse it in your topology:\n\n```java\n@Slf4j\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        KStream\u003cString, KafkaUser\u003e stream = inputTopic().stream(streamsBuilder);\n        outputTopic().produce(stream);\n    }\n}\n```\n\n### Prefix\n\nThe `TopicWithSerde` API is designed to handle topics owned by different teams across various environments without changing the topology. It uses prefixes to differentiate teams and topic ownership.\n\nIn your `application.yml` file, declare the prefixes in a `key: value` format:\n\n```yml\nkafka:\n  properties:\n    prefix:\n      self: 'staging.team1.'\n      team2: 'staging.team2.'\n      team3: 'staging.team3.'\n```\n\nThen, include the prefix when declaring your `TopicWithSerde`:\n\n```java\npublic static TopicWithSerde\u003cString, KafkaUser\u003e inputTopic() {\n    return new TopicWithSerde\u003c\u003e(\n        \"input_topic\",\n        \"team1\",\n        Serdes.String(),\n        SerdesUtils.getValueSerdes()\n    );\n}\n```\n\n\u003e The topic `staging.team1.input_topic` will be consumed when running the application with the staging `application.yml` file.\n\nBy default, if no prefix is specified, `self` is used.\n\n### Remapping\n\nKstreamplify encourages the use of fixed topic names in the topology, using the prefix feature to manage namespacing for virtual clusters and permissions. 
\nHowever, there are situations where you might want to reuse the same topology with different input or output topics.\n\nIn the `application.yml` file, you can declare dynamic remappings in a `key: value` format:\n\n```yml\nkafka:\n  properties:\n    topic:\n      remap:\n        oldTopicName: newTopicName\n        foo: bar\n```\n\n\u003e The topic `oldTopicName` in the topology will be mapped to `newTopicName`.\n\nThis feature works with both input and output topics.\n\n### Unit Testing\n\nWhen testing, you can use the `TopicWithSerde` API to create test topics with the same name as those in your topology.\n\n```java\nTestInputTopic\u003cString, KafkaUser\u003e inputTopic = createInputTestTopic(inputTopic());\nTestOutputTopic\u003cString, KafkaUser\u003e outputTopic = createOutputTestTopic(outputTopic());\n```\n\n## Interactive Queries\n\nKstreamplify aims to simplify the use of [interactive queries](https://docs.confluent.io/platform/current/streams/developer-guide/interactive-queries.html) in Kafka Streams applications.\n\n### Configuration\n\nThe value for the \"[application.server](https://docs.confluent.io/platform/current/streams/developer-guide/config-streams.html#application-server)\" property can be derived from various sources, following this order of priority:\n\n1. The environment variable defined by the `application.server.var.name` property.\n\n```yml\nkafka:\n  properties:\n    application.server.var.name: 'MY_APPLICATION_SERVER'\n```\n\n2. If not defined, it defaults to the `APPLICATION_SERVER` environment variable.\n3. 
If neither of the above is set, it defaults to `localhost:\u003cserverPort\u003e`.\n\n### Services\n\nYou can leverage the interactive query services provided by the web services layer to access and query the state stores of your Kafka Streams application:\n\n```java\n@Component\npublic class MyService {\n    @Autowired\n    KeyValueStoreService keyValueStoreService;\n\n    @Autowired\n    TimestampedKeyValueStoreService timestampedKeyValueStoreService;\n  \n    @Autowired\n    WindowStoreService windowStoreService;\n\n    @Autowired\n    TimestampedWindowStoreService timestampedWindowStoreService;\n}\n```\n\n### Web Services\n\nThe web services layer provides a set of endpoints that allow you to query the state stores of your Kafka Streams application. You can find more details in the [Interactive Queries Web Services](#interactive-queries) section.\n\n## Hooks\n\nKstreamplify provides the flexibility to execute custom code through hooks at various stages of your Kafka Streams application lifecycle.\n\n### On Start\n\nThe **On Start** hook allows you to execute custom code before the Kafka Streams instance starts.\n\n```java\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void onStart(KafkaStreams kafkaStreams) {\n        // Execute code before starting the Kafka Streams instance\n    }\n}\n```\n\n## Deduplication\n\nKstreamplify provides an easy way to deduplicate streams through the `DeduplicationUtils` class. 
\nYou can deduplicate based on various criteria and within a specified time frame.\n\nAll deduplication methods return a `KStream\u003cString, ProcessingResult\u003cV, V2\u003e\u003e`, which allows you to handle errors and route them through `TopologyErrorHandler#catchErrors()`.\n\n**Note**: Only streams with String keys and Avro values are supported.\n\n### By Key\n\n```java\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        KStream\u003cString, KafkaUser\u003e myStream = streamsBuilder\n            .stream(\"input_topic\");\n\n        DeduplicationUtils\n            .deduplicateKeys(streamsBuilder, myStream, Duration.ofDays(60))\n            .to(\"output_topic\");\n    }\n}\n```\n\n### By Key and Value\n\n```java\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        KStream\u003cString, KafkaUser\u003e myStream = streamsBuilder\n            .stream(\"input_topic\");\n\n        DeduplicationUtils\n            .deduplicateKeyValues(streamsBuilder, myStream, Duration.ofDays(60))\n            .to(\"output_topic\");\n    }\n}\n```\n\n### By Predicate\n\n```java\n@Component\npublic class MyKafkaStreams extends KafkaStreamsStarter {\n    @Override\n    public void topology(StreamsBuilder streamsBuilder) {\n        KStream\u003cString, KafkaUser\u003e myStream = streamsBuilder\n            .stream(\"input_topic\");\n\n        DeduplicationUtils\n            .deduplicateWithPredicate(streamsBuilder, myStream, Duration.ofDays(60),\n                value -\u003e value.getFirstName() + \"#\" + value.getLastName())\n            .to(\"output_topic\");\n    }\n}\n```\n\nIn the predicate approach, the provided predicate is used as the key in the window store. 
The stream will be deduplicated based on the values derived from the predicate.\n\n## Open Telemetry\n\nKstreamplify simplifies the integration of [Open Telemetry](https://opentelemetry.io/) with Kafka Streams applications in Spring Boot.\nIt binds all Kafka Streams metrics to the Spring Boot registry, making monitoring and observability easier.\n\nTo run your application with the Open Telemetry Java agent, include the following JVM options:\n\n```console\n-javaagent:/opentelemetry-javaagent.jar -Dotel.traces.exporter=otlp -Dotel.logs.exporter=otlp -Dotel.metrics.exporter=otlp\n```\n\n### Custom Tags for Metrics\n\nYou can also add custom tags to the Open Telemetry metrics to help organize and filter them in your observability tools, like Grafana. \nUse the following JVM options to specify custom tags:\n\n```console\n-Dotel.resource.attributes=environment=production,service.namespace=myNamespace,service.name=myKafkaStreams,category=orders\n```\n\nThese tags will be included in the metrics, and you'll be able to see them in your logs during application startup, helping to track and filter metrics based on attributes like environment, service name, and category.\n\n## Swagger\n\nThe Kstreamplify Spring Boot module integrates with [Springdoc](https://springdoc.org/) to automatically generate API documentation for your Kafka Streams application.\n\nBy default:\n- The Swagger UI is available at `http://host:port/swagger-ui/index.html`.\n- The OpenAPI documentation can be accessed at `http://host:port/v3/api-docs`.\n\nBoth the Swagger UI and the OpenAPI description can be customized using the [Springdoc properties](https://springdoc.org/#properties).\n\n## Motivation\n\nDeveloping applications with Kafka Streams can be challenging, with developers often facing various questions and obstacles. 
\nKey considerations include efficiently bootstrapping Kafka Streams applications, handling unexpected business logic issues, integrating Kubernetes probes, and more.\n\nTo assist developers in overcoming these challenges, we have built **Kstreamplify**.\nOur goal is to provide a comprehensive solution that simplifies the development process and addresses the common pain points encountered when working with Kafka Streams.\nBy offering easy-to-use utilities, error handling mechanisms, testing support, and integration with modern tools like Kubernetes and OpenTelemetry, Kstreamplify aims to streamline Kafka Streams application development.\n\n## Contribution\n\nWe welcome contributions from the community to help make Kstreamplify even better!\nIf you'd like to contribute, please take a moment to review our [contribution guide](https://github.com/michelin/kstreamplify/blob/main/CONTRIBUTING.md).\nThere, you'll find our guidelines and best practices for contributing code, reporting issues, or suggesting new features.\n\nWe appreciate your support in making Kstreamplify a powerful and user-friendly tool for everyone working with Kafka Streams.\n","funding_links":[],"categories":["Inter-Process Communication","Java"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmichelin%2Fkstreamplify","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmichelin%2Fkstreamplify","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmichelin%2Fkstreamplify/lists"}