https://github.com/rcardin/kafkaesque
A testing 🧰 library for Kafka-based applications
- Host: GitHub
- URL: https://github.com/rcardin/kafkaesque
- Owner: rcardin
- License: mit
- Created: 2020-07-19T13:08:41.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2025-06-19T02:10:39.000Z (8 months ago)
- Last Synced: 2025-06-20T04:07:54.850Z (8 months ago)
- Topics: hacktoberfest, kafka, testcontainers, testing-tools
- Language: Java
- Homepage:
- Size: 284 KB
- Stars: 26
- Watchers: 2
- Forks: 4
- Open Issues: 14
Metadata Files:
- Readme: README.md
- License: LICENSE
# 🐛 Kafkaesque
_Kafkaesque_ is a test library whose aim is to make testing Kafka-based applications less painful. For now, the project is in its early stage, defining the API that we will implement in the near future.
Every bit of help will be very useful :)
The library allows you to test the following use cases:
## Use Case 1: The Application Produces Some Messages on a Topic
The first use case tests the messages produced by an application by reading them from the topic. The code producing the messages is external to _Kafkaesque_. Through _Kafkaesque_, it is possible to assert properties on the messages generated by the application.
```java
Kafkaesque
  .at("broker:port")
  .consume()
  .fromTopic("topic-name")
  .withDeserializers(keyDeserializer, valueDeserializer)
  .waitingAtMost(10, SECONDS)
  .waitingEmptyPolls(2, 50L, MILLISECONDS)
  .expectingConsumed()
  .havingRecordsSize(3) // <-- from here we use a ConsumedResult
  .havingHeaders(headers -> {
    // Assertions on headers
  })
  .havingKeys(keys -> {
    // Assertions on keys
  })
  .havingPayloads(payloads -> {
    // Assertions on payloads
  })
  .havingConsumerRecords(records -> {
    // Assertions on the full list of ConsumerRecord
  })
  .assertingThatPayloads(contains("42")) // Uses Hamcrest Matchers on collections :)
  .andCloseConsumer();
```
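The termination rule configured by `waitingEmptyPolls(2, 50L, MILLISECONDS)` above can be pictured as: keep polling until a given number of consecutive polls come back empty. Here is a minimal plain-Java sketch of that rule; a queue stands in for the Kafka consumer, and none of these names are Kafkaesque API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch of the "empty polls" termination rule: keep polling until a given
// number of consecutive polls return no records, then stop and return
// everything consumed so far.
public class EmptyPollsSketch {

    static List<String> drainUntilEmptyPolls(Queue<List<String>> polls, int emptyPollsToStop) {
        List<String> consumed = new ArrayList<>();
        int consecutiveEmpty = 0;
        while (consecutiveEmpty < emptyPollsToStop) {
            List<String> batch = polls.isEmpty() ? List.of() : polls.poll();
            if (batch.isEmpty()) {
                consecutiveEmpty++;   // one more empty poll in a row
            } else {
                consecutiveEmpty = 0; // any non-empty poll resets the counter
                consumed.addAll(batch);
            }
        }
        return consumed;
    }

    public static void main(String[] args) {
        Queue<List<String>> polls = new ArrayDeque<>();
        polls.add(List.of("a", "b"));
        polls.add(List.of());         // a single empty poll does not stop the loop
        polls.add(List.of("c"));
        System.out.println(drainUntilEmptyPolls(polls, 2)); // [a, b, c]
    }
}
```

A single empty poll in the middle of a burst does not end consumption; only two empty polls in a row do, which is why the record `"c"` above is still collected.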
## Use Case 2: The Application Consumes Some Messages from a Topic
The second use case tests an application that reads messages from a topic. _Kafkaesque_ is responsible for producing these messages to trigger the execution of the application. It is also possible to assert conditions on the system state after the consumption of the messages.
```java
Kafkaesque
  .at("broker:port")
  .produce()
  .toTopic("topic-name")
  .withDeserializers(keyDeserializer, valueDeserializer)
  .messages( /* Some list of messages, possibly with headers */ )
  .waitingAtMostForEachAck(100, MILLISECONDS) // Waiting time for each ack from the broker
  .waitingForTheConsumerAtMost(10, SECONDS)   // Waiting time for the consumer to read one / all the messages
  .andAfterAll()
  .asserting(messages -> {
    // Assertions on the consumer process after the sending of all the messages
  });
```
An equivalent method pipeline is available to test assertions after the consumption of each message:
```java
Kafkaesque
  .at("broker:port")
  .produce()
  .toTopic("topic-name")
  .withDeserializers(keyDeserializer, valueDeserializer)
  .messages( /* Some list of messages, possibly with headers */ )
  .waitingAtMostForEachAck(100, MILLISECONDS) // Waiting time for each ack from the broker
  .waitingForTheConsumerAtMost(10, SECONDS)   // Waiting time for the consumer to read one / all the messages
  .andAfterEach()
  .asserting(message -> {
    // Assertions on the consumer process after the sending of each message
  });
```
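The only difference between the two pipelines is when the assertion callback fires: `andAfterEach` invokes it once per message, while `andAfterAll` invokes it once with the whole batch. A plain-Java sketch of the two strategies (the names here are illustrative, not Kafkaesque API):

```java
import java.util.List;
import java.util.function.Consumer;

// Contrasts the two assertion strategies: "after each" runs the callback
// once per message; "after all" runs it once with the complete list.
public class AssertStrategies {

    static <T> int afterEach(List<T> messages, Consumer<T> assertion) {
        messages.forEach(assertion);
        return messages.size(); // number of callback invocations
    }

    static <T> int afterAll(List<T> messages, Consumer<List<T>> assertion) {
        assertion.accept(messages);
        return 1;               // a single invocation on the whole batch
    }

    public static void main(String[] args) {
        List<String> msgs = List.of("m1", "m2", "m3");
        System.out.println(afterEach(msgs, m -> {})); // 3
        System.out.println(afterAll(msgs, l -> {}));  // 1
    }
}
```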
## Use Case 3: Synchronize on Produced or Consumed Messages and Test Them Outside Kafkaesque
The [`kafka-streams-test-utils`](https://kafka.apache.org/documentation/streams/developer-guide/testing.html) testing library offers developers some useful and powerful abstractions. Indeed, the `TestInputTopic` and the `TestOutputTopic` let developers manage asynchronous communication with a broker as if it were fully synchronous. In this case, the library does not start any broker, not even an embedded one.
Kafkaesque offers developers the same abstractions, trying to achieve the same synchronous behavior, through the `yolo.Kfksq` class.
```java
var kfksq = Kfksq.at("broker:port");
var inputTopic = kfksq.createInputTopic("inputTopic", keySerializer, valueSerializer);
inputTopic.pipeInput("key", "value");
var outputTopic = kfksq.createOutputTopic("outputTopic", keyDeserializer, valueDeserializer);
var records = outputTopic.readRecordsToList();
```
## Modules
### Core module
The _Kafkaesque_ library contains many submodules. The `kafkaesque-core` module contains the interfaces and the broker-agnostic concrete classes offering the fluent API shown above. Add the following dependency to your `pom.xml` file to use the module:
```xml
<dependency>
  <groupId>in.rcard</groupId>
  <artifactId>kafkaesque-core</artifactId>
  <version>0.2.0</version>
  <scope>test</scope>
</dependency>
```
In detail, the `kafkaesque-core` module uses the [Awaitility](http://www.awaitility.org/) Java library to deal with the asynchronous nature of each of the above use cases.
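The awaiting pattern that Awaitility provides boils down to polling a condition until it holds or a deadline passes. A self-contained plain-Java sketch of that idea (not Awaitility's actual API, which offers a much richer fluent interface):

```java
import java.util.function.BooleanSupplier;

// Plain-Java sketch of the await/at-most pattern: repeatedly poll a
// condition until it becomes true or the timeout expires.
public class AwaitSketch {

    static boolean awaitAtMost(long timeoutMillis, BooleanSupplier condition) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;             // condition met within the timeout
            }
            Thread.sleep(10);            // small poll interval between checks
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // The condition becomes true after roughly 50 ms.
        boolean met = awaitAtMost(1000, () -> System.currentTimeMillis() - start > 50);
        System.out.println(met); // true
    }
}
```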
## Configuration
_Kafkaesque_ also supports configuring its internal producers and consumers via an external configuration file. Kafkaesque can read multiple file formats: the [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md) format, the JSON format, and the Java properties format.
The configurations must be prefixed with `kafkaesque.consumer` for consumers. The available configurations are:
* `group-id`
* `auto-offset-reset`
* `enable-auto-commit`
* `auto-commit-interval`
* `client-id`
* `fetch-max-wait`
* `fetch-min-size`
* `isolation-level`
* `max-poll-records`
The configurations must be prefixed with `kafkaesque.producer` for producers, instead. The available configurations are:
* `client-id`
* `retries`
* `acks`
* `batch-size`
* `buffer-memory`
* `compression-type`
You can pass the path to the file using the `withConfiguration` method, available for both consumers and producers. Here is an example:
```java
Kafkaesque
  .at("broker:port")
  .produce()
  .toTopic("topic-name")
  .withDeserializers(keyDeserializer, valueDeserializer)
  .withConfiguration("path-to-the-file.conf")
  .messages( /* Some list of messages */ )
  .waitingAtMostForEachAck(100, MILLISECONDS) // Waiting time for each ack from the broker
  .waitingForTheConsumerAtMost(10, SECONDS)   // Waiting time for the consumer to read one / all the messages
  .andAfterAll()
  .asserting(messages -> {
    // Assertions on the consumer process after the sending of all the messages
  });
```
The path is relative to the `/src/test/resources` folder.
An example configuration file could be the following; it contains the configurations for both producers and consumers:
```hocon
kafkaesque {
  consumer {
    group-id: "kfksq-test-consumer"
    client-id: "kfksq-client-id"
    auto-commit-interval: 5000
    auto-offset-reset: "earliest"
    enable-auto-commit: false
    fetch-max-wait: 500
    fetch-min-size: 1
    heartbeat-interval: 3000
    isolation-level: "read_uncommitted"
    max-poll-records: 500
  }
  producer {
    acks: "all"
    batch-size: 16384
    buffer-memory: 33554432
    client-id: "kfksq-client-id"
    compression-type: "none"
    retries: 2147483647
  }
}
```
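Since the Java properties format is also supported, the same settings could be written flat. A short sketch, assuming the usual dotted-path flattening of the nested keys shown above:

```properties
kafkaesque.consumer.group-id=kfksq-test-consumer
kafkaesque.consumer.auto-offset-reset=earliest
kafkaesque.consumer.enable-auto-commit=false
kafkaesque.producer.acks=all
kafkaesque.producer.retries=2147483647
```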