https://github.com/asirwad/reactive-restapi-with-kafka-and-webflux
- Host: GitHub
- URL: https://github.com/asirwad/reactive-restapi-with-kafka-and-webflux
- Owner: Asirwad
- Created: 2025-03-12T10:00:22.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2025-03-12T11:27:03.000Z (2 months ago)
- Last Synced: 2025-03-12T11:28:49.918Z (2 months ago)
- Topics: apache-kafka, spring-boot, spring-boot-reactive, webflux
- Language: Java
- Homepage:
- Size: 12.7 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# 🚀 Spring Boot Kafka Reactive Project


A modern Spring Boot implementation demonstrating Kafka integration with Reactive Streams. Features real-time Wikimedia data processing with DynamoDB persistence.
## 📚 Table of Contents
- [✨ Features](#-features)
- [⚙️ Requirements](#️-requirements)
- [🚦 Getting Started](#-getting-started)
- [🧠 Key Concepts](#-key-concepts)
- [🎯 Demo Application](#-demo-application)

## ✨ Features
- Real-time data streaming from Wikimedia
- Reactive Kafka producers/consumers
- DynamoDB integration for data persistence
- Custom serialization/deserialization
- Consumer group management
- Offset tracking implementation

## ⚙️ Requirements
- Java and Maven (the project includes the Maven wrapper)
- Apache Kafka (broker plus ZooKeeper)
- Access to a DynamoDB instance (local or AWS)

## 🚦 Getting Started
### 🐳 Kafka Setup
Start Kafka Services
```bash
# Start ZooKeeper
$ bin/zookeeper-server-start.sh config/zookeeper.properties

# Start Kafka Broker (in a new terminal)
$ bin/kafka-server-start.sh config/server.properties
```

### 🛠️ Project Setup
Clone and Run
```bash
$ git clone https://github.com/Asirwad/Reactive-RESTAPI-with-Kafka-and-Webflux
$ cd Reactive-RESTAPI-with-Kafka-and-Webflux
$ ./mvnw spring-boot:run
```

## 🧠 Key Concepts
📦 Kafka Architecture Overview

| Component | Description |
|-----------------|----------------------------------------------|
| **Producer** | Publishes messages to topics |
| **Consumer** | Subscribes and processes messages |
| **Broker** | Manages data storage and distribution |
| **ZooKeeper** | Handles cluster coordination |

🏗️ Kafka Cluster

- Distributed message broker system
- Horizontal scaling capabilities
- Automatic failover handling

📮 Kafka Producer

```java
// Example Reactive Producer (reactor-kafka KafkaSender)
public Mono<SenderResult<Void>> sendMessage(String topic, String message) {
    return kafkaSender.send(
            Mono.just(SenderRecord.create(topic, null, null, null, message, null))
    ).next(); // send() returns a Flux; next() exposes the single result as a Mono
}
```

📥 Kafka Consumer

```java
// Example Reactive Consumer (reactor-kafka ReceiverOptions)
@Bean
public ReceiverOptions<String, String> receiverOptions() {
    return ReceiverOptions.<String, String>create(consumerProps())
            .subscription(Collections.singleton("wikimedia.recentchange"));
}
```

📂 Topics & Partitions

| Feature | Benefit |
|-----------------|----------------------------------------------|
| Partitions | Enable parallel processing |
| Replication | Ensure data redundancy |
| Retention | Configurable message persistence |

🔢 Offsets & Consumer Groups

- **Offset Tracking**: Consumer position management
- **Group Coordination**: Parallel message processing
- **Rebalancing**: Automatic partition redistribution
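The `consumerProps()` referenced in the `ReceiverOptions` example above would carry the group and offset settings these concepts describe. A minimal sketch, assuming a local broker and an illustrative group id (the property keys are standard Kafka consumer settings):

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerConfigSketch {

    // Hypothetical consumerProps() feeding ReceiverOptions.create(...);
    // broker address and group id are placeholders.
    static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "wikimedia-consumer-group"); // members of one group share partitions
        props.put("auto.offset.reset", "earliest");        // starting position when no committed offset exists
        props.put("enable.auto.commit", "false");          // reactor-kafka commits offsets explicitly
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().get("group.id"));
    }
}
```

Scaling out is then just starting more instances with the same `group.id`: Kafka rebalances partitions across the group automatically.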
## 🎯 Demo Application
### 🔥 Real-time Pipeline
📡 Producer Implementation
**Wikimedia Stream Processor**
- Reactive HTTP client for stream consumption
- Kafka Template for message publishing
- Backpressure management
- Error handling with retries

```java
webClient.get()
        .uri("/v2/stream/recentchange")
        .retrieve()
        .bodyToFlux(String.class)
        .limitRate(256) // backpressure: request events from upstream in bounded batches
        .doOnNext(event -> kafkaTemplate.send("wikimedia.recentchange", event))
        .retryWhen(Retry.backoff(3, Duration.ofSeconds(2))) // retry transient stream failures
        .subscribe();
```

💾 Consumer Implementation
**DynamoDB Persistence**
- Batch record processing
- Exponential backoff strategy
- Consumer group management
- Offset commit strategies

```java
@Bean
public Consumer<Flux<ReceiverRecord<String, String>>> dynamoDbSaver() {
    return flux -> flux
            .bufferTimeout(100, Duration.ofMillis(500)) // batch up to 100 records or 500 ms
            .flatMap(batch -> dynamoService.saveBatch(batch))
            .subscribe();
}
```

## 🛠️ Troubleshooting
Common Issues
**⚠️ ZooKeeper Connection Problems**
- Verify zookeeper.properties configuration
- Check for port conflicts (default 2181)

**⚠️ Consumer Lag**
- Monitor with `kafka-consumer-groups.sh`
- Adjust `max.poll.records` if needed

**⚠️ Serialization Errors**
- Validate message formats
- Check key/value serializer configurations
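A common cause of serialization errors is a producer and consumer that disagree on wire format. A minimal sanity-check sketch (the class names are the real Kafka `StringSerializer`/`StringDeserializer`; the helper method is illustrative, not part of the project):

```java
import java.util.HashMap;
import java.util.Map;

public class SerializerConfigCheck {

    // Returns true when the consumer's value deserializer mirrors the
    // producer's value serializer (e.g. StringSerializer <-> StringDeserializer).
    static boolean valueFormatsMatch(Map<String, Object> producer, Map<String, Object> consumer) {
        String ser = producer.get("value.serializer").toString().replace("Serializer", "");
        String deser = consumer.get("value.deserializer").toString().replace("Deserializer", "");
        return ser.equals(deser);
    }

    public static void main(String[] args) {
        Map<String, Object> producerProps = new HashMap<>();
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        Map<String, Object> consumerProps = new HashMap<>();
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        System.out.println(valueFormatsMatch(producerProps, consumerProps)
                ? "serializers match" : "serializer mismatch");
    }
}
```

If the formats diverge (for example, JSON values produced but a `StringDeserializer` expected elsewhere in the pipeline), the consumer fails on every record, so this is worth checking before digging into broker-side issues.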