https://github.com/csini/catalog-synchronizer
This Spring Boot 3 application reads a large Google product list file in TSV format, validates it, and merges it into a SQLite database. It uses Apache Kafka to handle the data it reads. At the end it generates an XUnit-style report.
- Host: GitHub
- URL: https://github.com/csini/catalog-synchronizer
- Owner: Csini
- License: mit
- Created: 2024-04-16T13:46:18.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-10-10T12:30:07.000Z (9 months ago)
- Last Synced: 2025-01-06T10:23:59.868Z (6 months ago)
- Topics: batch, java, jpa, kafka, kafka-streams, spring-boot, xunit
- Language: Java
- Homepage:
- Size: 2.15 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# catalog-synchronizer
This Spring Boot 3 application reads a Google product list file in TSV format (by default from the resource directory /input, configurable via `path.input`), validates it, and merges it into a SQLite database (`dbfilenamewithpath`). It uses Apache Kafka to handle the data it reads. At the end it generates an XUnit-style report (`path.output`) stating how many records were read and how long the run took. This report is easy to integrate into a CI tool (e.g. Jenkins), because if there is an error, the job can easily be marked as "unsuccessful". More details about the data that was read and what exactly was done in the database are stored in Kafka. Every run has a unique requestId. By default the Kafka stream flushes records in blocks of 1000 (`flushSize`) and Spring JPA handles entities in batches of 100 (`hibernate.jdbc.batch_size`). Before every run, the software makes a backup of the SQLite database. A rough sketch of this flow is shown below.
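The README does not include the pipeline code inline; the following is a minimal sketch of the read, validate and publish step described above, assuming Spring Kafka. The topic name `products` and all class names are illustrative, not taken from the repository:

```java
// Illustrative sketch only: reads the TSV line by line, validates each record,
// and forwards valid ones to Kafka tagged with the per-run requestId.
// Topic and class names are assumptions, not taken from the repository.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;
import org.springframework.kafka.core.KafkaTemplate;

public class TsvIngestSketch {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public TsvIngestSketch(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void ingest(Path tsvFile) throws IOException {
        String requestId = UUID.randomUUID().toString(); // one unique id per run
        try (var lines = Files.lines(tsvFile)) {
            lines.skip(1)                        // assume a TSV header row
                 .filter(this::isValid)          // drop malformed records
                 .forEach(line -> kafkaTemplate.send("products", requestId, line));
        }
    }

    private boolean isValid(String line) {
        // minimal placeholder check: a product row must have tab-separated columns
        return line.contains("\t");
    }
}
```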
# How to run this Software

Start your own Kafka server (on localhost:9092), or go to the kafka folder and start Kafka in Docker with `make start-kafka`. Then run `mvn spring-boot:run -Dspring-boot.run.arguments="file1.txt"`, where file1.txt is under the resource directory /input unless configured otherwise (`path.input`).
Example output report file: [doc/report-72b8bde0-342b-4511-bf52-a565d9841b23.xml](doc/report-72b8bde0-342b-4511-bf52-a565d9841b23.xml)
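For orientation, an XUnit-style file like that can be produced with plain Java along the lines of the following sketch. The element names follow the common JUnit XML convention; the class, method, and counts are placeholders, not the project's actual reporting code:

```java
// Rough sketch of writing an XUnit-style report with plain Java.
// Element names follow the common JUnit XML convention; everything
// else here is a placeholder, not the project's reporting code.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReportSketch {

    public static void write(Path out, String requestId,
                             int records, long millis, int errors) throws IOException {
        String xml = """
            <?xml version="1.0" encoding="UTF-8"?>
            <testsuite name="catalog-synchronizer-%s" tests="%d" failures="%d" time="%.3f">
              <testcase name="read-and-merge" time="%.3f"/>
            </testsuite>
            """.formatted(requestId, records, errors, millis / 1000.0, millis / 1000.0);
        Files.writeString(out, xml);
    }
}
```

A CI tool such as Jenkins treats a non-zero failures count in such a file as a failed test run, which is what lets the job be switched to "unsuccessful" automatically.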
# Configuration properties
application.properties

```
spring.kafka.bootstrap-servers=localhost:9092
flushSize=1000
store.name.productPair=productPairStore
path.input=/input
path.output=output
```

persistence.properties
```
url=jdbc:sqlite:${dbfilenamewithpath}?date_class=TEXT
dbfilenamewithpath=src/main/resources/products.db
spring.jpa.database-platform=org.hibernate.community.dialect.SQLiteDialect
hibernate.jdbc.batch_size=100
hibernate.jdbc.batch_versioned_data=true
```
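As a rough illustration of how `hibernate.jdbc.batch_size` comes into play, the following sketch persists entities in chunks of the same size so Hibernate can group the inserts into JDBC batches. The repository and entity names are assumptions (`ProductEntity` is sketched under "Product DB Schema" below):

```java
// Sketch: persist records in chunks aligned with hibernate.jdbc.batch_size
// (100 above), so Hibernate can group the inserts into JDBC batches.
// ProductEntity is the hypothetical entity sketched under "Product DB Schema".
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

interface ProductRepository extends JpaRepository<ProductEntity, String> {}

class BatchMergeSketch {

    private static final int BATCH_SIZE = 100; // mirrors hibernate.jdbc.batch_size

    private final ProductRepository repository;

    BatchMergeSketch(ProductRepository repository) {
        this.repository = repository;
    }

    void merge(List<ProductEntity> products) {
        for (int i = 0; i < products.size(); i += BATCH_SIZE) {
            List<ProductEntity> chunk =
                    products.subList(i, Math.min(i + BATCH_SIZE, products.size()));
            repository.saveAllAndFlush(chunk); // one flush per JDBC batch
        }
    }
}
```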
# Kafka Topology
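As a rough orientation, a Kafka Streams topology that materializes records into the configured `productPairStore` state store might look like the sketch below. The topic name, the re-keying logic, and the reduce step are assumptions; only the store name comes from the configuration above:

```java
// Illustrative topology sketch; only the state-store name ("productPairStore")
// comes from the configuration. Topic name and pairing logic are assumptions.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Materialized;

public class TopologySketch {

    public Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("products", Consumed.with(Serdes.String(), Serdes.String()))
               // re-key each record by product id (assumed to be the first TSV column)
               .selectKey((requestId, line) -> line.split("\t", 2)[0])
               .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
               // keep the latest record per product in the configured state store
               .reduce((previous, latest) -> latest,
                       Materialized.as("productPairStore"));

        return builder.build();
    }
}
```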
# Kafka Producers and Consumers
[doc/springwolf.yaml](doc/springwolf.yaml)
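The springwolf file above documents the actual producers and consumers. As a minimal sketch of what a consuming side could look like with Spring Kafka, with topic, group id, and handler body all being assumptions:

```java
// Sketch of a consumer that would apply records to the database; topic,
// group id, and handler body are assumptions, not taken from the repository.
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class ProductConsumerSketch {

    @KafkaListener(topics = "products", groupId = "catalog-synchronizer")
    public void onRecord(String line) {
        // validate the record and merge it into the SQLite database here
        System.out.println("received: " + line);
    }
}
```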
# Product DB Schema
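The actual schema lives in `products.db`. As a hypothetical orientation only, a JPA entity for one Google product feed row might look like this; the column choice is based on common feed attributes and is an assumption, not the real schema:

```java
// Hypothetical entity for one Google product feed row; the real schema in
// products.db may differ. Column names follow common feed attributes.
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

@Entity
@Table(name = "product")
public class ProductEntity {

    @Id
    private String id;          // feed attribute "id"

    private String title;       // feed attribute "title"
    private String description; // feed attribute "description"
    private String price;       // feed attribute "price", e.g. "15.00 USD"
    private String link;        // feed attribute "link"

    protected ProductEntity() {} // no-arg constructor required by JPA
}
```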
