{"id":28680720,"url":"https://github.com/pellse/assembler","last_synced_at":"2026-01-11T17:01:30.723Z","repository":{"id":32659964,"uuid":"113101219","full_name":"pellse/assembler","owner":"pellse","description":"Assembler is a reactive data aggregation library for querying and merging data from multiple data sources/services. Assembler enables efficient implementation of the API Composition Pattern and is also designed to solve the N + 1 query problem. Architecture-agnostic, it can be used as part of a monolithic or microservice architecture.","archived":false,"fork":false,"pushed_at":"2025-05-29T19:22:30.000Z","size":25608,"stargazers_count":128,"open_issues_count":12,"forks_count":16,"subscribers_count":11,"default_branch":"main","last_synced_at":"2025-06-14T02:03:43.974Z","etag":null,"topics":["composition-api","cqrs","datasource","event-driven","event-sourcing","java","microservices","project-reactor","reactive","reactive-programming","reactive-streams"],"latest_commit_sha":null,"homepage":"","language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/pellse.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2017-12-04T22:15:18.000Z","updated_at":"2025-05-29T19:22:33.000Z","dependencies_parsed_at":"2023-02-17T11:30:49.316Z","dependency_job_id":"ec0a906f-2609-4a66-9d7e-cadeb2ed1303","html_url":"https://github.com/pellse/assembler","commit_stats":null,"previous_names":["pellse/cohereflux","pellse/assembler"],"tags_count":39,"template":false,"template_full_name":null,"purl":"pkg:github/pellse/assembler","r
epository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pellse%2Fassembler","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pellse%2Fassembler/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pellse%2Fassembler/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pellse%2Fassembler/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/pellse","download_url":"https://codeload.github.com/pellse/assembler/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pellse%2Fassembler/sbom","scorecard":{"id":726882,"data":{"date":"2025-08-11","repo":{"name":"github.com/pellse/assembler","commit":"48dcb620ed52a0ffa59bb5fde0c37cbeda8ca59c"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":4,"checks":[{"name":"Token-Permissions","score":-1,"reason":"No tokens found","details":null,"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Dangerous-Workflow","score":-1,"reason":"no workflows found","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Maintained","score":4,"reason":"5 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 4","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab 
publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Code-Review","score":0,"reason":"Found 0/13 approved changesets -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Binary-Artifacts","score":9,"reason":"binaries present in source code","details":["Warn: binary detected: gradle/wrapper/gradle-wrapper.jar:1"],"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Vulnerabilities","score":10,"reason":"0 existing vulnerabilities detected","details":null,"documentation":{"short":"Determines 
if the project has open, known unfixed vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: Apache License 2.0: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Branch-Protection","score":-1,"reason":"internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration","details":null,"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"Pinned-Dependencies","score":-1,"reason":"no dependencies found","details":null,"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"SAST","score":0,"reason":"SAST tool is not run on all commits -- score 
normalized to 0","details":["Warn: 0 commits out of 19 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}}]},"last_synced_at":"2025-08-22T13:05:06.317Z","repository_id":32659964,"created_at":"2025-08-22T13:05:06.317Z","updated_at":"2025-08-22T13:05:06.317Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28314259,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-11T14:58:17.114Z","status":"ssl_error","status_checked_at":"2026-01-11T14:55:53.580Z","response_time":60,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["composition-api","cqrs","datasource","event-driven","event-sourcing","java","microservices","project-reactor","reactive","reactive-programming","reactive-streams"],"created_at":"2025-06-14T02:01:11.663Z","updated_at":"2026-01-11T17:01:30.711Z","avatar_url":"https://github.com/pellse.png","language":"Java","readme":"# Assembler\n[![Maven Central](https://img.shields.io/maven-central/v/io.github.pellse/assembler.svg?label=assembler)](https://central.sonatype.com/artifact/io.github.pellse/assembler)\n\n***Assembler*** is a [reactive](https://www.reactivemanifesto.org), functional, type-safe, and stateless data aggregation library for querying and merging data from multiple data 
sources/services. ***Assembler*** enables efficient implementation of the [API Composition Pattern](https://microservices.io/patterns/data/api-composition.html) and is also designed to solve the N + 1 query problem in a data polyglot environment. ***Assembler*** is architecture-agnostic, making it versatile for use in monolithic or microservice architectures, implementing REST or GraphQL endpoints, stream processing, and other scenarios.\n\nInternally, ***Assembler*** leverages [Project Reactor](https://projectreactor.io) to implement end-to-end reactive stream pipelines and maintain all the reactive stream properties as defined by the [Reactive Manifesto](https://www.reactivemanifesto.org), including responsiveness, resilience, elasticity, message-driven with back-pressure, non-blocking, and more.\n\nSee the [demo app](https://github.com/pellse/assembler-spring-example) for a comprehensive project utilizing ***Assembler***.\n\nCheck out this brief presentation for a walkthrough of the ***Assembler*** API for the real-time streaming example from the [demo app](https://github.com/pellse/assembler-spring-example), which integrates ***Assembler*** with Spring WebFlux and Spring GraphQL to implement real-time data composition of multiple data sources:\n\nhttps://github.com/user-attachments/assets/5d9efa18-521f-4bcc-b6ec-5bb0d9ca3a59\n\nYou can also view the presentation [here](https://snappify.com/view/a113a410-7957-4e39-898e-38bff1ec7982) and go through each slide at your own speed.\n\n## Table of Contents\n\n- **[Use Cases](#use-cases)**\n- **[Basic Usage](#basic-usage)**\n  - [Default values for missing data](#default-values-for-missing-data)\n- **[Infinite Stream of Data](#infinite-stream-of-data)**\n- **[ID Joins](#id-joins)**\n- **[Complex Relationship Graph And Cartesian Product](#complex-relationship-graph-and-cartesian-product)**\n- **[Reactive Caching](#reactive-caching)**\n  - [Third Party Reactive Cache Provider 
Integration](#third-party-reactive-cache-provider-integration)\n  - [Stream Table](#stream-table)\n    - *[Event Based Stream Table](#event-based-stream-table)*\n- **[Integration with non-reactive sources](#integration-with-non-reactive-sources)**\n- **[What's Next?](#whats-next)**\n\n## Use Cases\n\n***Assembler*** can be used in situations where an application needs to access data or functionality that is spread across multiple services. Some common use cases include:\n\n1. **CQRS/Event Sourcing**: ***Assembler*** can be used on the read side of a CQRS and Event Sourcing architecture to efficiently build materialized views that aggregate data from multiple sources.\n2. **API Gateway**: ***Assembler*** can be used in conjunction with an API Gateway, which acts as a single entry point for all client requests. The API Gateway can combine multiple APIs into a single, unified API, simplifying the client's interactions with the APIs and providing a unified interface for the client to use.\n3. **Backends for Frontends**: ***Assembler*** can also be used in conjunction with Backends for Frontends (BFFs). A BFF is a dedicated backend service that provides a simplified and optimized API specifically tailored for a particular client or group of clients.\n4. **Reduce network overhead**: By combining multiple APIs into a single API, ***Assembler*** can reduce the amount of network traffic required for a client to complete a task. This can improve the performance of the client application and reduce the load on the server.\n5. **Solve the N + 1 Query Problem**:  ***Assembler*** can solve the N + 1 query problem by allowing a client to make a single request to a unified API that includes all the necessary data. 
This approach reduces the number of requests and database queries required, further optimizing the application's performance.\n\n[:arrow_up:](#table-of-contents)\n\n## Basic Usage\nHere is an example of how to use ***Assembler*** to generate transaction information from a list of customers of an online store. This example assumes the following fictional data model and API to access different services:\n```java\npublic record Customer(Long customerId, String name) {}\npublic record BillingInfo(Long id, Long customerId, String creditCardNumber) {}\npublic record OrderItem(String id, Long customerId, String orderDescription, Double price) {}\npublic record Transaction(Customer customer, BillingInfo billingInfo, List\u003cOrderItem\u003e orderItems) {}\n```\n```mermaid\nclassDiagram\n    direction LR\n\n    class Customer {\n        Long customerId\n        String name\n    }\n\n    class BillingInfo {\n        Long id\n        Long customerId\n        String creditCardNumber\n    }\n\n    class OrderItem {\n        String id\n        Long customerId\n        String orderDescription\n        Double price\n    }\n\n    class Transaction {\n        Customer customer\n        BillingInfo billingInfo\n        List~OrderItem~ orderItems\n    }\n\n    Transaction o-- Customer\n    Transaction o-- BillingInfo\n    Transaction o-- OrderItem\n    BillingInfo --\u003e Customer : customerId\n    OrderItem --\u003e Customer : customerId\n```\n```java\nFlux\u003cCustomer\u003e getCustomers(); // e.g. call to a microservice or a Flux connected to a Kafka source\nFlux\u003cBillingInfo\u003e getBillingInfo(List\u003cLong\u003e customerIds); // e.g. connects to relational database (R2DBC)\nFlux\u003cOrderItem\u003e getAllOrders(List\u003cLong\u003e customerIds); // e.g. 
connects to MongoDB\n```\nIn cases where the `getCustomers()` method returns a substantial number of customers, retrieving the associated `BillingInfo` for each customer would require an additional call per `customerId`. This would result in a considerable increase in network calls, causing the N + 1 query problem. To mitigate this, we can retrieve all the `BillingInfo` for all the customers returned by `getCustomers()` with a single additional call. The same approach can be used for retrieving `OrderItem` information.\n\nAs we are working with three distinct and independent data sources, the process of joining data from `Customer`, `BillingInfo`, and `OrderItem` into a `Transaction` must be performed at the application level. This is the primary objective of ***Assembler***.\n\nWhen utilizing the [Assembler](https://central.sonatype.com/artifact/io.github.pellse/assembler), the aggregation of multiple reactive data sources and the implementation of the [API Composition Pattern](https://microservices.io/patterns/data/api-composition.html) can be accomplished as follows:\n\n```java\nimport reactor.core.publisher.Flux;\nimport io.github.pellse.assembler.Assembler;\n\nimport static io.github.pellse.assembler.AssemblerBuilder.assemblerOf;\nimport static io.github.pellse.assembler.RuleMapper.oneToMany;\nimport static io.github.pellse.assembler.RuleMapper.oneToOne;\nimport static io.github.pellse.assembler.RuleMapperSource.call;\nimport static io.github.pellse.assembler.Rule.rule;\n\nAssembler\u003cCustomer, Transaction\u003e assembler = assemblerOf(Transaction.class)\n  .withCorrelationIdResolver(Customer::customerId)\n  .withRules(\n    rule(BillingInfo::customerId, oneToOne(call(this::getBillingInfo))),\n    rule(OrderItem::customerId, oneToMany(OrderItem::id, call(this::getAllOrders))),\n    Transaction::new)\n  .build();\n\nFlux\u003cTransaction\u003e transactionFlux = assembler.assemble(getCustomers());\n```\nThe code snippet above demonstrates the process of first 
retrieving all customers, followed by the concurrent retrieval of all billing information and orders (in a single query) associated with the previously retrieved customers, as defined by the ***Assembler*** rules. The final step involves aggregating each customer, their respective billing information, and list of order items (related by the same customer id) into a `Transaction` object. This results in a reactive stream (`Flux`) of `Transaction` objects.\n\n[:arrow_up:](#table-of-contents)\n\n### Default values for missing data\nTo provide a default value for each missing value in the result of the API call, a factory function can also be supplied as a 2nd parameter to the `oneToOne()` function. For example, when `getCustomers()` returns 3 `Customer` *[C1, C2, C3]*, and `getBillingInfo([ID1, ID2, ID3])` returns only 2 associated `BillingInfo` *[B1, B2]*, the missing value *B3* can be generated as a default value. By doing so, a `null` `BillingInfo` is never passed to the `Transaction` constructor:\n```java\nrule(BillingInfo::customerId, oneToOne(call(this::getBillingInfo), customerId -\u003e createDefaultBillingInfo(customerId)))\n``` \nor more concisely:\n```java\nrule(BillingInfo::customerId, oneToOne(call(this::getBillingInfo), this::createDefaultBillingInfo))\n```\nUnlike the `oneToOne()` function, `oneToMany()` will always default to generating an empty collection. Therefore, providing a default factory function is not needed. In the example above, an empty `List\u003cOrderItem\u003e` is passed to the `Transaction` constructor if `getAllOrders([1, 2, 3])` returns `null`.\n\n[:arrow_up:](#table-of-contents)\n\n## Infinite Stream of Data\nIn situations where an infinite or very large stream of data is being handled, such as dealing with 100,000+ customers, ***Assembler*** needs to completely drain the upstream from `getCustomers()` to gather all correlation IDs (customerId). This can lead to resource exhaustion if not handled correctly. 
To mitigate this issue, the stream can be split into multiple smaller streams and processed in batches. Most reactive libraries already support this concept. Below is an example of this approach, utilizing [Project Reactor](https://projectreactor.io):\n```java\nFlux\u003cTransaction\u003e transactionFlux = getCustomers()\n  .windowTimeout(100, ofSeconds(5))\n  .flatMapSequential(assembler::assemble);\n```\n[:arrow_up:](#table-of-contents)\n\n## ID Joins\n***Assembler*** supports the concept of ID joins, semantically similar to SQL joins, to solve the issue of missing correlation IDs between primary and dependent entities. For example, assuming the following data model:\n```java\npublic record PostDetails(Long id, Long userId, String content) {}\npublic record User(Long Id, String username) {} // No postId field i.e. no correlation Id back to PostDetails\npublic record Reply(Long id, Long postId, Long userId, String content) {}\npublic record Post(PostDetails post, User author, List\u003cReply\u003e replies) {}\n```\n```mermaid\nclassDiagram\n    direction LR\n\n    class PostDetails {\n        Long id\n        Long userId\n        String content\n    }\n\n    class User {\n        Long Id\n        String username\n    }\n\n    class Reply {\n        Long id\n        Long postId\n        Long userId\n        String content\n    }\n\n    class Post {\n        PostDetails post\n        User author\n        List~Reply~ replies\n    }\n\n    Post o-- PostDetails\n    Post o-- User\n    Post o-- Reply\n    Reply --\u003e PostDetails : postId\n    Reply --\u003e User : userId\n    PostDetails --\u003e User : userId\n```\nWithout ID Join, there is no way to express the relationship between e.g. 
a `PostDetails` and a `User` because `User` doesn't have a `postId` field like `Reply` does:\n```java\nAssembler\u003cPostDetails, Post\u003e assembler = assemblerOf(Post.class)\n  .withCorrelationIdResolver(PostDetails::id)\n  .withRules(\n    rule(XXXXX, oneToOne(call(PostDetails::userId, this::getUsersById))), // What should XXXXX be?\n    rule(Reply::postId, oneToMany(Reply::id, call(this::getRepliesById))),\n    Post::new)\n  .build();\n```\nWith ID Join, this relationship can now be expressed:\n```java\nAssembler\u003cPostDetails, Post\u003e assembler = assemblerOf(Post.class)\n  .withCorrelationIdResolver(PostDetails::id)\n  .withRules(\n    rule(User::Id, PostDetails::userId, oneToOne(call(this::getUsersById))), // ID Join\n    rule(Reply::postId, oneToMany(Reply::id, call(this::getRepliesById))),\n    Post::new)\n  .build();\n```\nThis would be semantically equivalent to the following SQL query if all entities were stored in the same relational database:\n```sql\nSELECT \n    p.id AS post_id,\n    p.userId AS post_userId,\n    p.content AS post_content,\n    u.id AS author_id,\n    u.username AS author_username,\n    r.id AS reply_id,\n    r.postId AS reply_postId,\n    r.userId AS reply_userId,\n    r.content AS reply_content\nFROM \n    PostDetails p\nJOIN \n    User u ON p.userId = u.id -- rule(User::Id, PostDetails::userId, ...)\nLEFT JOIN \n    Reply r ON p.id = r.postId -- rule(Reply::postId, ...)\nWHERE \n    p.id IN (1, 2, 3); -- withCorrelationIdResolver(PostDetails::id)\n```\n[:arrow_up:](#table-of-contents)\n\n## Complex Relationship Graph And Cartesian Product\nThe _Cartesian Product_ problem occurs when multiple data sources (e.g. tables in relational databases) are joined in such a way that every row from one table is paired with every row from another, leading to an excessive and inefficient number of rows. 
This can happen unintentionally, especially with complex joins, causing performance bottlenecks.\n\nThis great [article](https://vladmihalcea.com/blaze-persistence-multiset) from [Vlad Mihalcea](https://vladmihalcea.com/), which was the inspiration for the implementation of this feature available since [v0.7.6](https://github.com/pellse/assembler/releases/tag/v0.7.6), explains _how we can fetch multiple JPA entity collections without generating an implicit Cartesian Product_, in the context of relational databases.\n\nBut what happens when trying to query, to quote the article, a \"_multi-level hierarchical structure_\" over multiple types of data sources distributed across multiple servers?\n\nThe ***Assembler*** addresses this problem by aggregating sub-queries through the connection of embedded ***Assembler*** instances, enabling the modeling of complex relationship graphs across disparate data sources (e.g., microservices, relational or non-relational databases, message queues, etc.) 
without triggering N+1 queries or _Cartesian Products_, while maintaining structured concurrency and preserving the system's non-blocking, reactive properties.\n\nFor example, assuming the following data model:\n```java\nimport org.jspecify.annotations.NonNull;\nimport org.jspecify.annotations.Nullable;\n\nrecord Post(PostDetails postDetails, List\u003cPostComment\u003e comments, List\u003cPostTag\u003e postTags) {}\n\nrecord PostDetails(Long id, String title) {}\n\nrecord PostComment(Long id, Long postId, String review, @Nullable List\u003cUserVote\u003e userVotes) {\n  PostComment(PostComment postComment, @NonNull List\u003cUserVote\u003e userVotes) {\n    this(postComment.id(), postComment.postId(), postComment.review(), userVotes);\n  }\n}\n\nrecord UserVoteView(Long id, Long commentId, Long userId, int score) {}\n\nrecord UserVote(Long id, Long commentId, User user, int score) {\n  UserVote(UserVoteView userVoteView, User user) {\n    this(userVoteView.id(), userVoteView.commentId(), user, userVoteView.score());\n  }\n}\n\nrecord User(Long id, String firstName, String lastName) {}\n\nrecord PostTag(Long id, Long postId, String name) {}\n```\n```mermaid\nclassDiagram\n    direction LR\n\n    class Post {\n        PostDetails postDetails\n        List~PostComment~ comments\n        List~PostTag~ postTags\n    }\n\n    class PostDetails {\n        Long id\n        String title\n    }\n\n    class PostComment {\n        Long id\n        Long postId\n        String review\n        List~UserVote~ userVotes\n    }\n\n    class UserVoteView {\n        Long id\n        Long commentId\n        Long userId\n        int score\n    }\n\n    class UserVote {\n        Long id\n        Long commentId\n        User user\n        int score\n    }\n\n    class User {\n        Long id\n        String firstName\n        String lastName\n    }\n\n    class PostTag {\n        Long id\n        Long postId\n        String name\n    }\n\n    Post o-- PostDetails\n    Post o-- 
PostComment\n    Post o-- PostTag\n    PostComment o-- UserVote\n    UserVote o-- User\n    UserVote ..\u003e UserVoteView\n\n    PostComment --\u003e PostDetails : postId\n    PostTag --\u003e PostDetails : postId\n\n    UserVoteView --\u003e PostComment : commentId\n    UserVoteView --\u003e User : userId\n\n    style Post stroke:#006400, stroke-width:2px\n    style PostComment stroke:#006400, stroke-width:2px\n    style UserVote stroke:#006400, stroke-width:2px\n    style User stroke:#006400, stroke-width:2px\n    style PostTag stroke:#006400, stroke-width:2px\n```\nHere is how we would connect ***Assembler*** instances together to build our entity graph:\n```java\nimport io.github.pellse.assembler.Assembler;\nimport reactor.core.publisher.Flux;\n\nimport static io.github.pellse.assembler.Assembler.assemble;\nimport static io.github.pellse.assembler.AssemblerBuilder.assemblerOf;\nimport static io.github.pellse.assembler.Rule.rule;\nimport static io.github.pellse.assembler.RuleMapper.oneToMany;\nimport static io.github.pellse.assembler.RuleMapper.oneToOne;\nimport static io.github.pellse.assembler.RuleMapperSource.call;\nimport static java.time.Duration.ofSeconds;\n\nAssembler\u003cUserVoteView, UserVote\u003e userVoteAssembler = assemblerOf(UserVote.class)\n  .withCorrelationIdResolver(UserVoteView::id)\n  .withRules(\n    rule(User::id, UserVoteView::userId, oneToOne(call(this::getUsersById))),\n    UserVote::new)\n  .build();\n\nAssembler\u003cPostComment, PostComment\u003e postCommentAssembler = assemblerOf(PostComment.class)\n  .withCorrelationIdResolver(PostComment::id)\n  .withRules(\n    rule(UserVote::commentId, oneToMany(UserVote::id, call(assemble(this::getUserVoteViewsById, userVoteAssembler)))),\n    PostComment::new)\n  .build();\n\nAssembler\u003cPostDetails, Post\u003e postAssembler = assemblerOf(Post.class)\n  .withCorrelationIdResolver(PostDetails::id)\n  .withRules(\n    rule(PostComment::postId, oneToMany(PostComment::id, 
call(assemble(this::getPostCommentsById, postCommentAssembler)))),\n    rule(PostTag::postId, oneToMany(PostTag::id, call(this::getPostTagsById))),\n    Post::new)\n  .build();\n\n// If getPostDetails() is a finite sequence\nFlux\u003cPost\u003e postFlux = postAssembler.assemble(getPostDetails());\n\n// If getPostDetails() is a continuous stream\nFlux\u003cPost\u003e postFlux = getPostDetails()\n  .windowTimeout(100, ofSeconds(5))\n  .flatMapSequential(postAssembler::assemble);\n```\nSee [EmbeddedAssemblerTest.java](assembler/src/test/java/io/github/pellse/assembler/test/EmbeddedAssemblerTest.java) for the complete example of how to use this feature.\n\n[:arrow_up:](#table-of-contents)\n\n## Reactive Caching\nApart from offering convenient helper functions to define mapping semantics such as `oneToOne()` and `oneToMany()`, ***Assembler*** also includes a caching/memoization mechanism for the downstream subqueries via the `cached()` and `cachedMany()` wrapper functions:\n\n```java\nimport io.github.pellse.assembler.Assembler;\n\nimport static io.github.pellse.assembler.AssemblerBuilder.assemblerOf;\nimport static io.github.pellse.assembler.RuleMapper.oneToMany;\nimport static io.github.pellse.assembler.RuleMapper.oneToOne;\nimport static io.github.pellse.assembler.RuleMapperSource.call;\nimport static io.github.pellse.assembler.Rule.rule;\nimport static io.github.pellse.assembler.caching.factory.CacheFactory.cached;\nimport static io.github.pellse.assembler.caching.factory.CacheFactory.cachedMany;\n\nvar assembler = assemblerOf(Transaction.class)\n  .withCorrelationIdResolver(Customer::customerId)\n  .withRules(\n    rule(BillingInfo::customerId, oneToOne(cached(call(this::getBillingInfo)))),\n    rule(OrderItem::customerId, oneToMany(OrderItem::id, cachedMany(call(this::getAllOrders)))),\n    Transaction::new)\n  .build();\n\nvar transactionFlux = getCustomers()\n  .window(3)\n  .flatMapSequential(assembler::assemble);\n```\n[:arrow_up:](#table-of-contents)\n\n### 
Third Party Reactive Cache Provider Integration\n\nThe `cached()` and `cachedMany()` functions include overloaded versions that enable users to utilize different `Cache` implementations. By providing an additional parameter of type `CacheFactory` to the `cached()` method, users can customize the caching mechanism as per their requirements. In case no `CacheFactory` parameter is passed to `cached()`, the default implementation will internally use a `Cache` based on `ConcurrentHashMap`.\n\n***All `Cache` implementations are internally decorated with non-blocking concurrency controls, making them safe for concurrent access and modifications.***\n\nBelow is a compilation of supplementary modules that are available for integration with third-party caching libraries. Additional modules will be incorporated in the future:\n\n| Assembler add-on module                                                                                                                                                                                                      | Third party cache library                               |\n|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|\n| [![Maven Central](https://img.shields.io/maven-central/v/io.github.pellse/assembler-cache-caffeine.svg?label=assembler-cache-caffeine)](https://central.sonatype.com/artifact/io.github.pellse/assembler-cache-caffeine)     | [Caffeine](https://github.com/ben-manes/caffeine)       |\n| [![Maven Central](https://img.shields.io/maven-central/v/io.github.pellse/assembler-spring-cache.svg?label=assembler-spring-cache)](https://central.sonatype.com/artifact/io.github.pellse/assembler-spring-cache) | [Spring Caching](https://docs.spring.io/spring-boot/reference/io/caching.html) |\n\nHere is a sample 
implementation of `CacheFactory` that showcases the use of the [Caffeine](https://github.com/ben-manes/caffeine) library, which can be accomplished via the `caffeineCache()` helper method. This helper method is provided as part of the caffeine add-on module:\n\n```java\nimport com.github.benmanes.caffeine.cache.Caffeine;\n\nimport static com.github.benmanes.caffeine.cache.Caffeine.newBuilder;\nimport static java.time.Duration.ofMinutes;\n\nimport static io.github.pellse.assembler.AssemblerBuilder.assemblerOf;\nimport static io.github.pellse.assembler.RuleMapper.oneToMany;\nimport static io.github.pellse.assembler.RuleMapper.oneToOne;\nimport static io.github.pellse.assembler.RuleMapperSource.call;\nimport static io.github.pellse.assembler.Rule.rule;\nimport static io.github.pellse.assembler.caching.factory.CacheFactory.cached;\nimport static io.github.pellse.assembler.caching.factory.CacheFactory.cachedMany;\nimport static io.github.pellse.assembler.caching.caffeine.CaffeineCacheFactory.caffeineCache;\n\nCaffeine\u003cObject, Object\u003e cacheBuilder = newBuilder()\n  .recordStats()\n  .expireAfterWrite(ofMinutes(10))\n  .maximumSize(1000);\n\nvar assembler = assemblerOf(Transaction.class)\n  .withCorrelationIdResolver(Customer::customerId)\n  .withRules(\n    rule(BillingInfo::customerId, oneToOne(cached(call(this::getBillingInfo), caffeineCache(cacheBuilder)))),\n    rule(OrderItem::customerId, oneToMany(OrderItem::id, cachedMany(call(this::getAllOrders), caffeineCache()))),\n    Transaction::new)\n  .build();\n```\n[:arrow_up:](#table-of-contents)\n\n### Stream Table\nIn addition to the cache mechanism provided by the `cached()` and `cachedMany()` functions, ***Assembler*** also provides a mechanism to automatically and asynchronously update the cache in real-time as new data becomes available via the `streamTable()` function. 
This ensures the cache is always up to date and, in most cases, avoids the need for `cached()` to fall back to fetching missing data.

The Stream Table mechanism in ***Assembler*** (via `streamTable()`) is conceptually similar to a `KTable` in Kafka: both keep a key-value store updated in real time with the latest value per key from an associated data stream. ***Assembler***, however, is not limited to Kafka data sources and can work with any data source that can be consumed as a reactive stream.

This is how `streamTable()` connects to a data stream and automatically and asynchronously updates the cache in real time:

```java
import reactor.core.publisher.Flux;
import io.github.pellse.assembler.Assembler;

import static io.github.pellse.assembler.AssemblerBuilder.assemblerOf;
import static io.github.pellse.assembler.RuleMapper.oneToMany;
import static io.github.pellse.assembler.RuleMapper.oneToOne;
import static io.github.pellse.assembler.RuleMapperSource.call;
import static io.github.pellse.assembler.Rule.rule;
import static io.github.pellse.assembler.caching.factory.CacheFactory.cached;
import static io.github.pellse.assembler.caching.factory.CacheFactory.cachedMany;
import static io.github.pellse.assembler.caching.caffeine.CaffeineCacheFactory.caffeineCache;
import static io.github.pellse.assembler.caching.factory.StreamTableFactory.streamTable;

Flux<BillingInfo> billingInfoFlux = ...; // from e.g. Debezium/Kafka, RabbitMQ, etc.
Flux<OrderItem> orderItemFlux = ...; // from e.g.
// Debezium/Kafka, RabbitMQ, etc.

var assembler = assemblerOf(Transaction.class)
  .withCorrelationIdResolver(Customer::customerId)
  .withRules(
    rule(BillingInfo::customerId,
      oneToOne(cached(call(this::getBillingInfo), caffeineCache(), streamTable(billingInfoFlux)))),
    rule(OrderItem::customerId,
      oneToMany(OrderItem::id, cachedMany(call(this::getAllOrders), streamTable(orderItemFlux)))),
    Transaction::new)
  .build();

var transactionFlux = getCustomers()
  .window(3)
  .flatMapSequential(assembler::assemble);
```

It is also possible to customize the Stream Table configuration via `streamTableBuilder()`:

```java
import reactor.core.publisher.Flux;
import io.github.pellse.assembler.Assembler;

import static io.github.pellse.assembler.AssemblerBuilder.assemblerOf;
import static io.github.pellse.assembler.RuleMapper.oneToMany;
import static io.github.pellse.assembler.RuleMapper.oneToOne;
import static io.github.pellse.assembler.RuleMapperSource.call;
import static io.github.pellse.assembler.Rule.rule;
import static io.github.pellse.assembler.caching.factory.CacheFactory.cached;
import static io.github.pellse.assembler.caching.factory.CacheFactory.cachedMany;
import static io.github.pellse.assembler.caching.factory.StreamTableFactoryBuilder.streamTableBuilder;
import static io.github.pellse.assembler.caching.factory.StreamTableFactory.OnErrorMap.onErrorMap;
import static java.lang.System.Logger.Level.WARNING;
import static java.lang.System.getLogger;
import static java.time.Duration.ofSeconds;
import static reactor.core.scheduler.Schedulers.newParallel;

var logger = getLogger("stream-table-logger");

Flux<BillingInfo> billingInfoFlux = ...; // from e.g. Debezium/Kafka, RabbitMQ, etc.
Flux<OrderItem> orderItemFlux = ...; // from e.g.
// Debezium/Kafka, RabbitMQ, etc.

var assembler = assemblerOf(Transaction.class)
  .withCorrelationIdResolver(Customer::customerId)
  .withRules(
    rule(BillingInfo::customerId, oneToOne(cached(call(this::getBillingInfo),
      streamTableBuilder(billingInfoFlux)
        .maxWindowSizeAndTime(100, ofSeconds(5))
        .errorHandler(error -> logger.log(WARNING, "Error in streamTable", error))
        .scheduler(newParallel("billing-info"))
        .build()))),
    rule(OrderItem::customerId, oneToMany(OrderItem::id, cachedMany(call(this::getAllOrders),
      streamTableBuilder(orderItemFlux)
        .maxWindowSize(50)
        .errorHandler(onErrorMap(MyException::new))
        .scheduler(newParallel("order-item"))
        .build()))),
    Transaction::new)
  .build();

var transactionFlux = getCustomers()
  .window(3)
  .flatMapSequential(assembler::assemble);
```
By default, the cache is updated for every element from the incoming stream of data, but it can also be configured to batch cache updates, which is useful for reducing network calls when updating a remote cache.

[:arrow_up:](#table-of-contents)

### Event Based Stream Table
Assuming the following custom domain events, unknown to ***Assembler***:
```java
sealed interface MyEvent<T> {
  T item();
}

record ItemUpdated<T>(T item) implements MyEvent<T> {}
record ItemDeleted<T>(T item) implements MyEvent<T> {}

record MyOtherEvent<T>(T value, boolean isAddOrUpdateEvent) {}

// E.g. Flux coming from a Change Data Capture/Kafka source
Flux<MyOtherEvent<BillingInfo>> billingInfoFlux = Flux.just(
  new MyOtherEvent<>(billingInfo1, true), new MyOtherEvent<>(billingInfo2, true),
  new MyOtherEvent<>(billingInfo2, false), new MyOtherEvent<>(billingInfo3, false));

// E.g.
// Flux coming from a Change Data Capture/Kafka source
Flux<MyEvent<OrderItem>> orderItemFlux = Flux.just(
  new ItemUpdated<>(orderItem11), new ItemUpdated<>(orderItem12), new ItemUpdated<>(orderItem13),
  new ItemDeleted<>(orderItem31), new ItemDeleted<>(orderItem32), new ItemDeleted<>(orderItem33));
```
Here is how `streamTable()` can be used to adapt those custom domain events to add, update, or delete cache entries in real time:

```java
import io.github.pellse.assembler.Assembler;

import static io.github.pellse.assembler.AssemblerBuilder.assemblerOf;
import static io.github.pellse.assembler.RuleMapper.oneToMany;
import static io.github.pellse.assembler.RuleMapper.oneToOne;
import static io.github.pellse.assembler.RuleMapperSource.call;
import static io.github.pellse.assembler.Rule.rule;
import static io.github.pellse.assembler.caching.factory.CacheFactory.cached;
import static io.github.pellse.assembler.caching.factory.CacheFactory.cachedMany;
import static io.github.pellse.assembler.caching.factory.StreamTableFactory.streamTable;

Assembler<Customer, Transaction> assembler = assemblerOf(Transaction.class)
  .withCorrelationIdResolver(Customer::customerId)
  .withRules(
    rule(BillingInfo::customerId, oneToOne(cached(call(this::getBillingInfo),
      streamTable(billingInfoFlux, MyOtherEvent::isAddOrUpdateEvent, MyOtherEvent::value)))),
    rule(OrderItem::customerId, oneToMany(OrderItem::id, cachedMany(call(this::getAllOrders),
      streamTable(orderItemFlux, ItemUpdated.class::isInstance, MyEvent::item)))),
    Transaction::new)
  .build();

var transactionFlux = getCustomers()
  .window(3)
  .flatMapSequential(assembler::assemble);
```
[:arrow_up:](#table-of-contents)

## Integration with non-reactive sources
A utility function `toPublisher()` is also provided to wrap non-reactive sources, which is useful when, e.g.,
calling third-party synchronous APIs:

```java
import reactor.core.publisher.Flux;
import io.github.pellse.assembler.Assembler;

import static io.github.pellse.assembler.AssemblerBuilder.assemblerOf;
import static io.github.pellse.assembler.RuleMapper.oneToMany;
import static io.github.pellse.assembler.RuleMapper.oneToOne;
import static io.github.pellse.assembler.RuleMapperSource.call;
import static io.github.pellse.assembler.Rule.rule;
import static io.github.pellse.assembler.QueryUtils.toPublisher;

List<BillingInfo> getBillingInfo(List<Long> customerIds); // non-reactive source

List<OrderItem> getAllOrders(List<Long> customerIds); // non-reactive source

Assembler<Customer, Transaction> assembler = assemblerOf(Transaction.class)
  .withCorrelationIdResolver(Customer::customerId)
  .withRules(
    rule(BillingInfo::customerId, oneToOne(call(toPublisher(this::getBillingInfo)))),
    rule(OrderItem::customerId, oneToMany(OrderItem::id, call(toPublisher(this::getAllOrders)))),
    Transaction::new)
  .build();
```
[:arrow_up:](#table-of-contents)

## What's Next?
See the [list of issues](https://github.com/pellse/assembler/issues) for planned improvements in the near future.

[:arrow_up:](#table-of-contents)