[![Build Status](https://github.com/spring-projects/spring-integration-aws/actions/workflows/ci-snapshot.yml/badge.svg)](https://github.com/spring-projects/spring-integration-aws/actions/workflows/ci-snapshot.yml)
[![Revved up by Develocity](https://img.shields.io/badge/Revved%20up%20by-Develocity-06A0CE?logo=Gradle&labelColor=02303A)](https://ge.spring.io/scans?search.rootProjectNames=spring-integration-aws)

Spring Integration Extension for Amazon Web Services (AWS)
==========================================================

# Introduction

## Amazon Web Services (AWS)

Launched in 2006, [Amazon Web Services][] (AWS) provides key infrastructure services for businesses through its cloud computing platform.
Using cloud computing, businesses can adopt a new business model whereby they do not have to plan for and invest in procuring their own IT infrastructure.
They can use the infrastructure and services provided by the cloud service provider and pay only for what they use.
Visit [AWS Products](https://aws.amazon.com/products/) for more details about the various products offered by Amazon as part of its cloud computing services.

*Spring Integration Extension for Amazon Web Services* provides Spring Integration adapters for the various services provided by the [AWS SDK for Java][].
Note that the Spring Integration AWS Extension is based on the [Spring Cloud AWS][] project.

## Spring Integration's extensions to AWS

The current project version is `3.0.x`; it requires at minimum Java `17`, Spring Integration `6.1.x` and Spring Cloud AWS `3.0.x`.
This version is fully based on AWS Java SDK v2.
Therefore, it has many breaking changes compared to the previous version; for example, XML configuration support was removed entirely.

This guide briefly explains the various adapters available for [Amazon Web Services][], such as:

* **Amazon Simple Storage Service (S3)**
* **Amazon Simple Queue Service (SQS)**
* **Amazon Simple Notification Service (SNS)**
* **Amazon DynamoDB**
* **Amazon Kinesis**

## Contributing

[Pull requests][] are welcome.
Please see the [contributor guidelines][] for details.
Additionally, if you are contributing, we recommend following the process for Spring Integration as outlined in the [administrator guidelines][].

# Dependency Management

These dependencies are optional in the project:

* `io.awspring.cloud:spring-cloud-aws-sns` - for SNS channel adapters
* `io.awspring.cloud:spring-cloud-aws-sqs` - for SQS channel adapters
* `io.awspring.cloud:spring-cloud-aws-s3` - for S3 channel adapters
* `org.springframework.integration:spring-integration-file` - for S3 channel adapters
* `org.springframework.integration:spring-integration-http` - for the SNS inbound channel adapter
* `software.amazon.awssdk:kinesis` - for Kinesis channel adapters
* `software.amazon.kinesis:amazon-kinesis-client` - for the KCL-based inbound channel adapter
* `com.amazonaws:amazon-kinesis-producer` - for the KPL-based `MessageHandler`
* `software.amazon.awssdk:dynamodb` - for `DynamoDbMetadataStore` and `DynamoDbLockRegistry`
* `software.amazon.awssdk:s3-transfer-manager` - for `S3MessageHandler`
* `software.amazon.awssdk:aws-crt-client` - for `S3MessageHandler`

Include the appropriate dependency in your project when you use a particular component from this project.
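For example, if you use the SQS channel adapters, your build needs this project plus the corresponding optional dependency. A minimal Maven sketch (the version numbers below are placeholders; align them with your own dependency management):

```xml
<!-- Versions shown are illustrative placeholders, not recommendations. -->
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-aws</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>io.awspring.cloud</groupId>
    <artifactId>spring-cloud-aws-sqs</artifactId>
    <version>3.0.0</version>
</dependency>
```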
# Adapters

## Amazon Simple Storage Service (Amazon S3)

### Introduction

The S3 Channel Adapters are based on the `AmazonS3` template and `TransferManager`.
See their specification and JavaDocs for more information.

### Inbound Channel Adapter

The S3 Inbound Channel Adapter is represented by the `S3InboundFileSynchronizingMessageSource` and allows pulling S3 objects as files from the S3 bucket into a local directory for synchronization.
This adapter is very similar to the Inbound Channel Adapters in the FTP and SFTP Spring Integration modules.
See the [FTP/FTPS Adapters Chapter][] for more information on common options and the `SessionFactory`, `RemoteFileTemplate` and `FileListFilter` abstractions.

The Java Configuration is:

````java
@SpringBootApplication
public class MyConfiguration {

    @Autowired
    private S3Client amazonS3;

    @Bean
    public S3InboundFileSynchronizer s3InboundFileSynchronizer() {
        S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(this.amazonS3);
        synchronizer.setDeleteRemoteFiles(true);
        synchronizer.setPreserveTimestamp(true);
        synchronizer.setRemoteDirectory(S3_BUCKET);
        synchronizer.setFilter(new S3RegexPatternFileListFilter(".*\\.test$"));
        Expression expression = PARSER.parseExpression("#this.toUpperCase() + '.a'");
        synchronizer.setLocalFilenameGeneratorExpression(expression);
        return synchronizer;
    }

    @Bean
    @InboundChannelAdapter(value = "s3FilesChannel", poller = @Poller(fixedDelay = "100"))
    public S3InboundFileSynchronizingMessageSource s3InboundFileSynchronizingMessageSource() {
        S3InboundFileSynchronizingMessageSource messageSource =
                new S3InboundFileSynchronizingMessageSource(s3InboundFileSynchronizer());
        messageSource.setAutoCreateLocalDirectory(true);
        messageSource.setLocalDirectory(LOCAL_FOLDER);
        messageSource.setLocalFilter(new AcceptOnceFileListFilter<File>());
        return messageSource;
    }

    @Bean
    public PollableChannel s3FilesChannel() {
        return new QueueChannel();
    }
}
````

With this config you receive messages with a `java.io.File` `payload` from the `s3FilesChannel` after periodic synchronization of content from the Amazon S3 bucket into the local directory.

### Streaming Inbound Channel Adapter

This adapter produces messages with payloads of type `InputStream`, allowing S3 objects to be fetched without writing to the local file system.
Since the session remains open, the consuming application is responsible for closing the session when the file has been consumed.
The session is provided in the `closeableResource` header (`IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE`).
Standard framework components, such as the `FileSplitter` and `StreamTransformer`, will automatically close the session.

The following Spring Boot application provides an example of configuring the S3 inbound streaming adapter using Java configuration:

````java
@SpringBootApplication
public class S3JavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(S3JavaApplication.class)
            .web(WebApplicationType.NONE)
            .run(args);
    }

    @Autowired
    private S3Client amazonS3;

    @Bean
    @InboundChannelAdapter(value = "s3Channel", poller = @Poller(fixedDelay = "100"))
    public MessageSource<InputStream> s3InboundStreamingMessageSource() {
        S3StreamingMessageSource messageSource = new S3StreamingMessageSource(template());
        messageSource.setRemoteDirectory(S3_BUCKET);
        messageSource.setFilter(new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
                                   "streaming"));
        return messageSource;
    }

    @Bean
    @Transformer(inputChannel = "s3Channel", outputChannel = "data")
    public org.springframework.integration.transformer.Transformer transformer() {
        return new StreamTransformer();
    }

    @Bean
    public S3RemoteFileTemplate template() {
        return new S3RemoteFileTemplate(new S3SessionFactory(this.amazonS3));
    }

    @Bean
    public PollableChannel s3Channel() {
        return new QueueChannel();
    }
}
````

> NOTE: Unlike the non-streaming inbound channel adapter, this adapter does not prevent duplicates by default.
> If you do not delete the remote file and wish to prevent the file being processed again, you can configure an `S3PersistentFileListFilter` in the `filter` attribute.
> If you don't actually want to persist the state, an in-memory `SimpleMetadataStore` can be used with the filter.
> If you wish to use a filename pattern (or regex) as well, use a `CompositeFileListFilter`.

### Outbound Channel Adapter

The S3 Outbound Channel Adapter is represented by the `S3MessageHandler` and allows performing `upload`, `download` and `copy` (see the `S3MessageHandler.Command` enum) operations on the provided S3 bucket.

The Java Configuration is:

````java
@SpringBootApplication
public class MyConfiguration {

    @Autowired
    private S3AsyncClient amazonS3;

    @Bean
    @ServiceActivator(inputChannel = "s3UploadChannel")
    public MessageHandler s3MessageHandler() {
        return new S3MessageHandler(this.amazonS3, "my-bucket");
    }

}
````

With this config you can send a message with a `java.io.File` as the `payload`, and the `transferManager.upload()` operation will be performed, where the file name is used as the S3 object key.

See the `S3MessageHandler` JavaDocs for more information.

NOTE: The AWS SDK recommends using the `S3CrtAsyncClient` with the `S3TransferManager`, therefore an `S3AsyncClient.crtBuilder()` has to be used to achieve the respective upload and download requirements.

### Outbound Gateway

The S3 Outbound Gateway is represented
by the same `S3MessageHandler` with the `produceReply = true` constructor argument for Java Configuration.

The "request-reply" nature of this gateway is asynchronous, and the `Transfer` result from the `TransferManager` operation is sent to the `outputChannel`, assuming transfer-progress observation in the downstream flow.

A `TransferListener` can be supplied via the `AwsHeaders.TRANSFER_LISTENER` header of the request message to track the transfer progress.

See the `S3MessageHandler` JavaDocs for more information.

## Simple Email Service (SES)

There is no adapter for SES, since [Spring Cloud AWS][] provides implementations of `org.springframework.mail.MailSender` - `SimpleEmailServiceMailSender` and `SimpleEmailServiceJavaMailSender` - which can be injected into the `MailSendingMessageHandler`.

## Amazon Simple Queue Service (SQS)

The `SQS` adapters are fully based on the [Spring Cloud AWS][] foundation, so for more information about the background components and core configuration, please refer to the documentation of that project.

### Outbound Channel Adapter

The SQS Outbound Channel Adapter is represented by the `SqsMessageHandler` implementation and allows sending messages to an SQS `queue` with the provided `SqsAsyncClient`.
An SQS queue can be configured explicitly on the adapter (using an `org.springframework.integration.expression.ValueExpression`) or as a SpEL `Expression`, which is evaluated against the request message as the root object of the evaluation context.
In addition, the `queue` can be extracted from the message headers under `AwsHeaders.QUEUE`.

The Java Configuration is pretty simple:

````java
@SpringBootApplication
public class MyConfiguration {

    @Bean
    @ServiceActivator(inputChannel = "sqsSendChannel")
    public MessageHandler sqsMessageHandler(SqsAsyncClient amazonSqs) {
        return new SqsMessageHandler(amazonSqs);
    }

}
````

Starting with _version 2.0_, the `SqsMessageHandler` can be configured with a `HeaderMapper` to map message headers to SQS message attributes.
See the `SqsHeaderMapper` implementation for more information and also consult [Amazon SQS Message Attributes][] about value types and restrictions.

### Inbound Channel Adapter

The SQS Inbound Channel Adapter is a `message-driven` implementation of the `MessageProducer` and is represented by the `SqsMessageDrivenChannelAdapter`.
This channel adapter is based on the `io.awspring.cloud.sqs.listener.SqsMessageListenerContainer` to receive messages from the provided `queues` in an asynchronous manner and send an enhanced Spring Integration `Message` to the provided `MessageChannel`.

The Java Configuration is pretty simple:

````java
@SpringBootApplication
public class MyConfiguration {

    @Autowired
    private SqsAsyncClient amazonSqs;

    @Bean
    public PollableChannel inputChannel() {
        return new QueueChannel();
    }

    @Bean
    public MessageProducer sqsMessageDrivenChannelAdapter() {
        SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(this.amazonSqs, "myQueue");
        adapter.setOutputChannel(inputChannel());
        return adapter;
    }
}
````

The target listener container can be configured via the `SqsMessageDrivenChannelAdapter.setSqsContainerOptions(SqsContainerOptions)` option.
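A hedged sketch of tuning the underlying listener container through that option; the specific `SqsContainerOptions` builder properties shown here are assumptions based on Spring Cloud AWS 3.x and should be verified against the version you use:

```java
@Bean
public MessageProducer sqsMessageDrivenChannelAdapter(SqsAsyncClient amazonSqs) {
    SqsMessageDrivenChannelAdapter adapter =
            new SqsMessageDrivenChannelAdapter(amazonSqs, "myQueue");
    adapter.setSqsContainerOptions(
            SqsContainerOptions.builder()
                    .maxConcurrentMessages(5)            // assumed builder property
                    .pollTimeout(Duration.ofSeconds(10)) // assumed builder property
                    .build());
    adapter.setOutputChannel(inputChannel());
    return adapter;
}
```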
## Amazon Simple Notification Service (SNS)

Amazon SNS is a publish-subscribe messaging system that allows clients to publish notifications to a particular topic.
Other interested clients may subscribe using different protocols, like HTTP/HTTPS, e-mail, or an Amazon SQS queue, to receive the messages.
In addition, mobile devices can be registered as subscribers from the AWS Management Console.

Unfortunately, [Spring Cloud AWS][] doesn't provide flexible components which can be used from channel adapter implementations; on the other hand, the Amazon SNS API is pretty simple.
Hence, the Spring Integration AWS SNS support is straightforward and simply provides a channel adapter foundation for Spring Integration applications.

Since e-mail, SMS and mobile device subscription/unsubscription confirmation is out of the Spring Integration application scope and can be done only from the AWS Management Console, we provide only an HTTP/HTTPS SNS endpoint in the form of the `SnsInboundChannelAdapter`.
The SQS-to-SNS subscription can also be done through account configuration: https://docs.aws.amazon.com/sns/latest/dg/subscribe-sqs-queue-to-sns-topic.html.

### Inbound Channel Adapter

The `SnsInboundChannelAdapter` is an extension of the `HttpRequestHandlingMessagingGateway` and must be a part of a Spring MVC application.
Its URL must be used from the AWS Management Console to add this endpoint as a subscriber to the SNS Topic.
However, before receiving any notification itself, this HTTP endpoint must confirm the subscription.

See the `SnsInboundChannelAdapter` JavaDocs for more information.

An important option of this adapter to consider is `handleNotificationStatus`.
This `boolean` flag indicates whether the adapter should send the `SubscriptionConfirmation/UnsubscribeConfirmation` message to the `output-channel` or not.
If `true`, the `AwsHeaders.NOTIFICATION_STATUS` message header is present in the message with a `NotificationStatus` object, which can be used in the downstream flow to confirm the subscription or not.
It can also be used to "re-confirm" the subscription in the case of an `UnsubscribeConfirmation` message.

In addition, the `AwsHeaders#SNS_MESSAGE_TYPE` message header is provided to simplify routing in the downstream flow.

The Java Configuration is pretty simple:

````java
@SpringBootApplication
public class MyConfiguration {

    @Autowired
    private SnsClient amazonSns;

    @Bean
    public PollableChannel inputChannel() {
        return new QueueChannel();
    }

    @Bean
    public HttpRequestHandler snsInboundChannelAdapter() {
        SnsInboundChannelAdapter adapter = new SnsInboundChannelAdapter(this.amazonSns, "/mySampleTopic");
        adapter.setRequestChannel(inputChannel());
        adapter.setHandleNotificationStatus(true);
        return adapter;
    }
}
````

Note: by default the message `payload` is a `Map` converted from the received Topic JSON message.
For convenience, a `payload-expression` can be provided with the `Message` as the root object of the evaluation context.
Hence, even some HTTP headers populated by the `DefaultHttpHeaderMapper` are available in the evaluation context.

### Outbound Channel Adapter

The `SnsMessageHandler` is a simple one-way Outbound Channel Adapter to send Topic Notifications using the `SnsAsyncClient` service.

This Channel Adapter (`MessageHandler`) accepts these options:

- `topic-arn` (`topic-arn-expression`) - the SNS Topic to send the notification to;
- `subject` (`subject-expression`) - the SNS Notification Subject;
- `body-expression` - the SpEL expression to evaluate the `message` property for the `software.amazon.awssdk.services.sns.model.PublishRequest`;
- `resource-id-resolver` - a `ResourceIdResolver` bean reference to resolve logical topic names to physical resource ids.

See the `SnsMessageHandler` JavaDocs for more information.

The Java Config looks like:

````java
@Bean
public MessageHandler snsMessageHandler() {
    SnsMessageHandler handler = new SnsMessageHandler(amazonSns());
    handler.setTopicArn("arn:aws:sns:eu-west:123456789012:test");
    String bodyExpression = "T(SnsBodyBuilder).withDefault(payload).forProtocols(payload.substring(0, 140), 'sms')";
    handler.setBodyExpression(spelExpressionParser.parseExpression(bodyExpression));

    // message-group ID and deduplication ID are used for FIFO topics
    handler.setMessageGroupId("foo-messages");
    String deduplicationExpression = "headers.id";
    handler.setMessageDeduplicationIdExpression(spelExpressionParser.parseExpression(deduplicationExpression));
    return handler;
}
````

NOTE: the `bodyExpression` can be evaluated to an `org.springframework.integration.aws.support.SnsBodyBuilder`, allowing the configuration of a `json` `messageStructure` for the `PublishRequest` and providing separate messages for different protocols.
The same `SnsBodyBuilder` rule is applied to the raw `payload` if the `bodyExpression` hasn't been configured.
NOTE: if the `payload` of the `requestMessage` is a
`software.amazon.awssdk.services.sns.model.PublishRequest` already, the `SnsMessageHandler` doesn't do anything with it, and it is sent as-is.

Starting with _version 2.0_, the `SnsMessageHandler` can be configured with a `HeaderMapper` to map message headers to SNS message attributes.
See the `SnsHeaderMapper` implementation for more information and also consult [Amazon SNS Message Attributes][] about value types and restrictions.

Starting with _version 2.5.3_, the `SnsMessageHandler` supports sending to SNS FIFO topics using the `messageGroupId`/`messageGroupIdExpression`
and `messageDeduplicationIdExpression` properties.

## Metadata Store for Amazon DynamoDB

The `DynamoDbMetadataStore`, a `ConcurrentMetadataStore` implementation, is provided to keep the metadata for Spring Integration components in the distributed Amazon DynamoDB store.
The implementation is based on a simple table with `metadataKey` and `metadataValue` attributes, both of string type, where `metadataKey` is the partition key of the table.
By default, the `SpringIntegrationMetadataStore` table is used, and it is created during `DynamoDbMetadataStore` initialization if it doesn't exist yet.
The `DynamoDbMetadataStore` can be used with the `KinesisMessageDrivenChannelAdapter` as a cloud-based `checkpointStore`.

Starting with _version 2.0_, the `DynamoDbMetadataStore` can be configured with the `timeToLive` option to enable the [DynamoDB TTL][] feature.
An `expireAt` attribute is added to each item with a value based on the sum of the current time and the provided `timeToLive` in seconds.
If the provided `timeToLive` value is non-positive, the TTL functionality is disabled on the table.

## Amazon Kinesis

Amazon Kinesis is a platform for streaming data on AWS, making it easy to load and analyze streaming data, and also providing the ability for you to build custom streaming data applications for specialized needs.

### Inbound Channel Adapter

The
`KinesisMessageDrivenChannelAdapter` is an extension of the `MessageProducerSupport` - an event-driven channel adapter.

See the `KinesisMessageDrivenChannelAdapter` JavaDocs and its setters for more information on how to use and configure it in an application for Kinesis stream ingestion.

The Java Configuration is pretty simple:

````java
@SpringBootApplication
public class MyConfiguration {

    @Bean
    public KinesisMessageDrivenChannelAdapter kinesisInboundChannelAdapter(KinesisAsyncClient amazonKinesis) {
        KinesisMessageDrivenChannelAdapter adapter =
            new KinesisMessageDrivenChannelAdapter(amazonKinesis, "MY_STREAM");
        adapter.setOutputChannel(kinesisReceiveChannel());
        return adapter;
    }
}
````

This channel adapter can be configured with the `DynamoDbMetadataStore` mentioned above to track sequence checkpoints for shards in a cloud environment when we have several instances of our Kinesis application.
By default, this adapter uses a `DeserializingConverter` to convert the `byte[]` from the `Record` data.
The converter can be specified as `null`, meaning no conversion, in which case the target `Message` is sent with the raw `byte[]` payload.

Additional headers like `AwsHeaders.RECEIVED_STREAM`, `AwsHeaders.SHARD`, `AwsHeaders.RECEIVED_PARTITION_KEY` and `AwsHeaders.RECEIVED_SEQUENCE_NUMBER` are populated on the message for downstream logic.
When `CheckpointMode.manual` is used, a `Checkpointer` instance is populated in the `AwsHeaders.CHECKPOINTER` header for a manual acknowledgment in the downstream logic.
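With `CheckpointMode.manual`, the downstream flow acknowledges a record explicitly via that header. A hedged sketch (the channel name and the `process(...)` business method are illustrative, not part of the framework):

```java
@ServiceActivator(inputChannel = "kinesisReceiveChannel")
public void handleRecord(Message<byte[]> message) {
    Checkpointer checkpointer =
            message.getHeaders().get(AwsHeaders.CHECKPOINTER, Checkpointer.class);
    process(message.getPayload()); // hypothetical business logic
    checkpointer.checkpoint();     // acknowledge only after successful processing
}
```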
The `KinesisMessageDrivenChannelAdapter` can be configured with the `ListenerMode` `record` or `batch` to process records one by one or to send the whole polled batch of records at once.
If the `Converter` is configured to `null`, the entire `List<Record>` is sent as the payload.
Otherwise, a list of converted `Record.getData().array()` values is wrapped into the payload of the message to send.
In this case the `AwsHeaders.RECEIVED_PARTITION_KEY` and `AwsHeaders.RECEIVED_SEQUENCE_NUMBER` headers contain the partition keys and sequence numbers of the converted records as a `List<String>`, respectively.

The consumer group is included in the metadata store `key`.
When records are consumed, they are filtered by the last stored `lastCheckpoint` under the key in the form `[CONSUMER_GROUP]:[STREAM]:[SHARD_ID]`.

Starting with _version 2.0_, the `KinesisMessageDrivenChannelAdapter` can be configured with an `InboundMessageMapper` to extract message headers embedded in the record data (if any).
See the `EmbeddedJsonHeadersMessageMapper` implementation for more information.
When an `InboundMessageMapper` is used together with `ListenerMode.batch`, each `Record` is converted to a `Message` with the extracted embedded headers (if any) and the converted `byte[]` payload, if any and a converter is present.
In this case the `AwsHeaders.RECEIVED_PARTITION_KEY` and `AwsHeaders.RECEIVED_SEQUENCE_NUMBER` headers are populated on the particular message for a record.
These messages are wrapped as a list payload into one outbound message.
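The key layout described above can be illustrated with a trivial standalone sketch; the real key is assembled internally by the adapter, so this only demonstrates the `[CONSUMER_GROUP]:[STREAM]:[SHARD_ID]` form:

```java
public class CheckpointKeyExample {

    // mirrors the [CONSUMER_GROUP]:[STREAM]:[SHARD_ID] layout described above
    static String checkpointKey(String consumerGroup, String stream, String shardId) {
        return String.join(":", consumerGroup, stream, shardId);
    }

    public static void main(String[] args) {
        System.out.println(checkpointKey("my-group", "MY_STREAM", "shardId-000000000000"));
        // prints: my-group:MY_STREAM:shardId-000000000000
    }
}
```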
Starting with _version 2.0_, the `KinesisMessageDrivenChannelAdapter` can be configured with a `LockRegistry` for leader selection over the provided shards, or over those derived from the provided streams.
The `KinesisMessageDrivenChannelAdapter` iterates over its shards and tries to acquire a distributed lock for each shard in its consumer group.
If a `LockRegistry` is not provided, no exclusive locking happens and all the shards are consumed by this `KinesisMessageDrivenChannelAdapter`.
See also `DynamoDbLockRegistry` for more information.

The `KinesisMessageDrivenChannelAdapter` can be configured with a `Function<List<Shard>, List<Shard>> shardListFilter` to filter the available, open, non-exhausted shards.
This filter `Function` will be called each time the shard list is refreshed.

For example, users may want to fully read any parent shards before starting to read their child shards. This could be achieved as follows:

```java
openShards -> {
    Set<String> openShardIds = openShards.stream().map(Shard::shardId).collect(Collectors.toSet());
    // only return open shards which have no parent available for reading
    return openShards.stream()
            .filter(shard -> !openShardIds.contains(shard.parentShardId())
                    && !openShardIds.contains(shard.adjacentParentShardId()))
            .toList();
}
```

Starting with _version 3.0_, any exception thrown from record processing may lead to the shard iterator being rewound to the latest check-pointed sequence, or to the first one in the current failed batch.
This ensures at-least-once delivery for possibly failed records.
If the latest checkpoint is equal to the highest sequence in the batch, then the shard consumer continues with the next iterator.
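The "read parents first" filtering logic shown above can be exercised standalone. In this sketch a tiny local `Shard` record stands in for the AWS SDK shard model (which exposes the same `shardId`/`parentShardId`/`adjacentParentShardId` accessors), purely to demonstrate the behavior:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ShardFilterExample {

    // stand-in for software.amazon.awssdk.services.kinesis.model.Shard
    record Shard(String shardId, String parentShardId, String adjacentParentShardId) { }

    static List<Shard> parentsFirst(List<Shard> openShards) {
        Set<String> openShardIds =
                openShards.stream().map(Shard::shardId).collect(Collectors.toSet());
        // keep only shards whose parents are no longer open for reading
        return openShards.stream()
                .filter(shard -> !openShardIds.contains(shard.parentShardId())
                        && !openShardIds.contains(shard.adjacentParentShardId()))
                .toList();
    }

    public static void main(String[] args) {
        // shard-2 is a child of the still-open shard-1, so it is filtered out
        List<Shard> open = List.of(
                new Shard("shard-1", null, null),
                new Shard("shard-2", "shard-1", null));
        System.out.println(parentsFirst(open)); // only shard-1 remains
    }
}
```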
Also, the `KclMessageDrivenChannelAdapter` is provided for consuming streams with the [Kinesis Client Library][].
See its JavaDocs for more information.

### Outbound Channel Adapter

The `KinesisMessageHandler` is an `AbstractMessageHandler` that puts records onto a Kinesis stream.
The stream, partition key (or explicit hash key) and sequence number can be determined against the request message by evaluating the provided expressions, or they can be specified statically.
They can also be specified via the `AwsHeaders.STREAM`, `AwsHeaders.PARTITION_KEY` and `AwsHeaders.SEQUENCE_NUMBER` headers, respectively.

The `KinesisMessageHandler` can be configured with an `outputChannel` for sending a `Message` on a successful put operation.
The payload is the original request, and the additional `AwsHeaders.SHARD` and `AwsHeaders.SEQUENCE_NUMBER` headers are populated from the `PutRecordResult`.
If the request payload is a `PutRecordsRequest`, the full `PutRecordsResult` is populated in the `AwsHeaders.SERVICE_RESULT` header instead.

When an asynchronous failure happens on the put operation, an `ErrorMessage` is sent to the channel from the `errorChannel` header, or to the global error channel.
The payload is an `AwsRequestFailureException`.

The `payload` of the request message can be:

- a `PutRecordsRequest` to perform `KinesisAsyncClient.putRecords`
- a `PutRecordRequest` to perform `KinesisAsyncClient.putRecord`
- a `ByteBuffer` to represent the data of a `PutRecordRequest`
- a `byte[]`, which is wrapped into a `ByteBuffer`
- any other type, which is converted to a `byte[]` by the provided `Converter`; the `SerializingConverter` is used by default.
The Java Configuration for the message handler:

````java
@Bean
@ServiceActivator(inputChannel = "kinesisSendChannel")
public MessageHandler kinesisMessageHandler(KinesisAsyncClient amazonKinesis,
                                            MessageChannel channel) {
    KinesisMessageHandler kinesisMessageHandler = new KinesisMessageHandler(amazonKinesis);
    kinesisMessageHandler.setPartitionKey("1");
    kinesisMessageHandler.setOutputChannel(channel);
    return kinesisMessageHandler;
}
````

Starting with _version 2.0_, the `KinesisMessageHandler` can be configured with an `OutboundMessageMapper` to embed message headers into the record data alongside the payload.
See the `EmbeddedJsonHeadersMessageMapper` implementation for more information.

Also, the `KplMessageHandler` is provided for publishing records via the [Kinesis Producer Library][].

## Lock Registry for Amazon DynamoDB

Starting with _version 2.0_, the `DynamoDbLockRegistry` implementation is available.
Certain components (for example, aggregator and resequencer) use a lock obtained from a `LockRegistry` instance to ensure that only one thread is manipulating a group at a time.
The `DefaultLockRegistry` performs this function within a single component; you can now configure an external lock registry on these components.
When used with a shared `MessageGroupStore`, the `DynamoDbLockRegistry` can be used to provide this functionality across multiple application instances, such that only one instance can manipulate the group at a time.
This implementation can also be used for distributed leader election using a [LockRegistryLeaderInitiator][].

## Testing

The tests in the project are performed via Testcontainers and the [Local Stack][] image.
See the `LocalstackContainerTest` interface JavaDocs for more information.

[Spring Cloud AWS]: https://awspring.io/
[AWS SDK for Java]: https://aws.amazon.com/sdkforjava/
[Amazon Web Services]: https://aws.amazon.com/
[FTP/FTPS Adapters Chapter]: https://docs.spring.io/spring-integration/reference/html/ftp.html
[Pull requests]: https://help.github.com/en/articles/creating-a-pull-request
[contributor guidelines]: https://github.com/spring-projects/spring-integration/blob/main/CONTRIBUTING.adoc
[administrator guidelines]: https://github.com/spring-projects/spring-integration/wiki/Administrator-Guidelines
[DynamoDB TTL]: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
[Kinesis Client Library]: https://docs.aws.amazon.com/streams/latest/dev/developing-consumers-with-kcl.html
[Amazon SQS Message Attributes]: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html
[Amazon SNS Message Attributes]: https://docs.aws.amazon.com/sns/latest/dg/SNSMessageAttributes.html
[Kinesis Producer Library]: https://docs.aws.amazon.com/streams/latest/dev/developing-producers-with-kpl.html
[LockRegistryLeaderInitiator]: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints-chapter.html#leadership-event-handling
[Local Stack]: https://localstack.cloud