{"id":13471603,"url":"https://github.com/segmentio/topicctl","last_synced_at":"2025-10-24T12:59:25.127Z","repository":{"id":37601281,"uuid":"281490582","full_name":"segmentio/topicctl","owner":"segmentio","description":"Tool for declarative management of Kafka topics","archived":false,"fork":false,"pushed_at":"2025-03-14T14:33:23.000Z","size":474,"stargazers_count":620,"open_issues_count":24,"forks_count":60,"subscribers_count":9,"default_branch":"master","last_synced_at":"2025-03-25T05:35:24.410Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/segmentio.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-07-21T19:52:19.000Z","updated_at":"2025-03-21T15:04:33.000Z","dependencies_parsed_at":"2023-02-18T11:48:12.212Z","dependency_job_id":"f887f97c-57ab-486e-9387-b24a237b4b3d","html_url":"https://github.com/segmentio/topicctl","commit_stats":null,"previous_names":[],"tags_count":37,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/segmentio%2Ftopicctl","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/segmentio%2Ftopicctl/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/segmentio%2Ftopicctl/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/segmentio%2Ftopicctl/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/segmentio","download_url":"https://codeload.github.com
/segmentio/topicctl/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245662850,"owners_count":20652095,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T16:00:47.193Z","updated_at":"2025-10-24T12:59:25.114Z","avatar_url":"https://github.com/segmentio.png","language":"Go","readme":"![GitHub Actions](https://github.com/segmentio/topicctl/actions/workflows/ci.yml/badge.svg)\n[![Go Report Card](https://goreportcard.com/badge/github.com/segmentio/topicctl)](https://goreportcard.com/report/github.com/segmentio/topicctl)\n\n# topicctl\n\nA tool for easy, declarative management of Kafka topics. Includes the ability to \"apply\" topic\nchanges from YAML as well as a repl for interactive exploration of brokers, topics, consumer groups,\nmessages, and more.\n\n\u003cimg width=\"667\" alt=\"topicctl_screenshot2\" src=\"https://user-images.githubusercontent.com/54862872/88094530-979db380-cb48-11ea-93e0-ed4c45aefd66.png\"\u003e\n\n## Motivation\n\nManaging Kafka topics via the standard tooling can be tedious and error-prone; there is no\nstandard, declarative way to define topics (e.g., YAML files that can be checked-in to git),\nand understanding the state of a cluster at any given point in time requires knowing\nand using multiple, different commands with different interfaces.\n\nWe created `topicctl` to make the management of our Kafka topics more transparent and\nuser-friendly. 
The project was inspired by `kubectl` and other tools that we've used in\nnon-Kafka-related contexts.\n\nSee [this blog post](https://segment.com/blog/easier-management-of-Kafka-topics-with-topicctl/) for\nmore details.\n\n## 🆕 Upgrading from v0\n\nWe recently revamped `topicctl` to support ZooKeeper-less cluster access as well as some\nadditional security options (TLS/SSL and SASL)! See\n[this blog post](https://segment.com/blog/topicctl-v1/) to learn more about why and how we did\nthis.\n\nAll changes should be backwards compatible, but you'll need to update your cluster configs if you\nwant to take advantage of these new features; see the [clusters section below](#clusters) for more\ndetails on the latest config format.\n\nThe code for the old version has been preserved in the\n[v0 branch](https://github.com/segmentio/topicctl/tree/v0) if you run into problems and need to\nrevert.\n\n## Related projects\n\nCheck out the [data-digger](https://github.com/segmentio/data-digger) for a command-line tool\nthat makes it easy to tail and summarize structured data in Kafka.\n\n## Getting started\n\n### Installation\n\nEither:\n\n1. Run `go install github.com/segmentio/topicctl/cmd/topicctl@latest`\n2. Clone this repo and run `make install` in the repo root\n3. Use the Docker image: `docker pull segment/topicctl`\n\nIf you use (1) or (2), the binary will be placed in `$GOPATH/bin`.\n\nIf you use Docker, you can run the tool via\n`docker run -ti --rm segment/topicctl [subcommand] [flags]`. Depending on your Docker setup and\nwhat you're trying to do, you may also need to run in the host network via `--net=host` and/or\nmount local volumes with `-v`.\n\n### Quick tour\n\n1. Start up a 6 node Kafka cluster locally:\n\n```\ndocker-compose up -d\n```\n\n2. Run the net alias script to make the broker addresses available on localhost:\n\n```\n./scripts/set_up_net_alias.sh\n```\n\n3. 
Apply the topic configs in [`examples/local-cluster/topics`](/examples/local-cluster/topics):\n\n```\ntopicctl apply --skip-confirm examples/local-cluster/topics/*.yaml\n```\n\n4. Send some test messages to the `topic-default` topic:\n\n```\ntopicctl tester --broker-addr=localhost:9092 --topic=topic-default\n```\n\n5. Open up the repl (while keeping the tester running in a separate terminal):\n\n```\ntopicctl repl --cluster-config=examples/local-cluster/cluster.yaml\n```\n\n6. Run some test commands:\n\n```\nget brokers\nget topics\nget partitions\nget partitions topic-default\nget offsets topic-default\ntail topic-default\n```\n\n7. Increase the number of partitions in the `topic-default` topic by changing the `partitions: ...`\nvalue in\n[topic-default.yaml](https://github.com/segmentio/topicctl/blob/master/examples/local-cluster/topics/topic-default.yaml#L10) to `9` and re-applying:\n\n```\ntopicctl apply examples/local-cluster/topics/topic-default.yaml\n```\n\n8. Bring down the local cluster:\n\n```\ndocker-compose down\n```\n\n## Usage\n\n### Subcommands\n\n#### apply\n\n```\ntopicctl apply [path(s) to topic config(s)]\n```\n\nThe `apply` subcommand ensures that the actual state of a topic in the cluster\nmatches the desired state in its config. If the topic doesn't exist, the tool will\ncreate it. If the topic already exists but its cluster state is out of sync,\nthen the tool will initiate the necessary changes to bring it into compliance.\n\nSee the [Config formats](#config-formats) section below for more information on the\nexpected file formats.\n\n#### bootstrap\n\n```\ntopicctl [flags] bootstrap\n```\n\nThe `bootstrap` subcommand generates topic configs, in the format expected by `apply`, from the\nexisting topics in a cluster.\n
This can be used to \"import\" topics not created or previously managed by topicctl.\nThe output can be sent to either a directory (if the `--output` flag is set) or `stdout`.\n\nBy default, this does not include internal topics such as `__consumer_offsets`.\nIf you would like to have these topics included,\npass the `--allow-internal-topics` flag.\n\n#### check\n\n```\ntopicctl check [path(s) to topic config(s)]\n```\n\nThe `check` command validates that each topic config has the correct fields set and is\nconsistent with the associated cluster config. Unless `--validate-only` is set, it then\nchecks the topic config against the state of the topic in the corresponding cluster.\n\n#### create\n```\ntopicctl create [flags] [command]\n```\n\nThe `create` command creates resources in the cluster from a configuration file. \nCurrently, only ACLs are supported. The create command is separate from the apply\ncommand as it is intended for usage with immutable resources managed by topicctl.\n\n#### delete\n```\ntopicctl delete [flags] [operation]\n```\n\nThe `delete` subcommand deletes a particular resource type in the cluster.\nCurrently, the following operations are supported:\n| Subcommand      | Description |\n| --------- | ----------- |\n| `delete acls [flags]` | Deletes ACL(s) in the cluster matching the provided flags |\n| `delete topic [topic]` | Deletes a single topic in the cluster |\n\n#### get\n\n```\ntopicctl get [flags] [operation]\n```\n\nThe `get` subcommand lists out the instances and/or details of a particular\nresource type in the cluster. 
Currently, the following operations are supported:\n\n| Subcommand      | Description |\n| --------- | ----------- |\n| `get balance [optional topic]` | Number of replicas per broker position for topic or cluster as a whole |\n| `get brokers` | All brokers in the cluster |\n| `get config [broker or topic]` | Config key/value pairs for a broker or topic |\n| `get groups` | All consumer groups in the cluster |\n| `get lags [topic] [group]` | Lag for each topic partition for a consumer group |\n| `get members [group]` | Details of each member in a consumer group |\n| `get partitions [optional topics]` | All partitions for the specified topics |\n| `get offsets [topic]` | Number of messages per partition along with start and end times |\n| `get topics` | All topics in the cluster |\n| `get acls [flags]` | Describe access control lists (ACLs) in the cluster |\n| `get users` | All users in the cluster |\n\n#### rebalance\n\n```\ntopicctl rebalance [flags]\n```\n\nThe `apply` subcommand, when run with the `--rebalance` flag, rebalances the specified topics across a cluster.\n\nThe `rebalance` subcommand, on the other hand, performs a rebalance for **all** the topics defined at a given topic prefix path.\n\nSee the [rebalancing](#rebalancing) section below for more information on rebalancing.\n\n#### repl\n\n```\ntopicctl repl [flags]\n```\n\nThe `repl` subcommand starts up a shell that allows running the `get` and `tail`\nsubcommands interactively.\n\n#### reset-offsets\n\n```\ntopicctl reset-offsets [topic] [group] [flags]\n```\n\nThe `reset-offsets` subcommand allows resetting the offsets for a consumer group in a topic. There are two main approaches for setting the offsets:\n\n1. Use a combination of the `--partitions`, `--offset`, `--to-earliest` and `--to-latest` flags. The `--partitions` flag specifies a list of partitions to be reset, e.g. `1,2,3 ...`. If not used, the command defaults to resetting consumer group offsets for ALL of the partitions.\n
The `--offset` flag indicates the specific value that all desired consumer group partitions will be set to. If not set, it will default to `-2` (the earliest offset). Finally, the `--to-earliest` flag resets offsets of consumer group members to the earliest offsets of the partitions, while `--to-latest` resets them to the latest offsets. However, only one of the `--to-earliest`, `--to-latest` and `--offset` flags can be used at a time. This approach is easy to use but doesn't allow detailed per-partition offset configuration.\n\n2. Use the `--partition-offset-map` flag to specify a detailed offset configuration for individual partitions. For example, `1=5,2=10,7=12,...` means that the consumer group offset for partition 1 must be set to 5, partition 2 to offset 10, partition 7 to offset 12, and so on. This approach provides greater flexibility and fine-grained control for this operation. Note that the `--partition-offset-map` flag is standalone and cannot be coupled with any of the previous flags.\n\n#### tail\n\n```\ntopicctl tail [flags] [topic]\n```\n\nThe `tail` subcommand tails and logs out topic messages using the APIs exposed in\n[kafka-go](https://github.com/segmentio/kafka-go). It doesn't have the full functionality\nof `kafkacat` (yet), but the output is prettier and it may be easier to use in some cases.\n\n#### tester\n\n```\ntopicctl tester [flags]\n```\n\nThe `tester` command reads or writes test messages in a topic. For testing/demonstration purposes\nonly.\n\n### Specifying the target cluster\n\nThere are three ways to specify a target cluster in the `topicctl` subcommands:\n\n1. `--cluster-config=[path]`, where the referenced path is a cluster configuration\n  in the format expected by the `apply` command described above,\n2. `--zk-addr=[zookeeper address]` and `--zk-prefix=[optional prefix for cluster in zookeeper]`, *or*\n3. `--broker-addr=[bootstrap broker address]`\n\nAll subcommands support the `cluster-config` pattern.\n
The last two are also supported\nby the `get`, `repl`, `reset-offsets`, and `tail` subcommands since these can be run\nindependently of an `apply` workflow.\n\n### Version compatibility\n\nWe've tested `topicctl` on Kafka clusters with versions between `0.10.1` and `2.7.1`, inclusive.\n\nNote, however, that clusters at versions prior to `2.4.0` cannot use broker APIs for applying and\nthus also require ZooKeeper API access for full functionality. See the\n[cluster access details](#cluster-access-details) section below for more details.\n\nIf you run into any unexpected compatibility issues, please file a bug.\n\n## Config formats\n\n`topicctl` uses structured, YAML-formatted configs for clusters and topics. These are\ntypically source-controlled so that changes can be reviewed before being applied.\n\n### Clusters\n\nEach cluster associated with a managed topic must have a config. These configs can also be used\nwith the `get`, `repl`, `reset-offsets`, and `tail` subcommands instead of specifying a broker or\nZooKeeper address.\n\nThe following shows an annotated example:\n\n```yaml\nmeta:\n  name: my-cluster                      # Name of the cluster\n  environment: stage                    # Cluster environment\n  region: us-west-2                     # Cloud region of the cluster\n  shard: 1                              # Shard index of this cluster, if it is sharded.\n  description: |                        # A free-text description of the cluster (optional)\n    Test cluster for topicctl.\n\nspec:\n  bootstrapAddrs:                       # One or more broker bootstrap addresses\n    - my-cluster.example.com:9092\n  clusterID: abc-123-xyz                # Expected cluster ID for cluster (optional,\n                                        # used as safety check only)\n\n  # ZooKeeper access settings (only required for pre-v2 clusters; leave off to force exclusive use\n  # of broker APIs)\n  zkAddrs:                              # One or more cluster zookeeper 
addresses; if these are\n    - zk.example.com:2181               # omitted, then the cluster will only be accessed via\n                                        # broker APIs; see the section below on cluster access for\n                                        # more details.\n  zkPrefix: my-cluster                  # Prefix for zookeeper nodes if using zookeeper access\n  zkLockPath: /topicctl/locks           # Path used for apply locks (optional)\n\n  # TLS/SSL settings (optional, not supported if using ZooKeeper)\n  tls:\n    enabled: true                       # Whether TLS is enabled\n    caCertPath: path/to/ca.crt          # Path to CA cert to be used (optional)\n    certPath: path/to/client.crt        # Path to client cert to be used (optional)\n    keyPath: path/to/client.key         # Path to client key to be used (optional)\n\n  # SASL settings (optional, not supported if using ZooKeeper)\n  sasl:\n    enabled: true                       # Whether SASL is enabled\n    mechanism: SCRAM-SHA-512            # Mechanism to use; choices are AWS-MSK-IAM, PLAIN,\n                                        # SCRAM-SHA-256, and SCRAM-SHA-512\n    username: my-username               # SASL username; ignored for AWS-MSK-IAM\n    password: my-password               # SASL password; ignored for AWS-MSK-IAM\n```\n\nNote that the `name`, `environment`, `region`, and `description` fields are used\nfor description/identification only, and don't appear in any API calls. They can\nbe set arbitrarily, provided that they match up with the values set in the\nassociated topic configs.\n\nIf the tool is run with the `--expand-env` option, then the cluster config will be preprocessed\nusing [`os.ExpandEnv`](https://pkg.go.dev/os#ExpandEnv) at load time.\n
The latter will replace\nreferences of the form `$ENV_VAR_NAME` or `${ENV_VAR_NAME}` with the associated values from the\nenvironment.\n\nAdditionally, the Amazon Resource Name (ARN) of a secret in AWS Secrets Manager can be provided\ninstead of the username and password. Topicctl will then retrieve the secret value from Secrets\nManager and use it as the credentials. The secret in Secrets Manager must have a value in the format\nshown below, identical to what [AWS MSK requires](https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html#msk-password-tutorial).\n```json\n{\n  \"username\": \"alice\",\n  \"password\": \"alice-secret\"\n}\n```\n\nAn example of Secrets Manager being used is shown below. Be sure to include the [six random characters that\nAWS Secrets Manager tacks on to the end of a secret's ARN](https://docs.aws.amazon.com/secretsmanager/latest/userguide/getting-started.html).\n```yaml\nsasl:\n    enabled: true\n    mechanism: SCRAM-SHA-512\n    secretsManagerArn: arn:aws:secretsmanager:\u003cRegion\u003e:\u003cAccountId\u003e:secret:SecretName-6RandomCharacters\n```\n\n### Topics\n\nEach topic is configured in a YAML file.\n
The following is an\nannotated example:\n\n```yaml\nmeta:\n  name: topics-test                     # Name of the topic\n  cluster: my-cluster                   # Name of the cluster\n  environment: stage                    # Environment of the cluster\n  region: us-west-2                     # Region of the cluster\n  description: |                        # Free-text description of the topic (optional)\n    Test topic in my-cluster.\n  labels:                               # Custom key-value pairs purposed for topic bookkeeping (optional)\n    key1: value1\n    key2: value2\n\nspec:\n  partitions: 9                         # Number of topic partitions\n  replicationFactor: 3                  # Replication factor per partition\n  retentionMinutes: 360                 # Number of minutes to retain messages (optional)\n  placement:\n    strategy: in-zone                   # Placement strategy, see info below\n    picker: randomized                  # Picker method, see info below (optional)\n  settings:                             # Miscellaneous other config settings (optional)\n    cleanup.policy: delete\n    max.message.bytes: 5242880\n```\n\nThe `cluster`, `environment`, and `region` fields are used for matching\nagainst a cluster config and double-checking that the cluster we're applying\nin is correct; they don't appear in any API calls.\n\nSee the [Kafka documentation](https://kafka.apache.org/documentation/#topicconfigs)\nfor more details on the parameters that can be set in the `settings` field. Note\nthat retention time can be set in either this section or via `retentionMinutes` but\nnot in both places. 
The latter is easier, so it's recommended.\n\nMultiple topics can be included in the same file, separated by `---` lines, provided\nthat they reference the same cluster.\n\n#### Placement strategies\n\nThe tool supports the following per-partition, replica placement strategies:\n\n| Strategy     | Description |\n| --------- | ----------- |\n| `any` | Allow any replica placement |\n| `balanced-leaders` | Ensure that the leaders of each partition are evenly distributed across the broker racks  |\n| `in-rack` | Ensure that the followers for each partition are in the same rack as the leader; generally this is done when the leaders are already balanced, but this isn't required |\n| `cross-rack` | Ensure that the replicas for each partition are all in different racks; generally this is done when the leaders are already balanced, but this isn't required |\n| `static` | Specify the placement manually, via an extra `staticAssignments` field. ([example](examples/local-cluster/topics/topic-static.yaml)) |\n| `static-in-rack` | Specify the rack placement per partition manually, via an extra `staticRackAssignments` field ([example](examples/local-cluster/topics/topic-static-in-rack.yaml))|\n\n#### Picker methods\n\nThere are often multiple options to pick from when updating a replica. For instance, with an\n`in-rack` strategy, we can pick any replica in the target rack that isn't already used in the\npartition.\n\nCurrently, `topicctl` supports the following methods for this replica \"picking\" process:\n\n| Method     | Description |\n| --------- | ----------- |\n| `cluster-use` | Pick based on broker frequency in the topic, then break ties by looking at the frequency of each broker across all topics in the cluster |\n| `lowest-index` | Pick based on broker frequency in the topic, then break ties by choosing the lowest-index broker |\n| `randomized` | Pick based on broker frequency in the topic, then break ties randomly. 
The underlying random generator uses a consistent seed (generated from the topic name, partition, and index), so the choice won't vary between apply runs.|\n\nIf no picking method is set in the topic config, then `randomized` is used by default.\n\nNote that these all try to achieve in-topic balance, and only vary in the case of ties.\nThus, the placements won't be significantly different in most cases.\n\nIn the future, we may add pickers that allow for some in-topic imbalance, e.g. to correct a\ncluster-wide broker imbalance.\n\n#### Rebalancing\n\nIf `apply` is run with the `--rebalance` flag, then `topicctl` will rebalance specified topics\nafter the usual apply steps. This process will check the balance of the brokers for each index\nposition (i.e., first, second, third, etc.) for each partition and make replacements if there\nare any brokers that are significantly over- or under-represented.\n\nThe rebalance process can optionally remove brokers from a topic. To use this feature, set the\n`--to-remove` flag. Note that this flag has no effect unless `--rebalance` is also set.\n\nRebalancing is not done by default on all apply runs because it can be fairly disruptive and\ngenerally shouldn't be necessary unless the topic started off in an imbalanced state or there\nhas been a change in the number of brokers.\n\nTo rebalance **all** topics in a cluster, use the `rebalance` subcommand, which will perform the `apply --rebalance`\nfunction on all qualifying topics. It will inventory all topic configs found at  `--path-prefix` for a cluster\nspecified by `--cluster-config`.\n\nThis subcommand will not rebalance a topic if:\n\n1. the topic config is inconsistent with the cluster config (name, region, environment etc...)\n1. the partition count of a topic in the kafka cluster does not match the topic partition setting in the topic config\n1. a topic's `retention.ms` in the kafka cluster does not match the topic's `retentionMinutes` setting in the topic config\n1. 
a topic does not exist in the kafka cluster\n\n### ACLs\n\nSets of ACLs can be configured in a YAML file. The following is an\nannotated example:\n\n```yaml\nmeta:\n  name: acls-test                       # Name of the group of ACLs\n  cluster: my-cluster                   # Name of the cluster\n  environment: stage                    # Environment of the cluster\n  region: us-west-2                     # Region of the cluster\n  description: |                        # Free-text description of the ACL group (optional)\n    Test ACLs in my-cluster.\n  labels:                               # Custom key-value pairs purposed for ACL bookkeeping (optional)\n    key1: value1\n    key2: value2\n\nspec:\n  acls:\n    - resource:\n        type: topic                     # Type of resource (topic, group, cluster, etc.)\n        name: test-topic                # Name of the resource to apply an ACL to\n        patternType: literal            # Type of pattern (literal, prefixed, etc.)\n        principal: User:my-user         # Principal to apply the ACL to\n        host: \"*\"                       # Host to apply the ACL to (quoted, since a bare * is not valid YAML)\n        permission: allow               # Permission to apply (allow, deny)\n      operations:                       # List of operations to use for the ACLs\n        - read\n        - describe\n```\n\nThe `cluster`, `environment`, and `region` fields are used for matching\nagainst a cluster config and double-checking that the cluster we're applying\nin is correct; they don't appear in any API calls.\n\nSee the [Kafka documentation](https://kafka.apache.org/documentation/#security_authz_primitives)\nfor more details on the parameters that can be set in the `acls` field.\n\nMultiple groups of ACLs can be included in the same file, separated by `---` lines, provided\nthat they reference the same cluster.\n\n## Tool safety\n\nThe `bootstrap`, `get`, `repl`, and `tail` subcommands are read-only and should never make\nany changes in the cluster.\n\nThe `apply`\n
subcommand can make changes, but only under the following conditions:\n\n1. User confirmation is required for any mutation to the cluster\n2. Topics are never deleted\n3. Partitions can be added but are never removed\n4. All apply runs are interruptible and idempotent (see sections below for more details)\n5. Partition changes in apply runs are locked on a per-cluster basis\n6. Leader changes in apply runs are locked on a per-topic basis\n7. Partition replica migrations are protected via\n  [\"throttles\"](https://kafka.apache.org/0101/documentation.html#rep-throttle)\n  to prevent the cluster network from getting overwhelmed\n8. Before applying, the tool checks the cluster ID against the expected value in the\n  cluster config. This can help prevent errors around applying in the wrong cluster when multiple\n  clusters are accessed through the same address, e.g. `localhost:2181`.\n\nThe `reset-offsets` command can also make changes in the cluster and should be used carefully.\n\nThe `create` command can be used to create new resources in the cluster. It cannot be used with\nmutable resources.\n\n### Idempotency\n\nApply runs are designed to be idempotent: the effects should be the same no matter how many\ntimes they are run, assuming everything else in the cluster remains constant (e.g., the number of\nbrokers, each broker's rack, etc.). An exception is replica rebalance operations, which can be\nnon-deterministic. Changes in other topics should generally not affect idempotency, except,\npossibly, when the topic is configured to use the `cluster-use` picker.\n\n### Interruptibility\n\nIf an apply run is interrupted, then any in-progress broker migrations or leader elections\nwill continue and any applied throttles will be kept in place. The next time the topic is applied,\nthe process should continue from where it left off.\n\n## Cluster access details\n\n### ZooKeeper vs. 
broker APIs\n\n`topicctl` can interact with a cluster through either ZooKeeper or by hitting broker APIs\ndirectly.\n\nBroker APIs are used exclusively if the tool is run with either of the following flags:\n\n1. `--broker-addr` *or*\n2. `--cluster-config` and the cluster config doesn't specify any ZK addresses\n\nWe recommend using this \"broker only\" access mode for all clusters running Kafka versions \u003e= 2.4.\n\nIn all other cases, i.e. if `--zk-addr` is specified or the cluster config has ZK addresses, then\nZooKeeper will be used for most interactions. A few operations that are not possible via ZK\nwill still use broker APIs, however, including:\n\n1. Group-related `get` commands: `get groups`, `get lags`, `get members`\n2. `get offsets`\n3. `reset-offsets`\n4. `tail`\n5. `apply` with topic creation\n\nThis \"mixed\" mode is required for clusters running Kafka versions \u003c 2.0.\n\n### Limitations of broker-only access mode\n\nThere are a few limitations in the tool when using the broker APIs exclusively:\n\n1. Only newer versions of Kafka are supported. In particular:\n    - v2.0 or greater is required for read-only operations (`get brokers`, `get topics`, etc.)\n    - v2.4 or greater is required for applying topic changes\n2. Apply locking is not yet implemented; please be careful when applying to ensure that someone\n  else isn't applying changes in the same topic at the same time.\n3. The values of some dynamic broker properties, e.g. `leader.replication.throttled.rate`, are\n  marked as \"sensitive\" and not returned via the API; `topicctl` will show the value as\n  `SENSITIVE`. This appears to be fixed in v2.6.\n4. Broker timestamps are not returned by the metadata API. These will be blank in the results\n  of `get brokers`.\n5. Applying is not fully compatible with clusters provisioned in Confluent Cloud. It appears\n  that Confluent prevents arbitrary partition reassignments, among other restrictions. 
Read-only\n  operations seem to work.\n\n### TLS\n\nTLS (referred to by the older name \"SSL\" in the Kafka documentation) is supported when running\n`topicctl` in the exclusive broker API mode. To use this, either set `--tls-enabled` on the\ncommand line or, if using a cluster config, set `enabled: true` in the `tls` section of\nthe config.\n\nIn addition to standard TLS, the tool also supports mutual TLS using custom certs, keys, and CA\ncerts (in PEM format). As with the enabling of TLS, these can be configured either on the\ncommand line or in a cluster config. See [this config](examples/auth/cluster.yaml) for an example.\n\n### SASL\n\n`topicctl` supports SASL authentication when running in the exclusive broker API mode. To use this,\neither set the `--sasl-mechanism` and other appropriate `--sasl-*` flags on the command line or\nfill out the `sasl` section of the cluster config.\n\nThe following mechanisms can be used:\n\n1. `AWS-MSK-IAM`\n2. `PLAIN`\n3. `SCRAM-SHA-256`\n4. `SCRAM-SHA-512`\n\nIf using `AWS-MSK-IAM`, then `topicctl` will attempt to discover your AWS credentials in the\nlocations and order described [here](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/).\nThe other mechanisms require a username and password to be set in either the cluster config\nor on the command line.\n
See the cluster configs in the [examples/auth](/examples/auth) and\n[examples/msk](/examples/msk) directories for some specific examples.\n\nNote that SASL can be run either with or without TLS, although the former is generally more\nsecure.\n\n## Development\n\n#### Run tests\n\nFirst, set up docker-compose and the associated network alias:\n\n```\ndocker-compose up -d\n./scripts/set_up_net_alias.sh\n```\n\nThis will create a 6 node, 3 rack cluster locally with the brokers\naccessible on `169.254.123.123`.\n\nThen, run:\n\n```\nmake test\n```\n\nYou can change the Kafka version of the local cluster by setting the\n`KAFKA_IMAGE_TAG` environment variable when running `docker-compose up -d`. See the\n[`bitnamilegacy/kafka` dockerhub page](https://hub.docker.com/r/bitnamilegacy/kafka/tags) for more\ndetails on the available versions.\n\n#### Run against local cluster\n\nTo run the `get`, `repl`, and `tail` subcommands against the local cluster,\nset `--zk-addr=localhost:2181` and leave the `--zk-prefix` flag unset.\n\nTo test out `apply`, you can use the configs in `examples/local-cluster/`. For example,\nto create all topics defined for that cluster:\n```\ntopicctl apply examples/local-cluster/topics/*.yaml\n```\n","funding_links":[],"categories":["Go","Kafka","Libraries","CLI Tools"],"sub_categories":["Infrastructure from code","Kafka","Interactive Tools"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsegmentio%2Ftopicctl","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsegmentio%2Ftopicctl","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsegmentio%2Ftopicctl/lists"}