{"id":13452398,"url":"https://github.com/lensesio/fast-data-dev","last_synced_at":"2025-06-11T16:43:06.269Z","repository":{"id":37276073,"uuid":"66100183","full_name":"lensesio/fast-data-dev","owner":"lensesio","description":"Kafka Docker for development. Kafka, Zookeeper, Schema Registry, Kafka-Connect, Landoop Tools, 20+ connectors","archived":false,"fork":false,"pushed_at":"2024-08-06T12:22:22.000Z","size":17396,"stargazers_count":2004,"open_issues_count":76,"forks_count":329,"subscribers_count":50,"default_branch":"fdd/main","last_synced_at":"2024-08-19T00:01:53.750Z","etag":null,"topics":["dataops","docker","kafka","kafka-rest-proxy","schema-registry"],"latest_commit_sha":null,"homepage":"https://lenses.io","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/lensesio.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2016-08-19T17:29:49.000Z","updated_at":"2024-08-19T00:01:56.972Z","dependencies_parsed_at":"2024-03-29T12:28:01.323Z","dependency_job_id":"5c6db2fd-5f24-4d8d-986b-c6d0f832f64f","html_url":"https://github.com/lensesio/fast-data-dev","commit_stats":{"total_commits":427,"total_committers":27,"mean_commits":"15.814814814814815","dds":0.1615925058548009,"last_synced_commit":"551c45076dfdfd55130c2fe55ef81349bf6ee727"},"previous_names":["landoop/fast-data-dev"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lensesio%2Ffast-data-dev","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lensesio%2Ffa
st-data-dev/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lensesio%2Ffast-data-dev/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lensesio%2Ffast-data-dev/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/lensesio","download_url":"https://codeload.github.com/lensesio/fast-data-dev/tar.gz/refs/heads/fdd/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245159411,"owners_count":20570380,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["dataops","docker","kafka","kafka-rest-proxy","schema-registry"],"created_at":"2024-07-31T07:01:23.050Z","updated_at":"2025-03-23T19:34:19.983Z","avatar_url":"https://github.com/lensesio.png","language":"Shell","readme":"# Lenses Box / fast-data-dev #\nlensesio/box (lensesio/box)\n[![docker](https://img.shields.io/docker/pulls/lensesio/box.svg?style=flat)](https://hub.docker.com/r/lensesio/box/)\n[![](https://images.microbadger.com/badges/image/lensesio/box.svg)](http://microbadger.com/images/lensesio/box)\n\nlensesio/fast-data-dev\n[![docker](https://img.shields.io/docker/pulls/lensesio/fast-data-dev.svg?style=flat)](https://hub.docker.com/r/lensesio/fast-data-dev/)\n[![](https://images.microbadger.com/badges/image/lensesio/fast-data-dev.svg)](http://microbadger.com/images/lensesio/fast-data-dev)\n\n[Join the Slack Lenses.io Community!](https://launchpass.com/lensesio)\n\n[Apache Kafka](http://kafka.apache.org/) docker image for developers; 
with Lenses ([lensesio/box](https://hub.docker.com/r/lensesio/box)) or
Lenses.io's open-source UI tools
([lensesio/fast-data-dev](https://hub.docker.com/r/lensesio/fast-data-dev)).
Have a full-fledged Kafka installation up and running in seconds, and top it
off with a modern streaming platform (kafka-lenses-dev only), intuitive UIs,
and extra goodies. Also includes Kafka Connect, Schema Registry, Lenses.io's
Stream Reactor with 25+ connectors, and more.

> **[Get a free license for Lenses Box](https://lenses.io/box/)**

### Introduction

When you need:

1. **A Kafka distribution** with Apache Kafka, Kafka Connect, Zookeeper, Confluent Schema Registry and REST Proxy
2. **Lenses.io's** Lenses, or the kafka-topics-ui, schema-registry-ui and kafka-connect-ui
3. **Lenses.io's** Stream Reactor, 25+ Kafka connectors that simplify ETL processes
4. Integration tests and examples embedded into the docker image

just run:

    docker run --rm --net=host lensesio/fast-data-dev

That's it. Visit <http://localhost:3030> to get into the fast-data-dev environment.

<img src="https://storage.googleapis.com/wch/fast-data-dev-ports.png" alt="fast-data-dev web UI screenshot" type="image/png" width="320">

All the service ports are exposed and can be used from localhost or from
within IntelliJ. The Kafka broker is exposed by default at port `9092`,
Zookeeper at port `2181`, the Schema Registry at `8081`, and Connect at `8083`.
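For scripting against the container, the default port assignments above can be captured in a small helper. This is our own sketch; the `fdd_port` function is not something shipped with the image:

```shell
#!/usr/bin/env bash
# Default fast-data-dev service ports, as listed above.
# fdd_port is our own helper name, not part of the image.
fdd_port() {
  case "$1" in
    broker)    echo 9092 ;;
    zookeeper) echo 2181 ;;
    registry)  echo 8081 ;;
    rest)      echo 8082 ;;
    connect)   echo 8083 ;;
    web)       echo 3030 ;;
    *) echo "unknown service: $1" >&2; return 1 ;;
  esac
}

# Example: probe the Schema Registry once the container is up
# (requires curl and a running fast-data-dev):
# curl -s "http://localhost:$(fdd_port registry)/subjects"
```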
For example, to access the JMX data of the broker run:

    jconsole localhost:9581

If you want the services to be remotely accessible, you may need to pass in
your machine's IP address or a hostname that other machines can use to reach
it:

    docker run --rm --net=host -e ADV_HOST=<IP> lensesio/fast-data-dev

> Hit **Ctrl+C** to stop and remove everything

<img src="https://storage.googleapis.com/wch/fast-data-dev-ui.png" alt="fast-data-dev web UI screenshot" type="image/png" width="900">

### Mac and Windows users (docker-machine)

Create a VM with 4+ GB RAM using Docker Machine:

    docker-machine create --driver virtualbox --virtualbox-memory 4096 lensesio

Run `docker-machine ls` to verify that the Docker Machine is running
correctly. The command's output should be similar to:

    $ docker-machine ls
    NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
    lensesio   *        virtualbox   Running   tcp://192.168.99.100:2376           v17.03.1-ce

Configure your terminal to use the new Docker Machine named lensesio:

    eval $(docker-machine env lensesio)

Then run the Kafka development environment, defining the ports and
advertising the hostname:

    docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 \
           -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=192.168.99.100 \
           lensesio/fast-data-dev:latest

That's it. Visit <http://192.168.99.100:3030> to get into the fast-data-dev environment.

### Run on the Cloud

You may want to quickly run a Kafka instance in GCE or AWS and access it from
your local computer. Fast-data-dev has you covered.

Start a VM in the respective cloud. You can use the OS of your choice,
provided it has a docker package.
CoreOS is a nice choice, as you get docker out of the box.

Next you have to open the firewall, both for your machines but also *for the
VM itself*. This is important!

Once the firewall is open, try:

    docker run -d --net=host -e ADV_HOST=[VM_EXTERNAL_IP] \
               -e RUNNING_SAMPLEDATA=1 lensesio/fast-data-dev

Alternatively, just export the ports you need. E.g.:

    docker run -d -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 \
               -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=[VM_EXTERNAL_IP] \
               -e RUNNING_SAMPLEDATA=1 lensesio/fast-data-dev

Enjoy Kafka, Schema Registry, Connect, Lensesio UIs and Stream Reactor.

### Customize execution

Fast-data-dev and kafka-lenses-dev support custom configuration and extra
features via environment variables.

#### fast-data-dev / kafka-lenses-dev advanced configuration

 Optional Parameters              | Description
--------------------------------- | ------------------------------------------------------------------------------------------------------------
 `CONNECT_HEAP=3G`                | Configure the maximum (`-Xmx`) heap size allocated to Kafka Connect. Useful when you want to start many connectors.
 `<SERVICE>_PORT=<PORT>`          | Custom port `<PORT>` for a service; `0` will disable it. `<SERVICE>` is one of `ZK`, `BROKER`, `BROKER_SSL`, `REGISTRY`, `REST`, `CONNECT`.
 `<SERVICE>_JMX_PORT=<PORT>`      | Custom JMX port `<PORT>` for a service; `0` will disable it. `<SERVICE>` is one of `ZK`, `BROKER`, `BROKER_SSL`, `REGISTRY`, `REST`, `CONNECT`.
 `USER=username`                  | Run in combination with `PASSWORD` to specify the username to use for basic auth.
 `PASSWORD=password`              | Protect the fast-data-dev UI when running publicly. If `USER` is not set, the default username is `kafka`.
 `SAMPLEDATA=0`                   | Do not create topics with sample avro and json records (e.g. do not create the `sea_vessel_position_reports` and `reddit_posts` topics).
 `RUNNING_SAMPLEDATA=1`           | Send a continuous (yet very low) flow of messages to the sample topics, so you can develop against live data.
 `RUNTESTS=0`                     | Disable the (coyote) integration tests that run when the container starts.
 `FORWARDLOGS=0`                  | Disable the file source connector that brings broker logs into a Kafka topic.
 `RUN_AS_ROOT=1`                  | Run kafka as the `root` user - useful e.g. to test the HDFS connector.
 `DISABLE_JMX=1`                  | Disable JMX - enabled by default on ports 9581 - 9585. You may also disable it individually per service.
 `ENABLE_SSL=1`                   | Generate a CA and key-certificate pairs, and enable an SSL port on the broker.
 `SSL_EXTRA_HOSTS=IP1,host2`      | If SSL is enabled, extra hostnames and IP addresses to include in the broker certificate.
 `CONNECTORS=<CONNECTOR>[,<CON2>]`| Explicitly set which connectors* will be enabled. E.g. `hbase`, `elastic` (Stream Reactor version).
 `DISABLE=<CONNECTOR>[,<CON2>]`   | Disable one or more connectors*. E.g. `hbase`, `elastic` (Stream Reactor version), `elasticsearch` (Confluent version).
 `BROWSECONFIGS=1`                | Expose service configuration in the UI. Useful to see how Kafka is set up.
 `DEBUG=1`                        | Print stdout and stderr of all processes to the container's stdout. Useful for debugging early container exits.
 `SUPERVISORWEB=1`                | Enable the supervisor web interface on port 9001 (adjust via `SUPERVISORWEB_PORT`) in order to control services, run `tail -f`, etc.

*Available connectors are: azure-documentdb, blockchain, bloomberg, cassandra,
coap, druid, elastic, elastic5, ftp, hazelcast, hbase, influxdb, jms, kudu,
mongodb, mqtt, pulsar, redis, rethink, voltdb, couchbase, dbvisitreplicate,
debezium-mongodb, debezium-mysql, debezium-postgres, elasticsearch, hdfs,
jdbc, s3, twitter.

To get the list programmatically, run:

    docker run --rm -it lensesio/fast-data-dev \
           find /opt/lensesio/connectors -maxdepth 2 -type d -name "kafka-connect-*"

Optional Parameters (unsupported) | Description
----------------------------------|---------------------------------------------------------------------------------------------------------
`WEB_ONLY=1`                      | Run in combination with `--net=host`; docker will connect to the kafka services running on the local host. Please use our UI docker images instead.
`TOPIC_DELETE=0`                  | Configure whether topics can be deleted. By default topics can be deleted. Please use `KAFKA_DELETE_TOPIC_ENABLE=false` instead.

#### Configure Kafka Components

You may configure any Kafka component (broker, schema registry, connect, rest
proxy) by converting the configuration option to uppercase, replacing dots
with underscores, and prepending the `<SERVICE>_` prefix.

For example:

- To set `log.retention.bytes` for the broker, you would set the environment
  variable `KAFKA_LOG_RETENTION_BYTES=1073741824`.
- To set `kafkastore.topic` for the schema registry, you would set
  `SCHEMA_REGISTRY_KAFKASTORE_TOPIC=_schemas`.
- To set `plugin.path` for the connect worker, you would set
  `CONNECT_PLUGIN_PATH=/var/run/connect/connectors/stream-reactor,/var/run/connect/connectors/third-party,/connectors`.
- To set `schema.registry.url` for the rest proxy, you would set
  `KAFKA_REST_SCHEMA_REGISTRY_URL=http://localhost:8081`.

We also support the variables that set JVM options, such as `KAFKA_OPTS`,
`SCHEMA_REGISTRY_JMX_OPTS`, etc.

Lensesio's Kafka Distribution (LKD) supports a few extra flags as well. Since
in the Apache Kafka build both the broker and the connect worker expect JVM
options in the default `KAFKA_OPTS`, LKD supports `BROKER_OPTS`, etc. for the
broker and `CONNECT_OPTS`, etc. for the connect worker. Of course `KAFKA_OPTS`
is still supported and applies to both applications (and the embedded
zookeeper).

Another LKD addition is the `VANILLA_CONNECT`, `SERDE_TOOLS` and
`LENSESIO_COMMON` flags for Kafka Connect. By default we load into the
Connect classpath Confluent's Schema Registry and Serde Tools jars, as well as
our own base jars, in order to support avro and our connectors. You can
choose to run a completely vanilla kafka connect, the same that comes with
the official distribution, without avro support, by setting
`VANILLA_CONNECT=1`. Please note that most if not all of the connectors will
then fail to load, so it would be wise to disable them.
`SERDE_TOOLS=0` will disable Confluent's jars and `LENSESIO_COMMON=0` will
disable our jars. Either set alone is enough to support avro, but disabling
`LENSESIO_COMMON` will render Stream Reactor inoperable.

### Versions

The latest version of this docker image tracks our latest stable tag. Our
images include:

 Version                       | Kafka Distro  | Lensesio tools | Apache Kafka  | Connectors
-------------------------------| ------------- | -------------- | ------------- | --------------
lensesio/fast-data-dev:3.6.1   | LKD 3.6.1-L0  |       ✓        |    3.6.1      | 20+ connectors
lensesio/fast-data-dev:3.3.1   | LKD 3.3.1-L0  |       ✓        |    3.3.1      | 20+ connectors
lensesio/fast-data-dev:2.6.2   | LKD 2.6.2-L0  |       ✓        |    2.6.2      | 30+ connectors
lensesio/fast-data-dev:2.5.1   | LKD 2.5.1-L0  |       ✓        |    2.5.1      | 30+ connectors
lensesio/fast-data-dev:2.4.1   | LKD 2.4.1-L0  |       ✓        |    2.4.1      | 30+ connectors
lensesio/fast-data-dev:2.3.2   | LKD 2.3.2-L0  |       ✓        |    2.3.2      | 30+ connectors
lensesio/fast-data-dev:2.2.1   | LKD 2.2.1-L0  |       ✓        |    2.2.1      | 30+ connectors
lensesio/fast-data-dev:2.1.1   | LKD 2.1.1-L0  |       ✓        |    2.1.1      | 30+ connectors
lensesio/fast-data-dev:2.0.1   | LKD 2.0.1-L0  |       ✓        |    2.0.1      | 30+ connectors
landoop/fast-data-dev:1.1.1    | LKD 1.1.1-L0  |       ✓        |    1.1.1      | 30+ connectors
landoop/fast-data-dev:1.0.1    | LKD 1.0.1-L0  |       ✓        |    1.0.1      | 30+ connectors
landoop/fast-data-dev:cp3.3.0  | CP 3.3.0 OSS  |       ✓        |    0.11.0.0   | 30+ connectors
landoop/fast-data-dev:cp3.2.2  | CP 3.2.2 OSS  |       ✓        |    0.10.2.1   | 24+ connectors
landoop/fast-data-dev:cp3.1.2  | CP 3.1.2 OSS  |       ✓        |    0.10.1.1   | 20+ connectors
landoop/fast-data-dev:cp3.0.1  | CP 3.0.1 OSS  |       ✓        |    0.10.0.1   | 20+ connectors

*LKD stands for Lenses.io's Kafka Distribution. We build and package Apache
Kafka with Kafka Connect and Apache Zookeeper, Confluent Schema Registry and
REST Proxy, and a collection of third-party Kafka connectors as well as our
own Stream Reactor collection.

Please note the [BSL license](https://lensesio.com/bsl/) of the tools. To use
them on a production cluster with more than 3 Kafka nodes, you should contact us.

### Building it

Fast-data-dev and Lenses Box require a recent version of docker which
supports multistage builds. Optionally, you should also enable the buildx
plugin for multi-arch builds, even if you just use the default builder.

To build it, just run:

    docker build -t lensesio-local/fast-data-dev .

Periodically pull from docker hub to refresh your cache.

If your docker version does not support multi-arch builds, or you don't have
the buildx plugin installed, use the build args demonstrated below to emulate
multi-arch support:

    docker build --build-arg TARGETOS=linux --build-arg TARGETARCH=amd64 -t lensesio-local/fast-data-dev .

### Advanced Features and Settings

#### Custom Ports

To use custom ports for the various services, you can take advantage of the
`ZK_PORT`, `BROKER_PORT`, `REGISTRY_PORT`, `REST_PORT`, `CONNECT_PORT` and
`WEB_PORT` environment variables. One catch is that you can't swap ports,
e.g. assign 8082 (the default REST Proxy port) to the broker.

    docker run --rm -it \
               -p 3181:3181 -p 3040:3040 -p 7081:7081 \
               -p 7082:7082 -p 7083:7083 -p 7092:7092 \
               -e ZK_PORT=3181 -e WEB_PORT=3040 -e REGISTRY_PORT=7081 \
               -e REST_PORT=7082 -e CONNECT_PORT=7083 -e BROKER_PORT=7092 \
               -e ADV_HOST=127.0.0.1 \
               lensesio/fast-data-dev

A port of `0` will disable the service.

#### Execute kafka command line tools

Do you need to execute kafka-related console tools?
While your Kafka container is running, try something like:

    docker run --rm -it --net=host lensesio/fast-data-dev kafka-topics --zookeeper localhost:2181 --list

Or enter the container to use any tool as you like:

    docker run --rm -it --net=host lensesio/fast-data-dev bash

#### View logs

You can view the logs from the web interface. If you prefer the command line,
every application stores its logs under `/var/log` inside the container. If
you have your container's ID or name, you can do something like:

    docker exec -it <ID> cat /var/log/broker.log

#### Enable SSL on Broker

Do you want to test your application over an authenticated TLS connection to
the broker? We've got you covered. Enable TLS via `-e ENABLE_SSL=1`:

    docker run --rm --net=host \
               -e ENABLE_SSL=1 \
               lensesio/fast-data-dev

When fast-data-dev spawns, it creates a self-signed CA. From that it creates
a truststore and two signed key-certificate pairs, one for the broker and one
for your client. You can access the truststore and the client's keystore from
the Web UI, under `/certs` (e.g. <http://localhost:3030/certs>). The password
for both the keystores and the TLS key is `fastdata`. The SSL port of the
broker is `9093`, configurable via the `BROKER_SSL_PORT` variable.

Here is a simple example of how the SSL functionality can be used.
Let's spawn a fast-data-dev instance to act as the server:

    docker run --rm --net=host -e ENABLE_SSL=1 -e RUNTESTS=0 lensesio/fast-data-dev

In a new console, run another instance of fast-data-dev only to get access to
the Kafka command line utilities, and use TLS to connect to the broker of the
former container:

    docker run --rm -it --net=host --entrypoint bash lensesio/fast-data-dev
    root@fast-data-dev / $ wget localhost:3030/certs/truststore.jks
    root@fast-data-dev / $ wget localhost:3030/certs/client.jks
    root@fast-data-dev / $ kafka-producer-perf-test --topic tls_test \
      --throughput 100000 --record-size 1000 --num-records 2000 \
      --producer-props bootstrap.servers="localhost:9093" security.protocol=SSL \
      ssl.keystore.location=client.jks ssl.keystore.password=fastdata \
      ssl.key.password=fastdata ssl.truststore.location=truststore.jks \
      ssl.truststore.password=fastdata

Since the plaintext port is also available, you can test both and find out
which is faster and by how much. ;)

### Advanced Connector settings

#### Explicitly Enable Connectors

The number of connectors present significantly affects Kafka Connect's
startup time, as well as its memory usage. You can enable connectors
explicitly using the `CONNECTORS` environment variable:

    docker run --rm -it --net=host \
               -e CONNECTORS=jdbc,elastic,hbase \
               lensesio/fast-data-dev

Please note that if you don't enable jdbc, some tests will fail; this doesn't
affect fast-data-dev's operation.

#### Explicitly Disable Connectors

Following the same logic as in the paragraph above, you can instead choose to
explicitly disable certain connectors using the `DISABLE` environment
variable.
It takes a comma-separated list of the connector names you want to disable:

    docker run --rm -it --net=host \
               -e DISABLE=elastic,hbase \
               lensesio/fast-data-dev

If you disable the jdbc connector, some tests will fail to run.

#### Enable additional connectors

If you have a custom connector you would like to use, you can mount it under
the `/connectors` directory. The `plugin.path` variable for Kafka Connect is
set up to include `/connectors/`, so Connect will pick up any single-jar
connectors it finds inside this directory and any multi-jar connectors it
finds in its subdirectories.

    docker run --rm -it --net=host \
               -v /path/to/my/connector/connector.jar:/connectors/connector.jar \
               -v /path/to/my/multijar-connector-directory:/connectors/multijar-connector-directory \
               lensesio/fast-data-dev

### FAQ

- Lensesio's Fast Data web UI tools and the integration tests require some
  time until they are fully functional. The tests and the Kafka Connect UI in
  particular will need a few minutes.

  That is because the services (Kafka, Schema Registry, Kafka Connect, REST
  Proxy) have to start and initialize before the UIs can read data.
- What resources does this container need?

  An idle, fresh container will need about 3GiB of RAM. As at least 5 JVM
  applications will be running in it, your mileage will vary. In our
  experience Kafka Connect usually requires a lot of memory; its heap size is
  set by default to 640MiB, but you might need more than that.
- Fast-data-dev does not start properly, the broker fails with:
  > [2016-08-23 15:54:36,772] FATAL [Kafka Server 0], Fatal error during
  > KafkaServer startup.
  > Prepare to shutdown (kafka.server.KafkaServer)
  > java.net.UnknownHostException: [HOSTNAME]: [HOSTNAME]: unknown error

  JVM-based apps tend to be a bit sensitive to hostname issues. Either run
  the image without `--net=host` and expose all ports (2181, 3030, 8081,
  8082, 8083, 9092) to the same ports on the host, or, better yet, make sure
  your hostname resolves to the localhost address (127.0.0.1). Usually, to
  achieve this you need to add your hostname (case sensitive) to `/etc/hosts`
  as the first name after 127.0.0.1. E.g.:

      127.0.0.1 MyHost localhost

### Detailed configuration options

#### Web Only Mode

*Note:* Web only mode will be deprecated in the future.

This is a special mode, for Linux hosts only, where *only* Lensesio's web UIs
are started and the kafka services are expected to be running on the local
machine. It must be run with the `--net=host` flag, hence the Linux-only
requirement:

    docker run --rm -it --net=host \
               -e WEB_ONLY=true \
               lensesio/fast-data-dev

This is useful if you already have a Kafka cluster and want just the
additional Lensesio Fast Data web UI. _Please note that we provide separate,
lightweight docker images for each UI component and we strongly encourage you
to use these instead of fast-data-dev._

#### Connect Heap Size

You can configure Connect's heap size via the environment variable
`CONNECT_HEAP`. The default is `640M`:

    docker run -e CONNECT_HEAP=3G -d lensesio/fast-data-dev

#### Basic Auth (password)

We have included a web server to serve the Lensesio UIs and proxy the schema
registry and kafka REST proxy services, in order to let you share your docker
instance over the web. If you want some basic protection, pass the `PASSWORD`
variable and the web server will be protected by the user `kafka` with your
password.
If you want to set the username too, set the `USER` variable.

     docker run --rm -it -p 3030:3030 \
                -e PASSWORD=password \
                lensesio/fast-data-dev

#### Disable tests

By default this docker image runs a set of coyote tests, to ensure that your
container and development environment are all set up. You can disable the
`coyote` tests using the flag:

    -e RUNTESTS=0

#### Run Kafka as root

In recent versions of fast-data-dev, we switched to running Kafka as the user
`nobody` instead of `root`, since running as root was a bad practice. The old
behaviour may still be desirable; for example, in our
[HDFS connector tests](http://coyote.lensesio.com/connect/kafka-connect-hdfs/)
the Connect worker needs to run as the root user in order to be able to write
to HDFS. To switch to the old behaviour, use:

    -e RUN_AS_ROOT=1

#### JMX Metrics

JMX metrics are enabled by default. If you want to disable them for some
reason (e.g. you need the ports for other purposes), use the `DISABLE_JMX`
environment variable:

    docker run --rm -it --net=host \
               -e DISABLE_JMX=1 \
               lensesio/fast-data-dev

The JMX ports default to `9581` for the broker, `9582` for the schema
registry, `9583` for the REST proxy and `9584` for connect distributed.
Zookeeper is exposed at `9585`.
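To attach JMX tooling from the host, the default ports above can be wrapped in a tiny lookup. This is a sketch of our own; the `fdd_jmx_port` helper is not part of the image, and it assumes the container is running with JMX enabled:

```shell
#!/usr/bin/env bash
# Default JMX ports as documented above; fdd_jmx_port is our own helper name.
fdd_jmx_port() {
  case "$1" in
    broker)    echo 9581 ;;
    registry)  echo 9582 ;;
    rest)      echo 9583 ;;
    connect)   echo 9584 ;;
    zookeeper) echo 9585 ;;
    *) echo "unknown service: $1" >&2; return 1 ;;
  esac
}

# Example: point jconsole at the connect worker of a running container:
# jconsole "localhost:$(fdd_jmx_port connect)"
```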