{"id":13582459,"url":"https://github.com/NinesStack/sidecar","last_synced_at":"2025-04-06T14:30:59.067Z","repository":{"id":42209455,"uuid":"78738988","full_name":"NinesStack/sidecar","owner":"NinesStack","description":"Gossip-based service discovery. Docker native, but supports non-container discovery, too.","archived":false,"fork":true,"pushed_at":"2025-03-31T15:26:08.000Z","size":32761,"stargazers_count":71,"open_issues_count":7,"forks_count":7,"subscribers_count":9,"default_branch":"master","last_synced_at":"2025-03-31T15:37:03.488Z","etag":null,"topics":["golang","microservices","proxy","service-discovery"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":"newrelic/sidecar","license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/NinesStack.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-01-12T11:30:17.000Z","updated_at":"2024-11-20T23:14:41.000Z","dependencies_parsed_at":"2023-02-12T22:46:11.775Z","dependency_job_id":null,"html_url":"https://github.com/NinesStack/sidecar","commit_stats":null,"previous_names":["nitro/sidecar"],"tags_count":17,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NinesStack%2Fsidecar","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NinesStack%2Fsidecar/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NinesStack%2Fsidecar/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NinesStack%2Fsidecar/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/NinesStack","download_url":"https://codeload.github.com/NinesStack/si
decar/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247495764,"owners_count":20948105,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["golang","microservices","proxy","service-discovery"],"created_at":"2024-08-01T15:02:44.149Z","updated_at":"2025-04-06T14:30:58.674Z","avatar_url":"https://github.com/NinesStack.png","language":"Go","readme":"Sidecar ![Sidecar](views/static/Sidecar.png)\n=====\n\n[![](https://travis-ci.com/NinesStack/sidecar.svg?branch=master)](https://travis-ci.com/NinesStack/sidecar)\n\n**The main repo for this project is the [NinesStack\nfork](https://github.com/NinesStack/sidecar)**\n\nSidecar is a dynamic service discovery platform requiring no external\ncoordination service. It's a peer-to-peer system that uses a gossip protocol\nfor all communication between hosts. Sidecar health checks local services and\nannounces them to peer systems. It's Docker-native so your containerized\napplications work out of the box. It's designed to be **A**vailable,\n**P**artition tolerant, and eventually consistent—where \"eventually\" is a very\nshort time window on the matter of a few seconds.\n\nSidecar is part of a small ecosystem of tools. It can stand entirely alone\nor can also leverage:\n\n * [Lyft's Envoy Proxy](https://github.com/envoyproxy/envoy) - In less than\n   a year it is fast becoming a core microservices architecture component.\n   Sidecar implements the Envoy proxy SDS, CDS, LDS (V1) and gRPC (V2) APIs.\n   These allow a standalone Envoy to be entirely configured by Sidecar. 
This\n   is best used with NinesStack's\n   [Envoy proxy container](https://hub.docker.com/r/gonitro/envoyproxy/tags/).\n\n * [haproxy-api](https://github.com/NinesStack/haproxy-api) - A separation layer\n   that allows Sidecar to drive HAproxy in a separate container. It also\n   allows a local HAproxy to be configured against a remote Sidecar instance.\n\n * [sidecar-executor](https://github.com/NinesStack/sidecar-executor) - A Mesos\n   executor that integrates with Sidecar, allowing your containers to be\n   health checked by Sidecar for both service health and service discovery.\n   Also supports a number of extra features including Vault integration for\n   secrets management.\n\n * [sidecar-dns](https://github.com/relistan/sidecar-dns) - a working, but\n   WIP, project to serve DNS SRV records from Sidecar services state.\n\nOverview in Brief\n-----------------\n\nServices communicate to each other through a proxy (Envoy or HAproxy) instance\non each host that is itself managed and configured by Sidecar. This is, in\neffect, a half service mesh where outbound connections go through the proxy,\nbut inbound requests do not. This has most of the advantages of service mesh\nwith a lot less complexity to manage. 
It is inspired by Airbnb's SmartStack.\nBut we believe it has a few advantages over SmartStack:\n\n * Eventually consistent model - a better fit for real world microservices\n * Native support for Docker (works without Docker, too!)\n * No dependence on Zookeeper or other centralized services\n * Peer-to-peer, so it works on your laptop or on a large cluster\n * Static binary means it's easy to deploy, and there is no interpreter needed\n * Tiny memory usage (under 20MB) and few execution threads means it's very\n   lightweight\n\n**See it in Action:** We presented Sidecar at Velocity 2015 and recorded a [YouTube\nvideo](https://www.youtube.com/watch?v=VA43yWVUnMA) demonstrating Sidecar with\n[Centurion](https://github.com/newrelic/centurion), deploying services in\nDocker containers, and seeing Sidecar discover and health check them. The second\nvideo shows the current state of the UI, which has improved since the first video.\n\n[![YouTube Video](views/static/youtube.png)](https://www.youtube.com/watch?v=VA43yWVUnMA)\n[![YouTube Video2](views/static/youtube2.png)](https://www.youtube.com/watch?v=5MQujt36hkI)\n\nComplete Overview and Theory\n----------------------------\n\n![Sidecar Architecture](views/static/Sidecar%20Architecture.png)\n\nSidecar is an eventually consistent service discovery platform where hosts\nlearn about each other's state via a gossip protocol. Hosts exchange messages\nabout which services they are running and which have gone away. All messages\nare timestamped and the latest timestamp always wins. Each host maintains its\nown local state and continually merges changes in from others. Messaging is\nover UDP except when doing anti-entropy transfers.\n\nThere is an anti-entropy mechanism where full state exchanges take place\nbetween peer nodes on an intermittent basis. 
This allows for any missed\nmessages to propagate, and helps keep state consistent across the cluster.\n\nSidecar hosts join a cluster by having a set of cluster seed hosts passed to\nthem on the command line at startup. Once in a cluster, the first thing a host\ndoes is merge the state directly from another host. This is a big JSON blob\nthat is delivered over a TCP session directly between the hosts.\n\nNow the host starts continuously polling its own services and reviewing the\nservices that it has in its own state, sleeping a couple of seconds in between.\nIt announces its services as UDP gossip messages every couple of seconds, and\nalso announces tombstone records for any services which have gone away.\nLikewise, when a host leaves the cluster, any peers that were notified send\ntombstone records for all of its services. These eventually converge and the\nlatest records should propagate everywhere. If the host rejoins the cluster, it\nwill announce new state every few seconds so the services will be picked back\nup.\n\nThere are lifespans assigned to both tombstone and alive records so that:\n\n1. A service that was not correctly tombstoned will go away in short order\n2. We do not continually add to the tombstone state we are carrying\n\nBecause the gossip mechanism is UDP and a service going away is a higher\npriority message, each tombstone is sent twice initially, followed by once a\nsecond for 10 seconds. This delivers reliable messaging of service death.\n\nTimestamps are all local to the host that sent them. This is because we can\nhave clock drift on various machines. But if we always look at the origin\ntimestamp they will at least be comparable to each other by all hosts in the\ncluster. 
The one exception to this is that if clock drift is more than a second\nor two, the alive lifespan may be negatively impacted.\n\nRunning it\n----------\n\nYou can download the latest release from the [GitHub\nReleases](https://github.com/NinesStack/sidecar/releases) page.\n\nIf you'd rather build it yourself, you should install the latest version of the\nGo compiler. Sidecar has not been tested with gccgo, only the mainstream Go\ncompiler.\n\nIt's a Go application and the dependencies are all vendored into the `vendor/`\ndirectory so you should be able to build it out of the box.\n\n```bash\n$ go build\n```\n\nOr you can run it like this:\n\n```bash\n$ go run *.go --cluster-ip \u003cbootstrap_host\u003e\n```\n\nYou always need to supply at least one IP address or hostname with the\n`--cluster-ip` argument (or via the `SIDECAR_SEEDS` environment variable). If you\nare running solo, or are the first member, this can be your own hostname. You\nmay specify the argument multiple times to have multiple hosts. It is\nrecommended to use more than one when possible.\n\nNote: `--cluster-ip` will override the values passed into the `SIDECAR_SEEDS`\nenvironment variable.\n\n### Running in a Container\n\nThe easiest way to deploy Sidecar to your Docker fleet is to run it in a\ncontainer itself. [Instructions for doing that are provided](docker/README.md).\n\nNitro Software maintains builds of the [Docker container\nimage](https://hub.docker.com/r/gonitro/sidecar/) on Docker Hub. Note that\nthe [README](docker/README.md) describes how to configure this container.\n\n\nConfiguration\n-------------\n\nSidecar configuration is done through environment variables, with a few options\nalso supported on the command line. Once the configuration has been parsed,\nSidecar will use [Rubberneck](https://github.com/relistan/rubberneck) to print\nout the values that were used. 
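As a concrete example, a minimal two-node setup might be started like this (the seed addresses and cluster name are placeholders, not defaults):

```shell
# Minimal startup sketch: seed the cluster from two peers and use
# Docker discovery. Replace addresses with your own seed hosts.
export SIDECAR_SEEDS="10.0.0.5,10.0.0.6"
export SIDECAR_CLUSTER_NAME="demo"
export SIDECAR_DISCOVERY="docker"
./sidecar
```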
The environment variables are as follows.\nDefaults are in bold at the end of each line:\n\n * `SIDECAR_LOGGING_LEVEL`: The logging level to use (debug, info, warn, error)\n   **info**\n * `SIDECAR_LOGGING_FORMAT`: Logging format to use (text, json) **text**\n * `SIDECAR_DISCOVERY`: Which discovery backends to use as a csv array\n   (static, docker, kubernetes_api) **`[ docker ]`**\n * `SIDECAR_SEEDS`: csv array of IP addresses used to seed the cluster.\n * `SIDECAR_CLUSTER_NAME`: The name of the Sidecar cluster. Restricts membership\n   to hosts with the same cluster name.\n * `SIDECAR_BIND_PORT`: Manually override the Memberlist bind port **7946**\n * `SIDECAR_ADVERTISE_IP`: Manually override the IP address Sidecar uses for\n   cluster membership.\n * `SIDECAR_EXCLUDE_IPS`: csv array of IPs to exclude from interface selection\n   **`[ 192.168.168.168 ]`**\n * `SIDECAR_STATS_ADDR`: An address to send performance stats to. **none**\n * `SIDECAR_PUSH_PULL_INTERVAL`: How long to wait between anti-entropy syncs.\n   **20s**\n * `SIDECAR_GOSSIP_MESSAGES`: How many times to gather messages per round. **15**\n * `SIDECAR_DEFAULT_CHECK_ENDPOINT`: Default endpoint to health check services\n   on **`/version`**\n\n * `SERVICES_NAMER`: Which method to use to extract service names. In both\n   cases it will fall back to image name. (`docker_label`, `regex`) **`docker_label`**\n * `SERVICES_NAME_MATCH`: The regexp to use to extract the service name\n   from the container name.\n * `SERVICES_NAME_LABEL`: The Docker label to use to identify service names\n   **`ServiceName`**\n\n * `DOCKER_URL`: How to connect to Docker if Docker discovery is enabled.\n   **`unix:///var/run/docker.sock`**\n\n * `STATIC_CONFIG_FILE`: The config file to use if static discovery is enabled\n   **`static.json`**\n\n * `LISTENERS_URLS`: If we want to statically configure any event listeners, the\n   URLs should go in a csv array here. 
See **Listeners** section below for more\n   on dynamic listeners.\n\n * `HAPROXY_DISABLE`: Disable management of HAproxy entirely. This is useful if\n   you need to run without a proxy or are using something like\n   [haproxy-api](https://github.com/NinesStack/haproxy-api) to manage HAproxy based\n   on Sidecar events. You should also use this setting if you are using\n   Envoy as your proxy.\n * `HAPROXY_RELOAD_COMMAND`: The reload command to use for HAproxy **sane defaults**\n * `HAPROXY_VERIFY_COMMAND`: The verify command to use for HAproxy **sane defaults**\n * `HAPROXY_BIND_IP`: The IP that HAproxy should bind to on the host **192.168.168.168**\n * `HAPROXY_TEMPLATE_FILE`: The source template file to use when writing HAproxy\n   configs. This is a Go text template. **`views/haproxy.cfg`**\n * `HAPROXY_CONFIG_FILE`: The path where the `haproxy.cfg` file will be written. Note\n   that if you change this you will need to update the verify and reload commands.\n   **`/etc/haproxy.cfg`**\n * `HAPROXY_PID_FILE`: The path where HAproxy's PID file will be written. Note\n   that if you change this you will need to update the verify and reload commands.\n   **`/var/run/haproxy.pid`**\n * `HAPROXY_USER`: The Unix user under which HAproxy should run **haproxy**\n * `HAPROXY_GROUP`: The Unix group under which HAproxy should run **haproxy**\n * `HAPROXY_USE_HOSTNAMES`: Should we write hostnames in the HAproxy config instead\n   of IP addresses? **`false`**\n\n * `ENVOY_USE_GRPC_API`: Enable the Envoy gRPC API (V2) **`true`**\n * `ENVOY_BIND_IP`: The IP that Envoy should bind to on the host **192.168.168.168**\n * `ENVOY_USE_HOSTNAMES`: Should we write hostnames in the Envoy config instead\n   of IP addresses? 
**`false`**\n * `ENVOY_GRPC_PORT`: The port for the Envoy API gRPC server **`7776`**\n * `ENVOY_LOGGING_LEVEL`: The logging level to use (debug, info, warn, error)\n   **info**\n\n * `KUBE_API_IP`: The IP address at which to reach the Kubernetes API **`127.0.0.1`**\n * `KUBE_API_PORT`: The port to use to contact the Kubernetes API **`8080`**\n * `NAMESPACE`: The namespace against which we should do discovery **`default`**\n * `KUBE_TIMEOUT`: How long until we time out calling the Kube API? **`3s`**\n * `CREDS_PATH`: Where do we find the token file containing API auth credentials?\n   **`/var/run/secrets/kubernetes.io/serviceaccount`**\n\n### Ports\n\nSidecar requires both TCP and UDP protocols be open on the port configured via\n`SIDECAR_BIND_PORT` (default 7946) through any network filters or firewalls\nbetween it and any peers in the cluster. This is the port that the gossip\nprotocol (Memberlist) runs on.\n\nDiscovery\n---------\n\nSidecar supports three discovery mechanisms: Docker-based discovery, \"static\"\ndiscovery where you publish services into a local JSON file, and discovery from\nthe Kubernetes API. Services found by any of these are advertised as running\nservices just like they would be from a Docker host. The mechanisms are\nconfigured with the `SIDECAR_DISCOVERY` environment variable. Using all of them\nwould look like:\n\n```bash\nexport SIDECAR_DISCOVERY=static,docker,kubernetes_api\n```\n\nZero or more backends may be supplied. Note that if none is configured, Sidecar\nwill only participate in a cluster but will not announce anything.\n\n### Configuring Docker Discovery\n\nSidecar currently accepts a single option for Docker-based discovery, the URL\nto use to connect to Docker. Ideally this will be the same machine that Sidecar\nruns on because it makes assumptions about addresses. By default it will use\nthe standard Docker Unix domain socket. You can change this with the\n`DOCKER_URL` env var. 
The `DOCKER_URL` value needs to be a URL that works with the Docker client.\n\nNote that Sidecar only supports a *single* URL, unlike the Docker CLI tool.\n\n**NOTE**\nSidecar can now use the normal Docker environment variables for configuring\nDocker discovery. If you unset `DOCKER_URL` entirely, it will fall back to\ntrying to use environment variables to configure Docker. It uses the standard\nvariables like `DOCKER_HOST`, `TLS_VERIFY`, etc.\n\n#### Docker Labels\n\nWhen running Docker discovery, Sidecar relies on Docker labels to understand\nhow to handle a service it has discovered. It uses these to:\n\n 1. Understand how to map container ports to proxy ports. `ServicePort_XXX`\n 2. How to name the service. `ServiceName`\n 3. How to health check the service. `HealthCheck` and `HealthCheckArgs`\n 4. Whether or not the service is a receiver of Sidecar change events. `SidecarListener`\n 5. Whether or not Sidecar should entirely ignore this service. `SidecarDiscover`\n 6. Envoy or HAproxy proxy behavior. `ProxyMode`\n\n**Service Ports**\nServices may be started with one or more `ServicePort_xxx` labels that help\nSidecar to understand ports that are mapped dynamically. This controls the port\non which the proxy will listen for the service as well. If I have a service where\nthe container is built with `EXPOSE 80` and I want my proxy to listen on port\n8080 then I will add a Docker label to the service in the form:\n\n```\n\tServicePort_80=8080\n```\n\nWith dynamic port bindings, Docker may then bind that to 32767, but Sidecar will\nknow which service and port it belongs to.\n\n**Health Checks**\nIf your services are not checkable with the default settings, they need to have\ntwo Docker labels defining how they are to be health checked. 
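Putting the port and name labels above together, a service container might be launched like this (the image name and port numbers are illustrative, not defaults):

```shell
# ServicePort_80 maps the container's EXPOSEd port 80 to proxy port 8080;
# ServiceName overrides the name Sidecar would otherwise derive from the image.
docker run -d \
  --label ServiceName=web \
  --label ServicePort_80=8080 \
  example/web:latest
```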
To health check a\nservice on port 9090 on the local system with an `HttpGet` check, for example,\nyou would use the following labels:\n\n```\n\tHealthCheck=HttpGet\n\tHealthCheckArgs=http://:9090/status\n```\n\nThe currently available check types are `HttpGet`, `External` and\n`AlwaysSuccessful`. `External` checks will run the command specified in\nthe `HealthCheckArgs` label (in the context of a bash shell). An exit\nstatus of 0 is considered healthy and anything else is unhealthy. Nagios\nchecks work very well with this mode of health checking.\n\n**Excluding From Discovery**\nAdditionally, it can sometimes be nice to exclude certain containers from\ndiscovery. This is particularly useful if you are running Sidecar in a\ncontainer itself. This is accomplished with another Docker label like so:\n\n```\n\tSidecarDiscover=false\n```\n\n**Proxy Behavior**\nBy default, HAProxy or Envoy will run in HTTP mode. The mode can be changed to TCP by\nsetting the following Docker label:\n\n```\nProxyMode=tcp\n```\n\nYou may also enable Websocket support where it's available (e.g. in Envoy) by\nsetting:\n\n```\nProxyMode=ws\n```\n\n**Templating In Labels**\nYou sometimes need to pass information in the Docker labels which\nis not available to you at the time of container creation. One example of this\nis the need to identify the actual Docker-bound port when running the health\ncheck. For this reason, Sidecar allows simple templating in the labels. Here's\nan example.\n\nIf you have a service that is exposing port 8080 and Docker dynamically assigns\nit the port 31445 at runtime, your health check for that port will be impossible\nto define ahead of time. 
But with templating we can say:\n\n```bash\n--label HealthCheckArgs=\"http://{{ host }}:{{ tcp 8080 }}/\"\n```\n\nThis will then fill the template fields, at call time, with the current\nhostname and the actual port that Docker bound to your container's port 8080.\nQuerying of UDP ports works as you might expect, by calling `{{ udp 53 }}` for\nexample.\n\n**Note** that the `tcp` and `udp` method calls in the templates refer only\nto ports mapped with `ServicePort` labels. You will need to use the port\nnumber that you expect the proxy to use.\n\n### Configuring Static Discovery\n\nStatic Discovery requires an entry in the `SIDECAR_DISCOVERY` variable of\n`static`. It will then look for a file configured with `STATIC_CONFIG_FILE` to\nexport services. This file is usually `static.json` in the current working\ndirectory of the process.\n\nA static discovery file might look like this:\n\n```json\n[\n    {\n        \"Service\": {\n            \"Name\": \"some_service\",\n            \"Image\": \"bb6268ff91dc42a51f51db53846f72102ed9ff3f\",\n            \"Ports\": [\n                {\n                    \"Type\": \"tcp\",\n                    \"Port\": 10234,\n                    \"ServicePort\": 9999\n                }\n            ],\n            \"ProxyMode\": \"http\"\n        },\n        \"Check\": {\n            \"Type\": \"HttpGet\",\n            \"Args\": \"http://:10234/\"\n        }\n    },\n    {\n    ...\n    }\n]\n```\n\nHere we've defined both the service itself and the health check to use to\nvalidate its status. Sidecar supports a single health check per service. You should\nsupply something in place of the value for `Image` that is meaningful to you.\nUsually this is a version or git commit string. 
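Since the file is strict JSON, it can be worth validating it before pointing `STATIC_CONFIG_FILE` at it. A quick sketch using Python's `json.tool`, which rejects mistakes like trailing commas just as a strict parser will:

```shell
# Validate a static discovery stanza; exits non-zero on malformed JSON.
python3 -m json.tool > /dev/null <<'EOF' && echo "valid JSON"
[
  {
    "Service": {
      "Name": "some_service",
      "Ports": [ { "Type": "tcp", "Port": 10234, "ServicePort": 9999 } ]
    },
    "Check": { "Type": "HttpGet", "Args": "http://:10234/" }
  }
]
EOF
```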
The `Image` value will show up in the Sidecar\nweb UI.\n\nA further example is available in the `fixtures/` directory used by the tests.\n\n### Configuring Kubernetes API Discovery\n\nThis method of discovery enables you to bridge an existing Sidecar cluster with\na Kubernetes cluster, making the Kubernetes services available to the Sidecar\ncluster. It will announce all of the Kubernetes services that it finds\navailable and map them to a port in the 30000+ range, with the expectation\nbeing that you have configured services to run with a NodePort in that range.\n\nThis is most useful for transitioning services from one cluster to another. You\ncan run one or more Sidecar instances per Kubernetes cluster and they will show\nup like services exported from other discovery mechanisms with the exception\nthat version information is not passed. The environment variables for\nconfiguring the behavior of this discovery method are described above.\n\nSidecar Events and Listeners\n----------------------------\n\nServices which need to know about service discovery change events can subscribe\nto Sidecar events. Any time a significant change happens, the listener will\nreceive an update over HTTP from Sidecar. There are three mechanisms by which\na service can subscribe to Sidecar events:\n\n 1. Add the endpoint in the `LISTENERS_URLS` env var, e.g.:\n    ```bash\n    export LISTENERS_URLS=\"http://localhost:7778/api/update\"\n    ```\n    This is an array and can be separated with spaces or commas.\n\n 2. Add a Docker label to the subscribing service in the form\n    `SidecarListener=10005` where 10005 is a port that is mapped to a\n    `ServicePort` with a Docker label like `ServicePort_80=10005`. This port will\n    then receive all updates on the `/sidecar/update` endpoint. The subscription\n    will be dynamically added and removed when the service starts or stops.\n\n 3. Add the listener export to the `static.json` file exposed by static\n    services. 
The `ListenPort` is a top-level setting for the `Target` and is\n    of the form `ListenPort: 10005` inside the `Target` definition.\n\nMonitoring It\n-------------\n\nThe logging output is pretty good at the normal `info` level. It can be made\nquite verbose in `debug` mode, and contains lots of information about what's\ngoing on and what the current state is. The web interface also contains a lot\nof runtime information on the cluster and the services. If you are running\nHAproxy, it's also recommended that you expose the HAproxy stats port on 3212\nso that Sidecar can find it.\n\nCurrently the web interface runs on port 7777 on each machine that runs\n`sidecar`.\n\nThe `/ui/services` endpoint is a very textual web interface for humans. The\n`/api/services.json` endpoint is JSON-encoded. The JSON is still pretty-printed\nso it's readable by humans.\n\nSidecar API\n-----------\n\nOther than the UI that lives on the base URL, there is a minimalist API\navailable for querying Sidecar. It supports the following endpoints:\n\n * `/services.json`: This returns a big JSON blob sorted and grouped by\n   service.\n * `/state.json`: Returns the whole internal state blob in the internal\n   representation order (servers -\u003e server -\u003e service -\u003e instances)\n * `/services/\u003cservice name\u003e.json`: Returns the same format as the\n   `/services.json` endpoint, but only contains data for a single service.\n * `/watch`: Inconsistently named endpoint that returns JSON blobs on a\n   long-poll basis every time the internal state changes. Useful for\n   anything that needs to know what the ongoing service status is.\n\nSidecar can also be configured to post the internal state to HTTP endpoints on\nany change event. 
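The read-only endpoints above are plain HTTP, so they can be exercised with curl. A sketch, assuming a Sidecar instance reachable on localhost at the default web port 7777:

```shell
# Fetch the grouped service state from a local Sidecar instance.
curl -s http://localhost:7777/services.json

# Long-poll: blocks until the cluster state changes, then prints the new blob.
curl -s http://localhost:7777/watch
```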
See the \"Sidecar Events and Listeners\" section.\n\nEnvoy Proxy Support\n-------------------\n\nEnvoy uses a very different model than HAproxy and thus Sidecar's support for\nit is quite different from its support for HAproxy.\n\nWhen using the REST-based LDS API (V1), Envoy makes requests to a variety of\ndiscovery service APIs on a timed basis. Sidecar currently implements three\nof these: the Cluster Discovery Service (CDS), the Service Discovery Service\n(SDS), and the Listeners Discovery Service (LDS). When using the gRPC V2 API,\nSidecar sends updates to Envoy as soon as possible via gRPC.\n\nNote that the LDS API (V1) has been deprecated by Envoy and it's recommended\nto use the gRPC-based V2 API.\n\nNitro builds and supports [an Envoy\ncontainer](https://hub.docker.com/r/gonitro/envoyproxy/tags/) that is tested\nand works against Sidecar. This is the easiest way to run Envoy with Sidecar.\nYou can find an example container configuration\n[here](https://gist.github.com/relistan/55a6f54bfc2b79d03eb0c8327c2aeb1c) if\nyou need to configure it differently from Nitro's recommended setup.\n\nThe critical component is that the Envoy proxy needs to be able to talk to\nthe Sidecar API. By default the Nitro container assumes that Sidecar will\nbe running on `192.168.168.168:7777`. If your sidecar is addressable on that\naddress, you can start the envoy container with your platform's equivalent\nof the following Docker command:\n\n```bash\ndocker run -i -t --net host --cap-add NET_ADMIN gonitro/envoyproxy:latest\n```\n\n**Note:** This assumes host networking mode so that Envoy can freely open\nand close listeners. Beware that the docker (Linux) bridge network is not\nreachable on OSX hosts, due to the way containers are run under HyperKit,\nso we suggest trying this on Linux instead.\n\n\nContributing\n------------\n\nContributions are more than welcome. Bug reports with specific reproduction\nsteps are great. 
If you have a code contribution you'd like to make, open a\npull request with suggested code.\n\nPull requests should:\n\n * Clearly state their intent in the title\n * Have a description that explains the need for the changes\n * Include tests!\n * Not break the public API\n\nPing us to let us know you're working on something interesting by opening a\nGitHub Issue on the project.\n\nBy contributing to this project you agree that you are granting New Relic a\nnon-exclusive, non-revocable, no-cost license to use the code, algorithms,\npatents, and ideas in that code in our products if we so choose. You also agree\nthe code is provided as-is and you provide no warranties as to its fitness or\ncorrectness for any purpose.\n\nLogo\n----\n\nThe logo is used with kind permission from [Picture\nEsk](https://www.flickr.com/photos/22081583@N06/4226337024/).\n","funding_links":[],"categories":["Go"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FNinesStack%2Fsidecar","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FNinesStack%2Fsidecar","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FNinesStack%2Fsidecar/lists"}