{"id":13416119,"url":"https://github.com/jpetazzo/pipework","last_synced_at":"2025-05-13T18:14:10.818Z","repository":{"id":8927632,"uuid":"10657447","full_name":"jpetazzo/pipework","owner":"jpetazzo","description":"Software-Defined Networking tools for LXC (LinuX Containers)","archived":false,"fork":false,"pushed_at":"2024-11-04T17:31:57.000Z","size":201,"stargazers_count":4244,"open_issues_count":5,"forks_count":732,"subscribers_count":215,"default_branch":"master","last_synced_at":"2025-05-06T23:44:57.770Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jpetazzo.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2013-06-13T02:51:57.000Z","updated_at":"2025-05-05T03:18:27.000Z","dependencies_parsed_at":"2023-12-13T18:39:38.212Z","dependency_job_id":"880564f7-4370-4d9f-9729-28088bace2f9","html_url":"https://github.com/jpetazzo/pipework","commit_stats":{"total_commits":135,"total_committers":57,"mean_commits":"2.3684210526315788","dds":0.8666666666666667,"last_synced_commit":"34f48f37c7b748a51dea79e4b6ab6a5028a02485"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jpetazzo%2Fpipework","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jpetazzo%2Fpipework/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jpetazzo%2Fpipework/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/G
itHub/repositories/jpetazzo%2Fpipework/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jpetazzo","download_url":"https://codeload.github.com/jpetazzo/pipework/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254000885,"owners_count":21997443,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-30T21:00:54.521Z","updated_at":"2025-05-13T18:14:10.777Z","avatar_url":"https://github.com/jpetazzo.png","language":"Shell","readme":"⚠️  **WARNING: this project is not maintained.**\n\nIt was written in the early days of Docker, when people needed a way to\n\"plumb\" Docker containers into arbitrary network topologies. If you want\nto use it today (post-2020), you can, but it's at your own risk.\nSmall contributions (of a few lines) are welcome, but I don't have\nthe time to review and test bigger contributions, so don't expect any\nnew features or significant fixes (for instance, if Docker changes the\nway it handles container networking, this will break, and I will not fix it).\n\nProceed at your own risk! 
:)\n\n# Pipework\n\n**_Software-Defined Networking for Linux Containers_**\n\nPipework lets you connect containers together in arbitrarily complex scenarios.\nPipework uses cgroups and namespaces, and works with \"plain\" LXC containers\n(created with `lxc-start`), and with the awesome [Docker](http://www.docker.io/).\n\n\u003c!-- START doctoc generated TOC please keep comment here to allow auto update --\u003e\n\u003c!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --\u003e\n**Table of Contents**  *generated with [DocToc](https://github.com/thlorenz/doctoc)*\n\n- [Things to note](#things-to-note)\n  - [vCenter / vSphere / ESX / ESXi](#vcenter--vsphere--esx--esxi)\n  - [Virtualbox](#virtualbox)\n  - [Docker](#docker)\n- [LAMP stack with a private network between the MySQL and Apache containers](#lamp-stack-with-a-private-network-between-the-mysql-and-apache-containers)\n- [Docker integration](#docker-integration)\n- [Peeking inside the private network](#peeking-inside-the-private-network)\n- [Setting container internal interface](#setting-container-internal-interface)\n- [Setting host interface name](#setting-host-interface-name)\n- [Using a different netmask](#using-a-different-netmask)\n- [Setting a default gateway](#setting-a-default-gateway)\n- [Connect a container to a local physical interface](#connect-a-container-to-a-local-physical-interface)\n- [Use MAC address to specify physical interface](#use-mac-address-to-specify-physical-interface)\n- [Let the Docker host communicate over macvlan interfaces](#let-the-docker-host-communicate-over-macvlan-interfaces)\n- [Wait for the network to be ready](#wait-for-the-network-to-be-ready)\n- [Add the interface without an IP address](#add-the-interface-without-an-ip-address)\n- [Add a dummy interface](#add-a-dummy-interface)\n- [DHCP](#dhcp)\n  - [Lease Renewal](#lease-renewal)\n- [DHCP Options](#dhcp-options)\n- [Specify a custom MAC address](#specify-a-custom-mac-address)\n- [Virtual LAN (VLAN)](#virtual-lan-vlan)\n- [Control 
Routes](#control-routes)\n- [Control Rules](#control-rules)\n- [Control `tc`](#control-tc)\n- [Support Open vSwitch](#support-open-vswitch)\n- [Support InfiniBand IPoIB](#support-infiniband-ipoib)\n- [Gratuitous ARP](#gratuitous-arp)\n- [Cleanup](#cleanup)\n- [Integrating pipework with other tools](#integrating-pipework-with-other-tools)\n- [About this file](#about-this-file)\n\n\u003c!-- END doctoc generated TOC please keep comment here to allow auto update --\u003e\n\n### Things to note\n\n#### vCenter / vSphere / ESX / ESXi\n**If you use vCenter / vSphere / ESX / ESXi,** set or ask your administrator\nto set *Network Security Policies* of the vSwitch as below:\n\n- Promiscuous mode:    **Accept**\n- MAC address changes: **Accept**\n- Forged transmits:    **Accept**\n\nAfter starting the guest OS and creating a bridge, you might also need to\nfine-tune the `br1` interface as follows:\n\n- `brctl stp br1 off` (to disable the STP protocol and prevent the switch\n  from disabling ports)\n- `brctl setfd br1 2` (to reduce the time taken by the `br1` interface to go\n  from *blocking* to *forwarding* state)\n- `brctl setmaxage br1 0`\n\n#### Virtualbox\n**If you use VirtualBox**, you will have to update your VM network settings.\nOpen the settings panel for the VM, go to the \"Network\" tab, pull down the\n\"Advanced\" settings. Here, the \"Adapter Type\" should be `pcnet` (the full\nname is something like \"PCnet-FAST III\"), instead of the default `e1000`\n(Intel PRO/1000). Also, \"Promiscuous Mode\" should be set to \"Allow All\".\n\nIf you don't do that, bridged containers won't work, because the virtual\nNIC will filter out all packets with a different MAC address.  If you are\nrunning VirtualBox in headless mode, the command line equivalent of the above\nis `modifyvm --nicpromisc1 allow-all`.  
If you are using Vagrant, you can add\nthe following to the config for the same effect:\n\n```Ruby\nconfig.vm.provider \"virtualbox\" do |v|\n  v.customize ['modifyvm', :id, '--nictype1', 'Am79C973']\n  v.customize ['modifyvm', :id, '--nicpromisc1', 'allow-all']\nend\n```\n\nNote: it looks like some operating systems (e.g. CentOS 7) do not support\n`pcnet` anymore. You might want to use the `virtio-net` (Paravirtualized\nNetwork) interface with those.\n\n\n#### Docker\n\n**Before using Pipework, please ask on the [docker-user mailing list](\nhttps://groups.google.com/forum/#!forum/docker-user) if there is a \"native\"\nway to achieve what you want to do *without* Pipework.**\n\nIn the long run, Docker will allow complex scenarios, and Pipework should\nbecome obsolete.\n\nIf there is really no other way to plumb your containers together with\nthe current version of Docker, then okay, let's see how we can help you!\n\nThe following examples show what Pipework can do for you and your containers.\n\n\n### LAMP stack with a private network between the MySQL and Apache containers\n\nLet's create two containers, running the web tier and the database tier:\n\n    APACHE=$(docker run -d apache /usr/sbin/httpd -D FOREGROUND)\n    MYSQL=$(docker run -d mysql /usr/sbin/mysqld_safe)\n\nNow, bring superpowers to the web tier:\n\n    pipework br1 $APACHE 192.168.1.1/24\n\nThis will:\n\n- create a bridge named `br1` on the Docker host;\n- add an interface named `eth1` to the `$APACHE` container;\n- assign IP address 192.168.1.1 to this interface;\n- connect said interface to `br1`.\n\nNow (drum roll), let's do this:\n\n    pipework br1 $MYSQL 192.168.1.2/24\n\nThis will:\n\n- not create a bridge named `br1`, since it already exists;\n- add an interface named `eth1` to the `$MYSQL` container;\n- assign IP address 192.168.1.2 to this interface;\n- connect said interface to `br1`.\n\nNow, both containers can ping each other on the 192.168.1.0/24 subnet.\n\n\n### Docker 
integration\n\nPipework can resolve Docker container names. If the container ID that\nyou gave to Pipework cannot be found, Pipework will try to resolve it\nwith `docker inspect`. This makes it even simpler to use:\n\n    docker run -name web1 -d apache\n    pipework br1 web1 192.168.12.23/24\n\n\n### Peeking inside the private network\n\nWant to connect to those containers using their private addresses? Easy:\n\n    ip addr add 192.168.1.254/24 dev br1\n\nVoilà!\n\n\n### Setting container internal interface\n\nBy default, pipework creates a new interface `eth1` inside the container. If you want to change this interface name to something like `eth2`, e.g., to have more than one interface set by pipework, use:\n\n`pipework br1 -i eth2 ...`\n\n**Note:** for InfiniBand IPoIB interfaces, the default interface name is `ib0` and not `eth1`.\n\n\n### Setting host interface name\n\nBy default, pipework will create a host-side interface with a fixed prefix but a random suffix. If you would like to specify this interface name, use the `-l` flag (for local):\n\n`pipework br1 -i eth2 -l hostapp1 ...`\n\n\n### Using a different netmask\n\nThe IP addresses given to `pipework` are directly passed to the `ip addr`\ntool; so you can append a subnet size using traditional CIDR notation.\n\nFor example:\n\n    pipework br1 $CONTAINERID 192.168.4.25/20\n\nDon't forget that all containers should use the same subnet size;\npipework is not clever enough to use your specified subnet size for\nthe first container, and retain it to use it for the other containers.\n\n\n### Setting a default gateway\n\nIf you want *outbound* traffic (i.e. 
when the container connects\nto the outside world) to go through the interface managed by\nPipework, you need to change the default route of the container.\n\nThis can be useful in some use cases, like traffic shaping, or if\nyou want the container to use a specific outbound IP address.\n\nThis can be automated by Pipework, by adding the gateway address\nafter the IP address and subnet mask:\n\n    pipework br1 $CONTAINERID 192.168.4.25/20@192.168.4.1\n\n\n### Connect a container to a local physical interface\n\nLet's pretend that you want to run two Hipache instances, listening on real\ninterfaces eth2 and eth3, using specific (public) IP addresses. Easy!\n\n    pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24\n    pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5/24\n\nNote that this will use `macvlan` subinterfaces, so you can actually put\nmultiple containers on the same physical interface.  If you don't want to\nvirtualize the interface, you can use the `--direct-phys` option to namespace\nan interface exclusively to a container without using a macvlan bridge.\n\n    pipework --direct-phys eth1 $CONTAINERID 192.168.1.2/24\n\nThis is useful for assigning SR-IOV VFs to containers, but be aware of added\nlatency when using the NIC to switch packets between containers on the same host.\n\n\n### Use MAC address to specify physical interface\n\nIf you connect a local physical interface with a specific name inside\nthe container, the physical interface itself will also be renamed; this behaviour is not\nidempotent:\n\n    pipework --direct-phys eth1 -i container0 $CONTAINERID 0/0\n    # second call would fail because physical interface eth1 has been renamed\n\nWe can use the interface MAC address to identify the interface the same way\nevery time (udev networking rules use a similar method for persistent interface\nnaming):\n\n    pipework --direct-phys mac:00:f3:15:4a:42:c8 -i container0 $CONTAINERID 0/0\n\n\n### Let the 
Docker host communicate over macvlan interfaces\n\nIf you use macvlan interfaces as shown in the previous paragraph, you\nwill notice that the host will not be able to reach the containers over\ntheir macvlan interfaces. This is because traffic going in and out of\nmacvlan interfaces is segregated from the \"root\" interface.\n\nIf you want to enable that kind of communication, no problem: just\ncreate a macvlan interface in your host, and move the IP address from\nthe \"normal\" interface to the macvlan interface.\n\nFor instance, on a machine where `eth0` is the main interface, and has\naddress `10.1.1.123/24`, with gateway `10.1.1.254`, you would do this:\n\n    ip addr del 10.1.1.123/24 dev eth0\n    ip link add link eth0 dev eth0m type macvlan mode bridge\n    ip link set eth0m up\n    ip addr add 10.1.1.123/24 dev eth0m\n    route add default gw 10.1.1.254\n\nThen, you would start a container and assign it a macvlan interface\nthe usual way:\n\n    CID=$(docker run -d ...)\n    pipework eth0 $CID 10.1.1.234/24@10.1.1.254\n\n\n### Wait for the network to be ready\n\nSometimes, you want the extra network interface to be up and running *before*\nstarting your service. A dirty (and unreliable) solution would be to add\na `sleep` command before starting your service; but that could break in\n\"interesting\" ways if the server happens to be a bit slower at one point.\n\nThere is a better option: add the `pipework` script to your Docker image,\nand before starting the service, call `pipework --wait`. It will wait\nuntil the `eth1` interface is present and in `UP` operational state,\nthen exit gracefully.\n\nIf you need to wait on an interface other than eth1, pass the -i flag like\nthis:\n\n    pipework --wait -i ib0\n\n\n### Add the interface without an IP address\n\nIf for some reason you want to set the IP address from within the\ncontainer, you can use `0/0` as the IP address. 
The interface will\nbe created, connected to the network, and assigned to the container,\nbut without configuring an IP address:\n\n    pipework br1 $CONTAINERID 0/0\n\n\n### Add a dummy interface\n\nIf for some reason you want a dummy interface inside the container, you can add it like any other interface. Just set the host interface to the keyword `dummy`. All other options - IP, CIDR, gateway - function as normal.\n\n    pipework dummy $CONTAINERID 192.168.21.101/24@192.168.21.1\n\nOf course, a gateway does not mean much in the context of a dummy interface, but there it is.\n\n### DHCP\n\nYou can use DHCP to obtain the IP address of the new interface. Just\nspecify the name of the DHCP client that you want to use instead\nof an IP address; for instance:\n\n    pipework eth1 $CONTAINERID dhclient\n\nYou can specify the following DHCP clients:\n\n- dhclient\n- udhcpc\n- dhcpcd\n- dhcp\n\nThe first three are \"normal\" DHCP clients. They have to be installed\non your host for this option to work. The last one works\ndifferently: it will run a DHCP client *in a Docker container*\nsharing its network namespace with your container. This allows you\nto use DHCP configuration without worrying about installing the\nright DHCP client on your host. It will use the Docker `busybox`\nimage and its embedded `udhcpc` client.\n\nThe value of $CONTAINERID will be provided to the DHCP client to use\nas the hostname in the DHCP request. 
Depending on the configuration of\nyour network's DHCP server, this may enable other machines on the network\nto access the container using the $CONTAINERID as a hostname; therefore,\nspecifying $CONTAINERID as a container name rather than a container id\nmay be more appropriate in this use case.\n\nYou need three things for this to work correctly:\n\n- obviously, a DHCP server (in the example above, a DHCP server should\n  be listening on the network to which we are connected on `eth1`);\n- a DHCP client (either `udhcpc`, `dhclient` or `dhcpcd`) must be installed\n  on your Docker *host* (you don't have to install it in your containers,\n  but it must be present on the host), unless you specify `dhcp` as\n  the client, in which case the Docker `busybox` image should be\n  available;\n- the underlying network must support bridged frames.\n\nThe last item might be particularly relevant if you are trying to\nbridge your containers with a WPA-protected WiFi network. I'm not 100%\nsure about this, but I think that the WiFi access point will drop frames\noriginating from unknown MAC addresses; meaning that you have to go\nthrough extra hoops if you want it to work properly.\n\nIt works fine on plain old wired Ethernet, though.\n\n#### Lease Renewal\n\nAll of the DHCP options - udhcpc, dhcp, dhclient, dhcpcd - exit or are killed by pipework when they are done assigning a lease. This prevents DHCP client processes from lingering as zombies after a container exits.\n\nHowever, if the container is long-running - longer than the life of the lease - then the lease will expire, no DHCP client renews it, and the container is stuck without a valid IP address.\n\nTo resolve this problem, you can cause the DHCP client to remain alive. 
The method depends on the dhcp client you use.\n\n* dhcp: see the next section [DHCP Options](#dhcp-options)\n* dhclient: use DHCP client `dhclient-f`\n* udhcpc: use DHCP client `udhcpc-f`\n* dhcpcd: not yet supported.\n\n\n**Note:** If you use this option *you* will be responsible for finding and killing those dhcp client processes in the future. pipework is a one-time script; it is not intended to manage long-running processes for you.\n\nIn order to find the processes, you can look for pidfiles in the following locations:\n\n* dhcp: see the next section [DHCP Options](#dhcp-options)\n* dhclient: pidfiles in `/var/run/dhclient.$GUESTNAME.pid`\n* udhcpc: pidfiles in `/var/run/udhcpc.$GUESTNAME.pid`\n* dhcpcd: not yet supported\n\n`$GUESTNAME` is the name or ID of the guest as you passed it to pipework on instantiation.\n\n\n### DHCP Options\n\nYou can specify extra DHCP options to be passed to the DHCP client\nby adding them with a colon. For instance:\n\n    pipework eth1 $CONTAINERID dhcp:-f\n\nThis will tell Pipework to setup the interface using the DHCP client\nof the Docker `busybox` image, and pass `-f` as an extra flag to this\nDHCP client. This flag instructs the client to remain in the foreground\ninstead of going to the background. Let's see what this means.\n\n*Without* this flag, a new container is started, in which the DHCP\nclient is executed. The DHCP client obtains a lease, then goes to\nthe background. When it goes to the background, the PID 1 in this\ncontainer exits, causing the whole container to be terminated.\nAs a result, the \"pipeworked\" container has its IP address, but\nthe DHCP client has gone. 
On the up side, you don't have any\ncleanup to do; on the other, the DHCP lease will not be renewed,\nwhich could be problematic if you have short leases and the\nserver and other clients don't validate their leases before using\nthem.\n\n*With* this flag, a new container is started; it runs the DHCP\nclient just like before, but when it obtains the lease, it\nremains in the foreground. As a result, the lease will be\nproperly renewed. However, when you terminate the \"pipeworked\"\ncontainer, you should also take care of removing the container\nthat runs the DHCP client. This can be seen as an advantage\nif you want to reuse this network stack even if the initial\ncontainer is terminated.\n\n\n### Specify a custom MAC address\n\nIf you need to specify the MAC address to be used (either by the `macvlan`\nsubinterface, or the `veth` interface), no problem. Just add it on the\ncommand line, as the last argument:\n\n    pipework eth0 $(docker run -d haproxy) 192.168.1.2/24 26:2e:71:98:60:8f\n\nThis can be useful if your network environment requires whitelisting\nyour hardware addresses (some hosting providers do that), or if you want\nto obtain a specific address from your DHCP server. Also, some projects like\n[Orchestrator](https://github.com/cvlc/orchestrator) rely on static\nMAC-IPv6 bindings for DHCPv6:\n\n    pipework br0 $(docker run -d zerorpcworker) dhcp fa:de:b0:99:52:1c\n\n**Note:** if you generate your own MAC addresses, try to remember these two\nsimple rules:\n\n- the lowest bit of the first byte should be `0`; otherwise, you are\n  defining a multicast address;\n- the second lowest bit of the first byte should be `1`; otherwise,\n  you are using a globally unique (OUI enforced) address.\n\nIn other words, if your MAC address is `?X:??:??:??:??:??`, `X` should\nbe `2`, `6`, `a`, or `e`. 
You can check [Wikipedia](\nhttp://en.wikipedia.org/wiki/MAC_address) if you want even more details.\n\nIf you want a consistent MAC address across container restarts, but don't want to have to keep track of the messy MAC addresses, ask pipework to generate an address for you based on a specified string, e.g. the hostname. This guarantees a consistent MAC address:\n\n    pipework eth0 \u003ccontainer\u003e dhcp U:\u003csome_string\u003e\n\npipework will take *some_string* and hash it using MD5. It will then take the first 40 bits of the MD5 hash, add those to the locally administered prefix of 0x02, and create a unique MAC address.\n\nFor example, if your unique string is \"myhost.foo.com\", then the MAC address will **always** be `02:72:6c:cd:9b:8d`.\n\nThis is particularly useful in the case of DHCP, where you might want the container to stop and start, but always get the same address. Most DHCP servers will keep giving you a consistent IP address if the MAC address is consistent.\n\n**Note:**  Setting the MAC address of an IPoIB interface is not supported.\n\n### Virtual LAN (VLAN)\n\nIf you want to attach the container to a specific VLAN, the VLAN ID can be\nspecified using the `[MAC]@VID` notation in the MAC address parameter.\n\n**Note:** VLAN attachment is currently only supported for containers to be\nattached to either an Open vSwitch bridge or a physical interface. 
Linux\nbridges are currently not supported.\n\nThe following will attach the container zerorpcworker to the Open vSwitch bridge\novsbr0 and place it on VLAN ID 10:\n\n    pipework ovsbr0 $(docker run -d zerorpcworker) dhcp @10\n\n### Control Routes\n\nIf you want to add/delete/replace routes in the container, you can run any iproute2 route command via pipework.\n\nAll you have to do is set the interface to be `route`, followed by the container ID or name, followed by the route command.\n\nHere are some examples:\n\n    pipework route $CONTAINERID add 10.0.5.6/24 via 192.168.2.1\n    pipework route $CONTAINERID replace default via 10.2.3.78\n\nEverything after the container ID (or name) will be run as an argument to `ip route` inside the container's namespace. See the iproute2 man page for details.\n\n### Control Rules\n\nIf you want to add/delete/replace IP rules in the container, you can do the same thing with `ip rule` that you can with\n`ip route`.\n\nSpecify the interface to be `rule`, followed by the container ID or name, followed by the rule command.\n\nHere are some examples, to specify a route table:\n\n    pipework rule $CONTAINERID add from 172.19.0.2/32 table 1\n    pipework rule $CONTAINERID add to 172.19.0.2/32 table 1\n\nNote that for these rules to work, you first need to execute the following in your container:\n\n    echo \"1 admin\" \u003e\u003e /etc/iproute2/rt_tables\n\nYou can read more on using route tables, specifically to set up multiple NICs with different default gateways,\nhere: https://kindlund.wordpress.com/2007/11/19/configuring-multiple-default-routes-in-linux/\n\n### Control `tc`\n\nIf you want to use `tc` from within the container namespace, you can do so with the command\n`pipework tc $CONTAINERID \u003ctc_args\u003e`.\n\nFor example, to simulate 30% packet loss on `eth0` within the container:\n\n    pipework tc $CONTAINERID qdisc add dev eth0 root netem loss 30%\n\n\n### Support Open vSwitch\n\nIf you want to attach a container to the Open 
vSwitch bridge, no problem.\n\n    ovs-vsctl list-br\n    ovsbr0\n    pipework ovsbr0 $(docker run -d mysql /usr/sbin/mysqld_safe) 192.168.1.2/24\n\nIf the OVS bridge doesn't exist, it will be automatically created.\n\n\n### Support InfiniBand IPoIB\n\nPassing an IPoIB interface to a container is supported.  The IPoIB device is\ncreated as a virtual device, similarly to how macvlan devices work.  The\ninterface also supports setting a partition key for the created virtual device.\n\nThe following will attach a container to ib0:\n\n    pipework ib0 $CONTAINERID 10.10.10.10/24\n\nThe following will do the same, but connect it to ib0 with pkey 0x8001:\n\n    pipework ib0 $CONTAINERID 10.10.10.10/24 @8001\n\n### Gratuitous ARP\n\nIf `arping` is installed, it will be used to send a gratuitous ARP reply\nto the container's neighbors. This can be useful if the container doesn't\nemit any network traffic at all, and seems unreachable (but suddenly becomes\nreachable after it generates some traffic).\n\nNote, however, that Ubuntu/Debian distributions contain two different `arping`\npackages. The one you want is `iputils-arping`.\n\n\n### Cleanup\n\nWhen a container is terminated (the last process of the net namespace exits),\nthe network interfaces are garbage collected. 
The interface in the container\nis automatically destroyed, and the interface on the Docker host (part of the\nbridge) is then destroyed as well.\n\n\n### Integrating pipework with other tools\n\n@dreamcat4 has built an amazing fork of pipework that can be integrated\nwith other tools in the Docker ecosystem, like Compose or Crane.\nIt can be used in \"one shot\" to create a bunch of network connections\nbetween containers; it can run in the background as a daemon, watching\nthe Docker events API, and automatically invoke pipework when containers\nare started; and it can also expose pipework itself through an API.\n\nFor more info, check the [dreamcat4/pipework](https://hub.docker.com/r/dreamcat4/pipework/)\nimage on the Docker Hub.\n\n\n### About this file\n\nThis README file is currently the only documentation for pipework. When\nupdating it (specifically, when adding/removing/moving sections), please\nupdate the table of contents. This can be done very easily by just running:\n\n    docker-compose up\n\nThis will build a container with `doctoc` and run it to regenerate the\ntable of contents. That's it!\n","funding_links":[],"categories":["Container Operations","Shell","others","\u003ca name=\"Shell\"\u003e\u003c/a\u003eShell"],"sub_categories":["Networking"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjpetazzo%2Fpipework","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjpetazzo%2Fpipework","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjpetazzo%2Fpipework/lists"}