Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
- Host: GitHub
- URL: https://github.com/erpc-io/eRPC
- Owner: erpc-io
- License: other
- Created: 2016-12-20T21:02:20.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2024-05-09T22:45:46.000Z (9 months ago)
- Last Synced: 2024-08-02T20:43:40.553Z (6 months ago)
- Language: C++
- Homepage: https://erpc.io/
- Size: 6.62 MB
- Stars: 840
- Watchers: 34
- Forks: 138
- Open Issues: 21
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- my-awesome - erpc-io/eRPC - 05 star:0.9k fork:0.1k Efficient RPCs for datacenter networks (C++)
- awesome-list - erpc-io/eRPC | 546 | (C)
README
**News [Dec 2023]: Check out [Machnet](https://github.com/microsoft/machnet) for fast and easy DPDK-based messaging. It supports many types of cloud VMs and bare-metal NICs, multiple application processes, language bindings, etc.**
eRPC is a fast and general-purpose RPC library for datacenter networks.
Our NSDI 2019 [paper](https://www.usenix.org/system/files/nsdi19-kalia.pdf)
describes the system in detail. Documentation can be generated by running `doxygen`.

Some highlights:
* Multiple supported networks: Ethernet, InfiniBand, and RoCE
* Low latency: 2.3 microseconds round-trip RPC latency with UDP over Ethernet
* Performance for small 32-byte RPCs: ~10M RPCs/sec with one CPU core,
60--80M RPCs/sec with one NIC.
* Bandwidth for large RPC: 75 Gbps on one connection (one CPU core at server
and client) for 8 MB RPCs
* Scalability: 20000 RPC sessions per server
* End-to-end congestion control that tolerates 100-way incasts
* Nested RPCs, and long-running background RPCs
* A port of [Raft](https://github.com/willemt/raft) as an example. Our 3-way
replication latency is 5.3 microseconds with traditional UDP over Ethernet.

## System requirements
* NICs: Fast (10 GbE+) NICs are needed for good performance. eRPC works best
with Mellanox Ethernet and InfiniBand NICs. Any DPDK-capable NIC should
also work well.
* System configuration:
* At least 1024 huge pages on every NUMA node, and unlimited SHM limits
  (a sketch follows this list).
* On a machine with `n` eRPC processes, eRPC uses kernel UDP ports `{31850,
  ..., 31850 + n - 1}`. These ports should be open on the management
  network. See `scripts/firewalld/erpc_firewall.sh` for systems running
  `firewalld`.
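A minimal sketch of one way to satisfy these two requirements on Linux follows; the values below are illustrative (eRPC only requires at least 1024 huge pages per NUMA node), not eRPC defaults:

```bash
# Reserve 1024 2 MB huge pages on NUMA node 0; repeat for every NUMA node.
echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# Effectively unlimited System V shared memory (these are the kernel maxima).
sudo sysctl -w kernel.shmmax=18446744073692774399
sudo sysctl -w kernel.shmall=18446744073692774399
```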
## eRPC quickstart
* Build and run the test suite:
  `cmake . -DPERF=OFF -DTRANSPORT=dpdk; make -j; sudo ctest`.
* `-DPERF=OFF` enables debugging, which greatly reduces performance. Set
  `-DPERF=ON` for good performance.
* Here, `dpdk` should be replaced with `infiniband` for InfiniBand NICs.
* If DPDK is chosen, a machine with two ports is needed to run the unit
  tests. Run `scripts/run-tests-dpdk.sh` instead of `ctest`.
* Run the `hello_world` application (a condensed sketch of it appears after
  this list):
* `cd hello_world`
* Edit the server and client hostnames in `common.h`
* Compile `hello_world` for the transport that eRPC was built with, using
  `make dpdk` or `make infiniband`
* Run `./server` at the server, and `./client` at the client
* Generate the documentation: `doxygen`
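For orientation, here is a condensed, illustrative sketch of `hello_world` against eRPC's public API (`erpc::Nexus`, `erpc::Rpc`, `create_session`, `enqueue_request`). The hostnames, request type, and message size are placeholder values; the code in `hello_world/` remains the authoritative version:

```cpp
#include <cstdio>
#include "rpc.h"  // eRPC's public API header

// Placeholder values; hello_world reads these from common.h.
static const std::string kServerUri = "server-hostname:31850";
static constexpr uint8_t kReqType = 2;
static constexpr size_t kMsgSize = 16;

erpc::Rpc<erpc::CTransport> *rpc;
erpc::MsgBuffer req, resp;

// Server-side handler: the server registers this with
// nexus.register_req_func(kReqType, req_handler) before creating its Rpc.
void req_handler(erpc::ReqHandle *req_handle, void * /*context*/) {
  auto &resp_buf = req_handle->pre_resp_msgbuf_;
  rpc->resize_msg_buffer(&resp_buf, kMsgSize);
  snprintf(reinterpret_cast<char *>(resp_buf.buf_), kMsgSize, "hello");
  rpc->enqueue_response(req_handle, &resp_buf);
}

// Client-side continuation, invoked when the response arrives.
void cont_func(void * /*context*/, void * /*tag*/) {
  printf("%s\n", reinterpret_cast<char *>(resp.buf_));
}

void sm_handler(int, erpc::SmEventType, erpc::SmErrType, void *) {}

int main() {
  // Client flow: one Nexus per process, one Rpc object per thread.
  erpc::Nexus nexus("client-hostname:31850");
  rpc = new erpc::Rpc<erpc::CTransport>(&nexus, nullptr, 0, sm_handler);

  int session = rpc->create_session(kServerUri, 0);
  while (!rpc->is_connected(session)) rpc->run_event_loop_once();

  req = rpc->alloc_msg_buffer_or_die(kMsgSize);
  resp = rpc->alloc_msg_buffer_or_die(kMsgSize);
  rpc->enqueue_request(session, kReqType, &req, &resp, cont_func, nullptr);
  rpc->run_event_loop(100);  // Spin the event loop for 100 ms
}
```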
## Supported bare-metal NICs
* Ethernet/UDP mode:
* DPDK-enabled bare-metal NICs: Use `-DTRANSPORT=dpdk`. We have primarily
  tested Mellanox CX3--CX5 NICs.
* DPDK-enabled NICs on Microsoft Azure: Use `-DTRANSPORT=dpdk -DAZURE=on`
* RDMA (InfiniBand/RoCE) NICs: Use `-DTRANSPORT=infiniband`. Add `-DROCE=on`
  if using RoCE.
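Putting the flags above together (with `-DPERF=ON` for release-grade performance, per the quickstart), one plausible configure command for each case is:

```bash
# Bare-metal DPDK NIC
cmake . -DPERF=ON -DTRANSPORT=dpdk

# DPDK NIC on a Microsoft Azure VM
cmake . -DPERF=ON -DTRANSPORT=dpdk -DAZURE=on

# InfiniBand NIC
cmake . -DPERF=ON -DTRANSPORT=infiniband

# RoCE NIC
cmake . -DPERF=ON -DTRANSPORT=infiniband -DROCE=on
```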
## Running eRPC over DPDK on Microsoft Azure VMs
* eRPC works well on Azure VMs with accelerated networking.
* Configure two Ubuntu 18.04 VMs as below. Use the same resource group and
  availability zone for both VMs.
* Uncheck "Accelerated Networking" when launching each VM from the Azure
  portal (e.g., F32s-v2). For now, this VM should have just the control
  network (i.e., `eth0`) and `lo` interfaces.
* Add a NIC to Azure via the Azure CLI: `az network nic create
  --resource-group <resource group> --name <NIC name>
  --vnet-name <VNet name> --subnet default
  --accelerated-networking true --subscription <subscription> --location <location>`
* Stop the VM launched earlier, and attach the NIC created in the previous
  step to the VM (i.e., in "Networking" -> "Attach network interface").
* Re-start the VM. It should have a new interface called `eth1`, which eRPC
  will use for DPDK traffic.
* Prepare DPDK 21.11:
* [rdma-core](https://github.com/linux-rdma/rdma-core) must be installed
  from source. We recommend the tag `stable-v40`. First, install the
  dependencies listed in rdma-core's README.
  Then, in the `rdma-core` directory:
* `cmake .`
* `sudo make install`
* Install the prerequisite libraries and kernel modules:
* `sudo apt install make cmake g++ gcc libnuma-dev libgflags-dev numactl`
* `sudo modprobe ib_uverbs`
* `sudo modprobe mlx4_ib`
* Use DPDK 21.11 exactly; other DPDK versions are not supported. With
  21.11's meson-based build, the Mellanox mlx4/mlx5 poll-mode drivers are
  enabled automatically when rdma-core is installed, so no
  build-configuration edits are needed.
* Build and locally install DPDK:
```bash
export RTE_SDK=<directory to clone DPDK into>
git clone --depth 1 --branch 'v21.11' https://github.com/DPDK/dpdk.git "${RTE_SDK}"
cd "${RTE_SDK}"
meson build -Dexamples='' -Denable_kmods=false -Dtests=false -Ddisable_drivers='raw/*,crypto/*,baseband/*,dma/*'
cd build/
DESTDIR="${RTE_SDK}/build/install" ninja install
```
* Create hugepages:
```bash
sudo bash -c "echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages"
sudo mkdir /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge
```
* Build eRPC's library and latency benchmark:
```bash
cmake . -DTRANSPORT=dpdk -DAZURE=on
make
make latency
```
* Create the file `scripts/autorun_process_file` like below, where each line
  is `<management IP address> <eRPC UDP port> <NUMA node>`. Do not use the
  IP addresses of the accelerated NIC here (i.e., not of `eth1`).
```
<VM1 management IP> 31850 0
<VM2 management IP> 31850 0
```
* Run the eRPC application (the latency benchmark by default):
* At VM #1: `./scripts/do.sh 0 0`
* At VM #2: `./scripts/do.sh 1 0`

## Configuring and running the provided benchmarks
* The `apps` directory contains a suite of benchmarks and examples. The
instructions below are for this suite of applications. eRPC can also be
simply linked as a library instead (see `hello_world/` for an example).
* To build an application, create `scripts/autorun_app_file` and change its
  contents to one of the available directory names in `apps/`. See
  `scripts/example_autorun_app_file` for an example. Then generate a
  Makefile using `cmake . -DTRANSPORT=dpdk` or `cmake . -DTRANSPORT=infiniband`.
* Each application directory in `apps/` contains a config file
that must specify all flags defined in `apps/apps_common.h`. For example,
`num_processes` specifies the total number of eRPC processes in the cluster.
* The URIs of eRPC processes in the cluster are specified in
  `scripts/autorun_process_file`. Each line in this file must be
  `<hostname or management IP address> <eRPC UDP port> <NUMA node>`
  (see the end-to-end example at the end of this section).
* Run `scripts/do.sh <process index> <NUMA node>` for each process:
* With single-CPU machines: `num_processes` machines are needed.
  Run `scripts/do.sh <i> 0` on machine `i` in `{0, ..., num_processes - 1}`.
* With dual-CPU machines: `num_machines = ceil(num_processes / 2)` machines
  are needed. Run two processes per machine, one per NUMA node (i.e.,
  `scripts/do.sh <2i> 0` and `scripts/do.sh <2i + 1> 1` on machine `i` in
  `{0, ..., num_machines - 1}`).
* To automatically run an application at all processes in
  `scripts/autorun_process_file`, run `scripts/run-all.sh`. For some
  applications, statistics generated in a run can be collected and processed
  using `scripts/proc-out.sh`.
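As an end-to-end illustration, the sequence below builds and launches the bundled `latency` app on two hypothetical single-CPU machines; the IP addresses are placeholders for the machines' management addresses:

```bash
# On both machines: choose the app, then generate a Makefile and build.
echo latency > scripts/autorun_app_file
cmake . -DPERF=ON -DTRANSPORT=dpdk
make

# On both machines: one "<IP> <UDP port> <NUMA node>" line per process.
cat > scripts/autorun_process_file <<EOF
10.0.0.1 31850 0
10.0.0.2 31850 0
EOF

./scripts/do.sh 0 0   # on machine 0
./scripts/do.sh 1 0   # on machine 1
```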
## Getting help
* GitHub issues are preferred over email. Please include the following
  information in the issue:
* NIC model
* `rdma-core` version and DPDK version
* Operating system

## Contact
Anuj Kalia

## License
Copyright 2018, Carnegie Mellon University

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.