https://github.com/linux-nvme/nvme-dem

NVMe over Fabrics Distributed Endpoint Management - an implementation of a centralized Discovery controller providing remote configuration and provisioning of remote NVMe resources.

NVMe Distributed Endpoint Management (DEM) Project Table of Contents:
Project Components
Endpoint Management Modes
System Configuration
Build and Installation
Service Configuration
Starting NVMe-oF Drivers
Manually Starting - Endpoint Manager
Manually Starting - DEM
Manually Starting - Auto Connect
Manually Starting - Discovery Monitor
Configuration Methods

1. Project Components

Distributed Endpoint Manager (DEM)

Primary component of the project, providing both a centralized Discovery
controller and management capabilities to configure Targets. May be used
standalone or in concert with other DEM components.
Purposes:
1. Collect and distribute discovery log pages
2. Configure remote NVMe resources via in-band or out-of-band
interfaces

Endpoint Manager (DEM-EM)
Optional mechanism for in-band or out-of-band configuration of each
Target, used in concert with the DEM

Auto Connect (DEM-AC)
Optional component to enable Hosts to connect to DEM and automatically
connect to provisioned NVMe resources as defined by Discovery Log Pages

Discovery Monitor (DEM-DM)
Optional component to enable Hosts to monitor provisioned NVMe
resources via DEM

Command Line Interface (DEM-CLI)
Mechanism to interface with the DEM via the console command line

Web interface
Mechanism to interface with the DEM via an HTML web interface

2. Endpoint Management Modes

Local:
The DEM does not configure individual Targets/Subsystems. Individual
Targets must be locally configured. The DEM must be informed about
Target Interfaces to connect to in order to query log-pages.
Subsystems must be defined to associate log-pages for distribution.

In-band:
The DEM uses a designated NVMe-oF fabric interface to talk with Targets.
The DEM is configured with all individual Target Endpoint Manager (EM)
interfaces to connect to. The DEM will then query each individual EM for
its physical fabric interfaces and NVMe device resources. The DEM
Administrator has the ability to assign individual Target resources
to NVMe-oF Port IDs and Subsystems/Namespaces.

Out-of-Band:
The Out-of-Band management mode operates the same as the In-band
management mode but uses a RESTful interface to communicate with the EM.

3. System Configuration

Kernel requirement:
Minimum of 4.15, or a Linux distribution supporting NVMe-oF
(e.g., RHEL 7.5, SLES 12 SP3)

Download the kernel:
$ git clone https://github.com/torvalds/linux.git

Copy the existing /lib/modules/`uname -r`/source/.config to the downloaded
kernel directory.
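For example, assuming the kernel tree was cloned into ./linux (a sketch;
on some distributions the running config lives at /boot/config-`uname -r`
instead):

$ cp /lib/modules/`uname -r`/source/.config linux/.config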

Install the kernel:

Verify the openssl-devel package is present and installed.
If not present, download and install it.

Edit the config file to ensure CONFIG_NVME_RDMA, CONFIG_NVME_FC, and
CONFIG_NVME_TARGET are enabled.

Run make and answer the module questions for the NVMe Target sub-modules:
$ make
$ make modules_install
$ sudo make install
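Before rebooting, a quick sanity check that the required options made it
into the built configuration (values of 'y' or 'm' are both fine):

$ grep -E "CONFIG_NVME_(RDMA|FC|TARGET)=" .config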

Ensure the system will boot to the new kernel

# grep "^menuentry" /boot/grub2/grub.cfg | cut -d "'" -f2
# grub2-set-default '<new kernel menuentry>'

Boot the new kernel
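After booting, verify the running kernel version:

$ uname -r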

4. Build and Installation

Repository
$ git clone https://gitlab.com/nvme-over-fabrics/nvme-dem.git

Pulled as part of make (if not already cloned into mongoose/ and jansson/):
https://github.com/cesanta/mongoose.git
https://github.com/akheron/jansson.git

If the target system is behind a head node and does not have access to
the internet, clone on a system with access, build without installing,
then tar the tree and copy the tar file to the remote system.

Libraries Required
libcurl-devel librdmacm-devel libpthread-devel
libuuid-devel libibverbs-devel
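On an RPM-based distribution these can typically be installed as shown
below (a sketch; package names vary by distro, and pthreads is normally
provided by glibc rather than a separate libpthread-devel package):

# dnf install libcurl-devel librdmacm-devel libuuid-devel libibverbs-devel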

Build and install
$ ./autoconf.sh
$ ./configure (optional flag: --enable-debug)
$ make

As root, run the following
# make install

Man pages for dem, dem-em, dem-ac, and dem-cli are available after install

5. Service Configuration

Edit /etc/nvme/nvmeof.conf to enable the appropriate components.
The following is an example for a system acting as both RDMA Host and
Target, using nullb0 as the device, and manually starting the DEM:

PCI_HOST_LOAD=no
NVMEOF_HOST_LOAD=yes
RDMA_HOST_LOAD=yes
FC_HOST_LOAD=no
TCP_HOST_LOAD=no
NVMEOF_TARGET_LOAD=yes
RDMA_TARGET_LOAD=yes
FC_TARGET_LOAD=no
TCP_TARGET_LOAD=no
NULL_BLK_TARGET_LOAD=yes
START_DEM=no
START_DEM_AC=no
START_DEM_EM=no
# DEM_AC_OPTS="-t rdma -f ipv4 -a 192.168.1.1 -s 4422"
# DEM_EM_OPTS="-t rdma -f ipv4 -a 192.168.1.1 -s 22334"
# DEM_EM_OPTS="-p 22345"

6. Starting NVMe-oF Drivers

1. Manually start using the nvmeof service

# systemctl start nvmeof

2. Configure the system to start the nvmeof service on boot

# chkconfig nvmeof on
# reboot

or

# systemctl enable nvmeof

3. If not running the nvmeof service, load the individual drivers

# modprobe configfs
# modprobe nvme-rdma
# modprobe nvmet-rdma
# modprobe null-blk
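Either way, verify the drivers loaded:

# lsmod | grep -E "nvme|null_blk"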

7. Manually Starting - Endpoint Manager (dem-em)

Optional mechanism for in-band or out-of-band configuration of each Target.
Only needed if a systems administrator will be remotely managing
NVMe resources.

Configure the fabric interface(s) the Endpoint Manager will report to the
DEM (Discovery Controller). This is the fabric interface the DEM will use
for NVMe-oF. This is done via ifcfg-like files under
/etc/nvme/nvmeof-dem (the files 'config' and 'signature' are reserved).

Example: an RDMA interface on IP address "192.168.22.1". Note: the
presence of a TRSVCID indicates that the DEM should use this transport
for Hosts to connect for retrieval of Discovery Log pages. The TRSVCID in
the example below is not used by the Endpoint Manager; TRSVCIDs used by
Target drivers are assigned by the administrator through the DEM
configuration. This TRSVCID is only used by the DEM daemon.

# Configuration file for RDMA interface
TRTYPE=rdma
ADRFAM=ipv4
TRADDR=192.168.22.1
TRSVCID=4422 # Service ID only used by DEM Discovery Controller
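For example, the file above could be created as follows (the filename
ifcfg-rdma0 is hypothetical; any name other than the reserved 'config'
and 'signature' works):

# cat > /etc/nvme/nvmeof-dem/ifcfg-rdma0 << EOF
TRTYPE=rdma
ADRFAM=ipv4
TRADDR=192.168.22.1
TRSVCID=4422
EOF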

Start the Target Endpoint Manager specifying the 'management mode'. This
is the mechanism the DEM will use to communicate with the DEM-EM. It is
only used for configuration purposes.

A. Out-of-Band management mode specifying RESTful interfaces on
port 22334.

# dem-em -p 22334

B. In-Band management mode on rdma ipv4 address “192.168.22.1”.
The Service ID is definable, but cannot be “4420” as this would
conflict with the Target driver.

# dem-em -t rdma -f ipv4 -a 192.168.22.1 -s 4433

When started in either in-band or out-of-band mode, the DEM-EM reports
the enumerated devices and interfaces.

If run in debug-mode the DEM-EM will not start as a daemon, and all output
will be sent to stdout rather than to /var/log/…

8. Manually Starting - DEM Discovery Controller (dem)

Configure the fabric interface(s) for Host communication with the DEM
Discovery Controller. This is the fabric interface the DEM listens on for
Hosts to connect and request Discovery Log Pages. This is done via
ifcfg-like files under /etc/nvme/nvmeof-dem (the files 'config' and
'signature' are reserved).

Example: an RDMA interface on IP address 192.168.22.2. If the TRSVCID is
not present in a given configuration file, the default TRSVCID for that
interface is the pre-assigned NVMe-oF ID "4420".

Note 1: TRSVCID “4420” is the pre-assigned NVMe-oF ID and MAY ONLY be used
if the DEM Discovery Controller is NOT co-located with an NVMe-oF Target
driver.

Note 2: If the DEM Discovery Controller and DEM Endpoint Manager are
co-located, the configuration files will be used by both services to
define the interfaces used for communication with Hosts.

# Configuration file for RDMA interface
TRTYPE=rdma
ADRFAM=ipv4
TRADDR=192.168.22.2
TRSVCID=4422 # Service ID only used by DEM Discovery Controller

Start the DEM Discovery Controller, specifying the port for its RESTful
interface. The default HTTP port is 22345.

# dem -p 22345

If run in debug-mode the DEM will not start as a daemon, and all output will
be sent to stdout rather than to /var/log/…
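Once the DEM is listening on the interface configured above, a Host with
the standard nvme-cli tool installed can verify it is serving Discovery
Log Pages, e.g.:

# nvme discover -t rdma -a 192.168.22.2 -s 4422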

9. Manually Starting - Auto Connect (dem-ac)

Optional component to enable Hosts to connect to a Discovery controller
(DEM or NVMe-oF Target) and automatically connect to provisioned NVMe
resources as defined by Discovery Log Pages.

Starting Auto Connect, pointing it at the address of the DEM Discovery Controller.

# dem-ac -t rdma -f ipv4 -a 192.168.22.2 -s 4422

Starting Auto Connect, pointing it at the address of an NVMe-oF Target Discovery Controller.

# dem-ac -t rdma -f ipv4 -a 192.168.22.200 -s 4420

If run in debug-mode the DEM-AC will not start as a daemon, and all output
will be sent to stdout rather than to /var/log/…
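After Auto Connect has run, connections made on the Host can be checked
with the standard nvme-cli tool, e.g.:

# nvme list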

10. Manually Starting - Discovery Monitor (dem-dm)

Optional component to enable Hosts to monitor a Discovery Controller
(DEM or NVMe-oF Target) and report provisioned NVMe resources as defined
by Discovery Log Pages.

Starting the Monitor, pointing it to the address of the DEM Discovery Controller.

# dem-dm -t rdma -f ipv4 -a 192.168.22.2 -s 4422

Starting the Monitor, pointing it to the address of an NVMe-oF Target
Discovery Controller.

# dem-dm -t rdma -f ipv4 -a 192.168.22.200 -s 4420

11. Configuration Methods

Web interface: Mechanism to interface with the DEM via an HTTP web
interface. The default userid / password is 'dem' / 'nvmeof'. The web
site may be hosted remotely from the dem to allow bridging to lab networks.

Create a subdirectory for the DEM on the web hosting site and copy the web files:

$ mkdir /var/www/html/dem
$ cp html/* /var/www/html/dem
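If Apache httpd hosts the files (an assumption; any web server works),
restart it so the new directory is served:

# systemctl restart httpd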

Command Line Interface (DEM-CLI): Mechanism to interface with the DEM via
the console command line. The dem-cli command on the system running dem
allows full configuration of Targets and Hosts. The installed man page
and --help explain the options.
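For example:

$ man dem-cli
$ dem-cli --help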

cURL: Lowest-level mechanism to interface with the DEM. cURL may be used
if you are willing to construct JSON objects. Examples of cURL commands
are in the Makefile.
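A minimal sketch against the default HTTP port (the URI path '/target'
below is hypothetical; see the Makefile for working examples):

$ curl http://127.0.0.1:22345/target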