https://github.com/sdleffler/rad-rs
Safe, high-level RADOS bindings using the ceph-rust bindings.
- Host: GitHub
- URL: https://github.com/sdleffler/rad-rs
- Owner: sdleffler
- License: mpl-2.0
- Created: 2017-06-09T23:18:13.000Z (almost 8 years ago)
- Default Branch: master
- Last Pushed: 2021-06-17T04:20:29.000Z (almost 4 years ago)
- Last Synced: 2024-10-28T16:56:57.782Z (7 months ago)
- Topics: ceph, concurrency, rados, rust, storage
- Language: Rust
- Size: 10.6 MB
- Stars: 6
- Watchers: 3
- Forks: 2
- Open Issues: 4
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
README
[Build Status](https://travis-ci.org/sdleffler/rad-rs)
[Documentation](https://docs.rs/rad)
[crates.io](https://crates.io/crates/rad)

# rad: High-level Rust library for interfacing with RADOS
This library provides a typesafe and extremely high-level Rust interface to
RADOS, the Reliable Autonomic Distributed Object Store. It uses the raw C
bindings from `ceph-rust`.

# Installation
To build and use this library, a working installation of the Ceph librados
development files is required. On systems with apt-get, this can be acquired
like so:

```bash
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
sudo apt-add-repository "deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main"
sudo apt-get update
sudo apt-get install librados-dev
```

*N.B. `luminous` is the current Ceph release. This library will not work
correctly or as expected with earlier releases of Ceph/librados (Jewel or
earlier; Kraken is fine).*

For more information on installing Ceph packages, see [the Ceph documentation](http://docs.ceph.com/docs/master/install/get-packages/).
# Examples
## Connecting to a cluster
The following shows how to connect to a RADOS cluster by providing a path to a
`ceph.conf` file, a path to the `client.admin` keyring, and requesting to
connect as the `admin` user. This API bears little resemblance to the
bare-metal librados API, but it *is* easy to trace what's happening under the
hood: `ConnectionBuilder::with_user` or `ConnectionBuilder::new` allocates a
new `rados_t`, `read_conf_file` calls `rados_conf_read_file`, `conf_set` calls
`rados_conf_set`, and `connect` calls `rados_connect`.

```rust
use rad::ConnectionBuilder;

let cluster = ConnectionBuilder::with_user("admin")?
    .read_conf_file("/etc/ceph.conf")?
    .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
    .connect()?;
```

The type returned from `.connect()` is a `Cluster` handle: a wrapper around a
`rados_t` that guarantees `rados_shutdown` is called on the connection when the
handle is dropped.
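For illustration, here is a minimal sketch of the RAII pattern this describes.
It is not the actual `rad` source; the `extern` declaration stands in for the
raw `ceph-rust` binding to librados.

```rust
use std::os::raw::c_void;

// Stand-in for the raw binding: librados exposes `rados_t` as an opaque
// pointer, and `rados_shutdown` tears down the cluster connection.
#[allow(non_camel_case_types)]
type rados_t = *mut c_void;

extern "C" {
    fn rados_shutdown(cluster: rados_t);
}

/// Illustrative wrapper: owning the raw handle ties the connection's
/// lifetime to the value's scope.
pub struct Cluster {
    handle: rados_t,
}

impl Drop for Cluster {
    fn drop(&mut self) {
        // Runs exactly once when the handle goes out of scope, including on
        // early returns and unwinding panics, so the connection cannot leak.
        unsafe { rados_shutdown(self.handle) };
    }
}
```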
## Writing a file to a cluster with synchronous I/O
```rust
use std::fs::File;
use std::io::Read;

use rad::ConnectionBuilder;

let cluster = ConnectionBuilder::with_user("admin")?
    .read_conf_file("/etc/ceph.conf")?
    .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
    .connect()?;

// Read in bytes from some file to send to the cluster.
let mut file = File::open("/path/to/file")?;
let mut bytes = Vec::new();
file.read_to_end(&mut bytes)?;

let pool = cluster.get_pool_context("rbd")?;
pool.write_full("object-name", &bytes)?;
// Our file is now in the cluster! We can check for its existence:
assert!(pool.exists("object-name")?);

// And we can also check that it contains the bytes we wrote to it.
let mut bytes_from_cluster = vec![0u8; bytes.len()];
let bytes_read = pool.read("object-name", &mut bytes_from_cluster, 0)?;
assert_eq!(bytes_read, bytes_from_cluster.len());
assert!(bytes_from_cluster == bytes);
```

## Writing multiple objects to a cluster with asynchronous I/O and `futures-rs`
`rad-rs` also supports the librados AIO interface, using the `futures` crate.
This example will start `NUM_OBJECTS` writes concurrently and then wait for
them all to finish.

```rust
use futures::{stream, Future, Stream};
use rand::{Rng, SeedableRng, XorShiftRng};

use rad::ConnectionBuilder;

const NUM_OBJECTS: usize = 8;

let cluster = ConnectionBuilder::with_user("admin")?
    .read_conf_file("/etc/ceph.conf")?
    .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
    .connect()?;

let pool = cluster.get_pool_context("rbd")?;

stream::iter_ok((0..NUM_OBJECTS)
    .map(|i| {
        let bytes: Vec<u8> = XorShiftRng::from_seed([i as u32 + 1, 2, 3, 4])
            .gen_iter::<u8>()
            .take(1 << 16)
            .collect();
        let name = format!("object-{}", i);

        pool.write_full_async(name, &bytes)
    }))
    .buffer_unordered(NUM_OBJECTS)
    .collect()
    .wait()?;
```

# Running tests
Integration tests against a demo cluster are provided, and the test suite
(which is admittedly a little bare at the moment) uses Docker and a container
derived from the Ceph `ceph/demo` container to bring a small Ceph cluster
online, locally. A script is provided for launching the test suite:

```sh
./tests/run-all-tests.sh
```

Launching the test suite requires Docker to be installed.
# License
This project is licensed under the [Mozilla Public License, version 2.0](https://www.mozilla.org/en-US/MPL/2.0/).