# Ceph step-by-step guide

https://github.com/sajjadhz/ceph_guide
Below is a step-by-step guide to installing Ceph on three Rocky Linux nodes using Cephadm.
### Prerequisites
1. Three Rocky Linux nodes.
2. Root or sudo access to all nodes.
3. Networking setup to allow all nodes to communicate with each other.
4. Time synchronization (e.g., via NTP).
5. SSH access configured between the nodes.

### Step 1: Prepare Nodes
#### On all nodes:
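Make sure the three nodes can resolve one another by name (prerequisite 3). A minimal sketch, assuming the hostnames `node1`–`node3` used in the placement specs later in this guide and example addresses on a 192.168.56.0/24 network (adjust both to your environment):
```bash
# Append name resolution entries for the cluster nodes (addresses are examples)
cat >> /etc/hosts <<EOF
192.168.56.11 node1
192.168.56.12 node2
192.168.56.13 node3
EOF
```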
1. Update all packages and reboot if necessary:
```bash
sudo dnf update -y
```
2. Install necessary packages:
```bash
sudo dnf install -y chrony lvm2 podman python3.9
sudo systemctl enable --now chronyd
```
3. Set up passwordless SSH access:
```bash
ssh-keygen -t rsa
ssh-copy-id root@node1   # replace node1-3 with your actual hostnames or IPs
ssh-copy-id root@node2
ssh-copy-id root@node3
```
4. Stop and disable the firewall:
```bash
systemctl stop firewalld.service
systemctl disable firewalld.service
```
5. Configure time synchronization (the NTP server below is an example; point chrony at your preferred servers):
```bash
cat >> /etc/chrony.conf <<EOF
server pool.ntp.org iburst
EOF
```
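### Step 2: Bootstrap the Ceph Cluster
#### On node1:
1. Install `cephadm`. A minimal sketch, assuming the package is available from your configured repositories (on Rocky Linux it may come from a Ceph/Storage SIG repository; the standalone `cephadm` binary from download.ceph.com works as well):
```bash
# Install the cephadm orchestrator CLI (package source is an assumption)
sudo dnf install -y cephadm
```
2. Bootstrap a new cluster with node1 as the first monitor/manager. The IP below is a placeholder; use node1's actual address:
```bash
sudo cephadm bootstrap --mon-ip 192.168.56.11
```
3. Make the `ceph` CLI available on the host, since the rest of this guide runs `sudo ceph ...` directly rather than inside `cephadm shell`:
```bash
# Add the upstream Ceph repository (release name is an example) and install the CLI
sudo cephadm add-repo --release reef
sudo cephadm install ceph-common
```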
4. Copy the SSH key to other nodes:
```bash
# Export the cluster's public SSH key to the host, then push it to the other nodes
sudo cephadm shell -- ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@node2
ssh-copy-id -f -i ~/ceph.pub root@node3
```

### Step 3: Add Other Nodes to the Cluster
1. Add the nodes:
```bash
sudo ceph orch host add node2
sudo ceph orch host add node3
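# Optionally label another host as an admin host so cephadm also distributes
# /etc/ceph/ceph.conf and the admin keyring there ("_admin" is the built-in label)
sudo ceph orch host label add node2 _admin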
```
2. Verify hosts are added:
```bash
sudo ceph orch host ls
```

### Step 4: Deploy Monitor and Manager Daemons
1. Deploy monitors across all three nodes (`ceph orch apply mon` sets the complete placement, so include node1 as well):
```bash
sudo ceph orch apply mon --placement="node1,node2,node3"
```
2. Deploy manager daemons:
```bash
sudo ceph orch apply mgr --placement="node1,node2,node3"
```

### Step 5: Prepare and Activate OSDs
#### On each node:
1. Identify disks to use for OSDs:
```bash
lsblk
```
2. Create OSDs (replace `/dev/sdX` with the actual disk):
```bash
sudo ceph orch daemon add osd node1:/dev/sdX
sudo ceph orch daemon add osd node2:/dev/sdY
sudo ceph orch daemon add osd node3:/dev/sdZ
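# Alternative: let cephadm create an OSD on every unused, eligible disk across
# all hosts (only appropriate if every spare disk should become an OSD)
sudo ceph orch apply osd --all-available-devices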
```

### Step 6: Verify Cluster Health
1. Check the status of the cluster:
```bash
sudo ceph -s
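# For more detail on any warnings, and to list every daemon cephadm manages
sudo ceph health detail
sudo ceph orch ps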
```

### Step 7: Deploy Other Ceph Services
1. Deploy Metadata Server (MDS) for CephFS:
```bash
sudo ceph orch apply mds fs_name --placement="node1,node2"
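# Note: MDS daemons serve a CephFS file system, which still has to be created.
# "ceph fs volume create" builds the data/metadata pools and can schedule the
# MDS daemons itself (fs_name and the placement are the same placeholders as above)
sudo ceph fs volume create fs_name --placement="node1,node2"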
```
2. Deploy RGW (RADOS Gateway) for object storage:
```bash
sudo ceph orch apply rgw rgw_name --placement="node1,node2,node3"
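# To use the gateway over S3, create an initial object storage user
# (uid and display name are examples; run inside "cephadm shell" if
# radosgw-admin is not installed on the host)
sudo radosgw-admin user create --uid=demo --display-name="Demo User"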
```

### Step 8: Access the Ceph Dashboard
1. Set the dashboard admin credentials and retrieve the dashboard URL (the password `admin` below is only an example; choose a strong one):
```bash
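# The dashboard mgr module is enabled by default after a cephadm bootstrap;
# if it is not, enable it first
ceph mgr module enable dashboard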
ceph mgr services
ceph dashboard create-self-signed-cert
echo -n "admin" > /tmp/dashboard-password
ceph dashboard set-login-credentials admin -i /tmp/dashboard-password
rm /tmp/dashboard-password
```
2. Access the dashboard using the provided URL.
### Additional Tips
- Regularly check the cluster status with `ceph -s`.
- For advanced configurations, refer to the official Ceph documentation.
- Monitor logs and health alerts to maintain cluster integrity.

By following these steps, you should have a functioning Ceph cluster on your Rocky Linux nodes.