{"id":13446675,"url":"https://github.com/Storidge/quick-start","last_synced_at":"2025-03-21T16:32:41.117Z","repository":{"id":74986841,"uuid":"204782970","full_name":"Storidge/quick-start","owner":"Storidge","description":"START HERE:  Setup a Swarm cluster with persistent storage in 10 minutes","archived":false,"fork":false,"pushed_at":"2019-09-09T21:42:15.000Z","size":9,"stargazers_count":1,"open_issues_count":1,"forks_count":1,"subscribers_count":3,"default_branch":"master","last_synced_at":"2024-10-28T11:43:09.859Z","etag":null,"topics":["docker-swarm","persistent-storage","portainer"],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Storidge.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-08-27T20:15:05.000Z","updated_at":"2024-09-22T06:20:02.000Z","dependencies_parsed_at":null,"dependency_job_id":"9fb590b7-2c6c-4edc-8adb-4ef2f406feed","html_url":"https://github.com/Storidge/quick-start","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Storidge%2Fquick-start","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Storidge%2Fquick-start/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Storidge%2Fquick-start/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Storidge%2Fquick-start/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Storidge","down
load_url":"https://codeload.github.com/Storidge/quick-start/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244829609,"owners_count":20517342,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["docker-swarm","persistent-storage","portainer"],"created_at":"2024-07-31T05:00:56.404Z","updated_at":"2025-03-21T16:32:41.109Z","avatar_url":"https://github.com/Storidge.png","language":null,"readme":"![Logo](https://i.imgur.com/FfIj2NA.png)\n\n# Quick Start\n\nDocker Swarm is a popular orchestration tool used for managing containerized applications. A Swarm cluster turns a group of Docker hosts into a single virtual system.\n\nWhile containers provide a great way to develop and deploy applications efficiently, their transient nature means containers lose all data when deleted. This is a problem for applications which need to persist data beyond the lifecycle of the container.\n\n## Why Persistent Storage?\n\nPersistent storage is a key requirement for virtually all enterprises, because most systems enterprises run require data that can be saved and tapped into later. These include systems which provide insights into consumer behavior and deliver actionable leads on what customers are looking for.\n\nPersistent storage is desirable for container-based applications and Swarm clusters because data can be retained after applications running inside those containers are shut down. 
However, many deployments rely on external storage systems for data persistence.\n\nIn public cloud deployments, this means using managed services such as EBS, S3 and EFS. On-premise deployments typically use traditional NAS and SAN storage solutions which are cumbersome and expensive to operate.\n\n## Deploy Swarm Cluster with Cloud Native Storage\n\nStoridge’s CIO software was created to simplify the life of developers and operators. It is software-defined storage designed to make cloud-native clusters, and the applications and services running inside them, more self-sufficient and portable by providing highly available storage as a service.\n\nThis guide shows you how to easily deploy Storidge's Container IO (CIO) software. Follow the steps below to bring up a Swarm cluster with a Portainer dashboard that's ready to run stateful apps in just a few minutes.\n\nLet’s get started!\n\n## Prerequisites\n\nFirst, you'll need to deploy the cluster resources to orchestrate:\n\n- A minimum of three hosts (physical or virtual) is required to form a cluster; a minimum of four nodes is recommended for production clusters. You can use `docker-machine create` to provision virtual hosts. Here are examples using [VirtualBox](https://rominirani.com/docker-swarm-tutorial-b67470cf8872) and [DigitalOcean](https://www.digitalocean.com/community/tutorials/how-to-create-a-cluster-of-docker-containers-with-docker-swarm-and-digitalocean-on-centos-7).\n- Each node will need a minimum of four drives: one boot drive and three data drives, which CIO uses to ensure data redundancy and availability\n- Configure networking to allow SSH connections across all hosts\n\n## Step 1. Install cio software\n\nStoridge's CIO software currently supports CentOS 7.6 (3.10 kernel), RHEL 7 (3.10 kernel) and Ubuntu 16.04 LTS (4.4 kernel). 
Note that the desktop edition of Ubuntu 16.04 ships with a 4.15 kernel, which is not supported.\n\nAfter verifying you have a supported distribution, run the convenience script below to begin installation.\n\n`curl -fsSL ftp://download.storidge.com/pub/ce/cio-ce | sudo bash`\n\nExample:\n```\nroot@ip-172-31-27-160:~# curl -fsSL ftp://download.storidge.com/pub/ce/cio-ce | sudo bash\nStarted installing release 2879 at Sat Jul 13 02:52:57 UTC 2019\nLoading cio software for: u16  (4.4.0-1087-aws)\nReading package lists... Done\nBuilding dependency tree\n.\n.\n.\nFinished at Sat Jul 13 02:53:06 UTC 2019\n\nInstallation completed. cio requires a minimum of 3 local drives per node for data redundancy.\n\nTo start a cluster, run 'cioctl create' on primary node. To add a node, generate a join token\nwith 'cioctl join-token' on sds node. Then run the 'cioctl node add ...' output on this node.\n```\n\n**Install Additional Nodes**\n\nYou can add more nodes to the cluster to increase capacity and performance, and to enable high availability for your applications. Repeat the convenience script installation on all nodes that will be added to the cluster.\n\n`curl -fsSL ftp://download.storidge.com/pub/ce/cio-ce | sudo bash`\n\nNote: The use of convenience scripts is recommended for dev environments only, as root permissions are required to run them.\n\n## Step 2. Configure cluster\n\nWith the CIO software installed on all nodes, the next step is to create a cluster and initialize it for use. As part of cluster creation, CIO will automatically discover and add drive resources from each node. Note that drives which are partitioned or have a file system will not be added to the storage pool.\n\nRun the `cioctl create` command on the node you wish to be the leader of the cluster. This generates a `cioctl join` and a `cioctl init` command.\n\n```\nroot@ip-172-31-22-159:~# cioctl create\nKey Generation setup\nCluster started. The current node is now the primary controller node. 
To add a storage node to this cluster, run the following command:\n    cioctl join 172.31.22.159 7598e9e2eb9fe221b98f9040cb3c73bc-bd987b6a\n\nAfter adding all storage nodes, return to this node and run following command to initialize the cluster:\n    cioctl init bd987b6a\n```\n\nRun the `cioctl join` command on nodes joining the cluster.\n\n**Single-node cluster**\n\nTo configure a single-node cluster, just run `cioctl create --single-node` to create the cluster and automatically complete initialization.\n\n## Step 3. Initialize cluster\n\nWith all nodes added, run the `cioctl init` command on the SDS node to start initializing the cluster.\n\n```\nroot@ip-172-31-22-159:~# cioctl init bd987b6a\nConfiguring Docker Swarm cluster with Portainer service\n\u003c13\u003eJul 13 02:43:40 cluster: Setup AWS persistent hostname for sds node\n\u003c13\u003eJul 13 02:44:00 cluster: initialization started\n\u003c13\u003eJul 13 02:44:03 cluster: Setup AWS persistent hostnames\n.\n.\n.\n\u003c13\u003eJul 13 02:47:24 cluster: Node initialization completed\n\u003c13\u003eJul 13 02:47:26 cluster: Start cio daemon\n\u003c13\u003eJul 13 02:47:34 cluster: Succeed: Add vd0: Type:2-copy, Size:20GB\n\u003c13\u003eJul 13 02:47:36 cluster: MongoDB ready\n\u003c13\u003eJul 13 02:47:37 cluster: Synchronizing VID files\n\u003c13\u003eJul 13 02:47:38 cluster: Starting API\n```\n\n**Note:** For physical servers with SSDs, the initialization process will take about 30 minutes longer the first time. This extra time is used to characterize the available performance in the cluster. This performance information is used in CIO’s quality-of-service (QoS) feature to deliver guaranteed performance for individual applications.\n\n## Log in to the dashboard\n\nAt the end of initialization, you have a CIO storage cluster running. A Docker Swarm cluster will be automatically configured if one is not already running.\n\nRun `docker node ls` to show the compute cluster nodes. 
Example:\n```\nroot@ip-172-31-22-159:~# docker node ls\nID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION\nk9lwu33n1qvc4qw06t8rhmrn0     c-8ca106d7          Ready               Active              Reachable           18.09.6\n18yxcme2s8b4q9jiq0iceh35y     c-6945fd81          Ready               Active              Leader              18.09.6\npmjn72izhrrb6hn2gnaq895c4 *   c-abc38f75          Ready               Active              Reachable           18.09.6\n```\n\nThe Portainer service is launched at the end of initialization. Verify with `docker service ps portainer`:\n```\nroot@ip-172-31-22-159:~# docker service ps portainer\nID                  NAME                IMAGE                        NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS\nzhu4ykc1hdlu        portainer.1         portainer.1         portainer/portainer:latest   c-6945fd81          Running             Running 4 minutes ago\n```\n\nLog in to Portainer at any node's IP address on port 9000. Assign an admin password and you'll see the dashboard.\n\n## Next steps\n\nRefer to the [Getting Started guide](https://guide.storidge.com/) for exercises to create volumes and profiles, deploy stateful apps, test high availability, perform cluster management, and more.\n\nJoin us on Slack @ http://storidge.com/join-cio-slack/\n","funding_links":[],"categories":["Container Operations"],"sub_categories":["Volume Management / Data"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FStoridge%2Fquick-start","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FStoridge%2Fquick-start","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FStoridge%2Fquick-start/lists"}