https://github.com/ucs-compute-solutions/flashstack_imm_m7
FlashStack VSI with UCS X-series and C-series M7 Servers, Pure Storage FlashArray and VMware vSphere 8.0
- Host: GitHub
- URL: https://github.com/ucs-compute-solutions/flashstack_imm_m7
- Owner: ucs-compute-solutions
- License: MIT
- Created: 2023-10-10T04:52:27.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-01T16:48:08.000Z (10 months ago)
- Last Synced: 2025-01-23T06:09:06.179Z (4 months ago)
- Language: Python
- Size: 96.7 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# FlashStack VSI CVD using Cisco UCS M7 servers, Pure Storage FlashArray, and VMware vSphere 8.0
The FlashStack portfolio is a series of validated solutions developed jointly by Cisco and Pure Storage. Each FlashStack is designed, built, and validated in Cisco's internal labs, and delivered as a Cisco Validated Design (CVD) that includes a design guide, a deployment guide, and automation delivered as Infrastructure as Code (IaC).
This release of the CVD introduces support for the 7th generation of Cisco UCS C-Series and Cisco UCS X-Series servers, powered by 4th Gen Intel Xeon Scalable processors. The new Cisco UCS servers provide the compute infrastructure for the FlashStack virtual server infrastructure (VSI) solution. For storage, the solution uses either 32Gbps Fibre Channel (FC) or 100Gbps IP/Ethernet storage to access unified block and file storage on a Pure Storage FlashArray. For the virtualization layer of the stack, the solution introduces support for VMware vSphere 8.0 and the new capabilities available in this release.
This repository provides the IaC automation for provisioning the end-to-end infrastructure in this CVD solution:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flashstack_m7_vmware_8_ufs_fc.html

This repository contains the Ansible playbooks to automate the following components in the Virtual Server Infrastructure stack:
• Cisco UCS X-series and C-series M7 servers in Intersight Managed Mode (IMM)
• Cisco Nexus and MDS Switches
• Pure FlashArray
• VMware ESXi and VMware vCenter

The automation provided in this repo enables you to deploy the FlashStack designs in the CVD using Ansible. To execute the playbooks, the FlashStack solution must be built and cabled as shown in the figure below. The automation provides the flexibility to provision either FC or IP/Ethernet for storage access.
## FlashStack - Solution Topology

## Solution Automation Overview
### Set up the execution environment
To execute the various Ansible playbooks, a Linux-based system must be set up as described in the CVD, with the packages listed at the following pages:
- Cisco Intersight: https://galaxy.ansible.com/ui/repo/published/cisco/intersight/
- Cisco NxOS: https://galaxy.ansible.com/ui/repo/published/cisco/nxos/
- Pure FlashArray: https://galaxy.ansible.com/ui/repo/published/purestorage/flasharray/
- VMware: https://galaxy.ansible.com/ui/repo/published/community/vmware/

You might already have these collections installed.
- To check whether a collection is installed, run: `ansible-galaxy collection list`
- To install the collections, use:
  - `ansible-galaxy collection install cisco.intersight` (Cisco Intersight collection)
  - `ansible-galaxy collection install cisco.nxos` (Cisco NX-OS collection)
  - `ansible-galaxy collection install purestorage.flasharray` (Pure Storage FlashArray collection)
  - `ansible-galaxy collection install community.vmware` (VMware community collection)

Next, clone the repository from GitHub: `git clone https://github.com/ucs-compute-solutions/FlashStack_IMM_M7.git`
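The four collections above can also be installed in one pass from a requirements file. The sketch below is illustrative, not part of the repo: the `requirements.yml` filename is a common Ansible convention, and no versions are pinned.

```shell
# Write a requirements file listing the collections the playbooks depend on.
cat > requirements.yml <<'EOF'
---
collections:
  - name: cisco.intersight
  - name: cisco.nxos
  - name: purestorage.flasharray
  - name: community.vmware
EOF

# Install everything in one pass (requires network access to Ansible Galaxy):
# ansible-galaxy collection install -r requirements.yml

# Verify what is already present:
# ansible-galaxy collection list
```

Pinning versions in `requirements.yml` (e.g. `version: ">=1.0.0"`) makes the execution environment reproducible across hosts.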
### Setup Inventory and Variables
To execute the automation in this repo, update the inventory and variables files to reflect the specifics of your environment. The variables that must be configured to execute the automation are defined in the following locations:
1. The inventory file is located at `inventory/inventory.ini`
2. Variables are distributed across multiple files:
- Variables that require customer input are part of `inventory/(group_vars|host_vars)//(vars|vault)`
- Variables that do not require customer input (e.g., descriptions) are under `role_name/defaults/main.yml`
- All UCS pools, policies, and profiles created will use a user-specified prefix (e.g., M7a) in their names
- All Intersight-driven configurations will use the tag `Provisioned by: intersight-ansible`
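As a rough illustration of the inventory layout, a minimal `inventory.ini` for this solution might group hosts per component. The group names and addresses below are hypothetical placeholders, not values from the CVD; the example writes to a separate file to avoid touching the repo's real inventory.

```shell
# Hypothetical inventory sketch -- group names and IPs are illustrative only.
cat > inventory-example.ini <<'EOF'
[nexus]
nexus-a ansible_host=10.0.0.11
nexus-b ansible_host=10.0.0.12

[mds]
mds-a ansible_host=10.0.0.21
mds-b ansible_host=10.0.0.22

[pure]
flasharray ansible_host=10.0.0.31

[esxi]
esxi-01 ansible_host=10.0.0.41
EOF
```

With this layout, per-group customer inputs would live under `inventory/group_vars/<group>/` as described above.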
### Prerequisites
1. To deploy UCS configuration via Intersight, you will need an Intersight account with the Cisco UCS Fabric Interconnects claimed and provisioned using a UCS domain profile.
2. Collect the UCS host IQNs (IP/Ethernet) or WWPNs (FC) from the server profiles once they have been provisioned/derived from the server profile template. Update the variables files for Pure Storage and MDS switches with this information.
3. Collect the target IQNs (IP/Ethernet) or WWPNs (FC) from Pure Storage once the ethernet/fc interfaces have been provisioned for the storage protocol/service. Update the variables files for Cisco UCS and MDS switches with this information.
4. To run the vCenter and vSphere configuration for the solution, a VMware vCenter instance must already be deployed and reachable.
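Steps 2 and 3 above amount to copying initiator and target identifiers into the variable files. The fragment below is a hypothetical sketch of what such a vars file might look like for the FC case; the variable names and WWPN values are illustrative placeholders, not the repo's actual schema.

```shell
# Illustrative only -- variable names and WWPNs are placeholders, not the
# repo's actual variable schema.
cat > wwpn-vars-example.yml <<'EOF'
---
# Host initiator WWPNs, collected from the derived UCS server profiles
host_wwpns:
  esxi-01:
    fabric_a: "20:00:00:25:b5:a0:00:01"
    fabric_b: "20:00:00:25:b5:b0:00:01"

# Target WWPNs, collected from the FlashArray FC ports
array_wwpns:
  ct0_fc0: "52:4a:93:70:00:00:00:00"
  ct1_fc0: "52:4a:93:70:00:00:00:10"
EOF
```

For the IP/Ethernet design the same exchange happens with IQNs instead of WWPNs.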
### Playbook Execution Sequence
**NOTE:** You can specify the location of the inventory file by adding `-i inventory/inventory.ini` to the `ansible-playbook` commands below.
1. Set up LAN networking on a pair of Nexus access-layer switches: `ansible-playbook ./setup_network.yml`
2. Set up pools, policies, and profiles for UCS through Cisco Intersight: `ansible-playbook ./setup_compute.yml`
3. Set up the Pure FlashArray: `ansible-playbook ./setup_pure_storage.yml`
4. Set up FC SAN networking on a pair of MDS switches: `ansible-playbook ./setup_network_san.yml`
5. Set up the VMware cluster and vCenter: `ansible-playbook ./setup_vmware_vcenter.yml`
6. Set up the VMware ESXi servers: `ansible-playbook ./setup_vmware_esxi.yml`
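The six steps can be wrapped in a simple script. The sketch below prints each command as a dry run rather than invoking `ansible-playbook` directly; it assumes only the playbook filenames from the list above.

```shell
# Dry-run wrapper: prints the playbook commands in CVD order.
# Drop the leading "echo" to actually execute (requires the collections
# installed and the inventory/variables populated).
playbooks="setup_network.yml setup_compute.yml setup_pure_storage.yml \
setup_network_san.yml setup_vmware_vcenter.yml setup_vmware_esxi.yml"

for pb in $playbooks; do
  echo ansible-playbook -i inventory/inventory.ini "./$pb"
done
```

Running the playbooks in this order matters: the network, compute, and storage layers must exist before the SAN zoning and VMware configuration can reference them.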
### Intersight API Access and Configuration
To execute the playbooks against your Intersight account, you need to complete the following additional steps of creating an API key and saving the Secrets_File:
https://community.cisco.com/t5/data-center-and-cloud-documents/intersight-api-overview/ta-p/3651994
The API key and Secrets.txt (default file name) should be placed in the `inventory/group_vars/cisco_intersight_common/` sub-directory and encrypted. These files, along with any passwords, should not be uploaded to GitHub; use the `.gitignore` file to filter out any files with sensitive information.
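One way to implement the filtering described above is a `.gitignore` entry plus `ansible-vault` encryption. The patterns below follow the default file names mentioned in this section; adjust them to match where you actually store the key material.

```shell
# Keep the API secret and vault files out of version control.
cat >> .gitignore <<'EOF'
Secrets.txt
inventory/group_vars/*/vault
inventory/host_vars/*/vault
EOF

# Encrypt the secrets file at rest (prompts for a vault password):
# ansible-vault encrypt inventory/group_vars/cisco_intersight_common/Secrets.txt
```

Playbooks that read vault-encrypted variables can then be run with `--ask-vault-pass` or `--vault-password-file`.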
The Intersight playbooks in this repository perform the following functions:
1. Create various pools required to setup a Server Profile Template
2. Create various policies required to setup a Server Profile Template
3. Create iSCSI and/or FC Server Profile Templates

After successfully executing the playbooks, one or more server profiles can easily be derived and attached to compute nodes from the Intersight dashboard.
### Post Configuration Tasks
Executing the first three playbooks in this repository sets up the Server Profile Templates in Intersight, from which one or more server profiles can easily be derived and attached to compute nodes. To install an OS on these newly provisioned servers, Intersight offers an Install OS workflow that can be accessed by navigating to **Infrastructure Service > Operate > Servers**. Then select a server and use the '...' option on the right-hand side to select **Install Operating System** from the list of menu options.