Blue Box SRE Operations Platform
- Host: GitHub
- URL: https://github.com/sitectl/cuttle
- Owner: sitectl
- License: other
- Created: 2017-06-20T02:10:12.000Z (almost 8 years ago)
- Default Branch: master
- Last Pushed: 2017-08-13T20:38:42.000Z (almost 8 years ago)
- Last Synced: 2024-11-07T05:38:32.253Z (7 months ago)
- Topics: ansible, bastion, bluebox, elk, operations, sensu, sre
- Language: Ruby
- Homepage:
- Size: 1.86 MB
- Stars: 35
- Watchers: 8
- Forks: 14
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE.md
README
# Cuttle
_Originally called Site Controller (sitectl) and pronounced "Cuddle"._

A Monolithic Repository of Composable Ansible Roles for building an Opinionated
Blue Box SRE Operations Platform.

Originally built by the Blue Box Cloud team to install the infrastructure required to build and
support Openstack Clouds using [Ursula](http://github.com/blueboxgroup/ursula), it quickly grew into
a larger project for enabling SRE Operations both in the Data Center and in the Cloud for any kind
of infrastructure.

Like [Ursula](http://github.com/blueboxgroup/ursula), Cuttle uses the
[ursula-cli](https://github.com/blueboxgroup/ursula-cli) (installed via `requirements.txt`)
for running Ansible on specific environments, and has some strong opinions on how
[Ansible inventory](docs/inventory.md) should be written and handled.

For a rough idea of how Blue Box uses Cuttle by building Central and Remote sites
tethered together with IPSEC VPNs, check out [docs/architecture.md](docs/architecture.md).

You will see a number of example Ansible inventories in `envs/example/` that
show Cuttle being used to build infrastructure to solve a number of problems.
`envs/example/sitecontroller` shows close to a full deployment, whereas
`envs/example/mirror` or `envs/example/elk` build just specific components.
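For orientation, the layout of those example environments looks roughly like this (only the three directories named above are taken from this README; the annotations paraphrase their purpose):

```
envs/example/
├── elk/             # standalone ELK logging stack
├── mirror/          # package mirror components only
└── sitecontroller/  # close to a full deployment
```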
All of these environments can easily be deployed in Vagrant by using the `ursula-cli`
(see [Example Usage](#example-usage)).

### Examples
* See [docs/deploy_2fa_secured_bastion.md](docs/deploy_2fa_secured_bastion.md) for
a fairly comprehensive document on deploying a secure (2FA, console logging, RBAC)
Bastion.
* See [docs/deploy-oauth-secured-monitoring-server.md](docs/deploy-oauth-secured-monitoring-server.md) for an OAuth2-secured Sensu / Graphite server.

How to Contribute
-----------------

See [CONTRIBUTORS.md](CONTRIBUTORS.md) for the original team.
The official git repository of Site Controller is https://github.com/IBM/cuttle.
If you have cloned this from somewhere else, may god have mercy on your soul.

### Workflow
We follow the standard github workflow of Fork -> Branch -> PR -> Test -> Review -> Merge.
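In practice that looks something like the following (the fork remote and branch name here are placeholders):

```
# clone your fork and start a feature branch
$ git clone [email protected]:<your-github-user>/cuttle.git
$ cd cuttle
$ git checkout -b my-feature

# commit your changes, push, and open a Pull Request against IBM/cuttle
$ git push -u origin my-feature
```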
The Site Controller Core team is working to put together guidance on contributing and
governance now that it is an open source project.

Development and Testing
-----------------------

### Build Development Environment
```
# clone this repo
$ git clone [email protected]:ibm/cuttle.git

# install pip, hopefully your system has it already

# install virtualenv
$ pip install virtualenv

# create a new virtualenv so python is happy
$ virtualenv --no-site-packages --no-wheel ~//venv

# activate your new venv like normal
$ source ~//venv/bin/activate

# install ursula-cli, the correct version of ansible, and all other deps
$ cd cuttle
$ pip install -r requirements.txt

# run ansible using ursula-cli; or ansible-playbook, if that's how you roll
$ ursula envs/example/ site.yml

# deactivate your virtualenv when you are done
$ deactivate
```

[Vagrant](https://www.vagrantup.com/) is our preferred Development/Testing framework.
### Example Usage
ursula-cli understands how to interact with vagrant using the `--provisioner` flag:
```
$ ursula --provisioner=vagrant envs/example/sitecontroller bastion.yml
$ ursula --provisioner=vagrant envs/example/sitecontroller site.yml
```

### Openstack and Heat
_Your inventory must have a `heat_stack.yml` and an optional `vars_heat.yml` in order for this to work._
You can also test in Openstack with Heat Orchestration. First, grab your stackrc file from Openstack Horizon:
`Project > Compute > Access & Security > Download OpenStack RC File`
Ensure your `ssh-agent` is running, then source your stackrc and run the play:
```
$ source -openrc.sh
$ ursula --ursula-forward --provisioner=heat envs/example/sitecontroller site.yml
```

Add the `--ursula-debug` argument for verbose output.
## Run behind a docker proxy for local dev
```
$ docker run \
    --name proxy -p 3128:3128 \
    -v $(pwd)/tmp/cache:/var/cache/squid3 \
    -d jpetazzo/squid-in-a-can
```

Then set the following in your inventory (`vagrant.yml` in `envs/example/*/`); `10.0.2.2` is the default VirtualBox NAT address of the host as seen from inside the guest:
```
env_vars:
  http_proxy: "http://10.0.2.2:3128"
  https_proxy: "http://10.0.2.2:3128"
  no_proxy: localhost,127.0.0.0/8,10.0.0.0/8,172.0.0.0/8
```
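To sanity-check that the proxy container is answering before running a play, something like this works from the host (the target URL is just a placeholder):

```
$ curl -x http://localhost:3128 -sI http://archive.ubuntu.com/ | head -n 1
```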
Deploying
---------

To actually deploy an environment, you would use ursula-cli like so:
```
$ ursula ../sitecontroller-envs/sjc01 bastion.yml
$ ursula ../sitecontroller-envs/sjc01 site.yml

# targeted runs using any ansible-playbook option
$ ursula ../ursula-infra-envs/sjc01 site.yml --tags openid_proxy
```
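Since any ansible-playbook option is passed through, you can also scope a run with standard flags such as `--limit` (the `bastion` group name here is an assumption about your inventory):

```
$ ursula ../sitecontroller-envs/sjc01 site.yml --tags openid_proxy --limit bastion
```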