https://github.com/bwplotka/serenity-formula
Salt formulas and scripts for deploying Mesos with Serenity (also under DCOS)
- Host: GitHub
- URL: https://github.com/bwplotka/serenity-formula
- Owner: bwplotka
- License: apache-2.0
- Created: 2015-09-07T14:00:49.000Z (over 10 years ago)
- Default Branch: master
- Last Pushed: 2016-02-08T15:25:47.000Z (almost 10 years ago)
- Last Synced: 2025-02-09T23:16:24.902Z (12 months ago)
- Language: SaltStack
- Size: 48.8 KB
- Stars: 3
- Watchers: 3
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
# Serenity-formulas
Salt formulas and scripts for deploying Mesos with Serenity.
For full Salt Formulas installation and usage instructions, see the [Salt Formulas docs](http://docs.saltstack.com/topics/development/conventions/formulas.html).
For the official Serenity page, see [Project Serenity](https://github.com/mesosphere/serenity).
## Requirements
* ZooKeeper is running on all master nodes (in HA) on the default port 2181.
* Salt is installed and configured on all nodes.
* Proper DNS and hostname setup.
* Master and slave nodes must be able to communicate freely with each other.
* For Marathon: Java installed.
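As a quick sanity check for the ZooKeeper requirement, a small helper like the following (hypothetical, not part of the formula) can probe whether a TCP service accepts connections:

```python
import socket

# Hypothetical helper, not part of the formula: check whether a TCP
# service (e.g. ZooKeeper on its default port 2181) accepts connections.
def port_open(host, port=2181, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it against every master node before applying the states, e.g. `port_open("masternode.example.com")`.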
## Status
Tested under DCOS on CentOS 7, on 36 Intel Xeon E5 v3 machines with NFS storage.
## Installation:
On salt master node:
1. Install the formulas from this repo into `/srv/formulas/`:

        git clone https://github.com/Bplotka/serenity-formula.git

2. Add `/srv/formulas/serenity-formula` to the `/etc/salt/master` config under:

        file_roots:
          base:
            - /srv/formulas/serenity-formula

3. Restart `salt-master`.
4. Copy `serenity.sls.example` to `/srv/pillar/serenity.sls` and edit the file to match your configuration
   (make sure that `dns`, `zookeeper_cluster_size` and `master_lb` are filled in properly).
5. Copy or append the content of `top.sls.example` to `/srv/pillar/top.sls`.
6. Set roles for each node using Salt:

        $ salt 'masternode.example.com' grains.setval mesos-roles [master]
        $ salt 'agentnode.example.com' grains.setval mesos-roles [slave]

7. Set a default role for each slave if needed:

        $ salt 'agentnode.example.com' grains.setval mesos-default-role custom_role

8. Run the formula to prepare the configuration:

        $ salt '*' state.sls serenity.setup

9. [Optional] Configure your build system to put the Mesos build in `/srv/formulas/serenity/build/mesos_latest/`.
10. Configure your build system to put the [Serenity build](https://github.com/mesosphere/serenity#building-serenity-with-cmake) in `/srv/formulas/serenity/build/serenity_latest/`.
11. Make sure that `/srv/formulas/serenity/build/` is writable by your build system.
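As an illustration of step 4, a minimal `/srv/pillar/serenity.sls` might look like the sketch below. Only `dns`, `zookeeper_cluster_size` and `master_lb` are named in this README; the values here are placeholders, and `serenity.sls.example` in the repo is the authoritative key list:

```yaml
# /srv/pillar/serenity.sls -- illustrative sketch only;
# see serenity.sls.example for the full, authoritative key list
dns: 10.0.0.2                       # placeholder DNS server address
zookeeper_cluster_size: 3           # number of ZooKeeper/master nodes
master_lb: master-lb.example.com    # placeholder master load-balancer host
```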
## Installation under DCOS:
These formulas also work under DCOS.
On the salt master node:
Do all steps from the section above except step 5. These formulas will use `dcos-roles`.
## Usage:
1. Each time you build both new Mesos & Serenity sources, you can deploy them easily:

        $ salt '*' state.sls serenity.deploy_all

2. Each time you build ONLY the Serenity source, you can deploy it easily:

        $ salt '*' state.sls serenity.deploy

   Or deploy just on the slaves:

        $ salt -G 'mesos-roles:slave' state.sls serenity.deploy

Under DCOS, if you need to revert Mesos to the DCOS version, just run:

        $ salt '*' state.sls serenity.use_dcos
## Marathon installation and usage via serenity-formula:
The Serenity formula can run a configured Marathon as well. By default it deploys Marathon services to systemd in HA on the same nodes
where the _Mesos Masters_ run.
If you want to run a custom Marathon using this formula, a few additional steps are needed:
1. Make sure that Java is installed on your machines.
2. Download `marathon.jar` (the version you want to have in your cluster).
3. Use `serenity.deploy_with_marathon` instead of `serenity.deploy_all`:

        $ salt -G 'mesos-roles:slave' state.sls serenity.deploy_with_marathon
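For illustration, a Marathon systemd unit run in HA on the master nodes might look roughly like the sketch below. The unit name, paths, flags and ZooKeeper addresses are assumptions for the example; the formula generates the actual service file:

```
# /etc/systemd/system/marathon.service -- illustrative sketch only;
# the real unit is templated by the formula
[Unit]
Description=Marathon
After=network.target

[Service]
# Paths and ZooKeeper hosts below are placeholders
ExecStart=/usr/bin/java -jar /opt/marathon/marathon.jar \
    --master zk://master1:2181,master2:2181,master3:2181/mesos \
    --zk zk://master1:2181,master2:2181,master3:2181/marathon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```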
## Other information:
* Make sure that every needed file from the Mesos build is present (for now, `.so` files, `./sbin` and `./share` are downloaded).
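The presence check above can be automated with a small helper like this. It is hypothetical, not part of the formula, and the required-entry list is an assumption based only on the note above (`.so` files, `./sbin`, `./share`):

```python
import os

# Hypothetical helper: report which expected Mesos build artifacts are
# missing from a build directory. The expected list (.so files plus the
# sbin/ and share/ directories) is an assumption taken from the README note.
def missing_build_artifacts(build_dir, required_dirs=("sbin", "share")):
    missing = [d for d in required_dirs
               if not os.path.isdir(os.path.join(build_dir, d))]
    # At least one shared library should be present at the top level
    if not any(name.endswith(".so") for name in os.listdir(build_dir)):
        missing.append("*.so")
    return missing
```

Calling it on `/srv/formulas/serenity/build/mesos_latest/` before a deploy would return an empty list when all expected artifacts are in place.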