{"id":13564897,"url":"https://github.com/letsencrypt/hashicorp-lessons","last_synced_at":"2026-02-13T22:30:19.046Z","repository":{"id":45395567,"uuid":"436038505","full_name":"letsencrypt/hashicorp-lessons","owner":"letsencrypt","description":null,"archived":false,"fork":false,"pushed_at":"2022-03-10T20:27:11.000Z","size":159,"stargazers_count":30,"open_issues_count":1,"forks_count":10,"subscribers_count":3,"default_branch":"main","last_synced_at":"2024-12-30T22:44:16.051Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"HCL","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/letsencrypt.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2021-12-07T22:03:38.000Z","updated_at":"2024-12-17T11:44:41.000Z","dependencies_parsed_at":"2022-09-15T14:14:36.450Z","dependency_job_id":null,"html_url":"https://github.com/letsencrypt/hashicorp-lessons","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/letsencrypt%2Fhashicorp-lessons","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/letsencrypt%2Fhashicorp-lessons/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/letsencrypt%2Fhashicorp-lessons/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/letsencrypt%2Fhashicorp-lessons/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/letsencrypt","download_url":"https://codeload.github.com/letsencrypt/hashicorp-lessons/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":239816510,"owners_count":19701753,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T13:01:37.682Z","updated_at":"2026-02-13T22:30:18.979Z","avatar_url":"https://github.com/letsencrypt.png","language":"HCL","readme":"## Getting Started with Hashicorp Nomad and Consul\n\n### Why write another learning guide?\nThe official [Nomad learning guides](https://learn.hashicorp.com/nomad) provide excellent examples for deploying workloads with Docker but very few showcase orchestration of non-containerized workloads. As a result of this I actually found it a little tough to get some of my first jobs spun-up.\n\n### Install Consul and Nomad\n1. https://learn.hashicorp.com/tutorials/consul/get-started-install\n1. https://learn.hashicorp.com/tutorials/nomad/get-started-install\n\n### Get Consul and Nomad started in dev mode\nBoth the `nomad` and `consul` binaries will run in the foreground by default. This is great because you can watch log lines as they come in and troubleshoot any issues encountered while working through these workshops. However, this also means you'll want to use `tmux` or using some kind of native tab or window management to keep these running in a session other than the one you'll be using to edit, plan, and run Nomad jobs.\n\n1. Start the Consul server in `dev` mode:\n   ```shell\n   $ consul agent -dev -datacenter dev-general -log-level ERROR\n   ```\n1. Start the Nomad server in `dev` mode:\n   ```shell\n   $ sudo nomad agent -dev -bind 0.0.0.0 -log-level ERROR -dc dev-general\n   ```\n\n### Ensure you can access the Consul and Nomad web UIs\n1. Open the Consul web UI: http://localhost:8500/ui\n1. Open the Nomad web UI: http://localhost:4646/ui\n\n### Clone the letsencrypt/hashicorp-lessons repository\nThis repository contains both the starting job specification and examples of how your specification should look after we complete each of the workshops.\n\n### Scan through this commented job specification\nDon't worry if it feels like a lot, _it is_. It took me a few days of working with Nomad and Consul to get a good sense of what each of these stanzas was actually doing. Feel free to move forward even if you feel a little lost. You can always come back and reference it as needed.\n\n```hcl\n// 'variable' stanzas are used to declare variables that a job specification can\n// have passed to it via '-var' and '-var-file' options on the nomad command\n// line.\n//\n// https://www.nomadproject.io/docs/job-specification/hcl2/variables\nvariable \"config-yml-template\" {\n  type = string\n}\n\n// 'job' is the top-most configuration option in the job specification.\n//\n// https://www.nomadproject.io/docs/job-specification/job\njob \"hello-world\" {\n\n  // datacenters where you would like the hello-world to be deployed.\n  //\n  // https://www.nomadproject.io/docs/job-specification/job#datacenters\n  datacenters = [\"dev-general\"]\n\n  // 'type' of Scheduler that Nomad will use to run and update the 'hello-world'\n  // job. We're using 'service' in this example because we want the\n  // 'hello-world' job to be run persistently and restarted if it becomes\n  // unhealthy or stops unexpectedly.\n  //\n  // https://www.nomadproject.io/docs/job-specification/job#type\n  type = \"service\"\n\n  // 'group' is a series of tasks that should be co-located (deployed) on the\n  // same Nomad client. \n  //\n  //https://www.nomadproject.io/docs/job-specification/group\n  group \"greeter\" {\n    // 'count' is the number of allocations (instances) of the 'hello-world'\n    // 'greeter' tasks you want to be deployed.\n    //\n    // https://www.nomadproject.io/docs/job-specification/group#count\n    count = 1\n\n    // 'network' declares which ports need to be available on a given Nomad\n    // client before it can allocate (deploy an instance of) the 'hello-world'\n    // 'greeter'\n    //\n    // https://www.nomadproject.io/docs/job-specification/network\n    network {\n      // https://www.nomadproject.io/docs/job-specification/network#port-parameters\n      port \"http\" {\n        static = 1234\n      }\n    }\n\n    // 'service' tells Nomad how the 'hello-world' 'greeter' allocations should be\n    // advertised (as a service) in Consul and how Consul should determine that\n    // each hello-world greeter allocation is healthy enough to advertise as\n    // part of the Service Catalog.\n    //\n    // https://www.nomadproject.io/docs/job-specification/service\n    service {\n      name = \"hello-world-greeter\"\n      port = \"http\"\n\n      // 'check' is the check used by Consul to assess the health or readiness\n      // of an individual 'hello-world' 'greeter' allocation.\n      //\n      // https://www.nomadproject.io/docs/job-specification/service#check\n      check {\n        name     = \"ready-tcp\"\n        type     = \"tcp\"\n        port     = \"http\"\n        interval = \"3s\"\n        timeout  = \"2s\"\n      }\n\n      // 'check' same as the above except the status of the service depends on\n      // the HTTP response code: any 2xx code is considered passing, a 429 Too\n      // ManyRequests is warning, and anything else is a failure.\n      check {\n        name     = \"ready-http\"\n        type     = \"http\"\n        port     = \"http\"\n        path     = \"/\"\n        interval = \"3s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    // 'task' defines an individual unit of work for Nomad to schedule and\n    // supervise (e.g. a web server, a database server, etc).\n    //\n    // https://www.nomadproject.io/docs/job-specification/task\n    task \"greet\" {\n      // 'driver' is the Task Driver that Nomad should use to execute our\n      // 'task'. For shell commands and scripts there are two options:\n      //\n      // 1. 'raw_exec' is used to execute a command for a task without any\n      //    isolation. The task is started as the same user as the Nomad\n      //    process. Ensure the Nomad process user is sufficiently restricted in\n      //    Production settings.\n      // 2. 'exec' uses the underlying isolation primitives of the operating\n      //    system to limit the task's access to resources\n      // 3. There are many more here: https://www.nomadproject.io/docs/drivers\n      //\n      // https://www.nomadproject.io/docs/job-specification/task#driver \n      driver = \"raw_exec\"\n\n      config {\n        // 'command' is the binary or script that will be called with /bin/sh.\n        command = \"greet\"\n        \n        // 'args' is a list of arguments passed to the 'greet' binary.\n        args = [\n          \"-c\", \"${NOMAD_ALLOC_DIR}/config.yml\"\n        ]\n      }\n\n      // 'template' instructs the Nomad Client to use 'consul-template' to\n      // template a given file into the allocation at a specified path. Note:\n      // that while 'consul-template' has 'consul' in the name, 'consul' is not\n      // required to use it.\n      // https://www.nomadproject.io/docs/job-specification/task#template\n      // https://www.nomadproject.io/docs/job-specification/template\n      template {\n        // 'data' is a string containing the contents of the consul template.\n        // Here we're passing a variable instead of defining it inline but both\n        // are perfectly valid.\n        data        = var.config-yml-template\n        destination = \"${NOMAD_ALLOC_DIR}/config.yml\"\n        change_mode = \"restart\"\n      }\n\n      // 'env' allows us to pass environment variables to our task. Since\n      // consul-template is run locally inside of allocation, if we needed to\n      // pass a variable from our job specification we would need to pass them\n      // in this stanza.\n      // https://www.nomadproject.io/docs/job-specification/task#env\n      //\n      env {\n        // 'foo' is just an example and not actually used in these lessons.\n        foo = \"${var.foo}\"\n      }\n    }\n  }\n}\n```\n\n## Nomad Workshop 1 - Hello World\nIn 'getting started' we got Nomad and Consul installed and reviewed each stanza of a Nomad job specification. In this workshop we're going to deploy a 'Hello World' style app, make some changes to it's configuration, and then re-deploy it.\n\n### Install the `greet` binary\nThese lessons make use of `greet`, a simple web server that will respond with _Hello, `\u003cname\u003e`_ when you send it an HTTP GET request. Ensure that you've got it installed before continuing.\n\nIf you've got Go `v1.17` installed and your `$GOPATH` in you `$PATH` then you should be able to `go install` it like so:\n                           \n```shell\n$ go install github.com/letsencrypt/hashicorp-lessons/greet@latest\n```\n\nEnsure greet is now in our `$PATH`.\n\n```shell\n$ which greet\n```\n\n### Check the plan output for the _hello-world_ job\nRunning the `plan` subcommand will help us understand what actions the Nomad\nScheduler is going to take on our behalf. Now you're probably seeing a whole lot of changes here that were not included in our job specification. This is because, in the absence of their explicit definition, Nomad will fill in some defaults for required values.\n\n```hcl\n$ nomad job plan -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n+ Job: \"hello-world\"\n+ AllAtOnce:  \"false\"\n+ Dispatched: \"false\"\n+ Name:       \"hello-world\"\n+ Namespace:  \"default\"\n+ Priority:   \"50\"\n+ Region:     \"global\"\n+ Stop:       \"false\"\n+ Type:       \"service\"\n+ Datacenters {\n  + Datacenters: \"dev-general\"\n  }\n+ Task Group: \"greeter\" (1 create)\n  + Count: \"1\" (forces create)\n  + RestartPolicy {\n    + Attempts: \"2\"\n    + Delay:    \"15000000000\"\n    + Interval: \"1800000000000\"\n    + Mode:     \"fail\"\n    }\n  + ReschedulePolicy {\n    + Attempts:      \"0\"\n    + Delay:         \"30000000000\"\n    + DelayFunction: \"exponential\"\n    + Interval:      \"0\"\n    + MaxDelay:      \"3600000000000\"\n    + Unlimited:     \"true\"\n    }\n  + EphemeralDisk {\n    + Migrate: \"false\"\n    + SizeMB:  \"300\"\n    + Sticky:  \"false\"\n    }\n  + Update {\n    + AutoPromote:      \"false\"\n    + AutoRevert:       \"false\"\n    + Canary:           \"0\"\n    + HealthCheck:      \"checks\"\n    + HealthyDeadline:  \"300000000000\"\n    + MaxParallel:      \"1\"\n    + MinHealthyTime:   \"10000000000\"\n    + ProgressDeadline: \"600000000000\"\n    }\n  + Network {\n      Hostname: \"\"\n    + MBits:    \"0\"\n      Mode:     \"\"\n    + Static Port {\n      + HostNetwork: \"default\"\n      + Label:       \"http\"\n      + To:          \"0\"\n      + Value:       \"1234\"\n      }\n    }\n  + Service {\n    + AddressMode:       \"auto\"\n    + EnableTagOverride: \"false\"\n    + Name:              \"hello-world-greeter\"\n    + Namespace:         \"default\"\n    + OnUpdate:          \"require_healthy\"\n    + PortLabel:         \"http\"\n      TaskName:          \"\"\n    + Check {\n        AddressMode:            \"\"\n        Body:                   \"\"\n        Command:                \"\"\n      + Expose:                 \"false\"\n      + FailuresBeforeCritical: \"0\"\n        GRPCService:            \"\"\n      + GRPCUseTLS:             \"false\"\n        InitialStatus:          \"\"\n      + Interval:               \"3000000000\"\n        Method:                 \"\"\n      + Name:                   \"ready-http\"\n      + OnUpdate:               \"require_healthy\"\n      + Path:                   \"/\"\n      + PortLabel:              \"http\"\n        Protocol:               \"\"\n      + SuccessBeforePassing:   \"0\"\n      + TLSSkipVerify:          \"false\"\n        TaskName:               \"\"\n      + Timeout:                \"2000000000\"\n      + Type:                   \"http\"\n      }\n    + Check {\n        AddressMode:            \"\"\n        Body:                   \"\"\n        Command:                \"\"\n      + Expose:                 \"false\"\n      + FailuresBeforeCritical: \"0\"\n        GRPCService:            \"\"\n      + GRPCUseTLS:             \"false\"\n        InitialStatus:          \"\"\n      + Interval:               \"3000000000\"\n        Method:                 \"\"\n      + Name:                   \"ready-tcp\"\n      + OnUpdate:               \"require_healthy\"\n        Path:                   \"\"\n      + PortLabel:              \"http\"\n        Protocol:               \"\"\n      + SuccessBeforePassing:   \"0\"\n      + TLSSkipVerify:          \"false\"\n        TaskName:               \"\"\n      + Timeout:                \"2000000000\"\n      + Type:                   \"tcp\"\n      }\n    }\n  + Task: \"greet\" (forces create)\n    + Driver:        \"raw_exec\"\n    + KillTimeout:   \"5000000000\"\n    + Leader:        \"false\"\n    + ShutdownDelay: \"0\"\n    + Config {\n      + args[0]: \"-c\"\n      + args[1]: \"${NOMAD_ALLOC_DIR}/config.yml\"\n      + command: \"greet\"\n      }\n    + Resources {\n      + CPU:         \"100\"\n      + Cores:       \"0\"\n      + DiskMB:      \"0\"\n      + IOPS:        \"0\"\n      + MemoryMB:    \"300\"\n      + MemoryMaxMB: \"0\"\n      }\n    + LogConfig {\n      + MaxFileSizeMB: \"10\"\n      + MaxFiles:      \"10\"\n      }\n    + Template {\n      + ChangeMode:   \"restart\"\n        ChangeSignal: \"\"\n      + DestPath:     \"${NOMAD_ALLOC_DIR}/config.yml\"\n      + EmbeddedTmpl: \"---\\nname: \\\"Samantha\\\"\\nport: 1234\\n\\n\"\n      + Envvars:      \"false\"\n      + LeftDelim:    \"{{\"\n      + Perms:        \"0644\"\n      + RightDelim:   \"}}\"\n        SourcePath:   \"\"\n      + Splay:        \"5000000000\"\n      + VaultGrace:   \"0\"\n      }\n\nScheduler dry-run:\n- All tasks successfully allocated.\n```\n\n### Run the _hello-world_ job\nWe expect that running our job should succeed because the `plan` output above stated that all of our tasks would be successfully allocated. Let's find out!\n\n```shell\n$ nomad job run -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n```\n\n### Visit the the greeter URL in your browser\n1. Open http://localhost:1234\n1. You should see:\n   ```\n   Hello, Samantha\n   ```\n\n### For our first change, let's make it greet you, instead of me\nEdit the value of `name` in `1_HELLO_WORLD/vars.go`:\n\n```yaml\n---\nname: \"YOUR NAME\"\nport: 1234\n\n```\n\n### Check the plan output for the _hello-world_ job\n```shell\n$ nomad job plan -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n+/- Job: \"hello-world\"\n+/- Task Group: \"greeter\" (1 create/destroy update)\n  +/- Task: \"greet\" (forces create/destroy update)\n    +/- Template {\n          ChangeMode:   \"restart\"\n          ChangeSignal: \"\"\n          DestPath:     \"${NOMAD_ALLOC_DIR}/config.yml\"\n      +/- EmbeddedTmpl: \"---\\nname: \\\"Samantha\\\"\\nport: 1234\\n\\n\" =\u003e \"---\\nname: \\\"YOUR NAME\\\"\\nport: 1234\\n\\n\"\n          Envvars:      \"false\"\n          LeftDelim:    \"{{\"\n          Perms:        \"0644\"\n          RightDelim:   \"}}\"\n          SourcePath:   \"\"\n          Splay:        \"5000000000\"\n          VaultGrace:   \"0\"\n        }\n\nScheduler dry-run:\n- All tasks successfully allocated.\n```\n\nLooks like it's going to work out just fine!\n\n### Deploy our updated _hello-world_ job\n```shell\n$ nomad job run -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n```\n\n### Visit the the _greeter_ URL in your browser\n1. Open [http://localhost:1234](http://localhost:1234)\n1. You should see:\n   ```\n   Hello, YOUR NAME\n   ```\n\n## Nomad Workshop 2 - Scaling Allocations\nIt's time to scale our greeter allocations. Thankfully Nomad's dynamic port allocation and Consul's templating are going to make this operation pretty painless.\n\nIt's best if you follow the documentation here to update your job specification at `1_HELLO_WORLD/job.go` and your vars file at `1_HELLO_WORLD/vars.go`, but if you get lost you can see the final product under `2_HELLO_SCALING/job.go` and `2_HELLO_SCALING/vars.go`.\n\n### Increment the _greeter_ count in our job specification\nEdit `job \u003e\u003e group \"greeter\" \u003e\u003e count` in our job specification from:\n```hcl\ncount = 1\n```\n\nto:\n```hcl\ncount = 2\n```\n\n### Check the plan output for the _hello-world_ job\n```shell\n$ nomad job plan -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n+/- Job: \"hello-world\"\n+/- Task Group: \"greeter\" (1 create, 1 in-place update)\n  +/- Count: \"1\" =\u003e \"2\" (forces create)\n      Task: \"server\"\n\nScheduler dry-run:\n- WARNING: Failed to place all allocations.\n  Task Group \"greeter\" (failed to place 1 allocation):\n    1. Resources exhausted on 1 nodes\n    1. Dimension \"network: reserved port collision http=1234\" exhausted on 1 nodes\n```\n\nIt looks like having a static port of `1234` is going to cause resource exhaustion. Not to worry though, we can update our job specification to let the Nomad Scheduler pick a port for each of our _greeter_ allocations to listen on.\n\n### Update the job to make port selection dynamic\nUnder `job \u003e\u003e group \"greeter\" \u003e\u003e network \u003e\u003e port` we can remove our static port assignment of _1234_ and leave empty curly braces _{}_ . This will instruct the Nomad Scheduler dynamically assign the port for each allocation.\n\nOur existing lines:\n\n```shell\nport \"http\" {\n  static = 1234\n}\n```\n\nOur new line:\n\n```shell\nport \"http\" {}\n```\n\n### Update our _greet_ config file template to use a dynamic port\nBy replacing _1234_ in our _greet_ config template with the `NOMAD_ALLOC_PORT_http` environment variable Nomad will always keep our config file up-to-date.\n\nWe expect the environment variable to be `NOMAD_ALLOC_PORT_http` because the network port we declare at `job \u003e\u003e group \"greeter\" \u003e\u003e network \u003e\u003e port` is called `http` . If we had called it `my-special-port` we would use `NOMAD_ALLOC_PORT_my-special-port`.\n\nOur existing line:\n```hcl\nport: 1234\n```\n\nOur new line:\n```hcl\nport: {{ env \"NOMAD_ALLOC_PORT_http\" }}\n```\n\nFor more info on Nomad Runtime Environment Variables see these\n[docs](https://www.nomadproject.io/docs/runtime/environment).\n\n### Check the plan output of the updated _hello-world_ job\n```shell\n$ nomad job plan -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n+/- Job: \"hello-world\"\n+/- Task Group: \"greeter\" (1 create, 1 ignore)\n  +/- Count: \"1\" =\u003e \"2\" (forces create)\n  +   Network {\n        Hostname: \"\"\n      + MBits:    \"0\"\n        Mode:     \"\"\n      + Dynamic Port {\n        + HostNetwork: \"default\"\n        + Label:       \"http\"\n        + To:          \"0\"\n        }\n      }\n  -   Network {\n        Hostname: \"\"\n      - MBits:    \"0\"\n        Mode:     \"\"\n      - Static Port {\n        - HostNetwork: \"default\"\n        - Label:       \"http\"\n        - To:          \"0\"\n        - Value:       \"1234\"\n        }\n      }\n  +/- Task: \"greet\" (forces create/destroy update)\n    +/- Template {\n          ChangeMode:   \"restart\"\n          ChangeSignal: \"\"\n          DestPath:     \"${NOMAD_ALLOC_DIR}/config.yml\"\n      +/- EmbeddedTmpl: \"---\\nname: \\\"Samantha\\\"\\nport: 1234\\n\\n\" =\u003e \"---\\nname: \\\"Samantha\\\"\\nport: {{ env \\\"NOMAD_ALLOC_PORT_http\\\" }}\\n\\n\"\n          Envvars:      \"false\"\n          LeftDelim:    \"{{\"\n          Perms:        \"0644\"\n          RightDelim:   \"}}\"\n          SourcePath:   \"\"\n          Splay:        \"5000000000\"\n          VaultGrace:   \"0\"\n        }\n\nScheduler dry-run:\n- All tasks successfully allocated.\n```\n\nOkay, this looks as though it will work!\n\n### Run the updated _hello-world_ job\n```shell\n$ nomad job run -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n```\n\n### Fetch the ports of our 2 new _greeter_ allocations\nThere are a few ways that we can fetch the ports that Nomad assigned to our\n_greeter_ allocations.\n\n#### via the Nomad GUI\n1. Open: http://localhost:4646/ui/jobs/hello-world/web\n1. Scroll down to the **Allocations** table\n1. Open each of the **Allocations** where Status is **running**\n1. Scroll down to the **Ports** table, and note the value for **http** in the **Host Address** column\n\n#### via the Nomad CLI\n\n1. Run `nomad job status hello-world` and note the **ID** for each allocation with **running** in the **Status** column:\n    ```shell\n    $ nomad job status hello-world\n    ID            = hello-world\n    Name          = hello-world\n    Submit Date   = 2022-01-06T16:57:57-08:00\n    Type          = service\n    Priority      = 50\n    Datacenters   = dev-general\n    Namespace     = default\n    Status        = running\n    Periodic      = false\n    Parameterized = false\n    \n    Summary\n    Task Group  Queued  Starting  Running  Failed  Complete  Lost\n    greeter     0       0         2        0       1         0\n    \n    Latest Deployment\n    ID          = a11c023a\n    Status      = successful\n    Description = Deployment completed successfully\n    \n    Deployed\n    Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline\n    greeter     2        2       2        0          2022-01-06T17:08:24-08:00\n    \n    Allocations\n    ID        Node ID   Task Group  Version  Desired  Status    Created    Modified\n    4ed1c285  e6e7b140  greeter     1        run      running   17s ago    4s ago\n    ef0ef9b3  e6e7b140  greeter     1        run      running   31s ago    18s ago\n    aa3a7834  e6e7b140  greeter     0        stop     complete  14m9s ago  16s ago\n    ```\n1. Run `nomad alloc status \u003callocation-id\u003e` for each alloc _ID_:\n   ```shell\n   $ nomad alloc status 4ed1c285\n   ID                  = 4ed1c285-e923-d627-7cc2-d392147eca2f\n   Eval ID             = e6b817a5\n   Name                = hello-world.greeter[0]\n   Node ID             = e6e7b140\n   Node Name           = treepie.local\n   Job ID              = hello-world\n   Job Version         = 1\n   Client Status       = running\n   Client Description  = Tasks are running\n   Desired Status      = run\n   Desired Description = \u003cnone\u003e\n   Created             = 58s ago\n   Modified            = 45s ago\n   Deployment ID       = a11c023a\n   Deployment Health   = healthy\n   \n   Allocation Addresses\n   Label  Dynamic  Address\n   *http  yes      127.0.0.1:31623\n \n   Task \"greet\" is \"running\"\n   Task Resources\n   CPU        Memory          Disk     Addresses\n   0/100 MHz  49 MiB/300 MiB  300 MiB\n   \n   Task Events:\n   Started At     = 2022-01-07T00:58:12Z\n   Finished At    = N/A\n   Total Restarts = 0\n   Last Restart   = N/A\n   \n   Recent Events:\n   Time                       Type        Description\n   2022-01-06T16:58:12-08:00  Started     Task started by client\n   2022-01-06T16:58:12-08:00  Task Setup  Building Task Directory\n   2022-01-06T16:58:12-08:00  Received    Task received by client\n   ```\n1. Under **Allocation Addresses** we can see `127.0.0.1:31623` is the address for this allocation\n\nIn our job specification you'll see that we also registered our _greeter_ allocations with the Consul Catalog as a Service called _hello-world-greeter_. This means that we can also grab these addresses and ports via the Consul web UI:\n1. Open http://localhost:8500/ui/dev-general/services/hello-world-greeter/instances\n1. On the right-hand side of each entry you can find the complete IP address and port for each of our _hello-world-greeter_ allocations.\n\n#### via the Consul DNS endpoint\nBut, how would a service be able to locate these _hello-world-greeter_ allocations? Well sure, you could integrate a Consul Client into these other services that want to connect with a _hello-world-greeter_, but there's something a little simpler that you can do as a first approach, use the DNS endpoint that Consul exposes by default to fetch theses addresses and ports in the form of a `SRV` record.\n\n```shell\n$ dig @127.0.0.1 -p 8600 hello-world-greeter.service.dev-general.consul. SRV\n\n; \u003c\u003c\u003e\u003e DiG 9.10.6 \u003c\u003c\u003e\u003e @127.0.0.1 -p 8600 hello-world-greeter.service.dev-general.consul. SRV\n; (1 server found)\n;; global options: +cmd\n;; Got answer:\n;; -\u003e\u003eHEADER\u003c\u003c- opcode: QUERY, status: NOERROR, id: 11398\n;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 5\n;; WARNING: recursion requested but not available\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 4096\n;; QUESTION SECTION:\n;hello-world-greeter.service.dev-general.consul.\tIN SRV\n\n;; ANSWER SECTION:\nhello-world-greeter.service.dev-general.consul.\t0 IN SRV 1 1 28098 7f000001.addr.dev-general.consul.\nhello-world-greeter.service.dev-general.consul.\t0 IN SRV 1 1 31623 7f000001.addr.dev-general.consul.\n\n;; ADDITIONAL SECTION:\n7f000001.addr.dev-general.consul. 0 IN\tA\t127.0.0.1\ntreepie.local.node.dev-general.consul. 0 IN TXT\t\"consul-network-segment=\"\n7f000001.addr.dev-general.consul. 0 IN\tA\t127.0.0.1\ntreepie.local.node.dev-general.consul. 0 IN TXT\t\"consul-network-segment=\"\n\n;; Query time: 0 msec\n;; SERVER: 127.0.0.1#8600(127.0.0.1)\n;; WHEN: Thu Jan 06 17:01:41 PST 2022\n;; MSG SIZE  rcvd: 302\n```\n\nGiven the output of this `SRV` record you should be able to browse to http://localhost:28098 or http://localhost:31623 and be greeted.\n\nIf you follow [these docs](https://learn.hashicorp.com/tutorials/consul/dns-forwarding) you should also be able add Consul to your list of resolvers.\n\nAssuming Consul is set as one of my resolvers I should also be able to browse to either of the following:\n- http://hello-world-greeter.service.dev-general.consul:28098\n- http://hello-world-greeter.service.dev-general.consul:31623\n\n## Nomad Workshop 3 - Storing Configuration in Consul\nRe-deploying hello-world every time we want to change the name we're saying hello to seems a little heavy handed when we really just need to update our greet config file template and restart our greet task. So, how can we accomplish this without another deployment? Consul to the rescue!\n\nIt's best if you follow the documentation here to update your job specification at `1_HELLO_WORLD/job.go` and your vars file at `1_HELLO_WORLD/vars.go`, but if you get lost you can see the final product under `3_HELLO_CONSUL/job.go` and `3_HELLO_CONSUL/vars.go`.\n\nNomad ships with a tool called `consul-template` that we've actually already been making good use of. For example, the _template_ stanza of our _greet_ uses `consul-template` to template our _config.yml_ file.\n\n```hcl\ntemplate {\n  data        = var.config-yml-template\n  destination = \"${NOMAD_ALLOC_DIR}/config.yml\"\n  change_mode = \"restart\"\n}\n```\n\nWe can instruct `consul-template` to retrieve the name of the person we're saying hello to from the Consul K/V store while deploying our _greeter_ allocations. After an initial deploy, Nomad will then watch the Consul K/V path for changes. If a change is detected, Nomad will re-run `consul-template` with the updated value and then take the action specified by the `change_mode` attribute of our `template` stanza. In our case, it will `restart` the _greet_ task.\n\n### Modify our _greet_ config file template to source from Consul\nOur template var in `1_HELLO_WORLD/vars.go` is currently:\n```hcl\nconfig-yml-template = \u003c\u003c-EOF\n  ---\n  name: \"YOUR NAME\"\n  port: {{ env \"NOMAD_ALLOC_PORT_http\" }}\n  \nEOF\n```\n\nWe should edit it like so:\n```hcl\nconfig-yml-template = \u003c\u003c-EOF\n  {{ with $v := key \"hello-world/config\" | parseYAML }}\n  ---\n  name: \"{{ $v.name }}\"\n  port: {{ env \"NOMAD_ALLOC_PORT_http\" }}\n  {{ end }}\n  \nEOF\n```\nHere we're setting a variable `v` with the parsed contents of the YAML stored at the Consul K/V path of `hello-world/config`. We're then templating the value of the `name` key.\n\nNote: make sure you leave an empty newline at the end of your vars file otherwise the Nomad CLI won't be able to parse it properly.\n\n### Push our YAML formatted config to Consul\nWe could do this with the Consul web UI but using the `consul` CLI is much\nfaster.\n\n```shell\n$ consul kv put 'hello-world/config' 'name: \"Samantha\"'\nSuccess! Data written to: hello-world/config\n\n$ consul kv get 'hello-world/config'\nname: \"Samantha\"\n```\n\nHere we've pushed a `name` key with a value of `\"Samantha\"` to the\n`hello-world/config` Consul K/V. We have also fetched it just to be sure.\n\n### Check the plan output for our updated _hello-world_ job\n```shell\n$ nomad job plan -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n+/- Job: \"hello-world\"\n+/- Task Group: \"greeter\" (1 create/destroy update, 1 ignore)\n  +/- Task: \"greet\" (forces create/destroy update)\n    +/- Template {\n          ChangeMode:   \"restart\"\n          ChangeSignal: \"\"\n          DestPath:     \"${NOMAD_ALLOC_DIR}/config.yml\"\n      +/- EmbeddedTmpl: \"---\\nname: \\\"Samantha\\\"\\nport: {{ env \\\"NOMAD_ALLOC_PORT_http\\\" }}\\n\\n\" =\u003e \"{{ with $v := key \\\"hello-world/config\\\" | parseYAML }}\\n---\\nname: \\\"{{ $v.name }}\\\"\\nport: {{ env \\\"NOMAD_ALLOC_PORT_http\\\" }}\\n{{ end }}\\n  \\n\"\n          Envvars:      \"false\"\n          LeftDelim:    \"{{\"\n          Perms:        \"0644\"\n          RightDelim:   \"}}\"\n          SourcePath:   \"\"\n          Splay:        \"5000000000\"\n          VaultGrace:   \"0\"\n        }\n\nScheduler dry-run:\n- All tasks successfully allocated.\n```\n\nAlright this looks like it should work.\n\n### Run our updated _hello-world_ job\n```shell\n$ nomad job run -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n```\n\n### Let's fetch the ports of our 2 new _greeter_ allocations\n```shell\n$ dig @127.0.0.1 -p 8600 hello-world-greeter.service.dev-general.consul. SRV | grep hello-world-greeter.service\n; \u003c\u003c\u003e\u003e DiG 9.10.6 \u003c\u003c\u003e\u003e @127.0.0.1 -p 8600 hello-world-greeter.service.dev-general.consul. SRV\n;hello-world-greeter.service.dev-general.consul.\tIN SRV\nhello-world-greeter.service.dev-general.consul.\t0 IN SRV 1 1 30226 7f000001.addr.dev-general.consul.\nhello-world-greeter.service.dev-general.consul.\t0 IN SRV 1 1 28843 7f000001.addr.dev-general.consul.\n```\n\nYou should be able to browse to http://localhost:30226 or http://localhost:28843\nand be greeted.\n\n### Update the value of `name` in Consul\n```shell\n$ consul kv put 'hello-world/config' 'name: \"SAMANTHA\"'\nSuccess! Data written to: hello-world/config\n\n$ consul kv get 'hello-world/config'\nname: \"SAMANTHA\"\n```\n\n### Browse to one of our _greeter_ allocation URLs again\nIf you reload http://localhost:30226 or http://localhost:28843 you should be greeted by your updated name. This did not require a deployment; Nomad was notified that the value at the Consul K/V path of `hello-world/config` had been updated. Nomad re-templated our _greet_ config file and then restarted our _greet_ task just like we asked (in the `template` stanza).\n\nNote: you may have observed a pause of about 5 seconds between when the _greet_ task being stopped and when it was started again. This is the default wait time between stop and start operations and it's entirely configurable on a per group or task basis.\n\n## Nomad Workshop 4 - Load Balancing with Consul and Traefik\nIt's time to scale our hello-world job again, but this time we're going to do so with the help of a load balancer called Traefik. Traefik is an open-source edge router written in Go with first-party Consul integration. Please ensure that you've got version 2.5.x installed and in your path.\n\n### Install Traefik\nI was able to fetch the binary via Homebrew on MacOS but you can always fetchthe latest binary for your platform from their[releases](https://github.com/traefik/traefik/releases) page.\n\n### Add a Traefik config template to our vars file\nOut Traefik config is a minimal TOML file with some consul-template syntax.\n\n```hcl\ntraefik-config-template = \u003c\u003c-EOF\n  [entryPoints.http]\n  address = \":{{ env \"NOMAD_ALLOC_PORT_http\" }}\"\n  \n  [entryPoints.traefik]\n  address = \":{{ env \"NOMAD_ALLOC_PORT_dashboard\" }}\"\n  \n  [api]\n  dashboard = true\n  insecure = true\n  \n  [providers.consulCatalog]\n  prefix = \"hello-world-lb\"\n  exposedByDefault = false\n  \n  [providers.consulCatalog.endpoint]\n  address = \"{{ env \"CONSUL_HTTP_ADDR\" }}\"\n  scheme = \"http\"\nEOF\n\n```\n\n1. Our first two declarations are HTTP `entryPoints` which are similar to `http { server {` in NGINX parlance. The only attribute we need need to template is the `\u003chostname\u003e:\u003cport\u003e`. The first is for our _greeter_ load-balancer and the second is for the Traefik dashboard (not required). For both of these we can rely on the Nomad environment variables for two new ports we're going to add to our job specification, these will be called `http` and `dashboard`. Again, we prefix these with `NOMAD_ALLOC_PORT_` and Nomad will do the rest for us.\n1. The next declaration is `api`. Here we're just going to enable the `dashboard` and disable `tls`.\n1. The final declarations enable and configure the `consulCatalog` provider. There are two attributes in the first declaration. `prefix` configures the provider to exclusively query for Consul catalog hosts tagged with `prefix:hello-world-lb`. `exposedByDefault` (false) configures the provider to query only Consul services tagged with `traefik.enable=true`. The last declaration instructs the provider on how to connect to Consul. Because Nomad and Consul are already tightly integrated we can template `address` with the `CONSUL_HTTP_ADDR` env var. As for `scheme`, since we're using Consul in `dev` mode this is `http`.\n1. Ensure that you add a newline at the end of this file otherwise Nomad will be unable to parse it.\n\n### Declare a variable for our Traefik config template in our job specification\nNear the top, just below our existing `config-yml-template` variable declaration add the following:\n\n```hcl\nvariable \"traefik-config-template\" {\n  type = string\n}\n```\n\n### Add a new group for our load-balancer above _greeter_\nOur Traefik load-balancer will route requests on port `8080` to any healthy _greeter_ allocation. Traefik will also expose a dashboard on port `8081`. We've added static ports for both the load-balancer (`http`) and the dashboard under the `network` stanza. We've also added some TCP and HTTP readiness checks that reference these ports in our new _hello-world-lb_ Consul service.\n\n```hcl\n  group \"load-balancer\" {\n    count = 1\n\n    network {\n\n      port \"http\" {\n        static = 8080\n      }\n\n      port \"dashboard\" {\n        static = 8081\n      }\n    }\n\n    service {\n      name = \"hello-world-lb\"\n      port = \"http\"\n\n      check {\n        name     = \"ready-tcp\"\n        type     = \"tcp\"\n        port     = \"http\"\n        interval = \"3s\"\n        timeout  = \"2s\"\n      }\n\n      check {\n        name     = \"ready-http\"\n        type     = \"http\"\n        port     = \"http\"\n        path     = \"/\"\n        interval = \"3s\"\n        timeout  = \"2s\"\n      }\n\n      check {\n        name     = \"ready-tcp\"\n        type     = \"tcp\"\n        port     = \"dashboard\"\n        interval = \"3s\"\n        timeout  = \"2s\"\n      }\n\n      check {\n        name     = \"ready-http\"\n        type     = \"http\"\n        port     = \"dashboard\"\n        path     = \"/\"\n        interval = \"3s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"traefik\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"traefik\"\n        args = [\n          \"--configFile=${NOMAD_ALLOC_DIR}/traefik.toml\",\n        ]\n      }\n\n      template {\n        data        = var.traefik-config-template\n        destination = \"${NOMAD_ALLOC_DIR}/traefik.toml\"\n        change_mode = \"restart\"\n      }\n    }\n  }\n```\n\n### Lastly, add some tags to the _hello-world-greeter_ service\nUnder the _greeter_ group you should see the `service` stanza. Adjust yours to include the tags from the Traefik config.\n\n```hcl\nservice {\n  name = \"hello-world-greeter\"\n  port = \"http\"\n  tags = [\n    \"hello-world-lb.enable=true\",\n    \"hello-world-lb.http.routers.http.rule=Path(`/`)\",\n  ]\n```\n\n\n### Check the plan output for our updated _hello-world_ job\n```shell\n$ nomad job plan -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n+/- Job: \"hello-world\"\n+/- Task Group: \"greeter\" (2 in-place update)\n  +/- Service {\n      AddressMode:       \"auto\"\n      EnableTagOverride: \"false\"\n      Name:              \"hello-world-greeter\"\n      Namespace:         \"default\"\n      OnUpdate:          \"require_healthy\"\n      PortLabel:         \"http\"\n      TaskName:          \"\"\n    + Tags {\n      + Tags: \"hello-world-lb.enable=true\"\n      + Tags: \"hello-world-lb.http.routers.http.rule=Path(`/`)\"\n      }\n      }\n      Task: \"greet\"\n\n+   Task Group: \"load-balancer\" (1 create)\n    + Count: \"1\" (forces create)\n    + RestartPolicy {\n      + Attempts: \"2\"\n      + Delay:    \"15000000000\"\n      + Interval: \"1800000000000\"\n      + Mode:     \"fail\"\n      }\n    + ReschedulePolicy {\n      + Attempts:      \"0\"\n      + Delay:         \"30000000000\"\n      + DelayFunction: \"exponential\"\n      + Interval:      \"0\"\n      + MaxDelay:      \"3600000000000\"\n      + Unlimited:     \"true\"\n      }\n    + EphemeralDisk {\n      + Migrate: \"false\"\n      + SizeMB:  \"300\"\n      + Sticky:  \"false\"\n      }\n    + Update {\n      + AutoPromote:      \"false\"\n      + AutoRevert:       \"false\"\n      + Canary:           \"0\"\n      + HealthCheck:      \"checks\"\n      + HealthyDeadline:  \"300000000000\"\n      + MaxParallel:      \"1\"\n      + MinHealthyTime:   \"10000000000\"\n      + ProgressDeadline: \"600000000000\"\n      }\n    + Network {\n        Hostname: \"\"\n      + MBits:    \"0\"\n        Mode:     \"\"\n      + Static Port {\n        + HostNetwork: \"default\"\n        + Label:       \"dashboard\"\n        + To:          \"0\"\n        + Value:       \"8081\"\n        }\n      + Static Port {\n        + HostNetwork: \"default\"\n        + Label:       \"http\"\n        + To:          \"0\"\n        + Value:       \"8080\"\n        }\n      }\n    + Service {\n      + AddressMode:       \"auto\"\n      + EnableTagOverride: \"false\"\n      + Name:              \"hello-world-lb\"\n      + Namespace:         \"default\"\n      + OnUpdate:          \"require_healthy\"\n      + PortLabel:         \"http\"\n        TaskName:          \"\"\n      + Check {\n          AddressMode:            \"\"\n          Body:                   \"\"\n          Command:                \"\"\n        + Expose:                 \"false\"\n        + FailuresBeforeCritical: \"0\"\n          GRPCService:            \"\"\n        + GRPCUseTLS:             \"false\"\n          InitialStatus:          \"\"\n        + Interval:               \"3000000000\"\n          Method:                 \"\"\n        + Name:                   \"ready-http\"\n        + OnUpdate:               \"require_healthy\"\n        + Path:                   \"/\"\n        + PortLabel:              \"dashboard\"\n          Protocol:               \"\"\n        + SuccessBeforePassing:   \"0\"\n        + TLSSkipVerify:          \"false\"\n          TaskName:               \"\"\n        + Timeout:                \"2000000000\"\n        + Type:                   \"http\"\n        }\n      + Check {\n          AddressMode:            \"\"\n          Body:                   \"\"\n          Command:                \"\"\n        + Expose:                 \"false\"\n        + FailuresBeforeCritical: \"0\"\n          GRPCService:            \"\"\n        + GRPCUseTLS:             \"false\"\n          InitialStatus:          \"\"\n        + Interval:               \"3000000000\"\n          Method:                 \"\"\n        + Name:                   \"ready-tcp\"\n        + OnUpdate:               \"require_healthy\"\n          Path:                   \"\"\n        + PortLabel:              \"dashboard\"\n          Protocol:               \"\"\n        + SuccessBeforePassing:   \"0\"\n        + TLSSkipVerify:          \"false\"\n          TaskName:               \"\"\n        + Timeout:                \"2000000000\"\n        + Type:                   \"tcp\"\n        }\n      }\n    + Task: \"traefik\" (forces create)\n      + Driver:        \"raw_exec\"\n      + KillTimeout:   \"5000000000\"\n      + Leader:        \"false\"\n      + ShutdownDelay: \"0\"\n      + Config {\n        + args[0]: \"--configFile=${NOMAD_ALLOC_DIR}/traefik.toml\"\n        + command: \"traefik\"\n        }\n      + Resources {\n        + CPU:         \"100\"\n        + Cores:       \"0\"\n        + DiskMB:      \"0\"\n        + IOPS:        \"0\"\n        + MemoryMB:    \"300\"\n        + MemoryMaxMB: \"0\"\n        }\n      + LogConfig {\n        + MaxFileSizeMB: \"10\"\n        + MaxFiles:      \"10\"\n        }\n      + Template {\n        + ChangeMode:   \"restart\"\n          ChangeSignal: \"\"\n        + DestPath:     \"${NOMAD_ALLOC_DIR}/traefik.toml\"\n        + EmbeddedTmpl: \"[entryPoints.http]\\naddress = \\\":{{ env \\\"NOMAD_ALLOC_PORT_http\\\" }}\\\"\\n  \\n[entryPoints.traefik]\\naddress = \\\":{{ env \\\"NOMAD_ALLOC_PORT_dashboard\\\" }}\\\"\\n  \\n[api]\\ndashboard = true\\ninsecure = true\\n  \\n[providers.consulCatalog]\\nprefix = \\\"hello-world-lb\\\"\\nexposedByDefault = false\\n  \\n[providers.consulCatalog.endpoint]\\naddress = \\\"{{ env \\\"CONSUL_HTTP_ADDR\\\" }}\\\"\\nscheme = \\\"http\\\"\\n\"\n        + Envvars:      \"false\"\n        + LeftDelim:    \"{{\"\n        + Perms:        \"0644\"\n        + RightDelim:   \"}}\"\n          SourcePath:   \"\"\n        + Splay:        \"5000000000\"\n        + VaultGrace:   \"0\"\n        }\n\nScheduler dry-run:\n- All tasks successfully allocated.\n```\n\nAlright this looks like it should work.\n\n### Run our updated _hello-world_ job\n```shell\n$ nomad job run -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go\n```\n\n### Browse to our new Traefik load-balancer\n1. Open http://localhost:8080 and ensure that you're being greeted\n1. Open http://localhost:8081 and ensure that it loads succesfully\n\n### Inspect the Consul provided backend configuration via the Traefik dashboard\n1. Open: http://localhost:8081/dashboard/#/http/services/hello-world-greeter@consulcatalog\n1. You should find your 2 existing _greeter_ allocations listed by their full address `\u003chostname\u003e:\u003cport\u003e`.\n\n### Perform some scaling of our _greeter_ allocations \nIt's time to scale our _greeter_ allocations again, except this time we have a load-balancer that will reconfigure itself when the count is increased.\n\n1. You can scale allocations via the job specification but you can also temporarily scale a given `job \u003e\u003e group` via the nomad CLI:\n   ```shell\n   $ nomad job scale \"hello-world\" \"greeter\" 3\n   ```\n1. Refresh: http://localhost:8081/dashboard/#/http/services/hello-world-greeter@consulcatalog\n1. You should see 3 _greeter_ allocations\n1. You can also temporarily de-scale a given `job \u003e\u003e group` via the nomad CLI:\n   ```shell\n   $ nomad job scale \"hello-world\" \"greeter\" 2\n   ```\n1. Refresh: http://localhost:8081/dashboard/#/http/services/hello-world-greeter@consulcatalog\n1. You should see 2 _greeter_ allocations like before\n","funding_links":[],"categories":["HCL"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fletsencrypt%2Fhashicorp-lessons","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fletsencrypt%2Fhashicorp-lessons","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fletsencrypt%2Fhashicorp-lessons/lists"}