https://github.com/launchbynttdata/tf-azurerm-module_collection-hubspoke_monitor
- Host: GitHub
- URL: https://github.com/launchbynttdata/tf-azurerm-module_collection-hubspoke_monitor
- Owner: launchbynttdata
- License: apache-2.0
- Created: 2024-03-25T15:31:50.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-07-03T15:34:24.000Z (10 months ago)
- Last Synced: 2025-01-03T10:46:09.027Z (4 months ago)
- Topics: azure, collection, terraform
- Language: HCL
- Size: 156 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Codeowners: CODEOWNERS
# tf-azurerm-module_collection-hubspoke_monitor
[](https://opensource.org/licenses/Apache-2.0)
[](https://creativecommons.org/licenses/by-nc-nd/4.0/)

## Overview
This module helps deploy services to monitor network and networking components deployed as part of hub-spoke network architecture.
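This collection can be consumed as a standard terraform module. A minimal invocation might look like the following sketch; the source address and resource references are placeholders, while the input names (`location`, `firewall_id`, `network_security_group_id`, and the optional flags) come from the Inputs table in this README:

```
module "hubspoke_monitor" {
  # Placeholder source – point at this repository or your registry mirror.
  source = "git::https://github.com/launchbynttdata/tf-azurerm-module_collection-hubspoke_monitor.git"

  location                  = "eastus2"
  firewall_id               = azurerm_firewall.hub.id
  network_security_group_id = azurerm_network_security_group.spoke.id

  # Optional: deploy a network watcher and enable NSG flow logs.
  create_network_watcher           = true
  network_watcher_flow_log_enabled = true
}
```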
1. Azure Firewall's structured logs provide a more detailed view of firewall events. They include information such as source and destination IP addresses, protocols, port numbers, and the action taken by the firewall. They also include more metadata, such as the time of the event and the name of the Azure Firewall instance. This module helps attach the `diagnostic setting` to Azure Firewall. More information can be found [here](https://learn.microsoft.com/en-us/azure/firewall/firewall-structured-logs).
2. Network security group (NSG) flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a network security group. Flow data is sent to Azure Storage, from where you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice. This module deploys a network watcher and a storage account, and captures the flow logs associated with the NSG. More information can be found [here](https://learn.microsoft.com/en-us/azure/network-watcher/nsg-flow-logs-overview).

## Pre-Commit hooks
The [.pre-commit-config.yaml](.pre-commit-config.yaml) file defines `pre-commit` hooks relevant to terraform, golang, and common linting tasks. There are no custom hooks added.
The `commitlint` hook enforces a specific commit message format. A commit message contains the following structural elements, which communicate intent to the consumers of your commit messages:
- **fix**: a commit of the type `fix` patches a bug in your codebase (this correlates with PATCH in Semantic Versioning).
- **feat**: a commit of the type `feat` introduces a new feature to the codebase (this correlates with MINOR in Semantic Versioning).
- **BREAKING CHANGE**: a commit that has a footer `BREAKING CHANGE:`, or appends a `!` after the type/scope, introduces a breaking API change (correlating with MAJOR in Semantic Versioning). A BREAKING CHANGE can be part of commits of any type. Footers other than `BREAKING CHANGE:` may be provided and follow a convention similar to the git trailer format.
- **build**: a commit of the type `build` adds changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm)
- **chore**: a commit of the type `chore` adds changes that don't modify src or test files
- **ci**: a commit of the type `ci` adds changes to our CI configuration files and scripts (example scopes: Travis, Circle, BrowserStack, SauceLabs)
- **docs**: a commit of the type `docs` adds documentation only changes
- **perf**: a commit of the type `perf` adds code change that improves performance
- **refactor**: a commit of the type `refactor` adds code change that neither fixes a bug nor adds a feature
- **revert**: a commit of the type `revert` reverts a previous commit
- **style**: a commit of the type `style` adds code changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
- **test**: a commit of the type `test` adds missing tests or corrects existing tests

The base configuration used for this project is [commitlint-config-conventional (based on the Angular convention)](https://github.com/conventional-changelog/commitlint/tree/master/@commitlint/config-conventional#type-enum).
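For example, commit messages following this convention might look like the lines below (the scopes and names are illustrative, not taken from this repository's history):

```
feat(firewall): add diagnostic setting for structured logs
fix(flow-log): correct storage account reference
docs: clarify pre-commit installation steps

refactor!: rename network watcher variables

BREAKING CHANGE: `watcher_name` has been renamed to `network_watcher_name`
```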
If you are a developer using vscode, [this](https://marketplace.visualstudio.com/items?itemName=joshbolduc.commitlint) plugin may be helpful.
`detect-secrets-hook` prevents new secrets from being introduced into the baseline. TODO: INSERT DOC LINK ABOUT HOOKS
For the `pre-commit` hooks to work properly:
- You need to have the pre-commit package manager installed. [Here](https://pre-commit.com/#install) are the installation instructions.
- `pre-commit` installs all the hooks by default except for the `commitlint` hook, which must be installed manually using the command below:
```
pre-commit install --hook-type commit-msg
```

## To test the resource group module locally
1. For development/enhancements to this module locally, you'll need to install all of its components. This is controlled by the `configure` target in the project's [`Makefile`](./Makefile). Before you can run `configure`, familiarize yourself with the variables in the `Makefile` and ensure they're pointing to the right places.
```
make configure
```

This adds in several files and directories that are ignored by `git`. They expose many new Make targets.
2. _THIS STEP APPLIES ONLY TO MICROSOFT AZURE. IF YOU ARE USING A DIFFERENT PLATFORM PLEASE SKIP THIS STEP._ The first target you care about is `env`. This is the common interface for setting up environment variables. The values of the environment variables will be used to authenticate with cloud provider from local development workstation.
The `make configure` command will bring down an `azure_env.sh` file onto the local workstation. The developer will need to modify this file, replacing the environment variable values with relevant values.
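The variables in question are the standard `ARM_*` service-principal variables that the azurerm provider and terratest read. As an illustration (the values below are placeholders, and the exact contents of `azure_env.sh` may differ):

```shell
# Placeholder values – replace with your service principal's details.
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<client-secret>"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
```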
These environment variables are used by the `terratest` integration suite.
The service principal used for authentication (the value of `ARM_CLIENT_ID`) should have the privileges below on the resource group within the subscription.
```
"Microsoft.Resources/subscriptions/resourceGroups/write"
"Microsoft.Resources/subscriptions/resourceGroups/read"
"Microsoft.Resources/subscriptions/resourceGroups/delete"
```

Then run this make target to set the environment variables on the developer workstation.
```
make env
```

3. The next target you care about is `check`.
**Pre-requisites**
Before running this target, ensure that the files mentioned below have been created on the local workstation, under the root directory of the git repository that contains the code for primitives/segments. Note that these files are `azure` specific. If the primitive/segment under development uses a cloud provider other than azure, this section may not be relevant.

- A file named `provider.tf` with the contents below
```
provider "azurerm" {
features {}
}
```

- A file named `terraform.tfvars` which contains key-value pairs of the variables used.
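A minimal `terraform.tfvars` for this module might look like the following sketch (the resource IDs are placeholders; the variable names come from the Inputs table in this README):

```
location                  = "eastus2"
firewall_id               = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<fw-name>"
network_security_group_id = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/networkSecurityGroups/<nsg-name>"
```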
Note that since these files are added in `.gitignore`, they will not be checked in to the primitive/segment's git repo.
After creating these files, run the following to execute the tests associated with the primitive/segment:
```
make check
```

If the `make check` target is successful, the developer is good to commit the code to the primitive/segment's git repo.
The `make check` target
- runs `terraform` commands to `lint`, `validate`, and `plan` the terraform code.
- runs `conftest`. `conftest` makes sure `policy` checks are successful.
- runs `terratest`. This is the integration test suite.
- runs `opa` tests

## Requirements
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | <= 1.5.5 |
| [azurerm](#requirement\_azurerm) | ~> 3.77 |

## Providers
No providers.
## Modules
| Name | Source | Version |
|------|--------|---------|
| [resource\_names](#module\_resource\_names) | terraform.registry.launch.nttdata.com/module_library/resource_name/launch | ~> 1.0 |
| [resource\_group](#module\_resource\_group) | terraform.registry.launch.nttdata.com/module_primitive/resource_group/azurerm | ~> 1.0 |
| [monitor\_diagnostic\_setting](#module\_monitor\_diagnostic\_setting) | terraform.registry.launch.nttdata.com/module_primitive/monitor_diagnostic_setting/azurerm | ~> 1.0 |
| [log\_analytics\_workspace](#module\_log\_analytics\_workspace) | terraform.registry.launch.nttdata.com/module_primitive/log_analytics_workspace/azurerm | ~> 1.0 |
| [storage\_account](#module\_storage\_account) | terraform.registry.launch.nttdata.com/module_primitive/storage_account/azurerm | ~> 1.0 |
| [network\_watcher](#module\_network\_watcher) | terraform.registry.launch.nttdata.com/module_primitive/network_watcher/azurerm | ~> 1.0 |
| [network\_watcher\_flow\_log](#module\_network\_watcher\_flow\_log) | terraform.registry.launch.nttdata.com/module_primitive/network_watcher_flow_log/azurerm | ~> 1.0 |

## Resources
No resources.
## Inputs
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| [resource\_names\_map](#input\_resource\_names\_map) | A map of key to resource\_name that will be used by tf-launch-module\_library-resource\_name to generate resource names | <pre>map(object({<br>  name       = string<br>  max_length = optional(number, 60)<br>  region     = optional(string, "eastus2")<br>}))</pre> | `{}` | no |
| [instance\_env](#input\_instance\_env) | Number that represents the instance of the environment. | `number` | `0` | no |
| [instance\_resource](#input\_instance\_resource) | Number that represents the instance of the resource. | `number` | `0` | no |
| [logical\_product\_family](#input\_logical\_product\_family) | (Required) Name of the product family for which the resource is created. Example: org\_name, department\_name. | `string` | `"launch"` | no |
| [logical\_product\_service](#input\_logical\_product\_service) | (Required) Name of the product service for which the resource is created. For example, backend, frontend, middleware etc. | `string` | `"network"` | no |
| [class\_env](#input\_class\_env) | (Required) Environment where resource is going to be deployed. For example. dev, qa, uat | `string` | `"dev"` | no |
| [location](#input\_location) | Azure region to use | `string` | n/a | yes |
| [firewall\_id](#input\_firewall\_id) | The ID of the firewall to apply the diagnostic setting to. | `string` | n/a | yes |
| [sku](#input\_sku) | (Optional) Specifies the SKU of the Log Analytics Workspace. Possible values are Free, PerNode, Premium, Standard, Standalone, Unlimited, CapacityReservation, and PerGB2018 (new SKU as of 2018-04-03). Defaults to PerGB2018. | `string` | `"Free"` | no |
| [retention\_in\_days](#input\_retention\_in\_days) | (Optional) The workspace data retention in days. Possible values are either 7 (Free Tier only) or range between 30 and 730. | `number` | `"30"` | no |
| [identity](#input\_identity) | (Optional) An identity block as defined below. | <pre>object({<br>  type = string<br>})</pre> | `null` | no |
| [log\_analytics\_destination\_type](#input\_log\_analytics\_destination\_type) | (Optional) Specifies the type of destination for the logs. Possible values are 'Dedicated' or 'Workspace'. | `string` | `null` | no |
| [enabled\_log](#input\_enabled\_log) | n/a | <pre>list(object({<br>  category_group = optional(string, "allLogs")<br>  category       = optional(string, null)<br>}))</pre> | `null` | no |
| [metric](#input\_metric) | n/a | <pre>object({<br>  category = optional(string)<br>  enabled  = optional(bool)<br>})</pre> | `null` | no |
| [network\_security\_group\_id](#input\_network\_security\_group\_id) | The ID of the Network Security Group to apply the flow log to. | `string` | n/a | yes |
| [account\_tier](#input\_account\_tier) | Defines the Tier to use for this storage account. Valid options are Standard and Premium. | `string` | `"Standard"` | no |
| [account\_kind](#input\_account\_kind) | Defines the Kind to use for this storage account. Valid options are Storage, StorageV2, BlobStorage, FileStorage, BlockBlobStorage. | `string` | `"StorageV2"` | no |
| [account\_replication\_type](#input\_account\_replication\_type) | Defines the type of replication to use for this storage account. Valid options are LRS, GRS, RAGRS, ZRS, GZRS, RAGZRS. | `string` | `"LRS"` | no |
| [enable\_https\_traffic\_only](#input\_enable\_https\_traffic\_only) | Allows https traffic only to storage service if set to true. | `bool` | `true` | no |
| [create\_network\_watcher](#input\_create\_network\_watcher) | Create network watcher | `bool` | `false` | no |
| [network\_watcher\_flow\_log\_enabled](#input\_network\_watcher\_flow\_log\_enabled) | Enable network watcher flow log | `bool` | `true` | no |
| [retention\_policy](#input\_retention\_policy) | The retention policy for the Network Watcher Flow Log | <pre>object({<br>  enabled = bool<br>  days    = number<br>})</pre> | `null` | no |
| [traffic\_analytics](#input\_traffic\_analytics) | The traffic analytics settings for the Network Watcher Flow Log | <pre>object({<br>  enabled             = bool<br>  interval_in_minutes = number<br>})</pre> | `null` | no |

## Outputs
| Name | Description |
|------|-------------|
| [resource\_group\_id](#output\_resource\_group\_id) | resource group id |
| [resource\_group\_name](#output\_resource\_group\_name) | resource group name |
| [diagnostic\_setting\_id](#output\_diagnostic\_setting\_id) | The ID of the Diagnostic Setting. |
| [diagnostic\_setting\_name](#output\_diagnostic\_setting\_name) | The name of the Diagnostic Setting. |
| [log\_analytics\_workspace\_id](#output\_log\_analytics\_workspace\_id) | The Log Analytics Workspace ID. |
| [log\_analytics\_workspace\_workspace\_id](#output\_log\_analytics\_workspace\_workspace\_id) | The Workspace (or Customer) ID for the Log Analytics Workspace. |
| [log\_analytics\_workspace\_name](#output\_log\_analytics\_workspace\_name) | The Log Analytics Workspace name. |
| [log\_analytics\_workspace\_primary\_shared\_key](#output\_log\_analytics\_workspace\_primary\_shared\_key) | Value of the primary shared key for the Log Analytics Workspace. |
| [log\_analytics\_workspace\_secondary\_shared\_key](#output\_log\_analytics\_workspace\_secondary\_shared\_key) | Value of the secondary shared key for the Log Analytics Workspace. |
| [storage\_account\_id](#output\_storage\_account\_id) | The ID of the Storage Account. |
| [storage\_account\_primary\_location](#output\_storage\_account\_primary\_location) | The primary location of the storage account. |
| [storage\_account\_primary\_blob\_endpoint](#output\_storage\_account\_primary\_blob\_endpoint) | The endpoint URL for blob storage in the primary location. |
| [storage\_account\_storage\_containers](#output\_storage\_account\_storage\_containers) | storage container resource map |
| [storage\_account\_storage\_queues](#output\_storage\_account\_storage\_queues) | storage queues resource map |
| [storage\_account\_storage\_shares](#output\_storage\_account\_storage\_shares) | storage share resource map |
| [network\_watcher\_flow\_log\_id](#output\_network\_watcher\_flow\_log\_id) | The ID of the Network Watcher Flow Log instance. |
| [network\_watcher\_flow\_log\_name](#output\_network\_watcher\_flow\_log\_name) | The name of the Network Watcher Flow Log. |