https://github.com/iknowjason/ghostsplayground

A small security playground implementation of GHOSTS User Simulation framework with an Active Directory deployment and Elastic.
# GHOSTS Security Playground
![GHOSTS](images/ghosts-overview.png "GHOSTS Overview")
## Overview
GHOSTS Security Playground is a terraform template that creates a lab implementation of the [GHOSTS NPC User simulation framework](https://cmu-sei.github.io/GHOSTS/) created by Carnegie Mellon University. It also builds customizable capabilities for an AD pentest lab or detection engineering. It builds the following resources hosted in AWS:
* GHOSTS version 8.0 server with API and Grafana dashboards pre-loaded
* One Active Directory Domain Controller loaded with 1,000 AD users, groups, and OUs
* One Elastic server with Kibana
* One Windows Client (Windows Server 2022) with automated deployment of the following:
- GHOSTS version 8.0 client with customizable timeline and app configuration
- Atomic Red Team (ART) and PurpleSharp
- Sysmon with customizable config
- Automatically ships logs to Elastic via Winlogbeat client
    - Active Directory Domain Joined

See the **Details** section for more information.
## Screenshots

[Some examples.](examples.md)

## Requirements and Setup
**Tested with:**
* Mac OS 13.4 or Ubuntu 22.04
* terraform 1.5.7

**Clone this repository:**
```
git clone https://github.com/iknowjason/GHOSTSPlayground
```

**Credentials Setup:**

Generate an IAM programmatic access key that has permissions to build resources in your AWS account. Set up your .env file to load these environment variables, or use the direnv tool to hook into your shell and populate .envrc. It should look something like this in your .env or .envrc:
```
export AWS_ACCESS_KEY_ID="VALUE"
export AWS_SECRET_ACCESS_KEY="VALUE"
```

## Build and Destroy Resources
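Before moving on, you can sanity-check that the credentials are actually exported in your current shell. A minimal POSIX sh sketch (the terraform run itself is what really validates them):

```shell
# Report any AWS credential variables that are not yet set.
check_aws_env() {
  missing=0
  for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
    [ -n "$(printenv "$v")" ] || { echo "missing: $v"; missing=1; }
  done
  return "$missing"
}
check_aws_env || echo "set the variables above before running terraform"
```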
### Run terraform init
Change into the ```GHOSTSPlayground/code``` working directory and type:
```
terraform init
```

### Run terraform plan or apply
```
terraform apply -auto-approve
```
or
```
terraform plan -out=run.plan
terraform apply run.plan
```

### Destroy resources
```
terraform destroy -auto-approve
```

### View terraform-created resources
The lab has been created with important terraform outputs showing services, endpoints, IP addresses, and credentials. To view them:
```
terraform output
```

## Estimated Cost
As this tool spins up cloud resources, it will result in charges to your AWS account. Efforts have been made to minimize the costs incurred and research the best options for most use cases. The best way to use this is to reference the estimated cost below, check your AWS costs daily, and verify them against the figures included here. Be sure to tear down all resources when not using them. See the ```AWS Pricing Calculator```:
https://calculator.aws/#/
| System | Instance | Hourly Cost | Default Region |
| ------------- |:-------------:|:-------:|:-------:|
| GHOSTS | t3a.medium | $0.0376 | us-east-2 |
| Elastic | t2.xlarge | $0.1856 | us-east-2 |
| DC | t2.small | $0.023 | us-east-2 |
| Win Client | t3a.medium | $0.0376 | us-east-2 |

**Note:** A couple of the systems use larger EBS volumes for data storage. See the EBS pricing page for exact prices:
https://aws.amazon.com/ebs/pricing/
The GHOSTS system uses a gp2 volume of 96 GB, which at $0.10 per GB-month of provisioned storage comes to $9.60 per month (billed on an hourly prorated basis).
The Elastic Search system uses a gp2 volume of 100 GB, or $10 per month.
The DC system uses a gp2 volume of 90 GB, or $9 per month.
## Details
### Important Firewall and Whitelisting

By default, when you run terraform apply, the security group is wide open to the public Internet, allowing ```0.0.0.0/0```. To lock this down, uncomment the ```src_ip``` line shown below: your public IPv4 address is determined via a query to ifconfig.so and ```terraform.tfstate``` is updated automatically. If your location changes, simply run ```terraform apply``` again to update the security groups with your new public IPv4 address. If the lookup returns a public IPv6 address, terraform will break; in that case you'll have to customize the whitelist yourself. To change the whitelist for custom rules, update this variable in ```sg.tf```:
```
locals {
#src_ip = "${chomp(data.http.firewall_allowed.response_body)}/32"
src_ip = "0.0.0.0/0"
}
```

### GHOSTS Linux Server
The GHOSTS Linux server is built on an Ubuntu Linux 22.04 AMI, using the AWS ```user-data``` feature to bootstrap the services automatically. The following local project files are important for customization:
| File | Description | Output |
| ------------- |:-------------:|:-------:|
| code/ghosts.tf | The terraform file that builds the Linux server | |
| code/files/ghosts/bootstrap.sh.tpl | The main bootstrap script. | code/output/ghosts/bootstrap.sh |
| code/files/ghosts/dashboards.yml | grafana dashboards config | |
| code/files/ghosts/datasources.yml.tpl | grafana config for datasources. | code/output/ghosts/datasources.yml |
| code/files/ghosts/docker-compose.yml | ghosts docker compose | |
| code/files/ghosts/npc.sh | a script loaded onto the server for localhost api commands | |
| code/files/ghosts/npc-ext.sh.tpl | a script to run api commands remotely. | code/output/ghosts/npc-ext.sh |
| code/s3-ghosts.tf | Uploads some of the ghosts linux files to s3 bucket | |

**Playing with GHOSTS:**
* After the Linux system boots, navigate to the GHOSTS APIServer and Grafana server endpoints by looking at ```terraform output```.
* Wait for the Windows client to do a Domain Join, and then start up the ghosts.exe using an Administrator cmd.exe
* Monitor the Grafana custom dashboard for timeline events
* Login to Kibana using terraform outputs and monitor events
* Run the npc.sh or external script to sync NPCs with each machine's current username

**Troubleshooting GHOSTS Linux Server:**
SSH into the GHOSTS server by looking in ```terraform output``` for this line:
```
SSH to GHOSTS
--------------
ssh -i ssh_key.pem [email protected]
```
Once in the system, tail the user-data logfile. You will see the steps from the ```code/files/ghosts/bootstrap.sh.tpl``` script running:
```
tail -f /var/log/user-data.log
```

**Customize GHOSTS Linux Server:**
To customize GHOSTS, you can modify the linux bootstrap script variables, instance size, security groups and other details in ```ghosts.tf```.
**Creating NPCs and using the API:**
Three methods can be used to automatically add NPCs to GHOSTS using the API.
1. The first method is already included in the main bootstrap script. It uses the API endpoint to generate a random NPC. The code section is included below.
```
request1="http://127.0.0.1:5000/api/npcsgenerate/one"
curl -X 'POST' \
"$request1" \
-H 'accept: application/json' \
-d ''
```
2. The second method is a script (```npc.sh```) deployed onto the system at /home/ubuntu/npc.sh. You can SSH into the system and run the script against the API endpoint. This endpoint ensures that an NPC is created for every machine currentusername that exists, so after you register the GHOSTS client agent, it will add an NPC and sync it to the machine's current username. You can edit ```code/files/ghosts/npc.sh``` to change the script's behavior at bootstrap.
3. The third method is a script that runs remotely against the API endpoint using the public IP address of the GHOSTS server. The templatefile is located at ```code/files/ghosts/npc-ext.sh.tpl``` and the final output script is located at ```output/ghosts/npc-ext.sh```. The script performs the same sync described in method 2.

**Terraform Output:**
View the terraform outputs for important GHOSTS Linux access information:
```
GHOSTS Grafana Console:
----------------
http://ec2-3-15-227-53.us-east-2.compute.amazonaws.com:3000

GHOSTS Grafana Credentials:
--------------------
admin:admin

GHOSTS API Server
-----------------
http://ec2-3-15-227-53.us-east-2.compute.amazonaws.com:5000

SSH to GHOSTS
--------------
ssh -i ssh_key.pem [email protected]
```

**Creating NPCs:**
Some steps here.
### Elasticsearch Linux Server

As of July 2024, this automatically deploys a docker deployment of Elastic 8.9.1. The Elasticsearch server includes Kibana and is built on an Ubuntu Linux 22.04 AMI, using the AWS ```user-data``` feature to bootstrap the services automatically. The following local project files are important for customization:
| File | Description | Output |
| ------------- |:-------------:|:-------:|
| code/elastic.tf | The terraform file that builds the Linux server | |
| code/files/elastic/bootstrap.sh.tpl | The bootstrap script. | code/output/elastic/bootstrap.sh |
| code/files/elastic/elasticsearch.yml.tpl | elasticsearch server config. | code/output/elastic/elasticsearch.yml |
| code/files/elastic/kibana.yml.tpl | kibana server config. | code/output/elastic/kibana.yml |
| code/files/elastic/logstash.conf.tpl | logstash server config. | code/output/elastic/logstash.conf |
| code/files/elastic/elasticsearch.service | service for elasticsearch | |
| code/files/elastic/kibana.service | service for kibana | |
| code/files/elastic/logstash.service | service for logstash | |

**Troubleshooting Elastic Linux Server:**
SSH into the Elastic server by looking in ```terraform output``` for this line:
```
SSH to Kibana
-------------
ssh -i ssh_key.pem [email protected]
```
Once in the system, tail the user-data logfile. You will see the steps from the ```code/files/elastic/bootstrap.sh.tpl``` script running:
```
tail -f /var/log/user-data.log
```

**Customize Elastic Linux Server:**
To customize the Elasticsearch Linux server, you can modify the linux bootstrap script variables, instance size, security groups, and other details in ```elastic.tf```. You can also modify the ```elastic_username``` and ```elastic_password``` variables.
**Terraform Output:**

View the terraform outputs for Kibana credentials and endpoint information:
```
-------
Kibana Console
-------
https://ec2-3-15-19-102.us-east-2.compute.amazonaws.com:5601
username: elastic
password: Elastic2024

SSH to Kibana
-------------
ssh -i ssh_key.pem [email protected]
```

### Windows Server: Active Directory Domain Controller (DC)
A Windows Server 2022 AMI is built using an Amazon-owned image. Active Directory Domain Services is installed with a forest, using the AWS ```user-data``` feature to bootstrap the services with powershell. A CSV file is uploaded to S3, and a special bootstrap script then downloads the CSV file and imports all AD users. The following local project files are important for customization:
| File | Description | Output |
| ------------- |:-------------:|:-------:|
| code/dc.tf | The terraform file that builds the DC | |
| code/files/dc/ad_install.ps1.tpl | The powershell script that installs AD DS and Forest. | code/output/dc/ad_install.ps1 |
| code/files/dc/bootstrap-dc.ps1.tpl | The main powershell script for the DC that bootstraps. | code/output/dc/bootstrap-dc1.ps1 |
| code/ad_users.csv | The list of Active Directory users in CSV | |

**Troubleshooting Windows DC:**
The DC local bootstrap script is located in ```code/files/dc/bootstrap-dc.ps1.tpl```. RDP into the Windows system and follow this logfile to see how the system is bootstrapping. This main script downloads and executes the ```ad_install.ps1``` script:
```
C:\Terraform\bootstrap_log.log
```

The Active Directory and forest installation follows from ```code/files/dc/ad_install.ps1.tpl```. Follow this logfile to see how the AD Domain is building:
```
C:\Terraform\ad_install.log
```

The script checks to make sure the forest has been installed with the correct input domain. If correct, it downloads the ```ad_users.csv``` file from the S3 bucket and loads the AD objects, including AD users, Groups, and OUs.
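As a quick local sanity check before ```terraform apply```, you can count how many user rows the seed file contains. A minimal sketch that assumes the CSV's first line is a header:

```shell
# Count the data rows in the AD seed CSV (skips the header line).
# Usage: count_rows code/ad_users.csv
count_rows() { tail -n +2 "$1" | wc -l | tr -d ' '; }
```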
**Customize Windows DC Server:**
To customize the DC, you can modify the AD domain, local Administrator username/password, WinRM username/password, and the AMI instance and size in ```dc.tf```. The ```ad_users.csv``` file includes all Domain users, groups, and OUs that the bootstrap attempts to load into AD.
**Terraform Output:**
View the terraform outputs for important Windows AD Domain and machine access information:
```
-------------------------
Domain Controller and AD Details
-------------------------
Instance ID: i-01fde1663cbafecd9
Computer Name: dc
Private IP: 10.100.10.4
Public IP: 18.189.21.74
local Admin: OpsAdmin
local password: Tegan-pepper-826627
Domain: rtc.local
Domain Admin Username: jasonlindqvist
Domain Admin Password: Rue-biggie-619140
```

### Windows Client
The Windows Client system is built from ```win1.tf```. Windows Server 2022 Datacenter edition is currently used. You can upload your own AMI image and change the data reference in win1.tf. The local bootstrap script is located in ```code/files/windows/bootstrap-win.ps1.tpl```. RDP into the Windows system and follow this logfile to see how the system is bootstrapping:
```
C:\Terraform\bootstrap_log.log
```

The following files are some of the powershell bootstrap scripts used to build the ```win1``` client:
| File | Description | Output |
| ------------- |:-------------:|:-------:|
| code/win1.tf | The terraform file that builds the win1 client | |
| code/files/windows/bootstrap-win.ps1.tpl | The main bootstrap pwsh script for win1 | code/output/windows/bootstrap-win1.ps1 |
| code/files/windows/red.ps1.tpl | Installs the red team tools (ART, PurpleSharp) | code/output/windows/red.ps1 |
| code/files/windows/sysmon.ps1.tpl | Installs Sysmon | code/output/windows/sysmon.ps1 |
| code/files/windows/winlogbeat.ps1.tpl | Installs Winlogbeat | code/output/windows/winlogbeat.ps1 |
| code/files/ghosts/ghosts-client-bootstrap.ps1.tpl | The bootstrap script for the ghosts client | code/output/ghosts/client-bootstrap-1.ps1 |

The main bootstrap-win.ps1 script downloads each of the individual bootstrap script files, such as red.ps1. All of the smaller scripts are stored in the S3 bucket. You can modify them to customize as you see fit.
**Customizing Build Scripts**
The Windows system is built with flexibility for adding customized scripts for post-deployment configuration management. This works around the 16KB size limit of user-data: the s3 bucket is used as a staging area to upload and download scripts, files, and any artifacts needed. A small master script (```bootstrap-win.ps1```) is always deployed via user-data, and it contains instructions to download additional scripts, all under your control via ```scripts.tf``` and ```s3.tf```. To add a new script:

1. In ```scripts.tf```, add your custom terraform templatefile to the array called ```templatefiles```, and add the file locally to ```files/windows```. See ```red.ps1.tpl``` and ```sysmon.ps1.tpl``` as examples. The filename should end in ```.tpl```.
2. The template file is generated as output into the directory called ```output```; the terraform code strips off the ```.tpl``` extension when it generates the file. Make sure the filename is correct, because the master script downloads based on this name.
3. In ```s3.tf```, each script referenced in ```templatefiles_win``` is uploaded to the s3 bucket. The master bootstrap script references this array and will automatically download and execute each generated script.
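As an illustrative walk-through of adding a custom script (the file name ```mytool.ps1.tpl``` and its contents are hypothetical):

```shell
# Hypothetical example: stage a new post-deploy template for win1.
mkdir -p files/windows
cat > files/windows/mytool.ps1.tpl <<'EOF'
# Rendered by terraform into output/windows/mytool.ps1
Write-Host "custom bootstrap step"
EOF
# Next, reference "mytool.ps1.tpl" in the templatefiles array in scripts.tf
# (and templatefiles_win in s3.tf) so it is rendered, uploaded, and executed.
```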
**Customizing timeline.json**
The application execution of NPC behavior can be customized in ```code/files/ghosts/timeline.json.tpl```. Ensure that you have installed each tool that runs automatically through the timeline execution; any error can cause the ghosts client to hang and not send any logs. The following applications are pre-installed via the chocolatey windows package manager and can be customized in ```code/files/ghosts/ghosts-client-bootstrap.ps1.tpl```:

- Microsoft Office
- Chrome
- Firefox

**Terraform Outputs**
See the output from ```terraform output``` to get the IP address and credentials for RDP:
```
-------------------------
Virtual Machine - win1
-------------------------
Instance ID: i-09499cb3e4f5965ab
Computer Name: win1
Private IP: 10.100.20.10
Public IP: 3.17.14.145
local Admin: OpsAdmin
local password: Tegan-pepper-826627
------------------------------------------
AFTER DOMAIN JOIN, starts GHOSTS on win1
------------------------------------------
Step 1: RDP in with credentials to 3.17.14.145
User: RTC\oliviaodinsdottir
Pass: Esther-daisy-906270
Step 2: Run cmd.exe as Administrator and start ghosts.exe
cd C:\Tools\ghosts\ghosts-client-x64-v8.0.0 (Elevated cmd.exe)
.\ghosts.exe
```

**GHOSTS on Windows Client:**
The GHOSTS Windows client automatically deploys onto this win1 system. The important files that can be used for customization include:
| File | Description | Output |
| ------------- |:-------------:|:-------:|
| code/s3-ghosts.tf | The terraform file that uploads ghosts files to s3 | |
| code/files/ghosts/ghosts-client-bootstrap.ps1.tpl | The bootstrap script for ghosts | code/output/ghosts/client-bootstrap-1.ps1 |
| code/files/ghosts/ghosts-client-x64-v8.0.0.zip | The ghosts client zip file with all files | |
| code/files/ghosts/clients/timeline-win1.json | The ghosts config timeline json config | |
| code/files/ghosts/application.json.tpl | The ghosts application json config | code/output/ghosts/application.json |

The ghosts ```application.json``` file controls the API server settings. It should normally stay close to the default for this range.
The ghosts ```timeline.json``` file is what controls execution of the NPC behavior such as running applications. This can be customized for this client and others you desire to add.
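For reference, here is a minimal illustrative timeline fragment in the shape used by the upstream GHOSTS documentation. Handler names and field casing may differ between GHOSTS versions, so treat this as a sketch rather than the shipped ```timeline-win1.json```:

```json
{
  "TimeLineHandlers": [
    {
      "HandlerType": "BrowserChrome",
      "Initial": "about:blank",
      "Loop": "True",
      "TimeLineEvents": [
        {
          "Command": "browse",
          "CommandArgs": [ "https://example.com" ],
          "DelayBefore": 0,
          "DelayAfter": 30000
        }
      ]
    }
  ]
}
```

Each handler names an application for the NPC to drive; as noted above, a handler that references a tool missing from the client can hang the agent.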
To troubleshoot the bootstrap process for GHOSTS client, look in the following logfile on the Windows system:
```
C:\Terraform\ghosts_client_log.log
```

**Starting GHOSTS client**
The ghosts client is not configured to start automatically. To start it up, follow the instructions in terraform output.
RDP into the system and start it with an Administrator cmd.exe:
```
cd C:\Tools\ghosts\ghosts-client-x64-v8.0.0 (Elevated cmd.exe)
.\ghosts.exe
```

**Elastic Winlogbeat on Windows Client:**
The Winlogbeat agent automatically deploys onto the win1 system and registers with the Elasticsearch server. This can be customized. The important files include:
| File | Description |
| ------------- |:-------------:|
| code/files/winlogbeat.tf | The winlogbeat terraform file |
| code/files/winlogbeat/winlogbeat-8.9.1-windows-x86_64.zip | The winlogbeat zip file with config and binary |
| code/files/winlogbeat.yml.tpl | Winlogbeat configuration file |

The ```winlogbeat.yml.tpl``` template file deploys into ```code/output/winlogbeat/winlogbeat.yml```.
To update the version of winlogbeat, you can change the ```winlogbeat_zip``` terraform variable and update the zip file and powershell script deployment.
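For example, the download URL for a given version follows the Elastic artifacts naming pattern used by the bundled zip (a sketch; verify the URL for the exact version you pick):

```shell
# Build the Elastic artifacts URL for a given Winlogbeat version,
# matching the zip name bundled with this repo.
winlogbeat_url() {
  echo "https://artifacts.elastic.co/downloads/beats/winlogbeat/winlogbeat-$1-windows-x86_64.zip"
}
winlogbeat_url 8.9.1
```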
### Red Tools on Windows Client
On the Windows Client system, the following tools are automatically deployed into ```C:\Tools\```:
* Atomic Red Team (ART)
* PurpleSharp

The local bootstrap script for customization is ```code\files\windows\red.ps1.tpl```.
To monitor the deployment on the Windows Client, see the logfile at ```C:\Terraform\red_log.log```.
### Sysmon on Windows Client
The Sysmon service and a customized configuration (SwiftOnSecurity) are deployed onto the Windows Client system. To update the sysmon version and configuration, make changes inside the ```code\files\sysmon``` directory.
The local bootstrap script for customization is ```code\files\windows\sysmon.ps1.tpl```.
To monitor the deployment on the Windows Client, see the logfile at ```C:\Terraform\blue_log.log```.
To update Sysmon configuration and version, see the two files that get automatically deployed on the Windows client:
- code/files/sysmon/Sysmon.zip
- code/files/sysmon/sysmonconfig-export.xml

### Future
This terraform was automatically generated by the Operator Lab tool. To get future releases of the tool, follow twitter.com/securitypuck.
For an Azure version of this tool, check out PurpleCloud (https://www.purplecloud.network)
![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/securitypuck)