# Elasticsearch Centralized Logging System

The ELK stack consists of Elasticsearch, Logstash, and Kibana.

Together they provide a powerful, flexible, and scalable solution for managing and making sense of large amounts of data.

**`Logstash`**: a data processing pipeline that gathers, processes, and forwards data (logs, metrics) from various sources to Elasticsearch.

**`Elasticsearch`**: a distributed search and analytics engine that stores the processed data and enables powerful search and analytics capabilities.

**`Kibana`**: a data visualization and exploration tool. It provides a graphical interface to visualize and explore the data stored in Elasticsearch.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/ELK%20Syslog%20System.png)

**Benefits of the ELK Stack**

**`Real-time Insights`**: They enable real-time data processing and visualization, which is crucial for monitoring and quick decision-making.

**`Scalability`**: They can handle large-scale data operations, making them suitable for big data applications.

**`Flexibility`**: They support a wide range of data types and sources, providing flexibility in how data is ingested, stored, and analyzed.


## Getting Started

We'll implement the workflow below:

- Provision Servers with Terraform

- User Configuration on Linux Servers

- Setup ELK Stack in Central Server

- Enable ELK Clustering

- Add Remote Hosts

- Develop Kibana Visualization

- Create Kibana Dashboard

The machine image used in the Terraform configuration is `CentOS 8`.

-----

## Provision Servers with Terraform

Install the AWS CLI on the `local` machine
```bash
sudo apt install curl unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin
```

Confirm the AWS CLI installation
```bash
aws --version
```

Clone this repository on the `local` machine
```bash
cd /
git clone git@github.com:odennav/elk-centralized-syslog-system.git
```

Execute these Terraform commands sequentially on the `local` machine to create the AWS VPC (Virtual Private Cloud) and EC2 instances.

Initialize the Terraform working directory
```bash
cd elk-centralized-syslog-system/terraform
terraform init
```

Validate the syntax of the Terraform configuration files
```bash
terraform validate
```

Create an execution plan that describes the changes Terraform will make to the infrastructure
```bash
terraform plan
```

Apply the changes described in the execution plan
```bash
terraform apply -auto-approve
```

Check the AWS console for the instances created and running.

**SSH access**

Use the `.pem` key from AWS to SSH into the public EC2 instance. The IPv4 address of the public EC2 instance will be shown in the Terraform outputs.
```bash
ssh -i private-key/terraform-key.pem ec2-user@<ipaddress>
```

We can use the public EC2 instance as a jumpbox to securely SSH into the private EC2 instances within the VPC.

Note: the Ansible `inventory` is built dynamically by Terraform with the private IP addresses of the `EC2` machines.

-----

## User Configuration on Linux Servers

**Add New User**

We'll use the `central-server-1` virtual machine as our build machine. Pipeline integrations are implemented on this server.

Change the password for the root user
```bash
sudo passwd
```

Switch to the root user, then add the new user `odennav` to the wheel (sudo) group.
```bash
sudo useradd odennav
sudo usermod -aG wheel odennav
```

Notice the prompt to enter your user password. To disable the password prompt for every sudo command, implement the following:

Add a sudoers file for `odennav`
```bash
echo "odennav ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/odennav
```

Ensure correct permissions for the sudoers file
```bash
sudo chmod 0440 /etc/sudoers.d/odennav
sudo chown root:root /etc/sudoers.d/odennav
```

Test sudo privileges by switching to the new user
```bash
su - odennav
sudo ls -la /root
```
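A malformed sudoers drop-in can break sudo for every user, so it can help to stage the file and check its permissions before installing it. This is a minimal sketch (the temporary path comes from `mktemp`); on a real host you would also validate the staged file with `visudo -cf` before copying it into `/etc/sudoers.d/`.

```bash
# Sketch: build the sudoers drop-in in a temporary file first,
# set the strict 0440 mode, and only then move it into place.
tmp=$(mktemp)
echo "odennav ALL=(ALL) NOPASSWD: ALL" > "$tmp"
chmod 0440 "$tmp"
stat -c '%a' "$tmp"    # prints 440
# Then, on the server:
#   sudo visudo -cf "$tmp" && sudo install -o root -g root -m 0440 "$tmp" /etc/sudoers.d/odennav
```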
To change the `PermitRootLogin` setting, modify the SSH server configuration file `/etc/ssh/sshd_config` as shown below:

```bash
PermitRootLogin no
```

Restart the SSH service for the changes to take effect
```bash
sudo systemctl restart sshd
```

Verify that the configuration has been applied
```bash
sudo grep PermitRootLogin /etc/ssh/sshd_config
```

-----

## Setup ELK Stack in Central Server

**Install Elasticsearch with RPM**

Download the `RPM` from the Elastic website to install it manually
```bash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.13.4-x86_64.rpm
```

Download the SHA512 checksum file
```bash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.13.4-x86_64.rpm.sha512
```

Verify the checksum
```bash
shasum -a 512 -c elasticsearch-8.13.4-x86_64.rpm.sha512
```

Comparing the SHA of the downloaded RPM against the published checksum should output
```text
elasticsearch-8.13.4-x86_64.rpm: OK
```

When the checksums match, this confirms that the file is intact and hasn't been tampered with.

Install the RPM
```bash
sudo rpm --install elasticsearch-8.13.4-x86_64.rpm
```

When installing Elasticsearch, security features are enabled and configured by default. The password, certificate, and keys are output to your terminal.

Store the elastic password as an environment variable in your shell.
```bash
export ELASTIC_PASSWORD="my_password"
```

Name the Elasticsearch cluster
```bash
sudo vi /etc/elasticsearch/elasticsearch.yml
```

For a production environment, it's beneficial to have shards distributed. We'll configure Elasticsearch to communicate with the outside network and look for an additional node, `central-server-2`.

Add this to the end of `elasticsearch.yml` and save the configuration. (Note: the old `discovery.zen.ping.unicast.hosts` setting was removed in Elasticsearch 7.x; on 8.x the equivalent is `discovery.seed_hosts`.)

```bash
cluster.name: syslog
node.name: central-server-1
network.host: [10.33.10.1, _local_]
discovery.seed_hosts: ["10.33.10.1", "10.33.10.6"]
action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*
```

Start and enable the elasticsearch service
```bash
sudo systemctl daemon-reload
sudo systemctl start elasticsearch.service
sudo systemctl enable elasticsearch.service
```

Confirm the connection to Elasticsearch
```bash
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
```


**Install Logstash with RPM**

Ensure Java is available for Logstash (Logstash 8.x ships with a bundled JDK, so this step is optional)
```bash
sudo yum install -y java-1.8.0-openjdk
```

Download and install the public signing key
```bash
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```

Create `logstash.repo` in `/etc/yum.repos.d/`
```bash
cd /etc/yum.repos.d/
sudo touch logstash.repo
```

Add this to the `logstash.repo` file
```text
[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```

Install Logstash
```bash
sudo yum install logstash
```


**Configure Logstash**

Create the syslog configuration file for Logstash
```bash
sudo touch /etc/logstash/conf.d/syslog.conf
```

Our Logstash configuration will have three main blocks:

- **`Input`**: causes Logstash to listen for syslog messages on port 5141.

- **`Filter`**: processes the messages it receives that match the given patterns.

  It extracts the authentication method, the username, the source IP address, and the source port of SSH connection attempts. It also tags the messages with `ssh_successful_login` or `ssh_failed_login`.

- **`Output`**: stores the messages in the Elasticsearch instance we just created.

Add this to `syslog.conf` and save the configuration
```text
input {
  syslog {
    type => syslog
    port => 5141
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "Accepted %{WORD:auth_method} for %{USER:username} from %{IP:src_ip} port %{INT:src_port} ssh2" }
      add_tag => "ssh_successful_login"
    }
    grok {
      match => { "message" => "Failed %{WORD:auth_method} for %{USER:username} from %{IP:src_ip} port %{INT:src_port} ssh2" }
      add_tag => "ssh_failed_login"
    }
    grok {
      match => { "message" => "Invalid user %{USER:username} from %{IP:src_ip}" }
      add_tag => "ssh_failed_login"
    }
  }
  geoip {
    source => "src_ip"
  }
}

output {
  elasticsearch { }
}
```

Start and enable the logstash service
```bash
sudo systemctl start logstash.service
sudo systemctl enable logstash.service
```

**Forward Syslogs to Logstash**

Next, we configure the `central-server-1` node to forward its syslog messages to Logstash.

Create the rsyslog forwarding configuration file
```bash
sudo touch /etc/rsyslog.d/logstash.conf
```

Add this to `logstash.conf` and save
```bash
*.* @10.33.10.1:5141
```

Restart the rsyslog service
```bash
sudo systemctl restart rsyslog
```

Confirm that Logstash is now receiving syslog messages from the `central-server-1` node and storing them in Elasticsearch.

```bash
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200/_cat/indices?v
```
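To preview which fields the second grok pattern extracts from a log line, here is a quick illustration that stands in for grok with a plain bash regular expression (the sample sshd line and IP address are fabricated for the example):

```bash
# Mimic the "Failed %{WORD} for %{USER} from %{IP} port %{INT} ssh2"
# grok pattern with a bash ERE to preview the extracted fields.
line='Failed password for admin from 203.0.113.7 port 53122 ssh2'
re='Failed ([a-zA-Z]+) for ([a-z_][a-z0-9_-]*) from ([0-9.]+) port ([0-9]+) ssh2'
if [[ $line =~ $re ]]; then
  echo "auth_method=${BASH_REMATCH[1]}"   # password
  echo "username=${BASH_REMATCH[2]}"      # admin
  echo "src_ip=${BASH_REMATCH[3]}"        # 203.0.113.7
  echo "src_port=${BASH_REMATCH[4]}"      # 53122
fi
```

Logstash would additionally run the `geoip` filter on the extracted `src_ip` and attach the `ssh_failed_login` tag to the event.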
**Install Kibana**

Download the RPM from the Elastic website to install it manually
```bash
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.13.4-x86_64.rpm
```

Download the SHA512 checksum file
```bash
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.13.4-x86_64.rpm.sha512
```

Verify the checksum
```bash
shasum -a 512 -c kibana-8.13.4-x86_64.rpm.sha512
```

Comparing the SHA of the downloaded RPM against the published checksum should output
```text
kibana-8.13.4-x86_64.rpm: OK
```

Install the RPM
```bash
sudo rpm --install kibana-8.13.4-x86_64.rpm
```

To enable connections to Kibana from outside the localhost
```bash
sudo vi /etc/kibana/kibana.yml
```

Add this to the configuration file, `kibana.yml`
```bash
server.host: "10.33.10.1"
```

**Securely connect Kibana with Elasticsearch**

The `elasticsearch-create-enrollment-token` command creates enrollment tokens for Elasticsearch nodes and Kibana instances.

Generate an enrollment token for Kibana
```bash
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
```

Start and enable the kibana service
```bash
sudo systemctl daemon-reload
sudo systemctl start kibana.service
sudo systemctl enable kibana.service
```

To check whether Kibana started successfully
```bash
sudo journalctl -u kibana.service
```

Browse `10.33.10.1:5601` to view the Kibana UI and click on the `Explore on my own` link to get started with Elastic.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/view_kibana.png)

-----

## Enable ELK Clustering

**Configure Node to Join Cluster**

When Elasticsearch was installed on the first node, `central-server-1`, the installation process configured a single-node cluster by default. To enable a node to join an existing cluster instead, implement the following:

1. Generate an enrollment token on an existing node, `central-server-1`, before you start the new node `central-server-2` for the first time.

   On `central-server-1` in our existing cluster, generate a node enrollment token

   ```bash
   /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
   ```

2. Copy the enrollment token, which is output to your terminal.

   We'll use the `central-server-2` node as our additional Elasticsearch cluster member.

3. Repeat the steps done for the `central-server-1` node on the `central-server-2` node: install Elasticsearch, Logstash, and Kibana.

4. Ensure the Elasticsearch cluster on the second node is also named `syslog`

   ```bash
   sudo vi /etc/elasticsearch/elasticsearch.yml
   ```

   Add this to the end of `elasticsearch.yml` and save the configuration.

   ```bash
   cluster.name: syslog
   node.name: central-server-2
   network.host: [10.33.10.6, _local_]
   discovery.seed_hosts: ["10.33.10.1", "10.33.10.6"]
   action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*
   ```

   This new second server will automatically discover and join the cluster as long as it has the same `cluster.name` as the first node.

5. On your new Elasticsearch node, `central-server-2`, pass the enrollment token generated in step 1 as a parameter to the `elasticsearch-reconfigure-node` tool

   ```bash
   /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <enrollment-token>
   ```

   Start and enable the elasticsearch service

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl start elasticsearch.service
   sudo systemctl enable elasticsearch.service
   ```
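Once both nodes are up, you can confirm that the cluster actually formed. This is a sketch assuming the `ELASTIC_PASSWORD` variable exported earlier and the default CA path on `central-server-1`:

```bash
# List cluster nodes; both central-server-1 and central-server-2
# should appear once discovery succeeds.
curl --cacert /etc/elasticsearch/certs/http_ca.crt \
     -u elastic:$ELASTIC_PASSWORD \
     "https://localhost:9200/_cat/nodes?v"

# Overall cluster health and node count.
curl --cacert /etc/elasticsearch/certs/http_ca.crt \
     -u elastic:$ELASTIC_PASSWORD \
     "https://localhost:9200/_cluster/health?pretty"
```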
-----

## Add Remote Hosts

To start the other remote machines, run:

```bash
vagrant up
vagrant ssh
```

**Install Ansible**

To install Ansible without upgrading the current Python version, we'll make use of the yum package manager.
```bash
sudo yum update
sudo yum upgrade
```

Install the EPEL repository
```bash
sudo yum install epel-release
```

Verify the installation of the EPEL repository
```bash
sudo yum repolist
```

Install Ansible
```bash
sudo yum install ansible
```

Confirm the installation
```bash
ansible --version
```

**Configure Ansible Vault**

Ansible communicates with target remote servers over SSH. Usually we'd generate an RSA key pair and copy the public key to each remote server; instead we'll use the username and password credentials of the `odennav` user.

These credentials are added to the inventory host file but encrypted with `ansible-vault`.

Ensure all IPv4 addresses and user variables of the remote servers are in the inventory file.

Generate the vault password file
```bash
openssl rand -base64 2048 > /elk-centralized-syslog-system/ansible/ansible-vault/secret-vault.pass
```

Create the Ansible vault (`ansible-vault/values.yml`, which holds the secret password) with the vault password file
```bash
ansible-vault create /elk-centralized-syslog-system/ansible/ansible-vault/values.yml --vault-password-file=/elk-centralized-syslog-system/ansible/ansible-vault/secret-vault.pass
```

View the content of the Ansible vault
```bash
ansible-vault view /elk-centralized-syslog-system/ansible/ansible-vault/values.yml --vault-password-file=/elk-centralized-syslog-system/ansible/ansible-vault/secret-vault.pass
```

Have Ansible read the vault password file from an environment variable
```bash
export ANSIBLE_VAULT_PASSWORD_FILE=/elk-centralized-syslog-system/ansible/ansible-vault/secret-vault.pass
```

Confirm the environment variable has been exported
```bash
printenv ANSIBLE_VAULT_PASSWORD_FILE
```

Test Ansible by pinging all remote servers in the inventory list
```bash
ansible all -m ping
```

**Configure Remote Hosts**

We'll use an Ansible playbook to configure the remote systems to forward their syslog messages to the centralized syslog server.

```bash
ansible-playbook -i hosts.inventory /elk-centralized-syslog-system/ansible/add_hosts/add_hosts.yml
```
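For orientation, the inventory referenced above might look roughly like the sketch below. The group name, IP addresses, and the `vault_odennav_password` variable are illustrative assumptions; the actual file is generated by Terraform, and the password variable would live in the encrypted `values.yml`.

```text
[remote_hosts]
10.33.10.11 ansible_user=odennav ansible_password="{{ vault_odennav_password }}"
10.33.10.12 ansible_user=odennav ansible_password="{{ vault_odennav_password }}"
```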
**Create Index Pattern**

Implement the following steps:

- Return to the Kibana UI at `10.33.10.1:5601`.

- Click on the hamburger menu icon, then click on the **`Stack Management`** link under the **`Management`** section of the menu.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/stack_mgt.png)

- Scroll down. Under the **`Kibana`** section, click the **`Index Patterns`** link.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/index_paterns.png)

- There will be a pop-up labeled `About Index Patterns` on the right-hand side of your screen. Click the `x` to close it.

- Now click on the `Create index pattern` button.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/create_index_pattern.png)

- In the `Index pattern name` field, enter `logstash*` and then click the `Next Step >` button.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/index_pattern_name.png)

  This tells Kibana to use any indices in Elasticsearch whose names start with `logstash`.

- In the `Time Field` dropdown menu, select `@timestamp`, then click the `Create index pattern` button.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/timestamp.png)

  A screen will appear that shows information about the index pattern we've just created.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/logstash_index_pattern.png)


**Confirm Log Sources from Remote Hosts**

Now we can start searching for log messages.

- Click on the hamburger menu icon, then click on the **`Discover`** link under the **`Kibana`** section of the menu.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/discover.png)

- In the left-hand **`field`** menu, click on **`logsource`**.

You should now see the other remote hosts appear in addition to the `cs1` host.

Now you can search for log records across multiple hosts in one single place.

-----

## Develop Kibana Visualization

**Generate Syslog**

We'll use two scripts in the `syslog-gen` directory to simulate syslog generation.

The Logstash config will process and filter these messages, while also tagging them.

We can then search for these syslog records stored in Elasticsearch and analyze them visually with Kibana.
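If you want to emit a single test message by hand, independent of the `syslog-gen` scripts, `logger` writes a line into the local syslog stream, which rsyslog then forwards to Logstash. The sshd-style message below is fabricated purely for illustration:

```bash
# Send one fake failed-login line into the local syslog stream.
# rsyslog forwards it to Logstash (10.33.10.1:5141), where the
# "Failed ... ssh2" grok pattern tags it as ssh_failed_login.
logger -p auth.info "sshd[1234]: Failed password for admin from 203.0.113.7 port 53122 ssh2"
```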
**Create Kibana Visualization**

Next, we use Kibana to create a visualization object that analyzes the unsuccessful `ssh login` data.

Implement the following steps:

- Click on the hamburger menu icon, then click on the **`Visualize`** link under the **`Kibana`** section of the menu.

- Click on the `Create new visualization` button.

- Scroll down and click `Vertical Bar`.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/vertical_bar.png)

- Next, click on the `logstash*` index pattern.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/logstash_icon.png)

- In the search bar, type in `tags:ssh_failed_login` and click on the `Refresh` button. Set the date range to `Last 1 hour` to capture the events of multiple failed SSH login attempts.

- Under `Buckets`, click on the `+ Add` link, then click on `X-Axis`.

- Under aggregation, select `Date Histogram`. To apply the changes, click the `Update` button at the bottom-right of your screen.

- Now we'll see the one big bar break into smaller bars.

  When you hover over a bar, it shows the number of times the search matched during that time period.

- Under the `Metrics` section, click on `Y-Axis` and supply a custom label of `Failed Logins`.

  To apply this change, click the `Update` button at the bottom-right of your screen.

- Finally, click the `Save` icon at the top-left and give this graph a title of `Failed SSH Logins`. Click `Save`.


**Create Kibana Dashboard**

Next, we use Kibana to create a dashboard for the unsuccessful `ssh login` visualization object we just created.

Implement the following steps:

- Click on the hamburger menu icon, then click on the **`Dashboard`** link under the **`Kibana`** section of the menu.

- Click on the `Create new dashboard` button.

![](https://github.com/odennav/elk-centralized-syslog-system/blob/main/docs/create_new_dashboard.png)

- Click on the `Add an existing object` link, then click on `Failed SSH Logins`.

  Next, click on the `x` to close the pop-up window.

  Adjust the graph to whatever size you desire.

- Click on the `save` link at the top of your screen, name the dashboard `SSH Login Analysis`, then save it.

The visualization object enables us to represent the syslog data in a meaningful way, and adding it to a dashboard gives us a real-time view.

-----

Enjoy!