{"id":13540031,"url":"https://github.com/rackerlabs/scantron","last_synced_at":"2025-04-12T20:45:44.753Z","repository":{"id":33416532,"uuid":"145452632","full_name":"rackerlabs/scantron","owner":"rackerlabs","description":"A distributed nmap / masscan scanning framework complete with scan scheduling, engine pooling, subsequent scan port diff-ing, and an API client for automation workflows.","archived":false,"fork":false,"pushed_at":"2023-07-17T19:01:21.000Z","size":16400,"stargazers_count":753,"open_issues_count":15,"forks_count":150,"subscribers_count":26,"default_branch":"master","last_synced_at":"2025-04-04T00:09:28.407Z","etag":null,"topics":["ansible","api","automation","django","masscan","nmap","penetration-testing","python","rest","scanning"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/rackerlabs.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2018-08-20T17:59:20.000Z","updated_at":"2025-02-26T22:00:42.000Z","dependencies_parsed_at":"2022-08-07T21:15:20.055Z","dependency_job_id":"19d6ef74-6215-4f52-89e1-24f2ebef6dae","html_url":"https://github.com/rackerlabs/scantron","commit_stats":null,"previous_names":[],"tags_count":45,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rackerlabs%2Fscantron","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rackerlabs%2Fscantron/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rackerlabs%2Fscantron/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/re
positories/rackerlabs%2Fscantron/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/rackerlabs","download_url":"https://codeload.github.com/rackerlabs/scantron/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248631692,"owners_count":21136559,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ansible","api","automation","django","masscan","nmap","penetration-testing","python","rest","scanning"],"created_at":"2024-08-01T09:01:37.984Z","updated_at":"2025-04-12T20:45:44.730Z","avatar_url":"https://github.com/rackerlabs.png","language":"Python","readme":"\u003ch3 align=\"center\"\u003e\n    This project has been forked and will be maintained here:\n    \u003cbr\u003e\n    \u003cbr\u003e\n    \u003ca href=\"https://github.com/opsdisk/scantron\"\u003ehttps://github.com/opsdisk/scantron\u003c/a\u003e\n\u003c/h3\u003e\n\n# Scantron\n\n![logo](./console/django_scantron/static/scantron/img/scantron_300x300.jpg)\n\n\u003cp align=\"left\"\u003e\n\u003ca href=\"\"\u003e\u003cimg alt=\"Python 3.6\" src=\"https://img.shields.io/badge/python-3.6-green.svg\"\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/ambv/black\"\u003e\u003cimg alt=\"Code style: black\" src=\"https://img.shields.io/badge/code%20style-black-000000.svg\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n## Overview\n\nScantron is a distributed nmap and masscan scanner comprised of two components.  The first is a console node that\nconsists of a web front end used for scheduling scans and storing scan targets and results.  
The second component is an\nengine that pulls scan jobs from the console and conducts the actual scanning.  A majority of the application's logic is\npurposely placed on the console to make the engine(s) as \"dumb\" as possible.  All scan target files and scan results\nreside on the console and are shared through a network file share (NFS) leveraging SSH tunnels.  The engines call back\nto the console periodically using a REST API to check for scan tasks and provide scan status updates.  There is also an\noption to generate nmap scan diffs emailed to you using the [pyndiff](https://github.com/rackerlabs/pyndiff) library.\n\nCheck out the Python [Scantron API client](https://github.com/rackerlabs/scantron/tree/master/scantron_api_client) for\ninteracting with the Scantron API and driving automated workflows.\n\n![scheduled_scans](./img/scheduled_scans.png)\n\nScantron is coded for Python3.6+ exclusively and leverages Django for the web front-end, Django REST Framework as the\nAPI endpoint, PostgreSQL as the database, a Redis job queue for tasks, Postfix for email scan alerts, and comes complete\nwith Ubuntu-focused Ansible playbooks for smooth deployments.  Scantron has been tested on Ubuntu 18.04 and may be\ncompatible with other operating systems.  Scantron's inspiration comes from:\n\n* [dnmap](https://sourceforge.net/projects/dnmap/)\n* [Minions](https://github.com/sixdub/Minions)\n* [rainmap svn](https://svn.nmap.org/rainmap/) / [rainmap github](https://github.com/axtl/rainmap)\n* [rainmap-lite](https://github.com/cldrn/rainmap-lite)\n\n![results](./img/results.png)\n\nScantron relies heavily on utilizing SSH port forwards (-R / -L) as an umbilical cord to the engines.  Either an SSH\nconnection from `console --\u003e engine` or `engine --\u003e console` is acceptable and may be required depending on different\nfirewall rules, but tweaking the port forwards and autossh commands will be necessary.  If you are unfamiliar with these\nconcepts, there are some great overviews and tutorials out there:\n\n* \u003chttps://help.ubuntu.com/community/SSH/OpenSSH/PortForwarding\u003e\n* \u003chttps://www.systutorials.com/39648/port-forwarding-using-ssh-tunnel/\u003e\n* \u003chttps://www.everythingcli.org/ssh-tunnelling-for-fun-and-profit-autossh/\u003e\n\n## Use cases\n\nScantron is not engineered to be quickly deployed to a server to scan for a few minutes, then torn down and destroyed.  \nIt's better suited for having a set of static scanners (e.g., \"internal-scanner\", \"external-scanner\") with a relatively\nstatic set of assets to scan.\nA [Scantron API client](https://github.com/rackerlabs/scantron/tree/master/scantron_api_client) is also available for\ncreating, retrieving, updating, or deleting sites, scan commands, scans, etc.\n\n## Architecture Diagram\n\n![scantron_architecture_overview](./img/scantron_architecture_overview.png)\n\n## Hardware Requirements\n\n* Engine: If you plan on compiling masscan on an engine, you'll need at least 1024 MB of memory.  It fails to build with\nonly 512 MB.  If you do not want to build masscan, set `install_masscan_on_engine` to `False` in\n`ansible-playbooks/group_vars/all.yml`\n\n* Console: 512 MB of memory was the smallest amount successfully tested; however, if you plan on processing large scan\nfiles (using the scripts found in `console/scan_results`: `masscan_json_to_csv.py`, `nmap_to_csv.py` or\n`xml_to_json_nmap_results.py`), you'll need more memory.\n\n## Ansible Deployment Server and Initial Setup\n\nThis is your local box, preferably Linux.  Ansible \u003e= 2.4.0.0 is the minimum version required for utilizing ufw comments.\n\nClone the project and execute `initial_setup.sh`.\n\n```bash\n# Clone scantron project.\ngit clone https://github.com/rackerlabs/scantron.git\ncd scantron\n./initial_setup.sh  # Run as non-root user.\n```\n\n## Installation\n\nInstallation requires a general knowledge of Python, pip, and Ansible.  Every attempt to make the deployment as simple\nas possible has been made.\n\n### Cloud Provider Caveats\n\n#### NAT'd instances\n\nIf the console server is actually an RFC1918 IP and not the public IP (because of NAT), the NAT'd RFC1918 IP\n(e.g., 10.1.1.2) will have to be added to the `ALLOWED_HOSTS` in\n`ansible-playbooks/roles/console/templates/production.py.j2`\n\nThis is common in AWS and GCP environments.\n\n#### IBM Cloud\n\nPer \u003chttps://github.com/0xtavian\u003e:  For the Ansible workload to work on IBM Cloud, edit the file `/boot/grub/menu.lst`\nby changing\n\n```bash\n# groot=LABEL...\n```\n\nto\n\n```bash\n# groot=(hd0)\n```\n\n### Update hosts\n\nEdit the hosts in this file:\n\n* `ansible-playbooks/hosts.ini`\n\n### Console Installation\n\nThe recommendation is to deploy the console first.  \n\n#### Update Console Ansible Variables\n\nEdit any variables in `ansible-playbooks/group_vars/all.yml` before running the playbook.  Note the time zone variables:\n\n* `timezone_server` - Set this to be the timezone you want the server to be in, usually UTC.\n* `timezone_django` - Set this to be your local timezone.  It makes dealing with dates, times, and scheduling easier.\n\nIf you plan on utilizing the same API key across all engines (not recommended, but easier for automated deployments),\nchange `utilize_static_api_token_across_engines` to `True`.  This prevents you from having to log into each engine and\nupdate `engine_config.json` with the corresponding API key.  The `group_vars/static_api_key` will be created by\nthe console Ansible playbook.  The Ansible engine playbook will autofill the `engine_config.json.j2` template with the\nAPI key found in `group_vars/static_api_key`.\n\n**WARNING**: The `engine_config.json.j2` will generate a random `scan_engine` (e.g., `engine-847623`), so if you deploy\nmore than 1 engine, you won't run into complications with engine name collisions.  You will, however, need to create\nthe user on the console, since the console returns scheduled jobs to the engine based on the engine's name!\n\n#### Update Console Secrets Variables\n\nRename `console/scantron_secrets.json.empty` to `console/scantron_secrets.json` (should be done for you by\n`initial_setup.sh`).\n\nUpdate all the values in `console/scantron_secrets.json` if you do not like the ones generated using `initial_setup.sh`.\nOnly the `production` values are used.\n\n* All Scantron Django passwords have a minimum password length of 12.\n\n* For the \"SECRET_KEY\", per Django's\n[documentation](\u003chttps://docs.djangoproject.com/en/2.0/howto/deployment/checklist/#secret-key\u003e): The secret key must be\na large random value and it must be kept secret.\n\n#### Change scantron user password (optional)\n\nThe `scantron` operating system user password is not really leveraged and is populated by providing a salted hash of a\nrandom password generated using Python's `passlib` library.  If you want to change the password, you will have to\ngenerate a hash for the desired password and update the `temp_user_pass` variable in\n`scantron/ansible-playbooks/roles/add_users/vars/main.yml`.\n\n```bash\npip3 install passlib\n\npython3 -c \"from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.encrypt(getpass.getpass()))\"\n```\n\n#### Execute Console Ansible Playbook\n\nEnsure you have an SSH key (or username/password) to access the console box, specified by `--private-key` in the Ansible\ncommand.  The user must also have password-less sudo privileges.\n\n```bash\ncd ansible-playbooks\n\n# non-root user with password-less sudo capabilities.\nansible-playbook console.yml -u ubuntu --become --private-key=\u003cconsole SSH key\u003e\n\n# root user.\nansible-playbook console.yml -u root --private-key=\u003cconsole SSH key\u003e\n```\n\n#### Change Django user passwords with manage.py (optional)\n\ncd into the console directory `scantron/console` and run the following to change the `admin` (or whatever user needs\ntheir password changed) user password.\n\n```bash\npython3 manage.py changepassword admin\n```\n\n### Engine Installation\n\n#### Update Engine Ansible Variables\n\nEdit any variables in these files before running the playbook:\n\n* `ansible-playbooks/group_vars/all.yml`\n* `ansible-playbooks/roles/engine/vars/main.yml`\n\n#### Ensure proper user permissions\n\nEnsure you have an SSH key (or username/password) to access the engine box, specified by `--private-key` in the Ansible\ncommand.  The user **must** also have password-less sudo privileges.  If you are creating the boxes on AWS, then the\nuser is `ubuntu` for Ubuntu distros and the user already has password-less sudo capabilities.  If you need to add\npassword-less sudo capability to a user, create a `/etc/sudoers.d/\u003cUSERNAME\u003e` file, where `\u003cUSERNAME\u003e` is the actual\nuser, and populate it with:\n\n```bash\n\u003cUSERNAME\u003e ALL=(ALL) NOPASSWD: ALL\n```\n\nSSH-ing in as `root` will also work for the Ansible deployment, but is not generally recommended.\n\n#### Execute Engine Ansible Playbook\n\n```bash\ncd ansible-playbooks\n\n# non-root user with password-less sudo capabilities.\nansible-playbook engine.yml -u ubuntu --become --private-key=\u003cengine SSH key\u003e\n\n# root user.\nansible-playbook engine.yml -u root --private-key=\u003cengine SSH key\u003e\n```\n\n## Adding additional engines\n\nA Scantron engine is synonymous with a user.\n\n```none\nengines \u003c--\u003e users\n```\n\nUsers / engines are added through the webapp, so once a user / engine is added, an API token is automatically generated\nfor that user / engine.  The user's / engine's password is not necessary for Scantron to function since all user /\nengine authentication is done using the API token.  The username and password can be used to log in to the webapp to test\nAPI functionality.  More API testing information can be found in the **Test Engine API** section of this README.\n\n### Update /etc/rc.local with engine IPs for autossh\n\nThis is done automatically for one engine through Ansible.  You may have to add additional lines and update SSH keys for\neach engine if they are different.  These commands are for the console connecting to the engines.\n
\n\nIn this example:\n\n* console - 192.168.1.99\n* engine1 - 192.168.1.100\n* engine2 - 192.168.1.101\n\n```bash\n# console --\u003e engine 1\nsu - autossh -s /bin/bash -c 'autossh -M 0 -f -N -o \"StrictHostKeyChecking no\" -o \"ServerAliveInterval 60\" \\\n    -o \"ServerAliveCountMax 3\" -p 22 -R 4430:127.0.0.1:443 -R 2049:127.0.0.1:2049 \\\n    -i /home/scantron/console/autossh.key autossh@192.168.1.100'\n\n# console --\u003e engine 2\nsu - autossh -s /bin/bash -c 'autossh -M 0 -f -N -o \"StrictHostKeyChecking no\" -o \"ServerAliveInterval 60\" \\\n    -o \"ServerAliveCountMax 3\" -p 22 -R 4430:127.0.0.1:443 -R 2049:127.0.0.1:2049 \\\n    -i /home/scantron/console/autossh.key autossh@192.168.1.101'\n```\n\nIf the console cannot SSH to an engine, then the autossh command will be run on the engine and the port forwards will be\nlocal (`-L`) instead of remote (`-R`).\n\n```bash\n# console \u003c-- engine 1\nsu - autossh -s /bin/bash -c 'autossh -M 0 -f -N -o \"StrictHostKeyChecking no\" -o \"ServerAliveInterval 60\" \\\n    -o \"ServerAliveCountMax 3\" -p 22 -L 4430:127.0.0.1:443 -L 2049:127.0.0.1:2049 \\\n    -i /home/scantron/console/autossh.key autossh@192.168.1.99'\n```\n\n## Engines\n\n### Engine's engine_config.json\n\n`engine_config.json` is a configuration file used by engines to provide basic settings and bootstrap communication with\nthe console.  Each engine can have a different configuration file.  \n\n```none\nThe \"api_token\" will have to be modified on all the engines after deploying the console!\n```\n\nEngine settings:\n\n**scan_engine:** Name of the engine.  This name is also used in the engine's HTTP `User-Agent` string to help identify\nengines calling back in the nginx web logs.\n\n**api_token:** Used to authenticate engines.  Recommend different API Tokens per engine, but the same one could be used.\n\n**console_address:** Web address of the console.  
Could be 127.0.0.1 if engine traffic is tunneled to the console\nthrough an SSH port forward.\n\n**console_port:** Web port the console is listening on.\n\n**callback_interval_in_seconds:** Number of seconds engines wait before calling back for scan jobs.\n\n**number_of_threads:** Number of threads used to execute/kill scan jobs.\n\n**target_files_dir:** Name of actual engine `target_files` directory on the engine box.\n\n**scan_results_dir:** Name of actual engine `scan_results` directory on the engine box.\n\n**log_verbosity:** Desired log level for `logs/engine.log`.\n\n```bash\n# Level     Numeric value\n# CRITICAL  50\n# ERROR     40\n# WARNING   30\n# INFO      20\n# DEBUG     10\n```\n\n**http_useragent:** HTTP User-Agent used instead of nmap's default\n`Mozilla/5.0 (compatible; Nmap Scripting Engine; https://nmap.org/book/nse.html)`.\n\n**supported_scan_binaries:** Experimental.  Supported scan binaries advertised by the engine.\n\n### Standalone Engine Binary\n\nThis repo also contains a stand-alone binary `engine/engine` that can be used for the engine.  This\nallows for a quicker deployment if managing the Python environment is difficult or cumbersome.  The basic requirements\nare:\n\n* `nmap` and `masscan` must exist on the system\n* the `engine_config.json` file must exist, with the `scan_engine` and `api_token` values updated\n* an SSH tunnel to/from the console must exist to read target files and write scan results\n\n#### Creating the standalone binary\n\nThe standalone binaries can be generated on a local box.\n\n```bash\ncd engine\nvirtualenv -p python3.6 .venv\nsource .venv/bin/activate\npip install pyinstaller\npyinstaller --onefile engine.py --name engine\nmv dist/engine .\n\n./engine -v \u003e standalone_engine_binary.txt\necho -e \"MD5     `md5sum engine`\" \u003e\u003e standalone_engine_binary.txt\necho -e \"SHA-256 `sha256sum engine`\" \u003e\u003e standalone_engine_binary.txt\n\nrm -rf __pycache__ build dist engine.spec .venv\n```\n\n### Engine Execution\n\nUpdate all the engines' `engine_config.json` files with their respective `api_token` by logging in as\n`admin` and browsing to `https://\u003cHOST\u003e/scantron-admin/authtoken/token` to see the corresponding API token for each\nuser / engine.\n\n#### Option 1: Run engine as a service\n\nEnable the scantron-engine service at startup.\n\n```bash\nsystemctl daemon-reload  # Required if scantron-engine.service changed.\nsystemctl enable scantron-engine\n```\n\nDisable the scantron-engine service at startup.\n\n```bash\nsystemctl disable scantron-engine\n```\n\nScantron service troubleshooting commands.\n\n```bash\nsystemctl status scantron-engine\nsystemctl start scantron-engine\nsystemctl stop scantron-engine\nsystemctl restart scantron-engine\n```\n\n#### Option 2: Run engine as standalone script\n\nUse `screen` to avoid the script dying after disconnecting through SSH.\n\n```bash\nscreen -S engine1  # Create a screen session and name it engine1, if using screen.\n\ncd engine\nsource .venv/bin/activate\n# Option 1: Python virtual environment\npython engine.py -c engine_config.json\n# Option 2: Stand alone binary\n./engine -c engine_config.json\n\nCTRL + a + d  # Break out of screen session, if using screen.\nscreen -ls  # View screen job, if using screen.\n\nscreen -r engine1  # Resume named screen session, if using screen.\n```\n\n### Engine Troubleshooting\n\nVerify the SSH connection from the console, with its reverse port forwards, is up on each engine.  Any traffic hitting\n127.0.0.1:4430 will be tunneled back to the console.  This port is for communicating with the API.  Any traffic hitting\n127.0.0.1:2049 will connect back to the NFS share on the console.\n\n```bash\ntcp    0    0 127.0.0.1:4430    0.0.0.0:*    LISTEN    1399/sshd: autossh\ntcp    0    0 127.0.0.1:2049    0.0.0.0:*    LISTEN    1399/sshd: autossh\n```\n\nCheck each engine's root crontab to ensure `nfs_watcher.sh` is being run every minute.\n\n```bash\ncrontab -l -u root\n```\n\n### Test Engine API\n\nIf you need to test the API without running the engine, ensure there is a \"pending\" scan set to start earlier than the\ncurrent date and time.  The server only returns scan jobs that have a \"pending\" status and start datetime earlier than\nthe current datetime.\n\n```bash\n# Not using SSH tunnels.\ncurl -k -X GET -H 'Authorization: Token \u003cVALID API TOKEN\u003e' https://192.168.1.99:443/api/scheduled_scans\n\n# Using SSH tunnels.\ncurl -k -X GET -H 'Authorization: Token \u003cVALID API TOKEN\u003e' https://127.0.0.1:4430/api/scheduled_scans\n```\n\nYou can also log into the webapp using the engine name and password and browse to `/api/?format=json` to view any scan\njobs.  The username and engine name are the same from the webapp's point of view.\n\n## Console\n\n### Console `target_files` Folder\n\n* Place files with target IPs/hosts (fed to nmap `-iL` switch) in `console/target_files/`\n* `target_files` is an NFS share on the console that the engine reads from through an SSH tunnel.\n\n### Console `scan_results` folder\n\n* nmap scan results from engines go here.\n* `console/scan_results/` is an NFS share on the console that the engine writes to through an SSH tunnel.\n\n### Console Troubleshooting\n\n1). Ensure SSH tunnels set up in `/etc/rc.local` are up.\n\n```bash\nnetstat -nat | egrep \"192.168.1.100|192.168.1.101\"\nps -ef | egrep autossh\n```\n\n2). Django logs can be found here: `/var/log/webapp/django_scantron.log`\n\n3). Check nginx logs for the engine name in the User-Agent field to determine which engines are calling home.\nnginx logs: `tail -f /var/log/nginx/{access,error}.log`\n\n4). uwsgi logs: `/home/scantron/console/logs`\n\n### Known issues with the console NFS share\n\nIf you need to reboot a box, do it with the provided `clean_reboot.sh` script that will stop all relevant services.\nWithout stopping the `nfs-kernel-server` service gracefully, sometimes the OS will hang and get angry.\n\n## Email Alerts\n\nA vanilla Postfix instance is installed on the console that can be used to send email alerts when a scan starts, errors\nout, or finishes.  Email settings and credentials are kept in the `scantron_secrets.json` file.  Out of the box, most\nmail will likely be marked as spam/junk.  The recommendation is to use a credentialed account to send email alerts to\nusers.\n\n## Miscellaneous\n\n### Updating nmap version\n\nUbuntu's nmap version pulled using `apt` is fairly out-of-date and the recommendation for Scantron's engines is to pull\nthe latest version.\n\nFor RPM-based distributions, the latest `.rpm` packages can be found here \u003chttps://nmap.org/dist/?C=M\u0026O=D\u003e.  However,\nfor Debian-based distributions, you must utilize `alien` to convert the `.rpm` to a `.deb` file\n\u003chttps://nmap.org/book/inst-linux.html\u003e or compile from source.  Recommend going down the `alien` route before compiling\nfrom source.\n\n#### alien\n\n```bash\nVERSION=7.70-1  # CHANGE THIS TO LATEST VERSION\n\napt install alien -y\nwget https://nmap.org/dist/nmap-$VERSION.x86_64.rpm\nalien nmap-$VERSION.x86_64.rpm\n\napt remove nmap -y\napt remove ndiff -y\ndpkg --install nmap_*.deb\n```\n\n#### Compile nmap from source\n\nAnother option is to compile nmap from source.  This is dynamically compiled and must be done on the box where nmap is\ngoing to be run from.  Note that past experience had a compiled nmap version returning a different banner than the\nprovided apt version...so your mileage may vary.\n\n```bash\nVERSION=7.70  # CHANGE THIS TO LATEST VERSION (the source tarball has no `-1` package suffix)\n\nwget https://nmap.org/dist/nmap-$VERSION.tar.bz2\nbzip2 -cd nmap-$VERSION.tar.bz2 | tar xvf -\ncd nmap-$VERSION\n./configure --without-ncat --without-ndiff --without-nmap-update --without-nping --without-subversion \\\n--without-zenmap --with-libdnet=included --with-libpcap=included --with-libpcre=included\nmake\n./nmap -V\n```\n\n### Get nmap --top-ports\n\nThis provides a list of the actual ports being scanned when the `--top-ports` option is used:\n\n```bash\n# TCP\nnmap -sT --top-ports 1000 -v -oG -\n\n# UDP\nnmap -sU --top-ports 1000 -v -oG -\n```\n\nSorted list based on frequency.\n\n```bash\n# TCP sorted list based on frequency.\negrep /tcp /usr/share/nmap/nmap-services | sort -r -k3\n\n# UDP sorted list based on frequency.\negrep /udp /usr/share/nmap/nmap-services | sort -r -k3\n```\n\nSource: \u003chttps://security.stackexchange.com/questions/78618/is-there-a-nmap-command-to-get-the-top-most-common-ports\u003e\n\n## nmap_port_range_carver\n\nA standalone [script](https://github.com/rackerlabs/scantron/tree/master/nmap_port_range_carver) to carve out a range\nof the top TCP/UDP\n
ports according to the nmap-services file.\n\nThis is useful when:\n\n1. You want to scan a subset of the ports specified in `--top-ports`, say the 10th through 20th top TCP ports, but not\nthe 1st or 9th ports.\n\n2. You want the 1337th ranked TCP port.\n\n3. You want to utilize nmap to scan **both** TCP and UDP, but not scan the same number of top ports.\n\n    This works and will scan the top 10 ports for BOTH TCP and UDP\n\n    ```bash\n    nmap --top-ports 10 -sU -sT \u003cTARGET\u003e\n    ```\n\n    but you can't only scan the top 20 TCP and top 10 UDP ports using `--top-ports`.\n\n## Workflow\n\n1. Create user/engine.  By default, Ansible creates `engine1`.\n\n    ![create_user_engine](./img/create_user_engine.png)\n\n2. Create scan command\n\n    ![create_nmap_command](./img/create_nmap_command.png)\n\n3. Create a site\n\n    * IPs, IP subnets, and FQDNs are allowed.\n    * IP ranges (`192.168.1.0-10`) are not currently supported.\n    * The targets and excluded_targets are validated using `console/extract_targets.py`, which can also be used as a\n      stand alone script.\n\n    ![create_site](./img/create_site.png)\n\n4. Create scan\n\n    * Select start time\n    * Add start date\n    * Add recurrence rules (if applicable)\n\n    The `/home/scantron/console/scan_scheduler.sh` cronjob checks every minute to determine if any scans need to be\n    queued.  If scans are found, it schedules them to be picked up by the engines.\n\n    ![create_scan](./img/create_scan.png)\n\n    You can use the `console/scan_scheduler_visualizer.py` script found on the console to print out scheduled scan\n    times:\n\n    ```bash\n    # Print out the scan start dates and times for all enabled scans in the next 10 days.\n    cd /home/scantron/console\n    source .venv/bin/activate\n    python scan_scheduler_visualizer.py -d 10\n    ```\n\n5. 
View currently executing scan results\n\n    ```bash\n    cd /home/scantron/console/scan_results/pending\n    ls -lart\n    ```\n\n    Completed scans are moved to the `/home/scantron/console/scan_results/completed` directory.\n\n6. Process scans\n\n    Scan files are moved between a few folders.\n\n    `/home/scantron/console/scan_results/pending` - Pending scan files from engines are stored here before being moved\n    to `scan_results/complete`.\n\n    `/home/scantron/console/scan_results/complete` - Completed scan files from engines are stored here before being\n    processed by `nmap_to_csv.py`.\n\n    `/home/scantron/console/scan_results/processed` - nmap scan files already processed by `nmap_to_csv.py` reside here.\n\n    `/home/scantron/console/for_bigdata_analytics` - .csv files for big data analytics ingestion, if applicable.\n\n## Scantron API Client\n\nCheck out the Python [Scantron API client](https://github.com/rackerlabs/scantron/tree/master/scantron_api_client) for\ninteracting with the Scantron API and driving automated workflows.\n\n## API Documentation\n\nThere are 3 ways to explore and play around with the API.\n
The first is the Django REST Framework view:\n\n![api_django_rest_framework](./img/api_django_rest_framework.png)\n\nYou can also dig through the API documentation using ReDoc:\n\n![api_redoc](./img/api_redoc.png)\n\nLastly, you can interact with the API using Swagger:\n\n![api_swagger](./img/api_swagger.png)\n\n## Database Model Graph\n\nGenerated using `django-extensions`'s graph_models.\n\n![database_graph_model](./scantron_model_graph.png)\n\n## Contributing\n\nIf you would like to contribute, please adhere to the rules of the black Python code formatter, specifying a line length of 120.\n\nMore information about black can be found here (\u003chttps://github.com/ambv/black\u003e).\n\n## Robot Image\n\nRobot lovingly delivered by Robohash.org (\u003chttps://robohash.org\u003e).\n","funding_links":[],"categories":["Python","Related Lists","Python (1887)","\u003ca id=\"170048b7d8668c50681c0ab1e92c679a\"\u003e\u003c/a\u003e工具"],"sub_categories":["\u003ca id=\"94c01f488096fafc194b9a07f065594c\"\u003e\u003c/a\u003enmap"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frackerlabs%2Fscantron","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frackerlabs%2Fscantron","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frackerlabs%2Fscantron/lists"}