{"id":13620781,"url":"https://github.com/bitsofinfo/comms-analyzer-toolbox","last_synced_at":"2026-03-01T19:01:54.923Z","repository":{"id":32379293,"uuid":"101420778","full_name":"bitsofinfo/comms-analyzer-toolbox","owner":"bitsofinfo","description":"Tool for OSINT forensic analysis, search and graphing of communications content such as email MBOX files and CSV text message data using Elasticsearch and Kibana","archived":false,"fork":false,"pushed_at":"2022-11-30T20:18:38.000Z","size":811,"stargazers_count":77,"open_issues_count":2,"forks_count":7,"subscribers_count":4,"default_branch":"master","last_synced_at":"2025-04-13T06:42:26.930Z","etag":null,"topics":["analytics","android","csv","docker","elasticsearch","email","forensics","gmail","graphs","hotmail","imessage","iphone","kibana","mbox","osint","osint-tool","search","sms","text-messaging"],"latest_commit_sha":null,"homepage":"","language":"Dockerfile","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bitsofinfo.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-08-25T16:10:20.000Z","updated_at":"2025-03-26T00:22:45.000Z","dependencies_parsed_at":"2023-01-14T21:04:25.522Z","dependency_job_id":null,"html_url":"https://github.com/bitsofinfo/comms-analyzer-toolbox","commit_stats":null,"previous_names":[],"tags_count":4,"template":false,"template_full_name":null,"purl":"pkg:github/bitsofinfo/comms-analyzer-toolbox","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bitsofinfo%2Fcomms-analyzer-toolbox","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bitsofinfo%2Fcomms-analyzer-toolbox/tags","releases_url":"https://rep
os.ecosyste.ms/api/v1/hosts/GitHub/repositories/bitsofinfo%2Fcomms-analyzer-toolbox/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bitsofinfo%2Fcomms-analyzer-toolbox/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bitsofinfo","download_url":"https://codeload.github.com/bitsofinfo/comms-analyzer-toolbox/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bitsofinfo%2Fcomms-analyzer-toolbox/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29980793,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-01T16:35:47.903Z","status":"ssl_error","status_checked_at":"2026-03-01T16:35:44.899Z","response_time":124,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["analytics","android","csv","docker","elasticsearch","email","forensics","gmail","graphs","hotmail","imessage","iphone","kibana","mbox","osint","osint-tool","search","sms","text-messaging"],"created_at":"2024-08-01T21:00:59.403Z","updated_at":"2026-03-01T19:01:49.909Z","avatar_url":"https://github.com/bitsofinfo.png","language":"Dockerfile","readme":"# comms-analyzer-toolbox\n\nDocker image that provides a simplified OSINT toolset for the import and analysis of communications content from email 
[MBOX](https://en.wikipedia.org/wiki/Mbox) files, and other CSV data (such as text messages) using Elasticsearch and Kibana. This provides a single command that launches a full OSINT analytical software stack and imports all of your communications into it, ready for analysis w/ Kibana and ElasticSearch.\n\n* [Summary](#summary)\n* [Docker setup](#dockersetup)\n* Importing email from MBOX files\n  * [MBOX import summary](#mboxsummary)\n  * [Example: Export from Gmail](#gmailexample)\n  * [Example: Import emails from MBOX export file](#runningmbox)\n  * [MBOX import options](#mboxoptions)\n  * [Troubleshooting](#mboxwarn)\n* Importing data from CSV files\n  * [CSV import summary](#csvsummary)\n  * [Example: Export text messages from iPhone](#iphoneexample)\n  * [Example: Import text messages from CSV data file](#runningcsv)\n  * [CSV import options](#csvoptions)\n* [Analyze previously imported data](#analyzeonly)\n* [Expected warnings](#warn)\n* [Help/Resources](#help)\n* [Security/Privacy](#security)\n\n## \u003ca id=\"summary\"\u003e\u003c/a\u003e Summary\n\nThis project manages a Dockerfile to produce an image that, when run, starts both ElasticSearch and Kibana and then optionally imports communications data using the following tools bundled within the container:\n\n**IMPORTANT** *the links below are **FORKS** of the original projects due to outstanding issues w/ the original projects that were not fixed at the time of this project's development*\n\n* [elasticsearch-gmail](https://github.com/bitsofinfo/elasticsearch-gmail) Python scripts which import email data from an MBOX file. (See [this link](https://github.com/oliver006/elasticsearch-gmail/pulls?q=is%3Apr+author%3Abitsofinfo+is%3Aclosed) for the issues this fork addresses)\n* [csv2es](https://github.com/bitsofinfo/csv2es) Python scripts which can import any data from a CSV file. (See [this link](https://github.com/rholder/csv2es/pulls/bitsofinfo) for the issues this fork addresses)\n\nFrom there... 
well, you can analyze and visualize practically anything about your communications. Enjoy.\n\n![Diag1](/docs/diag1.png \"Diagram1\")\n\n![Diag2](/docs/diag2.png \"Diagram2\")\n\n## \u003ca id=\"dockersetup\"\u003e\u003c/a\u003eDocker setup\n\nBefore running the example below, you need [Docker](https://www.docker.com/get-docker) installed.\n\n* [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac)\n* [Docker Toolbox for Windows 10+ home or earlier versions](https://www.docker.com/products/docker-toolbox)\n* [Docker for Windows 10+ pro, enterprise, hyper-v capable](https://www.docker.com/docker-windows)\n\n**Windows Note**: When you `git clone` this project on Windows, be sure to add the git clone flag `--config core.autocrlf=input` before building. Example: `git clone https://github.com/bitsofinfo/comms-analyzer-toolbox.git --config core.autocrlf=input`. [Read more here](http://willi.am/blog/2016/08/11/docker-for-windows-dealing-with-windows-line-endings/)\n\nOnce Docker is installed, bring up a command line shell and type the following to build the docker image for the toolbox:\n\n```\ndocker build -t comms-analyzer-toolbox .\n```\n\n**Docker Toolbox for Windows notes**\n\nThe `default` docker machine VM created is likely too underpowered to run this out of the box. You will need to do the following to increase the CPU and memory of the local VirtualBox machine:\n\n1. Bring up a \"Docker Quickstart Terminal\"\n\n2. Remove the default machine: `docker-machine rm default`\n\n3. 
Recreate it: `docker-machine create -d virtualbox --virtualbox-cpu-count=[N cpus] --virtualbox-memory=[XXXX megabytes] --virtualbox-disk-size=[XXXXXX] default`\n\n**Troubleshooting error: \"max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]\"**\n\nIf you see this error when starting the toolbox (the error is reported by Elasticsearch), you will need to do the following on the docker host the container is being launched on.\n\n`sysctl -w vm.max_map_count=262144`\n\nIf you are using Docker Toolbox, you first have to shell into the boot2docker VM with `docker-machine ssh default` to run this command. Or do the following to make it permanent: https://github.com/docker/machine/issues/3859\n\n## \u003ca id=\"mboxsummary\"\u003e\u003c/a\u003eMBOX import summary\n\nEvery email message in your MBOX file becomes a separate document in ElasticSearch, where all email headers are indexed as individual fields and all body content is indexed after being stripped of HTML/CSS/JS.\n\nFor example, each email imported into the index has the following fields available for searching and analysis in Kibana (plus many, many more):\n\n* date_ts (epoch_millis timestamp in GMT/UTC)\n* to\n* from\n* cc\n* bcc\n* subject\n* body\n* body_size\n\n## \u003ca id=\"gmailexample\"\u003e\u003c/a\u003eExample: export Gmail email to mbox file\n\nOnce Docker is available on your system, before you run `comms-analyzer-toolbox` you need to have some email to analyze in MBOX format. As an example, below is how to export email from Gmail.\n\n1. Log in to your Gmail account with a web-browser on a computer\n\n2. Go to: https://takeout.google.com/settings/takeout\n\n3. On the screen that says **\"Download your data\"**, under the section **\"Select data to include\"** click on the **\"Select None\"** button. This will grey-out all the **\"Products\"** listed below it\n\n4. 
Now scroll down and find the greyed out section labeled **\"Mail\"** and click on the **X** checkbox on the right hand side. It will now turn green, indicating this data will be prepared for you to download.\n\n5. Scroll down and click on the blue **\"Next\"** button\n\n6. Leave the **\"Customize archive format\"** settings as-is and hit the **\"Create Archive\"** button\n\n7. This will now take you to a **\"We're preparing your archive.\"** screen. This might take a few hours depending on the size of all the email you have.\n\n8. You will receive an email from Google when the archive is ready to download. When you get it, download the zip file to your local computer's hard drive; it will be named something like `takeout-[YYYYMMDD..].zip`\n\n9. Once saved to your hard drive, unzip the file. All of your exported mail from Gmail will live in an **mbox** export file in the `Takeout/Mail/` folder, in the file `All mail Including Spam and Trash.mbox`\n\n10. You should rename this file to something simpler, like `my-email.mbox`\n\n11. 
Take note of the location of your *.mbox* file as you will use it below when running the toolbox.\n\n\n## \u003ca id=\"runningmbox\"\u003e\u003c/a\u003eRunning: import emails for analysis\n\nBefore running the example below, you need [Docker](#dockersetup) installed.\n\nBring up a terminal or command prompt on your computer and run the following. Before doing so, replace `PATH/TO/YOUR/my-email.mbox` and `PATH/TO/ELASTICSEARCH_DATA_DIR` below with the proper paths on your local system.\n\n*Note: if using Docker Toolbox for Windows*: All of the mounted volumes below should live somewhere under your home directory under `c:\\Users\\[your username]\\...` due to permissions issues.\n\n```\ndocker run --rm -ti -p 5601:5601 \\\n  --ulimit nofile=65536:65536 \\\n  -v PATH/TO/YOUR/my-email.mbox:/toolbox/email.mbox \\\n  -v PATH/TO/ELASTICSEARCH_DATA_DIR:/toolbox/elasticsearch/data \\\n  comms-analyzer-toolbox:latest \\\n  python /toolbox/elasticsearch-gmail/src/index_emails.py \\\n  --infile=/toolbox/email.mbox \\\n  --init=[True | False] \\\n  --index-bodies=True \\\n  --index-bodies-ignore-content-types=application,image \\\n  --index-bodies-html-parser=html5lib \\\n  --index-name=comm_data\n```\n\nSetting `--init=True` will delete and re-create the `comm_data` index. Setting `--init=False` will retain whatever data already exists.\n\nThe console will log what is going on. When the system is booted up, you can bring up a web browser on your desktop and go to *http://localhost:5601* to start using Kibana to analyze your data. *Note: if running Docker Toolbox, 'localhost' might not work; execute `docker-machine env default` to determine your docker host's IP address, then go to http://[machine-ip]:5601*\n\nOn the first screen that says `Configure an index pattern`, in the field labeled `Index name or pattern`, type `comm_data`. You will then see the `date_ts` field auto-selected; hit the `Create` button. 
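The `date_ts` field that Kibana auto-selects is an epoch_millis timestamp in GMT/UTC, derived from each message's `Date` header. As a rough sketch of that conversion using only Python's standard library (the actual logic lives in the bundled elasticsearch-gmail importer and may differ; `to_date_ts` is a hypothetical helper for illustration):

```python
from email.utils import parsedate_to_datetime

def to_date_ts(date_header: str) -> int:
    """Convert an RFC 2822 Date header to epoch milliseconds (UTC)."""
    dt = parsedate_to_datetime(date_header)  # timezone-aware datetime
    return int(dt.timestamp() * 1000)        # epoch_millis, normalized to UTC

# Two headers naming the same instant in different timezones
# map to the same date_ts value.
ts_utc = to_date_ts("Thu, 17 Aug 2017 12:00:00 +0000")
ts_est = to_date_ts("Thu, 17 Aug 2017 08:00:00 -0400")
```

Because every `date_ts` is normalized to UTC, Kibana's time filters and date histograms bucket messages consistently regardless of the sender's timezone.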
From there Kibana is ready to use!\n\nLaunching does several things in the following order:\n\n1. Starts ElasticSearch (where your indexed emails are stored)\n2. Starts Kibana (the user-interface to query the index)\n3. Starts the mbox importer\n\nWhen the mbox importer is running you will see entries like the following in the logs as the system imports your mail from the mbox file\n\n```\n...\n[I 170825 18:46:53 index_emails:96] Upload: OK - upload took:  467ms, total messages uploaded:   1000\n[I 170825 18:48:23 index_emails:96] Upload: OK - upload took:  287ms, total messages uploaded:   2000\n...\n```\n\n## \u003ca id=\"mboxoptions\"\u003e\u003c/a\u003eToolbox MBOX import options\n\nWhen running the `comms-analyzer-toolbox` image, one of the arguments invokes the [elasticsearch-gmail](https://github.com/bitsofinfo/elasticsearch-gmail) script, which takes the following arguments. You can adjust the `docker run` command above to pass the following flags as you please:\n\n```\nUsage: /toolbox/elasticsearch-gmail/src/index_emails.py [OPTIONS]\n\nOptions:\n\n  --help                           show this help information\n\n/toolbox/elasticsearch-gmail/src/index_emails.py options:\n\n  --batch-size                     Elasticsearch bulk index batch size (default\n                                   500)\n  --es-url                         URL of your Elasticsearch node (default\n                                   http://localhost:9200)\n  --index-bodies                   Will index all body content, stripped of\n                                   HTML/CSS/JS etc. Adds fields: 'body',\n                                   'body_size' and 'body_filenames' for any\n                                   multi-part attachments (default False)\n  --index-bodies-html-parser       The BeautifulSoup parser to use for\n                                   HTML/CSS/JS stripping. 
Valid values\n                                   'html.parser', 'lxml', 'html5lib' (default\n                                   html.parser)\n  --index-bodies-ignore-content-types\n                                   If --index-bodies enabled, optional list of\n                                   body 'Content-Type' header keywords to match\n                                   to ignore and skip decoding/indexing. For\n                                   all ignored parts, the content type will be\n                                   added to the indexed field\n                                   'body_ignored_content_types' (default\n                                   application,image)\n  --index-name                     Name of the index to store your messages\n                                   (default gmail)\n  --infile                         The mbox input file\n\n  --init                           Force deleting and re-initializing the\n                                   Elasticsearch index (default False)\n  --num-of-shards                  Number of shards for ES index (default 2)\n\n  --skip                           Number of messages to skip from the mbox\n                                   file (default 0)\n\n```\n\n## \u003ca id=\"mboxwarn\"\u003e\u003c/a\u003e MBOX import expected warnings\n\nWhen importing MBOX email data, you may see warnings/errors like the following in the log output.\n\nThey are expected and OK; they are simply warnings about special characters that could not be decoded, etc.\n\n```\n...\n/usr/lib/python2.7/site-packages/bs4/__init__.py:282: UserWarning: \"https://someurl.com/whatever\" looks like a URL. Beautiful Soup is not an HTTP client. You should probably use an HTTP client like requests to get the document behind the URL, and feed that document to Beautiful Soup.\n  ' that document to Beautiful Soup.' 
% decoded_markup\n[W 170825 18:41:56 dammit:381] Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.\n[W 170825 18:41:56 dammit:381] Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.\n...\n```\n\n\n## \u003ca id=\"csvsummary\"\u003e\u003c/a\u003eCSV import summary\n\nThe CSV import tool `csv2es` embedded in the toolbox can import ANY CSV file, not just the example format below.\n\nEvery row of data in a CSV file becomes a separate document in ElasticSearch, where all CSV columns are indexed as individual fields.\n\nFor example, each line in the CSV data file below (text messages from an iPhone) imported into the index has the following fields available for searching and analysis in Kibana\n\n```\n\"Name\",\"Address\",\"date_ts\",\"Message\",\"Attachment\",\"iMessage\"\n\"Me\",\"+1 555-555-5555\",\"7/17/2016 9:21:39 AM\",\"How are you doing?\",\"\",\"True\"\n\"Joe Smith\",\"+1 555-444-4444\",\"7/17/2016 9:38:56 AM\",\"Pretty good you?\",\"\",\"True\"\n\"Me\",\"+1 555-555-5555\",\"7/17/2016 9:39:02 AM\",\"Great!\",\"\",\"True\"\n....\n```\n\n* date_ts (epoch_millis timestamp in GMT/UTC)\n* name\n* address\n* message\n* attachment\n* imessage\n\n*The above text messages export CSV is just an example.* The `csv2es` tool that is bundled with the toolbox *can import ANY data set you want*, not just the example format above.\n\n## \u003ca id=\"iphoneexample\"\u003e\u003c/a\u003eExample: Export text messages from iPhone\n\nOnce Docker is available on your system, before you run `comms-analyzer-toolbox` you need to have some data to analyze in CSV format. As an example, below is how to export text messages from an iPhone to a CSV file.\n\n1. Export iPhone messages using [iExplorer for mac or windows](https://macroplant.com/iexplorer/tutorials/how-to-transfer-and-backup-sms-and-imessages)\n\n2. 
Edit the generated CSV file and change the first row's header value of `\"Time\"` to `\"date_ts\"`, save and exit.\n\n3. Take note of the location of your *.csv* file as you will use it below when running the toolbox.\n\n## \u003ca id=\"runningcsv\"\u003e\u003c/a\u003eRunning: import CSV of text messages for analysis\n\nBefore running the example below, you need [Docker](#dockersetup) installed.\n\nThe example below is specifically for a CSV data file containing text message data exported using [iExplorer](https://macroplant.com/iexplorer)\n\n*Contents of data.csv*\n```\n\"Name\",\"Address\",\"date_ts\",\"Message\",\"Attachment\",\"iMessage\"\n\"Me\",\"+1 555-555-5555\",\"7/17/2016 9:21:39 AM\",\"How are you doing?\",\"\",\"True\"\n\"Joe Smith\",\"+1 555-444-4444\",\"7/17/2016 9:38:56 AM\",\"Pretty good you?\",\"\",\"True\"\n\"Me\",\"+1 555-555-5555\",\"7/17/2016 9:39:02 AM\",\"Great!\",\"\",\"True\"\n....\n```\n\n*Contents of csvdata.mapping.json*\n```\n{\n    \"dynamic\": \"true\",\n    \"properties\": {\n        \"date_ts\": {\"type\": \"date\" },\n        \"name\": {\"type\": \"string\", \"index\" : \"not_analyzed\"},\n        \"address\": {\"type\": \"string\", \"index\" : \"not_analyzed\"},\n        \"imessage\": {\"type\": \"string\", \"index\" : \"not_analyzed\"}\n    }\n}\n```\n\nBring up a terminal or command prompt on your computer and run the following. Before doing so, replace `PATH/TO/YOUR/data.csv`, `PATH/TO/YOUR/csvdata.mapping.json` and `PATH/TO/ELASTICSEARCH_DATA_DIR` below with the proper paths on your local system.\n\n*Note: if using Docker Toolbox for Windows*: All of the mounted volumes below should live somewhere under your home directory under `c:\\Users\\[your username]\\...` due to permissions issues.\n\n```\ndocker run --rm -ti -p 5601:5601 \\\n  -v PATH/TO/YOUR/data.csv:/toolbox/data.csv \\\n  -v PATH/TO/YOUR/csvdata.mapping.json:/toolbox/csvdata.mapping.json \\\n  -v 
PATH/TO/ELASTICSEARCH_DATA_DIR:/toolbox/elasticsearch/data \\\n  comms-analyzer-toolbox:latest \\\n  python /toolbox/csv2es/csv2es.py \\\n    [--existing-index \\]\n    [--delete-index \\]\n    --index-name comm_data \\\n    --doc-type txtmsg \\\n    --mapping-file /toolbox/csvdata.mapping.json \\\n    --import-file /toolbox/data.csv \\\n    --delimiter ',' \\\n    --csv-clean-fieldnames \\\n    --csv-date-field date_ts \\\n    --csv-date-field-gmt-offset -1\n```\n\nIf running against a pre-existing `comm_data` index, make sure to include only the `--existing-index` flag. If you want to re-create the `comm_data` index prior to import, include only the `--delete-index` flag.\n\nThe console will log what is going on. When the system is booted up, you can bring up a web browser on your desktop and go to *http://localhost:5601* to start using Kibana to analyze your data. *Note: if running Docker Toolbox, 'localhost' might not work; execute `docker-machine env default` to determine your docker host's IP address, then go to http://[machine-ip]:5601*\n\nOn the first screen that says `Configure an index pattern`, in the field labeled `Index name or pattern`, type `comm_data`. You will then see the `date_ts` field auto-selected; hit the `Create` button. From there Kibana is ready to use!\n\nLaunching does several things in the following order:\n\n1. Starts ElasticSearch (where your indexed CSV data is stored)\n2. Starts Kibana (the user-interface to query the index)\n3. Starts the CSV file importer\n\nWhen the CSV importer is running, you will see progress entries in the logs as the system imports your data from the CSV file.\n\n## \u003ca id=\"csvoptions\"\u003e\u003c/a\u003eToolbox CSV import options\n\nWhen running the `comms-analyzer-toolbox` image, one of the arguments invokes the [csv2es](https://github.com/bitsofinfo/csv2es) script, which takes the following arguments. 
You can adjust the `docker run` command above to pass the following flags as you please:\n\n```\nUsage: /toolbox/csv2es/csv2es.py [OPTIONS]\n\n  Bulk import a delimited file into a target Elasticsearch instance. Common\n  delimited files include things like CSV and TSV.\n\n  Load a CSV file:\n    csv2es --index-name potatoes --doc-type potato --import-file potatoes.csv\n\n  For a TSV file, note the tab delimiter option\n    csv2es --index-name tomatoes --doc-type tomato --import-file tomatoes.tsv --tab\n\n  For a nifty pipe-delimited file (delimiters must be one character):\n    csv2es --index-name pipes --doc-type pipe --import-file pipes.psv --delimiter '|'\n\nOptions:\n  --index-name TEXT               Index name to load data into\n                                  [required]\n  --doc-type TEXT                 The document type (like user_records)\n                                  [required]\n  --import-file TEXT              File to import (or '-' for stdin)\n                                  [required]\n  --mapping-file TEXT             JSON mapping file for index\n  --delimiter TEXT                The field delimiter to use, defaults to CSV\n  --tab                           Assume tab-separated, overrides delimiter\n  --host TEXT                     The Elasticsearch host\n                                  (http://127.0.0.1:9200/)\n  --docs-per-chunk INTEGER        The documents per chunk to upload (5000)\n  --bytes-per-chunk INTEGER       The bytes per chunk to upload (100000)\n  --parallel INTEGER              Parallel uploads to send at once, defaults\n                                  to 1\n  --delete-index                  Delete existing index if it exists\n  --existing-index                Don't create index.\n  --quiet                         Minimize console output\n  --csv-clean-fieldnames          Strips double quotes and lower-cases all CSV\n                                  header names for proper ElasticSearch\n                                 
 fieldnames\n  --csv-date-field TEXT           The CSV header name that represents a date\n                                  string to be parsed (via python-dateutil) into\n                                  an ElasticSearch epoch_millis\n  --csv-date-field-gmt-offset INTEGER\n                                  The GMT offset for the csv-date-field (i.e.\n                                  +/- N hours)\n  --tags TEXT                     Custom static key1=val1,key2=val2 pairs to\n                                  tag all entries with\n  --version                       Show the version and exit.\n  --help                          Show this message and exit.\n```\n\n## \u003ca id=\"analyzeonly\"\u003e\u003c/a\u003eRunning: analyze previously imported data\n\nRunning in this mode launches only elasticsearch and kibana and does not import anything. It just brings up the\ntoolbox so you can analyze previously imported data that resides in elasticsearch.\n\n*Note: if using Docker Toolbox for Windows*: All of the mounted volumes below should live somewhere under your home directory under `c:\\Users\\[your username]\\...` due to permissions issues.\n\n```\ndocker run --rm -ti -p 5601:5601 \\\n  -v PATH/TO/ELASTICSEARCH_DATA_DIR:/toolbox/elasticsearch/data \\\n  comms-analyzer-toolbox:latest \\\n  analyze-only\n```\n\nIf you want to control the default ElasticSearch JVM memory heap options, you can do so via\na Docker environment variable, e.g. 
`-e ES_JAVA_OPTS=\"-Xmx1g -Xms1g\"` etc.\n\n## \u003ca id=\"help\"\u003e\u003c/a\u003eHelp/Resources\n\n### Gmail\n* [Exporting Gmail](https://www.lifewire.com/how-to-export-your-emails-from-gmail-as-mbox-files-1171881)\n* [Gmail download data](https://support.google.com/accounts/answer/3024190?hl=en)\n\n### iPhone text messages\n* [Exporting text messages from iPhone to CSV](https://macroplant.com/iexplorer/tutorials/how-to-transfer-and-backup-sms-and-imessages)\n\n### Hotmail/Outlook\n\nFor Hotmail/Outlook, you need to export to PST and then, as a second step, convert to MBOX:\n\n* https://support.microsoft.com/en-us/help/980534/export-windows-live-mail-email--contacts--and-calendar-data-to-outlook\n* https://gallery.technet.microsoft.com/Convert-PST-to-MBOX-25f4bb0e\n* http://www.hotmail.googleapps--backup.com/pst\n* https://steemit.com/hotmail/@ariyantoooo/how-to-export-hotmail-to-pst\n* http://www.techhit.com/outlook/convert_outlook_mbox.html\n* https://gallery.technet.microsoft.com/office/PST-to-MBOX-Converter-to-e5ae03ae\n\n### Kibana, graphs, searching\n* [Kibana 5 tutorial](https://www.youtube.com/watch?v=mMhnGjp8oOI)\n* [Kibana 101](https://www.elastic.co/webinars/getting-started-kibana?baymax=default\u0026elektra=docs\u0026storm=top-video)\n* [Kibana getting started](https://www.elastic.co/guide/en/kibana/current/getting-started.html)\n* [Kibana introduction](https://www.timroes.de/2016/10/23/kibana5-introduction/)\n* [Kibana logz.io tutorial](https://logz.io/blog/kibana-tutorial/)\n* [Kibana search syntax](https://www.elastic.co/guide/en/kibana/current/search.html)\n\n\n## \u003ca id=\"security\"\u003e\u003c/a\u003e Security/Privacy\n\nUsing this tool is completely local to whatever machine you are running it on (i.e. your Docker host). 
In the case of running it on your laptop or desktop computer, it's 100% local.\n\nData is not uploaded or transferred anywhere; it lives only on the local disk of the Docker host this is running on.\n\nTo completely remove the data analyzed, you can `docker rm -f [container-id]` the `comms-analyzer-toolbox` container running on your machine.\n\nIf you mounted the elasticsearch data directory via a volume on the host (i.e. `-v PATH/TO/ELASTICSEARCH_DATA_DIR:/toolbox/elasticsearch/data`), that local directory is where all the indexed data resides on disk.\n","funding_links":[],"categories":["Dockerfile","\u003ca id=\"ecb63dfb62722feb6d43a9506515b4e3\"\u003e\u003c/a\u003eNewly added"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbitsofinfo%2Fcomms-analyzer-toolbox","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbitsofinfo%2Fcomms-analyzer-toolbox","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbitsofinfo%2Fcomms-analyzer-toolbox/lists"}