{"id":13453810,"url":"https://github.com/corelight/zeek2es","last_synced_at":"2026-01-16T19:20:53.822Z","repository":{"id":81487569,"uuid":"442491504","full_name":"corelight/zeek2es","owner":"corelight","description":"A Python application to filter and transfer Zeek logs to Elastic/OpenSearch+Humio.  This app can also output pure JSON logs to stdout for further processing!","archived":false,"fork":false,"pushed_at":"2022-08-18T13:23:02.000Z","size":3469,"stargazers_count":35,"open_issues_count":0,"forks_count":7,"subscribers_count":5,"default_branch":"master","last_synced_at":"2024-10-28T20:39:46.280Z","etag":null,"topics":["elasticsearch","humio","kibana","opensearch","python","zeek"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/corelight.png","metadata":{"files":{"readme":"Readme.md","changelog":"CHANGES","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-12-28T14:39:54.000Z","updated_at":"2024-10-03T13:37:57.000Z","dependencies_parsed_at":"2023-07-10T16:01:26.144Z","dependency_job_id":null,"html_url":"https://github.com/corelight/zeek2es","commit_stats":null,"previous_names":[],"tags_count":54,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/corelight%2Fzeek2es","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/corelight%2Fzeek2es/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/corelight%2Fzeek2es/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/corelight%2Fzeek2es/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/corelight","download_url":"https://codeload.github.com/corelight/zeek2es/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245194315,"owners_count":20575740,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["elasticsearch","humio","kibana","opensearch","python","zeek"],"created_at":"2024-07-31T08:00:47.853Z","updated_at":"2026-01-16T19:20:52.723Z","avatar_url":"https://github.com/corelight.png","language":"Python","readme":"# zeek2es.py\n\nThis Python application translates [Zeek's](https://zeek.org/) ASCII TSV and JSON\nlogs into [ElasticSearch's bulk load JSON format](https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html#add-multiple-documents).\n\n## Table of Contents:\n- [Introduction](#introduction)\n- [Installation](#installation)\n  - [Elastic v8.0+](#elastic80)\n  - [Docker](#docker)\n- [Upgrading zeek2es](#upgradingzeek2es)\n  - [ES Ingest Pipeline](#esingestpipeline)\n- [Filtering Data](#filteringdata)\n  - [Python Filters](#pythonfilters)\n  - [Filter on 
Keys](#filteronkeys)\n- [Command Line Examples](#commandlineexamples)\n- [Command Line Options](#commandlineoptions)\n- [Requirements](#requirements)\n- [Notes](#notes)\n  - [Humio](#humio)\n  - [JSON Log Input](#jsonloginput)\n  - [Data Streams](#datastreams)\n  - [Helper Scripts](#helperscripts)\n  - [Cython](#cython)\n\n## Introduction \u003ca name=\"introduction\" /\u003e\n\n![Kibana](images/kibana.png)\n\nWant to see multiple Zeek logs for the same connection ID (uid)\nor file ID (fuid)?  Here are the hits from files.log, http.log, and\nconn.log for a single uid:\n\n![Kibana](images/multi-log-correlation.png)\n\nYou can perform subnet searching on Zeek's 'addr' type:\n\n![Kibana Subnet Searching](images/kibana-subnet-search.png)\n\nYou can create time series graphs, such as this NTP and HTTP graph:\n\n![Kibana Time Series](images/kibana-timeseries.png)\n\nIP addresses can be geolocated with the `-g` command line option:\n\n![Kibana Mapping](images/kibana-map.png)\n\nAggregations are simple and quick:\n\n![Kibana Aggregation](images/kibana-aggregation.png)\n\nThis application will \"just work\" when Zeek log formats change.  The logic reads\nthe field names and associated types to set up the mappings correctly in\nElasticSearch.\n\nThis application recognizes gzipped or uncompressed logs.  It assumes \nyou have ElasticSearch set up on your localhost at the default port.\nIf you do not have ElasticSearch you can output the JSON to stdout with the `-s -b` command line options\nto process with the [jq application](https://stedolan.github.io/jq).\n\nYou can add a keyword subfield to text fields with the `-k` command line option.  This is useful\nfor aggregations in Kibana.\n\nIf Python is already on your system, the only additional pieces to copy to your machine are\n[Elasticsearch, Kibana](https://www.elastic.co/start), and [zeek2es.py](zeek2es.py),\nassuming you have the [requests](https://docs.python-requests.org/en/latest/) library installed.\n\n## Installation \u003ca name=\"installation\" /\u003e\n\nAssuming you meet the [requirements](#requirements), there is none.  You just \ncopy [zeek2es.py](zeek2es.py) to your host and run it with Python.  Once Zeek\nlogs have been imported with automatic index name generation (meaning, you did not supply the `-i` option)\nyou will find your indices named \"zeek_`zeeklogname`_`date`\", where `zeeklogname` is a log name like `conn`\nand the `date` is in `YYYY-MM-DD` format.  Set your Kibana index pattern to match `zeek*` in this case.  If\nyou named your index with the `-i` option, you will need to create a Kibana index pattern that \nmatches your naming scheme.\n\nIf you are upgrading zeek2es, please see [the section on upgrading zeek2es](#upgradingzeek2es).\n\n### Elastic v8.0+ \u003ca name=\"elastic80\" /\u003e\n\nElastic v8.0+ has security enabled by default, which adds a requirement of a username\nand password, plus HTTPS.\n\nIf you want to be able to delete indices/data streams with wildcards (as examples in this readme show),\nedit `elasticsearch.yml` with the following line:\n\n```\naction.destructive_requires_name: false\n```\n\nYou will also need to change the curl commands in this readme to contain `-k -u elastic:\u003cpassword\u003e`\nwhere the `elastic` user's password is set with a command like the following:\n\n```\n./bin/elasticsearch-reset-password -u elastic -i\n```\n\nYou can use `zeek2es.py` with the `--user` and `--passwd` command line options to specify your\ncredentials to ES.  You can also supply these options via the extra command line arguments for the helper\nscripts.
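\n\nFor example, a run against a secured v8 cluster might look like the following (the log file name and password are placeholders):\n\n```\npython zeek2es.py conn.log.gz -u https://localhost:9200/ --user elastic --passwd yourpassword\n```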
\n\n### Docker \u003ca name=\"docker\" /\u003e\n\nProbably the easiest way to use this code is through Docker.  All of the files are in the `docker` directory.\nFirst, you will want to edit the lines with `CHANGEME!!!` in the `.env` file to fit your environment.  \nYou will also need to edit the Elastic password in `docker/zeek2es/entrypoint.sh` to match.  It can be found after the `--passwd` option.  \nNext, you can change directory into the `docker` directory and type the following commands to bring \nup a zeek2es and Elasticsearch cluster:\n\n```\ndocker-compose build\ndocker-compose up\n```\n\nNow you can put logs in the `VOLUME_MOUNT/data/logs` directory (where `VOLUME_MOUNT` is the path you set in the `.env` file).\nWhen logs are CREATED in this directory, zeek2es will begin processing them and pushing them into Elasticsearch.\nYou can then log in to https://localhost:5601 with the username and password you set up in the `.env` file.  \nBy default there is a self-signed certificate, but you can change that if you edit the docker compose files.  Once inside\nKibana you will go to Stack Management-\u003eData Views and create a data view for `logs*` with the timestamp `@timestamp`.\nNow you will be able to go to Discover and start searching your logs!  Your data is persistent in the `VOLUME_MOUNT/data` directory you set.\nIf you would like to remove all data, just `rm -rf VOLUME_MOUNT/data`, substituting the directory you set into that remove command.\nThe next time you start your cluster it will be brand new for more data.\n\n## Upgrading zeek2es \u003ca name=\"upgradingzeek2es\" /\u003e\n\nMost upgrades should be as simple as copying the newer [zeek2es.py](zeek2es.py) over \nthe old one.  In some cases, the ES ingest pipeline required for the `-g` command line option \nmight change during an upgrade.  Therefore, it is strongly recommended that you delete \nyour [ingest pipeline](#esingestpipeline) before you run a new version of zeek2es.py.\n\n### ES Ingest Pipeline \u003ca name=\"esingestpipeline\" /\u003e\n\nIf you need to [delete the \"zeekgeoip\" ES ingest pipeline](https://www.elastic.co/guide/en/elasticsearch/reference/current/delete-pipeline-api.html) \nused to geolocate IP addresses with the `-g` command line option, you can either do it graphically\nthrough Kibana's Stack Management-\u003eIngest Pipelines or with this command:\n\n```\ncurl -X DELETE \"localhost:9200/_ingest/pipeline/zeekgeoip?pretty\"\n```\n\nRunning this command is strongly recommended whenever you update your copy of zeek2es.py.\n\n## Filtering Data \u003ca name=\"filteringdata\" /\u003e\n\n### Python Filters \u003ca name=\"pythonfilters\" /\u003e\n\nzeek2es provides filtering capabilities for your Zeek logs before they are stored in ElasticSearch.  This\nfunctionality can be enabled with the `-a` or `-f` options.  The filters are constructed from Python\nlambda functions, where the input is a Python dictionary representing the output JSON document.  Using the `-f` option with\nthis lambda filter file, you can add a filter to only store connection logs where the `service` field is populated:\n\n```\nlambda x: 'service' in x and len(x['service']) \u003e 0\n```\n\nOr maybe you'd like to filter for connections that have at least 1,024 bytes, with at least 1 byte coming from \nthe destination:\n\n```\nlambda x: 'orig_ip_bytes' in x and 'resp_ip_bytes' in x and x['orig_ip_bytes'] + x['resp_ip_bytes'] \u003e 1024 and x['resp_ip_bytes'] \u003e 0\n```\n\nSimpler lambda filters can be provided on the command line via the `-a` option.  This filter will only store \nconnection log entries where the originator IP address is part of the `192.0.0.0/8` network:\n\n```\npython zeek2es.py conn.log.gz -a \"lambda x: 'id.orig_h' in x and ipaddress.ip_address(x['id.orig_h']) in ipaddress.ip_network('192.0.0.0/8')\"\n```\n\nFor power users, the `-f` option allows you to define a full function (instead of a lambda) so you can write filters that \nspan multiple lines.
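\n\nAs a hypothetical sketch (the exact form zeek2es.py expects in a filter file is defined by the script itself, and the function name here is purely illustrative), a multi-line filter might look like:\n\n```\n# Illustrative multi-line filter: keep entries with a populated service\n# field and at least one byte coming back from the responder.\ndef keep(x):\n    if 'service' not in x or len(x['service']) == 0:\n        return False\n    return x.get('resp_ip_bytes', 0) \u003e 0\n```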
\n\n### Filter on Keys \u003ca name=\"filteronkeys\" /\u003e\n\nIn some instances you might want to pull data from one log that depends on another.  An\nexample would be finding all `ssl.log` rows that have a `uid` matching previously\nindexed rows from `conn.log`, or vice versa.  You can filter by importing your\n`conn.log` files with the `-o uid uid.txt` command line.  This will write every uid that was \nindexed to a file named `uid.txt`.  Then, when you import your `ssl.log` files you will provide \nthe `-e uid uid.txt` command line.  This will only import SSL rows \ncontaining `uid` values that are in `uid.txt`, previously built from our import of `conn.log`.
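\n\nConcretely, the two imports might look like this (the file names are illustrative):\n\n```\npython zeek2es.py conn.log.gz -o uid uid.txt\npython zeek2es.py ssl.log.gz -e uid uid.txt\n```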
\n\n## Command Line Examples \u003ca name=\"commandlineexamples\" /\u003e\n\n```\npython zeek2es.py your_zeek_log.gz -i your_es_index_name\n```\n\nThis script can be run in parallel on all connection logs, 10 at a time, with the following command:\n\n```\nfind /some/dir -name \"conn*.log.gz\" | parallel -j 10 python zeek2es.py {1} :::: -\n```\n\nIf you would like to automatically import all conn.log files as they are created in a directory, the following\n[fswatch](https://emcrisostomo.github.io/fswatch/) command will do that for you:\n\n```\nfswatch -m poll_monitor --event Created -r /data/logs/zeek/ | awk '/^.*\\/conn.*\\.log\\.gz$/' | parallel -j 5 python ~/zeek2es.py {} -g -d :::: -\n```\n\nIf you have the jq command installed you can perform searches across all your logs for a common\nfield like connection uid, even without ElasticSearch:\n\n```\nfind /usr/local/var/logs -name \"*.log.gz\" -exec python ~/Source/zeek2es/zeek2es.py {} -s -b -z \\; | jq -c '. | select(.uid==\"CLbPij1vThLvQ2qDKh\")'\n```\n\nYou can use much more complex jq queries than this if you are familiar with jq.
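\n\nFor instance, a hypothetical query that keeps only connections to port 443 and trims each entry down to a few fields might look like this (assuming TSV input, where `id.resp_p` is emitted as a number):\n\n```\nfind /usr/local/var/logs -name \"conn*.log.gz\" -exec python ~/Source/zeek2es/zeek2es.py {} -s -b -z \\; | jq -c 'select(.[\"id.resp_p\"]==443) | {ts, uid, orig: .[\"id.orig_h\"]}'\n```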
\n\nIf you want to remove all of your Zeek data from ElasticSearch, this command will do it for you:\n\n```\ncurl -X DELETE http://localhost:9200/zeek*\n```\n\nSince the indices have the date appended to them, you could\ndelete Dec 31, 2021 with the following command:\n\n```\ncurl -X DELETE http://localhost:9200/zeek_*_2021-12-31\n```\n\nYou could delete all conn.log entries with this command:\n\n```\ncurl -X DELETE http://localhost:9200/zeek_conn_*\n```\n\n## Command Line Options \u003ca name=\"commandlineoptions\" /\u003e\n\n```\n$ python zeek2es.py -h\nusage: zeek2es.py [-h] [-i ESINDEX] [-u ESURL] [--user USER] [--passwd PASSWD]\n                  [-l LINES] [-n NAME] [-k KEYWORDS [KEYWORDS ...]]\n                  [-a LAMBDAFILTER] [-f FILTERFILE]\n                  [-y OUTPUTFIELDS [OUTPUTFIELDS ...]] [-d DATASTREAM]\n                  [--compress] [-o fieldname filename] [-e fieldname filename]\n                  [-g] [-p SPLITFIELDS [SPLITFIELDS ...]] [-j] [-r] [-t] [-s]\n                  [-b] [--humio HUMIO HUMIO] [-c] [-w] [-z]\n                  filename\n\nProcess Zeek ASCII logs into ElasticSearch.\n\npositional arguments:\n  filename              The Zeek log in *.log or *.gz format.  Include the full path.\n\noptional arguments:\n  -h, --help            show this help message and exit\n  -i ESINDEX, --esindex ESINDEX\n                        The Elasticsearch index/data stream name.\n  -u ESURL, --esurl ESURL\n                        The Elasticsearch URL.  Use ending slash.  Use https for Elastic v8+. (default: http://localhost:9200)\n  --user USER           The Elasticsearch user. (default: disabled)\n  --passwd PASSWD       The Elasticsearch password. Note this will put your password in your shell history file.  (default: disabled)\n  -l LINES, --lines LINES\n                        Lines to buffer for RESTful operations. (default: 10,000)\n  -n NAME, --name NAME  The name of the system to add to the index for uniqueness. (default: empty string)\n  -k KEYWORDS [KEYWORDS ...], --keywords KEYWORDS [KEYWORDS ...]\n                        A list of text fields to add a keyword subfield. (default: service)\n  -a LAMBDAFILTER, --lambdafilter LAMBDAFILTER\n                        A Python lambda function, when eval'd will filter your output JSON dict. (default: empty string)\n  -f FILTERFILE, --filterfile FILTERFILE\n                        A Python function file, when eval'd will filter your output JSON dict. (default: empty string)\n  -y OUTPUTFIELDS [OUTPUTFIELDS ...], --outputfields OUTPUTFIELDS [OUTPUTFIELDS ...]\n                        A list of fields to keep for the output.  Must include ts. (default: empty string)\n  -d DATASTREAM, --datastream DATASTREAM\n                        Instead of an index, use a data stream that will rollover at this many GB.\n                        Recommended is 50 or less.  (default: 0 - disabled)\n  --compress            If a datastream is used, enable best compression.\n  -o fieldname filename, --logkey fieldname filename\n                        A field to log to a file.  Example: uid uid.txt.  \n                        Will append to the file!  Delete file before running if appending is undesired.  \n                        This option can be called more than once.  (default: empty - disabled)\n  -e fieldname filename, --filterkeys fieldname filename\n                        A field to filter with keys from a file.  Example: uid uid.txt.  (default: empty string - disabled)\n  -g, --ingestion       Use the ingestion pipeline to do things like geolocate IPs and split services.  Takes longer, but worth it.\n  -p SPLITFIELDS [SPLITFIELDS ...], --splitfields SPLITFIELDS [SPLITFIELDS ...]\n                        A list of additional fields to split with the ingestion pipeline, if enabled.\n                        (default: empty string - disabled)\n  -j, --jsonlogs        Assume input logs are JSON.\n  -r, --origtime        Keep the numerical time format, not milliseconds as ES needs.\n  -t, --timestamp       Keep the time in timestamp format.\n  -s, --stdout          Print JSON to stdout instead of sending to Elasticsearch directly.\n  -b, --nobulk          Remove the ES bulk JSON header.  Requires --stdout.\n  --humio HUMIO HUMIO   First argument is the Humio URL, the second argument is the ingest token.\n  -c, --cython          Use Cython execution by loading the local zeek2es.so file through an import.\n                        Run python setup.py build_ext --inplace first to make your zeek2es.so file!\n  -w, --hashdates       Use hashes instead of dates for the index name.\n  -z, --supresswarnings\n                        Suppress any type of warning.  Die stoically and silently.\n\nTo delete indices:\n\n\tcurl -X DELETE http://localhost:9200/zeek*?pretty\n\nTo delete data streams:\n\n\tcurl -X DELETE http://localhost:9200/_data_stream/zeek*?pretty\n\nTo delete index templates:\n\n\tcurl -X DELETE http://localhost:9200/_index_template/zeek*?pretty\n\nTo delete the lifecycle policy:\n\n\tcurl -X DELETE http://localhost:9200/_ilm/policy/zeek-lifecycle-policy?pretty\n\nYou will need to add -k -u elastic_user:password if you are using Elastic v8+.\n```\n\n## Requirements \u003ca name=\"requirements\" /\u003e\n\n- A Unix-like environment (macOS works!)\n- Python\n  - The [requests](https://docs.python-requests.org/en/latest/) Python library, installed such as with `pip`.\n\n## Notes \u003ca name=\"notes\" /\u003e\n\n### Humio \u003ca name=\"humio\" /\u003e\n\nTo import your data into Humio you will need to set up a repository with the `corelight-json` parser.  Obtain\nthe ingest token for the repository and you can import your data with a command such as:\n\n```\npython3 zeek2es.py -s -b --humio http://localhost:8080 b005bf74-1ed3-4871-904f-9460a4687202 http.log\n```\n\nThe URL should be in the format of `http://yourserver:8080`, as the rest of the path is added by the\n`zeek2es.py` script automatically for you.\n\n### JSON Log Input \u003ca name=\"jsonloginput\" /\u003e\n\nSince Zeek JSON logs do not have type information like the ASCII TSV versions, only limited type information \ncan be provided to ElasticSearch.  You will notice this most for Zeek \"addr\" log fields that \nare not id$orig_h and id$resp_h, since the type information is not available to translate the field into \nElasticSearch's \"ip\" type.  Since address fields will not be of type \"ip\", you will not be able to use \nsubnet searches, for example, like you could for the TSV logs.  Saving Zeek logs in ASCII TSV \nformat provides greater long-term flexibility.\n\n### Data Streams \u003ca name=\"datastreams\" /\u003e\n\nYou can use data streams instead of indices for large logs with the `-d` command line option.  This\noption creates index templates beginning with `zeek_`.  It also creates a lifecycle policy\nnamed `zeek-lifecycle-policy`.
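\n\nFor example, a hypothetical import into a data stream that rolls over at 50 GB, with best compression enabled, might look like:\n\n```\npython zeek2es.py conn.log.gz -d 50 --compress\n```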
\n\nIf you would like to delete all of your data streams, lifecycle policies,\nand index templates, these commands will do it for you:\n\n```\ncurl -X DELETE http://localhost:9200/_data_stream/zeek*?pretty\ncurl -X DELETE http://localhost:9200/_index_template/zeek*?pretty\ncurl -X DELETE http://localhost:9200/_ilm/policy/zeek-lifecycle-policy?pretty\n```\n\n### Helper Scripts \u003ca name=\"helperscripts\" /\u003e\n\nThere are two scripts that will help you make your logs into data streams such as `logs-zeek-conn`.\nThe first script, [process_logs_as_datastream.sh](process_logs_as_datastream.sh), imports a given \nlist of logs and directories as data streams.  The second script, [process_log.sh](process_log.sh), imports logs \none at a time.  It can also be used to monitor logs created in a directory with \n[fswatch](https://emcrisostomo.github.io/fswatch/).  Both scripts print example command lines \nif you run them without any parameters.  \n\n```\n$ ./process_logs_as_datastream.sh \nUsage: ./process_logs_as_datastream.sh NJOBS \"ADDITIONAL_ARGS_TO_ZEEK2ES\" \"LIST_OF_LOGS_DELIMITED_BY_SPACES\" DIR1 DIR2 ...\n\nExample:\n  time ./process_logs_as_datastream.sh 16 \"\" \"amqp bgp conn dce_rpc dhcp dns dpd files ftp http ipsec irc kerberos modbus modbus_register_change mount mqtt mysql nfs notice ntlm ntp ospf portmap radius reporter rdp rfb rip ripng sip smb_cmd smb_files smb_mapping smtp snmp socks ssh ssl stun syslog tunnel vpn weird wireguard x509\" /usr/local/var/logs\n```\n\n```\n$ ./process_log.sh \nUsage: ./process_log.sh LOGFILENAME \"ADDITIONAL_ARGS_TO_ZEEK2ES\"\n\nExample:\n  fswatch -m poll_monitor --event Created -r /data/logs/zeek |  awk '/^.*\\/(conn|dns|http)\\..*\\.log\\.gz$/' | parallel -j 16 ./process_log.sh {} \"\" :::: -\n```\n\nYou will need to edit these scripts and command lines according to your environment.  \n\nAny file named after a log, such as `conn_filter.txt` for `conn.log`, placed in the `lambda_filter_file_dir` (your home directory by default) will be applied as a lambda\nfilter file to the corresponding log input.  This allows you to set up all of your filters in one directory and import multiple log files with\nthat set of filters in one command with [process_logs_as_datastream.sh](process_logs_as_datastream.sh).
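\n\nFor instance, a hypothetical `~/conn_filter.txt` that keeps only connections with a populated `service` field would contain the same kind of lambda shown in [Python Filters](#pythonfilters):\n\n```\nlambda x: 'service' in x and len(x['service']) \u003e 0\n```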
\n\nThe following lines should delete all Zeek data in ElasticSearch, whether you use indices, data streams, or these helper scripts:\n\n```\ncurl -X DELETE http://localhost:9200/zeek*?pretty\ncurl -X DELETE http://localhost:9200/_data_stream/zeek*?pretty\ncurl -X DELETE http://localhost:9200/_data_stream/logs-zeek*?pretty\ncurl -X DELETE http://localhost:9200/_index_template/zeek*?pretty\ncurl -X DELETE http://localhost:9200/_index_template/logs-zeek*?pretty\ncurl -X DELETE http://localhost:9200/_ilm/policy/zeek-lifecycle-policy?pretty\n```\n\n... or if using Elastic v8+ ...\n\n```\ncurl -X DELETE -k -u elastic:password https://localhost:9200/zeek*?pretty\ncurl -X DELETE -k -u elastic:password https://localhost:9200/_data_stream/zeek*?pretty\ncurl -X DELETE -k -u elastic:password https://localhost:9200/_data_stream/logs-zeek*?pretty\ncurl -X DELETE -k -u elastic:password https://localhost:9200/_index_template/zeek*?pretty\ncurl -X DELETE -k -u elastic:password https://localhost:9200/_index_template/logs-zeek*?pretty\ncurl -X DELETE -k -u elastic:password https://localhost:9200/_ilm/policy/zeek-lifecycle-policy?pretty\n```\n\nBut to be able to do this in v8+ you will need to configure Elastic as described \nin the section [Elastic v8.0+](#elastic80).\n\n### Cython \u003ca name=\"cython\" /\u003e\n\nIf you'd like to try [Cython](https://cython.org/), you must run `python setup.py build_ext --inplace` \nfirst to generate your compiled file.  You must do this every time you update zeek2es!","funding_links":[],"categories":["Threat Detection and Hunting","Network","Security Monitoring","Tools"],"sub_categories":["Tools","IDS / IPS / Host IDS / Host IPS","SD-WAN"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcorelight%2Fzeek2es","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcorelight%2Fzeek2es","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcorelight%2Fzeek2es/lists"}