{"id":13574098,"url":"https://github.com/elastic/ecs-mapper","last_synced_at":"2025-09-29T23:31:02.382Z","repository":{"id":43273594,"uuid":"232903174","full_name":"elastic/ecs-mapper","owner":"elastic","description":"Translate an ECS mapping CSV to starter pipelines for Beats, Elasticsearch or Logstash","archived":true,"fork":false,"pushed_at":"2022-03-09T13:30:30.000Z","size":74,"stargazers_count":54,"open_issues_count":7,"forks_count":12,"subscribers_count":264,"default_branch":"main","last_synced_at":"2024-09-22T23:04:42.917Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Ruby","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/elastic.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-01-09T20:50:25.000Z","updated_at":"2024-07-02T03:52:28.000Z","dependencies_parsed_at":"2022-09-06T06:40:25.104Z","dependency_job_id":null,"html_url":"https://github.com/elastic/ecs-mapper","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/elastic%2Fecs-mapper","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/elastic%2Fecs-mapper/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/elastic%2Fecs-mapper/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/elastic%2Fecs-mapper/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/elastic","download_url":"https://codeload.github.com/elastic/ecs-mapper/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","
kind":"github","repositories_count":219874706,"owners_count":16554610,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T15:00:46.455Z","updated_at":"2025-09-29T23:30:57.104Z","avatar_url":"https://github.com/elastic.png","language":"Ruby","readme":"⚠️ **This tool and repository are no longer maintained. We strongly advise you to [use Kibana to map custom data to ECS fields](https://www.elastic.co/guide/en/ecs/current/ecs-converting.html) instead.**\n\n---\n\n## Synopsis\n\nThe ECS mapper tool turns a field mapping from a CSV to an equivalent pipeline for:\n\n- [Beats](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html)\n- [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-processors.html)\n- [Logstash](https://www.elastic.co/guide/en/logstash/current/filter-plugins.html)\n\nThis tool generates starter pipelines for each solution above to help you \nget started quickly in mapping new data sources to ECS.\n\nA mapping CSV is what you get when you start planning how to map a new data\nsource to ECS in a spreadsheet.\n\nColleagues may collaborate on a spreadsheet that looks like this:\n\n| source\\_field | destination\\_field | notes  |\n|--------------|-------------------|---------------------------------------|\n| duration     | event.duration    | ECS supports nanoseconds precision    |\n| remoteip     | source.ip         | Hey @Jane do you agree with this one? |\n| message      |                   | No need to change this field          |\n| ...          
|                   |                                       |\nYou can export your spreadsheet to CSV, run it through the ECS mapper,\nand generate your starter pipelines.\n\nNote that this tool generates starter pipelines. They only do field rename and copy\noperations as well as some field format adjustments. It's up to you to integrate them\nin a complete pipeline that ingests and outputs the data however you need.\n\nScroll down to the [Examples](#examples) section below to get right to a\nconcrete example you can play with.\n\n## CSV Format\n\nHere are more details on the CSV format supported by this tool. Since mapping\nspreadsheets are used by humans, it's totally fine to have as many columns\nas you need in your spreadsheets/CSV. Only the following columns will be considered:\n\n| column name | required | allowed values | notes |\n|-------------|----------|----------------|-------|\n| source\\_field | required |  | A dotted Elasticsearch field name. Dots represent JSON nesting. Lines with empty \"source\\_field\" are skipped. |\n| destination\\_field | required |  | A dotted Elasticsearch field name. Dots represent JSON nesting. Can be left empty if there's no copy action (just a type conversion). |\n| format\\_action | optional | to\\_float, to\\_integer, to\\_string, to\\_boolean, to\\_array, parse\\_timestamp, uppercase, lowercase, (empty) | Simple conversion to apply to the field value. |\n| timestamp\\_format | optional | UNIX, UNIX\\_MS, ISO8601, TAI64N, a Java time pattern, (empty) | Only UNIX and UNIX\\_MS are supported across all three tools. You may specify other formats, but this tool does not validate whether each target tool supports them. |\n| copy\\_action | optional | rename, copy, (empty) | What to do with the field. If left empty, the default action is based on the `--copy-action` flag. |\n\nYou can start from this\n[spreadsheet template](https://docs.google.com/spreadsheets/d/1m5JiOTeZtUueW3VOVqS8bFYqNGEEyp0jAsgO12NFkNM). 
Make a copy of it in your Google Docs account, or download it as an Excel file.\n\nWhen the destination field is `@timestamp`, we always enforce an explicit `format_action` of `parse_timestamp` to avoid conversion problems downstream. If no `timestamp_format` is provided, `UNIX_MS` is used. Please note that the timestamp layouts used by the [Filebeat processor for converting timestamps](https://www.elastic.co/guide/en/beats/filebeat/current/processor-timestamp.html) are different from the formats supported by the date processors in Logstash and Elasticsearch Ingest Node.\n\n## Usage and Dependencies\n\nThis is a simple Ruby program with no external dependencies, other than development\ndependencies.\n\nAny modern version of Ruby should be sufficient. If you don't intend to run the\ntests or the rake tasks, you can skip right to [usage tips](#using-the-ecs-mapper).\n\n### Ruby Setup\n\nIf you want to tweak the code of this script, run the tests or use the rake tasks,\nyou'll need to install the development dependencies.\n\nOnce you have Ruby installed for your platform, installing the dependencies is simply:\n\n```bash\ngem install bundler\nbundle install\n```\n\nRun the tests:\n\n```bash\nrake test\n```\n\n### Using the ECS Mapper\n\nDisplay the help:\n\n```bash\n./ecs-mapper --help\nReads a CSV mapping of source field names to destination field names, and generates\nElastic pipelines to help perform the conversion.\n\nYou can have as many columns as you want in your CSV.\nOnly the following columns will be used by this tool:\nsource_field, destination_field, format_action, copy_action\n\nOptions:\n    -f, --file FILE                  Input CSV file.\n    -o, --output DIR                 Output directory. Defaults to parent dir of --file.\n        --copy-action COPY_ACTION\n                                     Default action for field renames. Acceptable values are: copy, rename. 
Default is copy.\n        --debug                      Shorthand for --log-level=debug\n    -h, --help                       Display help\n```\n\nProcess my.csv and output pipelines in the same directory as the CSV:\n\n```bash\n./ecs-mapper --file my.csv\n```\n\nProcess my.csv and output pipelines elsewhere:\n\n```bash\n./ecs-mapper --file my.csv --output pipelines/mine/\n```\n\nProcess my.csv; fields with an empty value in the \"copy\\_action\" column are renamed\ninstead of copied (the default):\n\n```bash\n./ecs-mapper --file my.csv --copy-action rename\n```\n\n## Examples\n\nLook at an example CSV mapping and the pipelines generated from it:\n\n- [example/mapping.csv](example/mapping.csv)\n- [example/beats.yml](example/beats.yml)\n- [example/elasticsearch.json](example/elasticsearch.json)\n- [example/logstash.conf](example/logstash.conf)\n\nYou can try each pipeline easily by following the instructions\nin [example/README.md](example/).\n\n## Caveats\n\n* The Beats pipelines don't perform \"to\\_array\", \"uppercase\" or\n  \"lowercase\" transformations. They could be implemented via the \"script\" processor.\n* Only the UNIX and UNIX\\_MS timestamp formats are supported across Beats, Elasticsearch,\n  and Logstash. For other timestamp formats, please modify the starter pipeline or add the\n  appropriate date processor in the generated pipeline by hand. 
Refer to the documentation\n  for [Beats](https://www.elastic.co/guide/en/beats/filebeat/current/processor-timestamp.html), [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/master/date-processor.html), and [Logstash](https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-match).\n* This tool does not currently support additional processors, like setting static \n  field values or dropping events based on a condition.\n","funding_links":[],"categories":["Ruby"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Felastic%2Fecs-mapper","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Felastic%2Fecs-mapper","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Felastic%2Fecs-mapper/lists"}