# ETL

Extract, transform, and load data with Ruby!

## Installation

Add this line to your application's Gemfile:

    gem 'ETL'

And then execute:

    $ bundle

Or install it yourself as:

    $ gem install ETL

## ETL Dependencies

ETL depends on having a database connection object that __must__ respond
to `#query`. The [mysql2](https://github.com/brianmario/mysql2) gem is a good option.
You can also proxy another library using Ruby's `SimpleDelegator` and add a `#query`
method if need be.

The gem comes bundled with a default logger. If you'd like to write your own,
just make sure that it implements `#debug` and `#info`.
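To illustrate the `SimpleDelegator` approach mentioned above, here is a minimal sketch. The `RawClient` class and its `#execute` method are hypothetical stand-ins for whatever library you are wrapping:

```ruby
require 'delegate'

# Hypothetical client library whose API exposes #execute rather than #query.
class RawClient
  def execute(sql)
    "executed: #{sql}"
  end
end

# SimpleDelegator proxy that adds the #query method ETL expects while
# passing every other call through to the wrapped client.
class QueryableConnection < SimpleDelegator
  def query(sql)
    execute(sql)
  end
end

connection = QueryableConnection.new(RawClient.new)
connection.query("SELECT 1") # delegates to RawClient#execute
```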
For more information
on what is logged and when, view the [logger details](#logger-details).

### Basic ETL

Assume that we have a database connection represented by `connection`.

To run a basic ETL that is composed of sequential SQL statements, start by
creating a new ETL instance:

```ruby
# setting connection at the class level
ETL.connection = connection

etl = ETL.new(description: "a description of what this ETL does")
```

or

```ruby
# setting connection at the instance level
etl = ETL.new(
  description: "a description of what this ETL does",
  connection:  connection
)
```

which can then be configured:

```ruby
etl.config do |etl|
  etl.ensure_destination do |etl|
    # For most ETLs you may want to ensure that the destination exists, so the
    # #ensure_destination block is ideally suited to fulfill this requirement.
    #
    # By way of example:
    #
    etl.query %[
      CREATE TABLE IF NOT EXISTS some_database.some_destination_table (
          user_id INT UNSIGNED NOT NULL
        , created_date DATE NOT NULL
        , total_amount INT SIGNED NOT NULL
        , message VARCHAR(100) DEFAULT NULL
        , PRIMARY KEY (user_id, created_date)
        , KEY (created_date)
      )
    ]
  end

  etl.before_etl do |etl|
    # All pre-ETL work is performed in this block.
    #
    # This can be thought of as a before-ETL hook that will fire only once. When
    # you are not leveraging the ETL iteration capabilities, the value of this
    # block vs the #etl block is not very clear. We will see how and when to
    # leverage this block effectively when we introduce iteration.
    #
    # As an example, let's say we want to get rid of all entries that have an
    # amount less than zero before moving on to our actual etl:
    #
    etl.query %[DELETE FROM some_database.some_source_table WHERE amount < 0]
  end

  etl.etl do |etl|
    # Here is where the magic happens!
    # This block contains the main ETL operation.
    #
    # For example:
    #
    etl.query %[
      REPLACE INTO some_database.some_destination_table (
          user_id
        , created_date
        , total_amount
      ) SELECT
          user_id
        , DATE(created_at) AS created_date
        , SUM(amount) AS total_amount
      FROM
        some_database.some_source_table sst
      GROUP BY
          sst.user_id
        , DATE(sst.created_at)
    ]
  end

  etl.after_etl do |etl|
    # All post-ETL work is performed in this block.
    #
    # Again, to finish up with an example:
    #
    etl.query %[
      UPDATE some_database.some_destination_table
      SET message = "WOW"
      WHERE total_amount > 100
    ]
  end
end
```

At this point it is possible to run the ETL instance via:

```ruby
etl.run
```

which executes `#ensure_destination`, `#before_etl`, `#etl`, and `#after_etl` in
that order.

### ETL with iteration

To add in iteration, simply supply `#start`, `#step`, and `#stop` blocks.
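Conceptually, the iteration these blocks drive amounts to a simple windowed loop. A rough sketch of the semantics (illustrative only, not the gem's actual implementation):

```ruby
# Evaluate #start and #stop once, then advance a [lbound, ubound) window
# by #step until the window passes stop. The #etl block would run once
# per window with the current bounds.
def run_windowed(start:, stop:, step:)
  windows = []
  lbound = start
  while lbound < stop
    ubound = lbound + step
    windows << [lbound, ubound] # the #etl block receives these bounds
    lbound = ubound
  end
  windows
end

run_windowed(start: 0, stop: 10, step: 2)
# => [[0, 2], [2, 4], [4, 6], [6, 8], [8, 10]]
```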
This
is useful when dealing with large data sets or when executing queries that,
while optimized, are still slow.

Again, to kick things off:

```ruby
etl = ETL.new(
  description: "a description of what this ETL does",
  connection:  connection
)
```

where `connection` is the same as described above.

Next we can configure the ETL:

```ruby
# assuming we have the ETL instance from above
etl.config do |etl|
  etl.ensure_destination do |etl|
    # For most ETLs you may want to ensure that the destination exists, so the
    # #ensure_destination block is ideally suited to fulfill this requirement.
    #
    # By way of example:
    #
    etl.query %[
      CREATE TABLE IF NOT EXISTS some_database.some_destination_table (
          user_id INT UNSIGNED NOT NULL
        , created_date DATE NOT NULL
        , total_amount INT SIGNED NOT NULL
        , message VARCHAR(100) DEFAULT NULL
        , PRIMARY KEY (user_id, created_date)
        , KEY (created_date)
      )
    ]
  end

  etl.before_etl do |etl|
    # All pre-ETL work is performed in this block.
    #
    # Now that we are leveraging iteration the #before_etl block becomes
    # more useful as a way to execute an operation once before we begin
    # our iteration.
    #
    # As an example, let's say we want to get rid of all entries that have an
    # amount less than zero before moving on to our actual etl:
    #
    etl.query %[
      DELETE FROM some_database.some_source_table
      WHERE amount < 0
    ]
  end

  etl.start do |etl|
    # This defines where the ETL should start. This can be a flat number
    # or date, or even SQL / other code can be executed to produce a starting
    # value.
    #
    # Usually, this is the last known entry for the destination table with
    # some sensible default if the destination does not yet contain data.
    #
    # As an example:
    #
    # Note that we cast the default date as a DATE.
    # If we don't, it will be
    # treated as a string and our iterator will fail under the hood when testing
    # if it is complete.
    res = etl.query %[
      SELECT COALESCE(MAX(created_date), DATE('2010-01-01')) AS the_max
      FROM some_database.some_destination_table
    ]

    res.to_a.first['the_max']
  end

  etl.step do |etl|
    # The step block defines the size of each iteration window. To iterate
    # ten records at a time, the step block should return 10; to go 10,000
    # units at a time, it should return 10_000 (the underscore is just for
    # readability).
    #
    # Here, to iterate 7 days at a time:
    #
    7
  end

  etl.stop do |etl|
    # The stop block defines when the iteration should halt.
    # Again, this can be a flat value or code. Either way, one value *must* be
    # returned.
    #
    # As a flat value:
    #
    #   1_000_000
    #
    # Or a date value:
    #
    #   Time.now.to_date
    #
    # Or as a code example:
    #
    res = etl.query %[
      SELECT DATE(MAX(created_at)) AS the_max
      FROM some_database.some_source_table
    ]

    res.to_a.first['the_max']
  end

  etl.etl do |etl, lbound, ubound|
    # The etl block is the main part of the framework. Note: there are
    # two extra args with the iterator this time around: "lbound" and "ubound".
    #
    # "lbound" is the lower bound of the current iteration. When iterating
    # from 0 to 10 and stepping by 2, the lbound would equal 2 on the
    # second iteration.
    #
    # "ubound" is the upper bound of the current iteration.
    # Continuing the example above, when iterating from 0 to 10 and stepping
    # by 2, the ubound would equal 4 on the second iteration.
    #
    # These args can be used to "window" SQL queries or other code operations.
    #
    # As a first example, to iterate over a set of ids:
    #
    #   etl.query %[
    #     REPLACE INTO some_database.some_destination_table (
    #         created_date
    #       , user_id
    #       , total_amount
    #     ) SELECT
    #         DATE(sst.created_at) AS created_date
    #       , sst.user_id
    #       , SUM(sst.amount) AS total_amount
    #     FROM
    #       some_database.some_source_table sst
    #     WHERE
    #       sst.user_id > #{lbound} AND sst.user_id <= #{ubound}
    #     GROUP BY
    #         DATE(sst.created_at)
    #       , sst.user_id]
    #
    # To "window" a SQL query using dates:
    #
    etl.query %[
      REPLACE INTO some_database.some_destination_table (
          created_date
        , user_id
        , total_amount
      ) SELECT
          DATE(sst.created_at) AS created_date
        , sst.user_id
        , SUM(sst.amount) AS total_amount
      FROM
        some_database.some_source_table sst
      WHERE
        -- Note the usage of quotes surrounding the lbound and ubound vars.
        -- This is required when dealing with dates / datetimes.
        sst.created_at >= '#{lbound}' AND sst.created_at < '#{ubound}'
      GROUP BY
          DATE(sst.created_at)
        , sst.user_id
    ]

    # Note that there is no SQL sanitization here, so there is *potential* for
    # SQL injection. That being said, you'll likely be using this gem in an
    # internal tool, so hopefully your co-workers are not looking to sabotage
    # your ETL pipeline.
    # Just be aware of this and handle it as you see fit.
  end

  etl.after_etl do |etl|
    # All post-ETL work is performed in this block.
    #
    # Again, to finish up with an example:
    #
    etl.query %[
      UPDATE some_database.some_destination_table
      SET message = "WOW"
      WHERE total_amount > 100
    ]
  end
end
```

At this point it is possible to run the ETL instance via:

```ruby
etl.run
```

which executes `#ensure_destination`, `#before_etl`, `#etl`, and `#after_etl` in
that order.

Note that `#etl` executes `#start` and `#stop` once and memoizes the result of
each. It then iterates from the value `#start` produced up to the value `#stop`
produced, advancing by the value `#step` produces.

## Examples

There are two examples found in `./examples` that demonstrate the basic ETL and
iteration ETL. Each file uses the [mysql2](https://github.com/brianmario/mysql2)
gem and reads / writes data to localhost using the root user with no password.
Adjust as needed.

## Logger Details

A logger must support two methods: `#info` and `#debug`.

Both methods should accept a single hash argument. The argument will contain:

- `:emitter` => a reference to the ETL instance's `self`
- `:event_type` => a symbol that indicates the type of event being logged.
  You can use this value to derive which other data you'll have available.

When `:event_type` is equal to `:query_start`, you'll have the following
available in the hash argument:

- `:sql` => the sql that is going to be run

These events are logged at the debug level.

When `:event_type` is equal to `:query_complete`, you'll have the following
available in the hash argument:

- `:sql` => the sql that was run
- `:runtime` => how long the query took to execute

These events are logged at the info level.

Following from this, you could implement a simple logger as:

```ruby
class PutsLogger
  def info data
    @data = data
    write!
  end

  def debug data
    @data = data
    write!
  end

private

  def write!
    case (event_type = @data.delete(:event_type))
    when :query_start
      output =  "#{@data[:emitter].description} is about to run\n"
      output += "#{@data[:sql]}\n"
    when :query_complete
      output =  "#{@data[:emitter].description} executed:\n"
      output += "#{@data[:sql]}\n"
      output += "query completed at #{Time.now} and took #{@data[:runtime]}s\n"
    else
      output = "no special logging for #{event_type} event_type yet\n"
    end
    puts output
    @data = nil
  end
end
```

## Contributing

If you would like to contribute code to ETL, you can do so through GitHub by
forking the repository and sending a pull request.

When submitting code, please make every effort to follow existing conventions
and style in order to keep the code as readable as possible.

Before your code can be accepted into the project you must also sign the
[Individual Contributor License Agreement (CLA)][1].

 [1]: https://spreadsheets.google.com/spreadsheet/viewform?formkey=dDViT2xzUHAwRkI3X3k5Z0lQM091OGc6MQ&ndplr=1

## License

Copyright 2013 Square Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.