{"id":13395120,"url":"https://github.com/zdennis/activerecord-import","last_synced_at":"2025-05-12T03:44:20.670Z","repository":{"id":837407,"uuid":"558790","full_name":"zdennis/activerecord-import","owner":"zdennis","description":"A library for bulk insertion of data into your database using ActiveRecord.","archived":false,"fork":false,"pushed_at":"2025-04-05T22:38:37.000Z","size":4930,"stargazers_count":4091,"open_issues_count":38,"forks_count":615,"subscribers_count":43,"default_branch":"master","last_synced_at":"2025-05-09T12:04:24.231Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"http://www.continuousthinking.com","language":"Ruby","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/zdennis.png","metadata":{"files":{"readme":"README.markdown","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2010-03-12T05:06:53.000Z","updated_at":"2025-05-07T12:15:33.000Z","dependencies_parsed_at":"2023-07-06T01:45:51.172Z","dependency_job_id":"bbbd7375-e397-4116-aad3-1211ffe46778","html_url":"https://github.com/zdennis/activerecord-import","commit_stats":{"total_commits":871,"total_committers":164,"mean_commits":5.310975609756097,"dds":0.6716417910447761,"last_synced_commit":"2f61ccd54dc7cdab5d130c01cf52dafcae072d45"},"previous_names":[],"tags_count":80,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zdennis%2Factiverecord-import","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zdennis%2Factiverecord-import/tags","releases_url":"https://repos.ecosyste.ms/api/v1/
hosts/GitHub/repositories/zdennis%2Factiverecord-import/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zdennis%2Factiverecord-import/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/zdennis","download_url":"https://codeload.github.com/zdennis/activerecord-import/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253418281,"owners_count":21905326,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-30T17:01:42.682Z","updated_at":"2025-05-12T03:44:20.607Z","avatar_url":"https://github.com/zdennis.png","language":"Ruby","readme":"# Activerecord-Import ![Build Status](https://github.com/zdennis/activerecord-import/actions/workflows/test.yaml/badge.svg)\n\nActiverecord-Import is a library for bulk inserting data using ActiveRecord.\n\nOne of its major features is following activerecord associations and generating the minimal\nnumber of SQL insert statements required, avoiding the N+1 insert problem. An example probably\nexplains it best. Say you had a schema like this:\n\n- Publishers have Books\n- Books have Reviews\n\nand you wanted to bulk insert 100 new publishers with 10K books and 3 reviews per book. 
This library will follow the associations\ndown and generate only 3 SQL insert statements - one for the publishers, one for the books, and one for the reviews.\n\nIn contrast, the standard ActiveRecord save would generate\n100 insert statements for the publishers, then it would visit each publisher and save all the books (10,000 per publisher):\n100 * 10,000 = 1,000,000 SQL insert statements\nand then the reviews:\n100 * 10,000 * 3 = 3M SQL insert statements.\n\nThat would be about 4M SQL insert statements vs 3, which results in vastly improved performance. In our case, it converted\nan 18 hour batch process to \u003c2 hrs.\n\nThe gem provides the following high-level features:\n\n* Works with raw columns and arrays of values (fastest)\n* Works with model objects (faster)\n* Performs validations (fast)\n* Performs on duplicate key updates (requires MySQL, SQLite 3.24.0+, or Postgres 9.5+)\n\n## Table of Contents\n\n* [Examples](#examples)\n  * [Introduction](#introduction)\n  * [Columns and Arrays](#columns-and-arrays)\n  * [Hashes](#hashes)\n  * [ActiveRecord Models](#activerecord-models)\n  * [Batching](#batching)\n  * [Recursive](#recursive)\n* [Options](#options)\n  * [Duplicate Key Ignore](#duplicate-key-ignore)\n  * [Duplicate Key Update](#duplicate-key-update)\n* [Return Info](#return-info)\n* [Counter Cache](#counter-cache)\n* [ActiveRecord Timestamps](#activerecord-timestamps)\n* [Callbacks](#callbacks)\n* [Supported Adapters](#supported-adapters)\n* [Additional Adapters](#additional-adapters)\n* [Requiring](#requiring)\n  * [Autoloading via Bundler](#autoloading-via-bundler)\n  * [Manually Loading](#manually-loading)\n* [Load Path Setup](#load-path-setup)\n* [Conflicts With Other Gems](#conflicts-with-other-gems)\n* [More Information](#more-information)\n* [Contributing](#contributing)\n  * [Running Tests](#running-tests)\n  * [Issue Triage](#issue-triage)\n\n### Examples\n\n#### Introduction\n\nThis gem adds an `import` method (or `bulk_import`, for compatibility with gems 
like `elasticsearch-model`; see [Conflicts With Other Gems](#conflicts-with-other-gems)) to ActiveRecord classes.\n\nWithout `activerecord-import`, you'd write something like this:\n\n```ruby\n10.times do |i|\n  Book.create! name: \"book #{i}\"\nend\n```\n\nThis would end up making 10 SQL calls. YUCK! With `activerecord-import`, you can instead do this:\n\n```ruby\nbooks = []\n10.times do |i|\n  books \u003c\u003c Book.new(name: \"book #{i}\")\nend\nBook.import books    # or use import!\n```\n\nand only have 1 SQL call. Much better!\n\n#### Columns and Arrays\n\nThe `import` method can take an array of column names (strings or symbols) and an array of arrays. Each child array represents an individual record and its list of values in the same order as the columns. This is the fastest import mechanism and also the most primitive.\n\n```ruby\ncolumns = [ :title, :author ]\nvalues = [ ['Book1', 'George Orwell'], ['Book2', 'Bob Jones'] ]\n\n# Importing without model validations\nBook.import columns, values, validate: false\n\n# Import with model validations\nBook.import columns, values, validate: true\n\n# when not specified :validate defaults to true\nBook.import columns, values\n```\n\n#### Hashes\n\nThe `import` method can take an array of hashes. The keys map to the column names in the database.\n\n```ruby\nvalues = [{ title: 'Book1', author: 'George Orwell' }, { title: 'Book2', author: 'Bob Jones'}]\n\n# Importing without model validations\nBook.import values, validate: false\n\n# Import with model validations\nBook.import values, validate: true\n\n# when not specified :validate defaults to true\nBook.import values\n```\n\n#### Import Using Hashes and Explicit Column Names\n\nThe `import` method can take an array of column names and an array of hash objects. The column names are used to determine what fields of data should be imported. 
The following example will only import books with the `title` field:\n\n```ruby\nbooks = [\n  { title: \"Book 1\", author: \"George Orwell\" },\n  { title: \"Book 2\", author: \"Bob Jones\" }\n]\ncolumns = [ :title ]\n\n# without validations\nBook.import columns, books, validate: false\n\n# with validations\nBook.import columns, books, validate: true\n\n# when not specified :validate defaults to true\nBook.import columns, books\n\n# result in table books\n# title  | author\n#--------|--------\n# Book 1 | NULL\n# Book 2 | NULL\n\n```\n\nUsing hashes will only work if the columns are consistent in every hash of the array. If this does not hold, an exception will be raised. There are two workarounds: use the array to instantiate an array of ActiveRecord objects and then pass that into `import` or divide the array into multiple ones with consistent columns and import each one separately.\n\nSee https://github.com/zdennis/activerecord-import/issues/507 for discussion.\n\n```ruby\narr = [\n  { bar: 'abc' },\n  { baz: 'xyz' },\n  { bar: '123', baz: '456' }\n]\n\n# An exception will be raised\nFoo.import arr\n\n# better\narr.map! { |args| Foo.new(args) }\nFoo.import arr\n\n# better\narr.group_by(\u0026:keys).each_value do |v|\n Foo.import v\nend\n```\n\n#### ActiveRecord Models\n\nThe `import` method can take an array of models. The attributes will be pulled off from each model by looking at the columns available on the model.\n\n```ruby\nbooks = [\n  Book.new(title: \"Book 1\", author: \"George Orwell\"),\n  Book.new(title: \"Book 2\", author: \"Bob Jones\")\n]\n\n# without validations\nBook.import books, validate: false\n\n# with validations\nBook.import books, validate: true\n\n# when not specified :validate defaults to true\nBook.import books\n```\n\nThe `import` method can take an array of column names and an array of models. The column names are used to determine what fields of data should be imported. 
The following example will only import books with the `title` field:\n\n```ruby\nbooks = [\n  Book.new(title: \"Book 1\", author: \"George Orwell\"),\n  Book.new(title: \"Book 2\", author: \"Bob Jones\")\n]\ncolumns = [ :title ]\n\n# without validations\nBook.import columns, books, validate: false\n\n# with validations\nBook.import columns, books, validate: true\n\n# when not specified :validate defaults to true\nBook.import columns, books\n\n# result in table books\n# title  | author\n#--------|--------\n# Book 1 | NULL\n# Book 2 | NULL\n\n```\n\n#### Batching\n\nThe `import` method can take a `batch_size` option to control the number of rows to insert per INSERT statement. The default is the total number of records being inserted so there is a single INSERT statement.\n\n```ruby\nbooks = [\n  Book.new(title: \"Book 1\", author: \"George Orwell\"),\n  Book.new(title: \"Book 2\", author: \"Bob Jones\"),\n  Book.new(title: \"Book 1\", author: \"John Doe\"),\n  Book.new(title: \"Book 2\", author: \"Richard Wright\")\n]\ncolumns = [ :title ]\n\n# 2 INSERT statements for 4 records\nBook.import columns, books, batch_size: 2\n```\n\nIf your import is particularly large or slow (possibly due to [callbacks](#callbacks)) whilst batch importing, you might want a way to report back on progress. This is supported by passing a callable as the `batch_progress` option. e.g:\n\n```ruby\nmy_proc = -\u003e(rows_size, num_batches, current_batch_number, batch_duration_in_secs) {\n  # Using the arguments provided to the callable, you can\n  # send an email, post to a websocket,\n  # update slack, alert if import is taking too long, etc.\n}\n\nBook.import columns, books, batch_size: 2, batch_progress: my_proc\n```\n\n#### Recursive\n\n\u003e **Note**\n\u003e This only works with PostgreSQL and ActiveRecord objects. 
This won't work with hashes or arrays as recursive inputs.\n\nAssume that Books \u003ccode\u003ehas_many\u003c/code\u003e Reviews.\n\n```ruby\nbooks = []\n10.times do |i|\n  book = Book.new(name: \"book #{i}\")\n  book.reviews.build(title: \"Excellent\")\n  books \u003c\u003c book\nend\nBook.import books, recursive: true\n```\n\n### Options\n\nKey                       | Options               | Default            | Description\n------------------------- | --------------------- | ------------------ | -----------\n:validate                 | `true`/`false`        | `true`             | Whether or not to run `ActiveRecord` validations (uniqueness skipped). This option will always be true when using `import!`.\n:validate_uniqueness      | `true`/`false`        | `false`            | Whether or not to run ActiveRecord uniqueness validations. Beware this will incur an sql query per-record (N+1 queries). (requires `\u003e= v0.27.0`).\n:validate_with_context    | `Symbol`              |`:create`/`:update` | Allows passing an ActiveModel validation context for each model. Default is `:create` for new records and `:update` for existing ones.\n:track_validation_failures| `true`/`false`        | `false`            | When this is set to true, `failed_instances` will be an array of arrays, with each inner array having the form `[:index_in_dataset, :object_with_errors]`\n:on_duplicate_key_ignore  | `true`/`false`        | `false`            | Allows skipping records with duplicate keys. See [here](#duplicate-key-ignore) for more details.\n:ignore                   | `true`/`false`        | `false`            | Alias for :on_duplicate_key_ignore.\n:on_duplicate_key_update  | :all, `Array`, `Hash` | N/A                | Allows upsert logic to be used. See [here](#duplicate-key-update) for more details.\n:synchronize              | `Array`               | N/A                | An array of ActiveRecord instances. 
This synchronizes existing instances in memory with updates from the import.\n:timestamps               | `true`/`false`        | `true`             | Enables/disables timestamps on imported records.\n:recursive                | `true`/`false`        | `false`            | Imports has_many/has_one associations (PostgreSQL only).\n:recursive_on_duplicate_key_update | `Hash`       | N/A                | Allows upsert logic to be used for recursive associations. The hash key is the association name and the value has the same options as `:on_duplicate_key_update`. See [here](#duplicate-key-update) for more details.\n:batch_size               | `Integer`             | total # of records | Max number of records to insert per import\n:raise_error              | `true`/`false`        | `false`            | Raises an exception at the first invalid record. This means there will not be a result object returned. The `import!` method is a shortcut for this.\n:all_or_none              | `true`/`false`        | `false`            | Will not import any records if there is a record with validation errors.\n\n#### Duplicate Key Ignore\n\n[MySQL](http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html), [SQLite](https://www.sqlite.org/lang_insert.html), and [PostgreSQL](https://www.postgresql.org/docs/current/static/sql-insert.html#SQL-ON-CONFLICT) (9.5+) support `on_duplicate_key_ignore` which allows you to skip records if a primary or unique key constraint is violated.\n\nFor Postgres 9.5+ it adds `ON CONFLICT DO NOTHING`, for MySQL it uses `INSERT IGNORE`, and for SQLite it uses `INSERT OR IGNORE`. Cannot be enabled on a recursive import. For database adapters that normally support setting primary keys on imported objects, this option prevents that from occurring.\n\n```ruby\nbook = Book.create! 
title: \"Book1\", author: \"George Orwell\"\nbook.title = \"Updated Book Title\"\nbook.author = \"Bob Barker\"\n\nBook.import [book], on_duplicate_key_ignore: true\n\nbook.reload.title  # =\u003e \"Book1\"     (stayed the same)\nbook.reload.author # =\u003e \"George Orwell\" (stayed the same)\n```\n\nThe option `:on_duplicate_key_ignore` is bypassed when `:recursive` is enabled for [PostgreSQL imports](https://github.com/zdennis/activerecord-import/wiki#recursive-example-postgresql-only).\n\n#### Duplicate Key Update\n\nMySQL, PostgreSQL (9.5+), and SQLite (3.24.0+) support `on duplicate key update` (also known as \"upsert\") which allows you to specify fields whose values should be updated if a primary or unique key constraint is violated.\n\nOne big difference between MySQL and PostgreSQL support is that MySQL will handle any conflict that happens, but PostgreSQL requires that you specify which columns the conflict would occur over. SQLite models its upsert support after PostgreSQL.\n\nThis will use MySQL's `ON DUPLICATE KEY UPDATE` or Postgres/SQLite `ON CONFLICT DO UPDATE` to do upsert.\n\nBasic Update\n\n```ruby\nbook = Book.create! title: \"Book1\", author: \"George Orwell\"\nbook.title = \"Updated Book Title\"\nbook.author = \"Bob Barker\"\n\n# MySQL version\nBook.import [book], on_duplicate_key_update: [:title]\n\n# PostgreSQL version\nBook.import [book], on_duplicate_key_update: {conflict_target: [:id], columns: [:title]}\n\n# PostgreSQL shorthand version (conflict target must be primary key)\nBook.import [book], on_duplicate_key_update: [:title]\n\nbook.reload.title  # =\u003e \"Updated Book Title\" (changed)\nbook.reload.author # =\u003e \"George Orwell\"          (stayed the same)\n```\n\nUsing the value from another column\n\n```ruby\nbook = Book.create! 
title: \"Book1\", author: \"George Orwell\"\nbook.title = \"Updated Book Title\"\n\n# MySQL version\nBook.import [book], on_duplicate_key_update: {author: :title}\n\n# PostgreSQL version (no shorthand version)\nBook.import [book], on_duplicate_key_update: {\n  conflict_target: [:id], columns: {author: :title}\n}\n\nbook.reload.title  # =\u003e \"Book1\"              (stayed the same)\nbook.reload.author # =\u003e \"Updated Book Title\" (changed)\n```\n\nUsing Custom SQL\n\n```ruby\nbook = Book.create! title: \"Book1\", author: \"George Orwell\"\nbook.author = \"Bob Barker\"\n\n# MySQL version\nBook.import [book], on_duplicate_key_update: \"author = values(author)\"\n\n# PostgreSQL version\nBook.import [book], on_duplicate_key_update: {\n  conflict_target: [:id], columns: \"author = excluded.author\"\n}\n\n# PostgreSQL shorthand version (conflict target must be primary key)\nBook.import [book], on_duplicate_key_update: \"author = excluded.author\"\n\nbook.reload.title  # =\u003e \"Book1\"      (stayed the same)\nbook.reload.author # =\u003e \"Bob Barker\" (changed)\n```\n\nPostgreSQL Using partial indexes\n\n```ruby\nbook = Book.create! title: \"Book1\", author: \"George Orwell\", published_at: Time.now\nbook.author = \"Bob Barker\"\n\n# in migration\nexecute \u003c\u003c-SQL\n      CREATE INDEX books_published_at_index ON books (published_at) WHERE published_at IS NOT NULL;\n    SQL\n\n# PostgreSQL version\nBook.import [book], on_duplicate_key_update: {\n  conflict_target: [:id],\n  index_predicate: \"published_at IS NOT NULL\",\n  columns: [:author]\n}\n\nbook.reload.title  # =\u003e \"Book1\"          (stayed the same)\nbook.reload.author # =\u003e \"Bob Barker\"     (changed)\nbook.reload.published_at # =\u003e 2017-10-09 (stayed the same)\n```\n\nPostgreSQL Using constraints\n\n```ruby\nbook = Book.create! 
title: \"Book1\", author: \"George Orwell\", edition: 3, published_at: nil\nbook.published_at = Time.now\n\n# in migration\nexecute \u003c\u003c-SQL\n      ALTER TABLE books\n        ADD CONSTRAINT for_upsert UNIQUE (title, author, edition);\n    SQL\n\n# PostgreSQL version\nBook.import [book], on_duplicate_key_update: {constraint_name: :for_upsert, columns: [:published_at]}\n\nbook.reload.title  # =\u003e \"Book1\"          (stayed the same)\nbook.reload.author # =\u003e \"George Orwell\"      (stayed the same)\nbook.reload.edition # =\u003e 3               (stayed the same)\nbook.reload.published_at # =\u003e 2017-10-09 (changed)\n```\n\nTo also run ActiveRecord uniqueness validations during the import (beware: this incurs one SQL query per record), enable the `:validate_uniqueness` option:\n\n```ruby\nBook.import books, validate_uniqueness: true\n```\n\n### Return Info\n\nThe `import` method returns a `Result` object that responds to `failed_instances` and `num_inserts`. Additionally, for users of Postgres, there will be two arrays `ids` and `results` that can be accessed.\n\n```ruby\narticles = [\n  Article.new(author_id: 1, title: 'First Article', content: 'This is the first article'),\n  Article.new(author_id: 2, title: 'Second Article', content: ''),\n  Article.new(author_id: 3, content: '')\n]\n\ndemo = Article.import(articles, returning: :title) # =\u003e #\u003cstruct ActiveRecord::Import::Result\n\ndemo.failed_instances\n=\u003e [#\u003cArticle id: 3, author_id: 3, title: nil, content: \"\", created_at: nil, updated_at: nil\u003e]\n\ndemo.num_inserts\n=\u003e 1\n\ndemo.ids\n=\u003e [\"1\", \"2\"] # for Postgres\n=\u003e [] # for other DBs\n\ndemo.results\n=\u003e [\"First Article\", \"Second Article\"] # for Postgres\n=\u003e [] # for other DBs\n```\n\n### Counter Cache\n\nWhen running `import`, `activerecord-import` does not automatically update counter cache columns. 
To update these columns, you will need to do one of the following:\n\n* Provide values to the column as an argument on your object that is passed in.\n* Manually update the column after the record has been imported.\n\n### ActiveRecord Timestamps\n\nIf you're familiar with ActiveRecord you're probably familiar with its timestamp columns: created_at, created_on, updated_at, updated_on, etc. When importing data the timestamp fields will continue to work as expected and each timestamp column will be set.\n\nShould you wish to set those columns yourself, you may use the option `timestamps: false`.\n\nHowever, it is also possible to set just `:created_at` in specific records. In this case, despite using `timestamps: true`, `:created_at` will be set only in records where that field is `nil`. The same rule applies for record associations when enabling the option `recursive: true`.\n\nIf you are using custom time zones, these will be respected when performing imports as well, as long as `ActiveRecord::Base.default_timezone` is set, which it is for practically all Rails apps.\n\n\u003e **Note**\n\u003e If you are using ActiveRecord 7.0 or later, please use `ActiveRecord.default_timezone` instead.\n\n### Callbacks\n\nActiveRecord callbacks related to [creating](http://guides.rubyonrails.org/active_record_callbacks.html#creating-an-object), [updating](http://guides.rubyonrails.org/active_record_callbacks.html#updating-an-object), or [destroying](http://guides.rubyonrails.org/active_record_callbacks.html#destroying-an-object) records (other than `before_validation` and `after_validation`) will NOT be called when calling the import method. 
This is because it is mass importing rows of data and doesn't necessarily have access to in-memory ActiveRecord objects.\n\nIf you do have a collection of in-memory ActiveRecord objects you can do something like this:\n\n```ruby\nbooks.each do |book|\n  book.run_callbacks(:save) { false }\n  book.run_callbacks(:create) { false }\nend\nBook.import(books)\n```\n\nThis will run before_create and before_save callbacks on each item. The `false` argument is needed to prevent after_save from being run, which wouldn't make sense prior to bulk import. Something to note in this example is that the before_create and before_save callbacks will run before the validation callbacks.\n\nIf that is an issue, another possible approach is to loop through your models first to do validations and then only run callbacks on and import the valid models.\n\n```ruby\nvalid_books = []\ninvalid_books = []\n\nbooks.each do |book|\n  if book.valid?\n    valid_books \u003c\u003c book\n  else\n    invalid_books \u003c\u003c book\n  end\nend\n\nvalid_books.each do |book|\n  book.run_callbacks(:save) { false }\n  book.run_callbacks(:create) { false }\nend\n\nBook.import valid_books, validate: false\n```\n\n### Supported Adapters\n\nThe following database adapters are currently supported:\n\n* MySQL - supports core import functionality plus on duplicate key update support (included in activerecord-import 0.1.0 and higher)\n* MySQL2 - supports core import functionality plus on duplicate key update support (included in activerecord-import 0.2.0 and higher)\n* PostgreSQL - supports core import functionality (included in activerecord-import 0.1.0 and higher)\n* SQLite3 - supports core import functionality (included in activerecord-import 0.1.0 and higher)\n* Oracle - supports core import functionality through DML trigger (available as an external gem: [activerecord-import-oracle_enhanced](https://github.com/keeguon/activerecord-import-oracle_enhanced))\n* SQL Server - supports core import functionality 
(available as an external gem: [activerecord-import-sqlserver](https://github.com/keeguon/activerecord-import-sqlserver))\n\nIf your adapter isn't listed here, please consider creating an external gem as described in [Additional Adapters](#additional-adapters) to provide support. If you do, feel free to update this README to include a link to the new adapter's repository!\n\nTo test which features are supported by your adapter, use the following methods on a model class:\n\n* `supports_import?(*args)`\n* `supports_on_duplicate_key_update?`\n* `supports_setting_primary_key_of_imported_objects?`\n\n### Additional Adapters\n\nAdditional adapters can be provided by gems external to activerecord-import by providing an adapter that matches the naming convention set up by activerecord-import (and subsequently activerecord) for dynamically loading adapters. This involves also providing a folder on the load path that follows the activerecord-import naming convention to allow activerecord-import to dynamically load the file.\n\nWhen `ActiveRecord::Import.require_adapter(\"fake_name\")` is called the require will be:\n\n```ruby\nrequire 'activerecord-import/active_record/adapters/fake_name_adapter'\n```\n\nThis allows an external gem to dynamically add an adapter without the need to add any file/code to the core activerecord-import gem.\n\n### Requiring\n\n\u003e **Note**\n\u003e These instructions will only work if you are using version 0.2.0 or higher.\n\n#### Autoloading via Bundler\n\nIf you are using Rails or otherwise autoload your dependencies via Bundler, all you need to do is add the gem to your `Gemfile` like so:\n\n```ruby\ngem 'activerecord-import'\n```\n\n#### Manually Loading\n\nYou may want to manually load activerecord-import for one reason or another. 
First, add the `require: false` argument like so:\n\n```ruby\ngem 'activerecord-import', require: false\n```\n\nThis will allow you to load up activerecord-import in the file or files where you are using it and only load the parts you need.\nIf you are doing this within Rails and ActiveRecord has established a database connection (such as within a controller), you will need to do extra initialization work:\n\n```ruby\nrequire 'activerecord-import/base'\n# load the appropriate database adapter (postgresql, mysql2, sqlite3, etc)\nrequire 'activerecord-import/active_record/adapters/postgresql_adapter'\n```\n\nIf your gem dependencies aren’t autoloaded, and your script will be establishing a database connection, then simply require activerecord-import after ActiveRecord has been loaded, i.e.:\n\n```ruby\nrequire 'active_record'\nrequire 'activerecord-import'\n```\n\n### Load Path Setup\n\nTo understand how rubygems loads code you can reference the following:\n\n  https://guides.rubygems.org/patterns/#loading-code\n\nAnd here is an example of how active_record dynamically loads adapters:\n\n  https://github.com/rails/rails/blob/main/activerecord/lib/active_record/connection_adapters.rb\n\nIn summary, when a gem is loaded rubygems adds the `lib` folder of the gem to the global load path `$LOAD_PATH` so that `require` lookups will propagate through all of the folders on the load path. When a `require` is issued each folder on the `$LOAD_PATH` is checked for the file and/or folder referenced. 
This allows a gem (like activerecord-import) to push the activerecord-import folder (or namespace) onto the `$LOAD_PATH` and any adapters provided by activerecord-import will be found by rubygems when the require is issued.\n\nIf a `fake_name` adapter is needed by a gem (potentially called `activerecord-import-fake_name`) then the folder structure should look as follows:\n\n```bash\nactiverecord-import-fake_name/\n|-- activerecord-import-fake_name.gemspec\n|-- lib\n|   |-- activerecord-import-fake_name.rb\n|   |-- activerecord-import-fake_name\n|   |   |-- version.rb\n|   |-- activerecord-import\n|   |   |-- active_record\n|   |   |   |-- adapters\n|   |   |       |-- fake_name_adapter.rb\n```\n\nWhen rubygems pushes the `lib` folder onto the load path a `require` will now find `activerecord-import/active_record/adapters/fake_name_adapter` as it runs through the lookup process for a ruby file under that path in `$LOAD_PATH`.\n\n### Conflicts With Other Gems\n\nActiverecord-Import adds the `.import` method onto `ActiveRecord::Base`. There are other gems, such as `elasticsearch-rails`, that do the same thing. In conflicts such as this, there is an aliased method named `.bulk_import` that can be used interchangeably.\n\nIf you are using the `apartment` gem, there is a weird triple interaction between that gem, `activerecord-import`, and `activerecord` involving caching of the `sequence_name` of a model. This can be worked around by explicitly setting this value within the model. For example:\n\n```ruby\nclass Post \u003c ActiveRecord::Base\n  self.sequence_name = \"posts_seq\"\nend\n```\n\nAnother way to work around the issue is to call `.reset_sequence_name` on the model. For example:\n\n```ruby\nschemas.all.each do |schema|\n  Apartment::Tenant.switch! 
schema.name\n  ActiveRecord::Base.transaction do\n    Post.reset_sequence_name\n\n    Post.import posts\n  end\nend\n```\n\nSee https://github.com/zdennis/activerecord-import/issues/233 for further discussion.\n\n### More Information\n\nFor more information on Activerecord-Import please see its wiki: https://github.com/zdennis/activerecord-import/wiki\n\nTo document new information, please add to the README instead of the wiki. See https://github.com/zdennis/activerecord-import/issues/397 for discussion.\n\n### Contributing\n\n#### Running Tests\n\nThe first thing you need to do is set up your database(s):\n\n* copy `test/database.yml.sample` to `test/database.yml`\n* modify `test/database.yml` for your database settings\n* create databases as needed\n\nAfter that, you can run the tests. They run against multiple databases and ActiveRecord versions.\n\nThis is one example of how to run the tests:\n\n```bash\nrm Gemfile.lock\nAR_VERSION=7.0 bundle install\nAR_VERSION=7.0 bundle exec rake test:postgresql test:sqlite3 test:mysql2\n```\n\nOnce you have pushed up your changes, you can find your CI results [here](https://github.com/zdennis/activerecord-import/actions).\n\n#### Docker Setup\n\nBefore you begin, make sure you have [Docker](https://www.docker.com/products/docker-desktop/) and [Docker Compose](https://docs.docker.com/compose/) installed on your machine. If you don't, you can install both via Homebrew using the following command:\n\n```bash\nbrew install docker \u0026\u0026 brew install docker-compose\n```\n\n##### Steps\n\n1. In your terminal run `docker-compose up --build`\n1. In another tab/window run `docker-compose exec app bash`\n1. 
In that same terminal run the mysql2 test by running `bundle exec rake test:mysql2`\n\n## Issue Triage [![Open Source Helpers](https://www.codetriage.com/zdennis/activerecord-import/badges/users.svg)](https://www.codetriage.com/zdennis/activerecord-import)\n\nYou can triage issues which may include reproducing bug reports or asking for vital information, such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to activerecord-import on CodeTriage](https://www.codetriage.com/zdennis/activerecord-import).\n\n# License\n\nThis is licensed under the MIT license.\n\n# Author\n\nZach Dennis (zach.dennis@gmail.com)\n","funding_links":[],"categories":["Ruby","Gems","ORM/ODM Extensions","ActiveRecord"],"sub_categories":["Bulk Operations","Articles"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzdennis%2Factiverecord-import","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fzdennis%2Factiverecord-import","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzdennis%2Factiverecord-import/lists"}