{"id":15111536,"url":"https://github.com/ariga/atlas-operator","last_synced_at":"2026-01-28T11:27:33.775Z","repository":{"id":158985987,"uuid":"629928182","full_name":"ariga/atlas-operator","owner":"ariga","description":"Atlas Kubernetes Operator","archived":false,"fork":false,"pushed_at":"2026-01-25T11:46:15.000Z","size":995,"stargazers_count":137,"open_issues_count":13,"forks_count":18,"subscribers_count":7,"default_branch":"master","last_synced_at":"2026-01-26T01:39:52.232Z","etag":null,"topics":["databases","kubernetes","kubernetes-operator","migrations"],"latest_commit_sha":null,"homepage":"https://atlasgo.io/integrations/kubernetes/operator","language":"Go","has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ariga.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-04-19T10:04:05.000Z","updated_at":"2026-01-25T11:46:17.000Z","dependencies_parsed_at":"2026-01-19T10:07:31.517Z","dependency_job_id":null,"html_url":"https://github.com/ariga/atlas-operator","commit_stats":{"total_commits":189,"total_committers":8,"mean_commits":23.625,"dds":0.6190476190476191,"last_synced_commit":"5c3e5f305564b67e9598f420bdf9146c63015b14"},"previous_names":[],"tags_count":53,"template":false,"template_full_name":null,"purl":"pkg:github/ariga/atlas-operator","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ariga%2Fatlas-operator","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ariga%2Fatlas-oper
ator/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ariga%2Fatlas-operator/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ariga%2Fatlas-operator/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ariga","download_url":"https://codeload.github.com/ariga/atlas-operator/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ariga%2Fatlas-operator/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28845088,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-28T10:53:21.605Z","status":"ssl_error","status_checked_at":"2026-01-28T10:53:20.789Z","response_time":57,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["databases","kubernetes","kubernetes-operator","migrations"],"created_at":"2024-09-26T00:20:57.972Z","updated_at":"2026-01-28T11:27:33.759Z","avatar_url":"https://github.com/ariga.png","language":"Go","readme":"# The Atlas Kubernetes Operator\n\nManage your database with Kubernetes using [Atlas](https://atlasgo.io).\n\n### What is Atlas? \n\n[Atlas](https://atlasgo.io) is a popular open-source schema management tool.\nIt is designed to help software engineers, DBAs and DevOps practitioners manage their database schemas. 
\nUsers can use the [Atlas DDL](https://atlasgo.io/atlas-schema/sql-resources) (data-definition language)\nor [plain SQL](https://atlasgo.io/declarative/apply#sql-schema) to describe the desired database \nschema and use the command-line tool to plan and apply the migrations to their systems.\n\n### What is the Atlas Kubernetes Operator?\n\nLike many other stateful resources, reconciling the desired state of a database with its actual state\ncan be a complex task that requires a lot of domain knowledge. [Kubernetes Operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)\nwere introduced to the Kubernetes ecosystem to help users manage complex stateful resources by codifying \nthis domain knowledge into a Kubernetes controller.\n\nThe Atlas Kubernetes Operator is a Kubernetes controller that uses [Atlas](https://atlasgo.io) to manage\nthe schema of your database. The Atlas Kubernetes Operator allows you to define the desired schema of your database\nand apply it using the Kubernetes API.\n\n### Features\n\n- [x] Support for [declarative migrations](https://atlasgo.io/concepts/declarative-vs-versioned#declarative-migrations)\n  for schemas defined in [Plain SQL](https://atlasgo.io/declarative/apply#sql-schema) or \n  [Atlas HCL](https://atlasgo.io/concepts/declarative-vs-versioned#declarative-migrations).\n- [x] Detect risky changes such as accidentally dropping columns or tables and define a policy to handle them.\n- [x] Support for [versioned migrations](https://atlasgo.io/concepts/declarative-vs-versioned#versioned-migrations).\n- [x] Supported databases: MySQL, MariaDB, PostgreSQL, SQLite, TiDB, CockroachDB\n\n### Declarative schema migrations\n\n![](https://atlasgo.io/uploads/images/operator-declarative.png)\n\nThe Atlas Kubernetes Operator supports [declarative migrations](https://atlasgo.io/concepts/declarative-vs-versioned#declarative-migrations).\nIn declarative migrations, the desired state of the database is defined by the user 
and the operator is responsible\nfor reconciling the desired state with the actual state of the database (planning and executing `CREATE`, `ALTER`\nand `DROP` statements).\n\n### Versioned schema migrations\n\n![](https://atlasgo.io/uploads/k8s/operator/versioned-flow.png)\n\nThe Atlas Kubernetes Operator also supports [versioned migrations](https://atlasgo.io/concepts/declarative-vs-versioned#versioned-migrations).\nIn versioned migrations, the database schema is defined by a series of SQL scripts (\"migrations\") that are applied\nin lexicographical order. The user can specify the version and migration directory to run, which can be located\non the [Atlas Cloud](https://atlasgo.io/cloud/getting-started) or stored as a `ConfigMap` in your Kubernetes\ncluster.\n\n### Installation\n\nThe Atlas Kubernetes Operator is available as a Helm chart. To install the chart with the release name `atlas-operator`:\n\n```bash\nhelm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator --create-namespace --namespace atlas-operator\n```\n\n### Configuration\n\nTo configure the operator, you can set the following values in the `values.yaml` file:\n\n- `prewarmDevDB`: The Operator always keeps devdb resources around to speed up the migration process. Set this to `false` to disable this feature.\n\n- `allowCustomConfig`: Enable this to allow custom `atlas.hcl` configuration. 
To use this feature, you can set the `config` field in the `AtlasSchema` or `AtlasMigration` resource.\n\n```yaml\n  spec:\n    envName: myenv\n    config: |\n      env myenv {}\n    # config from secretKeyRef\n    # configFrom:\n    #   secretKeyRef:\n    #     key: config\n    #     name: my-secret\n```\n\nTo use variables in the `config` field:\n\n```yaml\n  spec:\n    envName: myenv\n    variables:\n      - name: db_url\n        value: \"mysql://root\"\n      # variables from secretKeyRef\n      # - name: db_url\n      #   valueFrom:\n      #     secretKeyRef:\n      #       key: db_url\n      #       name: my-secret\n      # variables from configMapKeyRef\n      # - name: db_url\n      #   valueFrom:\n      #     configMapKeyRef:\n      #       key: db_url\n      #       name: my-configmap\n    config: |\n      variable \"db_url\" {\n        type = string\n      }\n      env myenv {\n        url = var.db_url\n      }\n```\n\n\u003e Note: Allowing custom configuration enables executing arbitrary commands using the `external` data source as well as arbitrary SQL using the `sql` data source. 
Use this feature with caution.\n\n- `extraEnvs`: Used to set environment variables for the operator\n\n```yaml\n  extraEnvs: []\n  # extraEnvs:\n  #   - name: MSSQL_ACCEPT_EULA\n  #     value: \"Y\"\n  #   - name: MSSQL_PID\n  #     value: \"Developer\"\n  #   - name: ATLAS_TOKEN\n  #     valueFrom:\n  #       secretKeyRef:\n  #         key: ATLAS_TOKEN\n  #         name: atlas-token-secret\n  #   - name: BAZ\n  #     valueFrom:\n  #       configMapKeyRef:\n  #         key: BAZ\n  #         name: configmap-resource\n```\n\n\u003e Note: The SQL Server driver requires the `MSSQL_ACCEPT_EULA` and `MSSQL_PID` environment variables to be set for acceptance of the [Microsoft EULA](https://go.microsoft.com/fwlink/?linkid=857698) and the product ID, respectively.\n\n- `extraVolumes`: Used to mount additional volumes to the operator\n\n```yaml\n  extraVolumes: []\n  # extraVolumes:\n  #   - name: my-volume\n  #     secret:\n  #       secretName: my-secret\n  #   - name: my-volume\n  #     configMap:\n  #       name: my-configmap\n```\n\n- `extraVolumeMounts`: Used to define where the additional volumes are mounted in the operator container\n\n```yaml\n  extraVolumeMounts: []\n  # extraVolumeMounts:\n  #   - name: my-volume\n  #     mountPath: /path/to/mount\n  #   - name: my-volume\n  #     mountPath: /path/to/mount\n```\n\n### Authentication\n\nIf you want to use any feature that requires logging in (triggers, functions, procedures, sequence support, or the SQL Server, ClickHouse, and Redshift drivers), you need to provide the operator with an Atlas token. 
You can do this by creating a secret with the token:\n\n```shell\nkubectl create secret generic atlas-token-secret \\\n  --from-literal=ATLAS_TOKEN='aci_xxxxxxx'\n```\n\nThen set the `ATLAS_TOKEN` environment variable in the operator's deployment manifest:\n\n```yaml\nvalues:\n  extraEnvs:\n    - name: ATLAS_TOKEN\n      valueFrom:\n        secretKeyRef:\n          key: ATLAS_TOKEN\n          name: atlas-token-secret\n```\n\n### Getting started\n\nIn this example, we will create a MySQL database and apply a schema to it. After installing the\noperator, follow these steps to get started:\n\n1. Create a MySQL database and a secret with an [Atlas URL](https://atlasgo.io/concepts/url)\n  to the database:\n\n  ```bash\n  kubectl apply -f https://raw.githubusercontent.com/ariga/atlas-operator/master/examples/databases/mysql.yaml\n  ```\n  \n  Result:\n  \n  ```bash\n  deployment.apps/mysql created\n  service/mysql created\n  secret/mysql-credentials created\n  ```\n\n2. Create a file named `schema.yaml` containing an `AtlasSchema` resource to define the desired schema:\n\n  ```yaml\n  apiVersion: db.atlasgo.io/v1alpha1\n  kind: AtlasSchema\n  metadata:\n    name: atlasschema-mysql\n  spec:\n    urlFrom:\n      secretKeyRef:\n        key: url\n        name: mysql-credentials\n    schema:\n      sql: |\n        create table users (\n          id int not null auto_increment,\n          name varchar(255) not null,\n          email varchar(255) unique not null,\n          short_bio varchar(255) not null,\n          primary key (id)\n        );\n  ```\n\n3. Apply the schema:\n\n  ```bash\n  kubectl apply -f schema.yaml\n  ```\n  \n  Result:\n  ```bash\n  atlasschema.db.atlasgo.io/atlasschema-mysql created\n  ```\n\n4. 
Check that our table was created:\n\n  ```bash\n  kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e \"describe myapp.users\"\n  ```\n  \n  Result:\n  \n  ```bash\n  +-----------+--------------+------+-----+---------+----------------+\n  | Field     | Type         | Null | Key | Default | Extra          |\n  +-----------+--------------+------+-----+---------+----------------+\n  | id        | int          | NO   | PRI | NULL    | auto_increment |\n  | name      | varchar(255) | NO   |     | NULL    |                |\n  | email     | varchar(255) | NO   | UNI | NULL    |                |\n  | short_bio | varchar(255) | NO   |     | NULL    |                |\n  +-----------+--------------+------+-----+---------+----------------+\n  ```\n  \nHooray! We applied our desired schema to our target database.\n\nNow, let's try versioned migrations with a PostgreSQL database.\n\n1. Create a PostgreSQL database and a secret with an [Atlas URL](https://atlasgo.io/concepts/url)\n  to the database:\n\n  ```bash\n  kubectl apply -f https://raw.githubusercontent.com/ariga/atlas-operator/master/examples/databases/postgres.yaml\n  ```\n  \n  Result:\n\n  ```bash\n  deployment.apps/postgres created\n  service/postgres unchanged\n  ```\n\n2. Create a file named `migrationdir.yaml` to define your migration directory:\n\n  ```yaml\n  apiVersion: v1\n  kind: ConfigMap\n  metadata:\n    name: migrationdir\n  data:\n    20230316085611.sql: |\n      create sequence users_seq;\n      create table users (\n        id int not null default nextval ('users_seq'),\n        name varchar(255) not null,\n        email varchar(255) unique not null,\n        short_bio varchar(255) not null,\n        primary key (id)\n      );\n    atlas.sum: |\n      h1:FwM0ApKo8xhcZFrSlpa6dYjvi0fnDPo/aZSzajtbHLc=\n      20230316085611.sql h1:ldFr73m6ZQzNi8q9dVJsOU/ZHmkBo4Sax03AaL0VUUs=\n  ``` \n\n3. 
Create a file named `atlasmigration.yaml` to define your migration resource that links to the migration directory.\n\n  ```yaml\n  apiVersion: db.atlasgo.io/v1alpha1\n  kind: AtlasMigration\n  metadata:\n    name: atlasmigration-sample\n  spec:\n    urlFrom:\n      secretKeyRef:\n        key: url\n        name: postgres-credentials\n    dir:\n      configMapRef:\n        name: \"migrationdir\"\n  ```\n\n  Alternatively, we can define a migration directory inlined in the migration resource instead of using a ConfigMap:\n  \n  ```yaml\n  apiVersion: db.atlasgo.io/v1alpha1\n  kind: AtlasMigration\n  metadata:\n    name: atlasmigration-sample\n  spec:\n    urlFrom:\n      secretKeyRef:\n        key: url\n        name: postgres-credentials\n    dir:\n      local:\n        20230316085611.sql: |\n          create sequence users_seq;\n          create table users (\n            id int not null default nextval ('users_seq'),\n            name varchar(255) not null,\n            email varchar(255) unique not null,\n            short_bio varchar(255) not null,\n            primary key (id)\n          );\n        atlas.sum: |\n          h1:FwM0ApKo8xhcZFrSlpa6dYjvi0fnDPo/aZSzajtbHLc=\n          20230316085611.sql h1:ldFr73m6ZQzNi8q9dVJsOU/ZHmkBo4Sax03AaL0VUUs=\n  ```\n4. Apply migration resources:\n\n  ```bash\n  kubectl apply -f migrationdir.yaml\n  kubectl apply -f atlasmigration.yaml\n  ```\n  \n  Result:\n  ```bash\n  atlasmigration.db.atlasgo.io/atlasmigration-sample created\n  ```\n\n5. 
Check that our table was created:\n\n  ```\n  kubectl exec -it $(kubectl get pods -l app=postgres -o jsonpath='{.items[0].metadata.name}') -- psql -U root -d postgres -c \"\\d+ users\"\n  ```\n\n  Result:\n\n  ```bash\n    Column   |          Type          | Collation | Nullable |            Default             | Storage  | Compression | Stats target | Description\n  -----------+------------------------+-----------+----------+--------------------------------+----------+-------------+--------------+-------------\n   id        | integer                |           | not null | nextval('users_seq'::regclass) | plain    |             |              |\n   name      | character varying(255) |           | not null |                                | extended |             |              |\n   email     | character varying(255) |           | not null |                                | extended |             |              |\n   short_bio | character varying(255) |           | not null |                                | extended |             |              |\n  ```\n  \n  Please refer to [this link](https://atlasgo.io/integrations/kubernetes/versioned#api-reference) to explore the supported API for versioned migrations.\n\n### API Reference\n\nExample resource: \n\n```yaml\napiVersion: db.atlasgo.io/v1alpha1\nkind: AtlasSchema\nmetadata:\n  name: atlasschema-mysql\nspec:\n  urlFrom:\n    secretKeyRef:\n      key: url\n      name: mysql-credentials\n  policy:\n    # Fail if the diff planned by Atlas contains destructive changes.\n    lint:\n      destructive:\n        error: true\n    diff:\n      # Omit any DROP INDEX statements from the diff planned by Atlas.\n      skip:\n        drop_index: true\n  schema:\n    sql: |\n      create table users (\n        id int not null auto_increment,\n        name varchar(255) not null,\n        primary key (id)\n      );\n  exclude:\n    - ignore_me\n```\n\nThis resource describes the desired schema of a MySQL database. 
\n* The `urlFrom` field is a reference to a secret containing an [Atlas URL](https://atlasgo.io/concepts/url) \n to the target database. \n* The `schema` field contains the desired schema in SQL. To define the schema in HCL instead of SQL, use the `hcl` field:\n  ```yaml\n  spec:\n    schema:\n      hcl: |\n        table \"users\" {\n          // ...\n        }\n  ```\n  To learn more about defining SQL resources in HCL, see [this guide](https://atlasgo.io/atlas-schema/sql-resources).\n* The `policy` field defines different policies that direct the way Atlas will plan and execute schema changes.\n  * The `lint` policy defines a policy for linting the schema. In this example, we define a policy that will fail\n    if the diff planned by Atlas contains destructive changes.\n  * The `diff` policy defines a policy for planning the schema diff. In this example, we define a policy that will\n    omit any `DROP INDEX` statements from the diff planned by Atlas.\n\n### Version checks\n\nThe operator will periodically check for new versions and security advisories related to the operator.\nTo disable version checks, set the `SKIP_VERCHECK` environment variable to `true` in the operator's\ndeployment manifest.\n\n### Troubleshooting\n\nOn successful reconciliation, the condition status will look like this:\n\n```yaml\nStatus:\n  Conditions:\n    Last Transition Time: 2024-03-20T09:59:56Z\n    Message: \"\"\n    Reason: Applied\n    Status: True\n    Type: Ready\n  Last Applied: 1710343398\n  Last Applied Version: 20240313121148\n  observed_hash: d5a1c1c08de2530d9397d4\n```\n\nIn case of an error, the condition `status` will be set to false and the `reason` field will contain the type of error that occurred (e.g. `Reconciling`, `ReadingMigrationData`, `Migrating`, etc.). 
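\n\nFor example, the conditions of the example resources created earlier in this guide can be inspected with `kubectl`:\n\n```bash\nkubectl get atlasschema atlasschema-mysql -o jsonpath='{.status.conditions}'\nkubectl describe atlasmigration atlasmigration-sample\n```\n\n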
To get more information about the error, you can check the `message` field.\n\n**For AtlasSchema resource:**\n\n| Reason | Description |\n| ------ | ----------- |\n| Reconciling | The operator is reconciling the desired state with the actual state of the database |\n| ReadSchema | Failed to read the schema from the ConfigMap, or the database credentials are invalid |\n| GettingDevDB | Failed to get a [Dev Database](https://atlasgo.io/concepts/dev-database), which is used to normalize the schema |\n| VerifyingFirstRun | The first run of the operator contains destructive changes |\n| LintPolicyError | The lint policy was violated |\n| ApplyingSchema | Failed to apply the schema to the database |\n\n**For AtlasMigration resource:**\n\n| Reason | Description |\n| ------ | ----------- |\n| Reconciling | The operator is reconciling the desired state with the actual state of the database |\n| GettingDevDB | Failed to get a [Dev Database](https://atlasgo.io/concepts/dev-database) which is required to compute a migration plan |\n| ReadingMigrationData | Failed to read the migration directory from the `ConfigMap` or Atlas Cloud, or the database credentials are invalid |\n| ProtectedFlowError | The migration is protected and the operator is not able to apply it |\n| ApprovalPending | Applying the migration requires manual approval on Atlas Cloud. The URL used for approval is provided in the `approvalUrl` field of the `status` object |\n| Migrating | Failed to apply the migrations to the database |\n\n### Support\n\nNeed help? 
File issues on the [Atlas Issue Tracker](https://github.com/ariga/atlas/issues) or join\nour [Discord](https://discord.gg/zZ6sWVg6NT) server.\n\n### Development\n\nStart [Minikube](https://minikube.sigs.k8s.io/docs/start/)\n\n```bash\nminikube start\n```\n\nInstall CRDs\n\n```bash\nmake install\n```\n\nStart [Skaffold](https://skaffold.dev/)\n\n```bash\nskaffold dev --profile kustomize\n```\n\n### License\n\nThe Atlas Kubernetes Operator is licensed under the [Apache License 2.0](LICENSE).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fariga%2Fatlas-operator","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fariga%2Fatlas-operator","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fariga%2Fatlas-operator/lists"}