{"id":13397535,"url":"https://github.com/bep/s3deploy","last_synced_at":"2025-05-15T14:06:37.134Z","repository":{"id":4482331,"uuid":"52308449","full_name":"bep/s3deploy","owner":"bep","description":"A simple tool to deploy static websites to Amazon S3 and CloudFront with Gzip and custom headers support (e.g. \"Cache-Control\")","archived":false,"fork":false,"pushed_at":"2024-10-18T05:56:54.000Z","size":2975,"stargazers_count":535,"open_issues_count":5,"forks_count":41,"subscribers_count":11,"default_branch":"master","last_synced_at":"2025-04-07T17:05:46.875Z","etag":null,"topics":["amazon-s3","deploy","s3","static-site"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bep.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":["bep"]}},"created_at":"2016-02-22T21:48:59.000Z","updated_at":"2025-04-07T09:54:32.000Z","dependencies_parsed_at":"2024-05-06T13:00:02.705Z","dependency_job_id":"2add4fc2-4767-4e1b-bc6e-6f2020ee3b1d","html_url":"https://github.com/bep/s3deploy","commit_stats":{"total_commits":178,"total_committers":14,"mean_commits":"12.714285714285714","dds":0.348314606741573,"last_synced_commit":"f951c0dd5743194faee8fb13d4c1954228c9ddf2"},"previous_names":[],"tags_count":37,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bep%2Fs3deploy","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bep%2Fs3deploy/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/Gi
tHub/repositories/bep%2Fs3deploy/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bep%2Fs3deploy/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bep","download_url":"https://codeload.github.com/bep/s3deploy/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254355335,"owners_count":22057354,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["amazon-s3","deploy","s3","static-site"],"created_at":"2024-07-30T18:01:29.346Z","updated_at":"2025-05-15T14:06:32.103Z","avatar_url":"https://github.com/bep.png","language":"Go","readme":"# s3deploy\n\n[![Project status: active – The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![GoDoc](https://godoc.org/github.com/bep/s3deploy?status.svg)](https://godoc.org/github.com/bep/s3deploy)\n[![Test](https://github.com/bep/s3deploy/actions/workflows/test.yml/badge.svg)](https://github.com/bep/s3deploy/actions/workflows/test.yml)\n[![Go Report Card](https://goreportcard.com/badge/github.com/bep/s3deploy)](https://goreportcard.com/report/github.com/bep/s3deploy)\n[![codecov](https://codecov.io/gh/bep/s3deploy/branch/master/graph/badge.svg)](https://codecov.io/gh/bep/s3deploy)\n[![Release](https://img.shields.io/github/release/bep/s3deploy.svg?style=flat-square)](https://github.com/bep/s3deploy/releases/latest)\n\nA simple tool to deploy static websites to Amazon S3 and CloudFront with Gzip and custom 
headers support (e.g. \"Cache-Control\"). It uses ETag hashes to check if a file has changed, which makes it optimal in combination with static site generators like [Hugo](https://github.com/gohugoio/hugo).\n\n * [Install](#install)\n * [Configuration](#configuration)\n     * [Flags](#flags)\n     * [Routes](#routes)\n * [Global AWS Configuration](#global-aws-configuration)\n * [Example IAM Policy](#example-iam-policy)\n * [CloudFront CDN Cache Invalidation](#cloudfront-cdn-cache-invalidation)\n     * [Example IAM Policy With CloudFront Config](#example-iam-policy-with-cloudfront-config)\n * [Background Information](#background-information)\n * [Alternatives](#alternatives)\n * [Stargazers over time](#stargazers-over-time)\n\n## Install\n\nPre-built binaries can be found [here](https://github.com/bep/s3deploy/releases/latest).\n\n**s3deploy** is a [Go application](https://golang.org/doc/install), so you can also install the latest version with:\n\n```bash\ngo install github.com/bep/s3deploy/v2@latest\n```\n\nTo install on macOS using Homebrew:\n\n```bash\nbrew install bep/tap/s3deploy\n```\n\n**Note:** The brew tap above currently stops at v2.8.1; see [this issue](https://github.com/bep/s3deploy/issues/312) for more info.\n\nNote that `s3deploy` works well with a continuous integration service such as [CircleCI](https://circleci.com/). See [this](https://mostlygeek.com/posts/hugo-circle-s3-hosting/) for a tutorial that uses s3deploy with CircleCI.\n\n## Configuration\n\n### Flags\n\nThe list of flags from running `s3deploy -h`:\n\n```\n-V\tprint version and exit\n-acl string\n    provide an ACL for uploaded objects. to make objects public, set to 'public-read'. 
all possible values are listed here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl (default \"private\")\n-bucket string\n    destination bucket name on AWS\n-config string\n    optional config file (default \".s3deploy.yml\")\n-distribution-id value\n    optional CDN distribution ID for cache invalidation, repeat flag for multiple distributions\n-endpoint-url string\n    optional endpoint URL\n-force\n    upload even if the etags match\n-h\thelp\n-ignore value\n    regexp pattern for ignoring files, repeat flag for multiple patterns,\n-key string\n    access key ID for AWS\n-max-delete int\n    maximum number of files to delete per deploy (default 256)\n-path string\n    optional bucket sub path\n-public-access\n    DEPRECATED: please set -acl='public-read'\n-quiet\n    enable silent mode\n-region string\n    name of AWS region\n-secret string\n    secret access key for AWS\n-skip-local-dirs value\n    regexp pattern of files of directories to ignore when walking the local directory, repeat flag for multiple patterns, default \"^\\\\/?(?:\\\\w+\\\\/)*(\\\\.\\\\w+)\"\n-skip-local-files value\n    regexp pattern of files to ignore when walking the local directory, repeat flag for multiple patterns, default \"^(.*/)?/?.DS_Store$\"\n-source string\n    path of files to upload (default \".\")\n-strip-index-html\n    strip index.html from all directories expect for the root entry\n-try\n    trial run, no remote updates\n-v\tenable verbose logging\n-workers int\n    number of workers to upload files (default -1)\n```\n\nThe flags can be set in one of the following ways (in priority order):\n\n1. As a flag, e.g. `s3deploy -path public/`\n1. As an OS environment variable prefixed with `S3DEPLOY_`, e.g. `S3DEPLOY_PATH=\"public/\"`.\n1. As a key/value in `.s3deploy.yml`, e.g. `path: \"public/\"`\n1. For `key` and `secret` resolution, the OS environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (and `AWS_SESSION_TOKEN`) will also be checked. 
This way you don't need to do anything special to make it work with [AWS Vault](https://github.com/99designs/aws-vault) and similar tools.\n\nEnvironment variable expressions in `.s3deploy.yml` of the form `${VAR}` will be expanded before the file is parsed:\n\n```yaml\npath: \"${MYVARS_PATH}\"\nmax-delete: \"${MYVARS_MAX_DELETE@U}\"\n```\n\nNote the special `@U` (_Unquote_) syntax for the int field.\n\n#### Skip local files and directories\n\nThe options `-skip-local-dirs` and `-skip-local-files` will match against a relative path from the source directory with Unix-style path separators. The source directory is represented by `.`, and the rest starts with a `/`.\n\n#### Strip index.html\n\nThe option `-strip-index-html` strips index.html from all directories except for the root entry. This matches the option with the (almost) same name in [hugo deploy](https://gohugo.io/hosting-and-deployment/hugo-deploy/). This simplifies the cloud configuration needed for some use cases, such as CloudFront distributions with S3 bucket origins. See this [PR](https://github.com/gohugoio/hugo/pull/12608) for more information.\n\n### Routes\n\nThe `.s3deploy.yml` configuration file can also contain one or more routes. A route matches files given a regexp. Each route can apply:\n\n`header`\n: Header values, the most notable probably being `Cache-Control`. Note that the list of [system-defined metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html#object-metadata) that S3 currently supports and returns as HTTP headers when hosting a static site is very short. If you have more advanced requirements (e.g. security headers), see [this comment](https://github.com/bep/s3deploy/issues/57#issuecomment-991782098).\n\n`gzip`\n: Set to true to gzip the content when stored in S3. 
This will also set the correct `Content-Encoding` when fetching the object from S3.\n\nExample:\n\n```yaml\nroutes:\n    - route: \"^.+\\\\.(js|css|svg|ttf)$\"\n      # Cache static assets for 1 year.\n      headers:\n         Cache-Control: \"max-age=31536000, no-transform, public\"\n      gzip: true\n    - route: \"^.+\\\\.(png|jpg)$\"\n      headers:\n         Cache-Control: \"max-age=31536000, no-transform, public\"\n      gzip: false\n    - route: \"^.+\\\\.(html|xml|json)$\"\n      gzip: true\n```\n\n## Global AWS Configuration\n\nSee https://docs.aws.amazon.com/sdk-for-go/api/aws/session/#hdr-Sessions_from_Shared_Config\n\nThe `AWS SDK` will fall back to credentials from `~/.aws/credentials`.\n\nIf you set the `AWS_SDK_LOAD_CONFIG` environment variable, it will also load shared config from `~/.aws/config`, where you can, among other things, set the global `region` to use when it is not otherwise provided.\n\n## Example IAM Policy\n\n```json\n{\n   \"Version\": \"2012-10-17\",\n   \"Statement\":[\n      {\n         \"Effect\":\"Allow\",\n         \"Action\":[\n            \"s3:ListBucket\",\n            \"s3:GetBucketLocation\"\n         ],\n         \"Resource\":\"arn:aws:s3:::\u003cbucketname\u003e\"\n      },\n      {\n         \"Effect\":\"Allow\",\n         \"Action\":[\n            \"s3:PutObject\",\n            \"s3:PutObjectAcl\",\n            \"s3:DeleteObject\"\n         ],\n         \"Resource\":\"arn:aws:s3:::\u003cbucketname\u003e/*\"\n      }\n   ]\n}\n```\n\nReplace \u003cbucketname\u003e with your own bucket name.\n\n## CloudFront CDN Cache Invalidation\n\nIf you have configured CloudFront CDN in front of your S3 bucket, you can supply the `distribution-id` as a flag. This makes sure the cache is invalidated for the updated files after the deployment to S3. 
Note that the AWS user must have the needed access rights.\n\nCloudFront allows [1,000 paths per month at no charge](https://aws.amazon.com/blogs/aws/simplified-multiple-object-invalidation-for-amazon-cloudfront/), so s3deploy tries to be smart about the invalidation strategy; we try to reduce the number of paths to 8. If that isn't possible, we fall back to a full invalidation, i.e. \"/*\".\n\n### Example IAM Policy With CloudFront Config\n\n```json\n{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"s3:ListBucket\",\n                \"s3:GetBucketLocation\"\n            ],\n            \"Resource\": \"arn:aws:s3:::\u003cbucketname\u003e\"\n        },\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"s3:PutObject\",\n                \"s3:DeleteObject\",\n                \"s3:PutObjectAcl\"\n            ],\n            \"Resource\": \"arn:aws:s3:::\u003cbucketname\u003e/*\"\n        },\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"cloudfront:GetDistribution\",\n                \"cloudfront:CreateInvalidation\"\n            ],\n            \"Resource\": \"*\"\n        }\n    ]\n}\n```\n\n## Background Information\n\nIf you're looking at `s3deploy`, then you've probably already seen the [`aws s3 sync` command](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html). That command's sync strategy is not optimised for static sites: it compares the **timestamp** and **size** of your files to decide whether to upload the file.\n\nBecause static-site generators can recreate **every** file (even if identical), the timestamp is updated and thus `aws s3 sync` will needlessly upload every single file. 
`s3deploy`, on the other hand, compares the ETag hash to detect actual changes, and uses that instead.\n\n## Alternatives\n\n* [go3up](https://github.com/alexaandru/go3up) by Alexandru Ungur\n* [s3up](https://github.com/nathany/s3up) by Nathan Youngman (the starting point of this project)\n\n## Stargazers over time\n\n[![Stargazers over time](https://starchart.cc/bep/s3deploy.svg)](https://starchart.cc/bep/s3deploy)\n","funding_links":["https://github.com/sponsors/bep"],"categories":["Go"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbep%2Fs3deploy","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbep%2Fs3deploy","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbep%2Fs3deploy/lists"}