{"id":13450765,"url":"https://github.com/Asana/bazels3cache","last_synced_at":"2025-03-23T16:32:11.622Z","repository":{"id":37925996,"uuid":"110285544","full_name":"Asana/bazels3cache","owner":"Asana","description":"Small web server for a Bazel cache, proxies to S3; allows Bazel to work offline; async uploads to make Bazel faster","archived":true,"fork":false,"pushed_at":"2022-12-07T18:20:13.000Z","size":97,"stargazers_count":80,"open_issues_count":9,"forks_count":19,"subscribers_count":113,"default_branch":"master","last_synced_at":"2024-10-28T17:39:27.069Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Asana.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-11-10T19:37:16.000Z","updated_at":"2024-09-29T21:18:14.000Z","dependencies_parsed_at":"2023-01-24T20:15:45.781Z","dependency_job_id":null,"html_url":"https://github.com/Asana/bazels3cache","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Asana%2Fbazels3cache","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Asana%2Fbazels3cache/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Asana%2Fbazels3cache/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Asana%2Fbazels3cache/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Asana","download_url":"https://codeload.github.com/Asana/bazels3cache/tar.gz/refs/heads/master","host":{"name":"GitHub","u
rl":"https://github.com","kind":"github","repositories_count":245130962,"owners_count":20565751,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T07:00:38.172Z","updated_at":"2025-03-23T16:32:11.292Z","avatar_url":"https://github.com/Asana.png","language":"TypeScript","readme":"# [DEPRECATED] This project is no longer maintained\n\nAs of December 2022, Asana is no longer using bazels3cache internally. See [the Bazel documentation](https://bazel.build/remote/caching#cache-backend) for potential alternatives.\n\n# Web server for proxying Bazel remote cache requests to S3.\n\n`bazels3cache` is a simple web server that supports basic WebDAV (`GET`, `PUT`,\n`HEAD`, and `DELETE`), and proxies those requests through to S3. 
You can use it\nwith `bazel --remote_http_cache=...`, so that you can use S3 for your [Bazel](https://bazel.build)\ncache.\n\n## Quick start\n\n*   Download and install bazels3cache:\n\n        npm install -g bazels3cache\n\n*   Launch `bazels3cache` like this (by default it listens on port 7777):\n\n        bazels3cache --bucket=MY_S3_BUCKET\n\n*   When you launch Bazel, tell it where the cache is:\n\n        bazel build --remote_http_cache=http://localhost:7777 ...\n\n## Main features\n\n*   Use an S3 bucket as the storage area for your Bazel remote cache.\n*   Keep working (gracefully degrading to no cache) even if you are offline.\n*   Asynchronous uploading to S3, to avoid slowing down your Bazel build.\n*   Local in-memory cache of recently accessed data (off by default).\n\n## Detailed description\n\nIf you want [Bazel](https://bazel.build) to use S3 as its backing store, you could really use any\nWebDAV-to-S3 proxy. But the key feature of `bazels3cache` that differentiates\nit from a general-purpose proxy is that if you are offline, it will report to\nBazel that \"everything is fine, I just can't find the items you're looking for\nin the cache.\" Even if Bazel tries to _upload_ something to the cache,\n`bazels3cache` will pretend the upload succeeded. (This is harmless; it's just\na cache, after all.) This means that Bazel will gracefully fall back to working\nlocally if you go offline.\n\nAnother feature (but off by default): In-memory cache. Bazel actually uses the\ncache only as [Content-addressable\nstorage](https://en.wikipedia.org/wiki/Content-addressable_storage) (CAS). What\nthis means is that the \"key\" (in this case, the URL) of any entry in the cache\nis actually a hash of that entry's contents. 
Because of this, you can be\nguaranteed that any cached data for a given key is definitely still valid.\n\n`bazels3cache` takes advantage of that fact, and optionally keeps a local\n(currently in-memory) cache of the data it has previously downloaded or\nuploaded. This can allow for faster cache response: Sometimes it will not be\nnecessary to make a round-trip to S3. (This feature is OFF by default. Use\n`--cache.enabled=true` to enable it.)\n\n## Starting\n\n`bazels3cache` will look for AWS credentials in the standard AWS-defined\nplaces, including the\n[environment](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-environment.html)\n(`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) and\n[`~/.aws/credentials`](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-shared.html).\n\n## Stopping\n\nA clean shutdown:\n\n    curl http://localhost:7777/shutdown\n\nOr the brute-force way:\n\n    pkill -f bazels3cache\n\nAlso, if `idleMinutes` is greater than zero, `bazels3cache` will cleanly\nterminate itself after it has received no requests for that many minutes.\n\n## Printing debug info to the console\n\n`bazels3cache` uses the [`debug`](https://www.npmjs.com/package/debug) Node\npackage, so if you want to see debugging output, run it with the `DEBUG`\nenvironment variable:\n\n    DEBUG=bazels3cache* bin/bazels3cache\n\n## Offline usage\n\nAs mentioned above, it is often desirable to have Bazel continue to work even\nif you are offline.  By default, if `bazels3cache` is unable to reach S3, it\nwill _not_ report error messages back to Bazel; it will continue to function,\npassing appropriate success codes back to Bazel.\n\nThe way this works is:\n\n*   `GET` and `HEAD`: If `bazels3cache` can find the item in its local cache,\n    it will return it, along with a status code of `200 OK`; otherwise, it will\n    return `404 Not Found`. 
Bazel will simply treat this the same as any other\n    cache miss. `bazels3cache` will never report back any other errors.\n*   `PUT`: `bazels3cache` will store the item in its local cache and then\n    report back `200 OK`. It will never let Bazel know that it was unable to\n    upload the item to S3.\n\nTo be clear: The only errors that will be ignored in this way are connectivity\nerrors. Other S3 errors, such as invalid key, access denied, etc., will be\npassed on to Bazel as errors.\n\n## Automatic pause of S3 access\n\nRepeatedly attempting to access S3 while offline can be slow. So after\n`bazels3cache` has gotten back three consecutive connectivity errors from S3,\nit temporarily pauses all S3 access (for five minutes). During that time, only\nthe local in-memory cache will be used. This pause will be transparent to\nBazel.\n\n## Asynchronous uploading to S3\n\nWhen bazels3cache receives a `PUT` (an upload request) from Bazel, it needs to\nupload the content to S3, and send a success/failure response back to Bazel.\nThere are two ways it can handle the response to Bazel:\n\n*   If asynchronous uploading is enabled (the default), then bazels3cache\n    immediately sends a success response back to Bazel, even before it has\n    uploaded to S3. This allows the Bazel build to complete much more quickly.\n    Of course, the upload to S3 might fail; but it's okay if Bazel doesn't know\n    that.\n\n    In this case, `bazels3cache` might even drop some uploads if it falls too\n    far behind. Since the remote cache is just a cache, this is usually\n    acceptable.\n\n*   If asynchronous uploading is disabled (the `\"asyncUpload\"` section of\n    `config.default.json`, or `--asyncUpload.enabled=false` on the command\n    line), then the response code will not be sent back to Bazel until the\n    upload to S3 has completed.\n\n## Configuration\n\n`config.default.json` shows all configurable settings, including comments\ndescribing them, and their default values. 
You can override these defaults in a\ncouple of ways. The overrides are loaded in the order listed below -- for\nexample, if you have both a `~/.config/bazels3cache/config.json` file and\ncommand-line arguments, then the command-line arguments win.\n\n1.  A user-wide config file: `~/.config/bazels3cache/config.json`\n\n2.  A config file specified with `--config`:\n\n        bazels3cache --config=myconfig.json\n\n    Your config file only needs to include the values you want to override.\n\n3.  Command line arguments with the same names as the names from the config\n    file, but with dots for nested elements. For example, the config file\n    includes this:\n\n        {\n            \"cache\": {\n                \"maxEntrySizeBytes\": 1000000\n            }\n        }\n\n    To override this, use dots:\n\n        bazels3cache --cache.maxEntrySizeBytes=\u003cNUMBER\u003e\n","funding_links":[],"categories":["Tooling"],"sub_categories":["Remote caching and execution"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FAsana%2Fbazels3cache","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FAsana%2Fbazels3cache","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FAsana%2Fbazels3cache/lists"}