{"id":16971501,"url":"https://github.com/thomastjdev/nim_awss3","last_synced_at":"2025-07-25T17:39:58.627Z","repository":{"id":45875281,"uuid":"380415847","full_name":"ThomasTJdev/nim_awsS3","owner":"ThomasTJdev","description":"Amazon Simple Storage Service (AWS S3) basic API support","archived":false,"fork":false,"pushed_at":"2025-02-14T07:09:09.000Z","size":173,"stargazers_count":11,"open_issues_count":2,"forks_count":3,"subscribers_count":4,"default_branch":"main","last_synced_at":"2025-02-14T08:22:15.209Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Nim","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ThomasTJdev.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-06-26T04:50:09.000Z","updated_at":"2025-02-14T07:09:10.000Z","dependencies_parsed_at":"2024-11-21T15:43:30.383Z","dependency_job_id":null,"html_url":"https://github.com/ThomasTJdev/nim_awsS3","commit_stats":null,"previous_names":[],"tags_count":13,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ThomasTJdev%2Fnim_awsS3","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ThomasTJdev%2Fnim_awsS3/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ThomasTJdev%2Fnim_awsS3/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ThomasTJdev%2Fnim_awsS3/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ThomasTJdev","download_url":"https://codeload.github.com
/ThomasTJdev/nim_awsS3/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244860611,"owners_count":20522466,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-10-14T00:52:03.864Z","updated_at":"2025-03-21T20:16:19.824Z","avatar_url":"https://github.com/ThomasTJdev.png","language":"Nim","readme":"# awsS3\nAmazon Simple Storage Service (AWS S3) basic API support.\n\nIf you need more APIs, take a look at [atoz](https://github.com/disruptek/atoz).\n\n\n## Procedures\n\nEach core AWS command has two procedures: one is the raw request returning\nthe response; the other is a sugar returning a bool that is `true` for a 2xx response.\n\nThe raw request commands can be chained so the `client` can be reused,\ne.g. the `move to trash`, which consists of a `copyObject` and a `deleteObject`.\n\nAll requests are performed async.\n\n\nLimitations:\nSpaces in `keys` are not supported.\n\n\n## TODO:\n- all `bucketHost` should be `bucketName`, and when needed as a host, the\nregion (host) should be appended within here. 
In that way we would only\nneed to pass `bucketName` (short form) around.\n\n\n# Example\n\n```nim\nimport\n  std/asyncdispatch,\n  std/httpclient,\n  std/os\n\nimport\n  awsS3,\n  awsS3/utils_async,\n  awsSTS\n\nconst\n  bucketHost    = \"my-bucket.s3-eu-west-1.amazonaws.com\"\n  bucketName    = \"my-bucket\"\n  serverRegion  = \"eu-west-1\"\n  myAccessKey   = \"AKIAEXAMPLE\"\n  mySecretKey   = \"J98765RFGBNYT4567EXAMPLE\"\n  role          = \"arn:aws:iam::2345676543:role/Role-S3-yeah\"\n  s3File        = \"test/test.jpg\"\n  s3MoveTo      = \"test2/test.jpg\"\n  localTestFile = \"/home/username/download/myimage.jpg\"\n  downloadTo    = \"/home/username/git/nim_awsS3/test3.jpg\"\n\n\n## Get creds with awsSTS package\nlet creds = awsSTScreate(myAccessKey, mySecretKey, serverRegion, role)\n\n## 1) Create test file\nwriteFile(localTestFile, \"blabla\")\n\n## 2) Put object\necho waitFor s3PutObjectIs2xx(creds, bucketHost, s3File, localTestFile)\n\n## 3) Move object\nwaitFor s3MoveObject(creds, bucketHost, s3MoveTo, bucketHost, bucketName, s3File)\n\n## 4) Get content-length\nvar client = newAsyncHttpClient()\nlet m1 = waitFor s3HeadObject(client, creds, bucketHost, s3MoveTo)\necho m1.headers[\"content-length\"]\n\n## 5) Get object\necho waitFor s3GetObjectIs2xx(creds, bucketHost, s3MoveTo, downloadTo)\necho fileExists(downloadTo)\n\n## 6) Delete object\necho waitFor s3DeleteObjectIs2xx(creds, bucketHost, s3MoveTo)\n```\n\n\n\n# Procs\n\n## s3Creds*\n\n```nim\nproc s3Creds*(accessKey, secretKey, tokenKey, region: string): AwsCreds =\n```\n\nThis uses the Nimble package `awsSTS` to store the credentials.\n\n\n____\n\n## S3 Signed URL\n\nGenerate S3 presigned URLs.\n\n### API {.deprecated.}\n\n~~This is the standard public API.~~\n\nUse `s3SignedUrl` instead.\n\n```nim\nproc s3Presigned*(accessKey, secretKey, region: string, bucketHost, key: string,\n    httpMethod = HttpGet,\n    contentDisposition = CDTattachment, contentDispositionName = \"\",\n    setContentType 
= true, fileExt = \"\", expireInSec = \"65\", accessToken = \"\"\n  ): string {.deprecated.} =\n```\n\n```nim\nproc s3Presigned*(creds: AwsCreds, bucketHost, key: string,\n    contentDisposition = CDTattachment, contentDispositionName = \"\",\n    setContentType = true, fileExt = \"\", expireInSec = \"65\"\n  ): string {.deprecated.} =\n```\n\n### Raw\n\nThis exposes the internal API. It has been made public so users can skip `s3Presigned*`.\n\n```nim\nproc s3SignedUrl*(\n    credsAccessKey, credsSecretKey, credsRegion: string,\n    bucketHost, key: string,\n    httpMethod = HttpGet,\n    contentDisposition = CDTignore, contentDispositionName = \"\",\n    setContentType = true,\n    fileExt = \"\", customQuery = \"\", copyObject = \"\", expireInSec = \"65\",\n    accessToken = \"\"\n  ): string =\n\n  ## customQuery:\n  ##  This is a custom defined header query. The string needs to follow the format\n  ##  \"head1:value,head2:value\" - a comma-separated string with header and\n  ##  value divided by a colon.\n  ##\n  ## copyObject:\n  ##   Attach copyObject to headers\n```\n\n### Details\nGenerates an S3 presigned URL for sharing.\n\n```\ncontentDisposition =\u003e sets \"Content-Disposition\" type (inline/attachment)\ncontentDispositionName =\u003e sets \"Content-Disposition\" name\nsetContentType =\u003e sets \"response-content-type\"\nfileExt        =\u003e only if setContentType=true\n                  if `fileExt = \"\"` the mimetype is detected automatically\n                  needs to be \".jpg\" (dot before) like splitFile(f).ext\n```\n\n\n### Content-Disposition type\n\n```nim\ntype\n  contentDisposition* = enum\n    CDTinline     # Content-Disposition: inline\n    CDTattachment # Content-Disposition: attachment\n    CDTignore\n```\n\n\n____\n\n## parseReponse*\n\n```nim\nproc parseReponse*(response: AsyncResponse): (bool, HttpHeaders) =\n```\n\nHelper procedure returning true on success along with the response headers.\n\n\n____\n\n## 
isSuccess2xx*\n\n**[utils package - async \u0026 sync]**\n\n```nim\nproc isSuccess2xx*(response: AsyncResponse): (bool) =\n```\n\nHelper procedure for parsing the response of a raw call.\n\n\n____\n\n## s3DeleteObject\n\n```nim\nproc s3DeleteObject(client: AsyncHttpClient, creds: AwsCreds, bucketHost, key: string): Future[AsyncResponse] {.async.} =\n```\n\nAWS S3 API - DeleteObject\n\n\n____\n\n## s3DeleteObjectIs2xx*\n\n**[utils package - async \u0026 sync]**\n\n```nim\nproc s3DeleteObjectIs2xx*(creds: AwsCreds, bucketHost, key: string): Future[bool] {.async.} =\n```\n\nAWS S3 API - DeleteObject bool\n\n\n____\n\n## s3HeadObject*\n\n```nim\nproc s3HeadObject*(client: AsyncHttpClient, creds: AwsCreds, bucketHost, key: string): Future[AsyncResponse] {.async.} =\n```\n\nAWS S3 API - HeadObject\n\nResponse: `result.headers[\"content-length\"]`\n\n\n____\n\n## s3HeadObjectIs2xx*\n\n**[utils package - async \u0026 sync]**\n\n```nim\nproc s3HeadObjectIs2xx*(creds: AwsCreds, bucketHost, key: string): Future[bool] {.async.} =\n```\n\nAWS S3 API - HeadObject bool\n\nHeadObject is2xx only checks the existence of the file. 
If the data is needed, then use the raw `s3HeadObject` procedure and parse the response.\n\n\n____\n\n## s3GetObject*\n\n```nim\nproc s3GetObject*(client: AsyncHttpClient, creds: AwsCreds, bucketHost, key, downloadPath: string) {.async.} =\n```\n\nAWS S3 API - GetObject\n\n`downloadPath` needs to be the full local path.\n\n\n____\n\n## s3GetObjectIs2xx*\n\n**[utils package - async \u0026 sync]**\n\n```nim\nproc s3GetObjectIs2xx*(creds: AwsCreds, bucketHost, key, downloadPath: string): Future[bool] {.async.} =\n```\n\nAWS S3 API - GetObject bool\n\nGetObject is2xx returns true when the file has been downloaded.\n\n`downloadPath` needs to be the full local path.\n\n\n____\n\n## s3PutObject*\n\n```nim\nproc s3PutObject*(client: AsyncHttpClient, creds: AwsCreds, bucketHost, key, localPath: string): Future[AsyncResponse] {.async.} =\n```\n\nAWS S3 API - PutObject\n\nPutObject reads the file into memory and uploads it.\n\n\n____\n\n## s3PutObjectIs2xx*\n\n**[utils package - async \u0026 sync]**\n\n```nim\nproc s3PutObjectIs2xx*(creds: AwsCreds, bucketHost, key, localPath: string, deleteLocalFileAfter=true): Future[bool] {.async.} =\n```\n\nAWS S3 API - PutObject bool\n\nThis performs a PUT and uploads the file. The `localPath` param needs to be the full path.\n\nPutObject reads the file into memory and uploads it.\n\n____\n\n## s3CopyObject*\n\n```nim\nproc s3CopyObject*(client: AsyncHttpClient, creds: AwsCreds, bucketHost, key, copyObject: string): Future[AsyncResponse] {.async.} =\n```\n\nAWS S3 API - CopyObject\n\nThe copyObject param is the full path to the copy source, i.e. both the bucket and the key, e.g.:\n```\n- /bucket-name/folder1/folder2/s3C3FiLXRsPXeE9TUjZGEP3RYvczCFYg.jpg\n- /[BUCKET]/[KEY]\n```\n\n**TODO:**\nImplement an error checker. An error occurring during `copyObject` can still return a 200 response. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. 
This means that a 200 OK response can contain either a success or an error. (https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)\n\n\n____\n\n## s3CopyObjectIs2xx*\n\n**[utils package - async \u0026 sync]**\n\n```nim\nproc s3CopyObjectIs2xx*(client: AsyncHttpClient, creds: AwsCreds, bucketHost, key, copyObject: string): Future[bool] {.async.} =\n```\n\nAWS S3 API - CopyObject bool\n\n\n____\n\n## s3MoveObject*\n\n```nim\nproc s3MoveObject*(creds: AwsCreds, bucketToHost, keyTo, bucketFromHost, bucketFromName, keyFrom: string) {.async.} =\n```\n\nThis does a pseudo move of an object: we copy the object to the destination and then delete the object from the original location.\n\n```\nbucketToHost   =\u003e Destination bucket host\nkeyTo          =\u003e 12/files/file.jpg\nbucketFromHost =\u003e Origin bucket host\nbucketFromName =\u003e Origin bucket name\nkeyFrom        =\u003e 24/files/old.jpg\n```\n\n\n____\n\n## s3MoveObjects*\n\n**[utils package - async \u0026 sync]**\n\n```nim\nproc s3MoveObjects*(creds: AwsCreds, bucketHost, bucketFromHost, bucketFromName: string, keys: seq[string], waitValidate = 0, waitDelete = 0) {.async.} =\n```\n\nIn this plural version, multiple moves are performed. The keys are identical in \"from\" and \"to\", so origin and destination keys are the same.\n\nThe `waitValidate` and `waitDelete` params set the wait between validating that the file exists and the delete operation.\n\n\n____\n\n## s3TrashObject*\n\n**[utils package - async \u0026 sync]**\n\n```nim\nproc s3TrashObject*(creds: AwsCreds, bucketTrashHost, bucketFromHost, bucketFromName, keyFrom: string) {.async.} =\n```\n\nThis does a pseudo move of an object. We copy the object to the destination and then we delete the object from the original location. 
The destination in this particular situation is our trash bucket.\n\n____\n\n## s3TrashObjects*\n\n**[utils package - async \u0026 sync]**\n\n```nim\nproc s3TrashObjects*(creds: AwsCreds, bucketTrashHost, bucketFromHost, bucketFromName, keyFrom: seq[string], waitValidate = 0, waitDelete = 0) {.async.} =\n```\n\nThis does a pseudo move of the objects. We copy each object to the destination and then delete it from the original location. The destination in this particular situation is our trash bucket.\n\n\n\n\n\n______\n\n\n# S3 Multipart uploads\n\nTo use multipart, import it directly and compile with `-d:s3multipart`:\n\n```nim\nimport awsS3/multipart\n```\n\nThe upload part in `src/multipart/api/uploadPart.nim` contains a full example of\n- abortMultipartUpload\n- listMultipartUploads\n- listParts\n- completeMultipartUpload\n- createMultipartUpload\n\n## Quick test\n\nThe multipart files contain `when isMainModule` blocks, which can be used to test the upload\nprocedures.\n\nTo test the full upload procedure: Create a file called testFile.bin with\n10+ MB of data, copy `example.env` to `.env`, run `nimble install dotenv`\nand then run the following command:\n\n```\nnim c -d:dev -r src/multipart/api/uploadPart.nim\n```\n____\n\n## abortMultipartUpload\n\n```nim\nlet abortMultipartUploadRequest = AbortMultipartUploadRequest(\n  bucket: bucket,\n  key: upload.key,\n  uploadId: upload.uploadId.get()\n)\n\ntry:\n  var abortClient = newAsyncHttpClient()\n  let abortMultipartUploadResult = await abortClient.abortMultipartUpload(credentials=credentials, bucket=bucket, region=region, args=abortMultipartUploadRequest)\n  echo abortMultipartUploadResult.toJson().parseJson().pretty()\nexcept:\n  echo getCurrentExceptionMsg()\n```\n\n____\n\n## createMultipartUpload\n\n```nim\n# initiate the multipart upload\nlet createMultiPartUploadRequest = CreateMultipartUploadRequest(\n    bucket: bucket,\n    key: key,\n  )\n\nlet createMultiPartUploadResult = await 
client.createMultipartUpload(\n        credentials = credentials,\n        bucket = bucket,\n        region = region,\n        args = createMultiPartUploadRequest\n  )\n```\n\n____\n\n## completeMultipartUpload\n\n```nim\nlet args = CompleteMultipartUploadRequest(\n    bucket: bucket,\n    key: key,\n    uploadId: uploadId\n)\n\nlet res = await client.completeMultipartUpload(credentials=credentials, bucket=bucket, region=region, args=args)\necho res.toJson().parseJson().pretty()\n\n```\n\n____\n\n## uploadPart\n\n```nim\nlet uploadPartCommandRequest = UploadPartCommandRequest(\n    bucket: bucket,\n    key: key,\n    body: body,\n    partNumber: partNumber,\n    uploadId: createMultiPartUploadResult.uploadId\n)\nlet res = await client.uploadPart(\n  credentials = credentials,\n  bucket = bucket,\n  region = region,\n  args = uploadPartCommandRequest\n)\necho \"\\n\u003e uploadPart\"\necho res.toJson().parseJson().pretty()\n```\n____\n\n## listMultipartUploads\n\n```nim\nlet listMultipartUploadsRequest = ListMultipartUploadsRequest(\n    bucket: bucket,\n    prefix: some(\"test\")\n)\nlet listMultipartUploadsRes = await client.listMultipartUploads(credentials=credentials, bucket=bucket, region=region, args=listMultipartUploadsRequest)\n\n```\n\n____\n\n## listParts\n\n```nim\nlet args = ListPartsRequest(\n    bucket: bucket,\n    key: some(key),\n    uploadId: some(uploadId)\n)\nlet result = await client.listParts(credentials=credentials, bucket=bucket, region=region, args=args)\n# echo result\necho result.toJson().parseJson().pretty()\n```\n\n____\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fthomastjdev%2Fnim_awss3","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fthomastjdev%2Fnim_awss3","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fthomastjdev%2Fnim_awss3/lists"}