{"id":41619838,"url":"https://github.com/th3hero/express-storage","last_synced_at":"2026-02-28T08:54:24.667Z","repository":{"id":307398736,"uuid":"1029160867","full_name":"th3hero/express-storage","owner":"th3hero","description":"express-storage is an easy-to-use Express middleware for handling file uploads across multiple storage providers like local, S3, GCS, and OCI buckets. It supports presigned and normal uploads with simple configuration through config files or environment variables.","archived":false,"fork":false,"pushed_at":"2026-02-04T20:39:44.000Z","size":677,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-02-05T08:48:25.025Z","etag":null,"topics":["aws-s3","azure-blob-storage","cloud-storage","express","expressjs","file-upload","google-cloud-storage","middleware","multer","multi-cloud","nodejs","presigned-url","storage-abstraction","storage-s3","typescript"],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/th3hero.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-07-30T15:59:58.000Z","updated_at":"2026-02-04T20:40:00.000Z","dependencies_parsed_at":null,"dependency_job_id":"2f70d2c8-8f63-4f6a-ab73-e872b1190198","html_url":"https://github.com/th3hero/express-storage","commit_stats":null,"previous_names":["th3hero/express-storage"],"tags_count":
9,"template":false,"template_full_name":null,"purl":"pkg:github/th3hero/express-storage","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/th3hero%2Fexpress-storage","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/th3hero%2Fexpress-storage/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/th3hero%2Fexpress-storage/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/th3hero%2Fexpress-storage/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/th3hero","download_url":"https://codeload.github.com/th3hero/express-storage/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/th3hero%2Fexpress-storage/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29927176,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-27T19:37:42.220Z","status":"online","status_checked_at":"2026-02-28T02:00:07.010Z","response_time":90,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["aws-s3","azure-blob-storage","cloud-storage","express","expressjs","file-upload","google-cloud-storage","middleware","multer","multi-cloud","nodejs","presigned-url","storage-abstraction","storage-s3","typescript"],"created_at":"2026-01-24T13:30:24.623Z","updated_at":"2026-02-28T08:54:24.645Z","avatar_url":"https://github.com/th3hero.png","language":"Type
Script","readme":"# Express Storage\n\n**Express.js file upload middleware for AWS S3, Google Cloud Storage, Azure Blob Storage, and local disk — one unified API, zero vendor lock-in.**\n\nExpress Storage is a TypeScript-first file upload library for Node.js and Express. Upload files to AWS S3, Google Cloud Storage (GCS), Azure Blob Storage, or local disk using a single API. Switch cloud providers by changing one environment variable — no code changes needed. Built-in presigned URL support, file validation, streaming uploads, and security protection make it a production-ready alternative to multer-s3 that works with every major cloud provider.\n\n[![npm version](https://img.shields.io/npm/v/express-storage.svg)](https://www.npmjs.com/package/express-storage)\n[![npm downloads](https://img.shields.io/npm/dm/express-storage.svg)](https://www.npmjs.com/package/express-storage)\n[![npm bundle size](https://img.shields.io/bundlephobia/minzip/express-storage)](https://bundlephobia.com/package/express-storage)\n[![TypeScript](https://img.shields.io/badge/TypeScript-Ready-blue.svg)](https://www.typescriptlang.org/)\n[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)\n[![Node.js Version](https://img.shields.io/node/v/express-storage)](https://nodejs.org)\n[![GitHub stars](https://img.shields.io/github/stars/th3hero/express-storage?style=social)](https://github.com/th3hero/express-storage)\n\n---\n\n## Table of Contents\n\n- [Features](#features)\n- [Quick Start](#quick-start)\n- [Supported Storage Providers](#supported-storage-providers)\n- [Error Codes](#error-codes)\n- [Security Features](#security-features)\n- [Presigned URLs: Client-Side Uploads](#presigned-urls-client-side-uploads)\n- [Large File Uploads](#large-file-uploads)\n- [API Reference](#api-reference)\n- [Environment Variables](#environment-variables)\n- [Lifecycle Hooks](#lifecycle-hooks)\n- [Type-Safe Results](#type-safe-results)\n- [Configurable 
Concurrency](#configurable-concurrency)\n- [Lifecycle Management](#lifecycle-management)\n- [Custom Rate Limiting](#custom-rate-limiting)\n- [Utilities](#utilities)\n- [Real-World Examples](#real-world-examples)\n- [Migrating Between Providers](#migrating-between-providers)\n- [Migrating from v2 to v3](#migrating-from-v2-to-v3)\n- [Why Express Storage over Alternatives?](#why-express-storage-over-alternatives)\n- [TypeScript Support](#typescript-support)\n- [Contributing](#contributing)\n\n---\n\n## Features\n\n- **One API, Four Providers** — Write upload code once. Deploy to AWS S3, GCS, Azure, or local disk.\n- **Presigned URLs** — Client-side uploads that bypass your server, with per-provider constraint enforcement.\n- **File Validation** — Size limits, MIME type checks, and extension filtering before storage.\n- **Security Built In** — Path traversal prevention, filename sanitization, null byte protection.\n- **TypeScript Native** — Full type safety with discriminated unions. No `any` types.\n- **Streaming Uploads** — Automatic multipart/streaming for files over 100MB.\n- **Zero Config Switching** — Change `FILE_DRIVER=local` to `FILE_DRIVER=s3` and you're done.\n- **Lifecycle Hooks** — Tap into upload/delete events for logging, virus scanning, or audit trails.\n- **Batch Operations** — Upload or delete multiple files in parallel with concurrency control and `AbortSignal` support.\n- **Custom Rate Limiting** — Built-in in-memory limiter or plug in your own (Redis, Memcached, etc.).\n- **Lightweight** — Install only the cloud SDK you need. 
No dependency bloat.\n\n---\n\n## Quick Start\n\n### Installation\n\n```bash\nnpm install express-storage\n```\n\nThen install only the cloud SDK you need:\n\n```bash\n# For AWS S3\nnpm install @aws-sdk/client-s3 @aws-sdk/lib-storage @aws-sdk/s3-request-presigner\n\n# For Google Cloud Storage\nnpm install @google-cloud/storage\n\n# For Azure Blob Storage\nnpm install @azure/storage-blob @azure/identity\n```\n\nLocal storage works out of the box with no additional dependencies.\n\n### Basic Setup\n\n```typescript\nimport express from \"express\";\nimport multer from \"multer\";\nimport { StorageManager } from \"express-storage\";\n\nconst app = express();\nconst upload = multer();\nconst storage = new StorageManager();\n\napp.post(\"/upload\", upload.single(\"file\"), async (req, res) =\u003e {\n    const result = await storage.uploadFile(req.file, {\n        maxSize: 10 * 1024 * 1024, // 10MB limit\n        allowedMimeTypes: [\"image/jpeg\", \"image/png\", \"application/pdf\"],\n    });\n\n    if (result.success) {\n        res.json({ reference: result.reference, url: result.fileUrl });\n    } else {\n        res.status(400).json({ error: result.error });\n    }\n});\n```\n\n### Environment Configuration\n\nCreate a `.env` file, keeping only the block for the provider you use — `FILE_DRIVER` should be set exactly once (the blocks below are alternatives, not one combined file):\n\n```env\n# Choose your storage provider\nFILE_DRIVER=local\n\n# For local storage\nLOCAL_PATH=uploads\n\n# For AWS S3\nFILE_DRIVER=s3\nBUCKET_NAME=my-bucket\nAWS_REGION=us-east-1\nAWS_ACCESS_KEY=your-key\nAWS_SECRET_KEY=your-secret\n\n# For Google Cloud Storage\nFILE_DRIVER=gcs\nBUCKET_NAME=my-bucket\nGCS_PROJECT_ID=my-project\n\n# For Azure Blob Storage\nFILE_DRIVER=azure\nBUCKET_NAME=my-container\nAZURE_CONNECTION_STRING=your-connection-string\n```\n\nThat's it. 
Your upload code stays the same regardless of which provider you choose.\n\n---\n\n## Supported Storage Providers\n\n| Provider         | Direct Upload | Presigned URLs    | Best For                  |\n| ---------------- | ------------- | ----------------- | ------------------------- |\n| **Local Disk**   | `local`       | —                 | Development, small apps   |\n| **AWS S3**       | `s3`          | `s3-presigned`    | Most production apps      |\n| **Google Cloud** | `gcs`         | `gcs-presigned`   | GCP-hosted applications   |\n| **Azure Blob**   | `azure`       | `azure-presigned` | Azure-hosted applications |\n\n---\n\n## Error Codes\n\nEvery error result includes a `code` field for programmatic error handling — no more parsing error strings:\n\n```typescript\nconst result = await storage.uploadFile(file, {\n    maxSize: 5 * 1024 * 1024,\n    allowedMimeTypes: [\"image/jpeg\", \"image/png\"],\n});\n\nif (!result.success) {\n    switch (result.code) {\n        case \"FILE_TOO_LARGE\":\n            res.status(413).json({ error: \"File is too large\" });\n            break;\n        case \"INVALID_MIME_TYPE\":\n            res.status(415).json({ error: \"Unsupported file type\" });\n            break;\n        case \"RATE_LIMITED\":\n            res.status(429).json({ error: \"Too many requests\" });\n            break;\n        default:\n            res.status(400).json({ error: result.error });\n    }\n}\n```\n\n| Code                       | When                                                           |\n| -------------------------- | -------------------------------------------------------------- |\n| `NO_FILE`                  | No file provided to upload                                     |\n| `FILE_EMPTY`               | File has zero bytes                                            |\n| `FILE_TOO_LARGE`           | File exceeds `maxSize` or `maxFileSize`                        |\n| `INVALID_MIME_TYPE`        | MIME type not in 
`allowedMimeTypes`                            |\n| `INVALID_EXTENSION`        | Extension not in `allowedExtensions`                           |\n| `INVALID_FILENAME`         | Filename is empty, too long, or contains illegal characters    |\n| `INVALID_INPUT`            | Bad argument (e.g., non-numeric fileSize, missing fileName)    |\n| `PATH_TRAVERSAL`           | Path contains `..`, `\\0`, or other traversal sequences         |\n| `FILE_NOT_FOUND`           | File doesn't exist (delete, validate, view)                    |\n| `VALIDATION_FAILED`        | Post-upload validation failed (content type or size mismatch)  |\n| `RATE_LIMITED`              | Presigned URL rate limit exceeded                              |\n| `HOOK_ABORTED`             | A `beforeUpload` or `beforeDelete` hook threw                  |\n| `PRESIGNED_NOT_SUPPORTED`  | Local driver doesn't support presigned URLs                    |\n| `PROVIDER_ERROR`           | Cloud provider SDK error (network, auth, permissions)          |\n\n---\n\n## Security Features\n\nFile uploads are one of the most exploited attack vectors in web applications. Express Storage protects you by default.\n\n### Path Traversal Prevention\n\nAttackers try filenames like `../../../etc/passwd` to escape your upload directory. We block this:\n\n```typescript\n// These malicious filenames are automatically rejected\n\"../secret.txt\"; // Blocked: path traversal\n\"..\\\\config.json\"; // Blocked: Windows path traversal\n\"file\\0.txt\"; // Blocked: null byte injection\n```\n\n### Automatic Filename Sanitization\n\nUser-provided filenames can't be trusted. We transform them into safe, unique identifiers:\n\n```\nUser uploads: \"My Photo (1).jpg\"\nStored as:    \"1706123456789_a1b2c3d4e5_my_photo_1_.jpg\"\n```\n\nThe format `{timestamp}_{random}_{sanitized_name}` prevents collisions and removes dangerous characters.\n\n### File Validation\n\nValidate before processing. 
Reject before storing.\n\n```typescript\nawait storage.uploadFile(file, {\n    maxSize: 5 * 1024 * 1024, // 5MB limit\n    allowedMimeTypes: [\"image/jpeg\", \"image/png\"],\n    allowedExtensions: [\".jpg\", \".png\"],\n});\n```\n\n### Presigned URL Security\n\nFor S3 and GCS, file constraints are enforced at the URL level — clients physically cannot upload the wrong file type or size. For Azure (which doesn't support URL-level constraints), we validate after upload and automatically delete invalid files.\n\n---\n\n## Presigned URLs: Client-Side Uploads\n\nLarge files shouldn't flow through your server. Presigned URLs let clients upload directly to cloud storage.\n\n### The Flow\n\n```\n1. Client → Your Server: \"I want to upload photo.jpg (2MB, image/jpeg)\"\n2. Your Server → Client: \"Here's a presigned URL, valid for 10 minutes\"\n3. Client → Cloud Storage: Uploads directly (your server never touches the bytes)\n4. Client → Your Server: \"Upload complete, please verify\"\n5. Your Server: Confirms file exists, returns permanent URL\n```\n\n### Implementation\n\n```typescript\n// Step 1: Generate upload URL\napp.post(\"/upload/init\", async (req, res) =\u003e {\n    const { fileName, contentType, fileSize } = req.body;\n\n    const result = await storage.generateUploadUrl(\n        fileName,\n        contentType,\n        fileSize,\n        \"user-uploads\", // Optional folder\n    );\n\n    res.json({\n        uploadUrl: result.uploadUrl,\n        reference: result.reference, // Save this for later\n    });\n});\n\n// Step 2: Confirm upload\napp.post(\"/upload/confirm\", async (req, res) =\u003e {\n    const { reference, expectedContentType, expectedFileSize } = req.body;\n\n    const result = await storage.validateAndConfirmUpload(reference, {\n        expectedContentType,\n        expectedFileSize,\n    });\n\n    if (result.success) {\n        res.json({ viewUrl: result.viewUrl });\n    } else {\n        res.status(400).json({ error: result.error });\n    
}\n});\n```\n\n### Provider-Specific Behavior\n\n| Provider | Content-Type Enforced | File Size Enforced | Post-Upload Validation |\n| -------- | --------------------- | ------------------ | ---------------------- |\n| S3       | At URL level          | At URL level       | Optional               |\n| GCS      | At URL level          | At URL level       | Optional               |\n| Azure    | **Not enforced**      | **Not enforced**   | **Required**           |\n\nFor Azure, always call `validateAndConfirmUpload()` with expected values. Invalid files are automatically deleted.\n\n---\n\n## Large File Uploads\n\nFor files larger than 100MB, we recommend using **presigned URLs** instead of direct server uploads. Here's why:\n\n### Memory Efficiency\n\nWhen you upload through your server, the entire file must be buffered in memory (or stored temporarily on disk). For a 500MB video file, that's 500MB of RAM per concurrent upload. With presigned URLs, the file goes directly to cloud storage — your server only handles small JSON requests.\n\n### Automatic Streaming\n\nFor files that must go through your server, Express Storage automatically uses streaming uploads for files larger than 100MB:\n\n- **S3**: Uses multipart upload with 10MB chunks\n- **GCS**: Uses resumable uploads with streaming\n- **Azure**: Uses block upload with streaming\n\nThis happens transparently — you don't need to change your code.\n\n### Recommended Approach for Large Files\n\n```typescript\n// Frontend: Request presigned URL\nconst { uploadUrl, reference } = await fetch(\"/api/upload/init\", {\n    method: \"POST\",\n    body: JSON.stringify({\n        fileName: \"large-video.mp4\",\n        contentType: \"video/mp4\",\n        fileSize: 524288000, // 500MB\n    }),\n}).then((r) =\u003e r.json());\n\n// Frontend: Upload directly to cloud (bypasses your server!)\nawait fetch(uploadUrl, {\n    method: \"PUT\",\n    body: file,\n    headers: { \"Content-Type\": \"video/mp4\" },\n});\n\n// 
Frontend: Confirm upload\nawait fetch(\"/api/upload/confirm\", {\n    method: \"POST\",\n    body: JSON.stringify({ reference }),\n});\n```\n\n### Size Limits\n\n| Scenario                       | Recommended Limit | Reason                         |\n| ------------------------------ | ----------------- | ------------------------------ |\n| Direct upload (memory storage) | \u003c 100MB           | Node.js memory constraints     |\n| Direct upload (disk storage)   | \u003c 500MB           | Temp file management           |\n| Presigned URL upload           | 5GB+              | Limited only by cloud provider |\n\n---\n\n## API Reference\n\n### StorageManager\n\nThe main class you'll interact with.\n\n```typescript\nimport { StorageManager } from \"express-storage\";\n\n// Use environment variables\nconst storage = new StorageManager();\n\n// Or configure programmatically\nconst storage = new StorageManager({\n    driver: \"s3\",\n    credentials: {\n        bucketName: \"my-bucket\",\n        awsRegion: \"us-east-1\",\n        maxFileSize: 50 * 1024 * 1024, // 50MB\n    },\n    logger: console, // Optional: enable debug logging\n});\n```\n\n### File Upload Methods\n\n```typescript\n// Single file\nconst result = await storage.uploadFile(file, validation?, options?);\n\n// Multiple files (processed in parallel with concurrency limits)\nconst results = await storage.uploadFiles(files, validation?, options?);\n```\n\n### Presigned URL Methods\n\n```typescript\n// Generate upload URL with constraints\nconst result = await storage.generateUploadUrl(fileName, contentType?, fileSize?, folder?);\n\n// Generate view URL for existing file\nconst result = await storage.generateViewUrl(reference);\n\n// Validate upload (required for Azure, recommended for all)\nconst result = await storage.validateAndConfirmUpload(reference, options?);\n\n// Batch operations\nconst results = await storage.generateUploadUrls(files, folder?);\nconst results = await 
storage.generateViewUrls(references);\n```\n\n### File Management\n\n```typescript\n// Delete single file (returns DeleteResult with error details on failure)\nconst result = await storage.deleteFile(reference);\nif (!result.success) console.log(result.error, result.code);\n\n// Delete multiple files\nconst results = await storage.deleteFiles(references);\n\n// Get file metadata without downloading\nconst info = await storage.getMetadata(reference);\nif (info) console.log(info.name, info.size, info.contentType, info.lastModified);\n\n// List files with pagination\nconst result = await storage.listFiles(prefix?, maxResults?, continuationToken?);\n```\n\n### Upload Options\n\n```typescript\ninterface UploadOptions {\n    contentType?: string; // Override detected type\n    metadata?: Record\u003cstring, string\u003e; // Custom metadata\n    cacheControl?: string; // e.g., 'max-age=31536000'\n    contentDisposition?: string; // e.g., 'attachment; filename=\"doc.pdf\"'\n}\n\n// Example: Upload with caching headers\nawait storage.uploadFile(file, undefined, {\n    cacheControl: \"public, max-age=31536000\",\n    metadata: { uploadedBy: \"user-123\" },\n});\n```\n\n### Validation Options\n\n```typescript\ninterface FileValidationOptions {\n    maxSize?: number; // Maximum file size in bytes\n    allowedMimeTypes?: string[]; // e.g., ['image/jpeg', 'image/png']\n    allowedExtensions?: string[]; // e.g., ['.jpg', '.png']\n}\n```\n\n---\n\n## Environment Variables\n\n### Core Settings\n\n| Variable               | Description                         | Default                  |\n| ---------------------- | ----------------------------------- | ------------------------ |\n| `FILE_DRIVER`          | Storage driver to use               | `local`                  |\n| `BUCKET_NAME`          | Cloud storage bucket/container name | —                        |\n| `BUCKET_PATH`          | Default folder path within bucket   | `\"\"` (root)              |\n| `LOCAL_PATH`           | 
Directory for local storage         | `public/express-storage` |\n| `PRESIGNED_URL_EXPIRY` | URL validity in seconds             | `600` (10 min)           |\n| `MAX_FILE_SIZE`        | Maximum upload size in bytes        | `5368709120` (5GB)       |\n\n### AWS S3\n\n| Variable         | Description                                     |\n| ---------------- | ----------------------------------------------- |\n| `AWS_REGION`     | AWS region (e.g., `us-east-1`)                  |\n| `AWS_ACCESS_KEY` | Access key ID (optional if using IAM roles)     |\n| `AWS_SECRET_KEY` | Secret access key (optional if using IAM roles) |\n\n### Google Cloud Storage\n\n| Variable          | Description                                      |\n| ----------------- | ------------------------------------------------ |\n| `GCS_PROJECT_ID`  | Google Cloud project ID                          |\n| `GCS_CREDENTIALS` | Path to service account JSON (optional with ADC) |\n\n### Azure Blob Storage\n\n| Variable                  | Description                                       |\n| ------------------------- | ------------------------------------------------- |\n| `AZURE_CONNECTION_STRING` | Full connection string (recommended)              |\n| `AZURE_ACCOUNT_NAME`      | Storage account name (alternative)                |\n| `AZURE_ACCOUNT_KEY`       | Storage account key (alternative)                 |\n\n**Note**: Azure uses `BUCKET_NAME` for the container name (same as S3/GCS).\n\n---\n\n## Lifecycle Hooks\n\nHooks let you tap into the upload/delete lifecycle without modifying drivers. 
Perfect for logging, virus scanning, metrics, or audit trails.\n\n```typescript\nconst storage = new StorageManager({\n    driver: \"s3\",\n    hooks: {\n        beforeUpload: async (file) =\u003e {\n            await virusScan(file.buffer); // Throw to abort upload\n        },\n        afterUpload: (result, file) =\u003e {\n            auditLog(\"file_uploaded\", { result, originalName: file.originalname });\n        },\n        beforeDelete: async (reference) =\u003e {\n            await checkPermissions(reference);\n        },\n        afterDelete: (reference, success) =\u003e {\n            if (success) auditLog(\"file_deleted\", { reference });\n        },\n        onError: (error, context) =\u003e {\n            metrics.increment(\"storage.error\", { operation: context.operation });\n        },\n    },\n});\n```\n\nAll hooks are optional and async-safe. `beforeUpload` and `beforeDelete` can throw to abort the operation — the error message is included in the result.\n\n---\n\n## Type-Safe Results\n\nAll result types use TypeScript discriminated unions. Check `result.success` and TypeScript narrows the type automatically:\n\n```typescript\nconst result = await storage.uploadFile(file);\n\nif (result.success) {\n    console.log(result.reference); // stored file path (for delete/view/getMetadata)\n    console.log(result.fileUrl);   // URL to access the file\n} else {\n    console.log(result.error); // TypeScript knows this exists\n}\n```\n\nThis applies to all result types: `FileUploadResult`, `DeleteResult`, `PresignedUrlResult`, `BlobValidationResult`, and `ListFilesResult`.\n\n---\n\n## Configurable Concurrency\n\nControl how many parallel operations run in batch methods:\n\n```typescript\nconst storage = new StorageManager({\n    driver: \"s3\",\n    concurrency: 5, // Applies to uploadFiles, deleteFiles, generateUploadUrls, etc.\n});\n```\n\nDefault is 10. 
Lower it for rate-limited APIs or resource-constrained environments.\n\n### Cancellable Batch Operations\n\nAll batch methods accept an `AbortSignal` for cancelling long-running operations mid-flight:\n\n```typescript\nconst controller = new AbortController();\n\n// Cancel after 5 seconds\nsetTimeout(() =\u003e controller.abort(), 5000);\n\ntry {\n    const results = await storage.uploadFiles(files, validation, options, {\n        signal: controller.signal,\n    });\n} catch (error) {\n    console.log(\"Upload batch was cancelled\");\n}\n\n// Also works with deleteFiles, generateUploadUrls, generateViewUrls\nawait storage.deleteFiles(references, { signal: controller.signal });\n```\n\n---\n\n## Lifecycle Management\n\nClean up resources when you're done with a StorageManager instance:\n\n```typescript\nconst storage = new StorageManager({ driver: \"s3\", rateLimiter: { maxRequests: 100 } });\n\n// ... use storage ...\n\n// Release resources (clears factory cache entry and rate limiter)\nstorage.destroy();\n```\n\nThis is especially useful in tests, serverless functions, or any environment where StorageManager instances are created and discarded frequently.\n\n---\n\n## Custom Rate Limiting\n\nThe built-in rate limiter works for single-process apps. For clustered deployments, provide your own adapter:\n\n```typescript\nimport { StorageManager, RateLimiterAdapter } from \"express-storage\";\n// or: import { RateLimiterAdapter } from \"express-storage\"; // types are always at top level\n\n// Built-in in-memory limiter\nconst storage = new StorageManager({\n    driver: \"s3\",\n    rateLimiter: { maxRequests: 100, windowMs: 60000 },\n});\n\n// Custom Redis-backed limiter\nclass RedisRateLimiter implements RateLimiterAdapter {\n    async tryAcquire() {\n        /* Redis INCR + EXPIRE */\n    }\n    async getRemainingRequests() {\n        /* ... */\n    }\n    async getResetTime() {\n        /* ... 
*/\n    }\n}\n\nconst storage = new StorageManager({\n    driver: \"s3\",\n    rateLimiter: new RedisRateLimiter(redisClient),\n});\n```\n\n---\n\n## Utilities\n\nExpress Storage includes battle-tested utilities you can use directly.\n\n### Retry with Exponential Backoff\n\n```typescript\nimport { withRetry } from \"express-storage/utils\";\n\nconst result = await withRetry(() =\u003e storage.uploadFile(file), {\n    maxAttempts: 3,\n    baseDelay: 1000,\n    maxDelay: 10000,\n    exponentialBackoff: true,\n});\n```\n\n### File Type Helpers\n\n```typescript\nimport {\n    isImageFile,\n    isDocumentFile,\n    getFileExtension,\n    formatFileSize,\n} from \"express-storage/utils\";\n\nisImageFile(\"image/jpeg\"); // true\nisDocumentFile(\"application/pdf\"); // true\ngetFileExtension(\"photo.jpg\"); // '.jpg'\nformatFileSize(1048576); // '1 MB'\n```\n\n### Custom Logging\n\n```typescript\nimport { StorageManager, type Logger } from \"express-storage\";\n\nconst logger: Logger = {\n    debug: (msg, ...args) =\u003e console.debug(`[Storage] ${msg}`, ...args),\n    info: (msg, ...args) =\u003e console.info(`[Storage] ${msg}`, ...args),\n    warn: (msg, ...args) =\u003e console.warn(`[Storage] ${msg}`, ...args),\n    error: (msg, ...args) =\u003e console.error(`[Storage] ${msg}`, ...args),\n};\n\nconst storage = new StorageManager({ driver: \"s3\", logger });\n```\n\n---\n\n## Real-World Examples\n\n### Profile Picture Upload\n\n```typescript\napp.post(\"/users/:id/avatar\", upload.single(\"avatar\"), async (req, res) =\u003e {\n    const result = await storage.uploadFile(\n        req.file,\n        {\n            maxSize: 2 * 1024 * 1024, // 2MB\n            allowedMimeTypes: [\"image/jpeg\", \"image/png\", \"image/webp\"],\n        },\n        {\n            cacheControl: \"public, max-age=86400\",\n            metadata: { userId: req.params.id },\n        },\n    );\n\n    if (result.success) {\n        await db.users.update(req.params.id, { reference: 
result.reference, avatarUrl: result.fileUrl });\n        res.json({ avatarUrl: result.fileUrl });\n    } else {\n        res.status(400).json({ error: result.error });\n    }\n});\n```\n\n### Document Upload with Presigned URLs\n\n```typescript\n// Frontend requests upload URL\napp.post(\"/documents/request-upload\", async (req, res) =\u003e {\n    const { fileName, fileSize } = req.body;\n\n    const result = await storage.generateUploadUrl(\n        fileName,\n        \"application/pdf\",\n        fileSize,\n        `documents/${req.user.id}`,\n    );\n\n    // Store pending upload in database\n    await db.documents.create({\n        reference: result.reference,\n        userId: req.user.id,\n        status: \"pending\",\n    });\n\n    res.json({\n        uploadUrl: result.uploadUrl,\n        reference: result.reference,\n    });\n});\n\n// Frontend confirms upload complete\napp.post(\"/documents/confirm-upload\", async (req, res) =\u003e {\n    const { reference } = req.body;\n\n    const result = await storage.validateAndConfirmUpload(reference, {\n        expectedContentType: \"application/pdf\",\n    });\n\n    if (result.success) {\n        await db.documents.update(\n            { reference },\n            {\n                status: \"uploaded\",\n                size: result.actualFileSize,\n            },\n        );\n        res.json({ success: true, viewUrl: result.viewUrl });\n    } else {\n        await db.documents.delete({ reference });\n        res.status(400).json({ error: result.error });\n    }\n});\n```\n\n### Bulk File Upload\n\n```typescript\napp.post(\"/gallery/upload\", upload.array(\"photos\", 20), async (req, res) =\u003e {\n    const files = req.files as Express.Multer.File[];\n\n    const results = await storage.uploadFiles(files, {\n        maxSize: 10 * 1024 * 1024,\n        allowedMimeTypes: [\"image/jpeg\", \"image/png\"],\n    });\n\n    const successful = results.filter((r) =\u003e r.success);\n    const failed = 
results.filter((r) =\u003e !r.success);\n\n    res.json({\n        uploaded: successful.length,\n        failed: failed.length,\n        files: successful.map((r) =\u003e ({\n            reference: r.reference,\n            url: r.fileUrl,\n        })),\n        errors: failed.map((r) =\u003e r.error),\n    });\n});\n```\n\n---\n\n## Migrating Between Providers\n\nMoving from local development to cloud production? Or switching cloud providers? Here's how.\n\n### Local to S3\n\n```env\n# Before (development)\nFILE_DRIVER=local\nLOCAL_PATH=uploads\n\n# After (production)\nFILE_DRIVER=s3\nBUCKET_NAME=my-app-uploads\nAWS_REGION=us-east-1\n```\n\nYour code stays exactly the same. Files uploaded before migration remain in their original location — you'll need to migrate existing files separately if needed.\n\n### S3 to Azure\n\n```env\n# Before\nFILE_DRIVER=s3\nBUCKET_NAME=my-bucket\nAWS_REGION=us-east-1\n\n# After\nFILE_DRIVER=azure\nBUCKET_NAME=my-container\nAZURE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=...\n```\n\n**Important**: If using presigned URLs, remember that Azure requires post-upload validation. Add `validateAndConfirmUpload()` calls to your confirmation endpoints.\n\n---\n\n## Migrating from v2 to v3\n\nv3 has breaking changes in dependencies, types, and configuration. Most apps require minimal code changes.\n\n### What Changed\n\n1. **Cloud SDKs are optional peer dependencies.** Install only what you need — no more downloading all SDKs.\n2. **Result types are discriminated unions.** `result.fileName` is guaranteed when `result.success === true`. Code that accessed properties without checking `success` may need updates.\n3. **Presigned driver subclasses removed.** `S3PresignedStorageDriver`, `GCSPresignedStorageDriver`, and `AzurePresignedStorageDriver` are no longer exported. Use the base driver classes or `StorageManager` (the `'s3-presigned'` driver string still works).\n4. 
**`rateLimit` option renamed to `rateLimiter`.** Now accepts either options or a custom adapter.\n5. **`getRateLimitStatus()` is async.** Returns a Promise.\n6. **`deleteFile()` returns `DeleteResult`** instead of `boolean`. Check `result.success` instead of the boolean value.\n7. **`IStorageDriver.delete()` returns `DeleteResult`** instead of `boolean`. Custom drivers must be updated.\n8. **`ensureDirectoryExists()` is async.** Returns a `Promise\u003cvoid\u003e` — add `await` to existing calls.\n9. **Presigned URL methods return stricter types.** `generateUploadUrl()` returns `PresignedUploadUrlResult` (guarantees `uploadUrl`, `fileName`, `reference`, `expiresIn` on success). `generateViewUrl()` returns `PresignedViewUrlResult` (guarantees `viewUrl`, `reference`, `expiresIn` on success).\n\n### Migration Steps\n\n1. Update the package:\n\n```bash\nnpm install express-storage@3\n```\n\n2. Install the SDK for your provider:\n\n```bash\n# If you use S3\nnpm install @aws-sdk/client-s3 @aws-sdk/lib-storage @aws-sdk/s3-request-presigner\n\n# If you use GCS\nnpm install @google-cloud/storage\n\n# If you use Azure\nnpm install @azure/storage-blob @azure/identity\n```\n\n3. Update result type access — `fileName` is now `reference`:\n\n```typescript\n// Before (v2)\nconst name = result.fileName!;\n\n// After (v3) — \"reference\" is the stored file path used for all subsequent operations\nif (result.success) {\n    const ref = result.reference;  // pass to deleteFile(), getMetadata(), generateViewUrl()\n    const url = result.fileUrl;    // URL to access the file\n}\n```\n\n4. 
Update rate limiting config (if used):\n\n```typescript\n// Before (v2)\nnew StorageManager({ driver: \"s3\", rateLimit: { maxRequests: 100 } });\n\n// After (v3)\nnew StorageManager({ driver: \"s3\", rateLimiter: { maxRequests: 100 } });\n```\n\nIf you forget to install a required SDK, you'll get a clear error message telling you exactly what to install.\n\n---\n\n## Why Express Storage over Alternatives?\n\nIf you're evaluating file upload libraries for Express.js, here's how Express Storage compares:\n\n| Feature                     | **Express Storage** | **multer-s3** | **express-fileupload** | **uploadfs** |\n| --------------------------- | ------------------- | ------------- | ---------------------- | ------------ |\n| AWS S3                      | Yes                 | Yes           | Manual                 | Yes          |\n| Google Cloud Storage        | Yes                 | No            | No                     | Yes          |\n| Azure Blob Storage          | Yes                 | No            | No                     | Yes          |\n| Local disk                  | Yes                 | No            | Yes                    | Yes          |\n| Presigned URLs              | Yes                 | No            | No                     | No           |\n| File validation             | Yes                 | No            | Partial                | No           |\n| TypeScript (native)         | Yes                 | No            | @types                 | No           |\n| Streaming uploads           | Yes                 | Yes           | No                     | No           |\n| Switch providers at runtime | Yes (env var)       | No            | No                     | No           |\n| Path traversal protection   | Yes                 | No            | No                     | No           |\n| Lifecycle hooks             | Yes                 | No            | No                     | No           |\n| Batch operations            | Yes          
       | No            | No                     | No           |\n| Rate limiting               | Yes                 | No            | No                     | No           |\n\n**multer-s3** is great if you only need S3. Express Storage covers S3 *plus* GCS, Azure, and local disk with the same code — and adds presigned URLs, file validation, path traversal protection, and rate limiting, none of which multer-s3 provides.\n\n---\n\n## TypeScript Support\n\nExpress Storage is written in TypeScript and exports all types:\n\n```typescript\n// Core — what most users need\nimport {\n    StorageManager,\n    InMemoryRateLimiter,\n    FileUploadResult,\n    DeleteResult,\n    PresignedUploadUrlResult,\n    StorageOptions,\n    FileValidationOptions,\n    UploadOptions,\n} from \"express-storage\";\n\n// Utilities — standalone helpers (import separately to keep your bundle small)\nimport { withRetry, formatFileSize, withConcurrencyLimit } from \"express-storage/utils\";\n\n// Drivers — for custom driver implementations or direct driver use\nimport { BaseStorageDriver, createDriver } from \"express-storage/drivers\";\n\n// Config — environment variable loading and validation\nimport { validateStorageConfig, loadAndValidateConfig } from \"express-storage/config\";\n\n// Discriminated unions — TypeScript narrows automatically\nconst result: FileUploadResult = await storage.uploadFile(file);\n\nif (result.success) {\n    // TypeScript knows: result is FileUploadSuccess\n    console.log(result.reference); // string — stored file path\n    console.log(result.fileUrl);   // string — URL to access\n} else {\n    // TypeScript knows: result is FileUploadError\n    console.log(result.error); // string (guaranteed)\n}\n```\n\n---\n\n## Contributing\n\nContributions are welcome!\n\n```bash\n# Clone the repository\ngit clone https://github.com/th3hero/express-storage.git\n\n# Install dependencies (includes all cloud SDKs for development)\nnpm install\n\n# Run tests\nnpm test\n\n# Run tests in watch mode\nnpm run 
test:watch\n\n# Build for production\nnpm run build\n\n# Run linting\nnpm run lint\n```\n\n---\n\n## License\n\nMIT License — use it however you want.\n\n---\n\n## Support\n\n- **Issues**: [GitHub Issues](https://github.com/th3hero/express-storage/issues)\n- **Author**: Alok Kumar ([@th3hero](https://github.com/th3hero))\n\n---\n\n**Made for developers who are tired of writing upload code from scratch.**\n"}