https://github.com/egeominotti/bunqueue
⚡ High-performance job queue server for Bun. SQLite persistence, cron jobs, priorities, DLQ. Zero dependencies.
- Host: GitHub
- URL: https://github.com/egeominotti/bunqueue
- Owner: egeominotti
- License: MIT
- Created: 2026-01-28T21:26:22.000Z (7 days ago)
- Default Branch: main
- Last Pushed: 2026-01-31T16:26:47.000Z (5 days ago)
- Last Synced: 2026-01-31T17:24:58.210Z (5 days ago)
- Topics: background-jobs, bun, job, queue, s3, sqlite
- Language: TypeScript
- Size: 2.87 MB
- Stars: 21
- Watchers: 0
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
Documentation • Features • Quick Start • Embedded • Server • Docker
---
## Why bunqueue?
> ⚠️ **Bun only** – bunqueue requires the [Bun](https://bun.sh) runtime. Node.js is not supported.
**Every other job queue requires external infrastructure.** bunqueue doesn't.
| Library | Requires |
|---------|----------|
| BullMQ | ❌ Redis |
| Agenda | ❌ MongoDB |
| Bee-Queue | ❌ Redis |
| pg-boss | ❌ PostgreSQL |
| Celery | ❌ Redis/RabbitMQ |
| **bunqueue** | ✅ **Nothing. Zero. Nada.** |
bunqueue is the **only** job queue with:
- **BullMQ-compatible API** – Same `Queue`, `Worker`, `QueueEvents` you know
- **Zero external dependencies** – No Redis, no MongoDB, no nothing
- **Persistent storage** – SQLite survives restarts, no data loss
- **100K+ jobs/sec** – Faster than Redis-based queues
- **Single file deployment** – Just your app, that's it
```bash
# Others: Install Redis, configure connection, manage infrastructure...
# bunqueue:
bun add bunqueue
```
```typescript
import { Queue, Worker } from 'bunqueue/client';
// That's it. You're done. Start queuing.
```
---
## Quick Install
```bash
# Requires Bun runtime (https://bun.sh)
bun add bunqueue
```
bunqueue works in **two modes**:
| Mode | Description | Use Case |
|------|-------------|----------|
| **Embedded** | In-process, no server needed | Monolith, scripts, serverless |
| **Server** | Standalone TCP/HTTP server | Microservices, multi-process |
---
## Quick Start
### Embedded Mode (Recommended)
No server required. BullMQ-compatible API.
```typescript
import { Queue, Worker } from 'bunqueue/client';
// Create queue
const queue = new Queue('emails');
// Create worker
const worker = new Worker('emails', async (job) => {
  console.log('Sending email to:', job.data.to);
  await job.updateProgress(50);
  return { sent: true };
}, { concurrency: 5 });

// Handle events
worker.on('completed', (job, result) => {
  console.log(`Job ${job.id} completed:`, result);
});

worker.on('failed', (job, err) => {
  console.error(`Job ${job.id} failed:`, err.message);
});
// Add jobs
await queue.add('send-welcome', { to: 'user@example.com' });
```
### Server Mode
For multi-process or microservice architectures.
**Terminal 1 - Start server:**
```bash
bunqueue start
```
**Terminal 2 - Producer:**
```typescript
const res = await fetch('http://localhost:6790/push', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    queue: 'emails',
    data: { to: 'user@example.com' }
  })
});
```
**Terminal 3 - Consumer:**
```typescript
while (true) {
  const res = await fetch('http://localhost:6790/pull', {
    method: 'POST',
    body: JSON.stringify({ queue: 'emails', timeout: 5000 })
  });
  const job = await res.json();
  if (job.id) {
    console.log('Processing:', job.data);
    await fetch('http://localhost:6790/ack', {
      method: 'POST',
      body: JSON.stringify({ id: job.id })
    });
  }
}
```
---
## Features
- **Blazing Fast** – 500K+ jobs/sec, built on the Bun runtime
- **Dual Mode** – Embedded (in-process) or Server (TCP/HTTP)
- **BullMQ-Compatible API** – Easy migration with `Queue`, `Worker`, `QueueEvents`
- **Persistent Storage** – SQLite with WAL mode
- **Sandboxed Workers** – Isolated processes for crash protection
- **Priority Queues** – FIFO, LIFO, and priority-based ordering
- **Delayed Jobs** – Schedule jobs for later
- **Repeatable Jobs** – Recurring jobs with interval and limit
- **Cron Scheduling** – Recurring jobs with cron expressions
- **Queue Groups** – Organize queues in namespaces
- **Flow/Pipelines** – Chain jobs A → B → C with result passing
- **Retry & Backoff** – Automatic retries with exponential backoff (see the sketch after this list)
- **Dead Letter Queue** – Failed jobs preserved for inspection
- **Job Dependencies** – Parent-child relationships
- **Progress Tracking** – Real-time progress updates
- **Rate Limiting** – Per-queue rate limits
- **Webhooks** – HTTP callbacks on job events
- **Real-time Events** – WebSocket and SSE support
- **Prometheus Metrics** – Built-in monitoring
- **Full CLI** – Manage queues from command line
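
A minimal sketch of the retry behaviour, using only the `attempts` and `backoff` options documented in the Queue API section below plus the worker's `failed` event (the queue name and the simulated flaky operation are illustrative):

```typescript
import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('notifications');

// Retry up to 3 times; `backoff` is the base delay (ms) between attempts.
await queue.add('send', { to: 'user@example.com' }, {
  attempts: 3,
  backoff: 1000,
});

const worker = new Worker('notifications', async (job) => {
  // Throwing marks the attempt as failed and schedules a retry
  // until `attempts` is exhausted (illustrative flaky operation).
  if (Math.random() < 0.5) throw new Error('temporary delivery failure');
  return { sent: true };
});

worker.on('failed', (job, err) => {
  // Failed jobs are preserved in the dead letter queue for inspection
  // (per the feature list above).
  console.error(`Job ${job.id} failed:`, err.message);
});
```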
---
## Embedded Mode
### Queue API
```typescript
import { Queue } from 'bunqueue/client';
const queue = new Queue('my-queue');
// Add job
const job = await queue.add('task-name', { data: 'value' });
// Add with options
await queue.add('task', { data: 'value' }, {
  priority: 10,          // Higher = processed first
  delay: 5000,           // Delay in ms
  attempts: 3,           // Max retries
  backoff: 1000,         // Backoff base (ms)
  timeout: 30000,        // Processing timeout
  jobId: 'unique-id',    // Custom ID
  removeOnComplete: true,
  removeOnFail: false,
});
// Bulk add
await queue.addBulk([
  { name: 'task1', data: { id: 1 } },
  { name: 'task2', data: { id: 2 } },
]);
// Get job
const job = await queue.getJob('job-id');
// Remove job
await queue.remove('job-id');
// Get counts
const counts = await queue.getJobCounts();
// { waiting: 10, active: 2, completed: 100, failed: 5 }
// Queue control
await queue.pause();
await queue.resume();
await queue.drain(); // Remove waiting jobs
await queue.obliterate(); // Remove ALL data
```
### Worker API
```typescript
import { Worker } from 'bunqueue/client';
const worker = new Worker('my-queue', async (job) => {
  console.log('Processing:', job.name, job.data);

  // Update progress
  await job.updateProgress(50, 'Halfway done');

  // Add log
  await job.log('Processing step completed');

  // Return result
  return { success: true };
}, {
  concurrency: 10,   // Parallel jobs
  autorun: true,     // Start automatically
});

// Events
worker.on('active', (job) => {
  console.log(`Job ${job.id} started`);
});

worker.on('completed', (job, result) => {
  console.log(`Job ${job.id} completed:`, result);
});

worker.on('failed', (job, err) => {
  console.error(`Job ${job.id} failed:`, err.message);
});

worker.on('progress', (job, progress) => {
  console.log(`Job ${job.id} progress:`, progress);
});

worker.on('error', (err) => {
  console.error('Worker error:', err);
});
// Control
worker.pause();
worker.resume();
await worker.close(); // Graceful shutdown
await worker.close(true); // Force close
```
### SandboxedWorker
Run job processors in **isolated Bun Worker processes**. Perfect for:
- CPU-intensive tasks that would block the event loop
- Processing untrusted code/data
- Jobs that might crash or have memory leaks
- Workloads requiring process-level isolation
```typescript
import { Queue, SandboxedWorker } from 'bunqueue/client';
const queue = new Queue('image-processing');
// Create sandboxed worker pool
const worker = new SandboxedWorker('image-processing', {
  processor: './imageProcessor.ts',  // Runs in separate process
  concurrency: 4,                    // 4 parallel worker processes
  timeout: 60000,                    // 60s timeout per job
  maxMemory: 256,                    // MB per worker (uses smol mode if ≤ 64)
  maxRestarts: 10,                   // Auto-restart crashed workers
});

worker.start();

// Add jobs normally
await queue.add('resize', {
  image: 'photo.jpg',
  width: 800
});
// Check worker stats
const stats = worker.getStats();
// { total: 4, busy: 2, idle: 2, restarts: 0 }
// Graceful shutdown
await worker.stop();
```
**Processor file** (`imageProcessor.ts`):
```typescript
// This runs in an isolated Bun Worker process
export default async (job: {
  id: string;
  data: any;
  queue: string;
  attempts: number;
  progress: (value: number) => void;
}) => {
  job.progress(10);

  // CPU-intensive work - won't block main process
  const result = await processImage(job.data.image, job.data.width);

  job.progress(100);
  return { processed: true, path: result };
};
```
**Comparison:**
| Feature | Worker | SandboxedWorker |
|---------|--------|-----------------|
| Execution | In-process | Separate process |
| Latency | ~0.002ms | ~2-5ms (IPC overhead) |
| Crash isolation | ❌ | ✅ |
| Memory leak protection | ❌ | ✅ |
| CPU-bound safety | ❌ Blocks event loop | ✅ Isolated |
| Use case | Fast I/O tasks | Heavy computation |
### QueueEvents
Listen to queue events without processing jobs.
```typescript
import { QueueEvents } from 'bunqueue/client';
const events = new QueueEvents('my-queue');
events.on('waiting', ({ jobId }) => {
  console.log(`Job ${jobId} waiting`);
});

events.on('active', ({ jobId }) => {
  console.log(`Job ${jobId} active`);
});

events.on('completed', ({ jobId, returnvalue }) => {
  console.log(`Job ${jobId} completed:`, returnvalue);
});

events.on('failed', ({ jobId, failedReason }) => {
  console.log(`Job ${jobId} failed:`, failedReason);
});

events.on('progress', ({ jobId, data }) => {
  console.log(`Job ${jobId} progress:`, data);
});
await events.close();
```
### Repeatable Jobs
Jobs that repeat automatically at fixed intervals.
```typescript
import { Queue, Worker } from 'bunqueue/client';
const queue = new Queue('heartbeat');
// Repeat every 5 seconds, max 10 times
await queue.add('ping', { timestamp: Date.now() }, {
  repeat: {
    every: 5000,   // 5 seconds
    limit: 10,     // max 10 repetitions
  }
});

// Infinite repeat (no limit)
await queue.add('health-check', {}, {
  repeat: { every: 60000 }   // every minute, forever
});

const worker = new Worker('heartbeat', async (job) => {
  console.log('Heartbeat:', job.data);
  return { ok: true };
});
```
### Queue Groups
Organize queues with namespaces.
```typescript
import { QueueGroup } from 'bunqueue/client';
// Create a group with namespace
const billing = new QueueGroup('billing');
// Get queues (automatically prefixed)
const invoices = billing.getQueue('invoices');   // → "billing:invoices"
const payments = billing.getQueue('payments');   // → "billing:payments"

// Get workers for the group
const worker = billing.getWorker('invoices', async (job) => {
  console.log('Processing invoice:', job.data);
  return { processed: true };
});
// List all queues in the group
const queues = billing.listQueues(); // ['invoices', 'payments']
// Bulk operations on the group
billing.pauseAll();
billing.resumeAll();
billing.drainAll();
```
### FlowProducer (Pipelines)
Chain jobs with dependencies and result passing.
```typescript
import { FlowProducer, Worker } from 'bunqueue/client';
const flow = new FlowProducer();
// Chain: A → B → C (sequential execution)
const { jobIds } = await flow.addChain([
  { name: 'fetch', queueName: 'pipeline', data: { url: 'https://api.example.com' } },
  { name: 'process', queueName: 'pipeline', data: {} },
  { name: 'store', queueName: 'pipeline', data: {} },
]);

// Parallel then merge: [A, B, C] → D
const result = await flow.addBulkThen(
  [
    { name: 'fetch-1', queueName: 'parallel', data: { source: 'api1' } },
    { name: 'fetch-2', queueName: 'parallel', data: { source: 'api2' } },
    { name: 'fetch-3', queueName: 'parallel', data: { source: 'api3' } },
  ],
  { name: 'merge', queueName: 'parallel', data: {} }
);

// Tree structure
await flow.addTree({
  name: 'root',
  queueName: 'tree',
  data: { level: 0 },
  children: [
    { name: 'child1', queueName: 'tree', data: { level: 1 } },
    { name: 'child2', queueName: 'tree', data: { level: 1 } },
  ],
});

// Worker with parent result access
const worker = new Worker('pipeline', async (job) => {
  if (job.name === 'fetch') {
    const data = await fetchData(job.data.url);
    return { data };
  }
  if (job.name === 'process' && job.data.__flowParentId) {
    // Get result from previous job in chain
    const parentResult = flow.getParentResult(job.data.__flowParentId);
    return { processed: transform(parentResult.data) };
  }
  return { done: true };
});
```
### Shutdown
```typescript
import { shutdownManager } from 'bunqueue/client';
// Cleanup when done
shutdownManager();
```
---
## Server Mode
### Start Server
```bash
# Basic
bunqueue start
# With options
bunqueue start --tcp-port 6789 --http-port 6790 --data-path ./data/queue.db
# With environment variables
DATA_PATH=./data/bunqueue.db AUTH_TOKENS=secret bunqueue start
```
### Environment Variables
```env
TCP_PORT=6789
HTTP_PORT=6790
HOST=0.0.0.0
DATA_PATH=./data/bunqueue.db
AUTH_TOKENS=token1,token2
```
### HTTP API
```bash
# Push job
curl -X POST http://localhost:6790/push \
  -H "Content-Type: application/json" \
  -d '{"queue":"emails","data":{"to":"user@test.com"},"priority":10}'

# Pull job
curl -X POST http://localhost:6790/pull \
  -H "Content-Type: application/json" \
  -d '{"queue":"emails","timeout":5000}'

# Acknowledge
curl -X POST http://localhost:6790/ack \
  -H "Content-Type: application/json" \
  -d '{"id":"job-id","result":{"sent":true}}'

# Fail
curl -X POST http://localhost:6790/fail \
  -H "Content-Type: application/json" \
  -d '{"id":"job-id","error":"Failed to send"}'
# Stats
curl http://localhost:6790/stats
# Health
curl http://localhost:6790/health
# Prometheus metrics
curl http://localhost:6790/prometheus
```
### TCP Protocol
```bash
nc localhost 6789
# Commands (JSON)
{"cmd":"PUSH","queue":"tasks","data":{"action":"process"}}
{"cmd":"PULL","queue":"tasks","timeout":5000}
{"cmd":"ACK","id":"1","result":{"done":true}}
{"cmd":"FAIL","id":"1","error":"Something went wrong"}
```
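
The same commands can be sent from code over a raw TCP socket. Below is a minimal sketch using Bun's built-in `Bun.connect` API; the newline-terminated JSON framing is an assumption based on the interactive `nc` session above, and the flashq SDK described below wraps this protocol if you'd rather not handle it yourself:

```typescript
// Sketch of a raw TCP client; framing (JSON + "\n") is assumed
// from the nc example above.
const socket = await Bun.connect({
  hostname: 'localhost',
  port: 6789,
  socket: {
    data(_socket, data) {
      // Print whatever the server replies with.
      console.log('reply:', data.toString());
    },
    error(_socket, err) {
      console.error('socket error:', err);
    },
  },
});

// Push a job, then ask for one back.
socket.write(JSON.stringify({ cmd: 'PUSH', queue: 'tasks', data: { action: 'process' } }) + '\n');
socket.write(JSON.stringify({ cmd: 'PULL', queue: 'tasks', timeout: 5000 }) + '\n');
```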
---
## CLI
```bash
# Server
bunqueue start
bunqueue start --tcp-port 6789 --http-port 6790
# Jobs
bunqueue push emails '{"to":"user@test.com"}'
bunqueue push tasks '{"action":"sync"}' --priority 10 --delay 5000
bunqueue pull emails --timeout 5000
bunqueue ack
bunqueue fail --error "Failed"
# Job management
bunqueue job get
bunqueue job progress 50 --message "Processing"
bunqueue job cancel
# Queue control
bunqueue queue list
bunqueue queue pause emails
bunqueue queue resume emails
bunqueue queue drain emails
# Cron
bunqueue cron list
bunqueue cron add cleanup -q maintenance -d '{}' -s "0 * * * *"
bunqueue cron delete cleanup
# DLQ
bunqueue dlq list emails
bunqueue dlq retry emails
bunqueue dlq purge emails
# Monitoring
bunqueue stats
bunqueue metrics
bunqueue health
# Backup (S3)
bunqueue backup now
bunqueue backup list
bunqueue backup restore --force
```
---
## Docker
```bash
# Run
docker run -p 6789:6789 -p 6790:6790 ghcr.io/egeominotti/bunqueue
# With persistence
docker run -p 6789:6789 -p 6790:6790 \
  -v bunqueue-data:/app/data \
  -e DATA_PATH=/app/data/bunqueue.db \
  ghcr.io/egeominotti/bunqueue

# With auth
docker run -p 6789:6789 -p 6790:6790 \
  -e AUTH_TOKENS=secret \
  ghcr.io/egeominotti/bunqueue
```
### Docker Compose
```yaml
version: "3.8"

services:
  bunqueue:
    image: ghcr.io/egeominotti/bunqueue
    ports:
      - "6789:6789"
      - "6790:6790"
    volumes:
      - bunqueue-data:/app/data
    environment:
      - DATA_PATH=/app/data/bunqueue.db
      - AUTH_TOKENS=your-secret-token

volumes:
  bunqueue-data:
```
---
## S3 Backup
```env
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=your-key
S3_SECRET_ACCESS_KEY=your-secret
S3_BUCKET=my-bucket
S3_REGION=us-east-1
S3_BACKUP_INTERVAL=21600000 # 6 hours
S3_BACKUP_RETENTION=7
```
Supported providers: AWS S3, Cloudflare R2, MinIO, DigitalOcean Spaces.
---
## When to Use What?
| Scenario | Mode |
|----------|------|
| Single app, monolith | **Embedded** |
| Scripts, CLI tools | **Embedded** |
| Serverless (with persistence) | **Embedded** |
| Microservices | **Server** |
| Multiple languages | **Server** (HTTP API) |
| Horizontal scaling | **Server** |
### Server Mode SDK
To communicate with the bunqueue server from **separate processes**, use the [flashq](https://www.npmjs.com/package/flashq) SDK:
```bash
bun add flashq
```
```typescript
import { FlashQ } from 'flashq';
const client = new FlashQ({ host: 'localhost', port: 6789 });
// Push job
await client.push('emails', { to: 'user@test.com' });
// Pull and process
const job = await client.pull('emails');
if (job) {
  console.log('Processing:', job.data);
  await client.ack(job.id);
}
```
| Package | Use Case |
|---------|----------|
| `bunqueue/client` | Same process (embedded) |
| `flashq` | Different process (TCP client) |
```
┌──────────────────┐     ┌──────────────────┐
│     Your App     │     │     Your App     │
│                  │     │                  │
│  bunqueue/client │     │      flashq      │
│    (embedded)    │     │   (TCP client)   │
└────────┬─────────┘     └────────┬─────────┘
         │                        │
         │                        ▼
         │               ┌──────────────────┐
         │               │ bunqueue server  │
         │               │   (port 6789)    │
         │               └──────────────────┘
         │
   Same process            Different process
```
---
## Architecture
```
┌────────────────────────────────────────────────────────────────┐
│                            bunqueue                            │
├────────────────────────────────────────────────────────────────┤
│   Embedded Mode                │      Server Mode              │
│   (bunqueue/client)            │      (bunqueue start)         │
│                                │                               │
│   Queue, Worker                │   TCP (6789) + HTTP (6790)    │
│   in-process                   │   multi-process               │
├────────────────────────────────────────────────────────────────┤
│                          Core Engine                           │
│   ┌───────────┐  ┌───────────┐  ┌───────────┐  ┌───────────┐   │
│   │  Queues   │  │  Workers  │  │ Scheduler │  │    DLQ    │   │
│   │(32 shards)│  │           │  │  (Cron)   │  │           │   │
│   └───────────┘  └───────────┘  └───────────┘  └───────────┘   │
├────────────────────────────────────────────────────────────────┤
│                 SQLite (WAL mode, 256MB mmap)                  │
└────────────────────────────────────────────────────────────────┘
```
---
## Contributing
```bash
bun install
bun test
bun run lint
bun run format
bun run check
```
---
## License
MIT License β see [LICENSE](LICENSE) for details.
---
Built with Bun 🔥