⏱ Scheduled + immediate message queue

A durable message queue for now and later

Process immediate workloads and delayed jobs in one system. Built in Go, self-hostable, with native scheduled delivery, retries, and DLQ.

Get started in 60s → View on GitHub
Schedule a message with curl
# Schedule a payment reminder in 24 hours
curl -X POST "http://localhost:8080/namespaces/payments/queues/reminders/messages" \
  -H "Content-Type: application/json" \
  -d "{\"body\":\"eyJ1c2VySWQiOjEyM30=\",\"deliver_at\":$(( $(date +%s%3N) + 86400000 ))}"

# Response: {"id":"01JP5X4PRJ3G2V9WB8E7KT8M5F"}
# Delivery: at or after the given UTC millisecond timestamp.
O(1) scheduled message peek
15k msgs/sec (single node, est.)
~8 MB Docker image
±15 ms delivery accuracy (Linux)
0 runtime dependencies

Why EpochQueue

Traditional scheduling needed workarounds.
EpochQueue makes scheduling first-class.

Until now, delayed and time-based workflows usually required multiple moving parts. EpochQueue brings them into one simple queueing model.

❌ Traditional approach

  • Separate scheduler jobs and queue workers to maintain
  • Frequent polling loops whose cost grows with load
  • Multiple services and extra operational complexity
  • Delivery windows and timing limits in delay workflows
  • Language- or stack-specific tooling constraints
  • More glue code for retries, timeouts, and reliability

✅ The EpochQueue way

  • Native deliver_at — any future UTC millisecond
  • Configurable scheduling window (default 90 days)
  • Language-agnostic HTTP + WebSocket API
  • Single binary, zero dependencies, ~8 MB image
  • Runs on 512 MB RAM
  • Durable WAL + bbolt index — survives restarts



Features

Everything a production queue needs. Nothing it doesn't.

Built ground-up in Go. No dependencies, no configuration sprawl.

Native scheduled delivery

Set deliver_at to any future UTC millisecond. An in-memory Min-Heap (O(1) peek, O(log N) insert) fires messages precisely when due — no database polling loop.

🔒

At-least-once delivery

Visibility timeouts lock messages while in-flight. If your consumer crashes without ACKing, the message becomes ready again automatically.

💀

Dead-letter queue

Every queue gets a paired DLQ. Failed messages land there after max_retries. Inspect and replay with a single API call.

🌐

Three consumer models

HTTP poll, WebSocket push (with backpressure), or Webhook delivery — pick what fits your architecture. Mix and match per queue.

📦

Durable storage

Append-only WAL + bbolt index. Crash-safe: EpochQueue replays the WAL on restart to restore all in-memory state exactly as it was.

📊

Built-in dashboard

Live queue depths, scheduled counts, DLQ alerts, pagination for 500+ queues. Create queues, send messages, replay DLQ — all from the browser.

🔭

Prometheus metrics

Published, consumed, ACKed, NACKed, DLQ-routed counters per queue. HTTP request latency histograms. Scrape from port 9090.

🔑

Auth + rate limiting

Static API key via X-Api-Key header. Per-IP token-bucket rate limiter and request body size cap built in.
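As a sketch, authenticating a raw HTTP publish just means setting the X-Api-Key header. The endpoint path follows the pattern used in the curl examples above; newPublishRequest is an illustrative helper, not part of the SDK.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// newPublishRequest builds an authenticated publish request. The URL pattern
// and the X-Api-Key header come from the docs; the helper itself is a sketch.
func newPublishRequest(base, namespace, queue, apiKey string, body []byte) (*http.Request, error) {
	url := fmt.Sprintf("%s/namespaces/%s/queues/%s/messages", base, namespace, queue)
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Api-Key", apiKey)
	return req, nil
}

func main() {
	body := []byte(`{"body":"aGVsbG8gd29ybGQ="}`)
	req, err := newPublishRequest("http://localhost:8080", "payments", "invoices", "your-secret", body)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
	// Sending it with http.DefaultClient.Do(req) requires a running EpochQueue.
}
```

Requests without a valid key are rejected; the per-IP token bucket and body size cap apply on top of this.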

🚀

Cluster-ready by design

WAL format carries Raft term + log_index fields from day one. Phase 2 adds 3-node Raft consensus without a storage format change.


"Send email 1 hour after signup? Schedule a payment retry in 24 hours? Cancel an order in 30 minutes? That's one deliver_at."
— Stop writing cron jobs. Start publishing messages.

How it works

Min-Heap scheduling.
O(1) peek. Always.

Traditional solutions poll the database every second. EpochQueue knows exactly when the next message is due and sleeps until that moment.

1

Publish with deliver_at

Set deliver_at to a UTC millisecond timestamp. If it's in the future, the message enters the scheduler's Min-Heap. If immediate, it goes straight to the READY queue.

2

Scheduler fires when the message is due

A background goroutine peeks at the heap root (O(1)) and sleeps until that timestamp. On fire, it pops the message and pushes it to READY.

3

Consumer receives the message

Your consumer polls, subscribes via WebSocket, or receives a webhook POST. The message is locked with a visibility timeout.

4

ACK or retry

Call DELETE /messages/:receipt_handle to ACK. Call POST .../nack to requeue. Exhaust retries and it moves to the DLQ.

Go SDK
import "github.com/sneh-joshi/epochqueue/pkg/client"

c := client.New("http://localhost:8080")

// Publish immediately
id, _ := c.Publish(ctx, "payments", "invoices", payload)

// Schedule in 1 hour
id, _ = c.Publish(ctx, "payments", "invoices", payload, client.WithDelay(time.Hour))

// Schedule at an exact time
id, _ = c.Publish(ctx, "payments", "invoices", payload, client.WithDeliverAt(monday9am))

// Consume + ACK
msgs, _ := c.Consume(ctx, "payments", "invoices", 10)
for _, m := range msgs {
    process(m.Body)
    c.Ack(ctx, m.ReceiptHandle)
}

Quick start

Running in under 60 seconds.

Docker Compose is the fastest path. A single binary works just as well.

1

Clone & start

git clone https://github.com/sneh-joshi/epochqueue
cd epochqueue/docker
docker compose up -d
2

Open the dashboard

open http://localhost:8080/dashboard
3

Publish your first scheduled message

curl -X POST "http://localhost:8080/namespaces/demo/queues/jobs/messages" \
  -H "Content-Type: application/json" \
  -d "{\"body\":\"aGVsbG8gd29ybGQ=\",\"deliver_at\":$(( $(date +%s%3N) + 60000 ))}"

↑ Delivers in 60 seconds from now.

4

Add the Go SDK

go get github.com/sneh-joshi/epochqueue/pkg/client
docker-compose.yml
services:
  epochqueue:
    image: epochqueue:latest   # local image until registry release
    ports:
      - "8080:8080"   # API + dashboard
      - "9090:9090"   # Prometheus metrics
    volumes:
      - ./data:/data
      - ./config.yaml:/config.yaml
    environment:
      - EPOCHQUEUE_AUTH_API_KEY=your-secret
Or build from source
git clone https://github.com/sneh-joshi/epochqueue
cd epochqueue
go build -o epochqueue ./cmd/server
./epochqueue --config config.yaml

Use cases

One queue. Dozens of cron job patterns eliminated.

Any task that runs "later" is a candidate.

📧

Onboarding email sequence

Publish welcome email jobs with deliver_at spaced 1 day, 3 days, 7 days after signup. No cron, no DB polling.

WithDelay(24 * time.Hour)
💳

Payment retry

After a failed charge, schedule a retry in 24 hours. If that fails, schedule another at 72 hours. No scheduler service needed.

WithDelay(24 * time.Hour)
🛒

Abandoned cart recovery

At checkout start, publish a reminder message for 30 minutes later. Cancel it if the order completes — or let it fire and send the nudge.

WithDelay(30 * time.Minute)
📝

Scheduled content publishing

Publish a "go live" job for a blog post at exactly 9 AM Monday. No CMS flag polling, no cron expression debugging.

WithDeliverAt(monday9am)
🔄

Subscription renewal

When a subscription is created, publish a renewal job for 3 days before expiry. When the job fires, publish the next one.

WithDeliverAt(expiryDate - 3 days)
📬

Rate-limited bulk email

Space out 10,000 marketing emails by staggering deliver_at timestamps. No external throttle service needed.

deliver_at: now + i * 100ms

Performance

Built for production.

Single-node numbers. Throughput varies significantly by environment — these are estimated ranges.

| Metric | Linux VPS (2–4 vCPU, shared SSD) | Docker Desktop (macOS / Windows VM) | Notes |
| --- | --- | --- | --- |
| Write throughput (fsync=interval) | ~15,000 msgs/sec | ~4,000 msgs/sec | Default config. Bottleneck: WAL fsync + disk I/O. |
| Write throughput (fsync=never) | ~80,000 msgs/sec | ~4,000 msgs/sec | Lossy mode: no durability guarantee on crash. |
| Read throughput | ~30,000 msgs/sec | ~2,000 msgs/sec | 50 concurrent consumers, batch=100. |
| Delivery accuracy | ±15–30 ms | ±50–200 ms | VM timer jitter dominates on Docker Desktop. |

Limits & defaults

| Metric | Value | Notes |
| --- | --- | --- |
| Scheduled heap size | 1M entries ≈ ~100 MB RAM | In-memory Min-Heap; memory grows linearly with entry count. |
| Maximum connections | ~65,000 TCP | OS ulimit, not an EpochQueue limit. |
| In-memory index @ 10 GB RAM | ~100M messages | bbolt page cache. |
| Docker image size | ~8 MB | Multi-stage scratch build. Measured. |
| Minimum RAM | 512 MB | Runs comfortably on free-tier VMs. |
| Max message size | 256 KB | Configurable via max_message_size_kb. Matches SQS's limit; for larger payloads, store data in object storage and pass a reference ID in the message body. |
| Max messages per queue | 100,000 | Default cap. Configurable per queue via max_messages. |
| Max batch size (consume) | 100 msgs | Max messages returned per Consume call. Configurable via max_batch_size. |
| Message metadata | ≤16 keys · key ≤64 B · value ≤512 B | String key/value pairs attached at publish time. Use for routing hints, trace IDs, or tags. Enforced at publish; rejected with 400 if exceeded. |

* All throughput numbers are estimated. Write numbers: 50 concurrent producers, batch size 100, measured over 30 seconds. Read numbers: 50 concurrent consumers, batch size 100. Docker Desktop numbers measured on macOS with Apple Silicon.


Open source

MIT licensed. Community driven.

EpochQueue is fully open source. Contributions, bug reports, and feature discussions are welcome.

Star on GitHub Contributing Guide Documentation
Go 1.24 · MIT License