Building a Job Queue System with BullMQ and Redis

Build a production-ready job queue with BullMQ and Redis. Delayed jobs, retries, priorities, concurrency, and monitoring with real TypeScript examples.

Yash Pritwani
13 min read

Why You Need a Job Queue

Not everything can happen in the request-response cycle. Sending emails, processing images, generating reports, syncing data, running AI inference — these tasks take seconds or minutes, and your users should not wait.

A job queue lets you push work into a background process that runs independently. BullMQ is the most popular Node.js/TypeScript job queue, built on Redis. It handles millions of jobs per day at companies like Mozilla, Automattic, and many others.

Setup

npm install bullmq ioredis

// src/queue/connection.ts
import { Redis } from 'ioredis';

export const connection = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: parseInt(process.env.REDIS_PORT || '6379'),
  maxRetriesPerRequest: null,  // Required by BullMQ
});

Creating Queues and Producers

// src/queue/email.queue.ts
import { Queue } from 'bullmq';
import { connection } from './connection';

export interface EmailJobData {
  to: string;
  subject: string;
  template: string;
  variables: Record<string, string>;
}

export const emailQueue = new Queue<EmailJobData>('email', {
  connection,
  defaultJobOptions: {
    attempts: 3,
    backoff: {
      type: 'exponential',
      delay: 2000,  // 2s, 4s, 8s
    },
    removeOnComplete: { count: 1000 },  // Keep last 1000 completed
    removeOnFail: { count: 5000 },      // Keep last 5000 failed
  },
});

// Add a job
await emailQueue.add('welcome', {
  to: '[email protected]',
  subject: 'Welcome to TechSaaS',
  template: 'welcome',
  variables: { name: 'Alice' },
});

// Add a delayed job (send in 1 hour)
await emailQueue.add('reminder', {
  to: '[email protected]',
  subject: 'Complete your setup',
  template: 'reminder',
  variables: { name: 'Alice' },
}, {
  delay: 60 * 60 * 1000,  // 1 hour in ms
});

// Add a scheduled recurring job
await emailQueue.upsertJobScheduler(
  'weekly-digest',
  { pattern: '0 9 * * MON' },  // Every Monday at 9am
  {
    name: 'digest',
    data: { template: 'weekly-digest' },
  }
);
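When you have many jobs to enqueue at once, `Queue.addBulk` pushes them in a single round trip instead of one call per job. A minimal sketch — `buildEmailJobs` is a hypothetical helper, and `EmailJobData` is redeclared here so the snippet is self-contained:

```typescript
interface EmailJobData {
  to: string;
  subject: string;
  template: string;
  variables: Record<string, string>;
}

function buildEmailJobs(recipients: string[], template: string) {
  // Each entry matches the { name, data, opts } shape expected by Queue.addBulk
  return recipients.map((to) => ({
    name: 'bulk-send',
    data: { to, subject: 'Your weekly digest', template, variables: {} } as EmailJobData,
    opts: { priority: 10 },  // low priority so bulk mail never starves transactional mail
  }));
}

// Usage (assumes the emailQueue from above):
// await emailQueue.addBulk(buildEmailJobs(['[email protected]', '[email protected]'], 'weekly-digest'));
```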

Creating Workers (Consumers)

// src/queue/email.worker.ts
import { Worker, Job } from 'bullmq';
import { connection } from './connection';
import { EmailJobData } from './email.queue';
import { sendEmail } from '../services/email';
import { renderTemplate } from '../services/templates';

const emailWorker = new Worker<EmailJobData>(
  'email',
  async (job: Job<EmailJobData>) => {
    const { to, subject, template, variables } = job.data;

    // Update progress
    await job.updateProgress(10);

    // Render template
    const html = await renderTemplate(template, variables);
    await job.updateProgress(50);

    // Send email
    const result = await sendEmail({ to, subject, html });
    await job.updateProgress(100);

    // Return result (stored with completed job)
    return { messageId: result.messageId, sentAt: new Date().toISOString() };
  },
  {
    connection,
    concurrency: 5,        // Process 5 emails simultaneously
    limiter: {
      max: 50,             // Max 50 jobs
      duration: 60000,     // Per minute (respect email provider limits)
    },
  }
);

// Event handlers
emailWorker.on('completed', (job, result) => {
  console.log(`Email sent to ${job.data.to}: ${result.messageId}`);
});

emailWorker.on('failed', (job, error) => {
  console.error(`Email to ${job?.data.to} failed: ${error.message}`);
  // Alert if all retries are exhausted (alertOpsTeam is whatever alerting hook you use)
  if (job && job.attemptsMade >= (job.opts.attempts || 3)) {
    alertOpsTeam(`Email permanently failed: ${job.data.to}`);
  }
});

emailWorker.on('error', (error) => {
  console.error('Worker error:', error);
});
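Those `updateProgress` calls are visible outside the worker process through BullMQ's `QueueEvents` class, which subscribes to queue events over Redis. A small sketch — `formatProgress` is a hypothetical helper for the log line:

```typescript
// Progress can be a number (percentage) or an arbitrary object in BullMQ.
function formatProgress(jobId: string, progress: number | object): string {
  const rendered = typeof progress === 'number' ? `${progress}%` : JSON.stringify(progress);
  return `job ${jobId}: ${rendered}`;
}

// Subscribe from any process (QueueEvents is real BullMQ API):
// const queueEvents = new QueueEvents('email', { connection });
// queueEvents.on('progress', ({ jobId, data }) => {
//   console.log(formatProgress(jobId, data as number | object));
// });
```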

Job Priorities

// Priority: lower number = higher priority
await emailQueue.add('password-reset', data, { priority: 1 });    // Critical
await emailQueue.add('order-confirmation', data, { priority: 5 }); // Normal
await emailQueue.add('newsletter', data, { priority: 10 });        // Low

Flow (Job Dependencies)

BullMQ supports parent-child job flows:

import { FlowProducer } from 'bullmq';

const flowProducer = new FlowProducer({ connection });

// Child jobs run first, parent runs after all children complete
await flowProducer.add({
  name: 'generate-report',
  queueName: 'reports',
  data: { reportId: 'RPT-001' },
  children: [
    {
      name: 'fetch-sales-data',
      queueName: 'data-fetch',
      data: { source: 'sales', dateRange: '2025-11' },
    },
    {
      name: 'fetch-user-data',
      queueName: 'data-fetch',
      data: { source: 'users', dateRange: '2025-11' },
    },
    {
      name: 'fetch-analytics',
      queueName: 'data-fetch',
      data: { source: 'analytics', dateRange: '2025-11' },
    },
  ],
});
// fetch-sales-data, fetch-user-data, fetch-analytics run in parallel
// generate-report runs after all three complete

Multiple Queue Pattern

Organize work into separate queues by type:

// src/queue/index.ts
import { Queue } from 'bullmq';
import { connection } from './connection';

export const queues = {
  email: new Queue('email', { connection }),
  imageProcessing: new Queue('image-processing', { connection }),
  dataSync: new Queue('data-sync', { connection }),
  aiInference: new Queue('ai-inference', { connection }),
  webhookDelivery: new Queue('webhook-delivery', { connection }),
};

// Different concurrency per queue type
// Email: high concurrency, I/O bound
// Image processing: low concurrency, CPU bound
// AI inference: single concurrency, GPU bound
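One way to keep those per-queue settings from drifting is a single config map that the worker bootstrap iterates over. The names and numbers below are illustrative starting points, not benchmarks:

```typescript
// Concurrency tuned to the bottleneck of each workload type.
const workerConfig: Record<string, { concurrency: number }> = {
  'email': { concurrency: 20 },            // I/O bound: many jobs in flight
  'image-processing': { concurrency: 2 },  // CPU bound: keep near core count
  'ai-inference': { concurrency: 1 },      // GPU bound: serialize access
};

// Bootstrap (processors is a hypothetical map of queue name -> processor fn):
// for (const [queueName, opts] of Object.entries(workerConfig)) {
//   new Worker(queueName, processors[queueName], { connection, ...opts });
// }
```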

Error Handling and Dead Letter Queue

const worker = new Worker('critical-tasks', async (job) => {
  try {
    await processJob(job.data);
  } catch (error) {
    if (error instanceof TransientError) {
      // Retry-able error: throw to trigger BullMQ retry
      throw error;
    }

    if (error instanceof PermanentError) {
      // Non-retryable: move to dead letter queue
      await deadLetterQueue.add('failed-job', {
        originalQueue: 'critical-tasks',
        originalJobId: job.id,
        data: job.data,
        error: error.message,
        failedAt: new Date().toISOString(),
      });

      // Don't throw — mark as completed to prevent retries
      return { status: 'moved_to_dlq', error: error.message };
    }

    throw error;  // Unknown error: let BullMQ retry
  }
}, { connection, concurrency: 3 });
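BullMQ also exports `UnrecoverableError`: throwing it from a processor fails the job immediately, skipping any remaining retry attempts, which is a lighter-weight alternative when you do not need the DLQ record. A sketch — `classifyError` and the HTTP-status policy are assumptions, not BullMQ API:

```typescript
type FailureKind = 'retry' | 'fail-fast';

// Hypothetical policy: 4xx responses from a downstream API will not succeed
// on retry, so fail fast; everything else gets the normal retry/backoff path.
function classifyError(httpStatus?: number): FailureKind {
  if (httpStatus !== undefined && httpStatus >= 400 && httpStatus < 500) {
    return 'fail-fast';
  }
  return 'retry';
}

// Inside a processor's catch block:
// import { UnrecoverableError } from 'bullmq';
// if (classifyError(err.response?.status) === 'fail-fast') {
//   throw new UnrecoverableError(err.message);  // straight to failed, no retries
// }
// throw err;  // normal retry with exponential backoff
```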

Monitoring with Bull Board

// src/monitoring/bull-board.ts
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

createBullBoard({
  queues: [
    new BullMQAdapter(emailQueue),
    new BullMQAdapter(imageQueue),
    new BullMQAdapter(aiQueue),
  ],
  serverAdapter,
});

// Mount on your Express app
app.use('/admin/queues', serverAdapter.getRouter());

Production Checklist

  1. Set maxRetriesPerRequest: null on Redis connection (BullMQ requirement)
  2. Always set removeOnComplete and removeOnFail to prevent Redis memory exhaustion
  3. Use exponential backoff for retries
  4. Monitor queue depth — growing queues mean workers cannot keep up
  5. Graceful shutdown: Call worker.close() on SIGTERM
  6. Separate worker processes from API servers for reliability
  7. Set appropriate concurrency per queue type
// Graceful shutdown
process.on('SIGTERM', async () => {
  console.log('Shutting down workers...');
  await emailWorker.close();
  await imageWorker.close();
  process.exit(0);
});
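Checklist item 4 (monitoring queue depth) can be automated with `queue.getJobCounts()` (real BullMQ API). The `isBacklogged` policy and the 1000-job threshold below are assumptions to adapt to your workload:

```typescript
// Mirrors the relevant fields of the object returned by queue.getJobCounts(...).
interface JobCounts {
  waiting: number;
  active: number;
  delayed: number;
  failed: number;
}

// A growing backlog of waiting + delayed jobs means workers cannot keep up.
function isBacklogged(counts: JobCounts, maxPending = 1000): boolean {
  return counts.waiting + counts.delayed > maxPending;
}

// Poll periodically from a monitoring process (pageOnCall is hypothetical):
// const counts = await emailQueue.getJobCounts('waiting', 'active', 'delayed', 'failed');
// if (isBacklogged(counts as unknown as JobCounts)) pageOnCall('email queue backlog');
```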

BullMQ with Redis is the backbone of background processing in the Node.js ecosystem. At TechSaaS, we use it for everything from email delivery to AI inference scheduling.

#bullmq#redis#job-queue#typescript#background-jobs#tutorial
