How to Design Apex Logic to Distribute Batch Processing Dynamically Across Multiple Queueable Jobs Based on Data Volume
1. Why Use Dynamic Queueable Distribution?
Salesforce Batch Apex is powerful, but it has limitations:
| Limitation | Impact |
|---|---|
| Fixed batch size | Cannot adapt to fluctuating data volume |
| Limited error isolation | Failures can affect large chunks |
| Slower for complex workflows | Especially when chaining logic |
Queueable Apex allows:
- Job chaining
- Stateful logic
- Passing complex objects
- Dynamic partitioning of records
By dynamically splitting records across multiple Queueable jobs, we achieve:
- Higher parallelism
- Better fault isolation
- Adaptive workload scaling
- Improved governor limit safety
2. High-Level Architecture
Here’s the flow:
- Controller Class determines record volume
- It calculates optimal chunk size
- Records are split into subsets
- Each subset is sent to a Queueable worker
- Workers process independently
- Optional chaining or retry logic handles failures
```
Trigger / Scheduler / UI
           |
           v
  Data Volume Analyzer
           |
           v
  Queueable Dispatcher
     |      |      |
Worker 1  Worker 2  Worker 3
```
3. When to Prefer Queueable Over Batch Apex
| Scenario | Best Option |
|---|---|
| < 50k records with complex logic | Queueable |
| Need dynamic job scaling | Queueable |
| Need job-to-job dependency | Queueable |
| Need resumable processing | Queueable |
| Need simple mass updates | Batch Apex |
4. Core Design Principles
1️⃣ Volume-Aware Chunking
Adjust chunk size dynamically based on:
- Record complexity
- CPU usage
- DML volume
- Heap usage
2️⃣ Stateless Workers
Each Queueable job processes only its assigned chunk.
3️⃣ Resumable Orchestration
Dispatcher tracks progress and requeues failed chunks.
4️⃣ Governor-Safe Execution
Chunk size ensures no job breaches the asynchronous limits:
- 10,000 DML rows
- 200 SOQL queries (async)
- 12 MB heap (async)
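To make that principle concrete, a small guard utility can consult the `Limits` class before taking on more work. This is an illustrative sketch, not part of the original design; the class and method names are assumptions:

```apex
// Hypothetical helper: checks remaining headroom against async governor limits
public class LimitGuard {
    // True if adding pendingRows of DML would leave less than `buffer` rows of headroom
    public static Boolean nearDmlRowLimit(Integer pendingRows, Integer buffer) {
        return Limits.getDmlRows() + pendingRows > Limits.getLimitDmlRows() - buffer;
    }

    // True if current heap usage is within `buffer` bytes of the heap limit
    public static Boolean nearHeapLimit(Integer buffer) {
        return Limits.getHeapSize() > Limits.getLimitHeapSize() - buffer;
    }
}
```

A worker could call these between records and defer the remainder of its chunk instead of breaching a limit mid-transaction.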
5. Sample Business Scenario
Suppose we need to process hundreds of thousands of Orders nightly:
- Calculate totals
- Update fulfillment status
- Sync to external system
- Log failures per record
Instead of one huge Batch Apex job, we dynamically fan out into multiple Queueable workers.
6. Step 1 – Dispatcher (Volume Analyzer)
This class:
- Queries record IDs
- Calculates chunk size dynamically
- Enqueues multiple Queueable workers
```apex
public class OrderProcessingDispatcher {

    public static void startProcessing() {
        // Step 1: Get record count
        Integer totalRecords = [
            SELECT COUNT()
            FROM Order__c
            WHERE Status__c = 'Pending'
        ];
        if (totalRecords == 0) {
            System.debug('No records to process.');
            return;
        }

        // Step 2: Calculate chunk size dynamically
        Integer chunkSize = calculateChunkSize(totalRecords);

        // Step 3: Query IDs only (heap safe). Note: a single transaction can
        // retrieve at most 50,000 query rows -- for larger volumes, use the
        // recursive chaining pattern shown later
        List<Id> orderIds = new List<Id>();
        for (Order__c o : [
            SELECT Id
            FROM Order__c
            WHERE Status__c = 'Pending'
            ORDER BY CreatedDate
        ]) {
            orderIds.add(o.Id);
        }

        // Step 4: Split into chunks
        List<List<Id>> partitions = partition(orderIds, chunkSize);

        // Step 5: Enqueue workers (max 50 System.enqueueJob calls per transaction)
        for (List<Id> subset : partitions) {
            System.enqueueJob(new OrderProcessingQueueable(subset));
        }
    }

    // Dynamically adjust chunk size based on total volume
    private static Integer calculateChunkSize(Integer totalRecords) {
        if (totalRecords <= 2000) return 200;
        if (totalRecords <= 10000) return 500;
        if (totalRecords <= 50000) return 1000;
        return 2000;
    }

    // Apex List has no subList(), so copy elements into chunks manually
    private static List<List<Id>> partition(List<Id> source, Integer size) {
        List<List<Id>> result = new List<List<Id>>();
        List<Id> current = new List<Id>();
        for (Id recordId : source) {
            current.add(recordId);
            if (current.size() == size) {
                result.add(current);
                current = new List<Id>();
            }
        }
        if (!current.isEmpty()) {
            result.add(current);
        }
        return result;
    }
}
```
7. Step 2 – Queueable Worker (Processor)
Each Queueable job:
- Receives a subset of record IDs
- Performs bulk-safe processing
- Handles errors independently
```apex
public class OrderProcessingQueueable implements Queueable, Database.AllowsCallouts {

    private List<Id> orderIds;

    public OrderProcessingQueueable(List<Id> orderIds) {
        this.orderIds = orderIds;
    }

    public void execute(QueueableContext context) {
        List<Order__c> orders = [
            SELECT Id, Amount__c, Status__c, External_Id__c
            FROM Order__c
            WHERE Id IN :orderIds
        ];

        List<Order__c> updates = new List<Order__c>();
        List<Processing_Log__c> logs = new List<Processing_Log__c>();

        for (Order__c o : orders) {
            try {
                // Business logic
                o.Total__c = calculateTotal(o);
                o.Status__c = 'Processed';

                // External sync. Note: a transaction allows at most 100
                // callouts, so with one callout per record keep chunks <= 100,
                // or batch records into a single bulk callout
                syncToERP(o);

                updates.add(o);
            } catch (Exception e) {
                logs.add(new Processing_Log__c(
                    Record_Id__c = o.Id,
                    Message__c = e.getMessage(),
                    Stacktrace__c = e.getStackTraceString()
                ));
            }
        }

        // All callouts are complete, so DML is now safe
        if (!updates.isEmpty()) update updates;
        if (!logs.isEmpty()) insert logs;
    }

    private Decimal calculateTotal(Order__c o) {
        return o.Amount__c * 1.18;
    }

    private void syncToERP(Order__c o) {
        // Callout logic here
    }
}
```
8. Step 3 – Adaptive Parallelism Control
Keep these Queueable limits in mind:
- A single transaction can call System.enqueueJob up to 50 times
- An executing Queueable job can enqueue only 1 child (chained) job
- The Apex flex queue holds up to 100 jobs in Holding status
To avoid flooding the queue, use throttling:
```apex
public class QueueableThrottler {
    public static Boolean canEnqueue() {
        // Count jobs currently waiting or running
        Integer activeJobs = [
            SELECT COUNT()
            FROM AsyncApexJob
            WHERE JobType = 'Queueable'
            AND Status IN ('Holding', 'Processing', 'Queued')
        ];
        // Leave headroom below the concurrency ceiling
        return activeJobs < 45;
    }
}
```
Update the dispatcher:
```apex
for (List<Id> subset : partitions) {
    if (QueueableThrottler.canEnqueue()) {
        System.enqueueJob(new OrderProcessingQueueable(subset));
    } else {
        System.enqueueJob(new DeferredQueueable(subset));
    }
}
```
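The DeferredQueueable referenced here is not defined in this post. One minimal sketch, assuming it simply waits for queue headroom by chaining itself, might look like this:

```apex
// Hypothetical deferral job -- one possible implementation, not from the
// original post. Parks a chunk until the throttler reports headroom.
public class DeferredQueueable implements Queueable {
    private List<Id> subset;

    public DeferredQueueable(List<Id> subset) {
        this.subset = subset;
    }

    public void execute(QueueableContext context) {
        if (QueueableThrottler.canEnqueue()) {
            System.enqueueJob(new OrderProcessingQueueable(subset));
        } else {
            // Still congested: chain once more (the single allowed child job)
            System.enqueueJob(new DeferredQueueable(subset));
        }
    }
}
```

A production version would likely add a retry counter or a short Schedulable delay so a congested queue does not spin through rapid requeues.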
9. Step 4 – Recursive Job Chaining for Massive Volumes
For hundreds of thousands or millions of records, enqueue in waves. Because an executing Queueable job may enqueue only one child job, this dispatcher processes the current chunk synchronously and reserves its single chained job for the remainder:
```apex
public class RecursiveDispatcherQueueable implements Queueable, Database.AllowsCallouts {

    private List<Id> remainingIds;

    public RecursiveDispatcherQueueable(List<Id> remainingIds) {
        this.remainingIds = remainingIds;
    }

    public void execute(QueueableContext context) {
        Integer chunkSize = 1000;

        // Apex List has no subList(), so split manually
        List<Id> batch = new List<Id>();
        List<Id> remaining = new List<Id>();
        for (Integer i = 0; i < remainingIds.size(); i++) {
            if (i < chunkSize) {
                batch.add(remainingIds[i]);
            } else {
                remaining.add(remainingIds[i]);
            }
        }

        // Run the worker logic synchronously for this wave's chunk;
        // AllowsCallouts is declared so the worker's callouts still succeed
        new OrderProcessingQueueable(batch).execute(null);

        // Chain the one allowed child job for the next wave
        if (!remaining.isEmpty()) {
            System.enqueueJob(new RecursiveDispatcherQueueable(remaining));
        }
    }
}
```
10. Step 5 – Stateful Progress Tracking (Optional)
For resumable processing, store chunk status in a custom object, e.g. Processing_Batch__c with fields:
- Job_Id__c
- Total_Records__c
- Completed_Records__c
- Status__c
Update progress inside the worker:
```apex
batch.Completed_Records__c += updates.size();
update batch;
```
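In context, the worker would look up its tracking record before incrementing it. A hedged sketch, assuming the dispatcher creates one Processing_Batch__c per run and passes its Id (here `trackerId`) into each worker's constructor:

```apex
// Inside the worker's execute(), after the order DML succeeds.
// trackerId is a hypothetical field holding the Processing_Batch__c Id.
// FOR UPDATE locks the row so parallel workers don't overwrite each other.
Processing_Batch__c batch = [
    SELECT Id, Total_Records__c, Completed_Records__c, Status__c
    FROM Processing_Batch__c
    WHERE Id = :trackerId
    FOR UPDATE
];
batch.Completed_Records__c += updates.size();
if (batch.Completed_Records__c >= batch.Total_Records__c) {
    batch.Status__c = 'Completed';
}
update batch;
```

The row lock matters here: without FOR UPDATE, two workers finishing at the same time could read the same count and lose an increment.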
11. Dynamic Chunk Size Based on Runtime Metrics
We can dynamically tune chunk size based on:
- CPU time
- Heap size
- DML count
```apex
public class ChunkSizeOptimizer {
    // Note: Limits.* reflect the currently running transaction (the
    // dispatcher), not the workers that will be enqueued -- treat this
    // as a heuristic for how loaded the current context already is
    public static Integer optimize(Integer baseSize) {
        if (Limits.getCpuTime() > 7000) {
            return Math.max(100, baseSize / 2);
        }
        if (Limits.getHeapSize() > 4000000) {
            return Math.max(100, baseSize / 2);
        }
        return baseSize;
    }
}
```
Dispatcher usage:
```apex
Integer chunkSize = ChunkSizeOptimizer.optimize(calculateChunkSize(totalRecords));
```
12. Error Isolation and Retry Pattern
Each Queueable job retries its own failures:
```apex
public class RetryableQueueable implements Queueable {

    private List<Id> failedIds;
    private Integer retryCount;

    public RetryableQueueable(List<Id> failedIds, Integer retryCount) {
        this.failedIds = failedIds;
        this.retryCount = retryCount;
    }

    public void execute(QueueableContext context) {
        if (retryCount > 3) return; // give up after 3 attempts

        try {
            // Re-dispatch the IDs that previously failed (e.g., collected
            // from the worker's Processing_Log__c records)
            System.enqueueJob(new OrderProcessingQueueable(failedIds));
        } catch (Exception e) {
            // Enqueue itself failed (e.g., queue full) -- chain another attempt
            System.enqueueJob(new RetryableQueueable(failedIds, retryCount + 1));
        }
    }
}
```
13. End-to-End Example Flow
Let’s combine everything.
Step 1 – Trigger / Scheduler
```apex
global class NightlyOrderScheduler implements Schedulable {
    global void execute(SchedulableContext sc) {
        OrderProcessingDispatcher.startProcessing();
    }
}
```
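To actually run this nightly, register the scheduler with a cron expression via System.schedule. The 2:00 AM time and job name below are illustrative choices, not from the original:

```apex
// Run once from Anonymous Apex to register the nightly job.
// Cron fields: seconds minutes hours day-of-month month day-of-week
String cron = '0 0 2 * * ?'; // every day at 2:00 AM
System.schedule('Nightly Order Processing', cron, new NightlyOrderScheduler());
```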
14. Comparison: Batch Apex vs Dynamic Queueable Fan-Out
| Feature | Batch Apex | Queueable Fan-Out |
|---|---|---|
| Chunk size | Fixed | Dynamic |
| Parallelism | Limited | High |
| Error isolation | Medium | Excellent |
| Chaining | Complex | Native |
| Runtime adaptiveness | No | Yes |
| Best for | Simple mass updates | Complex workflows |
15. Governor Limit Safety Analysis
Let’s analyze per Queueable job:
| Limit | Max (async) | Typical Usage |
|---|---|---|
| SOQL queries | 200 | 1 |
| DML statements | 150 | 2 |
| DML rows | 10,000 | ~1,000 |
| Heap size | 12 MB | ~2–5 MB |
| CPU time | 60,000 ms | ~5–10 sec |
By keeping chunk size ≤ 1,000, each Queueable job safely stays within limits.