Concurrency Patterns for Booking Systems — TypeScript#
SDE-3 Interview Prep | Slice Fintech | DSA & Coding Round
The Core Problem#
A booking system must enforce:
Total booked ≤ Total capacity
No two users can book the same unit simultaneously
TypeScript runs on Node.js, which is single-threaded (event loop), but concurrency still matters via:
- Async/await races (multiple coroutines interleaving)
- Worker threads (the worker_threads module)
- Distributed systems (multiple Node processes / pods)
Pattern 1 — Mutex Lock (Async)#
What it is#
A Mutex (Mutual Exclusion) lock ensures that only one async operation can enter a critical section at a time. In Java you'd use synchronized or ReentrantLock. Node.js has neither, so we build one using Promises and a queue.
How it works#
When a coroutine tries to acquire() a locked mutex, it doesn't block the thread — it suspends itself by pushing a resolver into a queue. When the current holder calls release(), it hands the lock directly to the next waiter in the queue. This is cooperative, not preemptive.
When to use it#
Use this when multiple async functions race to modify the same shared resource (e.g., booking the same seat, debiting the same wallet). Without it, two awaited operations can both read available = 1, both pass the if check, and both decrement — resulting in available = -1.
Trade-offs#
- ✅ Prevents race conditions in async code
- ✅ Fine-grained (one lock per item, not one global lock)
- ⚠️ Queue can grow unboundedly under extreme load — add a maxWaiters cap in production
- ❌ Does NOT work across multiple Node.js processes or pods — use Redis for that (Pattern 6)
Node.js has no built-in synchronized. We simulate it with a Promise-based mutex.
class Mutex {
private queue: Array<() => void> = [];
private locked = false;
async acquire(): Promise<void> {
return new Promise((resolve) => {
if (!this.locked) {
this.locked = true;
resolve();
} else {
this.queue.push(resolve);
}
});
}
release(): void {
if (this.queue.length > 0) {
const next = this.queue.shift()!;
next(); // hand off the lock
} else {
this.locked = false;
}
}
}
class BookingService {
private inventory = new Map<string, number>();
private locks = new Map<string, Mutex>();
private getLock(itemId: string): Mutex {
if (!this.locks.has(itemId)) {
this.locks.set(itemId, new Mutex());
}
return this.locks.get(itemId)!;
}
async book(itemId: string, quantity: number): Promise<boolean> {
const lock = this.getLock(itemId);
await lock.acquire();
try {
const available = this.inventory.get(itemId) ?? 0;
if (available < quantity) return false;
this.inventory.set(itemId, available - quantity);
return true;
} finally {
lock.release(); // ALWAYS release in finally
}
}
async release(itemId: string, quantity: number): Promise<void> {
const lock = this.getLock(itemId);
await lock.acquire();
try {
const current = this.inventory.get(itemId) ?? 0;
this.inventory.set(itemId, current + quantity);
} finally {
lock.release();
}
}
}
// Usage
const service = new BookingService();
// Seed inventory first — release() adds stock, so we can use it here
await service.release("seat-A1", 1);
await service.release("seat-B2", 1);
// Concurrent bookings — only one proceeds at a time per item
Promise.all([
  service.book("seat-A1", 1),
  service.book("seat-A1", 1), // fails — seat taken by the first call
  service.book("seat-B2", 1), // different seat — no contention
]).then(console.log); // [true, false, true]
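The unbounded-queue caveat above can be addressed with a small extension — a sketch of a bounded mutex that sheds load once too many callers are queued (the BoundedMutex name and maxWaiters cap are illustrative, not a standard API):

```typescript
// Sketch: a mutex whose acquire() rejects once the wait queue hits a cap,
// so a traffic spike fails fast instead of growing the queue forever.
class BoundedMutex {
  private queue: Array<() => void> = [];
  private locked = false;

  constructor(private readonly maxWaiters: number) {}

  acquire(): Promise<void> {
    return new Promise((resolve, reject) => {
      if (!this.locked) {
        this.locked = true;
        resolve();
      } else if (this.queue.length >= this.maxWaiters) {
        // Backpressure: refuse instead of queueing indefinitely
        reject(new Error("Too many waiters — shed load"));
      } else {
        this.queue.push(resolve);
      }
    });
  }

  release(): void {
    const next = this.queue.shift();
    if (next) next(); // hand the lock directly to the next waiter
    else this.locked = false;
  }
}
```

Callers that get the rejection can surface a "try again" response instead of silently piling up behind a hot seat.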
Pattern 2 — Optimistic Locking (CAS simulation)#
What it is#
Optimistic Locking avoids acquiring a lock entirely. Instead, it reads the current value, computes the new value, and then atomically swaps the old for the new — but only if the value hasn't changed in between. This is called a Compare-And-Swap (CAS) operation.
How it works#
- Read the current inventory count (e.g., 5)
- Compute the new count (e.g., 5 - 1 = 4)
- CAS: "Set to 4 only if it's still 5"
- If the CAS fails (someone else changed it), read again and retry
When to use it#
Use this when contention is low — i.e., most of the time, two users aren't competing for the exact same item at the exact same millisecond. It's ideal for distributed inventories or separate items in a catalog. It avoids the overhead of acquiring and releasing a lock on the happy path.
Trade-offs#
- ✅ No lock overhead on the happy path (very fast)
- ✅ No risk of deadlock (no locks held)
- ⚠️ Under high contention (100 users for 1 seat), the spin loop burns CPU — use a pessimistic lock instead
- ⚠️ In single-threaded Node.js, true CAS conflicts don't happen mid-synchronous code; this pattern truly shines with Worker Threads or Redis WATCH/MULTI/EXEC
Philosophy: Assume conflicts won't happen. Retry if they do.
class AtomicInteger {
private value: number;
constructor(initial: number) {
this.value = initial;
}
get(): number {
return this.value;
}
// Simulate Compare-And-Swap
compareAndSet(expected: number, newValue: number): boolean {
if (this.value === expected) {
this.value = newValue;
return true;
}
return false;
}
}
class OptimisticBookingService {
private inventory = new Map<string, AtomicInteger>();
initItem(itemId: string, count: number): void {
this.inventory.set(itemId, new AtomicInteger(count));
}
book(itemId: string, quantity: number): boolean {
const counter = this.inventory.get(itemId);
if (!counter) throw new Error(`Item ${itemId} not found`);
// Spin loop — retry on conflict
while (true) {
const current = counter.get();
if (current < quantity) return false; // not enough inventory
if (counter.compareAndSet(current, current - quantity)) {
return true; // CAS succeeded
}
// CAS failed → another coroutine changed it, retry
}
}
}
// Note: In real Node.js single-thread, CAS conflicts won't happen
// mid-synchronous code. This pattern shines in Worker Threads or
// distributed Redis-based CAS (WATCH/MULTI/EXEC).
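In databases, this pattern usually appears as a version column (the SQL idiom `UPDATE inventory SET count = ?, version = version + 1 WHERE id = ? AND version = ?`). A minimal in-memory sketch of that idea — class and method names here are illustrative:

```typescript
// Sketch: optimistic locking via a version number. An update succeeds only
// if the row's version still matches the one read — otherwise retry.
interface VersionedItem { count: number; version: number; }

class VersionedStore {
  private rows = new Map<string, VersionedItem>();

  init(id: string, count: number): void {
    this.rows.set(id, { count, version: 0 });
  }

  read(id: string): VersionedItem {
    const row = this.rows.get(id);
    if (!row) throw new Error(`No row ${id}`);
    return { ...row }; // return a snapshot, not the live object
  }

  // Mirrors "UPDATE ... WHERE version = ?": false means someone got there first
  updateIfVersion(id: string, expectedVersion: number, newCount: number): boolean {
    const row = this.rows.get(id);
    if (!row || row.version !== expectedVersion) return false;
    this.rows.set(id, { count: newCount, version: expectedVersion + 1 });
    return true;
  }
}

function bookOptimistic(store: VersionedStore, id: string, qty: number, maxRetries = 5): boolean {
  for (let i = 0; i < maxRetries; i++) {
    const snapshot = store.read(id);
    if (snapshot.count < qty) return false; // sold out
    if (store.updateIfVersion(id, snapshot.version, snapshot.count - qty)) return true;
    // version moved → concurrent update happened; re-read and retry
  }
  throw new Error("Contention too high — fall back to a pessimistic lock");
}
```

Bounding the retries (rather than spinning forever) is the practical fix for the high-contention caveat listed above.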
Pattern 3 — Semaphore (Capacity Limiting)#
What it is#
A Semaphore is a counter-based concurrency primitive. It maintains a pool of N "permits". Each caller must acquire a permit before proceeding; when done, they release it back. If all permits are taken, new callers wait until one is returned.
How it works#
Think of it like a car park with N spaces. A barrier arm counts cars in and out. If the lot is full, you wait at the barrier until someone exits. The Semaphore is that barrier — it doesn't care who is inside, only how many.
When to use it#
Use this when your constraint is capacity rather than identity. A concert venue has 500 seats. You don't care which specific code path holds a permit, only that no more than 500 are active simultaneously. Semaphores are also used to cap the number of concurrent DB connections or external API calls.
Key insight — Semaphore ≠ identity tracking#
A Semaphore only counts. It doesn't know which seat was booked. You still need a Set<string> alongside it to track which specific seats are taken. Mixing up these two responsibilities is a common interview mistake.
Trade-offs#
- ✅ Simple and efficient for capacity enforcement
- ✅ FIFO wait queue gives fair ordering (no starvation) — Java's equivalent is new Semaphore(n, true)
- ⚠️ Does not prevent two users booking the same specific seat — pair with identity tracking
- ❌ In-process only; does not work across pods without Redis
Limit concurrent bookings to at most N slots.
class Semaphore {
private permits: number;
private waitQueue: Array<() => void> = [];
constructor(permits: number) {
this.permits = permits;
}
async acquire(): Promise<void> {
if (this.permits > 0) {
this.permits--;
return;
}
// No permits available — wait
return new Promise((resolve) => {
this.waitQueue.push(() => {
this.permits--;
resolve();
});
});
}
release(): void {
this.permits++;
const next = this.waitQueue.shift();
if (next) next(); // the queued callback in acquire() re-decrements permits
}
}
class SeatBookingService {
private semaphore: Semaphore;
private bookedSeats = new Set<string>();
constructor(totalSeats: number) {
this.semaphore = new Semaphore(totalSeats);
}
async bookSeat(seatId: string): Promise<boolean> {
await this.semaphore.acquire();
if (this.bookedSeats.has(seatId)) {
this.semaphore.release(); // seat already taken, return permit
return false;
}
this.bookedSeats.add(seatId);
return true;
}
cancelSeat(seatId: string): void {
if (this.bookedSeats.delete(seatId)) {
this.semaphore.release();
}
}
availableSeats(): number {
return (this.semaphore as any).permits; // expose for debugging
}
}
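To see the capacity guarantee in action, here is a small harness (re-declaring a compact Semaphore so the snippet runs standalone; the measurePeakConcurrency helper is illustrative) that proves active work never exceeds the permit count:

```typescript
// Demo: 10 tasks run through a 3-permit semaphore; peak concurrency
// observed during the run must never exceed 3.
class Semaphore {
  private permits: number;
  private waitQueue: Array<() => void> = [];
  constructor(permits: number) { this.permits = permits; }
  async acquire(): Promise<void> {
    if (this.permits > 0) { this.permits--; return; }
    return new Promise((resolve) => this.waitQueue.push(resolve));
  }
  release(): void {
    const next = this.waitQueue.shift();
    if (next) next();    // hand the permit straight to the next waiter
    else this.permits++;
  }
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function measurePeakConcurrency(tasks: number, permits: number): Promise<number> {
  const sem = new Semaphore(permits);
  let active = 0;
  let peak = 0;
  await Promise.all(
    Array.from({ length: tasks }, async () => {
      await sem.acquire();
      try {
        active++;
        peak = Math.max(peak, active); // record high-water mark
        await sleep(5);                // simulated booking work
      } finally {
        active--;
        sem.release();
      }
    })
  );
  return peak;
}
```

This is also how you would demonstrate the pattern live in an interview: instrument the critical section and show the high-water mark equals the permit count, not the task count.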
Pattern 4 — Idempotency Token (Fintech Critical)#
What it is#
Idempotency means: no matter how many times you perform the same operation, the result is the same as if you'd done it once. An Idempotency Token (also called an idempotency key) is a unique identifier attached to a request that lets the server detect and suppress duplicates.
Why it matters in fintech#
In payment and booking systems, the client often can't tell if a request succeeded — the response might be lost due to a network timeout. The safe recovery is to retry. But a naive retry might charge the user twice or book two seats. Idempotency keys solve this: the server remembers the result for a given key and returns the cached response on all subsequent retries — without re-executing the operation.
How it works#
Three states for an incoming key:
- Already processed → return cached result immediately (no DB hit)
- In-flight → a Promise is already running for this key; return that same Promise so all callers get the same result
- New → process normally, cache the result, then resolve all waiters
When to use it#
Any time a client might retry a request — payment initiation, seat reservation, coupon redemption, fund transfer. This is a first-class design primitive at Slice, Razorpay, and Stripe. Mentioning it unprompted in an interview immediately signals senior-level thinking.
Trade-offs#
- ✅ Prevents double-charges and duplicate bookings from retries
- ✅ Cheap: cache lookup is O(1)
- ⚠️ Cache must be durable (Redis/DB) in production — in-memory is lost on restart
- ⚠️ Keys need a TTL (e.g., 24h) to prevent unbounded memory growth
- ⚠️ The key must be generated client-side and be truly unique per logical operation (UUID v4 is standard)
Most important pattern for Slice interviews. Prevents double-booking from retried network requests.
interface BookingRequest {
itemId: string;
userId: string;
quantity: number;
}
interface BookingResult {
success: boolean;
bookingId?: string;
message: string;
}
class IdempotentBookingService {
private processedRequests = new Map<string, BookingResult>();
private inFlight = new Map<string, Promise<BookingResult>>();
private inventory = new Map<string, number>();
async book(
idempotencyKey: string,
request: BookingRequest
): Promise<BookingResult> {
// 1. Already processed — return cached result
const cached = this.processedRequests.get(idempotencyKey);
if (cached) return cached;
// 2. In-flight — return same promise (prevents duplicate processing)
const inflight = this.inFlight.get(idempotencyKey);
if (inflight) return inflight;
// 3. New request — process and cache
const promise = this.processBooking(request);
this.inFlight.set(idempotencyKey, promise);
try {
const result = await promise;
this.processedRequests.set(idempotencyKey, result);
return result;
} finally {
this.inFlight.delete(idempotencyKey);
}
}
private async processBooking(request: BookingRequest): Promise<BookingResult> {
const available = this.inventory.get(request.itemId) ?? 0;
if (available < request.quantity) {
return { success: false, message: "Insufficient inventory" };
}
this.inventory.set(request.itemId, available - request.quantity);
const bookingId = `BKG-${Date.now()}-${Math.random().toString(36).slice(2)}`;
return { success: true, bookingId, message: "Booking confirmed" };
}
}
// Usage — simulating a retried request
const svc = new IdempotentBookingService();
const key = "user-123-request-456"; // same key on retry
const [r1, r2, r3] = await Promise.all([
svc.book(key, { itemId: "concert-A", userId: "u123", quantity: 1 }),
svc.book(key, { itemId: "concert-A", userId: "u123", quantity: 1 }), // retry
svc.book(key, { itemId: "concert-A", userId: "u123", quantity: 1 }), // retry
]);
// r1 === r2 === r3 (same bookingId, only charged once)
console.log(r1.bookingId === r2.bookingId); // true
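The TTL caveat above can be sketched with a tiny expiring map — a stand-in for Redis `SET key value EX 86400` (the TtlCache name is illustrative):

```typescript
// Sketch: an idempotency cache whose entries expire after a TTL, so keys
// don't accumulate forever. Expired keys are treated as brand-new requests.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(private readonly ttlMs: number) {}

  set(key: string, value: V): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      this.entries.delete(key); // lazy eviction on read
      return undefined;
    }
    return entry.value;
  }
}
```

In production the durability requirement still points to Redis or the DB; this in-memory version only illustrates the expiry semantics.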
Pattern 5 — Async Queue (Producer-Consumer)#
What it is#
The Producer-Consumer pattern decouples the rate at which work is submitted from the rate at which it's processed. A bounded queue sits between producers (e.g., incoming HTTP requests) and consumers (e.g., workers writing to the DB). This prevents the system from being overwhelmed during traffic spikes.
How it works#
An AsyncQueue accepts tasks and maintains a pool of concurrency active workers. When a worker finishes, it immediately picks up the next task from the queue. If all workers are busy, new tasks wait. Think of it as a restaurant kitchen: the queue is the order tickets rail, and workers are the chefs — you control how many chefs are cooking simultaneously.
When to use it#
Use this when:
- Your downstream resource (DB, payment gateway, external API) has a rate limit or limited connections
- You want to avoid thundering-herd: 10,000 requests arriving simultaneously shouldn't spawn 10,000 concurrent DB writes
- You want backpressure — when the queue is full, you can reject or delay new submissions gracefully
Real-world analogy at Slice#
A flash sale for 1,000 concert tickets might receive 50,000 requests in 2 seconds. An AsyncQueue with concurrency = 20 processes 20 bookings at a time, protecting the database from being crushed, while others wait or receive a "queue full" rejection.
Trade-offs#
- ✅ Protects downstream resources from overload
- ✅ Provides natural backpressure
- ✅ Easy to tune concurrency without code changes
- ⚠️ Adds latency — requests wait in queue
- ⚠️ In-process only — for multi-pod queuing, use a message broker (Kafka, RabbitMQ, SQS)
Decouple booking ingestion from processing under load.
type Task<T> = () => Promise<T>;
class AsyncQueue<T> {
private queue: Array<{ task: Task<T>; resolve: (v: T) => void; reject: (e: unknown) => void }> = [];
private activeWorkers = 0;
private readonly concurrency: number;
constructor(concurrency: number) {
this.concurrency = concurrency;
}
enqueue(task: Task<T>): Promise<T> {
return new Promise((resolve, reject) => {
this.queue.push({ task, resolve, reject });
this.drain();
});
}
private drain(): void {
while (this.activeWorkers < this.concurrency && this.queue.length > 0) {
const { task, resolve, reject } = this.queue.shift()!;
this.activeWorkers++;
task()
.then(resolve)
.catch(reject)
.finally(() => {
this.activeWorkers--;
this.drain(); // process next
});
}
}
}
// Usage: process at most 3 bookings concurrently
const bookingQueue = new AsyncQueue<BookingResult>(3);
const requests = Array.from({ length: 10 }, (_, i) => ({
idempotencyKey: `req-${i}`,
itemId: `seat-${i % 3}`,
quantity: 1,
}));
// All 10 enqueued, but only 3 run at a time
const results = await Promise.all(
requests.map((req) =>
bookingQueue.enqueue(() =>
fetch("/api/book", { method: "POST", body: JSON.stringify(req) }).then((r) => r.json())
)
)
);
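The "queue full" rejection mentioned above is a small change to the queue — a sketch with a hypothetical maxPending bound (re-declaring the task type so it runs standalone):

```typescript
// Sketch: an AsyncQueue variant that rejects enqueue() once the backlog
// reaches maxPending — explicit backpressure instead of unbounded buffering.
type Job<T> = () => Promise<T>;

class BoundedAsyncQueue<T> {
  private pending: Array<{ job: Job<T>; resolve: (v: T) => void; reject: (e: unknown) => void }> = [];
  private active = 0;

  constructor(private readonly concurrency: number, private readonly maxPending: number) {}

  enqueue(job: Job<T>): Promise<T> {
    if (this.pending.length >= this.maxPending) {
      // Shed load: the caller gets an immediate, retryable failure
      return Promise.reject(new Error("Queue full — try again later"));
    }
    return new Promise<T>((resolve, reject) => {
      this.pending.push({ job, resolve, reject });
      this.drain();
    });
  }

  private drain(): void {
    while (this.active < this.concurrency && this.pending.length > 0) {
      const { job, resolve, reject } = this.pending.shift()!;
      this.active++;
      job()
        .then(resolve, reject)
        .finally(() => {
          this.active--;
          this.drain(); // pick up the next waiting task
        });
    }
  }
}
```

In the flash-sale scenario, the rejected callers would receive an HTTP 429 / "queue full" response rather than timing out behind 50,000 queued writes.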
Pattern 6 — Distributed Lock (Redis-based)#
What it is#
A Distributed Lock extends the concept of a mutex to work across multiple processes, servers, or pods. A single Node.js Mutex only works within one process — if your service runs on 3 pods, three requests can enter the critical section simultaneously. A distributed lock uses an external store (Redis) that all pods share.
How it works#
Redis's SET key value NX PX ttl command is atomic — it sets a key only if it doesn't already exist, with an expiry. This is the foundation:
- Acquire: SET lock:seat-A1 <unique-token> NX PX 5000 — succeeds only if no one else holds it
- Release: a Lua script checks that the token matches (so Pod A can't accidentally release Pod B's lock) and then deletes it
- TTL (auto-expiry): if a pod crashes mid-operation, the lock auto-expires after 5 seconds — preventing permanent deadlock
Why the Lua script for release?#
Without it, there's a race: Pod A checks GET lock:seat-A1 == its-token ✓, then pauses (GC pause, network hiccup), then calls DEL. Meanwhile, the lock expired and Pod B acquired it. Pod A's DEL now deletes Pod B's lock. The Lua script makes check-and-delete atomic, eliminating this race.
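The race can be demonstrated with a toy in-memory "Redis" (a plain Map standing in for the real store — illustrative only; in real Redis, the get-then-delete below would itself be non-atomic, which is exactly why the Lua script exists):

```typescript
// Toy simulation: a Map stands in for Redis. The naive release deletes
// unconditionally; the safe release refuses to delete a lock it doesn't own.
const fakeRedis = new Map<string, string>();

// Naive release — unconditional DEL (the buggy version)
function naiveRelease(key: string): void {
  fakeRedis.delete(key);
}

// Token-checked release — what the Lua script makes atomic in real Redis
function safeRelease(key: string, token: string): boolean {
  if (fakeRedis.get(key) === token) {
    fakeRedis.delete(key);
    return true;
  }
  return false; // not our lock — leave it alone
}

// Scenario: Pod A's lock expired, and Pod B has since acquired it.
fakeRedis.set("lock:seat-A1", "pod-B-token");

// Naive: Pod A blindly deletes Pod B's lock — lock theft.
naiveRelease("lock:seat-A1");
console.log(fakeRedis.has("lock:seat-A1")); // false — Pod B's lock is gone!

// Safe: the token check refuses to delete a lock Pod A no longer owns.
fakeRedis.set("lock:seat-A1", "pod-B-token");
safeRelease("lock:seat-A1", "pod-A-token");
console.log(fakeRedis.has("lock:seat-A1")); // true — Pod B's lock survives
```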
When to use it#
Any booking system running more than one instance of the service. At Slice scale — multiple pods per service, deployed across availability zones — in-process locks are simply not enough. This is the production-grade solution.
Trade-offs#
- ✅ Works across multiple pods, processes, and servers
- ✅ TTL prevents permanent deadlock if a holder crashes
- ✅ Lua release prevents accidental lock theft
- ⚠️ Adds ~1–2ms network round-trip per lock operation
- ⚠️ Redis becomes a single point of failure — use Redis Sentinel/Cluster or the Redlock algorithm for higher availability
- ❌ The Redlock algorithm (multiple Redis nodes) is debated — Martin Kleppmann's critique is worth knowing for SDE-3 interviews
For multi-pod/multi-process deployments (how Slice actually runs this).
import { createClient } from "redis";
class RedisDistributedLock {
private client: ReturnType<typeof createClient>;
private readonly TTL_MS = 5000; // auto-expire to prevent deadlocks
constructor(redisUrl: string) {
this.client = createClient({ url: redisUrl });
}
async connect(): Promise<void> {
await this.client.connect(); // node-redis v4 requires an explicit connect before use
}
async acquire(lockKey: string, token: string): Promise<boolean> {
// SET key token NX PX ttl — atomic in Redis
const result = await this.client.set(
`lock:${lockKey}`,
token,
{ NX: true, PX: this.TTL_MS }
);
return result === "OK";
}
async release(lockKey: string, token: string): Promise<void> {
// Lua script ensures we only delete OUR lock (not another holder's)
const script = `
if redis.call("get", KEYS[1]) == ARGV[1] then
return redis.call("del", KEYS[1])
else
return 0
end
`;
await this.client.eval(script, { keys: [`lock:${lockKey}`], arguments: [token] });
}
}
class DistributedBookingService {
private lock: RedisDistributedLock;
constructor(redisUrl: string) {
this.lock = new RedisDistributedLock(redisUrl);
}
async book(itemId: string, userId: string): Promise<boolean> {
// Unique per request — in production prefer crypto.randomUUID()
const token = `${userId}-${Date.now()}-${Math.random().toString(36).slice(2)}`;
const acquired = await this.lock.acquire(itemId, token);
if (!acquired) {
throw new Error("Resource locked by another process. Retry.");
}
try {
// Critical section — safe across multiple pods
return await this.processBooking(itemId, userId);
} finally {
await this.lock.release(itemId, token);
}
}
private async processBooking(itemId: string, userId: string): Promise<boolean> {
// DB operation here
return true;
}
}
Pattern 7 — CountDownLatch equivalent (Promise.all coordination)#
What it is#
A CountDownLatch is a synchronization barrier that blocks progress until a fixed number of operations have completed. It originated in Java's java.util.concurrent and is a one-shot gate: initialized with count N, it opens permanently once N threads have called countDown().
How it works#
Imagine a rocket launch that requires sign-off from 3 systems: propulsion, navigation, and communications. No launch happens until all 3 give the green light. The latch starts at 3; each system decrements it upon readiness. When it hits 0, the launch proceeds — and the gate never closes again (one-shot).
When to use it#
Use this when you need to fan-out N async operations and wait for all of them before continuing. In a booking system this is useful for parallel pre-validation: check card validity, check fraud score, and check balance simultaneously before confirming the booking.
CountDownLatch vs CyclicBarrier#
| | CountDownLatch | CyclicBarrier |
|---|---|---|
| Reusable? | ❌ One-shot | ✅ Resets after each cycle |
| Direction | Waiters wait for workers | All threads wait for each other |
| Use case | Main thread waits for N tasks | N threads synchronize at a checkpoint |
In TypeScript/Node.js#
Promise.all() is the idiomatic equivalent and should be your default. Implement a full CountDownLatch class only when you need the decoupled countDown() + await() API (e.g., when countdowns happen from unrelated callback chains rather than returned Promises).
Trade-offs#
- ✅ Clean fan-out/join pattern — maximizes parallelism for independent tasks
- ✅ Promise.all fails fast — rejects as soon as any promise rejects (use Promise.allSettled if you need all results regardless)
- ⚠️ All-or-nothing: if one validation hangs indefinitely, the whole booking hangs — always wrap with a timeout
class CountDownLatch {
private count: number;
private resolve!: () => void;
readonly promise: Promise<void>;
constructor(count: number) {
this.count = count;
this.promise = new Promise((res) => (this.resolve = res));
}
countDown(): void {
this.count--;
if (this.count <= 0) this.resolve();
}
async await(): Promise<void> {
return this.promise;
}
}
// Usage: wait for 3 validations before confirming booking
async function bookWithValidation(bookingId: string): Promise<void> {
const latch = new CountDownLatch(3);
// Run all validations concurrently. NOTE: in production attach .catch
// handlers — a rejected validation here would never count down, leaving
// the latch stuck and the booking hung.
validateCard(bookingId).then(() => latch.countDown());
validateBalance(bookingId).then(() => latch.countDown());
validateFraud(bookingId).then(() => latch.countDown());
await latch.await(); // suspends until all 3 complete
await confirmBooking(bookingId);
}
// In practice, just use Promise.all:
async function bookWithValidationSimple(bookingId: string): Promise<void> {
await Promise.all([
validateCard(bookingId),
validateBalance(bookingId),
validateFraud(bookingId),
]);
await confirmBooking(bookingId);
}
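Per the timeout caveat above, a sketch of giving each fan-out task its own deadline via Promise.race, and collecting every outcome with Promise.allSettled (the withTimeout helper and the simulated validations are illustrative):

```typescript
// Sketch: each validation races against its own timeout, so one hung task
// can't stall the whole join; allSettled collects every outcome.
function withTimeout<T>(p: Promise<T>, ms: number, label: string): Promise<T> {
  return Promise.race([
    p,
    new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms)
    ),
  ]);
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function validateAll(): Promise<string[]> {
  const settled = await Promise.allSettled([
    withTimeout(sleep(5).then(() => "card-ok"), 50, "card"),
    withTimeout(sleep(5).then(() => "balance-ok"), 50, "balance"),
    withTimeout(sleep(100).then(() => "fraud-ok"), 20, "fraud"), // simulated hang
  ]);
  // Map each settled result to its value, or "failed" on rejection/timeout
  return settled.map((s) => (s.status === "fulfilled" ? s.value : "failed"));
}
```

Unlike Promise.all, this variant lets you report exactly which validation failed — often what a booking flow needs for a useful error message.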
Pattern 8 — Timeout + Retry with Exponential Backoff#
What it is#
Exponential Backoff is a retry strategy where each successive retry waits twice as long as the previous one. Combined with a per-attempt timeout, it prevents a failing downstream service from causing cascading failures — and prevents all retriers from hammering the service simultaneously (which would make recovery impossible).
How it works#
On failure, instead of retrying immediately (which could flood an already-struggling service), you wait:
- Attempt 1 fails → wait 100ms
- Attempt 2 fails → wait 200ms
- Attempt 3 fails → wait 400ms
Jitter (random ±10% variation on the delay) is added to desynchronize retriers. Without jitter, 1,000 clients that all failed at the same moment will all retry at exactly t+100ms — creating a synchronized thundering herd. Jitter staggers them.
The timeout race with Promise.race#
Promise.race([operation, timeout(2000)]) means: whichever settles first wins. If the booking API hangs for 10 seconds, the timeout promise rejects at 2 seconds and we move on. Without this, a single slow upstream call can hold your async operation open indefinitely.
When to use it#
Any time you call an external service that may transiently fail: payment gateway, fraud detection API, SMS/notification service, inventory microservice. At Slice, the payment gateway is a critical external dependency — retries with backoff are non-negotiable.
Trade-offs#
- ✅ Recovers from transient failures without manual intervention
- ✅ Protects downstream from thundering herd via jitter
- ✅ Promise.race prevents indefinite hangs
- ⚠️ Retries increase total latency — set maxRetries conservatively (3 is standard)
- ⚠️ Some operations must NOT be retried without idempotency keys (e.g., a payment debit) — always combine with Pattern 4
- ❌ Does not help with systemic failures (service is down for 30 min) — use a Circuit Breaker for that
async function bookWithRetry(
itemId: string,
quantity: number,
maxRetries = 3
): Promise<BookingResult> {
let attempt = 0;
while (attempt < maxRetries) {
try {
const result = await Promise.race([
performBooking(itemId, quantity),
timeout(2000), // 2 second timeout
]);
return result as BookingResult;
} catch (error) {
attempt++;
if (attempt === maxRetries) throw error;
// Exponential backoff: 100ms, 200ms, 400ms... (attempt is 1-based here)
const delay = 100 * Math.pow(2, attempt - 1);
await sleep(delay);
}
}
throw new Error("Max retries exceeded");
}
function timeout(ms: number): Promise<never> {
return new Promise((_, reject) =>
setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms)
);
}
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
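The jitter described earlier (random ±10% on the delay) can be folded into the delay calculation — a sketch, with the helper name being illustrative:

```typescript
// Sketch: exponential backoff with ±10% jitter, so clients that failed at
// the same instant don't all retry in lockstep. attempt is 1-based:
// attempt 1 → ~100ms, attempt 2 → ~200ms, attempt 3 → ~400ms.
function backoffDelay(attempt: number, baseMs = 100, jitterRatio = 0.1): number {
  const exp = baseMs * Math.pow(2, attempt - 1);              // 100, 200, 400, ...
  const jitter = exp * jitterRatio * (Math.random() * 2 - 1); // uniform in ±10%
  return Math.round(exp + jitter);
}
```

Swapping `const delay = 100 * Math.pow(2, attempt - 1)` for `const delay = backoffDelay(attempt)` in the retry loop above is all it takes.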
Decision Framework#
Is this Node.js single-process async race?
└── Mutex (Promise-based) or Idempotency Map
Is contention LOW on the same resource?
└── Optimistic (CAS / retry)
Is the problem about CAPACITY (N slots)?
└── Semaphore
Do you need to prevent DUPLICATE requests (retries)?
└── Idempotency token map
Do you need to DECOUPLE ingestion from processing?
└── AsyncQueue with concurrency limit
Are there MULTIPLE PODS / processes?
└── Redis distributed lock (SET NX PX + Lua release)
Do you need N parallel tasks to finish together?
└── Promise.all or CountDownLatch
Interview Cheat Sheet — What Slice SDE-3 Interviewers Want to Hear#
- Identify the race condition first — don't jump to the solution
- Mention idempotency unprompted — fintech engineers live and die by this
- Distinguish async concurrency vs distributed concurrency — they're different problems
- Always release locks in finally — show you think about failure paths
- Quantify — "this handles ~5000 RPS with p99 < 50ms under normal load"