NeuralSkills · Code Review

Concurrency Review

Review concurrent code for race conditions, deadlocks, thread safety, atomic operations, and shared state bugs.

Advanced · Free · Published: April 15, 2026
Compatible Tools: claude-code, chatgpt, gemini, copilot, cursor, windsurf, universal

The Problem

Concurrency bugs are the hardest class of software defects. They appear intermittently, vanish under debugging, and depend on timing that cannot be reliably reproduced. A race condition in a payment system processes a charge twice. A shared cache without proper locking returns stale data. A deadlock freezes the server at 3 AM under peak load. These bugs survive testing because they require specific interleaving of operations that unit tests never exercise.
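To see why these bugs survive testing, here is a minimal sketch of the check-then-act interleaving in single-threaded Node.js async code (names are illustrative, not from any real codebase):

```typescript
// Check-then-act race, forced deterministically: both withdrawals read the
// balance before either writes it back.
let balance = 100;

// Simulated I/O: yields control back to the event loop.
const tick = () => new Promise<void>((resolve) => setImmediate(resolve));

async function withdraw(amount: number): Promise<boolean> {
  if (balance >= amount) {   // check passes for both concurrent callers
    await tick();            // e.g. a database round-trip
    balance -= amount;       // act: the check is stale by now
    return true;
  }
  return false;
}

async function demo(): Promise<number> {
  await Promise.all([withdraw(80), withdraw(80)]); // both succeed
  return balance; // 100 - 80 - 80 = -60
}
```

Serializing the check and the act — a transaction, an atomic UPDATE, or a mutex — is what closes this window; a unit test that awaits each call in sequence never exercises it.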

The Prompt

Review the following code for concurrency issues. Act as a distributed systems engineer analyzing code that runs under concurrent access.

LANGUAGE/RUNTIME: [e.g., Node.js (single-threaded + async), Go (goroutines), Java (threads), Python (asyncio)]
CONCURRENCY MODEL: [async/await, threads, goroutines, actors, web workers]
SHARED RESOURCES: [e.g., database, in-memory cache, file system, external API]

CODE:
[paste your code here]

Analyze across these concurrency dimensions:

1. **Race Conditions**
   - Are read-modify-write sequences atomic?
   - Can two requests/threads interleave between check and action (TOCTOU)?
   - Are shared variables accessed without synchronization?
   - Example: checking `if (stock > 0)` then decrementing — another request can slip in between.

2. **Deadlocks & Livelocks**
   - Are multiple locks acquired in inconsistent order?
   - Can circular wait conditions form?
   - Are there indefinite blocking operations without timeouts?

3. **Async Correctness (JS/Python)**
   - Are promises handled correctly (no unhandled rejections)?
   - Are shared closures mutated across await boundaries?
   - Can events fire in unexpected order after an await?
   - Are callbacks registered before the event can fire?

4. **Database Concurrency**
   - Are transactions used for multi-step operations?
   - Is the correct isolation level set (read committed vs serializable)?
   - Are optimistic locking patterns (version columns) used where needed?
   - Can duplicate records be created by concurrent INSERT?

5. **Cache Consistency**
   - Can stale cached data be served after the source changes?
   - Is there a thundering herd problem (cache miss triggers N identical queries)?
   - Are cache invalidation and data update atomic?

6. **Resource Lifecycle**
   - Are connections, file handles, and locks properly released in all paths (including errors)?
   - Can resource exhaustion occur under burst traffic (connection pool, file descriptors)?
   - Are background tasks tracked to prevent orphaned operations?

For each issue, provide:
- **Location**: File and line
- **Scenario**: Specific interleaving of events that triggers the bug
- **Probability**: Likely under load / rare but catastrophic / theoretical
- **Impact**: Data corruption / duplicate processing / deadlock / stale data
- **Fix**: Concurrency-safe replacement code

Example Output

## Concurrency Review: 2 issues found

### Likely Under Load: Double Charge Race Condition
Location: src/services/payment.ts:15
Code:
  const balance = await getBalance(userId);    // T1 reads $100
  if (balance >= amount) {                      // T1 checks: 100 >= 80 ✓
    await deductBalance(userId, amount);        // T2 also reads $100, also passes check
  }                                             // Result: $100 - $80 - $80 = -$60
Scenario: Two concurrent purchase requests for user with $100 balance.
Fix (optimistic locking):
  const result = await db.query(
    "UPDATE balances SET amount = amount - $1 WHERE user_id = $2 AND amount >= $1 RETURNING amount",
    [amount, userId]
  );
  if (result.rowCount === 0) throw new InsufficientBalance();

### Rare But Catastrophic: Unhandled Promise in Background Task
Location: src/workers/emailQueue.ts:28
Code: `processEmail(msg).catch(console.log)` — if processEmail rejects, the error is dumped to stdout and swallowed.
Impact: Silent message loss — the failed message is dropped with no retry or dead-letter handling. (Removing the `.catch` entirely would be worse: an unhandled rejection exits the process in Node 15+.)
Fix: Add proper error handling with dead letter queue:
  processEmail(msg).catch(async (error) => {   // async: the handler awaits the DLQ push
    logger.error('Email processing failed', { messageId: msg.id, error });
    await deadLetterQueue.push(msg);
  });

When to Use

Run this on any code that handles concurrent requests (web servers, queue consumers, background jobs), shared state (caches, counters, balances), or database transactions involving multiple tables. Essential before deploying code that processes payments, manages inventory, or handles any operation where “doing it twice” has consequences.

Pro Tips

  • Describe the concurrency model — Node.js single-threaded async has fundamentally different risks than Java multi-threaded. Always specify the runtime.
  • Ask for a timing diagram — follow up with “Draw an ASCII timing diagram showing how two concurrent requests can interleave to trigger this race condition.”
  • Think in transactions — for database concurrency, ask “What is the minimum transaction isolation level needed to prevent this bug?”
  • Stress test mentally — ask “What happens if 100 users hit this endpoint simultaneously?” to force analysis of burst scenarios.
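For the last tip: in single-threaded Node.js, a tiny promise-chain mutex is often enough to serialize a critical section under burst traffic. A sketch, with a hypothetical class name:

```typescript
// Promise-chain mutex: queued sections run one at a time, in arrival order.
class AsyncMutex {
  private tail: Promise<void> = Promise.resolve();

  runExclusive<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.tail.then(fn);
    // Settle the chain either way so one failure doesn't block later callers.
    this.tail = result.then(() => undefined, () => undefined);
    return result;
  }
}
```

Wrapping an entire check-then-act sequence in `runExclusive` makes "100 users hit this endpoint simultaneously" behave like 100 sequential requests. Note it only protects a single process — across multiple server instances you still need a database-level lock or atomic update.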