Why This Matters
A thread that checks while (count == 0) {} can burn one full CPU core while making no progress. On a 16-core training host, one mistaken busy-wait loop can consume 6.25 percent of the machine before doing a single useful load from the queue.
The racy part is worse. If a consumer checks count == 0, gets descheduled, and a producer inserts an item and signals before the consumer actually sleeps, the consumer can sleep forever. Condition variables and semaphores exist to make this check-and-sleep transition atomic with respect to the mutex or counter that guards the condition.
Core Definitions
Condition variable
A condition variable is a waiting queue associated with a predicate over shared state, such as $count > 0$. The operation wait(cv, mutex) atomically releases mutex and parks the calling thread, then re-acquires mutex before returning.
Predicate
A predicate is the Boolean condition that makes it legal for a thread to proceed. The predicate lives in ordinary shared data protected by a mutex. The condition variable does not store the predicate.
Semaphore
A semaphore is an integer counter plus a waiting queue. Dijkstra's P operation waits until the counter is positive and then decrements it. Dijkstra's V operation increments the counter and wakes one waiter if present.
Monitor
A monitor packages shared state, a mutex, and condition variables behind methods that acquire and release the mutex in a disciplined way. Java synchronized blocks and C++ code using std::mutex, std::unique_lock, and std::condition_variable are monitor-style programming.
Busy-Waiting And The Check-Then-Sleep Race
The simplest wrong consumer looks like this.
```c
while (count == 0) {
    /* burn cycles */
}
item = buffer[get];
```
Even if count is atomic, this code has two defects. First, it occupies a hardware thread while the queue is empty. Second, it usually needs a separate lock to protect get, put, and the element array. Adding a lock in the wrong place creates a deadlock.
```c
pthread_mutex_lock(&m);
while (count == 0) {
    /* producer needs m to change count, so nobody can proceed */
}
item = buffer[get];
pthread_mutex_unlock(&m);
```
Dropping the mutex before sleeping without an atomic wait creates a missed wakeup. A concrete schedule with one consumer C and one producer P shows the failure.
| Step | Thread | Action | count | Consumer state |
|---|---|---|---|---|
| 1 | C | locks m, observes count == 0 | 0 | running |
| 2 | C | unlocks m, about to sleep | 0 | runnable |
| 3 | P | locks m, inserts item | 1 | runnable |
| 4 | P | calls signal(notEmpty) | 1 | signal has no waiter |
| 5 | C | sleeps on notEmpty | 1 | asleep forever |
The missing operation is atomic release-and-sleep. pthread_cond_wait(&notEmpty, &m) performs exactly that transition. The kernel or threading library ensures that no signal can pass between unlocking the mutex and placing the thread on the condition wait queue.
Condition Variable Protocol
The correct protocol is short.
```c
pthread_mutex_lock(&m);
while (!predicate) {
    pthread_cond_wait(&cv, &m);
}
/* predicate holds and m is locked */
pthread_mutex_unlock(&m);
```
The while is not decoration. POSIX permits spurious wakeups, so pthread_cond_wait may return even when nobody signaled the condition. A wakeup can also be real but no longer useful. Suppose two consumers sleep with count == 0. A producer inserts one item and calls pthread_cond_broadcast. Both consumers wake. One removes the item and sets count back to zero. The second consumer must re-check and sleep again.
Using if turns that schedule into an underflow.
```c
pthread_mutex_lock(&m);
if (count == 0) {
    pthread_cond_wait(&notEmpty, &m);
}
/* wrong: count may still be 0 here */
item = buffer[get];
pthread_mutex_unlock(&m);
```
signal(cv) wakes at least one waiting thread if one exists. broadcast(cv) wakes all current waiters. Neither operation stores a token inside the condition variable. If no thread is waiting, the signal is lost by design. The state change is what persists, not the notification.
A common rule is to change the predicate while holding the mutex, then signal before or after unlocking depending on the implementation style. Holding the mutex during the signal prevents a waiter from seeing stale state and makes the code's proof simpler.
```c
pthread_mutex_lock(&m);
count++;
pthread_cond_signal(&notEmpty);
pthread_mutex_unlock(&m);
```
Use signal when one state change can enable only one waiter, such as inserting one item into an empty buffer. Use broadcast when one state change can enable many waiters or when different waiters have different predicates on the same condition variable, such as a configuration phase changing from LOADING to READY.
Bounded Buffer With Two Condition Variables
A bounded buffer has a fixed capacity $N$, an array, two indices, and a count. The invariants are $0 \leq count \leq N$, $put \in [0,N)$, and $get \in [0,N)$.
```c
#include <pthread.h>

#define N 4

typedef struct {
    int data[N];
    int put;
    int get;
    int count;
    pthread_mutex_t m;
    pthread_cond_t notFull;
    pthread_cond_t notEmpty;
} bounded_buffer;

void bb_init(bounded_buffer *b) {
    b->put = 0;
    b->get = 0;
    b->count = 0;
    pthread_mutex_init(&b->m, 0);
    pthread_cond_init(&b->notFull, 0);
    pthread_cond_init(&b->notEmpty, 0);
}

void bb_put(bounded_buffer *b, int x) {
    pthread_mutex_lock(&b->m);
    while (b->count == N) {
        pthread_cond_wait(&b->notFull, &b->m);
    }
    b->data[b->put] = x;
    b->put = (b->put + 1) % N;
    b->count++;
    pthread_cond_signal(&b->notEmpty);
    pthread_mutex_unlock(&b->m);
}

int bb_get(bounded_buffer *b) {
    pthread_mutex_lock(&b->m);
    while (b->count == 0) {
        pthread_cond_wait(&b->notEmpty, &b->m);
    }
    int x = b->data[b->get];
    b->get = (b->get + 1) % N;
    b->count--;
    pthread_cond_signal(&b->notFull);
    pthread_mutex_unlock(&b->m);
    return x;
}
```
A numeric run with N = 4 shows why two condition variables are cleaner than one.
| Operation | put | get | count | data contents by index |
|---|---|---|---|---|
| start | 0 | 0 | 0 | [_, _, _, _] |
| put 10 | 1 | 0 | 1 | [10, _, _, _] |
| put 11 | 2 | 0 | 2 | [10, 11, _, _] |
| put 12 | 3 | 0 | 3 | [10, 11, 12, _] |
| get returns 10 | 3 | 1 | 2 | [old, 11, 12, _] |
| put 13 | 0 | 1 | 3 | [old, 11, 12, 13] |
| put 14 | 1 | 1 | 4 | [14, 11, 12, 13] |
At the last row the buffer is full. A producer waits on notFull, not notEmpty. A consumer waits on notEmpty, not notFull. Separating the queues avoids waking producers when only consumers can move, and avoids waking consumers when only producers can move.
The array slot marked old need not be cleared. The abstract state is the triple (put, get, count) plus the valid occupied positions. After get moves from 0 to 1, index 0 is free even though it still contains the integer 10 at the byte level.
Semaphores As Counted Resources
A semaphore stores capacity directly. If a buffer has four empty slots, an empty semaphore starts at 4. If it has zero filled slots, a full semaphore starts at 0. A separate mutex still protects the circular indices.
```c
#include <pthread.h>
#include <semaphore.h>

#define N 4

int data[N];
int put_i = 0;
int get_i = 0;
sem_t empty_slots;
sem_t full_slots;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void init_queue(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
}

void sem_put(int x) {
    sem_wait(&empty_slots);
    pthread_mutex_lock(&m);
    data[put_i] = x;
    put_i = (put_i + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&full_slots);
}

int sem_get(void) {
    sem_wait(&full_slots);
    pthread_mutex_lock(&m);
    int x = data[get_i];
    get_i = (get_i + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&empty_slots);
    return x;
}
```
For N = 4, after three sem_put calls and one sem_get, the semaphore values are empty_slots = 2 and full_slots = 2. The indices might be put_i = 3 and get_i = 1. The semaphores track how many operations may start. The mutex tracks which thread edits the indices.
A binary semaphore with initial value 1 can imitate mutual exclusion, but it is not the same abstraction as a mutex. A mutex has ownership. The thread that locks it must release it. A semaphore has no owner in the Dijkstra model. One thread can sem_wait, and another can sem_post. That property is useful for handoff protocols, but it is a poor fit for protecting a data structure because accidental cross-thread release hides bugs.
Monitors In C++ And Java Style
A monitor turns the pattern into a convention that the type system and syntax make hard to miss. In C++, std::unique_lock is required because std::condition_variable::wait must be able to release and re-acquire the lock, which std::lock_guard does not permit.
```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

class Queue {
    std::mutex m;
    std::condition_variable not_empty;
    std::queue<int> q;

public:
    void push(int x) {
        std::lock_guard<std::mutex> g(m);
        q.push(x);
        not_empty.notify_one();
    }

    int pop() {
        std::unique_lock<std::mutex> lk(m);
        not_empty.wait(lk, [&] { return !q.empty(); });
        int x = q.front();
        q.pop();
        return x;
    }
};
```
The predicate overload of wait is shorthand for the while loop. It does not remove the need for a predicate. It re-checks !q.empty() after every wakeup.
Java's synchronized plus wait, notify, and notifyAll follows the same monitor shape. The brief semantic difference worth remembering is that most production APIs use Mesa-style behavior. A signal moves a waiter to runnable, but the signaler keeps running until it releases the lock. Therefore the waiter must re-check the predicate after it gets the lock back.
Key Result
For condition variables, correctness is a protocol invariant rather than a theorem about the condition variable alone.
The shared predicate must be read and written only while holding the mutex passed to wait, and every thread must follow this rule.
For the bounded buffer, the safety invariant is this.
Every put executes only after count < N, then increments count by 1. Every get executes only after count > 0, then decrements count by 1. Because the mutex serializes the updates, no two threads can both observe the same old count and commit conflicting index changes.
For semaphores, the counted-resource invariant for the producer-consumer queue is this.
The put path decrements empty_slots before using a slot and increments full_slots after publishing the item. The get path decrements full_slots before reading an item and increments empty_slots after freeing the slot. The index mutex is still required; the semaphore counts permission, not the address of a particular array cell.
Common Confusions
Using if instead of while around wait
A condition variable wakeup is a hint to re-check shared state. It is not proof that the predicate is true. Use while (count == 0) pthread_cond_wait(...), not if.
Treating signal as a stored event
pthread_cond_signal does not leave a token for a future waiter. If no thread is waiting, the signal disappears. The durable fact must be the predicate change, such as count++.
Signaling without holding the lock
Some APIs permit notification after unlocking, but changing the predicate without the mutex is a bug. If the predicate update and the waiter's test are not serialized by the same mutex, the missed-wakeup schedule returns.
Replacing a mutex with a binary semaphore everywhere
A binary semaphore does not record which thread acquired it. That makes it useful for event handoff, but it removes ownership checks that catch wrong-thread unlocks in mutex-based code.
Exercises
Problem
A bounded buffer has N = 3, initial state put = 0, get = 0, count = 0. Execute put(5), put(6), get(), put(7), put(8). Give the final put, get, count, and array contents.
Problem
Construct a four-step missed-wakeup schedule for code that unlocks a mutex, then calls sleep(cv), instead of using atomic wait(cv, mutex).
Problem
For the semaphore version with N = 4, initial values empty_slots = 4 and full_slots = 0, execute two producers entering sem_put, then one consumer entering sem_get, assuming each operation completes. Give the final semaphore values.
References
Canonical:
- Arpaci-Dusseau and Arpaci-Dusseau, Operating Systems: Three Easy Pieces (2018), ch. 30-31, 48, covers condition variables, semaphores, and concurrency bugs
- Silberschatz, Galvin, and Gagne, Operating System Concepts (10th ed., 2018), ch. 6, covers synchronization tools, monitors, and semaphores
- Tanenbaum and Bos, Modern Operating Systems (4th ed., 2014), ch. 2.3, covers interprocess communication and classical synchronization
- Butenhof, Programming with POSIX Threads (1997), ch. 3-4, covers POSIX mutexes and condition variables
- Herlihy and Shavit, The Art of Multiprocessor Programming (2nd ed., 2020), ch. 8, covers monitors and blocking synchronization
Accessible:
- Bryant and O'Hallaron, Computer Systems: A Programmer's Perspective (3rd ed., 2016), ch. 12, gives practical pthread examples
- Linux man-pages project, pthread_cond_wait(3), pthread_cond_signal(3), and sem_wait(3)
- Oracle Multithreaded Programming Guide, condition variables and lost wakeup discussion
Next Topics
- /computationpath/threads-and-locks
- /computationpath/deadlock-and-lock-ordering
- /computationpath/scheduling-and-context-switches