The semaphore was proposed by Dijkstra (1965). It is a fundamental technique for managing concurrent processes or threads using a single integer value called a semaphore. A semaphore is a shared variable (or abstract data type) that is typically used to solve the critical-section problem and to achieve process synchronization in a multiprogramming environment.
Semaphores are manipulated by two atomic operations commonly named P and V (terms introduced by Dijkstra). Alternative names are wait (or down) for P, and signal (or up) for V.
Standard semantics (textbook, Dijkstra-style):
- P(s): wait until s > 0, then decrement s. A process executing P on a semaphore whose value is 0 blocks until the value becomes positive.
- V(s): increment s, waking one blocked process if any are waiting.

Atomicity of P and V means that the read-modify-write sequence on the semaphore variable is indivisible: no other process can interleave its own operations on that semaphore while the operation is in progress.

Consider two processes P1 and P2 and a binary semaphore s initialised to 1. A process executes P(s) before entering its critical section and V(s) after leaving it. If P1 executes P(s) and reduces s to 0, P2 executing P(s) will block until P1 executes V(s) and restores s to 1. In this way mutual exclusion is achieved using a binary semaphore.

Both mutexes and semaphores are synchronization primitives provided by operating systems (or libraries) to coordinate multiple threads or processes. They serve related but distinct purposes. Understanding the differences is important when designing concurrent programs.
For example, consider a producer and a consumer sharing a buffer of 4096 bytes: the producer signals that data is available and the consumer waits for that signal, which fits a semaphore's signalling model, while access to the buffer itself is a job for a lock.
Although a binary semaphore and a mutex can be implemented in similar ways, they are conceptually different. A mutex implies ownership and is a locking mechanism. A semaphore is a signalling/counting mechanism. For example, a thread holding a mutex must be the one to release it; a semaphore's V operation may be invoked by a different context to signal availability. Because of ownership semantics, a mutex is often preferred for pure mutual exclusion; using a binary semaphore to mimic a mutex can lead to subtle issues such as priority inversion unless handled carefully.
Q.1. Can a thread acquire more than one lock (Mutex)?
Ans: Yes. A thread may need more than one resource at a time and can therefore hold several locks simultaneously. If any of the locks is unavailable, the thread will wait (block) on that lock. Note that when multiple threads acquire multiple locks in inconsistent orders, deadlock becomes possible.
Q.2. Can a mutex be locked more than once?
Ans: A mutex is a lock; only one state (locked/unlocked) is associated with it. However, a recursive mutex (available on POSIX-compliant systems) can be locked more than once by the thread that owns it: a count is associated with it, although it still exposes only the one locked/unlocked state. The programmer must unlock the mutex as many times as it was locked.
Q.3. What happens if a non-recursive mutex is locked more than once?
Ans: Deadlock. If a thread that has already locked a mutex tries to lock it again, it enters the waiting list of that mutex, which results in a self-deadlock: no other thread can unlock the mutex on its behalf. An operating-system implementer can guard against this by recording the owner of the mutex and returning an error when the same thread attempts to lock it again.
Q.4. Are binary semaphore and mutex same?
Ans: No. We suggest treating them separately, as explained in the signalling-vs-locking discussion above. That said, a binary semaphore can suffer some of the same critical issues (for example, priority inversion) associated with a mutex; we will cover these in a later article. For pure mutual exclusion, a programmer should prefer a mutex rather than creating a semaphore with a count of 1.
Q.5. What is a mutex and critical section?
Ans: Some operating systems use the term "critical section" in their API (Windows, for example, provides a CRITICAL_SECTION object for locking within a single process). A mutex is usually a comparatively costly operation because of the protection protocols associated with it; ultimately, the objective of a mutex is atomic access. There are other ways to achieve atomic access, such as disabling interrupts, which can be much faster but hurts responsiveness; such lightweight alternative APIs are built on that approach.
Q.6. What are events?
Ans: Mutexes, semaphores, events, critical sections, and so on are all synchronization primitives with overlapping purposes, but they differ in semantics and in the cost of using them. An event is a pure signalling primitive: a thread waits on it until another thread sets it. Consult the OS documentation for the exact details of each.
Q.7. Can we acquire mutex/semaphore in an Interrupt Service Routine?
Ans: An ISR runs asynchronously in the context of the currently running thread. It is not recommended to make a blocking call on a synchronization primitive from an ISR: ISRs are meant to be short, and a blocking wait on a mutex or semaphore would block the currently running thread. However, an ISR can signal a semaphore or unlock a mutex, since these operations do not block.
Q.8. What we mean by "thread blocking on mutex/semaphore" when they are not available?
Ans: Every synchronization primitive has a waiting list associated with it. When the resource is not available, the requesting thread is moved from the processor's run queue to the waiting list of that primitive. When the resource becomes available, the highest-priority thread on the waiting list typically gets it (more precisely, this depends on the scheduling policy).
Q.9. Is it necessary that a thread must block always when resource is not available?
Ans: No. If the design specifies what is to be done when the resource is not available, the thread can take up that work instead (a different code branch). To support such requirements the OS provides non-blocking APIs, for example the POSIX pthread_mutex_trylock() function: when the mutex is not available it returns immediately, whereas pthread_mutex_lock() blocks the thread until the resource is available.
A lock variable is a simple software-based synchronization mechanism that can be implemented in user mode without kernel support. It typically uses busy waiting (spin-waiting). It can be used with more than two processes, but it has important limitations.
Conventionally, Lock = 0 implies the critical section is vacant (initial value), and Lock = 1 implies the critical section is occupied. A simple pseudo-code structure is:
```c
// Producer-consumer style buffer (illustrative)
char buffer[SIZE];
int count = 0, start = 0, end = 0;
struct lock l;

// initialize lock variable
lock_init(&l);

void put(char c) {
    // entry section
    lock_acquire(&l);
    // critical section begins
    while (count == SIZE) {
        // buffer full: briefly release the lock so a consumer can make room
        lock_release(&l);
        lock_acquire(&l);
    }
    count++;
    buffer[start] = c;
    start++;
    if (start == SIZE) {
        start = 0;
    }
    // critical section ends
    // exit section
    lock_release(&l);
}

char get() {
    char c;
    // entry section
    lock_acquire(&l);
    // critical section begins
    while (count == 0) {
        // buffer empty: briefly release the lock so a producer can add data
        lock_release(&l);
        lock_acquire(&l);
    }
    count--;
    c = buffer[end];
    end++;
    if (end == SIZE) {
        end = 0;
    }
    // critical section ends
    // exit section
    lock_release(&l);
    return c;
}
```

The lock-variable approach appears simple but can fail to guarantee mutual exclusion under realistic CPU instruction interleavings. To see why, note that at the assembly level the "test the lock, then set it" check is performed by separate instructions: a load of Lock into a register, a compare against 0, and a store of 1 back to Lock.
Consider two processes P1 and P2 competing for the critical section with initial Lock = 0. A problematic interleaving is:
1. P1 loads Lock into its register and reads 0, then is preempted before it can store.
2. P2 runs the full sequence: it reads Lock = 0, stores 1, and enters its critical section.
3. P1 resumes; its register still holds the stale value 0, so it also stores 1 and enters the critical section.
In this sequence P1 used a stale value of the lock read earlier, and so both P1 and P2 can be in the critical section simultaneously. This violates mutual exclusion. The essence of the problem is that the read-modify-write is not atomic: separate instructions allow interleavings that break correctness.
Any synchronization mechanism is judged by three primary properties:
- Mutual exclusion: at most one process is inside the critical section at any time.
- Progress: if no process is in the critical section, a process that wishes to enter is not postponed indefinitely.
- Bounded waiting: there is a bound on the number of times other processes may enter the critical section after a process has requested entry and before that request is granted.
Mutual exclusion is often considered the most important property; however, progress and bounded waiting are essential for fairness and liveness.