Dekker’s Algorithm | Operating System - Computer Science Engineering (CSE)

Dekker’s algorithm in Process Synchronization

To obtain mutual exclusion, bounded waiting, and progress, several algorithms have been devised, one of which is Dekker’s algorithm. To understand the algorithm, let us first look at the structure of a solution to the critical section problem.
A process is generally represented as:
do {
    //entry section
        critical section
    //exit section
        remainder section
} while (TRUE);
A solution to the critical section problem must ensure the following three conditions:

  1. Mutual Exclusion
  2. Progress
  3. Bounded Waiting

One solution that ensures all three conditions is Peterson’s solution; another is Dekker’s solution. Dekker’s algorithm was the first provably-correct solution to the critical section problem. It allows two threads to share a single-use resource without conflict, using only shared memory for communication, and it avoids the strict alternation of a naïve turn-taking algorithm. It was one of the first mutual exclusion algorithms ever invented.
Although there are several versions of Dekker’s solution, the final (fifth) version is the one that satisfies all of the above conditions and is the most efficient of them all.

Note: Dekker’s Solution, as presented here, ensures mutual exclusion between two processes only; it can be extended to more than two processes with proper use of arrays and variables.

Algorithm: It requires both an array of Boolean values and an integer variable. Here i denotes the current process and j the other process:
var flag: array [0..1] of boolean;
    turn: 0..1;
repeat
        flag[i] := true;
        while flag[j] do
                if turn = j then
                begin
                        flag[i] := false;
                        while turn = j do no-op;
                        flag[i] := true;
                end;

        critical section

        turn := j;
        flag[i] := false;

        remainder section
until false;

First Version of Dekker’s Solution: The idea is to use a shared thread number between the processes and to keep a process out of its critical section while the shared thread number indicates that the other one should run.
Main()
{
    int threadnumber = 1;
    startThreads();
}  
Thread1()
{
    do {
        // entry section
        // wait until threadnumber is 1
        while (threadnumber == 2) ;
        // critical section
        // exit section
        // give access to the other thread
        threadnumber = 2;
        // remainder section
    } while (completed == false)
}
Thread2()
{  
    do {
        // entry section
        // wait until threadnumber is 2
        while (threadnumber == 1) ;
        // critical section
        // exit section
        // give access to the other thread
        threadnumber = 1;  
        // remainder section  
    } while (completed == false)
}
The problem with the above implementation is lockstep synchronization: each thread depends on the other for its execution. If one of the threads finishes, the other runs once more, hands access back to the finished thread, and then waits for its turn; since the former has already terminated, it never runs again to return access, and the second thread waits forever.

Second Version of Dekker’s Solution: To remove lockstep synchronization, it uses two flags, one per thread, to indicate whether that thread is currently in its critical section, and updates them in the entry and exit sections.
Main()
{
    // flags to indicate if each thread is in
    // its critical section or not.
    boolean thread1 = false;
    boolean thread2 = false;
    startThreads();
}  
Thread1()
{  
    do {
        // entry section
        // wait while thread2 is in its critical section
        while (thread2 == true);
        // indicate thread1 entering its critical section
        thread1 = true;
        // critical section
        // exit section
        // indicate thread1 exiting its critical section
        thread1 = false;
        // remainder section
    } while (completed == false)
}  
Thread2()
{  
    do {
        // entry section
        // wait while thread1 is in its critical section
        while (thread1 == true) ;
        // indicate thread2 entering its critical section
        thread2 = true;
        // critical section
        // exit section
        // indicate thread2 exiting its critical section
        thread2 = false;
        // remainder section
    } while (completed == false)
}
The problem with this version is mutual exclusion itself. If a thread is preempted between testing the other thread's flag and setting its own (i.e. just before current_thread = true), both threads can enter their critical sections once the preempted thread resumes; the same can happen right at the start, when both flags are still false.

Third Version of Dekker’s Solution: To restore mutual exclusion, each thread sets its flag before, rather than after, the entry section's test.
Main()
{
    // flags to indicate if each thread is in
    // queue to enter its critical section
    boolean thread1wantstoenter = false;
    boolean thread2wantstoenter = false;
    startThreads();
}
Thread1()
{
    do {
        thread1wantstoenter = true;
        // entry section
        // wait while thread2 wants to enter
        // its critical section
        while (thread2wantstoenter == true) ;
        // critical section
        // exit section
        // indicate thread1 has completed
        // its critical section
        thread1wantstoenter = false;
        // remainder section
    } while (completed == false)
}
Thread2()
{
    do {
        thread2wantstoenter = true;
        // entry section
        // wait while thread1 wants to enter
        // its critical section
        while (thread1wantstoenter == true) ;
        // critical section
        // exit section
        // indicate thread2 has completed
        // its critical section
        thread2wantstoenter = false;
        // remainder section
    } while (completed == false)
}
The problem with this version is the possibility of deadlock: both threads can set their flags to true simultaneously, after which each waits forever for the other.

Fourth Version of Dekker’s Solution: Each thread backs off by lowering its flag for a small, random interval and then rechecks the condition; this eliminates the deadlock while preserving mutual exclusion.
Main()
{
    // flags to indicate if each thread is in
    // queue to enter its critical section
    boolean thread1wantstoenter = false;
    boolean thread2wantstoenter = false;
    startThreads();
}
Thread1()
{
    do {
        thread1wantstoenter = true;
        // entry section
        // wait while thread2 wants to enter
        // its critical section
        while (thread2wantstoenter == true) {
            // give access to the other thread:
            // lower the flag, wait for a random
            // amount of time, then raise it again
            thread1wantstoenter = false;
            // (sleep for a random interval here)
            thread1wantstoenter = true;
        }
        // critical section
        // exit section
        // indicate thread1 has completed
        // its critical section
        thread1wantstoenter = false;
        // remainder section
    } while (completed == false)
}
Thread2()
{
    do {
        thread2wantstoenter = true;
        // entry section
        // wait while thread1 wants to enter
        // its critical section
        while (thread1wantstoenter == true) {
            // give access to the other thread:
            // lower the flag, wait for a random
            // amount of time, then raise it again
            thread2wantstoenter = false;
            // (sleep for a random interval here)
            thread2wantstoenter = true;
        }
        // critical section
        // exit section
        // indicate thread2 has completed
        // its critical section
        thread2wantstoenter = false;
        // remainder section
    } while (completed == false)
}
The problem with this version is indefinite postponement. Moreover, a random back-off interval behaves erratically depending on the environment in which the algorithm runs, so this is not an acceptable solution for business-critical systems.

Dekker’s Algorithm: Final and Complete Solution: The idea is to use the notion of a favoured thread to decide entry to the critical section. The favoured thread alternates between the two threads, providing mutual exclusion while avoiding deadlock, indefinite postponement, and lockstep synchronization.
Main()
{
    // to denote which thread will enter next
    int favouredthread = 1;
    // flags to indicate if each thread is in
    // queue to enter its critical section
    boolean thread1wantstoenter = false;
    boolean thread2wantstoenter = false;
    startThreads();
}  
Thread1()
{
    do {
        thread1wantstoenter = true;
        // entry section
        // wait while thread2 wants to enter
        // its critical section
        while (thread2wantstoenter == true) {
            // if the 2nd thread is favoured
            if (favouredthread == 2) {
               // gives access to other thread
               thread1wantstoenter = false;
                // wait until this thread is favored
                while (favouredthread == 2) ;
                thread1wantstoenter = true;
            }
        }
        // critical section
        // favour the 2nd thread
        favouredthread = 2;
        // exit section
        // indicate thread1 has completed
        // its critical section
        thread1wantstoenter = false;
        // remainder section
    } while (completed == false)
}
Thread2()
{
    do {
        thread2wantstoenter = true;
        // entry section
        // wait while thread1 wants to enter
        // its critical section
        while (thread1wantstoenter == true) {
            // if the 1st thread is favoured
            if (favouredthread == 1) {
                // gives access to other thread
                thread2wantstoenter = false;
                // wait until this thread is favored
                while (favouredthread == 1) ;
                thread2wantstoenter = true;
            }
        }
        // critical section
        // favour the 1st thread
        favouredthread = 1;
        // exit section
        // indicate thread2 has completed
        // its critical section
        thread2wantstoenter = false;
        // remainder section
    } while (completed == false)
}
This version provides a complete solution to the critical section problem.

The document Dekker’s Algorithm | Operating System - Computer Science Engineering (CSE) is a part of the Computer Science Engineering (CSE) Course Operating System.

FAQs on Dekker’s Algorithm - Operating System - Computer Science Engineering (CSE)

1. What is Dekker's algorithm in process synchronization?
Ans. Dekker's algorithm is a mutual exclusion algorithm used in process synchronization. It allows two processes to access a shared resource without interference, combining per-process intention flags with a turn variable that breaks ties when both processes want to enter the critical section.
2. How does Dekker's algorithm work?
Ans. Dekker's algorithm uses two intention flags and a shared variable called 'turn'. A process first raises its flag to show that it wants to enter; if the other process's flag is also raised and it is the other process's turn, the first process lowers its flag and waits until the turn passes to it. After leaving the critical section, a process gives the turn to the other process and lowers its flag, allowing the other process to enter.
3. What is the purpose of Dekker's algorithm in process synchronization?
Ans. The purpose of Dekker's algorithm is to provide mutual exclusion between two processes accessing a shared resource. It ensures that only one process can enter the critical section at a time, preventing interference and maintaining data integrity.
4. What are the advantages of using Dekker's algorithm?
Ans. One advantage of Dekker's algorithm is that it does not rely on hardware support for synchronization. It can be implemented purely in software, making it portable across different systems. Additionally, it is a simple algorithm with low complexity, making it efficient for small-scale synchronization requirements.
5. Are there any limitations or drawbacks to Dekker's algorithm?
Ans. Yes, Dekker's algorithm has some limitations. It is not suitable for systems with more than two processes, as it does not provide mutual exclusion in such scenarios. It also suffers from the problem of busy waiting, where a process continuously checks if it is its turn, leading to wastage of CPU cycles. This can be mitigated by using other synchronization techniques like semaphores or locks.