Necessary Condition
A set of periodic real-time tasks is not RMA schedulable unless it satisfies the following necessary condition:
∑_{i=1}^{n} ei / pi = ∑_{i=1}^{n} ui ≤ 1
where ei is the worst case execution time and pi is the period of the task Ti, n is the number of tasks to be scheduled, and ui is the CPU utilization due to the task Ti. This test simply expresses the fact that the total CPU utilization due to all the tasks in the task set should not exceed 1.
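The necessary condition can be sketched in a few lines of Python. The representation of tasks as (execution time, period) pairs is an assumption made here for illustration, not notation from the text.

```python
# Sketch of the necessary RMA condition: total utilization must not exceed 1.
# Tasks are modeled as (e_i, p_i) pairs -- an assumed representation.

def total_utilization(tasks):
    """Sum of e_i / p_i over all tasks in the set."""
    return sum(e / p for e, p in tasks)

def passes_necessary_test(tasks):
    """True iff the task set satisfies the necessary condition for RMA."""
    return total_utilization(tasks) <= 1.0
```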
Sufficient Condition
The derivation of the sufficiency condition for RMA schedulability is an important result that was obtained by Liu and Layland in 1973. A formal derivation of Liu and Layland's results from first principles is beyond the scope of this discussion; we shall subsequently refer to this sufficiency condition as Liu and Layland's condition. A set of n periodic real-time tasks is schedulable under RMA, if
∑_{i=1}^{n} ui ≤ n(2^{1/n} − 1) …(3.4)
where ui is the utilization due to task Ti. Let us now examine the implications of this result. If a set of tasks satisfies the sufficient condition, then it is guaranteed that the set of tasks would be RMA schedulable.
Consider the case where there is only one task in the system, i.e. n = 1.
Substituting n = 1 in Expr. 3.4, we get,
∑_{i=1}^{1} ui ≤ 1∗(2^{1/1} − 1), i.e. ∑_{i=1}^{1} ui ≤ 1
Similarly for n = 2, we get,
∑_{i=1}^{2} ui ≤ 2(2^{1/2} − 1), i.e. ∑_{i=1}^{2} ui ≤ 0.828
For n = 3, we get,
∑_{i=1}^{3} ui ≤ 3(2^{1/3} − 1), i.e. ∑_{i=1}^{3} ui ≤ 0.78
For n → ∞, we get,
∑ ui ≤ lim_{n→∞} n(2^{1/n} − 1)
Fig. 30.3 Achievable Utilization with the Number of Tasks under RMA
Evaluation of the right hand side of Expr. 3.4 as n → ∞ involves an indeterminate expression of the type ∞·0. By applying L'Hospital's rule, we can verify that it evaluates to log_e 2 ≈ 0.693. From the above computations, it is clear that the maximum CPU utilization that can be achieved under RMA is 1. This is achieved when there is only a single task in the system. As the number of tasks increases, the achievable CPU utilization falls, and as n → ∞, the achievable utilization stabilizes at log_e 2, which is approximately 0.693. This is pictorially shown in Fig. 30.3. We now illustrate the applicability of the RMA schedulability criteria through a few examples.
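The bound n(2^{1/n} − 1) and its limiting behaviour can be checked numerically; the sketch below is illustrative only.

```python
import math

# Numeric sketch of the right-hand side of Expr. 3.4 and its limit.

def liu_layland_bound(n):
    """Utilization bound n(2^(1/n) - 1) for n tasks under RMA."""
    return n * (2 ** (1.0 / n) - 1)

# For n = 1 the bound is 1; it falls with n and approaches log_e 2.
```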
Examples
Example 5: Check whether the following set of periodic real-time tasks is schedulable under RMA on a uniprocessor: T1 = (e1=20, p1=100), T2 = (e2=30, p2=150), T3 = (e3=60, p3=200).
Solution: Let us first compute the total CPU utilization achieved due to the three given tasks.
∑_{i=1}^{3} ui = 20/100 + 30/150 + 60/200 = 0.7
This is less than 1; therefore the necessary condition for schedulability of the tasks is satisfied. Now checking the sufficiency condition: the task set is schedulable under RMA if Liu and Layland's condition given by Expr. 3.4 is satisfied. For n = 3, the maximum achievable utilization is given by:
3(2^{1/3} − 1) ≈ 0.78
The total utilization has already been found to be 0.7. Now substituting these in Liu and Layland’s criterion:
∑_{i=1}^{3} ui ≤ 3(2^{1/3} − 1)
Therefore, we get 0.7 < 0.78.
Expr. 3.4, a sufficient condition for RMA schedulability, is satisfied. Therefore, the task set is RMA-schedulable.
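The sufficiency test applied above can be sketched end to end; tasks as (e, p) pairs are an assumed representation for illustration.

```python
# Liu and Layland sufficiency test: total utilization against n(2^(1/n) - 1).

def rma_sufficient(tasks):
    """tasks: list of (e_i, p_i) pairs. True iff Expr. 3.4 is satisfied."""
    n = len(tasks)
    utilization = sum(e / p for e, p in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)
```

Note that a False result from this test is inconclusive; as discussed next, a task set failing it may still be RMA schedulable.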
Example 6: Check whether the following set of three periodic real-time tasks is schedulable under RMA on a uniprocessor: T1 = (e1=20, p1=100), T2 = (e2=30, p2=150), T3 = (e3=90, p3=200).
Solution: Let us first compute the total CPU utilization due to the given task set:
∑_{i=1}^{3} ui = 20/100 + 30/150 + 90/200 = 0.85
Now checking the Liu and Layland criterion:
∑_{i=1}^{3} ui ≤ 0.78
Since 0.85 is not ≤ 0.78, the task set is not RMA-schedulable.
The Liu and Layland test (Expr. 3.4) is pessimistic in the following sense.
If a task set passes the Liu and Layland test, then it is guaranteed to be RMA schedulable. On the other hand, even if a task set fails the Liu and Layland test, it may still be RMA schedulable.
It follows from this that even when a task set fails Liu and Layland's test, we should not conclude that it is not schedulable under RMA. We need to test further to check if the task set is RMA schedulable. A test that can be performed to check whether a task set is RMA schedulable when it fails the Liu and Layland test is Lehoczky's test. Lehoczky's test is expressed as Theorem 3.
Theorem 3
A set of periodic real-time tasks is RMA schedulable under any task phasing, iff all the tasks meet their respective first deadlines under zero phasing.
Fig. 30.4 Worst Case Response Time for a Task Occurs When It is in Phase with Its Higher Priority Tasks
A formal proof of this theorem is beyond the scope of this discussion. However, we can understand intuitively why Theorem 3 must be true, starting from the following fact.
The worst case response time for a task occurs when it is in phase with its higher priority tasks.
To see why this statement must be true, recall that under RMA, whenever a higher priority task is ready, the lower priority tasks cannot execute and have to wait. This implies that a lower priority task will have to wait for the entire duration of execution of every instance of a higher priority task that arrives during its own execution. More instances of a higher priority task occur during a task's execution when the two are in phase than when they are out of phase. This has been illustrated through a simple example in Fig. 30.4. In Fig. 30.4(a), a higher priority task T1 = (10, 30) is in phase with a lower priority task T2 = (60, 120), and the response time of T2 is 90 msec. However, in Fig. 30.4(b), when T1 has a 20 msec phase, the response time of T2 becomes 80 msec. Therefore, if a task meets its first deadline under zero phasing, it will meet all its deadlines.
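The zero-phasing worst case can be checked numerically with a fixed-point iteration on the response time. The recurrence R = ei + ∑ ⌈R/pk⌉∗ek over the higher priority tasks is a standard response-time-analysis sketch assumed here for illustration; it is not derived in the text.

```python
import math

# Worst-case (zero-phasing) response time by fixed-point iteration.
# higher_priority: list of (e_k, p_k) pairs of the higher priority tasks.

def response_time(e_i, higher_priority, deadline=None):
    r = e_i
    while True:
        nxt = e_i + sum(math.ceil(r / p) * e for e, p in higher_priority)
        if nxt == r:
            return r
        if deadline is not None and nxt > deadline:
            return None  # demand already exceeds the deadline
        r = nxt
```

For the tasks of Fig. 30.4(a), T1 = (10, 30) and T2 = (60, 120), the iteration converges to T2's response time of 90 msec, matching the figure.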
Example 7: Check whether the task set of Example 6 is actually schedulable under RMA.
Solution: Though the results of Liu and Layland’s test were negative as per the results of Example 6, we can apply the Lehoczky test and observe the following:
For the task T1: e1 < p1 holds, since 20 msec < 100 msec. Therefore, it would meet its first deadline (it has no higher priority tasks).
For the task T2: T1 is its higher priority task and, considering zero phasing, T1 can arrive ⌈150/100⌉ = 2 times before the deadline of T2. Therefore, (2e1 + e2) ≤ p2 holds, since 2∗20 + 30 = 70 msec < 150 msec. Therefore, T2 meets its first deadline.
For the task T3: (2e1 + 2e2 + e3) ≤ p3 holds, since 2∗20 + 2∗30 + 90 = 190 msec < 200 msec.
We have considered 2∗e1 and 2∗e2 since T1 and T2 occur twice within the first deadline of T3. Therefore, T3 meets its first deadline. So, the given task set is schedulable under RMA. The schedulability test for T3 has pictorially been shown in Fig. 30.5. Since all the tasks meet their first deadlines under zero phasing, they are RMA schedulable according to Lehoczky’s results.
Fig. 30.6 Instances of T1 over a single instance of Ti
Let us now try to derive a formal expression for this important result of Lehoczky. Let {T1, T2, …, Ti} be the set of tasks to be scheduled. Let us also assume that the tasks have been ordered in descending order of their priority. That is, task priorities are related as: pr(T1) > pr(T2) > … > pr(Ti), where pr(Ti) denotes the priority of the task Ti. Observe that the task T1 has the highest priority and the task Ti has the least priority. This priority ordering can be assumed without any loss of generality, since the required priority ordering among an arbitrary collection of tasks can always be achieved by a simple renaming of the tasks. Consider that the task Ti arrives at the time instant 0. Consider the example shown in Fig. 30.6. During the first instance of the task Ti, three instances of the task T1 have occurred. Each time T1 occurs, Ti has to wait since T1 has higher priority than Ti.
Let us now determine the exact number of times that T1 occurs within a single instance of Ti. This is given by ⌈pi / p1⌉. Since T1's execution time is e1, the total execution time demanded by T1 before the deadline of Ti is ⌈pi / p1⌉ ∗ e1. This expression can easily be generalized to consider the execution times of all tasks having higher priority than Ti (i.e. T1, T2, …, Ti−1). Therefore, the time for which Ti will have to wait due to all its higher priority tasks can be expressed as:
∑_{k=1}^{i−1} ⌈pi / pk⌉ ∗ ek …(3.5)
Expression 3.5 gives the total time required to execute Ti’s higher priority tasks for which Ti would have to wait. So, the task Ti would meet its first deadline, iff
ei + ∑_{k=1}^{i−1} ⌈pi / pk⌉ ∗ ek ≤ pi …(3.6)
That is, if the sum of the execution times of all higher priority task instances occurring before Ti's first deadline, together with the execution time of the task itself, does not exceed its period pi, then Ti would complete before its first deadline. Note that in Expr. 3.6, we have implicitly assumed that the task periods equal their respective deadlines, i.e. pi = di. If di < pi, then Expr. 3.6 needs to be modified as follows:
ei + ∑_{k=1}^{i−1} ⌈di / pk⌉ ∗ ek ≤ di …(3.7)
Note that even if Expr. 3.7 is not satisfied, the task set may still be schedulable. This is because Expr. 3.7 assumes zero phasing among all the tasks, which is the worst case; in a given problem, some tasks may have non-zero phasing. Therefore, even when a task set narrowly fails to meet Expr. 3.7, there is some chance that it may in fact be schedulable under RMA. To understand why this is so, consider a task set where one particular task Ti fails Expr. 3.7, making the task set not schedulable. The task misses its deadline when it is in phase with all its higher priority tasks. However, when the task has non-zero phasing with at least some of its higher priority tasks, it might actually meet its first deadline, contrary to the negative result of Expr. 3.7. Let us now consider two examples to illustrate the applicability of Lehoczky's results.
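The zero-phasing first-deadline test of Exprs. 3.6/3.7 can be sketched directly. Tasks are given as (e, p, d) tuples in decreasing order of priority; this representation is assumed for illustration.

```python
import math

# Lehoczky-style zero-phasing test per Exprs. 3.6/3.7: each task's own
# execution plus the ceiling-counted demand of its higher priority tasks
# must fit within its deadline.

def lehoczky_schedulable(tasks):
    """tasks: list of (e, p, d) tuples, highest priority first."""
    for i, (e_i, p_i, d_i) in enumerate(tasks):
        demand = e_i + sum(math.ceil(d_i / p_k) * e_k
                           for e_k, p_k, _d in tasks[:i])
        if demand > d_i:
            return False
    return True
```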
Example 8: Consider the following set of three periodic real-time tasks: T1=(10,20), T2=(15,60), T3=(20,120) to be run on a uniprocessor. Determine whether the task set is schedulable under RMA.
Solution: First let us try the sufficiency test for RMA schedulability. By Expr. 3.4 (Liu and Layland test), the task set is schedulable if ∑ ui ≤ 0.78.
∑ ui = 10/20 + 15/60 + 20/120 ≈ 0.92
This is greater than 0.78. Therefore, the given task set fails the Liu and Layland test. Since Expr. 3.4 is a pessimistic test, we need to test further.
Let us now try Lehoczky’s test. All the tasks T1, T2, T3 are already ordered in decreasing order of their priorities.
Testing for task T1:
Since e1 (10 msec) is less than d1 (20 msec), T1 would meet its first deadline.
Testing for task T2:
15 + ⌈60/20⌉ ∗ 10 ≤ 60, i.e. 15 + 30 = 45 ≤ 60 msec
The condition is satisfied. Therefore, T2 would meet its first deadline.
Testing for Task T3:
20 + ⌈120/20⌉ ∗ 10 + ⌈120/60⌉ ∗ 15 = 20 + 60 + 30 = 110 msec
This is less than T3’s deadline of 120. Therefore T3 would meet its first deadline. Since all the three tasks meet their respective first deadlines, the task set is RMA schedulable according to Lehoczky’s results.
Example 9: RMA is used to schedule a set of periodic hard real-time tasks in a system. Is it possible in this system that a higher priority task misses its deadline, whereas a lower priority task meets its deadlines? If your answer is negative, prove your denial. If your answer is affirmative, give an example involving two or three tasks scheduled using RMA where the lower priority task meets all its deadlines whereas the higher priority task misses its deadline.
Solution: Yes, it is possible under RMA that a higher priority task misses its deadline whereas a lower priority task meets its deadline. We show this by constructing an example. Consider the following task set: T1 = (e1=15, p1=20), T2 = (e2=6, p2=35), T3 = (e3=3, p3=100). For the given task set, it is easy to observe that pr(T1) > pr(T2) > pr(T3). That is, T1, T2, T3 are ordered in decreasing order of their priorities.
For this task set, T3 meets its deadline according to Lehoczky’s test since
e3 + ⌈p3 / p2⌉ ∗ e2 + ⌈p3 / p1⌉ ∗ e1 = 3 + (⌈100/35⌉ ∗ 6) + (⌈100/20⌉ ∗ 15)
= 3 + (3 ∗ 6) + (5 ∗ 15) = 96 ≤ 100 msec.
But, T2 does not meet its deadline since
e2 + ⌈p2 / p1⌉ ∗ e1 = 6 + (⌈35/20⌉ ∗ 15) = 6 + (2 ∗ 15) = 36 msec.
This is greater than the deadline of T2 (35 msec). As a consequence of the results of Example 9, observing that the lowest priority task of a given task set meets its first deadline does not let us conclude that the entire task set is RMA schedulable. On the contrary, it is necessary to check each task individually as to whether it meets its first deadline under zero phasing. Concluding from the lowest priority task alone that the entire task set can be feasibly scheduled under RMA would be a mistake.
Achievable CPU Utilization
Liu and Layland's results (Expr. 3.4) bound the CPU utilization below which a task set is guaranteed to be schedulable. It is clear from Expr. 3.4 and Fig. 30.3 that the Liu and Layland schedulability criterion is conservative, restricting the maximum achievable utilization of any task set guaranteed to be feasibly scheduled under RMA to about 0.69 when the number of tasks in the task set is large. However, (as you might have already guessed) this is a pessimistic figure. In fact, it has been found experimentally that for a large collection of tasks with independent periods, the maximum utilization below which a task set can feasibly be scheduled is on average close to 88%.
For harmonic tasks, the maximum achievable utilization (for a task set to have a feasible schedule) can be still higher. In fact, if all the task periods are harmonically related, then even a task set having 100% utilization can be feasibly scheduled. Let us first understand when the periods of a task set are said to be harmonically related. The task periods in a task set are harmonically related iff, for any two arbitrary tasks Ti and Tk in the task set, whenever pi > pk, pi is an integral multiple of pk. That is, whenever pi > pk, it should be possible to express pi as n ∗ pk for some integer n > 1; in other words, pk should evenly divide pi. An example of a harmonically related task set is the following: T1 = (5, 30), T2 = (8, 120), T3 = (12, 60).
It is easy to prove that a harmonically related task set with even 100% utilization can feasibly be scheduled.
Theorem 4
For a set of harmonically related tasks HS = {Ti}, the RMA schedulability criterion is given by ∑_{i=1}^{n} ui ≤ 1.
Proof: Let T1, T2, …, Tn be the tasks in the given task set, arranged in increasing order of their periods. That is, for any i and j, pi < pj whenever i < j. If this relationship is not satisfied, then a simple renaming of the tasks can achieve it. Now, according to Expr. 3.6, a task Ti meets its first deadline if ei + ∑_{k=1}^{i−1} ⌈pi / pk⌉ ∗ ek ≤ pi.
However, since the task set is harmonically related, pi can be written as m ∗ pk for some integer m. Using this, ⌈pi / pk⌉ = pi / pk, and Expr. 3.6 can be written as:
ei + ∑_{k=1}^{i−1} (pi / pk) ∗ ek ≤ pi
For Ti = Tn, we can write: en + ∑_{k=1}^{n−1} (pn / pk) ∗ ek ≤ pn.
Dividing both sides of this expression by pn, we get the required result.
Hence, the task set is schedulable iff ∑_{k=1}^{n} ek / pk ≤ 1, i.e. ∑_{i=1}^{n} ui ≤ 1.
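Whether a set of periods is harmonically related can be checked by sorting and testing adjacent divisibility: since divisibility is transitive, each sorted period dividing the next implies that every pair satisfies the definition. This helper is an illustrative sketch.

```python
# Check whether task periods are harmonically related: after sorting,
# each period must exactly divide the next one (transitivity of
# divisibility then covers every pair of periods).

def harmonically_related(periods):
    ps = sorted(periods)
    return all(q % p == 0 for p, q in zip(ps, ps[1:]))
```

For the example task set above, the periods 30, 120, and 60 are harmonically related, and the utilization bound becomes 1.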
Advantages and Disadvantages of RMA
In this section, we first discuss the important advantages of RMA over EDF, and then point out some disadvantages of using RMA. As pointed out earlier, RMA is very commonly used for scheduling real-time tasks in practical applications, and basic support is available in almost all commercial real-time operating systems for developing applications using RMA. RMA is simple and efficient, and it is the optimal static priority task scheduling algorithm. Unlike EDF, it requires very few special data structures. Most commercial real-time operating systems support real-time (static) priority levels for tasks. Tasks having real-time priority levels are arranged in multilevel feedback queues (see Fig. 30.7). Among the tasks in a single level, these operating systems generally provide an option of either time-sliced round-robin scheduling or FIFO scheduling.
RMA Transient Overload Handling: RMA possesses good transient overload handling capability. This essentially means that when a lower priority task does not complete within its planned completion time, it cannot cause any higher priority task to miss its deadline. Let us now examine how transient overload would affect a set of tasks scheduled under RMA. Will a delay in completion by a lower priority task affect a higher priority task? The answer is no: a lower priority task, even when it exceeds its planned execution time, cannot make a higher priority task wait, because of the basic principle of RMA − whenever a higher priority task is ready, it preempts any executing lower priority task. Thus, RMA is stable under transient overload, and a lower priority task overshooting its completion time cannot cause a higher priority task to miss its deadline.
Fig. 30.7 Multi-Level Feedback Queue
The disadvantages of RMA include the following: It is very difficult to support aperiodic and sporadic tasks under RMA. Further, RMA is not optimal when task periods and deadlines differ.
Deadline Monotonic Algorithm (DMA)
RMA is no longer an optimal scheduling algorithm for periodic real-time tasks when task deadlines and periods differ (i.e. di ≠ pi for some tasks in the task set to be scheduled). For such task sets, the Deadline Monotonic Algorithm (DMA) turns out to be more effective than RMA. DMA is essentially a variant of RMA that assigns priorities to tasks based on their deadlines, rather than based on task periods as done in RMA: DMA assigns higher priorities to tasks with shorter deadlines. When the relative deadline of every task is proportional to its period, RMA and DMA produce identical priority assignments. When the relative deadlines are arbitrary, DMA is more effective than RMA in the sense that it can sometimes produce a feasible schedule when RMA fails; on the other hand, RMA always fails when DMA fails. We now illustrate this using an example task set that is DMA schedulable but not RMA schedulable.
Example 10: Is the following task set schedulable by DMA? Also check whether it is schedulable using RMA. T1 = (e1=10, p1=50, d1=35), T2 = (e2=15, p2=100, d2=20), T3 = (e3=20, p3=200, d3=200) [time in msec].
Solution: First, let us check RMA schedulability of the given set of tasks by checking Lehoczky's criterion. The tasks are already ordered in descending order of their RMA priorities. Checking for T1:
10 msec < 35 msec. Hence, T1 would meet its first deadline. Checking for T2:
e2 + ⌈d2 / p1⌉ ∗ e1 = 15 + 10 = 25 msec, which is greater than d2 = 20 msec. Thus, T2 will miss its first deadline. Hence, the given task set cannot be feasibly scheduled under RMA.
Now let us check the schedulability using DMA:
Under DMA, the priority ordering of the tasks is as follows: pr(T2) > pr(T1) > pr(T3).
Checking for T2:
15 msec < 20 msec. Hence, T2 will meet its first deadline.
Checking for T1:
(15 + 10) < 35
Hence T1 will meet its first deadline.
Checking for T3:
(20 + ⌈200/100⌉ ∗ 15 + ⌈200/50⌉ ∗ 10) = 20 + 30 + 40 = 90 < 200
Therefore, T3 will meet its deadline.
Therefore, the given task set is schedulable under DMA but not under RMA.
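The only difference between the two policies is the sort key used for priority assignment, which can be made explicit in a short sketch; the (e, p, d) tuple representation is assumed for illustration.

```python
# Priority assignment: RMA sorts tasks by period, DMA by relative
# deadline (shorter value = higher priority). Tasks are (e, p, d) tuples.

def rma_order(tasks):
    """Tasks from highest to lowest RMA priority (shortest period first)."""
    return sorted(tasks, key=lambda t: t[1])

def dma_order(tasks):
    """Tasks from highest to lowest DMA priority (shortest deadline first)."""
    return sorted(tasks, key=lambda t: t[2])
```

For the task set of Example 10, RMA places T1 = (10, 50, 35) first, while DMA promotes T2 = (15, 100, 20) to the highest priority, which is what makes the set feasible.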
Context Switching Overhead
So far, while determining schedulability of a task set, we had ignored the overheads incurred on account of context switching. Let us now investigate the effect of context switching overhead on schedulability of tasks under RMA.
It is easy to realize that under RMA, whenever a task arrives, it preempts at most one task − the task that is currently running. From this observation, it can be concluded that in the worst case, each task incurs at most two context switches under RMA: one when it preempts the currently running task, and the other when it completes and the preempted task (or some other task) is dispatched to run. Of course, a task may incur just one context switch if it does not preempt any task − for example, when it arrives while the processor is idle or while a higher priority task is running. However, to determine the worst-case context switching overhead, we need to consider two context switches for every task.
For simplicity, we can assume that the context switching time is a constant c milliseconds. It follows that the net effect of context switches is to increase the execution time ei of each task Ti to at most ei + 2∗c. Therefore, to take context switching time into consideration, in all schedulability computations we need to replace ei by ei + 2∗c for each Ti.
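The adjustment above is a one-line transformation of the task set; as before, (e, p) tuples are an assumed representation.

```python
# Account for context switching under RMA: inflate each execution time
# by two context switches of c time units each (the worst case).

def with_context_switches(tasks, c):
    """tasks: list of (e, p) pairs; returns the inflated task set."""
    return [(e + 2 * c, p) for e, p in tasks]
```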
Example 11: Check whether the following set of periodic real-time tasks is schedulable under RMA on a uniprocessor: T1 = (e1=20, p1=100), T2 = (e2=30, p2=150), T3 = (e3=90, p3=200). Assume that context switching overhead does not exceed 1 msec, and is to be taken into account in schedulability computations.
Solution: The net effect of context switches is to increase the execution time of each task by two context switching times. Therefore, the utilization due to the task set is:
∑_{i=1}^{3} ui = 22/100 + 32/150 + 92/200 ≈ 0.89
Since ∑_{i=1}^{3} ui > 0.78, the task set is not RMA schedulable according to the Liu and Layland test.
Let us try Lehoczky’s test. The tasks are already ordered in descending order of their priorities.
Checking for task T1:
22 < 100
The condition is satisfied; therefore T1 meets its first deadline.
Checking for task T2:
(22∗2) + 32 = 76 < 150
The condition is satisfied; therefore T2 meets its first deadline.
Checking for task T3:
(22∗2) + (32∗2) + 92 = 200 ≤ 200.
The condition is satisfied; therefore T3 meets its first deadline.
Therefore, the task set can be feasibly scheduled under RMA even when context switching overhead is taken into consideration.
Self Suspension
A task might self-suspend when it performs input/output operations or waits for some event or condition to occur. When a task self-suspends, the operating system removes it from the ready queue, places it in the blocked queue, and takes up the next eligible task for scheduling. Thus, self-suspension introduces an additional scheduling point, which we did not consider in the earlier sections. Accordingly, we need to augment our definition of a scheduling point given in Sec. 2.3.1 (Lesson 2).
In event-driven scheduling, the scheduling points are defined by task completion, task arrival, and self-suspension events.
Let us now determine the effect of self-suspension on the schedulability of a task set. Consider a set of periodic real-time tasks {T1, T2, …, Tn} arranged in decreasing order of their priorities (i.e. increasing order of their periods). Let the worst case self-suspension time of a task Ti be bi, and let bti be the delay that the task Ti might incur due to its own self-suspension and the self-suspension of all its higher priority tasks. Then bti can be expressed as:
bti = bi + ∑_{k=1}^{i−1} min(ek, bk) …(3.8)
Self-suspension of a higher priority task Tk may affect the response time of a lower priority task Ti by as much as its execution time ek if ek < bk. This worst case delay might occur when the higher priority task after self-suspension starts its execution exactly at the time instant the lower priority task would have otherwise executed. That is, after self-suspension, the execution of the higher priority task overlaps with the lower priority task, with which it would otherwise not have overlapped. However, if ek > bk, then the self suspension of a higher priority task can delay a lower priority task by at most bk, since the maximum overlap period of the execution of a higher priority task due to self-suspension is restricted to bk.
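The delay term of Expr. 3.8 can be computed directly. Tasks are modeled as (e, b) pairs − execution time and worst-case self-suspension time − listed in decreasing order of priority; this representation is an assumption for illustration.

```python
# Worst-case self-suspension delay bt_i per Expr. 3.8: the task's own
# suspension time plus, for each higher priority task, the smaller of
# its execution time and its suspension time.

def suspension_delay(tasks, i):
    """tasks: list of (e, b) pairs, highest priority first; i: task index."""
    e_i, b_i = tasks[i]
    return b_i + sum(min(e_k, b_k) for e_k, b_k in tasks[:i])
```

With the data of Example 14 below − suspension times 3, 3, and 5 msec − this yields bt1 = 3, bt2 = 6, and bt3 = 11 msec.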
Note that in a system where some of the tasks are non-preemptable, the effect of self-suspension is much more severe than that computed by Expr. 3.8. The reason is that every time a task self-suspends, it loses the processor; after completing its self-suspension, it may additionally be blocked by a non-preemptable lower priority task. Thus, in a non-preemptable scenario, a task incurs delays due to the self-suspension of itself and its higher priority tasks, and also the delay caused by non-preemptable lower priority tasks. Obviously, a task cannot get delayed due to the self-suspension of a lower priority non-preemptable task.
The RMA task schedulability condition of Liu and Layland (Expr. 3.4) needs to change when we consider the effect of self-suspension of tasks. To consider the effect of self-suspension in Expr. 3.4, we need to substitute ei by (ei + bti). If we consider the effect of self-suspension on task completion time, the Lehoczky criterion (Expr. 3.6) would also have to be generalized:
ei + bti + ∑_{k=1}^{i−1} ⌈pi / pk⌉ ∗ ek ≤ pi …(3.9)
We have so far implicitly assumed that a task undergoes at most a single self-suspension. If a task undergoes multiple self-suspensions, then Expr. 3.9 derived above would need to be changed. We leave this as an exercise for the reader.
Example 14: Consider the following set of periodic real-time tasks: T1 = (e1=10, p1=50), T2 = (e2=25, p2=150), T3 = (e3=50, p3=200) [all in msec]. Assume that the self-suspension times of T1, T2, and T3 are 3 msec, 3 msec, and 5 msec, respectively. Determine whether the tasks would meet their respective deadlines, if scheduled using RMA.
Solution: The tasks are already ordered in descending order of their priorities. By using the generalized Lehoczky’s condition given by Expr. 3.9, we get:
For T1 to be schedulable:
(10 + 3) < 50
Therefore T1 would meet its first deadline.
For T2 to be schedulable:
(25 + 6 + ⌈150/50⌉ ∗ 10) = 61 < 150
Therefore, T2 meets its first deadline.
For T3 to be schedulable:
(50 + 11 + (⌈200/50⌉ ∗ 10 + ⌈200/150⌉ ∗ 25)) = 151 < 200
This inequality is also satisfied. Therefore, T3 would also meet its first deadline.
It can therefore be concluded that the given task set is schedulable under RMA even when self-suspension of tasks is considered.
Self Suspension with Context Switching Overhead
Let us examine the effect of context switches on the generalized Lehoczky test (Expr. 3.9), which takes self-suspension of tasks into account. In a fixed priority preemptable system, each task preempts at most one other task if there is no self-suspension. Therefore, each task suffers at most two context switches − one when it starts and another when it completes. It is easy to realize that each time a task self-suspends, it causes at most two additional context switches. By similar reasoning, when each task is allowed to self-suspend twice, four additional context switching overheads are incurred. Let us denote the maximum context switch time as c. The effect of a single self-suspension is to increase the execution time of each task Ti in the worst case from ei to (ei + 4∗c). Thus, context switching overhead in the presence of a single self-suspension of tasks can be taken care of by replacing the execution time of a task Ti by (ei + 4∗c) in Expr. 3.9. We can easily extend this argument to consider two, three, or more self-suspensions.
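The counting argument above − two base context switches plus two per self-suspension − reduces to a one-line formula, sketched here under that assumption.

```python
# Worst-case execution time inflation from context switches under RMA:
# 2 switches for the task itself plus 2 per self-suspension, each of
# duration c, following the counting argument in the text.

def inflated_execution_time(e, c, suspensions=1):
    """e: execution time, c: context switch time, suspensions: count."""
    return e + 2 * c * (1 + suspensions)
```

With no self-suspension this gives the earlier ei + 2∗c figure, and with a single self-suspension it gives ei + 4∗c, as used in Expr. 3.9.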