Clock-Driven Scheduling
Clock-driven schedulers make their scheduling decisions regarding which task to run next only at the clock interrupt points; that is, the scheduling points are determined by timer interrupts. Clock-driven schedulers are also called off-line schedulers because they fix the schedule before the system starts to run: the scheduler pre-determines which task will run when. Therefore, these schedulers incur very little run-time overhead. However, a prominent shortcoming of this class of schedulers is that they cannot satisfactorily handle aperiodic and sporadic tasks, since the exact times of occurrence of these tasks cannot be predicted. For this reason, schedulers of this type are also called static schedulers.
In this section, we study the basic features of two important clock-driven schedulers: table-driven and cyclic schedulers.
Table-Driven Scheduling
Table-driven schedulers pre-compute which task will run when, and store this schedule in a table at the time the system is designed or configured. Rather than having the scheduler compute the schedule automatically, the application programmer can be given the freedom to select a schedule for the set of tasks in the application and store it in a table (called the schedule table) to be used by the scheduler at run time.
An example of a schedule table is shown in Table 29.1. It shows that task T1 would be taken up for execution at time instant 0, T2 would start execution 3 milliseconds later, and so on. An important question that needs to be addressed at this point is: what would be the size of the schedule table required for a given set of periodic real-time tasks to be run on a system? An answer can be given as follows: if a set ST = {Ti} of n tasks is to be scheduled, then the entries in the table replicate themselves after LCM(p1, p2, …, pn) time units, where p1, p2, …, pn are the periods of T1, T2, …, Tn. For example, if we have the following three tasks: (e1=5 msecs, p1=20 msecs), (e2=20 msecs, p2=100 msecs), (e3=30 msecs, p3=250 msecs); then, the schedule will repeat after every LCM(20, 100, 250) = 500 msecs. So, for any given task set, it is sufficient to store entries only for an LCM(p1, p2, …, pn) duration in the schedule table. LCM(p1, p2, …, pn) is called the major cycle of the set of tasks ST.
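The major cycle computation above can be sketched in a few lines of Python (the function name `major_cycle` is illustrative, not from the source):

```python
from math import gcd
from functools import reduce

def major_cycle(periods):
    """Major cycle = LCM of all task periods (here, in msecs)."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

print(major_cycle([20, 100, 250]))  # 500
```

For the three-task example above, the schedule table therefore needs entries only for a 500 msec interval.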
A major cycle of a set of tasks is an interval of time on the time line such that in each major cycle, the different tasks recur identically.
In the reasoning we presented above for the computation of the size of a schedule table, one assumption that we implicitly made is that φi = 0. That is, all tasks are in phase.
Task | Start time in millisecs
T1   | 0
T2   | 3
T3   | 10
T4   | 12
T5   | 17

Table 29.1 An Example of a Table-Driven Schedule
However, tasks often do have non-zero phase. It would be interesting to determine what the major cycle would be when tasks have non-zero phase. The result of an investigation into this issue is given as Theorem 1.
Theorem 1
The major cycle of a set of tasks ST = {T1, T2, … , Tn} is LCM ({p1, p2, … , pn}) even when the tasks have arbitrary phasing.
Proof: As per our definition of a major cycle, even when tasks have non-zero phasing, task instances repeat the same way in each major cycle. Let us consider an example in which the occurrences of a task Ti in a major cycle are as shown in Fig. 29.4. In this example, there are k occurrences of the task Ti during a major cycle. The first occurrence of Ti starts φ time units from the start of the major cycle, so the last (i.e. kth) occurrence starts (k-1)∗pi time units after the first. The major cycle ends x time units after the start of this last occurrence. Of course, this must be the same in each major cycle.
Fig. 29.4 Major Cycle When a Task Ti has Non-Zero Phasing
Assume that the size of each major cycle is M. Then, from an inspection of Fig. 29.4, for the task to repeat identically in each major cycle:
M = (k-1)pi + φ + x …(2.1)
Now, for the task Ti to have identical occurrence times in each major cycle, φ + x must equal pi (see Fig. 29.4).
Substituting this in Expr. 2.1, we get, M = (k-1)∗ pi + pi = k∗ pi …(2.2)
So, the major cycle M is an integral multiple of pi. This argument holds for each task in the task set irrespective of its phase, so M is a common multiple of all the task periods. Since the major cycle is the smallest such interval, M = LCM({p1, p2, … , pn}).
Cyclic Schedulers
Cyclic schedulers are very popular and are being extensively used in the industry. A large majority of all small embedded applications being manufactured presently are based on cyclic schedulers. Cyclic schedulers are simple, efficient, and are easy to program. An example application where a cyclic scheduler is normally used is a temperature controller. A temperature controller periodically samples the temperature of a room and maintains it at a preset value. Such temperature controllers are embedded in typical computer-controlled air conditioners.
Fig. 29.5 Major and Minor Cycles in a Cyclic Scheduler
A cyclic scheduler repeats a pre-computed schedule. The pre-computed schedule needs to be stored only for one major cycle. Each task in the task set to be scheduled repeats identically in every major cycle. The major cycle is divided into one or more minor cycles (see Fig. 29.5). Each minor cycle is also sometimes called a frame. In the example shown in Fig. 29.5, the major cycle has been divided into four minor cycles (frames). The scheduling points of a cyclic scheduler occur at frame boundaries. This means that a task can start executing only at the beginning of a frame.
The frame boundaries are defined through the interrupts generated by a periodic timer. Each task is assigned to run in one or more frames. The assignment of tasks to frames is stored in a schedule table. An example schedule table is shown in Fig. 29.6.
Task Number | Frame Number
T3          | f1
T1          | f2
T3          | f3
T4          | f4

Fig. 29.6 An Example Schedule Table for a Cyclic Scheduler
The size of the frame to be used by the scheduler is an important design parameter and needs to be chosen very carefully. A selected frame size should satisfy the following three constraints.
1. Minimum Context Switching: This constraint is imposed to minimize the number of context switches occurring during task execution. The simplest interpretation of this constraint is that a task instance must complete running within its assigned frame. Unless a task completes within its allocated frame, it might have to be suspended and restarted in a later frame, requiring a context switch and its associated processing overhead. To avoid unnecessary context switches, the selected frame size should be at least as large as the execution time of each task, so that a task starting at a frame boundary is able to complete within the same frame. Formally, we can state this constraint as: max({ei}) ≤ F, where ei is the execution time of task Ti, and F is the frame size. Note that this constraint imposes a lower bound on the frame size, i.e., the frame size F must not be smaller than max({ei}).
2. Minimization of Table Size: This constraint requires that the number of entries in the schedule table should be minimum, in order to minimize the storage requirement of the schedule table. Remember that cyclic schedulers are used in small embedded applications with a very small storage capacity. So, this constraint is important to the commercial success of a product. The number of entries to be stored in the schedule table can be minimized when the minor cycle squarely divides the major cycle. When the minor cycle squarely divides the major cycle, the major cycle contains an integral number of minor cycles (no fractional minor cycles). Unless the minor cycle squarely divides the major cycle, storing the schedule for one major cycle would not be sufficient, as the schedules in the major cycle would not repeat and this would make the size of the schedule table large. We can formulate this constraint as:
⌊M/F⌋ = M/F …(2.3)
In other words, if the floor of M/F equals M/F, then the major cycle would contain an integral number of frames.
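As a sketch (the function name is illustrative), the frame sizes permitted by this constraint are exactly the divisors of the major cycle M:

```python
def frame_candidates(M):
    """Frame sizes F that squarely divide the major cycle M,
    i.e., for which floor(M/F) == M/F (Expr. 2.3)."""
    return [F for F in range(1, M + 1) if M % F == 0]

print(frame_candidates(20))  # [1, 2, 4, 5, 10, 20]
```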
Fig. 29.7 Satisfaction of a Task Deadline
3. Satisfaction of Task Deadline: This third constraint on frame size is necessary to meet task deadlines. It requires that between the arrival of a task and its deadline, at least one full frame must exist. This constraint is necessary because a task can be taken up for scheduling only at the start of a frame; without it, by the time a task could be taken up for scheduling, its deadline might already be imminent. If not even one full frame exists between the arrival of a task and its deadline, a situation as shown in Fig. 29.7 might arise. In this case, the task arrives some time after the kth frame has started. Obviously it cannot be taken up for scheduling in the kth frame and can only be taken up in the (k+1)th frame. But then it may be too late to meet its deadline, since the execution time of a task can be up to the size of a full frame: the task might complete only at the end of the (k+1)th frame, well after the deadline d has passed. We therefore need a full frame to exist between the arrival of a task and its deadline, as shown in Fig. 29.8, so that the deadline can be met.
Fig. 29.8 A Full Frame Exists Between the Arrival and Deadline of a Task
More formally, this constraint can be formulated as follows: suppose a task arrives ∆t time units after the start of a frame (see Fig. 29.8). Then, assuming that a single frame is sufficient to complete the task, the task can complete before its deadline iff
(2F − ∆t) ≤ di, or 2F ≤ (di + ∆t). …(2.4)
Remember that the value of ∆t might vary from one instance of the task to another. The worst case scenario (where the task is most likely to miss its deadline) occurs for the task instance having the minimum value of ∆t, such that ∆t > 0. This is the worst case because, under it, the task has to wait the longest before its execution can start.
It should be clear that if a task arrives just after a frame has started, then the task has to wait for nearly the full duration of the current frame before it can be taken up for execution. If a task misses its deadline at all, it would certainly be under such a situation. In other words, the worst case for a task to meet its deadline occurs for the instance having the minimum separation from the start of a frame. Determining this minimum separation value (i.e., min(∆t)) among all instances of a task helps in determining a feasible frame size. We show in Theorem 2 that min(∆t) is equal to gcd(F, pi). Consequently, this constraint can be written as:
for every Ti, 2F – gcd(F, pi) ≤ di …(2.5)
Note that this constraint defines an upper bound on the frame size for a task Ti: if the frame size is any larger than this bound, the task might miss its deadline. Expr. 2.5 constrains the frame size from the consideration of one task only. Considering all tasks, 2F ≤ gcd(F, pi) + di must hold for every task, i.e., the frame size must not exceed the minimum of (gcd(F, pi) + di)/2 over all tasks Ti.
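This third constraint can be sketched as a simple check; the sketch below assumes, as in the examples in this section, that each task's deadline equals its period (di = pi), and the function name is illustrative:

```python
from math import gcd

def satisfies_deadlines(F, periods):
    """Constraint 3 (Expr. 2.5) with d_i = p_i:
    for every task, 2F - gcd(F, p_i) <= d_i must hold."""
    return all(2 * F - gcd(F, p) <= p for p in periods)

print(satisfies_deadlines(2, [4, 5, 20]))  # True
print(satisfies_deadlines(4, [4, 5, 20]))  # False (fails for p = 5)
```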
Theorem 2
The minimum separation of the task arrival from the corresponding frame start time (min(∆t)), considering all instances of a task Ti, is equal to gcd(F, pi).
Proof: Let g = gcd(F, pi), where gcd denotes the greatest common divisor of its arguments. It follows from the definition of gcd that g must squarely divide both F and pi. Let Ti be a task with zero phasing. Now, assume that the theorem is violated for certain integers m and n, such that the occurrence Ti(n) arrives in the mth frame and the difference between the start time of the mth frame and this arrival time is less than g. That is, 0 < (m ∗ F − n ∗ pi) < g.
Dividing this expression throughout by g, we get:
0 < (m ∗ F/g – n ∗ pi/g) < 1 …(2.6)
However, F/g and pi/g are both integers because g is gcd(F, pi). Therefore, we can write F/g = I1 and pi/g = I2 for some integers I1 and I2. Substituting these in Expr. 2.6, we get 0 < m∗I1 − n∗I2 < 1. Since m∗I1 and n∗I2 are both integers, their difference cannot be a fractional value lying between 0 and 1. Therefore, this expression can never be satisfied.
It can therefore be concluded that the minimum time between a frame boundary and the arrival of the corresponding instance of Ti cannot be less than gcd(F, pi).

For a given task set it is possible that more than one frame size satisfies all three constraints. In such cases, it is better to choose the smallest such frame size, because the schedulability of a task set increases as more frames become available over a major cycle.
It should however be remembered that the mere fact that a suitable frame size can be determined does not mean that a feasible schedule will be found. It may so happen that there are not enough frames available in a major cycle to be assigned to all the task instances. We now illustrate how an appropriate frame size can be selected for cyclic schedulers through a few examples.
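The three constraints can be combined into a single search over candidate frame sizes. The following Python sketch assumes di = pi and the non-strict form of constraint 1 (F ≥ max({ei})); the function name is illustrative:

```python
from math import gcd
from functools import reduce

def feasible_frame_sizes(tasks):
    """tasks: list of (e_i, p_i) pairs, with d_i assumed equal to p_i.
    Returns every integer frame size satisfying all three constraints."""
    periods = [p for _, p in tasks]
    M = reduce(lambda a, b: a * b // gcd(a, b), periods)  # major cycle
    e_max = max(e for e, _ in tasks)
    return [F for F in range(1, M + 1)
            if F >= e_max                        # constraint 1
            and M % F == 0                       # constraint 2
            and all(2 * F - gcd(F, p) <= p       # constraint 3
                    for p in periods)]
```

An empty result means no single frame size works and some task may have to be split, as discussed above.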
Examples
Example 1: A cyclic scheduler is to be used to run the following set of periodic tasks on a uniprocessor: T1 = (e1=1, p1=4), T2 = (e2=1.5, p2=5), T3 = (e3=1, p3=20), T4 = (e4=2, p4=20). Select an appropriate frame size.
Solution: For the given task set, an appropriate frame size is the one that satisfies all the three required constraints. In the following, we determine a suitable frame size F which satisfies all the three required constraints.
Constraint 1: Let F be an appropriate frame size; then F ≥ max({ei}). From this constraint, we get F ≥ 2.
Constraint 2: The major cycle M for the given task set is M = LCM(4, 5, 20) = 20. M should be an integral multiple of the frame size F, i.e., M mod F = 0. This implies that F can take on the values 2, 4, 5, 10, 20. A frame size of 1 has been ruled out since it would violate constraint 1.
Constraint 3: To satisfy this constraint, we need to check whether a selected frame size F satisfies the inequality 2F − gcd(F, pi) ≤ di for each task Ti.
Let us first try frame size 2.
For F = 2 and task T1:
2 ∗ 2 − gcd(2, 4) ≤ 4 ≡ 4 − 2 ≤ 4
Therefore, for p1 the inequality is satisfied.
Let us try for F = 2 and task T2:
2 ∗ 2 − gcd(2, 5) ≤ 5 ≡ 4 − 1 ≤ 5
Therefore, for p2 the inequality is satisfied.
Let us try for F = 2 and task T3:
2 ∗ 2 − gcd(2, 20) ≤ 20 ≡ 4 − 2 ≤ 20
Therefore, for p3 the inequality is satisfied.
For F = 2 and task T4:
2 ∗ 2 − gcd(2, 20) ≤ 20 ≡ 4 − 2 ≤ 20
For p4 the inequality is satisfied.
Thus, constraint 3 is satisfied by all tasks for frame size 2. So, frame size 2 satisfies all the three constraints. Hence, 2 is a feasible frame size.
Let us try frame size 4
For F = 4 and task T1:
2 ∗ 4 − gcd(4, 4) ≤ 4 ≡ 8 − 4 ≤ 4
Therefore, for p1 the inequality is satisfied.
Let us try for F = 4 and task T2:
2 ∗ 4 − gcd(4, 5) ≤ 5 ≡ 8 − 1 ≤ 5
For p2 the inequality is not satisfied. Therefore, we need not look any further. Clearly, F = 4 is not a suitable frame size.
Let us now try frame size 5, to check if that is also feasible.
For F = 5 and task T1, we have
2 ∗ 5 − gcd(5, 4) ≤ 4 ≡ 10 − 1 ≤ 4
The inequality is not satisfied for T1. We need not look any further. Clearly, F = 5 is not a suitable frame size.
Let us now try frame size 10.
For F = 10 and task T1, we have
2 ∗ 10 − gcd(10, 4) ≤ 4 ≡ 20 − 2 ≤ 4
The inequality is not satisfied for T1. We need not look any further. Clearly, F=10 is not a suitable frame size.
Let us try if 20 is a feasible frame size.
For F = 20 and task T1, we have
2 ∗ 20 − gcd(20, 4) ≤ 4 ≡ 40 − 4 ≤ 4
Therefore, F = 20 is also not suitable.
So, only the frame size 2 is suitable for scheduling. Even though for Example 1 we could successfully find a suitable frame size that satisfies all the three constraints, it is quite probable that a suitable frame size may not exist for many problems. In such cases, to find a feasible frame size we might have to split the task (or a few tasks) that is (are) causing violation of the constraints into smaller sub-tasks that can be scheduled in different frames.
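The hand checks of Example 1 can be reproduced in a few lines of Python (assuming di = pi):

```python
from math import gcd

periods = [4, 5, 20, 20]        # periods of T1..T4; d_i = p_i assumed
for F in (2, 4, 5, 10, 20):     # candidates from constraints 1 and 2
    ok = all(2 * F - gcd(F, p) <= p for p in periods)
    print(F, "feasible" if ok else "not feasible")
# Only F = 2 is reported feasible
```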
Example 2: Consider the following set of periodic real-time tasks to be scheduled by a cyclic scheduler: T1 = (e1=1, p1=4), T2 = (e2=2, p2=5), T3 = (e3=5, p3=20). Determine a suitable frame size for the task set.
Solution:
Using the first constraint, we have F ≥ 5.
Using the second constraint, we have the major cycle M = LCM(4, 5, 20) = 20. So, the permissible values of F are 5, 10 and 20. Checking for a frame size that satisfies the third constraint, we can find that no value of F is suitable. To overcome this problem, we need to split the task that is making the task-set not schedulable. It is easy to observe that the task T3 has the largest execution time, and consequently due to constraint 1, makes the feasible frame sizes quite large.
We try splitting T3 into two or three tasks. After splitting T3 into three tasks, we have:
T3.1 = (e3.1=1, p3.1=20), T3.2 = (e3.2=2, p3.2=20), T3.3 = (e3.3=2, p3.3=20), each with deadline 20 (the period of T3).
Constraint 1 now gives F ≥ 2, so the candidate values of F are 2, 4, 5, 10, and 20. Checking constraint 3, we find that F = 2 is now a feasible frame size (F = 4 still fails for T2, since 2∗4 − gcd(4, 5) = 7 > 5). It is very difficult to come up with a clear set of guidelines to identify the exact task that is to be split, and the parts into which it needs to be split. Therefore, this needs to be done by trial and error. Further, as the number of tasks to be scheduled increases, this method of trial and error becomes impractical since each task needs to be checked separately. However, when the task set consists of only a few tasks, we can easily apply this technique to find a feasible frame size for a set of tasks otherwise not schedulable by a cyclic scheduler.
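The constraint-3 check for the split task set can be confirmed programmatically (a sketch, again taking di = pi):

```python
from math import gcd

# Periods of T1, T2, T3.1, T3.2, T3.3 after splitting; d_i = p_i
periods = [4, 5, 20, 20, 20]
feasible = [F for F in (2, 4, 5, 10, 20)
            if all(2 * F - gcd(F, p) <= p for p in periods)]
print(feasible)  # [2]
```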
A Generalized Task Scheduler
We have already stated that cyclic schedulers are overwhelmingly popular in low-cost real-time applications. However, our discussion of cyclic schedulers has so far been restricted to scheduling periodic real-time tasks, whereas many practical applications consist of a mixture of periodic, aperiodic, and sporadic tasks. In this section, we discuss how aperiodic and sporadic tasks can be accommodated by cyclic schedulers. Recall that the arrival times of aperiodic and sporadic tasks cannot be predicted exactly and are expressed statistically. Therefore, there is no way to assign aperiodic and sporadic tasks to frames without significantly lowering the overall achievable utilization of the system. In a generalized scheduler, a schedule (an assignment of tasks to frames) is initially prepared for the periodic tasks only. The sporadic and aperiodic tasks are then scheduled in the slack times that may be available in the frames. The slack time in a frame is the time left in the frame after the periodic task allocated to it completes its execution. Non-zero slack time can exist in a frame only when the execution time of the task allocated to it is smaller than the frame size.
A sporadic task is taken up for scheduling only if enough slack time is available for it to complete before its deadline. Therefore, a sporadic task is subjected to an acceptance test on its arrival. The acceptance test checks whether the task is likely to complete within its deadline when executed in the available slack times. If the deadline cannot be met, the scheduler rejects the task and the corresponding recovery routines are run. Since aperiodic tasks do not have strict deadlines, they can be taken up for scheduling without any acceptance test, and best effort is made to schedule them in the available slack times. Because no acceptance test is done for aperiodic tasks, no guarantee is given on their completion times; best effort is made to complete each such task as early as possible. An efficient implementation of this scheme stores the slack times in a table, which is consulted during the acceptance test to check the schedulability of arriving tasks. Another popular alternative is to accept aperiodic and sporadic tasks without any acceptance test and make a best effort to meet their respective deadlines.
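The slack-table-based acceptance test can be sketched as follows; the function and parameter names are illustrative assumptions, not from the source:

```python
def accept_sporadic(slack, arrival_frame, deadline_frame, e):
    """Accept an arriving sporadic task of execution time e only if the
    total slack in the frames between its arrival and its deadline is
    enough to complete it; otherwise it is rejected and recovery
    routines are run. slack[k] is the slack time left in frame k."""
    return sum(slack[arrival_frame:deadline_frame]) >= e
```

For instance, with slack times [1, 0, 2, 1], a sporadic task needing 3 time units whose deadline spans frames 0 through 2 would be accepted (1 + 0 + 2 = 3), while the same task arriving one frame later would be rejected.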
Pseudo-code for a Generalized Scheduler: The following is the pseudo-code for the generalized cyclic scheduler we discussed, which schedules periodic, aperiodic and sporadic tasks. It is assumed that the pre-computed schedule for periodic tasks is stored in a schedule table, and that the sporadic tasks have already been subjected to an acceptance test, with only those that passed the test available for scheduling.
cyclic-scheduler() {
    current-task T = Schedule-Table[k];
    k = k + 1;
    k = k mod N;                // N is the total number of tasks in the schedule table
    dispatch-current-task(T);
    schedule-sporadic-tasks();  // Current task T completed early, sporadic tasks can be taken up
    schedule-aperiodic-tasks(); // At the end of the frame, the running task is pre-empted if not complete
    idle();                     // No task to run, idle
}
The cyclic scheduler routine cyclic-scheduler() is activated at the end of every frame by a periodic timer. If the current task is not complete by the end of the frame, then it is suspended and the task to be run in the next frame is dispatched by invoking the routine cyclic-scheduler(). If the task scheduled in a frame completes early, then any existing sporadic or aperiodic task is taken up for execution.
Comparison of Cyclic with Table-Driven Scheduling
Both table-driven and cyclic schedulers are important clock-driven schedulers. A cyclic scheduler needs to set a periodic timer only once, at application initialization time. This timer continues to generate an interrupt exactly at every frame boundary. In table-driven scheduling, by contrast, a timer has to be set every time a task starts to run. The execution time of a typical real-time task is usually of the order of a few milliseconds, so a call to set the timer is made every few milliseconds. This represents a significant overhead and results in degraded system performance. Therefore, a cyclic scheduler is more efficient than a table-driven scheduler. This is probably a reason why cyclic schedulers are so overwhelmingly popular, especially in embedded applications. However, if the overhead of setting a timer could be ignored, a table-driven scheduler would be more efficient than a cyclic scheduler, because the frame size a cyclic scheduler chooses must be at least as long as the largest task execution time in the task set. This is a source of inefficiency, since processor time is wasted on those tasks whose execution times are smaller than the chosen frame size.