Q: A device with data transfer rate 10 KB/sec is connected to a CPU. Data is transferred byte-wise. Let the interrupt overhead be 4 microseconds. The byte transfer time between the device interface register and CPU or memory is negligible. What is the minimum performance gain of operating the device under interrupt mode over operating it under program-controlled mode?
In programmed I/O, the CPU polls continuously. To transfer 1 byte, the CPU polls for 1/10^4 sec = 10^-4 sec = 100 microseconds of processing time. In interrupt mode, the CPU is interrupted on completion of each I/O transfer, so to transfer 1 byte it does only 4 microseconds of processing (the transfer time between other components is negligible). Gain = 100 / 4 = 25.
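The same arithmetic can be checked with a minimal Python sketch; the variable names below (transfer_rate_bps, interrupt_overhead_us) are illustrative choices, not from the question.

```python
# Performance gain of interrupt-driven I/O over programmed (polling) I/O.
# Figures are taken from the question; variable names are illustrative.

transfer_rate_bps = 10 * 1000        # 10 KB/sec = 10,000 bytes/sec
interrupt_overhead_us = 4            # CPU time per interrupt, in microseconds

# Programmed I/O: the CPU polls for the whole inter-byte interval.
poll_time_us = 1 / transfer_rate_bps * 1_000_000   # 100 microseconds per byte

# Interrupt mode: the CPU only pays the interrupt overhead per byte
# (byte transfer time is stated to be negligible).
gain = poll_time_us / interrupt_overhead_us

print(f"CPU time per byte, polling:   {poll_time_us:.0f} us")
print(f"CPU time per byte, interrupt: {interrupt_overhead_us} us")
print(f"Performance gain: {gain:.0f}")             # -> 25
```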
Given:
Data transfer rate = 10 KB/sec
Interrupt overhead = 4 microseconds
Byte transfer time = negligible
To find: Minimum performance gain of operating the device under interrupt mode over operating it under program-controlled mode
Approach:
In program-controlled mode, the CPU continuously polls the device to check whether data is available for transfer. This wastes CPU time and slows down the system. In interrupt mode, by contrast, the device signals the CPU only when data is ready, which frees the CPU for other work. The performance gain is the ratio of CPU time spent per byte in the two modes:
Performance gain = CPU time per byte in program-controlled mode / CPU time per byte in interrupt mode
We know that data transfer rate = 10 KB/sec = 10,000 bytes/sec
Time taken to transfer one byte = 1 / 10,000 sec = 0.1 milliseconds = 100 microseconds
Therefore, time taken to transfer n bytes = n * 100 microseconds
In program-controlled mode:
The CPU polls for the entire interval between byte arrivals, so it is busy for the full 100 microseconds each byte takes.
CPU time spent per byte = 100 microseconds
Total CPU time for n bytes = n * 100 microseconds
(There is no interrupt overhead in this mode; the CPU never stops polling.)
In interrupt mode:
The CPU is free until the device raises an interrupt on completion of each byte transfer. Servicing that interrupt costs 4 microseconds, and the byte transfer time itself is negligible.
CPU time spent per byte = 4 microseconds
Total CPU time for n bytes = n * 4 microseconds
Calculating the gain:
Performance gain = (n * 100) / (n * 4) = 100 / 4 = 25
The factor of n cancels, so the gain does not depend on how much data is transferred. Since the 4-microsecond overhead is the only CPU cost per byte in interrupt mode, this ratio is the minimum gain the question asks for.
Therefore, the minimum performance gain of operating the device under interrupt mode over operating it under program-controlled mode is 25.
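For completeness, a small sketch below totals the CPU busy time in both modes for a few values of n, confirming that the factor of n cancels; the constants and the helper name are assumptions for illustration, not part of the question.

```python
# Total CPU busy time for transferring n bytes in each mode.
# The 100 us/byte and 4 us/byte figures follow from the derivation above;
# the values of n and the function name are illustrative choices.

POLL_US_PER_BYTE = 100   # CPU polls the full inter-byte interval
INT_US_PER_BYTE = 4      # interrupt overhead per byte; transfer time negligible

def cpu_busy_us(n_bytes: int, us_per_byte: float) -> float:
    """CPU time (in microseconds) charged for transferring n_bytes."""
    return n_bytes * us_per_byte

for n in (1, 100, 10_000):
    polling = cpu_busy_us(n, POLL_US_PER_BYTE)
    interrupt = cpu_busy_us(n, INT_US_PER_BYTE)
    # The ratio is 100/4 = 25 regardless of n.
    print(f"n={n:>6}: polling={polling:>10.0f} us, "
          f"interrupt={interrupt:>8.0f} us, gain={polling / interrupt:.0f}")
```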