Category: Operating System

SSTF Algorithm | Disk Scheduling Algorithms

Disk Scheduling Algorithms-

 

Before you go through this article, make sure that you have gone through the previous article on Disk Scheduling.

 

We have discussed-

  • Disk scheduling algorithms are used to schedule multiple requests for accessing the disk.
  • The purpose of disk scheduling algorithms is to reduce the total seek time.

 

Various disk scheduling algorithms are-

  1. FCFS Algorithm
  2. SSTF Algorithm
  3. SCAN Algorithm
  4. C-SCAN Algorithm
  5. LOOK Algorithm
  6. C-LOOK Algorithm

In this article, we will discuss the SSTF Disk Scheduling Algorithm.

 

SSTF Disk Scheduling Algorithm-

 

  • SSTF stands for Shortest Seek Time First.
  • This algorithm services next the request that requires the least head movement from the current head position, regardless of direction.
  • Ties are broken in the current direction of head movement (a minimal sketch of this selection rule is given after this list).
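
A minimal C sketch of this selection rule is given below. It is illustrative only- the function name sstf_total_seek, the use of calloc, and the direction parameter used for tie-breaking are assumptions made here, not something defined in these notes.

#include <stdlib.h>   /* abs, calloc, free */

/* Returns the total head movement (in cylinders) under SSTF.
   head      : initial position of the disk head
   direction : +1 if the head is initially moving towards larger cylinder
               numbers, -1 otherwise (used only to break distance ties)
   requests  : pending cylinder requests
   n         : number of pending requests                                */
int sstf_total_seek(int head, int direction, const int requests[], int n)
{
    int *served = calloc(n, sizeof *served);   /* 0 = not yet serviced */
    int total = 0;

    for (int count = 0; count < n; count++) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (served[i])
                continue;
            int dist = abs(requests[i] - head);
            if (best == -1 || dist < abs(requests[best] - head)) {
                best = i;                                    /* strictly closer */
            } else if (dist == abs(requests[best] - head) &&
                       (requests[i] - head) * direction > 0) {
                best = i;                 /* tie: prefer the current direction */
            }
        }
        total += abs(requests[best] - head);          /* seek to chosen request */
        if (requests[best] != head)
            direction = (requests[best] > head) ? +1 : -1;
        head = requests[best];
        served[best] = 1;
    }

    free(served);
    return total;
}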

 

Advantages-

 

  • It reduces the total seek time as compared to FCFS.
  • It provides increased throughput.
  • It provides lower average response time and waiting time.

 

Disadvantages-

 

  • There is an overhead of finding out the closest request.
  • The requests which are far from the head might starve, i.e. wait indefinitely to get serviced.
  • It provides high variance in response time and waiting time.
  • Frequent switching of the head's direction slows down the algorithm.

 

PRACTICE PROBLEMS BASED ON SSTF DISK SCHEDULING ALGORITHM-

 

Problem-01:

 

Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The SSTF scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is _______.

 

Solution-

 

 

On applying SSTF, the head services the requests in the order 53 → 65 → 67 → 41 → 14 → 98 → 122 → 124 → 183. The initial tie between cylinders 41 and 65 (both 12 cylinders away) is broken towards the larger cylinder numbers, the current direction of head movement.

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (67 – 41) + (41 – 14) + (98 – 14) + (122 – 98) + (124 – 122) + (183 – 124)

= 12 + 2 + 26 + 27 + 84 + 24 + 2 + 59

= 236
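
As a cross-check, the hypothetical sstf_total_seek( ) sketch shown earlier in this article reproduces this result-

int queue[] = { 98, 183, 41, 122, 14, 124, 65, 67 };
int moves   = sstf_total_seek(53, +1, queue, 8);   /* 236; +1 = initially moving towards larger cylinders */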

 

Problem-02:

 

Consider a disk system with 100 cylinders. The requests to access the cylinders occur in following sequence-

4, 34, 10, 7, 19, 73, 2, 15, 6, 20

Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes 1 ms to move from one cylinder to adjacent one and shortest seek time first policy is used?

  1. 95 ms
  2. 119 ms
  3. 233 ms
  4. 276 ms

 

Solution-

 

 

On applying SSTF, the head services the requests in the order 50 → 34 → 20 → 19 → 15 → 10 → 7 → 6 → 4 → 2 → 73.

Total head movements incurred while servicing these requests

= (50 – 34) + (34 – 20) + (20 – 19) + (19 – 15) + (15 – 10) + (10 – 7) + (7 – 6) + (6 – 4) + (4 – 2) + (73 – 2)

= 16 + 14 + 1 + 4 + 5 + 3 + 1 + 2 + 2 + 71

= 119

 

Time taken for one head movement = 1 msec. So,

Time taken for 119 head movements

= 119 x 1 msec

= 119 msec
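
As a cross-check with the hypothetical sstf_total_seek( ) sketch shown earlier in this article-

int queue[] = { 4, 34, 10, 7, 19, 73, 2, 15, 6, 20 };
int moves   = sstf_total_seek(50, +1, queue, 10);   /* 119; no distance tie occurs, so the direction argument does not matter here */
/* time = 119 head movements x 1 msec = 119 msec */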

 

Thus, Option (B) is correct.

 


Disk Scheduling | Disk Scheduling Algorithms

Disk Scheduling-

 

Before you go through this article, make sure that you have gone through the previous article on Magnetic Disk.

 

Disk scheduling is a technique used by the operating system to schedule multiple requests for accessing the disk.

 

Disk Scheduling Algorithms-

 

  • The algorithms used for disk scheduling are called disk scheduling algorithms.
  • The purpose of disk scheduling algorithms is to reduce the total seek time.

 

Various disk scheduling algorithms are-

 

 

  1. FCFS Algorithm
  2. SSTF Algorithm
  3. SCAN Algorithm
  4. C-SCAN Algorithm
  5. LOOK Algorithm
  6. C-LOOK Algorithm

 

In this article, we will discuss the FCFS Disk Scheduling Algorithm.

 

FCFS Disk Scheduling Algorithm-

 

  • As the name suggests, this algorithm services the requests in the order in which they arrive in the disk queue.
  • It is the simplest disk scheduling algorithm (a minimal sketch is given after this list).
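
A minimal C sketch of this behaviour is given below. It is illustrative only- the function name fcfs_total_seek is an assumption made here, not something defined in these notes.

#include <stdlib.h>   /* abs */

/* Returns the total head movement (in cylinders) under FCFS:
   the head simply visits the pending requests in their arrival order. */
int fcfs_total_seek(int head, const int requests[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(requests[i] - head);   /* seek distance to the next request */
        head = requests[i];                 /* the head is now at that cylinder  */
    }
    return total;
}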

 

Advantages-

 

  • It is simple, easy to understand and implement.
  • It does not cause starvation to any request.

 

Disadvantages-

 

  • It results in increased total seek time.
  • It is inefficient.

 

PRACTICE PROBLEM BASED ON FCFS DISK SCHEDULING ALGORITHM-

 

Problem-

 

Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The FCFS scheduling algorithm is used. The head is initially at cylinder number 53. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is _______.

 

Solution-

 

 

Under FCFS, the head services the requests in arrival order: 53 → 98 → 183 → 41 → 122 → 14 → 124 → 65 → 67.

Total head movements incurred while servicing these requests

= (98 – 53) + (183 – 98) + (183 – 41) + (122 – 41) + (122 – 14) + (124 – 14) + (124 – 65) + (67 – 65)

= 45 + 85 + 142 + 81 + 108 + 110 + 59 + 2

= 632
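
As a cross-check, the hypothetical fcfs_total_seek( ) sketch shown earlier in this article reproduces this result-

int queue[] = { 98, 183, 41, 122, 14, 124, 65, 67 };
int moves   = fcfs_total_seek(53, queue, 8);   /* 632 */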

 


Process Synchronization | Practice Problems

Process Synchronization-

 

Before you go through this article, make sure that you have gone through the previous articles on Process Synchronization.

 

We have discussed various synchronization mechanisms in the previous articles.

 

In this article, we will discuss practice problems based on synchronization mechanisms.

 

PRACTICE PROBLEMS BASED ON SYNCHRONIZATION MECHANISMS-

 

Problem-01:

 

The enter_CS( ) and leave_CS( ) functions to implement critical section of a process are realized using test and set instruction as follows-

 

void enter_CS(X)
{
   while (test-and-set(X));
}
void leave_CS(X)
{
   X = 0;
}

 

In the above solution, X is a memory location associated with the CS and is initialized to 0. Now, consider the following statements-

  1. The above solution to CS problem is deadlock-free
  2. The solution is starvation free
  3. The processes enter CS in FIFO order
  4. More than one process can enter CS at the same time.

Which of the above statements is true?

  1. I only
  2. I and II
  3. II and III
  4. IV only

 

Solution-

 

Clearly, the given mechanism is a test-and-set lock, which has the following characteristics-

  • It ensures mutual exclusion.
  • It ensures freedom from deadlock.
  • It may cause some processes to starve, i.e. wait indefinitely to enter the critical section.
  • It does not guarantee that processes enter the critical section in FIFO order; if it did, there would be no starvation.

 

Thus, Option (A) is correct.
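
For reference, the test-and-set lock in this problem can be approximated in C11 using <stdatomic.h>. This is only a sketch- the exam's test-and-set is a single hardware instruction, which is emulated here by atomic_flag_test_and_set( ).

#include <stdatomic.h>

static atomic_flag X = ATOMIC_FLAG_INIT;   /* clear (0) = lock available */

void enter_CS(void)
{
    /* Atomically set the flag and get its old value; keep spinning while the
       old value was already set, i.e. while another process holds the lock. */
    while (atomic_flag_test_and_set(&X))
        ;                                  /* busy wait */
}

void leave_CS(void)
{
    atomic_flag_clear(&X);                 /* X = 0 : release the lock */
}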

 

Problem-02:

 

Fetch_And_Add(X,i) is an atomic Read-Modify-Write instruction that reads the value of memory location X, increments it by the value i, and returns the old value of X. It is used in the pseudo code shown below to implement a busy-wait lock. L is an unsigned integer shared variable initialized to 0. The value 0 corresponds to the lock being available, while any non-zero value corresponds to the lock being not available.

 

AcquireLock(L)
{
   while (Fetch_And_Add(L,1))
      L = 1;                  // note: this assignment is the body of the while loop
}

ReleaseLock(L)
{
   L = 0;
}

 

This implementation-

  1. fails as L can overflow
  2. fails as L can take on a non-zero value when the lock is actually available
  3. works correctly but may starve some processes
  4. works correctly without starvation

 

Solution-

 

Let us analyze the working of the given synchronization mechanism.

 

 

Working-

 

This synchronization mechanism works as explained in the following scenes-

 

Scene-01:

 

  • A process P1 arrives.
  • It executes the Fetch_And_Add(L, 1) instruction.
  • Since the lock value is 0, Fetch_And_Add returns the value 0 to the while loop and sets the lock value to 0 + 1 = 1.
  • The returned value 0 breaks the while loop condition.
  • Process P1 enters the critical section and executes.

 

Scene-02:

 

  • Another process P2 arrives.
  • It executes the Fetch_And_Add(L, 1) instruction.
  • Since the lock value is now 1, Fetch_And_Add returns the value 1 to the while loop and sets the lock value to 1 + 1 = 2.
  • The returned value 1 does not break the while loop condition.
  • Process P2 then executes the loop body L = 1, which sets the lock value back to 1, and checks the condition again.
  • The lock value keeps oscillating between 1 and 2.
  • The process P2 is trapped inside an infinite while loop.
  • The while loop keeps the process P2 busy until the lock value becomes 0 and its condition breaks.

 

Scene-03:

 

  • Process P1 comes out of the critical section and sets the lock value to 0.
  • The while loop condition breaks.
  • Now, process P2 waiting for the critical section enters the critical section.

 

Failure of the Mechanism-

 

  • The mechanism fails to provide synchronization among the processes.
  • This is explained below-

 

Explanation-

 

The following sequence of scenes shows how the mechanism fails, even though mutual exclusion is preserved-

 

Scene-01:

 

  • A process P1 arrives.
  • It executes the Fetch_And_Add(L, 1) instruction.
  • Since the lock value is 0, Fetch_And_Add returns the value 0 to the while loop and sets the lock value to 0 + 1 = 1.
  • The returned value 0 breaks the while loop condition.
  • Process P1 enters the critical section and executes.

 

Scene-02:

 

  • Another process P2 arrives.
  • It executes the Fetch_And_Add(L, 1) instruction.
  • Since the lock value is now 1, Fetch_And_Add returns the value 1 to the while loop and sets the lock value to 1 + 1 = 2.
  • The returned value 1 does not break the while loop condition.
  • Now, just as process P2 is about to execute the body of the while loop, it gets preempted.

 

Scene-03:

 

  • Process P1 comes out of the critical section and sets the lock value to 0.

 

Scene-04:

 

  • Process P2 gets scheduled again.
  • It resumes its execution.
  • Before preemption, it had already satisfied the while loop condition.
  • Now, it begins execution from the next instruction, i.e. the body of the while loop.
  • It executes L = 1, and this is where the blunder happens.
  • This is because the lock is actually available (lock value = 0), but P2 itself sets the lock value back to 1.
  • Then it checks the condition again, and now there is no process left that can ever set the lock value back to 0.
  • Thus, Process P2 gets trapped inside the infinite while loop forever.
  • All the future processes too will be trapped inside the infinite while loop forever.

 

Thus,

  • Although mutual exclusion is guaranteed, the mechanism still fails.
  • This is because the lock value gets set to a non-zero value even when the lock is actually available.

 

Also,

This synchronization mechanism leads to overflow of the value of L.

 

Explanation-

 

  • If every process gets preempted right after executing the while loop condition, the lock value keeps increasing with every arriving process.
  • When the first process arrives, Fetch_And_Add returns the value 0, sets the lock value to 0 + 1 = 1, and the process gets preempted.
  • When the second process arrives, it returns the value 1, sets the lock value to 1 + 1 = 2, and the process gets preempted.
  • When the third process arrives, it returns the value 2, sets the lock value to 2 + 1 = 3, and the process gets preempted.
  • Thus, for a very large number of processes preempted in this manner, L will eventually overflow.

 

Also,

This synchronization mechanism does not guarantee bounded waiting and may cause starvation.

 

Explanation-

 

  • There might exist an unlucky process which, whenever it arrives to execute the critical section, finds it busy.
  • So, it keeps waiting in the while loop and eventually gets preempted.
  • When it gets rescheduled and comes to execute the critical section again, it finds another process executing the critical section.
  • So, again, it keeps waiting in the while loop and eventually gets preempted.
  • This may happen several times, which causes that unlucky process to starve.

 

Thus, Options (A) and (B) are correct.
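
For readers who want to experiment, the flawed busy-wait lock of this problem can be approximated in C11 as follows. This is only an illustrative sketch- atomic_fetch_add( ) plays the role of Fetch_And_Add, and the variable names are taken from the problem.

#include <stdatomic.h>

static atomic_uint L = 0;       /* 0 = lock available, non-zero = lock not available */

void AcquireLock(void)
{
    /* atomic_fetch_add returns the old value of L and then adds 1 to it */
    while (atomic_fetch_add(&L, 1))
        atomic_store(&L, 1);    /* loop body: L = 1. This is the flaw- L may be set
                                   back to 1 even after the holder has released the
                                   lock, so the lock looks busy although it is free. */
}

void ReleaseLock(void)
{
    atomic_store(&L, 0);        /* L = 0 : release the lock */
}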

 

Problem-03:

 

Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of shared Boolean variables S1 and S2 are randomly assigned.

 

Method used by P1-

while (S1 == S2);

Critical Section

S1 = S2

Method used by P2-

while (S1 != S2);

Critical Section

S2 = !S1

 

Which one of the following statements describes the properties achieved?

  1. Mutual exclusion but not progress
  2. Progress but not mutual exclusion
  3. Neither mutual exclusion nor progress
  4. Both mutual exclusion and progress

 

Solution-

 

The initial values of shared Boolean variables S1 and S2 are randomly assigned. The assigned values may be-

  • S1 = 0 and S2 = 0
  • S1 = 0 and S2 = 1
  • S1 = 1 and S2 = 0
  • S1 = 1 and S2 = 1

 

Case-01: If S1 = 0 and S2 = 0-

 

In this case,

  • Process P1 will be trapped inside an infinite while loop.
  • However, process P2 gets the chance to execute.
  • Process P2 breaks the while loop condition, executes the critical section and then sets S2 = 1.
  • Now, S1 = 0 and S2 = 1.
  • Now, process P2 can not enter the critical section again but process P1 can enter the critical section.
  • Process P1 breaks the while loop condition, executes the critical section and then sets S1 = 1.
  • Now, S1 = 1 and S2 = 1.
  • Now, process P1 can not enter the critical section again but process P2 can enter the critical section.

 

Thus,

  • Processes P1 and P2 execute the critical section alternately, starting with process P2.
  • Mutual exclusion is guaranteed.
  • Progress is not guaranteed, because if one process does not execute, the other process will never be able to execute again.
  • Processes necessarily have to execute the critical section in strict alternation.

 

Case-02: If S1 = 0 and S2 = 1-

 

In this case,

  • Process P2 will be trapped inside an infinite while loop.
  • However, process P1 gets the chance to execute.
  • Process P1 breaks the while loop condition, executes the critical section and then sets S1 = 1.
  • Now, S1 = 1 and S2 = 1.
  • Now, process P1 can not enter the critical section again but process P2 can enter the critical section.
  • Process P2 breaks the while loop condition, executes the critical section and then sets S2 = 0.
  • Now, S1 = 1 and S2 = 0.
  • Now, process P2 can not enter the critical section again but process P1 can enter the critical section.

 

Thus,

  • Processes P1 and P2 execute the critical section alternately, starting with process P1.
  • Mutual exclusion is guaranteed.
  • Progress is not guaranteed, because if one process does not execute, the other process will never be able to execute again.
  • Processes necessarily have to execute the critical section in strict alternation.

 

Case-03: If S1 = 1 and S2 = 0-

 

  • This case is the same as Case-02.

 

Case-04: If S1 = 1 and S2 = 1-

 

  • This case is the same as Case-01.

 

Thus, overall we can conclude-

  • Processes P1 and P2 execute the critical section alternately.
  • Mutual exclusion is guaranteed.
  • Progress is not guaranteed, because if one process does not execute, the other process will never be able to execute again.
  • Processes necessarily have to execute the critical section in strict alternation.

 

Thus, Option (A) is correct.
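
To see the strict alternation concretely, here is a hypothetical two-thread demonstration of the same scheme in C11. The thread functions, the iteration count, and the use of <stdatomic.h> and POSIX threads are illustrative assumptions, not part of the original problem. With S1 and S2 both initialized to False, the output alternates P2, P1, P2, P1, ...

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool S1 = false, S2 = false;           /* equal, so P2 gets the first turn */

static void *p1(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++) {
        while (atomic_load(&S1) == atomic_load(&S2))
            ;                                         /* busy wait until S1 != S2 */
        printf("P1 in critical section (%d)\n", i);
        atomic_store(&S1, atomic_load(&S2));          /* S1 = S2 : hand the turn to P2 */
    }
    return NULL;
}

static void *p2(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++) {
        while (atomic_load(&S1) != atomic_load(&S2))
            ;                                         /* busy wait until S1 == S2 */
        printf("P2 in critical section (%d)\n", i);
        atomic_store(&S2, !atomic_load(&S1));         /* S2 = !S1 : hand the turn to P1 */
    }
    return NULL;
}

int main(void)                                        /* compile with: cc demo.c -pthread */
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}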

 


Interest Variable | Process Synchronization

Process Synchronization-

 

Before you go through this article, make sure that you have gone through the previous article on Process Synchronization.

 

We have discussed-

  • Process synchronization provides synchronization among cooperating processes.
  • Synchronization mechanisms allow the processes to access the critical section in a synchronized manner.
  • This avoids inconsistent results.

 

Interest Variable-

 

  • The interest variable mechanism is a synchronization mechanism that provides synchronization between two processes.
  • It uses a shared Boolean array interest[ ], with one entry per process, to provide the synchronization.

 

It is implemented as-
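
(The figure showing the implementation is not reproduced here. Based on the working described below, the mechanism for process Pi, where Pj denotes the other process, can be sketched as-)

// Code for process Pi  (i = 0 or 1, and j is the other process)
interest[i] = True;              // Pi declares its interest
while (interest[j] == True);     // busy wait while the other process is interested
/* Critical Section */
interest[i] = False;             // Pi withdraws its interest on exit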

 

 

Initially, interest[0] and interest[1] are set to False.

  • interest[0] = False means that process P0 is not interested in entering the critical section.
  • interest[0] = True means that process P0 is interested in entering the critical section.
  • interest[1] = False means that process P1 is not interested in entering the critical section.
  • interest[1] = True means that process P1 is interested in entering the critical section.

 

Working-

 

This synchronization mechanism works as explained in the following scenes-

 

Scene-01:

 

  • Process P0 arrives.
  • It sets interest[0] = True.
  • Now, it executes the while loop condition- interest[1] == True.
  • Since interest[1] is initialized to False, the condition returns the value 0 to the while loop.
  • The while loop condition breaks.
  • Process P0 enters the critical section and executes.
  • Now, even if process P0 gets preempted in the middle, process P1 cannot enter the critical section.
  • Process P1 cannot enter unless process P0 completes and sets interest[0] = False.

 

Scene-02:

 

  • Process P1 arrives.
  • It sets interest[1] = True.
  • Now, it executes the while loop condition- interest[0] == True.
  • Process P0 has already shown its interest by setting interest[0] = True.
  • So, the condition returns the value 1 to the while loop.
  • The while loop condition is satisfied.
  • The process P1 is trapped inside an infinite while loop.
  • The while loop keeps the process P1 busy until the interest [0] value becomes False and its condition breaks.

 

Scene-03:

 

  • Process P0 comes out of the critical section and sets interest[0] to False.
  • The while loop condition of process P1 breaks.
  • Now, process P1, which was waiting for the critical section, enters the critical section and executes.
  • Now, even if process P1 gets preempted in the middle, process P0 cannot enter the critical section.
  • Process P0 cannot enter unless process P1 completes and sets interest[1] = False.

 

Also Read- Criteria For Synchronization Mechanisms

 

Characteristics-

 

The characteristics of this synchronization mechanism are-

  • It ensures mutual exclusion.
  • It does not follow the strict alternation approach.
  • It ensures progress, since a process that is not interested in entering the critical section never stops the other process from entering it.
  • It is architecturally neutral, since it does not require any special support from the operating system.
  • It is a busy-waiting solution, which keeps the CPU busy even while a process is merely waiting.
  • It suffers from deadlock.
  • Since it suffers from deadlock, it does not guarantee bounded waiting.

 

How does it suffer from deadlock?

 

  • This synchronization mechanism may cause deadlock between the processes.
  • Deadlock may occur through the following sequence of scenes-

 

Scene-01:

 

  • Process P0 arrives.
  • It sets interest[0] = True.
  • Now, it gets preempted and process P1 gets scheduled.

 

Scene-02:

 

  • Process P1 arrives.
  • It sets interest[1] = True.
  • Now, it gets preempted.

 

Scene-03:

 

  • Process P0 gets scheduled again.
  • Now, it cannot break the while loop condition- interest[1] == True, since process P1 has already shown its interest in executing the critical section.
  • It keeps waiting in the infinite while loop for process P1 to complete its execution first.

 

Scene-04:

 

  • Later, Process P1 gets scheduled again.
  • Now, it also cannot break the while loop condition- interest[0] == True, since process P0 is also interested in executing the critical section.
  • It keeps waiting in the infinite while loop for process P0 to complete its execution first.

Thus, both the processes are deadlocked.

 


Turn Variable | Process Synchronization

Process Synchronization-

 

Before you go through this article, make sure that you have gone through the previous article on Process Synchronization.

 

We have discussed-

  • Process synchronization provides synchronization among cooperating processes.
  • Synchronization mechanisms allow the processes to access the critical section in a synchronized manner.
  • This avoids inconsistent results.

 

Turn Variable-

 

  • The turn variable is a synchronization mechanism that provides synchronization between two processes.
  • It uses a shared turn variable to provide the synchronization.

 

It is implemented as-
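
(The figure showing the implementation is not reproduced here. Based on the working described below, the mechanism can be sketched as-)

// Code for process P0
while (turn != 0);     // busy wait until it is P0's turn
/* Critical Section */
turn = 1;              // hand the turn over to P1

// Code for process P1
while (turn != 1);     // busy wait until it is P1's turn
/* Critical Section */
turn = 0;              // hand the turn over to P0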

 

 

Initially, turn value is set to 0.

  • Turn value = 0 means it is the turn of process P0 to enter the critical section.
  • Turn value = 1 means it is the turn of process P1 to enter the critical section.

 

Working-

 

This synchronization mechanism works as explained in the following scenes-

 

Scene-01:

 

  • Process P0 arrives.
  • It executes the while loop condition- turn != 0.
  • Since the turn value is set to 0, the condition returns the value 0 to the while loop.
  • The while loop condition breaks.
  • Process P0 enters the critical section and executes.
  • Now, even if process P0 gets preempted in the middle, process P1 can not enter the critical section.
  • Process P1 can not enter unless process P0 completes and sets the turn value to 1.

 

Scene-02:

 

  • Process P1 arrives.
  • It executes the while loop condition- turn != 1.
  • Since the turn value is set to 0, the condition returns the value 1 to the while loop.
  • The returned value 1 does not break the while loop condition.
  • The process P1 is trapped inside an infinite while loop.
  • The while loop keeps the process P1 busy until the turn value becomes 1 and its condition breaks.

 

Scene-03:

 

  • Process P0 comes out of the critical section and sets the turn value to 1.
  • The while loop condition of process P1 breaks.
  • Now, process P1, which was waiting for the critical section, enters the critical section and executes.
  • Now, even if process P1 gets preempted in the middle, process P0 can not enter the critical section.
  • Process P0 can not enter unless process P1 completes and sets the turn value to 0.

 

Also Read- Criteria For Synchronization Mechanisms

 

Characteristics-

 

The characteristics of this synchronization mechanism are-

  • It ensures mutual exclusion.
  • It follows the strict alternation approach.

 

Strict Alternation Approach

 

In the strict alternation approach,

  • Processes have to compulsorily enter the critical section alternately, whether they want to or not.
  • This is because if one process does not enter the critical section, the other process will never get a chance to execute again.

 

  • It does not guarantee progress, since it follows the strict alternation approach.
  • It ensures bounded waiting, since processes execute turn-wise one by one and each process is guaranteed to get its chance.
  • It ensures that processes do not starve.
  • It is architecturally neutral, since it does not require any special support from the operating system.
  • It is deadlock free.
  • It is a busy-waiting solution, which keeps the CPU busy even while a process is merely waiting.

 
