Understanding Process Synchronization with Examples

Process synchronization is the coordination of processes so that no two processes access the same shared data or resources at the same time. In other words, when two processes share data or resources, process synchronization ensures that the result of one process is not corrupted by the other. In this article, we will learn what process synchronization in an OS is.

What is Process Synchronization in OS?

Process synchronization is needed when multiple processes run at the same time and more than one of them accesses the same data or resources. It is generally used in multi-process systems: when two or more processes access the same data or resources concurrently, the result can be data inconsistency, so the processes must be synchronized with each other to prevent it.

As an example, take a bank account with a current balance of 500 and two users who both have access to that account. User 1 and User 2 both try to access the balance: process 1 performs a withdrawal while process 2 checks the balance. If both run at the same time, user 2 might read a wrong (stale) balance. Process synchronization in the OS avoids exactly this kind of data inconsistency.

How Does Process Synchronization in an OS Work with an Example?

Let us see how process synchronization in an OS works with the help of an example of different processes trying to access the same data at the same time.

Suppose there are three processes: Process 1 is writing the shared data while Process 2 and Process 3 are reading the same data, so there is a high chance that Process 2 and Process 3 read wrong (partially updated) data.

Let’s understand the different sections of a program.

  • Entry Section:- This section decides whether a process may enter the critical section; a process requests entry here.

  • Critical Section:- This section makes sure that only one process at a time accesses and modifies the shared data or resources.

  • Exit Section:- This section allows a process waiting in the entry section to enter, and makes sure that a finished process is removed from the critical section.

  • Remainder Section:- The remainder section contains the rest of the code, which is not part of the entry, critical, or exit sections.

What is Race Condition?

A race condition occurs when more than one process tries to access and modify the same shared data or resources at the same time. Because many processes try to modify the data concurrently, there is a high chance that some process ends up with a wrong result. The processes effectively "race" to be the one whose update wins, and this is why it is called a race condition.

The final value of the shared data depends on the execution order of the processes, since many processes try to modify the data or resources at the same time. The race condition is therefore associated with the critical section. How do we handle a race condition? We can tackle the problem by enforcing, in the critical section, that only one process at a time may execute it; such a section is then called an atomic section.

What is the Critical Section Problem?

What the critical section does is make sure that only one process at a time has access to the shared data or resources, and only that process can modify them. Thus, when many processes try to modify the shared data or resources, the critical section allows only a single process in at a time. Two functions are very important here: wait() and signal(). The wait() function handles the entry of processes into the critical section, while the signal() function removes finished processes from it.

What happens if we remove the critical section? All processes could then access and modify the shared data at the same time, so we could not guarantee a correct outcome. Next, we will see the essential conditions a solution to the critical section problem must satisfy.

What are the Rules of Critical Sections?

There are basically three rules that any solution to the critical section problem must follow.

  • Mutual Exclusion:- If one process is running in the critical section, i.e., one process is accessing the shared data or resources, then no other process may enter the critical section at the same time.

  • Progress:- If no process is in the critical section and some processes are waiting to enter it, then the decision about which process enters next is made only by those waiting processes, and it cannot be postponed indefinitely.

  • Bounded Waiting:- After a process requests entry into the critical section, there must be a bound on how long it waits: only a limited number of other processes may be allowed into the critical section before it.

Solutions to the Critical Section Problem:-

  • Peterson’s solution:-
    The computer scientist Peterson gave a widely used approach to solving the critical section problem for two processes. It is a classic software-based solution.

In this solution, while one process is executing in its critical section, the other process can execute the rest of its code, and vice versa. The important thing is that the solution makes sure that only one process executes the critical section at a time. Let’s understand it with the help of an example.

do {
    // process i wants to enter the critical section, so mark flag[i] as true
    flag[i] = true;
    // give the turn to the other process j
    turn = j;
    // busy-wait while process j also wants to enter and it is j's turn
    while (flag[j] == true && turn == j)
        ; // wait
    // the critical section
    // ...
    // process i is finished, so mark flag[i] as false
    flag[i] = false;
    // remainder section
} while (true);

In the above example, there are two processes, i and j. A boolean flag array is created, with every entry initially false, along with a shared turn variable. When process i wants to enter the critical section, it sets flag[i] to true and hands the turn to the other process j; it then waits only while j also wants to enter and it is j's turn. When process i finishes, it sets flag[i] back to false so that the other process can enter.

  • Synchronization Hardware:-
    As the name suggests, the critical section problem can sometimes be solved with hardware support. Many processors provide atomic instructions (such as test-and-set) with which the operating system implements locking: a process acquires a lock when it enters the critical section, and the lock is released when the process exits. This locking makes sure that only one process at a time can be inside the critical section, because any other process that tries to enter finds the lock already taken.

  • Mutex Lock:-
    The mutex lock was introduced because the method above (synchronization hardware) is not easy to use directly. To synchronize access to the resources in the critical section, a mutex locking mechanism is used. In this method, a lock is set (acquired) when a process enters the critical section, and unset (released) when the process exits it.

  • Semaphores:-
    In this method, a process sends a signal to another process that is waiting on a semaphore. Semaphores are variables shared between processes. For synchronization among processes, semaphores make use of the wait() and signal() operations.
