
Published on January 6, 2008

Author: tech2click

Source: slideshare.net

Process Synchronization and Deadlocks in a Nutshell

Contents

Motivation

Race Condition

Critical Section problem & Solutions

Classical problems in Synchronization

Deadlocks

Why study these chapters?

This is about getting processes to coordinate with each other.

How do processes work with resources that must be shared between them?

Very interesting concepts!

A race condition example

A race condition is where multiple processes/threads concurrently read and write to a shared memory location and the result depends on the order of the execution.

This was the cause of a patient death on a radiation therapy machine, the Therac-25

http://sunnyday.mit.edu/therac-25.html

A software flaw; some of the Therac-25 incidents occurred at Yakima Valley Memorial Hospital

This can also happen in bank account database transactions with, say, a husband and wife accessing the same account simultaneously from different ATMs

A race condition example (2)

We will implement count++ and count-- and run them concurrently

Let us say they are executed by different threads accessing a global variable

At the end we expect count's value not to change

A race condition example (3)

count++ implementation:

register1 = count

register1 = register1 + 1

count = register1

count-- implementation:

register2 = count

register2 = register2 - 1

count = register2

Let count = 5 initially. One possible concurrent execution of count++ and count-- is:

register1 = count {register1 = 5}

register1 = register1 + 1 {register1 = 6}

register2 = count {register2 = 5}

register2 = register2 - 1 {register2 = 4}

count = register1 {count = 6}

count = register2 {count = 4}

count = 4 after count++ and count--, even though we started with count = 5

Easy question: what other values can count take on from an incorrect interleaving?

Obviously, we would like count++ to execute to completion, followed by count-- (or vice versa)
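The unlucky interleaving above can be replayed deterministically in plain code, modeling each register as a local variable (a sketch; the names register1/register2 are illustrative stand-ins for per-thread CPU registers):

```python
# Deterministic replay of the interleaving shown above.
# register1/register2 model per-thread CPU registers; count is the
# shared variable.
count = 5

# count++ begins: load and increment, but do not store yet
register1 = count          # register1 = 5
register1 = register1 + 1  # register1 = 6

# count-- runs its load before count++ stores its result back
register2 = count          # register2 = 5 (stale read)
register2 = register2 - 1  # register2 = 4

count = register1          # count = 6
count = register2          # count = 4: the increment is lost
print(count)               # 4, not the expected 5
```

Swapping the last two stores yields 6 instead, which answers the "easy question": the only possible outcomes are 4, 5, and 6.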

A race condition example (4)

The producer/consumer problem is a more general form of the previous problem.

Critical Sections

A critical section is a piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution.

The goal is to provide a mechanism by which only one instance of a critical section is executing for a particular shared resource.

Unfortunately, it is often very difficult to detect critical section bugs.

Critical Sections (2)

A Critical Section Environment contains:

Entry Section – code requesting entry into the critical section.

Critical Section – code in which only one process can execute at any one time.

Exit Section – the end of the critical section, releasing or allowing others in.

Remainder Section – the rest of the code after the critical section.
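The four sections can be sketched with an ordinary lock (a Python sketch; the lock stands in for whatever entry/exit mechanism the system provides, and all identifiers are illustrative):

```python
import threading

count = 0
lock = threading.Lock()   # guards the shared variable

def increment(n):
    global count
    for _ in range(n):
        lock.acquire()    # Entry section: request entry
        count += 1        # Critical section: one thread at a time
        lock.release()    # Exit section: let others in
        pass              # Remainder section: everything afterwards

def decrement(n):
    global count
    for _ in range(n):
        with lock:        # entry + exit handled by the context manager
            count -= 1

t1 = threading.Thread(target=increment, args=(100_000,))
t2 = threading.Thread(target=decrement, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(count)  # 0: with the lock, count++ / count-- no longer race
```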

Critical Sections (3)

Solution to Critical-Section Problem

A correct solution must enforce all three of the following rules:

1. Mutual Exclusion – if process Pi is executing in its critical section, then no other process can be executing in its critical section

In many contexts this is abbreviated mutex

2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely

3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted

Assume that each process executes at a nonzero speed

No assumption concerning relative speed of the N processes

Critical Section Solutions – Hardware

Many systems provide hardware support for critical section code

Uniprocessors – could disable interrupts

Currently running code would execute without preemption

Generally too inefficient on multiprocessor systems

Have to wait for disable to propagate to all processors

Operating systems relying on this approach are not broadly scalable

Modern machines provide special atomic hardware instructions

Atomic = non-interruptible
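One such instruction is test-and-set: atomically read a flag and set it to true. Here is a minimal spin-lock sketch built on it; the atomicity is only simulated with a Python lock, since real hardware does this in a single uninterruptible instruction, and the TestAndSet class and all names are illustrative, not a real API:

```python
import threading

class TestAndSet:
    """Simulated atomic test-and-set. The internal Lock only stands in
    for the hardware's atomicity guarantee."""
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()

    def test_and_set(self):
        with self._atomic:          # emulate one atomic instruction
            old = self._flag
            self._flag = True
            return old              # old value tells us who got the lock

    def clear(self):
        self._flag = False

lock = TestAndSet()
count = 0

def worker(delta, n):
    global count
    for _ in range(n):
        while lock.test_and_set():  # spin until we observe False
            pass                    # busy-wait (simple but wasteful)
        count += delta              # critical section
        lock.clear()                # release

threads = [threading.Thread(target=worker, args=(d, 10_000)) for d in (+1, -1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)  # 0
```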

Critical Section Solutions – Software

Peterson’s Solution: works for two processes only.

Semaphore: a flag used to indicate that a routine cannot proceed if a shared resource is already in use by another routine. The allowable operations on a semaphore are V ("signal") and P ("wait"); both are atomic operations.

Two types: counting and binary (mutex locks).
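Both semaphore types can be sketched together on the bounded-buffer producer/consumer problem mentioned earlier: counting semaphores track free and filled slots, and a binary semaphore serves as the mutex. A Python sketch with illustrative identifiers:

```python
import threading
from collections import deque

CAPACITY = 4
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # counting: free slots
full = threading.Semaphore(0)          # counting: filled slots
mutex = threading.Semaphore(1)         # binary: guards the buffer

consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # P(empty): wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                 # V(full): announce a filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()                 # P(full): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # V(empty): free the slot

items = list(range(20))
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(len(items),))
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed == items)  # True: all items arrive, in FIFO order
```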

Some Classical Problems in Synchronization

Dining Philosophers.
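A minimal sketch of the dining philosophers problem with Python threads, assuming the usual formulation (N philosophers sharing N forks). Naively grabbing the left fork then the right can deadlock when everyone holds one fork; the common fix sketched here is to acquire forks in a fixed global order. All identifiers are illustrative:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]  # one fork between neighbors
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Acquire in a fixed global order to break the circular wait
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1   # "eating": holds both forks

threads = [threading.Thread(target=philosopher, args=(i, 100))
           for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [100, 100, 100, 100, 100]: everyone ate, no deadlock
```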

Bridge Crossing Example

Traffic only in one direction.

Each section of a bridge can be viewed as a resource.

If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback).

Several cars may have to be backed up if a deadlock occurs.

Starvation is possible.

Deadlocks

Deadlock: processes waiting indefinitely with no chance of making progress.

Starvation: a process waits for a long time to make progress.

Deadlocks (Cont.)

Deadlock arises in applications, not just in the OS:

Networks – two processes may each block sending a message to the other because both are waiting to receive a message from the other first; the blocked receive prevents either send from happening.

Databases.

Spooling/streaming data.

Deadlock Characterization

Deadlock can arise if all four of the following conditions hold simultaneously:

Mutual exclusion: only one process at a time can use a resource.

Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.

No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.

Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

Resource-Allocation Graph

A graph consisting of a set of vertices V and a set of edges E.

V is partitioned into two types:

P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.

R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.

request edge – directed edge Pi → Rj

assignment edge – directed edge Rj → Pi

Resource-Allocation Graph (Cont.)

Process

Resource Type with 4 instances

Pi requests an instance of Rj

Pi is holding an instance of Rj

Example of a Resource Allocation Graph

Resource Allocation Graph With A Deadlock

Graph With A Cycle But No Deadlock

Basic Facts

If the graph contains no cycles ⇒ no deadlock.

If the graph contains a cycle ⇒

if there is only one instance per resource type, then deadlock.

if there are several instances per resource type, there is a possibility of deadlock.
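For the single-instance case, the facts above can be checked mechanically: collapse the resource-allocation graph into a wait-for graph between processes and search for a cycle (cycle ⟺ deadlock when each resource type has one instance). A sketch with an illustrative graph encoding of my own, not from the slides:

```python
def has_cycle(wait_for):
    """wait_for maps each process to the processes it waits on.
    Returns True iff the wait-for graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:           # back edge: cycle
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2's resource and P2 waits on P1's: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))  # True
# P1 waits on P2, but P2 can finish: no deadlock
print(has_cycle({"P1": ["P2"], "P2": []}))      # False
```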

Methods for Handling Deadlocks

Ensure that the system will never enter a deadlock state.

Allow the system to enter a deadlock state and then recover.

Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX.

Deadlock Prevention

Restrain the ways requests can be made:

Mutual Exclusion – not required for sharable resources; must hold for nonsharable resources.

Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.

Require process to request and be allocated all its resources before it begins execution, or allow process to request resources only when the process has none.

Low resource utilization; starvation possible.

Deadlock Prevention (Cont.)

No Preemption –

If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released.

Preempted resources are added to the list of resources for which the process is waiting.

Process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.

Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.

Deadlock Avoidance

Requires that the system have some additional a priori information available:

Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.

The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.

Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
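The classic algorithm built on these declared maximum demands is the Banker's algorithm (not named in the slides). A minimal sketch of its safety check: a state is safe if some order exists in which every process can finish using only the currently available resources. Numbers below are illustrative:

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety check: True iff some finishing order exists."""
    n = len(max_demand)                       # number of processes
    # Remaining need = declared maximum minus current allocation
    need = [[m - a for m, a in zip(max_demand[i], allocation[i])]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # process i can run to completion, then releases
                # everything it currently holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Three resource types, five processes: a safe state
print(is_safe([3, 3, 2],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))  # True
# One process still needing a unit nothing can supply: unsafe
print(is_safe([0, 0, 0], [[1, 0, 0]], [[0, 0, 0]]))  # False
```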
