Introduction to Deadlocks


In a multiprogramming environment, multiple threads or processes often compete for a finite set of system resources, such as CPU cycles, memory, files, I/O devices, mutex locks, and semaphores. Before a thread can use a resource, it must request it; if the resource is not currently available, the thread must wait until it is released by another thread.

In some situations, a set of threads may become permanently blocked, each waiting for a resource held by another thread in the same set. Because none of the threads can proceed, the system reaches a state in which no progress is possible. This condition is known as deadlock.

Deadlock is a form of liveness failure: the system as a whole continues to run, but some threads are permanently unable to make forward progress. As defined earlier, a deadlock occurs when every thread in a set of threads is waiting for an event that can be caused only by another thread in the same set.


Intuitive Example of Deadlock

A classic real-world analogy helps illustrate deadlock. Consider a law once passed by the Kansas legislature:

“When two trains approach each other at a crossing, both shall come to a full stop and neither shall start again until the other has gone.”

If both trains follow this rule, neither will ever proceed. Each train is waiting for the other to move first—this perfectly captures the essence of deadlock.


Deadlocks in Operating Systems

In operating systems, deadlocks commonly arise due to improper handling of resource allocation. Resources are typically non-sharable and must be used exclusively by one thread at a time. Examples include:

  • Mutex locks

  • Semaphores

  • Files

  • I/O devices

  • Memory segments

Threads must request resources before using them and release them afterward. If resource requests are not carefully ordered or coordinated, deadlock may occur.
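One common way to "carefully order" requests is to impose a single global ordering on locks and have every thread acquire them in that order. The sketch below (an illustration, not a prescribed API — ordering here is by object `id`) reruns the two-lock scenario safely:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
counter = 0

def in_global_order(*locks):
    # Impose one system-wide ordering (here: by id) so every thread
    # acquires locks in the same sequence, ruling out circular wait.
    return sorted(locks, key=id)

def worker(first, second):
    global counter
    l1, l2 = in_global_order(first, second)
    with l1:
        with l2:
            counter += 1

t1 = threading.Thread(target=worker, args=(lock_a, lock_b))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()

print(counter)  # 2: both threads complete, no deadlock
```

Even though the two threads request the locks in opposite textual order, both end up acquiring them in the same global order, so neither can hold one lock while waiting for a lock "earlier" in the ordering.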

Although operating systems maintain data structures to track resource allocation, most operating systems do not actively prevent deadlocks. Instead, it is usually the responsibility of application developers to design programs that avoid deadlock situations.


Why Deadlocks Are Difficult to Handle

Deadlocks are particularly challenging because:

  • They may occur only under certain scheduling conditions

  • They are often hard to reproduce and debug

  • Increased concurrency and parallelism in multicore systems make deadlocks more likely

As systems scale and threads interact in more complex ways, the likelihood of deadlock increases unless careful synchronization techniques are used.
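One such synchronization technique is try-and-back-off: a thread never blocks on a second lock while holding a first one; if the second lock is busy, it releases everything and retries. A minimal sketch (assumed helper names, illustrative only):

```python
import threading
import time

def acquire_both(first, second):
    # Never hold one lock while blocking on another: if the second
    # lock is busy, back off by releasing the first and retrying.
    while True:
        first.acquire()
        if second.acquire(blocking=False):
            return
        first.release()
        time.sleep(0.001)   # brief pause so the other thread can run

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def worker(name, first, second):
    acquire_both(first, second)
    done.append(name)
    second.release()
    first.release()

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(done))  # both threads complete despite opposite lock orders
```

Back-off trades a possible livelock window (mitigated by the sleep) for the guarantee that no thread ever waits while holding a resource, which breaks the hold-and-wait condition.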


Scope of the Study

This section focuses on:

  • How deadlocks occur in multithreaded systems

  • The conditions required for deadlock

  • Techniques for:

    • Deadlock prevention

    • Deadlock avoidance

    • Deadlock detection

    • Recovery from deadlock

Understanding deadlocks is essential for designing correct, efficient, and reliable concurrent programs.
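As a preview of the detection technique listed above: deadlock detection is commonly framed as finding a cycle in a wait-for graph, where an edge from one thread to another means the first is waiting for a resource the second holds. A small illustrative sketch:

```python
def has_cycle(wait_for):
    """Return True if the wait-for graph (thread -> threads it waits on)
    contains a cycle, i.e. a circular wait indicating deadlock."""
    visited, on_path = set(), set()

    def visit(node):
        if node in on_path:
            return True          # back edge: circular wait found
        if node in visited:
            return False
        visited.add(node)
        on_path.add(node)
        for nxt in wait_for.get(node, []):
            if visit(nxt):
                return True
        on_path.discard(node)
        return False

    return any(visit(n) for n in wait_for)

# t1 waits on t2 and t2 waits on t1: circular wait, hence deadlock
print(has_cycle({"t1": ["t2"], "t2": ["t1"]}))  # True
print(has_cycle({"t1": ["t2"], "t2": []}))      # False
```

Operating systems that support detection periodically run an algorithm of this kind over their resource-allocation state and then apply a recovery action, topics covered in detail later.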


Key Takeaway 

Deadlock occurs not because resources are scarce, but because they are requested and held in an unsafe manner.
