Deadlock
Deadlock in computing refers to a situation where two or more processes are unable to proceed because each is waiting for a resource held by another. This standoff occurs primarily in environments that manage shared resources, such as operating systems and database systems. The classic example involves two processes: Process A holds a resource needed by Process B, while Process B holds a resource needed by Process A. This mutual dependency creates a circular wait, which is one of the four necessary conditions for a deadlock to occur, alongside mutual exclusion, resource holding, and no preemption.
Addressing deadlock is essential for maintaining system efficiency and reliability. System designers often employ strategies such as deadlock detection, resolution methods, or preventive measures to mitigate these issues. While ignoring the problem is one approach, it is typically impractical. Additionally, improperly resolved deadlocks can lead to a related issue called livelock, where processes remain active but fail to progress. Understanding and managing deadlocks is crucial for ensuring smooth operation in multi-process computing environments.
Subject Terms
Deadlock
- Fields of Study: Information Systems; Operating Systems; System Analysis

Abstract
In computing, a deadlock is when two or more processes are running simultaneously and each needs access to a resource that another is using. Neither process can proceed until the other releases its resource, so they become stuck, or "locked." Deadlock can occur in several different areas of computer systems, including operating systems, database transaction processing, and software execution.
Holding Resources
To understand how a deadlock can arise, one must first understand how a computer system uses its resources. Consider the example of a bank account. A database keeps records of debits and credits to the account, and each new transaction is recorded in the database. If money is transferred from one account to another—for example, from a customer's savings account to their checking account—the transfer must be recorded twice, as both a debit from the savings account and a credit to the checking account. Recording the debit and recording the credit are two separate actions. However, in a transactional database, they represent a single transaction because together they form a logical unit: money cannot be added to one account without being subtracted from another. In this case, the balances of both accounts represent system resources.

To prevent another process, such as a withdrawal from the checking account, from interrupting the transaction before both the debit and the credit are recorded, the system enforces a rule known as mutual exclusion. This rule means that no two processes can access the same resource at once. Instead, the transfer process places a hold on the account balances until the transaction is complete. This practice is called resource holding, or sometimes "hold and wait." Only after the transfer is recorded as both a debit and a credit will the transfer process release its hold on the account balance resources. At this point, the withdrawal process is free to access the checking account balance.
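The pattern described above can be sketched in a few lines of code. The following Python fragment is a minimal illustration only, not the implementation of any particular database system; the account structures, lock objects, and function names are hypothetical. It shows a transfer taking exclusive holds on both balances so that the debit and the credit are recorded as a single unit before any competing withdrawal can run.

```python
import threading

# Hypothetical account records: each balance is guarded by its own lock
# so that only one process can use it at a time (mutual exclusion).
savings = {"balance": 500.0, "lock": threading.Lock()}
checking = {"balance": 100.0, "lock": threading.Lock()}

def transfer(amount):
    """Move money from savings to checking as one logical unit."""
    # Resource holding ("hold and wait"): the transfer keeps both balances
    # locked until the debit and the credit have both been recorded.
    with savings["lock"], checking["lock"]:
        savings["balance"] -= amount   # debit from savings
        checking["balance"] += amount  # credit to checking
    # Only now are the holds released; a waiting withdrawal can proceed.

def withdraw(amount):
    """A competing process that must wait for the hold on checking to clear."""
    with checking["lock"]:
        if checking["balance"] >= amount:
            checking["balance"] -= amount

transfer(50.0)
withdraw(25.0)
print(savings["balance"], checking["balance"])  # 450.0 125.0
```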
Resource holding is necessary in such cases because, if resources were not locked, it would not be clear how the system should resolve simultaneous events. In the example above, if the system had tried to calculate a withdrawal from the checking account before the credit to the account was recorded, the customer might have overdrawn their account. Resource holding thus makes systems more consistent: it establishes rules that allow such ambiguities to be resolved in a straightforward, predictable manner.
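A brief sketch can show the kind of ambiguity that arises when a resource is not held. In this hypothetical Python example (the names and the artificial delay are purely illustrative), two withdrawals check an unlocked balance at nearly the same time, both pass the check, and together they overdraw the account.

```python
import threading, time

checking = {"balance": 100.0}  # no lock guarding this balance

def unsafe_withdraw(amount):
    # Check-then-act without a hold on the balance: another process can run
    # between the check and the update, so both withdrawals may pass the
    # check even though the account cannot cover them both.
    if checking["balance"] >= amount:
        time.sleep(0.01)               # widen the race window for the demo
        checking["balance"] -= amount

threads = [threading.Thread(target=unsafe_withdraw, args=(80.0,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(checking["balance"])  # likely -60.0: the account has been overdrawn
```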
Standoff
Deadlock can be understood as an unintended consequence of resource holding. While resource holding serves a legitimate purpose, inevitably situations arise in which two processes each hold a resource that the other needs in order to finish its work. Process A needs a resource locked by process B in order to finish, while process B needs a resource locked by process A to finish, and neither can relinquish its resources until its own process is completed. This situation is called a circular wait.
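A circular wait is easy to reproduce with two locks acquired in opposite orders. The following Python sketch is illustrative only; the lock names, timing delays, and join timeouts are hypothetical choices made so the demonstration terminates rather than hanging indefinitely.

```python
import threading, time

lock_a = threading.Lock()  # the resource process A locks first
lock_b = threading.Lock()  # the resource process B locks first

def process_a():
    with lock_a:                 # A holds resource A...
        time.sleep(0.1)
        with lock_b:             # ...and waits for resource B
            print("A finished")

def process_b():
    with lock_b:                 # B holds resource B...
        time.sleep(0.1)
        with lock_a:             # ...and waits for resource A
            print("B finished")

t1 = threading.Thread(target=process_a, daemon=True)
t2 = threading.Thread(target=process_b, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)

# Neither "finished" message prints: each thread is waiting on a lock the
# other holds, which is the circular wait that defines deadlock.
print("deadlocked:", t1.is_alive() and t2.is_alive())
```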
In 1971, computer scientist Edward G. Coffman described the four conditions, now known as the Coffman conditions, necessary for deadlock to occur:
- mutual exclusion,
- resource holding,
- no preemption, and
- circular wait.
Mutual exclusion, resource holding, and circular wait are described above. Preemption occurs when one process takes control of a resource being used by another process. Certain designated processes have priority over others; as such, they can force other processes to stop and let the higher-priority work finish first. Preemption is typically reserved for processes that are fundamental to the operation of the system. If one of the conflicting processes can preempt the other, deadlock will not occur. In fact, if any one of the Coffman conditions does not hold, deadlock cannot occur.
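As a rough illustration of this last point (not drawn from any of the sources below), the deadlock in the earlier sketch disappears if the circular wait condition is removed by forcing every process to acquire locks in one fixed global order. The rank table and helper functions here are hypothetical.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Assign each lock a fixed rank; every process must acquire locks in
# ascending rank order, so a cycle of waiting processes cannot form.
LOCK_ORDER = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    """Hypothetical helper: acquire locks in the global rank order."""
    for lock in sorted(locks, key=lambda l: LOCK_ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def process_a():
    acquire_in_order(lock_a, lock_b)   # both processes lock A before B
    try:
        print("A finished")
    finally:
        release_all(lock_a, lock_b)

def process_b():
    acquire_in_order(lock_b, lock_a)   # the order is normalized internally
    try:
        print("B finished")
    finally:
        release_all(lock_b, lock_a)

t1 = threading.Thread(target=process_a)
t2 = threading.Thread(target=process_b)
t1.start(); t2.start()
t1.join(); t2.join()   # both threads always finish; no circular wait
```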
Responses to Deadlock
System designers have several options for dealing with deadlocks. One is to simply ignore the issue altogether, though this is not practical in most circumstances. A second approach is to focus on detecting deadlocks and then have a variety of options available for resolving them. A third approach is to try to avoid deadlocks by understanding which system operations tend to cause them. Finally, system designers may adopt a preventive approach in which they make changes to the system architecture specifically to avoid potential deadlocks.
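One widely used detection technique, consistent with the second approach above, is to maintain a "wait-for" graph recording which process is waiting on which, and to search it for cycles. The following Python sketch is a generic illustration rather than any specific system's detector; the process names and graph contents are hypothetical.

```python
# A wait-for graph: an edge P -> Q means process P is waiting for a
# resource currently held by process Q. A cycle in this graph is a deadlock.
wait_for = {
    "A": ["B"],   # A waits on B
    "B": ["A"],   # B waits on A, completing a circular wait
    "C": [],      # C is not waiting on anyone
}

def find_deadlock(graph):
    """Return a list of deadlocked process names if a cycle exists, else None."""
    def visit(node, path, visiting):
        if node in visiting:                    # back edge: cycle found
            return path[path.index(node):-1]    # drop the repeated node
        visiting.add(node)
        for nxt in graph.get(node, []):
            cycle = visit(nxt, path + [nxt], visiting)
            if cycle:
                return cycle
        visiting.discard(node)
        return None

    for start in graph:
        cycle = visit(start, [start], set())
        if cycle:
            return cycle
    return None

print(find_deadlock(wait_for))  # ['A', 'B']: these two processes are deadlocked
```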
Usually, some type of outside intervention is required to resolve a deadlock situation. To accomplish this, many systems include mechanisms that detect and resolve deadlock. Ironically, in trying to resolve deadlock, these systems may produce a similar situation known as livelock. Like a deadlock, a livelock results when two or more concurrent processes are unable to complete. Unlike a deadlock, this happens not because the processes are at a standstill but because they are continually responding to one another. The situation can be compared to two people traveling in opposite directions down a hallway. To avoid bumping into each other, one person moves to their left, while the other moves to the right. In response to the other's actions, the first person then moves to their right, but at the same time, the other person moves to their left. In real life, this situation is eventually resolved. In a computer system, it can continue in an infinite loop until one or both processes are forced to end.

If a deadlock detection algorithm resolves a deadlock and the previously stuck processes begin to run again, they may produce another deadlock. This triggers the detection algorithm again, and so on, creating a livelock. One way to avoid this is to have the detection algorithm release one of the deadlocked processes and either abort the other or keep it on hold until the released process has completed.
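The trade-off described above can be sketched as well. In the following hypothetical Python example, each process gives up after briefly waiting for its second lock, releases what it holds, and retries after a randomly chosen pause; without the randomness (or without aborting one victim outright), both processes could keep retrying in lockstep, which is a livelock rather than a deadlock.

```python
import threading, time, random

lock_a = threading.Lock()
lock_b = threading.Lock()

def run_with_backoff(first, second, name):
    """Acquire two locks; on failure, release and retry after a random pause."""
    while True:
        first.acquire()
        # Try the second lock only briefly instead of waiting forever, so the
        # process never holds one resource while blocked indefinitely on another.
        if second.acquire(timeout=0.05):
            try:
                print(name, "finished")
                return
            finally:
                second.release()
                first.release()
        # Could not get the second lock: release the first so the other
        # process can make progress, then back off for a *random* interval.
        # Retrying in perfect lockstep would be a livelock; the random pause
        # makes it overwhelmingly likely that one process soon gets both locks.
        first.release()
        time.sleep(random.uniform(0.01, 0.1))

t1 = threading.Thread(target=run_with_backoff, args=(lock_a, lock_b, "A"))
t2 = threading.Thread(target=run_with_backoff, args=(lock_b, lock_a, "B"))
t1.start(); t2.start()
t1.join(); t2.join()
```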
As technology has evolved in the twenty-first century, more efficient and accurate methods for preventing and detecting deadlock have been developed, including machine learning-based approaches and improved resource allocation algorithms. Dynamic resource allocation and predictive deadlock detection may benefit large-scale distributed systems that run very large numbers of concurrent processes, and they may also be applied to deadlocks in cloud-based computing.
Bibliography
Botlagunta, Madhavi, et al. "A Novel Resource Management Technique for Deadlock-Free Systems." International Journal of Information Technology, vol. 14, no. 2, 10 May 2021, pp. 627-35, doi:10.1007/s41870-021-00670-6. Accessed 15 Jan. 2025.
Connolly, Thomas M., and Carolyn E. Begg. Database Systems: A Practical Approach to Design, Implementation, and Management. 6th ed., Pearson, 2015.
Harth, Andreas, et al., editors. Linked Data Management. CRC, 2014.
Hoffer, Jeffrey A., et al. Modern Database Management. 13th ed., Pearson, 2021.
"Introduction of Deadlock in Operating System." Geeks for Geeks, 2 Jan. 2025, www.geeksforgeeks.org/introduction-of-deadlock-in-operating-system. Accessed 15 Jan. 2025.
Kshemkalyani, Ajay D., and Mukesh Singhal. Distributed Computing: Principles, Algorithms, and Systems. Cambridge UP, 2008.
Rahimi, Saeed K., and Frank S. Haug. Distributed Database Management Systems: A Practical Approach. Wiley, 2010.
Tsutsui, Shigeyoshi, and Yoshiji Fujimoto. "Deadlock Prevention in Process Control Computer Systems." The Computer Journal, vol. 30, no. 1, Feb. 1987, pp. 20-26, doi:10.1093/comjnl/30.1.20. Accessed 15 Jan. 2025.
Wills, Craig E. "Process Synchronization and Interprocess Communication." Computing Handbook: Computer Science and Software Engineering, edited by Teofilo Gonzalez and Jorge Díaz-Herrera, 3rd ed., CRC, 2014.