Queuing of Array Node Lock Requests

The basic queuing algorithm for array locks is to queue lock requests for the same resource strictly in the order received, even when there is no direct resource contention. This is illustrated in the following example, in which three locks on the same global array are requested by three different processes in the sequence shown:

Process A: LOCK ^x(1,1)
Process B: LOCK ^x(1)
Process C: LOCK ^x(1,2)

The status of these requests is as follows:

  • Process A holds a lock on ^x(1,1).

  • Process B cannot lock ^x(1) until Process A releases its lock on ^x(1,1).

  • Process C is also blocked, but not by Process A’s lock; rather, it is the fact that Process B is waiting to explicitly lock ^x(1), and thus implicitly lock ^x(1,2), that blocks Process C.

This approach is designed to speed the next job in the sequence after the one holding the lock. Allowing Process C to jump ahead of Process B in the queue would speed up Process C, but could unacceptably delay Process B, especially if there are many jobs like Process C.
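
If waiting in strict queue order is a concern, a blocked process can bound its wait by adding a timeout to the lock request and then checking $TEST to see whether the lock was actually granted. The following is a minimal ObjectScript sketch using the same node as in the example above; the 5-second timeout is an arbitrary choice:

// Process C: request the lock, but give up after 5 seconds
LOCK ^x(1,2):5
IF '$TEST {
    // The lock was not granted within the timeout; handle the contention
    WRITE "Could not obtain lock on ^x(1,2)",!
    QUIT
}
// ... work with ^x(1,2) ...
LOCK -^x(1,2)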

The exception to the general rule that requests are processed in the order received is that a process holding a lock on a parent node is immediately granted any requested lock on a child of that node. For example, consider the following extension of the previous example:

Process A: LOCK ^x(1,1)
Process B: LOCK ^x(1)
Process C: LOCK ^x(1,2)
Process A: LOCK ^x(1,2)

In this case, Process A is immediately granted the requested lock on ^x(1,2), ahead of both Process B and Process C, because it already holds a lock on ^x(1,1).
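
Note that a process that wants to keep holding its first lock while acquiring the second would typically issue the second request in incremental form (LOCK +), which adds a lock without affecting locks the process already holds. A minimal sketch, using the subscripts from the example above:

// Process A already holds a lock on ^x(1,1)
LOCK +^x(1,2)     // add a lock on ^x(1,2) while retaining ^x(1,1)
// ... work with both nodes ...
LOCK -^x(1,2)     // release only ^x(1,2); ^x(1,1) is still held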

Note:

This process queuing algorithm applies to all subscripted lock requests. However, the release of a nonsubscripted lock (such as LOCK ^x) while both nonsubscripted (LOCK +^x) and subscripted (LOCK +^x(1,1)) requests are waiting is a special case: which waiting request is granted is unpredictable and may not follow the process queuing order.
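
As an illustration of this special case, consider the following sequence, which mirrors the earlier examples:

Process A: LOCK ^x
Process B: LOCK +^x
Process C: LOCK +^x(1,1)
Process A: LOCK -^x

When Process A releases ^x, either Process B or Process C may be granted its lock first; the outcome is unpredictable and may not follow the order in which the requests were queued.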
