/*
 * Hunt - A refined core library for D programming language.
 *
 * Copyright (C) 2018-2019 HuntLabs
 *
 * Website: https://www.huntlabs.net/
 *
 * Licensed under the Apache-2.0 License.
 *
 */

module hunt.concurrency.AbstractQueuedSynchronizer;

import hunt.concurrency.AbstractOwnableSynchronizer;
import hunt.concurrency.atomic.AtomicHelper;

import hunt.collection.ArrayList;
import hunt.collection.Collection;
import hunt.util.DateTime;
import hunt.Exceptions;

import core.thread;
import core.sync.mutex;
import core.sync.condition;

import std.conv;
import std.datetime;


/**
 * Provides a framework for implementing blocking locks and related
 * synchronizers (semaphores, events, etc.) that rely on
 * first-in-first-out (FIFO) wait queues. This class is designed to
 * be a useful basis for most kinds of synchronizers that rely on a
 * single atomic {@code int} value to represent state. Subclasses
 * must define the protected methods that change this state, and which
 * define what that state means in terms of this object being acquired
 * or released. Given these, the other methods in this class carry
 * out all queuing and blocking mechanics. Subclasses can maintain
 * other state fields, but only the atomically updated {@code int}
 * value manipulated using methods {@link #getState}, {@link
 * #setState} and {@link #compareAndSetState} is tracked with respect
 * to synchronization.
 *
 * <p>Subclasses should be defined as non-public internal helper
 * classes that are used to implement the synchronization properties
 * of their enclosing class. Class
 * {@code AbstractQueuedSynchronizer} does not implement any
 * synchronization interface. Instead it defines methods such as
 * {@link #acquireInterruptibly} that can be invoked as
 * appropriate by concrete locks and related synchronizers to
 * implement their methods.
 *
 * <p>This class supports either or both a default <em>exclusive</em>
 * mode and a <em>shared</em> mode. When acquired in exclusive mode,
 * attempted acquires by other threads cannot succeed. Shared mode
 * acquires by multiple threads may (but need not) succeed. This class
 * does not "understand" these differences except in the
 * mechanical sense that when a shared mode acquire succeeds, the next
 * waiting thread (if one exists) must also determine whether it can
 * acquire as well. Threads waiting in the different modes share the
 * same FIFO queue. Usually, implementation subclasses support only
 * one of these modes, but both can come into play for example in a
 * {@link ReadWriteLock}. Subclasses that support only exclusive or
 * only shared modes need not define the methods supporting the unused mode.
 *
 * <p>This class defines a nested {@link ConditionObject} class that
 * can be used as a {@link Condition} implementation by subclasses
 * supporting exclusive mode for which method {@link
 * #isHeldExclusively} reports whether synchronization is exclusively
 * held with respect to the current thread, method {@link #release}
 * invoked with the current {@link #getState} value fully releases
 * this object, and {@link #acquire}, given this saved state value,
 * eventually restores this object to its previous acquired state. No
 * {@code AbstractQueuedSynchronizer} method otherwise creates such a
 * condition, so if this constraint cannot be met, do not use it. The
 * behavior of {@link ConditionObject} depends of course on the
 * semantics of its synchronizer implementation.
 *
 * <p>This class provides inspection, instrumentation, and monitoring
 * methods for the internal queue, as well as similar methods for
 * condition objects. These can be exported as desired into classes
 * using an {@code AbstractQueuedSynchronizer} for their
 * synchronization mechanics.
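 *
 * <p>For example, a lock built on this class might re-export the queue
 * inspection methods to its own users. The following is a minimal
 * sketch only; {@code Sync} and the method names of the enclosing
 * class are hypothetical, and the reported values are estimates
 * because the queue may change at any time while traversing:
 *
 * <pre> {@code
 * class InstrumentedLock {
 *     // Sync extends AbstractQueuedSynchronizer
 *     private Sync sync = new Sync();
 *
 *     // An estimate of how many threads are currently queued to acquire
 *     int waitingThreadCount() { return sync.getQueueLength(); }
 *
 *     // Whether any thread has ever had to queue for this synchronizer
 *     bool wasContended() { return sync.hasContended(); }
 * }}</pre>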
 *
 * <p>Serialization of this class stores only the underlying atomic
 * integer maintaining state, so deserialized objects have empty
 * thread queues. Typical subclasses requiring serializability will
 * define a {@code readObject} method that restores this to a known
 * initial state upon deserialization.
 *
 * <h3>Usage</h3>
 *
 * <p>To use this class as the basis of a synchronizer, redefine the
 * following methods, as applicable, by inspecting and/or modifying
 * the synchronization state using {@link #getState}, {@link
 * #setState} and/or {@link #compareAndSetState}:
 *
 * <ul>
 * <li>{@link #tryAcquire}
 * <li>{@link #tryRelease}
 * <li>{@link #tryAcquireShared}
 * <li>{@link #tryReleaseShared}
 * <li>{@link #isHeldExclusively}
 * </ul>
 *
 * Each of these methods by default throws {@link
 * UnsupportedOperationException}. Implementations of these methods
 * must be internally thread-safe, and should in general be short and
 * not block. Defining these methods is the <em>only</em> supported
 * means of using this class. All other methods are declared
 * {@code final} because they cannot be independently varied.
 *
 * <p>You may also find the inherited methods from {@link
 * AbstractOwnableSynchronizer} useful to keep track of the thread
 * owning an exclusive synchronizer. You are encouraged to use them
 * -- this enables monitoring and diagnostic tools to assist users in
 * determining which threads hold locks.
 *
 * <p>Even though this class is based on an internal FIFO queue, it
 * does not automatically enforce FIFO acquisition policies.
 * The core
 * of exclusive synchronization takes the form:
 *
 * <pre>
 * Acquire:
 *     while (!tryAcquire(arg)) {
 *         <em>enqueue thread if it is not already queued</em>;
 *         <em>possibly block current thread</em>;
 *     }
 *
 * Release:
 *     if (tryRelease(arg))
 *         <em>unblock the first queued thread</em>;
 * </pre>
 *
 * (Shared mode is similar but may involve cascading signals.)
 *
 * <p id="barging">Because checks in acquire are invoked before
 * enqueuing, a newly acquiring thread may <em>barge</em> ahead of
 * others that are blocked and queued. However, you can, if desired,
 * define {@code tryAcquire} and/or {@code tryAcquireShared} to
 * disable barging by internally invoking one or more of the inspection
 * methods, thereby providing a <em>fair</em> FIFO acquisition order.
 * In particular, most fair synchronizers can define {@code tryAcquire}
 * to return {@code false} if {@link #hasQueuedPredecessors} (a method
 * specifically designed to be used by fair synchronizers) returns
 * {@code true}. Other variations are possible.
 *
 * <p>Throughput and scalability are generally highest for the
 * default barging (also known as <em>greedy</em>,
 * <em>renouncement</em>, and <em>convoy-avoidance</em>) strategy.
 * While this is not guaranteed to be fair or starvation-free, earlier
 * queued threads are allowed to recontend before later queued
 * threads, and each recontention has an unbiased chance to succeed
 * against incoming threads. Also, while acquires do not
 * "spin" in the usual sense, they may perform multiple
 * invocations of {@code tryAcquire} interspersed with other
 * computations before blocking. This gives most of the benefits of
 * spins when exclusive synchronization is only briefly held, without
 * most of the liabilities when it isn't.
 * If so desired, you can
 * augment this by preceding calls to acquire methods with
 * "fast-path" checks, possibly prechecking {@link #hasContended}
 * and/or {@link #hasQueuedThreads} to only do so if the synchronizer
 * is likely not to be contended.
 *
 * <p>This class provides an efficient and scalable basis for
 * synchronization in part by specializing its range of use to
 * synchronizers that can rely on {@code int} state, acquire, and
 * release parameters, and an internal FIFO wait queue. When this does
 * not suffice, you can build synchronizers from a lower level using
 * {@link hunt.concurrency.atomic atomic} classes, your own custom
 * queue classes, and {@link LockSupport} blocking support.
 *
 * <h3>Usage Examples</h3>
 *
 * <p>Here is a non-reentrant mutual exclusion lock class that uses
 * the value zero to represent the unlocked state, and one to
 * represent the locked state. While a non-reentrant lock
 * does not strictly require recording of the current owner
 * thread, this class does so anyway to make usage easier to monitor.
 * It also supports conditions and exposes some instrumentation methods:
 *
 * <pre> {@code
 * class Mutex implements Lock, java.io.Serializable {
 *
 *     // Our internal helper class
 *     private static class Sync extends AbstractQueuedSynchronizer {
 *         // Acquires the lock if state is zero
 *         bool tryAcquire(int acquires) {
 *             assert(acquires == 1); // Otherwise unused
 *             if (compareAndSetState(0, 1)) {
 *                 setExclusiveOwnerThread(Thread.getThis());
 *                 return true;
 *             }
 *             return false;
 *         }
 *
 *         // Releases the lock by setting state to zero
 *         protected bool tryRelease(int releases) {
 *             assert(releases == 1); // Otherwise unused
 *             if (!isHeldExclusively())
 *                 throw new IllegalMonitorStateException();
 *             setExclusiveOwnerThread(null);
 *             setState(0);
 *             return true;
 *         }
 *
 *         // Reports whether in locked state
 *         bool isLocked() {
 *             return getState() != 0;
 *         }
 *
 *         bool isHeldExclusively() {
 *             // a data race, but safe due to out-of-thin-air guarantees
 *             return getExclusiveOwnerThread() == Thread.getThis();
 *         }
 *
 *         // Provides a Condition
 *         Condition newCondition() {
 *             return new ConditionObject();
 *         }
 *
 *         // Deserializes properly
 *         private void readObject(ObjectInputStream s)
 *                 throws IOException, ClassNotFoundException {
 *             s.defaultReadObject();
 *             setState(0); // reset to unlocked state
 *         }
 *     }
 *
 *     // The sync object does all the hard work. We just forward to it.
 *     private final Sync sync = new Sync();
 *
 *     void lock()              { sync.acquire(1); }
 *     bool tryLock()           { return sync.tryAcquire(1); }
 *     void unlock()            { sync.release(1); }
 *     Condition newCondition() { return sync.newCondition(); }
 *     bool isLocked()          { return sync.isLocked(); }
 *     bool isHeldByCurrentThread() {
 *         return sync.isHeldExclusively();
 *     }
 *     bool hasQueuedThreads() {
 *         return sync.hasQueuedThreads();
 *     }
 *     void lockInterruptibly() {
 *         sync.acquireInterruptibly(1);
 *     }
 *     bool tryLock(Duration timeout) {
 *         return sync.tryAcquireNanos(1, timeout.total!"nsecs");
 *     }
 * }}</pre>
 *
 * <p>Here is a latch class that is like a
 * {@link hunt.concurrency.CountDownLatch CountDownLatch}
 * except that it only requires a single {@code signal} to
 * fire. Because a latch is non-exclusive, it uses the {@code shared}
 * acquire and release methods.
 *
 * <pre> {@code
 * class BooleanLatch {
 *
 *     private static class Sync extends AbstractQueuedSynchronizer {
 *         bool isSignalled() { return getState() != 0; }
 *
 *         protected int tryAcquireShared(int ignore) {
 *             return isSignalled() ? 1 : -1;
 *         }
 *
 *         protected bool tryReleaseShared(int ignore) {
 *             setState(1);
 *             return true;
 *         }
 *     }
 *
 *     private final Sync sync = new Sync();
 *     bool isSignalled() { return sync.isSignalled(); }
 *     void signal()      { sync.releaseShared(1); }
 *     void await() {
 *         sync.acquireSharedInterruptibly(1);
 *     }
 * }}</pre>
 *
 * @since 1.5
 * @author Doug Lea
 */
abstract class AbstractQueuedSynchronizer : AbstractOwnableSynchronizer {

    /**
     * Creates a new {@code AbstractQueuedSynchronizer} instance
     * with initial synchronization state of zero.
     */
    protected this() { }

    /**
     * Head of the wait queue, lazily initialized.
     * Except for
     * initialization, it is modified only via method setHead. Note:
     * If head exists, its waitStatus is guaranteed not to be
     * CANCELLED.
     */
    private Node head;

    /**
     * Tail of the wait queue, lazily initialized. Modified only via
     * method enq to add new wait node.
     */
    private Node tail;

    /**
     * The synchronization state.
     */
    private int state;

    /**
     * Returns the current value of synchronization state.
     * This operation has memory semantics of a {@code volatile} read.
     * @return current state value
     */
    protected final int getState() {
        return state;
    }

    /**
     * Sets the value of synchronization state.
     * This operation has memory semantics of a {@code volatile} write.
     * @param newState the new state value
     */
    protected final void setState(int newState) {
        state = newState;
    }

    /**
     * Atomically sets synchronization state to the given updated
     * value if the current state value equals the expected value.
     * This operation has memory semantics of a {@code volatile} read
     * and write.
     *
     * @param expect the expected value
     * @param update the new value
     * @return {@code true} if successful. False return indicates that
     *         the actual value was not equal to the expected value.
     */
    protected final bool compareAndSetState(int expect, int update) {
        return AtomicHelper.compareAndSet(state, expect, update);
    }

    // Queuing utilities

    /**
     * The number of nanoseconds for which it is faster to spin
     * rather than to use timed park. A rough estimate suffices
     * to improve responsiveness with very short timeouts.
     */
    enum long SPIN_FOR_TIMEOUT_THRESHOLD = 1000L;

    /**
     * Inserts node into queue, initializing if necessary.
     * @param node the node to insert
     * @return node's predecessor
     */
    private Node enq(Node node) {
        for (;;) {
            Node oldTail = tail;
            if (oldTail !is null) {
                node.setPrevRelaxed(oldTail);
                if (compareAndSetTail(oldTail, node)) {
                    oldTail.next = node;
                    return oldTail;
                }
            } else {
                initializeSyncQueue();
            }
        }
    }

    /**
     * Creates and enqueues node for current thread and given mode.
     *
     * @param mode Node.EXCLUSIVE for exclusive, Node.SHARED for shared
     * @return the new node
     */
    private Node addWaiter(Node mode) {
        Node node = new Node(mode);

        for (;;) {
            Node oldTail = tail;
            if (oldTail !is null) {
                node.setPrevRelaxed(oldTail);
                if (compareAndSetTail(oldTail, node)) {
                    oldTail.next = node;
                    return node;
                }
            } else {
                initializeSyncQueue();
            }
        }
    }

    /**
     * Sets head of queue to be node, thus dequeuing. Called only by
     * acquire methods. Also nulls out unused fields for sake of GC
     * and to suppress unnecessary signals and traversals.
     *
     * @param node the node
     */
    private void setHead(Node node) {
        head = node;
        node.thread = null;
        node.prev = null;
    }

    /**
     * Wakes up node's successor, if one exists.
     *
     * @param node the node
     */
    private void unparkSuccessor(Node node) {
        /*
         * If status is negative (i.e., possibly needing signal) try
         * to clear in anticipation of signalling. It is OK if this
         * fails or if status is changed by waiting thread.
         */
        int ws = node.waitStatus;
        if (ws < 0)
            node.compareAndSetWaitStatus(ws, 0);

        /*
         * Thread to unpark is held in successor, which is normally
         * just the next node. But if cancelled or apparently null,
         * traverse backwards from tail to find the actual
         * non-cancelled successor.
         */
        Node s = node.next;
        if (s is null || s.waitStatus > 0) {
            s = null;
            for (Node p = tail; p != node && p !is null; p = p.prev)
                if (p.waitStatus <= 0)
                    s = p;
        }
        implementationMissing(false);
        // if (s !is null)
        //     LockSupport.unpark(s.thread);
    }

    /**
     * Release action for shared mode -- signals successor and ensures
     * propagation. (Note: For exclusive mode, release just amounts
     * to calling unparkSuccessor of head if it needs signal.)
     */
    private void doReleaseShared() {
        /*
         * Ensure that a release propagates, even if there are other
         * in-progress acquires/releases. This proceeds in the usual
         * way of trying to unparkSuccessor of head if it needs
         * signal. But if it does not, status is set to PROPAGATE to
         * ensure that upon release, propagation continues.
         * Additionally, we must loop in case a new node is added
         * while we are doing this. Also, unlike other uses of
         * unparkSuccessor, we need to know if CAS to reset status
         * fails, if so rechecking.
         */
        for (;;) {
            Node h = head;
            if (h !is null && h != tail) {
                int ws = h.waitStatus;
                if (ws == Node.SIGNAL) {
                    if (!h.compareAndSetWaitStatus(Node.SIGNAL, 0))
                        continue;            // loop to recheck cases
                    unparkSuccessor(h);
                }
                else if (ws == 0 &&
                         !h.compareAndSetWaitStatus(0, Node.PROPAGATE))
                    continue;                // loop on failed CAS
            }
            if (h == head)                   // loop if head changed
                break;
        }
    }

    /**
     * Sets head of queue, and checks if successor may be waiting
     * in shared mode, if so propagating if either propagate > 0 or
     * PROPAGATE status was set.
     *
     * @param node the node
     * @param propagate the return value from a tryAcquireShared
     */
    private void setHeadAndPropagate(Node node, int propagate) {
        Node h = head; // Record old head for check below
        setHead(node);
        /*
         * Try to signal next queued node if:
         *   Propagation was indicated by caller,
         *     or was recorded (as h.waitStatus either before
         *     or after setHead) by a previous operation
         *     (note: this uses sign-check of waitStatus because
         *     PROPAGATE status may transition to SIGNAL.)
         * and
         *   The next node is waiting in shared mode,
         *     or we don't know, because it appears null
         *
         * The conservatism in both of these checks may cause
         * unnecessary wake-ups, but only when there are multiple
         * racing acquires/releases, so most need signals now or soon
         * anyway.
         */
        if (propagate > 0 || h is null || h.waitStatus < 0 ||
            (h = head) is null || h.waitStatus < 0) {
            Node s = node.next;
            if (s is null || s.isShared())
                doReleaseShared();
        }
    }

    // Utilities for various versions of acquire

    /**
     * Cancels an ongoing attempt to acquire.
     *
     * @param node the node
     */
    private void cancelAcquire(Node node) {
        // Ignore if node doesn't exist
        if (node is null)
            return;

        node.thread = null;

        // Skip cancelled predecessors
        Node pred = node.prev;
        while (pred.waitStatus > 0)
            node.prev = pred = pred.prev;

        // predNext is the apparent node to unsplice. CASes below will
        // fail if not, in which case, we lost race vs another cancel
        // or signal, so no further action is necessary, although with
        // a possibility that a cancelled node may transiently remain
        // reachable.
        Node predNext = pred.next;

        // Can use unconditional write instead of CAS here.
        // After this atomic step, other Nodes can skip past us.
        // Before, we are free of interference from other threads.
        node.waitStatus = Node.CANCELLED;

        // If we are the tail, remove ourselves.
        if (node == tail && compareAndSetTail(node, pred)) {
            pred.compareAndSetNext(predNext, null);
        } else {
            // If successor needs signal, try to set pred's next-link
            // so it will get one. Otherwise wake it up to propagate.
            int ws;
            if (pred != head &&
                ((ws = pred.waitStatus) == Node.SIGNAL ||
                 (ws <= 0 && pred.compareAndSetWaitStatus(ws, Node.SIGNAL))) &&
                pred.thread !is null) {
                Node next = node.next;
                if (next !is null && next.waitStatus <= 0)
                    pred.compareAndSetNext(predNext, next);
            } else {
                unparkSuccessor(node);
            }

            node.next = node; // help GC
        }
    }

    /**
     * Checks and updates status for a node that failed to acquire.
     * Returns true if thread should block. This is the main signal
     * control in all acquire loops. Requires that pred == node.prev.
     *
     * @param pred node's predecessor holding status
     * @param node the node
     * @return {@code true} if thread should block
     */
    private static bool shouldParkAfterFailedAcquire(Node pred, Node node) {
        int ws = pred.waitStatus;
        if (ws == Node.SIGNAL)
            /*
             * This node has already set status asking a release
             * to signal it, so it can safely park.
             */
            return true;
        if (ws > 0) {
            /*
             * Predecessor was cancelled. Skip over predecessors and
             * indicate retry.
             */
            do {
                node.prev = pred = pred.prev;
            } while (pred.waitStatus > 0);
            pred.next = node;
        } else {
            /*
             * waitStatus must be 0 or PROPAGATE. Indicate that we
             * need a signal, but don't park yet. Caller will need to
             * retry to make sure it cannot acquire before parking.
             */
            pred.compareAndSetWaitStatus(ws, Node.SIGNAL);
        }
        return false;
    }

    /**
     * Convenience method to interrupt current thread.
     */
    static void selfInterrupt() {
        // Thread.getThis().interrupt();
        implementationMissing(false);
    }

    /**
     * Convenience method to park and then check if interrupted.
     *
     * @return {@code true} if interrupted
     */
    private final bool parkAndCheckInterrupt() {
        // LockSupport.park(this);
        // return Thread.interrupted();
        implementationMissing(false);
        return true;
    }

    /*
     * Various flavors of acquire, varying in exclusive/shared and
     * control modes. Each is mostly the same, but annoyingly
     * different. Only a little bit of factoring is possible due to
     * interactions of exception mechanics (including ensuring that we
     * cancel if tryAcquire throws exception) and other control, at
     * least not without hurting performance too much.
     */

    /**
     * Acquires in exclusive uninterruptible mode for thread already in
     * queue. Used by condition wait methods as well as acquire.
     *
     * @param node the node
     * @param arg the acquire argument
     * @return {@code true} if interrupted while waiting
     */
    final bool acquireQueued(Node node, int arg) {
        bool interrupted = false;
        try {
            for (;;) {
                Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    return interrupted;
                }
                if (shouldParkAfterFailedAcquire(p, node))
                    interrupted |= parkAndCheckInterrupt();
            }
        } catch (Throwable t) {
            cancelAcquire(node);
            if (interrupted)
                selfInterrupt();
            throw t;
        }
    }

    /**
     * Acquires in exclusive interruptible mode.
     * @param arg the acquire argument
     */
    private void doAcquireInterruptibly(int arg) {
        Node node = addWaiter(Node.EXCLUSIVE);
        try {
            for (;;) {
                Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    return;
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    throw new InterruptedException();
            }
        } catch (Throwable t) {
            cancelAcquire(node);
            throw t;
        }
    }

    /**
     * Acquires in exclusive timed mode.
     *
     * @param arg the acquire argument
     * @param nanosTimeout max wait time
     * @return {@code true} if acquired
     */
    private bool doAcquireNanos(int arg, long nanosTimeout) {
        if (nanosTimeout <= 0L)
            return false;
        // Clock.currStdTime() is in hecto-nanoseconds (100-ns units),
        // so scale it to nanoseconds before mixing with nanosTimeout.
        long deadline = Clock.currStdTime() * 100 + nanosTimeout;
        Node node = addWaiter(Node.EXCLUSIVE);
        try {
            for (;;) {
                Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    return true;
                }
                nanosTimeout = deadline - Clock.currStdTime() * 100;
                if (nanosTimeout <= 0L) {
                    cancelAcquire(node);
                    return false;
                }
                implementationMissing(false);
                // if (shouldParkAfterFailedAcquire(p, node) &&
                //     nanosTimeout > SPIN_FOR_TIMEOUT_THRESHOLD)
                //     LockSupport.parkNanos(this, nanosTimeout);
                // if (Thread.interrupted())
                //     throw new InterruptedException();
            }
        } catch (Throwable t) {
            cancelAcquire(node);
            throw t;
        }
    }

    /**
     * Acquires in shared uninterruptible mode.
     * @param arg the acquire argument
     */
    private void doAcquireShared(int arg) {
        Node node = addWaiter(Node.SHARED);
        bool interrupted = false;
        try {
            for (;;) {
                Node p = node.predecessor();
                if (p == head) {
                    int r = tryAcquireShared(arg);
                    if (r >= 0) {
                        setHeadAndPropagate(node, r);
                        p.next = null; // help GC
                        return;
                    }
                }
                if (shouldParkAfterFailedAcquire(p, node))
                    interrupted |= parkAndCheckInterrupt();
            }
        } catch (Throwable t) {
            cancelAcquire(node);
            throw t;
        } finally {
            if (interrupted)
                selfInterrupt();
        }
    }

    /**
     * Acquires in shared interruptible mode.
     * @param arg the acquire argument
     */
    private void doAcquireSharedInterruptibly(int arg) {
        Node node = addWaiter(Node.SHARED);
        try {
            for (;;) {
                Node p = node.predecessor();
                if (p == head) {
                    int r = tryAcquireShared(arg);
                    if (r >= 0) {
                        setHeadAndPropagate(node, r);
                        p.next = null; // help GC
                        return;
                    }
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    throw new InterruptedException();
            }
        } catch (Throwable t) {
            cancelAcquire(node);
            throw t;
        }
    }

    /**
     * Acquires in shared timed mode.
     *
     * @param arg the acquire argument
     * @param nanosTimeout max wait time
     * @return {@code true} if acquired
     */
    private bool doAcquireSharedNanos(int arg, long nanosTimeout) {
        if (nanosTimeout <= 0L)
            return false;
        // Clock.currStdTime() is in hecto-nanoseconds (100-ns units),
        // so scale it to nanoseconds before mixing with nanosTimeout.
        long deadline = Clock.currStdTime() * 100 + nanosTimeout;
        Node node = addWaiter(Node.SHARED);
        try {
            for (;;) {
                Node p = node.predecessor();
                if (p == head) {
                    int r = tryAcquireShared(arg);
                    if (r >= 0) {
                        setHeadAndPropagate(node, r);
                        p.next = null; // help GC
                        return true;
                    }
                }
                nanosTimeout = deadline - Clock.currStdTime() * 100;
                if (nanosTimeout <= 0L) {
                    cancelAcquire(node);
                    return false;
                }
                implementationMissing(false);
                // if (shouldParkAfterFailedAcquire(p, node) &&
                //     nanosTimeout > SPIN_FOR_TIMEOUT_THRESHOLD)
                //     LockSupport.parkNanos(this, nanosTimeout);
                // if (Thread.interrupted())
                //     throw new InterruptedException();
            }
        } catch (Throwable t) {
            cancelAcquire(node);
            throw t;
        }
    }

    // Main exported methods

    /**
     * Attempts to acquire in exclusive mode. This method should query
     * if the state of the object permits it to be acquired in the
     * exclusive mode, and if so to acquire it.
     *
     * <p>This method is always invoked by the thread performing
     * acquire. If this method reports failure, the acquire method
     * may queue the thread, if it is not already queued, until it is
     * signalled by a release from some other thread. This can be used
     * to implement method {@link Lock#tryLock()}.
     *
     * <p>The default
     * implementation throws {@link UnsupportedOperationException}.
     *
     * @param arg the acquire argument. This value is always the one
     *        passed to an acquire method, or is the value saved on entry
     *        to a condition wait. The value is otherwise uninterpreted
     *        and can represent anything you like.
     * @return {@code true} if successful.
     *         Upon success, this object has
     *         been acquired.
     * @throws IllegalMonitorStateException if acquiring would place this
     *         synchronizer in an illegal state. This exception must be
     *         thrown in a consistent fashion for synchronization to work
     *         correctly.
     * @throws UnsupportedOperationException if exclusive mode is not supported
     */
    protected bool tryAcquire(int arg) {
        throw new UnsupportedOperationException();
    }

    /**
     * Attempts to set the state to reflect a release in exclusive
     * mode.
     *
     * <p>This method is always invoked by the thread performing release.
     *
     * <p>The default implementation throws
     * {@link UnsupportedOperationException}.
     *
     * @param arg the release argument. This value is always the one
     *        passed to a release method, or the current state value upon
     *        entry to a condition wait. The value is otherwise
     *        uninterpreted and can represent anything you like.
     * @return {@code true} if this object is now in a fully released
     *         state, so that any waiting threads may attempt to acquire;
     *         and {@code false} otherwise.
     * @throws IllegalMonitorStateException if releasing would place this
     *         synchronizer in an illegal state. This exception must be
     *         thrown in a consistent fashion for synchronization to work
     *         correctly.
     * @throws UnsupportedOperationException if exclusive mode is not supported
     */
    protected bool tryRelease(int arg) {
        throw new UnsupportedOperationException();
    }

    /**
     * Attempts to acquire in shared mode. This method should query if
     * the state of the object permits it to be acquired in the shared
     * mode, and if so to acquire it.
     *
     * <p>This method is always invoked by the thread performing
     * acquire. If this method reports failure, the acquire method
     * may queue the thread, if it is not already queued, until it is
     * signalled by a release from some other thread.
     *
     * <p>The default implementation throws {@link
     * UnsupportedOperationException}.
     *
     * @param arg the acquire argument. This value is always the one
     *        passed to an acquire method, or is the value saved on entry
     *        to a condition wait. The value is otherwise uninterpreted
     *        and can represent anything you like.
     * @return a negative value on failure; zero if acquisition in shared
     *         mode succeeded but no subsequent shared-mode acquire can
     *         succeed; and a positive value if acquisition in shared
     *         mode succeeded and subsequent shared-mode acquires might
     *         also succeed, in which case a subsequent waiting thread
     *         must check availability. (Support for three different
     *         return values enables this method to be used in contexts
     *         where acquires only sometimes act exclusively.) Upon
     *         success, this object has been acquired.
     * @throws IllegalMonitorStateException if acquiring would place this
     *         synchronizer in an illegal state. This exception must be
     *         thrown in a consistent fashion for synchronization to work
     *         correctly.
     * @throws UnsupportedOperationException if shared mode is not supported
     */
    protected int tryAcquireShared(int arg) {
        throw new UnsupportedOperationException();
    }

    /**
     * Attempts to set the state to reflect a release in shared mode.
     *
     * <p>This method is always invoked by the thread performing release.
     *
     * <p>The default implementation throws
     * {@link UnsupportedOperationException}.
     *
     * @param arg the release argument. This value is always the one
     *        passed to a release method, or the current state value upon
     *        entry to a condition wait. The value is otherwise
     *        uninterpreted and can represent anything you like.
     * @return {@code true} if this release of shared mode may permit a
     *         waiting acquire (shared or exclusive) to succeed; and
     *         {@code false} otherwise
     * @throws IllegalMonitorStateException if releasing would place this
     *         synchronizer in an illegal state. This exception must be
     *         thrown in a consistent fashion for synchronization to work
     *         correctly.
     * @throws UnsupportedOperationException if shared mode is not supported
     */
    protected bool tryReleaseShared(int arg) {
        throw new UnsupportedOperationException();
    }

    /**
     * Returns {@code true} if synchronization is held exclusively with
     * respect to the current (calling) thread. This method is invoked
     * upon each call to a {@link ConditionObject} method.
     *
     * <p>The default implementation throws {@link
     * UnsupportedOperationException}. This method is invoked
     * internally only within {@link ConditionObject} methods, so need
     * not be defined if conditions are not used.
     *
     * @return {@code true} if synchronization is held exclusively;
     *         {@code false} otherwise
     * @throws UnsupportedOperationException if conditions are not supported
     */
    protected bool isHeldExclusively() {
        throw new UnsupportedOperationException();
    }

    /**
     * Acquires in exclusive mode, ignoring interrupts. Implemented
     * by invoking at least once {@link #tryAcquire},
     * returning on success. Otherwise the thread is queued, possibly
     * repeatedly blocking and unblocking, invoking {@link
     * #tryAcquire} until success. This method can be used
     * to implement method {@link Lock#lock}.
     *
     * @param arg the acquire argument. This value is conveyed to
     *        {@link #tryAcquire} but is otherwise uninterpreted and
     *        can represent anything you like.
     */
    final void acquire(int arg) {
        if (!tryAcquire(arg) &&
            acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
            selfInterrupt();
    }

    /**
     * Acquires in exclusive mode, aborting if interrupted.
     * Implemented by first checking interrupt status, then invoking
     * at least once {@link #tryAcquire}, returning on
     * success. Otherwise the thread is queued, possibly repeatedly
     * blocking and unblocking, invoking {@link #tryAcquire}
     * until success or the thread is interrupted. This method can be
     * used to implement method {@link Lock#lockInterruptibly}.
     *
     * @param arg the acquire argument. This value is conveyed to
     *        {@link #tryAcquire} but is otherwise uninterpreted and
     *        can represent anything you like.
     * @throws InterruptedException if the current thread is interrupted
     */
    final void acquireInterruptibly(int arg) {
        // if (Thread.interrupted())
        //     throw new InterruptedException();
        if (!tryAcquire(arg))
            doAcquireInterruptibly(arg);
    }

    /**
     * Attempts to acquire in exclusive mode, aborting if interrupted,
     * and failing if the given timeout elapses. Implemented by first
     * checking interrupt status, then invoking at least once {@link
     * #tryAcquire}, returning on success. Otherwise, the thread is
     * queued, possibly repeatedly blocking and unblocking, invoking
     * {@link #tryAcquire} until success or the thread is interrupted
     * or the timeout elapses. This method can be used to implement
     * method {@link Lock#tryLock(long, TimeUnit)}.
     *
     * @param arg the acquire argument. This value is conveyed to
     *        {@link #tryAcquire} but is otherwise uninterpreted and
     *        can represent anything you like.
     * @param nanosTimeout the maximum number of nanoseconds to wait
     * @return {@code true} if acquired; {@code false} if timed out
     * @throws InterruptedException if the current thread is interrupted
     */
    final bool tryAcquireNanos(int arg, long nanosTimeout) {
        // if (Thread.interrupted())
        //     throw new InterruptedException();
        return tryAcquire(arg) ||
            doAcquireNanos(arg, nanosTimeout);
    }

    /**
     * Releases in exclusive mode. Implemented by unblocking one or
     * more threads if {@link #tryRelease} returns true.
     * This method can be used to implement method {@link Lock#unlock}.
     *
     * @param arg the release argument. This value is conveyed to
     *        {@link #tryRelease} but is otherwise uninterpreted and
     *        can represent anything you like.
     * @return the value returned from {@link #tryRelease}
     */
    final bool release(int arg) {
        if (tryRelease(arg)) {
            Node h = head;
            if (h !is null && h.waitStatus != 0)
                unparkSuccessor(h);
            return true;
        }
        return false;
    }

    /**
     * Acquires in shared mode, ignoring interrupts. Implemented by
     * first invoking at least once {@link #tryAcquireShared},
     * returning on success. Otherwise the thread is queued, possibly
     * repeatedly blocking and unblocking, invoking {@link
     * #tryAcquireShared} until success.
     *
     * @param arg the acquire argument. This value is conveyed to
     *        {@link #tryAcquireShared} but is otherwise uninterpreted
     *        and can represent anything you like.
     */
    final void acquireShared(int arg) {
        if (tryAcquireShared(arg) < 0)
            doAcquireShared(arg);
    }

    /**
     * Acquires in shared mode, aborting if interrupted. Implemented
     * by first checking interrupt status, then invoking at least once
     * {@link #tryAcquireShared}, returning on success.
     * Otherwise the thread is queued, possibly repeatedly blocking and
     * unblocking, invoking {@link #tryAcquireShared} until success or
     * the thread is interrupted.
     *
     * @param arg the acquire argument.
     *        This value is conveyed to {@link #tryAcquireShared} but is
     *        otherwise uninterpreted and can represent anything
     *        you like.
     * @throws InterruptedException if the current thread is interrupted
     */
    final void acquireSharedInterruptibly(int arg) {
        // if (Thread.interrupted())
        //     throw new InterruptedException();
        if (tryAcquireShared(arg) < 0)
            doAcquireSharedInterruptibly(arg);
    }

    /**
     * Attempts to acquire in shared mode, aborting if interrupted, and
     * failing if the given timeout elapses. Implemented by first
     * checking interrupt status, then invoking at least once {@link
     * #tryAcquireShared}, returning on success. Otherwise, the
     * thread is queued, possibly repeatedly blocking and unblocking,
     * invoking {@link #tryAcquireShared} until success or the thread
     * is interrupted or the timeout elapses.
     *
     * @param arg the acquire argument. This value is conveyed to
     *        {@link #tryAcquireShared} but is otherwise uninterpreted
     *        and can represent anything you like.
     * @param nanosTimeout the maximum number of nanoseconds to wait
     * @return {@code true} if acquired; {@code false} if timed out
     * @throws InterruptedException if the current thread is interrupted
     */
    final bool tryAcquireSharedNanos(int arg, long nanosTimeout) {
        // if (Thread.interrupted())
        //     throw new InterruptedException();
        return tryAcquireShared(arg) >= 0 ||
            doAcquireSharedNanos(arg, nanosTimeout);
    }

    /**
     * Releases in shared mode. Implemented by unblocking one or more
     * threads if {@link #tryReleaseShared} returns true.
     *
     * @param arg the release argument.
     *        This value is conveyed to
     *        {@link #tryReleaseShared} but is otherwise uninterpreted
     *        and can represent anything you like.
     * @return the value returned from {@link #tryReleaseShared}
     */
    final bool releaseShared(int arg) {
        if (tryReleaseShared(arg)) {
            doReleaseShared();
            return true;
        }
        return false;
    }

    // Queue inspection methods

    /**
     * Queries whether any threads are waiting to acquire. Note that
     * because cancellations due to interrupts and timeouts may occur
     * at any time, a {@code true} return does not guarantee that any
     * other thread will ever acquire.
     *
     * @return {@code true} if there may be other threads waiting to acquire
     */
    final bool hasQueuedThreads() {
        for (Node p = tail, h = head; p != h && p !is null; p = p.prev)
            if (p.waitStatus <= 0)
                return true;
        return false;
    }

    /**
     * Queries whether any threads have ever contended to acquire this
     * synchronizer; that is, if an acquire method has ever blocked.
     *
     * <p>In this implementation, this operation returns in
     * constant time.
     *
     * @return {@code true} if there has ever been contention
     */
    final bool hasContended() {
        return head !is null;
    }

    /**
     * Returns the first (longest-waiting) thread in the queue, or
     * {@code null} if no threads are currently queued.
     *
     * <p>In this implementation, this operation normally returns in
     * constant time, but may iterate upon contention if other threads are
     * concurrently modifying the queue.
     *
     * @return the first (longest-waiting) thread in the queue, or
     *         {@code null} if no threads are currently queued
     */
    final Thread getFirstQueuedThread() {
        // handle only fast path, else relay
        return (head == tail) ?
            null : fullGetFirstQueuedThread();
    }

    /**
     * Version of getFirstQueuedThread called when fastpath fails.
     */
    private Thread fullGetFirstQueuedThread() {
        /*
         * The first node is normally head.next. Try to get its
         * thread field, ensuring consistent reads: If thread
         * field is nulled out or s.prev is no longer head, then
         * some other thread(s) concurrently performed setHead in
         * between some of our reads. We try this twice before
         * resorting to traversal.
         */
        Node h, s;
        Thread st;
        if (((h = head) !is null && (s = h.next) !is null &&
             s.prev == head && (st = s.thread) !is null) ||
            ((h = head) !is null && (s = h.next) !is null &&
             s.prev == head && (st = s.thread) !is null))
            return st;

        /*
         * Head's next field might not have been set yet, or may have
         * been unset after setHead. So we must check to see if tail
         * is actually first node. If not, we continue on, safely
         * traversing from tail back to head to find first,
         * guaranteeing termination.
         */

        Thread firstThread = null;
        for (Node p = tail; p !is null && p != head; p = p.prev) {
            Thread t = p.thread;
            if (t !is null)
                firstThread = t;
        }
        return firstThread;
    }

    /**
     * Returns true if the given thread is currently queued.
     *
     * <p>This implementation traverses the queue to determine
     * presence of the given thread.
     *
     * @param thread the thread
     * @return {@code true} if the given thread is on the queue
     * @throws NullPointerException if the thread is null
     */
    final bool isQueued(Thread thread) {
        if (thread is null)
            throw new NullPointerException();
        for (Node p = tail; p !is null; p = p.prev)
            if (p.thread == thread)
                return true;
        return false;
    }

    /**
     * Returns {@code true} if the apparent first queued thread, if one
     * exists, is waiting in exclusive mode. If this method returns
     * {@code true}, and the current thread is attempting to acquire in
     * shared mode (that is, this method is invoked from {@link
     * #tryAcquireShared}) then it is guaranteed that the current thread
     * is not the first queued thread. Used only as a heuristic in
     * ReentrantReadWriteLock.
     */
    final bool apparentlyFirstQueuedIsExclusive() {
        Node h, s;
        return (h = head) !is null &&
            (s = h.next) !is null &&
            !s.isShared() &&
            s.thread !is null;
    }

    /**
     * Queries whether any threads have been waiting to acquire longer
     * than the current thread.
     *
     * <p>An invocation of this method is equivalent to (but may be
     * more efficient than):
     * <pre> {@code
     * getFirstQueuedThread() != Thread.getThis()
     *     && hasQueuedThreads()}</pre>
     *
     * <p>Note that because cancellations due to interrupts and
     * timeouts may occur at any time, a {@code true} return does not
     * guarantee that some other thread will acquire before the current
     * thread. Likewise, it is possible for another thread to win a
     * race to enqueue after this method has returned {@code false},
     * due to the queue being empty.
     *
     * <p>This method is designed to be used by a fair synchronizer to
     * avoid <a href="AbstractQueuedSynchronizer.html#barging">barging</a>.
     * Such a synchronizer's {@link #tryAcquire} method should return
     * {@code false}, and its {@link #tryAcquireShared} method should
     * return a negative value, if this method returns {@code true}
     * (unless this is a reentrant acquire). For example, the {@code
     * tryAcquire} method for a fair, reentrant, exclusive mode
     * synchronizer might look like this:
     *
     * <pre> {@code
     * protected bool tryAcquire(int arg) {
     *     if (isHeldExclusively()) {
     *         // A reentrant acquire; increment hold count
     *         return true;
     *     } else if (hasQueuedPredecessors()) {
     *         return false;
     *     } else {
     *         // try to acquire normally
     *     }
     * }}</pre>
     *
     * @return {@code true} if there is a queued thread preceding the
     *         current thread, and {@code false} if the current thread
     *         is at the head of the queue or the queue is empty
     * @since 1.7
     */
    final bool hasQueuedPredecessors() {
        Node h, s;
        if ((h = head) !is null) {
            if ((s = h.next) is null || s.waitStatus > 0) {
                s = null; // traverse in case of concurrent cancellation
                for (Node p = tail; p != h && p !is null; p = p.prev) {
                    if (p.waitStatus <= 0)
                        s = p;
                }
            }
            if (s !is null && s.thread != Thread.getThis())
                return true;
        }
        return false;
    }

    // Instrumentation and monitoring methods

    /**
     * Returns an estimate of the number of threads waiting to
     * acquire. The value is only an estimate because the number of
     * threads may change dynamically while this method traverses
     * internal data structures. This method is designed for use in
     * monitoring system state, not for synchronization control.
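     *
     * <p>As an illustration, a monitoring loop (hypothetical, not part
     * of this library) might periodically sample the estimate; the
     * {@code sync} instance and {@code running} flag are assumed:
     *
     * <pre> {@code
     * // sync is some AbstractQueuedSynchronizer subclass instance
     * while (running) {
     *     writeln("waiting threads: ", sync.getQueueLength());
     *     Thread.sleep(1.seconds);
     * }}</pre>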
     *
     * @return the estimated number of threads waiting to acquire
     */
    final int getQueueLength() {
        int n = 0;
        for (Node p = tail; p !is null; p = p.prev) {
            if (p.thread !is null)
                ++n;
        }
        return n;
    }

    /**
     * Returns a collection containing threads that may be waiting to
     * acquire. Because the actual set of threads may change
     * dynamically while constructing this result, the returned
     * collection is only a best-effort estimate. The elements of the
     * returned collection are in no particular order. This method is
     * designed to facilitate construction of subclasses that provide
     * more extensive monitoring facilities.
     *
     * @return the collection of threads
     */
    final Collection!(Thread) getQueuedThreads() {
        ArrayList!(Thread) list = new ArrayList!(Thread)();
        for (Node p = tail; p !is null; p = p.prev) {
            Thread t = p.thread;
            if (t !is null)
                list.add(t);
        }
        return list;
    }

    /**
     * Returns a collection containing threads that may be waiting to
     * acquire in exclusive mode. This has the same properties
     * as {@link #getQueuedThreads} except that it only returns
     * those threads waiting due to an exclusive acquire.
     *
     * @return the collection of threads
     */
    final Collection!(Thread) getExclusiveQueuedThreads() {
        ArrayList!(Thread) list = new ArrayList!(Thread)();
        for (Node p = tail; p !is null; p = p.prev) {
            if (!p.isShared()) {
                Thread t = p.thread;
                if (t !is null)
                    list.add(t);
            }
        }
        return list;
    }

    /**
     * Returns a collection containing threads that may be waiting to
     * acquire in shared mode. This has the same properties
     * as {@link #getQueuedThreads} except that it only returns
     * those threads waiting due to a shared acquire.
     *
     * @return the collection of threads
     */
    final Collection!(Thread) getSharedQueuedThreads() {
        ArrayList!(Thread) list = new ArrayList!(Thread)();
        for (Node p = tail; p !is null; p = p.prev) {
            if (p.isShared()) {
                Thread t = p.thread;
                if (t !is null)
                    list.add(t);
            }
        }
        return list;
    }

    /**
     * Returns a string identifying this synchronizer, as well as its state.
     * The state, in brackets, includes the string {@code "State ="}
     * followed by the current value of {@link #getState}, and either
     * {@code "nonempty"} or {@code "empty"} depending on whether the
     * queue is empty.
     *
     * @return a string identifying this synchronizer, as well as its state
     */
    override string toString() {
        return super.toString()
            ~ "[State = " ~ getState().to!string() ~ ", "
            ~ (hasQueuedThreads() ? "non" : "") ~ "empty queue]";
    }


    // Internal support methods for Conditions

    /**
     * Returns true if a node, always one that was initially placed on
     * a condition queue, is now waiting to reacquire on sync queue.
     * @param node the node
     * @return true if is reacquiring
     */
    final bool isOnSyncQueue(Node node) {
        if (node.waitStatus == Node.CONDITION || node.prev is null)
            return false;
        if (node.next !is null) // If has successor, it must be on queue
            return true;
        /*
         * node.prev can be non-null, but not yet on queue because
         * the CAS to place it on queue can fail. So we have to
         * traverse from tail to make sure it actually made it. It
         * will always be near the tail in calls to this method, and
         * unless the CAS failed (which is unlikely), it will be
         * there, so we hardly ever traverse much.
         */
        return findNodeFromTail(node);
    }

    /**
     * Returns true if node is on sync queue by searching backwards from tail.
     * Called only when needed by isOnSyncQueue.
     * @return true if present
     */
    private bool findNodeFromTail(Node node) {
        // We check for node first, since it's likely to be at or near tail.
        // tail is known to be non-null, so we could re-order to "save"
        // one null check, but we leave it this way to help the VM.
        for (Node p = tail;;) {
            if (p == node)
                return true;
            if (p is null)
                return false;
            p = p.prev;
        }
    }

    /**
     * Transfers a node from a condition queue onto sync queue.
     * Returns true if successful.
     * @param node the node
     * @return true if successfully transferred (else the node was
     *         cancelled before signal)
     */
    final bool transferForSignal(Node node) {
        /*
         * If cannot change waitStatus, the node has been cancelled.
         */
        if (!node.compareAndSetWaitStatus(Node.CONDITION, 0))
            return false;

        /*
         * Splice onto queue and try to set waitStatus of predecessor to
         * indicate that thread is (probably) waiting. If cancelled or
         * attempt to set waitStatus fails, wake up to resync (in which
         * case the waitStatus can be transiently and harmlessly wrong).
         */
        Node p = enq(node);
        int ws = p.waitStatus;
        implementationMissing(false);
        // if (ws > 0 || !p.compareAndSetWaitStatus(ws, Node.SIGNAL))
        //     LockSupport.unpark(node.thread);
        return true;
    }

    /**
     * Transfers node, if necessary, to sync queue after a cancelled wait.
     * Returns true if thread was cancelled before being signalled.
     *
     * @param node the node
     * @return true if cancelled before the node was signalled
     */
    final bool transferAfterCancelledWait(Node node) {
        if (node.compareAndSetWaitStatus(Node.CONDITION, 0)) {
            enq(node);
            return true;
        }
        /*
         * If we lost out to a signal(), then we can't proceed
         * until it finishes its enq().
         * Cancelling during an
         * incomplete transfer is both rare and transient, so just
         * spin.
         */
        while (!isOnSyncQueue(node))
            Thread.yield();
        return false;
    }

    /**
     * Invokes release with current state value; returns saved state.
     * Cancels node and throws exception on failure.
     * @param node the condition node for this wait
     * @return previous sync state
     */
    final int fullyRelease(Node node) {
        try {
            int savedState = getState();
            if (release(savedState))
                return savedState;
            throw new IllegalMonitorStateException();
        } catch (Throwable t) {
            node.waitStatus = Node.CANCELLED;
            throw t;
        }
    }

    // Instrumentation methods for conditions

    /**
     * Queries whether the given ConditionObject
     * uses this synchronizer as its lock.
     *
     * @param condition the condition
     * @return {@code true} if owned
     * @throws NullPointerException if the condition is null
     */
    // final bool owns(ConditionObject condition) {
    //     return condition.isOwnedBy(this);
    // }

    /**
     * Queries whether any threads are waiting on the given condition
     * associated with this synchronizer. Note that because timeouts
     * and interrupts may occur at any time, a {@code true} return
     * does not guarantee that a future {@code signal} will awaken
     * any threads. This method is designed primarily for use in
     * monitoring of the system state.
     *
     * @param condition the condition
     * @return {@code true} if there are any waiting threads
     * @throws IllegalMonitorStateException if exclusive synchronization
     *         is not held
     * @throws IllegalArgumentException if the given condition is
     *         not associated with this synchronizer
     * @throws NullPointerException if the condition is null
     */
    // final bool hasWaiters(ConditionObject condition) {
    //     if (!owns(condition))
    //         throw new IllegalArgumentException("Not owner");
    //     return condition.hasWaiters();
    // }

    /**
     * Returns an estimate of the number of threads waiting on the
     * given condition associated with this synchronizer. Note that
     * because timeouts and interrupts may occur at any time, the
     * estimate serves only as an upper bound on the actual number of
     * waiters. This method is designed for use in monitoring system
     * state, not for synchronization control.
     *
     * @param condition the condition
     * @return the estimated number of waiting threads
     * @throws IllegalMonitorStateException if exclusive synchronization
     *         is not held
     * @throws IllegalArgumentException if the given condition is
     *         not associated with this synchronizer
     * @throws NullPointerException if the condition is null
     */
    // final int getWaitQueueLength(ConditionObject condition) {
    //     if (!owns(condition))
    //         throw new IllegalArgumentException("Not owner");
    //     return condition.getWaitQueueLength();
    // }

    /**
     * Returns a collection containing those threads that may be
     * waiting on the given condition associated with this
     * synchronizer. Because the actual set of threads may change
     * dynamically while constructing this result, the returned
     * collection is only a best-effort estimate. The elements of the
     * returned collection are in no particular order.
     *
     * @param condition the condition
     * @return the collection of threads
     * @throws IllegalMonitorStateException if exclusive synchronization
     *         is not held
     * @throws IllegalArgumentException if the given condition is
     *         not associated with this synchronizer
     * @throws NullPointerException if the condition is null
     */
    // final Collection!(Thread) getWaitingThreads(ConditionObject condition) {
    //     if (!owns(condition))
    //         throw new IllegalArgumentException("Not owner");
    //     return condition.getWaitingThreads();
    // }

    /**
     * Condition implementation for a {@link AbstractQueuedSynchronizer}
     * serving as the basis of a {@link Lock} implementation.
     *
     * <p>Method documentation for this class describes mechanics,
     * not behavioral specifications from the point of view of Lock
     * and Condition users. Exported versions of this class will in
     * general need to be accompanied by documentation describing
     * condition semantics that rely on those of the associated
     * {@code AbstractQueuedSynchronizer}.
     *
     * <p>This class is Serializable, but all fields are transient,
     * so deserialized conditions have no waiters.
     */
    // class ConditionObject : Condition {
    //     /** First node of condition queue. */
    //     private Node firstWaiter;
    //     /** Last node of condition queue. */
    //     private Node lastWaiter;

    //     /**
    //      * Creates a new {@code ConditionObject} instance.
    //      */
    //     this() { }

    //     // Internal methods

    //     /**
    //      * Adds a new waiter to wait queue.
    //      * @return its new wait node
    //      */
    //     private Node addConditionWaiter() {
    //         if (!isHeldExclusively())
    //             throw new IllegalMonitorStateException();
    //         Node t = lastWaiter;
    //         // If lastWaiter is cancelled, clean out.
    //         if (t !is null && t.waitStatus != Node.CONDITION) {
    //             unlinkCancelledWaiters();
    //             t = lastWaiter;
    //         }

    //         Node node = new Node(Node.CONDITION);

    //         if (t is null)
    //             firstWaiter = node;
    //         else
    //             t.nextWaiter = node;
    //         lastWaiter = node;
    //         return node;
    //     }

    //     /**
    //      * Removes and transfers nodes until hitting a non-cancelled
    //      * one or null. Split out from signal in part to encourage
    //      * compilers to inline the case of no waiters.
    //      * @param first (non-null) the first node on condition queue
    //      */
    //     private void doSignal(Node first) {
    //         do {
    //             if ((firstWaiter = first.nextWaiter) is null)
    //                 lastWaiter = null;
    //             first.nextWaiter = null;
    //         } while (!transferForSignal(first) &&
    //                  (first = firstWaiter) !is null);
    //     }

    //     /**
    //      * Removes and transfers all nodes.
    //      * @param first (non-null) the first node on condition queue
    //      */
    //     private void doSignalAll(Node first) {
    //         lastWaiter = firstWaiter = null;
    //         do {
    //             Node next = first.nextWaiter;
    //             first.nextWaiter = null;
    //             transferForSignal(first);
    //             first = next;
    //         } while (first !is null);
    //     }

    //     /**
    //      * Unlinks cancelled waiter nodes from condition queue.
    //      * Called only while holding lock. This is called when
    //      * cancellation occurred during condition wait, and upon
    //      * insertion of a new waiter when lastWaiter is seen to have
    //      * been cancelled. This method is needed to avoid garbage
    //      * retention in the absence of signals. So even though it may
    //      * require a full traversal, it comes into play only when
    //      * timeouts or cancellations occur in the absence of
    //      * signals.
    //      * It traverses all nodes rather than stopping at a
    //      * particular target to unlink all pointers to garbage nodes
    //      * without requiring many re-traversals during cancellation
    //      * storms.
    //      */
    //     private void unlinkCancelledWaiters() {
    //         Node t = firstWaiter;
    //         Node trail = null;
    //         while (t !is null) {
    //             Node next = t.nextWaiter;
    //             if (t.waitStatus != Node.CONDITION) {
    //                 t.nextWaiter = null;
    //                 if (trail is null)
    //                     firstWaiter = next;
    //                 else
    //                     trail.nextWaiter = next;
    //                 if (next is null)
    //                     lastWaiter = trail;
    //             }
    //             else
    //                 trail = t;
    //             t = next;
    //         }
    //     }

    //     // methods

    //     /**
    //      * Moves the longest-waiting thread, if one exists, from the
    //      * wait queue for this condition to the wait queue for the
    //      * owning lock.
    //      *
    //      * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
    //      *         returns {@code false}
    //      */
    //     final void signal() {
    //         if (!isHeldExclusively())
    //             throw new IllegalMonitorStateException();
    //         Node first = firstWaiter;
    //         if (first !is null)
    //             doSignal(first);
    //     }

    //     /**
    //      * Moves all threads from the wait queue for this condition to
    //      * the wait queue for the owning lock.
    //      *
    //      * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
    //      *         returns {@code false}
    //      */
    //     final void signalAll() {
    //         if (!isHeldExclusively())
    //             throw new IllegalMonitorStateException();
    //         Node first = firstWaiter;
    //         if (first !is null)
    //             doSignalAll(first);
    //     }

    //     /**
    //      * Implements uninterruptible condition wait.
    //      * <ol>
    //      * <li>Save lock state returned by {@link #getState}.
    //      * <li>Invoke {@link #release} with saved state as argument,
    //      *     throwing IllegalMonitorStateException if it fails.
    //      * <li>Block until signalled.
    //      * <li>Reacquire by invoking specialized version of
    //      *     {@link #acquire} with saved state as argument.
    //      * </ol>
    //      */
    //     final void awaitUninterruptibly() {
    //         Node node = addConditionWaiter();
    //         int savedState = fullyRelease(node);
    //         bool interrupted = false;
    //         while (!isOnSyncQueue(node)) {
    //             implementationMissing(false);
    //             // LockSupport.park(this);
    //             // if (Thread.interrupted())
    //             //     interrupted = true;
    //         }
    //         if (acquireQueued(node, savedState) || interrupted)
    //             selfInterrupt();
    //     }

    //     /*
    //      * For interruptible waits, we need to track whether to throw
    //      * InterruptedException, if interrupted while blocked on
    //      * condition, versus reinterrupt current thread, if
    //      * interrupted while blocked waiting to re-acquire.
    //      */

    //     /** Mode meaning to reinterrupt on exit from wait */
    //     private enum int REINTERRUPT = 1;
    //     /** Mode meaning to throw InterruptedException on exit from wait */
    //     private enum int THROW_IE = -1;

    //     /**
    //      * Checks for interrupt, returning THROW_IE if interrupted
    //      * before signalled, REINTERRUPT if after signalled, or
    //      * 0 if not interrupted.
    //      */
    //     private int checkInterruptWhileWaiting(Node node) {
    //         // return Thread.interrupted() ?
    //         //     (transferAfterCancelledWait(node) ? THROW_IE : REINTERRUPT) :
    //         //     0;
    //         // TODO: Tasks pending completion -@zxp at 10/14/2018, 3:41:57 PM
    //         //
    //         return 0;
    //     }

    //     /**
    //      * Throws InterruptedException, reinterrupts current thread, or
    //      * does nothing, depending on mode.
    //      */
    //     private void reportInterruptAfterWait(int interruptMode) {
    //         if (interruptMode == THROW_IE)
    //             throw new InterruptedException();
    //         else if (interruptMode == REINTERRUPT)
    //             selfInterrupt();
    //     }

    //     /**
    //      * Implements interruptible condition wait.
    //      * <ol>
    //      * <li>If current thread is interrupted, throw InterruptedException.
    //      * <li>Save lock state returned by {@link #getState}.
    //      * <li>Invoke {@link #release} with saved state as argument,
    //      *     throwing IllegalMonitorStateException if it fails.
    //      * <li>Block until signalled or interrupted.
    //      * <li>Reacquire by invoking specialized version of
    //      *     {@link #acquire} with saved state as argument.
    //      * <li>If interrupted while blocked in step 4, throw InterruptedException.
    //      * </ol>
    //      */
    //     final void await() {
    //         // if (Thread.interrupted())
    //         //     throw new InterruptedException();
    //         Node node = addConditionWaiter();
    //         int savedState = fullyRelease(node);
    //         int interruptMode = 0;
    //         while (!isOnSyncQueue(node)) {
    //             // LockSupport.park(this);
    //             implementationMissing(false);
    //             if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
    //                 break;
    //         }
    //         if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
    //             interruptMode = REINTERRUPT;
    //         if (node.nextWaiter !is null) // clean up if cancelled
    //             unlinkCancelledWaiters();
    //         if (interruptMode != 0)
    //             reportInterruptAfterWait(interruptMode);
    //     }

    //     /**
    //      * Implements timed condition wait.
    //      * <ol>
    //      * <li>If current thread is interrupted, throw InterruptedException.
    //      * <li>Save lock state returned by {@link #getState}.
    //      * <li>Invoke {@link #release} with saved state as argument,
    //      *     throwing IllegalMonitorStateException if it fails.
    //      * <li>Block until signalled, interrupted, or timed out.
    //      * <li>Reacquire by invoking specialized version of
    //      *     {@link #acquire} with saved state as argument.
    //      * <li>If interrupted while blocked in step 4, throw InterruptedException.
    //      * </ol>
    //      */
    //     final long awaitNanos(long nanosTimeout) {
    //         // if (Thread.interrupted())
    //         //     throw new InterruptedException();

    //         // We don't check for nanosTimeout <= 0L here, to allow
    //         // awaitNanos(0) as a way to "yield the lock".
    //         long deadline = Clock.currStdTime() + nanosTimeout;
    //         long initialNanos = nanosTimeout;
    //         Node node = addConditionWaiter();
    //         int savedState = fullyRelease(node);
    //         int interruptMode = 0;
    //         while (!isOnSyncQueue(node)) {
    //             if (nanosTimeout <= 0L) {
    //                 transferAfterCancelledWait(node);
    //                 break;
    //             }
    //             // TODO: Tasks pending completion -@zxp at 10/14/2018, 3:39:04 PM
    //             //
    //             implementationMissing(false);
    //             // if (nanosTimeout > SPIN_FOR_TIMEOUT_THRESHOLD)
    //             //     LockSupport.parkNanos(this, nanosTimeout);
    //             if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
    //                 break;
    //             nanosTimeout = deadline - Clock.currStdTime();
    //         }
    //         if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
    //             interruptMode = REINTERRUPT;
    //         if (node.nextWaiter !is null)
    //             unlinkCancelledWaiters();
    //         if (interruptMode != 0)
    //             reportInterruptAfterWait(interruptMode);
    //         long remaining = deadline - Clock.currStdTime(); // avoid overflow
    //         return (remaining <= initialNanos) ? remaining : long.min;
    //     }

    //     /**
    //      * Implements absolute timed condition wait.
    //      * <ol>
    //      * <li>If current thread is interrupted, throw InterruptedException.
    //      * <li>Save lock state returned by {@link #getState}.
        //  * <li>Invoke {@link #release} with saved state as argument,
        //  *     throwing IllegalMonitorStateException if it fails.
        //  * <li>Block until signalled, interrupted, or timed out.
        //  * <li>Reacquire by invoking specialized version of
        //  *     {@link #acquire} with saved state as argument.
        //  * <li>If interrupted while blocked in step 4, throw InterruptedException.
        //  * <li>If timed out while blocked in step 4, return false, else true.
        //  * </ol>
        //  */
        // final bool awaitUntil(SysTime deadline) {
        //     long abstime = deadline.stdTime();
        //     // if (Thread.interrupted())
        //     //     throw new InterruptedException();
        //     Node node = addConditionWaiter();
        //     int savedState = fullyRelease(node);
        //     bool timedout = false;
        //     int interruptMode = 0;
        //     while (!isOnSyncQueue(node)) {
        //         if (Clock.currStdTime() >= abstime) {
        //             timedout = transferAfterCancelledWait(node);
        //             break;
        //         }
        //         // LockSupport.parkUntil(this, abstime);
        //         // TODO: Tasks pending completion -@zxp at 10/14/2018, 3:38:12 PM
        //         //
        //         implementationMissing(false);
        //         if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
        //             break;
        //     }
        //     if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
        //         interruptMode = REINTERRUPT;
        //     if (node.nextWaiter !is null)
        //         unlinkCancelledWaiters();
        //     if (interruptMode != 0)
        //         reportInterruptAfterWait(interruptMode);
        //     return !timedout;
        // }

        // /**
        //  * Implements timed condition wait.
        //  * <ol>
        //  * <li>If current thread is interrupted, throw InterruptedException.
        //  * <li>Save lock state returned by {@link #getState}.
        //  * <li>Invoke {@link #release} with saved state as argument,
        //  *     throwing IllegalMonitorStateException if it fails.
        //  * <li>Block until signalled, interrupted, or timed out.
        //  * <li>Reacquire by invoking specialized version of
        //  *     {@link #acquire} with saved state as argument.
        //  * <li>If interrupted while blocked in step 4, throw InterruptedException.
        //  * <li>If timed out while blocked in step 4, return false, else true.
        //  * </ol>
        //  */
        // final bool await(Duration time) {
        //     long nanosTimeout = time.total!(TimeUnit.HectoNanosecond)();
        //     // if (Thread.interrupted())
        //     //     throw new InterruptedException();
        //     // We don't check for nanosTimeout <= 0L here, to allow
        //     // await(0, unit) as a way to "yield the lock".
        //     long deadline = Clock.currStdTime() + nanosTimeout;
        //     Node node = addConditionWaiter();
        //     int savedState = fullyRelease(node);
        //     bool timedout = false;
        //     int interruptMode = 0;
        //     while (!isOnSyncQueue(node)) {
        //         if (nanosTimeout <= 0L) {
        //             timedout = transferAfterCancelledWait(node);
        //             break;
        //         }
        //         // TODO: Tasks pending completion -@zxp at 10/14/2018, 3:37:19 PM
        //         //
        //         implementationMissing(false);
        //         // if (nanosTimeout > SPIN_FOR_TIMEOUT_THRESHOLD)
        //         //     LockSupport.parkNanos(this, nanosTimeout);
        //         if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
        //             break;
        //         nanosTimeout = deadline - Clock.currStdTime();
        //     }
        //     if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
        //         interruptMode = REINTERRUPT;
        //     if (node.nextWaiter !is null)
        //         unlinkCancelledWaiters();
        //     if (interruptMode != 0)
        //         reportInterruptAfterWait(interruptMode);
        //     return !timedout;
        // }

        // // support for instrumentation

        // /**
        //  * Returns true if this condition was created by the given
        //  * synchronization object.
        //  *
        //  * @return {@code true} if owned
        //  */
        // final bool isOwnedBy(AbstractQueuedSynchronizer sync) {
        //     return sync is this.outer; // identity check; AbstractQueuedSynchronizer.this
        // }

        // /**
        //  * Queries whether any threads are waiting on this condition.
        //  * Implements {@link AbstractQueuedSynchronizer#hasWaiters(ConditionObject)}.
        //  *
        //  * @return {@code true} if there are any waiting threads
        //  * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
        //  *         returns {@code false}
        //  */
        // protected final bool hasWaiters() {
        //     if (!isHeldExclusively())
        //         throw new IllegalMonitorStateException();
        //     for (Node w = firstWaiter; w !is null; w = w.nextWaiter) {
        //         if (w.waitStatus == Node.CONDITION)
        //             return true;
        //     }
        //     return false;
        // }

        // /**
        //  * Returns an estimate of the number of threads waiting on
        //  * this condition.
        //  * Implements {@link AbstractQueuedSynchronizer#getWaitQueueLength(ConditionObject)}.
        //  *
        //  * @return the estimated number of waiting threads
        //  * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
        //  *         returns {@code false}
        //  */
        // protected final int getWaitQueueLength() {
        //     if (!isHeldExclusively())
        //         throw new IllegalMonitorStateException();
        //     int n = 0;
        //     for (Node w = firstWaiter; w !is null; w = w.nextWaiter) {
        //         if (w.waitStatus == Node.CONDITION)
        //             ++n;
        //     }
        //     return n;
        // }

        // /**
        //  * Returns a collection containing those threads that may be
        //  * waiting on this Condition.
        //  * Implements {@link AbstractQueuedSynchronizer#getWaitingThreads(ConditionObject)}.
        //  *
        //  * @return the collection of threads
        //  * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
        //  *         returns {@code false}
        //  */
        // protected final Collection!Thread getWaitingThreads() {
        //     if (!isHeldExclusively())
        //         throw new IllegalMonitorStateException();
        //     ArrayList!Thread list = new ArrayList!Thread();
        //     for (Node w = firstWaiter; w !is null; w = w.nextWaiter) {
        //         if (w.waitStatus == Node.CONDITION) {
        //             Thread t = w.thread;
        //             if (t !is null)
        //                 list.add(t);
        //         }
        //     }
        //     return list;
        // }
    // }

    /**
     * Initializes head and tail fields on first contention.
     */
    private final void initializeSyncQueue() {
        Node h;
        if (AtomicHelper.compareAndSet(head, null, (h = new Node())))
            tail = h;
    }

    /**
     * CASes tail field.
     */
    private final bool compareAndSetTail(Node expect, Node update) {
        return AtomicHelper.compareAndSet(tail, expect, update);
    }
}


/**
 * Wait queue node class.
 *
 * <p>The wait queue is a variant of a "CLH" (Craig, Landin, and
 * Hagersten) lock queue. CLH locks are normally used for
 * spinlocks. We instead use them for blocking synchronizers, but
 * use the same basic tactic of holding some of the control
 * information about a thread in the predecessor of its node. A
 * "status" field in each node keeps track of whether a thread
 * should block. A node is signalled when its predecessor
 * releases. Each node of the queue otherwise serves as a
 * specific-notification-style monitor holding a single waiting
 * thread. The status field does NOT control whether threads are
 * granted locks etc though. A thread may try to acquire if it is
 * first in the queue. But being first does not guarantee success;
 * it only gives the right to contend.
 * So the currently released
 * contender thread may need to rewait.
 *
 * <p>To enqueue into a CLH lock, you atomically splice it in as new
 * tail. To dequeue, you just set the head field.
 * <pre>
 *      +------+  prev +-----+       +-----+
 * head |      | <---- |     | <---- |     |  tail
 *      +------+       +-----+       +-----+
 * </pre>
 *
 * <p>Insertion into a CLH queue requires only a single atomic
 * operation on "tail", so there is a simple atomic point of
 * demarcation from unqueued to queued. Similarly, dequeuing
 * involves only updating the "head". However, it takes a bit
 * more work for nodes to determine who their successors are,
 * in part to deal with possible cancellation due to timeouts
 * and interrupts.
 *
 * <p>The "prev" links (not used in original CLH locks) are mainly
 * needed to handle cancellation. If a node is cancelled, its
 * successor is (normally) relinked to a non-cancelled
 * predecessor. For an explanation of similar mechanics in the case
 * of spin locks, see the papers by Scott and Scherer at
 * http://www.cs.rochester.edu/u/scott/synchronization/
 *
 * <p>We also use "next" links to implement blocking mechanics.
 * The thread id for each node is kept in its own node, so a
 * predecessor signals the next node to wake up by traversing
 * the next link to determine which thread it is. Determination of
 * a successor must avoid races with newly queued nodes to set
 * the "next" fields of their predecessors. This is solved
 * when necessary by checking backwards from the atomically
 * updated "tail" when a node's successor appears to be null.
 * (Or, said differently, the next-links are an optimization
 * so that we don't usually need a backward scan.)
 *
 * <p>Cancellation introduces some conservatism to the basic
 * algorithms.
 * Since we must poll for cancellation of other
 * nodes, we can miss noticing whether a cancelled node is
 * ahead or behind us. This is dealt with by always unparking
 * successors upon cancellation, allowing them to stabilize on
 * a new predecessor, unless we can identify an uncancelled
 * predecessor who will carry this responsibility.
 *
 * <p>CLH queues need a dummy header node to get started. But
 * we don't create one on construction, because it would be wasted
 * effort if there is never contention. Instead, the node
 * is constructed and head and tail pointers are set upon first
 * contention.
 *
 * <p>Threads waiting on Conditions use the same nodes, but
 * use an additional link. Conditions only need to link nodes
 * in simple (non-concurrent) linked queues because they are
 * only accessed when exclusively held. Upon await, a node is
 * inserted into a condition queue. Upon signal, the node is
 * transferred to the main queue. A special value of the status
 * field is used to mark which queue a node is on.
 *
 * <p>Thanks go to Dave Dice, Mark Moir, Victor Luchangco, Bill
 * Scherer and Michael Scott, along with members of the JSR-166
 * expert group, for helpful ideas, discussions, and critiques
 * on the design of this class.
 */
private final class Node {
    /** Marker to indicate a node is waiting in shared mode */
    __gshared Node SHARED; // = new Node();
    /** Marker to indicate a node is waiting in exclusive mode */
    __gshared Node EXCLUSIVE = null;

    /** waitStatus value to indicate thread has cancelled. */
    enum int CANCELLED = 1;
    /** waitStatus value to indicate successor's thread needs unparking. */
    enum int SIGNAL = -1;
    /** waitStatus value to indicate thread is waiting on condition.
     */
    enum int CONDITION = -2;
    /**
     * waitStatus value to indicate the next acquireShared should
     * unconditionally propagate.
     */
    enum int PROPAGATE = -3;

    /**
     * Status field, taking on only the values:
     *   SIGNAL:     The successor of this node is (or will soon be)
     *               blocked (via park), so the current node must
     *               unpark its successor when it releases or
     *               cancels. To avoid races, acquire methods must
     *               first indicate they need a signal,
     *               then retry the atomic acquire, and then,
     *               on failure, block.
     *   CANCELLED:  This node is cancelled due to timeout or interrupt.
     *               Nodes never leave this state. In particular,
     *               a thread with a cancelled node never again blocks.
     *   CONDITION:  This node is currently on a condition queue.
     *               It will not be used as a sync queue node
     *               until transferred, at which time the status
     *               will be set to 0. (Use of this value here has
     *               nothing to do with the other uses of the
     *               field, but simplifies mechanics.)
     *   PROPAGATE:  A releaseShared should be propagated to other
     *               nodes. This is set (for head node only) in
     *               doReleaseShared to ensure propagation
     *               continues, even if other operations have
     *               since intervened.
     *   0:          None of the above
     *
     * The values are arranged numerically to simplify use.
     * Non-negative values mean that a node doesn't need to
     * signal. So, most code doesn't need to check for particular
     * values, just for sign.
     *
     * The field is initialized to 0 for normal sync nodes, and
     * CONDITION for condition nodes. It is modified using CAS
     * (or when possible, unconditional writes).
     */
    int waitStatus;

    /**
     * Link to predecessor node that current node/thread relies on
     * for checking waitStatus. Assigned during enqueuing, and nulled
     * out (for sake of GC) only upon dequeuing.
     * Also, upon cancellation of a predecessor, we short-circuit while
     * finding a non-cancelled one, which will always exist
     * because the head node is never cancelled: a node becomes
     * head only as a result of a successful acquire. A
     * cancelled thread never succeeds in acquiring, and a thread only
     * cancels itself, not any other node.
     */
    Node prev;

    /**
     * Link to the successor node that the current node/thread
     * unparks upon release. Assigned during enqueuing, adjusted
     * when bypassing cancelled predecessors, and nulled out (for
     * sake of GC) when dequeued. The enq operation does not
     * assign the next field of a predecessor until after attachment,
     * so seeing a null next field does not necessarily mean that
     * the node is at the end of the queue. However, if a next field
     * appears to be null, we can scan prev links from the tail to
     * double-check. The next field of cancelled nodes is set to
     * point to the node itself instead of null, to make life
     * easier for isOnSyncQueue.
     */
    Node next;

    /**
     * The thread that enqueued this node. Initialized on
     * construction and nulled out after use.
     */
    Thread thread;

    /**
     * Link to the next node waiting on a condition, or the special
     * value SHARED. Because condition queues are accessed only
     * when holding in exclusive mode, we just need a simple
     * linked queue to hold nodes while they are waiting on
     * conditions. They are then transferred to the queue to
     * re-acquire. And because conditions can only be exclusive,
     * we save a field by using a special value to indicate shared
     * mode.
     */
    Node nextWaiter;

    /**
     * Returns true if node is waiting in shared mode.
     */
    final bool isShared() {
        return nextWaiter is SHARED; // identity comparison, not opEquals
    }

    /**
     * Returns previous node, or throws NullPointerException if null.
     * Use when predecessor cannot be null. The null check could
     * be elided, but is present to help the VM.
     *
     * @return the predecessor of this node
     */
    Node predecessor() {
        Node p = prev;
        if (p is null)
            throw new NullPointerException();
        else
            return p;
    }

    shared static this() {
        SHARED = new Node();
    }

    /** Establishes initial head or SHARED marker. */
    this() {}

    /** Constructor used by addWaiter. */
    this(Node nextWaiter) {
        this.nextWaiter = nextWaiter;
        thread = Thread.getThis();
    }

    /** Constructor used by addConditionWaiter. */
    this(int waitStatus) {
        this.waitStatus = waitStatus;
        thread = Thread.getThis();
    }

    /** CASes waitStatus field. */
    final bool compareAndSetWaitStatus(int expect, int update) {
        return AtomicHelper.compareAndSet(waitStatus, expect, update);
    }

    /** CASes next field. */
    final bool compareAndSetNext(Node expect, Node update) {
        return AtomicHelper.compareAndSet(next, expect, update);
    }

    final void setPrevRelaxed(Node p) {
        this.prev = p;
    }
}
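/*
 * Illustrative sketch (not part of the library API): the CLH enqueue shape
 * described in the Node documentation above -- link prev, CAS the tail, then
 * publish next. This is a simplified sketch of the enqueue step used inside
 * AbstractQueuedSynchronizer, shown here only to make the ordering concrete;
 * it is not the exact implementation.
 *
 *     Node enq(Node node) {
 *         for (;;) {
 *             Node oldTail = tail;
 *             if (oldTail !is null) {
 *                 node.setPrevRelaxed(oldTail);           // 1. link prev first (relaxed write)
 *                 if (compareAndSetTail(oldTail, node)) { // 2. single atomic splice at tail
 *                     oldTail.next = node;                // 3. publish next afterwards, so a
 *                     return oldTail;                     //    null next may require a backward
 *                 }                                       //    scan from tail (see doc above)
 *             } else {
 *                 initializeSyncQueue();                  // lazy dummy head on first contention
 *             }
 *         }
 *     }
 */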