# Code Review Checklist: Java Concurrency

Design
 - [Concurrency is rationalized?](#rationalize)
 - [Can use patterns to simplify concurrency?](#use-patterns)
   - Immutability/Snapshotting
   - Divide and conquer
   - Producer-consumer
   - Instance confinement
   - Thread/Task/Serial thread confinement
   - Active object
 - **Code smells**, identifying that a class or a subsystem could potentially be redesigned for the better:
   - [Usage of `synchronized` with `wait`/`notify` instead of concurrency utilities](#avoid-wait-notify)
   - [Nested critical sections](#avoid-nested-critical-sections)
   - [Extension API call within a critical section](#non-open-call)
   - [Large critical section](#minimize-critical-sections)
   - [Waiting in a loop for some result](#justify-busy-wait)
   - [Non-static `ThreadLocal`](#threadlocal-design)
   - [`Thread.sleep()`](#no-sleep-schedule)

Documentation
 - [Thread safety is justified in comments?](#justify-document)
 - [Class (method, field) has concurrent access documentation?](#justify-document)
 - [Threading model of a subsystem (class) is described?](#threading-flow-model)
 - [Concurrent control flow (or data flow) of a subsystem (class) is described?](#threading-flow-model)
 - [Class is documented as immutable, thread-safe, or not thread-safe?](#immutable-thread-safe)
 - [Used concurrency patterns are pronounced?](#name-patterns)
 - [`ConcurrentHashMap` is *not* stored in a variable of `Map` type?](#concurrent-map-type)
 - [`compute()`-like methods are *not* called on a variable of `ConcurrentMap` type?](#chm-type)
 - [`@GuardedBy` annotation is used?](#guarded-by)
 - [Safety of a benign race (e. g. unbalanced synchronization) is explained?](#document-benign-race)
 - [Each use of `volatile` is justified?](#justify-volatile)
 - [Each field that is neither `volatile` nor annotated with `@GuardedBy` has a comment?](#plain-field)

Insufficient synchronization
 - [Static methods and fields are thread-safe?](#static-thread-safe)
 - [Thread *doesn't* wait in a loop for a non-volatile field to be updated by another thread?](#non-volatile-visibility)
 - [Read access to a non-volatile, concurrently updatable primitive field is protected?](#non-volatile-protection)
 - [Servlets, Controllers, Filters, Handlers, `@Get`/`@Post` methods are thread-safe?](#server-framework-sync)
 - [Calls to `DateFormat.parse()` and `format()` are synchronized?](#dateformat)

Excessive thread safety
 - [No "extra" (pseudo) thread safety?](#pseudo-safety)
 - [No atomics on which only `get()` and `set()` are called?](#redundant-atomics)
 - [Class (method) needs to be thread-safe?](#unneeded-thread-safety)
 - [`ReentrantLock` (`ReentrantReadWriteLock`, `Semaphore`) needs to be fair?](#unneeded-fairness)

Race conditions
 - [No `put()` or `remove()` calls on a `ConcurrentMap` (or Cache) after `get()` or `containsKey()`?](#chm-race)
 - [No point accesses to a non-thread-safe collection outside of critical sections?](#unsafe-concurrent-point-read)
 - [Iteration over a non-thread-safe collection doesn't leak outside of a critical section?](#unsafe-concurrent-iteration)
 - [A non-thread-safe collection is *not* returned wrapped in `Collections.unmodifiable*()` from a getter in a thread-safe class?](#unsafe-concurrent-iteration)
 - [A synchronized collection is not returned from a getter in a thread-safe class?](#unsafe-concurrent-iteration)
 - [Non-trivial mutable object is *not* returned from a getter in a thread-safe class?](#concurrent-mutation-race)
 - [No separate getters to an atomically updated state?](#moving-state-race)
 - [No *check-then-act* race conditions (state used inside a critical section is read outside of it)?](#read-outside-critical-section-race)
 - [`coll.toArray(new E[coll.size()])` is *not* called on a synchronized collection?](#read-outside-critical-section-race)
 - [No race conditions with user or programmatic input or interop between programs?](#outside-world-race)
 - [No check-then-act race conditions with file system operations?](#outside-world-race)
 - [No concurrent `invalidate(key)` and `get()` calls on Guava's loading `Cache`?](#guava-cache-invalidation-race)
 - [`Cache.put()` is not used (nor exposed in the own Cache interface)?](#cache-invalidation-race)
 - [Concurrent invalidation race is not possible on a lazily initialized state?](#cache-invalidation-race)
 - [Iteration, Stream pipeline, or copying a `Collections.synchronized*()` collection is protected by the lock?](#synchronized-collection-iter)
 - [A synchronized collection is passed into `containsAll()`, `addAll()`, `removeAll()`, or `putAll()` under the lock?](#synchronized-collection-iter)

Testing
 - [Considered adding multi-threaded unit tests for a thread-safe class or method?](#multi-threaded-tests)
 - [What is the worst thing that might happen if the code has a concurrency bug?](#multi-threaded-tests)
 - [A shared `Random` instance is *not* used from concurrent test workers?](#concurrent-test-random)
 - [Concurrent test workers coordinate their start?](#coordinate-test-workers)
 - [There are more test threads than CPUs (if possible for the test)?](#test-workers-interleavings)
 - [Assertions in parallel threads and asynchronous code are handled properly?](#concurrent-assert)
 - [Checked the result of `CountDownLatch.await()`?](#check-await)

Locks
 - [Can use some concurrency utility instead of a lock with conditional `wait` (`await`) calls?](#avoid-wait-notify)
 - [Can use Guava's `Monitor` instead of a lock with conditional `wait` (`await`) calls?](#guava-monitor)
 - [Can use `synchronized` instead of a `ReentrantLock`?](#use-synchronized)
 - [`lock()` is called outside of `try {}`? No statements between `lock()` and `try {}`?](#lock-unlock)

Avoiding deadlocks
 - [Can avoid nested critical sections?](#avoid-nested-critical-sections)
 - [Locking order for nested critical sections is documented?](#document-locking-order)
 - [Dynamically determined locks for nested critical sections are ordered?](#dynamic-lock-ordering)
 - [No extension API calls within critical sections?](#non-open-call)
 - [No calls to `ConcurrentHashMap`'s methods (incl. `get()`) in `compute()`-like lambdas on the same map?](#chm-nested-calls)

Improving scalability
 - [Critical section is as small as possible?](#minimize-critical-sections)
 - [Can use `ConcurrentHashMap.compute()` or Guava's `Striped` for per-key locking?](#increase-locking-granularity)
 - [Can replace a blocking collection or a queue with a concurrent one?](#non-blocking-collections)
 - [Can use `ClassValue` instead of `ConcurrentHashMap<Class, ...>`?](#use-class-value)
 - [Considered `ReadWriteLock` (or `StampedLock`) instead of a simple lock?](#read-write-lock)
 - [`StampedLock` is used instead of `ReadWriteLock` when reentrancy is not needed?](#use-stamped-lock)
 - [Considered `LongAdder` instead of an `AtomicLong` for a "hot field"?](#long-adder-for-hot-fields)
 - [Considered queues from JCTools instead of the standard concurrent queues?](#jctools)
 - [Considered Caffeine cache instead of other caching libraries?](#caffeine)
 - [Can apply speculation (optimistic concurrency) technique?](#speculation)
 - [Considered `ForkJoinPool` instead of `newFixedThreadPool(N)`?](#fjp-instead-tpe)

Lazy initialization and double-checked locking
 - [Lazy initialization of a field should be thread-safe?](#lazy-init-thread-safety)
 - [Considered double-checked locking for a lazy initialization to improve performance?](#use-dcl)
 - [Double-checked locking follows the SafeLocalDCL pattern?](#safe-local-dcl)
 - [Considered eager initialization instead of a lazy initialization to simplify code?](#eager-init)
 - [Can do lazy initialization with a benign race and without locking to improve performance?](#lazy-init-benign-race)
 - [Holder class idiom is used for lazy static fields rather than double-checked locking?](#no-static-dcl)

Non-blocking and partially blocking code
 - [Non-blocking code has enough comments to make line-by-line checking as easy as possible?](#check-non-blocking-code)
 - [Can use immutable POJO + compare-and-swap operations to simplify non-blocking code?](#swap-state-atomically)
 - [Boundaries of non-blocking or benignly racy code are identified with WARNING comments?](#non-blocking-warning)
 - [Busy waiting (spin loop), all calls to `Thread.yield()` and `Thread.onSpinWait()` are justified?](#justify-busy-wait)

Threads and Executors
 - [Thread is named?](#name-threads)
 - [Can use `ExecutorService` instead of creating a new `Thread` each time some method is called?](#reuse-threads)
 - [ExecutorServices are *not* created within short-lived objects (but rather reused)?](#reuse-threads)
 - [No network I/O in a CachedThreadPool?](#cached-thread-pool-no-io)
 - [No blocking (incl. I/O) operations in a `ForkJoinPool` or in a parallel Stream pipeline?](#fjp-no-blocking)
 - [Can execute non-blocking computation in `FJP.commonPool()` instead of a custom thread pool?](#use-common-fjp)
 - [`ExecutorService` is shut down explicitly?](#explicit-shutdown)
 - [Callback is attached to a `CompletableFuture` (`SettableFuture`) in non-async mode only if either:](#cf-beware-non-async)
   - the callback is lightweight and non-blocking; or
   - the future is completed and the callback is attached from the same thread pool?
 - [Adding a callback to a `CompletableFuture` (`SettableFuture`) in non-async mode is justified?](#cf-beware-non-async)
 - [Actions are delayed via a `ScheduledExecutorService` rather than `Thread.sleep()`?](#no-sleep-schedule)
 - [Checked the result of `awaitTermination()`?](#check-await-termination)
 - [`ExecutorService` is *not* assigned into a variable of `Executor` type?](#executor-service-type-loss)
 - [`ScheduledExecutorService` is *not* assigned into a variable of `ExecutorService` type?](#unneeded-scheduled-executor-service)

Parallel Streams
 - [Parallel Stream computation takes more than 100us in total?](#justify-parallel-stream-use)
 - [Comment before a parallel Streams pipeline explains how it takes more than 100us in total?](#justify-parallel-stream-use)

Futures
 - [Non-blocking computation needs to be decorated as a `Future`?](#unneeded-future)
 - [Method returning a `Future` doesn't block?](#future-method-no-blocking)
 - [In a method returning a `Future`, considered wrapping an "expected" exception as a failed `Future`?](#future-method-failure-paths)

Thread interruption and `Future` cancellation
 - [Interruption status is restored before wrapping `InterruptedException` with another exception?](#restore-interruption)
 - [`InterruptedException` is swallowed only in the following kinds of methods:](#interruption-swallowing)
   - `Runnable.run()`, `Callable.call()`, or methods to be passed to executors as lambda tasks; or
   - Methods with "try" or "best effort" semantics?
 - [`InterruptedException` swallowing is documented for a method?](#interruption-swallowing)
 - [Can use Guava's `Uninterruptibles` to avoid `InterruptedException` swallowing?](#interruption-swallowing)
 - [`Future` is canceled upon catching an `InterruptedException` or a `TimeoutException` on `get()`?](#cancel-future)

Time
 - [`nanoTime()` values are compared in an overflow-aware manner?](#nano-time-overflow)
 - [`currentTimeMillis()` is *not* used to measure time intervals and timeouts?](#time-going-backward)
 - [Units for a time variable are identified in the variable's name or via `TimeUnit`?](#time-units)
 - [Negative timeouts and delays are treated as zeros?](#treat-negative-timeout-as-zero)
 - [Tasks connected to system time or UTC time are *not* scheduled using `ScheduledThreadPoolExecutor`?](#external-interaction-schedule)
 - [Human and external interactions on consumer devices are *not* scheduled using `ScheduledThreadPoolExecutor`?](#user-interaction-schedule)

`ThreadLocal`
 - [`ThreadLocal` can be `static final`?](#tl-static-final)
 - [Can redesign a subsystem to avoid usage of `ThreadLocal` (esp. non-static one)?](#threadlocal-design)
 - [`ThreadLocal` is *not* used just to avoid a moderate amount of allocation?](#threadlocal-performance)
 - [Considered replacing a non-static `ThreadLocal` with an instance-confined `Map<Thread, ...>`?](#tl-instance-chm)

Thread safety of Cleaners and native code
 - [`close()` is concurrently idempotent in a class with a `Cleaner` or `finalize()`?](#thread-safe-close-with-cleaner)
 - [Method accessing native state calls `reachabilityFence()` in a class with a `Cleaner` or `finalize()`?](#reachability-fence)
 - [`Cleaner` or `finalize()` is used for real cleanup, not mere reporting?](#finalize-misuse)
 - [Considered making a class with native state thread-safe?](#thread-safe-native)

<hr>

### Design

<a name="rationalize"></a>
[#](#rationalize) Dn.1. If the patch introduces a new subsystem (class, method) with concurrent code, is **the necessity for concurrency or thread safety rationalized in the patch description**? Is there a discussion of alternative design approaches that could simplify the concurrency model of the code (see the next item)?

A way to nudge thinking about concurrency design is to demand that the usage of concurrency tools and language constructs be [justified in comments](#justify-document).

See also an item about [unneeded thread-safety of classes and methods](#unneeded-thread-safety).
<a name="use-patterns"></a>
[#](#use-patterns) Dn.2. Is it possible to apply one or several design patterns (some of them are listed below) to significantly **simplify the concurrency model of the code, while not considerably compromising other quality aspects**, such as overall simplicity, efficiency, testability, extensibility, etc.?

**Immutability/Snapshotting.** When some state should be updated, a new immutable object (or a snapshot within a mutable object) is created, published, and used, while some concurrent threads may still use older copies or snapshots. See [EJ Item 17], [JCIP 3.4], [RC.5](#moving-state-race) and [NB.2](#swap-state-atomically), `CopyOnWriteArrayList`, `CopyOnWriteArraySet`, [persistent data structures](https://en.wikipedia.org/wiki/Persistent_data_structure).

**Divide and conquer.** Work is split into several parts that are processed independently, each part in a single thread. Then the results of processing are combined. [Parallel Streams](#parallel-streams) or `ForkJoinPool` (see [TE.4](#fjp-no-blocking) and [TE.5](#use-common-fjp)) can be used to apply this pattern.

**Producer-consumer.** Pieces of work are transmitted between worker threads via queues. See [JCIP 5.3], [Dl.1](#avoid-nested-critical-sections), [CSP](https://en.wikipedia.org/wiki/Communicating_sequential_processes), [SEDA](https://en.wikipedia.org/wiki/Staged_event-driven_architecture).

**Instance confinement.** Objects of some root type encapsulate some complex hierarchical child state. Root objects are solely responsible for the safety of accesses and modifications to the child state from multiple threads. In other words, composed objects are synchronized rather than synchronized objects are composed. See [JCIP 4.2, 10.1.3, 10.1.4]. [RC.3](#unsafe-concurrent-iteration), [RC.4](#concurrent-mutation-race), and [RC.5](#moving-state-race) describe race conditions that could happen to instance-confined state. [TL.4](#tl-instance-chm) touches on instance confinement of thread-local state.

**Thread/Task/Serial thread confinement.** Some state is made local to a thread using top-down pass-through parameters or `ThreadLocal`. See [JCIP 3.3] and [the checklist section about ThreadLocals](#threadlocal). Task confinement is a variation of the idea of thread confinement that is used in conjunction with the divide-and-conquer pattern. It usually comes in the form of lambda-captured "context" parameters or fields in the per-thread task objects. Serial thread confinement is an extension of the idea of thread confinement for the producer-consumer pattern; see [JCIP 5.3.2].

**Active object.** An object manages its own `ExecutorService` or `Thread` to do the work. See the [article on Wikipedia](https://en.wikipedia.org/wiki/Active_object) and [TE.6](#explicit-shutdown).
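The immutability/snapshotting pattern above can be sketched as follows. This is a minimal illustration, not code from the checklist: the `Counters` class and its fields are hypothetical. The whole state lives in one immutable object, and updates swap a new snapshot in via compare-and-swap (see also [NB.2](#swap-state-atomically)).

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical example of immutability/snapshotting: the entire state is an
// immutable object, so readers always observe a consistent snapshot.
class Counters {
    // Immutable snapshot of the whole state.
    private static final class State {
        final long hits;
        final long misses;
        State(long hits, long misses) {
            this.hits = hits;
            this.misses = misses;
        }
    }

    private final AtomicReference<State> state =
            new AtomicReference<>(new State(0, 0));

    void recordHit() {
        // Retry loop: build a new immutable State from the current one
        // and publish it atomically with compareAndSet().
        State current;
        State updated;
        do {
            current = state.get();
            updated = new State(current.hits + 1, current.misses);
        } while (!state.compareAndSet(current, updated));
    }

    long hits() {
        return state.get().hits; // consistent snapshot read, no locking
    }
}
```

Readers never need a lock here, at the cost of allocating a new `State` per update; this trade-off is what makes the pattern attractive mostly for read-heavy state.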
### Documentation

<a name="justify-document"></a>
[#](#justify-document) Dc.1. For every class, method, and field that has signs of being thread-safe, such as the `synchronized` keyword, `volatile` modifiers on fields, use of any classes from `java.util.concurrent.*`, third-party concurrency primitives, or concurrent collections: do their Javadoc comments include

 - **The justification for thread safety**: is it explained why a particular class, method, or field has to be thread-safe?

 - **Concurrent access documentation**: does it enumerate from what methods and in the contexts of what threads (executors, thread pools) each specific method of a thread-safe class is called?

Wherever some logic is parallelized or the execution is delegated to another thread, are there comments explaining why it's worse or inappropriate to execute the logic sequentially or in the same thread? See [PS.1](#justify-parallel-stream-use) regarding this.

See also [NB.3](#non-blocking-warning) and [NB.4](#justify-busy-wait) regarding justification of non-blocking code, racy code, and busy waiting.

If the usage of concurrency tools is not justified, the code may end up with [unnecessary thread-safety](#unneeded-thread-safety), [redundant atomics](#redundant-atomics), [redundant `volatile` modifiers](#justify-volatile), or [unneeded Futures](#unneeded-future).

<a name="threading-flow-model"></a>
[#](#threading-flow-model) Dc.2. If the patch introduces a new subsystem that uses threads or thread pools, are there **high-level descriptions of the threading model and the concurrent control flow (or the data flow) of the subsystem** somewhere, e. g. in the Javadoc comment for the package in `package-info.java` or for the main class of the subsystem? Are these descriptions kept up-to-date when new threads or thread pools are added or old ones are deleted from the system?

The description of the threading model includes the enumeration of the threads and thread pools created and managed in the subsystem, the external pools used in the subsystem (such as `ForkJoinPool.commonPool()`), their sizes and other important characteristics such as thread priorities, and the lifecycle of the managed threads and thread pools.

A high-level description of the concurrent control flow should be an overview that ties together the concurrent control flow documentation for individual classes; see the previous item. If the producer-consumer pattern is used, the concurrent control flow is trivial and the data flow should be documented instead.

Describing threading models and control/data flow greatly improves the maintainability of the system, because in the absence of such descriptions or diagrams developers spend a lot of time and effort creating and refreshing these models in their minds. Putting the models down also helps to discover bottlenecks and ways to simplify the design (see [Dn.2](#use-patterns)).

<a name="immutable-thread-safe"></a>
[#](#immutable-thread-safe) Dc.3. For classes and methods that are part of the public API or the extensions API of the project: is it specified in their Javadoc comments whether they are **immutable, thread-safe, or not thread-safe** (or, in the case of interfaces and abstract classes designed for subclassing in extensions, whether they should be implemented as such)? For classes and methods that are (or should be implemented as) thread-safe, is it documented precisely with what other methods (or themselves) they may be called concurrently from multiple threads? See also [EJ Item 82], [JCIP 4.5], [CON52-J](https://wiki.sei.cmu.edu/confluence/display/java/CON52-J.+Document+thread-safety+and+use+annotations+where+applicable).

If the `@com.google.errorprone.annotations.Immutable` annotation is used to mark immutable classes, the [Error Prone](https://errorprone.info/) static analysis tool is capable of detecting when a class is not actually immutable (see the relevant [bug pattern](https://errorprone.info/bugpattern/Immutable)).

<a name="name-patterns"></a>
[#](#name-patterns) Dc.4. For subsystems, classes, methods, and fields that use some concurrency design patterns, either high-level, such as those mentioned in [Dn.2](#use-patterns), or low-level, such as [double-checked locking](#lazy-init) or [spin looping](#justify-busy-wait): are the used **concurrency patterns pronounced in the design or implementation comments** for the respective subsystems, classes, methods, and fields? This helps readers to make sense of the code quicker.

Pronouncing the used patterns in comments may be replaced with more succinct documentation annotations, such as `@Immutable` ([Dc.3](#immutable-thread-safe)), `@GuardedBy` ([Dc.7](#guarded-by)), `@LazyInit` ([LI.5](#lazy-init-benign-race)), or annotations that you define yourself for specific patterns which appear many times in your project.
<a name="concurrent-map-type"></a>
[#](#concurrent-map-type) Dc.5. Are `ConcurrentHashMap` and `ConcurrentSkipListMap` objects stored in fields and variables of `ConcurrentHashMap` or `ConcurrentSkipListMap` or **`ConcurrentMap` type**, but not just `Map`?

This is important because in code like the following:

    ConcurrentMap<String, Entity> entities = getEntities();
    if (!entities.containsKey(key)) {
      entities.put(key, entity);
    } else {
      ...
    }

it should be pretty obvious that there might be a race condition, because an entity may be put into the map by a concurrent thread between the calls to `containsKey()` and `put()` (see [RC.1](#chm-race) about this type of race condition). Whereas if the type of the `entities` variable was just `Map<String, Entity>`, it would be less obvious and readers might think this is only slightly suboptimal code and pass by.

It's possible to turn this advice into [an inspection](https://github.com/apache/incubator-druid/pull/6898/files#diff-3aa5d63fbb1f0748c146f88b6f0efc81R239) in IntelliJ IDEA.

<a name="chm-type"></a>
[#](#chm-type) Dc.6. An extension of the previous item: are ConcurrentHashMaps on which `compute()`, `computeIfAbsent()`, `computeIfPresent()`, or `merge()` methods are called stored in fields and variables of `ConcurrentHashMap` type rather than `ConcurrentMap`? This is because `ConcurrentHashMap` (unlike the generic `ConcurrentMap` interface) guarantees that the lambdas passed into `compute()`-like methods are performed atomically per key, and the thread safety of the class may depend on that guarantee.

This advice may seem overly pedantic, but if used in conjunction with a static analysis rule that prohibits calling `compute()`-like methods on `ConcurrentMap`-typed objects that are not ConcurrentHashMaps (it's possible to create such an inspection in IntelliJ IDEA too), it could prevent some bugs: e. g. **calling `compute()` on a `ConcurrentSkipListMap` might be a race condition**, and it's easy to overlook that for somebody who is used to relying on the strong semantics of `compute()` in `ConcurrentHashMap`.

<a name="guarded-by"></a>
[#](#guarded-by) Dc.7. Is the **`@GuardedBy` annotation used**? If accesses to some fields should be protected by some lock, are those fields annotated with `@GuardedBy`? Are private methods that are called from within critical sections in other methods annotated with `@GuardedBy`? If the project doesn't depend on any library containing this annotation (it's provided by [`jcip-annotations`](https://search.maven.org/artifact/net.jcip/jcip-annotations/1.0/jar), [`error_prone_annotations`](https://search.maven.org/search?q=a:error_prone_annotations%20g:com.google.errorprone), [`jsr305`](https://search.maven.org/search?q=g:com.google.code.findbugs%20a:jsr305), and other libraries) and for some reason it's undesirable to add such a dependency, it should be mentioned in the Javadoc comments for the respective fields and methods that accesses and calls to them should be protected by the specified locks.

See [JCIP 2.4] for more information about `@GuardedBy`.

Usage of `@GuardedBy` is especially beneficial in conjunction with the [Error Prone](https://errorprone.info/) tool, which is able to [statically check for unguarded accesses to fields and methods with @GuardedBy annotations](https://errorprone.info/bugpattern/GuardedBy). There is also an inspection "Unguarded field access" in IntelliJ IDEA with the same effect.
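The check-then-act race from the `containsKey()`/`put()` example in Dc.5 can be avoided by relying on the per-key atomicity that Dc.6 discusses. A minimal sketch, assuming a hypothetical `Entity` class and registry (names are illustrative, not from the checklist):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical example: computeIfAbsent() checks for the key and inserts
// the value atomically per key, eliminating the containsKey()/put() race.
class EntityRegistry {
    static final class Entity {
        final String name;
        Entity(String name) { this.name = name; }
    }

    // Declared as ConcurrentHashMap (not Map or ConcurrentMap), per Dc.5 and
    // Dc.6: the code below relies on ConcurrentHashMap's guarantee that
    // compute()-like lambdas run atomically per key.
    private final ConcurrentHashMap<String, Entity> entities =
            new ConcurrentHashMap<>();

    Entity getOrCreate(String key) {
        // Atomic check-then-act: the lambda runs at most once per absent key,
        // even when multiple threads race on the same key.
        return entities.computeIfAbsent(key, Entity::new);
    }
}
```

Note that declaring the field as `ConcurrentMap` would compile just as well, but would silently drop the atomicity guarantee this code depends on, which is exactly the point of Dc.6.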
<a name="document-benign-race"></a>
[#](#document-benign-race) Dc.8. If in a thread-safe class some **fields are accessed both from within critical sections and outside of critical sections**, is it explained in comments why this is safe? For example, unprotected read-only access to a reference to an immutable object might be benignly racy (see [RC.5](#moving-state-race)). Answering this question also helps to prevent the problems described in [IS.2](#non-volatile-visibility), [IS.3](#non-volatile-protection), and [RC.2](#unsafe-concurrent-point-read).

Instead of writing a comment explaining that access to a *lazily initialized field* outside of a critical section is safe, the field could just be annotated with [`@LazyInit`](http://errorprone.info/api/latest/com/google/errorprone/annotations/concurrent/LazyInit.html) from [`error_prone_annotations`](https://search.maven.org/search?q=a:error_prone_annotations%20g:com.google.errorprone) (but make sure to read the Javadoc for this annotation and to check that the field conforms to the description; [LI.3](#safe-local-dcl) and [LI.5](#lazy-init-benign-race) mention potential pitfalls).

Apart from the explanations of why the partially blocking or racy code is safe, there should also be comments justifying such error-prone code and warning developers that the code should be modified and reviewed with double attention: see [NB.3](#non-blocking-warning).

There is an inspection "Field accessed in both synchronized and unsynchronized contexts" in IntelliJ IDEA which helps to find classes with unbalanced synchronization.

<a name="justify-volatile"></a>
[#](#justify-volatile) Dc.9. Regarding every field with a `volatile` modifier: **does it really need to be `volatile`**? Does the Javadoc comment for the field explain why the semantics of `volatile` field reads and writes (as defined in the [Java Memory Model](https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4)) are required for the field?

Similarly to what is noted in the previous item, the justification for a lazily initialized field being `volatile` could be omitted if the lazy initialization pattern itself is identified, according to [Dc.4](#name-patterns). When `volatile` on a field is needed to ensure *safe publication* of objects written into it (see [JCIP 3.5] or [here](https://shipilev.net/blog/2014/safe-public-construction/#_safe_publication)) or [linearizability of values observed by reader threads](#safe-local-dcl), just mentioning "safe publication" or "linearizable reads" in the Javadoc comment for the field is sufficient; there is no need to elaborate on the semantics of `volatile` which ensure the safe publication or the linearizability.

By extension, this item also applies when an `AtomicReference` (or a primitive atomic) is used instead of a raw `volatile` field, along with the consideration about [unnecessary atomics](#redundant-atomics), which might also be relevant in this case.

<a name="plain-field"></a>
[#](#plain-field) Dc.10. Is it explained in the **Javadoc comment for each mutable field in a thread-safe class that is neither `volatile` nor annotated with `@GuardedBy`** why that is safe? Perhaps the field is only accessed and mutated from a single method or a set of methods that are specified to be called only from a single thread sequentially (as described per [Dc.1](#justify-document)). This recommendation also applies to `final` fields that store objects of non-thread-safe classes when those objects could be mutated from some methods of the enclosing thread-safe class.
See [IS.2](#non-volatile-visibility), [IS.3](#non-volatile-protection), [RC.2](#unsafe-concurrent-point-read), [RC.3](#unsafe-concurrent-iteration), and [RC.4](#concurrent-mutation-race) about what could go wrong with such code.

### Insufficient synchronization

<a name="static-thread-safe"></a>
[#](#static-thread-safe) IS.1. **Can non-private static methods be called concurrently from multiple threads?** If there is a non-private static field with mutable state, such as a collection, is it an instance of a thread-safe class or synchronized using some `Collections.synchronizedXxx()` method?

Note that calls to `DateFormat.parse()` and `format()` must be synchronized because they mutate the object: see [IS.5](#dateformat).

<a name="non-volatile-visibility"></a>
[#](#non-volatile-visibility) IS.2. Is there no situation where some **thread waits in a loop until a non-volatile field has a certain value, expecting it to be updated from another thread?** The field should at least be `volatile` to ensure eventual visibility of concurrent updates. See [JCIP 3.1, 3.1.4] and [VNA00-J](https://wiki.sei.cmu.edu/confluence/display/java/VNA00-J.+Ensure+visibility+when+accessing+shared+primitive+variables) for more details and examples.

Even if the respective field is `volatile`, busy waiting for a condition in a loop can easily be abused and therefore should be justified in a comment: see [NB.4](#justify-busy-wait).

[Dc.10](#plain-field) also demands adding explanatory comments to mutable fields which are neither `volatile` nor annotated with `@GuardedBy`, which should inevitably lead to the discovery of the visibility issue.
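The visibility hazard IS.2 describes can be sketched as follows, with a hypothetical `Worker` class and `shutdownRequested` flag (illustrative names, not from the checklist). Without `volatile`, the waiting thread is not guaranteed to ever observe the write from another thread:

```java
// Hypothetical example for IS.2: the flag read in a loop must be volatile.
class Worker {
    // volatile ensures eventual visibility of the update across threads;
    // without it, the loop in awaitShutdownRequest() could spin forever.
    private volatile boolean shutdownRequested = false;

    void requestShutdown() {
        shutdownRequested = true;
    }

    void awaitShutdownRequest() {
        // Busy waiting must itself be justified per NB.4; it is used here
        // only to illustrate the visibility requirement on the flag.
        while (!shutdownRequested) {
            Thread.onSpinWait();
        }
    }
}
```

In production code a `CountDownLatch` or another coordination utility would usually be preferable to a spin loop; the point of this sketch is only the `volatile` modifier.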
**Are read accesses to non-volatile primitive fields which can be\nupdated concurrently protected with a lock, just as the writes are?** The minimum reason for this\nis that reads of `long` and `double` fields are non-atomic (see [JLS 17.7](\nhttps://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.7), [JCIP 3.1.2], [VNA05-J](\nhttps://wiki.sei.cmu.edu/confluence/display/java/VNA05-J.+Ensure+atomicity+when+reading+and+writing+64-bit+values)).\nBut even with other types of fields, unbalanced synchronization creates possibilities for downstream\nbugs related to visibility (see the previous item) and the lack of the expected happens-before\nrelationships.\n\nAs with the previous item, accurate documentation of benign races\n([Dc.8](#document-benign-race) and [Dc.10](#plain-field)) should reliably expose the cases when\nunbalanced synchronization is problematic.\n\nSee also [RC.2](#unsafe-concurrent-point-read) regarding unbalanced synchronization of read accesses\nto mutable objects, such as collections.\n\nThere is a relevant inspection \"Field accessed in both synchronized and unsynchronized contexts\" in\nIntelliJ IDEA.\n\n\u003ca name=\"server-framework-sync\"\u003e\u003c/a\u003e\n[#](#server-framework-sync) IS.4. **Is the business logic written for server frameworks\nthread-safe?** This includes:\n - `Servlet` implementations\n - `@(Rest)Controller`-annotated classes, `@Get/PostMapping`-annotated methods in Spring\n - `@SessionScoped` and `@ApplicationScoped` managed beans in JSF\n - `Filter` and `Handler` implementations in various synchronous and asynchronous frameworks\n (including Jetty, Netty, Undertow)\n - `@GET`- and `@POST`-annotated methods (resources) in JAX-RS (RESTful APIs)\n\nIt's easy to forget that if such code mutates some state (e. g. fields in the class), it must be\nproperly synchronized or access only concurrent collections and classes.\n\n\u003ca name=\"dateformat\"\u003e\u003c/a\u003e\n[#](#dateformat) IS.5. 
**Calls to `parse()` and `format()` on a shared instance of `DateFormat` are\nsynchronized**, e. g. if a `DateFormat` is stored in a static field? Although `parse()` and\n`format()` may look \"read-only\", they actually mutate the receiving `DateFormat` object.\n\nAn inspection \"Non thread-safe static field access\" in IntelliJ IDEA helps to catch such concurrency\nbugs.\n\n### Excessive thread safety\n\n\u003ca name=\"pseudo-safety\"\u003e\u003c/a\u003e\n[#](#pseudo-safety) ETS.1. An example of excessive thread safety is a class where every modifiable\nfield is `volatile` or an `AtomicReference` or other atomic, and every collection field stores a\nconcurrent collection (e. g. `ConcurrentHashMap`), although all accesses to those fields are\n`synchronized`.\n\n**There shouldn’t be any \"extra\" thread safety in code; there should be just enough of it.**\nDuplication of thread safety confuses readers because they might think the extra thread safety\nprecautions are (or used to be) needed for something but will fail to find the purpose.\n\nThe exception to this principle is the `volatile` modifier on the lazily initialized field in the\n[safe local double-checked locking pattern](\nhttp://hg.openjdk.java.net/code-tools/jcstress/file/9270b927e00f/tests-custom/src/main/java/org/openjdk/jcstress/tests/singletons/SafeLocalDCL.java#l71)\nwhich is the recommended way to implement double-checked locking, despite the fact that `volatile`\nis [excessive for correctness](https://shipilev.net/blog/2014/safe-public-construction/#_correctness)\nwhen the lazily initialized object has all `final` fields[*](\nhttps://shipilev.net/blog/2014/safe-public-construction/#_safe_initialization). Without that\n`volatile` modifier the thread safety of the double-checked locking could easily be broken by a\nchange (addition of a non-final field) in the class of lazily initialized objects, even though that\nclass should not have to be aware of subtle concurrency implications. 
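For reference, the safe local double-checked locking pattern mentioned above can be sketched as follows (a minimal sketch; the `Holder` class and `get()` method names are illustrative, not from the checklist):

```java
class Holder {
    private volatile Object value; // the essential volatile modifier

    Object get() {
        Object local = value; // a single volatile read into a local variable
        if (local == null) {
            synchronized (this) {
                local = value; // re-check under the lock
                if (local == null) {
                    local = new Object();
                    value = local; // the volatile write safely publishes the object
                }
            }
        }
        return local;
    }
}
```

Reading the volatile field into a local variable first guarantees only one volatile read on the fast path.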
If the class of lazily initialized objects\nis *specified* to be immutable (see [Dc.3](#immutable-thread-safe)), the `volatile` is indeed\nunnecessary and the [UnsafeLocalDCL](\nhttp://hg.openjdk.java.net/code-tools/jcstress/file/9270b927e00f/tests-custom/src/main/java/org/openjdk/jcstress/tests/singletons/UnsafeLocalDCL.java#l71)\npattern could be used safely, but the fact that some class has all `final` fields doesn’t\nnecessarily mean that it’s immutable.\n\nSee also [the section about double-checked locking](#lazy-init).\n\n\u003ca name=\"redundant-atomics\"\u003e\u003c/a\u003e\n[#](#redundant-atomics) ETS.2. Aren’t there **`AtomicReference`, `AtomicBoolean`, `AtomicInteger` or\n`AtomicLong` fields on which only `get()` and `set()` methods are called?** Simple fields with\n`volatile` modifiers can be used instead, though even `volatile` might not be needed; see\n[Dc.9](#justify-volatile).\n\n\u003ca name=\"unneeded-thread-safety\"\u003e\u003c/a\u003e\n[#](#unneeded-thread-safety) ETS.3. **Does a class (method) need to be thread-safe?** May a class\nbe accessed (method called) concurrently from multiple threads (without *happens-before*\nrelationships between the accesses or calls)? Can a class (method) be simplified by making it\nnon-thread-safe?\n\nSee also [Ft.1](#unneeded-future) about unneeded wrapping of a computation into a `Future` and\n[Dc.9](#justify-volatile) about potentially unneeded `volatile` modifiers.\n\nThis item is a close relative of [Dn.1](#rationalize) (about rationalizing concurrency and thread\nsafety in the patch description) and [Dc.1](#justify-document) (about justifying concurrency in\nJavadocs for classes and methods, and documenting concurrent access). If these actions are done, it\nshould be self-evident whether the class (method) needs to be thread-safe or not. 
There may be\ncases, however, when it might be desirable to make the class (method) thread-safe although it's not\nsupposed to be accessed or called concurrently as of the moment of the patch. For example, thread\nsafety may be needed to ensure memory safety (see [CN.4](#thread-safe-native) about this).\nAnticipating changes to the codebase that would cause the class (method) to be accessed from\nmultiple threads may be another reason to make the class (method) thread-safe up front.\n\n\u003ca name=\"unneeded-fairness\"\u003e\u003c/a\u003e\n[#](#unneeded-fairness) ETS.4. **Does a `ReentrantLock` (or `ReentrantReadWriteLock`, `Semaphore`)\nneed to be fair?** To justify the throughput penalty of making a lock fair, it should be\ndemonstrated that a lack of fairness leads to unacceptably long starvation periods in some threads\ntrying to acquire the lock or pass the semaphore. This should be documented in the Javadoc comment\nfor the field holding the lock or the semaphore. See [JCIP 13.3] for more details.\n\n### Race conditions\n\n\u003ca name=\"chm-race\"\u003e\u003c/a\u003e\n[#](#chm-race) RC.1. Aren’t **`ConcurrentMap` (or Cache) objects updated with separate\n`containsKey()`, `get()`, `put()` and `remove()` calls** instead of a single call to\n`compute()`/`computeIfAbsent()`/`computeIfPresent()`/`replace()`?\n\n\u003ca name=\"unsafe-concurrent-point-read\"\u003e\u003c/a\u003e\n[#](#unsafe-concurrent-point-read) RC.2. 
Aren’t there **point read accesses such as `Map.get()`,\n`containsKey()` or `List.get()` outside of critical sections to a non-thread-safe collection such as\n`HashMap` or `ArrayList`**, while new entries can be added to the collection concurrently, even\nthough there is a happens-before edge between the moment when some entry is put into the collection\nand the moment when the same entry is point-queried outside of a critical section?\n\nThe problem is that when new entries can be added to a collection, it grows and changes its internal\nstructure from time to time (HashMap rehashes the hash table, `ArrayList` reallocates the internal\narray). At such moments races might happen and unprotected point read accesses might fail with\n`NullPointerException`, `ArrayIndexOutOfBoundsException`, or return `null` or some random entry.\n\nNote that this concern applies to `ArrayList` even when elements are only added to the end of the\nlist. However, a small change in `ArrayList`’s implementation in OpenJDK could have disallowed data\nraces in such cases at very little cost. If you are subscribed to the concurrency-interest mailing\nlist, you could help to bring attention to this problem by reviving [this thread](\nhttp://cs.oswego.edu/pipermail/concurrency-interest/2018-September/016526.html).\n\nSee also [IS.3](#non-volatile-protection) regarding unbalanced synchronization of accesses to\nprimitive fields.\n\n\u003ca name=\"unsafe-concurrent-iteration\"\u003e\u003c/a\u003e\n[#](#unsafe-concurrent-iteration) RC.3. A variation of the previous item: isn’t a non-thread-safe\ncollection such as `HashMap` or `ArrayList` **iterated outside of a critical section**, while it may\nbe modified concurrently? This could happen by accident when an `Iterable`, `Iterator` or `Stream`\nover a collection is returned from a method of a thread-safe class, even though the iterator or\nstream is created within a critical section. 
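A sketch of this mistake and of the copying alternative (the `Registry` class and its method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class Registry {
    private final List<String> names = new ArrayList<>();

    synchronized void add(String name) {
        names.add(name);
    }

    // Unsafe: the iterator is created inside the critical section, but the
    // caller consumes it outside of it, racing with concurrent add() calls.
    synchronized Iterator<String> namesIterator() {
        return names.iterator();
    }

    // Safer for a small collection: copy the elements under the lock.
    synchronized List<String> namesSnapshot() {
        return new ArrayList<>(names);
    }
}
```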
Note that **returning unmodifiable collection views\nlike `Collections.unmodifiableList()` from getters wrapping collection fields that may be modified\nconcurrently doesn't solve this problem.** If the collection is relatively small, it should be\ncopied entirely, or a copy-on-write collection (see [Sc.3](#non-blocking-collections)) should be\nused instead of a non-thread-safe collection.\n\nNote that calling `toString()` on a collection (e. g. in a logging statement) implicitly iterates\nover it.\n\nLike the previous item, this one applies to growing ArrayLists too.\n\nThis item applies even to synchronized collections: see [RC.10](#synchronized-collection-iter) for\ndetails.\n\n\u003ca name=\"concurrent-mutation-race\"\u003e\u003c/a\u003e\n[#](#concurrent-mutation-race) RC.4. Generalization of the previous item: aren’t **non-trivial\nobjects that can be mutated concurrently returned from getters** in a thread-safe class (and thus\ninevitably leaking outside of critical sections)?\n\n\u003ca name=\"moving-state-race\"\u003e\u003c/a\u003e\n[#](#moving-state-race) RC.5. If there are multiple variables in a thread-safe class that are\n**updated at once but have individual getters**, isn’t there a race condition in the code that calls\nthose getters? If there is, the variables should be made `final` fields in a dedicated POJO, that\nserves as a snapshot of the updated state. The POJO is stored in a field of the thread-safe class,\ndirectly or as an `AtomicReference`. Multiple getters to individual fields should be replaced with\na single getter that returns the POJO. This allows avoiding a race condition in the client code by\nreading a consistent snapshot of the state at once.\n\nThis pattern is also very useful for creating safe and reasonably simple non-blocking code: see\n[NB.2](#swap-state-atomically) and [JCIP 15.3.1].\n\n\u003ca name=\"read-outside-critical-section-race\"\u003e\u003c/a\u003e\n[#](#read-outside-critical-section-race) RC.6. 
If some logic within some critical section depends on\nsome data that principally is part of the internal mutable state of the class, but was read outside\nof the critical section or in a different critical section, isn’t there a race condition because the\n**local copy of the data may become out of sync with the internal state by the time when the\ncritical section is entered**? This is a typical variant of a check-then-act race condition; see\n[JCIP 2.2.1].\n\nAn example of this race condition is calling `toArray()` on synchronized collections with a sized\narray:\n```java\nList\u003cElement\u003e list = Collections.synchronizedList(new ArrayList\u003c\u003e());\n...\nElement[] elements = list.toArray(new Element[list.size()]);\n```\nThis might unexpectedly leave some nulls at the end of the `elements` array if there are concurrent\nremovals from the list. Therefore, `toArray()` on a synchronized collection should be\ncalled with a zero-length array: `toArray(new Element[0])`, which is also not worse from the\nperformance perspective: see \"[Arrays of Wisdom of the\nAncients](https://shipilev.net/blog/2016/arrays-wisdom-ancients/)\".\n\nSee also [RC.9](#cache-invalidation-race) about cache invalidation races which are similar to\ncheck-then-act races.\n\n\u003ca name=\"outside-world-race\"\u003e\u003c/a\u003e\n[#](#outside-world-race) RC.7. Aren't there **race conditions between the code (i. e. program\nruntime actions) and some actions in the outside world** or actions performed by some other programs\nrunning on the machine? 
For example, if some configurations or credentials are hot reloaded from\nsome file or external registry, reading separate configuration parameters or separate credentials\n(such as username and password) in separate transactions with the file or the registry may be racing\nwith a system operator updating those configurations or credentials.\n\nAnother example is checking that a file exists (or not exists) and then reading, deleting, or\ncreating it, respectively, while another program or a user may delete or create the file between the\ncheck and the act. It's not always possible to cope with such race conditions, but it's useful to\nkeep such possibilities in mind. Prefer static methods from [`java.nio.file.Files`](\nhttps://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/file/Files.html) class and\nNIO file reading/writing API to methods from the old `java.io.File` for file system operations.\nMethods from `Files` are more sensitive to file system race conditions and tend to throw exceptions\nin adverse cases, while methods on `File` swallow errors and make it hard even to detect race\nconditions. Static methods from `Files` also support `StandardOpenOption.CREATE` and `CREATE_NEW`\nwhich may help to ensure some extra atomicity.\n\n\u003ca name=\"guava-cache-invalidation-race\"\u003e\u003c/a\u003e\n[#](#guava-cache-invalidation-race) RC.8. If you are **using Guava Cache and `invalidate(key)`, are\nyou not affected by the [race condition](https://github.com/google/guava/issues/1881)** which can\nleave a `Cache` with an invalid (stale) value mapped for a key? Consider using [Caffeine cache](\nhttps://github.com/ben-manes/caffeine) which doesn't have this problem. Caffeine is also faster and\nmore scalable than Guava Cache: see [Sc.9](#caffeine).\n\n\u003ca name=\"cache-invalidation-race\"\u003e\u003c/a\u003e\n[#](#cache-invalidation-race) RC.9. Generalization of the previous item: isn't there a potential\n**cache invalidation race** in the code? 
There are several ways to get into this problem:\n - Using the `Cache.put()` method concurrently with `invalidate()`. Unlike\n [RC.8](#guava-cache-invalidation-race), this is a race regardless of what caching library is used,\n not necessarily Guava. This is also similar to [RC.1](#chm-race).\n - Having `put()` and `invalidate()` methods exposed in your own Cache interface. This places the\n burden of synchronizing `put()` (together with the preceding \"checking\" code, such as `get()`) and\n `invalidate()` calls on the users of the API, which really should be the job of the Cache\n implementation.\n - There is some [lazily initialized state](#lazy-init) in a mutable object which can be invalidated\n upon mutation of the object, and can also be accessed concurrently with the mutation. This means\n the class is in the category of [non-blocking concurrency](#non-blocking): see the corresponding\n checklist items. A way to avoid a cache invalidation race in this case is to wrap the primary state\n and the cached state into a POJO and replace it atomically, as described in\n [NB.2](#swap-state-atomically).\n\n\u003ca name=\"synchronized-collection-iter\"\u003e\u003c/a\u003e\n[#](#synchronized-collection-iter) RC.10. **Is the whole iteration loop over a synchronized\ncollection (i. e. obtained from one of the `Collections.synchronizedXxx()` static factory methods),\nor a Stream pipeline using a synchronized collection as a source, protected by\n`synchronized (coll)`?** See [the Javadoc](\nhttps://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Collections.html#synchronizedCollection(java.util.Collection))\nfor examples and details.\n\nThis also applies to passing synchronized collections into:\n - Copy constructors of other collections, e. g. `new ArrayList\u003c\u003e(synchronizedColl)`\n - Static factory methods of other collections, e. g. 
`List.copyOf()`, `Set.copyOf()`,\n `ImmutableMap.copyOf()`\n - Bulk methods on other collections:\n   - `otherColl.containsAll(synchronizedColl)`\n   - `otherColl.addAll(synchronizedColl)`\n   - `otherColl.removeAll(synchronizedColl)`\n   - `otherMap.putAll(synchronizedMap)`\n   - `otherColl.containsAll(synchronizedMap.keySet())`\n   - Etc.\n\nBecause in all these cases there is an implicit iteration on the source collection.\n\nSee also [RC.3](#unsafe-concurrent-iteration) about unprotected iteration over non-thread-safe\ncollections.\n\n### Testing\n\n\u003ca name=\"multi-threaded-tests\"\u003e\u003c/a\u003e\n[#](#multi-threaded-tests) T.1. **Was it considered to add multi-threaded unit tests for a\nthread-safe class or method?** Single-threaded tests don't really test the thread safety and\nconcurrency. Note that this question doesn't mean to indicate that there *must* be concurrent unit\ntests for every piece of concurrent code in the project because correct concurrent tests take a lot\nof effort to write and therefore they might often have low ROI.\n\n**What is the worst thing that might happen if this code has a concurrency bug?** This is a useful\nquestion to inform the decision about writing concurrent tests. The consequences may range from a\ntiny, entirely undetectable memory leak, to storing corrupted data in a durable database or\na security breach.\n\n\u003ca name=\"concurrent-test-random\"\u003e\u003c/a\u003e\n[#](#concurrent-test-random) T.2. **Isn't a shared `java.util.Random` object used for data\ngeneration in a concurrency test?** `java.util.Random` is synchronized internally, so if multiple\ntest threads (which are conceived to access the tested class concurrently) access the same\n`java.util.Random` object then the test might degenerate to a mostly synchronous one and fail to\nexercise the concurrency properties of the tested class. See [JCIP 12.1.3]. 
`Math.random()` is subject\nto this problem too because internally `Math.random()` uses a globally shared\n`java.util.Random` instance. Use `ThreadLocalRandom` instead.\n\n\u003ca name=\"coordinate-test-workers\"\u003e\u003c/a\u003e\n[#](#coordinate-test-workers) T.3. Do **concurrent test workers coordinate their start using a latch\nsuch as `CountDownLatch`?** If they don't, much or even all of the test work might be done by the\nfirst few workers to start. See [JCIP 12.1.3] for more information.\n\n\u003ca name=\"test-workers-interleavings\"\u003e\u003c/a\u003e\n[#](#test-workers-interleavings) T.4. Are there **more test threads than there are available\nprocessors**, if possible for the test? This will help to generate more thread scheduling\ninterleavings and thus test the logic for the absence of race conditions more thoroughly. See\n[JCIP 12.1.6] for more information. The number of available processors on the machine can be\nobtained as [`Runtime.getRuntime().availableProcessors()`](\nhttps://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Runtime.html#availableProcessors()).\n\n\u003ca name=\"concurrent-assert\"\u003e\u003c/a\u003e\n[#](#concurrent-assert) T.5. 
Are there **no regular assertions in code that is not executed in the\nmain thread running the unit test?** Consider the following example:\n```java\n@Test public void testServiceListener() {\n  // Missed assertion -- Don't do this!\n  service.addListener(event -\u003e Assert.assertEquals(Event.Type.MESSAGE_RECEIVED, event.getType()));\n  service.sendMessage(\"test\");\n}\n```\nAssuming the `service` executes the code of listeners asynchronously in some internally-managed or\na shared thread pool, even if the assertion within the listener's lambda fails, JUnit and TestNG\nwill think the test has passed.\n\nThe solution to this problem is either to pass the data (or thrown exceptions) from a concurrent\nthread back to the main test thread and verify it at the end of the test, or to use the\n[ConcurrentUnit](https://github.com/jhalterman/concurrentunit) library, which takes care of\nthe boilerplate associated with the first approach.\n\n\u003ca name=\"check-await\"\u003e\u003c/a\u003e\n[#](#check-await) T.6. **Is the result of [`CountDownLatch.await()`](\nhttps://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/CountDownLatch.html#await(long,java.util.concurrent.TimeUnit))\nmethod calls checked?** The most frequent form of this mistake is forgetting to wrap\n`CountDownLatch.await()` into `assertTrue()` in tests, which makes the test not actually verify\nthat the production code works correctly. 
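The corrected form of such a wait can be sketched as follows (the latch, the timeout, and the class name are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class LatchCheckSketch {
    static boolean runScenario() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        // The "production" action completes asynchronously:
        new Thread(done::countDown).start();

        // Don't just call done.await(5, TimeUnit.SECONDS) and drop the result:
        // on timeout it returns false and a test would still "pass".
        // Instead, check the result, e. g. by wrapping it in assertTrue():
        return done.await(5, TimeUnit.SECONDS);
    }
}
```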
The absence of a check in production code might cause\nrace conditions.\n\nApart from `CountDownLatch.await`, the other similar methods whose result must be checked are:\n - [`Lock.tryLock()`](\n https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/locks/Lock.html)\n and `tryAcquire()` methods on [`Semaphore`](\n https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/Semaphore.html)\n and [`RateLimiter`](\n https://guava.dev/releases/28.1-jre/api/docs/com/google/common/util/concurrent/RateLimiter.html)\n from Guava\n  - [`Monitor.enter(...)`](https://guava.dev/releases/28.1-jre/api/docs/com/google/common/util/concurrent/Monitor.html) in Guava\n - [`Condition.await(...)`](\nhttps://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/locks/Condition.html#await(long,java.util.concurrent.TimeUnit))\n - `awaitTermination()` and `awaitQuiescence()` methods. There is a [separate\n item](#check-await-termination) about them.\n - [`Process.waitFor(...)`](\nhttps://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Process.html#waitFor(long,java.util.concurrent.TimeUnit))\n\nIt's possible to find these problems using static analysis, e. g. by configuring the \"Result of\nmethod call ignored\" inspection in IntelliJ IDEA to recognize `Lock.tryLock()`,\n`CountDownLatch.await()` and other methods listed above. They are *not* in the default set of\nchecked methods, so they should be added manually in the inspection configuration.\n\n\u003ca name=\"replacing-locks-with-concurrency-utilities\"\u003e\u003c/a\u003e\n### Locks\n\n\u003ca name=\"avoid-wait-notify\"\u003e\u003c/a\u003e\n[#](#avoid-wait-notify) Lk.1. 
Is it possible to use concurrent collections and/or utilities from\n`java.util.concurrent.*` and **avoid using locks with `Object.wait()`/`notify()`/`notifyAll()`**?\nCode redesigned around concurrent collections and utilities is often both clearer and less\nerror-prone than code implementing the equivalent logic with intrinsic locks, `Object.wait()` and\n`notify()` (`Lock` objects with `await()` and `signal()` are no different in this regard). See\n[EJ Item 81] for more information.\n\n\u003ca name=\"guava-monitor\"\u003e\u003c/a\u003e\n[#](#guava-monitor) Lk.2. Is it possible to **simplify code that uses intrinsic locks or `Lock`\nobjects with conditional waits by using Guava’s [`Monitor`](\nhttps://google.github.io/guava/releases/27.0.1-jre/api/docs/com/google/common/util/concurrent/Monitor.html)\ninstead**?\n\n\u003ca name=\"use-synchronized\"\u003e\u003c/a\u003e\n[#](#use-synchronized) Lk.3. **Isn't `ReentrantLock` used when `synchronized` would suffice?**\n`ReentrantLock` shouldn't be used in situations where none of its distinctive features\n(`tryLock()`, timed and interruptible locking methods, etc.) are used. Note that reentrancy is *not*\nsuch a feature: intrinsic Java locks support reentrancy too. The ability for a `ReentrantLock` to be\nfair is seldom such a feature: see [ETS.4](#unneeded-fairness). See [JCIP 13.4] for more information\nabout this.\n\nThis advice also applies when a class uses a private lock object (instead of `synchronized (this)`\nor synchronized methods) to protect against accidental or malicious interference by the clients\nsynchronizing on the object of the class: see [JCIP 4.2.1], [EJ Item 82], [LCK00-J](\nhttps://wiki.sei.cmu.edu/confluence/display/java/LCK00-J.+Use+private+final+lock+objects+to+synchronize+classes+that+may+interact+with+untrusted+code).\n\n\u003ca name=\"lock-unlock\"\u003e\u003c/a\u003e\n[#](#lock-unlock) Lk.4. 
**Locking (`lock()`, `lockInterruptibly()`, `tryLock()`) and `unlock()`\nmethods are used strictly with the recommended [try-finally idiom](\nhttps://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/locks/Lock.html)\nwithout deviations?**\n - `lock()` (or `lockInterruptibly()`) call goes *before* the `try {}` block rather than within it?\n - There are no statements between the `lock()` (or `lockInterruptibly()`) call and the beginning of\n the `try {}` block?\n - `unlock()` call is the first statement within the `finally {}` block?\n\nThis advice doesn't apply when locking methods and `unlock()` should occur in different scopes, i.\ne. not within the recommended try-finally idiom altogether. The containing methods could be\nannotated with Error Prone's [`@LockMethod`](\nhttps://errorprone.info/api/latest/com/google/errorprone/annotations/concurrent/LockMethod.html) and\n`@UnlockMethod` annotations.\n\nThere is a \"Lock acquired but not safely unlocked\" inspection in IntelliJ IDEA which corresponds to\nthis item.\n\nSee also [LCK08-J](\nhttps://wiki.sei.cmu.edu/confluence/display/java/LCK08-J.+Ensure+actively+held+locks+are+released+on+exceptional+conditions).\n\n### Avoiding deadlocks\n\n\u003c!-- Preserving former anchor with a typo. --\u003e\n\u003ca name=\"avoid-nested-critial-sections\"\u003e\u003c/a\u003e\n\u003ca name=\"avoid-nested-critical-sections\"\u003e\u003c/a\u003e\n[#](#avoid-nested-critical-sections) Dl.1. If a thread-safe class is implemented so that there are\nnested critical sections protected by different locks, **is it possible to redesign the code to get\nrid of nested critical sections**? Sometimes a class could be split into several distinct classes,\nor some work that is done within a single thread could be split between several threads or tasks\nwhich communicate via concurrent queues. 
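The "communicate via concurrent queues" redesign can be sketched like this (a minimal illustration with made-up names): the producer and the consumer each touch only the queue, so no thread ever holds two locks at once:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class HandoffSketch {
    private final BlockingQueue<Runnable> tasks = new ArrayBlockingQueue<>(64);

    // Producer side: never acquires any lock belonging to the consumer.
    void submit(Runnable task) throws InterruptedException {
        tasks.put(task); // blocks only while the queue is full
    }

    // Consumer side: meant to run in a dedicated thread.
    void consumeOne() throws InterruptedException {
        tasks.take().run(); // blocks only while the queue is empty
    }
}
```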
See [JCIP 5.3] for more information about the\nproducer-consumer pattern.\n\nThere is an inspection \"Nested 'synchronized' statement\" in IntelliJ IDEA corresponding to this\nitem.\n\n\u003ca name=\"document-locking-order\"\u003e\u003c/a\u003e\n[#](#document-locking-order) Dl.2. If restructuring a thread-safe class to avoid nested critical\nsections is not reasonable, was it deliberately checked that the locks are acquired in the same\norder throughout the code of the class? **Is the locking order documented in the Javadoc comments\nfor the fields where the lock objects are stored?**\n\nSee [LCK07-J](\nhttps://wiki.sei.cmu.edu/confluence/display/java/LCK07-J.+Avoid+deadlock+by+requesting+and+releasing+locks+in+the+same+order)\nfor examples.\n\n\u003ca name=\"dynamic-lock-ordering\"\u003e\u003c/a\u003e\n[#](#dynamic-lock-ordering) Dl.3. If there are nested critical sections protected by several\n(potentially different) **dynamically determined locks (for example, associated with some business\nlogic entities), are the locks ordered before the acquisition**? See [JCIP 10.1.2] for more\ninformation.\n\n\u003ca name=\"non-open-call\"\u003e\u003c/a\u003e\n[#](#non-open-call) Dl.4. Aren’t there **calls to some callbacks (listeners, etc.) that can be\nconfigured through public API or extension interface calls within critical sections**? With such\ncalls, the system might be inherently prone to deadlocks because the external logic executed within\na critical section may be unaware of the locking considerations and call back into the logic of the\nsystem, where some more locks may be acquired, potentially forming a locking cycle that might lead\nto a deadlock. Also, the external logic could just perform some time-consuming operation and by\nthat harm the efficiency of the system (see [Sc.1](#minimize-critical-sections)). 
See [JCIP 10.1.3]\nand [EJ Item 79] for more information.\n\nWhen public API or extension interface calls happen within lambdas passed into `Map.compute()`,\n`computeIfAbsent()`, `computeIfPresent()`, and `merge()`, there is a risk of not only deadlocks (see\nthe next item) but also race conditions which could result in a corrupted map (if it's not a\n`ConcurrentHashMap`, e. g. a simple `HashMap`) or runtime exceptions.\n\nBeware that a [`CompletableFuture`](\nhttps://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/CompletableFuture.html\n) or a [`ListenableFuture`](\nhttps://guava.dev/releases/28.1-jre/api/docs/com/google/common/util/concurrent/ListenableFuture.html\n) returned from a public API opens a door for performing some user-defined callbacks from the place\nwhere the future is completed, deep inside the library (framework). If the future is completed\nwithin a critical section, or from an `ExecutorService` whose threads must not block, such as a\n`ForkJoinPool` (see [TE.4](#fjp-no-blocking)) or the worker pool of an I/O library, consider either\nreturning a simple `Future`, or documenting that stages shouldn't be attached to this\n`CompletableFuture` in the *default execution mode*. See [TE.7](#cf-beware-non-async) for more\ninformation.\n\n\u003ca name=\"chm-nested-calls\"\u003e\u003c/a\u003e\n[#](#chm-nested-calls) Dl.5. Aren't there **calls to methods on a `ConcurrentHashMap` instance\nwithin lambdas passed into `compute()`-like methods called on the same map?** For example, the\nfollowing code is deadlock-prone:\n```java\nmap.compute(key, (String k, Integer v) -\u003e {\n  if (v == null || v == 0) {\n    return map.get(DEFAULT_KEY);\n  }\n  return v;\n});\n```\nNote that nested calls to non-lambda accepting methods, *including read-only access methods like\n`get()`* create the possibility of deadlocks as well as nested calls to `compute()`-like methods\nbecause the former are not always lock-free. 
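One way to avoid the nested call in the `compute()` example above is to read the default mapping before entering `compute()` (a sketch with illustrative names; note the hedge that this slightly changes the semantics, because the default value is read before, not during, the atomic update):

```java
import java.util.concurrent.ConcurrentHashMap;

class ComputeFixSketch {
    static final String DEFAULT_KEY = "default";

    static Integer update(ConcurrentHashMap<String, Integer> map, String key) {
        // Read the default *outside* the lambda: no nested call on the map
        // is made while compute() holds the lock of the key's bin.
        Integer defaultValue = map.get(DEFAULT_KEY);
        return map.compute(key, (String k, Integer v) -> {
            if (v == null || v == 0) {
                return defaultValue;
            }
            return v;
        });
    }
}
```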
\n\n### Improving scalability\n\n\u003ca name=\"minimize-critical-sections\"\u003e\u003c/a\u003e\n[#](#minimize-critical-sections) Sc.1. **Are critical sections as small as possible?** For every\ncritical section: can’t some statements in the beginning and the end of the section be moved out of\nit? Minimizing critical sections not only improves scalability but also makes it easier to review\nthem and to spot race conditions and deadlocks.\n\nThis advice equally applies to lambdas passed into `ConcurrentHashMap`’s `compute()`-like methods.\n\nSee also [JCIP 11.4.1] and [EJ Item 79].\n\n\u003ca name=\"increase-locking-granularity\"\u003e\u003c/a\u003e\n[#](#increase-locking-granularity) Sc.2. Is it possible to **increase locking granularity**? If a\nthread-safe class encapsulates accesses to a map, is it possible to **turn critical sections into\nlambdas passed into `ConcurrentHashMap.compute()`** or `computeIfAbsent()` or `computeIfPresent()`\nmethods to enjoy effective per-key locking granularity? Otherwise, is it possible to use\n**[Guava’s `Striped`](https://github.com/google/guava/wiki/StripedExplained)** or an equivalent? See\n[JCIP 11.4.3] for more information about lock striping.\n\n\u003ca name=\"non-blocking-collections\"\u003e\u003c/a\u003e\n[#](#non-blocking-collections) Sc.3. Is it possible to **use non-blocking collections instead of\nblocking ones?** Here are some possible replacements within JDK:\n\n - `Collections.synchronizedMap(HashMap)`, `Hashtable` → `ConcurrentHashMap`\n - `Collections.synchronizedSet(HashSet)` → `ConcurrentHashMap.newKeySet()`\n - `Collections.synchronizedMap(TreeMap)` → `ConcurrentSkipListMap`. 
By the way,
 `ConcurrentSkipListMap` is not the state-of-the-art concurrent sorted dictionary implementation.
 [SnapTree](https://github.com/nbronson/snaptree) is [more efficient](
 https://github.com/apache/incubator-druid/pull/6719) than `ConcurrentSkipListMap`, and there have
 been some research papers presenting algorithms that are claimed to be more efficient than
 SnapTree.
 - `Collections.synchronizedSet(TreeSet)` → `ConcurrentSkipListSet`
 - `Collections.synchronizedList(ArrayList)`, `Vector` → `CopyOnWriteArrayList`
 - `LinkedBlockingQueue` → `ConcurrentLinkedQueue`
 - `LinkedBlockingDeque` → `ConcurrentLinkedDeque`

Consider also using queues from JCTools instead of concurrent queues from the JDK: see
[Sc.8](#jctools).

See also an item about using [`ForkJoinPool` instead of `newFixedThreadPool(N)`](#fjp-instead-tpe)
for high-traffic executor services, which internally amounts to replacing a single blocking queue of
tasks inside `ThreadPoolExecutor` with multiple non-blocking queues inside `ForkJoinPool`.

<a name="use-class-value"></a>
[#](#use-class-value) Sc.4. Is it possible to **use [`ClassValue`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ClassValue.html) instead of
`ConcurrentHashMap<Class, ...>`?** Note, however, that unlike `ConcurrentHashMap` with its
`computeIfAbsent()` method, `ClassValue` doesn’t guarantee that the per-class value is computed only
once, i. e. `ClassValue.computeValue()` might be executed by multiple concurrent threads. So if the
computation inside `computeValue()` is not thread-safe, it should be synchronized separately. On the
other hand, `ClassValue` does guarantee that the same value is always returned from
`ClassValue.get()` (unless `remove()` is called).

<a name="read-write-lock"></a>
[#](#read-write-lock) Sc.5.
Was it considered to **replace a simple lock with a `ReadWriteLock`**?
Beware, however, that it’s more expensive to acquire and release a `ReentrantReadWriteLock` than a
simple intrinsic lock, so the increase in scalability comes at the cost of reduced throughput. If
the operations to be performed under a lock are short, or if a lock is already striped (see
[Sc.2](#increase-locking-granularity)) and therefore very lightly contended, **replacing a simple
lock with a `ReadWriteLock` might have a net negative effect** on the application performance. See
[this comment](
https://medium.com/@leventov/interesting-perspective-thanks-i-didnt-think-about-this-before-e044eec71870)
for more details.

<a name="use-stamped-lock"></a>
[#](#use-stamped-lock) Sc.6. Is it possible to use a **[`StampedLock`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/locks/StampedLock.html)
instead of a `ReentrantReadWriteLock`** when reentrancy is not needed?

<a name="long-adder-for-hot-fields"></a>
[#](#long-adder-for-hot-fields) Sc.7. Is it possible to use **[`LongAdder`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/atomic/LongAdder.html)
for "hot fields"** (see [JCIP 11.4.4]) instead of `AtomicLong` or `AtomicInteger` on which only
methods like `incrementAndGet()`, `decrementAndGet()`, `addAndGet()` and (rarely) `get()` are
called, but not `set()` and `compareAndSet()`?

Note that a field should really be updated steadily from several concurrent threads to justify using
`LongAdder`.
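
For illustration, a minimal sketch of such a "hot field" (the class and method names are
hypothetical): many threads steadily increment the counter, while the value is read only
occasionally, e. g. for a metrics snapshot:

```java
import java.util.concurrent.atomic.LongAdder;

class RequestStats {
    // A "hot field": incremented on every request from many threads concurrently,
    // read only occasionally (e. g. when a metrics snapshot is taken).
    private final LongAdder requestCount = new LongAdder();

    void onRequest() {
        // Scales better under contention than AtomicLong.incrementAndGet()
        requestCount.increment();
    }

    long totalRequests() {
        // sum() is not an atomic snapshot w.r.t. in-flight increments; fine for metrics
        return requestCount.sum();
    }
}
```

Unlike `AtomicLong.get()`, `LongAdder.sum()` is not an atomic snapshot while increments are in
flight, which is usually acceptable for statistics.
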
If the field is usually updated only from one thread at a time (there may be several
updating threads, but each of them accesses the field infrequently, so the updates from different
threads rarely happen at the same time), it's still better to use `AtomicLong` or `AtomicInteger`
because they take less memory than `LongAdder` and their updates are cheaper.

<a name="jctools"></a>
[#](#jctools) Sc.8. Was it considered to **use one of the array-based queues from [the JCTools
library](https://www.baeldung.com/java-concurrency-jc-tools) instead of `ArrayBlockingQueue`**?
Those queues from JCTools are classified as blocking, but they avoid lock acquisition in many cases
and are generally much faster than `ArrayBlockingQueue`.

See also [Sc.3](#non-blocking-collections) regarding replacing blocking queues (and other
collections) with non-blocking equivalents within the JDK.

<a name="caffeine"></a>
[#](#caffeine) Sc.9. Was it considered to **use the [Caffeine](https://github.com/ben-manes/caffeine)
cache instead of other cache implementations (such as Guava's)**? [Caffeine's performance](
https://github.com/ben-manes/caffeine/wiki/Benchmarks) is very good compared to other caching
libraries.

Another reason to use Caffeine instead of Guava Cache is that it avoids an invalidation race: see
[RC.8](#guava-cache-invalidation-race).

<a name="speculation"></a>
[#](#speculation) Sc.10. When some state or a condition is checked, or a resource is allocated,
within a critical section, and the result is usually expected to be positive (access granted, a
resource allocated) because the entity is rarely in a protected, restricted, or otherwise special
state, or because the underlying resource is rarely in shortage, is it possible to **apply
speculation (optimistic concurrency) to improve scalability**? This means to use lighter-weight
synchronization (e. g.
shared locking with a `ReadWriteLock` or a `StampedLock` instead of exclusive
locking) sufficient just to detect a shortage of the resource or that the entity is in a special
state, and fall back to heavier synchronization only when necessary.

This principle is used internally in many scalable concurrent data structures, including
`ConcurrentHashMap` and JCTools's queues, but could be applied on a higher logical level as well.

See also the article about [Optimistic concurrency control](
https://en.wikipedia.org/wiki/Optimistic_concurrency_control) on Wikipedia.

<a name="fjp-instead-tpe"></a>
[#](#fjp-instead-tpe) Sc.11. Was it considered to **use a `ForkJoinPool` instead of a
`ThreadPoolExecutor` with N threads** (e. g. returned from one of the
`Executors.newFixedThreadPool()` methods) for thread pools on which a lot of small tasks are
executed? `ForkJoinPool` is more scalable because internally it maintains one queue per worker
thread, whereas `ThreadPoolExecutor` has a single, blocking task queue shared among all threads.

Like `ThreadPoolExecutor`, `ForkJoinPool` implements `ExecutorService`, so it could often be a
drop-in replacement. For caveats and details, see [this](
http://cs.oswego.edu/pipermail/concurrency-interest/2020-January/017058.html) and [this](
http://cs.oswego.edu/pipermail/concurrency-interest/2020-February/017061.html) message by Doug Lea.

See also items about [using non-blocking collections (including queues) instead of blocking
ones](#non-blocking-collections) and about [using JCTools queues](#jctools).

<a name="lazy-init"></a>
### Lazy initialization and double-checked locking

Regarding all items in this section, see also [EJ Item 83] and "[Safe Publication and Safe
Initialization in Java](https://shipilev.net/blog/2014/safe-public-construction/)".

<a name="lazy-init-thread-safety"></a>
[#](#lazy-init-thread-safety) LI.1.
For every lazily initialized field: **is the initialization code
thread-safe and might it be called from multiple threads concurrently?** If the answers are "no" and
"yes", either double-checked locking should be used or the initialization should be eager.

Be especially wary of using lazy initialization in mutable objects, which are prone to cache
invalidation race conditions: see [RC.9](#cache-invalidation-race).

<a name="use-dcl"></a>
[#](#use-dcl) LI.2. If a field is initialized lazily under a simple lock, is it possible to use
double-checked locking instead to improve performance?

<a name="safe-local-dcl"></a>
[#](#safe-local-dcl) LI.3. Does double-checked locking follow the [SafeLocalDCL](
http://hg.openjdk.java.net/code-tools/jcstress/file/9270b927e00f/tests-custom/src/main/java/org/openjdk/jcstress/tests/singletons/SafeLocalDCL.java#l71)
pattern, as noted in [ETS.1](#pseudo-safety)?

If the initialized objects are immutable, a more efficient [UnsafeLocalDCL](
http://hg.openjdk.java.net/code-tools/jcstress/file/9270b927e00f/tests-custom/src/main/java/org/openjdk/jcstress/tests/singletons/UnsafeLocalDCL.java#l71)
pattern might also be used. However, if the lazily-initialized field is not `volatile` and there are
accesses to the field that bypass the initialization path, the value of the **field must be
carefully cached in a local variable**.
For example, the following code is buggy:
```java
private MyImmutableClass lazilyInitializedField;

void doSomething() {
  ...
  if (lazilyInitializedField != null) {       // (1)
    lazilyInitializedField.doSomethingElse(); // (2) - Can throw NPE!
  }
}
```
This code might result in a `NullPointerException`, because although a non-null value is observed
when the field is read the first time at line 1, the second read at line 2 could observe null.

The above code could be fixed as follows:
```java
void doSomething() {
  MyImmutableClass lazilyInitialized = this.lazilyInitializedField;
  if (lazilyInitialized != null) {
    // Calling doSomethingElse() on a local variable to avoid NPE:
    // see https://github.com/code-review-checklists/java-concurrency#safe-local-dcl
    lazilyInitialized.doSomethingElse();
  }
}
```
See "[Wishful Thinking: Happens-Before Is The Actual Ordering](
https://shipilev.net/blog/2016/close-encounters-of-jmm-kind/#wishful-hb-actual)" and
"[Date-Race-Ful Lazy Initialization for Performance](
http://jeremymanson.blogspot.com/2008/12/benign-data-races-in-java.html)" for more information.

<a name="eager-init"></a>
[#](#eager-init) LI.4. In each particular case, doesn’t the **net impact of double-checked locking
and lazy field initialization on performance and complexity outweigh the benefits of lazy
initialization?** Isn’t it ultimately better to initialize the field eagerly?

<a name="lazy-init-benign-race"></a>
[#](#lazy-init-benign-race) LI.5. If a field is initialized lazily under a simple lock or using
double-checked locking, does it really need locking? If nothing bad may happen if two threads do the
initialization at the same time and use different copies of the initialized state, then a benign
race could be allowed.
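
A minimal sketch of such a benign race, assuming a hypothetical immutable, cheaply computed value
(the single-check idiom with a `volatile` field):

```java
class HostLabel {
    // Lazily computed immutable value. Two threads may both miss the null check
    // and compute their own copy -- the race is benign because the copies are
    // equal and immutable, and `volatile` gives the needed happens-before edge.
    private volatile String cached;

    String get() {
        String result = cached; // a single volatile read on the common path
        if (result == null) {
            result = computeValue(); // may occasionally run in several threads
            cached = result;
        }
        return result;
    }

    private String computeValue() {
        // Stands in for an expensive, side-effect-free computation
        return "host-" + Integer.toHexString(42);
    }
}
```
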
The initialized field should still be `volatile` (unless the initialized objects
are immutable) to ensure there is a happens-before edge between threads doing the initialization and
reading the field. This is called *a single-check idiom* (or *a racy single-check idiom* if the
field doesn't have a `volatile` modifier) in [EJ Item 83].

Annotate such fields with [`@LazyInit`](
http://errorprone.info/api/latest/com/google/errorprone/annotations/concurrent/LazyInit.html) from
[`error_prone_annotations`](
https://search.maven.org/search?q=a:error_prone_annotations%20g:com.google.errorprone). The place
in code with the race should also be identified with WARNING comments: see
[NB.3](#non-blocking-warning).

<a name="no-static-dcl"></a>
[#](#no-static-dcl) LI.6. Is the **[lazy initialization holder class idiom](
https://en.wikipedia.org/wiki/Initialization-on-demand_holder_idiom) used for static fields which
must be lazy, rather than double-checked locking?** There is no reason to use double-checked locking
for static fields because the lazy initialization holder class idiom is simpler, harder to make a
mistake in, and is at least as efficient as double-checked locking (see benchmark results in "[Safe
Publication and Safe Initialization in
Java](https://shipilev.net/blog/2014/safe-public-construction/)").

<a name="non-blocking"></a>
### Non-blocking and partially blocking code

<a name="check-non-blocking-code"></a>
[#](#check-non-blocking-code) NB.1. If there is some non-blocking or partially blocking
code that mutates the state of a thread-safe class, was it deliberately checked that if a **thread
on a non-blocking mutation path is preempted after any statement, the object is still in a valid
state**?
Are there enough comments, perhaps before almost every statement where the state is\nchanged, to make it relatively easy for readers of the code to repeat and verify the check?\n\n\u003ca name=\"swap-state-atomically\"\u003e\u003c/a\u003e\n[#](#swap-state-atomically) NB.2. Is it possible to simplify some non-blocking code by **confining\nall mutable state in an immutable POJO and update it via compare-and-swap operations**? This pattern\nis also mentioned in [RC.5](#moving-state-race). Instead of a POJO, a single `long` value could be\nused if all parts of the state are integers that can together fit 64 bits. See also [JCIP 15.3.1].\n\n\u003ca name=\"non-blocking-warning\"\u003e\u003c/a\u003e\n[#](#non-blocking-warning) NB.3. Are there **visible WARNING comments identifying the boundaries of\nnon-blocking code**? The comments should mark the start and the end of non-blocking code, partially\nblocking code, and benignly racy code (see [Dc.8](#document-benign-race) and\n[LI.5](#lazy-init-benign-race)). The opening comments should:\n\n 1. Justify the need for such error-prone code (which is a special case of\n [Dc.1](#justify-document)).\n 2. **Warn developers that changes in the following code should be made (and reviewed) extremely\n carefully.**\n\n\u003ca name=\"justify-busy-wait\"\u003e\u003c/a\u003e\n[#](#justify-busy-wait) NB.4. If some condition is awaited in a (busy) loop, like in the following\nexample:\n```java\nvolatile boolean condition;\n\n// in some method:\n    while (!condition) {\n      // Or Thread.sleep/yield/onSpinWait, or no statement, i. e. a pure spin wait\n      TimeUnit.SECONDS.sleep(1L);\n    }\n    // ... 
do something when condition is true
```
**Is it explained in a comment why busy waiting is needed in the specific case**, and why the
costs and potential problems associated with busy waiting (see [JCIP 12.4.2] and [JCIP 14.1.1])
either don't apply in the specific case or are outweighed by the benefits?

If there is no good reason for spin waiting, it's preferable to synchronize explicitly using a tool
such as [`Semaphore`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/Semaphore.html),
[`CountDownLatch`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/CountDownLatch.html
), or [`Exchanger`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/Exchanger.html),
or, if the logic which the spin loop awaits is executed in some `ExecutorService`, it's better to
add a callback to the corresponding `CompletableFuture` or [`ListenableFuture`](
https://github.com/google/guava/wiki/ListenableFutureExplained) (check [TE.7](#cf-beware-non-async)
about doing this properly).

In test code waiting for some condition, a library such as [Awaitility](
https://github.com/awaitility/awaitility) could be used instead of explicit looping with
`Thread.sleep` calls.

Since `Thread.yield()` and `Thread.onSpinWait()` are rarely, if ever, useful outside of spin loops,
this item could also be interpreted as requiring a comment for every call to either of these
methods, explaining either why it is made outside of a spin loop, or justifying the spin loop
itself.

In any case, the field checked in the busy loop must be `volatile`: see
[IS.2](#non-volatile-visibility) for details.

The busy wait pattern is covered by IntelliJ IDEA's inspections "Busy wait" and "while loop spins on
field".

### Threads and Executors

<a name="name-threads"></a>
[#](#name-threads) TE.1. **Are Threads given names** when created?
Are ExecutorServices created with
thread factories that name threads?

It appears that different projects have different policies regarding other aspects of `Thread`
creation: whether to make threads daemon with `setDaemon()`, whether to set thread priorities, and
whether a `ThreadGroup` should be specified. Many such rules can be effectively enforced with
[forbidden-apis](https://github.com/policeman-tools/forbidden-apis).

<a name="reuse-threads"></a>
[#](#reuse-threads) TE.2. Aren’t there threads created and started, but not stored in fields, à la
**`new Thread(...).start()`**, in some methods that may be called repeatedly? Is it possible to
delegate the work to a cached or a shared `ExecutorService` instead?

Another form of this problem is when a **`Thread` (or an `ExecutorService`) is created and managed
within objects (in other words, [active objects](https://en.wikipedia.org/wiki/Active_object)) that
are relatively short-lived.** Is it possible to reuse executors by creating them one level up the
stack and passing shared executors to constructors of the short-lived objects, or by using a shared
`ExecutorService` stored in a static field?

<a name="cached-thread-pool-no-io"></a>
[#](#cached-thread-pool-no-io) TE.3. **Aren’t some network I/O operations performed in an
`Executors.newCachedThreadPool()`-created `ExecutorService`?** If a machine that runs the
application has network problems or the network bandwidth is exhausted due to increased load,
cached thread pools that perform network I/O might begin to create new threads uncontrollably.

Note that completing some `CompletableFuture` or `SettableFuture` from inside a cached thread pool
and then returning this future to a user might expose the thread pool to executing unwanted actions
if the future is used improperly: see [TE.7](#cf-beware-non-async) for details.

<a name="fjp-no-blocking"></a>
[#](#fjp-no-blocking) TE.4.
**Aren’t there blocking or I/O operations performed in tasks scheduled
to a `ForkJoinPool`** (except those performed via a [`managedBlock()`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ForkJoinPool.html#managedBlock(java.util.concurrent.ForkJoinPool.ManagedBlocker)
) call)? Parallel `Stream` operations are executed in the common `ForkJoinPool` implicitly, as are
the lambdas passed into `CompletableFuture`’s methods whose names end with "Async" and that don't
accept a custom executor.

Note that attaching blocking or I/O operations to a `CompletableFuture` as a stage in the *default
execution mode* (via methods like `thenAccept()`, `thenApply()`, `handle()`, etc.) might also
inadvertently lead to performing them in a `ForkJoinPool` from which the future may be completed:
see [TE.7](#cf-beware-non-async).

`Thread.sleep()` is a blocking operation.

This advice should not be taken too far: occasional transient I/O (such as may happen during
logging) and operations that may rarely block (such as `ConcurrentHashMap.put()` calls) usually
shouldn’t disqualify all their callers from execution in a `ForkJoinPool` or in a parallel `Stream`.
See [Parallel Stream Guidance](http://gee.cs.oswego.edu/dl/html/StreamParallelGuidance.html) for a
more detailed discussion of these tradeoffs.

See also [the section about parallel Streams](#parallel-streams).

<a name="use-common-fjp"></a>
[#](#use-common-fjp) TE.5. An opposite of the previous item: **can non-blocking computations be
parallelized or executed asynchronously by submitting tasks to `ForkJoinPool.commonPool()` or via
parallel Streams instead of using a custom thread pool** (e. g. created by one of the static factory
methods from `Executors`)?
Unless the custom thread pool is configured with a `ThreadFactory`
that specifies a non-default priority for threads or a custom exception handler (see
[TE.1](#name-threads)), there is little reason to create more threads in the system instead of
reusing threads of the common `ForkJoinPool`.

<a name="explicit-shutdown"></a>
[#](#explicit-shutdown) TE.6. Is every **`ExecutorService` treated as a resource and shut down
explicitly in the `close()` method of the containing object**, or in a try-with-resources or a
try-finally statement? Failure to shut down an `ExecutorService` might lead to a thread leak even if
an `ExecutorService` object is no longer accessible, because some implementations (such as
`ThreadPoolExecutor`) shut themselves down in a finalizer, [while `finalize()` is not guaranteed to
ever be called](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Object.html#finalize()) by
the JVM.

To make explicit shutdown possible, first, [`ExecutorService` objects must not be assigned into
variables and fields of `Executor` type](#executor-service-type-loss).

<a name="cf-beware-non-async"></a>
[#](#cf-beware-non-async) TE.7.
Are **non-async stages attached to a `CompletableFuture` simple and
non-blocking** unless the future is [completed](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/CompletableFuture.html#complete(T)
) from a thread in the same thread pool as the thread from which a `CompletionStage` is attached?
This also applies when an asynchronous callback is attached using Guava's
`ListenableFuture.addListener()` or [`Futures.addCallback()`](
https://guava.dev/releases/28.1-jre/api/docs/com/google/common/util/concurrent/Futures.html#addCallback-com.google.common.util.concurrent.ListenableFuture-com.google.common.util.concurrent.FutureCallback-java.util.concurrent.Executor-
) methods and [`directExecutor()`](
https://guava.dev/releases/28.1-jre/api/docs/com/google/common/util/concurrent/MoreExecutors.html#directExecutor--
) (or an equivalent) is provided as the executor for the callback.

Non-async execution is called *default execution* (or *default mode*) in the documentation for
[`CompletionStage`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/CompletionStage.html):
these are methods `thenApply()`, `thenAccept()`, `thenRun()`, `handle()`, etc.
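
A small sketch of this dual behavior (the class and thread names are hypothetical): the same
`thenApply()` call runs its function in the attaching thread if the future is already complete, but
in the completing thread otherwise:

```java
import java.util.concurrent.CompletableFuture;

class DefaultModeDemo {
    // Returns the name of the thread that executed a default-mode thenApply() stage.
    static String stageThread(boolean completeBeforeAttach) {
        CompletableFuture<String> future = new CompletableFuture<>();
        if (completeBeforeAttach) {
            // Already complete: thenApply() below runs synchronously in this thread
            future.complete("done");
        }
        CompletableFuture<String> stage =
            future.thenApply(v -> Thread.currentThread().getName());
        if (!completeBeforeAttach) {
            // Not yet complete: the stage runs in whichever thread calls complete()
            Thread completer = new Thread(() -> future.complete("done"), "completer");
            completer.start();
            try {
                completer.join();
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
        }
        return stage.join();
    }
}
```
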
Such stages may be
executed *either* from the thread adding a stage or from the thread calling `future.complete()`.

If the `CompletableFuture` originates from a library, it's usually unknown in which thread
(executor) it is completed; therefore, chaining a heavyweight, blocking, or I/O operation to such a
future might lead to problems described in [TE.3](#cached-thread-pool-no-io),
[TE.4](#fjp-no-blocking) (if the future is completed from a `ForkJoinPool` or an event loop executor
in an asynchronous library), and [Dl.4](#non-open-call).

Even if the `CompletableFuture` is created in the same codebase as the stages attached to it, but is
completed in a different thread pool, the default stage execution mode leads to non-deterministic
scheduling of operations. If these operations are heavyweight and/or blocking, this reduces the
system's operational predictability and robustness, and also creates a subtle dependency between the
components: e. g. if the component which completes the future decides to migrate this action to a
`ForkJoinPool`, it could suddenly lead to the problems described in the previous paragraph.

A lightweight, non-blocking operation which is OK to attach to a `CompletableFuture` as a non-async
stage may be something like incrementing an `AtomicInteger` counter, adding an element to a
non-blocking queue, putting a value into a `ConcurrentHashMap`, or a logging statement.

It's also fine to attach a stage in the default mode if it is preceded by an `if (future.isDone())`
check which guarantees that the completion stage will be executed immediately in the current thread
(assuming the current thread belongs to the proper thread pool to perform the stage action).

The specific reason(s) making non-async stage or callback attachment permissible (future completion
and stage attachment happening in the same thread pool; a simple, non-blocking callback; or a
`future.isDone()` check) should be identified in a comment.

See also the Javadoc for
[`ListenableFuture.addListener()`](
https://guava.dev/releases/28.1-jre/api/docs/com/google/common/util/concurrent/ListenableFuture.html#addListener-java.lang.Runnable-java.util.concurrent.Executor-
) describing this problem.

<a name="no-sleep-schedule"></a>
[#](#no-sleep-schedule) TE.8. Is it possible to **execute a task or an action with a delay via a
[`ScheduledExecutorService`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ScheduledExecutorService.html)
rather than by calling `Thread.sleep()` before performing the work** or submitting the task to
an executor? `ScheduledExecutorService` makes it possible to execute many such tasks on a small
number of threads, while the approach with `Thread.sleep` requires a dedicated thread for every
delayed action. Sleeping in the context of an unknown executor (if there is insufficient *concurrent
access documentation* for the method, as per [Dc.1](#justify-document), or if this is a
concurrency-agnostic library method) before submitting the task to an executor is also bad: the
context executor may not be well-suited for blocking calls such as `Thread.sleep`; see
[TE.4](#fjp-no-blocking) for details.

This item equally applies to scheduling one-shot and recurrent delayed actions;
`ScheduledExecutorService` has methods for both scenarios.

Be cautious, however, about scheduling tasks with affinity to system time or UTC time (e. g. the
beginning of each hour) using `ScheduledThreadPoolExecutor`: it can experience [unbounded clock
drift](#external-interaction-schedule).

<a name="check-await-termination"></a>
[#](#check-await-termination) TE.9. **Is the result of `ExecutorService.awaitTermination()` calls
checked?** Calling `awaitTermination()` (or `ForkJoinPool.awaitQuiescence()`) and not checking the
result makes little sense. If it's actually important to await termination, e. g.
to
ensure a happens-before relation between the completion of the actions scheduled to the
`ExecutorService` and some actions following the `awaitTermination()` call, or because termination
means a release of some heavy resource whose leak would be noticeable, then it is reasonable to at
least check the result of `awaitTermination()` and log a warning if the result is `false`, which
makes debugging potential problems easier. Otherwise, if awaiting termination really makes no
difference, then it's better not to call `awaitTermination()` at all.

Apart from `ExecutorService`, this item also applies to the `awaitTermination()` methods on
[`AsynchronousChannelGroup`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/AsynchronousChannelGroup.html)
and [`io.grpc.ManagedChannel`](https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html)
from [gRPC-Java](https://github.com/grpc/grpc-java).

It's possible to find omitted checks of `awaitTermination()` results using the [Structural search
inspection](https://www.jetbrains.com/help/phpstorm/general-structural-search-inspection.html) in
IntelliJ IDEA with the following pattern:
```
$x$.awaitTermination($y$, $z$);
```

See also a similar item about [not checking the result of `CountDownLatch.await()`](#check-await).

<a name="executor-service-type-loss"></a>
[#](#executor-service-type-loss) TE.10.
**Isn't `ExecutorService` assigned into a variable or a
field of `Executor` type?** This makes it impossible to follow the practice of [explicit shutdown of
`ExecutorService` objects](#explicit-shutdown).

In IntelliJ IDEA, it's possible to find violations of this practice automatically using two patterns
added to the [Structural Search inspection](
https://www.jetbrains.com/help/phpstorm/general-structural-search-inspection.html):
 - "Java" pattern: `$x$ = $y$`, where the "Type" of `$x$` is `Executor` ("within type hierarchy"
 flag is off) and the "Type" of `$y$` is `ExecutorService` ("within type hierarchy" flag is on).
 - "Java - Class Member" pattern: `$Type$ $x$ = $y$;`, where the "Text" of `$Type$` is `Executor`
 and the "Type" of `$y$` is `ExecutorService` (within type hierarchy).

<a name="unneeded-scheduled-executor-service"></a>
[#](#unneeded-scheduled-executor-service) TE.11. **Isn't `ScheduledExecutorService` assigned into
a variable or a field of `ExecutorService` type?** This is wasteful because the primary Java
implementation of `ScheduledExecutorService`, [`ScheduledThreadPoolExecutor`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ScheduledThreadPoolExecutor.html)
(returned from the `Executors.newScheduledThreadPool()` and `newSingleThreadScheduledExecutor()`
static factory methods), uses a priority queue internally to manage tasks, which incurs a higher
memory footprint and CPU overhead compared to non-scheduled `ExecutorService` implementations such
as vanilla `ThreadPoolExecutor` or `ForkJoinPool`.

This problem could be caught statically in a way similar to what is described in the [previous
item](#executor-service-type-loss).

### Parallel Streams

<a name="justify-parallel-stream-use"></a>
[#](#justify-parallel-stream-use) PS.1.
For every use of parallel Streams via
`Collection.parallelStream()` or `Stream.parallel()`: **is it explained in a comment preceding the
stream operation why a parallel `Stream` is used?** Are there back-of-the-envelope calculations or
references to benchmarks showing that the total CPU time cost of the parallelized computation
exceeds [100 microseconds](http://gee.cs.oswego.edu/dl/html/StreamParallelGuidance.html)?

Is there a note in the comment that the parallelized operations are generally I/O-free and
non-blocking, as per [TE.4](#fjp-no-blocking)? The latter might be obvious at the moment of writing,
but as the codebase evolves, the logic called from the parallel stream operation might accidentally
become blocking. Without a comment, it’s harder to notice the discrepancy and the fact that the
computation is no longer a good fit for parallel Streams. This can be fixed either by making the
logic non-blocking again or by using a sequential `Stream` instead of a parallel one.

### Futures

<a name="unneeded-future"></a>
[#](#unneeded-future) Ft.1. Does a method returning a `Future` do some blocking operation
asynchronously? If it doesn't, **was it considered to perform the non-blocking computation logic and
return the result directly from the method, rather than within a `Future`?** There are situations
when someone might still want to return a `Future` wrapping some non-blocking computation,
essentially relieving the users of writing boilerplate code like
`CompletableFuture.supplyAsync(obj::expensiveComputation)` if all of them want to run the method
asynchronously.
But if at least some of the clients don't need the indirection, it's better not to
wrap the logic into a `Future` prematurely and to give the users of the API a choice to do this
themselves.

See also [ETS.3](#unneeded-thread-safety) about unneeded thread-safety of a method.

<a name="future-method-no-blocking"></a>
[#](#future-method-no-blocking) Ft.2. Aren't there **blocking operations in a method returning a
`Future` before the asynchronous execution is started**, and is it started at all? Here is the
antipattern:
```java
// DON'T DO THIS
Future<Salary> getSalary(Employee employee) throws ConnectionException {
  Branch branch = retrieveBranch(employee); // A database or an RPC call
  return CompletableFuture.supplyAsync(() -> {
    return retrieveSalary(branch, employee); // Another database or an RPC call
  }, someBlockingIoExecutor());
}
```
Blocking the caller thread is unexpected for a user seeing a method returning a `Future`.

An example completely without asynchrony:
```java
// DON'T DO THIS
Future<Salary> getSalary(Employee employee) throws ConnectionException {
  SalaryDTO salaryDto = retrieveSalary(employee); // A database or an RPC call
  Salary salary = toSalary(salaryDto);
  return completedFuture(salary); // Or Guava's immediateFuture(), Scala's successful()
}
```

If the `retrieveSalary()` method is not blocking itself, `getSalary()` [may not need to return
a `Future`](#unneeded-future).

Another problem with making blocking calls before scheduling a `Future` is that the resulting code
has [multiple failure paths](#future-method-failure-paths): either the future may complete
exceptionally, or the method itself may throw an exception (typically from the blocking operation),
which is illustrated by `getSalary() throws ConnectionException` in the above examples.

This advice also applies when a method returns any object representing an asynchronous execution
other
than `Future`, such as `Deferred`, [`Flow.Publisher`](\nhttps://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/Flow.Publisher.html\n), [`org.reactivestreams.Publisher`](\nhttps://www.reactive-streams.org/reactive-streams-1.0.3-javadoc/org/reactivestreams/Publisher.html),\nor RxJava's [`Observable`](http://reactivex.io/RxJava/javadoc/io/reactivex/Observable.html).\n\n\u003ca name=\"future-method-failure-paths\"\u003e\u003c/a\u003e\n[#](#future-method-failure-paths) Ft.3. If a method returns a `Future` and some logic in the\nbeginning of it may lead to an *expected failure* (i. e. not a result of a programming bug), **was\nit considered to propagate an expected failure by a `Future` completed exceptionally, rather than\nthrowing from the method?** For example, the following method:\n```java\nFuture\u003cResponse\u003e makeQuery(String query) throws InvalidQueryException {\n  Request req = compile(query); // Can throw an InvalidQueryException\n  return CompletableFuture.supplyAsync(() -\u003e service.remoteCall(req), someBlockingIoExecutor());\n}\n```\nMay be converted into:\n```java\nFuture\u003cResponse\u003e makeQuery(String query) {\n  try {\n    Request req = compile(query);\n  } catch (InvalidQueryException e) {\n    // Explicit catch preserves the semantics of the original version of makeQuery() most closely.\n    // If compile(query) is an expensive computation, it may be undesirable to schedule it to\n    // someBlockingIoExecutor() by simply moving compile(query) into the lambda below because if\n    // this pool has more threads than CPUs then too many compilations might be started in parallel,\n    // leading to excessive switches between threads running CPU-bound tasks.\n    // Another alternative is scheduling compile(query) to the common FJP:\n    // CompletableFuture.supplyAsync(() -\u003e compile(query))\n    //   .thenApplyAsync(service::remoteCall, someBlockingIoExecutor());\n    CompletableFuture\u003cResponse\u003e f = 
new CompletableFuture\u003c\u003e();\n    f.completeExceptionally(e);\n    return f; // Or use Guava's immediateFailedFuture()\n  }\n  return CompletableFuture.supplyAsync(() -\u003e service.remoteCall(req), someBlockingIoExecutor());\n}\n```\nThe point of this refactoring is unification of failure paths, so that the users of the API don't\nhave to deal with multiple different ways of handling errors from the method.\n\nSimilarly to [the previous item](#future-method-no-blocking), this consideration also applies when\na method returns any object representing an asynchronous execution other than `Future`, such as\n`Deferred`, `Publisher`, or `Observable`.\n\n### Thread interruption and `Future` cancellation\n\n\u003ca name=\"restore-interruption\"\u003e\u003c/a\u003e\n[#](#restore-interruption) IF.1. If some code propagates `InterruptedException` wrapped into another\nexception (e. g. `RuntimeException`), is **the interruption status of the current thread restored\nbefore the wrapping exception is thrown?**\n\nPropagating `InterruptedException` wrapped into another exception is a controversial practice\n(especially in libraries) and it may be prohibited in some projects completely, or in specific\nsubsystems.\n\n\u003ca name=\"interruption-swallowing\"\u003e\u003c/a\u003e\n[#](#interruption-swallowing) IF.2. If some method **returns normally after catching an\n`InterruptedException`**, is this coherent with the (documented) semantics of the method? Returning\nnormally after catching an `InterruptedException` usually makes sense only in two types of methods:\n\n - `Runnable.run()` or `Callable.call()` themselves, or methods that are intended to be submitted as\n tasks to some Executors as method references. `Thread.currentThread().interrupt()` should still be\n called before returning from the method, assuming that the interruption policy of the threads in\n the `Executor` is unknown.\n - Methods with \"try\" or \"best effort\" semantics. 
Documentation for such methods should make clear
 that they stop attempting to do something when the thread is interrupted, restore the interruption
 status of the thread, and return. For example, `log()` or `sendMetric()` could probably be such
 methods, as well as `boolean trySendMoney()`, but not `void sendMoney()`.

If a method doesn't fall into either of these categories, it should propagate `InterruptedException`
directly or wrapped into another exception (see the previous item), or it should not return normally
after catching an `InterruptedException`, but rather continue execution in some sort of retry loop,
saving the interruption status and restoring it before returning (see an [example](
http://jcip.net/listings/NoncancelableTask.java) from JCIP). Fortunately, in most situations there
is no need to write such boilerplate code: **one of the methods from Guava's [`Uninterruptibles`](
https://google.github.io/guava/releases/27.0.1-jre/api/docs/com/google/common/util/concurrent/Uninterruptibles.html
) utility class can be used.**

<a name="cancel-future"></a>
[#](#cancel-future) IF.3. If an **`InterruptedException` or a `TimeoutException` is caught on a
`Future.get()` call** and the task behind the future doesn't have side effects, i. e. `get()` is
called only to obtain and use the result in the context of the current thread rather than to achieve
some side effect, is the future [canceled](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/Future.html#cancel(boolean))?

See [JCIP 7.1] for more information about thread interruption and task cancellation.

### Time

<a name="nano-time-overflow"></a>
[#](#nano-time-overflow) Tm.1. Are values returned from **`System.nanoTime()` compared in an
overflow-aware manner**, as described in [the documentation](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/System.html#nanoTime()) for
this method?

<a name="time-going-backward"></a>
[#](#time-going-backward) Tm.2. **Isn't `System.currentTimeMillis()` used for time comparisons,
timed blocking, measuring intervals, timeouts, etc.?** `System.currentTimeMillis()` is subject to
the "time going backward" phenomenon. This might happen due to a time correction on a server, for
example.

`System.nanoTime()` should be used instead of `currentTimeMillis()` for the purposes of time
comparison, interval measurement, etc. Values returned from `nanoTime()` never decrease (but may
overflow — see the previous item). Warning: `nanoTime()` didn't always uphold this guarantee in
OpenJDK until 8u192 (see [JDK-8184271](https://bugs.openjdk.java.net/browse/JDK-8184271)). Make sure
to use a sufficiently recent distribution.

In distributed systems, the [leap second](https://en.wikipedia.org/wiki/Leap_second) adjustment
causes similar issues.

<a name="time-units"></a>
[#](#time-units) Tm.3. Do **variables that store time limits and periods have suffixes identifying
their units**, for example, "timeoutMillis" (also -Seconds, -Micros, -Nanos) rather than just
"timeout"? In method and constructor parameters, an alternative is providing a [`TimeUnit`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/TimeUnit.html)
parameter next to a "timeout" parameter. This is the preferred option for public APIs.

<a name="treat-negative-timeout-as-zero"></a>
[#](#treat-negative-timeout-as-zero) Tm.4.
**Do methods that have "timeout" and "delay" parameters
treat negative arguments as zeros?** This is to obey the principle of least astonishment, because
all timed blocking methods in classes from `java.util.concurrent.*` follow this convention.

<a name="external-interaction-schedule"></a>
[#](#external-interaction-schedule) Tm.5. **Tasks that should happen at a certain system time, UTC
time, or wall-clock time far in the future, or run periodically with a cadence expressed in terms of
system/UTC/wall-clock time (rather than the machine's internal clock), are *not* scheduled with
`ScheduledThreadPoolExecutor`?** `ScheduledThreadPoolExecutor` (this class is also behind all
factory methods in `Executors` which return a `ScheduledExecutorService`) uses `System.nanoTime()`
for timing intervals. [`nanoTime()` can drift against the system time and the UTC time.](
https://medium.com/@leventov/cronscheduler-a-reliable-java-scheduler-for-external-interactions-cb7ce4a4f2cd)

[`CronScheduler`](https://github.com/TimeAndSpaceIO/CronScheduler) is a scheduling class designed
to be resilient against unbounded clock drift relative to UTC or system time, for both one-shot and
periodic tasks. See more detailed recommendations on [choosing between
`ScheduledThreadPoolExecutor` and `CronScheduler`](
https://medium.com/@leventov/cronscheduler-a-reliable-java-scheduler-for-external-interactions-cb7ce4a4f2cd#4926).
On Android, use [Android-specific APIs](
https://android.jlelse.eu/schedule-tasks-and-jobs-intelligently-in-android-e0b0d9201777).

<a name="user-interaction-schedule"></a>
[#](#user-interaction-schedule) Tm.6. On consumer devices (PCs, laptops, tablets, phones),
**`ScheduledThreadPoolExecutor` (or `Timer`) is *not* used for human interaction tasks or
interactions between the device and a remote service?** Examples of human interaction tasks are
alarms, notifications, timers, or task management. Examples of interactions between a user's device
and remote services are checking for new e-mails or messages, widget updates, or software updates.
The reason for this is that [neither `ScheduledThreadPoolExecutor` nor `Timer` accounts for machine
suspension](
https://medium.com/@leventov/cronscheduler-a-reliable-java-scheduler-for-external-interactions-cb7ce4a4f2cd#dcfe)
(such as sleep or hibernation mode). On Android, use [Android-specific APIs](
https://android.jlelse.eu/schedule-tasks-and-jobs-intelligently-in-android-e0b0d9201777) instead.
For end-user JVM apps, consider [CronScheduler](https://github.com/TimeAndSpaceIO/CronScheduler) as
a replacement for `ScheduledThreadPoolExecutor` in these cases.

### `ThreadLocal`

<a name="tl-static-final"></a>
[#](#tl-static-final) TL.1. **Can a `ThreadLocal` field be `static final`?** There are three cases
when a `ThreadLocal` cannot be static:
- It *holds some state specific to the containing instance object*, rather than, for example,
  reusable objects to avoid allocations (which would be the same for all `ThreadLocal`-containing
  instances).
- A method using a `ThreadLocal` may call another method (or the same method, recursively) that also
  uses this `ThreadLocal`, but on a different containing object.
- There is a class (or `enum`) modelling a specific type of `ThreadLocal` usage, and there is only a
  limited number of instances of this class in the JVM: i. e.
all are constants stored in
  `static final` fields, or `enum` constants.

If a usage of `ThreadLocal` doesn't fall into any of these categories, it can be `static final`.

There is an inspection "ThreadLocal field not declared static final" in IntelliJ IDEA which
corresponds to this item.

Static `ThreadLocal` fields could also be enforced with Checkstyle, using the following combination
of checks:
```xml
<!-- Enforce 'private static final' order of modifiers -->
<module name="ModifierOrder" />

<!-- Ensure all ThreadLocal fields are private -->
<!-- Requires https://github.com/sevntu-checkstyle/sevntu.checkstyle -->
<module name="AvoidModifiersForTypesCheck">
  <property name="forbiddenClassesRegexpProtected" value="ThreadLocal"/>
  <property name="forbiddenClassesRegexpPublic" value="ThreadLocal"/>
  <property name="forbiddenClassesRegexpPackagePrivate" value="ThreadLocal"/>
</module>

<!-- Prohibit any ThreadLocal field which is not private static final -->
<module name="Regexp">
  <property name="id" value="nonStaticThreadLocal"/>
  <property name="format"
    value="^\s*private\s+(ThreadLocal|static\s+ThreadLocal|final\s+ThreadLocal)"/>
  <property name="illegalPattern" value="true"/>
  <property name="message" value="ThreadLocal field must be private static final"/>
</module>
```

<a name="threadlocal-design"></a>
[#](#threadlocal-design) TL.2. Doesn't a **`ThreadLocal` mask issues with the code, such as poor
control flow or data flow design?** Is it possible to redesign the system without using
`ThreadLocal`, and would that be simpler? This is especially true for instance-level (non-static)
`ThreadLocal` fields; see also [TL.1](#tl-static-final) and [TL.4](#tl-instance-chm) about them.

See [Dc.2](#threading-flow-model) about the importance of articulating the control flow and the data
flow of a subsystem, which may help to uncover other issues with the design.

<a name="threadlocal-performance"></a>
[#](#threadlocal-performance) TL.3. Isn't a **`ThreadLocal` used only to reuse some small heap
objects that are cheap to allocate and initialize, and that would otherwise be allocated relatively
infrequently?** In this case, the cost of accessing a `ThreadLocal` would likely outweigh the
benefit of reducing allocations. Evidence should be supplied that introducing a `ThreadLocal`
shortens the GC pauses and/or increases the overall throughput of the system.

<a name="tl-instance-chm"></a>
[#](#tl-instance-chm) TL.4. If the threads that execute code using a non-static
`ThreadLocal` are long-living and there is a fixed number of them (e. g. workers of a fixed-size
`ThreadPoolExecutor`), and there is a greater number of shorter-living `ThreadLocal`-containing
objects, was it considered to **replace the instance-level `ThreadLocal<Val>` with a
`ConcurrentHashMap<Thread, Val> threadLocalValues` confined to the objects**, accessed like
`threadLocalValues.get(Thread.currentThread())`? This approach requires some confidence and
knowledge about the threading model of the subsystem (see [Dc.2](#threading-flow-model)), though it
may also be trivial if the Active Object pattern is used (see [Dn.2](#use-patterns)), but it is much
friendlier to GC because no short-living weak references are produced.

### Thread safety of Cleaners and native code

<a name="thread-safe-close-with-cleaner"></a>
[#](#thread-safe-close-with-cleaner) CN.1.
If a class manages native resources and employs
`java.lang.ref.Cleaner` (or `sun.misc.Cleaner`, or overrides `Object.finalize()`) to ensure that
resources are freed when objects of the class are garbage collected, and the class implements
`Closeable` with the same cleanup logic executed from `close()` directly rather than through
`Cleanable.clean()` (or `sun.misc.Cleaner.clean()`) in order to distinguish between explicit
`close()` and cleanup through a cleaner (for example, `clean()` can log a warning about the object
not being closed explicitly before freeing the resources), is it ensured that even if the **cleanup
logic is called concurrently from multiple threads, the actual cleanup is performed only once**? The
cleanup logic in such classes should obviously be idempotent because it's usually expected to be
called twice: the first time from the `close()` method and the second time from the cleaner or
`finalize()`. The catch is that the cleanup *must be concurrently idempotent, even if `close()` is
never called concurrently on objects of the class*. That's because the garbage collector may
consider the object unreachable before the end of a `close()` call and initiate cleanup
through the cleaner or `finalize()` while `close()` is still being executed.

Alternatively, `close()` could simply delegate to `Cleanable.clean()` (`sun.misc.Cleaner.clean()`),
which is concurrently idempotent itself. But then it's impossible to distinguish between explicit
and automatic cleanup.

See also [JLS 12.6.2](https://docs.oracle.com/javase/specs/jls/se11/html/jls-12.html#jls-12.6.2).

<a name="reachability-fence"></a>
[#](#reachability-fence) CN.2. In a class with some native state that has a cleaner or overrides
`finalize()`, are **bodies of all methods that interact with the native state wrapped with
`try { ... } finally { Reference.reachabilityFence(this); }`**,
including constructors and the `close()` method, but excluding `finalize()`? This is needed because
an object could become unreachable and the native memory might be freed from the cleaner while a
method that interacts with the native state is still being executed, which might lead to a
use-after-free or JVM memory corruption.

`reachabilityFence()` in `close()` also eliminates the race between `close()` and the cleanup
executed through the cleaner or `finalize()` (see the previous item), but it may be a good idea to
retain the thread safety precautions in the cleanup procedure, especially if the class in question
belongs to the public API of the project. Otherwise, if `close()` is accidentally or maliciously
called concurrently from multiple threads, the JVM might crash due to a double free or, worse,
memory might be silently corrupted, while the promise of the Java platform is that however buggy
some code is, as long as it passes bytecode verification, thrown exceptions should be the worst
possible outcome; the virtual machine shouldn't crash. [CN.4](#thread-safe-native) also stresses
this principle.

`Reference.reachabilityFence()` was added in JDK 9. If the project targets JDK 8 and the HotSpot
JVM, [any method with an empty body is an effective emulation of `reachabilityFence()`](
http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-February/051312.html).

See the documentation for [`Reference.reachabilityFence()`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ref/Reference.html#reachabilityFence(java.lang.Object))
and [this discussion](http://cs.oswego.edu/pipermail/concurrency-interest/2015-December/014609.html)
in the concurrency-interest mailing list for more information.

<a name="finalize-misuse"></a>
[#](#finalize-misuse) CN.3.
Aren't there classes that have **cleaners or override `finalize()` not
to free native resources**, but merely to return heap objects to some pools, or merely to report
that some heap objects are not returned to some pools? This is an antipattern because of the
tremendous complexity of using cleaners and `finalize()` correctly (see the previous two items) and
the negative impact on performance (especially of `finalize()`), which might be even larger than the
impact of not returning objects to some pool and thus slightly increasing the garbage allocation
rate in the application. If the latter issue turns out to be important, it is better diagnosed with
[async-profiler](https://github.com/jvm-profiling-tools/async-profiler) in the allocation profiling
mode (`-e alloc`) than by registering cleaners or overriding `finalize()`.

This advice also applies when the pooled objects are direct `ByteBuffer`s or other Java wrappers of
native memory chunks. [async-profiler -e malloc](
https://stackoverflow.com/questions/53576163/interpreting-jemaloc-data-possible-off-heap-leak/53598622#53598622)
could be used in such cases to detect direct memory leaks.

<a name="thread-safe-native"></a>
[#](#thread-safe-native) CN.4. If some **classes have some state in native memory and are used
actively in concurrent code, or belong to the public API of the project, was it considered to make
them thread-safe**? As described in [CN.2](#reachability-fence), if objects of such classes are
inadvertently accessed from multiple threads without proper synchronization, memory corruption and
JVM crashes might result.
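A minimal sketch of such defensive thread safety, in which every method touching the native state
synchronizes and checks a closed flag, so a racy caller gets an exception instead of a
use-after-free. The `nativeHandle` field and the `allocateNative()`/`freeNative()` methods are
hypothetical placeholders standing in for real JNI calls:

```java
public class NativeBuffer implements AutoCloseable {
    private long nativeHandle; // hypothetical pointer into native memory
    private boolean closed;

    public NativeBuffer() {
        nativeHandle = allocateNative();
    }

    // All access to the native state is synchronized, so a concurrent (buggy)
    // caller gets an IllegalStateException rather than a JVM crash.
    public synchronized long address() {
        if (closed) {
            throw new IllegalStateException("buffer is closed");
        }
        return nativeHandle;
    }

    // Concurrently idempotent: the actual free happens exactly once.
    @Override
    public synchronized void close() {
        if (!closed) {
            closed = true;
            freeNative(nativeHandle);
            nativeHandle = 0;
        }
    }

    private static long allocateNative() { return 0xCAFEL; } // JNI stand-in
    private static void freeNative(long handle) {}           // JNI stand-in

    public static void main(String[] args) {
        try (NativeBuffer buf = new NativeBuffer()) {
            System.out.println(Long.toHexString(buf.address())); // prints "cafe"
        }
    }
}
```

Synchronizing every accessor costs little for rarely-contended objects and turns a potential memory
corruption into an ordinary exception.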
This is why classes in the JDK such as [`java.util.zip.Deflater`](
http://hg.openjdk.java.net/jdk/jdk/file/a772e65727c5/src/java.base/share/classes/java/util/zip/Deflater.java)
use synchronization internally even though `Deflater` objects are not intended to be used
concurrently from multiple threads.

Note that making classes with some state in native memory thread-safe also implies that the **native
state should be safely published in constructors**. This means that either the native state should
be stored exclusively in `final` fields, or [`VarHandle.storeStoreFence()`](
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/invoke/VarHandle.html#storeStoreFence())
should be called in constructors after full initialization of the native state. If the project
targets JDK 8 and `VarHandle` is not available, the same effect could be achieved by wrapping
constructors' bodies in `synchronized (this) { ... }`.

<hr>

<a name="forbid-jdk-internally-synchronized"></a>
[#](#forbid-jdk-internally-synchronized) Bonus: is [forbidden-apis](
https://github.com/policeman-tools/forbidden-apis) configured for the project, and are
`java.lang.StringBuffer`, `java.util.Random` and `Math.random()` prohibited? `StringBuffer` and
`Random` are thread-safe, which is almost never useful in practice and only hurts performance. In
OpenJDK, `Math.random()` delegates to a global static `Random` instance. `StringBuilder` should be
used instead of `StringBuffer`, and `ThreadLocalRandom` or `SplittableRandom` instead of `Random`
(see also [T.2](#concurrent-test-random)).

## Reading List

 - [JLS] Java Language Specification, [Memory Model](
 https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4) and [`final` field
 semantics](https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.5).
 - [EJ] "Effective Java" by Joshua Bloch, Chapter 11. Concurrency.
 - [JCIP] "Java Concurrency in Practice" by Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer,
 David Holmes, and Doug Lea.
 - Posts by Aleksey Shipilёv:
    - [Safe Publication and Safe Initialization in Java](
    https://shipilev.net/blog/2014/safe-public-construction/)
    - [Java Memory Model Pragmatics](https://shipilev.net/blog/2014/jmm-pragmatics/)
    - [Close Encounters of The Java Memory Model Kind](
    https://shipilev.net/blog/2016/close-encounters-of-jmm-kind/)
 - [When to use parallel streams](http://gee.cs.oswego.edu/dl/html/StreamParallelGuidance.html),
 written by Doug Lea with the help of Brian Goetz, Paul Sandoz, Aleksey Shipilev, Heinz Kabutz,
 Joe Bowbeer, …
 - [SEI CERT Oracle Coding Standard for Java](
 https://wiki.sei.cmu.edu/confluence/display/java/SEI+CERT+Oracle+Coding+Standard+for+Java):
    - [Rule 08. Visibility and Atomicity (VNA)](
    https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=88487824)
    - [Rule 09. Locking (LCK)](
    https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=88487666)
    - [Rec. 18. Concurrency (CON)](
    https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=88487352)

## Concurrency checklists for other programming languages

 - C++: [Concurrency and parallelism](
 http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#cp-concurrency-and-parallelism) section
 in the C++ Core Guidelines.
 - Go: [Concurrency](https://golang.org/doc/effective_go.html#concurrency) section in *Effective
 Go*.

<hr>

## Authors

This checklist was originally published as a [post on Medium](
https://medium.com/@leventov/code-review-checklist-java-concurrency-49398c326154).

The following people contributed ideas and comments about this checklist before it was imported to
GitHub:

 - [Roman Leventov](https://github.com/leventov)
 - [Marko Topolnik](https://stackoverflow.com/users/1103872/marko-topolnik)
 - [Matko Medenjak](https://github.com/mmedenjak)
 - [Chris Vest](https://github.com/chrisvest)
 - [Simon Willnauer](https://github.com/s1monw)
 - [Ben Manes](https://github.com/ben-manes)
 - [Gleb Smirnov](https://github.com/gvsmirnov)
 - [Andrey Satarin](https://github.com/asatarin)
 - [Benedict Jin](https://github.com/asdf2014)
 - [Petr Janeček](https://stackoverflow.com/users/1273080/petr-jane%C4%8Dek)

The ideas for some items are taken from the "[Java Concurrency Gotchas](
https://www.slideshare.net/alexmiller/java-concurrency-gotchas-3666977)" presentation by [Alex
Miller](https://github.com/puredanger) and the [What is the most frequent concurrency issue you've
encountered in Java?](https://stackoverflow.com/questions/461896) question on StackOverflow (thanks
to Alex Miller, who created this question, and the contributors).

At the moment when this checklist was imported to GitHub, all the text was written by Roman
Leventov.

The checklist is not considered complete; comments and contributions are welcome!

## No Copyright

This checklist is public domain. By submitting a PR to this repository, contributors agree to
release their writing into the public domain.