# conottle

A Java concurrent API to throttle the maximum concurrency to process tasks for any given client while the total number
of clients being serviced in parallel can also be throttled

- **conottle** is short for **con**currency thr**ottle**.

## User story

As an API user, I want to execute tasks for any given client with a configurable maximum concurrency while the total
number of clients being serviced in parallel can also be limited.

## Prerequisite

Java 8 or better

## Get it...

[![Maven Central](https://img.shields.io/maven-central/v/io.github.q3769/conottle.svg?label=Maven%20Central)](https://search.maven.org/search?q=g:%22io.github.q3769%22%20AND%20a:%22conottle%22)

Install as a compile-scope dependency in Maven or other similar build tools.
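
For example, with Maven (the coordinates are those from the badge above; replace the version placeholder with the latest released version):

```xml
<dependency>
    <groupId>io.github.q3769</groupId>
    <artifactId>conottle</artifactId>
    <version>${latest.version}</version>
</dependency>
```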

## Use it...

### API

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public interface ClientTaskExecutor {
    /**
     * @param command  {@link Runnable} command to run asynchronously. All such commands under the same
     *                 {@code clientId} are run in parallel, albeit throttled at a maximum concurrency.
     * @param clientId A key representing a client whose tasks are throttled while running in parallel
     * @return {@link Future} holding the run status of the {@code command}
     */
    default Future<Void> execute(Runnable command, Object clientId) {
        return submit(Executors.callable(command, null), clientId);
    }

    /**
     * @param task     {@link Callable} task to run asynchronously. All such tasks under the same {@code clientId}
     *                 are run in parallel, albeit throttled at a maximum concurrency.
     * @param clientId A key representing a client whose tasks are throttled while running in parallel
     * @param <V>      Type of the task result
     * @return {@link Future} representing the result of the {@code task}
     */
    <V> Future<V> submit(Callable<V> task, Object clientId);
}
```

The interface uses `Future` as the return type mainly to reduce the conceptual weight of the API. The implementation
actually returns `CompletableFuture`, which can be used or cast as such if need be.
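
For illustration, a JDK-only sketch of that downcast pattern (here `CompletableFuture.supplyAsync` merely stands in for a future returned by a `submit`-style API; the cast back to `CompletableFuture` is the point):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

public class CastSketch {
    public static void main(String[] args) {
        // The caller sees only the plain Future type declared by the API...
        Future<Integer> future = CompletableFuture.supplyAsync(() -> 21);
        // ...but may cast to CompletableFuture to compose further stages.
        CompletableFuture<Integer> completable = (CompletableFuture<Integer>) future;
        System.out.println(completable.thenApply(v -> v * 2).join());
    }
}
```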

### Sample usage

```java
import java.util.concurrent.Executors;

class submit {
    Conottle conottle = Conottle.builder()
            .maxClientsInParallel(100)
            .maxParallelismPerClient(4)
            .workerExecutorService(Executors.newCachedThreadPool())
            .build();

    @Test
    void customized() {
        int clientCount = 2;
        int clientTaskCount = 10;
        List<Future<?>> futures = new ArrayList<>(); // class Task implements Callable
        int maxActiveExecutorCount = 0;
        for (int c = 0; c < clientCount; c++) {
            String clientId = "clientId-" + (c + 1);
            for (int t = 0; t < clientTaskCount; t++) {
                futures.add(this.conottle.submit(new Task(clientId + "-task-" + t, MIN_TASK_DURATION), clientId));
                maxActiveExecutorCount = Math.max(maxActiveExecutorCount, conottle.countActiveExecutors());
            }
        }
        assertEquals(clientCount, maxActiveExecutorCount, "should be 1:1 between a client and its executor");
        int taskTotal = futures.size();
        assertEquals(clientTaskCount * clientCount, taskTotal);
        int doneCount = 0;
        for (Future<?> future : futures) {
            if (future.isDone()) {
                doneCount++;
            }
        }
        assertTrue(doneCount < futures.size());
        info.log("not all of the {} tasks were done immediately", taskTotal);
        info.atDebug().log("{} out of {} were done", doneCount, futures.size());
        for (Future<?> future : futures) {
            await().until(future::isDone);
        }
        info.log("all of the {} tasks were done eventually", taskTotal);
        await().until(() -> this.conottle.countActiveExecutors() == 0);
        info.log("no active executor lingers when all tasks complete");
    }

    @AfterEach
    void close() {
        this.conottle.close();
    }
}
```

All builder parameters are optional:

- `maxParallelismPerClient` is the maximum concurrency at which one single client's tasks can execute. If omitted or set
to a non-positive integer, then the default is `Runtime.getRuntime().availableProcessors()`.
- `maxClientsInParallel` is the maximum number of clients that can be serviced in parallel. If omitted or set to a
non-positive integer, then the default is `Runtime.getRuntime().availableProcessors()`.
- `workerExecutorService` is the global async thread pool that services all requests for all clients. If omitted, the
  default is a fork-join thread pool whose capacity is `Runtime.getRuntime().availableProcessors()`.

This API has no technical/programmatic upper limit on the parameter values for the total parallelism or number of
clients to be supported. Once set, the only limit is on runtime concurrency at any given moment: before proceeding,
excess tasks or clients have to wait for active ones to run to completion - that is the throttling effect.
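
The throttling effect can be sketched with plain JDK primitives - a `Semaphore` standing in for a per-client concurrency limit. This is an illustrative analogy under assumed names (`MAX_PARALLELISM` mirrors `maxParallelismPerClient`), not conottle's actual implementation:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottleSketch {
    static final int MAX_PARALLELISM = 2; // analogous to maxParallelismPerClient
    static final AtomicInteger active = new AtomicInteger();
    static final AtomicInteger peak = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Semaphore permits = new Semaphore(MAX_PARALLELISM);
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 0; i < 10; i++) {
            pool.submit(() -> {
                permits.acquireUninterruptibly(); // excess tasks wait here - the throttling effect
                try {
                    peak.accumulateAndGet(active.incrementAndGet(), Math::max);
                    Thread.sleep(50); // simulate work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    active.decrementAndGet();
                    permits.release();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("peak concurrency: " + peak.get()); // never exceeds MAX_PARALLELISM
    }
}
```

All ten tasks are accepted immediately, yet the observed peak concurrency stays capped at the permit count.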