https://github.com/mccutchen/speculatively
Package speculatively provides a simple mechanism to re-execute a task in parallel only after some initial timeout has elapsed.
- Host: GitHub
- URL: https://github.com/mccutchen/speculatively
- Owner: mccutchen
- License: MIT
- Created: 2018-05-15T22:20:23.000Z (about 7 years ago)
- Default Branch: main
- Last Pushed: 2025-02-26T04:10:49.000Z (3 months ago)
- Last Synced: 2025-02-26T05:20:39.559Z (3 months ago)
- Topics: golang, speculative-execution
- Language: Go
- Homepage: https://pkg.go.dev/github.com/mccutchen/speculatively
- Size: 18.6 KB
- Stars: 10
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# speculatively
[Documentation](https://pkg.go.dev/github.com/mccutchen/speculatively)
[Tests](https://github.com/mccutchen/speculatively/actions/workflows/test.yaml)
[Coverage](https://codecov.io/gh/mccutchen/speculatively)
[Go Report Card](https://goreportcard.com/report/github.com/mccutchen/speculatively)

Package `speculatively` provides a simple mechanism to speculatively execute a
task in parallel only after some initial timeout has elapsed:

```go
// An example task that will wait for a random amount of time before returning
task := func(ctx context.Context) (string, error) {
	delay := time.Duration(float64(250*time.Millisecond) * rand.Float64())
	select {
	case <-time.After(delay):
		return "success", nil
	case <-ctx.Done():
		return "timeout", ctx.Err()
	}
}

ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
defer cancel()

// If task doesn't return within 20ms, it will be executed again in parallel
result, err := speculatively.Do(ctx, 20*time.Millisecond, task)
```

This was inspired by the ["Defeat your 99th percentile with speculative task"
blog post][1], which describes it nicely:

> The inspiration came from BigData world. In Spark when task execution runs
> suspiciously long the application master starts the same task speculatively
> on a different executor but it lets the long running tasks to continue. The
> solution looked elegant:
>
> * Service response time limit is 50ms.
>
> * If the first attempt doesn’t finish within 25ms start a new one, but
> keep the first thread running.
>
> * Wait for either thread to finish and take result from the first one
> ready.

The speculative tasks implemented here are similar to "hedged requests" as
described in [The Tail at Scale][2] and implemented in the Query example
function in [Go Concurrency Patterns: Timing out, moving on][3], but they a)
have no knowledge of different replicas and b) wait for a caller-controlled
timeout before launching additional tasks.

[1]: https://archive.is/QDqM3
[2]: http://www-inst.eecs.berkeley.edu/~cs252/sp17/papers/TheTailAtScale.pdf
[3]: https://blog.golang.org/go-concurrency-patterns-timing-out-and
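
For intuition, here is a minimal sketch of the patience-then-hedge pattern described above. It is illustrative only, not the package's actual implementation: the `hedge` helper, its timings, and the example task are assumptions made for this sketch, while the real API is `speculatively.Do`.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// hedge runs task once and starts a second copy only if the first has not
// finished within the patience window. The first result to arrive wins.
// (Illustrative sketch only; not the package's implementation.)
func hedge[T any](ctx context.Context, patience time.Duration, task func(context.Context) (T, error)) (T, error) {
	type result struct {
		val T
		err error
	}
	// Buffered so a losing attempt can still send its result and exit.
	results := make(chan result, 2)

	run := func() {
		val, err := task(ctx)
		results <- result{val, err}
	}

	go run() // first attempt starts immediately

	select {
	case r := <-results:
		return r.val, r.err // finished within the patience window
	case <-ctx.Done():
		var zero T
		return zero, ctx.Err()
	case <-time.After(patience):
		go run() // first attempt is slow: hedge with a second, parallel attempt
	}

	// Take whichever attempt finishes first.
	select {
	case r := <-results:
		return r.val, r.err
	case <-ctx.Done():
		var zero T
		return zero, ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	task := func(ctx context.Context) (string, error) {
		select {
		case <-time.After(30 * time.Millisecond):
			return "success", nil
		case <-ctx.Done():
			return "timeout", ctx.Err()
		}
	}

	result, err := hedge(ctx, 20*time.Millisecond, task)
	fmt.Println(result, err)
}
```

The buffered results channel lets the losing attempt deliver its result and exit without blocking, which is how the long-running first attempt can be "kept running" rather than cancelled when the hedge is launched.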