{"id":13813482,"url":"https://github.com/destel/rill","last_synced_at":"2025-04-14T20:44:36.654Z","repository":{"id":228442616,"uuid":"752054943","full_name":"destel/rill","owner":"destel","description":"Go toolkit for clean, composable, channel-based concurrency","archived":false,"fork":false,"pushed_at":"2025-02-02T22:07:50.000Z","size":222,"stargazers_count":1590,"open_issues_count":1,"forks_count":20,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-02-16T04:24:35.331Z","etag":null,"topics":["channels","concurrency","functional-programming","generics","go","golang","goroutines","pipeline","streaming"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/destel.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-02-02T23:01:19.000Z","updated_at":"2025-02-16T02:34:32.000Z","dependencies_parsed_at":"2025-03-10T12:49:56.294Z","dependency_job_id":null,"html_url":"https://github.com/destel/rill","commit_stats":null,"previous_names":["destel/rill"],"tags_count":12,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/destel%2Frill","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/destel%2Frill/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/destel%2Frill/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/destel%2Frill/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/destel","download_u
rl":"https://codeload.github.com/destel/rill/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248960273,"owners_count":21189981,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["channels","concurrency","functional-programming","generics","go","golang","goroutines","pipeline","streaming"],"created_at":"2024-08-04T04:01:19.148Z","updated_at":"2025-04-14T20:44:36.632Z","avatar_url":"https://github.com/destel.png","language":"Go","readme":"# Rill [![GoDoc](https://pkg.go.dev/badge/github.com/destel/rill)](https://pkg.go.dev/github.com/destel/rill) [![Go Report Card](https://goreportcard.com/badge/github.com/destel/rill)](https://goreportcard.com/report/github.com/destel/rill) [![codecov](https://codecov.io/gh/destel/rill/graph/badge.svg?token=252K8OQ7E1)](https://codecov.io/gh/destel/rill) [![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go) \n\nRill is a toolkit that brings composable concurrency to Go, making it easier to build concurrent programs from simple, reusable parts.\nIt reduces boilerplate while preserving Go's natural channel-based model.\n\n```bash\ngo get -u github.com/destel/rill\n```\n\n\n## Goals\n\n- **Make common tasks easier.**  \nRill provides a cleaner and safer way of solving common concurrency problems, such as parallel job execution or\nreal-time event processing.\nIt removes boilerplate and abstracts away the complexities of goroutine, channel, and error management.\nAt the same time, developers retain full control over the concurrency 
level of all operations.\n\n- **Make concurrent code composable and clean.**  \nMost functions in the library take Go channels as inputs and return new, transformed channels as outputs.\nThis allows them to be chained in various ways to build reusable pipelines from simpler parts,\nsimilar to Unix pipes.\nAs a result, concurrent programs become clear sequences of reusable operations.\n\n- **Centralize error handling.**  \nErrors are automatically propagated through a pipeline and can be handled in a single place at the end.\nFor more complex scenarios, Rill also provides tools to intercept and handle errors at any point in a pipeline.\n\n- **Simplify stream processing.**    \nThanks to Go channels, built-in functions can handle potentially infinite streams, processing items as they arrive.\nThis makes Rill a convenient tool for real-time processing or handling large datasets that don't fit in memory.\n\n- **Provide solutions for advanced tasks.**  \nBeyond basic operations, the library includes ready-to-use functions for batching, ordered fan-in, map-reduce, \nstream splitting, merging, and more. Pipelines, while usually linear, can have any cycle-free topology (DAG).\n\n- **Support custom extensions.**  \nSince Rill operates on standard Go channels, it's easy to write custom functions compatible with the library.\n\n- **Keep it lightweight.**  \nRill has a small, type-safe, channel-based API, and zero dependencies, making it straightforward to integrate into existing projects.\nIt's also lightweight in terms of resource usage, ensuring that the number of memory allocations and goroutines\ndoes not grow with the input size.\n\n\n## Quick Start\nLet's look at a practical example: fetch users from an API, activate them, and save the changes back. 
\nIt shows how to control concurrency at each step while keeping the code clean and manageable.\n**ForEach** returns on the first error, and context cancellation via defer stops all remaining fetches.\n\n\n[Try it](https://pkg.go.dev/github.com/destel/rill#example-package)\n```go\nfunc main() {\n\tctx, cancel := context.WithCancel(context.Background())\n\tdefer cancel()\n\n\t// Convert a slice of user IDs into a channel\n\tids := rill.FromSlice([]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, nil)\n\n\t// Read users from the API.\n\t// Concurrency = 3\n\tusers := rill.Map(ids, 3, func(id int) (*mockapi.User, error) {\n\t\treturn mockapi.GetUser(ctx, id)\n\t})\n\n\t// Activate users.\n\t// Concurrency = 2\n\terr := rill.ForEach(users, 2, func(u *mockapi.User) error {\n\t\tif u.IsActive {\n\t\t\tfmt.Printf(\"User %d is already active\\n\", u.ID)\n\t\t\treturn nil\n\t\t}\n\n\t\tu.IsActive = true\n\t\terr := mockapi.SaveUser(ctx, u)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tfmt.Printf(\"User saved: %+v\\n\", u)\n\t\treturn nil\n\t})\n\n\t// Handle errors\n\tfmt.Println(\"Error:\", err)\n}\n```\n\n\n## Batching\nProcessing items in batches rather than individually can significantly improve performance in many scenarios, \nparticularly when working with external services or databases. Batching reduces the number of queries and API calls, \nincreases throughput, and typically lowers costs.\n\nTo demonstrate batching, let's improve the previous example by using the API's bulk fetching capability. \nThe **Batch** function transforms a stream of individual IDs into a stream of slices. 
This enables the use of the `GetUsers` API \nto fetch multiple users in a single call, instead of making individual `GetUser` calls.\n\n\n\n[Try it](https://pkg.go.dev/github.com/destel/rill#example-package-Batching)\n```go\nfunc main() {\n\tctx, cancel := context.WithCancel(context.Background())\n\tdefer cancel()\n\n\t// Convert a slice of user IDs into a channel\n\tids := rill.FromSlice([]int{\n\t\t1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,\n\t\t21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,\n\t}, nil)\n\n\t// Group IDs into batches of 5\n\tidBatches := rill.Batch(ids, 5, -1)\n\n\t// Bulk fetch users from the API\n\t// Concurrency = 3\n\tuserBatches := rill.Map(idBatches, 3, func(ids []int) ([]*mockapi.User, error) {\n\t\treturn mockapi.GetUsers(ctx, ids)\n\t})\n\n\t// Transform the stream of batches back into a flat stream of users\n\tusers := rill.Unbatch(userBatches)\n\n\t// Activate users.\n\t// Concurrency = 2\n\terr := rill.ForEach(users, 2, func(u *mockapi.User) error {\n\t\tif u.IsActive {\n\t\t\tfmt.Printf(\"User %d is already active\\n\", u.ID)\n\t\t\treturn nil\n\t\t}\n\n\t\tu.IsActive = true\n\t\terr := mockapi.SaveUser(ctx, u)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tfmt.Printf(\"User saved: %+v\\n\", u)\n\t\treturn nil\n\t})\n\n\t// Handle errors\n\tfmt.Println(\"Error:\", err)\n}\n```\n\n\n## Real-Time Batching\nReal-world applications often need to handle events or data that arrives at unpredictable rates. While batching is still \ndesirable for efficiency, waiting to collect a full batch might introduce unacceptable delays when \nthe input stream becomes slow or sparse.\n\nRill solves this with timeout-based batching: batches are emitted either when they're full or after a specified timeout, \nwhichever comes first. 
This approach ensures optimal batch sizes during high load while maintaining responsiveness during quiet periods.\n\nConsider an application that needs to update users' _last_active_at_ timestamps in a database. The function responsible \nfor this, `UpdateUserTimestamp`, can be called concurrently, at unpredictable rates, and from different parts of the application.\nPerforming all these updates individually may create too many concurrent queries, potentially overwhelming the database.\n\nIn the example below, the updates are queued into the `userIDsToUpdate` channel and then grouped into batches of up to 5 items, \nwith each batch sent to the database as a single query.\nThe **Batch** function is used with a timeout of 100ms, ensuring zero latency during high load, \nand up to 100ms latency with smaller batches during quiet periods.\n\n[Try it](https://pkg.go.dev/github.com/destel/rill#example-package-BatchingRealTime)\n```go\nfunc main() {\n\t// Start the background worker that processes the updates\n\tgo updateUserTimestampWorker()\n\n\t// Do some updates. 
They'll be automatically grouped into\n\t// batches: [1,2,3,4,5], [6,7], [8]\n\tUpdateUserTimestamp(1)\n\tUpdateUserTimestamp(2)\n\tUpdateUserTimestamp(3)\n\tUpdateUserTimestamp(4)\n\tUpdateUserTimestamp(5)\n\tUpdateUserTimestamp(6)\n\tUpdateUserTimestamp(7)\n\ttime.Sleep(500 * time.Millisecond) // simulate sparse updates\n\tUpdateUserTimestamp(8)\n}\n\n// This is the queue of user IDs to update.\nvar userIDsToUpdate = make(chan int)\n\n// UpdateUserTimestamp is the public API for updating the last_active_at column in the users table\nfunc UpdateUserTimestamp(userID int) {\n\tuserIDsToUpdate \u003c- userID\n}\n\n// This is a background worker that sends queued updates to the database in batches.\n// For simplicity, there is no retry logic, error handling, or synchronization\nfunc updateUserTimestampWorker() {\n\n\tids := rill.FromChan(userIDsToUpdate, nil)\n\n\tidBatches := rill.Batch(ids, 5, 100*time.Millisecond)\n\n\t_ = rill.ForEach(idBatches, 1, func(batch []int) error {\n\t\tfmt.Printf(\"Executed: UPDATE users SET last_active_at = NOW() WHERE id IN (%v)\\n\", batch)\n\t\treturn nil\n\t})\n}\n```\n\n\n\n## Errors, Termination and Contexts\nError handling can be non-trivial in concurrent applications. Rill simplifies this by providing a structured approach to the problem.\nPipelines typically consist of a sequence of non-blocking channel transformations, followed by a blocking stage that returns a final result and an error.\nThe general rule is: any error occurring anywhere in a pipeline is propagated down to the final stage,\nwhere it's caught by some blocking function and returned to the caller.\n\nRill provides a wide selection of blocking functions. 
Here are some commonly used ones:\n\n- **ForEach:** Concurrently applies a user function to each item in the stream.\n  [Example](https://pkg.go.dev/github.com/destel/rill#example-ForEach)\n- **ToSlice:** Collects all stream items into a slice.\n  [Example](https://pkg.go.dev/github.com/destel/rill#example-ToSlice)\n- **First:** Returns the first item or error encountered in the stream and discards the rest.\n  [Example](https://pkg.go.dev/github.com/destel/rill#example-First)\n- **Reduce:** Concurrently reduces the stream to a single value, using a user-provided reducer function.\n  [Example](https://pkg.go.dev/github.com/destel/rill#example-Reduce)\n- **All:** Concurrently checks if all items in the stream satisfy a user-provided condition.\n  [Example](https://pkg.go.dev/github.com/destel/rill#example-All)\n- **Err:** Returns the first error encountered in the stream or nil, and discards the rest of the stream.\n  [Example](https://pkg.go.dev/github.com/destel/rill#example-Err) \n\n\nAll blocking functions share a common behavior. In case of an early termination (before reaching the end of the input stream or in case of an error),\nsuch functions initiate background draining of the remaining items. This is done to prevent goroutine leaks by ensuring that\nall goroutines feeding the stream are allowed to complete.\n\nRill is context-agnostic, meaning that it does not enforce any specific context usage.\nHowever, it's recommended to make user-defined pipeline stages context-aware.\nThis is especially important for the initial stage, as it makes it possible to stop feeding the pipeline with new items after the context is canceled.\nIn practice, the first stage is often naturally context-aware through Go's standard APIs for databases, HTTP clients, and other external sources. \n\nIn the example below, the `CheckAllUsersExist` function uses several concurrent workers to check if all users  \nfrom the given list exist. 
When an error occurs (like a non-existent user), the function returns that error  \nand cancels the context, which in turn stops all remaining user fetches.\n\n[Try it](https://pkg.go.dev/github.com/destel/rill#example-package-Context)\n```go\nfunc main() {\n\tctx := context.Background()\n\n\t// ID 999 doesn't exist, so fetching will stop after hitting it.\n\terr := CheckAllUsersExist(ctx, 3, []int{1, 2, 3, 4, 5, 999, 7, 8, 9, 10, 11, 12, 13, 14, 15})\n\tfmt.Printf(\"Check result: %v\\n\", err)\n}\n\n// CheckAllUsersExist uses several concurrent workers to check if all users with given IDs exist.\nfunc CheckAllUsersExist(ctx context.Context, concurrency int, ids []int) error {\n\t// Create new context that will be canceled when this function returns\n\tctx, cancel := context.WithCancel(ctx)\n\tdefer cancel()\n\n\t// Convert the slice into a stream\n\tidsStream := rill.FromSlice(ids, nil)\n\n\t// Fetch users concurrently.\n\tusers := rill.Map(idsStream, concurrency, func(id int) (*mockapi.User, error) {\n\t\tu, err := mockapi.GetUser(ctx, id)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to fetch user %d: %w\", id, err)\n\t\t}\n\n\t\tfmt.Printf(\"Fetched user %d\\n\", id)\n\t\treturn u, nil\n\t})\n\n\t// Return the first error (if any) and cancel remaining fetches via context\n\treturn rill.Err(users)\n}\n```\n\nIn the example above only the second stage (`mockapi.GetUser`) of the pipeline is context-aware.\n**FromSlice** works well here since the input is small, iteration is fast and context cancellation prevents expensive API calls regardless.\nThe following code demonstrates how to replace **FromSlice** with **Generate** when full context awareness becomes important.\n\n```go\nidsStream := rill.Generate(func(send func(int), sendErr func(error)) {\n\tfor _, id := range ids {\n\t\tif ctx.Err() != nil {\n\t\t\treturn\n\t\t}\n\t\tsend(id)\n\t}\n})\n```\n\n\n\n## Order Preservation (Ordered Fan-In)\nConcurrent processing can boost performance, but since 
tasks take different amounts of time to complete,\nthe results' order usually differs from the input order. While out-of-order results are acceptable in many scenarios, \nsome cases require preserving the original order. This seemingly simple problem is deceptively challenging to solve correctly.\n\nTo address this, Rill provides ordered versions of its core functions, such as **OrderedMap** or **OrderedFilter**.\nThese functions perform additional synchronization under the hood to ensure that if value **x** precedes value **y** in the input stream,\nthen **f(x)** will precede **f(y)** in the output.\n\nHere's a practical example: finding the first occurrence of a specific string among 1000 large files hosted online.\nDownloading all files at once would consume too much memory, processing them sequentially would be too slow,\nand traditional concurrency patterns do not preserve the order of files, making it challenging to find the first match.\n\nThe combination of **OrderedFilter** and **First** functions solves this elegantly,\nwhile downloading and keeping in memory at most 5 files at a time. 
**First** returns on the first match,\nwhich triggers context cancellation via defer, stopping URL generation and file downloads.\n\n[Try it](https://pkg.go.dev/github.com/destel/rill#example-package-Ordering)\n\n```go\nfunc main() {\n\tctx, cancel := context.WithCancel(context.Background())\n\tdefer cancel()\n\n\t// The string to search for in the downloaded files\n\tneedle := []byte(\"26\")\n\n\t// Generate a stream of URLs from https://example.com/file-0.txt \n\t// to https://example.com/file-999.txt\n\t// Stop generating URLs if the context is canceled\n\turls := rill.Generate(func(send func(string), sendErr func(error)) {\n\t\tfor i := 0; i \u003c 1000 \u0026\u0026 ctx.Err() == nil; i++ {\n\t\t\tsend(fmt.Sprintf(\"https://example.com/file-%d.txt\", i))\n\t\t}\n\t})\n\n\t// Download and process the files\n\t// At most 5 files are downloaded and held in memory at the same time\n\tmatchedUrls := rill.OrderedFilter(urls, 5, func(url string) (bool, error) {\n\t\tfmt.Println(\"Downloading:\", url)\n\n\t\tcontent, err := mockapi.DownloadFile(ctx, url)\n\t\tif err != nil {\n\t\t\treturn false, err\n\t\t}\n\n\t\t// keep only URLs of files that contain the needle\n\t\treturn bytes.Contains(content, needle), nil\n\t})\n\n\t// Find the first matched URL\n\tfirstMatchedUrl, found, err := rill.First(matchedUrls)\n\tif err != nil {\n\t\tfmt.Println(\"Error:\", err)\n\t\treturn\n\t}\n\n\t// Print the result\n\tif found {\n\t\tfmt.Println(\"Found in:\", firstMatchedUrl)\n\t} else {\n\t\tfmt.Println(\"Not found\")\n\t}\n}\n```\n\n\n## Stream Merging and FlatMap\nRill comes with the **Merge** function that combines multiple streams into a single one. Another, often overlooked,\nfunction that can combine streams is **FlatMap**. It's a powerful tool that transforms each input item into its own stream,\nand then merges all these streams together. 
\n\nIn the example below, **FlatMap** transforms each department into a stream of users, then merges these streams into one.\nLike other Rill functions, **FlatMap** gives full control over concurrency. \nIn this particular case the concurrency level is 3, meaning that users are fetched from at most 3 departments at the same time. \n\nAdditionally, this example demonstrates how to write a reusable streaming wrapper over paginated API calls - the `StreamUsers` function.\nThis wrapper can be useful both on its own and as part of larger pipelines.\n\n[Try it](https://pkg.go.dev/github.com/destel/rill#example-package-FlatMap)\n```go\nfunc main() {\n\tctx, cancel := context.WithCancel(context.Background())\n\tdefer cancel()\n\n\t// Start with a stream of department names\n\tdepartments := rill.FromSlice([]string{\"IT\", \"Finance\", \"Marketing\", \"Support\", \"Engineering\"}, nil)\n\n\t// Stream users from all departments concurrently.\n\t// At most 3 departments at the same time.\n\tusers := rill.FlatMap(departments, 3, func(department string) \u003c-chan rill.Try[*mockapi.User] {\n\t\treturn StreamUsers(ctx, \u0026mockapi.UserQuery{Department: department})\n\t})\n\n\t// Print the users from the combined stream\n\terr := rill.ForEach(users, 1, func(user *mockapi.User) error {\n\t\tfmt.Printf(\"%+v\\n\", user)\n\t\treturn nil\n\t})\n\tfmt.Println(\"Error:\", err)\n}\n\n// StreamUsers is a reusable streaming wrapper around the mockapi.ListUsers function.\n// It iterates through all listing pages and uses [Generate] to simplify sending users and errors to the resulting stream.\n// This function is useful both on its own and as part of larger pipelines.\nfunc StreamUsers(ctx context.Context, query *mockapi.UserQuery) \u003c-chan rill.Try[*mockapi.User] {\n\treturn rill.Generate(func(send func(*mockapi.User), sendErr func(error)) {\n\t\tvar currentQuery mockapi.UserQuery\n\t\tif query != nil {\n\t\t\tcurrentQuery = *query\n\t\t}\n\n\t\tfor page := 0; ; page++ 
{\n\t\t\tcurrentQuery.Page = page\n\n\t\t\tusers, err := mockapi.ListUsers(ctx, \u0026currentQuery)\n\t\t\tif err != nil {\n\t\t\t\tsendErr(err)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tif len(users) == 0 {\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\tfor _, user := range users {\n\t\t\t\tsend(user)\n\t\t\t}\n\t\t}\n\t})\n}\n```\n\n**Note:** Starting from Go 1.24, thanks to generic type aliases, the return type of the `StreamUsers` function \ncan optionally be simplified to `rill.Stream[*mockapi.User]`.\n\n```go\nfunc StreamUsers(ctx context.Context, query *mockapi.UserQuery) rill.Stream[*mockapi.User] {\n    ...\n}\n```\n\n\n## Go 1.23 Iterators\nStarting from Go 1.23, the language added the *range-over-function* feature, allowing users to define custom iterators \nfor use in for-range loops. This feature enables Rill to integrate seamlessly with existing iterator-based functions\nin the standard library and third-party packages.\n\nRill provides the **FromSeq** and **FromSeq2** functions to convert an iterator into a stream, \nand the **ToSeq2** function to convert a stream back into an iterator.\n\n**ToSeq2** can be a good alternative to **ForEach** when concurrency is not needed. 
\nIt gives more control and performs all necessary cleanup and draining, even if the loop is terminated early using *break* or *return*.\n\n[Try it](https://pkg.go.dev/github.com/destel/rill#example-ToSeq2)\n\n```go\nfunc main() {\n\t// Convert a slice of numbers into a stream\n\tnumbers := rill.FromSlice([]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, nil)\n\n\t// Transform each number\n\t// Concurrency = 3\n\tsquares := rill.Map(numbers, 3, func(x int) (int, error) {\n\t\treturn square(x), nil\n\t})\n\n\t// Convert the stream into an iterator and use for-range to print the results\n\tfor val, err := range rill.ToSeq2(squares) {\n\t\tif err != nil {\n\t\t\tfmt.Println(\"Error:\", err)\n\t\t\tbreak // cleanup is done regardless of early exit\n\t\t}\n\t\tfmt.Printf(\"%+v\\n\", val)\n\t}\n}\n```\n\n\n## Testing Strategy\nRill has a test coverage of over 95%, with testing focused on:\n- **Correctness**: ensuring that functions produce accurate results at different levels of concurrency\n- **Concurrency**: confirming that the correct number of goroutines is spawned and utilized\n- **Ordering**: ensuring that ordered versions of functions preserve the order, while basic versions do not\n\n\n## Blog Posts\nTechnical articles exploring different aspects and applications of Rill's concurrency patterns:\n- [Real-Time Batching in Go](https://destel.dev/blog/real-time-batching-in-go)\n- [Fast Listing of Files from S3, GCS and Other Object Storages](https://destel.dev/blog/fast-listing-of-files-from-s3-gcs-and-other-object-storages)\n\n\n## Contributing\nThank you for your interest in improving Rill! Before submitting your pull request, please consider:\n\n- Focus on generic, widely applicable solutions\n- Consider use cases. 
Try to avoid highly specialized features that could be separate packages\n- Keep the API surface clean and focused\n- Try to avoid adding functions that can be easily misused\n- Avoid external dependencies \n- Add tests and documentation\n- For major changes, prefer opening an issue first to discuss the approach\n\nFor bug reports and feature requests, please include a clear description and minimal example when possible.\n","funding_links":[],"categories":["Goroutines","Go"],"sub_categories":["Search and Analytic Databases","检索及分析资料库"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdestel%2Frill","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdestel%2Frill","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdestel%2Frill/lists"}