{"id":29798216,"url":"https://github.com/vienneraphael/batchling","last_synced_at":"2026-03-09T05:05:48.625Z","repository":{"id":304350831,"uuid":"1006090218","full_name":"vienneraphael/batchling","owner":"vienneraphael","description":"Save 50% off GenAI costs in two lines of code","archived":false,"fork":false,"pushed_at":"2026-03-03T06:51:46.000Z","size":5047,"stargazers_count":15,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-03-03T09:26:15.100Z","etag":null,"topics":["ai-inference","anthropic","anthropic-api","api","async","batch","batch-processing","batchling","doubleword","gemini","generative-ai","llm","llm-inference","mistral","openai","openai-api","python","python-library","request-batching","togetherai"],"latest_commit_sha":null,"homepage":"https://batchling.pages.dev","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/vienneraphael.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2025-06-21T13:22:37.000Z","updated_at":"2026-03-03T06:51:49.000Z","dependencies_parsed_at":"2025-08-17T16:05:17.639Z","dependency_job_id":"089b1473-9a67-44a5-b267-83509c565d09","html_url":"https://github.com/vienneraphael/batchling","commit_stats":null,"previous_names":["vienneraphael/batchling"],"tags_count":14,"template":false,"template_full_name":null,"purl":"pkg:github/vienneraphael/batchling","repository_url":"https://repos.ecosys
te.ms/api/v1/hosts/GitHub/repositories/vienneraphael%2Fbatchling","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vienneraphael%2Fbatchling/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vienneraphael%2Fbatchling/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vienneraphael%2Fbatchling/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/vienneraphael","download_url":"https://codeload.github.com/vienneraphael/batchling/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vienneraphael%2Fbatchling/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30283703,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-09T02:57:19.223Z","status":"ssl_error","status_checked_at":"2026-03-09T02:56:26.373Z","response_time":61,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-inference","anthropic","anthropic-api","api","async","batch","batch-processing","batchling","doubleword","gemini","generative-ai","llm","llm-inference","mistral","openai","openai-api","python","python-library","request-batching","togetherai"],"created_at":"2025-07-28T07:13:21.995Z","updated_at":"2026-03-09T05:05:48.606Z","avatar_url":"https://github.com/vienneraphael.png","language":"Python","readme":"\u003c!-- 
markdownlint-disable-file MD041 MD001 --\u003e\n\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"https://raw.githubusercontent.com/vienneraphael/batchling/main/docs/assets/images/batchling-compressed.webp\" alt=\"batchling logo\" width=\"500\" role=\"img\"\u003e\n\u003c/div\u003e\n\u003cp align=\"center\"\u003e\n    \u003cem\u003eSave 50% off GenAI costs in two lines of code\u003c/em\u003e\n\u003c/p\u003e\n\u003cp align=\"center\"\u003e\n\u003ca href=\"https://github.com/vienneraphael/batchling/actions/workflows/ci.yml\" target=\"_blank\"\u003e\u003cimg src=\"https://github.com/vienneraphael/batchling/actions/workflows/ci.yml/badge.svg\" alt=\"CI\"\u003e\u003c/a\u003e\n\u003ca href=\"https://pypi.org/project/batchling\" target=\"_blank\"\u003e\u003cimg src=\"https://img.shields.io/pypi/v/batchling?color=%2334D058\u0026label=pypi\" alt=\"PyPI version\"\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/vienneraphael/batchling\" target=\"_blank\"\u003e\u003cimg src=\"https://img.shields.io/badge/python-3.11%20%7C%203.12%20%7C%203.13%20%7C%203.14-3776AB?logo=python\u0026logoColor=white\" alt=\"Python versions\"\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/vienneraphael/batchling/blob/main/LICENSE\" target=\"_blank\"\u003e\u003cimg src=\"https://img.shields.io/badge/license-MIT-34D058\" alt=\"MIT license\"\u003e\u003c/a\u003e\n\u003ca href=\"https://discord.gg/8sdXXCXaHK\" target=\"_blank\"\u003e\u003cimg src=\"https://img.shields.io/badge/discord-join-5865F2?logo=discord\u0026logoColor=white\" alt=\"Join Discord\"\u003e\u003c/a\u003e\n\u003ca href=\"https://www.linkedin.com/in/raphael-vienne/\" target=\"_blank\"\u003e\u003cimg src=\"https://img.shields.io/badge/linkedin-connect-0A66C2?logo=linkedin\u0026logoColor=white\" alt=\"LinkedIn\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n---\n\nbatchling is a frictionless, batteries-included plugin to convert any GenAI async function or script into half-cost batch jobs.\n\nKey features:\n\n- **Simple**: a simple 
2-liner gets you 50% off your GenAI bill instantly.\n- **Transparent**: Your code remains the same, no added behaviors. Track sent batches easily.\n- **Global**: Integrates with most providers and all frameworks.\n- **Safe**: Get a complete breakdown of your cost savings before launching a single batch.\n- **Lightweight**: Very few dependencies.\n\n\u003cdetails markdown=\"1\"\u003e\n\n\u003csummary\u003e\u003cstrong\u003eWhat's the catch?\u003c/strong\u003e\u003c/summary\u003e\n\nThe batch is the catch!\n\nBatch APIs enable you to process large volumes of requests asynchronously (usually at 50% lower cost compared to real-time API calls). It's perfect for workloads that don't need immediate responses, such as:\n\n- Running mass offline evaluations\n- Classifying large datasets\n- Generating large-scale embeddings\n- Offline summarization\n- Synthetic data generation\n- Structured data extraction (e.g. OCR)\n- Audio transcriptions/translations at scale\n\nCompared to using standard endpoints directly, Batch APIs offer:\n\n- **Better cost efficiency**: usually a 50% discount compared to synchronous APIs\n- **Higher rate limits**: Substantially more headroom with separate rate limit pools\n- **Large-scale support**: Process thousands of requests per batch\n- **Flexible completion**: Best-effort completion within 24 hours with progress tracking; batches usually complete within an hour.\n\n\u003c/details\u003e\n\n\u003cbr\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/vienneraphael/batchling/main/docs/assets/images/batchling_cli.gif\" alt=\"batchling CLI demo\" width=\"900\" role=\"img\"\u003e\n\u003c/p\u003e\n\n## Installation\n\nbatchling is available on PyPI as `batchling`; install it with `pip`:\n\n```bash\npip install batchling\n```\n\n## Get Started\n\nbatchling integrates smoothly with any async function doing GenAI calls, or with a whole async script that you'd run with `asyncio`.\n\nLet's suppose we have 
an existing script `main.py` that uses the OpenAI client to make two parallel calls using `asyncio.gather`:\n\n### Using the async context manager (recommended)\n\nTo selectively batchify certain pieces of your code execution, you can rely on the `batchify` function, which exposes an async context manager.\n\n```py title=\"main.py\"\nimport asyncio\nfrom batchling import batchify\nfrom openai import AsyncOpenAI\n\nasync def generate():\n    client = AsyncOpenAI()\n    questions = [\n        \"Who is the best French painter? Answer in one short sentence.\",\n        \"What is the capital of France?\",\n    ]\n    tasks = [\n        client.responses.create(input=question, model=\"gpt-4o-mini\") for question in questions\n    ]\n    async with batchify():  # runs your tasks as batches, saving 50%\n        responses = await asyncio.gather(*tasks)\n    for response in responses:\n        content = response.output[-1].content  # skip reasoning output, get straight to the answer\n        print(content[0].text)\n\nif __name__ == \"__main__\":\n    asyncio.run(generate())\n\n```\n\nThen, just run `main.py` as you normally would:\n\n```bash\npython main.py\n```\n\nOutput:\n\n```text\nThe best French painter is often considered to be Claude Monet, a leading figure in the Impressionist movement.\nThe capital of France is Paris.\n```\n\n### Using the CLI wrapper\n\nTo switch this async execution to batched inference, run your script with the `batchling` CLI, targeting the main function run by `asyncio`:\n\n```py title=\"main.py\"\nimport asyncio\nfrom openai import AsyncOpenAI\n\nasync def generate():\n    client = AsyncOpenAI()\n    questions = [\n        \"Who is the best French painter? 
Answer in one short sentence.\",\n        \"What is the capital of France?\",\n    ]\n    tasks = [\n        client.responses.create(input=question, model=\"gpt-4o-mini\") for question in questions\n    ]\n    responses = await asyncio.gather(*tasks)\n    for response in responses:\n        content = response.output[-1].content  # skip reasoning output, get straight to the answer\n        print(content[0].text)\n\n```\n\nRun your function in batch mode:\n\n```bash\nbatchling main.py:generate\n```\n\nOutput:\n\n```text\nThe best French painter is often considered to be Claude Monet, a leading figure in the Impressionist movement.\nThe capital of France is Paris.\n```\n\n## Supported providers\n\n| Name        | Batch API Docs URL                                                       |\n|-------------|--------------------------------------------------------------------------|\n| Anthropic   | \u003chttps://docs.anthropic.com/en/docs/build-with-claude/batch-processing\u003e  |\n| Doubleword  | \u003chttps://docs.doubleword.ai/batches/getting-started-with-batched-api\u003e    |\n| Gemini      | \u003chttps://ai.google.dev/gemini-api/docs/batch-mode\u003e                       |\n| Groq        | \u003chttps://console.groq.com/docs/batch\u003e                                    |\n| Mistral     | \u003chttps://docs.mistral.ai/capabilities/batch/\u003e                            |\n| OpenAI      | \u003chttps://platform.openai.com/docs/guides/batch\u003e                          |\n| Together    | \u003chttps://docs.together.ai/docs/batch-inference\u003e                          |\n| XAI         | \u003chttps://docs.x.ai/developers/advanced-api-usage/batch-api\u003e              |\n\n## Next Steps\n\nTo try `batchling` for yourself, follow this [quickstart guide](https://batchling.pages.dev/quickstart/).\n\nRead the [docs](https://batchling.pages.dev/batchify/) to learn more about how you can save on your GenAI expenses with `batchling`.\n\nIf you have any questions, file 
an [issue](https://github.com/vienneraphael/batchling/issues) on GitHub.\n\n## Connect\n\n- Community (Discord): \u003chttps://discord.gg/8sdXXCXaHK\u003e\n- LinkedIn: \u003chttps://www.linkedin.com/in/raphael-vienne/\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvienneraphael%2Fbatchling","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fvienneraphael%2Fbatchling","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvienneraphael%2Fbatchling/lists"}