{"id":45709228,"url":"https://github.com/swival/swival","last_synced_at":"2026-04-08T00:03:36.866Z","repository":{"id":340464662,"uuid":"1166161774","full_name":"Swival/swival","owner":"Swival","description":"A small, powerful, open-source CLI coding agent that works with open models.","archived":false,"fork":false,"pushed_at":"2026-03-12T14:03:21.000Z","size":1656,"stargazers_count":24,"open_issues_count":1,"forks_count":3,"subscribers_count":2,"default_branch":"master","last_synced_at":"2026-03-12T17:08:09.372Z","etag":null,"topics":["agent","ai","cli","code","coding","eval","huggingface","lmstudio"],"latest_commit_sha":null,"homepage":"https://swival.dev","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Swival.png","metadata":{"files":{"readme":"README.md","changelog":"ChangeLog.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-02-24T23:57:47.000Z","updated_at":"2026-03-12T14:03:26.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/Swival/swival","commit_stats":null,"previous_names":["swival/swival"],"tags_count":22,"template":false,"template_full_name":null,"purl":"pkg:github/Swival/swival","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Swival%2Fswival","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Swival%2Fswival/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Swival%2Fswival/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Swival%2Fswival/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Swival","download_url":"https://codeload.github.com/Swival/swival/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Swival%2Fswival/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30469868,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-13T11:00:43.441Z","status":"ssl_error","status_checked_at":"2026-03-13T11:00:23.173Z","response_time":60,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent","ai","cli","code","coding","eval","huggingface","lmstudio"],"created_at":"2026-02-25T02:06:15.368Z","updated_at":"2026-04-08T00:03:36.850Z","avatar_url":"https://github.com/Swival.png","language":"Python","readme":"![Swival Logo](.media/logo.png)\n\n# Swival\n\nA coding agent for any model. [Documentation](https://swival.dev/)\n\nSwival is a CLI coding agent built to be practical, reliable, and easy to use.\nIt works with frontier models, but its main goal is to be as reliable as\npossible with smaller models, including local ones. It is designed from the\nground up to handle tight context windows and limited resources without falling\napart.\n\nIt connects to [LM Studio](https://lmstudio.ai/),\n[HuggingFace Inference API](https://huggingface.co/inference-api),\n[OpenRouter](https://openrouter.ai/),\n[Google Gemini](https://ai.google.dev/),\n[ChatGPT Plus/Pro](https://chatgpt.com/), any OpenAI-compatible server (ollama,\nllama.cpp, mlx_lm.server, vLLM, etc.), or any external command\n(`codex exec`, custom wrappers, etc.), sends your task, and runs an autonomous tool loop until\nit produces an answer. With LM Studio it auto-discovers your\nloaded model, so there's nothing to configure. Pure Python, no framework.\n\n## Quickstart\n\nPick the provider that matches how you want to run models:\n\n| Provider         | Auth                                                | Required flags                                    | First command                                                                        |\n| ---------------- | --------------------------------------------------- | ------------------------------------------------- | ------------------------------------------------------------------------------------ |\n| LM Studio        | none                                                | none                                              | `swival \"Refactor src/api.py\"`                                                       |\n| HuggingFace      | `HF_TOKEN` or `--api-key`                           | `--provider huggingface --model ORG/MODEL`        | `swival --provider huggingface --model zai-org/GLM-5 \"task\"`                         |\n| OpenRouter       | `OPENROUTER_API_KEY` or `--api-key`                 | `--provider openrouter --model MODEL`             | `swival --provider openrouter --model z-ai/glm-5 \"task\"`                             |\n| Google Gemini    | `GEMINI_API_KEY`, `OPENAI_API_KEY`, or `--api-key`  | `--provider google --model MODEL`                 | `swival --provider google --model gemini-2.5-flash \"task\"`                           |\n| ChatGPT Plus/Pro | browser auth on first run or `CHATGPT_API_KEY`      | `--provider chatgpt --model MODEL`                | `swival --provider chatgpt --model gpt-5.4 \"task\"`                                   |\n| Generic          | optional `OPENAI_API_KEY`                           | `--provider generic --base-url URL --model MODEL` | `swival --provider generic --base-url http://127.0.0.1:8080 --model my-model \"task\"` |\n| AWS Bedrock      | AWS credential chain (`AWS_PROFILE`, env vars, IAM) | `--provider bedrock --model MODEL`                | `swival --provider bedrock --model global.anthropic.claude-opus-4-6-v1 \"task\"`       |\n| Command          | none                                                | `--provider command --model \"COMMAND\"`            | `swival --provider command --model \"codex exec --full-auto\" \"task\"`                  |\n\nRun `swival --help` for the grouped CLI reference and copy-paste examples.\n\n### LM Studio\n\n1. Install [LM Studio](https://lmstudio.ai/) and load a model with tool-calling\n   support. Recommended first model:\n   [qwen3-coder-next](https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF)\n   (great quality/speed tradeoff on local hardware).\n   Crank the context size as high as your hardware allows.\n2. Start the LM Studio server.\n3. Install Swival (requires Python 3.13+):\n\n```sh\nuv tool install swival\n```\n\nOn macOS you can also use Homebrew: `brew install swival/tap/swival`\n\n4. Run:\n\n```sh\nswival \"Refactor the error handling in src/api.py\"\n```\n\nThat's it. Swival finds the model, connects, and goes to work.\n\n### HuggingFace\n\n```sh\nexport HF_TOKEN=hf_...\nuv tool install swival\nswival \"Refactor the error handling in src/api.py\" \\\n    --provider huggingface --model zai-org/GLM-5\n```\n\nYou can also point it at a dedicated endpoint with `--base-url` and `--api-key`.\n\n### OpenRouter\n\n```sh\nexport OPENROUTER_API_KEY=sk_or_...\nuv tool install swival\nswival \"Refactor the error handling in src/api.py\" \\\n    --provider openrouter --model z-ai/glm-5\n```\n\n### Google Gemini\n\n```sh\nexport GEMINI_API_KEY=...\nuv tool install swival\nswival \"Refactor the error handling in src/api.py\" \\\n    --provider google --model gemini-2.5-flash\n```\n\n### ChatGPT Plus/Pro\n\nUse OpenAI models through your existing ChatGPT Plus or Pro subscription -- no\nAPI key needed.\n\n```sh\nuv tool install swival\nswival \"Refactor the error handling in src/api.py\" \\\n    --provider chatgpt --model gpt-5.4\n```\n\nOn first use, a device code and URL are printed to your terminal. Open the URL,\nenter the code, and authorize with your ChatGPT account. Tokens are cached\nlocally for subsequent runs.\n\n### Generic (OpenAI-compatible)\n\n```sh\nswival \"Refactor the error handling in src/api.py\" \\\n    --provider generic \\\n    --base-url http://127.0.0.1:8080 \\\n    --model my-model\n```\n\nWorks with ollama, llama.cpp, mlx_lm.server, vLLM, DeepSeek API, and anything\nelse that speaks the OpenAI chat completions protocol. No API key required for\nlocal servers.\n\n### Interactive sessions\n\n```sh\nswival\n```\n\nThe REPL carries conversation history across questions, which makes it good for\nexploratory work and longer tasks.\n\n### Task Input From Stdin\n\nIf you omit the positional task and pipe stdin, Swival reads the task from\nstdin.\n\n```sh\nswival -q \u003c objective.md\n\ncat prompts/review.md | swival --provider huggingface --model zai-org/GLM-5\n```\n\nUseful for long prompts, shell-quoting avoidance, and scripted workflows.\n\n### Updates and uninstall\n\n```sh\nuv tool upgrade swival    # update (uv)\nuv tool uninstall swival  # remove (uv)\nbrew upgrade swival       # update (Homebrew)\nbrew uninstall swival     # remove (Homebrew)\n```\n\n## What makes it different\n\n**Reliable with small models.** Context management is one of Swival's strengths.\nIt keeps things clean and focused, which is especially important when you are\nworking with models that have tight context windows. Graduated compaction,\npersistent thinking notes, and a todo checklist all survive context resets, so\nthe agent doesn't lose track of multi-step plans even under pressure.\n\n**Your models, your way.** Works with LM Studio, HuggingFace Inference API,\nOpenRouter, Google Gemini, ChatGPT Plus/Pro, any OpenAI-compatible server, and\nany external command. With LM Studio, it auto-discovers whatever model you have\nloaded. With HuggingFace or OpenRouter, point it at any supported model. With\nGoogle Gemini, use Gemini models through Google's native API. With ChatGPT\nPlus/Pro, authenticate through your browser and use OpenAI's models through your\nexisting subscription. With the generic provider, connect to ollama, llama.cpp,\nmlx_lm.server, vLLM, or any other compatible server. With the command provider,\nshell out to any program that reads a prompt on stdin and writes a response on\nstdout. You pick the model and the infrastructure.\n\n**Review loop and LLM-as-a-judge.** Swival has a configurable review loop that\ncan run external reviewer scripts or use a built-in LLM-as-judge to\nautomatically evaluate and retry agent output. Good for quality assurance on\ntasks that matter.\n\n**Built for benchmarking.** Pass `--report report.json` and Swival writes a\nmachine-readable evaluation report with per-call LLM timing, tool\nsuccess/failure counts, context compaction events, and guardrail interventions.\nUseful for comparing models, settings, skills, and MCP servers systematically\non real coding tasks.\n\n**Secrets stay on your machine.** Swival transparently detects API keys and\ncredential tokens in LLM messages and encrypts them before they leave your\nmachine when you enable secret encryption with `--encrypt-secrets`. The LLM\nnever sees the real values. Decryption happens locally when the response comes\nback, so tools still work normally. See [Secret Encryption](docs.md/secrets.md)\nfor details.\n\n**Cross-session memory.** The agent remembers things across sessions. It stores\nnotes in a local memory file and retrieves the most relevant entries for each\nnew conversation using BM25 ranking, so context from past work carries forward\nwithout bloating the prompt. Use `/learn` in the REPL to teach it something\non the spot.\n\n**Pick up where you left off.** When a session is interrupted — Ctrl+C, max\nturns, context overflow — Swival saves its state to disk. Next time you run it\nin the same directory, it picks up where it left off: what it was doing, what\nit had figured out, and what was left.\n\n**A2A server mode.** Run `swival --serve` and your agent becomes an A2A\nendpoint that other agents can call over HTTP. Multi-turn context, streaming,\nrate limiting, and bearer auth are built in.\n\n**Skills, MCP, and A2A.** Extend the agent with SKILL.md-based skills for\nreusable workflows, connect to external tools via the Model Context Protocol,\nand talk to remote agents via the Agent-to-Agent (A2A) protocol.\n\n**Small enough to read and hack.** A compact Python codebase with no framework\nunderneath. If something doesn't work the way you want, change it.\n\n**CLI-native.** stdout is exclusively the final answer. All diagnostics go to\nstderr. Pipe Swival's output straight into another command or a file.\n\n## Documentation\n\nFull documentation is available at [swival.dev](https://swival.dev/).\n\n- [Getting Started](docs.md/getting-started.md) -- installation, first run, what\n  happens under the hood\n- [Usage](docs.md/usage.md) -- one-shot mode, REPL mode, CLI flags, piping,\n  exit codes\n- [Tools](docs.md/tools.md) -- what the agent can do: file ops, search, editing,\n  web fetching, thinking, task tracking, command execution\n- [Safety and Sandboxing](docs.md/safety-and-sandboxing.md) -- path resolution,\n  symlink protection, filesystem access modes, command execution modes\n- [Skills](docs.md/skills.md) -- creating and using SKILL.md-based agent skills\n- [Customization](docs.md/customization.md) -- config files, project instructions,\n  system prompt overrides, tuning parameters\n- [Context Management](docs.md/context-management.md) -- compaction, snapshots,\n  knowledge survival, and how Swival handles tight context windows\n- [Providers](docs.md/providers.md) -- LM Studio, HuggingFace, OpenRouter,\n  Google Gemini, ChatGPT Plus/Pro, AWS Bedrock, generic OpenAI-compatible\n  server, and command (external program) configuration\n- [MCP](docs.md/mcp.md) -- connecting external tool servers via the Model Context\n  Protocol\n- [A2A](docs.md/a2a.md) -- connecting to remote agents via the Agent-to-Agent\n  protocol\n- [Reports](docs.md/reports.md) -- JSON reports for benchmarking and evaluation\n- [Web Browsing](docs.md/web-browsing.md) -- Chrome DevTools MCP, Lightpanda\n  MCP, and agent-browser for web interaction\n- [Reviews](docs.md/reviews.md) -- external reviewer scripts for automated QA\n  and LLM-as-judge evaluation\n- [Secret Encryption](docs.md/secrets.md) -- transparent encryption of\n  credentials before they reach the LLM provider\n- [Outbound LLM Filter](docs.md/llm-filter.md) -- user-defined scripts to\n  redact or block outbound LLM requests\n- [Lifecycle Hooks](docs.md/lifecycle-hooks.md) -- startup/exit hooks for\n  syncing state to remote storage\n- [Custom Commands](docs.md/custom-commands.md) -- REPL custom command setup\n  and execution\n- [Python API](docs.md/python-api.md) -- library API for embedding Swival in\n  Python applications\n- [Not Just for Frontier Models](docs.md/open-models.md) -- why Swival is\n  built to work well with small and open models too\n- [Using Swival with AgentFS](docs.md/agentfs.md) -- copy-on-write filesystem\n  sandboxing for safe agent runs\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fswival%2Fswival","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fswival%2Fswival","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fswival%2Fswival/lists"}