{"id":13502859,"url":"https://github.com/efugier/smartcat","last_synced_at":"2025-03-29T12:33:07.591Z","repository":{"id":205829006,"uuid":"715183622","full_name":"efugier/smartcat","owner":"efugier","description":"Putting a brain behind `cat`🐈‍⬛ Integrating language models in the Unix commands ecosystem through text streams.","archived":false,"fork":false,"pushed_at":"2025-02-23T23:54:22.000Z","size":12083,"stargazers_count":440,"open_issues_count":8,"forks_count":20,"subscribers_count":4,"default_branch":"main","last_synced_at":"2025-03-24T05:15:46.983Z","etag":null,"topics":["ai","chatgpt","cli","command-line","command-line-tool","copilot","llm","mistral-ai","unix"],"latest_commit_sha":null,"homepage":"https://crates.io/crates/smartcat","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/efugier.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-11-06T16:28:08.000Z","updated_at":"2025-03-23T19:48:40.000Z","dependencies_parsed_at":"2023-11-16T16:47:31.937Z","dependency_job_id":"1e3c4078-b2b9-498d-bb9a-49dd2eebb784","html_url":"https://github.com/efugier/smartcat","commit_stats":null,"previous_names":["efugier/pipelm","efugier/smartcat"],"tags_count":35,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/efugier%2Fsmartcat","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/efugier%2Fsmartcat/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/efugier%2Fsmartcat/releases","manifes
ts_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/efugier%2Fsmartcat/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/efugier","download_url":"https://codeload.github.com/efugier/smartcat/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":246187190,"owners_count":20737459,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","chatgpt","cli","command-line","command-line-tool","copilot","llm","mistral-ai","unix"],"created_at":"2024-07-31T22:02:27.423Z","updated_at":"2025-03-29T12:33:07.583Z","avatar_url":"https://github.com/efugier.png","language":"Rust","readme":"\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://github.com/efugier/smartcat/discussions\"\u003e\n    \u003cimg src=\"https://img.shields.io/badge/commmunity-discussion-blue?style=flat-square\" alt=\"community discussion\"\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://github.com/efugier/smartcat/actions/workflows/ci.yml\"\u003e\n      \u003cimg src=\"https://github.com/efugier/smartcat/actions/workflows/ci.yml/badge.svg?branch=main\" alt=\"Github Actions CI Build Status\"\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://crates.io/crates/smartcat\"\u003e\n      \u003cimg src=\"https://img.shields.io/crates/v/smartcat.svg?style=flat-square\" alt=\"crates.io\"\u003e\n  \u003c/a\u003e\n  \u003cbr\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"assets/sc_logo.png\" width=\"200\"\u003e\n\u003c/p\u003e\n\n# smartcat (sc)\n\nPuts a brain behind `cat`! 
CLI interface to bring language models into the Unix ecosystem and allow terminal power users to make the most out of LLMs while maintaining full control.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"assets/workflow.gif\" /\u003e\n\u003c/p\u003e\n\nWhat makes it special:\n\n- Made for power users; tailor the config to reduce overhead on your most frequent tasks\n- Minimalist, built according to the Unix philosophy with terminal and editor integration in mind\n- Good I/O handling to insert user input in prompts and use the result in CLI-based workflows\n- Built-in partial prompt to make the model play nice as a CLI tool\n- Full configurability on which API, LLM version, and temperature you use\n- Write and save your own prompt templates for faster recurring tasks (simplify, optimize, tests, etc.)\n- Conversation support\n- Glob expressions to include context files\n\nCurrently supports the following APIs:\n\n- Local runs with **[Ollama](https://github.com/ollama/ollama/blob/main/docs/README.md)** or any server compliant with its format; see the [Ollama setup](#ollama-setup) section for the free and easiest way to get started!  
\n_(Answers might be slow depending on your setup; you may want to try the third-party APIs for an optimal workflow.)_\n- **[Anthropic](https://docs.anthropic.com/claude/docs/models-overview)**, **[Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference)**, **[Groq](https://console.groq.com/docs/models)**, **[Mistral AI](https://docs.mistral.ai/getting-started/models/)**, **[OpenAI](https://platform.openai.com/docs/models/overview)**\n\n# Table of Contents\n\n- [Installation](#installation)\n- [Recommended models](#recommended-models)\n- [Usage](#usage)\n- [A few examples to get started 🐈‍⬛](#a-few-examples-to-get-started-)\n  - [Integrating with editors](#integrating-with-editors)\n    - [Example workflows](#example-workflows)\n- [Configuration](#configuration) ← please read this carefully\n    - [Ollama setup](#ollama-setup) ← easiest way to get running for free\n- [How to help?](./CONTRIBUTING.md)\n\n## Installation\n\nOn the first run (`sc`), it will ask you to generate default configuration files and provide guidance on finalizing the installation (see the [Configuration](#configuration) section).\n\nThe minimum configuration requirement is a `default` prompt that calls a setup API (either remote with an API key or local with Ollama).\n\nHere is how to get it.\n\n### With Cargo\n\nWith an up-to-date Rust and Cargo setup (you might consider running `rustup update`):\n\n```\ncargo install smartcat\n```\n\nRun this command again to update `smartcat`.\n\n### Arch Linux\n\nIf you are on Arch Linux, you can install the package from the [extra repository](https://archlinux.org/packages/extra/x86_64/smartcat/):\n\n```\npacman -S smartcat\n```\n\n### By downloading the binary\n\nChoose the one compiled for your platform on the [release page](https://github.com/efugier/smartcat/releases).\n\n## Recommended Models\n\nCurrently, the best results are achieved with APIs from Anthropic, Mistral, or OpenAI. 
It costs about $2-3 a month for typical use with the best models.\n\n## Usage\n\n```text\nUsage: sc [OPTIONS] [INPUT_OR_TEMPLATE_REF] [INPUT_IF_TEMPLATE_REF]\n\nArguments:\n  [INPUT_OR_TEMPLATE_REF]  ref to a prompt template from config or straight input (will use `default` prompt template if input)\n  [INPUT_IF_TEMPLATE_REF]  if the first arg matches a config template, the second will be used as input\n\nOptions:\n  -e, --extend-conversation        whether to extend the previous conversation or start a new one\n  -r, --repeat-input               whether to repeat the input before the output, useful to extend instead of replacing\n      --api \u003cAPI\u003e                  overrides which api to hit [possible values: ollama, anthropic, groq, mistral, openai]\n  -m, --model \u003cMODEL\u003e              overrides which model (of the api) to use\n  -t, --temperature \u003cTEMPERATURE\u003e  higher temperature  means answer further from the average\n  -l, --char-limit \u003cCHAR_LIMIT\u003e    max number of chars to include, ask for user approval if more, 0 = no limit\n  -c, --context \u003cCONTEXT\u003e...       glob patterns or list of files to use the content as context\n                                   make sure it's the last arg.\n  -h, --help                       Print help\n  -V, --version                    Print version\n```\n\nYou can use it to accomplish tasks in the CLI but also in your editors (if they are good Unix citizens, i.e., work with shell commands and text streams) to complete, refactor, write tests... 
anything!\n\n**The key to making this work seamlessly is a good default prompt that tells the model to behave like a CLI tool** and not write any unwanted text like markdown formatting or explanations.\n\n## A few examples to get started 🐈‍⬛\n\n```\nsc \"say hi\"  # just ask (uses default prompt template)\n\nsc test                         # use templated prompts\nsc test \"and parametrize them\"  # extend them on the fly\n\nsc \"explain how to use this program\" -c **/*.md main.py  # use files as context\n\ngit diff | sc \"summarize the changes\"  # pipe data in\n\ncat en.md | sc \"translate in french\" \u003e\u003e fr.md   # write data out\nsc -e \"use a more informal tone\" -t 2 \u003e\u003e fr.md  # extend the conversation and raise the temperature\n```\n\n### Integrating with editors\n\nThe key to good integration in editors is a good default prompt (or set of prompts) combined with the `-p` flag for specifying the task at hand.\nThe `-r` flag can be used to decide whether to replace or extend the selection.\n\n#### Vim\n\nStart by selecting some text, then press `:`. You can then pipe the selection content to `smartcat`.\n\n```\n:'\u003c,'\u003e!sc \"replace the versions with wildcards\"\n```\n\n```\n:'\u003c,'\u003e!sc \"fix this function\"\n```\n\nwill **overwrite** the current selection with the same text transformed by the language model.\n\n```\n:'\u003c,'\u003e!sc -r test\n```\n\nwill **repeat** the input, effectively appending the language model's result at the end of the current selection.\n\nAdd the following remap to your vimrc for easy access:\n\n```vimrc\nnnoremap \u003cleader\u003esc :'\u003c,'\u003e!sc\n```\n\n#### Helix and Kakoune\n\nSame concept, different shortcut: simply press the pipe key to redirect the selection to `smartcat`.\n\n```\npipe:sc test -r\n```\nWith some remapping you may have your most recurrent action attached to a few keystrokes, e.g. 
`\u003cleader\u003ewt`!\n\n#### Example Workflows\n\n**For quick questions:**\n\n```\nsc \"my quick question\"\n```\n\nwhich will likely be **your fastest path to an answer**: a shortcut to open your terminal (if you're not in it already), `sc`, and you're set. No tab finding, no logins, no redirects, etc.\n\n**To help with coding:**\n\nselect a struct\n\n```\n:'\u003c,'\u003e!sc \"implement the traits FromStr and ToString for this struct\"\n```\n\nselect the generated impl block\n\n```\n:'\u003c,'\u003e!sc -e \"can you make it more concise?\"\n```\n\nput the cursor at the bottom of the file and give example usage as input\n\n```\n:'\u003c,'\u003e!sc -e \"now write tests for it knowing it's used like this\" -c src/main.rs\n```\n\n...\n\n**To have a full conversation with an LLM from a markdown file:**\n\n```\nvim problem_solving.md\n\n\u003e write your question as a comment in the markdown file, then select your question\n\u003e and send it to smartcat using the aforementioned trick; use `-r` to repeat the input.\n\nIf you want to continue the conversation, write your new question as a comment and repeat\nthe previous step with `-e -r`.\n\n\u003e This allows you to keep track of your questions and make a nice reusable document.\n```\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"assets/qatohtml.gif\" /\u003e\n\u003c/p\u003e\n\n\n# Configuration\n\n- By default, lives at `$HOME/.config/smartcat` or `%USERPROFILE%\\\.config\\\smartcat` on Windows\n- The directory can be set using the `SMARTCAT_CONFIG_PATH` environment variable\n- Use `#[\u003cinput\u003e]` as the placeholder for input when writing prompts; if none is provided, it will be automatically added at the end of the last user message\n- The default model is a local `phi3` run with Ollama, but it's recommended to try the latest models and see which one works best for you\n- The prompt named `default` will be used by default\n- You can adjust the temperature and set a default for each prompt depending on its use 
case\n\nThree files are used:\n\n- `.api_configs.toml` stores your credentials; you need at least one provider with an API key or a local Ollama setup\n- `prompts.toml` stores your prompt templates; you need at least the `default` prompt\n- `conversation.toml` stores the latest chat if you need to continue it; it's auto-managed, but you can make backups if desired\n\n`.api_configs.toml`\n\n```toml\n[ollama]  # local API, no key required\nurl = \"http://localhost:11434/api/chat\"\ndefault_model = \"phi3\"\ntimeout_seconds = 180  # default timeout if not specified\n\n[openai]  # each supported API has its own config section with a key and url\napi_key = \"\u003cyour_api_key\u003e\"\ndefault_model = \"gpt-4-turbo-preview\"\nurl = \"https://api.openai.com/v1/chat/completions\"\n\n[mistral]\n# you can use a command to grab the key; requires a working `sh` command\napi_key_command = \"pass mistral/api_key\"\ndefault_model = \"mistral-medium\"\nurl = \"https://api.mistral.ai/v1/chat/completions\"\n\n[groq]\napi_key_command = \"echo $MY_GROQ_API_KEY\"\ndefault_model = \"llama3-70b-8192\"\nurl = \"https://api.groq.com/openai/v1/chat/completions\"\n\n[anthropic]\napi_key = \"\u003cyet_another_api_key\u003e\"\nurl = \"https://api.anthropic.com/v1/messages\"\ndefault_model = \"claude-3-opus-20240229\"\nversion = \"2023-06-01\"  # anthropic API version, see https://docs.anthropic.com/en/api/versioning\n\n[cerebras]\napi_key = \"\u003cyour_api_key\u003e\"\ndefault_model = \"llama3.1-70b\"\nurl = \"https://api.cerebras.ai/v1/chat/completions\"\n```\n\n`prompts.toml`\n\n```toml\n[default]  # a prompt is a section\napi = \"ollama\"  # must refer to an entry in the `.api_configs.toml` file\nmodel = \"phi3\"  # each prompt may define its own model\n\n[[default.messages]]  # then you can list messages\nrole = \"system\"\ncontent = \"\"\"\\\nYou are an expert programmer and a shell master. You value code efficiency and clarity above all things. 
\\nWhat you write will be piped in and out of CLI programs so you do not explain anything unless explicitly asked to. \\\nNever write ``` around your answer; provide only the result of the task you are given. Preserve input formatting.\\\n\"\"\"\n\n[empty]  # always nice to have an empty prompt available\napi = \"openai\"\n# not mentioning the model will use the default from the api config\nmessages = []\n\n[test]\napi = \"anthropic\"\ntemperature = 0.0\n\n[[test.messages]]\nrole = \"system\"\ncontent = \"\"\"\\\nYou are an expert programmer and a shell master. You value code efficiency and clarity above all things. \\\nWhat you write will be piped in and out of CLI programs so you do not explain anything unless explicitly asked to. \\\nNever write ``` around your answer; provide only the result of the task you are given. Preserve input formatting.\\\n\"\"\"\n\n[[test.messages]]\nrole = \"user\"\n# the following placeholder string #[\u003cinput\u003e] will be replaced by the input\n# each message seeks it and replaces it\ncontent = '''Write tests using pytest for the following code. Parametrize it if appropriate.\n\n#[\u003cinput\u003e]\n'''\n```\n\nSee [the config setup file](./src/config/mod.rs) for more details.\n\n## Ollama setup\n\n1. [Install Ollama](https://github.com/ollama/ollama#ollama)\n2. Pull the model you plan on using: `ollama pull phi3`\n3. Test the model: `ollama run phi3 \"say hi\"`\n4. Make sure the server is reachable: `curl http://localhost:11434` should say \"Ollama is running\"; otherwise, you might need to run `ollama serve`\n5. `smartcat` will now be able to reach your local Ollama, enjoy!\n\n⚠️ Answers might be slow depending on your setup; you may want to try the third-party APIs for an optimal workflow. 
Timeout is configurable and set to 30s by default.\n\n## How to help?\n\nSee [CONTRIBUTING.md](./CONTRIBUTING.md).\n","funding_links":[],"categories":["Rust"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fefugier%2Fsmartcat","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fefugier%2Fsmartcat","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fefugier%2Fsmartcat/lists"}