# cmp-ai

AI source for [hrsh7th/nvim-cmp](https://github.com/hrsh7th/nvim-cmp)

This is a general-purpose AI source for `cmp`, easily adapted to any REST API
supporting remote code completion.

For now, HuggingFace, SantaCoder, OpenAI Chat, Codestral, Ollama, Tabby and Google Bard are implemented.

## Install

### Dependencies

- You will need `plenary.nvim` to use this plugin.
- For using Codestral, OpenAI or HuggingFace, you will also need `curl`.
- For using Google Bard, you will need [dsdanielpark/Bard-API](https://github.com/dsdanielpark/Bard-API).
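A quick way to check that these dependencies are reachable from inside Neovim is a sketch like the following, using only stock Neovim APIs (this check is not part of cmp-ai):

```lua
-- Sketch: verify curl and plenary.nvim are available before configuring cmp-ai.
if vim.fn.executable('curl') ~= 1 then
  vim.notify('curl not found: the Codestral, OpenAI and HuggingFace providers need it',
    vim.log.levels.WARN)
end
if not pcall(require, 'plenary.curl') then
  vim.notify('plenary.nvim does not appear to be installed', vim.log.levels.WARN)
end
```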
### Using a plugin manager

Using [Lazy](https://github.com/folke/lazy.nvim/):

```lua
return require("lazy").setup({
    {'tzachar/cmp-ai', dependencies = 'nvim-lua/plenary.nvim'},
    {'hrsh7th/nvim-cmp', dependencies = {'tzachar/cmp-ai'}},
})
```

And later, tell `cmp` to use this plugin:

```lua
require'cmp'.setup {
    sources = {
        { name = 'cmp_ai' },
    },
}
```

## Setup

Please note the use of `:` instead of `.` in the `setup` calls below: `setup` is invoked as a method on the config object.

To use HuggingFace:

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'HF',
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})
```

You will also need to make sure you have the HuggingFace API key in your
environment, `HF_API_KEY`.
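Since the key is read from the environment, you can confirm that Neovim actually sees it with the stock `vim.env` API; this is a minimal sketch, and the same pattern applies to the other providers' keys below:

```lua
-- Sketch: warn if the API key is missing from Neovim's environment.
if not vim.env.HF_API_KEY or vim.env.HF_API_KEY == '' then
  vim.notify('HF_API_KEY is not set', vim.log.levels.WARN)
end
```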
To use OpenAI:

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'OpenAI',
  provider_options = {
    model = 'gpt-4',
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})
```

You will also need to make sure you have the OpenAI API key in your
environment, `OPENAI_API_KEY`.

Available models for OpenAI are `gpt-4` and `gpt-3.5-turbo`.

To use Codestral:

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'Codestral',
  provider_options = {
    model = 'codestral-latest',
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})
```

You will also need to make sure you have the Codestral API key in your
environment, `CODESTRAL_API_KEY`.

You can also use the `suffix` and `prompt` parameters; see [Codestral](https://github.com/codestral/codestral) for more details.

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'Codestral',
  provider_options = {
    model = 'codestral-latest',
    prompt = function(lines_before, lines_after)
      return lines_before
    end,
    suffix = function(lines_after)
      return lines_after
    end
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
})
```

To use Google Bard:

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'Bard',
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})
```

You will also need to follow the instructions on [dsdanielpark/Bard-API](https://github.com/dsdanielpark/Bard-API)
to get the `__Secure-1PSID` key, and set the environment variable `BARD_API_KEY`
accordingly (note that this plugin expects `BARD_API_KEY` without a leading underscore).

To use [Ollama](https://ollama.ai):

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 100,
  provider = 'Ollama',
  provider_options = {
    model = 'codellama:7b-code',
    auto_unload = false, -- Set to true to automatically unload the model
                         -- when exiting nvim.
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})
```

With Ollama you can also use the `suffix` parameter, typically when you want
code completion while relying on the model's default prompt template.

If the model you are using has the following template:

```
{{- if .Suffix }}<|fim_prefix|>{{ .Prompt }}<|fim_suffix|>{{ .Suffix }}<|fim_middle|>
{{- else }}{{ .Prompt }}
{{- end }}
```

then you can use the `suffix` parameter and leave the prompt unchanged, since
the model will combine your prompt and suffix to fill in the template. The
prompt should be `lines_before` and the suffix `lines_after`. You can then
change the model without having to adjust the prompt or suffix functions.

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 100,
  provider = 'Ollama',
  provider_options = {
    model = 'codegemma:2b-code',
    prompt = function(lines_before, lines_after)
      return lines_before
    end,
    suffix = function(lines_after)
      return lines_after
    end,
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
})
```
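If you are unsure what your `prompt` and `suffix` functions will produce, you can exercise them directly with sample text; this is a throwaway sketch (the sample strings are arbitrary, and the two functions mirror the example above):

```lua
-- Throwaway sketch: print what the prompt/suffix functions from the example
-- above would send for a toy buffer split at the cursor.
local lines_before = 'local function add(a, b)\n  return '
local lines_after = '\nend'
local prompt = function(before, after) return before end
local suffix = function(after) return after end
print(vim.inspect({
  prompt = prompt(lines_before, lines_after),
  suffix = suffix(lines_after),
}))
```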
> [!NOTE]
> Different models may implement different special tokens to delimit
> prefix and suffix. You may want to consult the official documentation for the
> specific tokens used by your model and the recommended prompt format. For
> example, [qwen2.5-coder](https://github.com/QwenLM/Qwen2.5-Coder?tab=readme-ov-file#basic-information)
> uses `<|fim_prefix|>`, `<|fim_middle|>` and `<|fim_suffix|>` (as well as some
> other special tokens for project context) as delimiters for fill-in-the-middle
> code completion, and provides [examples](https://github.com/QwenLM/Qwen2.5-Coder?tab=readme-ov-file#3-file-level-code-completion-fill-in-the-middle)
> of how to construct the prompt. This is model-specific, and Ollama supports all
> kinds of different models and fine-tunes, so it is best to write your own
> prompt, as in the following example:

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 100,
  provider = 'Ollama',
  provider_options = {
    model = 'qwen2.5-coder:7b-base-q6_K',
    prompt = function(lines_before, lines_after)
      -- You may include the filetype and/or other project-wide context in this
      -- string as well. Consult the model documentation in case there are
      -- special tokens for this.
      return "<|fim_prefix|>" .. lines_before .. "<|fim_suffix|>" .. lines_after .. "<|fim_middle|>"
    end,
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = false,
})
```

> [!NOTE]
> It is also worth noting that, for some models (like [qwen2.5-coder](https://github.com/QwenLM/Qwen2.5-Coder)),
> the base model appears to be better for completion because it replies with code
> only, whereas the instruction-tuned variant tends to reply with Markdown-formatted
> text that cannot be used directly as a completion candidate.

To use Tabby:

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'Tabby',
  notify = true,
  provider_options = {
    -- These are optional
    -- user = 'yourusername',
    -- temperature = 0.2,
    -- seed = 'randomstring',
  },
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})
```

You will also need to make sure you have the Tabby API key in your environment, `TABBY_API_KEY`.

### `notify`

As some completion sources can be quite slow, setting this to `true` will trigger
a notification when a completion request starts and ends, using `vim.notify`.

### `notify_callback`

The default notify function uses `vim.notify`, but an override can be configured.
For example:

```lua
notify_callback = function(msg)
  require('notify').notify(msg, vim.log.levels.INFO, {
    title = 'OpenAI',
    render = 'compact',
  })
end
```

If you want, you can also configure callbacks for the `on_start` and `on_end` events:

```lua
notify_callback = {
    on_start = function(msg)
        require('notify').notify(
            msg .. "completion started",
            vim.log.levels.INFO,
            {
                title = 'OpenAI',
                render = 'compact',
            }
        )

        -- do pretty animations or something here
    end,

    on_end = function(msg)
        require('notify').notify(
            msg .. "completion ended",
            vim.log.levels.INFO,
            {
                title = 'OpenAI',
                render = 'compact',
            }
        )

        -- finish pretty animations started above
    end,
}
```

### `log_errors`

Log any errors that the AI backend returns. Defaults to `true`. This does not
prevent the notification callbacks from being called; you can set this to
`false` to prevent excess noise if you perform other `vim.notify` calls in
your callbacks.

```lua
cmp_ai:setup({
    log_errors = true,
})
```

### `max_lines`

How many lines of buffer context to use.
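For example (a sketch; the value is arbitrary and shown only to illustrate the option):

```lua
local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 500, -- use at most 500 lines of buffer context
})
```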
### `max_timeout_seconds`

Number of seconds before a code completion request is cancelled; this is passed
as `--max-time` to `curl`. The Bard provider does not currently support this
option.

Example:

```lua
cmp_ai:setup({
  max_timeout_seconds = 8,
})
```

### `run_on_every_keystroke`

Generate new completion items on every keystroke.

### `ignored_file_types` `(table: <string:bool>)`

Which file types to ignore. For example:

```lua
local ignored_file_types = {
  html = true,
}
```

`cmp-ai` will not offer completions when `vim.bo.filetype` is `html`.

## Dedicated `cmp` keybindings

As completions can take time, and you might not want to trigger expensive APIs
on every keystroke, you can configure `cmp-ai` to trigger only with a specific
key press. For example, to bind `cmp-ai` to `<c-x>`, you can do the following:

```lua
cmp.setup({
  ...
  mapping = {
    ...
    ['<C-x>'] = cmp.mapping(
      cmp.mapping.complete({
        config = {
          sources = cmp.config.sources({
            { name = 'cmp_ai' },
          }),
        },
      }),
      { 'i' }
    ),
  },
})
```

Also, make sure you do not pass `cmp-ai` to the default list of `cmp` sources.

## Pretty Printing Menu Items

You can use the following to pretty-print the completion menu (requires
[lspkind](https://github.com/onsails/lspkind-nvim) and patched fonts from
<https://www.nerdfonts.com>):

```lua
require('cmp').setup({
  sources = {
    { name = 'cmp_ai' },
  },
  formatting = {
    format = require('lspkind').cmp_format({
      mode = "symbol_text",
      maxwidth = 50,
      ellipsis_char = '...',
      show_labelDetails = true,
      symbol_map = {
        HF = "",
        OpenAI = "",
        Codestral = "",
        Bard = "",
      },
    }),
  },
})
```

## Sorting

You can bump `cmp-ai` completions to the top of your completion menu like so:

```lua
local compare = require('cmp.config.compare')
cmp.setup({
  sorting = {
    priority_weight = 2,
    comparators = {
      require('cmp_ai.compare'),
      compare.offset,
      compare.exact,
      compare.score,
      compare.recently_used,
      compare.kind,
      compare.sort_text,
      compare.length,
      compare.order,
    },
  },
})
```

## Debugging Information

To retrieve the raw response from the backend, you can set the following option
in `provider_options`:

```lua
provider_options = {
  raw_response_cb = function(response)
    -- The `response` parameter contains the raw response (JSON-like) object.

    vim.notify(vim.inspect(response)) -- show the response as a Lua table

    vim.g.ai_raw_response = response  -- store the raw response in a global
                                      -- variable so that you can use it
                                      -- somewhere else (like the statusline)
  end,
}
```

This provides useful information like context lengths (number of tokens) and
generation speeds (tokens per second), depending on your backend.
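## Putting it together

For reference, here is one way to combine the install step, the provider setup, and the dedicated keybinding from the sections above into a single Lazy setup. This is a sketch assembled from the earlier snippets, not a canonical configuration; the provider and model are only examples:

```lua
-- A sketch only: one Lazy setup combining the snippets shown above.
return require("lazy").setup({
  {
    'tzachar/cmp-ai',
    dependencies = 'nvim-lua/plenary.nvim',
    config = function()
      require('cmp_ai.config'):setup({
        max_lines = 100,
        provider = 'Ollama',            -- example provider from above
        provider_options = { model = 'codellama:7b-code' },
        notify = true,
        run_on_every_keystroke = false, -- trigger only via the mapping below
      })
    end,
  },
  {
    'hrsh7th/nvim-cmp',
    dependencies = { 'tzachar/cmp-ai' },
    config = function()
      local cmp = require('cmp')
      cmp.setup({
        mapping = {
          -- dedicated trigger, as in "Dedicated cmp keybindings" above
          ['<C-x>'] = cmp.mapping(
            cmp.mapping.complete({
              config = {
                sources = cmp.config.sources({ { name = 'cmp_ai' } }),
              },
            }),
            { 'i' }
          ),
        },
      })
    end,
  },
})
```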