Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jpmcb/nvim-llama
🦙 Ollama interfaces for Neovim
- Host: GitHub
- URL: https://github.com/jpmcb/nvim-llama
- Owner: jpmcb
- License: mit
- Created: 2023-08-26T19:48:17.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-16T15:45:54.000Z (4 months ago)
- Last Synced: 2024-11-30T00:34:09.021Z (12 days ago)
- Topics: llama2, neovim, nvim, nvim-plugin, ollama
- Language: Lua
- Homepage:
- Size: 59.6 KB
- Stars: 261
- Watchers: 11
- Forks: 8
- Open Issues: 8
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
Awesome Lists containing this project
- awesome-neovim - jpmcb/nvim-llama - LLM (Llama 2 and llama.cpp) wrappers. (AI / (requires Neovim 0.5))
README
# 🦙 nvim-llama
_[Ollama](https://github.com/jmorganca/ollama) interfaces for Neovim: get up and running with large language models locally in Neovim._
https://github.com/jpmcb/nvim-llama/assets/23109390/3e9e7248-dcf4-4349-8ee2-fd87ac3838ca
🏗️ 👷 Warning! Under active development!! 👷 🚧
## Requirements
Docker is required to use `nvim-llama`.
And that's it! All models and clients run from within Docker to provide chat interfaces and functionality.
This approach is platform agnostic and works on macOS, Linux, and Windows.
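If you want Neovim to warn you when Docker is missing, a small check like the following can sit in your config (a sketch; it is not part of the plugin):

```lua
-- Warn at startup if the `docker` executable is not on PATH.
if vim.fn.executable('docker') == 0 then
  vim.notify('nvim-llama: Docker not found on PATH', vim.log.levels.WARN)
end
```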
## Installation

Use your favorite package manager to install the plugin:
### Packer
```lua
use 'jpmcb/nvim-llama'
```

### lazy.nvim
```lua
{
  'jpmcb/nvim-llama'
}
```
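If you want lazy.nvim to also run the plugin's setup when it loads, a spec along these lines works (a sketch; the `config` function simply calls the `setup` described below):

```lua
{
  'jpmcb/nvim-llama',
  -- Call the plugin's setup with default options once it is loaded.
  config = function()
    require('nvim-llama').setup {}
  end,
}
```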
### vim-plug

```lua
Plug 'jpmcb/nvim-llama'
```

## Setup & configuration
In your `init.lua` (or a Lua block in `init.vim`), set up the plugin:
```lua
require('nvim-llama').setup {}
```

You can provide the following optional configuration table to the `setup` function:
```lua
local defaults = {
    -- See plugin debugging logs
    debug = false,

    -- The model for Ollama to use. This model will be automatically downloaded.
    model = 'llama2',
}
```
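To override any of these defaults, pass them to `setup`. For example, a minimal sketch that enables debug logging and selects a different model (the `mistral` value is illustrative; see the model library below):

```lua
require('nvim-llama').setup {
  -- Print plugin debugging logs
  debug = true,
  -- Any model name from the library below works here; it is downloaded on first use.
  model = 'mistral',
}
```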
### Model library

Ollama supports an incredible number of open-source models, available at [ollama.ai/library](https://ollama.ai/library 'ollama model library').
Check out their docs to learn more: https://github.com/jmorganca/ollama
---
When the `model` option is set, the specified model will be downloaded automatically:
| Model | Parameters | Size | Model setting |
| ------------------ | ---------- | ----- | ------------------------------ |
| Neural Chat | 7B | 4.1GB | `model = neural-chat` |
| Starling | 7B | 4.1GB | `model = starling-lm` |
| Mistral | 7B | 4.1GB | `model = mistral` |
| Llama 2 | 7B | 3.8GB | `model = llama2` |
| Code Llama | 7B | 3.8GB | `model = codellama` |
| Llama 2 Uncensored | 7B | 3.8GB | `model = llama2-uncensored` |
| Llama 2 13B | 13B | 7.3GB | `model = llama2:13b` |
| Llama 2 70B | 70B | 39GB | `model = llama2:70b` |
| Orca Mini | 3B | 1.9GB | `model = orca-mini` |
| Vicuna             | 7B         | 3.8GB | `model = vicuna`               |

> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
> 70B parameter models require upwards of 64 GB of RAM (if not more).

## Usage
The `:Llama` command opens a terminal window where you can start chatting with your LLM.
To exit terminal mode, which by default keeps focus locked to the terminal buffer, use the binding `Ctrl-\ Ctrl-n`.
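If you open the chat often, a normal-mode mapping saves a few keystrokes (a sketch; the `<leader>lm` choice is arbitrary):

```lua
-- Map <leader>lm to open the nvim-llama chat terminal.
vim.keymap.set('n', '<leader>lm', '<cmd>Llama<CR>', { desc = 'nvim-llama: open chat' })
```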