Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/pfrankov/obsidian-local-gpt
Local Ollama and OpenAI-like GPT's assistance for maximum privacy and offline access
Last synced: 17 days ago
- Host: GitHub
- URL: https://github.com/pfrankov/obsidian-local-gpt
- Owner: pfrankov
- License: MIT
- Created: 2023-12-03T10:53:06.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-11-16T22:23:54.000Z (25 days ago)
- Last Synced: 2024-11-16T23:21:37.470Z (25 days ago)
- Language: TypeScript
- Homepage:
- Size: 1.07 MB
- Stars: 328
- Watchers: 7
- Forks: 23
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-obsidian-ai-tools - https://github.com/pfrankov/obsidian-local-gpt
- awesome-ChatGPT-repositories - obsidian-local-gpt - Local Ollama and OpenAI-like GPT's assistance for maximum privacy and offline access (Langchain)
- project-awesome - pfrankov/obsidian-local-gpt - Local Ollama and OpenAI-like GPT's assistance for maximum privacy and offline access (TypeScript)
README
# Local GPT plugin for Obsidian
![demo](https://github.com/pfrankov/obsidian-local-gpt/assets/584632/724d4399-cb6c-4531-9f04-a1e5df2e3dad)
_No speedup. MacBook Pro 13, M1, 16GB, Ollama, orca-mini._
Local GPT assistance for maximum privacy and offline access.
The plugin allows you to open a context menu on selected text to pick an AI-assistant's action.
The most casual AI-assistant for Obsidian.
Also works with images.
_No speedup. MacBook Pro 13, M1, 16GB, Ollama, bakllava._
It can also use context from links, backlinks, and even PDF files.
## How to use (Ollama)
1. Install an embedding model:
   - For English: `ollama pull nomic-embed-text` (fastest)
   - For other languages: `ollama pull bge-m3` (slower, but more accurate)
2. Select the embedding model in the plugin's settings, and try to use the largest model with the largest context window.
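The model choice above can be sketched as a small shell helper — the function name and the two-way language split are illustrative assumptions, not part of the plugin:

```shell
# Pick an Ollama embedding model by note language (helper name and the
# English/other split are illustrative, not part of the plugin).
pick_embedding_model() {
  case "$1" in
    en) echo "nomic-embed-text" ;;  # fastest, English-focused
    *)  echo "bge-m3" ;;            # slower, but multilingual
  esac
}

# Then pull the chosen model once so the plugin can use it:
# ollama pull "$(pick_embedding_model en)"
```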
### Default actions
- Continue writing
- Summarize text
- Fix spelling and grammar
- Find action items in text
- General help (just use selected text as a prompt for any purpose)
- New System Prompt to create actions for your needs
You can also add your own actions, share the best ones, or get one [from the community](https://github.com/pfrankov/obsidian-local-gpt/discussions/2).
### Supported AI Providers
- Ollama
- OpenAI compatible server (also OpenAI)
## Installation
### 1. Install Plugin
#### Obsidian plugin store (recommended)
This plugin is available in the Obsidian community plugin store: https://obsidian.md/plugins?id=local-gpt
#### BRAT
You can also install this plugin via [BRAT](https://obsidian.md/plugins?id=obsidian42-brat): `pfrankov/obsidian-local-gpt`
### 2. Install LLM
#### Ollama (recommended)
1. Install [Ollama](https://ollama.com/).
2. Install Gemma 2 (the default): `ollama pull gemma2`, or any preferred model [from the library](https://ollama.com/library).
Additionally, if you want to enable streaming completion with Ollama, set the environment variable `OLLAMA_ORIGINS` to `*`:
- For macOS, run `launchctl setenv OLLAMA_ORIGINS "*"`.
- For Linux and Windows [check the docs](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server).
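A minimal sketch of that setup, assuming a default Ollama install (the `uname` branching is an illustration, not taken from the Ollama docs):

```shell
# Allow cross-origin requests so Obsidian can stream completions from Ollama.
export OLLAMA_ORIGINS="*"                # current shell (Linux and macOS)
if [ "$(uname -s)" = "Darwin" ]; then
  launchctl setenv OLLAMA_ORIGINS "*"    # macOS: also for GUI apps like Obsidian
fi

# Sanity check (assumes the default port 11434): confirm the server is up with
# curl -s http://localhost:11434/api/version
```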
#### OpenAI compatible server
There are several options for running a local OpenAI-compatible server:
- [Open WebUI](https://docs.openwebui.com/tutorials/integrations/continue-dev/)
- [llama.cpp](https://github.com/ggerganov/llama.cpp)
- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python#openai-compatible-web-server)
- [LocalAI](https://localai.io/model-compatibility/llama-cpp/#setup)
- Oobabooga [Text generation web UI](https://github.com/pfrankov/obsidian-local-gpt/discussions/8)
- [LM Studio](https://lmstudio.ai/)
- ...maybe more
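Whichever server you pick, the plugin only needs its base URL. A quick way to confirm the server speaks the OpenAI API is to request `GET /v1/models`; the port below is an assumption, so use whatever your server listens on:

```shell
# Build the models endpoint from a base URL, trimming any trailing slash.
models_url() {
  printf '%s/v1/models\n' "${1%/}"
}

# e.g. for a local llama.cpp or LM Studio server:
# curl -s "$(models_url http://localhost:8080)"
```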
### Configure Obsidian hotkey
1. Open Obsidian Settings
2. Go to Hotkeys
3. Filter for "Local" and you should see "Local GPT: Show context menu"
4. Click the `+` icon and press a hotkey (e.g. `⌘ + M`)
### "Use fallback" option
It is also possible to specify a fallback to handle requests — this allows you to use larger models when you are online and smaller ones when offline.
### Using with OpenAI
Since you can provide any OpenAI-like server, it is possible to use OpenAI servers themselves.
_Despite the ease of configuration, I do not recommend this method, since the main purpose of the plugin is to work with private LLMs._
1. Select `OpenAI compatible server` in `Selected AI provider`
2. Set `OpenAI compatible server URL` to `https://api.openai.com/v1`
3. Retrieve and paste your `API key` from the [API keys page](https://platform.openai.com/api-keys)
4. Click "refresh" button and select the model that suits your needs (e.g. `gpt-4o`)
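Before pasting the key into the plugin, you can sanity-check it from the shell. This sketch assumes `OPENAI_API_KEY` is exported in your environment; the helper only formats the header:

```shell
# Format the Authorization header the OpenAI API expects.
auth_header() {
  printf 'Authorization: Bearer %s\n' "$1"
}

# List the models your key can access:
# curl -s https://api.openai.com/v1/models -H "$(auth_header "$OPENAI_API_KEY")"
```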
## My other Obsidian plugins
- [Colored Tags](https://github.com/pfrankov/obsidian-colored-tags) that colorizes tags in distinguishable colors.
## Inspired by
- [Obsidian Ollama](https://github.com/hinterdupfinger/obsidian-ollama).