https://github.com/BWurster/explain-selection-with-ai
This is my first go at making an Obsidian plugin to elaborate on and describe selected bits of information and their context.
- Host: GitHub
- URL: https://github.com/BWurster/explain-selection-with-ai
- Owner: BWurster
- License: mit
- Created: 2024-05-25T03:45:47.000Z (11 months ago)
- Default Branch: master
- Last Pushed: 2024-06-19T02:32:04.000Z (10 months ago)
- Last Synced: 2024-12-04T01:07:55.598Z (4 months ago)
- Language: TypeScript
- Size: 43.9 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- jimsghstars - BWurster/explain-selection-with-ai - This is my first go at making an Obsidian plugin to elaborate on and describe selected bits of information and their context. (TypeScript)
README
# Obsidian Explain Selection with AI Plugin
This is a plugin for Obsidian (https://obsidian.md).
This plugin adds an entry to the editor context menu that prompts an [OpenAI Chat Completion API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)-compatible AI endpoint to elaborate on the selected text, taking its surrounding context into account.
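Under the hood, any compatible backend is driven through the standard Chat Completions request shape. The sketch below is illustrative only: the function name, prompt wording, and parameter names are assumptions for this example, not the plugin's actual code.

```typescript
// Hedged sketch of an OpenAI-compatible chat completion request.
// Prompt wording and names are assumptions, not the plugin's internals.
async function explainSelection(
  baseUrl: string,   // e.g. "https://api.openai.com/v1/"
  model: string,     // the `Endpoint` setting, e.g. "gpt-4o"
  apiKey: string,
  selection: string,
  context: string
): Promise<string> {
  const response = await fetch(`${baseUrl}chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [
        {
          role: "user",
          content: `Explain "${selection}" in the context of:\n\n${context}`,
        },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```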
There are three supported ways to set this up:
1. Using OpenAI
2. Using Ollama
3. Running a custom local or remote model

## Using OpenAI (recommended)
The `Endpoint` setting defines the LLM used for text generation. If you are using OpenAI models, possible values for this field include:
- `gpt-3.5-turbo`
- `gpt-4o`
- `gpt-4-turbo`

If this is the case for you, you will also need to populate the OpenAI key field with your API key from OpenAI.
If you wish to use another OpenAI model, perform an advanced custom setup with the `Base URL` set to `https://api.openai.com/v1/` and the `Endpoint` set to your desired OpenAI model.
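For illustration, a standard OpenAI setup maps onto the plugin's settings roughly as follows; the field names below are descriptive assumptions, not the plugin's internal keys:

```typescript
// Assumed shape of the plugin settings; names are illustrative only.
interface ExplainSettings {
  baseUrl: string;  // left at the OpenAI default in this setup
  endpoint: string; // the model name
  apiKey: string;   // your OpenAI API key
}

const openAiSetup: ExplainSettings = {
  baseUrl: "https://api.openai.com/v1/",
  endpoint: "gpt-4o",
  apiKey: "sk-...", // placeholder; replace with your own key
};
```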
## Using Ollama
[Ollama](https://www.ollama.com/) is an open-source tool for managing and running large language models locally. It supports the [OpenAI Chat Completion API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) format, so it has been added as an easy integration with built-in support for the `llama3` and `mistral` models.
If you wish to use other models at this time, follow the next section on configuring alternative local models: set the `Base URL` to point at your Ollama instance (`http://localhost:11434/v1/` by default) and set the `Endpoint` to your desired [Ollama model](https://www.ollama.com/library).
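To confirm that a local Ollama instance answers OpenAI-format requests before pointing the plugin at it, a quick check along these lines may help (the model name is just an example; a default local install needs no API key):

```typescript
// Sanity-check an OpenAI-format chat request against local Ollama.
const res = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3", // any model you have pulled with `ollama pull`
    messages: [{ role: "user", content: "Say hello." }],
  }),
});
const body = await res.json();
console.log(body.choices[0].message.content);
```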
## Using an alternative local or remote model (advanced)
If you are not familiar with [Hugging Face Text Generation Inference (TGI)](https://huggingface.co/docs/text-generation-inference/en/index), this route is not recommended. Having access to a model spun up locally or remotely is a prerequisite for successfully using this interface and is beyond the scope of this documentation.
With this in mind, provide the `Base URL` and `Endpoint` settings, along with an optional `API Key` if your setup requires one, to interface with your local or remote model.
This is ultimately "under-the-hood" access to the Chat API, offered as an option for more advanced users or restricted endpoints.
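As a concrete sketch, a self-hosted server exposing the OpenAI-compatible chat route might be configured like this; the URL, model identifier, and key handling below are placeholders under that assumption, not documented defaults:

```typescript
// Placeholder values for a self-hosted OpenAI-compatible endpoint.
const custom = {
  baseUrl: "http://localhost:8080/v1/", // wherever your server listens
  endpoint: "tgi",                      // the model identifier your server expects
  apiKey: "",                           // leave empty unless your server enforces auth
};

// Only send an Authorization header when a key is actually configured.
const headers: Record<string, string> = { "Content-Type": "application/json" };
if (custom.apiKey) headers["Authorization"] = `Bearer ${custom.apiKey}`;
```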