Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Darkdriller/PowerToys-Run-LocalLLm
use Local LLM on PowerToys Run
llms ollama-api powertoys-run-plugin
Last synced: 4 months ago
- Host: GitHub
- URL: https://github.com/Darkdriller/PowerToys-Run-LocalLLm
- Owner: Darkdriller
- License: MIT
- Created: 2024-08-19T22:02:12.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2024-08-28T13:08:58.000Z (4 months ago)
- Last Synced: 2024-08-28T14:33:44.612Z (4 months ago)
- Topics: llms, ollama-api, powertoys-run-plugin
- Language: C#
- Homepage:
- Size: 812 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
Awesome Lists containing this project
- awesome-powertoys-run-plugins - LocalLLM - Query local LLM models with Ollama. (Plugins)
README
# Local LLM
A PowerToys Run plugin that lets you query LLMs running locally on an Ollama endpoint.
![Screenshot](screenshots/screenshot1.png)
## Note
Ollama does not have an ARM64 Windows release yet, so this plugin does not have an ARM64 release either. One will be added once Ollama ships a stable ARM64 version.
## Ollama (Prerequisite)
You need to install Ollama locally.

### Step 1: Install Ollama
1. **Visit the Ollama Website**:
- Go to the [Ollama website](https://ollama.com) to download the latest version of Ollama.
2. **Download and Install Ollama**:
- **Linux(wsl)**:
- Follow the specific instructions provided on the Ollama website for your Linux distribution.
- **Windows**:
- Follow the specific instructions provided on the Ollama website for Windows.
3. **Verify Installation**:
- Open a terminal and run the following command to verify that Ollama is installed:
```bash
ollama --version
```
- This should display the installed version of Ollama.

### Step 2: Download and Run the Llama 3.1 Model
1. **Set Up the Llama 3.1 Model**:
- Use the following command to download the Llama 3.1 model:
```bash
ollama pull llama3.1
```
- This will download the necessary files to run the Llama 3.1 model on your machine.
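To confirm the download finished, you can list the models Ollama has available locally:

```bash
# Lists locally available models; llama3.1 should appear in the output.
ollama list
```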
### Step 3: Use the Llama 3.1 Model in Your Application

1. **Run Llama 3.1 via API**:
- We need Ollama's HTTP API, so start the API server (if Ollama is already running in the background, this step may be unnecessary):
```bash
ollama serve
```
- Then send a POST request to verify that the endpoint is responding:
```bash
curl http://localhost:11434/api/generate -d '{
"model": "llama3.1",
"prompt": "Your prompt here"
}'
```
- This will return a response generated by the Llama 3.1 model, streamed as a series of JSON chunks.
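If you would rather receive the whole reply as a single JSON object, the API also accepts a `stream` flag; a small variation on the request above (the prompt is just a placeholder):

```bash
# "stream": false returns one JSON object instead of newline-delimited chunks.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Your prompt here",
  "stream": false
}'
```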
### Additional Resources

- **Ollama Documentation**: Visit the [Ollama Documentation](https://ollama.com/docs) for more detailed information on using Ollama and its API.
- **Llama 3.1 Information**: Check the [Llama 3.1 details](https://ollama.com/models/llama3.1) on the Ollama website for specifics about the model.

## Usage
You need to have the Ollama endpoint running; you can check by going to [http://localhost:11434/](http://localhost:11434/). Depending on your GPU, it may take some time for the LLM to generate a response, since it runs on your own hardware.
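You can also check from a terminal; the root endpoint returns a short plain-text status message when the server is reachable:

```bash
# Quick liveness check for the local Ollama server.
curl http://localhost:11434/
```

Once the endpoint is up, query the model from PowerToys Run with the plugin's activation keyword, e.g.: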
```
llm what is the capital of india
```

## Installation
1. Download the latest release of the Local LLM from the releases page.
2. Extract the zip file's contents to your PowerToys modules directory for the user (`%LOCALAPPDATA%\Microsoft\PowerToys\PowerToys Run\Plugins`).
3. Restart PowerToys.

Shoutout to [@Advaith3600](https://github.com/Advaith3600) for inspiring me and helping me build this plugin.