https://github.com/explicit-logic/light-ai
Easy-to-use single-file executable to run LLMs locally on your machine
- Host: GitHub
- URL: https://github.com/explicit-logic/light-ai
- Owner: explicit-logic
- License: MIT
- Created: 2024-12-29T10:31:36.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2025-01-19T03:47:45.000Z (5 months ago)
- Last Synced: 2025-01-19T04:29:10.787Z (5 months ago)
- Language: TypeScript
- Size: 354 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
README
Light AI
========
Easy-to-use single-file executable to run LLMs locally on your machine
## Getting Started
1. **Download the right Light AI server for your operating system** (see the Download section below).
2. Run `./light-ai` (`./light-ai.exe` on Windows) from the command line.
3. Open Swagger (`http://0.0.0.0:8000/swagger`) and pick the `id` of one of the available models from `GET /v1/models`.
4. Restart the server with the model `id` passed as a parameter (for example: `./light-ai -m llama3.2-1b-instruct`).

That's it! Your personal AI server is ready to use. Try `POST /v1/ask` and `POST /v1/completion` to check.
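A condensed version of these steps on the command line, assuming the server listens on the default `http://0.0.0.0:8000` used throughout this README:

```sh
# Start the server with defaults
./light-ai

# List the available model ids
curl http://0.0.0.0:8000/v1/models

# Restart the server with the chosen model id
./light-ai -m llama3.2-1b-instruct
```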
## Download
| Platform | Binary |
| -------- | ------ |
| macOS (Apple Silicon) | `light-ai` |
| macOS (Intel) | `light-ai` |
| Linux | `light-ai` |
| Windows | `light-ai.exe` |
## Usage
```sh
./light-ai -p 8000 -m llama3.2-1b-instruct
```

| Argument | Explanation |
| -------- | ----------- |
| `-p, --port` | Port to listen on (__Optional__) |
| `-m, --model` | Model name (__Optional__) |

## API Endpoints
### POST `/v1/ask`: Get a quick reply for a given `prompt`
*Options:*

- `prompt`: The prompt to get a reply for (__Required__)
- `model`: Model name (__Optional__)
- `grammar`: Grammar for grammar-based sampling (__Optional__)
- `schema`: JSON schema to constrain the response (__Optional__)

For example:
```sh
curl http://0.0.0.0:8000/v1/ask --header 'Content-Type: application/json' --data '{"prompt": "Is an apple more expensive than a banana?"}'
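
# A sketch of a schema-constrained request via the optional `schema` field;
# the exact payload shape shown here is an assumption based on the option list above
curl http://0.0.0.0:8000/v1/ask \
  --header 'Content-Type: application/json' \
  --data '{"prompt": "Is an apple more expensive than a banana?", "schema": {"type": "object", "properties": {"answer": {"type": "boolean"}}}}'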
```

### POST `/v1/completion`: Given a `prompt`, it returns the predicted completion
*Options:*

- `prompt`: The prompt for this completion, as a string (__Required__)
- `model`: Model name (__Optional__)

For example:
```sh
curl http://0.0.0.0:8000/v1/completion --header 'Content-Type: application/json' --data '{"prompt": "Here is a list of sweet fruits:"}'
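
# Sketch: pin a specific model for this completion; llama3.2-1b-instruct is the
# model id used elsewhere in this README, not a guaranteed default
curl http://0.0.0.0:8000/v1/completion \
  --header 'Content-Type: application/json' \
  --data '{"prompt": "Here is a list of sweet fruits:", "model": "llama3.2-1b-instruct"}'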
```

### GET `/v1/models`: List of models
### POST `/v1/models/pull`: Pull a model
*Options:*

- `model`: Model name (__Required__)
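A minimal sketch of pulling a model by name, assuming the endpoint accepts a JSON body like the other POST endpoints in this README:

```sh
# Pull a model by name (the `model` field is required)
curl http://0.0.0.0:8000/v1/models/pull \
  --header 'Content-Type: application/json' \
  --data '{"model": "llama3.2-1b-instruct"}'
```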
## Acknowledgements
* Bun: [oven-sh/bun](https://github.com/oven-sh/bun)
* node-llama-cpp: [withcatai/node-llama-cpp](https://github.com/withcatai/node-llama-cpp)