Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/hazelnutcloud/godot-llama-cpp
Run large language models in Godot.
JSON representation
Run large language models in Godot.
- Host: GitHub
- URL: https://github.com/hazelnutcloud/godot-llama-cpp
- Owner: hazelnutcloud
- License: mit
- Created: 2024-02-08T02:46:59.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2024-07-05T08:01:42.000Z (7 months ago)
- Last Synced: 2024-09-30T04:58:48.518Z (4 months ago)
- Topics: godot, llamacpp
- Language: C++
- Homepage:
- Size: 86.9 KB
- Stars: 29
- Watchers: 3
- Forks: 5
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE.md
Awesome Lists containing this project
README
# godot-llama-cpp
Run large language models in [Godot](https://godotengine.org). Powered by [llama.cpp](https://github.com/ggerganov/llama.cpp).
![Godot v4.2](https://img.shields.io/badge/Godot-v4.2-%23478cbf?logo=godot-engine&logoColor=white)
![GitHub last commit](https://img.shields.io/github/last-commit/hazelnutcloud/godot-llama-cpp)
![GitHub License](https://img.shields.io/github/license/hazelnutcloud/godot-llama-cpp)

## Overview
This library aims to provide a high-level interface to run large language models in Godot, following Godot's node-based design principles.
```gdscript
@onready var llama_context = %LlamaContext

var messages = [
	{ "sender": "system", "text": "You are a pirate chatbot who always responds in pirate speak!" },
	{ "sender": "user", "text": "Who are you?" }
]
var prompt = ChatFormatter.apply("llama3", messages)
var completion_id = llama_context.request_completion(prompt)

while true:
	var response = await llama_context.completion_generated
	print(response["text"])
	if response["done"]:
		break
```

## Features
- Platform and compute backend support:
| Platform | CPU | Metal | Vulkan | CUDA |
|----------|-----|-------|--------|------|
| macOS | ✅ | ✅ | ❌ | ❌ |
| Linux | ✅ | ❌ | ✅ | 🚧 |
| Windows | ✅ | ❌ | 🚧 | 🚧 |
- Asynchronous completion generation
- Support for any language model that llama.cpp supports, in GGUF format
- GGUF files are Godot resources

## Roadmap
- [ ] Chat completion support via a dedicated Jinja2 templating library written in Zig
- [ ] Grammar support
- [ ] Multimodal models support
- [ ] Embeddings
- [ ] Vector database using LibSQL

## Building & Installation
1. Download Zig v0.13.0 from https://ziglang.org/download/
2. Clone the repository:
```bash
git clone --recurse-submodules https://github.com/hazelnutcloud/godot-llama-cpp.git
```
3. Copy the `godot-llama-cpp` addon folder in `godot/addons` to your Godot project's `addons` folder.
```bash
cp -r godot-llama-cpp/godot/addons/godot-llama-cpp /path/to/your/project/addons
```
4. Build the extension and install it in your Godot project:
```bash
cd godot-llama-cpp
zig build --prefix /path/to/your/project/addons/godot-llama-cpp
```
5. Enable the plugin in your Godot project settings.
6. Add the `LlamaContext` node to your scene.
7. Run your Godot project.
8. Enjoy!

## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE.md) file for details.
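As a quick reference, steps 1 through 4 of the installation amount to the following shell session. The `/path/to/your/project` path is a placeholder for your own Godot project's location, not part of the repository's instructions:

```shell
# Step 1: install Zig v0.13.0 first (https://ziglang.org/download/), then:

# Step 2: clone the extension with its submodules (llama.cpp is vendored)
git clone --recurse-submodules https://github.com/hazelnutcloud/godot-llama-cpp.git
cd godot-llama-cpp

# Step 3: copy the addon skeleton into your project (placeholder path)
cp -r godot/addons/godot-llama-cpp /path/to/your/project/addons

# Step 4: build the extension and install it next to the addon
zig build --prefix /path/to/your/project/addons/godot-llama-cpp
```

After this, enabling the plugin in the Godot project settings (step 5) picks up the freshly built binaries.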