https://github.com/kassane/ollama-d
D bindings for the Ollama API
- Host: GitHub
- URL: https://github.com/kassane/ollama-d
- Owner: kassane
- License: MIT
- Created: 2025-03-19T20:51:07.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2025-03-20T13:46:38.000Z (7 months ago)
- Last Synced: 2025-03-20T14:37:50.580Z (7 months ago)
- Topics: ai, d, dlang, llama, llm, ollama, ollama-api, ollama-client
- Language: D
- Homepage: http://ollama-d.dub.pm/
- Size: 10.7 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# ollama-d
[D runtime](https://dlang.org/download.html)
[CI](https://github.com/kassane/ollama-d/actions/workflows/ci.yml)

D language bindings for the Ollama REST API - seamless integration with local AI models.
## Features
- Text generation with the native Ollama API
- Chat interactions with local AI models
- Model management (list, create, show, pull, push, copy, delete models)
- Configurable timeout settings
- Simple and intuitive API design using `std.net.curl` and `std.json`
- Server version retrieval
- OpenAI-compatible API endpoints

## Prerequisites
- [D compiler](https://dlang.org/download.html) installed on your system
- Ollama server running locally (default: "http://127.0.0.1:11434")
- Installed AI model (e.g., "llama3.2")

## Quick Examples
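To pull the library into your own project, it can be added as a dub dependency (a minimal `dub.json` sketch; `"*"` is a placeholder constraint, check [ollama-d.dub.pm](http://ollama-d.dub.pm/) for the current release):

```json
{
    "dependencies": {
        "ollama-d": "*"
    }
}
```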
```d
import ollama;
import std.stdio;

void main() {
    // Initialize Ollama client on localhost at port 11434
    auto client = new OllamaClient();

    // Text generation
    auto generateResponse = client.generate("llama3.2", "Why is the sky blue?");
    writeln("Generate Response: ", generateResponse["response"].str);

    // Chat interaction
    Message[] messages = [Message("user", "Hello, how are you?")];
    auto chatResponse = client.chat("llama3.2", messages);
    writeln("Chat Response: ", chatResponse["message"]["content"].str);

    // List available models
    auto models = client.listModels();
    writeln("Available Models: ", models);

    // OpenAI-compatible chat completions
    auto openaiResponse = client.chatCompletions("llama3.2", messages, 50, 0.7);
    writeln("OpenAI-style Response: ", openaiResponse["choices"][0]["message"]["content"].str);

    // Get server version (`version` is a D keyword, so use another name)
    auto ver = client.getVersion();
    writeln("Ollama Server Version: ", ver);
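
    // Hedged sketch (not from the original example): requests go over HTTP
    // via std.net.curl, so any call may throw (e.g. a CurlException) when
    // the server is unreachable or a model is missing; catch and report
    // instead of letting the program crash.
    try {
        client.generate("llama3.2", "ping");
    } catch (Exception e) {
        writeln("Request failed: ", e.msg);
    }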
}
```

## Additional Methods
- `generate()`: Text generation with custom options
- `chat()`: Conversational interactions
- `listModels()`: Retrieve available models
- `showModel()`: Get detailed model information
- `createModel()`: Create custom models
- `copy()`: Copy existing models
- `deleteModel()`: Remove models from server
- `pull()`: Download models from registry
- `push()`: Upload models to registry
- `chatCompletions()`: OpenAI-compatible chat endpoint
- `completions()`: OpenAI-compatible text completion
- `getModels()`: List models in OpenAI-compatible format
- `setTimeOut()`: Configure request timeout duration
- `getVersion()`: Retrieve Ollama server version

## License
MIT License