Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/lebrunel/omni
One client for all LLMs. Universal Elixir chat completion API client.
- Host: GitHub
- URL: https://github.com/lebrunel/omni
- Owner: lebrunel
- License: apache-2.0
- Created: 2024-06-18T16:01:16.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2024-06-21T16:05:16.000Z (7 months ago)
- Last Synced: 2024-10-10T06:45:45.637Z (3 months ago)
- Topics: anthropic, api-client, chatgpt, elixir, llms, ollama
- Language: Elixir
- Homepage: https://hexdocs.pm/omni
- Size: 116 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# Omni
![Omni](https://raw.githubusercontent.com/lebrunel/omni/main/media/poster.png)
![Hex.pm](https://img.shields.io/hexpm/v/omni?color=informational)
![License](https://img.shields.io/github/license/lebrunel/omni?color=informational)
![Build Status](https://img.shields.io/github/actions/workflow/status/lebrunel/omni/elixir.yml?branch=main)

Omni focusses on one thing only - being a chat interface to *any* LLM provider. If you want a full featured client for a specific provider, supporting all available API endpoints, this is probably not it. If you want a single client to generate chat completions with literally any LLM backend, Omni is for you.
- 🧩 `Omni.Provider` behaviour to create integrations with any LLM provider. Built-in providers for:
  - [`Anthropic`](`Omni.Providers.Anthropic`) - chat with any of the Claude models.
  - [`Google`](`Omni.Providers.Google`) - chat with any of the Gemini models.
  - [`Ollama`](`Omni.Providers.Ollama`) - use Ollama to chat with any local model.
  - [`OpenAI`](`Omni.Providers.OpenAI`) - chat with ChatGPT or **any** other OpenAI compatible API.
- 🛜 Streaming API requests
  - Stream to an Enumerable
  - Or stream messages to any Elixir process
- 💫 Simple to use and easily customisable

## Installation
The package can be installed by adding `omni` to your list of dependencies in `mix.exs`.
```elixir
def deps do
  [
    {:omni, "~> 0.1"}
  ]
end
```

## Quickstart
To chat with an LLM, initialize a `t:provider/0` with `init/2`, and then send a `t:request/0`, using one of `generate/2`, `async/2` or `stream/2`. Refer to the schema documentation for each provider to ensure you construct a valid request.
```elixir
iex> provider = Omni.init(:openai)
iex> Omni.generate(provider, model: "gpt-4o", messages: [
...> %{role: "user", content: "Write a haiku about the Greek Gods"}
...> ])
{:ok, %{"object" => "chat.completion", "choices" => [...]}}
```
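Each provider accepts its own request schema, so the options differ between backends. As a hedged illustration (the `:anthropic` atom and option names such as `:max_tokens` follow the underlying Anthropic API and are assumptions here, not taken from the provider's schema docs), an equivalent request against the built-in Anthropic provider might look like this:

```elixir
iex> provider = Omni.init(:anthropic)
iex> Omni.generate(provider, model: "claude-3-5-sonnet-20240620", max_tokens: 1024, messages: [
...> %{role: "user", content: "Write a haiku about the Greek Gods"}
...> ])
{:ok, %{"type" => "message", "content" => [...]}}
```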
## Streaming

Omni supports streaming requests through `async/2` or `stream/2`.
Calling `async/2` returns a `t:Task.t/0`, which asynchronously sends text delta messages to the calling process. Using the `:stream_to` request option allows you to control the receiving process.
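As a minimal sketch of those mechanics (reusing the Quickstart request; the exact shape of the streamed messages is provider specific and the timeout is arbitrary), a bare `async/2` call might look like this:

```elixir
provider = Omni.init(:openai)

# async/2 returns a Task; while the request runs, streaming messages are
# delivered to this process's mailbox (or to the process named in :stream_to).
{:ok, task} = Omni.async(provider, model: "gpt-4o", messages: [
  %{role: "user", content: "Write a haiku about the Greek Gods"}
])

# Awaiting the task yields the final result once the stream has finished.
{:ok, _response} = Task.await(task, 30_000)
```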
The fuller example below demonstrates making a streaming request from a LiveView event, with each streaming message sent back to the same LiveView process.
```elixir
defmodule MyApp.ChatLive do
  use Phoenix.LiveView

  # When the client invokes the "prompt" event, create a streaming request and
  # asynchronously send messages back to self.
  def handle_event("prompt", %{"message" => prompt}, socket) do
    {:ok, task} = Omni.async(Omni.init(:openai), [
      model: "gpt-4o",
      messages: [
        %{role: "user", content: prompt}
      ]
    ])

    {:noreply, assign(socket, current_request: task)}
  end

  # The streaming request sends messages back to the LiveView process.
  def handle_info({:omni, _pid, _data} = message, socket) do
    pid = socket.assigns.current_request.pid

    case message do
      {:omni, ^pid, {:chunk, %{"choices" => choices, "finish_reason" => nil}}} ->
        # handle each streaming chunk
        {:noreply, socket}

      {:omni, ^pid, {:chunk, %{"choices" => choices}}} ->
        # handle the final streaming chunk
        {:noreply, socket}
    end
  end

  # Tidy up when the request is finished
  def handle_info({ref, {:ok, _response}}, socket) do
    Process.demonitor(ref, [:flush])
    {:noreply, assign(socket, current_request: nil)}
  end
end
```

Alternatively, use `stream/2` to collect the streaming responses into an `t:Enumerable.t/0` that can be used with Elixir's `Stream` functions.
```elixir
iex> provider = Omni.init(:openai)
iex> {:ok, stream} = Omni.stream(provider, model: "gpt-4o", messages: [
...> %{role: "user", content: "Write a haiku about the Greek Gods"}
...> ])
iex> stream
...> |> Stream.each(&IO.inspect/1)
...> |> Stream.run()
```

Because this function builds the `t:Enumerable.t/0` by calling `receive/1`, take care using `stream/2` inside `GenServer` callbacks as it may cause the GenServer to misbehave.
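In a plain script or IEx session, a hedged sketch of collecting the streamed output into a single string might look like the following (it assumes each stream element is a raw OpenAI-style chunk map with text deltas under `"choices"` / `"delta"` / `"content"`; other providers will need a different accessor):

```elixir
provider = Omni.init(:openai)

{:ok, stream} = Omni.stream(provider, model: "gpt-4o", messages: [
  %{role: "user", content: "Write a haiku about the Greek Gods"}
])

# Pull the text deltas out of each chunk and join them into the final poem.
text =
  stream
  |> Stream.flat_map(fn chunk -> Map.get(chunk, "choices", []) end)
  |> Stream.map(fn choice -> get_in(choice, ["delta", "content"]) || "" end)
  |> Enum.join()

IO.puts(text)
```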
## License
This package is open source and released under the [Apache-2 License](https://github.com/lebrunel/omni/blob/master/LICENSE).
© Copyright 2024 [Push Code Ltd](https://www.pushcode.com/).