openai-api
https://github.com/akr4/openai-api
Rust client for the OpenAI API, supporting streaming mode of the chat completion endpoint.
- Host: GitHub
- URL: https://github.com/akr4/openai-api
- Owner: akr4
- License: MIT
- Created: 2023-03-27T00:08:07.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-05-14T10:58:36.000Z (6 months ago)
- Last Synced: 2024-05-14T11:56:47.412Z (6 months ago)
- Topics: openai-api, rust
- Language: Rust
- Homepage:
- Size: 13.7 KB
- Stars: 0
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# openai-api
Rust client for the OpenAI API, supporting streaming mode of the chat completion endpoint.
## Supported Endpoints
- Completion: POST https://api.openai.com/v1/completions
- Chat Completion: POST https://api.openai.com/v1/chat/completions (with streaming mode support)
- Embeddings: POST https://api.openai.com/v1/embeddings

## Getting Started
Add the following to your `Cargo.toml`:
```toml
[dependencies]
openai-api = { git = "https://github.com/akr4/openai-api.git" }
```

The examples below also use `tokio` (for the async runtime and the `#[tokio::main]` macro) and, in streaming mode, `futures-util` (for `StreamExt`), so those crates need to be dependencies as well.

Put this in your crate root:
```rust
use std::env;

use openai_api::chat;

#[tokio::main]
async fn main() {
    // create request
    let request = chat::CompletionRequest {
        model: chat::Model::Gpt35Turbo,
        temperature: Some(1.0),
        messages: vec![
            chat::Message {
                role: chat::MessageRole::User,
                content: "Hello".to_string(),
            }
        ],
    };

    // call completion endpoint
    let response = chat::completion(
        env::var("OPENAI_API_KEY").expect("environment variable OPENAI_API_KEY is not found."),
        &request,
    )
    .await
    .unwrap();

    // show response text
    println!("{}", response.choices[0].message.content);
}
```
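In a real application you will probably want to propagate errors rather than call `unwrap()`. The following is a minimal sketch, built from the same calls shown above; the `ask` helper is a hypothetical name, and boxing the crate's error as `dyn Error` assumes it implements `std::error::Error` (not verified against the source):

```rust
use std::env;
use std::error::Error;

use openai_api::chat;

// Hypothetical helper: send one user message and return the reply text.
// Assumes the crate's error type can be boxed as `dyn Error`.
async fn ask(api_key: String, prompt: &str) -> Result<String, Box<dyn Error>> {
    let request = chat::CompletionRequest {
        model: chat::Model::Gpt35Turbo,
        temperature: Some(1.0),
        messages: vec![
            chat::Message {
                role: chat::MessageRole::User,
                content: prompt.to_string(),
            }
        ],
    };
    // `?` propagates the error instead of panicking.
    let response = chat::completion(api_key, &request).await?;
    Ok(response.choices[0].message.content.clone())
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // A missing OPENAI_API_KEY now surfaces as an error, not a panic.
    let api_key = env::var("OPENAI_API_KEY")?;
    let reply = ask(api_key, "Hello").await?;
    println!("{reply}");
    Ok(())
}
```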
### Streaming Mode

```rust
use std::env;

use futures_util::StreamExt;
use openai_api::{chat, chat_stream};

#[tokio::main]
async fn main() {
    // create request
    let request = chat::CompletionRequest {
        model: chat::Model::Gpt35Turbo,
        temperature: Some(1.0),
        messages: vec![
            chat::Message {
                role: chat::MessageRole::User,
                content: "Hello".to_string(),
            }
        ],
    };

    // call completion endpoint
    let mut response = chat_stream::completion(
        env::var("OPENAI_API_KEY").expect("environment variable OPENAI_API_KEY is not found."),
        &request,
    )
    .await
    .unwrap();

    // receive response
    let mut response_text = String::new();
    while let Some(response) = response.next().await {
        match response {
            Ok(response) => {
                if let Some(text_chunk) = response.choices[0].delta.content.clone() {
                    print!("{text_chunk}");
                    response_text.push_str(&text_chunk);
                }
            }
            Err(err) => {
                println!("{:?}", err);
                break;
            }
        }
    }

    // show response text
    println!("\nResponse Text: {response_text}");
}
```
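If you consume streams in several places, the receive loop can be factored into a small helper. The sketch below uses only the calls shown in the example above; `stream_to_string` is a hypothetical name, and the boxed error type is again an assumption about the crate's error type:

```rust
use futures_util::StreamExt;
use openai_api::{chat, chat_stream};

// Hypothetical helper: run a streaming chat completion and accumulate
// the streamed chunks into a single String. The `Box<dyn Error>` return
// type assumes the crate's error implements `std::error::Error`.
async fn stream_to_string(
    api_key: String,
    request: &chat::CompletionRequest,
) -> Result<String, Box<dyn std::error::Error>> {
    let mut stream = chat_stream::completion(api_key, request).await?;
    let mut text = String::new();
    while let Some(chunk) = stream.next().await {
        // Each item is a Result; not every chunk carries new content.
        if let Some(delta) = chunk?.choices[0].delta.content.clone() {
            text.push_str(&delta);
        }
    }
    Ok(text)
}
```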