{"id":15031417,"url":"https://github.com/neferdata/allms","last_synced_at":"2026-02-13T05:06:55.016Z","repository":{"id":207612777,"uuid":"719621329","full_name":"neferdata/allms","owner":"neferdata","description":"allms: One Rust Library to rule them aLLMs","archived":false,"fork":false,"pushed_at":"2024-05-21T18:23:42.000Z","size":1080,"stargazers_count":31,"open_issues_count":5,"forks_count":3,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-05-22T13:48:49.621Z","etag":null,"topics":["anthropic","openai","rust","rustlang"],"latest_commit_sha":null,"homepage":"https://crates.io/crates/allms","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/neferdata.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE-APACHE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-11-16T14:55:20.000Z","updated_at":"2024-06-21T12:19:20.658Z","dependencies_parsed_at":"2024-06-21T12:19:16.838Z","dependency_job_id":"51f45361-565a-47e3-97b0-e327f11ce5ad","html_url":"https://github.com/neferdata/allms","commit_stats":null,"previous_names":["neferdata/openai-rs","neferdata/openai-typesafe-rs","neferdata/openai-safe","neferdata/aidapter"],"tags_count":12,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/neferdata%2Fallms","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/neferdata%2Fallms/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/neferdata%2Fallms/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/neferdata%2F
allms/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/neferdata","download_url":"https://codeload.github.com/neferdata/allms/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248109802,"owners_count":21049379,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["anthropic","openai","rust","rustlang"],"created_at":"2024-09-24T20:15:38.149Z","updated_at":"2026-02-13T05:06:55.011Z","avatar_url":"https://github.com/neferdata.png","language":"Rust","readme":"# allms: One Library to rule them aLLMs\n[![crates.io](https://img.shields.io/crates/v/allms.svg)](https://crates.io/crates/allms)\n[![docs.rs](https://docs.rs/allms/badge.svg)](https://docs.rs/allms)\n\nThis Rust library specializes in providing type-safe interactions with the APIs of the following LLM providers: Anthropic, AWS Bedrock, Azure, DeepSeek, Google Gemini, Mistral, OpenAI, Perplexity, xAI. (More providers to be added in the future.) It's designed to simplify the process of experimenting with different models. It de-risks the process of migrating between providers, reducing vendor lock-in. It also standardizes the serialization of requests sent to LLM APIs and the interpretation of responses, ensuring that JSON data is handled in a type-safe manner. 
With allms you can focus on creating effective prompts and providing the LLM with the right context, instead of worrying about differences in API implementations.\n\n## Features\n\n- Support for various foundational LLM providers, including Anthropic, AWS Bedrock, Azure, DeepSeek, Google Gemini, OpenAI, Mistral, Perplexity, and xAI.\n- Easy-to-use functions for chat/text completions and assistants. Use the same struct and methods regardless of which model you choose.\n- Automated response deserialization to custom types.\n- Standardized approach to providing context with support for function calling, tools, and file uploads.\n- Enhanced developer productivity with automated token calculations, rate limits, and a debug mode.\n- Extensibility enabling easy adoption of other models via a standardized trait.\n- Asynchronous support using Tokio.\n\n### Foundational Models\nAnthropic:\n- APIs: Messages, Text Completions\n- Models: Claude Opus 4.6, Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5, Claude Opus 4.1, Claude Sonnet 4, Claude Opus 4, Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku, Claude 2.0, Claude Instant 1.2\n- Tools: file search, web search, code interpreter, computer use\n\nAWS Bedrock:\n- APIs: Converse\n- Models: Nova Micro, Nova Lite, Nova Pro (additional models to be added)\n\nAzure OpenAI:\n- APIs: Completions, Responses, Assistants, Files, Vector Stores, Tools\n    - API version can be set using the `AzureVersion` variant\n- Models: as per model deployments in Azure OpenAI Studio\n    - If using custom model deployment names, please use the `Custom` variant of `OpenAIModels`\n\nDeepSeek:\n- APIs: Chat Completion\n- Models: DeepSeek-V3, DeepSeek-R1\n\nGoogle Gemini:\n- APIs: Chat Completions (including streaming)\n    - Via Vertex AI or AI Studio\n- Models: Gemini 3 Pro (Preview), Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash-Lite, Gemini 2.0 Flash, Gemini 2.0 Flash-Lite, Gemini 1.5 Pro, Gemini 1.5 Flash, 
Gemini 1.5 Flash-8B\n    - Experimental models: Gemini 2.0 Pro, Gemini 2.0 Flash-Thinking\n    - Fine-tuned models: in Vertex AI only, using the endpoint constructor\n- Tools: Google Search, code execution\n\nMistral:\n- APIs: Chat Completions\n- Models:\n    - Multimodal: Mistral Large 2.1, Mistral Medium 3.1, Mistral Medium 3, Mistral Small 3.2, Mistral Small 3.1, Mistral Small 3, Mistral Small 2\n    - Reasoning: Magistral Medium 1.2, Magistral Medium, Magistral Small 1.2\n    - Other: Codestral 2508, Codestral 2, Ministral 3B, Ministral 8B\n    - Legacy models: Mistral Large, Mistral Nemo, Mistral 7B, Mixtral 8x7B, Mixtral 8x22B, Mistral Medium, Mistral Small, Mistral Tiny\n- Tools: web search, code interpreter\n\nOpenAI:\n- APIs: Chat Completions, Responses, Function Calling, Assistants (v1 \u0026 v2), Files, Vector Stores\n- Models:\n    - Chat Completions \u0026 Responses only: GPT-5.2, GPT-5.2 Pro, GPT-5.1, o1, o1 Preview, o1 Mini, o1 Pro, o3, o3 Mini, o4 Mini\n    - Chat Completions, Responses \u0026 Assistants: GPT-5, GPT-5-mini, GPT-5-nano, GPT-4.5-Preview, GPT-4o, GPT-4, GPT-4 32k, GPT-4 Turbo, GPT-3.5 Turbo, GPT-3.5 Turbo 16k, fine-tuned models (via `Custom` variant)\n- Tools: file search, web search, code interpreter, computer use\n\nPerplexity:\n- APIs: Chat Completions\n- Models: Sonar, Sonar Pro, Sonar Reasoning\n    - The following legacy models will be supported until February 22, 2025: Llama 3.1 Sonar Small, Llama 3.1 Sonar Large, Llama 3.1 Sonar Huge\n\nxAI:\n- APIs: Chat Completions, Responses\n- Models: Grok 4.1 Fast Thinking, Grok 4.1 Fast Non Thinking, Grok 4, Grok 4 Fast Thinking, Grok 4 Fast Non Thinking, Grok Code Fast 1, Grok 3, Grok 3 Mini, Grok 3 Fast, Grok 3 Mini Fast\n- Tools: web search, X search\n\n### Prerequisites\n- Anthropic: API key (passed in model constructor)\n- AWS Bedrock: environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` set as per AWS settings.\n- Azure OpenAI: environment 
variable `OPENAI_API_URL` set to your Azure OpenAI resource endpoint. The endpoint key is passed in the constructor.\n- DeepSeek: API key (passed in model constructor)\n- Google AI Studio: API key (passed in model constructor)\n- Google Vertex AI: GCP service account key (used to obtain access token) + GCP project ID (set as environment variable)\n- Mistral: API key (passed in model constructor)\n- OpenAI: API key (passed in model constructor)\n- Perplexity: API key (passed in model constructor)\n- xAI: API key (passed in model constructor)\n\n### Examples\nExplore the `examples` directory to see more use cases and how to use different LLM providers and endpoint types.\n\nUsing the `Completions` API with different foundational models:\n```\nlet anthropic_answer = Completions::new(AnthropicModels::Claude4Sonnet, \u0026API_KEY, None, None)\n    .get_answer::\u003cT\u003e(instructions)\n    .await?;\n\nlet aws_bedrock_answer = Completions::new(AwsBedrockModels::NovaLite, \"\", None, None)\n    .get_answer::\u003cT\u003e(instructions)\n    .await?;\n\nlet deepseek_answer = Completions::new(DeepSeekModels::DeepSeekReasoner, \u0026API_KEY, None, None)\n    .get_answer::\u003cT\u003e(instructions)\n    .await?;\n\nlet google_answer = Completions::new(GoogleModels::Gemini2_5Flash, \u0026API_KEY, None, None)\n    .get_answer::\u003cT\u003e(instructions)\n    .await?;\n\nlet mistral_answer = Completions::new(MistralModels::MistralMedium3, \u0026API_KEY, None, None)\n    .get_answer::\u003cT\u003e(instructions)\n    .await?;\n\nlet openai_answer = Completions::new(OpenAIModels::Gpt4_1Mini, \u0026API_KEY, None, None)\n    .get_answer::\u003cT\u003e(instructions)\n    .await?;\n\nlet openai_responses_answer = Completions::new(OpenAIModels::Gpt4_1Mini, \u0026API_KEY, None, None)\n    .version(\"openai_responses\")\n    .get_answer::\u003cT\u003e(instructions)\n    .await?;\n\nlet perplexity_answer = Completions::new(PerplexityModels::SonarPro, \u0026API_KEY, None, None)\n    
.get_answer::\u003cT\u003e(instructions)\n    .await?;\n\nlet xai_answer = Completions::new(XAIModels::Grok3Mini, \u0026API_KEY, None, None)\n    .get_answer::\u003cT\u003e(instructions)\n    .await?;\n```\n\nExample:\n```\nRUST_LOG=info RUST_BACKTRACE=1 cargo run --example use_completions\n```\n\nUsing the `Assistant` API to analyze your files with `File` and `VectorStore` capabilities:\n```\n// Create a File\nlet openai_file = OpenAIFile::new(None, \u0026API_KEY)\n    .upload(\u0026file_name, bytes)\n    .await?;\n\n// Create a Vector Store\nlet openai_vector_store = OpenAIVectorStore::new(None, \"Name\", \u0026API_KEY)\n    .upload(\u0026[openai_file.id.clone().unwrap_or_default()])\n    .await?;\n\n// Extract data using the Assistant\nlet openai_answer = OpenAIAssistant::new(OpenAIModels::Gpt4o, \u0026API_KEY)\n    .version(OpenAIAssistantVersion::V2)\n    .vector_store(openai_vector_store.clone())\n    .await?\n    .get_answer::\u003cT\u003e(instructions, \u0026[])\n    .await?;\n```\n\nExample:\n```\nRUST_LOG=info RUST_BACKTRACE=1 cargo run --example use_openai_assistant\n```\n\n## License\nThis project is licensed under a dual MIT/Apache-2.0 license. See the [LICENSE-MIT](LICENSE-MIT) and [LICENSE-APACHE](LICENSE-APACHE) files for details.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fneferdata%2Fallms","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fneferdata%2Fallms","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fneferdata%2Fallms/lists"}