{"id":14964751,"url":"https://github.com/samestrin/llm-interface","last_synced_at":"2025-04-04T21:05:45.942Z","repository":{"id":243964127,"uuid":"813906874","full_name":"samestrin/llm-interface","owner":"samestrin","description":"A simple NPM interface for seamlessly interacting with 36 Large Language Model (LLM) providers, including OpenAI, Anthropic, Google Gemini, Cohere, Hugging Face Inference, NVIDIA AI, Mistral AI, AI21 Studio, LLaMA.CPP, and Ollama, and hundreds of models.","archived":false,"fork":false,"pushed_at":"2025-03-23T01:22:31.000Z","size":1067,"stargazers_count":103,"open_issues_count":2,"forks_count":13,"subscribers_count":4,"default_branch":"main","last_synced_at":"2025-03-28T20:06:04.657Z","etag":null,"topics":["ai","ai21","anthropic","chatgpt","chatgpt-api","claude","cloudflare-ai","cohere","gemini","gooseai","groq","huggingface","llamacpp","llm","llm-interface","mistral","openai","perplexity","reka","watsonx-ai"],"latest_commit_sha":null,"homepage":"https://www.npmjs.com/package/llm-interface","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/samestrin.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-06-12T01:14:29.000Z","updated_at":"2025-03-25T23:09:09.000Z","dependencies_parsed_at":"2024-06-12T08:43:43.264Z","dependency_job_id":"2971a2af-94b8-4758-9ef4-f800d29e0040","html_url":"https://github.com/samestrin/llm-interface","commit_stats":{"total_commits":442,"total_committers":2,"mean_commits":221.0,"dds":0.08371040723981904,"last_synced_commit":"90b38e4f839b8b9d05aec006e6ce84fbd5a03885"},"previous_names":["samestrin/llm-interface"],"tags_count":13,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/samestrin%2Fllm-interface","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/samestrin%2Fllm-interface/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/samestrin%2Fllm-interface/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/samestrin%2Fllm-interface/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/samestrin","download_url":"https://codeload.github.com/samestrin/llm-interface/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247249524,"owners_count":20908212,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","ai21","anthropic","chatgpt","chatgpt-api","claude","cloudflare-ai","cohere","gemini","gooseai","groq","huggingface","llamacpp","llm","llm-interface","mistral","openai","perplexity","reka","watsonx-ai"],"created_at":"2024-09-24T13:33:43.675Z","updated_at":"2025-04-04T21
:05:45.920Z","avatar_url":"https://github.com/samestrin.png","language":"JavaScript","readme":"# llm-interface\n\n[![Star on GitHub](https://img.shields.io/github/stars/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/stargazers) [![Fork on GitHub](https://img.shields.io/github/forks/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/network/members) [![Watch on GitHub](https://img.shields.io/github/watchers/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/watchers)\n\n![Version 2.0.1495](https://img.shields.io/badge/Version-2.0.1495-blue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Built with Node.js](https://img.shields.io/badge/Built%20with-Node.js-green)](https://nodejs.org/)\n\n## Introduction\n\nLLM Interface is an npm module that streamlines your interactions with various Large Language Model (LLM) providers in your Node.js applications. It offers a unified interface, simplifying the process of switching between providers and their models.\n\nThe LLM Interface package offers comprehensive support for a wide range of language model providers, encompassing 36 different providers and hundreds of models. This extensive coverage ensures that you have the flexibility to choose the best models suited to your specific needs.\n\n## Extensive Support for 36 Providers and Hundreds of Models\n\nLLM Interface supports: **AI21 Studio, AiLAYER, AIMLAPI, Anyscale, Anthropic, Cloudflare AI, Cohere, Corcel, DeepInfra, DeepSeek, Fireworks AI, Forefront AI, FriendliAI, Google Gemini, GooseAI, Groq, Hugging Face Inference, HyperBee AI, Lamini, LLaMA.CPP, Mistral AI, Monster API, Neets.ai, Novita AI, NVIDIA AI, OctoAI, Ollama, OpenAI, Perplexity AI, Reka AI, Replicate, Shuttle AI, TheB.ai, Together AI, Voyage AI, Watsonx AI, Writer, and Zhipu AI**.\n\n\u003c!-- Support List --\u003e\n\n[![AI21 Studio](https://samestrin.github.io/media/llm-interface/icons/ai21.png)](/docs/providers/ai21.md) [![AIMLAPI](https://samestrin.github.io/media/llm-interface/icons/aimlapi.png)](/docs/providers/aimlapi.md) [![Anthropic](https://samestrin.github.io/media/llm-interface/icons/anthropic.png)](/docs/providers/anthropic.md) [![Anyscale](https://samestrin.github.io/media/llm-interface/icons/anyscale.png)](/docs/providers/anyscale.md) [![Cloudflare AI](https://samestrin.github.io/media/llm-interface/icons/cloudflareai.png)](/docs/providers/cloudflareai.md) [![Cohere](https://samestrin.github.io/media/llm-interface/icons/cohere.png)](/docs/providers/cohere.md) [![Corcel](https://samestrin.github.io/media/llm-interface/icons/corcel.png)](/docs/providers/corcel.md) [![DeepInfra](https://samestrin.github.io/media/llm-interface/icons/deepinfra.png)](/docs/providers/deepinfra.md) [![DeepSeek](https://samestrin.github.io/media/llm-interface/icons/deepseek.png)](/docs/providers/deepseek.md) [![Forefront AI](https://samestrin.github.io/media/llm-interface/icons/forefront.png)](/docs/providers/forefront.md) [![GooseAI](https://samestrin.github.io/media/llm-interface/icons/gooseai.png)](/docs/providers/gooseai.md) [![Lamini](https://samestrin.github.io/media/llm-interface/icons/lamini.png)](/docs/providers/lamini.md) [![Mistral AI](https://samestrin.github.io/media/llm-interface/icons/mistralai.png)](/docs/providers/mistralai.md) [![Monster API](https://samestrin.github.io/media/llm-interface/icons/monsterapi.png)](/docs/providers/monsterapi.md) 
## Updates

**v2.0.14**

- **Recovery Mode (Beta)**: Automatically repair invalid JSON objects in HTTP 400 response errors. Currently, this feature is only available with Groq.

**v2.0.11**

- **New LLM Providers**: Anyscale, Bigmodel, Corcel, Deepseek, Hyperbee AI, Lamini, Neets AI, Novita AI, NVIDIA, Shuttle AI, TheB.AI, and Together AI.
- **Caching**: Supports multiple caches: `simple-cache`, `flat-cache`, and `cache-manager`. _`flat-cache` is now an optional package._
- **Logging**: Improved logging with `loglevel`.
- **Improved Documentation**: Improved [documentation](docs/README.md) with new examples, a glossary, and provider details. Updated API key details, model alias breakdown, and usage information.
- **More Examples**: [LangChain.js RAG](examples/langchain/rag.js), [Mixture-of-Agents (MoA)](examples/moa/moa.js), and [more](docs/examples.md).
- **Removed Dependency**: `@anthropic-ai/sdk` is no longer required.

## Dependencies

The project relies on several npm packages and APIs. Here are the primary dependencies:

- `axios`: For making HTTP requests (used for various HTTP AI APIs).
- `@google/generative-ai`: SDK for interacting with the Google Gemini API.
- `dotenv`: For managing environment variables. Used by test cases.
- `jsonrepair`: Used to repair invalid JSON responses.
- `loglevel`: A minimal, lightweight logging library with level-based logging and filtering.

The following optional packages can be added to extend LLMInterface's caching capabilities (see the configuration sketch after this list):

- `flat-cache`: A simple JSON-based cache.
- `cache-manager`: An extendable cache module that supports various backends, including Redis, MongoDB, File System, Memcached, Sqlite, and more.
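As a rough sketch of enabling one of these optional caches (the `configureCache` method name and its options are assumptions here; check [usage](docs/usage.md) for the actual caching API):

```javascript
const { LLMInterface } = require('llm-interface');

// Assumed configuration entry point -- verify the real method name in docs/usage.md.
// `flat-cache` is optional, so install it first: npm install flat-cache
LLMInterface.configureCache({ cache: 'flat-cache', path: './cache' });
```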
## Installation

To install the LLM Interface npm module, use npm:

```bash
npm install llm-interface
```

## Quick Start

- Looking for [API Keys](/docs/api-keys.md)? This document provides helpful links.
- Detailed [usage](/docs/usage.md) documentation is available here.
- Various [examples](/examples) are also available to help you get started.
- A breakdown of [model aliases](/docs/models.md) is available here.
- A breakdown of [embeddings model aliases](/docs/embeddings.md) is available here.
- If you still want more, you may wish to review the [test cases](/test/) for further examples.

## Usage

First, import `LLMInterface` using the CommonJS `require` syntax:

```javascript
const { LLMInterface } = require('llm-interface');
```

Then send your prompt to the LLM provider:

```javascript
LLMInterface.setApiKey({ openai: process.env.OPENAI_API_KEY });

// Inside an async function (CommonJS modules do not support top-level await).
try {
  const response = await LLMInterface.sendMessage(
    'openai',
    'Explain the importance of low latency LLMs.',
  );
} catch (error) {
  console.error(error);
}
```

If you prefer, you can use a one-liner that passes the provider and API key together, skipping the `LLMInterface.setApiKey()` step:

```javascript
const response = await LLMInterface.sendMessage(
  ['openai', process.env.OPENAI_API_KEY],
  'Explain the importance of low latency LLMs.',
);
```

Passing a more complex message object is just as simple. The same rules apply:

```javascript
const message = {
  model: 'gpt-4o-mini',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
  ],
};

try {
  const response = await LLMInterface.sendMessage('openai', message, {
    max_tokens: 150,
  });
} catch (error) {
  console.error(error);
}
```

_`LLMInterfaceSendMessage` and `LLMInterfaceStreamMessage` are still available and will remain available until version 3._
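The feature list above also mentions embeddings support. Here is a minimal sketch, assuming an `embeddings` method that mirrors `sendMessage`; the method name and signature are assumptions, so see the [embeddings docs](/docs/embeddings.md) for the confirmed API:

```javascript
// Assumed to mirror sendMessage: provider name first, then the input string.
try {
  const embedding = await LLMInterface.embeddings(
    'openai',
    'Low latency LLMs enable real-time conversational experiences.',
  );
} catch (error) {
  console.error(error);
}
```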
## Running Tests

The project includes tests for each LLM handler. To run the tests, use the following command:

```bash
npm test
```

#### Current Test Results

```bash
Test Suites: 9 skipped, 93 passed, 93 of 102 total
Tests:       86 skipped, 784 passed, 870 total
Snapshots:   0 total
Time:        630.029 s
```

_Note: NVIDIA test cases are currently skipped due to API issues, and Ollama test cases due to performance issues._

## TODO

- [ ] Provider > Models > Azure AI
- [ ] Provider > Models > Grok
- [ ] Provider > Models > SiliconFlow
- [ ] Provider > Embeddings > Nomic
- [ ] _Feature > Image Generation?_

_Submit your suggestions!_

## Contribute

Contributions to this project are welcome. Please fork the repository and submit a pull request with your changes or improvements.

## License

This project is licensed under the MIT License - see the [LICENSE](/LICENSE) file for details.

## Blogs

- [Comparing 13 LLM Providers API Performance with Node.js: Latency and Response Times Across Models](https://dev.to/samestrin/comparing-13-llm-providers-api-performance-with-nodejs-latency-and-response-times-across-models-2ka4)

## Share

[![Twitter](https://img.shields.io/badge/X-Tweet-blue)](https://twitter.com/intent/tweet?text=Check%20out%20this%20awesome%20project!&url=https://github.com/samestrin/llm-interface) [![Facebook](https://img.shields.io/badge/Facebook-Share-blue)](https://www.facebook.com/sharer/sharer.php?u=https://github.com/samestrin/llm-interface) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Share-blue)](https://www.linkedin.com/sharing/share-offsite/?url=https://github.com/samestrin/llm-interface)