https://github.com/galiprandi/fastify-lm
fastify-lm is a Fastify plugin that simplifies integration with multiple language model (LM) providers through a unified interface, enabling seamless provider switching.
- Host: GitHub
- URL: https://github.com/galiprandi/fastify-lm
- Owner: galiprandi
- Created: 2025-03-04T04:00:53.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2025-03-31T23:48:07.000Z (3 months ago)
- Last Synced: 2025-03-31T23:51:05.628Z (3 months ago)
- Topics: claude, deepseek, fastify, gemini, mistral, openai, plugin, typescript
- Language: TypeScript
- Homepage: https://github.com/galiprandi/fastify-lm#readme
- Size: 229 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# fastify-lm
## What is fastify-lm?
`fastify-lm` is a **Fastify plugin** that simplifies integration with multiple **language model (LM) providers**, such as:
| Provider | Description |
|-----------|--------------------------------------|
| **Test** | Test provider, always returns "test" and the input parameters |
| **OpenAI** | GPT models, including GPT-4o, GPT-3.5 |
| **Google** | Gemini models, such as Gemini 1.5 |
| **Claude** | Anthropic's Claude models (Claude 3, etc.) |
| **Deepseek** | Deepseek AI language models |
| **Llama** | Llama AI language models |
| **Mistral** | Mistral AI language models |

It provides a **unified interface**, allowing you to switch providers without modifying your application code.
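The **Test** provider is handy for wiring up routes and tests before you have any API key. A minimal sketch, assuming the provider name is `test` (as listed above) and that the `model` and `apiKey` values are ignored placeholders:

```typescript
import Fastify from "fastify";
import LmPlugin from "fastify-lm";

// Register the Test provider so routes can be exercised without real API calls.
const app = Fastify();
await app.register(LmPlugin, {
  models: [
    {
      name: "lm",
      provider: "test", // provider name assumed from the table above
      model: "test", // placeholder; the Test provider ignores real credentials
      apiKey: "not-needed",
    },
  ],
});

// The Test provider always answers "test", so the response is predictable.
const response = await app.lm.chat({
  messages: [{ role: "user", content: "ping" }],
});
console.log(response);
```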
### Why use fastify-lm?
Developing applications that interact with language models usually requires direct API integration, which can lead to:
- **Dependency on a single provider**
- **Difficulty switching models without refactoring code**
- **Inconsistencies in how different APIs are used**

With `fastify-lm`, you can:

- Define multiple providers in a single configuration
- Switch models just by changing environment variables
- Use a **consistent query system** without worrying about API differences
- Easily run A/B tests with different models to find the best fit for your use case

### Use Cases
- **Chatbots and virtual assistants**: Seamlessly integrate multiple AI models to enhance user experience.
- **Natural Language Processing (NLP)**: Analyze text using different models without modifying your code.
- **Model comparison**: Evaluate different LMs within the same application with minimal changes.
- **Flexible infrastructure**: Switch providers based on availability, cost, or technological improvements.
- **Analyze requests**: Moderate or analyze requests using language models.

**Ready to get started?** Continue with the installation guide and start using `fastify-lm` in just a few minutes.
## Installation
To install the plugin in an existing Fastify project, run:
```bash
npm install fastify-lm
```

### Compatibility
| fastify-lm (plugin) | Fastify |
| ------------------- | ------------------ |
| `^1.x` | `^3.x, ^4.x, ^5.x` |

Please note that if a Fastify version is out of support, then so are the corresponding versions of this plugin
in the table above.
See [Fastify's LTS policy](https://github.com/fastify/fastify/blob/main/docs/Reference/LTS.md) for more details.

## Quick start
Start by creating a Fastify instance and registering the plugin.
```bash
npm i fastify fastify-lm
```

Create a file `src/server.js` and add the following code:
```javascript
// Import the framework and instantiate it
import Fastify from "fastify";
import LmPlugin from "fastify-lm";

const fastify = Fastify({
  logger: true,
});

// Register the lm-plugin
fastify.register(LmPlugin, {
  models: [
    {
      name: "lm", // the name of the model instance on your app
      provider: "openai", // openai, google, claude, deepseek or any available provider
      model: "gpt-4o-mini",
      apiKey: "your-api-key",
    },
  ],
});

// Declare a route / that returns the models
fastify.get("/", async function handler(request, reply) {
  const models = await fastify.lm.models();
  return { models };
});

// Run the server!
try {
  await fastify.listen({ port: 3000 });
} catch (err) {
  fastify.log.error(err);
  process.exit(1);
}
```

> Remember to replace `your-api-key` with your actual API key.
Finally, launch the server with:
```bash
node src/server.js
```

and test it with:
```bash
curl http://localhost:3000/
```

## Usage
### Registering the Plugin
Register the plugin in your Fastify instance by specifying the models and providers to use.
#### Basic Usage
```typescript
import Fastify from "fastify";
import lmPlugin from "fastify-lm";

// Create a Fastify instance and register the plugin
const app = Fastify();
app.register(lmPlugin, {
  models: [
    {
      name: "lm",
      provider: process.env.LM_PROVIDER,
      model: process.env.LM_MODEL,
      apiKey: process.env.LM_API_KEY,
    },
  ],
});

const response = await app.lm.chat({
  messages: [{ role: "user", content: "How are you?" }],
});
```

_Change the environment variables to switch the provider._
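For example, redeploying the same code against a different provider could look like this (variable names are taken from the snippet above; the values are illustrative):

```bash
# OpenAI
export LM_PROVIDER=openai
export LM_MODEL=gpt-4o-mini
export LM_API_KEY=your-openai-key

# Google instead (no code changes needed)
export LM_PROVIDER=google
export LM_MODEL=gemini-2.0-flash-lite
export LM_API_KEY=your-google-key
```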
#### Multiple Providers with Query Parameter Selection
```typescript
import Fastify, { FastifyRequest, FastifyReply } from "fastify";
import lmPlugin from "fastify-lm";

interface QueryParams {
  query: string;
  model?: "openai" | "google" | "claude" | "deepseek" | "mistral"; // Optional, defaults to "openai"
}

// Create a Fastify instance and register the plugin
const app = Fastify();
app.register(lmPlugin, {
  models: [
    {
      name: "openai",
      provider: "openai",
      model: "gpt-3.5-turbo",
      apiKey: process.env.OPENAI_API_KEY,
    },
    {
      name: "google",
      provider: "google",
      model: "gemini-2.0-flash-lite",
      apiKey: process.env.GOOGLE_API_KEY,
    },
    {
      name: "claude",
      provider: "claude",
      model: "claude-3-5-sonnet-20240620",
      apiKey: process.env.CLAUDE_API_KEY,
    },
    {
      name: "deepseek",
      provider: "deepseek",
      model: "deepseek-chat",
      apiKey: process.env.DEEPSEEK_API_KEY,
    },
    {
      name: "mistral",
      provider: "mistral",
      model: "mistral-medium",
      apiKey: process.env.MISTRAL_API_KEY,
    },
  ],
});

// Route that receives the query and an optional model parameter
app.get<{ Querystring: QueryParams }>(
  "/chat",
  {
    schema: {
      querystring: {
        type: "object",
        required: ["query"],
        properties: {
          query: { type: "string" },
          model: {
            type: "string",
            enum: ["openai", "google", "claude", "deepseek", "mistral"],
            default: "openai",
          },
        },
      },
    },
  },
  async (
    request: FastifyRequest<{ Querystring: QueryParams }>,
    reply: FastifyReply
  ) => {
    const { query, model = "openai" } = request.query;

    try {
      const response = await app[model].chat({
        messages: [{ role: "user", content: query }],
      });
      return { response };
    } catch (error: any) {
      reply.status(500).send({ error: error.message });
    }
  }
);

// Start the server
app.listen({ port: 3000 }, (err, address) => {
  if (err) {
    console.error(err);
    process.exit(1);
  }
  console.log(`Server running at ${address}`);
});
```
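With this route in place, the provider can be chosen per request via the `model` query parameter, for example:

```bash
# Ask the default provider (openai)
curl "http://localhost:3000/chat?query=How%20are%20you%3F"

# Ask Google's model instead
curl "http://localhost:3000/chat?query=How%20are%20you%3F&model=google"
```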
## Advanced Use Cases

Beyond simple model queries, you can leverage `fastify-lm` for more advanced functionality:
### Automated Customer Support Responses
Use AI to generate instant answers for common support queries.
[Read the full guide →](docs/support-bot.md)

### AI-Powered Support Ticket Prioritization
Automatically classify and prioritize support tickets based on urgency and sentiment.
[Read the full guide →](docs/support-ticket-prioritization.md)

### AI-Driven Sentiment Analysis
Analyze user feedback, reviews, or messages to determine sentiment trends.
[Read the full guide →](docs/sentiment-analysis.md)

### Automatic Content Moderation
Detect and block inappropriate messages before processing them (a sketch of this pattern follows at the end of this section).
[Read the full guide →](docs/content-moderation.md)

### Semantic Search & Query Expansion
Improve search relevance by understanding intent and expanding queries intelligently.
[Read the full guide →](docs/semantic-search.md)

### Smart Autocomplete for Forms
Enhance user input by automatically generating text suggestions.
[Read the full guide →](docs/autocomplete.md)

### Automatic Text Summarization
Summarize long text passages using AI models.
[Read the full guide →](docs/summarizer.md)

### Real-Time Text Translation
Translate user input dynamically with multi-provider support.
[Read the full guide →](docs/translator.md)

### AI-Powered Data Extraction
Extract structured information from unstructured text, such as invoices, legal documents, or reports.
[Read the full guide →](docs/data-extraction.md)

**Check out more examples in the [`/docs/`](docs/) folder!**
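As an illustration of the content-moderation pattern above, here is a minimal sketch of a `preHandler` hook that asks a registered model to vet a message before the route handler runs. The route, the prompt, and the ALLOW/BLOCK convention are assumptions for this example, not part of the plugin, and the shape of `chat`'s return value may differ:

```typescript
import Fastify from "fastify";
import lmPlugin from "fastify-lm";

const app = Fastify();
await app.register(lmPlugin, {
  models: [
    {
      name: "lm",
      provider: process.env.LM_PROVIDER,
      model: process.env.LM_MODEL,
      apiKey: process.env.LM_API_KEY,
    },
  ],
});

// Hypothetical route: vet the message with the model before handling it.
app.post<{ Body: { message: string } }>(
  "/comments",
  {
    preHandler: async (request, reply) => {
      const verdict = await app.lm.chat({
        messages: [
          {
            role: "user",
            // Assumed moderation prompt; tune it for your use case.
            content: `Answer with exactly ALLOW or BLOCK. Is this message appropriate? "${request.body.message}"`,
          },
        ],
      });
      // If the model flags the message, reject the request early.
      if (String(verdict).toUpperCase().includes("BLOCK")) {
        reply.status(400).send({ error: "Message rejected by moderation" });
      }
    },
  },
  async (request) => ({ accepted: true, message: request.body.message })
);

await app.listen({ port: 3000 });
```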
## Contributing
We need many hands to implement additional providers. You can help by submitting a pull request.
[Adding a New Adapter](docs/adding-new-adapter.md)
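To give a feel for what an adapter involves, here is a purely hypothetical sketch. The real contract is defined in the guide above; every name below (`LmAdapter`, the endpoint, the response shape) is an assumption for illustration only:

```typescript
// Hypothetical adapter shape; the real interface is defined in the guide above.
interface LmAdapter {
  chat(options: { messages: { role: string; content: string }[] }): Promise<string>;
  models(): Promise<string[]>;
}

// Sketch of an adapter for an imaginary provider's HTTP API.
class ExampleAdapter implements LmAdapter {
  constructor(private model: string, private apiKey: string) {}

  async chat({ messages }: { messages: { role: string; content: string }[] }): Promise<string> {
    const res = await fetch("https://api.example.com/v1/chat", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model: this.model, messages }),
    });
    const data = await res.json();
    return data.output; // response shape is provider-specific
  }

  async models(): Promise<string[]> {
    return [this.model];
  }
}
```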
## License
MIT