Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Force a JSON schema on the model output on the generation level
- Host: GitHub
- URL: https://github.com/withcatai/node-llama-cpp
- Owner: withcatai
- License: MIT
- Created: 2023-08-12T20:53:16.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-07-30T17:57:06.000Z (about 2 months ago)
- Last Synced: 2024-08-01T04:51:31.369Z (about 2 months ago)
- Topics: ai, bindings, catai, cmake, cmake-js, cuda, gguf, grammar, json-schema, llama, llama-cpp, llm, metal, nodejs, prebuilt-binaries, self-hosted
- Language: TypeScript
- Homepage: https://withcatai.github.io/node-llama-cpp/
- Size: 8.76 MB
- Stars: 778
- Watchers: 10
- Forks: 69
- Open Issues: 6
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- Funding: .github/FUNDING.yml
- License: LICENSE
Awesome Lists containing this project
README
node-llama-cpp
Run AI models locally on your machine
Pre-built bindings are provided with a fallback to building from source with cmake
[![Build](https://github.com/withcatai/node-llama-cpp/actions/workflows/build.yml/badge.svg)](https://github.com/withcatai/node-llama-cpp/actions/workflows/build.yml)
[![License](https://badgen.net/badge/color/MIT/green?label=license)](https://www.npmjs.com/package/node-llama-cpp)
[![Types](https://badgen.net/badge/color/TypeScript/blue?label=types)](https://www.npmjs.com/package/node-llama-cpp)
[![Version](https://badgen.net/npm/v/node-llama-cpp)](https://www.npmjs.com/package/node-llama-cpp)

✨ New! [Try the beta of version `3.0.0`](https://github.com/withcatai/node-llama-cpp/pull/105) ✨ (included: function calling, automatic chat wrapper detection, embedding support, and more)
## Features
* Run a text generation model locally on your machine
* Metal and CUDA support
* Pre-built binaries are provided, with a fallback to building from source without `node-gyp` or Python
* Chat with a model using a chat wrapper
* Use the CLI to chat with a model without writing any code
* Up-to-date with the latest version of `llama.cpp`. Download and compile the latest release with a single CLI command.
* Force a model to generate output in a parseable format, like JSON, or even force it to follow a specific JSON schema

## [Documentation](https://withcatai.github.io/node-llama-cpp/)
* [Getting started guide](https://withcatai.github.io/node-llama-cpp/guide/)
* [API reference](https://withcatai.github.io/node-llama-cpp/api/classes/LlamaModel)
* [CLI help](https://withcatai.github.io/node-llama-cpp/guide/cli/)
* [Changelog](https://github.com/withcatai/node-llama-cpp/releases)
* [Roadmap](https://github.com/orgs/withcatai/projects/1)

## Installation
```bash
npm install --save node-llama-cpp
```

This package comes with pre-built binaries for macOS, Linux and Windows.
If binaries are not available for your platform, it'll fall back to downloading the latest release of `llama.cpp` and building it from source with `cmake`.
To disable this behavior, set the environment variable `NODE_LLAMA_CPP_SKIP_DOWNLOAD` to `true`.

## Usage
```typescript
import {fileURLToPath} from "url";
import path from "path";
import {LlamaModel, LlamaContext, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const model = new LlamaModel({
    modelPath: path.join(__dirname, "models", "codellama-13b.Q3_K_M.gguf")
});
const context = new LlamaContext({model});
const session = new LlamaChatSession({context});

const q1 = "Hi there, how are you?";
console.log("User: " + q1);

const a1 = await session.prompt(q1);
console.log("AI: " + a1);

const q2 = "Summarize what you said";
console.log("User: " + q2);

const a2 = await session.prompt(q2);
console.log("AI: " + a2);
```

> For more examples, see the [getting started guide](https://withcatai.github.io/node-llama-cpp/guide/)
## Contributing
To contribute to `node-llama-cpp`, read the [contribution guide](https://withcatai.github.io/node-llama-cpp/guide/contributing).

## Acknowledgements
* llama.cpp: [ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
If you like this repo, star it ✨
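As the features list notes, the CLI can chat with a model without writing any code. A hedged sketch, assuming the v2 CLI's `chat` command (the model path is a placeholder):

```shell
# Start an interactive chat session with a local GGUF model.
# --no tells npx to use only the locally installed package.
npx --no node-llama-cpp chat --model ./models/codellama-13b.Q3_K_M.gguf
```

See the [CLI help](https://withcatai.github.io/node-llama-cpp/guide/cli/) for the full set of commands and flags.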