Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/rahuldshetty/llm.js
Run Large-Language Models (LLMs) 🚀 directly in your browser!
- Host: GitHub
- URL: https://github.com/rahuldshetty/llm.js
- Owner: rahuldshetty
- License: apache-2.0
- Created: 2023-08-05T05:48:36.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2024-09-08T09:45:16.000Z (2 months ago)
- Last Synced: 2024-10-08T18:11:51.576Z (about 1 month ago)
- Topics: ai, dev, emscripten, es6-javascript, javascript-library, llm, ml, package, wasm, web, webassembly
- Language: JavaScript
- Homepage: https://rahuldshetty.github.io/llm.js/
- Size: 1.47 MB
- Stars: 165
- Watchers: 5
- Forks: 9
- Open Issues: 5
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
README
# LLM.js
> Run Large-Language Models (LLMs) 🚀 directly in your browser!
Example projects🌐✨: [Live Demo](https://rahuldshetty.github.io/llm.js-examples/)
Learn More: [Documentation](https://rahuldshetty.github.io/llm.js/)
Models Supported:
- [TinyLLaMA Series - 1,2,3🦙](https://huggingface.co/TinyLlama)
- [GPT-2](https://huggingface.co/gpt2)
- [Tiny Mistral Series](https://huggingface.co/Locutusque/TinyMistral-248M)
- [Tiny StarCoder Py](https://huggingface.co/bigcode/tiny_starcoder_py)
- [Qwen Models](https://huggingface.co/Qwen)
- [TinySolar](https://huggingface.co/upstage/TinySolar-248m-4k-code-instruct)
- [Pythia](https://github.com/EleutherAI/pythia)
- [Mamba](https://huggingface.co/state-spaces/mamba-130m-hf)
and much more ✨

## Features
- Run inference directly in the browser (even on smartphones) with the power of WebAssembly
- Guidance: Structure responses with CFG Grammar and JSON schema
- Developed in pure JavaScript
- Web Worker to perform background tasks (model downloading/inference)
- Model Caching support
- Pre-built [packages](https://github.com/rahuldshetty/llm.js/releases) to directly plug-and-play into your web apps

## Installation
Download and extract the latest [release](https://github.com/rahuldshetty/llm.js/releases) of the llm.js package to your web application📦💻.
## Quick Start
```js
// Import LLM app
import { LLM } from "llm.js/llm.js";

// State variable to track model load status
let model_loaded = false;

// Initial prompt
const initial_prompt = "def fibonacci(n):";

// Callback functions
const on_loaded = () => {
  model_loaded = true;
};
const write_result = (text) => {
  document.getElementById("result").innerText += text + "\n";
};
const run_complete = () => {};

// Configure LLM app
const app = new LLM(
  // Type of model
  "GGUF_CPU",
  // Model URL
  "https://huggingface.co/RichardErkhov/bigcode_-_tiny_starcoder_py-gguf/resolve/main/tiny_starcoder_py.Q8_0.gguf",
  // Model load callback
  on_loaded,
  // Model result callback
  write_result,
  // Model completion callback
  run_complete
);

// Download & load the GGUF model file in a Web Worker
app.load_worker();

// Poll until the model is loaded, then trigger a run
const checkInterval = setInterval(timer, 5000);

function timer() {
  if (model_loaded) {
    app.run({
      prompt: initial_prompt,
      top_k: 1
    });
    clearInterval(checkInterval);
  } else {
    console.log("Waiting...");
  }
}
```
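Polling a flag with `setInterval` works, but the same wait can be wrapped in a Promise so the run logic reads linearly. Below is a minimal standalone sketch of that pattern in plain JavaScript; `waitFor` is a hypothetical helper written for this example, not part of the llm.js API, and the llm.js calls it would gate are only referenced in comments:

```javascript
// Hypothetical helper: resolve once a condition becomes true, polling at a
// fixed interval. This mirrors the setInterval loop in the Quick Start; in a
// real app the predicate would check model_loaded and the .then() callback
// would call app.run({ prompt, top_k }).
function waitFor(predicate, intervalMs = 100) {
  return new Promise((resolve) => {
    const id = setInterval(() => {
      if (predicate()) {
        clearInterval(id);
        resolve();
      }
    }, intervalMs);
  });
}

// Usage sketch: flip a flag asynchronously, then act once it is set.
let modelLoaded = false;
setTimeout(() => { modelLoaded = true; }, 50);

waitFor(() => modelLoaded, 10).then(() => {
  console.log("model ready");
});
```

With `async`/`await` this becomes `await waitFor(() => model_loaded); app.run(...)`, avoiding the manual `clearInterval` bookkeeping in the run path.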