https://github.com/alenvelocity/langchain-llama
Run LLAMA LLMs in Node with Langchain
- Host: GitHub
- URL: https://github.com/alenvelocity/langchain-llama
- Owner: AlenVelocity
- License: MIT
- Created: 2023-04-10T16:39:03.000Z (over 2 years ago)
- Default Branch: master
- Last Pushed: 2023-04-11T07:11:10.000Z (over 2 years ago)
- Last Synced: 2025-08-10T05:19:42.524Z (about 2 months ago)
- Topics: ai, llama, vicuna
- Language: TypeScript
- Homepage: https://www.npmjs.com/package/langchain-llama
- Size: 32.2 KB
- Stars: 39
- Watchers: 1
- Forks: 5
- Open Issues: 1
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
README
# Langchain-LLAMA
# ⚠️ WIP ⚠️
Run LLAMA LLMs locally in Langchain. Ported from [linonetwo/langchain-alpaca](https://github.com/linonetwo/langchain-alpaca).
# Installation
```shell
# npm i langchain-llama

# Install the package from GitHub
npm i github:alenvelocity/langchain-llama
```

# Usage
This example uses the `ggml-vicuna-7b-4bit-rev1` model:

```ts
import { LLAMACPP } from 'langchain-llama'

const main = async () => {
    const vicuna = new LLAMACPP({
        model: './vicuna/ggml-vicuna-7b-4bit-rev1.bin', // Path to model
        executablePath: './vicuna/main.exe', // Path to llama.cpp binary
        params: [ // llama.cpp CLI flags passed to the binary
            '-i', // Interactive mode
            '--interactive-first', // Wait for user input before generating
            '-t', '8', // Number of threads
            '--temp', '0', // Sampling temperature
            '-c', '2048', // Context size in tokens
            '-n', '-1', // Number of tokens to predict (-1 = until stopped)
            '--ignore-eos', // Do not stop at the end-of-sequence token
            '--repeat_penalty', '1.2' // Penalty for repeating tokens
        ]
    })
    await vicuna.init()
    const response = await vicuna.generate(['Say "Hello World"'])
    console.log(response.generations)
}

main()
```

This project is still a work in progress. Better docs will be added soon.
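
Since the class is a port of langchain-alpaca's LangChain wrapper, it should also be usable inside standard LangChain chains. Below is a minimal sketch, assuming `LLAMACPP` implements LangChain's `BaseLLM` interface (the README does not confirm this); the prompt and file paths are placeholders:

```ts
import { LLAMACPP } from 'langchain-llama'
import { LLMChain } from 'langchain/chains'
import { PromptTemplate } from 'langchain/prompts'

const run = async () => {
    // Same setup as above; paths point at your local model and binary
    const vicuna = new LLAMACPP({
        model: './vicuna/ggml-vicuna-7b-4bit-rev1.bin',
        executablePath: './vicuna/main.exe',
        params: ['-t', '8', '--temp', '0', '-c', '2048']
    })
    await vicuna.init()

    // Assumption: the instance can be passed anywhere LangChain expects an LLM
    const prompt = PromptTemplate.fromTemplate('Write a one-line greeting for {name}.')
    const chain = new LLMChain({ llm: vicuna, prompt })

    const result = await chain.call({ name: 'World' })
    console.log(result.text)
}

run()
```

If the class does not extend `BaseLLM`, fall back to calling `generate()` directly as in the example above.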