Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Simple script to run llama2 LLM locally on CPU
- Host: GitHub
- URL: https://github.com/fabricio872/local-llama
- Owner: Fabricio872
- License: mit
- Created: 2024-03-10T10:57:49.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-03-10T11:26:13.000Z (11 months ago)
- Last Synced: 2024-03-10T12:27:38.541Z (11 months ago)
- Language: Shell
- Size: 1.95 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# local-llama
Simple script to run the llama2 LLM locally on CPU, using [this Docker container](https://github.com/aborroy/llama2-docker-multiarch) as a base.

## Installation
- install [Docker](https://www.docker.com/)
- run this command to download the shell script
```shell
sudo curl -o /usr/local/bin/llama https://raw.githubusercontent.com/Fabricio872/local-llama/main/llama
```
- set ownership and make it executable
```shell
sudo chown $USER /usr/local/bin/llama && chmod +x /usr/local/bin/llama
```
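
The two steps above place a script in `/usr/local/bin`, hand it to your user, and mark it executable. They can be rehearsed without `sudo` by using a temporary prefix and a stub file in place of the real download (the stub and its output are illustrative, not the actual script's contents):

```shell
#!/bin/sh
# Dry-run of the install steps using a temporary directory instead of
# /usr/local/bin, so no root access or network is needed.
PREFIX=$(mktemp -d)

# Stand-in for the curl download: a tiny script that just prints a marker.
printf '#!/bin/sh\necho "stub llama"\n' > "$PREFIX/llama"

# Same permission step as in the install instructions.
chmod +x "$PREFIX/llama"

# Run the freshly "installed" script and show its output.
OUT=$("$PREFIX/llama")
echo "$OUT"

rm -rf "$PREFIX"
```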
## Usage
> Note that the first run will take longer and requires an internet connection to download the Docker container.
```shell
llama "hello llama"
```
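
The wrapper's internals aren't reproduced here, but since the project builds on the llama2 Docker container, a minimal sketch of how such a wrapper might forward the prompt could look like the following. The image name `llama2` and the invocation are assumptions for illustration; this version only prints the `docker` command it would run, so it works even where Docker is absent:

```shell
#!/bin/sh
# llama_cmd builds the docker invocation without executing it.
# "llama2" is a placeholder image name, not necessarily what the
# real script uses; --rm discards the container after it answers.
llama_cmd() {
  echo docker run --rm llama2 "$1"
}

llama_cmd "hello llama"
```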