https://github.com/rabilrbl/llamafile-builder
A simple GitHub Actions script to build a llamafile and upload it to Hugging Face
- Host: GitHub
- URL: https://github.com/rabilrbl/llamafile-builder
- Owner: rabilrbl
- Created: 2023-12-26T14:35:14.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-01-11T01:11:12.000Z (almost 2 years ago)
- Last Synced: 2025-04-12T08:09:18.184Z (6 months ago)
- Topics: llama, llama2, llamacpp, llamafile, llamafile-builder, llamafile-server
- Language: Python
- Homepage:
- Size: 60.5 KB
- Stars: 14
- Watchers: 2
- Forks: 9
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
README
# llamafile-builder
A simple GitHub Actions script that builds a llamafile from a given GGUF file download URL and uploads it to a Hugging Face repo.
llamafile lets you distribute and run LLMs with a single file. [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/)
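The actual build steps live in this repo's workflow, but as a rough illustration of what "building a llamafile from a GGUF" involves, here is a minimal Python sketch following the packaging flow documented by the upstream llamafile project (a prebuilt launcher binary, an `.args` file, and its `zipalign` tool). The URL, file names, and arguments below are placeholders, not this action's exact implementation.

```python
# Illustrative sketch only; the real steps are defined in this repo's workflow.
import shutil
import subprocess
import urllib.request

MODEL_URL = "https://example.com/model.Q4_K_M.gguf"  # hypothetical model_url input
GGUF = "model.Q4_K_M.gguf"
OUT = "model.llamafile"

# 1. Download the GGUF weights from the given URL.
urllib.request.urlretrieve(MODEL_URL, GGUF)

# 2. Start from the prebuilt llamafile launcher, obtained separately from
#    the upstream llamafile releases.
shutil.copy("llamafile", OUT)

# 3. Embed the weights and default arguments into the executable using the
#    zipalign tool that ships with llamafile (per the upstream docs).
with open(".args", "w") as f:
    f.write("-m\n" + GGUF + "\n")
subprocess.run(["./zipalign", "-j0", OUT, GGUF, ".args"], check=True)
```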
## Inputs
### `model_url`
**Required.** The URL of the GGUF file to download.
### `huggingface_repo`
**Required.** The Hugging Face repo to upload the llamafile to.
Add your Hugging Face token with write access to the repo as an Actions secret named `HF_TOKEN`.
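For context, an upload step like this can be done with the `huggingface_hub` client. The sketch below is illustrative, assuming the secret is exposed to the job as the `HF_TOKEN` environment variable; the repo id and file name are placeholders.

```python
# Illustrative sketch of the upload step, not this repo's exact code.
import os
from huggingface_hub import HfApi

api = HfApi(token=os.environ["HF_TOKEN"])  # write-scoped token from the Actions secret

# Create the target repo if it does not exist yet, then upload the artifact.
api.create_repo("your-username/your-llamafile-repo", exist_ok=True)  # hypothetical repo id
api.upload_file(
    path_or_fileobj="model.llamafile",
    path_in_repo="model.llamafile",
    repo_id="your-username/your-llamafile-repo",
)
```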
## Usage
1. Head over to the Actions tab (or trigger the workflow from a script, as sketched after this list).
2. Select the `Build llamafile` workflow.
3. Fill in the required inputs.
4. Click `Run workflow` and wait for the run to complete.
5. Check your Hugging Face repo for the llamafile.
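As an alternative to the web UI, the same workflow can be triggered programmatically through the GitHub REST API's `workflow_dispatch` endpoint. The sketch below assumes a workflow file name of `build.yml` and uses placeholder repo and input values; adjust them to match your fork.

```python
# Illustrative sketch: dispatch the workflow via the GitHub REST API.
import os
import requests

resp = requests.post(
    "https://api.github.com/repos/<your-fork>/llamafile-builder/actions/workflows/build.yml/dispatches",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "ref": "main",
        "inputs": {
            "model_url": "https://example.com/model.Q4_K_M.gguf",
            "huggingface_repo": "your-username/your-llamafile-repo",
        },
    },
)
resp.raise_for_status()  # the API returns 204 No Content on success
```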