https://github.com/gaolingx/llama.cpp-launcher
run llama.cpp quickly and conveniently.
- Host: GitHub
- URL: https://github.com/gaolingx/llama.cpp-launcher
- Owner: Gaolingx
- License: MIT
- Created: 2025-03-01T18:42:43.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2025-06-28T15:46:56.000Z (7 months ago)
- Last Synced: 2025-06-28T16:41:57.440Z (7 months ago)
- Topics: batch-file, ggml, llama, shell
- Language: Batchfile
- Homepage:
- Size: 14.6 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# llama.cpp-Launcher
## Description
This is a tool that runs **llama.cpp** quickly and conveniently on `Windows` and `Linux`.
## How to Use
1. Install [llama.cpp](https://github.com/ggml-org/llama.cpp/releases).
2. Set your `SERVER_DIR` and `MODEL_PATH`.
3. Run the batch file.
4. You can also tune parameters (`NUM_THREADS`, `GPU_LAYERS`, etc.) to match your CPU thread count and GPU VRAM size; see the sketch below.
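For illustration, a minimal launcher along these lines might look like the following sketch. The variable names (`SERVER_DIR`, `MODEL_PATH`, `NUM_THREADS`, `GPU_LAYERS`) come from the steps above; the paths, model file name, and port are placeholder assumptions, not part of this repo.

```bat
@echo off
REM Minimal launcher sketch -- the paths and values below are placeholders.
set "SERVER_DIR=C:\llama.cpp"
set "MODEL_PATH=C:\models\model-q4_k_m.gguf"
REM Match NUM_THREADS to your CPU thread count and GPU_LAYERS to your VRAM.
set "NUM_THREADS=8"
set "GPU_LAYERS=32"

REM -t sets the CPU thread count; -ngl sets how many layers are offloaded to the GPU.
"%SERVER_DIR%\llama-server.exe" -m "%MODEL_PATH%" -t %NUM_THREADS% -ngl %GPU_LAYERS% --port 8080
pause
```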
## llama.cpp Documentation
### Build
[docs/build.md](https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md)
### llama-cli (Local Chat)
1. [llama.cpp](https://github.com/ggml-org/llama.cpp/blob/master/tools/main/README.md)
2. [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/blob/main/examples/main/README.md)
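As a quick reference, an interactive local chat with `llama-cli` might be started like this (a sketch, assuming a recent llama.cpp build and reusing the variables from the launcher sketch above; `-cnv` enables conversation mode):

```bat
REM Interactive local chat; paths come from the launcher variables above.
"%SERVER_DIR%\llama-cli.exe" -m "%MODEL_PATH%" -t %NUM_THREADS% -ngl %GPU_LAYERS% -cnv
```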
### llama.cpp HTTP Server (API Server)
1. [llama.cpp](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md)
2. [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/blob/main/examples/server/README.md)
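Once `llama-server` is running (as in the launcher sketch above), it exposes an OpenAI-compatible API. A quick smoke test might look like this (assuming `curl` is available, as on recent Windows builds, and that the server listens on port 8080):

```bat
REM Query the OpenAI-compatible chat endpoint of a running llama-server.
curl http://127.0.0.1:8080/v1/chat/completions ^
  -H "Content-Type: application/json" ^
  -d "{\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}"
```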