https://github.com/yeeking/llamacpp-minimal-example
Minimal example of using llama.cpp as a library from C++
- Host: GitHub
- URL: https://github.com/yeeking/llamacpp-minimal-example
- Owner: yeeking
- License: MIT
- Created: 2024-12-02T15:52:08.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2025-04-18T20:49:07.000Z (about 2 months ago)
- Last Synced: 2025-04-23T16:16:35.748Z (about 2 months ago)
- Topics: example-project, llamacpp, llm, localllm
- Language: C++
- Homepage:
- Size: 198 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Minimal example of using llama.cpp from a C++ file
I am working on a C++ project that integrates llama.cpp as a runtime for language models.
I wanted an absolutely minimal example of a CMake project that links against llama.cpp and loads a model from a GGUF file.
This is it!
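
For reference, the linking setup described above might look like the CMake sketch below. This is illustrative, not necessarily this repo's actual CMakeLists.txt: the target name and source file name are assumptions, and it assumes llama.cpp has been cloned into the project folder (as in the steps that follow), where `add_subdirectory` makes its `llama` library target available.
```
cmake_minimum_required(VERSION 3.14)
project(myk-llama-example CXX)

# build llama.cpp from the cloned subdirectory; this exposes the `llama` target
add_subdirectory(llama.cpp)

# the minimal example program (source file name is hypothetical)
add_executable(myk-llama-simple main.cpp)
target_link_libraries(myk-llama-simple PRIVATE llama)
```
Passing `-DBUILD_SHARED_LIBS=OFF` at configure time (as shown below) makes llama.cpp build as static libraries, so the example binary does not depend on shared llama.cpp libraries at runtime.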
To use it:
```
# clone this project
git clone [email protected]:yeeking/llamacpp-minimal-example.git

# cd into the project folder
cd llamacpp-minimal-example

# clone llama.cpp into the project folder
git clone [email protected]:ggml-org/llama.cpp.git

# generate the project
# dynamic linking - you probably don't want this, as the binary will be less portable to others' computers
cmake -B build .

# static linking: llama.cpp is baked into the binary
cmake -B build -DBUILD_SHARED_LIBS=OFF .

# build (adjust -j to the number of threads to use)
cmake --build build --config Debug -j 10

# run with the example supertiny model (which is untrained and just for testing)
./build/myk-llama-simple models/Supertinyllama-107K-F16.gguf
./build/myk-llama-adv -m models/Supertinyllama-107K-F16.gguf
```
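
Inside the example program, loading a model from a GGUF file comes down to a few llama.cpp calls. The sketch below is my own illustration, not this repo's actual source, and the exact function names vary between llama.cpp versions (older releases use `llama_load_model_from_file` and `llama_free_model`, for example):
```cpp
#include <cstdio>
#include "llama.h"

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.gguf>\n", argv[0]);
        return 1;
    }

    // initialise the llama.cpp backend once per process
    llama_backend_init();

    // load the model from the GGUF file with default parameters
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_model_load_from_file(argv[1], mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model from %s\n", argv[1]);
        return 1;
    }

    fprintf(stdout, "model loaded\n");

    // clean up
    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```
A real program would go on to create a context with `llama_init_from_model` and run tokenization and decoding, but model loading alone is enough to verify that the CMake link against llama.cpp works.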