https://github.com/coffeevampir3/modularizedlcppserver
Cpp23 Modularized Multi-User batching server for lcpp
batching cpp cpp23 llamacpp llm-inference modules server
- Host: GitHub
- URL: https://github.com/coffeevampir3/modularizedlcppserver
- Owner: CoffeeVampir3
- Created: 2025-03-19T05:59:20.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2025-03-19T06:09:09.000Z (about 1 year ago)
- Last Synced: 2025-03-19T07:22:10.819Z (about 1 year ago)
- Topics: batching, cpp, cpp23, llamacpp, llm-inference, modules, server
- Language: C++
- Homepage:
- Size: 18.6 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
A modularized C++ server setup for https://github.com/ggerganov/llama.cpp bindings with continuous batching support for multi-user inference. It is designed to be exposed through a C-compatible binding interface; this repo is a pure-C++ reference and mostly serves as an example.
Specialized Features:
- Inference rewinding
- Multi-User inference with Continuous Batching
- Minimum token count support
- Fully asynchronous inference design for FFI servers
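The continuous-batching feature above can be illustrated with a small sketch. This is a toy model, not the repo's actual code: it only shows how tokens from several user sequences are flattened into one batch per decode step, mirroring the (token, pos, seq_id) shape of llama.cpp's `llama_batch`. The `Slot` and `BatchEntry` names are hypothetical.

```cpp
#include <cstdint>
#include <vector>

// One slot per connected user: its sequence id, the next KV-cache position,
// and how many tokens it still wants to generate. (Illustrative only.)
struct Slot {
    int32_t seq_id;
    int32_t pos = 0;    // next position in this sequence's KV cache
    int32_t remaining;  // tokens left to generate before the slot frees up
};

// A flat batch mixing tokens from many sequences, loosely mirroring the
// per-token fields of llama.cpp's llama_batch.
struct BatchEntry {
    int32_t token;
    int32_t pos;
    int32_t seq_id;
};

// Build one decode step's batch: each active slot contributes exactly one
// token, so users at different stages of generation share a single forward
// pass. Finished slots simply stop contributing, and new users can be added
// to `slots` between steps without stalling anyone else.
std::vector<BatchEntry> build_step_batch(std::vector<Slot>& slots) {
    std::vector<BatchEntry> batch;
    for (auto& s : slots) {
        if (s.remaining <= 0) continue;
        // In a real server the token would come from sampling the previous
        // step's logits; a placeholder value stands in here.
        batch.push_back({/*token=*/0, s.pos, s.seq_id});
        ++s.pos;
        --s.remaining;
    }
    return batch;
}
```

In a real llama.cpp backend each returned batch would be handed to `llama_decode`, with the `seq_id` field keeping every user's KV-cache entries separate.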
This is a modernized version of the more broadly compatible bindings I wrote for https://github.com/theroyallab/YALS.
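The inference-rewinding feature can likewise be sketched. This is a hedged illustration, not the repo's implementation: rewinding means truncating a sequence's generated history back to an earlier position, and in a real llama.cpp server also evicting the dropped positions from that sequence's KV cache (llama.cpp exposes a call for this, e.g. `llama_kv_cache_seq_rm`, though the exact name varies across versions). The `Sequence` type here is hypothetical.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Per-user generated-token history. (Illustrative only.)
struct Sequence {
    std::vector<int32_t> tokens;
};

// Rewind a sequence so only the first `keep` tokens remain. A real server
// would also remove positions [keep, end) from this sequence's KV cache so
// decoding can resume from the rewound point.
void rewind(Sequence& seq, std::size_t keep) {
    if (keep < seq.tokens.size())
        seq.tokens.resize(keep);
}
```

Because the KV cache is addressed by (seq_id, position), rewinding one user this way does not disturb the other sequences sharing the batch.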