https://github.com/ggml-org/ggml
Tensor library for machine learning
- Host: GitHub
- URL: https://github.com/ggml-org/ggml
- Owner: ggml-org
- License: mit
- Created: 2022-09-18T17:07:19.000Z (over 2 years ago)
- Default Branch: master
- Last Pushed: 2025-04-04T19:05:12.000Z (14 days ago)
- Last Synced: 2025-04-06T01:53:05.822Z (13 days ago)
- Topics: automatic-differentiation, large-language-models, machine-learning, tensor-algebra
- Language: C++
- Homepage:
- Size: 12.9 MB
- Stars: 12,245
- Watchers: 141
- Forks: 1,197
- Open Issues: 295
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Authors: AUTHORS
Awesome Lists containing this project
- my-awesome-github-stars - ggml-org/ggml - Tensor library for machine learning (C++)
- awesome-production-machine-learning - GGML - GGML is a high-performance tensor library for machine learning that enables efficient inference on CPUs, particularly optimized for large language models. (Model Storage Optimisation)
README
# ggml
[Roadmap](https://github.com/users/ggerganov/projects/7) / [Manifesto](https://github.com/ggerganov/llama.cpp/discussions/205)
Tensor library for machine learning
***Note that this project is under active development. \
Some of the development is currently happening in the [llama.cpp](https://github.com/ggerganov/llama.cpp) and [whisper.cpp](https://github.com/ggerganov/whisper.cpp) repos***

## Features
- Low-level cross-platform implementation
- Integer quantization support
- Broad hardware support
- Automatic differentiation
- ADAM and L-BFGS optimizers
- No third-party dependencies
- Zero memory allocations during runtime

## Build
```bash
git clone https://github.com/ggml-org/ggml
cd ggml

# install python dependencies in a virtual environment
python3.10 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# build the examples
mkdir build && cd build
cmake ..
cmake --build . --config Release -j 8
```

## GPT inference (example)
```bash
# run the GPT-2 small 117M model
../examples/gpt-2/download-ggml-model.sh 117M
./bin/gpt-2-backend -m models/gpt-2-117M/ggml-model.bin -p "This is an example"
```

For more information, check out the corresponding programs in the [examples](examples) folder.
## Using CUDA
```bash
# fix the path to point to your CUDA compiler
cmake -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.1/bin/nvcc ..
```

## Using hipBLAS
```bash
cmake -DCMAKE_C_COMPILER="$(hipconfig -l)/clang" -DCMAKE_CXX_COMPILER="$(hipconfig -l)/clang++" -DGGML_HIP=ON
```

## Using SYCL
```bash
# linux
source /opt/intel/oneapi/setvars.sh
cmake -G "Ninja" -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL=ON ..

# windows
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
cmake -G "Ninja" -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DGGML_SYCL=ON ..
```

## Compiling for Android
Download and unzip the NDK from the download [page](https://developer.android.com/ndk/downloads). Set the `NDK_ROOT_PATH` environment variable or pass the absolute path to `CMAKE_ANDROID_NDK` in the command below.
```bash
cmake .. \
-DCMAKE_SYSTEM_NAME=Android \
-DCMAKE_SYSTEM_VERSION=33 \
-DCMAKE_ANDROID_ARCH_ABI=arm64-v8a \
-DCMAKE_ANDROID_NDK=$NDK_ROOT_PATH \
-DCMAKE_ANDROID_STL_TYPE=c++_shared
```

```bash
# create directories
adb shell 'mkdir /data/local/tmp/bin'
adb shell 'mkdir /data/local/tmp/models'

# push the compiled binaries to the folder
adb push bin/* /data/local/tmp/bin/

# push the ggml library
adb push src/libggml.so /data/local/tmp/

# push model files
adb push models/gpt-2-117M/ggml-model.bin /data/local/tmp/models/

adb shell
cd /data/local/tmp
export LD_LIBRARY_PATH=/data/local/tmp
./bin/gpt-2-backend -m models/ggml-model.bin -p "this is an example"
```

## Resources
- [Introduction to ggml](https://huggingface.co/blog/introduction-to-ggml)
- [The GGUF file format](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md)