Tensor library for machine learning
- Host: GitHub
- URL: https://github.com/ggerganov/ggml
- Owner: ggerganov
- License: MIT
- Created: 2022-09-18T17:07:19.000Z (about 2 years ago)
- Default Branch: master
- Last Pushed: 2024-05-28T11:42:57.000Z (5 months ago)
- Last Synced: 2024-05-29T03:10:36.262Z (5 months ago)
- Topics: automatic-differentiation, large-language-models, machine-learning, tensor-algebra
- Language: C
- Homepage:
- Size: 8.02 MB
- Stars: 9,999
- Watchers: 120
- Forks: 906
- Open Issues: 234
Metadata Files:
- Readme: README.md
- License: LICENSE
- Authors: AUTHORS
Awesome Lists containing this project
- awesome-genai - GGML - C Tensor ML library implementing Llama and Whisper etc (Tools)
- my-awesome-github-stars - ggerganov/ggml - Tensor library for machine learning (C++)
- awesome - ggerganov/ggml - Tensor library for machine learning (C++)
- awesome-repositories - ggerganov/ggml - Tensor library for machine learning (C++)
- awesome-ChatGPT-repositories - ggml - Tensor library for machine learning (Others)
- AiTreasureBox - ggerganov/ggml - Tensor library for machine learning (Repos)
- awesome-zig - ggml
- StarryDivineSky - ggerganov/ggml - L-BFGS optimizer; optimized for Apple silicon; uses AVX/AVX2 intrinsics on x86 and VSX intrinsics on ppc64; no third-party dependencies; zero runtime memory allocations (Other: machine learning and deep learning)
- Awesome-LLM-Compression - Code
- awesome-production-machine-learning - GGML - GGML is a high-performance, tensor library for machine learning that enables efficient inference on CPUs, particularly optimized for large language models. (Model Storage Optimisation)
- awesome-yolo-object-detection - ggml
- awesome-ai-papers - [ggml](https://github.com/ggerganov/ggml), [gpt-fast](https://github.com/pytorch-labs/gpt-fast), [lightllm](https://github.com/ModelTC/lightllm), [fastllm](https://github.com/ztxz16/fastllm), [CTranslate2](https://github.com/OpenNMT/CTranslate2), [ipex-llm](https://github.com/intel-analytics/ipex-llm), [rtp-llm](https://github.com/alibaba/rtp-llm), [KsanaLLM](https://github.com/pcg-mlp/KsanaLLM), [ppl.nn](https://github.com/OpenPPL/ppl.nn) (NLP / 3. Pretraining)
README
# ggml
[Roadmap](https://github.com/users/ggerganov/projects/7) / [Manifesto](https://github.com/ggerganov/llama.cpp/discussions/205)
Tensor library for machine learning
***Note that this project is under active development. \
Some of the development is currently happening in the [llama.cpp](https://github.com/ggerganov/llama.cpp) and [whisper.cpp](https://github.com/ggerganov/whisper.cpp) repos.***

## Features
- Low-level cross-platform implementation
- Integer quantization support
- Broad hardware support
- Automatic differentiation
- ADAM and L-BFGS optimizers
- No third-party dependencies
- Zero memory allocations during runtime

## Build
```bash
git clone https://github.com/ggerganov/ggml
cd ggml

# install python dependencies in a virtual environment
python3.10 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# build the examples
mkdir build && cd build
cmake ..
cmake --build . --config Release -j 8
```

## GPT inference (example)
```bash
# run the GPT-2 small 117M model
../examples/gpt-2/download-ggml-model.sh 117M
./bin/gpt-2-backend -m models/gpt-2-117M/ggml-model.bin -p "This is an example"
```

For more information, check out the corresponding programs in the [examples](examples) folder.
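Under the hood, every example follows the same pattern: define tensors in a pre-allocated context, record operations into a compute graph, then evaluate the graph. The sketch below illustrates this with a tiny function, f(x) = a·x² + b. It is written against a 2024-era ggml API as an assumption, not a guarantee; names such as `ggml_graph_compute_with_ctx` have shifted between versions, so consult the headers of the version you build.

```c
// Minimal ggml sketch: compute f(x) = a*x^2 + b on the CPU.
// Illustrative only -- API details vary between ggml versions.
#include <stdio.h>
#include "ggml.h"

int main(void) {
    // all tensors live in one pre-allocated arena: no memory
    // allocations happen while the graph is being computed
    struct ggml_init_params params = {
        .mem_size   = 16 * 1024 * 1024,  // 16 MB arena
        .mem_buffer = NULL,              // let ggml allocate it
        .no_alloc   = false,
    };
    struct ggml_context * ctx = ggml_init(params);

    struct ggml_tensor * x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1);
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1);

    // these calls only record operations; nothing is evaluated yet
    struct ggml_tensor * x2 = ggml_mul(ctx, x, x);
    struct ggml_tensor * f  = ggml_add(ctx, ggml_mul(ctx, a, x2), b);

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, f);

    // set the inputs, then evaluate the whole graph in one call
    ggml_set_f32(x, 2.0f);
    ggml_set_f32(a, 3.0f);
    ggml_set_f32(b, 4.0f);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

    printf("f = %f\n", ggml_get_f32_1d(f, 0));  // f(2) = 3*4 + 4 = 16
    ggml_free(ctx);
    return 0;
}
```

The deferred-evaluation design is what makes the feature list above possible: because the full graph is known before computation starts, ggml can size all intermediate buffers up front and differentiate the graph automatically.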
## Using CUDA
```bash
# fix the path to point to your CUDA compiler
cmake -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.1/bin/nvcc ..
```

## Using hipBLAS
```bash
cmake -DCMAKE_C_COMPILER="$(hipconfig -l)/clang" -DCMAKE_CXX_COMPILER="$(hipconfig -l)/clang++" -DGGML_HIPBLAS=ON
```

## Using SYCL
```bash
# linux
source /opt/intel/oneapi/setvars.sh
cmake -G "Ninja" -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL=ON ..

# windows
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
cmake -G "Ninja" -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DGGML_SYCL=ON ..
```

## Compiling for Android
Download and unzip the NDK from the [download page](https://developer.android.com/ndk/downloads). Set the `NDK_ROOT_PATH` environment variable, or pass the absolute path to the NDK directly via `CMAKE_ANDROID_NDK` in the command below.
```bash
cmake .. \
-DCMAKE_SYSTEM_NAME=Android \
-DCMAKE_SYSTEM_VERSION=33 \
-DCMAKE_ANDROID_ARCH_ABI=arm64-v8a \
-DCMAKE_ANDROID_NDK=$NDK_ROOT_PATH \
-DCMAKE_ANDROID_STL_TYPE=c++_shared
```

```bash
# create directories
adb shell 'mkdir /data/local/tmp/bin'
adb shell 'mkdir /data/local/tmp/models'

# push the compiled binaries to the folder
adb push bin/* /data/local/tmp/bin/

# push the ggml library
adb push src/libggml.so /data/local/tmp/

# push model files
adb push models/gpt-2-117M/ggml-model.bin /data/local/tmp/models/

adb shell
cd /data/local/tmp
export LD_LIBRARY_PATH=/data/local/tmp
./bin/gpt-2-backend -m models/ggml-model.bin -p "this is an example"
```

## Resources
- [Introduction to ggml](https://huggingface.co/blog/introduction-to-ggml)
- [The GGUF file format](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md)