An open API service indexing awesome lists of open source software.

Projects in Awesome Lists tagged with quantization-aware-training

A curated list of projects in awesome lists tagged with quantization-aware-training.

https://intel.github.io/neural-compressor/

SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime

auto-tuning awq fp4 gptq int4 int8 knowledge-distillation large-language-models low-precision mxformat post-training-quantization pruning quantization quantization-aware-training smoothquant sparsegpt sparsity

Last synced: 09 Dec 2025
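The entry above centers on low-bit quantization formats such as INT8 and INT4. As a minimal illustrative sketch (pure Python, not Neural Compressor's actual API), the core idea of affine INT8 quantization is to map a tensor's float range onto the int8 grid via a scale and zero-point, then dequantize for inspection:

```python
# Illustrative sketch of affine INT8 quantization: floats in [min, max] are
# mapped to [-128, 127] via a scale and zero-point. Function names are
# hypothetical; real toolkits add per-channel scales, calibration, and tuning.

def quantize_int8(values):
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # fall back to 1.0 for constant tensors
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

weights = [-0.52, -0.1, 0.0, 0.3, 0.49]
q, scale, zp = quantize_int8(weights)
restored = dequantize_int8(q, scale, zp)
# each restored value differs from the original by less than one scale step
```

The maximum round-trip error is bounded by the scale, which is why wider float ranges (or fewer bits) cost more accuracy and motivate the calibration and auto-tuning features these toolkits provide.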

https://github.com/intel/neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime

auto-tuning awq fp4 gptq int4 int8 knowledge-distillation large-language-models low-precision mxformat post-training-quantization pruning quantization quantization-aware-training smoothquant sparsegpt sparsity

Last synced: 12 May 2025

https://github.com/666DZY666/micronet

micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) at high bit-widths (>2b: DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low bit-widths (≤2b: ternary and binary, i.e. TWN/BNN/XNOR-Net), plus 8-bit post-training quantization (PTQ, TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group-convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT with fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.

batch-normalization-fuse bnn convolutional-networks dorefa group-convolution integer-arithmetic-only model-compression network-in-network network-slimming neuromorphic-computing onnx post-training-quantization pruning pytorch quantization quantization-aware-training tensorrt tensorrt-int8-python twn xnor-net

Last synced: 20 Mar 2025
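micronet's low-bit path covers binary networks in the XNOR-Net family, where each weight is reduced to its sign times a shared scaling factor. A minimal sketch of that idea in pure Python (illustrative only, not micronet's API):

```python
# Illustrative sketch of XNOR-Net-style binary weight quantization: keep only
# the sign of each weight, scaled by the tensor's mean absolute value so the
# binarized tensor approximates the original in an L1 sense.

def binarize(weights):
    alpha = sum(abs(w) for w in weights) / len(weights)  # per-tensor scale
    return [alpha if w >= 0 else -alpha for w in weights], alpha

binary, alpha = binarize([0.4, -0.2, 0.1, -0.7])
# alpha = (0.4 + 0.2 + 0.1 + 0.7) / 4 = 0.35
# binary = [0.35, -0.35, 0.35, -0.35]
```

With weights constrained to ±alpha, convolutions can be rewritten as XNOR and popcount operations, which is where the large speed and memory savings of binary networks come from.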

https://github.com/alibaba/tinyneuralnetwork

TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.

deep-learning deep-neural-networks model-compression model-converter post-training-quantization pruning pytorch quantization-aware-training

Last synced: 14 Oct 2025

https://github.com/megvii-research/Sparsebit

A model compression and acceleration toolbox based on PyTorch.

deep-learning post-training-quantization pruning quantization quantization-aware-training sparse tensorrt

Last synced: 12 May 2025

https://github.com/beomi/bitnet-transformers

0️⃣1️⃣🤗 BitNet-Transformers: a Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch with the Llama(2) architecture

llm quantization quantization-aware-training transformers

Last synced: 07 May 2025

https://github.com/sayakpaul/adventures-in-tensorflow-lite

This repository contains notebooks that show the usage of TensorFlow Lite for quantizing deep neural networks.

inference model-optimization model-quantization on-device-ml post-training-quantization pruning quantization-aware-training tensorflow-2 tensorflow-lite tf-hub tf-lite-model

Last synced: 20 Sep 2025

https://github.com/bharathsudharsan/cnn_on_mcu

Code for paper 'Multi-Component Optimization and Efficient Deployment of Neural-Networks on Resource-Constrained IoT Hardware'

c-code-generator cmsis-nn edge-computing efficient-inference graph-optimization neuralnetworks optimization quantization quantization-aware-training tflite tflite-conversion tinyml

Last synced: 20 Sep 2025

https://github.com/sandergi/yades

YOLOv8 Animal Detection for Embedded Systems. 97% test accuracy in just 400kb (about the same size as the photos it classifies or 1 second of video). Various quantization, pruning, and distillation techniques for vision models are explored.

animal-detection classification cnn distillation pruning quantization quantization-aware-training yolov8

Last synced: 24 Dec 2025

https://github.com/ambidextrous9/quantization-of-models-ptq-and-qat

Quantization of models: post-training quantization (PTQ) and quantization-aware training (QAT)

keras ptq pytorch pytorch-implementation qat quantization quantization-aware-training tflite tflite-models

Last synced: 21 Aug 2025
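The entry above contrasts PTQ with QAT. What distinguishes QAT is "fake quantization": during training, the forward pass rounds weights to the quantized grid and immediately dequantizes them, so the loss sees quantization error while gradients flow through the rounding via the straight-through estimator. A minimal sketch of that forward step (hypothetical names, pure Python):

```python
# Illustrative sketch of the fake-quantization step used in QAT: round a weight
# to the int8 grid defined by `scale`, clamp to the int8 range, and dequantize
# straight back. Real frameworks do this per-tensor or per-channel and route
# gradients around the rounding with the straight-through estimator.

def fake_quantize(w, scale):
    q = max(-128, min(127, round(w / scale)))  # snap to the int8 grid
    return q * scale                           # dequantize immediately

fake_quantize(0.123, scale=0.01)  # snaps to the nearest grid point, 0.12
fake_quantize(5.0, scale=0.01)    # saturates at the top of the range, 1.27
```

Because the model trains with these rounded weights, it learns to compensate for the quantization error, which is why QAT typically recovers more accuracy than PTQ at the same bit-width.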

https://github.com/sukanyabag/finetuning-qwen2-7b-vqa-on-radiology-scans

This repository fine-tunes the Qwen2 7B VLM to perform VQA (Visual Question Answering) on various kinds of patient radiology and medical scans.

adapter-tuning deep-learning finetuning generative-ai healthcare lora quantization-aware-training vision-language-models visual-question-answering

Last synced: 28 Dec 2025