https://github.com/eidoslab/pruning-for-vision-representation
- Host: GitHub
- URL: https://github.com/eidoslab/pruning-for-vision-representation
- Owner: EIDOSLAB
- Created: 2025-07-02T12:27:20.000Z (7 months ago)
- Default Branch: master
- Last Pushed: 2025-07-02T12:40:13.000Z (7 months ago)
- Last Synced: 2025-07-02T13:44:49.517Z (7 months ago)
- Language: Python
- Size: 145 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
# Pruning for Vision Representation
This repository provides code and tools for research on **pruning neural networks for vision representation tasks**, including classification, object detection, and segmentation. The project leverages PyTorch and TorchVision, and supports various models such as ResNet and ViT (Vision Transformers).
## Features
- **Support for Multiple Architectures**: Several ResNet and ViT variants.
- **Training & Evaluation**: End-to-end scripts for training pruned models and evaluating them on standard vision benchmarks (e.g., ImageNet, VOC, COCO).
- **Visualization & Analysis**: Utilities for comparing model vs. human performance, saving high-quality plots, and analyzing learned representations.
- **Explainability**: Attribution techniques adapted from the [Captum library](https://captum.ai/).
## News
- 📄 The corresponding paper has been **accepted at ICIAP 2025**.
## Getting Started
### Requirements
- Python 3.8+
- PyTorch (>=1.10)
- TorchVision
- numpy, matplotlib, opencv-python, Pillow, and other standard ML libraries
### Datasets
Prepare your datasets (e.g., ImageNet, VOC, COCO) and organize them as follows:
```
your_data_path/
train/
val/
```
Update dataset paths in your scripts as needed.
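Before launching a long training run, it can help to verify the split layout programmatically. The `check_layout` helper below is a hypothetical sketch using only the standard library, not part of this repository:

```python
from pathlib import Path
import tempfile

def check_layout(root):
    """Return the names of expected split directories missing under root."""
    root = Path(root)
    return [s for s in ("train", "val") if not (root / s).is_dir()]

# Illustration with a throwaway directory standing in for your_data_path/
tmp = Path(tempfile.mkdtemp())
(tmp / "train").mkdir()
(tmp / "val").mkdir()
print(check_layout(tmp))  # [] — both splits present
```

A non-empty return value tells you which of `train/` or `val/` needs to be created or symlinked before pointing the scripts at the dataset root.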
## Usage
### Training a Pruned Model
Example for ImageNet:
```bash
python train.py --model resnet18 --data-path /path/to/imagenet --pruning-method snip --target-sparsity 0.5 --epochs 90 --output-dir ./results
```
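To illustrate what a target sparsity of 0.5 means in practice, here is a minimal sketch using PyTorch's built-in `torch.nn.utils.prune` with global magnitude pruning. Note this is an assumption-laden toy, not the repository's implementation: SNIP scores weights by loss sensitivity at initialization rather than by magnitude, and the tiny model below is purely illustrative.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a vision backbone (illustrative only).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]

# Zero out the 50% of weights with the smallest magnitude, globally
# across layers (analogous to --target-sparsity 0.5 above).
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.5)

total = sum(m.weight.numel() for m, _ in params)
zeros = sum((m.weight == 0).sum().item() for m, _ in params)
print(f"sparsity: {zeros / total:.2f}")  # sparsity: 0.50
```

Global (rather than per-layer) pruning lets layers end up at different sparsities as long as the network-wide budget is met, which is the usual interpretation of a single target-sparsity flag.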
### Running Object Discovery (LOST)
```bash
python main_lost.py --arch vit_small --dataset VOC07 --set train --models-dir /path/to/models --data-path /path/to/data
```
### Visualization
Scripts such as `mvh_triple_comparison.py` and `mvh_performance_rn50_vs_rn18.py` generate high-quality performance comparison plots.
## Repository Structure
- `train.py` — Training loop with support for pruning and logging.
- `main_lost.py` — Object discovery with LOST.
- `explain.py` — Explanation and analysis tools (with techniques from [Captum](https://captum.ai/)).
- `utils.py` — Utilities for model export, reproducibility, and more.
- `datasets.py` — Dataset loading and handling.
- `cluster_for_OD.py`, `mvh_triple_comparison.py`, etc. — Additional experiments and analyses.
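For a flavour of the gradient-based attributions behind `explain.py`, here is a minimal vanilla-gradient saliency sketch in plain PyTorch. The repository uses Captum's implementations of more refined techniques; the model and input shapes below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier and a single 3x8x8 "image" (illustrative shapes).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 5))
x = torch.randn(1, 3, 8, 8, requires_grad=True)

score = model(x)[0].max()  # score of the top class
score.backward()           # populates x.grad with d(score)/d(input)

# Per-pixel importance: max absolute gradient across the channel axis.
saliency = x.grad.abs().max(dim=1).values
print(saliency.shape)  # torch.Size([1, 8, 8])
```

More elaborate attribution methods (e.g. Integrated Gradients) refine this same idea by aggregating gradients along a path from a baseline input to the actual input.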
## 📚 Bibtex
```bibtex
@misc{cassano2025doespruningbenefitvision,
title={When Does Pruning Benefit Vision Representations?},
author={Enrico Cassano and Riccardo Renzulli and Andrea Bragagnolo and Marco Grangetto},
year={2025},
eprint={2507.01722},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2507.01722},
}
```
## Acknowledgements
- Some explainability techniques are taken from the [Captum library](https://captum.ai/).
- This code builds on the PyTorch and TorchVision libraries. If you use this repository in your research, please consider citing the relevant papers and this repository.
## License
This project is for research purposes. See individual file headers for license information.
---
**Maintained by [EIDOSLAB](https://eidos.di.unito.it/).**
For questions or contributions, please open an issue or pull request.