https://github.com/eidoslab/neuralvelocity
NeVe - Neural Velocity for hyperparameter tuning (IJCNN 2025)
- Host: GitHub
- URL: https://github.com/eidoslab/neuralvelocity
- Owner: EIDOSLAB
- License: gpl-3.0
- Created: 2025-04-04T14:34:10.000Z (10 months ago)
- Default Branch: master
- Last Pushed: 2025-06-17T16:48:34.000Z (7 months ago)
- Last Synced: 2025-06-17T17:43:48.158Z (7 months ago)
- Language: Python
- Size: 1.08 MB
- Stars: 0
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# [IJCNN 2025] NeVe: Neural Velocity for hyperparameter tuning
[Docker](https://www.docker.com/)
[CUDA](https://developer.nvidia.com/cuda-zone)
[Python 3.8.8](https://www.python.org/downloads/release/python-388/)
[PyTorch](https://pytorch.org/)
[License: GPL-3.0](https://www.gnu.org/licenses/gpl-3.0)
[arXiv](https://arxiv.org/abs/xxxx.xxxxx)
This repository contains the official implementation of the paper:
> **Neural Velocity for hyperparameter tuning**
> *Gianluca Dalmasso et al.*
> IJCNN 2025
> [arXiv / DOI link here]
---
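This README does not reproduce the paper's definition of neural velocity, so the following is an illustrative sketch only, not the official method: it models one plausible reading of the idea, namely tracking how fast the network's parameters move between epochs and reacting when that movement plateaus. The function names, the Euclidean-norm choice, and the plateau rule are all assumptions made for the example.

```python
# Illustrative sketch only: the paper's actual "neural velocity" definition is
# not given in this README. Here, "velocity" is simply the norm of the
# parameter change between two consecutive epochs (an assumption).

def velocity(prev_params, curr_params):
    """Euclidean norm of the parameter change between two epoch snapshots."""
    return sum((c - p) ** 2 for p, c in zip(prev_params, curr_params)) ** 0.5

def should_decay_lr(velocities, patience=3, tol=1e-3):
    """Hypothetical rule: trigger an LR decay once velocity has plateaued."""
    if len(velocities) < patience + 1:
        return False
    recent = velocities[-(patience + 1):]
    return all(abs(recent[i + 1] - recent[i]) < tol for i in range(patience))

# Toy usage with fake flattened parameter snapshots, one per epoch
snapshots = [[0.0, 0.0], [0.5, 0.4], [0.7, 0.6], [0.71, 0.61], [0.712, 0.612]]
vels = [velocity(a, b) for a, b in zip(snapshots, snapshots[1:])]
```

For the real signal definition, refer to the paper and to `src/neve/` in this repository.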

## Project Structure
```text
NeuralVelocity/
├── assets/                      # Teaser images, figures, etc.
├── src/
│   ├── dataloaders/             # CIFAR, ImageNet loaders
│   ├── labelwave/               # Competing method: LabelWave
│   ├── ultimate_optimizer/      # Competing method: Ultimate Optimizer
│   ├── neve/                    # Core method: Neural Velocity
│   ├── models/                  # Model architectures (e.g. CIFAR ResNets, ImageNet ResNets, ...)
│   ├── optimizers/              # Optimizers
│   ├── schedulers/              # LR schedulers
│   ├── swin_transformer/        # Swin Transformer model architecture
│   ├── arguments.py             # CLI args and config parser
│   ├── classification.py        # Training pipeline (base)
│   ├── classification_labelwave.py          # For LabelWave experiments
│   ├── classification_ultimate_optimizer.py # For Ultimate Optimizer experiments
│   └── utils.py                 # Utility functions
├── Dockerfile                   # Default Docker container
├── Dockerfile.python            # Base Python environment
├── Dockerfile.sweep             # Sweep setup (e.g. for tuning)
├── LICENSE                      # GNU GPLv3 license
├── README.md                    # Project overview
├── build.sh                     # Build script (e.g. for Docker or sweep)
├── requirements.txt             # Python dependencies
└── setup.py                     # Install package for pip
```
---
## Getting Started
You can run this project either using a Python virtual environment or a Docker container.
#### Clone the repository
```bash
git clone https://github.com/EIDOSLAB/NeuralVelocity.git
cd NeuralVelocity
```
### Option A: Run with a virtual environment (recommended for development)
#### Create a virtual environment and install dependencies
> This project was developed and tested with Python 3.8.8; we recommend using the same version for full compatibility and reproducibility.
```bash
# 1. Install Python 3.8.8 (only once)
pyenv install 3.8.8
# 2. Create virtual environment
pyenv virtualenv 3.8.8 neve
# 3. Activate the environment
pyenv activate neve
# 4. Install dependencies
pip install -r requirements.txt
```
#### Run training
```bash
cd src
python classification.py
```
### Option B: Run with Docker
You can also use Docker for full environment reproducibility.
#### Build Docker images and push to a remote registry
The `build.sh` script automates the build of all Docker images and pushes them to the configured remote Docker registry.
Before running, make sure to edit `build.sh` to set your remote registry URL and credentials if needed.
Run:
```bash
bash build.sh
```
This will build the following Docker images:
- `neve:base` (default container for training and experiments)
- `neve:python` (base Python environment)
- `neve:sweep` (for hyperparameter sweep experiments)
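The `neve:sweep` image is for hyperparameter sweep experiments, but the README does not document which parameters are swept. As a hedged illustration of what such a sweep iterates over, here is a small grid expander; the parameter names and values (`lr`, `weight_decay`, `seed`) are examples only, not values taken from this repository.

```python
from itertools import product

# Hypothetical sweep grid: the parameters actually swept by the neve:sweep
# image are not documented here; these names and values are examples only.
def make_sweep_configs(grid):
    """Expand a dict of value lists into the Cartesian product of run configs."""
    keys = sorted(grid)
    return [dict(zip(keys, values)) for values in product(*(grid[k] for k in keys))]

grid = {"lr": [0.1, 0.01], "weight_decay": [1e-4, 5e-4], "seed": [0, 1, 2]}
configs = make_sweep_configs(grid)  # 2 * 2 * 3 = 12 run configurations
```

Each resulting config dict could then be passed to a training container as CLI arguments.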
#### Run training inside the container
```bash
# --gpus all is optional: remove it if no GPU is available.
# Append any optional training parameters after the script name.
docker run --rm -it \
  --gpus all \
  neve:python classification.py
```
> Note: you may need to adjust volume mounting (`-v`) depending on your OS and Docker setup.
---
## Datasets
Tested datasets:
- [CIFAR-10 and CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html)
- [ImageNet-100](https://www.image-net.org/challenges/LSVRC/2012/) (must be downloaded separately and prepared in the standard folder format)
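"Standard folder format" here presumably means the torchvision-style `ImageFolder` layout: one subdirectory per class under `train/` and `val/`. That interpretation is an assumption, as are the example paths and the class name below. A minimal stdlib-only sketch for sanity-checking a prepared dataset:

```python
import tempfile
from pathlib import Path

# Checks the ImageFolder-style layout (an assumption about "standard folder
# format"): <root>/train/<class>/... and <root>/val/<class>/...
# The class name "n01440764" below is only an example.
def check_imagefolder_layout(root):
    """Return a list of layout problems; an empty list means the layout looks OK."""
    root = Path(root)
    problems = []
    for split in ("train", "val"):
        split_dir = root / split
        if not split_dir.is_dir():
            problems.append(f"missing split directory: {split_dir}")
            continue
        if not any(d.is_dir() for d in split_dir.iterdir()):
            problems.append(f"no class subdirectories in: {split_dir}")
    return problems

# Demonstrate on a tiny throwaway directory tree
root = Path(tempfile.mkdtemp())
for split in ("train", "val"):
    (root / split / "n01440764").mkdir(parents=True)
problems = check_imagefolder_layout(root)  # -> [] for a well-formed layout
```

Run this against your prepared ImageNet-100 root before launching training to catch layout mistakes early.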
---
## License
This project is licensed under the **GNU General Public License v3.0**.
See the [LICENSE](./LICENSE) file for details.
You are free to use, modify, and distribute this code under the same license terms.
Any derivative work must also be distributed under the GNU GPL.
---
## Acknowledgments
This research was developed at the University of Turin (UniTO), within the [EIDOS Lab](https://www.di.unito.it/~eidos/), and at Télécom Paris.
We thank the members of both institutions for the insightful discussions and support during the development of this work.
---
## Citation
If you use this repository or find our work helpful, please cite:
```bibtex
@misc{dalmasso2025neve,
  title        = {Neural Velocity for Hyperparameter Tuning},
  author       = {Gianluca Dalmasso and Others},
  year         = {2025},
  howpublished = {\url{https://arxiv.org/abs/xxxx.xxxxx}},
  note         = {Accepted at IJCNN 2025. Official citation will be updated upon publication.}
}
```
---
## Contact
For questions or collaborations, feel free to reach out:
- Email: gianluca.dalmasso@unito.it
- GitHub Issues for bugs or feature requests