https://github.com/densechen/eve-mli
eve-mli: making learning interesting
- Host: GitHub
- URL: https://github.com/densechen/eve-mli
- Owner: densechen
- License: mit
- Created: 2020-12-27T01:02:21.000Z (about 5 years ago)
- Default Branch: main
- Last Pushed: 2022-03-19T12:08:00.000Z (almost 4 years ago)
- Last Synced: 2025-12-05T14:28:46.359Z (2 months ago)
- Topics: deep-learning, deep-reinforcement-learning, eve-mli, model-compression, network-architecture, pruning, pypi, pytorch, quantization-efficient-network, reinforcement-learning, spiking-neural-networks
- Language: Python
- Homepage:
- Size: 13.8 MB
- Stars: 9
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md

# OpenSpiking
[Documentation](https://eve-mli.readthedocs.io/en/latest/) · [PyPI](https://pypi.org/project/eve-mli)
OpenSpiking is an open-source deep learning framework for devising and modifying network architectures in a flexible and interesting way.
We provide several Jupyter notebooks under `./examples` that demonstrate how to build various network structures.
The key feature of OpenSpiking is a well-defined framework that lets your network structure be upgraded along with the learning of its weights.
**Any contribution to OpenSpiking is welcome!**
## Installation
Install from [PyPI](https://pypi.org/project/eve-mli/):
```bash
pip install eve-mli
# pip install git+https://github.com/densechen/eve-mli.git
```
Developers can download and install the latest version from:
[GitHub](https://github.com/densechen/eve-mli):
```bash
git clone https://github.com/densechen/eve-mli.git
cd eve-mli
pip install .
```
[Gitee](https://gitee.com/densechen/eve-mli.git):
```bash
git clone https://gitee.com/densechen/eve-mli.git
cd eve-mli
pip install .
```
Validate the installation:
```bash
python -c "import eve; print(eve.__version__)"
```
## Quick Start
The core module of eve-mli is `eve.core.Eve`, a wrapper around `torch.nn.Module`.
In `Eve`, any parameter whose name ends with `_eve` is treated as an eve parameter; the rest are plain torch parameters. Eve buffers and torch buffers are distinguished the same way.
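The suffix convention can be illustrated with a small, library-free sketch. Note that `ToyEve` below is hypothetical and does not use the real `eve` API; it only mirrors the naming rule described above:

```python
class ToyEve:
    """Hypothetical stand-in for `eve.core.Eve` (illustration only).

    Attributes whose names end with `_eve` play the role of eve
    parameters; everything else counts as a plain torch parameter.
    """

    def __init__(self):
        self.weight = [0.5, -0.2]   # "torch" parameter: normal name
        self.threshold_eve = [1.0]  # "eve" parameter: `_eve` suffix

    def eve_parameters(self):
        # Select attributes by the `_eve` suffix convention.
        return {k: v for k, v in vars(self).items() if k.endswith("_eve")}

    def torch_parameters(self):
        return {k: v for k, v in vars(self).items() if not k.endswith("_eve")}


m = ToyEve()
print(sorted(m.eve_parameters()))    # ['threshold_eve']
print(sorted(m.torch_parameters()))  # ['weight']
```

In the real library the partition is done on registered `nn.Module` parameters and buffers rather than raw attributes, but the suffix-based selection is the same idea.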
For eve parameters, you can fetch and attach an `.obs` property via the `eve.core.State` class and assign an upgrade function that modifies the parameter. Eve buffers are useful for caching hidden states; all eve buffers are cleared once `Eve.reset()` is called.
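The buffer-clearing behaviour can be sketched the same way. Again, this is a hypothetical illustration of the described semantics, not the real `Eve.reset()` implementation:

```python
class ToyEveBuffers:
    """Hypothetical sketch: eve buffers (suffix `_eve`) cache hidden
    state per step and are wiped by reset(); torch buffers survive."""

    def __init__(self):
        self.running_mean = 0.0  # "torch" buffer: kept across resets
        self.membrane_eve = []   # "eve" buffer: cleared by reset()

    def forward(self, x):
        self.membrane_eve.append(x)  # cache hidden state for this step
        return x

    def reset(self):
        # Clear every list-valued eve buffer; leave torch buffers alone.
        for name, value in vars(self).items():
            if name.endswith("_eve") and isinstance(value, list):
                value.clear()


m = ToyEveBuffers()
m.forward(1.0)
m.forward(2.0)
m.reset()
print(m.membrane_eve)  # []
print(m.running_mean)  # 0.0
```

This mirrors why `reset()` matters for spiking networks: hidden states such as membrane potentials must be flushed between input sequences, while ordinary statistics persist.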
By default, a model defined with `Eve` behaves the same as an `nn.Module`, so you can train it directly to obtain a baseline model. Then `Eve.spike()` turns it into a spiking neural network module, and `Eve.quantize()` turns it into a quantized neural network model.
## About the project
The documentation can be found [here](https://eve-mli.readthedocs.io).
(Auto-building of the documentation sometimes fails; you can build it manually via `cd docs; make html`.)
The project remains in development. We encourage more volunteers to join us!
**eve-mli-v0.1.0 is released!**
## Next to do
Add CUDA support to speed things up; see [cuSNN](https://github.com/tudelft/cuSNN) for reference.
## About the authors
[Dengsheng Chen](https://densechen.github.io)
Master @ National University of Defense Technology
densechen@foxmail.com
## References
[PyTorch](https://github.com/pytorch/pytorch)
[stable-baselines3](https://github.com/DLR-RM/stable-baselines3)
[spikingjelly](https://github.com/fangwei123456/spikingjelly)
[Model-Compression-Deploy](https://github.com/666DZY666/Model-Compression-Deploy)
[Awesome-Deep-Neural-Network-Compression](https://github.com/csyhhu/Awesome-Deep-Neural-Network-Compression)