# (Implicit)^2: Implicit Layers for Implicit Representations
This repo contains the implementation of the (Implicit)^2 network, an implicit neural representation (INR) learning framework built on the [Deep Equilibrium Model](https://arxiv.org/abs/1909.01377) (DEQ). By taking advantage of the full-batch training scheme commonly applied to INR learning on low-dimensional data (e.g., images and audio), together with an approximated gradient, (Implicit)^2 networks operate on a significantly smaller computation and memory budget than existing explicit models while performing competitively.
![Comparison of explicit & implicit models](/assets/exp_vs_imp.png)
For more info and implementation details, please refer to [our paper](https://openreview.net/forum?id=AcoMwAU5c0s).
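
As a rough sketch of the idea (illustrative only: the names, the sine nonlinearity, and the one-step gradient below are simplifying assumptions, not the exact implementation in this repo), a DEQ-style INR iterates a single weight-tied layer to a fixed point and backpropagates through only the final step as a cheap approximation of the exact implicit gradient:
```
import torch
import torch.nn as nn

class ImplicitINR(nn.Module):
    def __init__(self, in_dim=2, hidden=256, out_dim=3, max_iters=30, tol=1e-4):
        super().__init__()
        self.inject = nn.Linear(in_dim, hidden)    # re-injects coordinates at every iteration
        self.mix = nn.Linear(hidden, hidden)       # single weight-tied layer, iterated to equilibrium
        self.readout = nn.Linear(hidden, out_dim)  # maps the equilibrium state to the signal value
        self.max_iters, self.tol = max_iters, tol

    def f(self, z, x):
        # one application of the implicit layer; sine nonlinearity as in SIREN
        return torch.sin(self.mix(z) + self.inject(x))

    def forward(self, x):
        z = torch.zeros(x.shape[0], self.mix.out_features, device=x.device)
        with torch.no_grad():  # fixed-point iteration; no autograd graph is stored
            for _ in range(self.max_iters):
                z_next = self.f(z, x)
                if (z_next - z).norm() < self.tol * (z.norm() + 1e-8):
                    z = z_next
                    break
                z = z_next
        # one differentiable step at the equilibrium: a cheap one-step
        # approximation of the implicit-function-theorem gradient
        z = self.f(z, x)
        return self.readout(z)

# full-batch fitting of a signal: 2D coordinates -> RGB values
model = ImplicitINR()
coords = torch.rand(1024, 2) * 2 - 1   # stand-in coordinate grid
target = torch.rand(1024, 3)           # stand-in ground-truth pixel values
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
opt.zero_grad()
loss = ((model(coords) - target) ** 2).mean()
loss.backward()
opt.step()
```
Because the fixed-point solve itself runs without an autograd graph, memory cost stays constant in the number of solver iterations, which is where the savings over deep explicit models come from.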
## Data
Data used in this project is publicly available on Google Drive ([link](https://drive.google.com/drive/folders/1AVPQ_cqZTKedGWwJ0R39zSBQXw7LC6Pf?usp=sharing)).
To replicate our experiments, create a _data_ folder under the root directory and download the corresponding datasets into it.
```
📦data
┣ 📂image
┃ ┣ 📜celeba_128_tiny.npy
┃ ┣ 📜data_2d_text.npz
┃ ┗ 📜data_div2k.npz
┣ 📂3d_occupancy
┣ 📂audio
┣ 📂sdf
┗ 📂video
```
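
A quick way to sanity-check a download (a minimal example; the arrays stored inside each archive and their names are assumptions, so inspect the keys first):
```
import numpy as np

# single-array .npy file: loads directly to an ndarray
imgs = np.load("data/image/celeba_128_tiny.npy")
print(imgs.shape, imgs.dtype)

# .npz archives bundle several named arrays; list the keys before indexing
archive = np.load("data/image/data_div2k.npz")
print(archive.files)
```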
## Reproduction of paper results
To reproduce results on image representation and image generalization, run
```
python scripts/train_2d_image.py --config_file ./configs/<task>/config_<dataset>.yaml
```
For other experiments (audio, video, and 3d_occupancy), run
```
python scripts/train_<task>.py --config_file ./configs/<task>/<config>.yaml --dataset <dataset>
```
Below is a list of available dataset options for each task (including some extra data we did not cover in the paper):
```
audio: ['bach', 'counting']
video: ['cat', 'bikes']
3d_occupancy: ['dragon', 'buddha', 'bunny', 'armadillo', 'lucy']
```
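
For instance, a run on the `bach` audio clip would look like the following (the exact config filename is an assumption; check the corresponding folder under `configs` for the actual names):
```
python scripts/train_audio.py --config_file ./configs/audio/config.yaml --dataset bach
```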
## Credits
- The set of experiments on image, video, and audio signals and the corresponding data largely follow [SIREN](https://arxiv.org/abs/2006.09661) and [Fourier Feature Networks](https://arxiv.org/abs/2006.10739).
- Models for the 3D occupancy experiments are retrieved directly from the [Stanford 3D Scanning Repository](http://graphics.stanford.edu/data/3Dscanrep/).

## Citation
```
@inproceedings{huang2021impsq,
author = {Zhichun Huang and Shaojie Bai and J. Zico Kolter},
title = {(Implicit)^2: Implicit Layers for Implicit Representations},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2021},
}
```