https://github.com/ermongroup/self-similarity-prior
Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations
- Host: GitHub
- URL: https://github.com/ermongroup/self-similarity-prior
- Owner: ermongroup
- License: MIT
- Created: 2022-04-11T23:40:57.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2022-11-26T00:14:33.000Z (almost 3 years ago)
- Last Synced: 2025-03-31T16:13:25.869Z (6 months ago)
- Language: Jupyter Notebook
- Size: 68.9 MB
- Stars: 27
- Watchers: 8
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations

https://user-images.githubusercontent.com/34561392/204065084-e3c80d70-8bbb-4ac4-9449-b64dddb85fcc.mp4
[NeurIPS](https://papers.nips.cc/paper/2020/hash/f1686b4badcf28d33ed632036c7ab0b8-Abstract.html) · [arXiv](https://arxiv.org/abs/2204.07673) · [Project page](https://zymrael.github.io/self-similarity-prior/) · [Hugging Face demo](https://huggingface.co/spaces/Zymrael/Neural-Collage-Fractalization)

Many patterns in nature exhibit self-similarity: they can be compactly described via self-referential transformations.
In this work, we investigate the role of learning in the automated discovery of self-similarity and in its utilization for downstream tasks. We design a novel class of implicit operators, Neural Collages, which (1) represent data as the parameters of a self-referential, structured transformation, and (2) employ hypernetworks to amortize the cost of finding these parameters to a single forward pass.
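To make point (2) concrete, here is a minimal `jax` sketch of the amortization idea: a hypernetwork maps an input directly to the parameters of its collage transformation, so encoding costs one forward pass instead of a per-sample optimization loop. All names and shapes below (`init_hypernet`, `hypernet`, the dimensions) are illustrative assumptions, not the repository's API.

```python
import jax
import jax.numpy as jnp

def init_hypernet(key, in_dim, hidden, n_collage_params):
    # Illustrative two-layer MLP hypernetwork.
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (hidden, in_dim)) * 0.01,
        "b1": jnp.zeros(hidden),
        "W2": jax.random.normal(k2, (n_collage_params, hidden)) * 0.01,
        "b2": jnp.zeros(n_collage_params),
    }

def hypernet(params, x_flat):
    # One forward pass: flattened image -> parameters of its collage
    # transformation (no per-sample inner optimization loop).
    h = jax.nn.relu(params["W1"] @ x_flat + params["b1"])
    return params["W2"] @ h + params["b2"]

key = jax.random.PRNGKey(0)
params = init_hypernet(key, in_dim=32 * 32, hidden=128, n_collage_params=80)
batch = jax.random.normal(key, (8, 32 * 32))  # a batch of 8 toy "images"
# Encoding the whole batch is a single (vmapped) forward pass.
collage_params = jax.vmap(hypernet, in_axes=(None, 0))(params, batch)
print(collage_params.shape)  # (8, 80)
```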
We investigate how to leverage the representations produced by Neural Collages in various tasks:
* Lossy image compression
* Deep generative modeling
* Image fractalization

## The Upshot
We introduce a contractive operator as a layer. One application of a Collage Operator involves tokenizing the input domain into two partitions: sources and targets. The source tokens are combined into target tokens, which are then "stitched together" (as a collage). The parameters of a Collage Operator relate parts of the input to itself, and can thus achieve high compression rates on self-similar data; see the sketch below.

The machinery behind collage operators is inspired by fractal compression schemes. We provide a self-contained differentiable implementation of a simple fractal compression scheme in `jax_src/compress/fractal.py`. The script `scripts/fractal_compress_img.py` can be used to fractal-compress a batch of images with this method.
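Below is a minimal, self-contained `jax` sketch of this tokenize-and-stitch step and of fixed-point decoding. It is a toy construction under simplifying assumptions (square grayscale images, average-pooled source tokens, one scalar offset per target token); function names such as `collage_step` and `decode` are hypothetical and do not mirror the API in `jax_src/compress/fractal.py`.

```python
import jax
import jax.numpy as jnp

def collage_step(img, weights, biases, src=16, tgt=8):
    """One application of a toy collage operator on a square grayscale image.

    Source tokens: src x src patches, average-pooled down to tgt x tgt.
    Target tokens: tgt x tgt patches, each rebuilt as an affine mixture
    of the pooled source tokens and stitched back into an image.
    """
    H, W = img.shape
    f = src // tgt
    # Downsample by f, then cut the result into tgt x tgt source tokens.
    pooled = img.reshape(H // f, f, W // f, f).mean(axis=(1, 3))
    hs, ws = (H // f) // tgt, (W // f) // tgt
    sources = pooled.reshape(hs, tgt, ws, tgt).transpose(0, 2, 1, 3)
    sources = sources.reshape(hs * ws, tgt * tgt)       # (n_src, tgt*tgt)
    # Each target token is a weighted mixture of source tokens plus a
    # per-token offset; small mixture weights make the map a contraction.
    targets = weights @ sources + biases                # (n_tgt, tgt*tgt)
    ht, wt = H // tgt, W // tgt
    out = targets.reshape(ht, wt, tgt, tgt).transpose(0, 2, 1, 3)
    return out.reshape(H, W)

def decode(weights, biases, shape=(32, 32), n_iters=10):
    # Decoding = iterating the contraction from an arbitrary start image;
    # by the Banach fixed-point theorem it converges to a unique fixed point.
    img = jnp.zeros(shape)
    for _ in range(n_iters):
        img = collage_step(img, weights, biases)
    return img

key = jax.random.PRNGKey(0)
n_src, n_tgt = 4, 16                           # for a 32 x 32 image
weights = 0.1 * jax.random.normal(key, (n_tgt, n_src))
biases = jnp.zeros((n_tgt, 1))                 # scalar offset per target token
recon = decode(weights, biases)                # (32, 32) fixed-point image
```

Note the compression angle: in this toy setting a 32×32 image (1,024 pixels) is represented by 16×4 mixture weights plus 16 offsets, i.e. 80 numbers, and decoding recovers an image at the original resolution by iterating the operator.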
## Codebase

The codebase is organized as follows. We provide a simple implementation of a Collage Operator and related utilities under `torch_src/`. The bulk of the experiments has been carried out in `jax`. Under `scripts/` we provide training and evaluation scripts for the three main experiments. The lossy image compression experiment is performed on a slice of the aerial dataset described in the paper, which can be found at this [link](https://captain-whu.github.io/DOTA/).

## Citing this work
If you found the paper or this codebase useful, please consider citing:
```bibtex
@article{poli2022self,
  title={Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations},
  author={Poli, Michael and Xu, Winnie and Massaroli, Stefano and Meng, Chenlin and Kim, Kuno and Ermon, Stefano},
  journal={arXiv preprint arXiv:2204.07673},
  year={2022}
}
```