{"id":18317351,"url":"https://github.com/compvis/invariances","last_synced_at":"2026-03-07T11:31:56.993Z","repository":{"id":65983451,"uuid":"281635274","full_name":"CompVis/invariances","owner":"CompVis","description":"Making Sense of CNNs: Interpreting Deep Representations \u0026 Their Invariances with Invertible Neural Networks","archived":false,"fork":false,"pushed_at":"2020-12-18T13:21:38.000Z","size":143575,"stargazers_count":58,"open_issues_count":0,"forks_count":6,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-09-10T05:25:09.683Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://compvis.github.io/invariances/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/CompVis.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-07-22T09:34:47.000Z","updated_at":"2025-03-28T09:03:03.000Z","dependencies_parsed_at":"2023-02-19T19:15:25.372Z","dependency_job_id":null,"html_url":"https://github.com/CompVis/invariances","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/CompVis/invariances","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CompVis%2Finvariances","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CompVis%2Finvariances/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CompVis%2Finvariances/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CompVis%2Finvariances/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/CompVis","download_url":"https://codelo
ad.github.com/CompVis/invariances/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CompVis%2Finvariances/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30212124,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-07T09:02:10.694Z","status":"ssl_error","status_checked_at":"2026-03-07T09:02:08.429Z","response_time":53,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-05T18:05:53.019Z","updated_at":"2026-03-07T11:31:56.926Z","avatar_url":"https://github.com/CompVis.png","language":"Python","readme":"# Making Sense of CNNs: Interpreting Deep Representations \u0026 Their Invariances with Invertible Neural Networks\n\nPyTorch code accompanying the [ECCV 2020](https://eccv2020.eu/) paper\n\n[**Making Sense of CNNs: Interpreting Deep Representations \u0026 Their Invariances with Invertible Neural Networks**](https://compvis.github.io/invariances/)\u003cbr/\u003e\n[Robin Rombach](https://github.com/rromb)\\*,\n[Patrick Esser](https://github.com/pesser)\\*,\n[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)\u003cbr/\u003e\n\\* equal contribution\n\n![teaser](data/overview.png)\u003cbr/\u003e\n[arXiv](https://arxiv.org/abs/2008.01777) | [BibTeX](#bibtex) | [Project Page](https://compvis.github.io/invariances/)\n\nTable of Contents\n=================\n\n* 
[Requirements](#requirements)\n* [Demos](#demos)\n* [Training](#training)\n   * [Data](#data)\n   * [Invariances of Classifiers](#invariances-of-classifiers)\n      * [ResNet](#resnet)\n      * [AlexNet](#alexnet)\n* [Evaluation](#evaluation)\n* [Pretrained Models](#pretrained-models)\n* [BibTeX](#bibtex)\n\n\n## Requirements\nA suitable [conda](https://conda.io/) environment named `invariances` can be created\nand activated with:\n\n```\nconda env create -f environment.yaml\nconda activate invariances\n```\n\nOptionally, you can then also `conda install tensorflow-gpu=1.14` to speed up\nFID evaluations.\n\n## Demos\n![teaser](data/demoteaser.gif)\n\nTo get started you can directly dive into some demos. After installing the requirements as described\nabove, simply run\n\n```\nstreamlit run invariances/demo.py\n```\n\nPlease note that checkpoints will be downloaded on demand, which can take a while. You can see\nthe download progress displayed in the terminal running the streamlit command. \n\nWe provide demonstrations on\n\n- visualization of adversarial attacks\n- visualization of network representations and their invariances\n- revealing the texture bias of *ImageNet*-CNNs\n- visualizing invariances from a video (resulting in image to video translation)\n- image mixing via their network representations\n\nNote that all of the provided demos can be run without a dataset, and you can add \nyour own images into `data/custom`.\n\n## Training\n\n### Data\nIf not present on your disk, all required datasets (*ImageNet*, *AnimalFaces* and *ImageNetAnimals*)\n will be downloaded and prepared automatically. 
The data processing and loading rely on the\n  [autoencoders](https://github.com/edflow/autoencoders) package and are described in more detail \n  [here](https://github.com/edflow/autoencoders#data).\n  \n  **Note:** If you already have one or more of the datasets present, follow the instructions linked \n  above to avoid downloading them again.\n \n\n### Invariances of Classifiers\n\n#### ResNet\n\nTo recover invariances of a ResNet classifier trained on the [AnimalFaces](https://github.com/edflow/autoencoders#animalfaces)\n dataset, run\n\n```\nedflow -b configs/resnet/animalfaces/base.yaml configs/resnet/animalfaces/train/\u003clayer\u003e.yaml -t\n```\n\nwhere `\u003clayer\u003e` is one of `input`, `maxpool`, `layer1`, `layer2`, `layer3`, `layer4`, \n`avgpool`, `fc`, `softmax`.\nTo enable logging to [wandb](https://wandb.ai), adjust\n`configs/project.yaml` and add it to the above command:\n\n```\nedflow -b configs/resnet/animalfaces/base.yaml configs/resnet/animalfaces/train/\u003clayer\u003e.yaml configs/project.yaml -t\n```\n\n#### AlexNet\n\nTo reproduce the training procedure from the paper, run\n\n```\nedflow -b configs/alexnet/base_train.yaml configs/alexnet/train/\u003clayer\u003e.yaml -t\n```\n\nwhere `\u003clayer\u003e` is one of `conv5`, `fc6`, `fc7`, `fc8`, `softmax`.\nTo enable logging to [wandb](https://wandb.ai), adjust\n`configs/project.yaml` and add it to the above command:\n\n```\nedflow -b configs/alexnet/base_train.yaml configs/alexnet/train/\u003clayer\u003e.yaml configs/project.yaml -t\n```\n\n\n## Evaluation\n\nEvaluations run automatically after each epoch of training. To start an\nevaluation manually, run\n\n```\nedflow -p logs/\u003clog_folder\u003e/configs/\u003cconfig\u003e.yaml\n```\n\nand, optionally, add `-c \u003cpath to checkpoint\u003e` to evaluate a specific\ncheckpoint instead of the last one.\n\n\n## Pretrained Models\nPretrained models (e.g. 
autoencoders and classifiers) will be downloaded automatically on their first \nuse in a demo, training or evaluation script. \n\n## BibTeX\n\n```\n@inproceedings{rombach2020invariances,\n  title={Making Sense of CNNs: Interpreting Deep Representations \\\u0026 Their Invariances with INNs},\n  author={Rombach, Robin and Esser, Patrick and Ommer, Bj{\\\"o}rn},\n  booktitle={Proceedings of the European Conference on Computer Vision},\n  year={2020}\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcompvis%2Finvariances","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcompvis%2Finvariances","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcompvis%2Finvariances/lists"}