{"id":13869769,"url":"https://github.com/facebookresearch/NeuralCompression","last_synced_at":"2025-07-15T18:31:57.866Z","repository":{"id":37852637,"uuid":"384474900","full_name":"facebookresearch/NeuralCompression","owner":"facebookresearch","description":"A collection of tools for neural compression enthusiasts.","archived":false,"fork":false,"pushed_at":"2024-09-20T14:21:23.000Z","size":2329,"stargazers_count":560,"open_issues_count":7,"forks_count":48,"subscribers_count":18,"default_branch":"main","last_synced_at":"2025-06-24T02:41:35.631Z","etag":null,"topics":["compression","deep-learning","jax","machine-learning","neural-compression","python","pytorch"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/facebookresearch.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":".github/CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":".github/CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-07-09T15:14:13.000Z","updated_at":"2025-06-18T07:14:28.000Z","dependencies_parsed_at":"2023-09-29T16:30:03.571Z","dependency_job_id":"67ef411b-88f5-4b6b-a7b6-22553c1bbd5b","html_url":"https://github.com/facebookresearch/NeuralCompression","commit_stats":{"total_commits":136,"total_committers":9,"mean_commits":15.11111111111111,"dds":0.5367647058823529,"last_synced_commit":"e43a2aaa804086d5c8976f9802f51b95314a9728"},"previous_names":[],"tags_count":6,"template":false,"template_full_name":null,"purl":"pkg:github/facebookresearch/NeuralCompression","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/facebookresearch%2FNeuralCompression
","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/facebookresearch%2FNeuralCompression/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/facebookresearch%2FNeuralCompression/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/facebookresearch%2FNeuralCompression/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/facebookresearch","download_url":"https://codeload.github.com/facebookresearch/NeuralCompression/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/facebookresearch%2FNeuralCompression/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":265451449,"owners_count":23767768,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["compression","deep-learning","jax","machine-learning","neural-compression","python","pytorch"],"created_at":"2024-08-05T20:01:16.274Z","updated_at":"2025-07-15T18:31:56.017Z","avatar_url":"https://github.com/facebookresearch.png","language":"Python","readme":"# NeuralCompression\n\n[![LICENSE](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/facebookresearch/NeuralCompression/tree/main/LICENSE)\n[![Build and Test](https://github.com/facebookresearch/NeuralCompression/actions/workflows/build-and-test.yml/badge.svg)](https://github.com/facebookresearch/NeuralCompression/actions/workflows/build-and-test.yml)\n\n## What's New\n- **August 2023 (image compression)** - [Released PyTorch implementation of 
MS-ILLM](https://github.com/facebookresearch/NeuralCompression/tree/main/projects/illm)\n- **April 2023 (video compression)** - [Released PyTorch implementation of VCT](https://github.com/facebookresearch/NeuralCompression/tree/main/projects/torch_vct)\n- **November 2022 (image compression)** - [Released Bits-Back coding with diffusion models](https://github.com/facebookresearch/NeuralCompression/tree/main/projects/bits_back_diffusion)!\n\n## About\n\nNeuralCompression is a Python repository dedicated to research on neural\nnetworks that compress data. The repository includes tools such as JAX-based\nentropy coders, image compression models, video compression models, and metrics\nfor image and video evaluation.\n\nNeuralCompression is alpha software. The project is under active development.\nThe API will change as we make releases, potentially breaking backwards\ncompatibility.\n\n## Installation\n\nNeuralCompression is a project currently under development. You can install the\nrepository in development mode.\n\n### PyPI Installation\n\nFirst, install PyTorch according to the directions from the\n[PyTorch website](https://pytorch.org/). Then, you should be able to run\n\n```bash\npip install neuralcompression\n```\n\nto get the latest version from PyPI.\n\n### Development Installation\n\nFirst, clone the repository, navigate to the NeuralCompression root\ndirectory, and install the package in development mode by running:\n\n```bash\npip install --editable \".[tests]\"\n```\n\nIf you are not interested in matching the test environment, then you can just\nrun `pip install -e .`.\n\n## Repository Structure\n\nWe use a 2-tier repository structure. The `neuralcompression` package contains\na core set of tools for doing neural compression research. Code committed to\nthe core package requires stricter linting, high code quality, and rigorous\nreview. The `projects` folder contains code for reproducing papers and training\nbaselines. 
Code in this folder is not linted aggressively, we don't enforce\ntype annotations, and it's okay to omit unit tests.\n\nThe 2-tier structure enables rapid iteration and reproduction via code in\n`projects` that is built on a backbone of high-quality code in\n`neuralcompression`.\n\n## neuralcompression\n\n- `neuralcompression` - base package\n  - `data` - PyTorch data loaders for various data sets\n  - `distributions` - extensions of probability models for compression\n  - `functional` - methods for image warping, information cost, flop counting, etc.\n  - `layers` - building blocks for compression models\n  - `metrics` - `torchmetrics` classes for assessing model performance\n  - `models` - complete compression models\n  - `optim` - useful optimization utilities\n\n## projects\n\n- `projects` - recipes and code for reproducing papers\n  - `bits_back_diffusion` - code for bits-back coding with diffusion models\n  - `deep_video_compression` - [DVC (Lu et al., 2019)](https://openaccess.thecvf.com/content_CVPR_2019/html/Lu_DVC_An_End-To-End_Deep_Video_Compression_Framework_CVPR_2019_paper.html), deprecated\n  - `illm` - [MS-ILLM (Muckley et al., 2023)](https://proceedings.mlr.press/v202/muckley23a.html)\n  - `jax_entropy_coders` - implementations of arithmetic coding and ANS in JAX\n  - `torch_vct` - [VCT (Mentzer et al., 2022)](https://proceedings.neurips.cc/paper_files/paper/2022/hash/54dcf25318f9de5a7a01f0a4125c541e-Abstract-Conference.html)\n\n## Tutorial Notebooks\n\nThis repository also features interactive notebooks detailing different\nparts of the package, which can be found in the `tutorials` directory. 
\nExisting tutorials are:\n\n- Walkthrough of the `neuralcompression` flop counter ([view on Colab](https://colab.research.google.com/github/facebookresearch/NeuralCompression/blob/main/tutorials/Flop_Count_Example.ipynb)).\n- Using `neuralcompression.metrics` and `torchmetrics` to calculate rate-distortion curves ([view on Colab](https://colab.research.google.com/github/facebookresearch/NeuralCompression/blob/main/tutorials/Metrics_Example.ipynb)).\n\n## Contributions\n\nPlease read our [CONTRIBUTING](https://github.com/facebookresearch/NeuralCompression/tree/main/.github/CONTRIBUTING.md) guide and our\n[CODE_OF_CONDUCT](https://github.com/facebookresearch/NeuralCompression/tree/main/.github/CODE_OF_CONDUCT.md) prior to submitting a pull\nrequest.\n\nWe test all pull requests. We rely on this for reviews, so please make sure any\nnew code is tested. Tests for `neuralcompression` go in the `tests` folder in\nthe root of the repository. Tests for individual projects go in those projects'\nown `tests` folder.\n\nWe use `black` for formatting, `isort` for import sorting, `flake8` for\nlinting, and `mypy` for type checking.\n\n## License\n\nNeuralCompression is MIT licensed, as found in the [LICENSE](https://github.com/facebookresearch/NeuralCompression/tree/main/LICENSE) file.\n\nModel weights released with NeuralCompression are CC-BY-NC 4.0 licensed, as\nfound in the [WEIGHTS_LICENSE](https://github.com/facebookresearch/NeuralCompression/tree/main/WEIGHTS_LICENSE)\nfile.\n\nSome of the code may come from other repositories and include other licenses.\nPlease read all code files carefully for details.\n\n## Cite\n\nIf you use code for a paper reimplementation, please cite the original paper. 
If you would like to also cite\nthe repository, you can use:\n\n```bibtex\n@misc{muckley2021neuralcompression,\n    author={Matthew Muckley and Jordan Juravsky and Daniel Severo and Mannat Singh and Quentin Duval and Karen Ullrich},\n    title={NeuralCompression},\n    howpublished={\\url{https://github.com/facebookresearch/NeuralCompression}},\n    year={2021}\n}\n```\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffacebookresearch%2FNeuralCompression","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffacebookresearch%2FNeuralCompression","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffacebookresearch%2FNeuralCompression/lists"}