{"id":25999812,"url":"https://github.com/LowLevelAI/GLARE","last_synced_at":"2025-03-05T18:41:42.067Z","repository":{"id":247330953,"uuid":"825562499","full_name":"LowLevelAI/GLARE","owner":"LowLevelAI","description":"Official implementation of GLARE, which was accepted by ECCV 2024.","archived":false,"fork":false,"pushed_at":"2024-12-31T11:25:01.000Z","size":3565,"stargazers_count":83,"open_issues_count":9,"forks_count":3,"subscribers_count":5,"default_branch":"main","last_synced_at":"2024-12-31T12:25:52.838Z","etag":null,"topics":["codebook","eccv24","low-light-image-enhancement","normalizing-flow"],"latest_commit_sha":null,"homepage":"https://arxiv.org/pdf/2407.12431","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/LowLevelAI.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-07-08T04:55:13.000Z","updated_at":"2024-12-31T11:25:05.000Z","dependencies_parsed_at":"2024-07-24T01:12:27.805Z","dependency_job_id":"03dfecd2-2788-4b1b-8580-c2123d69c083","html_url":"https://github.com/LowLevelAI/GLARE","commit_stats":null,"previous_names":["lowlevelai/glare"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LowLevelAI%2FGLARE","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LowLevelAI%2FGLARE/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LowLevelAI%2FGLARE/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LowLevelAI%2
FGLARE/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/LowLevelAI","download_url":"https://codeload.github.com/LowLevelAI/GLARE/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":242083057,"owners_count":20069232,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["codebook","eccv24","low-light-image-enhancement","normalizing-flow"],"created_at":"2025-03-05T18:40:47.864Z","updated_at":"2025-03-05T18:41:42.040Z","avatar_url":"https://github.com/LowLevelAI.png","language":"Python","readme":"# [ECCV 2024] GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval [[Paper]](https://arxiv.org/pdf/2407.12431)\n\n\u003ch4 align=\"center\"\u003eHan Zhou\u003csup\u003e1,*\u003c/sup\u003e, Wei Dong\u003csup\u003e1,*\u003c/sup\u003e, Xiaohong Liu\u003csup\u003e2,\u0026dagger;\u003c/sup\u003e, Shuaicheng Liu\u003csup\u003e3\u003c/sup\u003e, Xiongkuo Min\u003csup\u003e2\u003c/sup\u003e, Guangtao Zhai\u003csup\u003e2\u003c/sup\u003e, Jun Chen\u003csup\u003e1,\u0026dagger;\u003c/sup\u003e\u003c/h4\u003e\n\u003ch4 align=\"center\"\u003e\u003csup\u003e1\u003c/sup\u003eMcMaster University, \u003csup\u003e2\u003c/sup\u003eShanghai Jiao Tong University,\u003c/h4\u003e\n\u003ch4 align=\"center\"\u003e\u003csup\u003e3\u003c/sup\u003eUniversity of Electronic Science and Technology of China\u003c/h4\u003e\n\u003ch4 align=\"center\"\u003e\u003csup\u003e*\u003c/sup\u003eEqual Contribution, \u003csup\u003e\u0026dagger;\u003c/sup\u003eCorresponding 
Authors\u003c/h4\u003e\n\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/glare-low-light-image-enhancement-via/low-light-image-enhancement-on-lolv2)](https://paperswithcode.com/sota/low-light-image-enhancement-on-lolv2?p=glare-low-light-image-enhancement-via)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/glare-low-light-image-enhancement-via/low-light-image-enhancement-on-lolv2-1)](https://paperswithcode.com/sota/low-light-image-enhancement-on-lolv2-1?p=glare-low-light-image-enhancement-via)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/glare-low-light-image-enhancement-via/low-light-image-enhancement-on-lol)](https://paperswithcode.com/sota/low-light-image-enhancement-on-lol?p=glare-low-light-image-enhancement-via)\n\n### Introduction\nThis repository contains the official implementation of our ECCV 2024 paper **GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval**. If you find this repo useful, please give it a star ⭐ and consider citing our paper in your research. Thank you.\n\n[![License](https://img.shields.io/badge/License-Apache--2.0-929292)](https://www.apache.org/licenses/LICENSE-2.0)\n\nWe present GLARE, a novel network for low-light image enhancement.\n\n- **Codebook-based LLIE**: exploits normal-light (NL) images to extract an NL codebook prior that serves as guidance.\n- **Generative Feature Learning**: develops an invertible latent normalizing-flow strategy for feature alignment.\n- **Adaptive Feature Transformation**: adaptively introduces input information into the decoding process and allows flexible adjustment by users.\n- **Future**: the network structure can be further optimized to improve efficiency and performance.\n\n### Overall Framework\n![teaser](images/framework.png)\n\n## 📢 News\n**2024-12-31:** Training code has been released.  
⭐ \u003cbr\u003e\n**2024-09-25:** Another paper of ours, **ECMamba: Consolidating Selective State Space Model with Retinex Guidance for Efficient Multiple Exposure Correction**, has been accepted by NeurIPS 2024. Code and pre-print will be released at: \u003ca href=\"https://github.com/LowlevelAI/ECMamba\"\u003e\u003cimg src=\"https://img.shields.io/github/stars/LowlevelAI/ECMamba\"/\u003e\u003c/a\u003e. :rocket:\u003cbr\u003e\n**2024-09-21:** Inference code for unpaired images and pre-trained models for LOL-v2-real have been released! :rocket:\u003cbr\u003e\n**2024-07-21:** Inference code and pre-trained models for LOL have been released! Feel free to use them. ⭐ \u003cbr\u003e\n**2024-07-21:** The [license](LICENSE.txt) has been updated to the Apache License, Version 2.0. 💫 \u003cbr\u003e\n**2024-07-19:** The paper is available at: \u003ca href=\"https://arxiv.org/pdf/2407.12431\"\u003e\u003cimg src=\"https://img.shields.io/badge/arXiv-PDF-b31b1b\" height=\"16\"\u003e\u003c/a\u003e. :tada: \u003cbr\u003e\n**2024-07-01:** Our paper has been accepted by ECCV 2024. Code and models will be released. 
:rocket:\u003cbr\u003e\n\n\n## 🛠️ Setup\n\nThe inference code was tested on:\n\n- Ubuntu 22.04 LTS, Python 3.8, CUDA 11.3, and a GeForce RTX 2080 Ti (or a GPU with at least as much memory).\n\n### 📦 Repository\n\nClone the repository (requires git):\n\n```bash\ngit clone https://github.com/LowLevelAI/GLARE.git\ncd GLARE\n```\n\n### 💻 Dependencies\n\n- **Create a [Conda](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) environment:**\n\n    ```bash\n    conda create -n glare python=3.8\n    conda activate glare\n    ```\n- **Then install the dependencies:**\n\n  ```bash\n  conda install pytorch=1.11 torchvision cudatoolkit=11.3 -c pytorch\n  pip install addict future lmdb numpy opencv-python Pillow pyyaml requests scikit-image scipy tqdm yapf einops tb-nightly natsort\n  pip install pyiqa==0.1.4\n  pip install pytorch_lightning==1.6.0\n  pip install --force-reinstall charset-normalizer==3.1.0\n  ```\n\n- **Build the CUDA extensions:**\n\n  ```bash\n  cd GLARE/defor_cuda_ext\n  BASICSR_EXT=True python setup.py develop\n  ```\n\n- **Move the compiled CUDA extension** (/GLARE/defor_cuda_ext/basicsr/ops/dcn/deform_conv_ext.xxxxxx.so) to **/GLARE/code/models/modules/ops/dcn/**.\n\n\n## 🏃 Testing on benchmark datasets\n\n### 📷 Download the following datasets:\n\nLOL [Google Drive](https://drive.google.com/file/d/1L-kqSQyrmMueBh_ziWoPFhfsAh50h20H/view?usp=sharing)\n\nLOL-v2 [Google Drive](https://drive.google.com/file/d/1Ou9EljYZW8o5dbDCf9R34FS8Pd8kEp2U/view?usp=sharing)\n\n### ⬇ Download pre-trained models\n\nDownload the [pre-trained weights for LOL](https://drive.google.com/drive/folders/1DuATvqpNgRGlPq5_LvvzghkFdFL9sYvq) and the [pre-trained weights for LOL-v2-real](https://drive.google.com/drive/folders/1Cesn3jJAdxjT7DDZCTMU8Vt2CnauBL7F?usp=drive_link), and place them in the folders `pretrained_weights_lol` and `pretrained_weights_lol-v2-real`, respectively.\n\n### 🚀 Run inference\n\nFor the LOL dataset:\n\n```bash\npython code/infer_dataset_lol.py\n```\n\nFor the LOL-v2-real dataset:\n\n```bash\npython code/infer_dataset_lolv2-real.py\n```\n\nFor unpaired testing, please make sure that *dataroot_unpaired* in the .yml file is correct.\n\n```bash\npython code/infer_unpaired.py\n```\n\nYou can find all results in `results/`. **Enjoy**!\n\n\n## 🏋️ Training\n\nDownload the [VQGAN weight for LOL](https://drive.google.com/drive/folders/1DuATvqpNgRGlPq5_LvvzghkFdFL9sYvq) and place it in the folder `pretrained_weights_lol`.\n\nFor stage 2 training:\n```bash\npython code/train_stage2.py\n```\n\nFor stage 2 testing:\n```bash\npython code/test_stage2.py\n```\n\nFor stage 3 training:\n```bash\npython code/train_stage3.py\n```\n\nFor stage 3 testing:\n```bash\npython code/test_stage3.py\n```\n\n\n## ✏️ Contributing\n\nPlease refer to [these instructions](CONTRIBUTING.md).\n\n## 🎓 Citation\n\nPlease cite our paper:\n\n```bibtex\n@inproceedings{han2025glare,\n  title={Glare: Low light image enhancement via generative latent feature based codebook retrieval},\n  author={Zhou, Han and Dong, Wei and Liu, Xiaohong and Liu, Shuaicheng and Min, Xiongkuo and Zhai, Guangtao and Chen, Jun},\n  booktitle={European Conference on Computer Vision},\n  pages={36--54},\n  year={2025},\n  organization={Springer}\n}\n\n@article{GLARE,\n  title={GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval},\n  author={Zhou, Han and Dong, Wei and Liu, Xiaohong and Liu, Shuaicheng and Min, Xiongkuo and Zhai, Guangtao and Chen, Jun},\n  journal={arXiv preprint arXiv:2407.12431},\n  year={2024}\n}\n```\n\n## 🎫 License\n\nThis work is licensed under the Apache License, Version 2.0 (as defined in the [LICENSE](LICENSE.txt)).\n\nBy downloading and using the code and models, you agree to the terms in the [LICENSE](LICENSE.txt).\n\n[![License](https://img.shields.io/badge/License-Apache--2.0-929292)](https://www.apache.org/licenses/LICENSE-2.0)\n\n\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FLowLevelAI%2FGLARE","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FLowLevelAI%2FGLARE","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FLowLevelAI%2FGLARE/lists"}