{"id":13646328,"url":"https://github.com/foundationvision/var","last_synced_at":"2025-04-10T02:14:24.967Z","repository":{"id":231462140,"uuid":"780522250","full_name":"FoundationVision/VAR","owner":"FoundationVision","description":"[NeurIPS 2024 Best Paper][GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of \"Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction\". An *ultra-simple, user-friendly yet state-of-the-art* codebase for autoregressive image generation!","archived":false,"fork":false,"pushed_at":"2025-03-22T12:26:22.000Z","size":635,"stargazers_count":7408,"open_issues_count":39,"forks_count":463,"subscribers_count":100,"default_branch":"main","last_synced_at":"2025-04-10T02:14:20.961Z","etag":null,"topics":["auto-regressive-model","autoregressive-models","diffusion-models","generative-ai","generative-model","gpt","gpt-2","image-generation","large-language-models","neurips","transformers","vision-transformer"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/FoundationVision.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-04-01T16:53:18.000Z","updated_at":"2025-04-10T00:05:50.000Z","dependencies_parsed_at":"2024-12-03T15:02:35.497Z","dependency_job_id":"525a7d57-273a-443b-9649-26b9771ca59c","html_url":"https://github.com/FoundationVision/VAR","commit_stats":null,"previous_names":["foundationvision/var"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FoundationVision%2FVAR","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FoundationVision%2FVAR/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FoundationVision%2FVAR/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FoundationVision%2FVAR/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/FoundationVision","download_url":"https://codeload.github.com/FoundationVision/VAR/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248142903,"owners_count":21054671,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["auto-regressive-model","autoregressive-models","diffusion-models","generative-ai","generative-model","gpt","gpt-2","image-generation","large-language-models","neurips","transformers","vision-transformer"],"created_at":"2024-08-02T01:02:52.981Z","updated_at":"2025-04-10T02:14:24.935Z","avatar_url":"https://github.com/FoundationVision.png","language":"Jupyter Notebook","readme":"# VAR: a new visual generation method elevates 
## What's New?

### 🔥 Introducing VAR: a new paradigm in autoregressive visual generation✨:

Visual Autoregressive Modeling (VAR) redefines autoregressive learning on images as coarse-to-fine "next-scale prediction" or "next-resolution prediction", diverging from the standard raster-scan "next-token prediction".

<p align="center">
<img src="https://github.com/FoundationVision/VAR/assets/39692511/3e12655c-37dc-4528-b923-ec6c4cfef178" width=93%>
</p>
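To make the paradigm concrete, here is a minimal sketch of the coarse-to-fine decoding loop in Python. The `transformer` callable, the `generate_token_maps` helper, and the exact scale schedule are illustrative assumptions rather than the repository's real API; see [demo_sample.ipynb](demo_sample.ipynb) and the `models/` package for the actual implementation.

```python
import torch

# Illustrative next-scale prediction loop (placeholder names, NOT the real VAR API).
# At each step, the transformer predicts ALL tokens of the next, higher-resolution
# token map in a single forward pass, conditioned on every coarser map generated so
# far -- unlike raster-scan AR models, which emit one token at a time.
def generate_token_maps(transformer, class_label: int,
                        scale_schedule=(1, 2, 3, 4, 5, 6, 8, 10, 13, 16)):
    token_maps = []                                 # token maps from all coarser scales
    for side in scale_schedule:
        # logits for all side*side tokens of this scale: shape (side*side, vocab_size)
        logits = transformer(token_maps, class_label)
        tokens = torch.distributions.Categorical(logits=logits).sample()
        token_maps.append(tokens.view(side, side))  # later scales condition on this map
    return token_maps  # the multi-scale maps are then decoded to pixels by the VQVAE
```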
### 🔥 For the first time, GPT-style autoregressive models surpass diffusion models🚀:

<p align="center">
<img src="https://github.com/FoundationVision/VAR/assets/39692511/cc30b043-fa4e-4d01-a9b1-e50650d5675d" width=55%>
</p>

### 🔥 Discovering power-law Scaling Laws in VAR transformers📈:

<p align="center">
<img src="https://github.com/FoundationVision/VAR/assets/39692511/c35fb56e-896e-4e4b-9fb9-7a1c38513804" width=85%>
</p>
<p align="center">
<img src="https://github.com/FoundationVision/VAR/assets/39692511/91d7b92c-8fc3-44d9-8fb4-73d6cdb8ec1e" width=85%>
</p>
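As a quick operational reminder of what the power law means: a relation of the form L = a·N^b plots as a straight line in log-log space, so the exponent b can be read off with a plain linear fit on the logarithms. The numbers below are placeholders for illustration only, not measurements from the paper:

```python
import numpy as np

# Hypothetical (parameter count, test loss) pairs -- placeholders, NOT paper results.
n_params = np.array([310e6, 600e6, 1.0e9, 2.0e9])
test_loss = np.array([5.2, 4.8, 4.5, 4.2])

# A power law L = a * N^b is linear in log-log space: log L = log a + b * log N.
b, log_a = np.polyfit(np.log(n_params), np.log(test_loss), deg=1)
print(f"fitted exponent b = {b:.4f}, coefficient a = {np.exp(log_a):.4f}")
```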
### 🔥 Zero-shot generalizability🛠️:

<p align="center">
<img src="https://github.com/FoundationVision/VAR/assets/39692511/a54a4e52-6793-4130-bae2-9e459a08e96a" width=70%>
</p>

#### For a deep dive into our analyses, discussions, and evaluations, check out our [paper](https://arxiv.org/abs/2404.02905).


## VAR zoo
We provide VAR models for you to play with, which are hosted on <a href='https://huggingface.co/FoundationVision/var'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-FoundationVision/var-yellow'></a> or can be downloaded from the following links:

|   model    | reso. |   FID    | rel. cost | #params | HF weights🤗                                                                         |
|:----------:|:-----:|:--------:|:---------:|:-------:|:------------------------------------------------------------------------------------|
|  VAR-d16   |  256  |   3.55   |    0.4    |  310M   | [var_d16.pth](https://huggingface.co/FoundationVision/var/resolve/main/var_d16.pth) |
|  VAR-d20   |  256  |   2.95   |    0.5    |  600M   | [var_d20.pth](https://huggingface.co/FoundationVision/var/resolve/main/var_d20.pth) |
|  VAR-d24   |  256  |   2.33   |    0.6    |  1.0B   | [var_d24.pth](https://huggingface.co/FoundationVision/var/resolve/main/var_d24.pth) |
|  VAR-d30   |  256  |   1.97   |     1     |  2.0B   | [var_d30.pth](https://huggingface.co/FoundationVision/var/resolve/main/var_d30.pth) |
| VAR-d30-re |  256  | **1.80** |     1     |  2.0B   | [var_d30.pth](https://huggingface.co/FoundationVision/var/resolve/main/var_d30.pth) |
|  VAR-d36   |  512  | **2.63** |     -     |  2.3B   | [var_d36.pth](https://huggingface.co/FoundationVision/var/resolve/main/var_d36.pth) |

You can load these models to generate images via the code in [demo_sample.ipynb](demo_sample.ipynb). Note: you need to download [vae_ch160v4096z32.pth](https://huggingface.co/FoundationVision/var/resolve/main/vae_ch160v4096z32.pth) first.
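If you prefer scripted downloads over clicking the links above, something along these lines should work. This is only a fetch-and-load sketch using `huggingface_hub`; constructing the actual VAE and VAR modules is shown in [demo_sample.ipynb](demo_sample.ipynb), which remains the authoritative reference.

```python
import torch
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Fetch the multi-scale VQVAE tokenizer and a VAR transformer checkpoint from the HF hub.
vae_ckpt = hf_hub_download("FoundationVision/var", "vae_ch160v4096z32.pth")
var_ckpt = hf_hub_download("FoundationVision/var", "var_d16.pth")

vae_state = torch.load(vae_ckpt, map_location="cpu")
var_state = torch.load(var_ckpt, map_location="cpu")
# ...build the VAE and VAR-d16 models as in demo_sample.ipynb, then:
# vae.load_state_dict(vae_state); var.load_state_dict(var_state)
```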
## Installation

1. Install `torch>=2.0.0`.
2. Install other pip packages via `pip3 install -r requirements.txt`.
3. Prepare the [ImageNet](http://image-net.org/) dataset.
    <details>
    <summary> Assuming ImageNet is in `/path/to/imagenet`, it should look like this:</summary>

    ```
    /path/to/imagenet/:
        train/:
            n01440764:
                many_images.JPEG ...
            n01443537:
                many_images.JPEG ...
        val/:
            n01440764:
                ILSVRC2012_val_00000293.JPEG ...
            n01443537:
                ILSVRC2012_val_00000236.JPEG ...
    ```
   **NOTE: The arg `--data_path=/path/to/imagenet` should be passed to the training script.**
    </details>

4. (Optional) Install and compile `flash-attn` and `xformers` for faster attention computation. Our code will automatically use them if installed. See [models/basic_var.py#L15-L30](models/basic_var.py#L15-L30).


## Training Scripts

To train VAR-{d16, d20, d24, d30, d36-s} on ImageNet 256x256 or 512x512, you can run the following commands:
```shell
# d16, 256x256
torchrun --nproc_per_node=8 --nnodes=... --node_rank=... --master_addr=... --master_port=... train.py \
  --depth=16 --bs=768 --ep=200 --fp16=1 --alng=1e-3 --wpe=0.1
# d20, 256x256
torchrun --nproc_per_node=8 --nnodes=... --node_rank=... --master_addr=... --master_port=... train.py \
  --depth=20 --bs=768 --ep=250 --fp16=1 --alng=1e-3 --wpe=0.1
# d24, 256x256
torchrun --nproc_per_node=8 --nnodes=... --node_rank=... --master_addr=... --master_port=... train.py \
  --depth=24 --bs=768 --ep=350 --tblr=8e-5 --fp16=1 --alng=1e-4 --wpe=0.01
# d30, 256x256
torchrun --nproc_per_node=8 --nnodes=... --node_rank=... --master_addr=... --master_port=... train.py \
  --depth=30 --bs=1024 --ep=350 --tblr=8e-5 --fp16=1 --alng=1e-5 --wpe=0.01 --twde=0.08
# d36-s, 512x512 (-s means saln=1, shared AdaLN)
torchrun --nproc_per_node=8 --nnodes=... --node_rank=... --master_addr=... --master_port=... train.py \
  --depth=36 --saln=1 --pn=512 --bs=768 --ep=350 --tblr=8e-5 --fp16=1 --alng=5e-6 --wpe=0.01 --twde=0.08
```
A folder named `local_output` will be created to save the checkpoints and logs.
You can monitor the training process by checking the logs in `local_output/log.txt` and `local_output/stdout.txt`, or by using `tensorboard --logdir=local_output/`.

If your experiment is interrupted, just rerun the command, and training will **automatically resume** from the last checkpoint in `local_output/ckpt*.pth` (see [utils/misc.py#L344-L357](utils/misc.py#L344-L357)).

## Sampling & Zero-shot Inference

For FID evaluation, use `var.autoregressive_infer_cfg(..., cfg=1.5, top_p=0.96, top_k=900, more_smooth=False)` to sample 50,000 images (50 per class) and save them as PNG (not JPEG) files in a folder. Pack them into a `.npz` file via `create_npz_from_sample_folder(sample_folder)` in [utils/misc.py#L360](utils/misc.py#L360).
Then use [OpenAI's FID evaluation toolkit](https://github.com/openai/guided-diffusion/tree/main/evaluations) and the reference ground-truth npz file for [256x256](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/256/VIRTUAL_imagenet256_labeled.npz) or [512x512](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/512/VIRTUAL_imagenet512.npz) to evaluate FID, IS, precision, and recall.

Note that a relatively small `cfg=1.5` is used to trade off image quality against diversity. You can raise it to `cfg=5.0`, or sample with `autoregressive_infer_cfg(..., more_smooth=True)`, for **better visual quality**.
We'll provide the sampling script later.
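Until the official sampling script lands, a loop in the spirit of the recipe above might look like the sketch below. It assumes `var` is a VAR model loaded as in [demo_sample.ipynb](demo_sample.ipynb); the `B`/`label_B` argument names, the output value range, and the file-naming scheme are assumptions to double-check against the notebook.

```python
import os
import torch
import torchvision

# Sketch of the 50-images-per-class FID sampling loop described above.
sample_folder = "samples_d16_cfg1.5"  # illustrative folder name
os.makedirs(sample_folder, exist_ok=True)

with torch.inference_mode():
    for class_id in range(1000):
        labels = torch.full((50,), class_id, device="cuda")
        images = var.autoregressive_infer_cfg(  # assumed to return images in [0, 1]
            B=50, label_B=labels, cfg=1.5, top_p=0.96, top_k=900, more_smooth=False
        )
        for i, img in enumerate(images):
            # PNG, not JPEG: lossy compression would bias the FID statistics.
            torchvision.utils.save_image(img, f"{sample_folder}/{class_id:04d}_{i:02d}.png")

# Then pack for OpenAI's evaluator:
# from utils.misc import create_npz_from_sample_folder
# create_npz_from_sample_folder(sample_folder)
```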
## Third-party Usage and Research

***In this section, we cross-link third-party repositories and research that use VAR and report results. You can let us know about your work by raising an issue.***

(Note: please report accuracy numbers and provide trained models in your new repository, so that others can get a sense of the correctness and behavior of your models.)

| **Time**     | **Research**                                                                                                       | **Link**                                                           |
|--------------|--------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------|
| [3/3/2025]   | Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator  | https://research.nvidia.com/labs/dir/ddo/                          |
| [2/28/2025]  | Autoregressive Medical Image Segmentation via Next-Scale Mask Prediction                                           | https://arxiv.org/abs/2502.20784                                   |
| [2/27/2025]  | FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction                                       | https://github.com/jiaosiyu1999/FlexVAR                            |
| [2/17/2025]  | MARS: Mesh AutoRegressive Model for 3D Shape Detailization                                                         | https://arxiv.org/abs/2502.11390                                   |
| [1/31/2025]  | Visual Autoregressive Modeling for Image Super-Resolution                                                          | https://github.com/quyp2000/VARSR                                  |
| [1/26/2025]  | Visual Generation Without Guidance                                                                                 | https://github.com/thu-ml/GFT                                      |
| [1/21/2025]  | VARGPT: Unified Understanding and Generation in a Visual Autoregressive Multimodal Large Language Model            | https://github.com/VARGPT-family/VARGPT                            |
| [12/30/2024] | Next Token Prediction Towards Multimodal Intelligence                                                              | https://github.com/LMM101/Awesome-Multimodal-Next-Token-Prediction |
| [12/30/2024] | Varformer: Adapting VAR’s Generative Prior for Image Restoration                                                   | https://arxiv.org/abs/2412.21063                                   |
| [12/22/2024] | [ICLR 2025] Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching             | https://github.com/imagination-research/distilled-decoding         |
| [12/19/2024] | FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching                                             | https://github.com/OliverRensu/FlowAR                              |
| [12/13/2024] | 3D representation in 512-Byte: Variational tokenizer is the key for autoregressive 3D generation                   | https://github.com/sparse-mvs-2/VAT                                |
| [12/9/2024]  | CARP: Visuomotor Policy Learning via Coarse-to-Fine Autoregressive Prediction                                      | https://carp-robot.github.io/                                      |
| [12/5/2024]  | [CVPR 2025] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis                | https://github.com/FoundationVision/Infinity                       |
| [12/5/2024]  | [CVPR 2025] Switti: Designing Scale-Wise Transformers for Text-to-Image Synthesis                                  | https://github.com/yandex-research/switti                          |
| [12/4/2024]  | [CVPR 2025] TokenFlow🚀: Unified Image Tokenizer for Multimodal Understanding and Generation                       | https://github.com/ByteFlow-AI/TokenFlow                           |
| [12/3/2024]  | XQ-GAN🚀: An Open-source Image Tokenization Framework for Autoregressive Generation                                | https://github.com/lxa9867/ImageFolder                             |
| [11/28/2024] | [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient                           | https://github.com/czg1225/CoDe                                    |
| [11/28/2024] | [CVPR 2025] Scalable Autoregressive Monocular Depth Estimation                                                     | https://arxiv.org/abs/2411.11361                                   |
| [11/27/2024] | [CVPR 2025] SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE                  | https://github.com/cyw-3d/SAR3D                                    |
| [11/26/2024] | LiteVAR: Compressing Visual Autoregressive Modelling with Efficient Attention and Quantization                     | https://arxiv.org/abs/2411.17178                                   |
| [11/15/2024] | M-VAR: Decoupled Scale-wise Autoregressive Modeling for High-Quality Image Generation                              | https://github.com/OliverRensu/MVAR                                |
| [10/14/2024] | [ICLR 2025] HART: Efficient Visual Generation with Hybrid Autoregressive Transformer                               | https://github.com/mit-han-lab/hart                                |
| [10/12/2024] | [ICLR 2025 Oral] Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment                     | https://github.com/thu-ml/CCA                                      |
| [10/3/2024]  | [ICLR 2025] ImageFolder🚀: Autoregressive Image Generation with Folded Tokens                                      | https://github.com/lxa9867/ImageFolder                             |
| [07/25/2024] | ControlVAR: Exploring Controllable Visual Autoregressive Modeling                                                  | https://github.com/lxa9867/ControlVAR                              |
| [07/3/2024]  | VAR-CLIP: Text-to-Image Generator with Visual Auto-Regressive Modeling                                             | https://github.com/daixiangzi/VAR-CLIP                             |
| [06/16/2024] | STAR: Scale-wise Text-to-image generation via Auto-Regressive representations                                      | https://arxiv.org/abs/2406.10797                                   |

## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.


## Citation
If our work assists your research, feel free to give us a star ⭐ or cite us using:
```
@article{VAR,
      title={Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction},
      author={Keyu Tian and Yi Jiang and Zehuan Yuan and Bingyue Peng and Liwei Wang},
      year={2024},
      eprint={2404.02905},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

```
@misc{Infinity,
    title={Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis},
    author={Jian Han and Jinlai Liu and Yi Jiang and Bin Yan and Yuqi Zhang and Zehuan Yuan and Bingyue Peng and Xiaobing Liu},
    year={2024},
    eprint={2412.04431},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2412.04431},
}
```