{"id":13429521,"url":"https://github.com/advimman/lama","last_synced_at":"2025-05-12T13:32:07.287Z","repository":{"id":37416803,"uuid":"401446691","full_name":"advimman/lama","owner":"advimman","description":"🦙  LaMa Image Inpainting, Resolution-robust Large Mask Inpainting with Fourier Convolutions, WACV 2022","archived":false,"fork":false,"pushed_at":"2025-02-05T08:41:52.000Z","size":8938,"stargazers_count":8710,"open_issues_count":114,"forks_count":925,"subscribers_count":82,"default_branch":"main","last_synced_at":"2025-04-23T17:09:25.037Z","etag":null,"topics":["cnn","colab","colab-notebook","computer-vision","deep-learning","deep-neural-networks","fourier","fourier-convolutions","fourier-transform","gan","generative-adversarial-network","generative-adversarial-networks","high-resolution","image-inpainting","inpainting","inpainting-algorithm","inpainting-methods","pytorch"],"latest_commit_sha":null,"homepage":"https://advimman.github.io/lama-project/","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/advimman.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-08-30T18:27:52.000Z","updated_at":"2025-04-23T15:03:25.000Z","dependencies_parsed_at":"2024-09-30T22:01:26.775Z","dependency_job_id":"120a5ae6-ce42-4579-944e-d990d9a285c0","html_url":"https://github.com/advimman/lama","commit_stats":null,"previous_names":["saic-mdal/lama"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/advimman%2Flama","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/advimman%2Flama/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/advimman%2Flama/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/advimman%2Flama/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/advimman","download_url":"https://codeload.github.com/advimman/lama/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250477812,"owners_count":21437049,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cnn","colab","colab-notebook","computer-vision","deep-learning","deep-neural-networks","fourier","fourier-convolutions","fourier-transform","gan","generative-adversarial-network","generative-adversarial-networks","high-resolution","image-inpainting","inpainting","inpainting-algorithm","inpainting-methods","pytorch"],"created_at":"2024-07-31T02:00:41.208Z","updated_at":"2025-04-23T17:09:34.627Z","avatar_url":"https://github.com/advimman.png","language":"Jupyter Notebook","readme":"# 🦙 LaMa: Resolution-robust Large Mask Inpainting with 
# LaMa development
(Feel free to share your paper by creating an issue)

- [Inpaint Anything: Segment Anything Meets Image Inpainting](https://github.com/geekyutao/Inpaint-Anything)
<p align="center">
  <img src="https://raw.githubusercontent.com/geekyutao/Inpaint-Anything/main/example/MainFramework.png" />
</p>

- [Feature Refinement to Improve High Resolution Image Inpainting](https://arxiv.org/abs/2206.13644) / [video](https://www.youtube.com/watch?v=gEukhOheWgE) / [code](https://github.com/advimman/lama/pull/112) / by Geomagical Labs ([geomagical.com](https://geomagical.com))
<p align="center">
  <img src="https://raw.githubusercontent.com/senya-ashukha/senya-ashukha.github.io/master/images/FeatureRefinement.png" />
</p>

# Unofficial 3rd-party apps
(Feel free to share your app/implementation/demo by creating an issue)

- [simple-lama-inpainting](https://github.com/enesmsahin/simple-lama-inpainting) - a simple pip package for LaMa inpainting (see the sketch after this list).
- [CoreMLaMa](https://github.com/mallman/CoreMLaMa) - a script to convert Lama Cleaner's port of LaMa to Apple's Core ML model format.
- [cleanup.pictures](https://cleanup.pictures/) - a simple interactive object removal tool by [@cyrildiagne](https://twitter.com/cyrildiagne)
    - [lama-cleaner](https://github.com/Sanster/lama-cleaner) by [@Sanster](https://github.com/Sanster/lama-cleaner) is a self-hosted version of [cleanup.pictures](https://cleanup.pictures/)
- Integrated into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the demo: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/lama) by [@AK391](https://github.com/AK391)
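For quick one-off use, the `simple-lama-inpainting` package listed above wraps a pretrained LaMa behind a single call. The sketch below follows that package's documented interface; the import path, the `SimpleLama` class, and its call signature belong to that third-party package and should be double-checked against its README:

```python
from PIL import Image
from simple_lama_inpainting import SimpleLama  # pip install simple-lama-inpainting

simple_lama = SimpleLama()  # downloads a pretrained LaMa on first use

image = Image.open("image.png")             # RGB input
mask = Image.open("mask.png").convert("L")  # white = region to inpaint

result = simple_lama(image, mask)
result.save("inpainted.png")
```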
- Telegram bot [@MagicEraserBot](https://t.me/MagicEraserBot) by [@Moldoteck](https://github.com/Moldoteck), [code](https://github.com/Moldoteck/MagicEraser)
- [Auto-LaMa](https://github.com/andy971022/auto-lama) - DETR object detection + LaMa inpainting by [@andy971022](https://github.com/andy971022)
- [LAMA-Magic-Eraser-Local](https://github.com/zhaoyun0071/LAMA-Magic-Eraser-Local) - a standalone inpainting application built with PyQt5 by [@zhaoyun0071](https://github.com/zhaoyun0071)
- [Hama](https://www.hama.app/) - object removal with a smart brush that simplifies mask drawing.
- [ModelScope](https://www.modelscope.cn/models/damo/cv_fft_inpainting_lama/summary) - LaMa on the largest Chinese model community, by [@chenbinghui1](https://github.com/chenbinghui1).
- [LaMa with MaskDINO](https://github.com/qwopqwop200/lama-with-maskdino) - MaskDINO object detection + LaMa inpainting with refinement by [@qwopqwop200](https://github.com/qwopqwop200).

# Environment setup

❗️❗️❗️ All Yandex Disk links are broken; you can download the models from [Google Drive](https://drive.google.com/drive/folders/1B2x7eQDgecTL0oh3LSIBDGj0fTxs6Ips?usp=sharing) ❗️❗️❗️

Clone the repo:
`git clone https://github.com/advimman/lama.git`

There are three environment options:

1. Python virtualenv:

    ```
    virtualenv inpenv --python=/usr/bin/python3
    source inpenv/bin/activate
    pip install torch==1.8.0 torchvision==0.9.0

    cd lama
    pip install -r requirements.txt
    ```

2. Conda

    ```
    # Install conda for Linux; for other OSes download Miniconda at https://docs.conda.io/en/latest/miniconda.html
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda
    $HOME/miniconda/bin/conda init bash

    cd lama
    conda env create -f conda_env.yml
    conda activate lama
    conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch -y
    pip install pytorch-lightning==1.2.9
    ```

3. Docker: no actions are needed 🎉.

# Inference <a name="prediction"></a>

Run

```
cd lama
export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd)
```

**1. Download pre-trained models**

The best model (Places2, Places Challenge):

```
curl -LJO https://huggingface.co/smartywu/big-lama/resolve/main/big-lama.zip
unzip big-lama.zip
```

All models (Places & CelebA-HQ):

```
# Download lama-models.zip from https://drive.google.com/drive/folders/1B2x7eQDgecTL0oh3LSIBDGj0fTxs6Ips?usp=drive_link, then:
unzip lama-models.zip
```

**2. Prepare images and masks**

Download `LaMa_test_images.zip` from the same Google Drive folder, then:

```
unzip LaMa_test_images.zip
```

<details>
 <summary>OR prepare your own data:</summary>

1) Create masks named `[image_name]_maskXXX[image_suffix]` and put images and masks in the same folder.

- You can use the [script](https://github.com/advimman/lama/blob/main/bin/gen_mask_dataset.py) for random mask generation.
- Check the format of the files:
    ```
    image1_mask001.png
    image1.png
    image2_mask001.png
    image2.png
    ```

2) Specify `image_suffix`, e.g. `.png`, `.jpg`, or `_input.jpg`, in `configs/prediction/default.yaml`.

</details>
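`bin/predict.py` pairs each image with its masks purely through this naming convention. Here is a hypothetical helper (`pair_images_with_masks` is not part of the repo) that shows how such matching can be done, useful for sanity-checking a folder before prediction:

```python
from pathlib import Path

def pair_images_with_masks(folder: str, suffix: str = ".png"):
    """Yield (image, mask) pairs following the
    [image_name]_maskXXX[image_suffix] convention."""
    root = Path(folder)
    for mask in sorted(root.glob(f"*_mask*{suffix}")):
        stem = mask.name.split("_mask")[0]  # "image1_mask001.png" -> "image1"
        image = root / f"{stem}{suffix}"
        if image.exists():
            yield image, mask

for image, mask in pair_images_with_masks("LaMa_test_images"):
    print(image.name, "<-", mask.name)
```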
**3. Predict**

On the host machine:

    python3 bin/predict.py model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output

**OR** in Docker:

The following command pulls the Docker image from Docker Hub and executes the prediction script.

```
bash docker/2_predict.sh $(pwd)/big-lama $(pwd)/LaMa_test_images $(pwd)/output device=cpu
```

With CUDA:

```
bash docker/2_predict_with_gpu.sh $(pwd)/big-lama $(pwd)/LaMa_test_images $(pwd)/output
```

**4. Predict with refinement**

On the host machine:

    python3 bin/predict.py refine=True model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output

# Train and Eval

Make sure you run:

```
cd lama
export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd)
```

Then download the models for the _perceptual loss_:

    mkdir -p ade20k/ade20k-resnet50dilated-ppm_deepsup/
    wget -P ade20k/ade20k-resnet50dilated-ppm_deepsup/ http://sceneparsing.csail.mit.edu/model/pytorch/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth
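LaMa's high-receptive-field perceptual loss compares intermediate activations of the prediction and the ground truth under a frozen segmentation encoder (the ADE20K-pretrained dilated ResNet50 downloaded above). Here is a generic sketch of that feature-matching idea, using torchvision's plain `resnet50` (torchvision >= 0.13) as a stand-in for the actual encoder; the real loss lives in the repo's training code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class FeatureMatchingLoss(nn.Module):
    """Mean-squared distance between frozen-encoder features of
    prediction and target; a stand-in for LaMa's perceptual loss."""

    def __init__(self):
        super().__init__()
        net = resnet50(weights="IMAGENET1K_V1")  # stand-in for the ADE20K encoder
        # Keep the stem plus the first three stages as a fixed feature extractor.
        self.features = nn.Sequential(
            net.conv1, net.bn1, net.relu, net.maxpool,
            net.layer1, net.layer2, net.layer3,
        ).eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return F.mse_loss(self.features(pred), self.features(target))

loss_fn = FeatureMatchingLoss()
pred, target = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
print(loss_fn(pred, target).item())
```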
## Places

⚠️ NB: the FID/SSIM/LPIPS metric values for Places reported in the LaMa paper are computed on the 30,000 images produced in the evaluation steps below.
For more details on the evaluation data, see [[Section 3. Dataset splits in the Supplementary](https://ashukha.com/projects/lama_21/lama_supmat_2021.pdf#subsection.3.1)] ⚠️

On the host machine:

    # Download data from http://places2.csail.mit.edu/download.html
    # Places365-Standard: Train(105GB)/Test(19GB)/Val(2.1GB) from the high-resolution images section
    wget http://data.csail.mit.edu/places/places365/train_large_places365standard.tar
    wget http://data.csail.mit.edu/places/places365/val_large.tar
    wget http://data.csail.mit.edu/places/places365/test_large.tar

    # Unpack train/test/val data and create a .yaml config for it
    bash fetch_data/places_standard_train_prepare.sh
    bash fetch_data/places_standard_test_val_prepare.sh

    # Sample images for test and visualization at the end of each epoch
    bash fetch_data/places_standard_test_val_sample.sh
    bash fetch_data/places_standard_test_val_gen_masks.sh

    # Run training
    python3 bin/train.py -cn lama-fourier location=places_standard

    # To evaluate the trained model and report metrics as in the paper,
    # we need to sample 30k previously unseen images and generate masks for them
    bash fetch_data/places_standard_evaluation_prepare_data.sh

    # Infer the model on thick/thin/medium masks at 256 and 512 and run evaluation
    # like this:
    python3 bin/predict.py \
    model.path=$(pwd)/experiments/<user>_<date:time>_lama-fourier_/ \
    indir=$(pwd)/places_standard_dataset/evaluation/random_thick_512/ \
    outdir=$(pwd)/inference/random_thick_512 model.checkpoint=last.ckpt

    python3 bin/evaluate_predicts.py \
    $(pwd)/configs/eval2_gpu.yaml \
    $(pwd)/places_standard_dataset/evaluation/random_thick_512/ \
    $(pwd)/inference/random_thick_512 \
    $(pwd)/inference/random_thick_512_metrics.csv

Docker: TODO
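Running the predict/evaluate pair for each mask width is mechanical, so a small driver script can shell out to the two commands above for thin, medium, and thick in turn. This script is not part of the repo, and the `<user>_<date:time>` experiment directory is the same placeholder as above; run it from the lama folder with `TORCH_HOME`/`PYTHONPATH` exported:

```python
import subprocess
from pathlib import Path

root = Path.cwd()  # the lama repo root
model_dir = root / "experiments" / "<user>_<date:time>_lama-fourier_"  # substitute yours

for size in ("thin", "medium", "thick"):
    eval_dir = root / f"places_standard_dataset/evaluation/random_{size}_512"
    out_dir = root / f"inference/random_{size}_512"
    # Inference on this mask width
    subprocess.run([
        "python3", "bin/predict.py",
        f"model.path={model_dir}",
        f"indir={eval_dir}",
        f"outdir={out_dir}",
        "model.checkpoint=last.ckpt",
    ], check=True)
    # Metrics for this mask width
    subprocess.run([
        "python3", "bin/evaluate_predicts.py",
        str(root / "configs/eval2_gpu.yaml"),
        str(eval_dir),
        str(out_dir),
        str(root / f"inference/random_{size}_512_metrics.csv"),
    ], check=True)
```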
## CelebA

On the host machine:

    # Make sure you are in the lama folder
    cd lama
    export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd)

    # Download the CelebA-HQ dataset
    # Download data256x256.zip from https://drive.google.com/drive/folders/11Vz0fqHS2rXDb5pprgTjpD7S2BAJhi1P

    # Unzip, split into train/test/visualization, and create a config for it
    bash fetch_data/celebahq_dataset_prepare.sh

    # Generate masks for test and visual_test at the end of each epoch
    bash fetch_data/celebahq_gen_masks.sh

    # Run training
    python3 bin/train.py -cn lama-fourier-celeba data.batch_size=10

    # Infer the model on thick/thin/medium masks at 256 and run evaluation
    # like this:
    python3 bin/predict.py \
    model.path=$(pwd)/experiments/<user>_<date:time>_lama-fourier-celeba_/ \
    indir=$(pwd)/celeba-hq-dataset/visual_test_256/random_thick_256/ \
    outdir=$(pwd)/inference/celeba_random_thick_256 model.checkpoint=last.ckpt

Docker: TODO

## Places Challenge

On the host machine:

    # This script downloads multiple .tar files in parallel and unpacks them
    # Places365-Challenge: Train(476GB) from the high-resolution images (to train Big-LaMa)
    bash places_challenge_train_download.sh

    TODO: prepare
    TODO: train
    TODO: eval

Docker: TODO

## Create your own data

If you get stuck at any of the following steps, check the bash scripts for data preparation and mask generation in the CelebA-HQ section.

On the host machine:

    # Make sure you are in the lama folder
    cd lama
    export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd)

    # You need to prepare the following image folders:
    $ ls my_dataset
    train
    val_source # 2000 or more images
    visual_test_source # 100 or more images
    eval_source # 2000 or more images

    # LaMa generates random masks for the train data on the fly,
    # but needs fixed masks for test and visual_test for consistency of evaluation.

    # Suppose we want to evaluate and pick the best models
    # on a 512x512 val dataset with thick/thin/medium masks,
    # and your images have the .jpg extension.
    # <size> below is one of: thick, thin, medium

    python3 bin/gen_mask_dataset.py \
    $(pwd)/configs/data_gen/random_<size>_512.yaml \
    my_dataset/val_source/ \
    my_dataset/val/random_<size>_512/ \
    --ext jpg

    # The mask generator will:
    # 1. resize and crop val images and save them as .png
    # 2. generate masks

    ls my_dataset/val/random_medium_512/
    image1_crop000_mask000.png
    image1_crop000.png
    image2_crop000_mask000.png
    image2_crop000.png
    ...

    # Generate thick, thin, medium masks for the visual_test folder:

    python3 bin/gen_mask_dataset.py \
    $(pwd)/configs/data_gen/random_<size>_512.yaml \
    my_dataset/visual_test_source/ \
    my_dataset/visual_test/random_<size>_512/ \
    --ext jpg

    ls my_dataset/visual_test/random_thick_512/
    image1_crop000_mask000.png
    image1_crop000.png
    image2_crop000_mask000.png
    image2_crop000.png
    ...

    # Same process for the eval_source image folder:

    python3 bin/gen_mask_dataset.py \
    $(pwd)/configs/data_gen/random_<size>_512.yaml \
    my_dataset/eval_source/ \
    my_dataset/eval/random_<size>_512/ \
    --ext jpg

    # Generate a location config file that points to these folders:

    touch my_dataset.yaml
    echo "data_root_dir: $(pwd)/my_dataset/" >> my_dataset.yaml
    echo "out_root_dir: $(pwd)/experiments/" >> my_dataset.yaml
    echo "tb_dir: $(pwd)/tb_logs/" >> my_dataset.yaml
    mv my_dataset.yaml ${PWD}/configs/training/location/

    # Check the data config for consistency with the my_dataset folder structure:
    $ cat ${PWD}/configs/training/data/abl-04-256-mh-dist
    ...
    train:
      indir: ${location.data_root_dir}/train
      ...
    val:
      indir: ${location.data_root_dir}/val
      img_suffix: .png
    visual_test:
      indir: ${location.data_root_dir}/visual_test
      img_suffix: .png

    # Run training
    python3 bin/train.py -cn lama-fourier location=my_dataset data.batch_size=10

    # Evaluation: the LaMa training procedure keeps the few best models according to
    # scores on my_dataset/val/

    # To evaluate one of your best models (e.g. at epoch=32)
    # on the previously unseen my_dataset/eval, do the following
    # for thin, thick, and medium:

    # infer:
    python3 bin/predict.py \
    model.path=$(pwd)/experiments/<user>_<date:time>_lama-fourier_/ \
    indir=$(pwd)/my_dataset/eval/random_<size>_512/ \
    outdir=$(pwd)/inference/my_dataset/random_<size>_512 \
    model.checkpoint=epoch32.ckpt

    # calculate metrics:
    python3 bin/evaluate_predicts.py \
    $(pwd)/configs/eval2_gpu.yaml \
    $(pwd)/my_dataset/eval/random_<size>_512/ \
    $(pwd)/inference/my_dataset/random_<size>_512 \
    $(pwd)/inference/my_dataset/random_<size>_512_metrics.csv

**OR** in Docker:

    TODO: train
    TODO: eval

# Hints

### Generate different kinds of masks
The following command executes a script that generates random masks:

    bash docker/1_generate_masks_from_raw_images.sh \
        configs/data_gen/random_medium_512.yaml \
        /directory_with_input_images \
        /directory_where_to_store_images_and_masks \
        --ext png

The test data generation command stores images in a format suitable for [prediction](#prediction).

The table below describes which configs we used to generate the different test sets from the paper.
Note that we *do not fix a random seed*, so the results will be slightly different each time.

|        | Places 512x512         | CelebA 256x256         |
|--------|------------------------|------------------------|
| Narrow | random_thin_512.yaml   | random_thin_256.yaml   |
| Medium | random_medium_512.yaml | random_medium_256.yaml |
| Wide   | random_thick_512.yaml  | random_thick_256.yaml  |

Feel free to change the config path (argument #1) to any other config in `configs/data_gen`,
or adjust the config files themselves.
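For intuition about what narrow/medium/wide means here, below is a toy random-stroke mask generator. It is only a stand-in: the real `bin/gen_mask_dataset.py` configs control stroke statistics, crops, and more, and the widths below are made up:

```python
import numpy as np
from PIL import Image

def random_stroke_mask(h=512, w=512, strokes=4, width=30, seed=None):
    """White-on-black mask built from random polyline strokes;
    a larger `width` roughly corresponds to the 'wide'/'thick' setting."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=np.uint8)
    for _ in range(strokes):
        x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
        for _ in range(int(rng.integers(5, 15))):  # segments per stroke
            nx = int(np.clip(x + rng.integers(-80, 81), 0, w - 1))
            ny = int(np.clip(y + rng.integers(-80, 81), 0, h - 1))
            # Stamp squares along the segment to emulate a thick line.
            for t in np.linspace(0.0, 1.0, num=50):
                cx = int(x + t * (nx - x))
                cy = int(y + t * (ny - y))
                mask[max(0, cy - width // 2):cy + width // 2,
                     max(0, cx - width // 2):cx + width // 2] = 255
            x, y = nx, ny
    return mask

# File name follows the [image_name]_maskXXX[image_suffix] convention.
Image.fromarray(random_stroke_mask(width=60, seed=0)).save("image1_mask001.png")
```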
### Override parameters in configs
You can also override config parameters like this:

    python3 bin/train.py -cn <config> data.batch_size=10 run_title=my-title

where the `.yaml` file extension of `<config>` is omitted.
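These `key=value` arguments are Hydra/OmegaConf dot-list overrides: a dotted key addresses a nested config node. A standalone sketch of the merge semantics using `omegaconf` directly (the `base` config here is a made-up stand-in for the repo's training configs):

```python
from omegaconf import OmegaConf

# Illustrative base config standing in for configs/training/*.yaml
base = OmegaConf.create({"data": {"batch_size": 8}, "run_title": ""})

overrides = OmegaConf.from_dotlist(["data.batch_size=10", "run_title=my-title"])
cfg = OmegaConf.merge(base, overrides)

print(cfg.data.batch_size)  # 10
print(cfg.run_title)        # my-title
```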
### Model options
Config names for the models from the paper (substitute into the training command):

    * big-lama
    * big-lama-regular
    * lama-fourier
    * lama-regular
    * lama_small_train_masks

These configs live in the `configs/training` folder.

### Links
Note: as mentioned above, the Yandex Disk links are broken; use the [Google Drive folder](https://drive.google.com/drive/folders/1B2x7eQDgecTL0oh3LSIBDGj0fTxs6Ips?usp=sharing) instead.

- All the data (models, test images, etc.): https://disk.yandex.ru/d/AmdeG-bIjmvSug
- Test images from the paper: https://disk.yandex.ru/d/xKQJZeVRk5vLlQ
- The pre-trained models: https://disk.yandex.ru/d/EgqaSnLohjuzAg
- The models for perceptual loss: https://disk.yandex.ru/d/ncVmQlmT_kTemQ
- Our training logs: https://disk.yandex.ru/d/9Bt1wNSDS4jDkQ

### Training time & resources

TODO

## Acknowledgments

* Segmentation code and models are from [CSAILVision](https://github.com/CSAILVision/semantic-segmentation-pytorch).
* The LPIPS metric is from [richzhang](https://github.com/richzhang/PerceptualSimilarity).
* SSIM is from [Po-Hsun-Su](https://github.com/Po-Hsun-Su/pytorch-ssim).
* FID is from [mseitzer](https://github.com/mseitzer/pytorch-fid).

## Citation
If you found this code helpful, please consider citing:
```
@article{suvorov2021resolution,
  title={Resolution-robust Large Mask Inpainting with Fourier Convolutions},
  author={Suvorov, Roman and Logacheva, Elizaveta and Mashikhin, Anton and Remizova, Anastasia and Ashukha, Arsenii and Silvestrov, Aleksei and Kong, Naejin and Goka, Harshith and Park, Kiwoong and Lempitsky, Victor},
  journal={arXiv preprint arXiv:2109.07161},
  year={2021}
}
```