<div align="center">
    <h1> <a>Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models</a></h1>

<p align="center">
  <a href=https://paint3d.github.io/>Project Page</a> •
  <a href=https://arxiv.org/abs/2312.13913>Arxiv</a> •
  Demo •
  <a href="#️-faq">FAQ</a> •
  <a href="#-citation">Citation</a>
</p>

</div>


https://github.com/OpenTexture/Paint3D/assets/18525299/9aef7eeb-a783-482c-87d5-78055da3bfc0


## Introduction

Paint3D is a novel coarse-to-fine generative framework that produces high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes, conditioned on text or image inputs.

<details open="open">
    <summary><b>Technical details</b></summary>

We present Paint3D, a novel coarse-to-fine generative framework that produces high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be re-lit or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion, producing an initial coarse texture map. However, because 2D models cannot fully represent 3D shapes or disentangle lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for shape-aware refinement of incomplete areas and removal of illumination artifacts. Through this coarse-to-fine process, Paint3D produces high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state of the art in texturing 3D objects.

<img width="1194" alt="pipeline" src="./assets/images/pipeline.jpg">
</details>
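The coarse-to-fine process described above can be summarized in Python-style pseudocode (a conceptual sketch only; every function name here is a hypothetical placeholder, not the repository's API):

```
# Conceptual sketch of Paint3D's two-stage texturing flow.
# All function names are hypothetical placeholders.

def paint3d(mesh, condition):
    # Stage 1: a pre-trained depth-aware 2D diffusion model renders
    # view-conditional images, which are fused across views into an
    # initial coarse UV texture map.
    views = render_depth_conditioned_views(mesh, condition)
    coarse_uv = fuse_multiview_textures(mesh, views)

    # Stage 2: specialized UV-space diffusion models refine the result:
    # UV Inpainting completes areas no view covered, and UVHD removes
    # baked-in illumination artifacts.
    uv = uv_inpaint(mesh, coarse_uv)
    uv = uvhd_refine(mesh, uv)
    return uv  # lighting-less 2K UV texture
```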

## 🚩 News
- [2024/11/05] 🔥🔥🔥 We're excited to release [MVPaint](https://github.com/3DTopia/MVPaint), a multi-view consistent texturing method that supports arbitrary UV unwrapping and high generation flexibility.
- [2024/09/26] 🎉🎉🎉 Our mesh generation method, [MeshXL](https://github.com/OpenMeshLab/MeshXL), has been accepted to NeurIPS 2024! It utilizes Paint3D to generate detailed mesh textures.
- ComfyUI node for Paint3D: [ComfyUI-Paint3D-Nodes](https://github.com/N3rd00d/ComfyUI-Paint3D-Nodes?tab=readme-ov-file) by [N3rd00d](https://github.com/N3rd00d)
- [2024/04/26] Upload code 🔥🔥🔥
- [2023/12/21] Upload paper and init project 🔥🔥🔥

## ⚡ Quick Start
### Setup
The code is tested on CentOS 7 with PyTorch 1.12.1 and CUDA 11.6 installed. Please follow these steps to set up the environment.
```
# install python environment
conda env create -f environment.yaml

# install kaolin (fill in your local torch and CUDA versions)
pip install kaolin==0.13.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/{TORCH_VER}_{CUDA_VER}.html
```

### Text condition
The UV-position ControlNet is available [here](https://huggingface.co/GeorgeQi/Paint3d_UVPos_Control).

To use the other ControlNet models, please download them from the [Hugging Face page](https://huggingface.co/lllyasviel) and modify the ControlNet path in the config file.

Then, you can generate the coarse texture via:
```
python pipeline_paint3d_stage1.py \
 --sd_config controlnet/config/depth_based_inpaint_template.yaml \
 --render_config paint3d/config/train_config_paint3d.py \
 --mesh_path demo/objs/Suzanne_monkey/Suzanne_monkey.obj \
 --outdir outputs/stage1
```

and the refined texture via:
```
python pipeline_paint3d_stage2.py \
 --sd_config controlnet/config/UV_based_inpaint_template.yaml \
 --render_config paint3d/config/train_config_paint3d.py \
 --mesh_path demo/objs/Suzanne_monkey/Suzanne_monkey.obj \
 --texture_path outputs/stage1/res-0/albedo.png \
 --outdir outputs/stage2
```

Optionally, you can also generate texture results with the UV-position ControlNet only, for example:
```
python pipeline_UV_only.py \
 --sd_config controlnet/config/UV_gen_template.yaml \
 --render_config paint3d/config/train_config_paint3d.py \
 --mesh_path demo/objs/teapot/scene.obj \
 --outdir outputs/test_teapot
```

### Image condition

With an image condition, you can generate the coarse texture via:
```
python pipeline_paint3d_stage1.py \
 --sd_config controlnet/config/depth_based_inpaint_template.yaml \
 --render_config paint3d/config/train_config_paint3d.py \
 --mesh_path demo/objs/Suzanne_monkey/Suzanne_monkey.obj \
 --prompt " " \
 --ip_adapter_image_path demo/objs/Suzanne_monkey/img_prompt.png \
 --outdir outputs/img_stage1
```

and the refined texture via:
```
python pipeline_paint3d_stage2.py \
 --sd_config controlnet/config/UV_based_inpaint_template.yaml \
 --render_config paint3d/config/train_config_paint3d.py \
 --mesh_path demo/objs/Suzanne_monkey/Suzanne_monkey.obj \
 --texture_path outputs/img_stage1/res-0/albedo.png \
 --prompt " " \
 --ip_adapter_image_path demo/objs/Suzanne_monkey/img_prompt.png \
 --outdir outputs/img_stage2
```
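Stage 2 consumes the albedo map that stage 1 writes under its `--outdir` (`res-0/albedo.png`), so the two stages chain naturally. Below is a minimal Python driver sketching that order for the text-conditioned demo above; it is a dry-run illustration that only assembles and prints the argument lists (swap in `subprocess.run(cmd, check=True)` to actually execute them):

```python
# Dry-run sketch of chaining stage 1 -> stage 2 for the Suzanne demo.
mesh = "demo/objs/Suzanne_monkey/Suzanne_monkey.obj"
render_cfg = "paint3d/config/train_config_paint3d.py"

stage1 = [
    "python", "pipeline_paint3d_stage1.py",
    "--sd_config", "controlnet/config/depth_based_inpaint_template.yaml",
    "--render_config", render_cfg,
    "--mesh_path", mesh,
    "--outdir", "outputs/stage1",
]
# Stage 2 reads the coarse texture written by stage 1:
stage2 = [
    "python", "pipeline_paint3d_stage2.py",
    "--sd_config", "controlnet/config/UV_based_inpaint_template.yaml",
    "--render_config", render_cfg,
    "--mesh_path", mesh,
    "--texture_path", "outputs/stage1/res-0/albedo.png",
    "--outdir", "outputs/stage2",
]

for cmd in (stage1, stage2):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to execute for real
```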

### Model Converting
For checkpoints from [Civitai](https://civitai.com/) that ship only a .safetensors file, you can use the following script to convert and use them:
```
python tools/convert_original_stable_diffusion_to_diffusers.py \
 --checkpoint_path YOUR_LOCAL.safetensors \
 --dump_path model_cvt/ \
 --from_safetensors
```

## 🧩 Projects that use Paint3D
If you develop or use Paint3D in your projects, please let us know.
- [MeshXL](https://meshxl.github.io/) (accepted to NeurIPS 2024 🔥) uses Paint3D to generate textures for their meshes.
- ComfyUI node for Paint3D: [ComfyUI-Paint3D-Nodes](https://github.com/N3rd00d/ComfyUI-Paint3D-Nodes?tab=readme-ov-file) by [N3rd00d](https://github.com/N3rd00d)

## 📖 Citation
```bib
@inproceedings{zeng2024paint3d,
  title={Paint3d: Paint anything 3d with lighting-less texture diffusion models},
  author={Zeng, Xianfang and Chen, Xin and Qi, Zhongqi and Liu, Wen and Zhao, Zibo and Wang, Zhibin and Fu, Bin and Liu, Yong and Yu, Gang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4252--4262},
  year={2024}
}
```

## Acknowledgments

Thanks to [TEXTure](https://github.com/TEXTurePaper/TEXTurePaper),
[Text2Tex](https://github.com/daveredrum/Text2Tex),
[Stable Diffusion](https://github.com/CompVis/stable-diffusion) and [ControlNet](https://github.com/lllyasviel/ControlNet); our code partially borrows from them.
Our approach is inspired by [MotionGPT](https://github.com/OpenMotionLab/MotionGPT), [Michelangelo](https://neuralcarver.github.io/michelangelo/) and [DreamFusion](https://dreamfusion3d.github.io/).

## License

This code is distributed under the [Apache 2.0 License](LICENSE).

Note that our code depends on other libraries, including [PyTorch3D](https://pytorch3d.org/) and [PyTorch Lightning](https://lightning.ai/), and uses datasets that each have their own licenses, which must also be followed.