{"id":13440584,"url":"https://github.com/OpenShapeLab/ShapeGPT","last_synced_at":"2025-03-20T10:31:29.271Z","repository":{"id":209985350,"uuid":"725423865","full_name":"OpenShapeLab/ShapeGPT","owner":"OpenShapeLab","description":"ShapeGPT: 3D Shape Generation with A Unified Multi-modal Language Model, a unified and user-friendly shape-language model ","archived":false,"fork":false,"pushed_at":"2023-12-01T12:48:58.000Z","size":1262,"stargazers_count":86,"open_issues_count":1,"forks_count":0,"subscribers_count":16,"default_branch":"main","last_synced_at":"2024-08-01T03:31:42.801Z","etag":null,"topics":["3d-generation","caption-generation","chatgpt","gpt","language-model","multi-modal","shape","unified"],"latest_commit_sha":null,"homepage":"https://shapegpt.github.io","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OpenShapeLab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2023-11-30T05:30:48.000Z","updated_at":"2024-07-14T18:23:30.000Z","dependencies_parsed_at":"2024-01-16T02:45:22.471Z","dependency_job_id":"186b1547-dd84-4771-bab7-1a1aecaa6fce","html_url":"https://github.com/OpenShapeLab/ShapeGPT","commit_stats":null,"previous_names":["openshapelab/shapegpt"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenShapeLab%2FShapeGPT","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenShapeLab%2FShapeGPT/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenShapeLab%2FShapeGPT/releases","manifests_url":"https://repos.ecosyste.ms/
api/v1/hosts/GitHub/repositories/OpenShapeLab%2FShapeGPT/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/OpenShapeLab","download_url":"https://codeload.github.com/OpenShapeLab/ShapeGPT/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244595002,"owners_count":20478388,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d-generation","caption-generation","chatgpt","gpt","language-model","multi-modal","shape","unified"],"created_at":"2024-07-31T03:01:24.146Z","updated_at":"2025-03-20T10:31:28.923Z","avatar_url":"https://github.com/OpenShapeLab.png","language":null,"readme":"\u003cdiv align= \"center\"\u003e\n    \u003ch1\u003e Official repo for ShapeGPT \u003cimg src=\"./assets/images/logo_shapegpt.png\" width=\"35px\"\u003e\u003c/h1\u003e\n\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n    \u003ch3\u003e \u003ca href=\"https://shapegpt.github.io/\"\u003eShapeGPT: 3D Shape Generation with A Unified Multi-modal Language Model\u003c/a\u003e\u003c/h3\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://shapegpt.github.io/\"\u003eProject Page\u003c/a\u003e •\n  \u003ca href=\"https://arxiv.org/abs/2311.17618\"\u003eArxiv Paper\u003c/a\u003e •\n  Demo •\n  \u003ca href=\"#️-faq\"\u003eFAQ\u003c/a\u003e •\n  \u003ca href=\"#-citation\"\u003eCitation\u003c/a\u003e\n\u003c/p\u003e\n\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n\u003c/div\u003e\n\u003c!-- \u003cimg 
src=\"https://cdn.discordapp.com/attachments/941582479117127680/1111543600879259749/20230526075532.png\" width=\"350px\"\u003e --\u003e\n\n\n\n\n\nhttps://github.com/OpenShapeLab/ShapeGPT/assets/91652696/47cb697b-4778-4046-9e8e-0eafa54d0270\n\n\n\n\n\n## \u003cimg src=\"./assets/images/logo_shapegpt.png\" width=\"24px\"\u003e Intro ShapeGPT\n\nShapeGPT is a **unified** and **user-friendly** shape-centric multi-modal language model that establishes a multi-modal corpus and develops shape-aware language models for **multiple shape tasks**.\n\n\u003cdetails open=\"open\"\u003e\n    \u003csummary\u003e\u003cb\u003eTechnical details\u003c/b\u003e\u003c/summary\u003e\n\nThe advent of large language models, enabling flexibility through instruction-driven approaches, has revolutionized many traditional generative tasks, but large models for 3D data, particularly those that comprehensively handle 3D shapes alongside other modalities, remain under-explored. By achieving instruction-based shape generation, versatile multimodal generative shape models can significantly benefit fields such as 3D virtual construction and network-aided design. In this work, we present ShapeGPT, a shape-included multi-modal framework that leverages strong pre-trained language models to address multiple shape-relevant tasks. Specifically, ShapeGPT employs a word-sentence-paragraph framework that discretizes continuous shapes into shape words, assembles these words into shape sentences, and integrates shapes with instructional text into multi-modal paragraphs. To learn this shape-language model, we use a three-stage training scheme, comprising shape representation, multimodal alignment, and instruction-based generation, to align shape-language codebooks and learn the intricate correlations among these modalities. 
Extensive experiments demonstrate that ShapeGPT achieves comparable performance across shape-relevant tasks, including text-to-shape, shape-to-text, shape completion, and shape editing.\n\n\u003cimg width=\"1194\" alt=\"pipeline\" src=\"./assets/images/pipeline.jpg\"\u003e\n\u003c/details\u003e\n\n## 🚩 News\n\n- [2023/12/01] Upload paper and init project 🔥🔥🔥\n\n## ⚡ Quick Start\n\n\u003c!-- \u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eSetup and download\u003c/b\u003e\u003c/summary\u003e\n\n\u003c/details\u003e --\u003e\n\n## ▶️ Demo\n\n\u003c!-- \u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eWebui\u003c/b\u003e\u003c/summary\u003e\n\n\n\u003c/details\u003e --\u003e\n\n## 👀 Visualization\n\n## ⚠️ FAQ\n\n\u003cdetails\u003e \u003csummary\u003e\u003cb\u003eQuestion-and-Answer\u003c/b\u003e\u003c/summary\u003e\n    \n\n\u003c/details\u003e\n\n## 📖 Citation\n\nIf you find our code or paper helpful, please consider citing:\n\n```bibtex\n@misc{yin2023shapegpt,\n      title={ShapeGPT: 3D Shape Generation with A Unified Multi-modal Language Model},\n      author={Fukun Yin and Xin Chen and Chi Zhang and Biao Jiang and Zibo Zhao and Jiayuan Fan and Gang Yu and Taihao Li and Tao Chen},\n      year={2023},\n      eprint={2311.17618},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n```\n\n## Acknowledgments\n\nThanks to [T5 model](https://github.com/google-research/text-to-text-transfer-transformer), [Motion-GPT](https://github.com/OpenMotionLab/MotionGPT), [Perceiver-IO](https://github.com/krasserm/perceiver-io) and [SDFusion](https://yccyenchicheng.github.io/SDFusion/); our code partially borrows from them. 
Our approach is inspired by [Unified-IO](https://unified-io.allenai.org/), [Michelangelo](https://neuralcarver.github.io/michelangelo/), [ShapeCrafter](https://ivl.cs.brown.edu/research/shapecrafter.html), [Pix2Vox](https://github.com/hzxie/Pix2Vox), and [3DShape2VecSet](https://github.com/1zb/3DShape2VecSet).\n\n## License\n\nThis code is distributed under an [MIT LICENSE](LICENSE).\n\nNote that our code depends on other libraries, including [PyTorch3D](https://pytorch3d.org/) and [PyTorch Lightning](https://lightning.ai/), and uses datasets which each have their own respective licenses that must also be followed.\n","funding_links":[],"categories":["Others"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FOpenShapeLab%2FShapeGPT","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FOpenShapeLab%2FShapeGPT","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FOpenShapeLab%2FShapeGPT/lists"}