https://github.com/openai/shap-e
Generate 3D objects conditioned on text or images
- Host: GitHub
- URL: https://github.com/openai/shap-e
- Owner: openai
- License: MIT
- Created: 2023-04-19T18:54:32.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-06-22T19:19:14.000Z (10 months ago)
- Last Synced: 2025-04-02T00:16:49.473Z (13 days ago)
- Language: Python
- Size: 11.4 MB
- Stars: 11,851
- Watchers: 236
- Forks: 989
- Open Issues: 100
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome - openai/shap-e - Generate 3D objects conditioned on text or images (Python)
- ai-game-devtools - Shap-E
- AiTreasureBox - openai/shap-e - Generate 3D objects conditioned on text or images (Repos)
- StarryDivineSky - openai/shap-e - Shap-E is an OpenAI project for generating 3D objects from text or images. Using a diffusion model, it can create diverse, high-quality 3D shapes from text descriptions or images. Its core idea is to use neural radiance fields (NeRFs) as an intermediate representation and train a diffusion model to generate the parameters of these NeRFs. Shap-E's strengths are fast generation and the ability to produce a wide variety of 3D objects without specialized 3D-modeling expertise. It provides a simple, easy-to-use interface that lets users generate 3D models from a short text prompt or an uploaded image, aiming to democratize 3D content creation. Its training data pairs a large collection of 3D models with corresponding text descriptions or images, allowing it to learn the complex relationships between text, images, and 3D shape, and it opens new possibilities for 3D modeling, game development, and virtual reality. (3D visual generation and reconstruction / resource downloads)
README
# Shap-E
This is the official code and model release for [Shap-E: Generating Conditional 3D Implicit Functions](https://arxiv.org/abs/2305.02463).
* See [Usage](#usage) for guidance on how to use this repository.
* See [Samples](#samples) for examples of what our text-conditional model can generate.
# Samples
Here are some highlighted samples from our text-conditional model. For random samples on selected prompts, see [samples.md](samples.md).
*(Sample renders omitted.)* Prompts for the highlighted samples:
* A chair that looks like an avocado
* An airplane that looks like a banana
* A spaceship
* A birthday cupcake
* A chair that looks like a tree
* A green boot
* A penguin
* Ube ice cream cone
* A bowl of vegetables
# Usage
Install with `pip install -e .`.
To get started with examples, see the following notebooks:
* [sample_text_to_3d.ipynb](shap_e/examples/sample_text_to_3d.ipynb) - sample a 3D model, conditioned on a text prompt.
* [sample_image_to_3d.ipynb](shap_e/examples/sample_image_to_3d.ipynb) - sample a 3D model, conditioned on a synthetic view image. For best results, remove the background from the input image.
* [encode_model.ipynb](shap_e/examples/encode_model.ipynb) - loads a 3D model or a trimesh, creates a batch of multiview renders and a point cloud, encodes them into a latent, and renders it back. For this to work, install Blender version 3.3.1 or higher, and set the environment variable `BLENDER_PATH` to the path of the Blender executable.
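Since the image-to-3D notebook works best on inputs with the background removed, a common pre-processing step is to flatten an RGBA cutout (background already made transparent) onto a plain white canvas before passing it to the model. A minimal sketch using Pillow, with a hypothetical helper name (`flatten_to_white` is not part of Shap-E itself):

```python
from PIL import Image


def flatten_to_white(rgba: Image.Image) -> Image.Image:
    """Composite an RGBA cutout onto a solid white background.

    Hypothetical pre-processing helper: Shap-E does not ship this function;
    the README only advises that inputs with the background removed work best.
    """
    canvas = Image.new("RGBA", rgba.size, (255, 255, 255, 255))
    # alpha_composite blends the cutout over the opaque white canvas in place.
    canvas.alpha_composite(rgba.convert("RGBA"))
    return canvas.convert("RGB")


# Tiny example: a 2x2 image whose left column is opaque red and whose
# right column is fully transparent (stands in for a removed background).
img = Image.new("RGBA", (2, 2), (0, 0, 0, 0))
img.putpixel((0, 0), (255, 0, 0, 255))
img.putpixel((0, 1), (255, 0, 0, 255))
flat = flatten_to_white(img)
```

The resulting RGB image keeps the opaque subject pixels and replaces the transparent region with white, which gives the conditioning model a clean, uniform background.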