Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-3d-reconstruction-papers
A collection of 3D reconstruction papers in the deep learning era.
https://github.com/bluestyle97/awesome-3d-reconstruction-papers
Last synced: 2 days ago
Object-level
Single-view
- A Point Set Generation Network for 3D Object Reconstruction from a Single Image
- SurfNet: Generating 3D Shape Surfaces Using Deep Residual Networks
- OctNet: Learning Deep 3D Representations at High Resolutions
- MarrNet: 3D Shape Reconstruction via 2.5D Sketches
- Hierarchical Surface Prediction for 3D Object Reconstruction
- Image2Mesh: A Learning Framework for Single Image 3D Reconstruction
- Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction
- Pixels, voxels, and views: A study of shape representations for single view 3D object shape prediction
- Im2Struct: Recovering 3D Shape Structure From a Single RGB Image
- Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers
- Efficient Dense Point Cloud Object Reconstruction using Deformation Vector Fields
- GAL: Geometric Adversarial Loss for Single-View 3D-Object Reconstruction
- Learning Shape Priors for Single-View 3D Completion and Reconstruction
- Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images
- Residual MeshNet: Learning to Deform Meshes for Single-View 3D Reconstruction
- Learning to Reconstruct Shapes from Unseen Classes
- Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation
- MVPNet: Multi-View Point Regression Networks for 3D Object Reconstruction from A Single Image
- Deep Single-View 3D Object Reconstruction with Visual Hull Embedding
- Occupancy Networks: Learning 3D Reconstruction in Function Space
- Learning Implicit Fields for Generative Shape Modeling
- A Skeleton-Bridged Deep Learning Approach for Generating Meshes of Complex Topologies From Single RGB Images
- What Do Single-view 3D Reconstruction Networks Learn?
- Deep Level Sets: Implicit Surface Representations for 3D Shape Inference
- Deep Mesh Reconstruction From Single RGB Images via Topology Modification Networks
- Deep Meta Functionals for Shape Representation
- GraphX-Convolution for Point Cloud Deformation in 2D-to-3D Conversion
- Pix2Vox: Context-Aware 3D Reconstruction From Single and Multi-View Images
- Domain-Adaptive Single-View 3D Reconstruction
- Few-Shot Generalization for Single-Image 3D Reconstruction via Priors
- DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction
- Front2Back: Single View 3D Shape Reconstruction via Front to Back Prediction
- BSP-Net: Generating Compact Meshes via Binary Space Partitioning
- Height and Uprightness Invariance for 3D Prediction From a Single View
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion
- CvxNet: Learnable Convex Decomposition
- Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction
- Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors
- GSIR: Generalizable 3D Shape Interpretation and Reconstruction
- DR-KFS: A Differentiable Visual Similarity Metric for 3D Shape Reconstruction
- Self-supervised Single-view 3D Reconstruction via Semantic Consistency
- Ladybird: Quasi-Monte Carlo Sampling for Deep Implicit Field Based 3D Reconstruction with Symmetry
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction
- SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images
- UCLID-Net: Single View Reconstruction in Object Space
- Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images
- D2IM-Net: Learning Detail Disentangled Implicit Fields From Single Images
- NeRD: Neural 3D Reflection Symmetry Detector
- Fostering Generalization in Single-view 3D Reconstruction by Learning a Hierarchy of Local and Global Shape Priors
- Single-View 3D Object Reconstruction From Shape Priors in Memory
- Implicit Surface Representations as Layers in Neural Networks
- Ray-ONet: Efficient 3D Reconstruction From A Single RGB Image
- Learning Anchored Unsigned Distance Functions with Gradient Direction Alignment for Single-view Garment Reconstruction
- Geometric Granularity Aware Pixel-to-Mesh
- Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches
- 3DIAS: 3D Shape Reconstruction With Implicit Algebraic Surfaces
- A Dataset-Dispersion Perspective on Reconstruction Versus Recognition in Single-View 3D Reconstruction Networks
- 3D Reconstruction of Novel Object Shapes from Single Images
- AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation
- Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes
- SkeletonNet: A Topology-Preserving Solution for Learning Mesh Reconstruction of Object Surfaces from RGB Images
- Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization
- Structural Causal 3D Reconstruction
- Few-shot Single-view 3D Reconstruction with Memory Prior Contrastive Network
- Semi-Supervised Single-View 3D Reconstruction via Prototype Shape Priors
- Single-view 3D Scene Reconstruction with High-fidelity Shape and Texture
- Pre-train, Self-train, Distill: A simple recipe for Supersizing 3D Reconstruction
- A Papier-Mâché Approach to Learning 3D Surface Generation
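
Many of the single-view methods above (e.g. Occupancy Networks, Learning Implicit Fields, DISN) share a common inference pattern: query a learned implicit field on a dense grid and extract a mesh with marching cubes. The sketch below is a generic, illustrative version of that pipeline, not any paper's released code; `decoder` and `latent` are placeholder names for a trained occupancy/SDF network and its conditioning code.

```python
# Minimal sketch (assumptions noted above): evaluate an implicit field on a grid,
# then run marching cubes. Requires torch and scikit-image.
import numpy as np
import torch
from skimage.measure import marching_cubes

def extract_mesh(decoder, latent, resolution=128, bound=0.5, level=0.5):
    # Dense grid of query points in [-bound, bound]^3.
    xs = np.linspace(-bound, bound, resolution, dtype=np.float32)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)   # (R, R, R, 3)
    points = torch.from_numpy(grid.reshape(-1, 3))

    # Query the (placeholder) network in chunks to keep memory bounded.
    values = []
    with torch.no_grad():
        for chunk in torch.split(points, 65536):
            values.append(decoder(latent, chunk).squeeze(-1).cpu())
    field = torch.cat(values).reshape(resolution, resolution, resolution).numpy()

    # Iso-level 0.5 for occupancy fields, 0.0 for signed distance fields.
    verts, faces, normals, _ = marching_cubes(field, level=level)
    verts = verts / (resolution - 1) * 2 * bound - bound               # grid indices -> world coords
    return verts, faces, normals
```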
Multi-view
- 3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction
- 3D Shape Induction from 2D Views of Multiple Objects
- Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction
- Conditional Single-view Shape Generation for Multi-view Stereo Reconstruction
- Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation
- Multiview Aggregation for Learning Category-Specific Shape Reconstruction
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images
- Multi-view 3D Reconstruction with Transformers
- 3D-C2FT: Coarse-to-fine Transformer for Multi-view 3D Reconstruction
- FvOR: Robust Joint Shape and Pose Optimization for Few-view Object Reconstruction
- FOUND: Foot Optimisation with Uncertain Normals for Surface Deformation using Synthetic Data
Unsupervised
- Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision
- Multi-view Supervision for Single-View Reconstruction via Differentiable Ray Consistency
- Learning View Priors for Single-view 3D Reconstruction
- Escaping Plato's Cave: 3D Shape From Adversarial Rendering
- Learning to Infer Implicit Surfaces without 3D Supervision
- Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer
- Leveraging 2D Data to Learn Textured 3D Mesh Generation
- Implicit Mesh Reconstruction from Unannotated Image Collections
- Self-supervised Single-view 3D Reconstruction via Semantic Consistency
- SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images
- Shelf-Supervised Mesh Prediction in the Wild
- Fully Understanding Generic Objects: Modeling, Segmentation, and Reconstruction
- Self-Supervised 3D Mesh Reconstruction from Single Images
- View Generalization for Single Image Textured 3D Models
- Do 2D GANs Know 3D Shape? Unsupervised 3D shape reconstruction from 2D Image GANs
- Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering
- Discovering 3D Parts from Image Collections
- Learning Canonical 3D Object Representation for Fine-Grained Recognition
- Toward Realistic Single-View 3D Object Reconstruction with Unsupervised Learning from Multiple Images
- Learning Generative Models of Textured 3D Meshes from Real-World Images
- To The Point: Correspondence-driven monocular 3D category reconstruction
- Topologically-Aware Deformation Fields for Single-View 3D Reconstruction
- 2D GANs Meet Unsupervised Single-View 3D Reconstruction
- Monocular 3D Object Reconstruction with GAN Inversion
- Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion
- Seeing a Rose in Five Thousand Ways
- ShapeClipper: Scalable 3D Shape Learning From Single-View Images via Geometric and CLIP-Based Consistency
- SAOR: Single-View Articulated Object Reconstruction
- Progressive Learning of 3D Reconstruction Network from 2D GAN Data
- Rethinking Reprojection: Closing the Loop for Pose-Aware Shape Reconstruction from a Single Image
- Multi-View Consistency as Supervisory Signal for Learning Shape and Pose Prediction
- Shape and Viewpoint without Keypoints
- Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild
- Learning Category-Specific Mesh Reconstruction from Image Collections
- Learning Single-View 3D Reconstruction with Limited Pose Supervision
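
A recurring ingredient in the unsupervised papers above (Perspective Transformer Nets, differentiable ray consistency, DIB-R and their follow-ups) is supervising predicted 3D shape with 2D masks through a differentiable renderer. Below is a minimal soft-IoU silhouette loss as a hedged sketch of that idea; `diff_render_silhouette` is a hypothetical stand-in for whichever differentiable renderer a given method actually uses.

```python
# Hedged sketch: soft-IoU loss between a rendered soft mask and a ground-truth mask.
import torch

def silhouette_loss(pred_mask: torch.Tensor, gt_mask: torch.Tensor, eps: float = 1e-6):
    """pred_mask, gt_mask: (B, H, W) values in [0, 1]; returns scalar 1 - soft IoU."""
    inter = (pred_mask * gt_mask).sum(dim=(-2, -1))
    union = (pred_mask + gt_mask - pred_mask * gt_mask).sum(dim=(-2, -1))
    return (1.0 - inter / (union + eps)).mean()

# Typical usage (placeholder names, hypothetical renderer):
#   pred_mask = diff_render_silhouette(vertices, faces, camera)  # differentiable w.r.t. vertices
#   silhouette_loss(pred_mask, gt_mask).backward()
```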
Scene-level
Single-view
- IM2CAD
- 3D-RCNN: Instance-level 3D Object Reconstruction via Render-and-Compare
- Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene
- Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image
- Mesh R-CNN
- 3D Scene Reconstruction With Multi-Layer Depth and Epipolar Transformers
- 3D-RelNet: Joint Object and Relational Network for 3D Prediction
- Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes from a Single Image
- 3D Scene Reconstruction from a Single Viewport
- CoReNet: Coherent 3D scene reconstruction from a single RGB image
- Image-to-Voxel Model Translation for 3D Scene Reconstruction and Segmentation
- Holistic 3D Scene Understanding from a Single Image with Implicit Representation
- From Points to Multi-Object 3D Reconstruction
- Learning to Recover 3D Scene Shape from a Single Image
- Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image
- Panoptic 3D Scene Reconstruction From a Single RGB Image
- Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image
- Towards High-Fidelity Single-view Holistic Reconstruction of Indoor Scenes
- 3D-Former: Monocular Scene Reconstruction with SDF 3D Transformers
- BUOL: A Bottom-Up Framework With Occupancy-Aware Lifting for Panoptic 3D Scene Reconstruction From a Single Image
Multi-view
- MARMVS: Matching Ambiguity Reduced Multiple View Stereo for Efficient Large Scale Scene Reconstruction
- FroDO: From Detections to 3D Objects
- Associative3D: Volumetric Reconstruction from Sparse Views
- Atlas: End-to-End 3D Scene Reconstruction from Posed Images
- NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video
- TransformerFusion: Monocular RGB Scene Reconstruction using Transformers
- Learning 3D Object Shape and Layout without 3D Supervision
- Directed Ray Distance Functions for 3D Scene Reconstruction
- Learning 3D Scene Priors with 2D Supervision
- FineRecon: Depth-aware Feed-forward Network for Detailed 3D Reconstruction
- CVRecon: Rethinking 3D Geometric Feature Learning For Neural Reconstruction
- VisFusion: Visibility-Aware Online 3D Scene Reconstruction From Videos
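
Several of the multi-view scene methods above (Atlas, NeuralRecon, TransformerFusion) back-project per-image features into a shared voxel volume before regressing a TSDF. The snippet below sketches that single step under simple pinhole assumptions; the function name, shapes, and conventions are illustrative rather than taken from any of the listed codebases.

```python
# Illustrative sketch: project voxel centres into one image and gather features.
import torch
import torch.nn.functional as F

def backproject_features(feat, K, cam_T_world, voxel_centers):
    """
    feat:          (C, H, W) image feature map
    K:             (3, 3) camera intrinsics
    cam_T_world:   (4, 4) world -> camera extrinsics
    voxel_centers: (N, 3) voxel centres in world coordinates
    returns:       (N, C) per-voxel features, zero for voxels behind the camera / off-image
    """
    C, H, W = feat.shape
    homog = torch.cat([voxel_centers, torch.ones_like(voxel_centers[:, :1])], dim=1)  # (N, 4)
    cam = (cam_T_world @ homog.T).T[:, :3]                     # points in the camera frame
    z = cam[:, 2].clamp(min=1e-6)
    pix = (K @ cam.T).T                                        # (N, 3)
    u, v = pix[:, 0] / z, pix[:, 1] / z

    # Normalise pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([2 * u / (W - 1) - 1, 2 * v / (H - 1) - 1], dim=-1).view(1, 1, -1, 2)
    sampled = F.grid_sample(feat[None], grid, align_corners=True)   # (1, C, 1, N)
    sampled = sampled[0, :, 0].T                                     # (N, C)

    valid = (cam[:, 2] > 0) & (u >= 0) & (u <= W - 1) & (v >= 0) & (v <= H - 1)
    return sampled * valid[:, None].float()
```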
Neural Surface
Multi-view
- SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape Optimization
- Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision
- Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance
- Unsupervised Learning of 3D Object Categories from Videos in the Wild
- Neural Lumigraph Rendering
- Iso-Points: Optimizing Neural Implicit Surfaces With Hybrid Representations
- Learning Signed Distance Field for Multi-view Surface Reconstruction
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
- NeRS: Neural Reflectance Surfaces for Sparse-View 3D Reconstruction in the Wild
- Volume Rendering of Neural Implicit Surfaces
- NeuralWarp: Improving neural implicit surfaces geometry with patch warping
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption
- GenDR: A Generalized Differentiable Renderer
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction
- Critical Regularizations for Neural Surface Reconstruction in the Wild
- Multi-View Mesh Reconstruction with Neural Deferred Shading
- Differentiable Stereopsis: Meshes From Multiple Views Using Differentiable Rendering
- SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse views
- Object-Compositional Neural Implicit Surfaces
- SNeS: Learning Probably Symmetric Neural Surfaces from Incomplete Data
- Neural 3D Reconstruction in the Wild
- Differentiable Signed Distance Function Rendering
- Differentiable Rendering of Neural SDFs through Reparameterization
- Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds
- Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction
- HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details
- Recovering Fine Details for Neural Implicit Surface Reconstruction
- NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors
- Sphere-Guided Training of Neural Implicit Surfaces
- QFF: Quantized Fourier Features for Neural Field Representations
- NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction
- Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction
- PermutoSDF: Fast Multi-View Reconstruction with Implicit Surfaces using Permutohedral Lattices
- ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies
- NeuDA: Neural Deformable Anchor for High-Fidelity Implicit Surface Reconstruction
- SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction
- I²-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs
- NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-view Images
- NeUDF: Leaning Neural Unsigned Distance Fields With Volume Rendering
- Towards Better Gradient Consistency for Neural Signed Distance Functions via Level Set Alignment
- Neuralangelo: High-Fidelity Neural Surface Reconstruction
- VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction
- PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces
- HR-NeuS: Recovering High-Frequency Surface Geometry via Neural Implicit Surfaces
- RICO: Regularizing the Unobservable for Indoor Compositional Reconstruction
- Learning a Room with the Occ-SDF Hybrid: Signed Distance Function Mingled with Occupancy Aids Scene Representation
- NeUDF: Learning Unsigned Distance Fields from Multi-view Images for Reconstructing Non-watertight Models
- S-VolSDF: Sparse Multi-View Stereo Regularization of Neural Implicit Surfaces
- VDN-NeRF: Resolving Shape-Radiance Ambiguity via View-Dependence Normalization
- FastMESH: Fast Surface Reconstruction by Hexagonal Mesh-based Neural Rendering
- Explicit Neural Surfaces: Learning Continuous Geometry With Deformation Fields
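
Most of the surface methods above couple an SDF network with volume rendering. As one concrete, hedged example of the idea, the sketch below converts SDF samples along a ray into compositing weights using a VolSDF-style Laplace-CDF density; NeuS and its follow-ups use different transforms, so treat the constants and formula as illustrative, not as any specific paper's implementation.

```python
# Illustrative sketch: SDF samples along a ray -> volume-rendering weights.
import torch

def sdf_to_density(sdf, alpha=100.0, beta=0.01):
    # Laplace-CDF transform (VolSDF-style): sigma = alpha * Psi_beta(-sdf),
    # i.e. high inside the surface (sdf < 0), near zero far outside it.
    return alpha * torch.where(
        sdf <= 0,
        1.0 - 0.5 * torch.exp(sdf / beta),
        0.5 * torch.exp(-sdf / beta),
    )

def render_weights(sdf_samples, deltas):
    """
    sdf_samples: (num_rays, num_samples) SDF values along each ray, ordered near-to-far
    deltas:      (num_rays, num_samples) distances between consecutive samples
    returns:     (num_rays, num_samples) weights w_i = T_i * (1 - exp(-sigma_i * delta_i))
    """
    free_energy = sdf_to_density(sdf_samples) * deltas
    alpha = 1.0 - torch.exp(-free_energy)
    transmittance = torch.exp(-torch.cumsum(
        torch.cat([torch.zeros_like(free_energy[:, :1]), free_energy[:, :-1]], dim=-1), dim=-1))
    return transmittance * alpha
```

The resulting weights are then used exactly as in standard volume rendering: colours (or depths) sampled along the ray are blended by these weights to form the pixel value that gets compared against the input images.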
Point-cloud
- Deep Geometric Prior for Surface Reconstruction
- Scan2Mesh: From Unstructured Range Scans to 3D Meshes
- Meshlet Priors for 3D Mesh Reconstruction
- SSRNet: Scalable 3D Surface Reconstruction Network
- SAL: Sign Agnostic Learning of Shapes from Raw Data
- Implicit Geometric Regularization for Learning Shapes
- Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance
- PointTriNet: Learned Triangulation of 3D Point Sets
- Points2Surf: Learning Implicit Surfaces from Point Cloud Patches
- Convolutional Occupancy Networks
- Implicit Neural Representations with Periodic Activation Functions
- Neural Unsigned Distance Fields for Implicit Function Learning
- Differentiable Surface Triangulation
- SALD: Sign Agnostic Learning with Derivatives
- Deep Implicit Moving Least-Squares Functions for 3D Reconstruction
- Sign-Agnostic Implicit Learning of Surface Self-Similarities for Shape Modeling and Reconstruction from Raw Point Clouds
- Learning Delaunay Surface Elements for Mesh Reconstruction
- Neural Splines: Fitting 3D Surfaces with Infinitely-Wide Neural Networks
- Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces
- Phase Transitions, Distance Functions, and Implicit Neural Representations
- Vis2Mesh: Efficient Mesh Reconstruction from Unstructured Point Clouds of Large Scenes with Learned Virtual View Visibility
- Deep Hybrid Self-Prior for Full 3D Mesh Generation
- Adaptive Surface Reconstruction with Multiscale Convolutional Kernels
- SA-ConvONet: Sign-Agnostic Optimization of Convolutional Occupancy Networks
- Deep Implicit Surface Point Prediction Networks
- Shape As Points: A Differentiable Poisson Solver
- AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations
- Scalable Surface Reconstruction with Delaunay-Graph Neural Networks
- Neural-IMLS: Learning Implicit Moving Least-Squares for Surface Reconstruction from Unoriented Point clouds
- Neural Fields as Learnable Kernels for 3D Reconstruction
- POCO: Point Convolution for Surface Reconstruction
- GIFS: Neural Implicit Function for General Shape Representation
- Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors
- Surface Reconstruction from Point Clouds by Learning Predictive Context Priors
- DiGS: Divergence Guided Shape Implicit Neural Representation for Unoriented Point Clouds
- VisCo Grids: Surface Reconstruction with Viscosity and Coarea Grids
- GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions
- Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations
- Deep Point Cloud Simplification for High-quality Surface Reconstruction
- RangeUDF: Semantic Surface Reconstruction from 3D Point Clouds
- Neural Poisson: Indicator Functions for Neural Fields
- GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided Distance Representation
- CircNet: Meshing 3D Point Clouds with Circumcenter Detection
- ALTO: Alternating Latent Topologies for Implicit 3D Reconstruction
- Octree Guided Unoriented Surface Reconstruction
- Unsupervised Inference of Signed Distance Functions From Single Sparse Point Clouds Without Learning Priors
- Neural Vector Fields: Implicit Representation by Explicit Learning
- StEik: Stabilizing the Optimization of Neural Signed Distance Functions and Finer Shape Representation
- Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping
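
Whatever representation the point-cloud methods above adopt, reconstruction quality is commonly trained and evaluated with the Chamfer distance between points sampled from the predicted surface and the reference points. A brute-force reference implementation, included only to make the metric explicit (quadratic in the number of points, so use a nearest-neighbour library for large clouds):

```python
# Symmetric squared Chamfer distance between two point sets.
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """p: (N, 3), q: (M, 3); returns mean nearest-neighbour squared distance, both directions."""
    d2 = torch.cdist(p, q) ** 2            # (N, M) pairwise squared Euclidean distances
    return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()
```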
RGB-D
- Neural RGB-D Surface Reconstruction
- BNV-Fusion: Dense 3D Reconstruction Using Bi-Level Neural Volume Fusion
- NICE-SLAM: Neural Implicit Scalable Encoding for SLAM - slam) |
- ShAPO: Implicit Representations for Multi-Object Shape, Appearance, and Pose Optimization
- CIRCLE: Convolutional Implicit Reconstruction and Completion for Large-scale Indoor Scene
- Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
- GO-Surf: Neural Feature Grid Optimization for Fast, High-Fidelity RGB-D Surface Reconstruction
- MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices
- FastSurf: Fast Neural RGB-D Surface Reconstruction using Per-Frame Intrinsic Refinement and TSDF Fusion Prior Learning
- Dynamic Voxel Grid Optimization for High-Fidelity RGB-D Supervised Surface Reconstruction
- TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using Differentiable Rendering
- Multiview Compressive Coding for 3D Reconstruction
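
Classic TSDF fusion is the volumetric integration step that much of the RGB-D work above builds on or compares against (FastSurf, for instance, explicitly learns with a TSDF fusion prior). The sketch below is a deliberately simple, assumption-laden version of one integration step (single frame, flat voxel list, nearest-pixel lookup), not code from any listed project.

```python
# Minimal TSDF integration of a single depth frame into a flat voxel grid.
import numpy as np

def integrate_frame(tsdf, weight, voxel_centers, depth, K, cam_T_world, trunc=0.04):
    """
    tsdf, weight:  (N,) running TSDF values and integration weights (updated in place)
    voxel_centers: (N, 3) voxel centres in world coordinates
    depth:         (H, W) depth map in metres
    K:             (3, 3) intrinsics; cam_T_world: (4, 4) world -> camera transform
    """
    H, W = depth.shape
    homog = np.concatenate([voxel_centers, np.ones((len(voxel_centers), 1))], axis=1)
    cam = (cam_T_world @ homog.T).T[:, :3]
    z = cam[:, 2]
    z_safe = np.where(z > 1e-6, z, np.inf)                 # avoid dividing by non-positive depth
    u = np.round(K[0, 0] * cam[:, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * cam[:, 1] / z_safe + K[1, 2]).astype(int)

    valid = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0

    sdf = d - z                                            # signed distance along the viewing ray
    valid &= sdf > -trunc                                  # skip voxels far behind the observed surface
    tsdf_obs = np.clip(sdf / trunc, -1.0, 1.0)

    # Running weighted average of per-frame observations.
    w_new = weight[valid] + 1.0
    tsdf[valid] = (tsdf[valid] * weight[valid] + tsdf_obs[valid]) / w_new
    weight[valid] = w_new
    return tsdf, weight
```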
Survey
- Image-based 3D Object Reconstruction: State-of-the-Art and Trends in the Deep Learning Era
- Neural Fields in Visual Computing and Beyond
- Advances in Neural Rendering
- Surface Reconstruction from Point Clouds: A Survey and a Benchmark
- NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review
- A Review of Deep Learning-Powered Mesh Reconstruction Methods