# Awesome Scene Representation [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)

A curated list of awesome scene representation (NeRFs) papers, code, and resources, inspired by [awesome-computer-vision](https://github.com/jbhuang0604/awesome-computer-vision).
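
Nearly every entry below builds on, accelerates, or extends the volume-rendering quadrature popularized by NeRF. As a quick orientation, here is a minimal NumPy sketch of that compositing step; it is illustrative only (the function name and toy inputs are ours, not code from any repository linked in this list).

```python
import numpy as np

def composite_ray(sigmas, rgbs, deltas):
    """Discrete volume-rendering quadrature used by NeRF-style methods.

    sigmas: (N,) densities at samples along one ray
    rgbs:   (N, 3) radiance (color) at those samples
    deltas: (N,) distances between adjacent samples
    """
    # Per-segment opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i = prod_{j < i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    # Per-sample compositing weights and the final pixel color
    weights = trans * alphas
    color = (weights[:, None] * rgbs).sum(axis=0)
    return color, weights

# Toy usage: one ray with four samples
sigmas = np.array([0.0, 0.5, 2.0, 5.0])
rgbs = np.array([[0.1, 0.1, 0.1], [0.2, 0.6, 0.2], [0.9, 0.3, 0.3], [1.0, 1.0, 1.0]])
deltas = np.full(4, 0.25)
print(composite_ray(sigmas, rgbs, deltas))
```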

- [Nerfstudio: A Modular Framework for Neural Radiance Field Development](https://docs.nerf.studio/), Tancik et al., SIGGRAPH 2023 | [github](https://github.com/nerfstudio-project/nerfstudio/) | [bibtex](./citations/nerfstudio.txt)
- [NerfAcc: A General NeRF Acceleration Toolbox](https://www.nerfacc.com/en/latest/), Li et al., Arxiv 2023 | [github](https://github.com/KAIR-BAIR/nerfacc) | [bibtex](./citations/li2023nerfacc.txt)
- [NeRFs: The Search for the Best 3D Representation](https://arxiv.org/abs/2308.02751), Ramamoorthi et al., Arxiv 2023 | [bibtex](./citations/ramamoorthi2023nerfs.txt)
- [Objaverse-XL: A Universe of 10M+ 3D Objects](https://objaverse.allenai.org/objaverse-xl-paper.pdf), Deitke et al., Arxiv 2023 | [bibtex](./citations/deitke2023objaverse.txt)
- [SDFStudio: A Unified Framework for Surface Reconstruction](https://autonomousvision.github.io/sdfstudio/), Yu et al., Arxiv 2023 | [github](https://github.com/autonomousvision/sdfstudio) | [bibtex](./citations/yu2022sdfstudio.txt)
- [LERF: Language Embedded Radiance Fields](https://www.lerf.io/), Kerr et al., ICCV 2023 | [github](https://github.com/kerrj/lerf) | [bibtex](./citations/lerf.txt)
- [GARField: Group Anything with Radiance Fields](https://www.garfield.studio/), Kim et al., Arxiv 2023 | [github](https://github.com/chungmin99/garfield) | [bibtex](./citations/garfield2024.txt)
- [Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs](https://ethanweber.me/nerfbusters/), Warburg et al., ICCV 2023 | [github](https://github.com/ethanweber/nerfbusters) | [bibtex](./citations/nerfbusters.txt)
- [MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes](https://merf42.github.io/), Reiser et al., SIGGRAPH 2023 | [github](https://github.com/google-research/google-research/tree/master/merf) | [bibtex](./citations/merf.txt)
- [BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis](https://bakedsdf.github.io/), Yariv et al., SIGGRAPH 2023 | [bibtex](./citations/bakedsdf.txt)
- [Language Embedded Radiance Fields for Zero-Shot Task-Oriented Grasping](https://lerftogo.github.io/desktop.html), Rashid et al., CoRL 2023 | [bibtex](./citations/sharma2023language.txt)
- [SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration](https://smerf-3d.github.io/), Duckworth et al., Arxiv 2023 | [bibtex](./citations/duckworth2023smerf.txt)
- [CamP: Camera Preconditioning for Neural Radiance Fields](https://camp-nerf.github.io/), Park et al., SIGGRAPH 2023 | [bibtex](./citations/park2023camp.txt)
- [Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields](https://jonbarron.info/zipnerf/), Barron et al., ICCV 2023 | [github](https://github.com/jake-austin/zipnerf-nerfstudio) | [bibtex](./citations/zipnerf.txt)
- [DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features](https://distillnerf.github.io/), Wang et al., CVPR 2024 | [bibtex](./citations/wang2024distillnerf.txt)
- [NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis](https://bland.website/spartn/), Zhou et al., CVPR 2023 | [bibtex](./citations/spartn.txt)
- [ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining](https://sarahweiii.github.io/zerorf/), Shi et al., Arxiv 2023 | [github](https://github.com/eliphatfs/zerorf)
- [Factor Fields and Beyond](https://apchenstu.github.io/FactorFields/), Chen et al., Arxiv and SIGGRAPH 2023 | [github](https://github.com/autonomousvision/factor-fields)
- [Live 3D Portrait: Real-Time Radiance Fields for Single-Image Portrait View Synthesis](https://research.nvidia.com/labs/nxp/lp3d/), Trevithick et al., SIGGRAPH 2023 | [bibtex](./citations/trevithick2023.txt)
- [GeNVS: Generative Novel View Synthesis with 3D-Aware Diffusion Models](https://nvlabs.github.io/genvs/), Chan et al., Arxiv 2023 | [github](https://github.com/NVlabs/genvs) | [bibtex](./citations/genvs.txt)
- [NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors](https://arxiv.org/abs/2212.03267), Deng et al., CVPR 2023 | [bibtex](./citations/deng2022nerdi.txt)
- [SSIF: Single-shot Implicit Morphable Faces with Consistent Texture Parameterization](https://research.nvidia.com/labs/toronto-ai/ssif/), Lin et al., SIGGRAPH 2023 | [bibtex](./citations/lin2023ssif.txt)
- [Learning a Diffusion Prior for NeRFs](https://arxiv.org/abs/2304.14473), Yang et al., ICLR 2023 Workshop | [bibtex](./citations/yang2023learning.txt)
- [Nerflets: Local Radiance Fields for Efficient Structure-Aware 3D Scene Representation from 2D Supervision](https://jetd1.github.io/nerflets-web/), Zhang et al., CVPR 2023 | [bibtex](./citations/nerflets.txt)
- [Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions](https://instruct-nerf2nerf.github.io/), Haque et al., ICCV 2023 | [github](https://github.com/ayaanzhaque/instruct-nerf2nerf) | [bibtex](./citations/instructnerf2023.txt)
- [GINA-3D: Learning to Generate Implicit Neural Assets in the Wild](https://arxiv.org/abs/2304.02163), Shen et al., CVPR 2023 | [bibtex](./citations/gina.txt)
- [DreamBooth3D: Subject-Driven Text-to-3D Generation](https://dreambooth3d.github.io/), Raj et al., Arxiv 2023 | [bibtex](./citations/dreambooth3d.txt)
- [Shap-E: Generating Conditional 3D Implicit Functions](https://arxiv.org/abs/2305.02463), Jun et al., Arxiv 2023 | [github](https://github.com/openai/shap-e) | [bibtex](./citations/jun2023shap.txt)
- [Neural Lens Modeling](https://neural-lens.github.io/), Xian et al., CVPR 2023 | [bibtex](./citations/neurallens.txt)
- [DeLiRa: Self-Supervised Depth, Light, and Radiance Fields](https://sites.google.com/view/tri-delira), Guizilini et al., ICCV 2023 | [bibtex](./citations/delira.txt)
- [ATT3D: Amortized Text-To-3D Object Synthesis](https://research.nvidia.com/labs/toronto-ai/ATT3D/?linkId=100000204660845), Lorraine et al., ICCV 2023 | [bibtex](./citations/lorraine2023att3d.txt)
- [HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance](https://hifa-team.github.io/HiFA-site/), Zhu et al., Arxiv 2023 | [github](https://github.com/HiFA-team/HiFA) | [bibtex](./citations/zhu2023hifa.txt)
- [FlowCam: Training Generalizable 3D Radiance Fields without Camera Poses via Pixel-Aligned Scene Flow](https://cameronosmith.github.io/flowcam/), Smith et al., Arxiv 2023 | [github](https://github.com/cameronosmith/FlowCam) | [bibtex](./citations/smith2023flowcam.txt)
- [Neural Kernel Surface Reconstruction](https://research.nvidia.com/labs/toronto-ai/NKSR/), Huang et al., CVPR 2023 | [github](https://github.com/nv-tlabs/nksr) | [bibtex](./citations/huang2023nksr.txt)
- [Neuralangelo: High-Fidelity Neural Surface Reconstruction](https://research.nvidia.com/labs/dir/neuralangelo/), Li et al., CVPR 2023 | [github](https://github.com/nvlabs/neuralangelo) | [bibtex](./citations/li2023neuralangelo.txt)
- [Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis](https://dynamic3dgaussians.github.io/), Luiten et al., Arxiv 2023 | [github](https://github.com/JonathonLuiten/Dynamic3DGaussians) | [bibtex](./citations/luiten2023dynamic.txt)
- [F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories](https://totoro97.github.io/projects/f2-nerf/), Wang et al., CVPR 2023 | [github](https://github.com/totoro97/f2-nerf) | [bibtex](./citations/f2nerf.txt)
- [NeuManifold: Neural Watertight Manifold Reconstruction with Efficient and High-Quality Rendering Support](https://sarahweiii.github.io/neumanifold/), Wei et al., Arxiv 2023 | [bibtex](./citations/wei2023neumanifold.txt)
- [Neural Spline Fields for Burst Image Fusion and Layer Separation](https://light.princeton.edu/publication/nsf/), Chugunov et al., Arxiv 2023 | [github](https://github.com/princeton-computational-imaging/NSF) | [bibtex](./citations/chugunov2023nsf.txt)
- [PlatoNeRF: 3D Reconstruction in Plato's Cave via Single-View Two-Bounce Lidar](https://platonerf.github.io/), Klinghoffer et al., CVPR 2024 | [github](https://github.com/facebookresearch/PlatoNeRF) | [bibtex](./citations/PlatoNeRF.txt)
- [Compact Neural Graphics Primitives with Learned Hash Probing](https://research.nvidia.com/labs/toronto-ai/compact-ngp/), Takikawa et al., SIGGRAPH Asia 2023 | [bibtex](./citations/takikawa2023compact.txt)
- [Strivec: Sparse Tri-Vector Radiance Fields](https://arxiv.org/abs/2307.13226), Gao et al., Arxiv 2023 | [bibtex](./citations/gao2023strivec.txt)
- [MonoNeRF: Learning Generalizable NeRFs from Monocular Videos without Camera Poses](https://oasisyang.github.io/mononerf/), Fu et al., ICML 2023 | [bibtex](./citations/fu2022mononerf.txt)
- [Neural Free-Viewpoint Relighting for Glossy Indirect Illumination](https://arxiv.org/abs/2307.06335), Raghavan et al., Arxiv 2023 | [bibtex](./citations/raghavan2023neural.txt)
- [SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates](https://scade-spacecarving-nerfs.github.io/), Uy et al., CVPR 2023 | [bibtex](./citations/scade.txt)
- [Progressively Optimized Local Radiance Fields for Robust View Synthesis](https://localrf.github.io/), Meuleman et al., CVPR 2023 | [github](https://github.com/facebookresearch/localrf) | [bibtex](./citations/localrf.txt)
- [Seeing the World through Your Eyes](https://world-from-eyes.github.io/), Alzayer et al., Arxiv 2023 | [bibtex](./citations/alzayer2023seeing.txt)
- [Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient](https://arxiv.org/abs/2306.09322), Zhu et al., Arxiv 2023 | [bibtex](./citations/zhu2023neural.txt)
- [DreamHuman: Animatable 3D Avatars from Text](https://dream-human.github.io/), Kolotouros et al., Arxiv 2023 | [bibtex](./citations/kolotouros2023dreamhuman.txt)
- [Generalizable One-shot Neural Head Avatar](https://arxiv.org/abs/2306.08768), Li et al., Arxiv 2023 | [bibtex](./citations/li2023generalizable.txt)
- [UniSim: A Neural Closed-Loop Sensor Simulator](https://waabi.ai/unisim/), Yang et al., CVPR 2023 | [bibtex](./citations/yang2023unisim.txt)
- [Grid-guided Neural Radiance Fields for Large Urban Scenes](https://city-super.github.io/gridnerf/), Xu et al., CVPR 2023 | [github](https://github.com/InternLandMark/LandMark) | [bibtex](./citations/gridnerf.txt)
- [K-Planes: Explicit Radiance Fields in Space, Time, and Appearance](https://sarafridov.github.io/K-Planes/), Fridovich-Keil et al., CVPR 2023 | [github](https://github.com/sarafridov/K-Planes) | [bibtex](./citations/kplanes.txt)
- [HexPlane: A Fast Representation for Dynamic Scenes](https://caoang327.github.io/HexPlane/), Cao et al., CVPR 2023 | [github](https://github.com/Caoang327/HexPlane) | [bibtex](./citations/hexplane.txt)
- [Neural Scene Chronology](https://zju3dv.github.io/neusc/), Lin et al., CVPR 2023 | [github](https://github.com/zju3dv/NeuSC) | [bibtex](./citations/lin2023neural.txt)
- [SUDS: Scalable Urban Dynamic Scenes](https://haithemturki.com/suds/), Turki et al., CVPR 2023 | [github](https://github.com/hturki/suds) | [bibtex](./citations/suds.txt)
- [Neural LiDAR Fields for Novel View Synthesis](https://research.nvidia.com/labs/toronto-ai/nfl/), Huang et al., ICCV 2023 | [bibtex](./citations/huang2023nfl.txt)
- [NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models](https://research.nvidia.com/labs/toronto-ai/NFLDM/), Kim et al., CVPR 2023 | [bibtex](./citations/nfldm.txt)
- [AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training](https://yifanjiang19.github.io/alignerf), Jiang et al., CVPR 2023 | [bibtex](./citations/alignerf.txt)
- [RoDynRF: Robust Dynamic Radiance Fields](https://robust-dynrf.github.io/), Liu et al., CVPR 2023 | [github](https://github.com/facebookresearch/robust-dynrf) | [bibtex](./citations/liu2023robust.txt)
- [NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images](https://liuyuan-pal.github.io/NeRO/), Liu et al., SIGGRAPH 2023 | [github](https://github.com/liuyuan-pal/NeRO) | [bibtex](./citations/liu2023nero.txt)
- [LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis](https://lane-composition.github.io/), Krishnan et al., Arxiv 2023 | [bibtex](./citations/lane.txt)
- [Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes](https://nv-tlabs.github.io/fegr/), Wang et al., CVPR 2023 | [bibtex](./citations/fegr.txt)
- [Random-Access Neural Compression of Material Textures](https://research.nvidia.com/labs/rtr/neural_texture_compression/), Vaidyanathan et al., SIGGRAPH 2023 | [bibtex](./citations/ntc2023.txt)
- [Neural Prefiltering for Correlation-Aware Levels of Detail](https://weiphil.github.io/portfolio/neural_lod), Weier et al., SIGGRAPH 2023 | [github](https://github.com/WeiPhil/neural_lod)
- [Lift3D: Synthesize 3D Training Data by Lifting 2D GAN to 3D Generative Radiance Field](https://len-li.github.io/lift3d-web/), Li et al., CVPR 2023 | [github](https://github.com/Len-Li/Lift3D) | [bibtex](./citations/lift3d.txt)
- [NeRF-Texture: Texture Synthesis with Neural Radiance Fields](https://yihua7.github.io/NeRF-Texture-web/), Huang et al., SIGGRAPH 2023 | [github](https://github.com/yihua7/NeRF-Texture) | [bibtex](./citations/huang2023nerf-texture.txt)
- [ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of Real World Objects](https://eyecan-ai.github.io/rene/), Toschi et al., CVPR 2023 | [bibtex](./citations/toschi2023relight.txt)
- [DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models](https://arxiv.org/abs/2302.12231), Wynn et al., CVPR 2023 | [github](https://github.com/nianticlabs/diffusionerf) | [bibtex](./citations/wynn-2023-diffusionerf.txt)
- [SeaThru-NeRF: Neural Radiance Fields in Scattering Media](https://sea-thru-nerf.github.io/), Levy et al., CVPR 2023 | [bibtex](./citations/levy2023seathru.txt)
- [Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields](https://eckertzhang.github.io/Text2NeRF.github.io/), Zhang et al., Arxiv 2023 | [github](https://github.com/eckertzhang/Text2NeRF) | [bibtex](./citations/zhang2023text2nerf.txt)
- [Semantic-Ray: Learning a Generalizable Semantic Field with Cross-Reprojection Attention](https://liuff19.github.io/S-Ray/), Liu et al., CVPR 2023 | [github](https://github.com/liuff19/Semantic-Ray) | [bibtex](./citations/liu2023semantic.txt)
- [SPARF: Neural Radiance Fields from Sparse and Noisy Poses](http://prunetruong.com/sparf.github.io/), Truong et al., CVPR 2023 | [github](https://github.com/google-research/sparf) | [bibtex](./citations/sparf2023.txt)
- [ActorsNeRF: Animatable Few-shot Human Rendering with Generalizable NeRFs](https://jitengmu.github.io/ActorsNeRF/), Mu et al., ICCV 2023 | [bibtex](./citations/mu2023actorsnerf.txt)
- [Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation](https://neuralcarver.github.io/michelangelo/), Zhao et al., Arxiv 2023 | [github](https://github.com/NeuralCarver/Michelangelo) | [bibtex](./citations/zhao2023michelangelo.txt)
- [Compressing Volumetric Radiance Fields to 1 MB](https://arxiv.org/abs/2211.16386), Li et al., CVPR 2023 | [github](https://github.com/AlgoHunt/VQRF) | [bibtex](./citations/li2022compressing.txt)
- [AutoRecon: Automated 3D Object Discovery and Reconstruction](https://zju3dv.github.io/autorecon/), Wang et al., CVPR 2023 | [github](https://github.com/zju3dv/AutoRecon) | [bibtex](./citations/wang2023autorecon.txt)
- [Total-Recon: Deformable Scene Reconstruction for Embodied View Synthesis](https://andrewsonga.github.io/totalrecon/), Song et al., ICCV 2023 | [github](https://github.com/andrewsonga/Total-Recon) | [bibtex](./citations/song2023totalrecon.txt)
- [DynIBaR: Neural Dynamic Image-Based Rendering](https://dynibar.github.io/), Li et al., CVPR 2023 | [github](https://github.com/google/dynibar) | [bibtex](./citations/li2023dynibar.txt)
- [3D Gaussian Splatting for Real-Time Radiance Field Rendering](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/), Kerbl et al., SIGGRAPH 2023 | [github](https://github.com/graphdeco-inria/gaussian-splatting) | [bibtex](./citations/kerbl3Dgaussians.txt)
- [Real-Time Neural Appearance Models](https://research.nvidia.com/labs/rtr/neural_appearance_models/), Zeltner et al., Arxiv 2023 | [bibtex](./citations/zeltner2023real.txt)
- [Towards Realistic Generative 3D Face Models](https://aashishrai3799.github.io/Towards-Realistic-Generative-3D-Face-Models/), Rai et al., Arxiv 2023 | [github](https://github.com/aashishrai3799/Towards-Realistic-Generative-3D-Face-Models/) | [bibtex](./citations/rai2023towards.txt)
- [Pointersect: Neural Rendering with Cloud-Ray Intersection](https://machinelearning.apple.com/research/pointersect), Chang et al., CVPR 2023 | [github](https://github.com/apple/ml-pointersect) | [bibtex](./citations/pointersect.txt)
- [ORCa: Glossy Objects as Radiance-Field Cameras](https://ktiwary2.github.io/objectsascam/), Tiwary et al., CVPR 2023 | [github](https://github.com/ktiwary2/orca) | [bibtex](./citations/glossyobjects2022.txt)
- [GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields](https://barbararoessle.github.io/ganerf/), Roessle et al., SIGGRAPH 2023 | [github](https://github.com/barbararoessle/ganerf) | [bibtex](./citations/roessle2023ganerf.txt)
- [TextMesh: Generation of Realistic 3D Meshes From Text Prompts](https://fabi92.github.io/textmesh/), Tsalicoglou et al., Arxiv 2023 | [bibtex](./citations/tsalicoglou2023textmesh.txt)
- [Local Implicit Ray Function for Generalizable Radiance Field Representation](https://xhuangcv.github.io/lirf/), Huang et al., CVPR 2023 | [github](https://github.com/xhuangcv/lirf/) | [bibtex](./citations/huang2023lirf.txt)
- [AutoNeRF: Training Implicit Scene Representations with Autonomous Agents](https://pierremarza.github.io/projects/autonerf/), Marza et al., Arxiv 2023 | [bibtex](./citations/marza2023autonerf.txt)
- [Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur](https://daipengwa.github.io/Hybrid-Rendering-ProjectPage/), Dai et al., CVPR 2023 | [github](https://github.com/CVMI-Lab/HybridNeuralRendering) | [bibtex](./citations/dai2023hybrid.txt)
- [ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation](https://ml.cs.tsinghua.edu.cn/prolificdreamer/), Wang et al., Arxiv 2023 | [github](https://github.com/thu-ml/prolificdreamer) | [bibtex](./citations/wang2023prolificdreamer.txt)
- [Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping](https://arxiv.org/abs/2306.01405), Ma et al., ICML 2023 | [github](https://github.com/mabaorui/Noise2NoiseMapping/) | [bibtex](./citations/ma2023learning.txt)
- [ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering](https://arxiv.org/abs/2305.02103), Ramazzina et al., Arxiv 2023 | [bibtex](./citations/ramazzina2023scatternerf.txt)
- [HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork](https://arxiv.org/abs/2306.06093), Sen et al., Arxiv 2023 | [bibtex](./citations/sen2023hyp.txt)
- [Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction](https://arxiv.org/abs/2306.05872), Sklyarova et al., Arxiv 2023 | [bibtex](./citations/sklyarova2023neural.txt)
- [DiViNeT: 3D Reconstruction from Disparate Views via Neural Template Regularization](https://arxiv.org/abs/2306.04699), Vora et al., Arxiv 2023 | [bibtex](./citations/vora2023divinet.txt)
- [ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting](https://nexuslrf.github.io/ENVIDR/), Liang et al., ICCV 2023 | [github](https://github.com/nexuslrf/ENVIDR) | [bibtex](./citations/liang2023envidr.txt)
- [SPIDR: SDF-based Neural Point Fields for Illumination and Deformation](https://nexuslrf.github.io/SPIDR_webpage/), Liang et al., Arxiv 2023 | [github](https://github.com/nexuslrf/SPIDR) | [bibtex](./citations/liang2022spidr.txt)
- [TensoIR: Tensorial Inverse Rendering](https://haian-jin.github.io/TensoIR/), Jin et al., CVPR 2023 | [github](https://github.com/Haian-Jin/TensoIR) | [bibtex](./citations/Jin2023TensoIR.txt)
- [Patch-based 3D Natural Scene Generation from a Single Example](http://weiyuli.xyz/Sin3DGen/), Li et al., CVPR 2023 | [github](https://github.com/wyysf-98/Sin3DGen) | [bibtex](./citations/weiyu23sin3dgen.txt)
- [3D Neural Field Generation using Triplane Diffusion](https://jryanshue.com/nfd/), Shue et al., CVPR 2023 | [github](https://github.com/JRyanShue/NFD) | [bibtex](./citations/nfd.txt)
- [Dynamic Point Fields](https://sergeyprokudin.github.io/dpf/), Prokudin et al., Arxiv 2023 | [bibtex](./citations/dpf.txt)
- [SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction](https://sparsefusion.github.io/), Zhou et al., CVPR 2023 | [github](https://github.com/zhizdev/sparsefusion) | [bibtex](./citations/sparsefusion.txt)
- [Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM](https://hengyiwang.github.io/projects/CoSLAM), Wang et al., CVPR 2023 | [github](https://github.com/HengyiWang/Co-SLAM) | [bibtex](./citations/wang2023coslam.txt)
- [Set-the-Scene: Global-Local Training for Generating Controllable NeRF Scenes](https://danacohen95.github.io/Set-the-Scene/), Cohen-Bar et al., Arxiv 2023 | [github](https://github.com/DanaCohen95/Set-the-Scene) | [bibtex](./citations/setthescene.txt)
- [Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance Fields](https://arxiv.org/abs/2303.16482), Hu et al., Arxiv 2023 | [bibtex](./citations/point2pix.txt)
- [Multiview Compressive Coding for 3D Reconstruction](https://mcc3d.github.io/), Wu et al., CVPR 2023 | [github](https://github.com/facebookresearch/MCC) | [bibtex](./citations/mcc.txt)
- [NeRF-Supervised Deep Stereo](https://nerfstereo.github.io/), Tosi et al., CVPR 2023 | [github](https://github.com/fabiotosi92/NeRF-Supervised-Deep-Stereo) | [bibtex](./citations/nerfstereo.txt)
- [NeILF++: Inter-reflectable Light Fields for Geometry and Material Estimation](https://yoyo000.github.io/NeILF_pp/), Zhang et al., ICCV 2023 | [github](https://github.com/apple/ml-neilfpp) | [bibtex](./citations/neilfpp.txt)
- [SparseNeRF: Distilling Depth Ranking for Few-shot Novel View Synthesis](https://sparsenerf.github.io/), Wang et al., ICCV 2023 | [github](https://github.com/Wanggcong/SparseNeRF) | [bibtex](./citations/sparsenerf.txt)
- [NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes](https://arxiv.org/abs/2303.09431), Rakotosaona et al., Arxiv 2023 | [bibtex](./citations/nerfmeshing.txt)
- [Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion](https://3d-avatar-diffusion.microsoft.com/), Wang et al., CVPR 2023 | [bibtex](./citations/rodin.txt)
- [PointAvatar: Deformable Point-based Head Avatars from Videos](https://zhengyuf.github.io/PointAvatar/), Zheng et al., CVPR 2023 | [github](https://github.com/zhengyuf/pointavatar) | [bibtex](./citations/Zheng2023pointavatar.txt)
- [Instant Volumetric Head Avatars](https://zielon.github.io/insta/), Zielonka et al., CVPR 2023 | [github](https://github.com/Zielon/INSTA) | [bibtex](./citations/zielonka2023insta.txt)
- [One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization](https://one-2-3-45.github.io/), Liu et al., Arxiv 2023 | [github](https://github.com/One-2-3-45/One-2-3-45) | [bibtex](./citations/liu2023one2345.txt)
- [Zero-1-to-3: Zero-shot One Image to 3D Object](https://zero123.cs.columbia.edu/), Liu et al., Arxiv 2023 | [github](https://github.com/cvlab-columbia/zero123) | [bibtex](./citations/zero123.txt)
- [Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors](https://guochengqian.github.io/project/magic123/), Qian et al., Arxiv 2023 | [github](https://github.com/guochengqian/Magic123) | [bibtex](./citations/qian2023magic123.txt)
- [RealFusion: 360° Reconstruction of Any Object from a Single Image](https://lukemelas.github.io/realfusion/), Melas-Kyriazi et al., CVPR 2023 | [github](https://github.com/lukemelas/realfusion) | [bibtex](./citations/melaskyriazi2023realfusion.txt)
- [3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models](https://1zb.github.io/3DShape2VecSet/), Zhang et al., SIGGRAPH 2023 | [github](https://github.com/1zb/3DShape2VecSet) | [bibtex](./citations/zhang20233dshape2vecset.txt)
- [Factor Fields: A Unified Framework for Neural Fields and Beyond](https://arxiv.org/abs/2302.01226), Chen et al., Arxiv 2023 | [bibtex](./citations/chen2023factor.txt)
- [Ref-NPR: Reference-Based Non-Photorealistic Radiance Fields for Controllable Scene Stylization](https://ref-npr.github.io/), Zhang et al., CVPR 2023 | [github](https://github.com/dvlab-research/Ref-NPR/) | [bibtex](./citations/zhang2023refnpr.txt)
- [Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion](https://sony.github.io/Instruct3Dto3D-doc/), Kamata et al., Arxiv 2023 | [bibtex](./citations/instruct3dto3d.txt)
- [FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization](https://jiawei-yang.github.io/FreeNeRF/), Yang et al., CVPR 2023 | [github](https://github.com/Jiawei-Yang/FreeNeRF) | [bibtex](./citations/freenerf.txt)
- [TUVF: Learning Generalizable Texture UV Radiance Fields](https://www.anjiecheng.me/TUVF), Cheng et al., Arxiv 2023 | [github](https://github.com/AnjieCheng/TUVF) | [bibtex](./citations/cheng2023tuvf.txt)
- [Super-NeRF: View-consistent Detail Generation for NeRF super-resolution](https://arxiv.org/abs/2304.13518), Han et al., TPAMI 2023 | [bibtex](./citations/han2023super.txt)
- [Chat with NeRF: Grounding 3D Objects in Neural Radiance Field through Dialog](https://chat-with-nerf.github.io/), Yang et al., Arxiv 2023 | [github](https://github.com/sled-group/chat-with-nerf) | [bibtex](./citations/chat.txt)
- [Segment Anything in 3D with NeRFs](https://jumpat.github.io/SA3D/), Cen et al., Arxiv 2023 | [github](https://github.com/Jumpat/SegmentAnythingin3D) | [bibtex](./citations/cen2023segment.txt)
- [HOSNeRF: Dynamic Human-Object-Scene Neural Radiance Fields from a Single Video](https://showlab.github.io/HOSNeRF/), Liu et al., Arxiv 2023 | [bibtex](./citations/liu2023hosnerf.txt)
- [Explicit Correspondence Matching for Generalizable Neural Radiance Fields](https://donydchen.github.io/matchnerf/), Chen et al., CVPR 2023 | [github](https://github.com/donydchen/matchnerf) | [bibtex](./citations/chen2023matchnerf.txt)
- [Self-supervised Learning by View Synthesis](https://arxiv.org/abs/2304.11330), Liu et al., Arxiv 2023 | [bibtex](./citations/liu2023self.txt)
- [SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields](https://spinnerf3d.github.io/), Mirzaei et al., CVPR 2023 | [github](https://github.com/SamsungLabs/SPIn-NeRF) | [bibtex](./citations/spinnerf.txt)
- [Neural Radiance Fields: Past, Present, and Future](https://arxiv.org/abs/2304.10050), Mittal, Arxiv 2023 | [bibtex](./citations/mittal2023neural.txt)
- [HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling](https://hyperreel.github.io/), Attal et al., CVPR 2023 | [github](https://github.com/facebookresearch/hyperreel) | [bibtex](./citations/hyperreel.txt)
- [Panoptic Lifting for 3D Scene Understanding with Neural Fields](https://nihalsid.github.io/panoptic-lifting/), Siddiqui et al., CVPR 2023 | [github](https://github.com/nihalsid/panoptic-lifting) | [bibtex](./citations/panopticlifting.txt)
- [Instance Neural Radiance Field](https://arxiv.org/abs/2304.04395), Hu et al., Arxiv 2023 | [bibtex](./citations/hu2023instance.txt)
- [Flow Supervision for Deformable NeRF](https://mightychaos.github.io/projects/fsdnerf/), Wang et al., CVPR 2023 | [github](https://github.com/MightyChaos/fsdnerf) | [bibtex](./citations/wang2023flow.txt)
- [DyLiN: Making Light Field Networks Dynamic](https://dylin2023.github.io/), Yu et al., CVPR 2023 | [github](https://github.com/Heng14/DyLiN) | [bibtex](./citations/dylin.txt)
- [NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views](https://vita-group.github.io/NeuralLift-360/), Xu et al., Arxiv 2023 | [github](https://github.com/VITA-Group/NeuralLift-360) | [bibtex](./citations/neuralLift.txt)
- [Ref-NeuS: Ambiguity-Reduced Neural Implicit Surface Learning for Multi-View Reconstruction with Reflection](https://g3956.github.io/), Ge et al., ICCV 2023 | [github](https://github.com/g3956/Ref-NeuS) | [bibtex](./citations/refneus.txt)
- [Seeing Through the Glass: Neural 3D Reconstruction of Object Inside a Transparent Container](https://arxiv.org/abs/2303.13805), Tong et al., CVPR 2023 | [github](https://github.com/hirotong/ReNeuS) | [bibtex](./citations/tong2023seeing.txt)
- [PAC-NeRF: Physics Augmented Continuum Neural Radiance Fields for Geometry-Agnostic System Identification](https://sites.google.com/view/PAC-NeRF), Li et al., ICLR 2023 | [github](https://github.com/xuan-li/PAC-NeRF) | [bibtex](./citations/pacnerf.txt)
- [NeRF-LiDAR: Generating Realistic LiDAR Point Clouds with Neural Radiance Fields](https://arxiv.org/abs/2304.14811), Zhang et al., Arxiv 2023 | [bibtex](./citations/zhang2023nerf.txt)
- [BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects](https://bundlesdf.github.io/), Wen et al., CVPR 2023 | [github](https://github.com/NVlabs/BundleSDF) | [bibtex](./citations/wen2023bundlesdf.txt)
- [ABLE-NeRF: Attention-Based Rendering with Learnable Embeddings for Neural Radiance Field](https://arxiv.org/abs/2303.13817), Tang et al., CVPR 2023 | [github](https://github.com/TangZJ/able-nerf) | [bibtex](./citations/tang2023able.txt)
- [FeatureNeRF: Learning Generalizable NeRFs by Distilling Foundation Models](https://jianglongye.com/featurenerf/), Ye et al., ICCV 2023 | [bibtex](./citations/featurenerf.txt)
- [Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction](https://lakonik.github.io/ssdnerf/), Chen et al., ICCV 2023 | [github](https://github.com/Lakonik/SSDNeRF) | [bibtex](./citations/ssdnerf.txt)
- [Learning to Render Novel Views from Wide-Baseline Stereo Pairs](https://yilundu.github.io/wide_baseline/), Du et al., CVPR 2023 | [github](https://yilundu.github.io/wide_baseline/) | [bibtex](./citations/widerender.txt)
- [Canonical Fields: Self-Supervised Learning of Pose-Canonicalized Neural Fields](https://ivl.cs.brown.edu/#/projects/canonicalfields), Agaram et al., CVPR 2023 | [github](https://github.com/brown-ivl/Cafi-Net) | [bibtex](./citations/agaram2023_cafinet.txt)
- [Neural Volumetric Memory for Visual Locomotion Control](https://rchalyang.github.io/NVM/), Yang et al., CVPR 2023 | [bibtex](./citations/nvm.txt)
- [DBARF: Deep Bundle-Adjusting Generalizable Neural Radiance Fields](https://aibluefisher.github.io/dbarf/), Yu et al., CVPR 2023 | [github](https://github.com/AIBluefisher/dbarf) | [bibtex](./citations/dbarf.txt)
- [NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects](https://jokeryan.github.io/projects/nerf-ds/), Yang et al., CVPR 2023 | [github](https://github.com/JokerYan/NeRF-DS) | [bibtex](./citations/nerfds.txt)
- [WildLight: In-the-wild Inverse Rendering with a Flashlight](https://junxuan-li.github.io/wildlight-website/), Chen et al., CVPR 2023 | [github](https://github.com/za-cheng/WildLight) | [bibtex](./citations/wildlight.txt)
- [CompoNeRF: Text-guided Multi-object Compositional NeRF with Editable 3D Scene Layout](https://arxiv.org/abs/2303.13843), Lin et al., Arxiv 2023 | [bibtex](./citations/componerf.txt)
- [TEGLO: High Fidelity Canonical Texture Mapping from Single-View Images](https://teglo-nerf.github.io/), Vinod et al., Arxiv 2023 | [bibtex](./citations/teglo.txt)
- [Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement](https://me.kiui.moe/nerf2mesh/), Tang et al., ICCV 2023 | [github](https://github.com/ashawkey/nerf2mesh) | [bibtex](./citations/nerf2mesh.txt)
- [NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior](https://nope-nerf.active.vision/), Bian et al., CVPR 2023 | [github](https://github.com/ActiveVisionLab/nope-nerf) | [bibtex](./citations/nopenerf.txt)
- [NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion](https://jiataogu.me/nerfdiff/), Gu et al., ICML 2023 | [bibtex](./citations/nerfdiff.txt)
- [α Surf: Implicit Surface Reconstruction for Semi-Transparent and Thin Objects with Decoupled Geometry and Opacity](https://alphasurf.netlify.app/), Wu et al., Arxiv 2023 | [bibtex](./citations/alphasurf.txt)
- [Implicit Neural Head Synthesis via Controllable Local Deformation Fields](https://imaging.cs.cmu.edu/local_deformation_fields/), Chen et al., CVPR 2023 | [bibtex](./citations/chen2023-implicit_head.txt)
- [EventNeRF: Neural Radiance Fields from a Single Colour Event Camera](https://4dqv.mpi-inf.mpg.de/EventNeRF/), Rudnev et al., CVPR 2023 | [bibtex](./citations/eventnerf.txt)
- [Factored Neural Representation for Scene Understanding](https://yushiangw.github.io/factorednerf/), Wong et al., SGP 2023 | [github](https://github.com/yushiangw/factorednerf) | [bibtex](./citations/factorednerf.txt)
- [PlenVDB: Memory Efficient VDB-Based Radiance Fields for Fast Training and Rendering](https://plenvdb.github.io/), Yan et al., CVPR 2023 | [github](https://github.com/wolfball/PlenVDB) | [bibtex](./citations/hyan2023plenvdb.txt)
- [L2G-NeRF: Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields](https://rover-xingyu.github.io/L2G-NeRF/), Chen et al., CVPR 2023 | [github](https://github.com/rover-xingyu/L2G-NeRF) | [bibtex](./citations/l2gnerf.txt)
- [Radiance Field Gradient Scaling for Unbiased Near-Camera Training](https://arxiv.org/abs/2305.02756), Philip et al., Arxiv 2023 | [bibtex](./citations/philip2023radiance.txt)
- [NeRF-Factory: An awesome PyTorch NeRF collection](https://github.com/kakaobrain/nerf-factory)
- [Deceptive-NeRF: Enhancing NeRF Reconstruction using Pseudo-Observations from Diffusion Models](https://deceptive-nerf.github.io/), Liu et al., Arxiv 2023 | [bibtex](./citations/liu2023deceptive.txt)
- [InpaintNeRF360: Text-Guided 3D Inpainting on Unbounded Neural Radiance Fields](https://arxiv.org/abs/2305.15094), Wang et al., Arxiv 2023 | [bibtex](./citations/wang2023inpaintnerf360.txt)
- [OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields](https://arxiv.org/abs/2305.14831), Yan et al., Arxiv 2023 | [bibtex](./citations/yan2023odn.txt)
- [Removing Objects From Neural Radiance Fields](https://nianticlabs.github.io/nerf-object-removal/), Weder et al., CVPR 2023 | [github](https://github.com/nianticlabs/nerf-object-removal) | [bibtex](./citations/weder2023removing.txt)
- [Evaluate Geometry of Radiance Field with Low-frequency Color Prior](https://arxiv.org/abs/2304.04351), Fang et al., Arxiv 2023 | [github](https://github.com/qihangGH/IMRC) | [bibtex](./citations/imrc.txt)
- [VDN-NeRF: Resolving Shape-Radiance Ambiguity via View-Dependence Normalization](https://arxiv.org/abs/2303.17968), Zhu et al., Arxiv 2023 | [github](https://github.com/BoifZ/VDN-NeRF) | [bibtex](./citations/vdn.txt)
- [Behind the Scenes: Density Fields for Single View Reconstruction](https://fwmb.github.io/bts/), Wimbauer et al., CVPR 2023 | [github](https://github.com/Brummi/BehindTheScenes) | [bibtex](./citations/behind.txt)
- [NeRFshop: Interactive Editing of Neural Radiance Fields](https://repo-sam.inria.fr/fungraph/nerfshop/), Jambon et al., I3D 2023 | [github](https://github.com/graphdeco-inria/nerfshop) | [bibtex](./citations/nerfshop.txt)
- [DITTO-NeRF: Diffusion-based Iterative Text To Omni-directional 3D Model](https://janeyeon.github.io/ditto-nerf/), Seo et al., Arxiv 2023 | [github](https://github.com/janeyeon/ditto-nerf-code) | [bibtex](./citations/ditto.txt)
- [Neural Microfacet Fields for Inverse Rendering](https://half-potato.gitlab.io/posts/nmf/), Mai et al., Arxiv 2023 | [github](https://github.com/half-potato/nmf) | [bibtex](./citations/nmf.txt)
- [StreetSurf: Extending Multi-view Implicit Surface Reconstruction to Street Views](https://ventusff.github.io/streetsurf_web/), Guo et al., Arxiv 2023 | [github](https://github.com/pjlab-ADG/neuralsim) | [bibtex](./citations/guo2023streetsurf.txt)
- [I2-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs](https://jingsenzhu.github.io/i2-sdf/), Zhu et al., CVPR 2023 | [github](https://github.com/jingsenzhu/i2-sdf) | [bibtex](./citations/i2sdf.txt)
- [PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces](https://arxiv.org/abs/2305.05594), Wang et al., CVPR 2023 | [github](https://github.com/yiqun-wang/PET-NeuS) | [bibtex](./citations/wang2023petneus.txt)
- [Reference-guided Controllable Inpainting of Neural Radiance Fields](https://ashmrz.github.io/reference-guided-3d/), Mirzaei et al., ICCV 2023 | [bibtex](./citations/reference.txt)
- [NeRFuser: Large-Scale Scene Representation by NeRF Fusion](https://arxiv.org/abs/2305.13307), Fang et al., Arxiv 2023 | [github](https://github.com/ripl/nerfuser) | [bibtex](./citations/fang23nerfuser.txt)
- [Registering Neural Radiance Fields as 3D Density Images](https://arxiv.org/abs/2305.12843), Jiang et al., Arxiv 2023 | [bibtex](./citations/jiang2023registering.txt)
- [TiNeuVox: Fast Dynamic Radiance Fields with Time-Aware Neural Voxels](https://jaminfong.cn/tineuvox/), Fang et al., SIGGRAPH Asia 2022 | [github](https://github.com/hustvl/TiNeuVox) | [bibtex](./citations/tineuvox.txt)
- [NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads](https://tobias-kirschstein.github.io/nersemble/), Kirschstein et al., SIGGRAPH Asia 2023 | [github](https://github.com/tobias-kirschstein/nersemble) | [bibtex](./citations/kirschstein2023nersemble.txt)
- [AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control](https://avatar-craft.github.io/), Jiang et al., Arxiv 2023 | [github](https://github.com/songrise/avatarcraft) | [bibtex](./citations/avatarcraft.txt)
- [LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar](https://www.liuyebin.com/latentavatar), Xu et al., SIGGRAPH 2023 | [github](https://github.com/YuelangX/LatentAvatar) | [bibtex](./citations/xu2023latentavatar.txt)
- [HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion](https://synthesiaresearch.github.io/humanrf/), Işık et al., SIGGRAPH 2023 | [github](https://github.com/synthesiaresearch/humanrf) | [bibtex](./citations/isik2023humanrf.txt)
- [AvatarReX: Real-time Expressive Full-body Avatars](https://liuyebin.com/AvatarRex/), Zheng et al., SIGGRAPH 2023 | [bibtex](./citations/zheng2023avatarrex.txt)
- [DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance](https://sites.google.com/view/dreamface), Zhang et al., SIGGRAPH 2023 | [bibtex](./citations/zhang2023dreamface.txt)
- [Unsupervised Object-Centric Voxelization for Dynamic Scene Understanding](https://sites.google.com/view/dynavol/), Gao et al., Arxiv 2023 | [github](https://github.com/zyp123494/DynaVol) | [bibtex](./citations/gao2023object.txt)
- [ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields](https://nagabhushansn95.github.io/publications/2023/ViP-NeRF.html), Somraj et al., SIGGRAPH 2023 | [github](https://github.com/NagabhushanSN95/ViP-NeRF) | [bibtex](./citations/vipnerf.txt)
- [General Neural Gauge Fields](https://arxiv.org/abs/2305.03462), Zhan et al., ICLR 2023 | [bibtex](./citations/zhan2023general.txt)
- [Online Learning of Neural Surface Light Fields alongside Real-time Incremental 3D Reconstruction](https://jarrome.github.io/NSLF-OL/), Yuan et al., Arxiv 2023 | [github](https://github.com/Jarrome/NSLF-OL) | [bibtex](./citations/yuan2023online.txt)
- [MRVM-NeRF: Mask-Based Pretraining for Neural Radiance Field](https://arxiv.org/abs/2304.04962), Yang et al., Arxiv 2023 | [bibtex](./citations/mrvm.txt)
- [UV Volumes for Real-time Rendering of Editable Free-view Human Performance](https://fanegg.github.io/UV-Volumes/), Chen et al., CVPR 2023 | [github](https://github.com/fanegg/UV-Volumes) | [bibtex](./citations/uvvolumes.txt)
- [Re-ReND: Real-time Rendering of NeRFs across Devices](https://arxiv.org/abs/2303.08717), Rojas et al., Arxiv 2023 | [bibtex](./citations/rerend.txt)
- [Learning Neural Volumetric Representations of Dynamic Humans in Minutes](https://zju3dv.github.io/instant_nvr/), Geng et al., CVPR 2023 | [bibtex](./citations/instant_nvr.txt)
- [PermutoSDF: Fast Multi-View Reconstruction with Implicit Surfaces using Permutohedral Lattices](https://radualexandru.github.io/permuto_sdf/), Rosu et al., CVPR 2023 | [github](https://github.com/RaduAlexandru/permuto_sdf) | [bibtex](./citations/permutosdf.txt)
- [FusedRF: Fusing Multiple Radiance Fields](https://arxiv.org/abs/2306.04180), Goel et al., Arxiv 2023 | [bibtex](./citations/goel2023fusedrf.txt)
- [Explicit Neural Surfaces: Learning Continuous Geometry With Deformation Fields](https://arxiv.org/abs/2306.02956), Walker et al., Arxiv 2023 | [bibtex](./citations/walker2023explicit.txt)
- [Neural Implicit Dense Semantic SLAM](https://arxiv.org/abs/2304.14560), Haghighi et al., Arxiv 2023 | [bibtex](./citations/haghighi2023neural.txt)
- [Point-SLAM: Dense Neural Point Cloud-based SLAM](https://arxiv.org/abs/2304.04278), Sandström et al., Arxiv 2023 | [github](https://github.com/tfy14esa/Point-SLAM) | [bibtex](./citations/pointslam.txt)
- [Decoupling Dynamic Monocular Videos for Dynamic View Synthesis](https://arxiv.org/abs/2304.01716), You et al., Arxiv 2023 | [bibtex](./citations/decoupling.txt)
- [Neural Field Convolutions by Repeated Differentiation](https://arxiv.org/abs/2304.01834), Nsampi et al., Arxiv 2023 | [bibtex](./citations/nfc.txt)
- [HQ3DAvatar: High Quality Controllable 3D Head Avatar](https://vcai.mpi-inf.mpg.de/projects/HQ3DAvatar/), Teotia et al., Arxiv 2023 | [bibtex](./citations/hq3davatar.txt)
- [FlexNeRF: Photorealistic Free-viewpoint Rendering of Moving Humans from Sparse Views](https://flex-nerf.github.io/), Jayasundara et al., CVPR 2023 | [bibtex](./citations/flexnerf.txt)
- [Enhanced Stable View Synthesis](https://arxiv.org/abs/2303.17094), Jain et al., CVPR 2023 | [bibtex](./citations/enhanced.txt)
- [S-VolSDF: Sparse Multi-View Stereo Regularization of Neural Implicit Surfaces](https://hao-yu-wu.github.io/s-volsdf/), Wu et al., Arxiv 2023 | [bibtex](./citations/svolsdf.txt)
- [SurfelNeRF: Neural Surfel Radiance Fields for Online Photorealistic Reconstruction of Indoor Scenes](https://gymat.github.io/SurfelNeRF-web/), Gao et al., CVPR 2023 | [github](https://github.com/TencentARC/SurfelNeRF) | [bibtex](./citations/surfelnerf.txt)
- [Learning Neural Duplex Radiance Fields for Real-Time View Synthesis](http://raywzy.com/NDRF/), Wan et al., CVPR 2023 | [bibtex](./citations/ndrf.txt)
- [Multi-Space Neural Radiance Fields](https://zx-yin.github.io/msnerf/), Yin et al., CVPR 2023 | [github](https://github.com/ZX-Yin/ms-nerf) | [bibtex](./citations/yin2023msnerf.txt)
- [Tetra-NeRF: Representing Neural Radiance Fields Using Tetrahedra](https://jkulhanek.com/tetra-nerf/), Kulhánek et al., Arxiv 2023 | [github](https://github.com/jkulhanek/tetra-nerf/) | [bibtex](./citations/tetra.txt)
- [Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering](https://arxiv.org/abs/2304.10075), Hu et al., Arxiv 2023 | [bibtex](./citations/multiscale.txt)
- [Instant Neural Radiance Fields Stylization](https://arxiv.org/abs/2303.16884), Li et al., Arxiv 2023 | [bibtex](./citations/li2023instant.txt)
- [MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table](https://arxiv.org/abs/2304.12587), Lee et al., Arxiv 2023 | [github](https://github.com/nfyfamr/MixNeRF) | [bibtex](./citations/lee2023mixnerf.txt)
- [Real-Time Neural Light Field on Mobile Devices](https://snap-research.github.io/MobileR2L/), Cao et al., CVPR 2023 | [github](https://github.com/snap-research/MobileR2L) | [bibtex](./citations/mobiler2l.txt)
- [NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations](https://arxiv.org/abs/2306.06359), Fu et al., ICML 2023 | [bibtex](./citations/fu2023nerfool.txt)
- [Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation](https://fantasia3d.github.io/), Chen et al., ICCV 2023 | [github](https://github.com/Gorilla-Lab-SCUT/Fantasia3D) | [bibtex](./citations/chen2023fantasia3d.txt)
- [Template-free Articulated Neural Point Clouds for Reposable View Synthesis](https://arxiv.org/abs/2305.19065), Uzolas et al., Arxiv 2023 | [github](https://github.com/lukasuz/Articulated-Point-NeRF) | [bibtex](./citations/uzolas2023template.txt)
- [DäRF: Boosting Radiance Fields from Sparse Inputs with Monocular Depth Adaptation](https://ku-cvlab.github.io/DaRF/), Song et al., Arxiv 2023 | [github](https://github.com/KU-CVLAB/DaRF) | [bibtex](./citations/song2023darf.txt)
- [Volume Feature Rendering for Fast Neural Radiance Field Reconstruction](https://arxiv.org/abs/2305.17916), Han et al., Arxiv 2023 | [bibtex](./citations/han2023volume.txt)
- [Compact Real-time Radiance Fields with Neural Codebook](https://arxiv.org/abs/2305.18163), Li et al., Arxiv 2023 | [bibtex](./citations/li2023compact.txt)
- [Towards a Robust Framework for NeRF Evaluation](https://arxiv.org/abs/2305.18079), Azzarelli et al., Arxiv 2023 | [bibtex](./citations/azzarelli2023towards.txt)
- [ConsistentNeRF: Enhancing Neural Radiance Fields with 3D Consistency for Sparse View Synthesis](https://skhu101.github.io/ConsistentNeRF/), Hu et al., Arxiv 2023 | [github](https://github.com/skhu101/ConsistentNeRF) | [bibtex](./citations/hu2023consistentnerf.txt)
- [Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data](https://szymanowiczs.github.io/viewset-diffusion), Szymanowicz et al., ICCV 2023 | [github](https://github.com/szymanowiczs/viewset-diffusion) | [bibtex](./citations/szymanowicz2023viewset.txt)
- [Multi-Object Navigation with dynamically learned neural implicit representations](https://arxiv.org/abs/2210.05129), Marza et al., ICCV 2023 | [bibtex](./citations/marza2022multi.txt)
- [OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation with Neural Radiance Fields](https://ornerf.github.io/), Yin et al., Arxiv 2023 | [github](https://github.com/cuteyyt/or-nerf) | [bibtex](./citations/yin2023ornerf.txt)
- [VGOS: Voxel Grid Optimization for View Synthesis from Sparse Inputs](https://arxiv.org/abs/2304.13386), Sun et al., IJCAI 2023 | [bibtex](./citations/sun2023vgos.txt)
- [DP-NeRF: Deblurred Neural Radiance Field with Physical Scene Priors](https://dogyoonlee.github.io/dpnerf/), Lee et al., CVPR 2023 | [github](https://github.com/dogyoonlee/DP-NeRF) | [bibtex](./citations/dpnerf.txt)
- [Temporal Interpolation is All You Need for Dynamic Neural Radiance Fields](https://sungheonpark.github.io/tempinterpnerf/), Park et al., CVPR 2023 | [bibtex](./citations/park2023temporal.txt)
- [FMapping: Factorized Efficient Neural Field Mapping for Real-Time Dense RGB SLAM](https://vlis2022.github.io/fmap/), Hua et al., Arxiv 2023 | [github](https://github.com/thua919/FMapping) | [bibtex](./citations/fmaphua23.txt)
- [ZIGNeRF: Zero-shot 3D Scene Representation with Invertible Generative Neural Radiance Fields](https://arxiv.org/abs/2306.02741), Ko et al., Arxiv 2023 | [bibtex](./citations/ko2023zignerf.txt)
- [LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs](https://people.cs.umass.edu/~zezhoucheng/lu-nerf/), Cheng et al., ICCV 2023 | [bibtex](./citations/cheng2023lunerf.txt)
- [H2-Mapping: Real-time Dense Mapping Using Hierarchical Hybrid Representation](https://arxiv.org/abs/2306.03207), Jiang et al., Arxiv 2023 | [github](https://github.com/SYSU-STAR/H2-Mapping) | [bibtex](./citations/jiang2023h2.txt)
- [Binary Radiance Fields](https://arxiv.org/abs/2306.07581), Shin et al., Arxiv 2023 | [bibtex](./citations/shin2023binary.txt)
- [NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering](https://arxiv.org/abs/2306.07632), Mao et al., Arxiv 2023 | [bibtex](./citations/mao2023neus.txt)
- [UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video](https://urbaninverserendering.github.io/), Lin et al., Arxiv 2023 | [bibtex](./citations/lin2023urbanir.txt)
- [Neural Volumetric Reconstruction for Coherent Synthetic Aperture Sonar](https://arxiv.org/abs/2306.09909), Reed et al., Arxiv 2023 | [bibtex](./citations/reed2023neural.txt)
- [Edit-DiffNeRF: Editing 3D Neural Radiance Fields using 2D Diffusion Model](https://arxiv.org/abs/2306.09551), Yu et al., Arxiv 2023 | [bibtex](./citations/yu2023edit.txt)
- [Articulated Object Neural Radiance Field](https://github.com/zubair-irshad/articulated-object-nerf), Irshad
- [Benchmarking and Analyzing 3D-aware Image Synthesis with a Modularized Codebase](https://arxiv.org/abs/2306.12423v1), Wang et al., Arxiv 2023 | [github](https://github.com/qiuyu96/Carver) | [bibtex](./citations/wang2023benchmarking.txt)
- [Floaters No More: Radiance Field Gradient Scaling for Improved Near-Camera Training](https://gradient-scaling.github.io/), Philip et al., EGSR 2023 | [bibtex](./citations/philip23.txt)
- [AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation](https://zeng-yifei.github.io/avatarbooth_page/), Zeng et al., Arxiv 2023 | [github](https://github.com/zeng-yifei/AvatarBooth) | [bibtex](./citations/zeng2023avatarbooth.txt)
- [Blended-NeRF: Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields](https://www.vision.huji.ac.il/blended-nerf/), Gordon et al., Arxiv 2023 | [bibtex](./citations/gordon2023blendednerf.txt)
- [DreamTime: An Improved Optimization Strategy for Text-to-3D Content Creation](https://arxiv.org/abs/2306.12422), Huang et al., Arxiv 2023 | [bibtex](./citations/huang2023dreamtime.txt)
- [DreamEditor: Text-Driven 3D Scene Editing with Neural Fields](https://arxiv.org/abs//2306.13455), Zhuang et al., Arxiv 2023 | [bibtex](./citations/zhuang2023dreameditor.txt)
- [Self-supervised novel 2D view synthesis of large-scale scenes with efficient multi-scale voxel carving](https://arxiv.org/abs/2306.14709), Budisteanu et al., Arxiv 2023 | [github](https://github.com/onorabil/MSVC) | [bibtex](./citations/budisteanu2023self.txt)
- [FlipNeRF: Flipped Reflection Rays for Few-shot Novel View Synthesis](https://shawn615.github.io/flipnerf/), Seo et al., ICCV 2023 | [github](https://github.com/shawn615/FlipNeRF) | [bibtex](./citations/seo2023flipnerf.txt)
- [NeuBTF: Neural Fields for BTF Encoding and Transfer](https://carlosrodriguezpardo.es/projects/NeuBTF/), Rodríguez-Pardo et al., Computers & Graphics 2023 | [bibtex](./citations/rodriguezpardo2023NeuBTF.txt)
- [Hybrid Neural Diffeomorphic Flow for Shape Representation and Generation via Triplane](https://arxiv.org/abs/2307.01957), Han et al., Arxiv 2023 | [bibtex](./citations/han2023hybrid.txt)
- [Neural Fields for Interactive Visualization of Statistical Dependencies in 3D Simulation Ensembles](https://arxiv.org/abs/2307.02203), Farokhmanesh et al., Arxiv 2023 | [bibtex](./citations/farokhmanesh2023neural.txt)
- [NeRFahedron: A Primitive for Animatable Neural Rendering with Interactive Speed](https://zackarysin.github.io/NeRFahedron/), Sin et al., SIGGRAPH i3D 2023 | [bibtex](./citations/sin2023nerfahedron.txt)
- [AutoDecoding Latent 3D Diffusion Models](https://snap-research.github.io/3DVADER/), Ntavelis et al., Arxiv 2023 | [github](https://github.com/snap-research/3DVADER) | [bibtex](./citations/ntavelis2023_3DVADER.txt)
- [NOFA: NeRF-based One-shot Facial Avatar Reconstruction](https://arxiv.org/abs/2307.03441), Yu et al., Arxiv 2023 | [bibtex](./citations/yu2023nofa.txt)
- [RGB-D Mapping and Tracking in a Plenoxel Radiance Field](https://arxiv.org/abs/2307.03404), Teigen et al., Arxiv 2023 | [bibtex](./citations/teigen2023rgb.txt)
- [HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion](https://ziyaerkoc.com/hyperdiffusion/), Erkoç et al., ICCV 2023 | [bibtex](./citations/2023hyperdiffusion.txt)
- [Surface Geometry Processing: An Efficient Normal-based Detail Representation](https://arxiv.org/abs/2307.07945), Xie et al., Arxiv 2023 | [bibtex](./citations/xie2023surface.txt)
- [Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction](https://anaghmalik.com/TransientNeRF/), Malik et al., Arxiv 2023 | [bibtex](./citations/malik2023transient.txt)
- [Magic NeRF Lens: Interactive Fusion of Neural Radiance Fields for Virtual Facility Inspection](https://arxiv.org/abs/2307.09860), Li et al., Arxiv 2023 | [bibtex](./citations/li2023magic.txt)
- [PAPR: Proximity Attention Point Rendering](https://zvict.github.io/papr/), Zhang et al., Arxiv 2023 | [bibtex](./citations/zhang2023papr.txt)
- [Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields](https://wbhu.github.io/projects/Tri-MipRF/), Hu et al., ICCV 2023 | [github](https://github.com/wbhu/Tri-MipRF) | [bibtex](./citations/hu2023Tri-MipRF.txt)
- [CopyRNeRF: Protecting the CopyRight of Neural Radiance Fields](https://arxiv.org/abs/2307.11526), Luo et al., ICCV 2023 | [bibtex](./citations/luo2023copyrnerf.txt)
- [FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields](https://arxiv.org/abs/2307.11418), Hwang et al., ICCV 2023 | [bibtex](./citations/hwang2023faceclipnerf.txt)
- [Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields](https://dyn-e.github.io/), Zhang et al., Arxiv 2023 | [bibtex](./citations/zhang2023dyn.txt)
- [Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation](https://arxiv.org/abs/2307.13908), Yu et al., Arxiv 2023 | [bibtex](./citations/yu2023points.txt)
- [MARS: An Instance-aware, Modular and Realistic Simulator for Autonomous Driving](https://open-air-sun.github.io/mars/), Wu et al., CICAI 2023 | [github](https://github.com/OPEN-AIR-SUN/mars) | [bibtex](./citations/wu2023mars.txt)
- [Weakly Supervised Multi-Modal 3D Human Body Pose Estimation for Autonomous Driving](https://arxiv.org/abs/2307.14889), Bauer et al., Arxiv 2023 | [bibtex](./citations/bauer2023weakly.txt)
- [Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Field](https://windingwind.github.io/seal-3d/), Wang et al., ICCV 2023 | [github](https://github.com/windingwind/seal-3d/) | [bibtex](./citations/wang2023seal3d.txt)
- [Dynamic PlenOctree for Adaptive Sampling Refinement in Explicit NeRF](https://vlislab22.github.io/DOT/), Bai et al., ICCV 2023 | [github](https://github.com/164140757/DOT) | [bibtex](./citations/Bai2023DOT.txt)
- [Robust Single-view Cone-beam X-ray Pose Estimation with Neural Tuned Tomography (NeTT) and Masked Neural Radiance Fields (mNeRF)](https://arxiv.org/abs/2308.00214), Zhou et al., Arxiv 2023 | [bibtex](./citations/zhou2023robust.txt)
- [Onboard View Planning of a Flying Camera for High Fidelity 3D Reconstruction of a Moving Actor](https://arxiv.org/abs/2308.00134), Jiang et al., Arxiv 2023 | [bibtex](./citations/jiang2023onboard.txt)
- [HD-Fusion: Detailed Text-to-3D Generation Leveraging Multiple Noise Estimation](https://arxiv.org/abs/2307.16183), Wu et al., Arxiv 2023 | [bibtex](./citations/wu2023hd.txt)
- [Learning Unified Decompositional and Compositional NeRF for Editable Novel View Synthesis](https://w-ted.github.io/publications/udc-nerf/), Wang et al., ICCV 2023 | [bibtex](./citations/wang2023udcnerf.txt)
- [Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing](https://zju3dv.github.io/Mirror-NeRF/), Zeng et al., ACM Multimedia 2023 | [github](https://github.com/zju3dv/Mirror-NeRF/) | [bibtex](./citations/zeng2023mirror-nerf.txt)
- [3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Neural Fields](https://3d-motion-magnification.github.io/), Feng et al., ICCV 2023 | [bibtex](./citations/feng2023motionmag.txt)
- [MVPSNet: Fast Generalizable Multi-view Photometric Stereo](https://floralzhao.github.io/mvpsnet.github.io/), Zhao et al., ICCV 2023 | [bibtex](./citations/zhao2023mvpsnet.txt)
- [Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual Try-On](https://tomguluson92.github.io/projects/cloth2tex/), Gao et al., Arxiv 2023 | [bibtex](./citations/gao2023cloth2tex.txt)
- [AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose](https://avatarverse3d.github.io/), Zhang et al., Arxiv 2023 | [bibtex](./citations/zhang2023avatarverse.txt)
- [Digging into Depth Priors for Outdoor Neural Radiance Fields](https://cwchenwang.github.io/outdoor-nerf-depth/), Wang et al., ACM Multimedia 2023 | [github](https://github.com/cwchenwang/outdoor-nerf-depth) | [bibtex](./citations/wang2023digging.txt)
- [NeRFLiX: High-Quality Neural View Synthesis by Learning a Degradation-Driven Inter-viewpoint MiXer](https://redrock303.github.io/nerflix/), Zhou et al., CVPR 2023 | [github](https://github.com/redrock303/NeRFLiX_CVPR2023) | [bibtex](./citations/zhou2023nerflix.txt)
- [From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm](https://redrock303.github.io/nerflix_plus/), Zhou et al., Arxiv 2023 | [bibtex](./citations/zhou2023nerflixplus.txt)
- [VERF: Runtime Monitoring of Pose Estimation with Neural Radiance Fields](https://arxiv.org/abs/2308.05939), Maggio et al., Arxiv 2023 | [bibtex](./citations/maggio2023verf.txt)
- [MagicPony: Learning Articulated 3D Animals in the Wild](https://3dmagicpony.github.io/), Wu et al., CVPR 2023 | [github](https://github.com/elliottwu/MagicPony) | [bibtex](./citations/wu2023magicpony.txt)
- [Color-NeuS: Reconstructing Neural Implicit Surfaces with Color](https://colmar-zlicheng.github.io/color_neus/), Zhong et al., Arxiv 2023 | [github](https://github.com/Colmar-zlicheng/Color-NeuS) | [bibtex](./citations/zhong2023colorneus.txt)
- [Neural radiance fields in the industrial and robotics domain: applications, research opportunities and use cases](https://arxiv.org/abs/2308.07118), Šlapak et al., Arxiv 2023 | [github](https://github.com/Maftej/iisnerf) | [bibtex](./citations/vslapak2023neural.txt)
- [SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes](https://vcai.mpi-inf.mpg.de/projects/scenerflow/), Tretschk et al., Arxiv 2023 | [bibtex](./citations/tretschk2023scenerflow.txt)
- [DDF-HO: Hand-Held Object Reconstruction via Conditional Directed Distance Field](https://arxiv.org/abs/2308.08231), Zhang et al., Arxiv 2023 | [bibtex](./citations/zhang2023ddfh.txt)
- [Relightable and Animatable Neural Avatar from Sparse-View Video](https://zju3dv.github.io/relightable_avatar/), Xu et al., Arxiv 2023 | [github](https://zju3dv.github.com/relightable_avatar) | [bibtex](./citations/zhen2023relightable.txt)
- [TeCH: Text-guided Reconstruction of Lifelike Clothed Humans](https://huangyangyi.github.io/TeCH/), Huang et al., Arxiv 2023 | [github](https://github.com/huangyangyi/TeCH) | [bibtex](./citations/huang2023tech.txt)
- [Ref-DVGO: Reflection-Aware Direct Voxel Grid Optimization for an Improved Quality-Efficiency Trade-Off in Reflective Scene Reconstruction](https://arxiv.org/abs/2308.08530), Kouros et al., Arxiv 2023 | [bibtex](./citations/kouros2023ref.txt)
- [MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR](https://sheldontsui.github.io/projects/Matlaber), Xu et al., Arxiv 2023 | [github](https://github.com/SheldonTsui/Matlaber) | [bibtex](./citations/xu2023matlaber.txt)
- [MonoNeRD: NeRF-like Representations for Monocular 3D Object Detection](https://arxiv.org/abs/2308.09421), Xu et al., ICCV 2023 | [github](https://github.com/cskkxjk/MonoNeRD) | [bibtex](./citations/xu2023mononerd.txt)
- [DReg-NeRF: Deep Registration for Neural Radiance Fields](https://aibluefisher.github.io/DReg-NeRF/), Chen et al., ICCV 2023 | [github](https://github.com/AIBluefisher/DReg-NeRF) | [bibtex](./citations/chen2023dreg.txt)
- [Strata-NeRF: Neural Radiance Fields for Stratified Scenes](https://ankitatiisc.github.io/Strata-NeRF/), Dhiman et al., ICCV 2023 | [bibtex](./citations/dhiman2023strata.txt)
- [Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior](https://make-it-3d.github.io/), Tang et al., Arxiv 2023 | [github](https://github.com/junshutang/Make-It-3D) | [bibtex](./citations/tang2023makeit3d.txt)
- [Renderable Neural Radiance Map for Visual Navigation](https://rllab-snu.github.io/projects/RNR-Map/), Kwon et al., CVPR 2023 | [github](https://github.com/rllab-snu/RNR-Map) | [bibtex](./citations/Kwon_2023_CVPR.txt)
- [LiveHand: Real-time and Photorealistic Neural Hand Rendering](https://vcai.mpi-inf.mpg.de/projects/LiveHand/), Mundra et al., ICCV 2023 | [github](https://github.com/amundra15/livehand) | [bibtex](./citations/mundra2023livehand.txt)
- [TADA! Text to Animatable Digital Avatars](https://tada.is.tue.mpg.de/), Liao et al., Arxiv 2023 | [bibtex](./citations/liao2023tada.txt)
- [Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields](https://arxiv.org/abs/2308.11974), Song et al., ICCV 2023 | [bibtex](./citations/song2023blending.txt)
- [Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts](https://arxiv.org/abs/2308.11793), Cong et al., ICCV 2023 | [bibtex](./citations/cong2023enhancing.txt)
- [NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes](https://zubair-irshad.github.io/projects/neo360.html), Irshad et al., ICCV 2023 | [github](https://github.com/zubair-irshad/NeO-360) | [bibtex](./citations/irshad2023neo360.txt)
- [NeuralClothSim: Neural Deformation Fields Meet the Kirchhoff-Love Thin Shell Theory](https://4dqv.mpi-inf.mpg.de/NeuralClothSim/), Kairanda et al., Arxiv 2023 | [bibtex](./citations/kair2023neuralclothsim.txt)
- [NOVA: NOvel View Augmentation for Neural Composition of Dynamic Objects](https://arxiv.org/abs/2308.12560), Agrawal et al., Arxiv 2023 | [github](https://github.com/dakshitagrawal/NoVA) | [bibtex](./citations/agrawal2023nova.txt)
- [Relighting Neural Radiance Fields with Shadow and Highlight Hints](https://nrhints.github.io/), Zeng et al., SIGGRAPH Asia 2023 | [github](https://github.com/iamNCJ/NRHints) | [bibtex](./citations/zeng2023nrhints.txt)
- [Flexible Techniques for Differentiable Rendering with 3D Gaussians](https://leonidk.com/fmb-plus/), Keselman et al., Arxiv 2023 | [github](https://github.com/leonidk/fmb-plus) | [bibtex](./citations/keselman2023fuzzyplus.txt)
- [CLNeRF: Continual Learning Meets NeRF](https://arxiv.org/abs/2308.14816), Cai et al., ICCV 2023 | [github](https://github.com/IntelLabs/CLNeRF) | [bibtex](./citations/cai2023clnerf.txt)
- [Canonical Factors for Hybrid Neural Fields](https://brentyi.github.io/tilted/), Yi et al., ICCV 2023 | [github](https://github.com/brentyi/tilted) | [bibtex](./citations/titled2023.txt)
- [CityDreamer: Compositional Generative Model of Unbounded 3D Cities](https://infinitescript.com/project/city-dreamer/), Xie et al., Arxiv 2023 | [github](https://github.com/hzxie/city-dreamer) | [bibtex](./citations/xie2023citydreamer.txt)
- [Improving NeRF Quality by Progressive Camera Placement for Unrestricted Navigation in Complex Environments](https://arxiv.org/abs/2309.00014), Kopanas et al., Arxiv 2023 | [bibtex](./citations/kopanas2023improving.txt)
- [S2RF: Semantically Stylized Radiance Fields](https://arxiv.org/abs/2309.01252), Lahiri et al., Arxiv 2023 | [bibtex](./citations/lahiri2023s2rf.txt)
- [Neural Vector Fields: Generalizing Distance Vector Fields by Codebooks and Zero-Curl Regularization](https://arxiv.org/abs/2309.01512), Yang et al., Arxiv 2023 | [bibtex](./citations/yang2023neural.txt)
- [ImmersiveNeRF: Hybrid Radiance Fields for Unbounded Immersive Light Field Reconstruction](https://arxiv.org/abs/2309.01374), Yu et al., Arxiv 2023 | [bibtex](./citations/yu2023immersivenerf.txt)
- [Instant Continual Learning of Neural Radiance Fields](https://arxiv.org/abs/2309.01811), Po et al., Arxiv 2023 | [bibtex](./citations/po2023instant.txt)
- [ResFields: Residual Neural Fields for Spatiotemporal Signals](https://markomih.github.io/ResFields/), Mihajlovic et al., Arxiv 2023 | [github](https://markomih.github.io/ResFields/) | [bibtex](./citations/mihajlovic2023resfields.txt)
- [Bayes' Rays: Uncertainty Quantification for Neural Radiance Fields](https://bayesrays.github.io/), Goli et al., Arxiv 2023 | [bibtex](./citations/goli2023.txt)
- [Single View Refractive Index Tomography with Neural Fields](https://arxiv.org/abs/2309.04437), Zhao et al., Arxiv 2023 | [bibtex](./citations/zhao2023single.txt)
- [DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields](https://www.mmlab-ntu.com/project/deformtoon3d/), Zhang et al., ICCV 2023 | [github](https://github.com/junzhezhang/DeformToon3D) | [bibtex](./citations/zhang2023deformtoon3d.txt)
- [EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision](https://emernerf.github.io/), Yang et al., Arxiv 2023 | [github](https://github.com/NVlabs/EmerNeRF) | [bibtex](./citations/yang2023emernerf.txt)
- [Dynamic Mesh-Aware Radiance Fields](https://mesh-aware-rf.github.io/), Qiao et al., ICCV 2023 | [github](https://github.com/YilingQiao/DMRF/tree/cleaning) | [bibtex](./citations/qiao2023dmrf.txt)
- [DynaMoN: Motion-Aware Fast And Robust Camera Localization for Dynamic NeRF](https://arxiv.org/abs/2309.08927), Karaoglu et al., Arxiv 2023 | [bibtex](./citations/karaoglu2023dynamon.txt)
- [Locally Stylized Neural Radiance Fields](https://arxiv.org/abs/2309.10684), Pang et al., ICCV 2023 | [bibtex](./citations/pang2023locally.txt)
- [PanopticNeRF-360: Panoramic 3D-to-2D Label Transfer in Urban Scenes](https://fuxiao0719.github.io/projects/panopticnerf360/), Fu et al., Arxiv 2023 | [github](https://github.com/fuxiao0719/PanopticNeRF) | [bibtex](./citations/fu2023panoptic.txt)
- [Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion](https://wengflow.github.io/robust-e-nerf/), Low et al., ICCV 2023 | [github](https://github.com/wengflow/robust-e-nerf) | [bibtex](./citations/low2023_robust-e-nerf.txt)
- [DiffFacto: Controllable Part-Based 3D Point Cloud Generation with Cross Diffusion](https://difffacto.github.io/), Nakayama et al., ICCV 2023 | [bibtex](./citations/nakayama2023difffacto.txt)
- [NeRF-Det: Learning Geometry-Aware Volumetric Representation for Multi-View 3D Object Detection](https://chenfengxu714.github.io/nerfdet/), Xu et al., ICCV 2023 | [github](https://github.com/facebookresearch/NeRF-Det) | [bibtex](./citations/xu2023nerfdet.txt)
- [Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation](https://arxiv.org/abs/2310.05391), Liu et al., Arxiv 2023 | [bibtex](./citations/liu2023neural.txt)
- [PoRF: Pose Residual Field for Accurate Neural Surface Reconstruction](https://arxiv.org/abs/2310.07449), Bian et al., Arxiv 2023 | [bibtex](./citations/bian2023porf.txt)
- [S4C: Self-Supervised Semantic Scene Completion with Neural Fields](https://arxiv.org/abs/2310.07522), Hayler et al., Arxiv 2023 | [bibtex](./citations/hayler2023s4c.txt)
- [Im4D: High-Fidelity and Real-Time Novel View Synthesis for Dynamic Scenes](https://zju3dv.github.io/im4d/), Lin et al., SIGGRAPH Asia 2023 | [github](https://github.com/zju3dv/im4d) | [bibtex](./citations/lin2023im4d.txt)
- [LightSpeed: Light and Fast Neural Light Fields on Mobile Devices](https://lightspeed-r2l.github.io/), Gupta et al., NeurIPS 2023 | [github](https://github.com/lightspeed-r2l/lightspeed) | [bibtex](./citations/anonymous2023lightspeed.txt)
- [PERF: Panoramic Neural Radiance Field from a Single Panorama](https://perf-project.github.io/), Wang et al., Arxiv 2023 | [github](https://github.com/perf-project/PeRF) | [bibtex](./citations/perf2023.txt)
- [LiCROM: Linear-Subspace Continuous Reduced Order Modeling with Neural Fields](https://arxiv.org/abs/2310.15907), Chang et al., Arxiv 2023 | [bibtex](./citations/chang2023licrom.txt)
- [SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes](https://arxiv.org/abs/2310.13030), Yang et al., Arxiv 2023 | [bibtex](./citations/yang2023sire.txt)
- [Reconstructive Latent-Space Neural Radiance Fields for Efficient 3D Scene Representations](https://arxiv.org/abs/2310.17880), Aumentado-Armstrong et al., Arxiv 2023 | [bibtex](./citations/aumentado2023reconstructive.txt)
- [Novel View Synthesis from a Single RGBD Image for Indoor Scenes](https://arxiv.org/abs/2311.01065), Hetang et al., Arxiv 2023 | [bibtex](./citations/hetang2023novel.txt)
- [DynPoint: Dynamic Neural Point For View Synthesis](https://arxiv.org/abs/2310.18999), Zhou et al., Arxiv 2023 | [bibtex](./citations/zhou2023dynpoint.txt)
- [InstructPix2NeRF: Instructed 3D Portrait Editing from a Single Image](https://arxiv.org/abs/2311.02826), Li et al., Arxiv 2023 | [github](https://github.com/mybabyyh/InstructPix2NeRF) | [bibtex](./citations/li2023instructpix2nerf.txt)
- [VR-NeRF: High-Fidelity Virtualized Walkable Spaces](https://vr-nerf.github.io/), Xu et al., SIGGRAPH Asia 2023 | [bibtex](./citations/VRNeRF.txt)
- [Real-Time Neural Rasterization for Large Scenes](https://waabi.ai/NeuRas/), Liu et al., ICCV 2023 | [bibtex](./citations/liu2023neuras.txt)
- [ConRad: Image Constrained Radiance Fields for 3D Generation from a Single Image](https://arxiv.org/abs/2311.05230), Purushwalkam et al., NeurIPS 2023 | [bibtex](./citations/purushwalkam2023conrad.txt)
- [Adaptive Shells for Efficient Neural Radiance Field Rendering](https://research.nvidia.com/labs/toronto-ai/adaptive-shells/), Wang et al., SIGGRAPH Asia 2023 | [bibtex](./citations/adaptiveshells2023.txt)
- [D3GA - Drivable 3D Gaussian Avatars](https://zielon.github.io/d3ga/), Zielonka et al., Arxiv 2023 | [bibtex](./citations/Zielonka2023Drivable3D.txt)
- [EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction on Mobile Devices](https://g-1nonly.github.io/EvaSurf-Website/), Gao et al., Arxiv 2023 | [bibtex](./citations/gao2023evasurf.txt)
- [Rethinking Directional Integration in Neural Radiance Fields](https://cs.stanford.edu/~congyue/linerf/), Deng et al., Arxiv 2023 | [bibtex](./citations/deng2023rethinking.txt)
- [Re-Nerfing: Enforcing Geometric Constraints on Neural Radiance Fields through Novel Views Synthesis](https://arxiv.org/abs/2312.02255), Tristram et al., Arxiv 2023 | [bibtex](./citations/tristram2023re.txt)
- [Self-Evolving Neural Radiance Fields](https://ku-cvlab.github.io/SE-NeRF/), Jung et al., Arxiv 2023 | [github](https://github.com/KU-CVLAB/SE-NeRF) | [bibtex](./citations/jung2023selfevolving.txt)
- [Dynamic LiDAR Re-simulation using Compositional Neural Fields](https://github.com/prs-eth/Dynamic-LiDAR-Resimulation/tree/master), Wu et al., Arxiv 2023 | [github](https://github.com/prs-eth/Dynamic-LiDAR-Resimulation/tree/master) | [bibtex](./citations/Wu2023dynfl.txt)
- [Neural Lighting Simulation for Urban Scenes](https://waabi.ai/lightsim/), Pun et al., NeurIPS 2023 | [bibtex](./citations/pun2023neural.txt)
- [CorresNeRF: Image Correspondence Priors for Neural Radiance Fields](https://yxlao.github.io/corres-nerf/), Lao et al., NeurIPS 2023 | [github](https://github.com/yxlao/corres-nerf) | [bibtex](./citations/lao2023corresnerf.txt)
- [SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction](https://river-zhang.github.io/SIFU-projectpage/), Zhang et al., Arxiv 2023 | [github](https://github.com/River-Zhang/SIFU) | [bibtex](./citations/zhang2023sifu.txt)
- [ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields](https://kaist-viclab.github.io/pronerf-site/), Bello et al., Arxiv 2023 | [github](https://github.com/KAIST-VICLab) | [bibtex](./citations/bello2023pronerf.txt)
- [OccNeRF: Self-Supervised Multi-Camera Occupancy Prediction with Neural Radiance Fields](https://linshan-bin.github.io/OccNeRF/), Zhang et al., Arxiv 2023 | [github](https://github.com/LinShan-Bin/OccNeRF) | [bibtex](./citations/chubin2023occnerf.txt)
- [MixRT: Mixed Neural Representations For Real-Time NeRF Rendering](https://licj15.github.io/MixRT/), Li et al., 3DV 2024 | [bibtex](./citations/licj2024mixrt.txt)
- [PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields](https://arxiv.org/abs/2312.10649), Zhao et al., AAAI 2024 | [bibtex](./citations/zhao2024pnerfloc.txt)
- [Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption](https://cuiziteng.github.io/Aleth_NeRF_web/), Cui et al., AAAI 2024 | [github](https://github.com/cuiziteng/Aleth-NeRF) | [bibtex](./citations/cui2024alethnerf.txt)
- [Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images](https://arxiv.org/abs/2312.15942), Lu et al., Arxiv 2023 | [github](https://github.com/Lu-Zhan/Pano-NeRF) | [bibtex](./citations/lu2023pano.txt)
- [Diffusion Priors for Dynamic View Synthesis from Monocular Videos](https://arxiv.org/abs/2401.05583), Wang et al., Arxiv 2024 | [bibtex](./citations/wang2024diffusion.txt)
- [InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes](https://mohamad-shahbazi.github.io/inserf/), Shahbazi et al., Arxiv 2024 | [bibtex](./citations/shahbazi2024inserf.txt)
- [ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Process](https://provnerf.github.io/), Nakayama et al., Arxiv 2024 | [bibtex](./citations/nakayama2023provnerf.txt)
- [Single-View 3D Human Digitalization with Large Reconstruction Models](https://arxiv.org/abs/2401.12175), Weng et al., Arxiv 2024 | [bibtex](./citations/weng2024single.txt)
- [Scaling Face Interaction Graph Networks to Real World Scenes](https://arxiv.org/abs/2401.11985), López-Guevara et al., Arxiv 2024 | [bibtex](./citations/lopez2024scaling.txt)
- [ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields](https://arxiv.org/abs/2401.17895), Bartrum et al., Arxiv 2024 | [bibtex](./citations/bartrum2024replaceanything3dtextguided.txt)
- [Neural Rendering based Urban Scene Reconstruction for Autonomous Driving](https://arxiv.org/abs/2402.06826), Shen et al., Arxiv 2024 | [bibtex](./citations/shen2024neural.txt)
- [OV-NeRF: Open-vocabulary Neural Radiance Fields with Vision and Language Foundation Models for 3D Semantic Understanding](https://arxiv.org/abs/2402.04648), Liao et al., Arxiv 2024 | [bibtex](./citations/liao2024ov.txt)
- [NeRF Analogies: Example-Based Visual Attribute Transfer for NeRFs](https://mfischer-ucl.github.io/nerf_analogies/), Fischer et al., CVPR 2024 | [bibtex](./citations/fischer2024nerf.txt)
- [Is Vanilla MLP in Neural Radiance Field Enough for Few-shot View Synthesis?](https://arxiv.org/abs/2403.06092), Zhu et al., CVPR 2024 | [bibtex](./citations/zhu2024vanilla.txt)
- [Vosh: Voxel-Mesh Hybrid Representation for Real-Time View Synthesis](https://arxiv.org/abs/2403.06505), Zhang et al., Arxiv 2024 | [bibtex](./citations/zhang2024vosh.txt)
- [The NeRFect Match: Exploring NeRF Features for Visual Localization](https://arxiv.org/abs/2403.09577), Zhou et al., Arxiv 2024 | [bibtex](./citations/zhou2024nerfect.txt)
- [CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs](https://zhongyingji.github.io/CVT-xRF/), Zhong et al., CVPR 2024 | [bibtex](./citations/zhong2024cvtxrf.txt)
- [Entity-NeRF: Detecting and Removing Moving Entities in Urban Scenes](https://otonari726.github.io/entitynerf/), Otonari et al., CVPR 2024 | [bibtex](./citations/otonari2024entity.txt)
- [Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation](https://terencecyj.github.io/projects/Mesh2NeRF/), Chen et al., Arxiv 2024 | [bibtex](./citations/chen2024mesh2nerf.txt)
- [DATENeRF: Depth-Aware Text-based Editing of NeRFs](https://datenerf.github.io/DATENeRF/), Rojas et al., Arxiv 2024 | [github](https://github.com/sararoma95/DATENeRF) | [bibtex](./citations/rojas2024datenerf.txt)
- [NeRF-XL: Scaling NeRFs with Multiple GPUs](https://research.nvidia.com/labs/toronto-ai/nerfxl/), Li et al., Arxiv 2024 | [bibtex](./citations/li2024nerfxl.txt)
- [HaLo-NeRF: Learning Geometry-Guided Semantics for Exploring Unconstrained Photo Collections](https://tau-vailab.github.io/HaLo-NeRF/), Dudai et al., Eurographics 2024 | [github](https://github.com/TAU-VAILab/HaLo-NeRF) | [bibtex](./citations/dudai2024halonerf.txt)
- [Lightplane: Highly-Scalable Components for Neural 3D Fields](https://lightplane.github.io/), Cao et al., Arxiv 2024 | [github](https://github.com/facebookresearch/lightplane) | [bibtex](./citations/cao2024lightplane.txt)
- [NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections](https://nerf-casting.github.io/), Verbin et al., Arxiv 2024 | [bibtex](./citations/verbin2024nerfcasting.txt)
- [OpenNeRF: OpenSet 3D Neural Scene Segmentation with Pixel-wise Features and Rendered Novel Views](https://opennerf.github.io/), Engelmann et al., ICLR 2024 | [github](https://github.com/opennerf/opennerf) | [bibtex](./citations/engelmann2024opennerf.txt)
- [MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling](https://brabbitdousha.github.io/MIRReS/), Dai et al., Arxiv 2024 | [github](https://github.com/brabbitdousha/MIRReS) | [bibtex](./citations/dai2024mirres.txt)
- [InterNeRF: Scaling Radiance Fields via Parameter Interpolation](https://arxiv.org/abs/2406.11737), Wang et al., Arxiv 2024 | [bibtex](./citations/wang2024internerf.txt)
- [Style-NeRF2NeRF: 3D Style Transfer from Style-Aligned Multi-View Images](https://haruolabs.github.io/style-n2n/), Fujiwara et al., Arxiv 2024 | [bibtex](./citations/fujiwara2024style.txt)
- [Relighting Scenes with Object Insertions in Neural Radiance Fields](https://arxiv.org/abs/2406.14806), Zhu et al., Arxiv 2024 | [bibtex](./citations/zhu2024relighting.txt)
- [Crowd-Sourced NeRF: Collecting Data from Production Vehicles for 3D Street View Reconstruction](https://arxiv.org/abs/2406.16289), Qin et al., Arxiv 2024 | [bibtex](./citations/qin2024crowd.txt)
- [RS-NeRF: Neural Radiance Fields from Rolling Shutter Images](https://arxiv.org/abs/2407.10267), Niu et al., ECCV 2024 | [github](https://github.com/MyNiuuu/RS-NeRF) | [bibtex](./citations/niu2024rs.txt)
- [Boost Your NeRF: A Model-Agnostic Mixture of Experts Framework for High Quality and Efficient Rendering](https://arxiv.org/abs/2407.10389), Di Sario et al., Arxiv 2024 | [bibtex](./citations/di2024boost.txt)
- [1-Lipschitz Neural Distance Fields](https://arxiv.org/abs/2407.09505), Coiffier et al., Arxiv 2024 | [bibtex](./citations/coiffier20241.txt)
- [NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields](https://nerf-mae.github.io/), Irshad et al., ECCV 2024 | [github](https://github.com/zubair-irshad/NeRF-MAE) | [bibtex](./citations/nerf-mae.txt)
- [KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter](https://arxiv.org/abs/2407.13185), Zhan et al., ECCV 2024 | [bibtex](./citations/zhan2024kfd.txt)
- [SG-NeRF: Neural Surface Reconstruction with Scene Graph Optimization](https://arxiv.org/abs/2407.12667), Chen et al., ECCV 2024 | [github](https://github.com/Iris-cyy/SG-NeRF) | [bibtex](./citations/chen2024sg.txt)
- [BoostMVSNeRFs: Boosting MVS-based NeRFs to Generalizable View Synthesis in Large-scale Scenes](https://su-terry.github.io/BoostMVSNeRFs/), Su et al., SIGGRAPH 2024 | [github](https://github.com/Su-Terry/BoostMVSNeRFs) | [bibtex](./citations/su2024boostmvsnerfs.txt)