This repo collects papers that use diffusion models for 3D generation.
- DreamFusion: Text-to-3D using 2D Diffusion, Poole et al., Arxiv 2022
- Magic3D: High-Resolution Text-to-3D Content Creation, Lin et al., Arxiv 2022
- Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation, Wang et al., Arxiv 2022
- Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation, Chen et al., Arxiv 2023
- Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation, Seo et al., Arxiv 2023
- DITTO-NeRF: Diffusion-based Iterative Text To Omni-directional 3D Model, Seo et al., Arxiv 2023
- TextMesh: Generation of Realistic 3D Meshes From Text Prompts, Tsalicoglou et al., Arxiv 2023
- Text-driven Visual Synthesis with Latent Diffusion Prior, Liao et al., Arxiv 2023
- Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond, Armandpour et al., Arxiv 2023
- HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance, Zhu and Zhuang, Arxiv 2023
- ATT3D: Amortized Text-to-3D Object Synthesis, Lorraine et al., Arxiv 2023
- Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models, Höllein et al., Arxiv 2023
- SceneScape: Text-Driven Consistent Scene Generation, Fridman et al., Arxiv 2023
- Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields, Zhang et al., Arxiv 2023
- PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation, Li and Bansal, Arxiv 2023
- MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, Tang et al., Arxiv 2023
- Compositional 3D Scene Generation using Locally Conditioned Diffusion, Po and Wetzstein, Arxiv 2023
- Set-the-Scene: Global-Local Training for Generating Controllable NeRF Scenes, Cohen-Bar et al., Arxiv 2023
- CompoNeRF: Text-guided Multi-object Compositional NeRF with Editable 3D Scene Layout, Lin et al., Arxiv 2023
- DreamTime: An Improved Optimization Strategy for Text-to-3D Content Creation, Huang et al., Arxiv 2023
- EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Prior, Zhao et al., Arxiv 2023
- MVDream: Multi-view Diffusion for 3D Generation, Shi et al., Arxiv 2023
- SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D, Li et al., Arxiv 2023
- Ctrl-Room: Controllable Text-to-3D Room Meshes Generation with Layout Constraints, Fang et al., Arxiv 2023
- DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior, Sun et al., Arxiv 2023
- Instant3D: Instant Text-to-3D Generation, Li et al., Arxiv 2023
- HyperFields: Towards Zero-Shot Generation of NeRFs from Text, Babu et al., Arxiv 2023
- DreamSpace: Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation, Yang et al., Arxiv 2023
- Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping, Pan et al., Arxiv 2023
- GaussianDreamer: Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors, Yi et al., Arxiv 2023
- NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views, Xu et al., CVPR 2023
- NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors, Deng et al., CVPR 2023
- Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures, Metzer et al., CVPR 2023
- RealFusion: 360° Reconstruction of Any Object from a Single Image, Melas-Kyriazi et al., Arxiv 2023
- Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior, Tang et al., Arxiv 2023
- Zero-1-to-3: Zero-shot One Image to 3D Object, Liu et al., Arxiv 2023
- DreamBooth3D: Subject-Driven Text-to-3D Generation, Raj et al., Arxiv 2023
- DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views, Yoo et al., Arxiv 2023
- One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization, Liu et al., Arxiv 2023
- Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors, Qian et al., Arxiv 2023
- 360° Reconstruction From a Single Image Using Space Carved Outpainting, Ryu et al., SIGGRAPH Asia 2023
- Viewpoint Textual Inversion: Unleashing Novel View Synthesis with Pretrained 2D Diffusion Models, Burgess et al., Arxiv 2023
- SyncDreamer: Generating Multiview-consistent Images from a Single-view Image, Liu et al., Arxiv 2023
- Wonder3D: Single Image to 3D using Cross-Domain Diffusion, Long et al., Arxiv 2023
- Consistent123: Improve Consistency for One Image to 3D Object Synthesis, Weng et al., Arxiv 2023
- Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model, Shi et al., Arxiv 2023
- TOSS: High-quality Text-guided Novel View Synthesis from a Single Image, Shi et al., Arxiv 2023
- DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance, Zhang et al., Arxiv 2023
- AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control, Jiang et al., ICCV 2023
- DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models, Cao et al., Arxiv 2023
- DreamWaltz: Make a Scene with Complex 3D Animatable Avatars, Huang et al., Arxiv 2023
- ZeroAvatar: Zero-shot 3D Avatar Generation from a Single Image, Weng et al., Arxiv 2023
- AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation, Zeng et al., Arxiv 2023
- Farm3D: Learning Articulated 3D Animals by Distilling 2D Diffusion, Jakab et al., Arxiv 2023
- Anything 3D: Towards Single-view Anything Reconstruction in the Wild, Shen et al., Arxiv 2023
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections, Yao et al., Arxiv 2023
- TADA! Text to Animatable Digital Avatars, Liao et al., Arxiv 2023
- Diffusion-Guided Reconstruction of Everyday Hand-Object Interaction Clips, Ye et al., ICCV 2023
- Text-Guided Generation and Editing of Compositional 3D Avatars, Zhang et al., Arxiv 2023
- SKED: Sketch-guided Text-based 3D Editing, Mikaeili et al., Arxiv 2023
- TEXTure: Text-Guided Texturing of 3D Shapes, Richardson et al., Arxiv 2023
- Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions, Haque et al., ICCV 2023
- Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion, Kamata et al., Arxiv 2023
- Edit-DiffNeRF: Editing 3D Neural Radiance Fields using 2D Diffusion Model, Yu et al., Arxiv 2023
- Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor, Shao et al., Arxiv 2023
- RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models, Zhou et al., Arxiv 2023
- DreamEditor: Text-Driven 3D Scene Editing with Neural Fields, Zhuang et al., SIGGRAPH Asia 2023
- Language-driven Object Fusion into Neural Radiance Fields with Pose-Conditioned Dataset Updates, Shum et al., Arxiv 2023
- ED-NeRF: Efficient Text-Guided Editing of 3D Scene using Latent Space NeRF, Park et al., Arxiv 2023
- 3D Paintbrush: Local Stylization of 3D Shapes with Cascaded Score Distillation, Decatur et al., Arxiv 2023
- Novel View Synthesis with Diffusion Models, Watson et al., ICLR 2023
- Generative Novel View Synthesis with 3D-Aware Diffusion Models, Chan et al., Arxiv 2023
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion, Gu et al., ICML 2023
- 3DDesigner: Towards Photorealistic 3D Object Generation and Editing with Text-guided Diffusion Models, Li et al., Arxiv 2022
- SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction, Zhou and Tulsiani, CVPR 2023
- HoloDiffusion: Training a 3D Diffusion Model using 2D Images, Karnewar et al., CVPR 2023
- Renderdiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation, Anciukevičius et al., CVPR 2023
- Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision, Tewari et al., Arxiv 2023
- 3D-aware Image Generation using 2D Diffusion Models, Xiang et al., Arxiv 2023
- Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data, Szymanowicz et al., Arxiv 2023
- HoloFusion: Towards Photo-realistic 3D Generative Modeling, Karnewar et al., Arxiv 2023
- ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Real Image, Sargent et al., Arxiv 2023
- Instant3D: Fast Text-to-3D with Sparse-view Generation and Large Reconstruction Model, Li et al., Arxiv 2023
- DMV3D: Denoising Multi-View Diffusion using 3D Large Reconstruction Model, Xu et al., Arxiv 2023
- LRM: Large Reconstruction Model for Single Image to 3D, Hong et al., Arxiv 2023
- Consistent View Synthesis with Pose-Guided Diffusion Models, Tseng et al., CVPR 2023
- Long-Term Photometric Consistent Novel View Synthesis with Diffusion Models, Yu et al., Arxiv 2023
- DiffDreamer: Towards Consistent Unsupervised Single-view Scene Extrapolation with Conditional Diffusion Models, Cai et al., Arxiv 2023
- Diffusion Probabilistic Models for 3D Point Cloud Generation, Luo et al., CVPR 2021
- 3D Shape Generation and Completion Through Point-Voxel Diffusion, Zhou et al., Arxiv 2021
- A Diffusion-ReFinement Model for Sketch-to-Point Modeling, Kong et al., ACCV 2022
- Controllable Mesh Generation Through Sparse Latent Point Diffusion Models, Lyu et al., CVPR 2023
- Point-E: A System for Generating 3D Point Clouds from Complex Prompts, Nichol et al., ICML 2023
- DiffFacto: Controllable Part-Based 3D Point Cloud Generation with Cross Diffusion, Nakayama et al., Arxiv 2023
- Sketch and Text Guided Diffusion Model for Colored Point Cloud Generation, Wu et al., ICCV 2023
- DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation, Mo et al., Arxiv 2023
- Learning a Diffusion Prior for NeRFs, Yang et al., ICLRW 2023
- Tetrahedral Diffusion Models for 3D Shape Generation, Kalischek et al., Arxiv 2022
- MeshDiffusion: Score-based Generative 3D Mesh Modeling, Liu et al., ICLR 2023
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Hui et al., SIGGRAPH Asia 2022
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation, Hu and Hui et al., Arxiv 2023
- DiffRF: Rendering-Guided 3D Radiance Field Diffusion, Müller et al., CVPR 2023
- Locally Attentional SDF Diffusion for Controllable 3D Shape Generation, Zheng et al., SIGGRAPH 2023
- HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion, Erkoç et al., ICCV 2023
- DiffComplete: Diffusion-based Generative 3D Shape Completion, Chu et al., Arxiv 2023
- DiffRoom: Diffusion-based High-Quality 3D Room Reconstruction and Generation, Ju et al., Arxiv 2023
- 3D Neural Field Generation using Triplane Diffusion, Shue et al., Arxiv 2022
- DiffusionSDF: Conditional Generative Modeling of Signed Distance Functions, Chou et al., Arxiv 2022
- Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion, Wang et al., CVPR 2023
- 3DGen: Triplane Latent Diffusion for Textured Mesh Generation, Gupta et al., Arxiv 2023
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction, Chen et al., Arxiv 2023
- Learning Controllable 3D Diffusion Models from Single-view Images, Gu et al., Arxiv 2023
- GAUDI: A Neural Architect for Immersive 3D Scene Generation, Bautista et al., NeurIPS 2022
- LION: Latent Point Diffusion Models for 3D Shape Generation, Zeng et al., NeurIPS 2022
- Diffusion-SDF: Text-to-Shape via Voxelized Diffusion, Li et al., CVPR 2023
- 3D-LDM: Neural Implicit 3D Shape Generation with Latent Diffusion Models, Nam et al., Arxiv 2022
- 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models, Zhang et al., SIGGRAPH 2023
- Shap-E: Generating Conditional 3D Implicit Functions, Jun et al., Arxiv 2023
- StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation, Zhang et al., Arxiv 2023
- AutoDecoding Latent 3D Diffusion Models, Ntavelis et al., Arxiv 2023