¹Zhejiang University  ²Tongji University  ³Deep Glint
*Equal Contribution  †Corresponding Author
This paper tackles the challenge of robust reconstruction, i.e., the task of reconstructing a 3D scene from a set of inconsistent multi-view images. Some recent works have attempted to simultaneously remove image inconsistencies and perform reconstruction by integrating image degradation modeling into neural 3D scene representations. However, these methods rely heavily on dense observations for robustly optimizing model parameters. To address this issue, we propose to decouple robust reconstruction into two subtasks: restoration and reconstruction, which naturally simplifies the optimization process. To this end, we introduce UniVerse, a unified framework for robust reconstruction based on a video diffusion model. Specifically, UniVerse first converts inconsistent images into initial videos, then uses a specially designed video diffusion model to restore them into consistent images, and finally reconstructs the 3D scenes from these restored images. Compared with case-by-case per-view degradation modeling, the diffusion model learns a general scene prior from large-scale data, making it applicable to diverse image inconsistencies. Extensive experiments on both synthetic and real-world datasets demonstrate the strong generalization capability and superior performance of our method in robust reconstruction. Moreover, UniVerse can control the style of the reconstructed 3D scene.
Given a set of inconsistent images, we first convert them into an initial video. We then use SAM to identify transient occlusions and generate inpainting masks. These masks are used to set the occluded pixels in the initial video to zero. Next, we encode the video into latents using a VAE Encoder. After setting one image as the style image and assigning it a style mask, we concatenate the style masks, inpainting masks, latents, and randomly sampled Gaussian noise along the channel dimension and feed them into the U-Net. For each masked input image, we obtain semantic embeddings using the CLIP image encoder and aggregate them via the Multi-input Query Transformer to form a global semantic embedding. This embedding guides the U-Net in the video generation process. Finally, the U-Net output is decoded by the VAE Decoder to produce the restored video, from which we extract the consistent images and reconstruct a high-quality 3D scene. If there are more images than the VDM can restore at once, we restore them iteratively in batches.
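The two mechanical pieces of this pipeline are the channel-wise assembly of the U-Net conditioning input and the batched restoration loop. The PyTorch sketch below illustrates both under simplifying assumptions: the mask tensors are assumed to be already resized to latent resolution, and the function names and shapes are ours, not the released implementation.

```python
# Minimal sketch of the conditioning assembly and batched restoration described
# above. Names and tensor shapes are illustrative assumptions, not the released
# UniVerse code.
import torch

def assemble_unet_input(latents, inpaint_masks, style_masks):
    """Concatenate sampled noise, video latents, inpainting masks, and style
    masks along the channel dimension, as fed to the U-Net.

    latents:       (T, C, H, W) VAE latents of the masked initial video
    inpaint_masks: (T, 1, H, W) 1 where transient occluders were zeroed out
                   (assumed already resized to latent resolution)
    style_masks:   (T, 1, H, W) 1 only on the frame chosen as the style image
    """
    noise = torch.randn_like(latents)  # randomly sampled Gaussian noise
    return torch.cat([noise, latents, inpaint_masks, style_masks], dim=1)

def restore_in_batches(images, batch_size, restore_fn):
    """Iteratively restore more images than the VDM can handle at once."""
    restored = []
    for start in range(0, len(images), batch_size):
        restored.extend(restore_fn(images[start:start + batch_size]))
    return restored
```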
The style of the restored images, and consequently the reconstructed 3D scene, is determined by the style image. By changing the style image, we can alter the style of the entire reconstructed 3D scene.
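In practice, the only thing that changes is which frame the style mask selects; the helper below is a hypothetical sketch of that switch, not part of the released code.

```python
# Hypothetical helper: the style mask marks which frame acts as the style
# image; changing `style_index` re-styles the whole restored sequence.
import torch

def make_style_masks(num_frames, style_index, height, width):
    masks = torch.zeros(num_frames, 1, height, width)
    masks[style_index] = 1.0  # select the style frame
    return masks

# e.g. day_masks  = make_style_masks(25, style_index=0,  height=64, width=64)
#      dusk_masks = make_style_masks(25, style_index=12, height=64, width=64)
```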
UniVerse focuses on making images consistent rather than generating new views. Thus, even after restoring very sparse input images to a consistent state, reconstruction may still fail due to insufficient views. This issue can be resolved by pairing UniVerse with a generative novel view synthesis model such as ViewCrafter. As shown in the figure above, given two inconsistent input images, ViewCrafter synthesizes distorted novel views with spurious occlusions. After the images are restored via UniVerse, the novel views synthesized by ViewCrafter become consistent. In other words, as a restoration model, UniVerse can serve as a pre-processor for other models, enabling robust reconstruction.
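Conceptually, the chaining looks like the sketch below; `universe_restore` and `viewcrafter_nvs` are placeholder callables standing in for the two models, not their actual APIs.

```python
# Hypothetical pipeline: restore sparse inconsistent views with UniVerse,
# then densify view coverage with a novel view synthesis model (e.g. ViewCrafter)
# before radiance field reconstruction.
def robust_sparse_view_pipeline(inconsistent_images, target_poses,
                                universe_restore, viewcrafter_nvs):
    # 1. Make the sparse inputs photometrically consistent.
    consistent_images = universe_restore(inconsistent_images)
    # 2. Synthesize additional views from the now-consistent images.
    novel_views = viewcrafter_nvs(consistent_images, target_poses)
    # 3. Reconstruct the scene from consistent + synthesized views.
    return consistent_images + novel_views
```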
@misc{cao2025universeunleashingsceneprior,
      title={UniVerse: Unleashing the Scene Prior of Video Diffusion Models for Robust Radiance Field Reconstruction},
      author={Jin Cao and Hongrui Wu and Ziyong Feng and Hujun Bao and Xiaowei Zhou and Sida Peng},
      year={2025},
      eprint={2510.01669},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.01669},
}