IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images

CVPR 2024, Poster (Highlight)

1FNii, CUHKSZ; 2SSE, CUHKSZ; 3Alibaba Group; 4HKUST
Our work focuses on the task of generalizable 3D object reconstruction from a single RGB-D image. The proposed method conducts implicit field learning with point diffusion that iteratively denoises a point cloud as adaptive query points for better implicit field learning, which leads to high reconstruction quality on both the global shape and fine details.

Abstract

Generalizable 3D object reconstruction from single-view RGB-D images remains a challenging task, particularly with real-world data. Current state-of-the-art methods develop Transformer-based implicit field learning, necessitating an intensive learning paradigm that requires dense query-supervision uniformly sampled throughout the entire space. We propose a novel approach, IPoD, which harmonizes implicit field learning with point diffusion. This approach treats the query points for implicit field learning as a noisy point cloud for iterative denoising, allowing for their dynamic adaptation to the target object shape. Such adaptive query points harness diffusion learning’s capability for coarse shape recovery and also enhance the implicit representation’s ability to delineate finer details. Besides, an additional self-conditioning mechanism is designed to use implicit predictions as the guidance of diffusion learning, leading to a cooperative system. Experiments conducted on the CO3D-v2 dataset affirm the superiority of IPoD, achieving a 7.8% improvement in F-score and a 28.6% improvement in Chamfer distance over existing methods. The generalizability of IPoD is further demonstrated on the MVImgNet dataset.

1. Methodology

Overview of the proposed method. The network takes as input a single-view image and a partial point cloud unprojected from the image according to its depth information. A set of points is sampled, serving both as a noisy point cloud for point diffusion learning and as query points for implicit field learning. The proposed self-conditioning mechanism leverages the implicit predictions to in turn guide the denoising. The reconstruction result is obtained by iteratively performing implicit prediction and denoising concurrently.
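The inference loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `model`, the interpolation schedule, and all shapes are hypothetical placeholders standing in for the actual Transformer network and diffusion schedule, and a single call is assumed to return both the denoised point positions and the UDF values at the current query points, with the previous UDF prediction fed back as the self-conditioning signal.

```python
import numpy as np

def reconstruct(rgb_feat, partial_pcd, model, T=5, n_query=2048, seed=0):
    """Sketch of IPoD-style inference: the query points double as a noisy
    point cloud that is iteratively denoised, while the implicit field
    (a UDF) is predicted at those same points in every step.

    `model` is a hypothetical callable:
        (points, udf_prev, rgb_feat, partial_pcd, t) -> (denoised_points, udf)
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_query, 3))   # start from Gaussian noise
    udf_prev = np.zeros(n_query)            # self-conditioning signal

    for t in reversed(range(T)):
        # One network call jointly predicts the denoised point positions
        # and the UDF values; the previous UDF prediction guides denoising.
        x0_pred, udf = model(x, udf_prev, rgb_feat, partial_pcd, t)
        udf_prev = udf
        if t > 0:
            # Toy re-noising step standing in for a real diffusion schedule.
            alpha = t / T
            x = (1 - alpha) * x0_pred + alpha * rng.standard_normal(x.shape)
        else:
            x = x0_pred
    return x, udf
```

In the actual method the final surface is then extracted from the UDF predictions at the converged query points; the toy schedule above only conveys the alternation between denoising and implicit prediction.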


2. Denoising Visualization

Visualization of the denoising process (t = {1000, 750, 500, 250, 0}) of our method during inference. Note that only 2k points are sampled per noisy point cloud for clearer visualization. The darkness of each point indicates the magnitude of the predicted UDF value at its position (the smaller the value, the darker the point).
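The darkness mapping used in these visualizations can be reproduced with a simple transform. This is an illustrative sketch only: the clipping distance `d_max` is a hypothetical parameter, not a value taken from the paper.

```python
import numpy as np

def udf_to_gray(udf, d_max=0.1):
    """Map predicted UDF values to grayscale intensities for point rendering:
    smaller UDF -> darker point (closer to the predicted surface).

    `d_max` is a hypothetical clipping distance; values beyond it are
    rendered fully white. Returns intensities in [0, 1], where
    0.0 = black (on the surface) and 1.0 = white (far from it).
    """
    return np.clip(np.asarray(udf, dtype=float), 0.0, d_max) / d_max
```

The resulting intensities can be passed, e.g., as per-point colors to a scatter plot or point-cloud viewer.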

3. Reconstruction Visualization

Visualization of reconstructions by NU-MCC vs. IPoD (Ours2).


Visualization of reconstructions by IPoD (Ours2) on CO3D-v2 held-out categories, compared with the SOTA method NU-MCC.


Visualization of reconstructions by IPoD (Ours2) on CO3D-v2 held-in categories, compared with the SOTA method NU-MCC.


Visualization of reconstructions (by inference without fine-tuning) by IPoD (Ours2) on the MVImgNet dataset, compared with the SOTA method NU-MCC.

BibTeX

@inproceedings{wu2023ipod,
  title={IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images},
  author={Wu, Yushuang and Shi, Luyue and Cai, Junhao and Yuan, Weihao and Qiu, Lingteng and Dong, Zilong and Bo, Liefeng and Cui, Shuguang and Han, Xiaoguang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}}