SCoDA: Domain Adaptive Shape Completion for Real Scans

CVPR 2023

1FNii, CUHKSZ; 2SSE, CUHKSZ; 3SRIBD; 4SDS, CUHKSZ; 5Microsoft Research Asia; 6Sun Yat-sen University; 7Research Institute, Sun Yat-sen University, Shenzhen
An illustration of the proposed task, SCoDA: domain adaptive shape completion, which aims to transfer knowledge from the synthetic domain to the reconstruction of noisy and incomplete real point clouds. For this task, we contribute a dataset, ScanSalon, with paired real scans and 3D shape models.

Abstract

3D shape completion from point clouds is a challenging task, especially from scans of real-world objects. Given the paucity of 3D shape ground truths for real scans, existing works mainly benchmark this task on synthetic data, e.g., 3D computer-aided design models. However, the domain gap between synthetic and real data limits the generalizability of these methods. Thus, we propose a new task, SCoDA, for domain adaptation of real-scan shape completion from the synthetic domain. We contribute a new dataset, ScanSalon, which contains elaborate 3D models created by skillful artists according to the given real scans. To address this new task, we propose a novel cross-domain feature fusion method for knowledge transfer and a novel volume-consistent self-training framework for robust learning from real data. Extensive experiments are conducted with existing methods, and the proposed method brings an improvement of 6%∼7% mIoU.
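For reference, shape completion quality in this line of work is typically scored with volumetric IoU between predicted and ground-truth shapes. The sketch below shows one common way to compute it on occupancy grids; the 0.5 threshold and the grid-based formulation are illustrative assumptions, not details taken from the paper.

# Minimal sketch of volumetric IoU on occupancy grids (assumed setup).
import numpy as np

def volumetric_iou(pred_occ: np.ndarray, gt_occ: np.ndarray, thresh: float = 0.5) -> float:
    """IoU between two occupancy grids of identical shape, e.g. 64x64x64."""
    pred = pred_occ > thresh
    gt = gt_occ > thresh
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

# mIoU: average the per-shape IoU over the test set (optionally per class).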

1. Dataset Exhibition of ScanSalon

Visualization of samples in the proposed ScanSalon dataset.

Real Scans

Created Meshes

2. Method and Results

Overview of the proposed method. Two IF-Net encoders are used for the source and target domains, respectively, and they share an implicit function decoder. The cross-domain feature fusion (CDFF) module adaptively combines the global-level and local-level knowledge learned from the source and target domains, respectively. The volume-consistent self-training (VCST) component enforces prediction consistency between two differently augmented views so that local details are learned.
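To make the two components concrete, below is a minimal PyTorch-style sketch of how cross-domain feature fusion and volume-consistent self-training could be wired together; the module interfaces, the learnable per-scale fusion weights, and the consistency loss form are illustrative assumptions, not the authors' implementation.

# Schematic sketch (assumed interfaces), not the official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDFF(nn.Module):
    """Cross-Domain Feature Fusion: adaptively mix multi-scale features from
    the source-domain and target-domain encoders with learnable weights."""
    def __init__(self, num_scales: int):
        super().__init__()
        # one fusion weight per feature scale, squashed to (0, 1)
        self.alpha = nn.Parameter(torch.zeros(num_scales))

    def forward(self, feats_src, feats_tgt):
        # feats_*: lists of per-scale feature volumes from the two encoders
        w = torch.sigmoid(self.alpha)
        return [w[i] * fs + (1 - w[i]) * ft
                for i, (fs, ft) in enumerate(zip(feats_src, feats_tgt))]

def vcst_loss(decoder, fused_view1, fused_view2, query_pts):
    """Volume-consistent self-training: make occupancy predictions for two
    augmented views of the same real scan agree at shared query points."""
    occ1 = decoder(fused_view1, query_pts)   # (B, N) occupancy logits
    occ2 = decoder(fused_view2, query_pts)
    return F.mse_loss(torch.sigmoid(occ1), torch.sigmoid(occ2))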

Qualitative comparison between our method and IF-Net on shape completion with only 3% (left) and 5% (right) of labels for training.

3. ScanSalon Sample Visualization

Class: Sofa

Class: Bed

Class: Chair

Class: Desk

Class: Lamp

Class: Car

BibTeX

@inproceedings{wu2023scoda,
  title={SCoDA: Domain Adaptive Shape Completion for Real Scans},
  author={Wu, Yushuang and Yan, Zizheng and Chen, Ce and Wei, Lai and Li, Xiao and Li, Guanbin and Li, Yihao and Cui, Shuguang and Han, Xiaoguang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}}