RPBG: Robust Point-based Graphics

RPBG: Towards Robust Neural Point-based Graphics in the Wild
Qingtian Zhu¹, Zizhuang Wei²,³, Zhongtian Zheng³, Yifan Zhan¹, Zhuyu Yao⁴, Jiawang Zhang⁴, Kejian Wu⁴, Yinqiang Zheng¹
¹The University of Tokyo, ²Huawei Technologies, ³Peking University, ⁴XREAL
ECCV 2024 Oral

TL;DR: MVS-triangulated Splats + Image Restoration = Robust Point-based NVS



Indoor Navigation with Textureless and Transparent Surfaces

Environment

Setting up the environment involves CUDA compilation, so please make sure NVCC is installed (run nvcc -V to check the version) and that the installed PyTorch is built with the same CUDA version.

For example, if the system's CUDA is 11.8, run the following commands to configure the environment:

conda create -n RPBG python=3.9 -y && conda activate RPBG
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install ./pcpr
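
A quick sanity check (a minimal sketch, assuming only that PyTorch is installed) to confirm the installed PyTorch matches the system CUDA:

import torch

# CUDA version PyTorch was built against; should match nvcc -V (e.g., 11.8)
print("PyTorch CUDA version:", torch.version.cuda)
# Confirm a GPU is actually visible at runtime
print("CUDA available:", torch.cuda.is_available())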

To Run Your Custom Data

We provide scripts for processing custom data that comes without camera calibration or triangulation. The typical data structure is as follows:

|-- custom_root_path
    |-- camera.xml # Agisoft-format camera intrinsics & extrinsics
    |-- scene-sparse.yaml # configuration file for sparse triangulation (SfM)
    |-- scene-dense.yaml # configuration file for dense triangulation (MVS)
    |-- images # raw images (assumed distorted, thus not used in training)
    |-- sfm
        |-- sparse_pcd.ply # sparsely triangulated points
        |-- undis
            |-- images # undistorted images (used in training)
    |-- mvs
        |-- dense_pcd.ply # densely triangulated points

Data Preparation

First, configure the paths to your data and your COLMAP installation in triangulation/prepare_inputs.sh, along with any other settings you want to change (e.g., GPU indices and distortion models), then execute it:

sh triangulation/prepare_inputs.sh

Note that COLMAP's GPU-enabled SIFT may not work on headless servers; check this issue for more information. By default, the script performs SfM, image undistortion, and MVS sequentially, assuming all your images share a single set of intrinsics (including the distortion model); unregistered images are discarded.

Then fill in the relevant information in configs/paths.yaml and create a custom config file similar to configs/custom/sample.yaml; adopting the default set of hyper-parameters should work fine. After execution, scene-sparse.yaml, scene-dense.yaml, and camera.xml will be created under the given directory.
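
To double-check the result, you can verify that the prepared scene matches the layout shown above. A minimal sketch (this helper is illustrative and not part of the repository):

from pathlib import Path

# Illustrative helper, not part of RPBG: check that a scene directory
# contains the files expected by the data convention above.
def check_scene(root):
    root = Path(root)
    expected = [
        "camera.xml",
        "scene-sparse.yaml",
        "scene-dense.yaml",
        "sfm/sparse_pcd.ply",
        "sfm/undis/images",
        "mvs/dense_pcd.ply",
    ]
    missing = [p for p in expected if not (root / p).exists()]
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("Scene layout looks complete.")

check_scene("<custom_root_path>")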

Following NPBG and READ, we adopt the data convention of Agisoft Metashape, and we provide a handy script for converting camera parameters between formats. It currently supports:

  • Agisoft Metashape
  • Open3D-style camera trajectory
  • COLMAP sparse reconstruction
  • MVSNet-style format

For example, to convert COLMAP sparse reconstruction to Agisoft, run:

from pose_format_converter import *
COLMAP_recons = "<YOUR_COLMAP_SPARSE_RECONS>"  # path to the COLMAP sparse reconstruction
traj = load_colmap(COLMAP_recons)
traj.export_agisoft("camera.xml")
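
Conversions to the other supported formats presumably follow the same load/export pattern; the function names below are illustrative assumptions, so please check pose_format_converter for the exact API:

from pose_format_converter import *
# Hypothetical usage (function names are assumptions): load an Agisoft
# camera.xml and export an Open3D-style camera trajectory.
traj = load_agisoft("camera.xml")
traj.export_open3d("trajectory.json")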

Training

To start training, please follow the scripts in scripts/. For example:

sh scripts/train.sh configs/custom/sample.yaml

Citation

@article{zhu2024rpbg,
  title={RPBG: Towards Robust Neural Point-based Graphics in the Wild},
  author={Zhu, Qingtian and Wei, Zizhuang and Zheng, Zhongtian and Zhan, Yifan and Yao, Zhuyu and Zhang, Jiawang and Wu, Kejian and Zheng, Yinqiang},
  journal={arXiv preprint arXiv:2405.05663},
  year={2024}
}

Acknowledgements

We would like to thank the maintainers of the following repositories.

  • PCPR: for point cloud rasterization (z-buffering) in pure CUDA
  • NPBG: for the general point-based neural rendering pipeline & data convention
  • READ: for additional features and its MIMO-UNet implementation
  • Open3D: for visualization of point clouds on headless servers
  • COLMAP: for camera calibration and sparse triangulation
  • AA-RMVSNet: for dense triangulation
