GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting
for Real-Time Human-Scene Rendering from Sparse Views


Boyao Zhou1*, Shunyuan Zheng2*, Hanzhang Tu1, Ruizhi Shao1, Boning Liu1, Shengping Zhang2✉, Liqiang Nie2, Yebin Liu1

1Tsinghua University        2Harbin Institute of Technology
*Equal contribution  †Work done during an internship at Tsinghua University  ✉Corresponding author


Abstract

Differentiable rendering techniques have recently shown promising results for free-viewpoint video synthesis of characters. However, such methods, whether based on Gaussian Splatting or neural implicit rendering, typically require per-subject optimization, which fails to meet the real-time rendering requirement of interactive applications. We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting. To this end, we introduce Gaussian parameter maps defined on the source views and directly regress Gaussian properties for instant novel view synthesis without any fine-tuning or optimization. We train our Gaussian parameter regression module on human-only or human-scene data, jointly with a depth estimation module that lifts the 2D parameter maps to 3D space. The proposed framework is fully differentiable and can be trained with both depth and rendering supervision or with rendering supervision alone. We further introduce a regularization term and an epipolar attention mechanism to preserve geometric consistency between the two source views, especially when depth supervision is unavailable. Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving a considerably faster rendering speed.
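The geometry regularization mentioned above encourages the geometry recovered from the two source views to agree, even when no depth supervision is used. As an illustrative sketch only, and not the paper's exact formulation, one plausible cross-view consistency penalty reprojects the depth map of one source view into the other and penalizes the disagreement; the helper names unproject and cross_view_depth_consistency below are hypothetical.

# Illustrative sketch (assumption, not the paper's exact regularizer): warp the
# depth predicted for view A into view B via the known cameras and penalize the
# disagreement with the depth predicted for view B.
import torch
import torch.nn.functional as F

def unproject(depth, K_inv, c2w):
    """Lift a depth map (H, W) to world-space points (H*W, 3)."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3).float()
    cam_pts = (K_inv @ pix.T).T * depth.reshape(-1, 1)      # camera space
    return (c2w[:3, :3] @ cam_pts.T).T + c2w[:3, 3]         # world space

def cross_view_depth_consistency(depth_a, depth_b, K, c2w_a, w2c_b):
    """Project view-A points into view B and compare the induced depth with depth_b."""
    H, W = depth_a.shape
    pts_world = unproject(depth_a, torch.linalg.inv(K), c2w_a)
    pts_b = (w2c_b[:3, :3] @ pts_world.T).T + w2c_b[:3, 3]  # view-B camera space
    z_b = pts_b[:, 2].clamp(min=1e-6)
    uv = (K @ (pts_b / z_b.unsqueeze(-1)).T).T[:, :2]
    # Sample view-B depth at the projected pixel locations (normalized grid for grid_sample).
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1, uv[:, 1] / (H - 1) * 2 - 1], dim=-1)
    sampled = F.grid_sample(depth_b[None, None], grid.view(1, H, W, 2),
                            align_corners=True).view(-1)
    valid = (grid.abs() <= 1).all(dim=-1) & (sampled > 0)
    return (z_b - sampled).abs()[valid].mean()

Such a penalty is differentiable with respect to both predicted depth maps, so it could be minimized jointly with the rendering loss when depth supervision is neglected.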


Method


 

Overview: Given RGB images of a human-centered scene captured from sparse camera views and a target novel viewpoint, we select the two source views adjacent to the target viewpoint and formulate our pixel-wise Gaussian representation on them. We extract image features using epipolar attention and then perform iterative depth estimation. For each source view, the RGB image serves directly as the color map, while the remaining 3D Gaussian parameters are predicted in a pixel-wise manner. The Gaussian parameter maps defined on the 2D image planes of both views are then unprojected to 3D space via the refined depth maps and aggregated for novel view rendering. The fully differentiable framework enables joint training with only a rendering loss and a geometry regularization, as illustrated by the sketch below.
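To make the unprojection step concrete, the following is a minimal PyTorch sketch of how per-pixel Gaussian parameter maps could be lifted to a set of 3D Gaussians and merged across the two source views. The tensor layout, the log-scale and logit-opacity activations, and the helper names gaussian_maps_to_points and aggregate_two_views are assumptions for illustration; the actual networks and renderer follow the paper.

# Minimal sketch (assumed interface): one 3D Gaussian per pixel of each source view.
import torch
import torch.nn.functional as F

def gaussian_maps_to_points(rgb, depth, rot_map, scale_map, opacity_map, K, c2w):
    """
    rgb:         (3, H, W) source image, used directly as the Gaussian color map
    depth:       (H, W)    refined depth map from the depth estimation module
    rot_map:     (4, H, W) per-pixel quaternions
    scale_map:   (3, H, W) per-pixel scales (log-space by assumption)
    opacity_map: (1, H, W) per-pixel opacities (logits by assumption)
    Returns a dict of per-Gaussian attributes, one Gaussian per pixel.
    """
    _, H, W = rgb.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3).float()
    cam_pts = (torch.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)
    centers = (c2w[:3, :3] @ cam_pts.T).T + c2w[:3, 3]      # Gaussian centers in world space
    return {
        "xyz": centers,                                     # (H*W, 3)
        "color": rgb.reshape(3, -1).T,                      # (H*W, 3)
        "rotation": F.normalize(rot_map.reshape(4, -1).T, dim=-1),
        "scale": torch.exp(scale_map.reshape(3, -1).T),
        "opacity": torch.sigmoid(opacity_map.reshape(1, -1).T),
    }

def aggregate_two_views(g_a, g_b):
    """Concatenate the Gaussians of the two source views before splatting to the target view."""
    return {k: torch.cat([g_a[k], g_b[k]], dim=0) for k in g_a}

Because every operation above is differentiable, gradients from the rendering loss can flow back through the Gaussian parameter maps and the depth estimation module, which is what enables the joint training described above.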

 


Live Demo


Live demos for human-object interaction, multi-person, and human-scene interaction scenarios

Rendering Comparisons


Baselines: the NeRF-based ENeRF, the image-based rendering method IBRNet, the Gaussian-Splatting-based MVSplat, and the per-scene optimization method 4D-GS

Feed-forward Geometry Result


Ablation studies on Depth Residual and Geometry Regularization

Free View Rendering


Data from DyNeRF
Data captured by ourselves

Citation


  @article{zhou2024gpsplus,
    title={GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views},
    author={Zhou, Boyao and Zheng, Shunyuan and Tu, Hanzhang and Shao, Ruizhi and Liu, Boning and Zhang, Shengping and Nie, Liqiang and Liu, Yebin},
    journal={arXiv preprint arXiv:2411.11363},
    year={2024}
  }