Differentiable Light Transport with Gaussian Surfels via Adapted Radiosity for Efficient Relighting and Geometry Reconstruction

SIGGRAPH Asia 2025
(ACM Transactions on Graphics)

University of California, San Diego

We build an efficient Gaussian-splatting-based framework for differentiable light transport, inspired by classic radiosity theory.
During inference, we achieve hundreds of FPS for global illumination effects, including view-dependent reflections via a spherical harmonics representation.

Abstract

Radiance fields have achieved tremendous success, with applications ranging from novel view synthesis to geometry reconstruction, especially since the advent of Gaussian splatting. However, they sacrifice modeling of material reflectance and lighting conditions, leading to significant geometric ambiguities and the inability to easily perform relighting. One way to address these limitations is to incorporate physically-based rendering, but including full global illumination within the inner loop of the optimization has been prohibitively expensive. Previous works therefore adopt simplifications that make the optimization with global illumination effects efficient but less accurate.

In this work, we adopt Gaussian surfels as the primitives and build an efficient framework for differentiable light transport, inspired by classic radiosity theory. The whole framework operates in the coefficient space of spherical harmonics, supporting both diffuse and specular materials. We extend classic radiosity to handle non-binary visibility and semi-opaque primitives, propose novel solvers to efficiently compute the light transport, and derive the backward pass for gradient-based optimization, which is more efficient than auto-differentiation. During inference, rendering is view-independent: the light transport need not be recomputed when the viewpoint changes, enabling hundreds of FPS for global illumination effects, including view-dependent reflections via a spherical harmonics representation.
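To give intuition for the radiosity formulation our framework adapts, here is a minimal sketch of the classic scalar (single-channel, fully diffuse) case: solving B = E + diag(ρ)·F·B with a Jacobi-style iteration. This is only an illustration of the underlying linear system; the paper generalizes it to semi-opaque Gaussian surfels, non-binary visibility, and spherical-harmonic coefficients, and all names below (`form_factors`, `emission`, `albedo`) are illustrative rather than taken from the paper's code.

```python
import numpy as np

def solve_radiosity(form_factors, emission, albedo, n_iters=50):
    """Iteratively solve B = E + diag(albedo) @ F @ B (Jacobi/Neumann series).

    Each iteration adds one more bounce of indirect light; convergence is
    guaranteed when the spectral radius of diag(albedo) @ F is below 1.
    """
    B = emission.copy()
    for _ in range(n_iters):
        B = emission + albedo * (form_factors @ B)
    return B

# Tiny example: two patches facing each other.
F = np.array([[0.0, 0.5],
              [0.5, 0.0]])     # form factors (each row sums to <= 1)
E = np.array([1.0, 0.0])       # patch 0 emits light
rho = np.array([0.8, 0.8])     # diffuse albedo
B = solve_radiosity(F, E, rho)  # converges to (I - diag(rho) F)^-1 E
```

Solving this system once yields view-independent outgoing radiosity per patch, which is what makes rasterization-time rendering cheap; the paper's contribution is doing this (and its backward pass) efficiently for Gaussian surfels in SH coefficient space.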

Through extensive qualitative and quantitative experiments, we demonstrate superior geometry reconstruction, view synthesis, and relighting compared to previous inverse rendering baselines, as well as to data-driven baselines, given relatively sparse datasets with known or unknown lighting conditions.

Video

Pipeline

At the core of our method is a Differentiable Light Transport module that is view-independent, which makes it compatible with efficient rasterization. Gradients flow from the rasterizer back into the underlying geometry, material, and lighting parameters.

Our pipeline is fully end-to-end, initialized either from randomly distributed points within a cube or from SfM points when available. This formulation helps resolve the geometric ambiguities that commonly arise in radiance fields; please see the "Geometry disambiguation" paragraph of Sec. 9.3 for further analysis.

Evaluation

Our framework only needs to compute the light transport once; rendering from arbitrary viewpoints is then efficient using a rasterizer. The time cost grows roughly linearly with the number of primitives. We propose a hybrid solver that computes the light transport efficiently while maintaining low variance: with a small overhead, variance is significantly reduced compared to using Monte Carlo alone. For the full evaluation, please see the paper.
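To illustrate why a hybrid strategy can cut variance relative to pure Monte Carlo, here is a simplified stand-in (not the paper's actual solver): handle the strongest couplings of a transport matrix-vector product deterministically, and only estimate the residual tail stochastically. All function names below are hypothetical.

```python
import numpy as np

def mc_matvec(F, B, n_samples, rng):
    """Unbiased Monte Carlo estimate of F @ B by sampling columns uniformly."""
    n = F.shape[1]
    cols = rng.integers(0, n, size=n_samples)
    return (n / n_samples) * (F[:, cols] @ B[cols])

def hybrid_matvec(F, B, k, n_samples, rng):
    """Sum the k largest-|B| columns exactly; MC-estimate only the tail.

    The exact part carries most of the energy, so the stochastic estimator
    is left with a small residual and its variance shrinks accordingly.
    """
    order = np.argsort(-np.abs(B))
    strong, tail = order[:k], order[k:]
    exact = F[:, strong] @ B[strong]
    if tail.size == 0:
        return exact
    cols = tail[rng.integers(0, tail.size, size=n_samples)]
    return exact + (tail.size / n_samples) * (F[:, cols] @ B[cols])

rng = np.random.default_rng(0)
F = np.ones((3, 4))        # toy transport matrix
B = 2.0 * np.ones(4)       # toy radiosity vector
full = hybrid_matvec(F, B, k=4, n_samples=8, rng=rng)  # all-exact path
est = mc_matvec(F, B, n_samples=8, rng=rng)            # pure MC estimate
```

Both estimators are unbiased; the hybrid one simply moves the dominant terms out of the stochastic estimate, mirroring the low-variance behavior shown in the heatmap comparison below.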

Relationship between the time cost of calculating the full light transport and the number of primitives.

Comparison between the hybrid solver and the Monte Carlo solver. The variance of rendered images with global illumination under each solver is shown as a colored heatmap (colorbar on the right). The time cost for calculating the light transport is labeled in the top-left corner.

Visual Comparisons

Ours
GS3 [Bi 2024]
Ours [Global Illumination] (Geometry)
Ours [Direct Illumination] (Geometry)
Ours
NRHints [Zeng 2023]
Ours
Edited Material (More Shiny)

BibTeX

@article{jiang2025radiositygs,
  author  = {Jiang, Kaiwen and Sun, Jia-Mu and Li, Zilu and Wang, Dan and Li, Tzu-Mao and Ramamoorthi, Ravi},
  title   = {Differentiable Light Transport with Gaussian Surfels via Adapted Radiosity for Efficient Relighting and Geometry Reconstruction},
  year    = {2025},
  journal = {ACM Transactions on Graphics (TOG)}, 
  number  = {6}, 
  volume  = {44}, 
  month   = {December}
}

Acknowledgments and Funding

We thank Rama Chellappa, Cheng Peng, Venkataram Sivaram, Haolin Lu, and Ishit Mehta for discussions. We thank the anonymous reviewers for detailed comments and suggestions. We also thank NVIDIA for GPU gifts.

This work was supported in part by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) contract number 140D0423C0076. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing IARPA, DOI/IBC, or the U.S. Government. We also acknowledge support from ONR grant N00014-23-1-2526, NSF grants 2127544 and 2238839, NSF grants 2100237 and 2120019 for the Nautilus cluster, gifts from Adobe, Google, Qualcomm and Rembrand, an NSERC Postdoctoral Fellowship, the Ronald L. Graham Chair, and the UC San Diego Center for Visual Computing.

References

[Bi 2024] Zoubin Bi, Yixin Zeng, Chong Zeng, Fan Pei, Xiang Feng, Kun Zhou, and Hongzhi Wu. GS3: Efficient Relighting with Triple Gaussian Splatting.

[Zeng 2023] Chong Zeng, Guojun Chen, Yue Dong, Pieter Peers, Hongzhi Wu, and Xin Tong. Relighting Neural Radiance Fields with Shadow and Highlight Hints.