
Lianghao Zhang

Tianjin University
Researcher
Computer Graphics & Computer Vision

About

Hello, I'm Lianghao Zhang, currently working at Xiaomi. I received my Ph.D. degree from Tianjin University, where I was advised by Prof. Jiawan Zhang. My research interests lie in computer graphics and computer vision, focusing on material acquisition, appearance modeling, and 3D reconstruction. My long-term vision is to democratize material capture technologies, enabling everyone to participate in 3D content creation and enriching the digital content ecosystem with greater vitality and diversity.

Publications

Sparse SVBRDF Acquisition via Importance-Aware Illumination Multiplexing


Lianghao Zhang, Zixuan Wang, Fangzhou Gao, Li Wang, Ruya Sun, Jiawan Zhang
To appear in ACM Trans. on Graphics (Proc. SIGGRAPH Asia 2025).

Reflectance acquisition from sparse images has been a long-standing problem in computer graphics. Previous works have addressed this by introducing either material-related priors or illumination multiplexing with a general sampling strategy. However, fixed lighting patterns in multiplexing can lead to redundant sampling and entangled observations, making it necessary to adaptively capture salient reflectance responses in each shot based on material behavior. In this paper, we propose combining adaptive sampling with illumination multiplexing for SVBRDF reconstruction from sparse images lit by a planar light source. Central to our method is the modeling of a sampling importance distribution on the lighting surface, guided by the statistical nature of microfacet theory. Based on this sampling structure, our framework jointly trains networks to learn an adaptive sampling strategy in the lighting domain and, furthermore, to approximately separate pure specular-related information from observations to reduce ambiguities in reconstruction. We validate our approach through experiments and comparisons with previous works on both synthetic and real materials.
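To give a flavor of what a microfacet-guided importance distribution over a planar light could look like, here is a minimal sketch (not the paper's code): it weights each point on the light plane by the GGX normal distribution evaluated at the resulting half vector. The function names, the simple weight choice, and the geometry are all illustrative assumptions.

```python
# Illustrative sketch only: a microfacet-guided importance weight over a
# planar light source, using the GGX normal distribution function.
import numpy as np

def ggx_ndf(cos_h: np.ndarray, alpha: float) -> np.ndarray:
    """GGX/Trowbridge-Reitz NDF D(h) for half-vector cosine cos_h and roughness alpha."""
    a2 = alpha * alpha
    d = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / np.maximum(np.pi * d * d, 1e-9)

def light_importance(light_xy: np.ndarray, light_z: float,
                     surf_pt: np.ndarray, view_dir: np.ndarray,
                     normal: np.ndarray, alpha: float) -> np.ndarray:
    """Importance of each point on a planar light (plane z = light_z) for one surface point."""
    light_pts = np.concatenate([light_xy, np.full((len(light_xy), 1), light_z)], axis=1)
    wi = light_pts - surf_pt                        # directions toward the light samples
    wi /= np.linalg.norm(wi, axis=1, keepdims=True)
    h = wi + view_dir                               # unnormalized half vectors
    h /= np.linalg.norm(h, axis=1, keepdims=True)
    cos_h = np.clip(h @ normal, 0.0, 1.0)
    w = ggx_ndf(cos_h, alpha)                       # specular lobe drives importance
    return w / w.sum()                              # normalize to a distribution

# Toy usage: a 32x32 grid of candidate light positions above the sample plane.
g = np.linspace(-1, 1, 32)
xy = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
p = light_importance(xy, 2.0, np.zeros(3), np.array([0.0, 0.0, 1.0]),
                     np.array([0.0, 0.0, 1.0]), alpha=0.2)
```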

Paper

EBREnv: SVBRDF Estimation in Uncontrolled Environment Lighting via Exemplar-Based Representation


Li Wang, Jiajun Zhao, Lianghao Zhang, Fangzhou Gao, Jiawan Zhang
To appear in ACM SIGGRAPH Asia 2025 Conference Proceedings.

Recovering the spatially-varying bidirectional reflectance distribution function (SVBRDF) from as few captured images as possible has been a challenging task in computer graphics. Benefiting from the co-located flashlight-camera capture strategy and data-driven priors, SVBRDF can be estimated from a few input images. However, this capture strategy usually requires a controlled darkroom environment that ensures the flashlight is the single light source, which is often impractical for on-site capture in real-world scenarios. To support SVBRDF estimation in an uncontrolled environment, the key challenge lies in the highly precise estimation of unknown environment lighting and its effective use in SVBRDF recovery. To address this issue, we propose a novel exemplar-based environment lighting representation that is easier for neural networks to use. These exemplars are a set of rendered images of selected materials under the environment lighting. By embedding the rendering process, our approach transforms environment lighting represented in the spherical domain into the sample-surface domain, thereby aligning it with the domain of the input images. This significantly reduces the network's learning burden, resulting in a more precise environment lighting estimation. Furthermore, after lighting prediction, we also present a dominant lighting extraction algorithm and an adaptive exemplar selection algorithm to strengthen the guidance of environment lighting in SVBRDF estimation. Finally, considering the distinct contributions of environment lighting and point lighting to SVBRDF recovery, we propose a well-designed cascaded network. Quantitative assessments and qualitative analysis demonstrate that our method achieves superior SVBRDF estimations compared to previous approaches. The source code will be released.
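As a rough intuition for the exemplar idea, the sketch below (assumptions throughout, not the authors' code) renders a Lambertian "exemplar" of a chosen material under sampled environment lighting, so the lighting is expressed as an image rather than a spherical map.

```python
# Illustrative sketch: turning environment lighting into exemplar images by
# rendering chosen materials under it. Names and the diffuse-only material
# are assumptions for illustration.
import numpy as np

def render_diffuse_exemplar(normals, env_dirs, env_radiance, albedo):
    """Lambertian exemplar: per-pixel cosine-weighted average over env-light samples.
    normals: (H, W, 3); env_dirs: (N, 3) unit vectors; env_radiance: (N, 3)."""
    cos = np.clip(normals @ env_dirs.T, 0.0, None)       # (H, W, N) clamped cosines
    li = cos @ env_radiance / len(env_dirs)              # Monte-Carlo-style average
    return albedo * li                                   # (H, W, 3) exemplar image

# Toy usage: a sphere normal map lit by random environment samples.
h = w = 64
ys, xs = np.mgrid[-1:1:h*1j, -1:1:w*1j]
z2 = np.clip(1 - xs**2 - ys**2, 0, None)
normals = np.dstack([xs, ys, np.sqrt(z2)])
dirs = np.random.randn(128, 3); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
rad = np.abs(np.random.rand(128, 3))
exemplar = render_diffuse_exemplar(normals, dirs, rad, albedo=np.array([0.8, 0.6, 0.5]))
```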

Paper

RCTrans: Transparent Object Reconstruction in Natural Scene via Refractive Correspondence Estimation


Fangzhou Gao, Yuzhen Kang, Lianghao Zhang, Li Wang, Qishen Wang, Jiawan Zhang
To appear in ACM SIGGRAPH Asia 2025 Conference Proceedings.

Transparent object reconstruction in an uncontrolled natural scene is a challenging task due to the complex appearance of such objects. Existing methods optimize the object shape with RGB color as supervision, which suffer from locality and ambiguity and fail to recover accurate structures. In this paper, we present RCTrans, which uses ray-background intersection as a more efficient constraint to achieve high-quality reconstruction while maintaining a convenient setup. The key technology that enables this is a novel pre-trained correspondence estimation network, which allows us to acquire ray-background correspondences under uncontrolled scenes and camera views. In addition, a confidence evaluation is introduced to protect the reconstruction from inaccurately estimated correspondences. Extensive experiments on both synthetic and real data demonstrate that our method produces highly accurate results without any extra acquisition burden. The code and dataset will be publicly available.
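One simple way to picture the confidence evaluation is a confidence-weighted loss on predicted correspondences, as in this minimal sketch (all names are hypothetical and the weighting scheme is my assumption, not the paper's formulation):

```python
# Illustrative sketch: a confidence-weighted loss over ray-background
# correspondences, down-weighting unreliable matches.
import torch

def correspondence_loss(pred_uv, target_uv, confidence, eps=1e-6):
    """pred_uv / target_uv: (N, 2) background hit coordinates; confidence: (N,) in [0, 1]."""
    err = (pred_uv - target_uv).norm(dim=-1)          # per-ray matching error
    return (confidence * err).sum() / (confidence.sum() + eps)

pred = torch.rand(1024, 2, requires_grad=True)        # predicted correspondences
target = torch.rand(1024, 2)                          # estimated ground-truth matches
conf = torch.rand(1024)                               # per-ray confidence
loss = correspondence_loss(pred, target, conf)
loss.backward()                                       # gradients skip low-confidence rays
```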

Paper

On-site single image SVBRDF reconstruction with active planar lighting


Lianghao Zhang, Ruya Sun, Li Wang, Fangzhou Gao, Zixuan Wang, Jiawan Zhang
Computers & Graphics 130 (2025) 104268.

Recovering the spatially-varying bidirectional reflectance distribution function (SVBRDF) from a single image in uncontrolled environments is challenging yet essential for various applications. In this paper, we address this highly ill-posed problem using a convenient capture setup and a carefully designed reconstruction framework. Our proposed setup, which incorporates an active extended light source and a mirror hemisphere, is easy for even common users to implement and requires no careful calibration. These devices simultaneously capture uncontrolled lighting, real active lighting patterns, and material appearance in a single image. Based on all captured information, we solve the reconstruction problem by designing lighting clues that are semantically aligned with the input image to aid the network in understanding the captured lighting. We further embed lighting clue generation into the network's forward pass by introducing real-time rendering. This allows the network to render accurate lighting clues based on predicted normal variations while jointly learning to reconstruct high-quality SVBRDF. Moreover, we use the captured lighting patterns to model the noise of pattern display in real scenes, which significantly increases the robustness of our method on real data. With these innovations, our method demonstrates clear improvements over previous approaches on both synthetic and real-world data.
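For readers unfamiliar with mirror-probe capture, the classic light-probe mapping below shows how a mirror sphere crop yields per-pixel lighting directions; this is standard graphics math, not the paper's pipeline, and the orthographic-camera assumption and names are mine.

```python
# Illustrative sketch: per-pixel world directions seen in a mirror-sphere
# image. For sphere normal n and view direction v, the reflected direction
# is r = v - 2 (v . n) n.
import numpy as np

def mirror_ball_directions(size: int) -> np.ndarray:
    """Reflection directions for each pixel of a mirror-ball crop (orthographic camera)."""
    ys, xs = np.mgrid[-1:1:size*1j, -1:1:size*1j]
    r2 = xs**2 + ys**2
    inside = r2 <= 1.0                                # pixels that hit the sphere
    nz = np.sqrt(np.clip(1.0 - r2, 0.0, None))
    n = np.dstack([xs, ys, nz])                       # sphere normals toward camera
    v = np.array([0.0, 0.0, -1.0])                    # viewing direction
    refl = v - 2.0 * (n @ v)[..., None] * n           # mirror reflection of v
    refl[~inside] = 0.0
    return refl                                       # (size, size, 3)

dirs = mirror_ball_directions(128)  # pair with the crop's pixel colors to read lighting
```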

Paper

PixelatedScatter: Arbitrary-level Visual Abstraction for Large-scale Multiclass Scatterplots


Ziheng Guo, Tianxiang Wei, Zeyu Li, Lianghao Zhang, Sisi Li, Jiawan Zhang
To appear in IEEE Trans. Vis. Comput. Graph.


Paper

NFPLight: Deep SVBRDF Estimation via the Combination of Near and Far Field Point Lighting


Li Wang, Lianghao Zhang, Fangzhou Gao, Yuzhen Kang, Jiawan Zhang
ACM Trans. on Graphics (Proc. SIGGRAPH Asia 2024), 43, 6.

Recovering the spatially-varying bidirectional reflectance distribution function (SVBRDF) from a few hand-held captured images has been a challenging task in computer graphics. Benefiting from priors learned from data, single-image methods can obtain plausible SVBRDF estimation results. However, the extremely limited appearance information in a single image does not suffice for high-quality SVBRDF reconstruction. Although increasing the number of inputs can improve reconstruction quality, it also affects the efficiency of real data capture and adds significant computational burden. The key challenge, therefore, is to minimize the required number of inputs while keeping high-quality results. To address this, we propose maximizing the effective information in each input through a novel co-located capture strategy that combines near-field and far-field point lighting. To further enhance effectiveness, we theoretically investigate the inherent relation between the two images. The extracted relation is strongly correlated with the slope of specular reflectance, substantially enhancing the precision of roughness map estimation. Additionally, we design registration and denoising modules to meet the practical requirements of hand-held capture. Quantitative assessments and qualitative analysis demonstrate that our method achieves superior SVBRDF estimations compared to previous approaches. All source code will be publicly released.
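To see why near-field and far-field co-located flash images carry complementary information, here is a deliberately simplified toy (Lambertian only, not the paper's derivation; all names and the diffuse simplification are assumptions): the near-field image exhibits strong inverse-square falloff across the sample, while the far-field image is nearly uniform, so their per-pixel relation encodes spatial shading structure.

```python
# Illustrative toy: the same planar sample under a co-located point light
# placed near vs. far, and their per-pixel ratio.
import numpy as np

def colocated_flash_image(albedo, normals, pts, light_pos):
    """Lambertian image under a point light at light_pos."""
    wi = light_pos - pts
    d2 = np.sum(wi * wi, axis=-1, keepdims=True)      # squared distance to light
    wi = wi / np.sqrt(d2)
    cos = np.clip(np.sum(normals * wi, axis=-1, keepdims=True), 0.0, None)
    return albedo * cos / d2                          # inverse-square falloff

h = w = 64
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, w), np.linspace(-0.1, 0.1, h))
pts = np.dstack([xs, ys, np.zeros_like(xs)])          # 20 cm planar sample
normals = np.dstack([np.zeros_like(xs)] * 2 + [np.ones_like(xs)])
albedo = np.full((h, w, 3), 0.7)
near = colocated_flash_image(albedo, normals, pts, np.array([0, 0, 0.15]))
far = colocated_flash_image(albedo, normals, pts, np.array([0, 0, 1.5]))
ratio = near / np.maximum(far, 1e-9)                  # spatial falloff cue
```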

Project

Single-image SVBRDF estimation with auto-adaptive high-frequency feature extraction


Jiamin Cheng, Li Wang, Lianghao Zhang, Fangzhou Gao, Jiawan Zhang
Computers & Graphics 124 (2024) 104103.

In this paper, we address the task of estimating spatially-varying bi-directional reflectance distribution functions (SVBRDF) of a near-planar surface from a single flash-lit image. Disentangling SVBRDF from the material appearance by deep learning has proven a formidable challenge. This difficulty is particularly pronounced when dealing with images lit by a point light source because the uneven distribution of irradiance in the scene interacts with the surface, leading to significant global luminance variations across the image. These variations may be overemphasized by the network and wrongly baked into the material property space. To tackle this issue, we propose a high-frequency path that contains an auto-adaptive subband "knob". This path aims to extract crucial image textures and details while eliminating global luminance variations present in the original image. Furthermore, recognizing that color information is ignored in this path, we design a two-path strategy to jointly estimate material reflectance from both the high-frequency path and the original image. Extensive experiments on a substantial dataset have confirmed the effectiveness of our method. Our method outperforms state-of-the-art methods across a wide range of materials.
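A minimal sketch of the general idea, assuming the "knob" is a learnable low-pass cutoff (the paper's actual subband mechanism may differ; the class and parameter names are hypothetical): subtracting a Gaussian-blurred image whose sigma is trained end-to-end leaves a high-frequency residual free of global luminance variation.

```python
# Illustrative sketch: a high-frequency path with a learnable low-pass "knob".
import torch
import torch.nn.functional as F

class HighFreqPath(torch.nn.Module):
    def __init__(self, ksize: int = 21):
        super().__init__()
        self.log_sigma = torch.nn.Parameter(torch.tensor(1.0))  # learnable knob
        self.ksize = ksize

    def forward(self, x):                       # x: (B, 1, H, W) luminance
        sigma = self.log_sigma.exp()
        r = torch.arange(self.ksize, device=x.device, dtype=x.dtype) - self.ksize // 2
        g = torch.exp(-0.5 * (r / sigma) ** 2)
        g = g / g.sum()                         # normalized 1D Gaussian kernel
        blur = F.conv2d(x, g.view(1, 1, 1, -1), padding=(0, self.ksize // 2))
        blur = F.conv2d(blur, g.view(1, 1, -1, 1), padding=(self.ksize // 2, 0))
        return x - blur                         # high-frequency residual

hf = HighFreqPath()
out = hf(torch.rand(2, 1, 64, 64))              # gradients reach log_sigma
```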

Project

Deep SVBRDF Estimation from Single Image under Learned Planar Lighting


Lianghao Zhang, Fangzhou Gao, Li Wang, Minjing Yu, Jiamin Cheng, Jiawan Zhang
ACM SIGGRAPH 2023 Conference Proceedings, Article 48, 1-11.

Estimating spatially varying BRDF from a single image without complicated acquisition devices is a challenging problem. In this paper, we propose a deep-learning-based method that significantly improves the capture efficiency of a single image by learning the lighting pattern of a planar light source, and reconstructs high-quality SVBRDF by learning the global correlation prior of the input image. In our framework, lighting pattern optimization is embedded in the training process of the network by introducing an online rendering process. The rendering process not only renders images online as the input of the network, but also efficiently back-propagates gradients from the network to optimize the lighting pattern. Once trained, the network can estimate SVBRDFs from real photographs captured under the learned lighting pattern. Additionally, we describe an on-site capture setup that needs no careful calibration to capture the material sample efficiently; in particular, even a cell phone can be used for illumination. We demonstrate on synthetic and real data that our method can recover a wide range of materials from a single image casually captured under the learned lighting pattern.
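The structural trick is worth a sketch: if the lighting pattern is a learnable tensor and the network input is rendered from it differentiably, one optimizer step updates both. This toy assumes linear light transport (image = pattern-weighted sum of single-light basis images) and uses placeholder shapes and losses; it is a sketch of the idea, not the paper's renderer.

```python
# Illustrative sketch: jointly optimizing a lighting pattern and a network
# by rendering the network input online from the learnable pattern.
import torch

P = 16 * 16                                     # planar light discretized to 16x16 cells
pattern = torch.rand(P, requires_grad=True)     # the learnable lighting pattern
basis = torch.rand(P, 3, 64, 64)                # image under each single light cell (toy)
net = torch.nn.Conv2d(3, 4, 3, padding=1)       # stand-in for the SVBRDF network

optim = torch.optim.Adam([pattern, *net.parameters()], lr=1e-3)
for _ in range(10):
    img = torch.einsum('p,pchw->chw', pattern.sigmoid(), basis)  # online render
    maps = net(img.unsqueeze(0))                # predicted SVBRDF maps
    loss = maps.square().mean()                 # placeholder for the real loss
    optim.zero_grad(); loss.backward(); optim.step()   # updates the pattern too
```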

Project

Transparent Object Reconstruction via Implicit Differentiable Refraction Rendering


Fangzhou Gao, Lianghao Zhang, Li Wang, Jiamin Cheng, Jiawan Zhang
ACM SIGGRAPH Asia 2023 Conference Proceedings, Article No. 57, 1-11.

Reconstructing the geometry of transparent objects has been a long-standing challenge. Existing methods rely on complex setups, such as manual annotation or darkroom conditions, to obtain object silhouettes and usually require controlled environments with designed patterns to infer ray-background correspondence. However, these intricate arrangements limit the practical application for common users. In this paper, we significantly simplify the setups and present a novel method that reconstructs transparent objects in unknown natural scenes without manual assistance. Our method incorporates two key technologies. Firstly, we introduce a volume rendering-based method that estimates object silhouettes by projecting the 3D neural field onto 2D images. This automated process yields highly accurate multi-view object silhouettes from images captured in natural scenes. Secondly, we propose transparent object optimization through differentiable refraction rendering with the neural SDF field, enabling us to optimize the refraction ray based on color rather than explicit ray-background correspondence. Additionally, our optimization includes a ray sampling method to supervise the object silhouette at a low computational cost. Extensive experiments and comparisons demonstrate that our method produces high-quality results while offering much more convenient setups.
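At the heart of any differentiable refraction renderer sits Snell refraction of a ray at a surface. The sketch below is the standard graphics formulation (as in GLSL's refract), shown in PyTorch so it stays differentiable; it is generic background math, not the authors' code.

```python
# Illustrative sketch: differentiable Snell refraction at a surface with
# unit normal n and relative index of refraction eta = ior_in / ior_out.
import torch

def refract(d, n, eta):
    """Refract unit direction d at unit normal n; returns zeros on total internal reflection."""
    cos_i = -(d * n).sum(-1, keepdim=True)              # incident angle cosine
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)         # negative => TIR
    t = eta * d + (eta * cos_i - torch.sqrt(k.clamp_min(0.0))) * n
    return torch.where(k > 0, t, torch.zeros_like(t))

d = torch.tensor([[0.0, -0.7071, -0.7071]])             # incoming ray direction
n = torch.tensor([[0.0, 0.0, 1.0]])                     # surface normal
t = refract(d, n, eta=1.0 / 1.5)                        # air into glass
```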

Project

DeepBasis: Hand-Held Single-Image SVBRDF Capture via Two-Level Basis Material Model


Li Wang, Lianghao Zhang, Fangzhou Gao, Jiawan Zhang
ACM SIGGRAPH Asia 2023 Conference Proceedings, Article No. 85, 1-11.

Recovering the spatially-varying bidirectional reflectance distribution function (SVBRDF) from a single hand-held captured image has been a meaningful but challenging task in computer graphics. Benefiting from learned data priors, some previous methods can exploit potential material correlations between image pixels to serve SVBRDF estimation. To further reduce the ambiguity of single-image estimation, it is necessary to integrate additional explicit material correlations. Given the flexible expressive ability of the basis-material assumption, we propose DeepBasis, a deep-learning-based method integrated with this assumption. It jointly predicts basis materials and their blending weights; the estimated SVBRDF is then their linear combination. To facilitate the extraction of data priors, we introduce a two-level basis model that retains sufficient representative power while using a fixed number of basis materials. Moreover, considering the absence of ground-truth basis materials and weights during network training, we propose a variance-consistency loss and adopt a joint prediction strategy, thereby making the existing SVBRDF dataset available for training. Additionally, due to the hand-held capture setting, the exact lighting directions are unknown. We model lighting direction estimation as a sampling problem and propose an optimization-based algorithm to find the optimal estimate. Quantitative evaluation and qualitative analysis demonstrate that DeepBasis produces higher-quality SVBRDF estimations than previous methods. All source code will be publicly released.
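The linear-combination step itself is compact enough to show at shape level. This sketch only illustrates the blend; the number of basis materials, the channel layout, and the softmax choice for convex weights are my assumptions, not details from the paper.

```python
# Illustrative sketch: SVBRDF as a per-pixel linear blend of predicted
# basis materials, using softmax blending weights.
import torch

B, K, C, H, W = 2, 6, 10, 64, 64                # e.g. 10 channels: normal+diffuse+rough+spec
basis = torch.rand(B, K, C, H, W)               # K predicted basis materials per image
logits = torch.rand(B, K, H, W)                 # per-pixel blending scores
weights = logits.softmax(dim=1)                 # convex weights over the basis set
svbrdf = (weights.unsqueeze(2) * basis).sum(1)  # (B, C, H, W) blended SVBRDF maps
```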

Project Code