
Lianghao Zhang

Tianjin University
Ph.D. student
Computer Graphics & Computer Vision

About

Hello, I'm Lianghao Zhang, a Ph.D. student with research interests in computer graphics and computer vision. I am currently a third-year Ph.D. student at Tianjin University, supervised by Prof. Jiawan Zhang. I obtained my master's and bachelor's degrees from Tianjin University in 2018 and 2014, respectively. My research vision is to democratize material capture technologies, making 3D content creation accessible to the general public, thereby accelerating the arrival of the metaverse era and infusing the digital content ecosystem with vitality and diversity.

Publications

NFPLight: Deep SVBRDF Estimation via the Combination of Near and Far Field Point Lighting


Li Wang, Lianghao Zhang, Fangzhou Gao, Yuzhen Kang, Jiawan Zhang
To appear in ACM Transactions on Graphics (SIGGRAPH Asia 2024)

Recovering the spatially-varying bi-directional reflectance distribution function (SVBRDF) from a few hand-held captured images has been a challenging task in computer graphics. Benefiting from priors learned from data, single-image methods can obtain plausible SVBRDF estimation results. However, the extremely limited appearance information in a single image does not suffice for high-quality SVBRDF reconstruction. Although increasing the number of inputs can improve reconstruction quality, it also reduces the efficiency of real data capture and adds significant computational burdens. Therefore, the key challenge is to minimize the required number of inputs while maintaining high-quality results. To address this, we propose maximizing the effective information in each input through a novel co-located capture strategy that combines near-field and far-field point lighting. To further enhance effectiveness, we theoretically investigate the inherent relation between the two images. The extracted relation is strongly correlated with the slope of specular reflectance, substantially enhancing the precision of roughness map estimation. Additionally, we design registration and denoising modules to meet the practical requirements of hand-held capture. Quantitative assessments and qualitative analysis demonstrate that our method achieves superior SVBRDF estimations compared to previous approaches. All source code will be publicly released.
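
The capture idea is simple to sketch: with the light co-located with the camera, the same surface point is photographed once under near-field point lighting (strong distance falloff, spatially varying light directions) and once under far-field lighting (nearly directional), and the per-pixel relation between the two images carries information about specular roughness. Below is a minimal, illustrative NumPy rendering of that setup; the GGX model, the planar sample, and all distances here are my own assumptions, not the paper's implementation.

    # Toy sketch of the co-located near/far point-light capture (not the paper's code).
    import numpy as np

    def ggx_ndf(n_dot_h, roughness):
        # GGX normal distribution term; with a co-located light and camera,
        # the half vector coincides with the view/light direction.
        a2 = roughness ** 4
        denom = n_dot_h ** 2 * (a2 - 1.0) + 1.0
        return a2 / np.maximum(np.pi * denom ** 2, 1e-8)

    def render_colocated(pos, normal, light_pos, albedo, roughness):
        wi = light_pos - pos                                  # per-pixel light direction
        dist2 = np.sum(wi * wi, axis=-1, keepdims=True)       # 1/d^2 falloff of a point light
        wi = wi / np.sqrt(dist2)
        n_dot_l = np.clip(np.sum(normal * wi, axis=-1, keepdims=True), 0.0, 1.0)
        spec = ggx_ndf(n_dot_l, roughness)                    # h == wi == wo when co-located
        return (albedo / np.pi + spec) * n_dot_l / dist2

    xs = np.linspace(-0.1, 0.1, 256)
    x, y = np.meshgrid(xs, xs)
    pos = np.stack([x, y, np.zeros_like(x)], axis=-1)         # planar material sample
    normal = np.broadcast_to([0.0, 0.0, 1.0], pos.shape)
    albedo = np.full((256, 256, 3), 0.5)
    roughness = np.full((256, 256, 1), 0.3)

    img_near = render_colocated(pos, normal, np.array([0.0, 0.0, 0.2]), albedo, roughness)
    img_far = render_colocated(pos, normal, np.array([0.0, 0.0, 2.0]), albedo, roughness)
    relation = img_near / np.maximum(img_far, 1e-8)           # per-pixel near/far relation

In this toy model, how quickly the near/far relation changes across the specular highlight depends on the width of the specular lobe, which is the kind of roughness-correlated signal the abstract refers to.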

Project

Deep SVBRDF Estimation from Single Image under Learned Planar Lighting


Lianghao Zhang, Fangzhou Gao, Li Wang, Minjing Yu, Jiamin Cheng, Jiawan Zhang
ACM SIGGRAPH 2023 Conference Proceedings, Article No. 48, 1-11.

Estimating spatially varying BRDF from a single image without complicated acquisition devices is a challenging problem. In this paper, we propose a deep learning-based method that significantly improves the capture efficiency of a single image by learning the lighting pattern of a planar light source, and reconstructs high-quality SVBRDF by learning the global correlation prior of the input image. In our framework, the lighting pattern optimization is embedded in the training process of the network by introducing an online rendering process. The rendering process not only renders images online as network input, but also efficiently back-propagates gradients from the network to optimize the lighting pattern. Once trained, the network can estimate SVBRDFs from real photographs captured under the learned lighting pattern. Additionally, we describe an on-site capture setup that needs no careful calibration to capture the material sample efficiently; in particular, even a cell phone can be used for illumination. We demonstrate on synthetic and real data that our method can recover a wide range of materials from a single image casually captured under the learned lighting pattern.
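
The central trick, a single training loss updating both the estimation network and the physical lighting pattern, can be illustrated in a few lines of PyTorch. Everything below (the network, the renderer, the shapes) is a toy stand-in rather than the paper's code: the point is only that the online rendering step keeps the computation graph connected from the loss back to the learnable pattern.

    # Toy sketch: jointly optimizing a lighting pattern and an SVBRDF network.
    import torch

    pattern = torch.rand(1, 3, 64, 64, requires_grad=True)    # learnable planar light pattern
    net = torch.nn.Sequential(                                # stand-in for the SVBRDF estimator
        torch.nn.Conv2d(3, 32, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(32, 10, 3, padding=1),                # e.g. normal/diffuse/roughness/specular maps
    )
    opt = torch.optim.Adam([pattern, *net.parameters()], lr=1e-4)

    def render(svbrdf, pattern):
        # Placeholder differentiable renderer: any op that keeps the graph
        # from the pattern to the image serves this sketch.
        light = torch.nn.functional.interpolate(pattern, size=svbrdf.shape[-2:])
        return (svbrdf[:, :3] * light).clamp(0.0, 1.0)

    loader = [torch.rand(4, 10, 64, 64) for _ in range(8)]    # stand-in for an SVBRDF dataset

    for svbrdf_gt in loader:
        img = render(svbrdf_gt, pattern)                      # rendered online as network input
        loss = torch.nn.functional.l1_loss(net(img), svbrdf_gt)
        opt.zero_grad()
        loss.backward()                                       # gradients reach both net and pattern
        opt.step()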

Project

Transparent Object Reconstruction via Implicit Differentiable Refraction Rendering


Fangzhou Gao, Lianghao Zhang, Li Wang, Jiamin Cheng, Jiawan Zhang
ACM SIGGRAPH Asia 2023 Conference Proceedings, Article No. 57, 1-11.

Reconstructing the geometry of transparent objects has been a long-standing challenge. Existing methods rely on complex setups, such as manual annotation or darkroom conditions, to obtain object silhouettes and usually require controlled environments with designed patterns to infer ray-background correspondence. However, these intricate arrangements limit the practical application for common users. In this paper, we significantly simplify the setups and present a novel method that reconstructs transparent objects in unknown natural scenes without manual assistance. Our method incorporates two key technologies. Firstly, we introduce a volume rendering-based method that estimates object silhouettes by projecting the 3D neural field onto 2D images. This automated process yields highly accurate multi-view object silhouettes from images captured in natural scenes. Secondly, we propose transparent object optimization through differentiable refraction rendering with the neural SDF field, enabling us to optimize the refraction ray based on color rather than explicit ray-background correspondence. Additionally, our optimization includes a ray sampling method to supervise the object silhouette at a low computational cost. Extensive experiments and comparisons demonstrate that our method produces high-quality results while offering much more convenient setups.
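
The second component, optimizing refraction rays by color rather than by explicit ray-background correspondence, hinges on the refraction direction being differentiable with respect to the surface normal predicted by the neural SDF. Here is a minimal Snell's-law sketch in PyTorch; the names and the clamping choice are my assumptions, not the paper's implementation.

    # Toy sketch: differentiable refraction, so a color loss can drive geometry.
    import torch

    def refract(wi, n, eta):
        # wi: unit incident directions (pointing toward the surface),
        # n: unit normals, eta: ratio of refractive indices n1/n2.
        cos_i = -(wi * n).sum(-1, keepdim=True)
        k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
        # k < 0 would mean total internal reflection; clamping keeps the sketch differentiable.
        return eta * wi + (eta * cos_i - torch.sqrt(k.clamp_min(1e-8))) * n

    raw_normal = torch.tensor([[0.1, 0.0, 1.0]], requires_grad=True)  # stands in for an SDF gradient
    n = torch.nn.functional.normalize(raw_normal, dim=-1)
    wi = torch.tensor([[0.0, 0.0, -1.0]])                             # camera ray hitting the surface
    wo = refract(wi, n, 1.0 / 1.5)                                    # air -> glass

    # In the real pipeline, a color loss on where wo meets the background
    # drives this gradient; here we just confirm the chain is differentiable.
    wo.sum().backward()
    print(raw_normal.grad)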

Project

DeepBasis: Hand-Held Single-Image SVBRDF Capture via Two-Level Basis Material Model


Li Wang, Lianghao Zhang, Fangzhou Gao, Jiawan Zhang
ACM SIGGRAPH Asia 2023 Conference Proceedings, Article No. 85, 1-11.

Recovering the spatially-varying bi-directional reflectance distribution function (SVBRDF) from a single hand-held captured image has been a meaningful but challenging task in computer graphics. Benefiting from learned data priors, some previous methods can utilize the potential material correlations between image pixels to serve SVBRDF estimation. To further reduce the ambiguity of single-image estimation, it is necessary to integrate additional explicit material correlations. Given the flexible expressive ability of the basis material assumption, we propose DeepBasis, a deep-learning-based method integrated with this assumption. It jointly predicts basis materials and their blending weights; the estimated SVBRDF is then their linear combination. To facilitate the extraction of data priors, we introduce a two-level basis model that maintains sufficient representative power while using a fixed number of basis materials. Moreover, considering the absence of ground-truth basis materials and weights during network training, we propose a variance-consistency loss and adopt a joint prediction strategy, thereby making existing SVBRDF datasets usable for training. Additionally, due to the hand-held capture setting, the exact lighting directions are unknown. We model lighting direction estimation as a sampling problem and propose an optimization-based algorithm to find the optimal estimate. Quantitative evaluation and qualitative analysis demonstrate that DeepBasis produces higher-quality SVBRDF estimations than previous methods. All source code will be publicly released.
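
The basis-material assumption itself reduces to a small linear-algebra step: the network predicts a fixed number of basis materials plus per-pixel blending weights, and the SVBRDF is their per-pixel linear combination. A minimal sketch with assumed shapes (the basis count, channel layout, and softmax normalization are illustrative choices, not the paper's exact design):

    # Toy sketch of blending predicted basis materials into an SVBRDF.
    import torch

    B, C, H, W = 4, 10, 256, 256            # basis count, SVBRDF channels, resolution
    basis = torch.rand(B, C)                # predicted basis materials (per image)
    logits = torch.rand(B, H, W)            # predicted per-pixel blending logits
    weights = torch.softmax(logits, dim=0)  # weights sum to 1 at each pixel

    # Linear combination: svbrdf[c, y, x] = sum_b weights[b, y, x] * basis[b, c]
    svbrdf = torch.einsum('bhw,bc->chw', weights, basis)

Fixing the number of basis materials keeps the prediction head's shape constant across images, which is what lets the data priors be learned on a standard SVBRDF dataset even though no ground-truth bases or weights exist.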

Project Code