
Lianghao Zhang

Tianjin University
Ph.D. student
Computer Graphics & Computer Vision

About

Hello, I'm Lianghao Zhang, a third-year Ph.D. student at Tianjin University, supervised by Prof. Jiawan Zhang. My research interests lie in computer graphics and computer vision. I obtained my bachelor's and master's degrees from Tianjin University in 2014 and 2018, respectively. I am eager to apply my knowledge to real-world problems and to build technologies that make our world more beautiful and convenient.

Publications

Deep SVBRDF Estimation from Single Image under Learned Planar Lighting


Lianghao Zhang, Fangzhou Gao, Li Wang, Minjing Yu, Jiamin Cheng, Jiawan Zhang
ACM SIGGRAPH 2023 Conference Proceedings, Article 48, pp. 1-11.

Estimating a spatially varying BRDF from a single image without complicated acquisition devices is a challenging problem. In this paper, we propose a deep learning based method that significantly improves the capture efficiency of a single image by learning the lighting pattern of a planar light source, and reconstructs high-quality SVBRDFs by learning the global correlation prior of the input image. In our framework, the lighting pattern optimization is embedded in the training process of the network by introducing an online rendering process. This rendering process not only renders images online as the input of the network, but also efficiently back-propagates gradients from the network to optimize the lighting pattern. Once trained, the network can estimate SVBRDFs from real photographs captured under the learned lighting pattern. Additionally, we describe an on-site capture setup that needs no careful calibration to capture the material sample efficiently; in particular, even a cell phone can be used for illumination. We demonstrate on synthetic and real data that our method can recover a wide range of materials from a single image casually captured under the learned lighting pattern.
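To give a flavor of the joint optimization, here is a minimal PyTorch sketch of the core idea: the planar lighting pattern is a learnable tensor, and the differentiable online rendering step lets gradients from the estimation network flow back into it. All names (SVBRDFNet, render_under_pattern) and the toy renderer are hypothetical stand-ins under my own assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class SVBRDFNet(nn.Module):
    # Hypothetical estimator: photograph -> SVBRDF maps
    # (e.g. normal, diffuse, roughness, specular packed into 10 channels).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 10, 3, padding=1),
        )

    def forward(self, img):
        return self.net(img)

def render_under_pattern(svbrdf, pattern):
    # Toy stand-in for the differentiable online renderer: the real method
    # physically renders the material sample lit by the planar pattern.
    # Here a simple differentiable interaction keeps gradients flowing.
    light = torch.sigmoid(pattern)                        # emitted intensity in [0, 1]
    return (svbrdf[:, :3] * light.mean(dim=(2, 3), keepdim=True)).clamp(0, 1)

pattern = nn.Parameter(torch.randn(1, 3, 64, 64))         # learnable planar lighting pattern
net = SVBRDFNet()
opt = torch.optim.Adam(list(net.parameters()) + [pattern], lr=1e-4)

gt_svbrdf = torch.rand(4, 10, 128, 128)                   # stand-in training batch
opt.zero_grad()
img = render_under_pattern(gt_svbrdf, pattern)            # online rendering -> network input
loss = (net(img) - gt_svbrdf).abs().mean()
loss.backward()                                           # gradients reach both net and pattern
opt.step()

Because the rendered image is a differentiable function of the pattern, one optimizer step updates the lighting pattern and the network weights together, which is the mechanism the abstract describes.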

Project

Transparent Object Reconstruction via Implicit Differentiable Refraction Rendering


Fangzhou Gao, Lianghao Zhang, Li Wang, Jiamin Cheng, Jiawan Zhang
To appear in ACM SIGGRAPH Asia 2023.

Reconstructing the geometry of transparent objects has been a long-standing challenge. Existing methods rely on complex setups, such as manual annotation or darkroom conditions, to obtain object silhouettes and usually require controlled environments with designed patterns to infer ray-background correspondence. However, these intricate arrangements limit the practical application for common users. In this paper, we significantly simplify the setups and present a novel method that reconstructs transparent objects in unknown natural scenes without manual assistance. Our method incorporates two key technologies. Firstly, we introduce a volume rendering-based method that estimates object silhouettes by projecting the 3D neural field onto 2D images. This automated process yields highly accurate multi-view object silhouettes from images captured in natural scenes. Secondly, we propose transparent object optimization through differentiable refraction rendering with the neural SDF field, enabling us to optimize the refraction ray based on color rather than explicit ray-background correspondence. Additionally, our optimization includes a ray sampling method to supervise the object silhouette at a low computational cost. Extensive experiments and comparisons demonstrate that our method produces high-quality results while offering much more convenient setups.
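The color-based refraction supervision can be sketched as follows: normals come from the gradient of the SDF, rays are bent by the vector form of Snell's law, and the loss compares the color fetched along the refracted ray with the observed pixel, so no explicit ray-background correspondence is needed. The sdf and env functions below are toy stand-ins (a unit sphere and a direction-coded environment) assumed for illustration, not the paper's neural fields.

import torch
import torch.nn.functional as F

def refract(d, n, eta):
    # Vector form of Snell's law: bend direction d at a surface with normal n,
    # where eta is the index ratio (incident side / transmitted side).
    cos_i = -(d * n).sum(-1, keepdim=True)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    cos_t = torch.sqrt((1.0 - sin2_t).clamp(min=0.0))     # clamp ignores total internal reflection
    return F.normalize(eta * d + (eta * cos_i - cos_t) * n, dim=-1)

def sdf_normal(sdf, x):
    # Surface normal as the normalized gradient of the SDF, kept differentiable
    # so gradients can flow back through the normals during optimization.
    x = x.detach().requires_grad_(True)
    (g,) = torch.autograd.grad(sdf(x).sum(), x, create_graph=True)
    return F.normalize(g, dim=-1)

sdf = lambda p: p.norm(dim=-1, keepdim=True) - 1.0        # toy SDF: unit sphere
env = lambda d: d * 0.5 + 0.5                             # toy environment color lookup

x_hit = F.normalize(torch.randn(8, 3), dim=-1)            # pretend ray-marched surface hits
d_in = -x_hit                                             # rays aimed straight at the surface
n = sdf_normal(sdf, x_hit)
d_out = refract(d_in, n, eta=1.0 / 1.5)                   # air -> glass at the first interface
loss = (env(d_out) - torch.rand(8, 3)).abs().mean()       # color loss vs. observed pixels

In the full method this color loss would back-propagate through the refracted rays and SDF normals into the neural SDF itself; the sketch only shows the differentiable chain.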

Project

DeepBasis: Hand-Held Single-Image SVBRDF Capture via Two-Level Basis Material Model


Li Wang, Lianghao Zhang, Fangzhou Gao, Jiawan Zhang
To appear in ACM SIGGRAPH Asia 2023.

Recovering a spatially varying bidirectional reflectance distribution function (SVBRDF) from a single hand-held captured image is a meaningful but challenging task in computer graphics. Benefiting from learned data priors, some previous methods can exploit the potential material correlations between image pixels to aid SVBRDF estimation. To further reduce the ambiguity of single-image estimation, it is necessary to integrate additional explicit material correlations. Given the flexible expressive ability of the basis material assumption, we propose DeepBasis, a deep-learning-based method built on this assumption. It jointly predicts basis materials and their blending weights; the estimated SVBRDF is then their linear combination. To facilitate the extraction of data priors, we introduce a two-level basis model that retains sufficient representational power while using a fixed number of basis materials. Moreover, since ground-truth basis materials and weights are unavailable during network training, we propose a variance-consistency loss and adopt a joint prediction strategy, making the existing SVBRDF dataset usable for training. Additionally, because of the hand-held capture setting, the exact lighting directions are unknown. We model lighting direction estimation as a sampling problem and propose an optimization-based algorithm to find the optimal estimate. Quantitative evaluation and qualitative analysis demonstrate that DeepBasis produces higher-quality SVBRDF estimations than previous methods. All source code will be publicly released.
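At inference time the basis assumption reduces to a per-pixel linear combination: the network predicts a small set of basis materials per image plus a per-pixel weight map, and the SVBRDF is their weighted sum. The PyTorch sketch below shows only this blending structure; BasisHead and all layer choices are hypothetical and do not reflect the paper's full architecture or losses.

import torch
import torch.nn as nn

K, C, H, W = 4, 10, 256, 256   # 4 basis materials, 10 SVBRDF channels

class BasisHead(nn.Module):
    # Hypothetical prediction heads: one global basis set per image,
    # plus a per-pixel blending-weight map over the K bases.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.weights = nn.Conv2d(32, K, 1)                    # per-pixel logits over K bases
        self.basis = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                   nn.Conv2d(32, K * C, 1))   # K basis materials per image

    def forward(self, img):
        f = self.backbone(img)
        w = torch.softmax(self.weights(f), dim=1)             # (B, K, H, W), sums to 1 per pixel
        b = self.basis(f).view(-1, K, C, 1, 1)                # (B, K, C, 1, 1)
        return w, b

def blend(w, b):
    # Estimated SVBRDF = per-pixel linear combination of the basis materials.
    return (w.unsqueeze(2) * b).sum(dim=1)                    # (B, C, H, W)

img = torch.rand(2, 3, H, W)
w, b = BasisHead()(img)
svbrdf = blend(w, b)                                          # (2, 10, 256, 256)

Predicting weights and bases jointly, as here, is what lets a variance-consistency loss supervise the decomposition even though no ground-truth bases exist in the dataset.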

Project | Code