Doctor of Philosophy (Ph.D.)
Global light transport, including diffuse interreflections, caustics, refractions, and subsurface scattering, is essential for photorealistic rendering. However, rendering these phenomena is very time-consuming. Furthermore, the accuracy of many inverse rendering methods in computer graphics and computer vision is adversely affected by the presence of global light transport. Separating the direct and global light transport components is therefore useful both for designing new rendering methods and for improving the accuracy of many inverse rendering methods. Prior work on separating direct and global light transport from photographs either requires expensive hardware, requires multiple photographs of the scene, or fails to accurately recover high-frequency details. In this thesis, we propose two efficient and accurate single-image direct-global separation methods. The first method is based on a sparse coding framework and produces good-quality results on a variety of synthetic scenes. In practice, however, acquisition limitations make the sparse coding method infeasible without accurate optical calibration, which can be difficult to obtain. To handle the full range of acquisition-related artifacts introduced by the optical system, we introduce a second, data-driven approach: a novel deep-learning-based method that automatically compensates for these artifacts. Our deep learning method achieves high-quality decompositions on both synthetic and real scenes from only a single photograph of the scene illuminated by a low-cost projector. Furthermore, we resolve the lighting-frequency constraints of prior work, yielding more accurate decompositions for lighting-frequency-sensitive features such as subsurface scattering and specular light transport.
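For context, the multi-photograph baseline that such single-image methods improve upon is separation under high-frequency illumination (Nayar et al., 2006), which the abstract alludes to as prior work requiring multiple photographs. A minimal sketch of that two-image decomposition follows; the function name and the scalar inputs are illustrative, not from the thesis:

```python
def separate_direct_global(l_max, l_min):
    """Two-image direct-global separation under high-frequency illumination.

    With a checkerboard pattern that lights roughly half of the scene points,
    each pixel's maximum observed radiance across shifted patterns is
    l_max ~ direct + global / 2, and its minimum is l_min ~ global / 2.
    Solving these two relations per pixel gives the decomposition.
    """
    direct = l_max - l_min       # direct = l_max - l_min
    global_component = 2.0 * l_min  # global = 2 * l_min
    return direct, global_component

# Example with synthetic per-pixel radiances (illustrative values):
d, g = separate_direct_global(1.0, 0.25)  # d = 0.75, g = 0.5
```

The single-image methods proposed in the thesis avoid capturing the shifted-pattern image sequence that this baseline requires.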
© The Author
Duan, Zhaoliang, "Single Image Direct-Global Illumination Separation" (2020). Dissertations, Theses, and Masters Projects. William & Mary. Paper 1616444522.