ORCID ID
https://orcid.org/0009-0001-1915-6887
Date Awarded
2023
Document Type
Thesis
Degree Name
Master of Science (M.Sc.)
Department
Computer Science
Advisor
Pieter Peers
Committee Member
Andreas Stathopoulos
Committee Member
Huajie Shao
Abstract
We formulate SVBRDF estimation from photographs as a diffusion task. To model the distribution of spatially varying materials, we first train a novel unconditional SVBRDF diffusion backbone model on a large set of 312,165 synthetic spatially varying material exemplars. This SVBRDF diffusion backbone model, named MatFusion, can then serve as the basis for refining a conditional diffusion model to estimate material properties from a photograph under controlled or uncontrolled lighting. Our backbone MatFusion model is trained using only a loss on the reflectance properties, and refinement can therefore be paired with more expensive rendering methods without the need for backpropagation during training. Because the conditional SVBRDF diffusion models are generative, we can synthesize multiple SVBRDF estimates from the same input photograph, from which the user can select the one that best matches their expectations. We demonstrate the flexibility of our method by refining different SVBRDF diffusion models conditioned on different types of incident lighting, and show that, for a single photograph under colocated flash lighting, our method achieves equal or better accuracy than existing SVBRDF estimation methods.
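To make the multi-estimate sampling described in the abstract concrete, the sketch below shows, in broad strokes, how one might draw several candidate SVBRDF estimates from a single photograph with a photograph-conditioned diffusion model using generic DDPM-style ancestral sampling in PyTorch. This is a minimal illustration under stated assumptions, not the thesis's actual model or code: the ToyConditionalDenoiser, the 10-channel SVBRDF packing (e.g. diffuse, specular, roughness, and normal maps), and the linear noise schedule are all hypothetical stand-ins for the much larger MatFusion backbone and the training/sampling details given in the thesis.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a photograph-conditioned SVBRDF denoiser.
# The real MatFusion backbone is a large U-Net; shapes and names here
# are illustrative only.
class ToyConditionalDenoiser(nn.Module):
    def __init__(self, svbrdf_channels=10, photo_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(svbrdf_channels + photo_channels + 1, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, svbrdf_channels, 3, padding=1),
        )

    def forward(self, x_t, t, photo):
        # Broadcast the normalized timestep as an extra input channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, photo, t_map], dim=1))

@torch.no_grad()
def sample_svbrdfs(model, photo, num_samples=4, num_steps=50):
    """Generic DDPM ancestral sampling: draw several SVBRDF estimates
    for one photograph by starting each sample from independent noise."""
    betas = torch.linspace(1e-4, 0.02, num_steps)  # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    photos = photo.expand(num_samples, -1, -1, -1)
    x = torch.randn(num_samples, 10, *photo.shape[2:])  # pure noise start
    for step in reversed(range(num_steps)):
        t = torch.full((num_samples,), step / num_steps)
        eps = model(x, t, photos)  # predicted noise at this timestep
        a, ab = alphas[step], alpha_bars[step]
        x = (x - (1 - a) / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
        if step > 0:  # re-inject noise on all but the final step
            x = x + torch.sqrt(betas[step]) * torch.randn_like(x)
    return x  # num_samples candidate SVBRDF maps

model = ToyConditionalDenoiser()
photo = torch.rand(1, 3, 64, 64)  # placeholder flash-lit photograph
candidates = sample_svbrdfs(model, photo, num_samples=4)
print(candidates.shape)  # torch.Size([4, 10, 64, 64])
```

Because each candidate starts from an independent Gaussian noise sample, the same conditioning photograph yields distinct plausible SVBRDF estimates, which is what enables the user selection step the abstract describes.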
DOI
https://dx.doi.org/10.21220/s2-ar28-g979
Rights
© The Author
Recommended Citation
Sartor, Samuel Lee, "MatFusion: A Generative Diffusion Model for SVBRDF Capture" (2023). Dissertations, Theses, and Masters Projects. William & Mary. Paper 1697552550.
https://dx.doi.org/10.21220/s2-ar28-g979