About this Event
Estimating 3D articulated shapes such as animal bodies from monocular images is highly ill-posed due to ambiguities in camera viewpoint, pose, texture, lighting, etc. Most prior methods rely on large-scale image datasets, dense temporal correspondence, or human annotations such as camera poses, 2D keypoints, and shape templates. Instead, my recent work, LASSIE, proposes a novel and practical setting for learning articulated shapes from only 10-30 web images, using geometric part priors and a skeleton-based neural surface representation. My follow-up paper, Hi-LASSIE, automatically discovers a class-specific 3D skeleton and produces higher-fidelity details for each instance. My latest work, ARTIC3D, combines 3D geometric priors with 2D diffusion priors, enabling robust and detailed textured reconstruction from occluded or truncated images. Extensive evaluations on multiple existing datasets and self-collected web images demonstrate that the proposed methods not only produce higher-quality outputs than prior art but also enable part-based applications such as novel shape generation and realistic animation.
Chun-Han Yao is a Ph.D. candidate in Computer Science and Electrical Engineering at UC Merced, advised by Professor Ming-Hsuan Yang. Chun-Han's research interest lies in monocular 3D reconstruction of rigid objects, human bodies, and general articulated shapes, especially in weakly-supervised or self-supervised settings. Chun-Han's key research contributions include learning geometric part priors for 3D reconstruction and incorporating 2D generative priors for animal body estimation. Chun-Han also has several top-conference publications on video object detection, domain adaptation, and federated learning from his industry collaborations.