Search Results for author: Tarik Kelestemur

Found 5 papers, 1 paper with code

GenDP: 3D Semantic Fields for Category-Level Generalizable Diffusion Policy

no code implementations • 23 Oct 2024 • YiXuan Wang, Guang Yin, Binghao Huang, Tarik Kelestemur, Jiuguang Wang, Yunzhu Li

Diffusion-based policies have shown remarkable capability in executing complex robotic manipulation tasks but lack explicit characterization of geometry and semantics, which often limits their ability to generalize to unseen objects and layouts.

Theia: Distilling Diverse Vision Foundation Models for Robot Learning

1 code implementation • 29 Jul 2024 • Jinghuan Shang, Karl Schmeckpeper, Brandon B. May, Maria Vittoria Minniti, Tarik Kelestemur, David Watkins, Laura Herlant

Vision-based robot policy learning, which maps visual inputs to actions, necessitates a holistic understanding of diverse visual tasks beyond single-task needs like classification or segmentation.
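
The title and excerpt suggest distilling several vision foundation models into a single encoder for robot learning. The sketch below is a hypothetical illustration of multi-teacher feature distillation, not the Theia code: the backbone, projection heads, teacher dimensions, and loss are placeholder choices.

# Hypothetical sketch of multi-teacher feature distillation (not the paper's code).
# A student encoder is trained so that per-teacher heads reproduce the features
# of several frozen vision foundation models on the same image batch.
import torch
import torch.nn as nn

class DistilledEncoder(nn.Module):
    def __init__(self, feat_dim=256, teacher_dims=(768, 1024)):
        super().__init__()
        # Toy student backbone; a real one would be a ViT or CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        # One projection head per teacher feature space.
        self.heads = nn.ModuleList([nn.Linear(feat_dim, d) for d in teacher_dims])

    def forward(self, images):
        z = self.backbone(images)
        return z, [head(z) for head in self.heads]

def distillation_loss(pred_feats, teacher_feats):
    # Match each head's output to the corresponding frozen teacher feature.
    return sum(nn.functional.mse_loss(p, t) for p, t in zip(pred_feats, teacher_feats))

# Usage with random stand-ins for the frozen teachers' outputs:
model = DistilledEncoder()
images = torch.randn(4, 3, 224, 224)
_, preds = model(images)
teachers = [torch.randn(4, 768), torch.randn(4, 1024)]
loss = distillation_loss(preds, teachers)
loss.backward()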

Equivariant Diffusion Policy

no code implementations • 1 Jul 2024 • Dian Wang, Stephen Hart, David Surovik, Tarik Kelestemur, Haojie Huang, Haibo Zhao, Mark Yeatman, Jiuguang Wang, Robin Walters, Robert Platt

Recent work has shown diffusion models are an effective approach to learning the multimodal distributions arising from demonstration data in behavior cloning.
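
As a rough illustration of the diffusion-based behavior cloning the excerpt refers to, the following is a minimal, hypothetical training step that corrupts demonstrated actions with noise and trains an observation-conditioned denoiser to predict that noise. The network, noise schedule, and dimensions are made up for the sketch, and the equivariant structure that is the paper's contribution is not shown.

# Minimal sketch of diffusion-style behavior cloning (hypothetical names and
# schedule; the paper's equivariant denoiser architecture is not shown here).
import torch
import torch.nn as nn

class NoisePredictor(nn.Module):
    """Predicts the noise added to an action, conditioned on an observation."""
    def __init__(self, obs_dim=10, act_dim=7, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, noisy_action, t):
        return self.net(torch.cat([obs, noisy_action, t], dim=-1))

def training_step(model, obs, action, num_steps=100):
    # DDPM-style objective: add noise at a random timestep, then regress
    # the noise from the corrupted action given the observation.
    t = torch.randint(0, num_steps, (obs.shape[0], 1)).float() / num_steps
    noise = torch.randn_like(action)
    alpha = 1.0 - t  # toy linear schedule for illustration only
    noisy_action = alpha.sqrt() * action + (1 - alpha).sqrt() * noise
    pred = model(obs, noisy_action, t)
    return nn.functional.mse_loss(pred, noise)

model = NoisePredictor()
obs, action = torch.randn(8, 10), torch.randn(8, 7)
loss = training_step(model, obs, action)
loss.backward()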


D$^3$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement

no code implementations • 28 Sep 2023 • YiXuan Wang, Mingtong Zhang, Zhuoran Li, Tarik Kelestemur, Katherine Driggs-Campbell, Jiajun Wu, Li Fei-Fei, Yunzhu Li

D$^3$Fields are implicit 3D representations that take in 3D points and output semantic features and instance masks.
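
A minimal sketch of the interface described in the excerpt: a field queried at 3D points that returns per-point semantic features and instance-mask probabilities. The MLP below is a hypothetical stand-in that only illustrates this input/output contract, not how the paper actually constructs the fields.

# Hypothetical sketch of an implicit 3D descriptor field interface: a function
# from 3D query points to per-point semantic features and instance logits.
import torch
import torch.nn as nn

class DescriptorField(nn.Module):
    def __init__(self, feat_dim=64, num_instances=8, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.feat_head = nn.Linear(hidden, feat_dim)       # semantic descriptor
        self.mask_head = nn.Linear(hidden, num_instances)  # instance logits

    def forward(self, points):  # points: (N, 3)
        h = self.trunk(points)
        return self.feat_head(h), self.mask_head(h).softmax(dim=-1)

field = DescriptorField()
pts = torch.rand(1024, 3)               # query points in the workspace
features, instance_masks = field(pts)   # shapes: (1024, 64) and (1024, 8)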
