Search Results for author: Luxin Zhang

Found 6 papers, 3 papers with code

AVID: Any-Length Video Inpainting with Diffusion Model

1 code implementation • 6 Dec 2023 • Zhixing Zhang, Bichen Wu, Xiaoyan Wang, Yaqiao Luo, Luxin Zhang, Yinan Zhao, Peter Vajda, Dimitris Metaxas, Licheng Yu

Given a video, a masked region in its initial frame, and an editing prompt, the task requires a model to fill in each frame according to the editing guidance while keeping the out-of-mask region intact.

Image Inpainting · Video Inpainting
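Keeping the out-of-mask region intact is commonly enforced by compositing the model's output with the original frame. A minimal NumPy sketch of that blending step (an illustration of the general idea, not the paper's exact pipeline):

```python
import numpy as np

def composite_frame(original, generated, mask):
    """Blend a generated frame into the original, editing only masked pixels.

    original, generated: float arrays of shape (H, W, C)
    mask: float array of shape (H, W, 1); 1.0 inside the edit region, 0.0 outside
    """
    return mask * generated + (1.0 - mask) * original

# Toy example: a 2x2 single-channel "frame" with only the top-left pixel masked
orig = np.array([[[0.1], [0.2]], [[0.3], [0.4]]])
gen = np.ones_like(orig)
mask = np.array([[[1.0], [0.0]], [[0.0], [0.0]]])
out = composite_frame(orig, gen, mask)
# Only the masked top-left pixel is replaced; everything else stays intact.
```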

Cloth Region Segmentation for Robust Grasp Selection

1 code implementation • 13 Aug 2020 • Jianing Qian, Thomas Weng, Luxin Zhang, Brian Okorn, David Held

Our approach trains a network to segment the edges and corners of a cloth from a depth image, distinguishing such regions from wrinkles or folds.

Robotics
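The paper trains a network for this segmentation; as a crude classical proxy (purely illustrative, not the authors' learned method), depth discontinuities can be flagged by thresholding the depth-gradient magnitude:

```python
import numpy as np

def depth_edge_mask(depth, threshold=0.05):
    """Mark pixels whose depth-gradient magnitude exceeds a threshold.

    depth: float array of shape (H, W), e.g. in meters
    Returns a boolean (H, W) mask, True at sharp depth discontinuities --
    a rough stand-in for the learned cloth edge/corner segmentation.
    """
    gy, gx = np.gradient(depth)
    return np.hypot(gx, gy) > threshold

# Toy depth image: a flat surface at 1.0 m with a cloth edge stepping to 0.9 m
depth = np.ones((4, 4))
depth[:, 2:] = 0.9
mask = depth_edge_mask(depth, threshold=0.02)
# Pixels adjacent to the depth step are marked; flat regions are not.
```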

Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset

1 code implementation • 15 Mar 2019 • Ruohan Zhang, Calen Walshe, Zhuode Liu, Lin Guan, Karl S. Muller, Jake A. Whritner, Luxin Zhang, Mary M. Hayhoe, Dana H. Ballard

We hope that the scale and quality of this dataset can provide more opportunities to researchers in the areas of visual attention, imitation learning, and reinforcement learning.

Imitation Learning

AGIL: Learning Attention from Human for Visuomotor Tasks

no code implementations • ECCV 2018 • Ruohan Zhang, Zhuode Liu, Luxin Zhang, Jake A. Whritner, Karl S. Muller, Mary M. Hayhoe, Dana H. Ballard

When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from their gaze.

Atari Games · Imitation Learning
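One common way to exploit inferred gaze is to reweight image pixels by a predicted attention heatmap before feeding them to the policy network. A minimal sketch of such multiplicative gaze modulation (an assumption for illustration, not necessarily the paper's exact fusion scheme):

```python
import numpy as np

def gaze_modulate(frame, gaze_heatmap):
    """Weight image pixels by a predicted human-gaze heatmap.

    frame: float array (H, W), grayscale game frame in [0, 1]
    gaze_heatmap: float array (H, W), non-negative attention map
    The heatmap is normalized to peak 1, so the most-attended pixel keeps
    (approximately) its original intensity and unattended pixels are suppressed.
    """
    norm = gaze_heatmap / (gaze_heatmap.max() + 1e-8)
    return frame * norm

# Toy example: uniform frame, gaze concentrated on the top-left pixel
frame = np.full((2, 2), 0.5)
heat = np.array([[2.0, 0.0], [0.0, 0.0]])
out = gaze_modulate(frame, heat)
# Attended pixel retains its intensity; unattended pixels go to zero.
```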
