Search Results for author: Shuo Cheng

Found 14 papers, 4 papers with code

Pose Transferrable Person Re-Identification

no code implementations CVPR 2018 Jinxian Liu, Bingbing Ni, Yichao Yan, Peng Zhou, Shuo Cheng, Jianguo Hu

On the other hand, in addition to the conventional discriminator of GAN (i.e., distinguishing between REAL/FAKE samples), we propose a novel guider sub-network which encourages the generated sample (i.e., with a novel pose) toward better satisfying the ReID losses (i.e., the cross-entropy ReID loss and the triplet ReID loss).

Person Re-Identification
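As a hedged illustration of the objective sketched in the excerpt above, the snippet below combines a conventional REAL/FAKE adversarial term with guider terms built from the cross-entropy and triplet ReID losses. All function names, arguments, and weights are assumptions for illustration, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_logits_fake, guider_logits, identity_labels,
                   anchor_emb, positive_emb, negative_emb,
                   w_adv=1.0, w_ce=1.0, w_tri=1.0):
    # Adversarial term: fool the conventional REAL/FAKE discriminator.
    adv = F.binary_cross_entropy_with_logits(
        d_logits_fake, torch.ones_like(d_logits_fake))
    # Guider terms: cross-entropy ReID loss over identities, plus a
    # triplet ReID loss over embeddings of the generated sample.
    ce = F.cross_entropy(guider_logits, identity_labels)
    tri = F.triplet_margin_loss(anchor_emb, positive_emb, negative_emb,
                                margin=0.3)
    return w_adv * adv + w_ce * ce + w_tri * tri
```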

Structure Preserving Video Prediction

no code implementations CVPR 2018 Jingwei Xu, Bingbing Ni, Zefan Li, Shuo Cheng, Xiaokang Yang

Despite the recent emergence of adversarial methods for video prediction, existing algorithms often produce unsatisfactory results in image regions with rich structural information (e.g., object boundaries) and detailed motion (e.g., articulated body movement).

Object · Video Prediction

Fine-Grained Video Captioning for Sports Narrative

no code implementations CVPR 2018 Huanyu Yu, Shuo Cheng, Bingbing Ni, Minsi Wang, Jian Zhang, Xiaokang Yang

First, to facilitate this novel line of research on fine-grained video captioning, we collected a new dataset, the Fine-grained Sports Narrative dataset (FSN), which contains 2K sports videos with ground-truth narratives from YouTube.com.

2k · Video Captioning

Normal Assisted Stereo Depth Estimation

1 code implementation CVPR 2020 Uday Kusupati, Shuo Cheng, Rui Chen, Hao Su

We couple the learning of a multi-view normal estimation module and a multi-view depth estimation module.

Stereo Depth Estimation
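The excerpt only states that the two modules are trained jointly; the sketch below shows one plausible way such coupling could look, with a consistency term tying normals derived from the predicted depth to the normal branch's output. The helper depth_to_normal and the loss weights are hypothetical, not the paper's formulation.

```python
import torch.nn.functional as F

def coupled_loss(pred_depth, gt_depth, pred_normal, gt_normal,
                 depth_to_normal, w_consist=0.5):
    # Supervised terms for each branch.
    l_depth = F.l1_loss(pred_depth, gt_depth)
    l_normal = 1.0 - F.cosine_similarity(pred_normal, gt_normal, dim=1).mean()
    # Coupling: normals implied by the predicted depth should agree with
    # the normal branch (depth_to_normal is a hypothetical helper).
    implied = depth_to_normal(pred_depth)
    l_consist = 1.0 - F.cosine_similarity(implied, pred_normal, dim=1).mean()
    return l_depth + l_normal + w_consist * l_consist
```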

Deep Stereo using Adaptive Thin Volume Representation with Uncertainty Awareness

1 code implementation CVPR 2020 Shuo Cheng, Zexiang Xu, Shilin Zhu, Zhuwen Li, Li Erran Li, Ravi Ramamoorthi, Hao Su

In contrast, we propose adaptive thin volumes (ATVs); in an ATV, the depth hypothesis of each plane is spatially varying, which adapts to the uncertainties of previous per-pixel depth predictions.

3D Reconstruction · Point Clouds
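A minimal sketch of the ATV idea as described above: each pixel receives its own set of depth hypotheses, centered on the previous per-pixel prediction and scaled by its uncertainty, so confident pixels get thin volumes and uncertain pixels get wide ones. The exact interval rule here is an assumption, not the paper's formulation.

```python
import torch

def adaptive_depth_hypotheses(prev_depth, prev_uncertainty, num_planes=8):
    # prev_depth, prev_uncertainty: (B, H, W) from the previous stage.
    B, H, W = prev_depth.shape
    # Normalized plane offsets in [-1, 1].
    offsets = torch.linspace(-1.0, 1.0, num_planes,
                             device=prev_depth.device)    # (D,)
    offsets = offsets.view(1, num_planes, 1, 1)           # (1, D, 1, 1)
    # Spatially varying hypotheses: wide intervals where uncertainty is
    # high, thin ones where the previous prediction is confident.
    hyps = prev_depth.unsqueeze(1) + offsets * prev_uncertainty.unsqueeze(1)
    return hyps  # (B, D, H, W): one depth hypothesis per plane per pixel
```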

Learning to Regrasp by Learning to Place

1 code implementation 18 Sep 2021 Shuo Cheng, Kaichun Mo, Lin Shao

In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses.

Object

Using Augmented Face Images to Improve Facial Recognition Tasks

no code implementations 13 May 2022 Shuo Cheng, Guoxian Song, Wan-Chun Ma, Chao Wang, Linjie Luo

We present a framework that uses GAN-augmented images to complement specific, typically underrepresented attributes for machine learning model training.

BIG-bench Machine Learning
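A rough sketch of the kind of pipeline the excerpt describes: real training pairs are topped up with GAN-generated images for attributes that fall below a target count. Both generate_for_attribute and the balancing rule are hypothetical placeholders, not the paper's method.

```python
import random
from collections import Counter

def balance_with_gan(dataset, generate_for_attribute, target_count):
    # dataset: list of (image, attribute_label) pairs.
    counts = Counter(attr for _, attr in dataset)
    augmented = list(dataset)
    for attr, n in counts.items():
        # Synthesize extra examples only for underrepresented attributes.
        for _ in range(max(0, target_count - n)):
            augmented.append((generate_for_attribute(attr), attr))
    random.shuffle(augmented)
    return augmented
```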

LEAGUE: Guided Skill Learning and Abstraction for Long-Horizon Manipulation

no code implementations 23 Oct 2022 Shuo Cheng, Danfei Xu

We also show that the learned skills can be reused to accelerate learning in new task domains and transfer to a physical robot platform.

Motion Planning · Reinforcement Learning (RL) · +1

Learning to Discern: Imitating Heterogeneous Human Demonstrations with Preference and Representation Learning

no code implementations 22 Oct 2023 Sachit Kuhar, Shuo Cheng, Shivang Chopra, Matthew Bronars, Danfei Xu

Furthermore, the intrinsic heterogeneity in human behavior can produce equally successful but disparate demonstrations, further exacerbating the challenge of discerning demonstration quality.

Imitation Learning · Representation Learning

NOD-TAMP: Multi-Step Manipulation Planning with Neural Object Descriptors

no code implementations 2 Nov 2023 Shuo Cheng, Caelan Garrett, Ajay Mandlekar, Danfei Xu

Developing intelligent robots for complex manipulation tasks in household and factory settings remains challenging due to long-horizon tasks, contact-rich manipulation, and the need to generalize across a wide variety of object shapes and scene layouts.

Motion Planning · Object · +1

A Survey of Optimization-based Task and Motion Planning: From Classical To Learning Approaches

no code implementations 3 Apr 2024 Zhigen Zhao, Shuo Cheng, Yan Ding, Ziyi Zhou, Shiqi Zhang, Danfei Xu, Ye Zhao

Task and Motion Planning (TAMP) integrates high-level task planning and low-level motion planning to equip robots with the autonomy to effectively reason over long-horizon, dynamic tasks.

Motion Planning · Task and Motion Planning
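To make the integration the survey describes concrete, here is a schematic TAMP loop under common assumptions: a task planner proposes a symbolic action skeleton, a motion planner attempts to refine each action into a trajectory, and motion-level failures feed back as constraints on the task planner. The callables are placeholders, not a real library API.

```python
def tamp(plan_task, plan_motion, goal, max_iters=100):
    constraints = []
    for _ in range(max_iters):
        skeleton = plan_task(goal, constraints)   # high-level action sequence
        if skeleton is None:
            return None                           # goal symbolically unreachable
        trajectories = []
        for action in skeleton:
            traj = plan_motion(action)            # low-level refinement
            if traj is None:
                # Record the failure so the task planner avoids this skeleton.
                constraints.append(action)
                break
            trajectories.append(traj)
        else:
            return skeleton, trajectories         # fully refined plan
    return None
```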
