Search Results for author: Zhangjie Cao

Found 37 papers, 16 papers with code

Out-of-Dynamics Imitation Learning from Multimodal Demonstrations

1 code implementation • 13 Nov 2022 • Yiwen Qiu, Jialong Wu, Zhangjie Cao, Mingsheng Long

Existing imitation learning works mainly assume that the demonstrator who collects demonstrations shares the same dynamics as the imitator.

Imitation Learning

Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models

no code implementations • 8 Jun 2022 • Yang Shu, Zhangjie Cao, Ziyang Zhang, Jianmin Wang, Mingsheng Long

The proposed framework can be trained end-to-end with the target task-specific loss, where it learns to explore better pathway configurations and exploit the knowledge in pre-trained models for each target datum.

Transfer Learning

MetaSets: Meta-Learning on Point Sets for Generalizable Representations

no code implementations • CVPR 2021 • Chao Huang, Zhangjie Cao, Yunbo Wang, Jianmin Wang, Mingsheng Long

It is a challenging problem due to the substantial geometry shift from simulated to real data: most existing 3D models underperform because they overfit to the complete geometries of the source domain.

Domain Generalization • Meta-Learning

From Big to Small: Adaptive Learning to Partial-Set Domains

1 code implementation • 14 Mar 2022 • Zhangjie Cao, Kaichao You, Ziyang Zhang, Jianmin Wang, Mingsheng Long

However, the common requirement of an identical class space shared across domains hinders the application of domain adaptation to partial-set domains.

Partial Domain Adaptation

Leveraging Smooth Attention Prior for Multi-Agent Trajectory Prediction

no code implementations • 8 Mar 2022 • Zhangjie Cao, Erdem Biyik, Guy Rosman, Dorsa Sadigh

To forecast a reasonable future trajectory at a given time, each agent needs to attend to interactions with only a small group of the most relevant agents, rather than unnecessarily attending to all other agents.

Trajectory Prediction

Weakly Supervised Correspondence Learning

no code implementations • 2 Mar 2022 • Zihan Wang, Zhangjie Cao, Yilun Hao, Dorsa Sadigh

Correspondence learning is a fundamental problem in robotics, which aims to learn a mapping between state-action pairs of agents with different dynamics or embodiments.

Learning from Imperfect Demonstrations via Adversarial Confidence Transfer

no code implementations • 7 Feb 2022 • Zhangjie Cao, Zihan Wang, Dorsa Sadigh

Existing learning from demonstration algorithms usually assume access to expert demonstrations.

Learning Feasibility to Imitate Demonstrators with Different Dynamics

2 code implementations • 28 Oct 2021 • Zhangjie Cao, Yilun Hao, Mengxi Li, Dorsa Sadigh

The goal of learning from demonstrations is to learn a policy for an agent (imitator) by mimicking the behavior in the demonstrations.

Confidence-Aware Imitation Learning from Demonstrations with Varying Optimality

2 code implementations • NeurIPS 2021 • Songyuan Zhang, Zhangjie Cao, Dorsa Sadigh, Yanan Sui

Our results show that CAIL significantly outperforms other imitation learning methods from demonstrations with varying optimality.

Imitation Learning

Omni-Training: Bridging Pre-Training and Meta-Training for Few-Shot Learning

no code implementations • 14 Oct 2021 • Yang Shu, Zhangjie Cao, Jinghan Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long

While pre-training and meta-training can create deep models powerful for few-shot generalization, we find that pre-training and meta-training focus respectively on cross-domain transferability and cross-task transferability, which restricts their data efficiency in entangled settings of domain shift and task shift.

Few-Shot Learning • Transfer Learning

Multi-Agent Imitation Learning with Copulas

no code implementations • 10 Jul 2021 • Hongwei Wang, Lantao Yu, Zhangjie Cao, Stefano Ermon

Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions, which is essential for understanding physical, social, and team-play systems.

Imitation Learning

Zoo-Tuning: Adaptive Transfer from a Zoo of Models

no code implementations • 29 Jun 2021 • Yang Shu, Zhi Kou, Zhangjie Cao, Jianmin Wang, Mingsheng Long

We propose \emph{Zoo-Tuning} to address these challenges, which learns to adaptively transfer the parameters of pretrained models to the target task.

Facial Landmark Detection • Image Classification • +1

Transferable Query Selection for Active Domain Adaptation

no code implementations • CVPR 2021 • Bo Fu, Zhangjie Cao, Jianmin Wang, Mingsheng Long

Due to the domain shift, the query selection criteria of prior active learning methods may be ineffective at selecting the most informative target samples for annotation.

Active Learning • Unsupervised Domain Adaptation

Open Domain Generalization with Domain-Augmented Meta-Learning

no code implementations • CVPR 2021 • Yang Shu, Zhangjie Cao, Chenyu Wang, Jianmin Wang, Mingsheng Long

Leveraging available datasets to learn a model that generalizes well to unseen domains is important for computer vision, especially when annotated data from the unseen domain are unavailable.

Domain Generalization • Meta-Learning

Learning from Imperfect Demonstrations from Agents with Varying Dynamics

1 code implementation • 10 Mar 2021 • Zhangjie Cao, Dorsa Sadigh

The proposed score enables learning from more informative demonstrations while disregarding less relevant ones.

Imitation Learning

Transfer Reinforcement Learning across Homotopy Classes

no code implementations • 10 Feb 2021 • Zhangjie Cao, Minae Kwon, Dorsa Sadigh

The ability for robots to transfer their learned knowledge to new tasks -- where data is scarce -- is a fundamental challenge for successful robot learning.

Transfer Reinforcement Learning • Robotics

Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving

1 code implementation • 1 Jul 2020 • Zhangjie Cao, Erdem Biyik, Woodrow Z. Wang, Allan Raventos, Adrien Gaidon, Guy Rosman, Dorsa Sadigh

To address driving in near-accident scenarios, we propose a hierarchical reinforcement and imitation learning (H-ReIL) approach that consists of low-level policies learned by IL for discrete driving modes, and a high-level policy learned by RL that switches between different driving modes.

Autonomous Driving • Imitation Learning • +2

Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction

1 code implementation • 20 Feb 2020 • Bingbin Liu, Ehsan Adeli, Zhangjie Cao, Kuan-Hui Lee, Abhijeet Shenoi, Adrien Gaidon, Juan Carlos Niebles

In addition, we introduce a new dataset designed specifically for autonomous-driving scenarios in areas with dense pedestrian populations: the Stanford-TRI Intent Prediction (STIP) dataset.

Autonomous Driving • Navigate

Adversarial Cross-Domain Action Recognition with Co-Attention

no code implementations • 22 Dec 2019 • Boxiao Pan, Zhangjie Cao, Ehsan Adeli, Juan Carlos Niebles

Action recognition has been a widely studied topic, with a heavy focus on supervised learning that relies on sufficient labeled videos.

Action Recognition

Improving Unsupervised Domain Adaptation with Variational Information Bottleneck

no code implementations • 21 Nov 2019 • Yuxuan Song, Lantao Yu, Zhangjie Cao, Zhiming Zhou, Jian Shen, Shuo Shao, Wei-Nan Zhang, Yong Yu

Domain adaptation aims to leverage the supervision signal of the source domain to obtain an accurate model for the target domain, where labels are unavailable.

Unsupervised Domain Adaptation

Learning to Transfer Examples for Partial Domain Adaptation

1 code implementation • CVPR 2019 • Zhangjie Cao, Kaichao You, Mingsheng Long, Jian-Min Wang, Qiang Yang

Under the condition that target labels are unknown, the key challenge of PDA is how to transfer relevant examples in the shared classes to promote positive transfer, and ignore irrelevant ones in the specific classes to mitigate negative transfer.

Partial Domain Adaptation • Transfer Learning

Multi-Adversarial Domain Adaptation

4 code implementations • 4 Sep 2018 • Zhongyi Pei, Zhangjie Cao, Mingsheng Long, Jian-Min Wang

Recent advances in deep domain adaptation reveal that adversarial learning can be embedded into deep networks to learn transferable features that reduce distribution discrepancy between the source and target domains.

Domain Adaptation

Deep Priority Hashing

1 code implementation • 4 Sep 2018 • Zhangjie Cao, Ziping Sun, Mingsheng Long, Jian-Min Wang, Philip S. Yu

Deep hashing enables image retrieval by end-to-end learning of deep representations and hash codes from training data with pairwise similarity information.

Image Retrieval • Quantization • +1

Partial Adversarial Domain Adaptation

2 code implementations • ECCV 2018 • Zhangjie Cao, Lijia Ma, Mingsheng Long, Jian-Min Wang

We present Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighing the data of outlier source classes for training both source classifier and domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space.

Partial Domain Adaptation

Transfer Adversarial Hashing for Hamming Space Retrieval

no code implementations • 13 Dec 2017 • Zhangjie Cao, Mingsheng Long, Chao Huang, Jian-Min Wang

Existing work on deep hashing assumes that the database in the target domain is identically distributed with the training set in the source domain.

Image Retrieval • Retrieval

3D Object Classification via Spherical Projections

no code implementations • 12 Dec 2017 • Zhangjie Cao, Qi-Xing Huang, Karthik Ramani

Our main idea is to project a 3D object onto a spherical domain centered around its barycenter and develop a neural network to classify the spherical projection.

3D Object Classification • Classification • +1

Conditional Adversarial Domain Adaptation

5 code implementations • NeurIPS 2018 • Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Michael. I. Jordan

Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation.

Domain Adaptation • General Classification

HashNet: Deep Learning to Hash by Continuation

2 code implementations • ICCV 2017 • Zhangjie Cao, Mingsheng Long, Jian-Min Wang, Philip S. Yu

Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality.

Binarization • Representation Learning • +1

Transitive Hashing Network for Heterogeneous Multimedia Retrieval

no code implementations • 15 Aug 2016 • Zhangjie Cao, Mingsheng Long, Qiang Yang

Hashing has been widely applied to large-scale multimedia retrieval due to the storage and retrieval efficiency.


Learning Multiple Tasks with Multilinear Relationship Networks

no code implementations • NeurIPS 2017 • Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Philip S. Yu

Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks.

Multi-Task Learning
