1 code implementation • ECCV 2020 • Bo Fu, Zhangjie Cao, Mingsheng Long, Jian-Min Wang
The new transferability measure accurately quantifies how strongly a target example leans toward the open classes (a hedged sketch follows below).
Ranked #5 on Universal Domain Adaptation on DomainNet
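The abstract does not give the measure's closed form; as a loose illustration only, an uncertainty-based score of the same flavor (hypothetical, not the paper's definition) could combine prediction entropy and confidence:

```python
import torch
import torch.nn.functional as F

def open_class_transferability(logits: torch.Tensor) -> torch.Tensor:
    # Hypothetical score, NOT the paper's measure: normalized prediction
    # entropy minus max confidence; larger values lean toward open classes.
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    entropy = entropy / torch.log(torch.tensor(float(logits.size(-1))))
    confidence = probs.max(dim=-1).values
    return entropy - confidence  # roughly in [-1, 1]
```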
1 code implementation • 13 Nov 2022 • Yiwen Qiu, Jialong Wu, Zhangjie Cao, Mingsheng Long
Existing imitation learning works mainly assume that the demonstrator who collects demonstrations shares the same dynamics as the imitator.
no code implementations • 16 Sep 2022 • Yilun Hao, Ruinan Wang, Zhangjie Cao, Zihan Wang, Yuchen Cui, Dorsa Sadigh
Specifically, we design a masked policy network with a binary mask to block certain modalities.
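The binary modality mask described above admits a compact sketch; the dimensions, fusion scheme, and module names below are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MaskedPolicy(nn.Module):
    """Sketch of a policy that zeroes out selected input modalities with a
    binary mask before fusing them (all sizes are illustrative)."""

    def __init__(self, modality_dims, hidden=128, action_dim=7):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Linear(d, hidden) for d in modality_dims])
        self.head = nn.Sequential(
            nn.Linear(hidden * len(modality_dims), hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim))

    def forward(self, inputs, mask):
        # inputs: list of per-modality tensors; mask: 0/1 entry per modality
        feats = [m * enc(x) for m, enc, x in zip(mask, self.encoders, inputs)]
        return self.head(torch.cat(feats, dim=-1))

# Usage: block the second of three modalities.
policy = MaskedPolicy([16, 32, 8])
obs = [torch.randn(1, 16), torch.randn(1, 32), torch.randn(1, 8)]
action = policy(obs, mask=torch.tensor([1.0, 0.0, 1.0]))
```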
no code implementations • 8 Jun 2022 • Yang Shu, Zhangjie Cao, Ziyang Zhang, Jianmin Wang, Mingsheng Long
The proposed framework can be trained end-to-end with the target task-specific loss, where it learns to explore better pathway configurations and exploit the knowledge in pre-trained models for each target datum.
1 code implementation • 14 Mar 2022 • Zhangjie Cao, Kaichao You, Ziyang Zhang, Jianmin Wang, Mingsheng Long
Still, the common requirement of an identical class space shared across domains hinders the application of domain adaptation to partial-set scenarios.
no code implementations • 8 Mar 2022 • Zhangjie Cao, Erdem Biyik, Guy Rosman, Dorsa Sadigh
At any given time, to forecast a reasonable future trajectory, each agent needs to attend to interactions with only a small group of the most relevant agents, rather than unnecessarily attending to all other agents.
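As a hedged sketch of this idea (not the paper's method), attention can be restricted to each agent's k most relevant neighbors by masking all other scores before the softmax:

```python
import torch
import torch.nn.functional as F

def sparse_agent_attention(query, keys, values, k=4):
    """Illustrative top-k attention: each agent attends only to its k
    highest-scoring neighbors; the rest receive zero weight.

    query: (N, d); keys, values: (N, M, d) per-agent neighbor features.
    """
    scores = torch.einsum("nd,nmd->nm", query, keys) / keys.size(-1) ** 0.5
    topk = scores.topk(k=min(k, scores.size(-1)), dim=-1).indices
    # -inf everywhere except the top-k slots, which are set back to 0.
    mask = torch.full_like(scores, float("-inf")).scatter(-1, topk, 0.0)
    attn = F.softmax(scores + mask, dim=-1)
    return torch.einsum("nm,nmd->nd", attn, values)
```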
no code implementations • 2 Mar 2022 • Zihan Wang, Zhangjie Cao, Yilun Hao, Dorsa Sadigh
Correspondence learning is a fundamental problem in robotics, which aims to learn a mapping between state-action pairs of agents with different dynamics or embodiments.
no code implementations • 7 Feb 2022 • Zhangjie Cao, Zihan Wang, Dorsa Sadigh
Existing learning from demonstration algorithms usually assume access to expert demonstrations.
2 code implementations • 28 Oct 2021 • Zhangjie Cao, Yilun Hao, Mengxi Li, Dorsa Sadigh
The goal of learning from demonstrations is to learn a policy for an agent (imitator) by mimicking the behavior in the demonstrations.
2 code implementations • NeurIPS 2021 • Songyuan Zhang, Zhangjie Cao, Dorsa Sadigh, Yanan Sui
Our results show that CAIL significantly outperforms other imitation learning methods from demonstrations with varying optimality.
no code implementations • 14 Oct 2021 • Yang Shu, Zhangjie Cao, Jinghan Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long
While pre-training and meta-training can create deep models powerful for few-shot generalization, we find that pre-training and meta-training focus respectively on cross-domain transferability and cross-task transferability, which restricts their data efficiency in the entangled settings of domain shift and task shift.
no code implementations • 10 Jul 2021 • Hongwei Wang, Lantao Yu, Zhangjie Cao, Stefano Ermon
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions, which is essential for understanding physical, social, and team-play systems.
no code implementations • 29 Jun 2021 • Yang Shu, Zhi Kou, Zhangjie Cao, Jianmin Wang, Mingsheng Long
We propose Zoo-Tuning to address these challenges, which learns to adaptively transfer the parameters of pretrained models to the target task.
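A minimal sketch of this idea, with all details (layer type, gating scheme) as illustrative assumptions rather than the paper's design: a layer whose effective weights are a learned mixture of the corresponding weights from several pretrained models.

```python
import torch
import torch.nn as nn

class AdaptiveAggregatedLinear(nn.Module):
    """Sketch: aggregate a model zoo's weights with learned mixing gates."""

    def __init__(self, pretrained_weights):
        super().__init__()
        # pretrained_weights: list of (out, in) tensors from the model zoo
        self.zoo = nn.Parameter(torch.stack(pretrained_weights),
                                requires_grad=False)
        self.gate = nn.Parameter(torch.zeros(len(pretrained_weights)))

    def forward(self, x):
        alpha = torch.softmax(self.gate, dim=0)        # learned mixing weights
        weight = torch.einsum("k,koi->oi", alpha, self.zoo)
        return x @ weight.t()

# Usage with three hypothetical pretrained layers of shape (10, 32).
zoo = [torch.randn(10, 32) for _ in range(3)]
layer = AdaptiveAggregatedLinear(zoo)
out = layer(torch.randn(4, 32))  # -> (4, 10)
```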
no code implementations • CVPR 2021 • Bo Fu, Zhangjie Cao, Jianmin Wang, Mingsheng Long
Due to the domain shift, the query selection criteria of prior active learning methods may be ineffective to select the most informative target samples for annotation.
no code implementations • CVPR 2021 • Chao Huang, Zhangjie Cao, Yunbo Wang, Jianmin Wang, Mingsheng Long
It is a challenging problem due to the substantial geometry shift from simulated to real data: most existing 3D models underperform because they overfit the complete geometries of the source domain.
no code implementations • CVPR 2021 • Yang Shu, Zhangjie Cao, Chenyu Wang, Jianmin Wang, Mingsheng Long
Leveraging available datasets to learn a model that generalizes well to unseen domains is important for computer vision, especially when the unseen domain's annotated data are unavailable.
1 code implementation • 10 Mar 2021 • Zhangjie Cao, Dorsa Sadigh
The proposed score enables learning from more informative demonstrations and disregarding less relevant ones.
no code implementations • 10 Feb 2021 • Zhangjie Cao, Minae Kwon, Dorsa Sadigh
The ability for robots to transfer their learned knowledge to new tasks -- where data is scarce -- is a fundamental challenge for successful robot learning.
Transfer Reinforcement Learning • Robotics
1 code implementation • 1 Jul 2020 • Zhangjie Cao, Erdem Biyik, Woodrow Z. Wang, Allan Raventos, Adrien Gaidon, Guy Rosman, Dorsa Sadigh
To address driving in near-accident scenarios, we propose a hierarchical reinforcement and imitation learning (H-ReIL) approach that consists of low-level policies learned by IL for discrete driving modes, and a high-level policy learned by RL that switches between different driving modes.
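The two-level structure can be sketched directly from this description; the mode names and interfaces below are illustrative stubs, not the paper's implementation:

```python
import random

class HReILAgent:
    """Schematic of the H-ReIL structure: low-level policies (one per
    driving mode, trained with IL) and a high-level policy (trained with
    RL) that switches between them."""

    def __init__(self, mode_policies, mode_selector):
        self.mode_policies = mode_policies  # e.g. {"cautious": ..., "aggressive": ...}
        self.mode_selector = mode_selector  # high-level policy: obs -> mode name

    def act(self, obs):
        mode = self.mode_selector(obs)        # high-level decision (RL)
        return self.mode_policies[mode](obs)  # low-level control (IL)

# Toy usage with stub policies.
agent = HReILAgent(
    mode_policies={"cautious": lambda o: "brake",
                   "aggressive": lambda o: "accelerate"},
    mode_selector=lambda o: random.choice(["cautious", "aggressive"]),
)
print(agent.act(obs={"speed": 12.0}))
```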
1 code implementation • 7 Jun 2020 • Amir Zamir, Alexander Sax, Teresa Yeo, Oğuzhan Kar, Nikhil Cheerla, Rohan Suri, Zhangjie Cao, Jitendra Malik, Leonidas Guibas
Visual perception entails solving a wide set of tasks, e.g., object detection, depth estimation, etc.
1 code implementation • 20 Feb 2020 • Bingbin Liu, Ehsan Adeli, Zhangjie Cao, Kuan-Hui Lee, Abhijeet Shenoi, Adrien Gaidon, Juan Carlos Niebles
In addition, we introduce a new dataset designed specifically for autonomous-driving scenarios in areas with dense pedestrian populations: the Stanford-TRI Intent Prediction (STIP) dataset.
no code implementations • 22 Dec 2019 • Boxiao Pan, Zhangjie Cao, Ehsan Adeli, Juan Carlos Niebles
Action recognition has been a widely studied topic with a heavy focus on supervised learning involving sufficient labeled videos.
no code implementations • 21 Nov 2019 • Yuxuan Song, Lantao Yu, Zhangjie Cao, Zhiming Zhou, Jian Shen, Shuo Shao, Wei-Nan Zhang, Yong Yu
Domain adaptation aims to leverage the supervision signal of the source domain to obtain an accurate model for the target domain, where labels are not available.
no code implementations • CVPR 2020 • Kaidi Cao, Jingwei Ji, Zhangjie Cao, Chien-Yi Chang, Juan Carlos Niebles
In this paper, we propose the Temporal Alignment Module (TAM), a novel few-shot learning framework that can learn to classify a previously unseen video.
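As a hedged sketch of temporal alignment between a query and a support video (a plain dynamic-programming alignment for illustration, not TAM's differentiable formulation):

```python
import torch
import torch.nn.functional as F

def temporal_alignment_score(query_feats, support_feats):
    """Illustrative DTW-like distance between two videos.

    query_feats: (T1, d), support_feats: (T2, d) per-frame features.
    Lower score => better temporal alignment.
    """
    q = F.normalize(query_feats, dim=-1)
    s = F.normalize(support_feats, dim=-1)
    cost = 1.0 - q @ s.t()                      # (T1, T2) cosine distances
    T1, T2 = cost.shape
    acc = torch.full((T1 + 1, T2 + 1), float("inf"))
    acc[0, 0] = 0.0
    for i in range(1, T1 + 1):                  # monotonic alignment DP
        for j in range(1, T2 + 1):
            acc[i, j] = cost[i - 1, j - 1] + torch.min(
                torch.stack([acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]]))
    return acc[T1, T2]
```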
1 code implementation • ICLR Workshop DeepGenStruct 2019 • Aditya Grover, Christopher Chute, Rui Shu, Zhangjie Cao, Stefano Ermon
Given datasets from multiple domains, a key challenge is to efficiently exploit these data sources for modeling a target domain.
1 code implementation • CVPR 2019 • Zhangjie Cao, Kaichao You, Mingsheng Long, Jian-Min Wang, Qiang Yang
Under the condition that target labels are unknown, the key challenge of PDA is how to transfer relevant examples in the shared classes to promote positive transfer, and ignore irrelevant ones in the specific classes to mitigate negative transfer.
Ranked #4 on Partial Domain Adaptation on ImageNet-Caltech
4 code implementations • 4 Sep 2018 • Zhongyi Pei, Zhangjie Cao, Mingsheng Long, Jian-Min Wang
Recent advances in deep domain adaptation reveal that adversarial learning can be embedded into deep networks to learn transferable features that reduce distribution discrepancy between the source and target domains (the generic building block is sketched below).
Ranked #24 on Domain Adaptation on Office-31
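The generic core of embedding adversarial learning into a deep network is a gradient reversal layer: identity on the forward pass, negated gradient on the backward pass, so the feature extractor learns to confuse a domain discriminator. The sketch below shows that standard building block, not this paper's specific design:

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Gradient reversal: forward is the identity; backward multiplies
    the incoming gradient by -lam."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = torch.randn(8, 256, requires_grad=True)
reversed_feats = GradReverse.apply(features, 1.0)  # feed to a domain discriminator
reversed_feats.sum().backward()
print(features.grad[0, 0])  # gradient is negated: -1.0
```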
1 code implementation • 4 Sep 2018 • Zhangjie Cao, Ziping Sun, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Deep hashing enables image retrieval by end-to-end learning of deep representations and hash codes from training data with pairwise similarity information.
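A common template for such pairwise-similarity objectives (a sketch of the general family, not this paper's exact loss):

```python
import torch
import torch.nn.functional as F

def pairwise_hashing_loss(codes_i, codes_j, similar):
    """Generic pairwise loss for deep hashing: push inner products of
    relaxed hash codes up for similar pairs and down for dissimilar ones.

    codes_i, codes_j: (B, K) tanh-relaxed codes in [-1, 1];
    similar: (B,) binary pairwise similarity labels.
    """
    inner = (codes_i * codes_j).sum(dim=-1) / codes_i.size(-1)  # in [-1, 1]
    logits = 5.0 * inner  # illustrative scale before the sigmoid
    return F.binary_cross_entropy_with_logits(logits, similar.float())

# Usage with random relaxed codes and labels.
codes = torch.tanh(torch.randn(8, 48))
loss = pairwise_hashing_loss(codes, codes.roll(1, 0), torch.randint(0, 2, (8,)))
```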
2 code implementations • ECCV 2018 • Zhangjie Cao, Lijia Ma, Mingsheng Long, Jian-Min Wang
We present Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighing the data of outlier source classes for training both source classifier and domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space (the class weighting is sketched below).
Ranked #3 on Partial Domain Adaptation on DomainNet
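The class down-weighing described above is computed from the classifier's averaged predictions on the target data: outlier source classes receive near-zero mass. A minimal sketch (the normalization choice is an assumption):

```python
import torch

def source_class_weights(target_probs: torch.Tensor) -> torch.Tensor:
    """PADA-style class weights from target predictions.

    target_probs: (N_target, C) softmax predictions on target examples.
    Returns per-class weights used for both the source classifier
    and the domain adversary.
    """
    gamma = target_probs.mean(dim=0)  # average predicted class distribution
    return gamma / gamma.max()        # normalize so the largest weight is 1
```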
no code implementations • 13 Dec 2017 • Zhangjie Cao, Mingsheng Long, Chao Huang, Jian-Min Wang
Existing work on deep hashing assumes that the database in the target domain is identically distributed with the training set in the source domain.
no code implementations • 12 Dec 2017 • Zhangjie Cao, Qi-Xing Huang, Karthik Ramani
Our main idea is to project a 3D object onto a spherical domain centered at its barycenter and develop a neural network to classify the spherical projection.
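A sketch of the described projection, assuming a max-radial-depth statistic and an illustrative grid resolution (both assumptions, not the paper's exact choices):

```python
import numpy as np

def spherical_projection(points, n_theta=32, n_phi=64):
    """Center a point cloud at its barycenter and record, per spherical
    direction bin, the maximum radial distance (a depth map on the sphere)."""
    pts = points - points.mean(axis=0)             # center at barycenter
    r = np.linalg.norm(pts, axis=1)
    theta = np.arccos(np.clip(pts[:, 2] / np.maximum(r, 1e-9), -1, 1))
    phi = np.arctan2(pts[:, 1], pts[:, 0]) + np.pi
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    grid = np.zeros((n_theta, n_phi))
    np.maximum.at(grid, (ti, pj), r)               # keep max depth per bin
    return grid  # fed to a classifier network

grid = spherical_projection(np.random.randn(1000, 3))
```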
no code implementations • CVPR 2018 • Zhangjie Cao, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
Existing domain adversarial networks assume fully shared label space across domains.
5 code implementations • NeurIPS 2018 • Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Michael I. Jordan
Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation.
Ranked #6 on Domain Adaptation on USPS-to-MNIST
2 code implementations • ICCV 2017 • Zhangjie Cao, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computational efficiency and retrieval quality.
no code implementations • 15 Aug 2016 • Zhangjie Cao, Mingsheng Long, Qiang Yang
Hashing has been widely applied to large-scale multimedia retrieval due to the storage and retrieval efficiency.
no code implementations • NeurIPS 2017 • Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Philip S. Yu
Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks.