Search Results for author: Zhangjie Cao

Found 27 papers, 12 papers with code

Omni-Training for Data-Efficient Deep Learning

no code implementations14 Oct 2021 Yang Shu, Zhangjie Cao, Jinghan Gao, Jianmin Wang, Mingsheng Long

Our second contribution is Omni-Loss, which imposes a mean-teacher regularization to learn generalizable and stable representations.
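
Mean-teacher regularization is a standard consistency technique: a teacher network tracks an exponential moving average (EMA) of the student's weights, and a consistency loss penalizes disagreement between their outputs. A minimal sketch (this is not the paper's Omni-Loss; the decay value and toy numbers are invented for illustration):

```python
import numpy as np

def ema_update(teacher, student, decay=0.9):
    """Move each teacher weight toward the student weight by EMA."""
    return {k: decay * teacher[k] + (1 - decay) * student[k] for k in teacher}

def consistency_loss(teacher_out, student_out):
    """Mean squared disagreement between teacher and student outputs."""
    return float(np.mean((teacher_out - student_out) ** 2))

# Toy one-layer "model": a single weight vector per network.
student = {"w": np.array([1.0, 2.0])}
teacher = {"w": np.array([0.0, 0.0])}

teacher = ema_update(teacher, student)               # teacher["w"] -> [0.1, 0.2]
loss = consistency_loss(teacher["w"], student["w"])  # mean(0.9**2, 1.8**2)
```

In practice the EMA runs over millions of parameters per training step and the consistency term is added to the supervised loss; the mechanics are exactly this update.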

Multi-Agent Imitation Learning with Copulas

no code implementations10 Jul 2021 Hongwei Wang, Lantao Yu, Zhangjie Cao, Stefano Ermon

Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions, which is essential for understanding physical, social, and team-play systems.

Imitation Learning

Zoo-Tuning: Adaptive Transfer from a Zoo of Models

no code implementations29 Jun 2021 Yang Shu, Zhi Kou, Zhangjie Cao, Jianmin Wang, Mingsheng Long

We propose \emph{Zoo-Tuning} to address these challenges, which learns to adaptively transfer the parameters of pretrained models to the target task.

Facial Landmark Detection · Image Classification +1

Transferable Query Selection for Active Domain Adaptation

no code implementations CVPR 2021 Bo Fu, Zhangjie Cao, Jianmin Wang, Mingsheng Long

Due to the domain shift, the query selection criteria of prior active learning methods may be ineffective at selecting the most informative target samples for annotation.

Active Learning · Unsupervised Domain Adaptation

Open Domain Generalization with Domain-Augmented Meta-Learning

no code implementations CVPR 2021 Yang Shu, Zhangjie Cao, Chenyu Wang, Jianmin Wang, Mingsheng Long

Leveraging available datasets to learn a model that generalizes well to unseen domains is important for computer vision, especially when annotated data from the unseen domain are unavailable.

Domain Generalization · Meta-Learning

Learning from Imperfect Demonstrations from Agents with Varying Dynamics

1 code implementation10 Mar 2021 Zhangjie Cao, Dorsa Sadigh

The proposed score enables learning from the more informative demonstrations while disregarding the less relevant ones.

Imitation Learning

Transfer Reinforcement Learning across Homotopy Classes

no code implementations10 Feb 2021 Zhangjie Cao, Minae Kwon, Dorsa Sadigh

The ability of robots to transfer their learned knowledge to new tasks -- where data is scarce -- is a fundamental challenge for successful robot learning.

Curriculum Learning · Transfer Reinforcement Learning · Robotics

Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving

1 code implementation1 Jul 2020 Zhangjie Cao, Erdem Biyik, Woodrow Z. Wang, Allan Raventos, Adrien Gaidon, Guy Rosman, Dorsa Sadigh

To address driving in near-accident scenarios, we propose a hierarchical reinforcement and imitation learning (H-ReIL) approach that consists of low-level policies learned by IL for discrete driving modes, and a high-level policy learned by RL that switches between different driving modes.

Autonomous Driving · Imitation Learning
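
The hierarchical structure can be sketched as a high-level switcher that picks a driving mode and low-level policies that act within it. In the paper both levels are learned (low-level by IL, high-level by RL); the mode names, observation fields, and thresholds below are invented stand-ins:

```python
def timid_policy(obs):
    # Low-level policy (learned by IL in the paper): drive cautiously.
    return "brake" if obs["gap_m"] < 10 else "cruise"

def aggressive_policy(obs):
    # Low-level policy for the efficient driving mode.
    return "accelerate"

MODES = {"timid": timid_policy, "aggressive": aggressive_policy}

def high_level_policy(obs):
    # Stand-in for the RL-trained switcher: choose a mode each step.
    return "timid" if obs["gap_m"] < 20 else "aggressive"

def h_reil_step(obs):
    """One control step: pick a mode, then act with that mode's policy."""
    mode = high_level_policy(obs)
    return mode, MODES[mode](obs)
```

The design point is that the high-level policy only decides *which* mode to run, so RL operates over a tiny discrete action space while IL handles continuous control.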

Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction

1 code implementation20 Feb 2020 Bingbin Liu, Ehsan Adeli, Zhangjie Cao, Kuan-Hui Lee, Abhijeet Shenoi, Adrien Gaidon, Juan Carlos Niebles

In addition, we introduce a new dataset designed specifically for autonomous-driving scenarios in areas with dense pedestrian populations: the Stanford-TRI Intent Prediction (STIP) dataset.

Autonomous Driving

Adversarial Cross-Domain Action Recognition with Co-Attention

no code implementations22 Dec 2019 Boxiao Pan, Zhangjie Cao, Ehsan Adeli, Juan Carlos Niebles

Action recognition has been a widely studied topic with a heavy focus on supervised learning involving sufficient labeled videos.

Action Recognition

Improving Unsupervised Domain Adaptation with Variational Information Bottleneck

no code implementations21 Nov 2019 Yuxuan Song, Lantao Yu, Zhangjie Cao, Zhiming Zhou, Jian Shen, Shuo Shao, Wei-Nan Zhang, Yong Yu

Domain adaptation aims to leverage the supervision signal of the source domain to obtain an accurate model for the target domain, where labels are not available.

Unsupervised Domain Adaptation

Few-Shot Video Classification via Temporal Alignment

no code implementations CVPR 2020 Kaidi Cao, Jingwei Ji, Zhangjie Cao, Chien-Yi Chang, Juan Carlos Niebles

In this paper, we propose Temporal Alignment Module (TAM), a novel few-shot learning framework that can learn to classify previously unseen videos.

Action Recognition · Classification +2
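
The temporal-alignment idea can be illustrated with plain dynamic time warping between two sequences of frame features: a query video and a support video of the same action at a different speed align at low cost. TAM itself uses a differentiable variant inside a deep network; this sketch shows only the underlying alignment cost, with toy one-dimensional features:

```python
import numpy as np

def alignment_cost(a, b):
    """Dynamic-time-warping cost between two frame-feature sequences
    (one row per frame). Lower cost means better temporal alignment."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(cost[n, m])

query = np.array([[0.0], [1.0], [2.0]])
slow = np.array([[0.0], [0.0], [1.0], [1.0], [2.0]])  # same action, slower
other = np.array([[5.0]])                              # unrelated clip
```

Here `alignment_cost(query, slow)` is 0 despite the length mismatch, while the unrelated clip aligns poorly; a few-shot classifier can assign the query to the class of its lowest-cost support video.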

AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows

1 code implementation30 May 2019 Aditya Grover, Christopher Chute, Rui Shu, Zhangjie Cao, Stefano Ermon

Given datasets from multiple domains, a key challenge is to efficiently exploit these data sources for modeling a target domain.

Density Estimation · Image-to-Image Translation +2

Learning to Transfer Examples for Partial Domain Adaptation

1 code implementation CVPR 2019 Zhangjie Cao, Kaichao You, Mingsheng Long, Jian-Min Wang, Qiang Yang

Under the condition that target labels are unknown, the key challenge of PDA is how to transfer relevant examples in the shared classes to promote positive transfer, and ignore irrelevant ones in the specific classes to mitigate negative transfer.

Partial Domain Adaptation · Transfer Learning

Multi-Adversarial Domain Adaptation

2 code implementations4 Sep 2018 Zhongyi Pei, Zhangjie Cao, Mingsheng Long, Jian-Min Wang

Recent advances in deep domain adaptation reveal that adversarial learning can be embedded into deep networks to learn transferable features that reduce distribution discrepancy between the source and target domains.

Domain Adaptation

Deep Priority Hashing

1 code implementation4 Sep 2018 Zhangjie Cao, Ziping Sun, Mingsheng Long, Jian-Min Wang, Philip S. Yu

Deep hashing enables image retrieval by end-to-end learning of deep representations and hash codes from training data with pairwise similarity information.

Image Retrieval · Quantization

Partial Adversarial Domain Adaptation

2 code implementations ECCV 2018 Zhangjie Cao, Lijia Ma, Mingsheng Long, Jian-Min Wang

We present Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighting the data of outlier source classes for training both source classifier and domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space.

Partial Domain Adaptation
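
The down-weighting step can be sketched as averaging the classifier's predicted probabilities on target data over the source classes: classes absent from the target domain accumulate near-zero probability and thus near-zero weight. The toy predictions below are invented for illustration:

```python
import numpy as np

def source_class_weights(target_probs):
    """Average predicted class probabilities over target samples and
    normalize by the max, so outlier source classes get small weights."""
    gamma = target_probs.mean(axis=0)
    return gamma / gamma.max()

# Toy target predictions over 3 source classes; class 2 never appears
# in the target domain, so it should be treated as an outlier class.
target_probs = np.array([[0.7, 0.3, 0.0],
                         [0.6, 0.4, 0.0],
                         [0.5, 0.5, 0.0]])
weights = source_class_weights(target_probs)  # ~[1.0, 0.67, 0.0]
```

These per-class weights then multiply the source examples' losses in both the classifier and the domain adversary, suppressing negative transfer from the outlier classes.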

Transfer Adversarial Hashing for Hamming Space Retrieval

no code implementations13 Dec 2017 Zhangjie Cao, Mingsheng Long, Chao Huang, Jian-Min Wang

Existing work on deep hashing assumes that the database in the target domain is identically distributed with the training set in the source domain.

Image Retrieval

3D Object Classification via Spherical Projections

no code implementations12 Dec 2017 Zhangjie Cao, Qi-Xing Huang, Karthik Ramani

Our main idea is to project a 3D object onto a spherical domain centered at its barycenter and develop a neural network to classify the spherical projection.

3D Classification · 3D Object Classification +3

Conditional Adversarial Domain Adaptation

3 code implementations NeurIPS 2018 Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Michael. I. Jordan

Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation.

Domain Adaptation · General Classification

HashNet: Deep Learning to Hash by Continuation

1 code implementation ICCV 2017 Zhangjie Cao, Mingsheng Long, Jian-Min Wang, Philip S. Yu

Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality.

Binarization · Representation Learning
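
The continuation idea behind learning binary codes is that tanh(beta * z) is differentiable for any finite beta yet smoothly approaches the non-differentiable sign(z) as beta grows, so training can anneal beta upward. A minimal sketch with invented activations:

```python
import numpy as np

def relaxed_codes(z, beta):
    """Smooth surrogate for binary hash codes: tanh(beta*z) -> sign(z)."""
    return np.tanh(beta * z)

z = np.array([-0.8, 0.1, 2.0])   # toy continuous network activations
binary = np.sign(z)              # target binary codes: [-1, 1, 1]

# Gap to the true binary codes shrinks as beta is annealed upward.
gaps = [np.abs(relaxed_codes(z, beta) - binary).max()
        for beta in (1.0, 10.0, 100.0)]
```

Annealing lets gradients flow early in training while the final codes are effectively binary, avoiding the zero-gradient problem of using sign directly.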

Transitive Hashing Network for Heterogeneous Multimedia Retrieval

no code implementations15 Aug 2016 Zhangjie Cao, Mingsheng Long, Qiang Yang

Hashing has been widely applied to large-scale multimedia retrieval due to its storage and retrieval efficiency.

Learning Multiple Tasks with Multilinear Relationship Networks

no code implementations NeurIPS 2017 Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Philip S. Yu

Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks.

Multi-Task Learning
