1 code implementation • ECCV 2020 • Bo Fu, Zhangjie Cao, Mingsheng Long, Jian-Min Wang
The new transferability measure accurately quantifies the inclination of a target example toward the open classes.
Ranked #5 on Universal Domain Adaptation on DomainNet
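Conceptually, such a measure can be read off from the source classifier's uncertainty on a target example. The sketch below is a minimal, entropy-based stand-in (all names are ours; the actual ECCV 2020 measure combines several uncertainty estimates that we do not reproduce here):

```python
import numpy as np

def open_class_score(probs, eps=1e-12):
    """Score in [0, 1]; higher means the target example is more likely
    to belong to an open (unknown) class. `probs` is the softmax output
    of the source classifier for one example. Normalized entropy is a
    common proxy, not the paper's exact measure."""
    entropy = -np.sum(probs * np.log(probs + eps))
    return entropy / np.log(len(probs))  # normalize by max entropy

# A confident prediction scores low; a near-uniform one scores high.
print(open_class_score(np.array([0.97, 0.01, 0.01, 0.01])))  # ~0.12, likely a shared class
print(open_class_score(np.array([0.25, 0.25, 0.25, 0.25])))  # 1.0, likely an open class
```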
1 code implementation • ICML 2020 • Zhiyu Yao, Yunbo Wang, Mingsheng Long, Jian-Min Wang
This paper explores a new research problem of unsupervised transfer learning across multiple spatiotemporal prediction tasks.
no code implementations • 14 Aug 2020 • Yuchen Zhang, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
Finally, we further extend the localized discrepancies for achieving super transfer and derive generalization bounds that could be even more sample-efficient on the source domain.
no code implementations • NeurIPS 2020 • Ximei Wang, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
In this paper, we delve into the open problem of Calibration in DA, which is extremely challenging due to the coexistence of domain shift and the lack of target labels.
no code implementations • 7 Apr 2020 • Aoqian Zhang, Shaoxu Song, Yu Sun, Jian-Min Wang
We propose to adaptively learn individual models over varying numbers l of neighbors for different complete tuples.
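To make the idea concrete, here is a minimal sketch (in Python, with illustrative candidate values of l and a plain linear model, both our assumptions rather than the paper's choices) of fitting individual models over different neighbor counts and keeping the best one per tuple:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

def impute_adaptive(X_complete, x_partial, miss_idx, candidate_ls=(5, 10, 20)):
    """For one incomplete tuple, fit a separate regression model on its l
    nearest complete neighbors for several values of l, and keep the model
    with the lowest in-sample error on those neighbors."""
    obs_idx = [j for j in range(X_complete.shape[1]) if j != miss_idx]
    nn = NearestNeighbors(n_neighbors=max(candidate_ls)).fit(X_complete[:, obs_idx])
    _, idx = nn.kneighbors(x_partial[obs_idx].reshape(1, -1))
    best_err, best_pred = np.inf, None
    for l in candidate_ls:
        nbrs = X_complete[idx[0, :l]]                      # l nearest complete tuples
        model = LinearRegression().fit(nbrs[:, obs_idx], nbrs[:, miss_idx])
        err = np.mean((model.predict(nbrs[:, obs_idx]) - nbrs[:, miss_idx]) ** 2)
        if err < best_err:                                 # keep the best l for this tuple
            best_err = err
            best_pred = model.predict(x_partial[obs_idx].reshape(1, -1))[0]
    return best_pred
```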
3 code implementations • ECCV 2020 • Ying Jin, Ximei Wang, Mingsheng Long, Jian-Min Wang
It can be characterized as (1) a non-adversarial DA method without explicitly deploying domain alignment, enjoying faster convergence speed; (2) a versatile approach that can handle four existing scenarios: Closed-Set, Partial-Set, Multi-Source, and Multi-Target DA, outperforming the state-of-the-art methods in these scenarios, especially on one of the largest and hardest datasets to date (7.3% on DomainNet).
Ranked #3 on Multi-target Domain Adaptation on DomainNet
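The core of this non-adversarial objective can be sketched in a few lines of PyTorch: build a class-correlation matrix from temperature-scaled target predictions and penalize its off-diagonal mass (the published loss also adds an uncertainty weighting that this minimal version omits):

```python
import torch
import torch.nn.functional as F

def class_confusion_loss(logits, temperature=2.5):
    """Minimal class-confusion sketch: correlate (temperature-scaled)
    predicted class probabilities across a target batch and penalize
    confusion between different classes (off-diagonal entries)."""
    probs = F.softmax(logits / temperature, dim=1)               # (batch, classes)
    confusion = probs.t() @ probs                                # (classes, classes)
    confusion = confusion / confusion.sum(dim=1, keepdim=True)   # row-normalize
    off_diag = confusion.sum() - confusion.trace()               # cross-class mass
    return off_diag / logits.size(1)
```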
1 code implementation • NeurIPS 2019 • Ximei Wang, Ying Jin, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
Deep neural networks (DNNs) excel at learning representations when trained on large-scale datasets.
2 code implementations • NeurIPS 2019 • Xinyang Chen, Sinan Wang, Bo Fu, Mingsheng Long, Jian-Min Wang
Before sufficient training data is available, fine-tuning neural networks pre-trained on large-scale datasets substantially outperforms training from random initialization.
no code implementations • 26 Sep 2019 • Hong Liu, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
3) The feasibility of transferability is related to the similarity of both inputs and labels.
no code implementations • ICLR 2020 • Kaichao You, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
Despite the popularity of these common beliefs, experiments suggest that they are insufficient in explaining the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex.
1 code implementation • 16 May 2019 • Chen Qian, Lijie Wen, Akhil Kumar, Leilei Lin, Li Lin, Zan Zong, Shuang Li, Jian-Min Wang
Process model extraction (PME) is a recently emerged interdisciplinary field between natural language processing (NLP) and business process management (BPM), which aims to extract process models from textual descriptions.
no code implementations • 18 Apr 2019 • Ying Wang, Xiao Xu, Tao Jin, Xiang Li, Guotong Xie, Jian-Min Wang
In addition, for unordered medical activity sets, existing medical RL methods utilize a simple pooling strategy, which results in indistinguishable contributions among the activities for learning.
1 code implementation • CVPR 2019 • Zhangjie Cao, Kaichao You, Mingsheng Long, Jian-Min Wang, Qiang Yang
Under the condition that target labels are unknown, the key challenge of PDA is how to transfer relevant examples in the shared classes to promote positive transfer, and ignore irrelevant ones in the specific classes to mitigate negative transfer.
Ranked #4 on Partial Domain Adaptation on ImageNet-Caltech
no code implementations • CVPR 2017 • Yunbo Wang, Mingsheng Long, Jian-Min Wang, Philip S. Yu
From the technical perspective, we introduce the spatiotemporal compact bilinear operator into video analysis tasks.
1 code implementation • 1 Feb 2019 • Bin Liu, Yue Cao, Mingsheng Long, Jian-Min Wang, Jingdong Wang
We propose Deep Triplet Quantization (DTQ), a novel approach to learning deep quantization models from the similarity triplets.
Ranked #1 on Image Retrieval on NUS-WIDE
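A minimal sketch of how the two ingredients in the name might combine, assuming a learnable quantizer `codebook_proj` (a placeholder of ours, not the paper's module): a triplet margin loss on the embeddings plus a penalty pulling each embedding toward its quantized reconstruction:

```python
import torch
import torch.nn.functional as F

def triplet_quantization_loss(anchor, positive, negative, codebook_proj,
                              margin=1.0, lam=0.1):
    """Sketch only: rank embeddings with a triplet margin loss, and keep
    them close to their quantized versions so the learned codes remain
    faithful. `codebook_proj` stands in for the learned quantizer."""
    trip = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    quant = sum(F.mse_loss(z, codebook_proj(z).detach())
                for z in (anchor, positive, negative))
    return trip + lam * quant
```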
no code implementations • NeurIPS 2018 • Shichen Liu, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
A technical challenge of deep learning is recognizing target classes without any seen data from those classes.
no code implementations • 27 Nov 2018 • Enya Shen, Zhidong Cao, Changqing Zou, Jian-Min Wang
In this paper, we propose a novel framework, FANE, to integrate structure and property information in the network embedding process.
no code implementations • 20 Nov 2018 • Zhiyu Yao, Yunbo Wang, Mingsheng Long, Jian-Min Wang, Philip S. Yu, Jiaguang Sun
Rev2Net is shown to be effective on the classic action recognition task.
3 code implementations • CVPR 2019 • Yunbo Wang, Jianjin Zhang, Hongyu Zhu, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Natural spatiotemporal processes can be highly non-stationary in many ways, e.g., low-level non-stationarity such as spatial correlations or temporal dependencies of local pixel values, and high-level variations such as the accumulation, deformation, or dissipation of radar echoes in precipitation forecasting.
Ranked #5 on Video Prediction on Human3.6M
no code implementations • 15 Nov 2018 • Yan-Rong Li, Yu-Yang Songsheng, Jie Qiu, Chen Hu, Pu Du, Kai-Xing Lu, Ying-Ke Huang, Jin-Ming Bai, Wei-Hao Bian, Ye-Fei Yuan, Luis C. Ho, Jian-Min Wang
We apply three BLR models with different prescriptions of BLR cloud distributions and find that the best model for fitting the data of Mrk 142 is a two-zone BLR model, consistent with the theoretical BLR model surrounding slim accretion disks.
Astrophysics of Galaxies • Instrumentation and Methods for Astrophysics
4 code implementations • 4 Sep 2018 • Zhongyi Pei, Zhangjie Cao, Mingsheng Long, Jian-Min Wang
Recent advances in deep domain adaptation reveal that adversarial learning can be embedded into deep networks to learn transferable features that reduce distribution discrepancy between the source and target domains.
Ranked #24 on Domain Adaptation on Office-31
1 code implementation • 4 Sep 2018 • Zhangjie Cao, Ziping Sun, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Deep hashing enables image retrieval by end-to-end learning of deep representations and hash codes from training data with pairwise similarity information.
no code implementations • ECCV 2018 • Yue Cao, Bin Liu, Mingsheng Long, Jian-Min Wang
Extensive experiments demonstrate that CMHH can generate highly concentrated hash codes and achieve state-of-the-art cross-modal retrieval performance for both hash lookups and linear scan scenarios on three benchmark datasets, NUS-WIDE, MIRFlickr-25K, and IAPR TC-12.
2 code implementations • ECCV 2018 • Zhangjie Cao, Lijia Ma, Mingsheng Long, Jian-Min Wang
We present Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighting the data of outlier source classes for training both the source classifier and the domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space.
Ranked #3 on Partial Domain Adaptation on DomainNet
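The down-weighting step is simple enough to sketch directly: average the classifier's softmax predictions over target data, so probability mass concentrates on shared classes and outlier source classes receive small weights (a minimal PyTorch rendering of that idea):

```python
import torch

def source_class_weights(target_logits):
    """Average the classifier's predictions over target examples; classes
    that receive little probability mass are likely outlier source classes
    and get small weights for training the classifier and domain adversary.
    target_logits: (num_target_examples, num_source_classes)."""
    probs = torch.softmax(target_logits, dim=1)
    weights = probs.mean(dim=0)        # average predicted probability per class
    return weights / weights.max()     # normalize so the largest weight is 1
```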
1 code implementation • IJCAI 2018 • Ziru Xu, Yunbo Wang, Mingsheng Long, Jian-Min Wang
Predicting future frames in videos remains a challenging and unsolved problem.
Ranked #3 on Pose Prediction on Filtered NTU RGB+D
no code implementations • CVPR 2018 • Yue Cao, Mingsheng Long, Bin Liu, Jian-Min Wang
Due to its computation efficiency and retrieval quality, hashing has been widely applied to approximate nearest neighbor search for large-scale image retrieval, while deep hashing further improves the retrieval quality by end-to-end representation learning and hash coding.
no code implementations • CVPR 2018 • Yue Cao, Bin Liu, Mingsheng Long, Jian-Min Wang
The main idea is to augment the training data with nearly real images synthesized from a new Pair Conditional Wasserstein GAN (PC-WGAN) conditioned on the pairwise similarity information.
6 code implementations • ICML 2018 • Yunbo Wang, Zhifeng Gao, Mingsheng Long, Jian-Min Wang, Philip S. Yu
We present PredRNN++, an improved recurrent network for video predictive learning.
Ranked #1 on Video Prediction on KTH (Cond metric)
no code implementations • 13 Dec 2017 • Zhangjie Cao, Mingsheng Long, Chao Huang, Jian-Min Wang
Existing work on deep hashing assumes that the database in the target domain is identically distributed with the training set in the source domain.
no code implementations • NeurIPS 2017 • Yunbo Wang, Mingsheng Long, Jian-Min Wang, Zhifeng Gao, Philip S. Yu
The core of this network is a new Spatiotemporal LSTM (ST-LSTM) unit that extracts and memorizes spatial and temporal representations simultaneously.
Ranked #6 on Video Prediction on Human3.6M
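A heavily simplified PyTorch sketch of such a unit is below: alongside the usual temporal cell state c, a spatial memory m is updated with its own gates, and both are fused into the hidden state through a 1x1 convolution. The gate wiring and sizes here are our simplification, not the paper's exact equations:

```python
import torch
import torch.nn as nn

class SimpleSTLSTMCell(nn.Module):
    """Simplified sketch of the Spatiotemporal LSTM idea: a temporal cell
    state c and a spatial memory m are updated jointly and fused into h."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.conv_xh = nn.Conv2d(in_ch + hid_ch, 7 * hid_ch, k, padding=p)
        self.conv_m = nn.Conv2d(hid_ch, 3 * hid_ch, k, padding=p)
        self.fuse = nn.Conv2d(2 * hid_ch, hid_ch, 1)  # 1x1 fusion of c and m

    def forward(self, x, h, c, m):
        gates = self.conv_xh(torch.cat([x, h], dim=1))
        i, f, g, i_m, f_m, g_m, o = torch.chunk(gates, 7, dim=1)
        i2, f2, g2 = torch.chunk(self.conv_m(m), 3, dim=1)
        # temporal memory update (standard LSTM-style)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        # spatial memory update with its own gates
        m = torch.sigmoid(f_m + f2) * m + torch.sigmoid(i_m + i2) * torch.tanh(g_m + g2)
        # hidden state fuses both memories
        h = torch.sigmoid(o) * torch.tanh(self.fuse(torch.cat([c, m], dim=1)))
        return h, c, m
```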
no code implementations • 27 Oct 2017 • Rong Kang, Chen Wang, Peng Wang, Yuting Ding, Jian-Min Wang
Hence, we formulate a new problem, called "fine-grained pattern matching", which allows users to specify varied granularities of matching deviation for different segments of a given pattern, and fuzzy regions for adaptive determination of breakpoints between consecutive segments.
no code implementations • CVPR 2018 • Zhangjie Cao, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
Existing domain adversarial networks assume fully shared label space across domains.
no code implementations • CVPR 2017 • Yue Cao, Mingsheng Long, Jian-Min Wang, Shichen Liu
This paper presents a compact coding solution focused on the deep learning-to-quantization approach, which improves retrieval quality through end-to-end representation learning and compact encoding, and has already shown performance superior to hashing solutions for similarity retrieval.
1 code implementation • Proceedings of the VLDB Endowment 2017 • Aoqian Zhang, Shaoxu Song, Jian-Min Wang, Philip S. Yu
Instead of simply discarding anomalies, we propose to (iteratively) repair them in time series data by combining the temporal nature of anomaly detection with the widely adopted minimum-change principle in data repairing.
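As a toy illustration of repairing under the minimum-change principle, consider a maximum-speed constraint: any point that changes faster than allowed is clamped to the nearest admissible value, which is exactly the smallest possible repair (a one-pass sketch; the published algorithm is iterative and more refined):

```python
def repair_by_speed_constraint(ts, smax):
    """Repair rather than discard: if a point jumps faster than `smax`
    relative to the previous (already repaired) point, move it just inside
    the allowed range. `ts` is a list of (timestamp, value) pairs."""
    repaired = [ts[0]]
    for t, v in ts[1:]:
        t0, v0 = repaired[-1]
        lo, hi = v0 - smax * (t - t0), v0 + smax * (t - t0)
        repaired.append((t, min(max(v, lo), hi)))  # clamping = minimum change
    return repaired

# Example: the spike at t=2 is pulled back inside the admissible band.
print(repair_by_speed_constraint([(0, 1.0), (1, 1.1), (2, 9.0), (3, 1.3)], smax=0.5))
```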
5 code implementations • NeurIPS 2018 • Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Michael I. Jordan
Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation.
Ranked #5 on Domain Adaptation on USPS-to-MNIST
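The conditioning trick at the heart of this approach is compact enough to sketch: the domain discriminator is fed the outer product of features and classifier predictions rather than features alone, so adversarial alignment is conditioned on discriminative class information (minimal PyTorch, omitting the randomized variant used for large feature dimensions):

```python
import torch

def multilinear_conditioning(features, predictions):
    """Build discriminator inputs conditioned on class predictions.
    features: (batch, d) deep features; predictions: (batch, num_classes)
    softmax outputs. Returns (batch, d * num_classes) tensors."""
    outer = torch.bmm(predictions.unsqueeze(2), features.unsqueeze(1))  # (B, C, d)
    return outer.view(features.size(0), -1)
```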
2 code implementations • ICCV 2017 • Zhangjie Cao, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality.
4 code implementations • ICML 2017 • Mingsheng Long, Han Zhu, Jian-Min Wang, Michael I. Jordan
Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain.
Ranked #2 on Domain Adaptation on HMDBfull-to-UCF
Multi-Source Unsupervised Domain Adaptation • Transfer Learning
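The discrepancy being minimized throughout this line of work can be illustrated with a single-kernel maximum mean discrepancy between source and target features (the ICML 2017 paper aligns a joint, multi-layer variant that this sketch does not reproduce):

```python
import torch

def gaussian_mmd(xs, xt, sigma=1.0):
    """Biased single-kernel MMD estimate between source features `xs`
    and target features `xt`, each of shape (batch, d). Fine as an
    illustration; multi-kernel and joint variants are used in practice."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(xs, xs).mean() + k(xt, xt).mean() - 2 * k(xs, xt).mean()
```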
no code implementations • 22 Feb 2016 • Yue Cao, Mingsheng Long, Jian-Min Wang, Philip S. Yu
This paper presents a Correlation Hashing Network (CHN) approach to cross-modal hashing, which jointly learns good data representation tailored to hash coding and formally controls the quantization error.
2 code implementations • NeurIPS 2016 • Mingsheng Long, Han Zhu, Jian-Min Wang, Michael I. Jordan
In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain.
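One way to jointly learn adaptive classifiers is the residual parameterization this NeurIPS 2016 paper proposes: model the source classifier as the target classifier plus a small learned residual, fS(x) = fT(x) + df(fT(x)). A minimal PyTorch sketch (layer sizes are our assumptions):

```python
import torch.nn as nn

class ResidualClassifier(nn.Module):
    """Source classifier as target classifier plus a learned residual,
    so the two can differ without being learned independently."""
    def __init__(self, num_classes, hidden=128):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, target_logits):
        # source prediction = target prediction + residual correction
        return target_logits + self.residual(target_logits)
```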
no code implementations • NeurIPS 2017 • Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Philip S. Yu
Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks.
no code implementations • CVPR 2015 • Zijia Lin, Guiguang Ding, Mingqing Hu, Jian-Min Wang
With benefits of low storage costs and high query speeds, hashing methods are widely researched for efficiently retrieving large-scale data, which commonly contains multiple views, e.g., a news report with images, videos and texts.
5 code implementations • 10 Feb 2015 • Mingsheng Long, Yue Cao, Jian-Min Wang, Michael I. Jordan
Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation.
Ranked #3 on Domain Adaptation on Synth Digits-to-SVHN
no code implementations • CVPR 2014 • Mingsheng Long, Jian-Min Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu
Visual domain adaptation, which learns an accurate classifier for a new domain using labeled images from an old domain, has shown promising value in computer vision yet remains a challenging problem.
no code implementations • CVPR 2013 • Mingsheng Long, Guiguang Ding, Jian-Min Wang, Jiaguang Sun, Yuchen Guo, Philip S. Yu
In this paper, we propose a Transfer Sparse Coding (TSC) approach to construct robust sparse representations for classifying cross-distribution images accurately.
no code implementations • CVPR 2013 • Zijia Lin, Guiguang Ding, Mingqing Hu, Jian-Min Wang, Xiaojun Ye
Though widely utilized for facilitating image management, user-provided image tags are usually incomplete and insufficient to describe the full semantic content of the corresponding images, resulting in performance degradation in tag-dependent applications and thus necessitating effective tag completion methods.