1 code implementation • ECCV 2020 • Bo Fu, Zhangjie Cao, Mingsheng Long, Jian-Min Wang
The new transferability measure accurately quantifies the inclination of a target example toward the open classes.
Ranked #5 on Universal Domain Adaptation on DomainNet
no code implementations • 16 Oct 2023 • Lanxiang Xing, Haixu Wu, Yuezhou Ma, Jianmin Wang, Mingsheng Long
Inspired by the Helmholtz theorem, we design a HelmDynamic block to learn the Helmholtz dynamics, which decomposes fluid dynamics into more solvable curl-free and divergence-free parts, physically corresponding to potential and stream functions of fluid.
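For intuition, the classical Helmholtz decomposition that motivates this block is easy to compute on a periodic grid: project the velocity field onto the wavevector direction in Fourier space to get the curl-free part, and the remainder is divergence-free. Below is a minimal numpy sketch of that classical decomposition, not the paper's learned HelmDynamic block, and it assumes a periodic domain.

```python
import numpy as np

def helmholtz_split(u, v):
    """Classical Helmholtz decomposition of a periodic 2D velocity field
    (u, v): project onto the wavevector direction in Fourier space to get
    the curl-free part; the remainder is divergence-free."""
    ny, nx = u.shape
    KX, KY = np.meshgrid(2 * np.pi * np.fft.fftfreq(nx),
                         2 * np.pi * np.fft.fftfreq(ny))
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                             # avoid 0/0 at the mean mode
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    proj = (KX * u_hat + KY * v_hat) / k2      # component along the wavevector
    u_cf = np.real(np.fft.ifft2(proj * KX))    # curl-free (potential) part
    v_cf = np.real(np.fft.ifft2(proj * KY))
    return (u_cf, v_cf), (u - u_cf, v - v_cf)  # second pair is divergence-free

# Toy check: a divergence-free field should land entirely in the second part.
n = 64
y, x = np.meshgrid(np.linspace(0, 2 * np.pi, n, endpoint=False),
                   np.linspace(0, 2 * np.pi, n, endpoint=False), indexing="ij")
u, v = -np.sin(y), np.sin(x)                   # divergence-free by construction
(cf_u, cf_v), _ = helmholtz_split(u, v)
print(np.abs(cf_u).max(), np.abs(cf_v).max())  # both close to zero
```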
4 code implementations • 10 Oct 2023 • Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, Mingsheng Long
These forecasters leverage Transformers to model the global dependencies over temporal tokens of time series, with each token formed by multiple variates of the same timestamp.
no code implementations • 6 Oct 2023 • Xingzhuo Guo, Junwei Pan, Ximei Wang, Baixu Chen, Jie Jiang, Mingsheng Long
Recent advances in deep foundation models have led to a promising trend of developing large recommendation models to leverage vast amounts of available data.
no code implementations • 30 Sep 2023 • Haoyu Ma, Jialong Wu, Ningya Feng, Jianmin Wang, Mingsheng Long
Model-based reinforcement learning (MBRL) holds the promise of sample-efficient learning by utilizing a world model, which models how the environment works and typically encompasses components for two tasks: observation modeling and reward modeling.
no code implementations • 19 May 2023 • Kaichao You, Anchang Bao, Guo Qin, Meng Cao, Ping Huang, Jiulong Shan, Mingsheng Long
Convolution-BatchNorm (ConvBN) blocks are integral components in various computer vision tasks and other domains.
1 code implementation • 2 Feb 2023 • Yang Shu, Xingzhuo Guo, Jialong Wu, Ximei Wang, Jianmin Wang, Mingsheng Long
This paper aims at generalizing CLIP to out-of-distribution test data on downstream tasks.
1 code implementation • 30 Jan 2023 • Haixu Wu, Tengge Hu, Huakun Luo, Jianmin Wang, Mingsheng Long
A burgeoning paradigm is learning neural operators to approximate the input-output mappings of PDEs.
1 code implementation • 13 Nov 2022 • Yiwen Qiu, Jialong Wu, Zhangjie Cao, Mingsheng Long
Existing imitation learning works mainly assume that the demonstrator who collects demonstrations shares the same dynamics as the imitator.
2 code implementations • 5 Oct 2022 • Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, Mingsheng Long
TimesBlock can discover the multi-periodicity adaptively and extract the complex temporal variations from transformed 2D tensors by a parameter-efficient inception block.
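To make the periodicity idea concrete, here is a simplified numpy sketch: estimate the dominant periods from FFT amplitudes, then fold the 1D series into a 2D tensor whose two axes separate inter-period and intra-period variation. This illustrates only the period discovery and reshaping, not the full TimesBlock, and the helper names are mine.

```python
import numpy as np

def top_periods(x, k=2):
    """Estimate dominant periods of a 1D series from FFT amplitudes,
    in the spirit of TimesNet's period discovery (simplified)."""
    amps = np.abs(np.fft.rfft(x))
    amps[0] = 0.0                        # ignore the DC component
    freqs = np.argsort(amps)[-k:]        # the k strongest frequency indices
    return [len(x) // f for f in freqs if f > 0]

def fold_2d(x, period):
    """Reshape the 1D series into a (num_periods, period) 2D tensor, so the
    two axes separate inter-period and intra-period variation."""
    n = (len(x) // period) * period      # truncate the ragged tail
    return x[:n].reshape(-1, period)

t = np.arange(256)
x = np.sin(2 * np.pi * t / 16) + 0.3 * np.sin(2 * np.pi * t / 64)
for p in top_periods(x):
    print(p, fold_2d(x, p).shape)        # recovers periods 64 and 16
```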
no code implementations • 13 Jun 2022 • Zhiyu Yao, Xinyang Chen, Sinan Wang, Qinyan Dai, Yumeng Li, Tanchao Zhu, Mingsheng Long
We conclude this characteristic for sequential behaviors of each user as the Behavior Pathway.
no code implementations • 8 Jun 2022 • Yang Shu, Zhangjie Cao, Ziyang Zhang, Jianmin Wang, Mingsheng Long
The proposed framework can be trained end-to-end with the target task-specific loss, where it learns to explore better pathway configurations and exploit the knowledge in pre-trained models for each target datum.
1 code implementation • 28 May 2022 • Yong Liu, Haixu Wu, Jianmin Wang, Mingsheng Long
However, their performance can degenerate terribly on non-stationary real-world data in which the joint distribution changes over time.
1 code implementation • CVPR 2022 • Geng Chen, Wendong Zhang, Han Lu, Siyu Gao, Yunbo Wang, Mingsheng Long, Xiaokang Yang
Can we develop predictive learning algorithms that can deal with more realistic, non-stationary physical environments?
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, Huawei Shen, Hui Zhang, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan Yao, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, Liwei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
1 code implementation • 14 Mar 2022 • Zhangjie Cao, Kaichao You, Ziyang Zhang, Jianmin Wang, Mingsheng Long
Still, the common requirement of identical class space shared across domains hinders applications of domain adaptation to partial-set domains.
1 code implementation • 15 Feb 2022 • Baixu Chen, Junguang Jiang, Ximei Wang, Pengfei Wan, Jianmin Wang, Mingsheng Long
Yet such datasets are time-consuming and labor-intensive to obtain for realistic tasks.
1 code implementation • 13 Feb 2022 • Haixu Wu, Jialong Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long
By respectively conserving the incoming flow of sinks for source competition and the outgoing flow of sources for sink allocation, Flow-Attention inherently generates informative attentions without using specific inductive biases.
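As a rough illustration of the conservation idea, treat each query as a sink receiving flow and each key as a source emitting flow, then reweight the aggregation by a competition term over sources and an allocation term over sinks. The numpy sketch below follows that spirit only; the official Flowformer implementation differs in details such as the feature map, clamping, and scaling, so treat this as an assumption-laden approximation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def flow_attention(Q, K, V):
    """Flow-conserving linear attention sketch: queries are sinks, keys are
    sources; conservation induces competition among sources and allocation
    among sinks. Approximate, not the official Flowformer code."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))       # non-negative feature map
    q, k = sig(Q), sig(K)
    incoming = q @ k.sum(axis=0)                   # flow into each sink
    outgoing = k @ q.sum(axis=0)                   # flow out of each source
    # Re-measure each side's flow after normalizing the other side.
    conserved_in = q @ (k / outgoing[:, None]).sum(axis=0)
    conserved_out = k @ (q / incoming[:, None]).sum(axis=0)
    allocation = sig(conserved_in)                 # how much each sink receives
    competition = softmax(conserved_out) * len(K)  # sources compete for flow
    out = (q / incoming[:, None]) @ (k.T @ (V * competition[:, None]))
    return out * allocation[:, None]

rng = np.random.default_rng(0)
out = flow_attention(rng.normal(size=(8, 16)),
                     rng.normal(size=(8, 16)),
                     rng.normal(size=(8, 16)))
print(out.shape)  # (8, 16)
```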
3 code implementations • 13 Feb 2022 • Jialong Wu, Haixu Wu, Zihan Qiu, Jianmin Wang, Mingsheng Long
Policy constraint methods for offline reinforcement learning (RL) typically utilize parameterization or regularization that constrains the policy to perform actions within the support set of the behavior policy.
1 code implementation • 15 Jan 2022 • Junguang Jiang, Yang Shu, Jianmin Wang, Mingsheng Long
The success of deep learning algorithms generally depends on large-scale data, while humans appear to have an inherent ability for knowledge transfer: they recognize and apply relevant knowledge from previous learning experiences when encountering and solving unseen tasks.
1 code implementation • 20 Oct 2021 • Kaichao You, Yong Liu, Ziyang Zhang, Jianmin Wang, Michael I. Jordan, Mingsheng Long
(2) The best-ranked PTM can either be fine-tuned and deployed, if we have no preference for the model's architecture, or the target PTM can be tuned by the top-$K$ ranked PTMs via a Bayesian procedure that we propose.
no code implementations • 14 Oct 2021 • Yang Shu, Zhangjie Cao, Jinghan Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long
While pre-training and meta-training can create deep models powerful for few-shot generalization, we find that pre-training and meta-training focus respectively on cross-domain transferability and cross-task transferability, which restricts their data efficiency in the entangled settings of domain shift and task shift.
no code implementations • ICLR 2022 • Ximei Wang, Xinyang Chen, Jianmin Wang, Mingsheng Long
To combine the power of both worlds, we propose a novel X-model that simultaneously encourages invariance to data stochasticity and model stochasticity.
1 code implementation • 8 Oct 2021 • Zhiyu Yao, Yunbo Wang, Haixu Wu, Jianmin Wang, Mingsheng Long
To this end, we propose ModeRNN, which introduces a novel method to learn structured hidden representations between recurrent states.
2 code implementations • ICLR 2022 • Junguang Jiang, Baixu Chen, Jianmin Wang, Mingsheng Long
Besides, previous methods focused on category adaptation but ignored another important part of object detection, i.e., the adaptation of bounding box regression.
3 code implementations • ICLR 2022 • Jiehui Xu, Haixu Wu, Jianmin Wang, Mingsheng Long
Unsupervised detection of anomaly points in time series is a challenging problem, which requires the model to derive a distinguishable criterion.
no code implementations • 29 Jun 2021 • Yang Shu, Zhi Kou, Zhangjie Cao, Jianmin Wang, Mingsheng Long
We propose Zoo-Tuning to address these challenges, which learns to adaptively transfer the parameters of pretrained models to the target task.
1 code implementation • NeurIPS 2021 • Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long
Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism.
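The Auto-Correlation mechanism scores time delays by series autocorrelation, which can be computed efficiently with the FFT (Wiener-Khinchin theorem). Below is a small numpy sketch of that delay scoring, simplified relative to the paper's sub-series rolling and aggregation.

```python
import numpy as np

def autocorrelation(x):
    """Autocorrelation of a series via FFT (Wiener-Khinchin): the inverse
    transform of the power spectrum, normalized by the zero-lag value."""
    f = np.fft.rfft(x - x.mean())
    acf = np.fft.irfft(f * np.conj(f), n=len(x))
    return acf / acf[0]

def top_delays(x, k=3):
    """Return the k lags with the highest autocorrelation (circular lags)."""
    acf = autocorrelation(x)
    acf[0] = -np.inf                     # exclude the trivial zero delay
    return np.argsort(acf)[-k:][::-1]

t = np.arange(200)
x = np.sin(2 * np.pi * t / 25) + 0.1 * np.random.default_rng(0).normal(size=200)
print(top_delays(x))  # lags near multiples of the true period 25
```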
no code implementations • CVPR 2021 • Bo Fu, Zhangjie Cao, Jianmin Wang, Mingsheng Long
Due to the domain shift, the query selection criteria of prior active learning methods may be ineffective to select the most informative target samples for annotation.
no code implementations • CVPR 2021 • Chao Huang, Zhangjie Cao, Yunbo Wang, Jianmin Wang, Mingsheng Long
It is a challenging problem due to the substantial geometry shift from simulated to real data: most existing 3D models underperform because they overfit the complete geometries in the source domain.
no code implementations • CVPR 2021 • Yang Shu, Zhangjie Cao, Chenyu Wang, Jianmin Wang, Mingsheng Long
Leveraging available datasets to learn a model that generalizes well to unseen domains is important for computer vision, especially when the unseen domain's annotated data are unavailable.
3 code implementations • 17 Mar 2021 • Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long
This paper models these structures by presenting PredRNN, a new recurrent network, in which a pair of memory cells are explicitly decoupled, operate in nearly independent transition manners, and finally form unified representations of the complex environment.
Ranked #1 on Video Prediction on KTH (Cond metric)
2 code implementations • CVPR 2021 • Junguang Jiang, Yifei Ji, Ximei Wang, Yufeng Liu, Jianmin Wang, Mingsheng Long
First, based on our observation that the probability density of the output space is sparse, we introduce a spatial probability distribution to describe this sparsity and then use it to guide the learning of the adversarial regressor.
1 code implementation • NeurIPS 2021 • Hong Liu, Jianmin Wang, Mingsheng Long
In the forward step, CST generates target pseudo-labels with a source-trained classifier.
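A toy version of such a forward step takes only a few lines: fit a classifier on source data, predict on target data, and keep the predictions as pseudo-labels. The sketch below uses scikit-learn as a stand-in classifier and adds a confidence threshold, which is a generic self-training heuristic rather than CST's exact procedure; CST's distinguishing reverse step is only noted in a comment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 8))                      # toy source features
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)          # toy source labels
Xt = rng.normal(loc=0.5, size=(100, 8))             # shifted target features

# Forward step: the source-trained classifier labels the target data.
clf = LogisticRegression().fit(Xs, ys)
probs = clf.predict_proba(Xt)
conf, pseudo = probs.max(axis=1), probs.argmax(axis=1)
keep = conf > 0.8                                   # generic confidence filter
print(f"kept {keep.sum()} / {len(Xt)} pseudo-labeled target examples")
# Reverse step (CST's distinguishing idea, not shown): train a classifier on
# (Xt[keep], pseudo[keep]) and require it to remain accurate on the source.
```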
1 code implementation • CVPR 2021 • Haixu Wu, Zhiyu Yao, Jianmin Wang, Mingsheng Long
With high flexibility, this framework can adapt to a series of models for deterministic spatiotemporal prediction.
2 code implementations • 25 Feb 2021 • Ximei Wang, Jinghan Gao, Mingsheng Long, Jianmin Wang
Deep learning has made revolutionary advances to diverse applications in the presence of large-scale labeled datasets.
1 code implementation • 22 Feb 2021 • Kaichao You, Yong Liu, Jianmin Wang, Mingsheng Long
In pursuit of a practical assessment method, we propose to estimate the maximum value of label evidence given features extracted by pre-trained models.
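Here "label evidence" is the marginal likelihood of a Bayesian linear model on the frozen features, maximized over the prior and noise precisions. Below is a simplified sketch for a scalar regression target, following the usual fixed-point updates for evidence maximization; the official LogME code handles multiple output dimensions and is numerically hardened.

```python
import numpy as np

def logme(F, y, iters=100):
    """Simplified LogME-style evidence for a scalar target: y = F w + noise,
    Gaussian prior on w with precision alpha, noise precision beta, both
    updated by fixed-point iteration. The official code is more careful."""
    n, d = F.shape
    u, s, _ = np.linalg.svd(F, full_matrices=False)
    z = u.T @ y                                   # labels in the singular basis
    alpha, beta = 1.0, 1.0
    for _ in range(iters):
        lam = alpha + beta * s**2                 # posterior precision spectrum
        m2 = np.sum((beta * s * z / lam) ** 2)    # squared norm of posterior mean
        res = np.sum(y**2) - np.sum(z**2) + np.sum((alpha * z / lam) ** 2)
        gamma = np.sum(beta * s**2 / lam)         # effective degrees of freedom
        alpha, beta = gamma / m2, (n - gamma) / res
    lam = alpha + beta * s**2
    m2 = np.sum((beta * s * z / lam) ** 2)
    res = np.sum(y**2) - np.sum(z**2) + np.sum((alpha * z / lam) ** 2)
    evidence = (n * np.log(beta) + d * np.log(alpha) - n * np.log(2 * np.pi)
                - beta * res - alpha * m2 - np.sum(np.log(lam))) / 2
    return evidence / n                           # higher suggests better transfer

rng = np.random.default_rng(0)
F = rng.normal(size=(500, 32))                    # pretend pre-trained features
y = F @ rng.normal(size=32) + 0.1 * rng.normal(size=500)
print(logme(F, y))
```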
2 code implementations • NeurIPS 2020 • Zhi Kou, Kaichao You, Mingsheng Long, Jianmin Wang
During training, two branches are stochastically selected to avoid over-depending on some sample statistics, resulting in a strong regularization effect, which we interpret as "architecture regularization."
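A per-channel sketch of such two-branch selection, choosing between mini-batch statistics and running statistics at each step; this is my simplification, assuming per-channel Bernoulli selection, and details such as how the running statistics are updated are omitted.

```python
import numpy as np

def stoch_norm(x, running_mean, running_var, p=0.5, eps=1e-5, training=True):
    """Two-branch normalization sketch: per channel, normalize by either
    mini-batch statistics or running statistics, selected at random, so the
    network cannot over-depend on one kind of statistics."""
    if not training:
        return (x - running_mean) / np.sqrt(running_var + eps)
    batch_mean, batch_var = x.mean(axis=0), x.var(axis=0)
    use_batch = np.random.rand(x.shape[1]) < p     # per-channel branch choice
    mean = np.where(use_batch, batch_mean, running_mean)
    var = np.where(use_batch, batch_var, running_var)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(size=(32, 4))
print(stoch_norm(x, np.zeros(4), np.ones(4)).shape)  # (32, 4)
```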
2 code implementations • NeurIPS 2020 • Kaichao You, Zhi Kou, Mingsheng Long, Jianmin Wang
Fine-tuning pre-trained deep neural networks (DNNs) to a target dataset, also known as transfer learning, is widely used in computer vision and NLP.
Ranked #1 on Transfer Learning on COCO70
1 code implementation • NeurIPS 2020 • Hong Liu, Mingsheng Long, Jianmin Wang, Yu Wang
(2) Since the target data arrive online, the agent should also maintain competence on previous target domains, i.e., to adapt without forgetting.
no code implementations • 12 Nov 2020 • Jincheng Zhong, Ximei Wang, Zhi Kou, Jianmin Wang, Mingsheng Long
It is common within the deep learning community to first pre-train a deep neural network from a large-scale dataset and then fine-tune the pre-trained model to a specific downstream task.
1 code implementation • ICML 2020 • Zhiyu Yao, Yunbo Wang, Mingsheng Long, Jian-Min Wang
This paper explores a new research problem of unsupervised transfer learning across multiple spatiotemporal prediction tasks.
no code implementations • 14 Aug 2020 • Yuchen Zhang, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
Finally, we further extend the localized discrepancies for achieving super transfer and derive generalization bounds that could be even more sample-efficient on source domain.
no code implementations • NeurIPS 2020 • Ximei Wang, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
In this paper, we delve into the open problem of Calibration in DA, which is extremely challenging due to the coexistence of domain shift and the lack of target labels.
1 code implementation • ECCV 2020 • Bin Liu, Yue Cao, Yutong Lin, Qi Li, Zheng Zhang, Mingsheng Long, Han Hu
This paper introduces a negative margin loss to metric learning based few-shot learning methods.
3 code implementations • ECCV 2020 • Ying Jin, Ximei Wang, Mingsheng Long, Jian-Min Wang
It can be characterized as (1) a non-adversarial DA method without explicitly deploying domain alignment, enjoying faster convergence speed; (2) a versatile approach that can handle four existing scenarios: Closed-Set, Partial-Set, Multi-Source, and Multi-Target DA, outperforming the state-of-the-art methods in these scenarios, especially on one of the largest and hardest datasets to date (7.3% on DomainNet).
Ranked #3 on Multi-target Domain Adaptation on DomainNet
1 code implementation • 8 Dec 2019 • Zhiyu Yao, Yunbo Wang, Jianmin Wang, Philip S. Yu, Mingsheng Long
This paper introduces video domain generalization, where most video classification networks degenerate due to the lack of exposure to target domains with divergent distributions.
1 code implementation • NeurIPS 2019 • Ximei Wang, Ying Jin, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
Deep neural networks (DNNs) excel at learning representations when trained on large-scale datasets.
2 code implementations • NeurIPS 2019 • Xinyang Chen, Sinan Wang, Bo Fu, Mingsheng Long, Jian-Min Wang
Before sufficient training data is available, fine-tuning neural networks pre-trained on large-scale datasets substantially outperforms training from random initialization.
no code implementations • 26 Sep 2019 • Hong Liu, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
3) The feasibility of transferability is related to the similarity of both input and label.
no code implementations • ICLR 2020 • Kaichao You, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
Despite the popularity of these common beliefs, experiments suggest that they are insufficient in explaining the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex.
no code implementations • IEEE International Conference on Multimedia and Expo (ICME) 2019 • Jianjin Zhang, Yunbo Wang, Mingsheng Long, Jianmin Wang, Philip S. Yu
First, we propose a new RNN architecture for modeling the deterministic dynamics, which updates hidden states along a z-order curve to enhance the consistency of the features of mirrored layers.
Ranked #1 on Video Prediction on KTH (Cond metric)
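A z-order curve orders grid cells by interleaving the bits of their coordinates, so cells that are close in 2D tend to stay close in the 1D visiting order; this locality is what an update schedule along the curve exploits. A small self-contained sketch:

```python
def morton_index(x, y, bits=16):
    """Interleave the bits of (x, y) to get the z-order (Morton) index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)         # x bits go to even positions
        z |= ((y >> i) & 1) << (2 * i + 1)     # y bits go to odd positions
    return z

# Visit a 4x4 grid in z-order: the sequence traces nested 'Z' shapes.
cells = sorted(((cx, cy) for cx in range(4) for cy in range(4)),
               key=lambda c: morton_index(*c))
print(cells)  # (0,0), (1,0), (0,1), (1,1), (2,0), (3,0), ...
```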
2 code implementations • International Conference on Machine Learning 2019 • Xinyang Chen, Sinan Wang, Mingsheng Long, Jianmin Wang
In this paper, a series of experiments based on spectral analysis of the feature representations have been conducted, revealing an unexpected deterioration of the discriminability while learning transferable features adversarially.
2 code implementations • International Conference on Machine Learning 2019 • Kaichao You, Ximei Wang, Mingsheng Long, Michael I. Jordan
Deep unsupervised domain adaptation (Deep UDA) methods successfully leverage rich labeled data in a source domain to boost the performance on related but unlabeled data in a target domain.
3 code implementations • ICLR 2019 • Yunbo Wang, Lu Jiang, Ming-Hsuan Yang, Li-Jia Li, Mingsheng Long, Li Fei-Fei
We first evaluate the E3D-LSTM network on widely-used future video prediction datasets and achieve the state-of-the-art performance.
Ranked #1 on Video Prediction on KTH (Cond metric)
5 code implementations • 11 Apr 2019 • Yuchen Zhang, Tianle Liu, Mingsheng Long, Michael I. Jordan
We introduce Margin Disparity Discrepancy, a novel measurement with rigorous generalization bounds, tailored to the distribution comparison with the asymmetric margin loss, and to the minimax optimization for easier training.
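For reference, the measurement can be stated compactly (notation reconstructed from memory of the paper, so treat the exact symbols as indicative): given a scoring hypothesis f, a hypothesis class F, and a margin rho, the margin disparity discrepancy between source distribution P and target distribution Q is

```latex
d_{f,\mathcal{F}}^{(\rho)}(P, Q)
  = \sup_{f' \in \mathcal{F}}
    \left( \mathrm{disp}_Q^{(\rho)}(f', f) - \mathrm{disp}_P^{(\rho)}(f', f) \right),
\qquad
\mathrm{disp}_D^{(\rho)}(f', f)
  = \mathbb{E}_{x \sim D}\, \Phi_\rho\!\big( \rho_{f'}(x, h_f(x)) \big),
```

where h_f is the labeling function induced by f, rho_{f'} is the margin of f', and Phi_rho is the margin loss; training minimizes this discrepancy against an adversarially chosen f', which yields the minimax optimization mentioned above.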
1 code implementation • CVPR 2019 • Zhangjie Cao, Kaichao You, Mingsheng Long, Jian-Min Wang, Qiang Yang
Under the condition that target labels are unknown, the key challenge of PDA is how to transfer relevant examples in the shared classes to promote positive transfer, and ignore irrelevant ones in the specific classes to mitigate negative transfer.
Ranked #4 on Partial Domain Adaptation on ImageNet-Caltech
no code implementations • CVPR 2017 • Yunbo Wang, Mingsheng Long, Jian-Min Wang, Philip S. Yu
From the technical perspective, we introduce the spatiotemporal compact bilinear operator into video analysis tasks.
1 code implementation • 14 Feb 2019 • Binhang Yuan, Chen Wang, Chen Luo, Fei Jiang, Mingsheng Long, Philip S. Yu, Yu-An Liu
Quick detection of blade ice accretion is crucial for the maintenance of wind farms.
1 code implementation • 1 Feb 2019 • Bin Liu, Yue Cao, Mingsheng Long, Jian-Min Wang, Jingdong Wang
We propose Deep Triplet Quantization (DTQ), a novel approach to learning deep quantization models from the similarity triplets.
Ranked #1 on Image Retrieval on NUS-WIDE
no code implementations • NeurIPS 2018 • Shichen Liu, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
A technical challenge of deep learning is recognizing target classes without seen data.
no code implementations • 20 Nov 2018 • Zhiyu Yao, Yunbo Wang, Mingsheng Long, Jian-Min Wang, Philip S. Yu, Jiaguang Sun
Rev2Net is shown to be effective on the classic action recognition task.
4 code implementations • CVPR 2019 • Yunbo Wang, Jianjin Zhang, Hongyu Zhu, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Natural spatiotemporal processes can be highly non-stationary in many ways, e.g., the low-level non-stationarity such as spatial correlations or temporal dependencies of local pixel values; and the high-level variations such as the accumulation, deformation or dissipation of radar echoes in precipitation forecasting.
Ranked #5 on Video Prediction on Human3.6M
1 code implementation • 4 Sep 2018 • Zhangjie Cao, Ziping Sun, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Deep hashing enables image retrieval by end-to-end learning of deep representations and hash codes from training data with pairwise similarity information.
4 code implementations • 4 Sep 2018 • Zhongyi Pei, Zhangjie Cao, Mingsheng Long, Jian-Min Wang
Recent advances in deep domain adaptation reveal that adversarial learning can be embedded into deep networks to learn transferable features that reduce distribution discrepancy between the source and target domains.
Ranked #25 on Domain Adaptation on Office-31
no code implementations • ECCV 2018 • Yue Cao, Bin Liu, Mingsheng Long, Jian-Min Wang
Extensive experiments demonstrate that CMHH can generate highly concentrated hash codes and achieve state-of-the-art cross-modal retrieval performance for both hash lookups and linear scan scenarios on three benchmark datasets, NUS-WIDE, MIRFlickr-25K, and IAPR TC-12.
2 code implementations • ECCV 2018 • Zhangjie Cao, Lijia Ma, Mingsheng Long, Jian-Min Wang
We present Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighing the data of outlier source classes for training both source classifier and domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space.
Ranked #3 on Partial Domain Adaptation on DomainNet
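The down-weighing can be sketched in a few lines: average the source classifier's predictions on unlabeled target data and use the normalized per-class averages as weights, so classes absent from the target receive near-zero weight. A toy version, normalizing by the maximum as I recall the paper doing (treat that detail as an assumption):

```python
import numpy as np

def pada_class_weights(target_probs):
    """Average the source classifier's predictions over unlabeled target data
    and normalize; outlier source classes get near-zero weight and are
    down-weighed in the classifier and domain-adversary losses."""
    w = target_probs.mean(axis=0)        # one weight per source class
    return w / w.max()                   # normalize so the largest weight is 1

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=500)   # toy target predictions
probs[:, 5:] *= 1e-3                           # pretend classes 5-9 are absent
probs /= probs.sum(axis=1, keepdims=True)
print(pada_class_weights(probs).round(3))      # near-zero weight on classes 5-9
```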
1 code implementation • Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18} 2018 • Ziru Xu, Yunbo Wang, Mingsheng Long, Jian-Min Wang
Predicting future frames in videos remains an unsolved but challenging problem.
Ranked #3 on Pose Prediction on Filtered NTU RGB+D
no code implementations • CVPR 2018 • Yue Cao, Bin Liu, Mingsheng Long, Jian-Min Wang
The main idea is to augment the training data with nearly real images synthesized from a new Pair Conditional Wasserstein GAN (PC-WGAN) conditioned on the pairwise similarity information.
no code implementations • CVPR 2018 • Yue Cao, Mingsheng Long, Bin Liu, Jian-Min Wang
Due to its computation efficiency and retrieval quality, hashing has been widely applied to approximate nearest neighbor search for large-scale image retrieval, while deep hashing further improves the retrieval quality by end-to-end representation learning and hash coding.
9 code implementations • ICML 2018 • Yunbo Wang, Zhifeng Gao, Mingsheng Long, Jian-Min Wang, Philip S. Yu
We present PredRNN++, an improved recurrent network for video predictive learning.
Ranked #1 on Video Prediction on KTH (Cond metric)
no code implementations • 13 Dec 2017 • Zhangjie Cao, Mingsheng Long, Chao Huang, Jian-Min Wang
Existing work on deep hashing assumes that the database in the target domain is identically distributed with the training set in the source domain.
no code implementations • NeurIPS 2017 • Yunbo Wang, Mingsheng Long, Jian-Min Wang, Zhifeng Gao, Philip S. Yu
The core of this network is a new Spatiotemporal LSTM (ST-LSTM) unit that extracts and memorizes spatial and temporal representations simultaneously.
Ranked #6 on Video Prediction on Human3.6M
no code implementations • CVPR 2018 • Zhangjie Cao, Mingsheng Long, Jian-Min Wang, Michael I. Jordan
Existing domain adversarial networks assume fully shared label space across domains.
no code implementations • CVPR 2017 • Yue Cao, Mingsheng Long, Jian-Min Wang, Shichen Liu
This paper presents a compact coding solution with a focus on the deep learning-to-quantization approach, which improves retrieval quality by end-to-end representation learning and compact encoding, and has already shown superior performance over hashing solutions for similarity retrieval.
5 code implementations • NeurIPS 2018 • Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Michael I. Jordan
Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation.
Ranked #6 on Domain Adaptation on USPS-to-MNIST
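In CDAN, a concrete instance of this conditioning feeds the domain discriminator the flattened outer product of the features f and the classifier predictions g, so alignment is conditioned on discriminative structure. A minimal sketch of that multilinear map:

```python
import numpy as np

def multilinear_map(features, predictions):
    """Flattened outer product of features f and classifier predictions g:
    the input CDAN feeds to its domain discriminator, so alignment is
    conditioned on the discriminative structure of the predictions."""
    return np.einsum('bi,bj->bij', features, predictions).reshape(len(features), -1)

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 256))                 # features, dimension d = 256
g = rng.dirichlet(np.ones(10), size=4)        # softmax predictions, c = 10 classes
print(multilinear_map(f, g).shape)            # (4, 2560) = (batch, d * c)
```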
2 code implementations • ICCV 2017 • Zhangjie Cao, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality.
no code implementations • 15 Aug 2016 • Zhangjie Cao, Mingsheng Long, Qiang Yang
Hashing has been widely applied to large-scale multimedia retrieval due to the storage and retrieval efficiency.
4 code implementations • ICML 2017 • Mingsheng Long, Han Zhu, Jian-Min Wang, Michael I. Jordan
Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain.
Ranked #2 on Domain Adaptation on HMDBfull-to-UCF
Multi-Source Unsupervised Domain Adaptation • Transfer Learning
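This line of work measures distribution discrepancy with (joint) maximum mean discrepancy; the basic two-sample MMD it builds on is short enough to sketch, using a single RBF kernel here, whereas the papers use multi-kernel and joint variants.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=0.1):
    """Squared maximum mean discrepancy between two samples with one RBF
    kernel: the basic two-sample criterion behind MMD-based adaptation
    losses (the papers use multi-kernel and joint variants)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(loc=0.0, size=(100, 8))
tgt = rng.normal(loc=0.7, size=(100, 8))
print(mmd_rbf(src, src[::-1]), mmd_rbf(src, tgt))  # ~0 vs. clearly positive
```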
no code implementations • 22 Feb 2016 • Yue Cao, Mingsheng Long, Jian-Min Wang, Philip S. Yu
This paper presents a Correlation Hashing Network (CHN) approach to cross-modal hashing, which jointly learns good data representation tailored to hash coding and formally controls the quantization error.
2 code implementations • NeurIPS 2016 • Mingsheng Long, Han Zhu, Jian-Min Wang, Michael I. Jordan
In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain.
no code implementations • NeurIPS 2017 • Mingsheng Long, Zhangjie Cao, Jian-Min Wang, Philip S. Yu
Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks.
5 code implementations • 10 Feb 2015 • Mingsheng Long, Yue Cao, Jian-Min Wang, Michael I. Jordan
Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation.
Ranked #3 on Domain Adaptation on Synth Digits-to-SVHN
no code implementations • CVPR 2014 • Mingsheng Long, Jian-Min Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu
Visual domain adaptation, which learns an accurate classifier for a new domain using labeled images from an old domain, has shown promising value in computer vision yet remains a challenging problem.
no code implementations • CVPR 2013 • Mingsheng Long, Guiguang Ding, Jian-Min Wang, Jiaguang Sun, Yuchen Guo, Philip S. Yu
In this paper, we propose a Transfer Sparse Coding (TSC) approach to construct robust sparse representations for classifying cross-distribution images accurately.