1 code implementation • 27 Feb 2023 • Wang Lu, Xixu Hu, Jindong Wang, Xing Xie
Concretely, we design an attention-based adapter for the large model, CLIP, and the remaining operations rely solely on the adapters.
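As a rough illustration of the adapter idea, here is a minimal attention-based adapter placed on top of frozen backbone features; the module structure, dimensions, and names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: a lightweight attention-based adapter over frozen features.
import torch
import torch.nn as nn

class AttentionAdapter(nn.Module):
    """Self-attention over feature tokens plus a residual bottleneck projection."""
    def __init__(self, dim: int, bottleneck: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, x):  # x: (batch, tokens, dim) from the frozen backbone
        attn_out, _ = self.attn(x, x, x)
        # Residual connection keeps the frozen features intact.
        return x + self.up(self.act(self.down(attn_out)))

# Only the adapter parameters would be trained; the large model stays frozen.
adapter = AttentionAdapter(dim=512)
features = torch.randn(8, 50, 512)  # e.g., token features from a frozen encoder
adapted = adapter(features)
```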
1 code implementation • 22 Feb 2023 • Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie
In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.
3 code implementations • 26 Jan 2023 • Hao Chen, Ran Tao, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Bhiksha Raj, Marios Savvides
The critical challenge of Semi-Supervised Learning (SSL) is how to effectively leverage the limited labeled data and massive unlabeled data to improve the model's generalization performance.
no code implementations • 20 Nov 2022 • Hao Chen, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Marios Savvides, Bhiksha Raj
While standard SSL assumes uniform data distribution, we consider a more realistic and challenging setting called imbalanced SSL, where imbalanced class distributions occur in both labeled and unlabeled data.
1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.
Natural Language Understanding
Out-of-Distribution Generalization
1 code implementation • 7 Nov 2022 • Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie
Firstly, Mixup cannot effectively identify the domain and class information that can be used for learning invariant representations.
1 code implementation • 15 Sep 2022 • Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, Xing Xie
Time series classification is an important problem in the real world.
no code implementations • 1 Sep 2022 • Wang Lu, Jindong Wang, Yidong Wang, Kan Ren, Yiqiang Chen, Xing Xie
For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that can guide the preference direction and optimize with Pareto optimization.
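For context, a minimal sketch of standard mixup (not the paper's adapted variant): convex combinations of a batch with a shuffled copy of itself can serve as synthetic out-of-distribution samples.

```python
# Standard mixup sketch; the paper's adapted variant is not reproduced here.
import numpy as np
import torch

def mixup(x, y, alpha: float = 0.2):
    """Return convex combinations of a batch with a shuffled copy of itself."""
    lam = np.random.beta(alpha, alpha)
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]
    return x_mix, y, y[idx], lam  # both labels are kept for the mixed loss

x = torch.randn(16, 3, 32, 32)
y = torch.randint(0, 10, (16,))
x_mix, y_a, y_b, lam = mixup(x, y)
# loss = lam * criterion(model(x_mix), y_a) + (1 - lam) * criterion(model(x_mix), y_b)
```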
no code implementations • 18 Aug 2022 • Yi-Fan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, DaCheng Tao, Xing Xie
Our bound motivates two strategies to reduce the gap: the first is to ensemble multiple classifiers to enrich the hypothesis space; the second is to propose effective gap estimation methods that guide the selection of a better hypothesis for the target.
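A minimal sketch of the ensembling strategy, assuming simple classifiers that output logits; averaging member probabilities is one straightforward way to enrich the hypothesis space.

```python
# Illustrative ensemble: average the softmax outputs of several classifiers.
import torch
import torch.nn as nn

class Ensemble(nn.Module):
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, x):
        # Average class probabilities across members.
        probs = torch.stack([m(x).softmax(dim=-1) for m in self.members], dim=0)
        return probs.mean(dim=0)

members = [nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)) for _ in range(3)]
ensemble = Ensemble(members)
probs = ensemble(torch.randn(4, 128))  # (4, 10) averaged probabilities
```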
1 code implementation • COLING 2022 • Yidong Wang, Hao Wu, Ao Liu, Wenxin Hou, Zhen Wu, Jindong Wang, Takahiro Shinozaki, Manabu Okumura, Yue Zhang
Limited labeled data increases the risk of distribution shift between test data and training data.
no code implementations • 15 Aug 2022 • Hao Chen, Ran Tao, Han Zhang, Yidong Wang, Wei Ye, Jindong Wang, Guosheng Hu, Marios Savvides
Beyond classification, Conv-Adapter can generalize to detection and segmentation tasks with more than 50% reduction of parameters but comparable performance to the traditional full fine-tuning.
1 code implementation • 12 Aug 2022 • Yidong Wang, Hao Chen, Yue Fan, Wang Sun, Ran Tao, Wenxin Hou, RenJie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu, Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj, Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, Yue Zhang
We further provide the pre-trained versions of the state-of-the-art neural models for CV tasks to make the cost affordable for further tuning.
no code implementations • 3 Aug 2022 • Yivan Zhang, Jindong Wang, Xing Xie, Masashi Sugiyama
To formally analyze this issue, we provide a unique algebraic formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.
1 code implementation • 25 Jul 2022 • Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie
Internal invariance means that features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., properties within a domain that are agnostic to other domains.
1 code implementation • 21 Jul 2022 • Xin Qin, Jindong Wang, Yiqiang Chen, Wang Lu, Xinlong Jiang
To this end, we propose Adaptive Feature Fusion for Activity Recognition (AFFAR), a domain generalization approach that learns to fuse the domain-invariant and domain-specific representations to improve the model's generalization performance.
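The following sketch shows one plausible way to fuse a domain-invariant branch with gated domain-specific branches; the architecture and names are assumptions for illustration, not AFFAR's actual design.

```python
# Hypothetical fusion of a shared (domain-invariant) branch with gated
# domain-specific branches; shapes and structure are assumptions.
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    def __init__(self, dim: int, num_domains: int):
        super().__init__()
        self.invariant = nn.Linear(dim, dim)  # shared, domain-invariant branch
        self.specific = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_domains)])
        self.gate = nn.Linear(dim, num_domains)  # weights over domain-specific branches

    def forward(self, x):
        w = self.gate(x).softmax(dim=-1)                          # (batch, num_domains)
        spec = torch.stack([b(x) for b in self.specific], dim=1)  # (batch, num_domains, dim)
        fused = (w.unsqueeze(-1) * spec).sum(dim=1)
        return self.invariant(x) + fused                          # combine both views

fusion = FeatureFusion(dim=64, num_domains=3)
out = fusion(torch.randn(8, 64))
```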
1 code implementation • 26 Jun 2022 • Yongchun Zhu, Qiang Sheng, Juan Cao, Qiong Nan, Kai Shu, Minghui Wu, Jindong Wang, Fuzhen Zhuang
In this paper, we propose a Memory-guided Multi-view Multi-domain Fake News Detection Framework (M³FEND) to address these two challenges.
no code implementations • 20 Jun 2022 • Han Zhu, Gaofeng Cheng, Jindong Wang, Wenxin Hou, Pengyuan Zhang, Yonghong Yan
The cross-domain performance of automatic speech recognition (ASR) could be severely hampered due to the mismatch between training and testing distributions.
Automatic Speech Recognition (ASR)
no code implementations • 18 Jun 2022 • Han Zhu, Jindong Wang, Gaofeng Cheng, Pengyuan Zhang, Yonghong Yan
Secondly, to reduce the communication and computation costs, we propose decoupled federated learning (DecoupleFL).
Automatic Speech Recognition (ASR)
2 code implementations • 17 Jun 2022 • Yiqiang Chen, Wang Lu, Xin Qin, Jindong Wang, Xing Xie
Federated learning has attracted increasing attention as a way to build models without accessing raw user data, especially in healthcare.
no code implementations • 14 Jun 2022 • Wang Lu, Jindong Wang, Yiqiang Chen, Sinno Jialin Pan, Chunyu Hu, Xin Qin
Training on existing data often biases the model towards the distribution of the training data, so the model may perform poorly on test data with different distributions.
2 code implementations • 15 May 2022 • Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj, Bernt Schiele, Xing Xie
Semi-supervised Learning (SSL) has witnessed great success owing to the impressive performances brought by various methods based on pseudo labeling and consistency regularization.
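For readers unfamiliar with these techniques, a generic pseudo-labeling plus consistency-regularization loss (FixMatch-style) might look like the following; the threshold value and weak/strong augmentation split are assumptions.

```python
# Generic pseudo-labeling / consistency sketch; not any specific paper's implementation.
import torch
import torch.nn.functional as F

def unlabeled_loss(model, weak_x, strong_x, threshold: float = 0.95):
    """Pseudo-label weakly augmented inputs; enforce consistency on strong augmentations."""
    with torch.no_grad():
        probs = model(weak_x).softmax(dim=-1)     # model is assumed to return logits
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= threshold).float()        # keep only confident pseudo-labels
    logits_strong = model(strong_x)
    per_example = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (per_example * mask).mean()

# total_loss = supervised_loss + lambda_u * unlabeled_loss(model, weak_batch, strong_batch)
```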
1 code implementation • 4 Jan 2022 • Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Jingwu Chen, Zhiping Shi, Wenjuan Wu, Qing He
Based on this, we present Multi-Representation Adaptation Network (MRAN) to accomplish the cross-domain image classification task via multi-representation alignment which can capture the information from different aspects.
no code implementations • 3 Jan 2022 • Yuxin Zhang, Jindong Wang, Yiqiang Chen, Han Yu, Tao Qin
In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance the generalization ability in unsupervised anomaly detection.
2 code implementations • 14 Dec 2021 • Yidong Wang, BoWen Zhang, Wenxin Hou, Zhen Wu, Jindong Wang, Takahiro Shinozaki
The long-tailed class distribution in visual recognition tasks poses great challenges for neural networks in handling the biased predictions between head and tail classes, i.e., the model tends to classify tail classes as head classes.
1 code implementation • 1 Dec 2021 • Wang Lu, Jindong Wang, Yiqiang Chen, Xin Qin, Renjun Xu, Dimitrios Dimitriadis, Tao Qin
There is a growing interest in applying machine learning techniques to healthcare.
1 code implementation • NeurIPS 2021 • BoWen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, Takahiro Shinozaki
However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select unlabeled data that contribute to the training, thus failing to consider the different learning statuses and learning difficulties of different classes.
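A toy sketch of per-class dynamic thresholds that reflect each class's learning status; the scaling rule here is an assumption for illustration, not the paper's exact formula.

```python
# Per-class thresholds scaled by estimated learning status (illustrative only).
import torch

def class_thresholds(base_tau: float, learning_effect: torch.Tensor) -> torch.Tensor:
    """Scale a base threshold by each class's estimated learning status in [0, 1]."""
    normalized = learning_effect / learning_effect.max().clamp(min=1e-8)
    return base_tau * normalized

# learning_effect could, for example, count confident pseudo-labels per class so far.
effect = torch.tensor([120.0, 30.0, 75.0])
print(class_thresholds(0.95, effect))  # less-learned classes get lower thresholds
```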
no code implementations • 9 Oct 2021 • Han Zhu, Li Wang, Jindong Wang, Gaofeng Cheng, Pengyuan Zhang, Yonghong Yan
In this work, to build a better pre-trained model for low-resource ASR, we propose a pre-training approach called wav2vec-S, in which task-specific semi-supervised pre-training refines the self-supervised pre-trained model for the ASR task, thus more effectively utilizing the capacity of the pre-trained model to generate task-specific representations for ASR.
Automatic Speech Recognition (ASR)
no code implementations • 29 Sep 2021 • Wang Lu, Jindong Wang, Yiqiang Chen, Xinwei Sun
In this paper, we propose to view the time series classification problem from the distribution perspective.
2 code implementations • 10 Aug 2021 • Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, Chongjun Wang
This paper proposes Adaptive RNNs (AdaRNN) to tackle the TCS problem by building an adaptive model that generalizes well on the unseen test data.
no code implementations • 27 Jul 2021 • Yuxin Zhang, Yiqiang Chen, Jindong Wang, Zhiwen Pan
We empirically compare the proposed approach with several state-of-the-art anomaly detection methods on HAR and HC datasets.
1 code implementation • 17 Jun 2021 • Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Guolin Ke, Jingwu Chen, Jiang Bian, Hui Xiong, Qing He
The adaptation can be achieved easily with most feed-forward network models by extending them with LMMD loss, which can be trained efficiently via back-propagation.
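For orientation, a plain MMD loss with a Gaussian kernel is sketched below; LMMD additionally weights sample pairs by class/subdomain, which is omitted here.

```python
# Plain (global) MMD with a Gaussian kernel; LMMD's subdomain weighting is omitted.
import torch

def gaussian_mmd(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    def kernel(a, b):
        dist = torch.cdist(a, b).pow(2)
        return torch.exp(-dist / (2 * sigma ** 2))
    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())

src = torch.randn(32, 256)           # features of a source batch
tgt = torch.randn(32, 256)           # features of a target batch
loss_adapt = gaussian_mmd(src, tgt)  # added to the task loss during training
```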
no code implementations • 2 Jun 2021 • Yiqiang Chen, Wang Lu, Jindong Wang, Xin Qin
The success of machine learning applications often requires a large quantity of data.
2 code implementations • 18 May 2021 • Wenxin Hou, Han Zhu, Yidong Wang, Jindong Wang, Tao Qin, Renjun Xu, Takahiro Shinozaki
Based on our previous MetaAdapter, which implicitly leverages adapters, we propose a novel algorithm called SimAdapter for explicitly learning knowledge from adapters.
Ranked #1 on Cross-Lingual ASR on Common Voice
1 code implementation • 15 Apr 2021 • Wenxin Hou, Jindong Wang, Xu Tan, Tao Qin, Takahiro Shinozaki
End-to-end automatic speech recognition (ASR) can achieve promising performance with large-scale training data.
Ranked #1 on Cross-environment ASR on Libri-Adapt
Automatic Speech Recognition (ASR)
no code implementations • 3 Mar 2021 • Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, Tie-Yan Liu
Since it is expensive and time-consuming to collect massive COVID-19 image samples to train deep classification models, transfer learning is a promising approach that transfers knowledge from abundant typical pneumonia datasets for COVID-19 image classification.
1 code implementation • 2 Mar 2021 • Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu
Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.
no code implementations • 25 Feb 2021 • Linghui Meng, Jin Xu, Xu Tan, Jindong Wang, Tao Qin, Bo Xu
In this paper, we propose MixSpeech, a simple yet effective data augmentation method based on mixup for automatic speech recognition (ASR).
Automatic Speech Recognition (ASR)
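A rough sketch of the mixup-on-acoustic-features idea behind MixSpeech; the feature shapes and the combined loss form are assumptions, not the paper's exact recipe.

```python
# Mixup applied to spectrogram-like acoustic features; both transcripts contribute to the loss.
import numpy as np
import torch

def mix_features(feat_a: torch.Tensor, feat_b: torch.Tensor, alpha: float = 0.5):
    """feat_*: (time, mel) features of equal length; returns the mixture and the mixing weight."""
    lam = np.random.beta(alpha, alpha)
    return lam * feat_a + (1 - lam) * feat_b, lam

feat_a = torch.randn(200, 80)
feat_b = torch.randn(200, 80)
mixed, lam = mix_features(feat_a, feat_b)
# total_loss = lam * asr_loss(mixed, transcript_a) + (1 - lam) * asr_loss(mixed, transcript_b)
```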
no code implementations • 7 Feb 2021 • Bo Yang, Hengwei Zhang, Yuchen Zhang, Kaiyong Xu, Jindong Wang
ABI-FGM and CIM can be readily integrated to build a strong gradient-based attack to further boost the success rates of adversarial examples for black-box attacks.
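For reference, a baseline iterative FGSM loop is sketched below; ABI-FGM and CIM are not reproduced here, but gradient-based attacks of this kind build on such a loop by modifying the gradient or the input at each step.

```python
# Baseline iterative FGSM (I-FGSM) sketch; not the paper's ABI-FGM or CIM.
import torch
import torch.nn.functional as F

def iterative_fgsm(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```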
1 code implementation • 29 Jan 2021 • Wang Lu, Yiqiang Chen, Jindong Wang, Xin Qin
In this paper, we propose substructure-level matching for domain adaptation (SSDA) to better utilize the locality information of activity data for accurate and efficient knowledge transfer.
no code implementations • 1 Dec 2020 • Heng Yin, Hengwei Zhang, Jindong Wang, Ruiyu Dou
However, the success rate of adversarial attacks can be further improved in black-box environments.
1 code implementation • NeurIPS 2021 • Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output.
1 code implementation • 17 Jul 2020 • Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu
However, it remains challenging to determine which method is suitable for a given application since they are built with certain priors or bias.
no code implementations • 11 Jul 2020 • Renjun Xu, Pelen Liu, Yin Zhang, Fang Cai, Jindong Wang, Shuoying Liang, Heting Ying, Jianwei Yin
However, in the general setting where the target domain contains classes that are never observed in the source domain, namely Open Set Domain Adaptation (OSDA), existing DA methods fail because of the interference of the extra unknown classes.
no code implementations • 18 Sep 2019 • Chaohui Yu, Jindong Wang, Yiqiang Chen, Meiyu Huang
In this paper, we propose a novel Dynamic Adversarial Adaptation Network (DAAN) to dynamically learn domain-invariant representations while quantitatively evaluating the relative importance of global and local domain distributions.
1 code implementation • 17 Sep 2019 • Jindong Wang, Yiqiang Chen, Wenjie Feng, Han Yu, Meiyu Huang, Qiang Yang
Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions.
Ranked #6 on Domain Adaptation on ImageCLEF-DA
no code implementations • 22 Jul 2019 • Yiqiang Chen, Jindong Wang, Chaohui Yu, Wen Gao, Xin Qin
It is able to achieve accurate and personalized healthcare without compromising privacy and security.
1 code implementation • 2 Apr 2019 • Jindong Wang, Yiqiang Chen, Han Yu, Meiyu Huang, Qiang Yang
In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach which requires no model selection and hyperparameter tuning, while achieving competitive performance.
Ranked #5 on Transfer Learning on Office-Home
1 code implementation • 25 Mar 2019 • Chaohui Yu, Jindong Wang, Yiqiang Chen, Zijing Wu
In this paper, we propose a unified Transfer Channel Pruning (TCP) approach for accelerating UDA models.
no code implementations • 20 Jul 2018 • Jindong Wang, Vincent W. Zheng, Yiqiang Chen, Meiyu Huang
In this paper, we propose an effective Unsupervised Source Selection algorithm for Activity Recognition (USSAR).
Cross-Domain Activity Recognition
Human Activity Recognition
1 code implementation • 19 Jul 2018 • Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, Philip S. Yu
Existing methods either attempt to align the cross-domain distributions, or perform manifold subspace learning.
Ranked #1 on Domain Adaptation on Office-Caltech-10
no code implementations • 2 Jul 2018 • Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, Zhiqi Shen
To tackle the distribution adaptation problem, in this paper we propose a novel transfer learning approach named Balanced Distribution Adaptation (BDA), which can adaptively leverage the importance of the marginal and conditional distribution discrepancies; several existing methods can be treated as special cases of BDA.
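A conceptual sketch of the balancing idea, using a simple mean-difference distance as a stand-in for the actual discrepancy measure; the factor mu trades off the marginal and conditional terms.

```python
# Illustrative balancing of marginal vs. conditional discrepancy with a factor mu in [0, 1].
import torch

def mean_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return (a.mean(dim=0) - b.mean(dim=0)).pow(2).sum()

def balanced_discrepancy(xs, ys, xt, yt_pseudo, mu: float = 0.5, num_classes: int = 10):
    marginal = mean_distance(xs, xt)
    conditional = sum(
        mean_distance(xs[ys == c], xt[yt_pseudo == c])
        for c in range(num_classes)
        if (ys == c).any() and (yt_pseudo == c).any()
    )
    return (1 - mu) * marginal + mu * conditional
```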
no code implementations • 26 Jun 2018 • Yiqiang Chen, Jindong Wang, Meiyu Huang, Han Yu
STL consists of two components: Stratified Domain Selection (STL-SDS), which selects the source domain most similar to the target domain, and Stratified Activity Transfer (STL-SAT), which performs accurate knowledge transfer.
no code implementations • 25 Dec 2017 • Jindong Wang, Yiqiang Chen, Lisha Hu, Xiaohui Peng, Philip S. Yu
The proposed framework, referred to as Stratified Transfer Learning (STL), can dramatically improve the classification accuracy for cross-domain activity recognition.
no code implementations • 12 Jul 2017 • Jindong Wang, Yiqiang Chen, Shuji Hao, Xiaohui Peng, Lisha Hu
This paper surveys recent advances in deep learning for sensor-based activity recognition.