Search Results for author: Jindong Wang

Found 54 papers, 29 papers with code

FedCLIP: Fast Generalization and Personalization for CLIP in Federated Learning

1 code implementation • 27 Feb 2023 • Wang Lu, Xixu Hu, Jindong Wang, Xing Xie

Concretely, we design an attention-based adapter for the large model, CLIP, and all remaining operations depend only on the adapter.

Federated Learning • Privacy Preserving
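
To make the adapter idea above concrete, here is a minimal sketch, assuming a generic frozen backbone in place of CLIP and an illustrative attention-style module; it is not the authors' implementation.

```python
# Minimal sketch of an attention-style adapter over a frozen backbone.
# Not the authors' code; module shape and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttnAdapter(nn.Module):
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, dim), nn.Softmax(dim=-1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Re-weight frozen backbone features with learned attention weights.
        return feats * self.fc(feats)

backbone = nn.Linear(32, 512)        # stand-in for a frozen CLIP encoder
for p in backbone.parameters():
    p.requires_grad = False          # the large model is never updated

adapter = AttnAdapter(dim=512)       # only these parameters are trained
x = torch.randn(4, 32)
with torch.no_grad():
    feats = backbone(x)
print(adapter(feats).shape)          # torch.Size([4, 512])
```

In a federated setting, only the adapter's small state would need to be exchanged with the server, which is what makes the scheme fast and privacy-friendly.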

On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective

1 code implementation • 22 Feb 2023 • Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxing Jiao, Yue Zhang, Xing Xie

In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.

Adversarial Robustness • Chatbot • +1

SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning

3 code implementations • 26 Jan 2023 • Hao Chen, Ran Tao, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Bhiksha Raj, Marios Savvides

The critical challenge of Semi-Supervised Learning (SSL) is how to effectively leverage the limited labeled data and massive unlabeled data to improve the model's generalization performance.

imbalanced classification
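
As a hedged illustration of how the quantity-quality trade-off can be handled without a hard confidence threshold, the sketch below weights every pseudo-labeled sample by a truncated-Gaussian function of its confidence, roughly in the spirit of SoftMatch; the weighting function and batch statistics are simplified assumptions.

```python
# Sketch: soft confidence weighting of pseudo-labels instead of a hard
# threshold. A simplified approximation of the SoftMatch idea, not its code.
import numpy as np

def soft_weights(conf: np.ndarray, mu: float, var: float) -> np.ndarray:
    """Full weight above the mean confidence, Gaussian falloff below it."""
    w = np.exp(-((conf - mu) ** 2) / (2 * var))
    return np.where(conf >= mu, 1.0, w)

conf = np.array([0.35, 0.60, 0.80, 0.95])  # max softmax scores, unlabeled batch
mu, var = conf.mean(), conf.var() + 1e-8   # batch statistics (EMA in practice)
print(soft_weights(conf, mu, var))
```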

An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning

no code implementations • 20 Nov 2022 • Hao Chen, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Marios Savvides, Bhiksha Raj

While standard SSL assumes uniform data distribution, we consider a more realistic and challenging setting called imbalanced SSL, where imbalanced class distributions occur in both labeled and unlabeled data.

Pseudo Label

GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective

1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang

Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.

Natural Language Understanding • Out-of-Distribution Generalization

FIXED: Frustratingly Easy Domain Generalization with Mixup

1 code implementation • 7 Nov 2022 • Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie

Firstly, Mixup cannot effectively identify the domain and class information that can be used for learning invariant representations.

Domain Generalization • Image Classification • +1
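
For reference, the vanilla Mixup recipe that the snippet critiques interpolates both inputs and one-hot labels; a minimal sketch follows (this is standard Mixup, not the paper's FIXED variant).

```python
# Vanilla Mixup: train on convex combinations of input pairs and their labels.
# Shown as background; FIXED modifies this recipe for domain generalization.
import numpy as np

def mixup(x: np.ndarray, y: np.ndarray, alpha: float = 0.2):
    lam = np.random.beta(alpha, alpha)         # mixing coefficient
    idx = np.random.permutation(len(x))        # random pairing within the batch
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]

x = np.random.randn(8, 16)                      # a batch of features
y = np.eye(4)[np.random.randint(0, 4, size=8)]  # one-hot labels
x_mix, y_mix = mixup(x, y)
print(x_mix.shape, y_mix.shape)
```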

Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution

no code implementations • 1 Sep 2022 • Wang Lu, Jindong Wang, Yidong Wang, Kan Ren, Yiqiang Chen, Xing Xie

For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that can guide the preference direction, and we optimize with Pareto optimization.

Domain Generalization • Model Optimization • +1

Domain-Specific Risk Minimization for Out-of-Distribution Generalization

no code implementations • 18 Aug 2022 • Yi-Fan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, DaCheng Tao, Xing Xie

Our bound motivates two strategies to reduce the gap: the first is to ensemble multiple classifiers to enrich the hypothesis space; the second is a set of effective gap estimation methods that guide the selection of a better hypothesis for the target.

Domain Generalization • Out-of-Distribution Generalization

Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets

no code implementations • 15 Aug 2022 • Hao Chen, Ran Tao, Han Zhang, Yidong Wang, Wei Ye, Jindong Wang, Guosheng Hu, Marios Savvides

Beyond classification, Conv-Adapter can generalize to detection and segmentation tasks with more than 50% reduction of parameters but comparable performance to the traditional full fine-tuning.

Transfer Learning
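
A hedged sketch of the general pattern the snippet describes: a small convolutional bottleneck added residually to a frozen backbone block, so that only the adapter's parameters are tuned. The layer choices below are assumptions, not the paper's exact design.

```python
# Sketch of a parameter-efficient convolutional adapter: a depthwise-separable
# bottleneck added residually to a frozen conv block. Sizes are illustrative.
import torch
import torch.nn as nn

class ConvAdapter(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.down = nn.Conv2d(channels, mid, kernel_size=1)        # compress
        self.dw = nn.Conv2d(mid, mid, 3, padding=1, groups=mid)    # depthwise
        self.up = nn.Conv2d(mid, channels, kernel_size=1)          # expand
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.dw(self.down(x))))

block = nn.Conv2d(64, 64, 3, padding=1)  # stand-in for a frozen backbone block
for p in block.parameters():
    p.requires_grad = False

adapter = ConvAdapter(64)                 # the only trainable parameters
x = torch.randn(2, 64, 32, 32)
print(adapter(block(x)).shape)            # torch.Size([2, 64, 32, 32])
```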

Equivariant Disentangled Transformation for Domain Generalization under Combination Shift

no code implementations • 3 Aug 2022 • Yivan Zhang, Jindong Wang, Xing Xie, Masashi Sugiyama

To formally analyze this issue, we provide a unique algebraic formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.

Disentanglement • Domain Generalization
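
As background for the algebraic terms in the snippet, the standard definition of equivariance (with invariance as a special case) can be written as:

```latex
% f : X -> Y is G-equivariant if acting before or after the map commutes:
\[
  f(g \cdot x) = g \cdot f(x) \quad \forall g \in G,\; x \in X,
\]
% and f is G-invariant in the special case where G acts trivially on Y:
\[
  f(g \cdot x) = f(x).
\]
```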

Domain-invariant Feature Exploration for Domain Generalization

1 code implementation • 25 Jul 2022 • Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie

Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., properties that hold within a domain and are agnostic to other domains.

Domain Generalization • Knowledge Distillation • +1

Domain Generalization for Activity Recognition via Adaptive Feature Fusion

1 code implementation • 21 Jul 2022 • Xin Qin, Jindong Wang, Yiqiang Chen, Wang Lu, Xinlong Jiang

To this end, we propose Adaptive Feature Fusion for Activity Recognition (AFFAR), a domain generalization approach that learns to fuse the domain-invariant and domain-specific representations to improve the model's generalization performance.

Domain Generalization • Human Activity Recognition

Memory-Guided Multi-View Multi-Domain Fake News Detection

1 code implementation • 26 Jun 2022 • Yongchun Zhu, Qiang Sheng, Juan Cao, Qiong Nan, Kai Shu, Minghui Wu, Jindong Wang, Fuzhen Zhuang

In this paper, we propose a Memory-guided Multi-view Multi-domain Fake News Detection Framework (M³FEND) to address these two challenges.

Fake News Detection

Boosting Cross-Domain Speech Recognition with Self-Supervision

no code implementations • 20 Jun 2022 • Han Zhu, Gaofeng Cheng, Jindong Wang, Wenxin Hou, Pengyuan Zhang, Yonghong Yan

The cross-domain performance of automatic speech recognition (ASR) could be severely hampered due to the mismatch between training and testing distributions.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +4

MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare

2 code implementations • 17 Jun 2022 • Yiqiang Chen, Wang Lu, Xin Qin, Jindong Wang, Xing Xie

Federated learning has attracted increasing attention to building models without accessing the raw user data, especially in healthcare.

Federated Learning • Knowledge Distillation

Semantic-Discriminative Mixup for Generalizable Sensor-based Cross-domain Activity Recognition

no code implementations • 14 Jun 2022 • Wang Lu, Jindong Wang, Yiqiang Chen, Sinno Jialin Pan, Chunyu Hu, Xin Qin

Training on existing data often biases the model towards the distribution of the training data, so the model may perform poorly on test data with different distributions.

Cross-Domain Activity Recognition • Domain Adaptation • +2

FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning

2 code implementations • 15 May 2022 • Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj, Bernt Schiele, Xing Xie

Semi-supervised Learning (SSL) has witnessed great success owing to the impressive performances brought by various methods based on pseudo labeling and consistency regularization.

Fairness • Semi-Supervised Image Classification

Multi-Representation Adaptation Network for Cross-domain Image Classification

1 code implementation • 4 Jan 2022 • Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Jingwu Chen, Zhiping Shi, Wenjuan Wu, Qing He

Based on this, we present Multi-Representation Adaptation Network (MRAN) to accomplish the cross-domain image classification task via multi-representation alignment which can capture the information from different aspects.

Classification • Domain Adaptation • +2

Adaptive Memory Networks with Self-supervised Learning for Unsupervised Anomaly Detection

no code implementations • 3 Jan 2022 • Yuxin Zhang, Jindong Wang, Yiqiang Chen, Han Yu, Tao Qin

In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance the generalization ability in unsupervised anomaly detection.

Self-Supervised Learning • Sleep Stage Detection • +2

Margin Calibration for Long-Tailed Visual Recognition

2 code implementations • 14 Dec 2021 • Yidong Wang, BoWen Zhang, Wenxin Hou, Zhen Wu, Jindong Wang, Takahiro Shinozaki

The long-tailed class distribution in visual recognition tasks poses great challenges for neural networks on how to handle the biased predictions between head and tail classes, i.e., the model tends to classify tail classes as head classes.
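
One common way to counteract that head-class bias is to calibrate the logits with class-frequency-dependent offsets; the sketch below uses the log-prior form of this idea as a hedged illustration, not necessarily the paper's exact margin.

```python
# Sketch of margin/logit calibration for long-tailed recognition: offset each
# class logit by its log-prior so tail classes are boosted at inference.
# The log-prior margin is one common choice, not necessarily the paper's.
import numpy as np

def calibrated_logits(logits: np.ndarray, class_counts: np.ndarray,
                      tau: float = 1.0) -> np.ndarray:
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior)

counts = np.array([1000, 100, 10])        # long-tailed training frequencies
logits = np.array([[2.0, 1.9, 1.5]])      # raw scores favor the head class
print(calibrated_logits(logits, counts).argmax(axis=1))  # tail class wins here
```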

FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling

1 code implementation • NeurIPS 2021 • BoWen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, Takahiro Shinozaki

However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select unlabeled data that contribute to the training, thus failing to consider different learning status and learning difficulties of different classes.

Semi-Supervised Image Classification
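
A simplified sketch of the class-wise alternative the snippet motivates: scale a global confidence threshold per class by an estimate of each class's learning status, so poorly learned classes get lower thresholds. This follows the spirit of FlexMatch's curriculum pseudo labeling but omits its warm-up and mapping details.

```python
# Sketch of curriculum pseudo labeling: per-class thresholds scaled by each
# class's learning status. Simplified from FlexMatch (warm-up etc. omitted).
import numpy as np

def flexible_thresholds(probs: np.ndarray, base_tau: float = 0.95) -> np.ndarray:
    conf = probs.max(axis=1)                    # prediction confidence
    labels = probs.argmax(axis=1)               # hard pseudo-labels
    n_classes = probs.shape[1]
    # Learning status: confident unlabeled samples claimed by each class.
    sigma = np.array([np.sum((labels == c) & (conf >= base_tau))
                      for c in range(n_classes)])
    beta = sigma / max(sigma.max(), 1)          # normalize to [0, 1]
    return base_tau * beta                      # hard classes get lower bars

probs = np.random.dirichlet(np.full(3, 0.2), size=200)  # peaked fake predictions
print(flexible_thresholds(probs))
```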

Wav2vec-S: Semi-Supervised Pre-Training for Low-Resource ASR

no code implementations • 9 Oct 2021 • Han Zhu, Li Wang, Jindong Wang, Gaofeng Cheng, Pengyuan Zhang, Yonghong Yan

In this work, in order to build a better pre-trained model for low-resource ASR, we propose a pre-training approach called wav2vec-S, which applies task-specific semi-supervised pre-training to refine the self-supervised pre-trained model for the ASR task, thus more effectively utilizing the capacity of the pre-trained model to generate task-specific representations for ASR.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +1

AdaRNN: Adaptive Learning and Forecasting of Time Series

2 code implementations • 10 Aug 2021 • Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, Chongjun Wang

This paper proposes Adaptive RNNs (AdaRNN) to tackle the TCS problem by building an adaptive model that generalizes well on the unseen test data.

Human Activity Recognition • Time Series Analysis

Unsupervised Deep Anomaly Detection for Multi-Sensor Time-Series Signals

no code implementations • 27 Jul 2021 • Yuxin Zhang, Yiqiang Chen, Jindong Wang, Zhiwen Pan

We empirically compare the proposed approach with several state-of-the-art anomaly detection methods on HAR and HC datasets.

Human Activity Recognition • Time Series Analysis • +1

Deep Subdomain Adaptation Network for Image Classification

1 code implementation • 17 Jun 2021 • Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Guolin Ke, Jingwu Chen, Jiang Bian, Hui Xiong, Qing He

The adaptation can be achieved easily with most feed-forward network models by extending them with LMMD loss, which can be trained efficiently via back-propagation.

Classification • Domain Adaptation • +4
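
For background on the loss the snippet mentions: below is a minimal (marginal) MMD estimate with a Gaussian kernel. LMMD refines this by weighting sample pairs with (pseudo-)label information to align subdomains, which this sketch omits.

```python
# Minimal biased MMD^2 estimate with a Gaussian kernel, the building block
# that LMMD extends with class-conditional (subdomain) weighting.
import numpy as np

def gaussian_kernel(a: np.ndarray, b: np.ndarray, sigma: float = 1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(source: np.ndarray, target: np.ndarray, sigma: float = 1.0) -> float:
    return (gaussian_kernel(source, source, sigma).mean()
            + gaussian_kernel(target, target, sigma).mean()
            - 2 * gaussian_kernel(source, target, sigma).mean())

xs = np.random.randn(50, 8)          # source features
xt = np.random.randn(50, 8) + 0.5    # shifted target features
print(mmd2(xs, xt))                  # larger when the domains differ more
```

Because this quantity is differentiable in the features, it can be added to a classification loss and trained by back-propagation, as the snippet notes.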

Exploiting Adapters for Cross-lingual Low-resource Speech Recognition

2 code implementations • 18 May 2021 • Wenxin Hou, Han Zhu, Yidong Wang, Jindong Wang, Tao Qin, Renjun Xu, Takahiro Shinozaki

Based on our previous MetaAdapter, which implicitly leverages adapters, we propose a novel algorithm called SimAdapter for explicitly learning knowledge from adapters.

Cross-Lingual ASR • General Knowledge • +3

Learning Invariant Representations across Domains and Tasks

no code implementations • 3 Mar 2021 • Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, Tie-Yan Liu

Because collecting massive COVID-19 image samples to train deep classification models is expensive and time-consuming, transfer learning is a promising approach that transfers knowledge from abundant typical pneumonia datasets for COVID-19 image classification.

Domain Adaptation • Image Classification • +1

Generalizing to Unseen Domains: A Survey on Domain Generalization

1 code implementation • 2 Mar 2021 • Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu

Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.

Domain Generalization • Out-of-Distribution Generalization • +1

MixSpeech: Data Augmentation for Low-resource Automatic Speech Recognition

no code implementations • 25 Feb 2021 • Linghui Meng, Jin Xu, Xu Tan, Jindong Wang, Tao Qin, Bo Xu

In this paper, we propose MixSpeech, a simple yet effective data augmentation method based on mixup for automatic speech recognition (ASR).

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +2

Adversarial example generation with AdaBelief Optimizer and Crop Invariance

no code implementations • 7 Feb 2021 • Bo Yang, Hengwei Zhang, Yuchen Zhang, Kaiyong Xu, Jindong Wang

ABI-FGM and CIM can be readily integrated to build a strong gradient-based attack to further boost the success rates of adversarial examples for black-box attacks.
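
For background, the gradient-based attack family the snippet refers to builds on iterative FGSM; a minimal sketch follows, with a stand-in gradient function. The paper's ABI-FGM replaces the plain sign step with AdaBelief-style updates, and CIM adds crop invariance, both omitted here.

```python
# Minimal iterative FGSM: step along the sign of the loss gradient, clipping
# to an L_inf ball around the input. ABI-FGM/CIM refine this basic loop.
import numpy as np

def ifgsm(x: np.ndarray, grad_fn, eps: float = 0.03,
          alpha: float = 0.01, steps: int = 10) -> np.ndarray:
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)         # stay in the eps-ball
    return x_adv

grad_fn = lambda x: x - 1.0          # stand-in for a model's loss gradient
print(ifgsm(np.zeros(5), grad_fn))   # perturbation saturates at -eps here
```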

Cross-domain Activity Recognition via Substructural Optimal Transport

1 code implementation • 29 Jan 2021 • Wang Lu, Yiqiang Chen, Jindong Wang, Xin Qin

In this paper, we propose substructure-level matching for domain adaptation (SSDA) to better utilize the locality information of activity data for accurate and efficient knowledge transfer.

Cross-Domain Activity Recognition • Domain Adaptation • +2
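
For background on the transport machinery involved: a compact Sinkhorn iteration that computes an entropic optimal transport plan between two empirical measures. The paper applies such couplings at the substructure (cluster) level of activity data; this standalone sketch works on arbitrary point sets.

```python
# Compact Sinkhorn iterations for entropic optimal transport between two
# empirical measures; SSDA applies such couplings at the substructure level.
import numpy as np

def sinkhorn(cost: np.ndarray, a: np.ndarray, b: np.ndarray,
             reg: float = 0.1, iters: int = 200) -> np.ndarray:
    K = np.exp(-cost / reg)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                    # alternate marginal scalings
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]       # transport plan

xs, xt = np.random.randn(5, 2), np.random.randn(6, 2)
cost = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(-1)   # squared distances
a, b = np.full(5, 1 / 5), np.full(6, 1 / 6)               # uniform weights
plan = sinkhorn(cost, a, b)
print(plan.sum())                            # ~1.0: a valid coupling
```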

Boosting Adversarial Attacks on Neural Networks with Better Optimizer

no code implementations • 1 Dec 2020 • Heng Yin, Hengwei Zhang, Jindong Wang, Ruiyu Dou

However, the success rate of adversarial attacks can be further improved in black-box environments.

Learning Causal Semantic Representation for Out-of-Distribution Prediction

1 code implementation • NeurIPS 2021 • Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu

Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output.

Domain Adaptation

Learning to Match Distributions for Domain Adaptation

1 code implementation • 17 Jul 2020 • Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu

However, it remains challenging to determine which method is suitable for a given application since they are built with certain priors or bias.

Domain Adaptation • Inductive Bias

Joint Partial Optimal Transport for Open Set Domain Adaptation

no code implementations • 11 Jul 2020 • Renjun Xu, Pelen Liu, Yin Zhang, Fang Cai, Jindong Wang, Shuoying Liang, Heting Ying, Jianwei Yin

However, in the general setting where the target domain contains classes never observed in the source domain, namely Open Set Domain Adaptation (OSDA), existing DA methods fail because of the interference of the extra unknown classes.

Domain Adaptation

Transfer Learning with Dynamic Adversarial Adaptation Network

no code implementations • 18 Sep 2019 • Chaohui Yu, Jindong Wang, Yiqiang Chen, Meiyu Huang

In this paper, we propose a novel Dynamic Adversarial Adaptation Network (DAAN) to dynamically learn domain-invariant representations while quantitatively evaluating the relative importance of global and local domain distributions.

Domain Adaptation • Transfer Learning
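
A schematic of the quantitative weighting described above, under the hedged assumption that a scalar factor derived from global and per-class domain discrepancies interpolates between global and local alignment losses; the stand-in numbers replace learned discriminator statistics.

```python
# Schematic of a dynamic factor trading off global vs. class-wise (local)
# domain alignment, in the spirit of DAAN. Inputs are stand-in numbers, not
# learned discriminator errors.
import numpy as np

def dynamic_factor(global_disc: float, local_discs: np.ndarray) -> float:
    """Closer to 1 when the global discrepancy dominates the local ones."""
    return global_disc / (global_disc + local_discs.mean())

def alignment_loss(global_loss: float, local_losses: np.ndarray,
                   global_disc: float, local_discs: np.ndarray) -> float:
    omega = dynamic_factor(global_disc, local_discs)
    return omega * global_loss + (1 - omega) * local_losses.mean()

print(alignment_loss(0.8, np.array([0.5, 0.7, 0.6]),
                     1.2, np.array([0.9, 1.0, 1.1])))
```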

Transfer Learning with Dynamic Distribution Adaptation

1 code implementation • 17 Sep 2019 • Jindong Wang, Yiqiang Chen, Wenjie Feng, Han Yu, Meiyu Huang, Qiang Yang

Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions.

Domain Adaptation • Image Classification • +2

Easy Transfer Learning By Exploiting Intra-domain Structures

1 code implementation • 2 Apr 2019 • Jindong Wang, Yiqiang Chen, Han Yu, Meiyu Huang, Qiang Yang

In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach which requires no model selection and hyperparameter tuning, while achieving competitive performance.

Domain Adaptation • Model Selection • +1

Accelerating Deep Unsupervised Domain Adaptation with Transfer Channel Pruning

1 code implementation • 25 Mar 2019 • Chaohui Yu, Jindong Wang, Yiqiang Chen, Zijing Wu

In this paper, we propose a unified Transfer Channel Pruning (TCP) approach for accelerating UDA models.

Transfer Learning • Unsupervised Domain Adaptation

Balanced Distribution Adaptation for Transfer Learning

no code implementations • 2 Jul 2018 • Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, Zhiqi Shen

To tackle the distribution adaptation problem, in this paper we propose a novel transfer learning approach named Balanced Distribution Adaptation (BDA), which can adaptively leverage the importance of the marginal and conditional distribution discrepancies; several existing methods can be treated as special cases of BDA.

Transfer Learning
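
A schematic of the balance described above: a factor mu in [0, 1] trades off the marginal and conditional distribution discrepancies. Simple mean-difference distances stand in for the paper's MMD-based terms, so this illustrates the weighting, not the method itself.

```python
# Schematic of BDA's balance factor: a mu-weighted sum of marginal and
# class-conditional discrepancies. Mean differences stand in for MMD terms.
import numpy as np

def balanced_discrepancy(xs, ys, xt, yt, mu: float = 0.5) -> float:
    marginal = np.linalg.norm(xs.mean(axis=0) - xt.mean(axis=0)) ** 2
    conditional = np.mean([
        np.linalg.norm(xs[ys == c].mean(axis=0) - xt[yt == c].mean(axis=0)) ** 2
        for c in np.intersect1d(ys, yt)        # classes present in both domains
    ])
    return mu * marginal + (1 - mu) * conditional

xs, xt = np.random.randn(100, 8), np.random.randn(100, 8) + 0.3
ys, yt = np.random.randint(0, 2, 100), np.random.randint(0, 2, 100)
print(balanced_discrepancy(xs, ys, xt, yt, mu=0.7))
```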

Cross-position Activity Recognition with Stratified Transfer Learning

no code implementations • 26 Jun 2018 • Yiqiang Chen, Jindong Wang, Meiyu Huang, Han Yu

STL consists of two components: Stratified Domain Selection (STL-SDS), which selects the source domain most similar to the target domain, and Stratified Activity Transfer (STL-SAT), which performs accurate knowledge transfer.

Human Activity Recognition • Transfer Learning

Stratified Transfer Learning for Cross-domain Activity Recognition

no code implementations • 25 Dec 2017 • Jindong Wang, Yiqiang Chen, Lisha Hu, Xiaohui Peng, Philip S. Yu

The proposed framework, referred to as Stratified Transfer Learning (STL), can dramatically improve the classification accuracy for cross-domain activity recognition.

Cross-Domain Activity Recognition • General Classification • +1
