Search Results for author: Wang Lu

Found 18 papers, 11 papers with code

Enhancing Few-shot CLIP with Semantic-Aware Fine-Tuning

no code implementations • 8 Nov 2023 • Yao Zhu, Yuefeng Chen, Wei Wang, Xiaofeng Mao, Xiu Yan, Yue Wang, Zhigang Li, Wang Lu, Jindong Wang, Xiangyang Ji

Hence, we propose fine-tuning the parameters of the attention pooling layer during the training process to encourage the model to focus on task-specific semantics.

ZooPFL: Exploring Black-box Foundation Models for Personalized Federated Learning

1 code implementation • 8 Oct 2023 • Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji

When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources.

Personalized Federated Learning

FedCLIP: Fast Generalization and Personalization for CLIP in Federated Learning

1 code implementation • 27 Feb 2023 • Wang Lu, Xixu Hu, Jindong Wang, Xing Xie

Concretely, we design an attention-based adapter for the large model, CLIP, while the remaining operations depend solely on the adapters.

Federated Learning • Privacy Preserving
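As a rough illustration of the adapter idea described in this entry (a minimal numpy sketch, not the paper's implementation; the class name and shapes are illustrative, and real CLIP features would come from a frozen backbone):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentionAdapter:
    """Toy attention-based adapter: the backbone features stay fixed;
    only the small adapter matrices would be trained during FL rounds."""
    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.Wq = rng.standard_normal((dim, dim)) * 0.02
        self.Wk = rng.standard_normal((dim, dim)) * 0.02
        self.Wv = rng.standard_normal((dim, dim)) * 0.02

    def __call__(self, feats):
        # feats: (batch, dim) frozen backbone features (simulated here)
        q, k, v = feats @ self.Wq, feats @ self.Wk, feats @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(feats.shape[1]))
        return feats + attn @ v  # residual keeps the frozen features intact

feats = np.random.default_rng(1).standard_normal((4, 8))
adapted = AttentionAdapter(8)(feats)
print(adapted.shape)  # (4, 8)
```

Because only the adapter matrices change, each federated client communicates a small update instead of full model weights.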

FIXED: Frustratingly Easy Domain Generalization with Mixup

1 code implementation • 7 Nov 2022 • Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie

Firstly, Mixup cannot effectively identify the domain and class information that can be used for learning invariant representations.

Domain Generalization • Image Classification +2
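For context on the limitation this entry discusses, vanilla Mixup interpolates pairs of samples and labels; a minimal numpy sketch of that baseline (not the paper's FIXED method):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Vanilla Mixup: a convex combination of two samples and their labels."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)  # mixing coefficient drawn from Beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.ones(4), np.array([1.0, 0.0]),
                     np.zeros(4), np.array([0.0, 1.0]))
print(y_mix.sum())  # mixed label stays a valid distribution: sums to 1.0
```

Because the interpolation mixes domain and class information indiscriminately, it can blur exactly the invariant structure domain generalization needs, which is the limitation the paper addresses.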

Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution

no code implementations • 1 Sep 2022 • Wang Lu, Jindong Wang, Yidong Wang, Xing Xie

For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that can guide the preference direction, and we optimize with Pareto optimization.

Domain Generalization • Model Optimization +2
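The Pareto-optimization idea above can be illustrated with a simple dominance check over candidate models (a generic sketch under assumed metrics, not the paper's procedure; the example scores are hypothetical):

```python
import numpy as np

def pareto_front(scores):
    """Return indices of Pareto-optimal rows (higher is better on every metric).

    A row is dominated if some other row is >= on all metrics and > on one."""
    scores = np.asarray(scores, dtype=float)
    keep = []
    for i, s in enumerate(scores):
        dominated = any(
            np.all(t >= s) and np.any(t > s)
            for j, t in enumerate(scores) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (in-distribution acc, OOD acc) for four candidate checkpoints.
models = [(0.90, 0.60), (0.85, 0.70), (0.80, 0.65), (0.92, 0.55)]
print(pareto_front(models))  # [0, 1, 3]
```

Model selection then picks among the non-dominated checkpoints, trading in-distribution accuracy against out-of-distribution performance rather than optimizing either alone.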

Domain-invariant Feature Exploration for Domain Generalization

1 code implementation • 25 Jul 2022 • Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie

Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., the property within a domain that is agnostic to other domains.

Domain Generalization • Knowledge Distillation +2

Domain Generalization for Activity Recognition via Adaptive Feature Fusion

1 code implementation • 21 Jul 2022 • Xin Qin, Jindong Wang, Yiqiang Chen, Wang Lu, Xinlong Jiang

To this end, we propose Adaptive Feature Fusion for Activity Recognition (AFFAR), a domain generalization approach that learns to fuse the domain-invariant and domain-specific representations to improve the model's generalization performance.

Domain Generalization • Human Activity Recognition
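A minimal sketch of the fusion idea in this entry, assuming a gated weighted sum over per-source-domain branches (the gate and shapes are illustrative, not AFFAR's actual architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse(invariant_feat, specific_feats, gate_logits):
    """Fuse one domain-invariant feature with per-source-domain specific
    features using gate weights (learned in practice, given here)."""
    w = softmax(gate_logits)                       # one weight per source domain
    specific = np.tensordot(w, specific_feats, 1)  # weighted sum over domains
    return np.concatenate([invariant_feat, specific])

inv = np.ones(4)                                     # shared representation
spec = np.stack([np.full(4, 2.0), np.full(4, 4.0)])  # two source-domain branches
fused = fuse(inv, spec, gate_logits=np.array([0.0, 0.0]))
print(fused)  # [1. 1. 1. 1. 3. 3. 3. 3.]
```

With equal gate logits the two domain branches average to 3.0; a learned gate would instead weight whichever source domain best matches the test sample.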

MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare

2 code implementations • 17 Jun 2022 • Yiqiang Chen, Wang Lu, Xin Qin, Jindong Wang, Xing Xie

Federated learning has attracted increasing attention for building models without accessing raw user data, especially in healthcare.

Federated Learning • Knowledge Distillation
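The knowledge-distillation signal that MetaFed passes cyclically between federations is typically a temperature-softened KL divergence; a generic sketch of that loss (not the paper's full training loop, and the logits are made up):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened predictions, the
    usual transfer signal in knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

same = distill_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
diff = distill_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
print(same, diff)  # identical logits give 0 loss; mismatched logits give > 0
```

Minimizing this loss lets each federation absorb knowledge from the previous one's predictions without ever exchanging raw patient data.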

Semantic-Discriminative Mixup for Generalizable Sensor-based Cross-domain Activity Recognition

no code implementations • 14 Jun 2022 • Wang Lu, Jindong Wang, Yiqiang Chen, Sinno Jialin Pan, Chunyu Hu, Xin Qin

Training on existing data often biases the model toward the distribution of the training data, so the model may perform poorly on test data with different distributions.

Cross-Domain Activity Recognition • Domain Adaptation +2

Generalizing to Unseen Domains: A Survey on Domain Generalization

1 code implementation • 2 Mar 2021 • Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu

Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.

Domain Generalization • Out-of-Distribution Generalization +1

Cross-domain Activity Recognition via Substructural Optimal Transport

1 code implementation • 29 Jan 2021 • Wang Lu, Yiqiang Chen, Jindong Wang, Xin Qin

In this paper, we propose substructure-level matching for domain adaptation (SSDA) to better utilize the locality information of activity data for accurate and efficient knowledge transfer.

Clustering • Cross-Domain Activity Recognition +3
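Substructure-level matching transports mass between cluster-level statistics rather than whole-domain means; a minimal numpy sketch using entropy-regularized (Sinkhorn) optimal transport between assumed cluster centroids (illustrative, not the SSDA implementation):

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, iters=200):
    """Entropy-regularized optimal transport (Sinkhorn iterations) between
    discrete distributions a and b under cost matrix C."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan with marginals a, b

# Substructure idea: match cluster centroids of source and target activity
# features instead of one global mean per domain (centroids are made up).
src_centroids = np.array([[0.0], [1.0]])
tgt_centroids = np.array([[0.1], [0.9]])
C = np.abs(src_centroids - tgt_centroids.T)   # pairwise L1 cost
P = sinkhorn(np.array([0.5, 0.5]), np.array([0.5, 0.5]), C)
print(P.round(2))  # mass concentrates on the nearest centroid pairs
```

The resulting plan P tells each source substructure which target substructure to align with, exploiting the locality of activity data for more accurate transfer than domain-level matching.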
