no code implementations • 8 Nov 2023 • Yao Zhu, Yuefeng Chen, Wei Wang, Xiaofeng Mao, Xiu Yan, Yue Wang, Zhigang Li, Wang Lu, Jindong Wang, Xiangyang Ji
Hence, we propose fine-tuning the parameters of the attention pooling layer during training to encourage the model to focus on task-specific semantics.
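A minimal sketch of this idea, assuming a CLIP-style model whose ResNet visual encoder exposes the attention pooling layer as `visual.attnpool` (as in OpenAI's CLIP); this is our illustration, not the paper's released code, and attribute names may differ for other checkpoints:

```python
import torch

# Freeze everything except the attention pooling layer, then fine-tune
# only those parameters on the downstream task.
def freeze_all_but_attnpool(model):
    for p in model.parameters():
        p.requires_grad = False
    for p in model.visual.attnpool.parameters():  # CLIP ResNet naming
        p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

# trainable = freeze_all_but_attnpool(clip_model)
# optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```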
1 code implementation • 8 Oct 2023 • Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji
When personalized federated learning (FL) meets large foundation models, new challenges arise from various resource constraints.
no code implementations • 4 Aug 2023 • Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, Xiangyang Ji, Qiang Yang, Xing Xie
We propose DIVERSIFY, a general framework for OOD detection and generalization on time series with dynamic distributions.
1 code implementation • 25 May 2023 • Xin Qin, Jindong Wang, Shuo Ma, Wang Lu, Yongchun Zhu, Xing Xie, Yiqiang Chen
With the constructed self-supervised learning task, DDLearn enlarges data diversity and explores latent activity properties.
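The abstract does not spell out the pretext task, so the sketch below is a hypothetical augmentation-prediction task of the kind commonly used for sensor time series: the model must recognize which transformation was applied, which forces it to capture latent signal properties.

```python
import random
import torch

# Hypothetical pretext task (the paper's exact task may differ):
# apply one of several transformations to each sample and train a
# classification head to predict which one was applied.
AUGS = [
    lambda x: x,                                # identity
    lambda x: torch.flip(x, dims=[-1]),         # time reversal
    lambda x: -x,                               # negation
    lambda x: x + 0.05 * torch.randn_like(x),   # jitter
]

def pretext_batch(x):  # x: (batch, channels, length)
    labels = torch.tensor([random.randrange(len(AUGS)) for _ in range(x.size(0))])
    augmented = torch.stack([AUGS[l](xi) for xi, l in zip(x, labels.tolist())])
    return augmented, labels  # train: cross-entropy(head(encoder(augmented)), labels)
```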
1 code implementation • 27 Feb 2023 • Wang Lu, Xixu Hu, Jindong Wang, Xing Xie
Concretely, we design an attention-based adapter for the large model CLIP, and all remaining operations rely solely on the adapter.
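A schematic of such an adapter, under the assumption that it re-weights frozen CLIP embeddings with a small attention-style gating network; the class name and layer sizes are ours, not the paper's:

```python
import torch
import torch.nn as nn

# Illustrative attention-style adapter on top of frozen CLIP features.
class AttentionAdapter(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim), nn.Tanh(),
            nn.Linear(dim, dim), nn.Softmax(dim=-1),
        )

    def forward(self, feats):           # feats: frozen CLIP embeddings
        return feats * self.fc(feats)   # attention weights gate each dimension

# Only the adapter's parameters are trained (and, in federated learning,
# communicated); the CLIP backbone stays frozen on every client.
```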
1 code implementation • 7 Nov 2022 • Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie
First, Mixup cannot effectively identify the domain and class information needed for learning invariant representations.
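For reference, vanilla Mixup interpolates random sample pairs without consulting domain labels, which is exactly the limitation noted above:

```python
import numpy as np
import torch

# Standard Mixup: blend random pairs of inputs and their labels.
def mixup(x, y, alpha=0.2):
    lam = np.random.beta(alpha, alpha)
    idx = torch.randperm(x.size(0))        # random pairing, domain-agnostic
    x_mix = lam * x + (1 - lam) * x[idx]
    return x_mix, y, y[idx], lam
    # loss: lam * CE(pred, y) + (1 - lam) * CE(pred, y[idx])
```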
1 code implementation • 15 Sep 2022 • Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, Xing Xie
Time series classification is an important problem in the real world.
no code implementations • 1 Sep 2022 • Wang Lu, Jindong Wang, Yidong Wang, Xing Xie
For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that guides the preference direction, and we solve the resulting objective with Pareto optimization.
1 code implementation • 25 Jul 2022 • Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie
Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., properties that hold within a domain and are agnostic to other domains.
1 code implementation • 21 Jul 2022 • Xin Qin, Jindong Wang, Yiqiang Chen, Wang Lu, Xinlong Jiang
To this end, we propose Adaptive Feature Fusion for Activity Recognition (AFFAR), a domain generalization approach that learns to fuse the domain-invariant and domain-specific representations to improve the model's generalization performance.
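One plausible reading of this fusion, sketched with our own module names (the paper's exact architecture may differ): several domain-specific branches are combined with the domain-invariant features through learned weights.

```python
import torch
import torch.nn as nn

# Schematic fusion head: gate domain-specific branches and add them
# to the domain-invariant representation before classification.
class FusionHead(nn.Module):
    def __init__(self, dim, num_domains, num_classes):
        super().__init__()
        self.specific = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_domains))
        self.gate = nn.Linear(dim, num_domains)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, z_inv):  # z_inv: (B, dim) domain-invariant features
        w = torch.softmax(self.gate(z_inv), dim=-1)                      # (B, D)
        z_spec = torch.stack([b(z_inv) for b in self.specific], dim=1)   # (B, D, dim)
        fused = z_inv + (w.unsqueeze(-1) * z_spec).sum(dim=1)            # (B, dim)
        return self.classifier(fused)
```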
2 code implementations • 17 Jun 2022 • Yiqiang Chen, Wang Lu, Xin Qin, Jindong Wang, Xing Xie
Federated learning has attracted increasing attention for building models without accessing raw user data, especially in healthcare.
no code implementations • 14 Jun 2022 • Wang Lu, Jindong Wang, Yiqiang Chen, Sinno Jialin Pan, Chunyu Hu, Xin Qin
Training on existing data often biases the model towards the training distribution, so it may perform poorly on test data with different distributions.
1 code implementation • 1 Dec 2021 • Wang Lu, Jindong Wang, Yiqiang Chen, Xin Qin, Renjun Xu, Dimitrios Dimitriadis, Tao Qin
There is a growing interest in applying machine learning techniques to healthcare.
no code implementations • 29 Sep 2021 • Wang Lu, Jindong Wang, Yiqiang Chen, Xinwei Sun
In this paper, we propose to view the time series classification problem from a distribution perspective.
no code implementations • 2 Jun 2021 • Yiqiang Chen, Wang Lu, Jindong Wang, Xin Qin
The success of machine learning applications often requires a large quantity of data.
1 code implementation • 2 Mar 2021 • Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu
Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.
1 code implementation • 29 Jan 2021 • Wang Lu, Yiqiang Chen, Jindong Wang, Xin Qin
In this paper, we propose substructure-level matching for domain adaptation (SSDA) to better utilize the locality information of activity data for accurate and efficient knowledge transfer.
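A deliberately simplified illustration of substructure-level alignment, assuming substructures are found by k-means and paired by index; a faithful implementation would match substructures across domains more carefully (e.g., with optimal transport):

```python
import torch
from sklearn.cluster import KMeans

# Cluster each domain's features into k substructures, then pull
# paired substructure centroids together. Pairing clusters by index
# is a crude simplification for illustration only.
def substructure_loss(src_feats, tgt_feats, k=4):
    src_lab = KMeans(n_clusters=k, n_init=10).fit_predict(src_feats.detach().cpu().numpy())
    tgt_lab = KMeans(n_clusters=k, n_init=10).fit_predict(tgt_feats.detach().cpu().numpy())
    loss = 0.0
    for c in range(k):
        mu_s = src_feats[torch.tensor(src_lab) == c].mean(dim=0)  # live features,
        mu_t = tgt_feats[torch.tensor(tgt_lab) == c].mean(dim=0)  # so gradients flow
        loss = loss + ((mu_s - mu_t) ** 2).sum()
    return loss / k
```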
no code implementations • 2 Dec 2020 • Tangqing Cao, Wenqi Guo, Wang Lu, Yunfei Xue, Wenjun Lu, Jing Su, Christian H. Liebscher, Chang Liu, Gerhard Dehm
Such softening behavior can be attributed to the interaction of dislocations with short-range clustering.
Materials Science