no code implementations • 22 May 2024 • Naibo Wang, Yuchen Deng, Wenjie Feng, Jianwei Yin, See-Kiong Ng
In this paper, we introduce a novel data-free federated class incremental learning framework with diffusion-based generative memory (DFedDGM) to mitigate catastrophic forgetting by generating stable, high-quality images through diffusion models.
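A minimal sketch of the generative-replay idea behind such a framework, assuming a hypothetical frozen generator `old_class_generator` that stands in for the trained diffusion sampler; the actual DFedDGM pipeline (diffusion training, federated aggregation) is not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a frozen diffusion model that replays old classes.
# In the real framework this would be a trained diffusion sampler; here it emits
# random tensors shaped like the data purely to keep the sketch runnable.
def old_class_generator(batch_size, num_old_classes, img_shape=(3, 32, 32)):
    images = torch.randn(batch_size, *img_shape)
    labels = torch.randint(0, num_old_classes, (batch_size,))
    return images, labels

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)

def incremental_step(new_images, new_labels, num_old_classes, replay_ratio=0.5):
    """One training step that mixes new-class data with generated 'memory' samples."""
    replay_size = int(len(new_images) * replay_ratio)
    old_images, old_labels = old_class_generator(replay_size, num_old_classes)
    images = torch.cat([new_images, old_images])
    labels = torch.cat([new_labels, old_labels])
    loss = F.cross_entropy(classifier(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a batch of "new" classes 5-9 plus replayed samples of old classes 0-4.
new_x = torch.randn(16, 3, 32, 32)
new_y = torch.randint(5, 10, (16,))
print(incremental_step(new_x, new_y, num_old_classes=5))
```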
no code implementations • 18 Apr 2024 • Naibo Wang, Yuchen Deng, Wenjie Feng, Shichen Fan, Jianwei Yin, See-Kiong Ng
In this paper, we improve one-shot sequential federated learning on non-IID data by proposing a local model diversity-enhancing strategy.
1 code implementation • 27 Mar 2024 • Brian Formento, Wenjie Feng, Chuan Sheng Foo, Luu Anh Tuan, See-Kiong Ng
Language models (LMs) are indispensable tools for natural language processing tasks, but their vulnerability to adversarial attacks remains a concern.
no code implementations • 11 Feb 2024 • Yuyao Ge, Shenghua Liu, Wenjie Feng, Lingrui Mei, Lizhe Chen, Xueqi Cheng
In this work, we reveal the impact of the order of graph description on LLMs' graph reasoning performance, which significantly affects LLMs' reasoning abilities.
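To illustrate what "order of graph description" means in practice, here is a small, hypothetical sketch that serializes the same edge list in different orders (as given, BFS-based, shuffled) before placing it in a prompt; the paper's exact orderings and prompts may differ.

```python
import random
from collections import deque

edges = [(0, 1), (1, 2), (0, 3), (3, 4), (2, 4)]

def bfs_order(edges, start=0):
    """Reorder edges by the order in which a BFS from `start` first traverses them."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, queue, ordered = {start}, deque([start]), []
    remaining = set(edges)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            e = (u, v) if (u, v) in remaining else (v, u)
            if e in remaining:
                ordered.append(e)
                remaining.discard(e)
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return ordered + list(remaining)

def to_prompt(edge_list):
    body = ", ".join(f"({u}, {v})" for u, v in edge_list)
    return f"The graph has edges: {body}. Is there a path from 0 to 4?"

print(to_prompt(edges))                              # original order
print(to_prompt(bfs_order(edges)))                   # traversal-based order
print(to_prompt(random.sample(edges, len(edges))))   # shuffled order
```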
2 code implementations • 10 May 2023 • Mingqi Yang, Wenjie Feng, Yanming Shen, Bryan Hooi
Proposing an effective and flexible matrix to represent a graph is a fundamental challenge that has been explored from multiple perspectives, e.g., filtering in Graph Fourier Transforms; a textbook spectral-filtering sketch follows this entry.
Ranked #6 on Graph Regression on ZINC
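For context on the "filtering in Graph Fourier Transforms" example mentioned above, the sketch below shows the standard spectral-filtering recipe (Laplacian eigendecomposition, filter the spectrum, transform back); it is a textbook illustration, not the paper's proposed matrix representation.

```python
import numpy as np

# A small undirected graph given by its adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))           # degree matrix
L = D - A                            # combinatorial graph Laplacian

# Graph Fourier basis: eigenvectors of the Laplacian.
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, 0.0, 2.0, -1.0])  # a signal on the 4 nodes

x_hat = U.T @ x                      # Graph Fourier Transform
h = np.exp(-0.5 * eigvals)           # a low-pass filter on the spectrum
x_filtered = U @ (h * x_hat)         # filter, then inverse transform

print(x_filtered)
```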
1 code implementation • ACM The Web Conference 2023 • Naibo Wang, Wenjie Feng, Jianwei Yin, See-Kiong Ng
As such, web-crawling is an essential tool for both computational and non-computational scientists to conduct research.
1 code implementation • 23 Feb 2023 • Naibo Wang, Wenjie Feng, Fusheng Liu, Moming Duan, See-Kiong Ng
The emerging availability of trained machine learning models has given rise to the concept of a Machine Learning Model Market: the collective intelligence of multiple well-trained models is harnessed, through one-shot federated learning and ensemble learning in a data-free manner, to improve the performance of the resulting model.
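A minimal sketch of the data-free ensembling idea: several independently trained models contribute predictions on new inputs, and their outputs are fused without any party sharing its training data. The models and fusion rule below (probability averaging over toy scikit-learn classifiers) are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulate a "model market": three parties train locally on disjoint data
# and publish only their fitted models, never their data.
local_models = []
for seed in range(3):
    X, y = make_classification(n_samples=200, n_features=10, random_state=seed)
    local_models.append(LogisticRegression(max_iter=1000).fit(X, y))

def ensemble_predict(models, X_new):
    """Fuse the parties' models by averaging predicted class probabilities."""
    probs = np.mean([m.predict_proba(X_new) for m in models], axis=0)
    return probs.argmax(axis=1)

X_query = rng.normal(size=(5, 10))
print(ensemble_predict(local_models, X_query))
```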
1 code implementation • 29 Nov 2022 • Miao Xiong, Shen Li, Wenjie Feng, Ailin Deng, Jihai Zhang, Bryan Hooi
How do we know when the predictions made by a classifier can be trusted?
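One simple way to operationalize this question, sketched below under the assumption of access to labeled reference data, is to score a prediction by how strongly its nearest labeled neighbors agree with it; this is a generic neighborhood-agreement heuristic, not the specific estimator proposed in the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)
nn = NearestNeighbors(n_neighbors=10).fit(X)

def trust_score(x_query):
    """Fraction of the query's nearest labeled neighbors that agree with the prediction."""
    pred = clf.predict(x_query.reshape(1, -1))[0]
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    agreement = np.mean(y[idx[0]] == pred)
    return pred, agreement

pred, score = trust_score(X[0])
print(f"predicted class {pred}, neighborhood agreement {score:.2f}")
```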
2 code implementations • 10 Aug 2021 • Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, Chongjun Wang
This paper proposes Adaptive RNNs (AdaRNN) to tackle the temporal covariate shift (TCS) problem by building an adaptive model that generalizes well on unseen test data.
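To make the temporal covariate shift problem concrete, the sketch below splits a time series into periods and measures how far their distributions drift apart using a simple mean-gap statistic; AdaRNN itself learns the period split and aligns distributions inside the RNN, which this toy example does not implement.

```python
import numpy as np

rng = np.random.default_rng(42)

# A synthetic series whose distribution drifts over time (the mean slowly increases).
T = 900
series = rng.normal(loc=np.linspace(0.0, 3.0, T), scale=1.0)

def period_shift(series, n_periods=3):
    """Split into equal periods and report pairwise mean gaps as a crude drift measure."""
    periods = np.array_split(series, n_periods)
    means = [p.mean() for p in periods]
    gaps = {(i, j): abs(means[i] - means[j])
            for i in range(n_periods) for j in range(i + 1, n_periods)}
    return means, gaps

means, gaps = period_shift(series)
print("period means:", np.round(means, 2))
print("pairwise mean gaps:", {k: round(v, 2) for k, v in gaps.items()})
```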
no code implementations • 3 Mar 2021 • Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, Tie-Yan Liu
Because it is expensive and time-consuming to collect massive COVID-19 image samples to train deep classification models, transfer learning is a promising approach that transfers knowledge from abundant typical pneumonia datasets to COVID-19 image classification.
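The generic transfer-learning recipe this entry refers to is sketched below with a torchvision ResNet whose classification head is replaced for a new binary task; the class count, frozen layers, and dummy batch are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pretrained on a large source dataset (ImageNet weights here;
# the paper transfers from pneumonia data instead, which we do not have in this sketch).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor and replace the head for 2 target classes.
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for chest X-ray images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
loss = criterion(backbone(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```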
1 code implementation • 17 Jul 2020 • Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu
However, it remains challenging to determine which method is suitable for a given application, since each method is built with certain priors or biases.
1 code implementation • 17 Sep 2019 • Jindong Wang, Yiqiang Chen, Wenjie Feng, Han Yu, Meiyu Huang, Qiang Yang
Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions.
Ranked #7 on Domain Adaptation on ImageCLEF-DA
1 code implementation • 19 Jul 2018 • Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, Philip S. Yu
Existing methods either attempt to align the cross-domain distributions, or perform manifold subspace learning.
Ranked #1 on Domain Adaptation on Office-Caltech-10
no code implementations • 2 Jul 2018 • Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, Zhiqi Shen
To tackle the distribution adaptation problem, in this paper we propose a novel transfer learning approach named Balanced Distribution Adaptation (BDA), which can adaptively leverage the importance of the marginal and conditional distribution discrepancies; several existing methods can be treated as special cases of BDA.
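A worked sketch of the balancing idea, assuming MMD as the discrepancy measure: the overall distance is a convex combination (1 - mu) * marginal + mu * conditional, where mu trades off the two terms. The estimator below uses a simplified linear-kernel MMD for illustration only, not BDA's full formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd(X, Y):
    """Simplified linear-kernel MMD: squared distance between feature means."""
    return float(np.sum((X.mean(axis=0) - Y.mean(axis=0)) ** 2))

def balanced_discrepancy(Xs, ys, Xt, yt_pseudo, mu=0.5):
    """(1 - mu) * marginal MMD + mu * average per-class (conditional) MMD."""
    marginal = mmd(Xs, Xt)
    classes = np.intersect1d(np.unique(ys), np.unique(yt_pseudo))
    conditional = np.mean([mmd(Xs[ys == c], Xt[yt_pseudo == c]) for c in classes])
    return (1 - mu) * marginal + mu * conditional

# Toy source/target domains with a mean shift; in practice the target labels are pseudo-labels.
Xs = rng.normal(0.0, 1.0, size=(100, 5)); ys = rng.integers(0, 2, 100)
Xt = rng.normal(0.5, 1.0, size=(100, 5)); yt = rng.integers(0, 2, 100)

for mu in (0.0, 0.5, 1.0):
    print(mu, round(balanced_discrepancy(Xs, ys, Xt, yt, mu), 3))
```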