Search Results for author: Wenjie Feng

Found 12 papers, 9 papers with code

Balanced Distribution Adaptation for Transfer Learning

no code implementations · 2 Jul 2018 · Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, Zhiqi Shen

To tackle the distribution adaptation problem, in this paper we propose a novel transfer learning approach named Balanced Distribution Adaptation (BDA), which can adaptively leverage the importance of the marginal and conditional distribution discrepancies; several existing methods can be treated as special cases of BDA.

Transfer Learning
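The balancing idea in the abstract can be made concrete with a short sketch. The snippet below is a minimal illustration, not the authors' implementation; the names Xs, ys, Xt, yt_pseudo and mu are assumptions. It combines a linear-kernel estimate of the marginal discrepancy with per-class conditional discrepancies through a balance factor mu, which is the weighting that BDA adapts.

import numpy as np

def mmd_linear(A, B):
    # Linear-kernel MMD estimate: squared distance between feature means.
    return float(np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2))

def balanced_discrepancy(Xs, ys, Xt, yt_pseudo, mu=0.5):
    # Hypothetical balanced distance:
    # (1 - mu) * marginal MMD + mu * average per-class conditional MMD.
    marginal = mmd_linear(Xs, Xt)
    conditional = []
    for c in np.unique(ys):
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xs_c) and len(Xt_c):
            conditional.append(mmd_linear(Xs_c, Xt_c))
    conditional = float(np.mean(conditional)) if conditional else 0.0
    return (1 - mu) * marginal + mu * conditional

Setting mu = 0 keeps only the marginal term and mu = 1 keeps only the class-conditional term, which is the sense in which methods that adapt just one of the two distributions can be viewed as special cases of BDA.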

Transfer Learning with Dynamic Distribution Adaptation

1 code implementation · 17 Sep 2019 · Jindong Wang, Yiqiang Chen, Wenjie Feng, Han Yu, Meiyu Huang, Qiang Yang

Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions.

Domain Adaptation · Image Classification +2

Learning to Match Distributions for Domain Adaptation

1 code implementation · 17 Jul 2020 · Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu

However, it remains challenging to determine which method is suitable for a given application, since these methods are built with certain priors or biases.

Domain Adaptation · Inductive Bias

Learning Invariant Representations across Domains and Tasks

no code implementations · 3 Mar 2021 · Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, Tie-Yan Liu

Because it is expensive and time-consuming to collect massive COVID-19 image samples to train deep classification models, transfer learning is a promising approach that transfers knowledge from the abundant typical pneumonia datasets to COVID-19 image classification.

Domain Adaptation · Image Classification +1

AdaRNN: Adaptive Learning and Forecasting of Time Series

2 code implementations · 10 Aug 2021 · Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, Chongjun Wang

This paper proposes Adaptive RNNs (AdaRNN) to tackle the Temporal Covariate Shift (TCS) problem by building an adaptive model that generalizes well on the unseen test data.

Human Activity Recognition · Time Series +1
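For readers unfamiliar with the acronym, Temporal Covariate Shift can be summarized roughly as follows (a paraphrase of the usual formulation, not a quotation from the paper): the series is divided into segments D_1, ..., D_K whose input distributions differ while the labeling rule is shared,

P_{D_i}(x) \neq P_{D_j}(x) \quad \text{for } i \neq j, \qquad P_{D_i}(y \mid x) = P_{D_j}(y \mid x),

and the test segment is unseen during training.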

Data-Free Diversity-Based Ensemble Selection For One-Shot Federated Learning in Machine Learning Model Market

1 code implementation · 23 Feb 2023 · Naibo Wang, Wenjie Feng, Fusheng Liu, Moming Duan, See-Kiong Ng

The emerging availability of trained machine learning models has given rise to the novel concept of a Machine Learning Model Market, in which one can harness the collective intelligence of multiple well-trained models, through one-shot federated learning and ensemble learning in a data-free manner, to improve the performance of the resulting model.

Ensemble Learning · Federated Learning
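As a purely illustrative sketch of the ensemble step mentioned above (not the paper's diversity-based selection, which is its actual contribution), the snippet below averages the class probabilities of several already-trained client models; the predict_proba interface and the names are assumptions.

import numpy as np

def ensemble_predict(models, X):
    # Average predicted class probabilities over pre-trained client models,
    # then pick the most likely class per sample.
    probs = np.mean([m.predict_proba(X) for m in models], axis=0)
    return probs.argmax(axis=1)

In a one-shot federated setting, each client contributes a single trained model and only models, not data, are exchanged; the paper's focus is choosing a diverse subset of such models without access to evaluation data.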

EasySpider: A No-Code Visual System for Crawling the Web

1 code implementation · The ACM Web Conference 2023 · Naibo Wang, Wenjie Feng, Jianwei Yin, See-Kiong Ng

As such, web crawling is an essential tool for both computational and non-computational scientists to conduct research.

Data Integration · Marketing

Towards Better Graph Representation Learning with Parameterized Decomposition & Filtering

2 code implementations · 10 May 2023 · Mingqi Yang, Wenjie Feng, Yanming Shen, Bryan Hooi

Proposing an effective and flexible matrix to represent a graph is a fundamental challenge that has been explored from multiple perspectives, e.g., filtering in Graph Fourier Transforms.

Computational Efficiency · Graph Learning +2
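The "filtering in Graph Fourier Transforms" that the abstract refers to can be sketched generically as follows (a textbook illustration of spectral filtering, not the paper's parameterized decomposition): eigendecompose the normalized Laplacian and apply a filter response to the eigenvalues.

import numpy as np

def spectral_filter(A, x, g):
    # Filter a graph signal x on a symmetric adjacency matrix A
    # with spectral response g(lambda).
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d, dtype=float)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # normalized Laplacian
    lam, U = np.linalg.eigh(L)           # graph Fourier basis
    return U @ (g(lam) * (U.T @ x))      # transform, filter, inverse transform

# Example: a low-pass filter that attenuates high graph frequencies.
# y = spectral_filter(A, x, g=lambda lam: np.exp(-2.0 * lam))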

Graph Descriptive Order Improves Reasoning with Large Language Model

no code implementations · 11 Feb 2024 · Yuyao Ge, Shenghua Liu, Wenjie Feng, Lingrui Mei, Lizhe Chen, Xueqi Cheng

In this work, we reveal that the order in which a graph is described significantly affects LLMs' graph reasoning performance.

Descriptive · Language Modelling +1
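To make "graph descriptive order" concrete, here is a small hypothetical sketch: the same edge list is serialized into prompt text either in breadth-first order from a start node or in a shuffled order, the kind of variation whose effect on LLM reasoning the paper studies (the exact prompt formats used in the paper may differ).

import random
from collections import deque

def describe_edges(edges, order="bfs", start=None, seed=0):
    # Serialize an undirected edge list into text, in BFS or shuffled order.
    if order == "shuffled":
        edges = list(edges)
        random.Random(seed).shuffle(edges)
    else:
        # BFS over the component containing `start` (defaults to the first endpoint).
        adj = {}
        for u, v in edges:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        start = start if start is not None else edges[0][0]
        seen_nodes, seen_edges, ordered = {start}, set(), []
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                e = tuple(sorted((u, v)))
                if e not in seen_edges:
                    seen_edges.add(e)
                    ordered.append(e)
                if v not in seen_nodes:
                    seen_nodes.add(v)
                    queue.append(v)
        edges = ordered
    return "; ".join(f"node {u} is connected to node {v}" for u, v in edges)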

SemRoDe: Macro Adversarial Training to Learn Representations That are Robust to Word-Level Attacks

1 code implementation · 27 Mar 2024 · Brian Formento, Wenjie Feng, Chuan Sheng Foo, Luu Anh Tuan, See-Kiong Ng

Language models (LMs) are indispensable tools for natural language processing tasks, but their vulnerability to adversarial attacks remains a concern.

Word Embeddings
