Search Results for author: Yongqiang Chen

Found 14 papers, 8 papers with code

Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions

1 code implementation 31 Mar 2023 Sanxing Chen, Yongqiang Chen, Börje F. Karlsson

Temporal and numerical expression understanding is of great importance in many downstream Natural Language Processing (NLP) and Information Retrieval (IR) tasks.

Date Understanding · Information Retrieval +2

Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs

3 code implementations 11 Feb 2022 Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng

Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited.

Drug Discovery · Graph Learning +1

Understanding and Improving Graph Injection Attack by Promoting Unnoticeability

1 code implementation ICLR 2022 Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng

Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary merely injects a few malicious nodes instead of modifying existing nodes or edges as in Graph Modification Attack (GMA).

Understanding and Improving Feature Learning for Out-of-Distribution Generalization

1 code implementation NeurIPS 2023 Yongqiang Chen, Wei Huang, Kaiwen Zhou, Yatao Bian, Bo Han, James Cheng

Moreover, when the ERM-learned features are fed to the OOD objectives, the quality of the learned invariant features significantly affects the final OOD performance, as OOD objectives rarely learn new features.
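As a rough illustration of the two-stage protocol this finding refers to, the hypothetical sketch below pre-trains an encoder with plain ERM and then optimizes only the classifier under an IRMv1-style invariance penalty used as a stand-in OOD objective; the function names, hyperparameters, and data layout are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: ERM pre-training followed by an OOD objective applied
# on top of the frozen ERM-learned features (IRMv1-style penalty used here
# purely as an illustrative OOD objective).
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1 penalty: squared gradient norm of the risk w.r.t. a dummy scale.
    scale = torch.tensor(1.0, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

def train_two_stage(encoder, classifier, envs, erm_epochs=50, ood_epochs=50,
                    penalty_weight=100.0, lr=1e-3):
    # envs: list of (x, y) batches, one per training environment.
    # Stage 1: plain ERM on the pooled environments learns the features.
    params = list(encoder.parameters()) + list(classifier.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(erm_epochs):
        for x, y in envs:
            opt.zero_grad()
            F.cross_entropy(classifier(encoder(x)), y).backward()
            opt.step()

    # Stage 2: freeze the ERM-learned features; the OOD objective only
    # re-weights them, so their quality bounds the final OOD performance.
    for p in encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(classifier.parameters(), lr=lr)
    for _ in range(ood_epochs):
        risks, penalties = [], []
        for x, y in envs:
            logits = classifier(encoder(x))
            risks.append(F.cross_entropy(logits, y))
            penalties.append(irm_penalty(logits, y))
        loss = torch.stack(risks).mean() + penalty_weight * torch.stack(penalties).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder, classifier
```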

Out-of-Distribution Generalization

Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs

1 code implementation 18 Feb 2020 Han Yang, Xiao Yan, Xinyan Dai, Yongqiang Chen, James Cheng

In this paper, we propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models for better performance on semi-supervised node classification.

General Classification · Node Classification

Calibrating and Improving Graph Contrastive Learning

1 code implementation 27 Jan 2021 Kaili Ma, Haochen Yang, Han Yang, Yongqiang Chen, James Cheng

To assess the discrepancy between the prediction and the ground-truth in the downstream tasks for these contrastive pairs, we adapt the expected calibration error (ECE) to graph contrastive learning.
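For reference, the standard expected calibration error bins predictions by confidence and averages the per-bin gap between accuracy and mean confidence; the minimal sketch below computes this generic ECE (the bin count and array-based interface are illustrative assumptions, and the graph-contrastive adaptation itself is described in the paper).

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: weighted average over confidence bins of
    |accuracy(bin) - mean confidence(bin)|.

    confidences: array of predicted confidences in [0, 1]
    correct:     boolean array, True where the prediction matches the label
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # bin weight = fraction of samples in bin
    return ece
```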

Contrastive Learning · Graph Clustering +3

Towards out-of-distribution generalizable predictions of chemical kinetics properties

1 code implementation 4 Oct 2023 ZiHao Wang, Yongqiang Chen, Yang Duan, Weijiang Li, Bo Han, James Cheng, Hanghang Tong

Under this framework, we create comprehensive datasets to benchmark (1) the state-of-the-art ML approaches for reaction prediction in the OOD setting and (2) the state-of-the-art graph OOD methods in kinetics property prediction problems.

Property Prediction

RLTP: Reinforcement Learning to Pace for Delayed Impression Modeling in Preloaded Ads

no code implementations 6 Feb 2023 Penghui Wei, Yongqiang Chen, Shaoguo Liu, Liang Wang, Bo Zheng

Over a whole delivery period, advertisers usually desire a certain impression count for their ads, and they also expect the delivery performance to be as good as possible (e.g., obtaining a high click-through rate).

reinforcement-learning · Reinforcement Learning (RL)

Positional Information Matters for Invariant In-Context Learning: A Case Study of Simple Function Classes

no code implementations 30 Nov 2023 Yongqiang Chen, Binghui Xie, Kaiwen Zhou, Bo Han, Yatao Bian, James Cheng

Surprisingly, DeepSet outperforms transformers across a variety of distribution shifts, implying that preserving the permutation-invariance symmetry of input demonstrations is crucial for OOD ICL.

In-Context Learning

Enhancing Evolving Domain Generalization through Dynamic Latent Representations

no code implementations 16 Jan 2024 Binghui Xie, Yongqiang Chen, Jiaqi Wang, Kaiwen Zhou, Bo Han, Wei Meng, James Cheng

However, in non-stationary tasks where new domains evolve along an underlying continuous structure, such as time, merely extracting the invariant features is insufficient for generalizing to the evolving new domains.

Evolving Domain Generalization

Enhancing Neural Subset Selection: Integrating Background Information into Set Representations

no code implementations 5 Feb 2024 Binghui Xie, Yatao Bian, Kaiwen Zhou, Yongqiang Chen, Peilin Zhao, Bo Han, Wei Meng, James Cheng

Learning neural subset selection tasks, such as compound selection in AI-aided drug discovery, has become increasingly pivotal across diverse applications.

Drug Discovery

Discovery of the Hidden World with Large Language Models

no code implementations 6 Feb 2024 Chenxi Liu, Yongqiang Chen, Tongliang Liu, Mingming Gong, James Cheng, Bo Han, Kun Zhang

The rise of large language models (LLMs), which are trained to learn rich knowledge from massive observations of the world, provides a new opportunity to assist in discovering high-level hidden variables from raw observational data.

Causal Discovery

Do CLIPs Always Generalize Better than ImageNet Models?

no code implementations 18 Mar 2024 Qizhou Wang, Yong Lin, Yongqiang Chen, Ludwig Schmidt, Bo Han, Tong Zhang

The performance drops from the common to the counter groups quantify the models' reliance on spurious features (i.e., backgrounds) to predict the animals.