Search Results for author: Zhengyu Chen

Found 19 papers, 8 papers with code

Discovering Invariant Neighborhood Patterns for Heterophilic Graphs

no code implementations 15 Mar 2024 Ruihao Zhang, Zhengyu Chen, Teng Xiao, Yueyang Wang, Kun Kuang

We propose a novel Invariant Neighborhood Pattern Learning (INPL) framework to alleviate the distribution-shift problem on non-homophilous graphs.

Graph Learning

Learning to Reweight for Graph Neural Network

no code implementations 19 Dec 2023 Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, Fei Wu

In this paper, we study the problem of the generalization ability of GNNs in Out-Of-Distribution (OOD) settings.

Out-of-Distribution Generalization

Simple and Asymmetric Graph Contrastive Learning without Augmentations

1 code implementation NeurIPS 2023 Teng Xiao, Huaisheng Zhu, Zhengyu Chen, Suhang Wang

Experimental results show that the simple GraphACL significantly outperforms state-of-the-art graph contrastive learning and self-supervised learning methods on homophilic and heterophilic graphs.

Contrastive Learning, Representation Learning +1

Let Models Speak Ciphers: Multiagent Debate through Embeddings

no code implementations 10 Oct 2023 Chau Pham, Boyi Liu, Yingxiang Yang, Zhengyu Chen, Tianyi Liu, Jianbo Yuan, Bryan A. Plummer, Zhaoran Wang, Hongxia Yang

Although natural language is an obvious choice for communication given LLMs' language understanding capabilities, the token sampling step needed when generating natural language poses a potential risk of information loss, as it uses only one token to represent the model's belief over the entire vocabulary.

Learning How to Propagate Messages in Graph Neural Networks

1 code implementation 1 Oct 2023 Teng Xiao, Zhengyu Chen, Donglin Wang, Suhang Wang

To compensate for this, we present Learning to Propagate, a general learning framework that not only learns the GNN parameters for prediction but, more importantly, can explicitly learn interpretable and personalized propagation strategies for different nodes and various types of graphs.

Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models

1 code implementation 1 Oct 2023 Duanyu Feng, Yongfu Dai, Jimin Huang, Yifang Zhang, Qianqian Xie, Weiguang Han, Zhengyu Chen, Alejandro Lopez-Lira, Hao Wang

We then propose the first Credit and Risk Assessment Large Language Model (CALM) by instruction tuning, tailored to the nuanced demands of various financial risk assessment tasks.

Decision Making, Language Modelling +1

PIE: Simulating Disease Progression via Progressive Image Editing

1 code implementation 21 Sep 2023 Kaizhao Liang, Xu Cao, Kuei-Da Liao, Tianren Gao, Wenqian Ye, Zhengyu Chen, Jianguo Cao, Tejas Nama, Jimeng Sun

Disease progression simulation is a crucial area of research that has significant implications for clinical diagnosis, prognosis, and treatment.

On the Tool Manipulation Capability of Open-source Large Language Models

1 code implementation 25 May 2023 Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, Jian Zhang

In this paper, we ask whether we can enhance open-source LLMs to be competitive with leading closed LLM APIs in tool manipulation, given a practical amount of human supervision.

IDEAL: Toward High-efficiency Device-Cloud Collaborative and Dynamic Recommendation System

no code implementations 14 Feb 2023 Zheqi Lv, Zhengyu Chen, Shengyu Zhang, Kun Kuang, Wenqiao Zhang, Mengze Li, Beng Chin Ooi, Fei Wu

The aforementioned two trends enable the device-cloud collaborative and dynamic recommendation, which deeply exploits the recommendation pattern among cloud-device data and efficiently characterizes different instances with different underlying distributions based on the cost of frequent device-cloud communication.

Recommendation Systems, Vocal Bursts Intensity Prediction

MAP: Towards Balanced Generalization of IID and OOD through Model-Agnostic Adapters

1 code implementation ICCV 2023 Min Zhang, Junkun Yuan, Yue He, Wenbin Li, Zhengyu Chen, Kun Kuang

To achieve this goal, we apply a bilevel optimization to explicitly model and optimize the coupling relationship between the OOD model and auxiliary adapter layers.

Bilevel Optimization, Inductive Bias

DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization

1 code implementation 12 Sep 2022 Zheqi Lv, Wenqiao Zhang, Shengyu Zhang, Kun Kuang, Feng Wang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Beng Chin Ooi, Fei Wu

DUET is deployed on a powerful cloud server, requiring only the low cost of forward propagation and the low latency of data transmission between the device and the cloud.

Device-Cloud Collaboration, Domain Adaptation +3

Knowledge Distillation of Transformer-based Language Models Revisited

no code implementations 29 Jun 2022 Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang

In the past few years, transformer-based pre-trained language models have achieved astounding success in both industry and academia.

Knowledge Distillation, Language Modelling

Decoupled Self-supervised Learning for Non-Homophilous Graphs

no code implementations 7 Jun 2022 Teng Xiao, Zhengyu Chen, Zhimeng Guo, Zeyang Zhuang, Suhang Wang

This paper studies the problem of conducting self-supervised learning for node representation learning on graphs.

Representation Learning, Self-Supervised Learning +1

Reconsidering Learning Objectives in Unbiased Recommendation with Unobserved Confounders

no code implementations 7 Jun 2022 Teng Xiao, Zhengyu Chen, Suhang Wang

In this paper, we propose a theoretical understanding of why existing unbiased learning objectives work for unbiased recommendation.

Generalization Bounds, Knowledge Distillation +3

Minimizing Memorization in Meta-learning: A Causal Perspective

no code implementations 29 Sep 2021 Yinjie Jiang, Zhengyu Chen, Luotian Yuan, Ying Wei, Kun Kuang, Xinhai Ye, Zhihua Wang, Fei Wu

Meta-learning has emerged as a potent paradigm for quick learning of few-shot tasks, by leveraging the meta-knowledge learned from meta-training tasks.

Causal Inference, Memorization +1

Adaptive Adversarial Training for Meta Reinforcement Learning

no code implementations 27 Apr 2021 Shiqi Chen, Zhengyu Chen, Donglin Wang

Meta Reinforcement Learning (MRL) enables an agent to learn from a limited number of past trajectories and extrapolate to a new task.

Generative Adversarial Network, Meta-Learning +3

Pareto Self-Supervised Training for Few-Shot Learning

no code implementations CVPR 2021 Zhengyu Chen, Jixie Ge, Heshen Zhan, Siteng Huang, Donglin Wang

While few-shot learning (FSL) aims for rapid generalization to new concepts with little supervision, self-supervised learning (SSL) constructs supervisory signals directly computed from unlabeled data.

Auxiliary Learning, Few-Shot Learning +2

Learn Goal-Conditioned Policy with Intrinsic Motivation for Deep Reinforcement Learning

no code implementations 11 Apr 2021 Jinxin Liu, Donglin Wang, Qiangxing Tian, Zhengyu Chen

It is important for an agent to learn a widely applicable and general-purpose policy that can achieve diverse goals, including images and text descriptions.

Reinforcement Learning (RL)
