Search Results for author: Zhengyu Chen

Found 8 papers, 0 papers with code

DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization

no code implementations · 12 Sep 2022 · Zheqi Lv, Wenqiao Zhang, Shengyu Zhang, Kun Kuang, Feng Wang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Beng Chin Ooi, Fei Wu

DUET is deployed on a powerful cloud server and requires only the low cost of forward propagation and a low time delay for data transmission between the device and the cloud.

Model Compression

Knowledge Distillation of Transformer-based Language Models Revisited

no code implementations · 29 Jun 2022 · Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang

In the past few years, transformer-based pre-trained language models have achieved astounding success in both industry and academia.

Knowledge Distillation · Language Modelling

Decoupled Self-supervised Learning for Non-Homophilous Graphs

no code implementations · 7 Jun 2022 · Teng Xiao, Zhengyu Chen, Zhimeng Guo, Zeyang Zhuang, Suhang Wang

In this paper, we study the problem of conducting self-supervised learning for node representation learning on non-homophilous graphs.

Representation Learning · Self-Supervised Learning · +1

Minimizing Memorization in Meta-learning: A Causal Perspective

no code implementations · 29 Sep 2021 · Yinjie Jiang, Zhengyu Chen, Luotian Yuan, Ying Wei, Kun Kuang, Xinhai Ye, Zhihua Wang, Fei Wu

Meta-learning has emerged as a potent paradigm for quick learning of few-shot tasks, by leveraging the meta-knowledge learned from meta-training tasks.

Causal Inference · Memorization · +1

Adaptive Adversarial Training for Meta Reinforcement Learning

no code implementations · 27 Apr 2021 · Shiqi Chen, Zhengyu Chen, Donglin Wang

Meta Reinforcement Learning (MRL) enables an agent to learn from a limited number of past trajectories and extrapolate to new tasks.

Meta-Learning · Meta Reinforcement Learning · +1

Pareto Self-Supervised Training for Few-Shot Learning

no code implementations · CVPR 2021 · Zhengyu Chen, Jixie Ge, Heshen Zhan, Siteng Huang, Donglin Wang

While few-shot learning (FSL) aims for rapid generalization to new concepts with little supervision, self-supervised learning (SSL) constructs supervisory signals directly computed from unlabeled data.

Auxiliary Learning · Few-Shot Learning · +2

Learn Goal-Conditioned Policy with Intrinsic Motivation for Deep Reinforcement Learning

no code implementations · 11 Apr 2021 · Jinxin Liu, Donglin Wang, Qiangxing Tian, Zhengyu Chen

It is important for an agent to learn a widely applicable, general-purpose policy that can achieve diverse goals, including images and text descriptions.
