Search Results for author: Anchun Gui

Found 3 papers, 0 papers with code

G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks

no code implementations • 17 May 2023 • Anchun Gui, Jinqiang Ye, Han Xiao

However, as model scale grows and the number of downstream tasks rises, this paradigm inevitably runs into challenges of computational cost and memory footprint.

Inductive Bias · Transfer Learning
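
For context, the parameter-efficient adapter approach that G-Adapter builds on keeps the pre-trained backbone frozen and trains only small inserted modules. Below is a minimal PyTorch sketch of a generic bottleneck adapter; the class name, dimensions, and placement are illustrative assumptions, not the paper's structure-aware G-Adapter design.

# A minimal sketch of the generic bottleneck-adapter idea behind
# parameter-efficient transfer learning. NOT the paper's G-Adapter;
# names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable module inserted after a frozen Transformer sub-layer."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 32):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen backbone's representation.
        return x + self.up(self.act(self.down(x)))

# Usage: freeze the pre-trained backbone and train only the adapters.
hidden = torch.randn(8, 16, 768)           # (batch, tokens or nodes, hidden_dim)
adapter = BottleneckAdapter(hidden_dim=768)
out = adapter(hidden)
print(out.shape)                           # torch.Size([8, 16, 768])

Only the adapter's parameters receive gradients, so the number of trainable weights stays a small fraction of the full model.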

HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation

no code implementations • 8 May 2023 • Anchun Gui, Han Xiao

To fully leverage the advantages of large-scale pre-trained language models (PLMs) on downstream tasks, fine-tuning all of a PLM's parameters has become the ubiquitous adaptation paradigm.

Vocal Bursts Intensity Prediction
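
For context, head-level parameter-efficient adaptation of the kind HiFi's title describes freezes the PLM and updates only the projections belonging to a chosen subset of attention heads. The PyTorch sketch below illustrates that general idea with Hugging Face's BertModel; the selected layer and head indices are placeholders, and the gradient-masking trick is one simple way to restrict updates, not the paper's information-based head selection.

# A rough sketch of head-level parameter-efficient adaptation: unfreeze only the
# query/key/value slices belonging to a chosen subset of attention heads.
# The head indices below are placeholders (assumptions), not HiFi's selection.
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
num_heads = model.config.num_attention_heads       # 12 for bert-base
head_dim = model.config.hidden_size // num_heads   # 64 for bert-base
selected_heads = {0: [2, 5], 6: [1, 9]}            # layer -> head indices (assumed)

# Freeze everything first.
for p in model.parameters():
    p.requires_grad = False

# Re-enable gradients only for the chosen heads' Q/K/V rows by masking
# gradients of the non-selected rows after each backward pass.
for layer_idx, heads in selected_heads.items():
    attn = model.encoder.layer[layer_idx].attention.self
    for proj in (attn.query, attn.key, attn.value):
        proj.weight.requires_grad = True
        proj.bias.requires_grad = True
        rows = torch.zeros(proj.weight.shape[0], dtype=torch.bool)
        for h in heads:
            rows[h * head_dim:(h + 1) * head_dim] = True
        proj.weight.register_hook(lambda g, m=rows: g * m.unsqueeze(1).to(g.dtype))
        proj.bias.register_hook(lambda g, m=rows: g * m.to(g.dtype))

Training then proceeds as usual; only the masked-in head projections are effectively updated, which is what makes head-level adaptation parameter-efficient.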
