Search Results for author: Xueting Han

Found 5 papers, 3 papers with code

NutePrune: Efficient Progressive Pruning with Numerous Teachers for Large Language Models

no code implementations • 15 Feb 2024 • Shengrui Li, Xueting Han, Jing Bai

Structured pruning offers an effective means to compress LLMs, reducing storage costs and enhancing inference speed for more efficient deployment.

Knowledge Distillation
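
No official code is listed, but the announced recipe (structured masks learned under a distillation signal) can be sketched roughly. The minimal PyTorch toy below uses assumed module shapes and reduces the paper's "numerous teachers" scheme to a single frozen teacher for brevity; it is not the paper's implementation.

```python
# Toy sketch: learnable structured masks over FFN channels, trained with a
# distillation loss from a frozen teacher plus a sparsity penalty.
# All shapes and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedFFN(nn.Module):
    """Feed-forward block whose hidden channels carry a learnable gate.

    Channels whose sigmoid gate falls toward 0 can later be removed,
    shrinking the weight matrices -- the essence of structured pruning.
    """
    def __init__(self, d_model=64, d_hidden=256):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)
        self.mask_logits = nn.Parameter(torch.zeros(d_hidden))

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)      # soft 0..1 gate
        return self.down(F.relu(self.up(x)) * mask)

student, teacher = MaskedFFN(), MaskedFFN()
teacher.requires_grad_(False)                       # frozen teacher

x = torch.randn(8, 64)
s_out, t_out = student(x), teacher(x)

# Distillation keeps the pruned student close to the teacher, while the
# sparsity penalty pushes mask values toward zero.
kd_loss = F.mse_loss(s_out, t_out)
sparsity = torch.sigmoid(student.mask_logits).mean()
loss = kd_loss + 0.1 * sparsity
loss.backward()
```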

Parameter-efficient is not sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions

no code implementations • 16 Jun 2023 • Dongshuo Yin, Xueting Han, Bin Li, Hao Feng, Jing Bai

We provide a gradient backpropagation highway for low-rank adapters, which eliminates the need for expensive backpropagation through the frozen pre-trained model and yields substantial savings in training memory and training time.

Transfer Learning
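
No official code is listed here either, but one plausible reading of the "gradient backpropagation highway" is a parallel adapter path fed by detached backbone activations, so that backpropagation never traverses the frozen model. A minimal PyTorch sketch of that reading, with illustrative layer names and shapes, follows; it is not the authors' implementation.

```python
# Sketch: the frozen backbone runs under no_grad, and low-rank adapters
# consume *detached* intermediate features, so gradients flow only
# through the cheap adapter path. Names and shapes are assumptions.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    def __init__(self, dim=64, rank=4):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)   # start as a no-op, LoRA-style

    def forward(self, x):
        return self.up(self.down(x))

backbone = nn.ModuleList([nn.Linear(64, 64) for _ in range(4)])
backbone.requires_grad_(False)           # frozen pre-trained model
adapters = nn.ModuleList([LowRankAdapter() for _ in range(4)])

x = torch.randn(8, 64)
feats = []
with torch.no_grad():                    # backbone forward: no graph kept
    h = x
    for layer in backbone:
        h = torch.relu(layer(h))
        feats.append(h)

# Adapter "highway": a lightweight parallel path over detached features.
side = torch.zeros_like(x)
for f, adapter in zip(feats, adapters):
    side = side + adapter(f)             # gradients stay in this path

loss = side.sum()
loss.backward()                          # cheap: touches only the adapters
```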

Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs

1 code implementation • 13 Jun 2023 • Haozhen Zhang, Xueting Han, Xi Xiao, Jing Bai

To address these issues, we propose a Time-aware Graph Structure Learning (TGSL) approach via sequence prediction on temporal graphs, which learns better graph structures for downstream tasks by adding potential temporal edges.

Contrastive Learning • Data Augmentation • +3
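
The linked repository is the authoritative implementation; as a rough illustration of the stated mechanism (scoring candidate temporal edges and keeping the highest-scoring ones), here is a minimal PyTorch sketch. The `EdgeScorer` module and the random candidate set are assumptions, not TGSL's actual components.

```python
# Sketch: score candidate temporal edges from endpoint embeddings and a
# time gap, then keep the top-k as added graph structure.
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Scores a candidate edge from its endpoint embeddings and time gap."""
    def __init__(self, dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim + 1, dim),
                                 nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h_src, h_dst, dt):
        return self.mlp(torch.cat([h_src, h_dst, dt], dim=-1)).squeeze(-1)

num_nodes, dim = 100, 32
node_emb = torch.randn(num_nodes, dim)

# Candidate temporal edges as (src, dst, time-gap) triples.
src = torch.randint(0, num_nodes, (500,))
dst = torch.randint(0, num_nodes, (500,))
dt = torch.rand(500, 1)

scorer = EdgeScorer(dim)
scores = scorer(node_emb[src], node_emb[dst], dt)

# Keep the top-k candidates as new edges for the downstream task.
k = 50
top = scores.topk(k).indices
added_edges = torch.stack([src[top], dst[top]])   # shape (2, k)
print(added_edges.shape)
```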

AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs

1 code implementation • 19 Apr 2023 • Shengrui Li, Xueting Han, Jing Bai

AdapterGNN preserves the knowledge of the large pre-trained model and leverages highly expressive adapters for GNNs, which can adapt to downstream tasks effectively with only a few parameters, while also improving the model's generalization ability.

Generalization Bounds
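
Here, too, the linked repository is authoritative; the sketch below only illustrates the general pattern of a zero-initialized bottleneck adapter trained next to a frozen GNN layer. The dense neighbor-aggregation stand-in and all shapes are assumptions chosen to keep the sketch dependency-free.

```python
# Sketch: a bottleneck adapter added to a frozen GNN layer, so only a few
# parameters are trained. The "GNN layer" is a plain aggregate-then-project
# stand-in to avoid external graph libraries.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, dim=32, bottleneck=4):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)   # start as a no-op
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return self.up(torch.relu(self.down(x)))

class FrozenGNNLayer(nn.Module):
    """Stand-in GNN layer: aggregate neighbors, then a linear projection."""
    def __init__(self, dim=32):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        return torch.relu(self.lin(adj @ x))

dim, n = 32, 10
adj = torch.eye(n)                       # toy normalized adjacency
layer, adapter = FrozenGNNLayer(dim), BottleneckAdapter(dim)
layer.requires_grad_(False)              # pre-trained weights stay fixed

x = torch.randn(n, dim)
h = layer(x, adj)
h = h + adapter(h)                       # only the adapter is trained
h.sum().backward()
```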

Adaptive Transfer Learning on Graph Neural Networks

1 code implementation • 19 Jul 2021 • Xueting Han, Zhenhuan Huang, Bang An, Jing Bai

We design an adaptive auxiliary loss weighting model to learn the weights of auxiliary tasks by quantifying the consistency between auxiliary tasks and the target task.

Meta-Learning • Multi-Task Learning
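
As one concrete reading of "quantifying the consistency between auxiliary tasks and the target task", the sketch below weights an auxiliary loss by the cosine similarity between its gradient and the target-task gradient on a shared weight matrix. The paper learns its weighting model, so this fixed rule is only an illustrative approximation, not the authors' method.

```python
# Sketch: weight an auxiliary loss by gradient consistency with the target
# task (cosine similarity of gradients on a shared weight, for brevity).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(16, 16)
x = torch.randn(8, 16)
target_loss = model(x).pow(2).mean()     # stand-in target-task loss
aux_loss = model(x).abs().mean()         # stand-in auxiliary-task loss

g_tgt = torch.autograd.grad(target_loss, model.weight, retain_graph=True)[0]
g_aux = torch.autograd.grad(aux_loss, model.weight, retain_graph=True)[0]

# Consistency: cosine similarity between the two task gradients,
# clamped so a conflicting auxiliary task gets weight 0.
cos = F.cosine_similarity(g_tgt.flatten(), g_aux.flatten(), dim=0)
weight = cos.clamp(min=0.0).detach()

total = target_loss + weight * aux_loss
total.backward()
print(f"auxiliary weight = {weight.item():.3f}")
```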
