Search Results for author: Guangliang Liu

Found 3 papers, 0 papers with code

A Data Generation Perspective to the Mechanism of In-Context Learning

no code implementations • 3 Feb 2024 • Haitao Mao, Guangliang Liu, Yao Ma, Rongrong Wang, Jiliang Tang

In-Context Learning (ICL) empowers Large Language Models (LLMs) with the capacity to learn in context, achieving downstream generalization without gradient updates, using only a few in-context examples.

In-Context Learning
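
As a rough illustration of the mechanism the abstract describes, the sketch below packs labelled demonstrations into a single few-shot prompt for a frozen model; the `complete` callable is a hypothetical stand-in for whatever LLM inference interface is available, and none of this is taken from the paper itself.

```python
# Minimal sketch of in-context learning: the model's weights are never updated;
# the "learning" happens entirely through examples packed into the prompt.
# `complete` is a hypothetical callable (prompt -> completion), not a real API.

def build_icl_prompt(examples, query):
    """Format labelled demonstrations plus an unlabelled query as one prompt."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("The movie was wonderful.", "positive"),
    ("I will never watch this again.", "negative"),
]

prompt = build_icl_prompt(examples, "An absolute waste of two hours.")
print(prompt)                      # inspect the few-shot prompt
# prediction = complete(prompt)    # a frozen LLM would generalize from the demonstrations
```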

PAC-tuning: Fine-tuning Pretrained Language Models with PAC-driven Perturbed Gradient Descent

no code implementations • 26 Oct 2023 • Guangliang Liu, Zhiyu Xue, Xitong Zhang, Kristen Marie Johnson, Rongrong Wang

Fine-tuning pretrained language models (PLMs) for downstream tasks is a large-scale optimization problem, in which the choice of the training algorithm critically determines how well the trained model can generalize to unseen test data, especially in the context of few-shot learning.

Data Augmentation • Few-Shot Learning
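
The title refers to PAC-driven perturbed gradient descent; the sketch below shows a generic perturbed-gradient-descent step in PyTorch, where Gaussian noise is injected into the parameters before the gradient is computed. The noise scale `sigma`, the model, and the loss function are placeholders, and the actual PAC-tuning procedure (e.g. how the PAC-Bayes bound determines the noise variance) is not reproduced here.

```python
# A generic perturbed-gradient-descent step (not the paper's exact algorithm):
# perturb the weights with Gaussian noise, backprop through the perturbed loss,
# restore the weights, then apply the resulting gradient with the optimizer.
import torch

def perturbed_step(model, optimizer, loss_fn, batch, sigma=0.01):
    inputs, targets = batch
    noises = []
    with torch.no_grad():
        for p in model.parameters():          # inject noise in-place
            noise = sigma * torch.randn_like(p)
            p.add_(noise)
            noises.append(noise)
    loss = loss_fn(model(inputs), targets)    # loss at the perturbed weights
    optimizer.zero_grad()
    loss.backward()                           # gradient taken at the perturbed point
    with torch.no_grad():
        for p, noise in zip(model.parameters(), noises):
            p.sub_(noise)                     # restore the original weights
    optimizer.step()                          # update using the perturbed gradient
    return loss.item()
```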

Unlocking Tuning-free Generalization: Minimizing the PAC-Bayes Bound with Trainable Priors

no code implementations • 30 May 2023 • Xitong Zhang, Avrajit Ghosh, Guangliang Liu, Rongrong Wang

It is widely recognized that the generalization ability of neural networks can be greatly enhanced through careful design of the training procedure.
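
For context on the title, which refers to minimizing a PAC-Bayes bound with trainable priors, one classical form of such a bound (a McAllester-style statement, not the specific bound the paper optimizes) is:

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for every posterior Q over hypotheses,
\mathbb{E}_{h \sim Q}\big[L(h)\big]
  \;\le\; \mathbb{E}_{h \sim Q}\big[\widehat{L}(h)\big]
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

Here L is the true risk, L-hat the empirical risk, and P the prior; the bound tightens as the KL term shrinks, which is why the choice (or training) of the prior P matters for how tight a guarantee one can certify.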
