Search Results for author: Ruochen Zhao

Found 8 papers, 2 papers with code

PromptSum: Parameter-Efficient Controllable Abstractive Summarization

no code implementations • 6 Aug 2023 • Mathieu Ravaut, Hailin Chen, Ruochen Zhao, Chengwei Qin, Shafiq Joty, Nancy Chen

Prompt tuning (PT), a parameter-efficient technique that only tunes the additional prompt embeddings while keeping the backbone pre-trained language model (PLM) frozen, has shown promising results in language understanding tasks, especially in low-resource scenarios.

Abstractive Text Summarization • Language Modelling
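The abstract above describes prompt tuning: only a small set of prepended prompt embeddings is trained while the backbone PLM stays frozen. A minimal sketch of the core idea, using NumPy stand-ins for the embedding tensors (the shapes and names are illustrative, not PromptSum's actual implementation):

```python
import numpy as np

# Prompt tuning sketch: the prompt embeddings are the only trainable
# parameters; the token embeddings come from a frozen backbone.
def prepend_prompt(prompt_emb, token_emb):
    # prompt_emb: (num_prompt_tokens, d), token_emb: (seq_len, d)
    # The concatenated sequence is what the frozen transformer consumes.
    return np.concatenate([prompt_emb, token_emb], axis=0)

rng = np.random.default_rng(0)
d = 8
prompt = rng.normal(size=(5, d))   # trainable prompt embeddings
tokens = rng.normal(size=(12, d))  # from the frozen embedding table
x = prepend_prompt(prompt, tokens)
```

Because gradients flow only into `prompt`, the number of tuned parameters is tiny compared with the backbone, which is what makes the method attractive in low-resource settings.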

Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework

1 code implementation • 5 May 2023 • Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing

As large language models (LLMs) have become the norm in NLP, demonstrating strong performance in generation and reasoning tasks, one of their most serious shortcomings is a lack of factual correctness.

Open-Domain Question Answering
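The verify-and-edit idea can be sketched as a small pipeline: when self-consistency over sampled answers is low, each reasoning step is checked against retrieved knowledge and edited before answering. Every helper below is a toy stand-in for illustration, not the paper's actual API:

```python
# Hedged sketch of a verify-and-edit style loop over a chain-of-thought
# rationale. All helpers (retrieve, edit, the toy knowledge base) are
# illustrative assumptions, not the released implementation.

def self_consistency(samples):
    """Fraction of sampled answers that agree with the majority answer."""
    majority = max(set(samples), key=samples.count)
    return samples.count(majority) / len(samples)

def verify_and_edit(rationale, samples, retrieve, edit, threshold=0.7):
    if self_consistency(samples) >= threshold:
        return rationale  # model is confident: keep the rationale as-is
    # Low agreement: verify each step against external knowledge, then edit
    return [edit(step, retrieve(step)) for step in rationale]

# Toy demo with a one-fact "knowledge base"
kb_fact = "Canberra"
retrieve = lambda step: kb_fact
edit = lambda step, ev: f"The capital of Australia is {ev}." if ev else step

out = verify_and_edit(
    rationale=["The capital of Australia is Sydney."],
    samples=["Sydney", "Canberra", "Melbourne"],  # low agreement
    retrieve=retrieve, edit=edit)
```

The design point is that editing is triggered only for uncertain predictions, so confident (and usually correct) rationales are left untouched.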

Explaining Language Models' Predictions with High-Impact Concepts

no code implementations • 3 May 2023 • Ruochen Zhao, Shafiq Joty, Yongjie Wang, Tan Wang

The emergence of large-scale pretrained language models has posed unprecedented challenges in explaining why a model makes particular predictions.

Fairness • Vocal Bursts Intensity Prediction

Retrieving Multimodal Information for Augmented Generation: A Survey

no code implementations • 20 Mar 2023 • Ruochen Zhao, Hailin Chen, Weishi Wang, Fangkai Jiao, Xuan Long Do, Chengwei Qin, Bosheng Ding, Xiaobao Guo, Minzhi Li, Xingxuan Li, Shafiq Joty

As Large Language Models (LLMs) have become popular, an important trend has emerged of using multimodality to augment their generation ability, enabling LLMs to better interact with the world.
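The retrieval step surveyed here typically ranks corpus items (text passages, image embeddings, etc., assumed to live in a shared embedding space) by similarity to a query and passes the top-k hits to the generator. A minimal cosine-similarity sketch with toy vectors (the shared embedding space and the function name are assumptions for illustration):

```python
import numpy as np

# Hedged sketch of the retrieval stage in retrieval-augmented generation:
# rank pre-embedded multimodal items by cosine similarity to the query.
def top_k_retrieve(query_vec, corpus_vecs, k=2):
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    scores = c @ q                 # cosine similarity per corpus item
    return np.argsort(-scores)[:k] # indices of the k best matches

# Toy corpus: rows could be text or image embeddings in a shared space
corpus = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
top = top_k_retrieve(np.array([1.0, 0.0]), corpus, k=2)
```

In a full system, the retrieved items would then be serialized into the LLM's context (or fused via cross-attention) before generation.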

