Search Results for author: Yunqi Zhu

Found 4 papers, 2 papers with code

Hierarchical Skip Decoding for Efficient Autoregressive Text Generation

no code implementations • 22 Mar 2024 • Yunqi Zhu, Xuebing Yang, Yuanyuan Wu, Wensheng Zhang

Autoregressive decoding is a commonly used strategy for text generation with pre-trained language models, while early exiting is an effective approach to speeding up the inference stage; a minimal early-exit sketch follows this entry.

Text Generation
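The sketch below illustrates generic early-exit decoding, the idea the snippet above refers to: stop running decoder layers once an intermediate prediction is confident enough. The layer interface, the shared lm_head, and the confidence threshold are illustrative assumptions, and this is not the paper's hierarchical skip-decoding schedule.

import torch
import torch.nn.functional as F

@torch.no_grad()
def early_exit_next_token(hidden, layers, lm_head, threshold=0.9):
    # hidden: (1, seq_len, d_model) decoder input states
    # layers: list of decoder blocks, each mapping hidden states to hidden states
    # lm_head: nn.Linear projecting hidden states to vocabulary logits
    token = None
    for i, layer in enumerate(layers, start=1):
        hidden = layer(hidden)
        probs = F.softmax(lm_head(hidden[:, -1, :]), dim=-1)
        confidence, token = probs.max(dim=-1)
        if confidence.item() >= threshold:   # confident enough: skip remaining layers
            return token.item(), i           # predicted token id, layers actually used
    return token.item(), len(layers)         # fell through: full depth was needed

Easy tokens exit after a few layers while hard tokens use the full stack, which is the general motivation behind early-exit and skip-decoding approaches.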

Parameter-Efficient Fine-Tuning with Layer Pruning on Free-Text Sequence-to-Sequence Modeling

1 code implementation • 15 May 2023 • Yunqi Zhu, Xuebing Yang, Yuanyuan Wu, Wensheng Zhang

The increasing size of language models has raised strong research interest in parameter-efficient fine-tuning methods such as LoRA, which freeze the pre-trained model and inject small sets of trainable parameters for multiple downstream tasks (e.g., summarization, question answering, and translation); a minimal LoRA-style sketch follows this entry.

Dialogue Generation • Question Answering
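The sketch below shows the general LoRA idea the snippet above describes: a frozen pre-trained linear layer plus a small trainable low-rank update. The class name, rank r, and scaling alpha are illustrative assumptions, and the paper's layer-pruning component is not reflected here.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a frozen pre-trained nn.Linear and adds a trainable
    # low-rank update: y = base(x) + (alpha / r) * x A^T B^T
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # freeze pre-trained weights
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

Only lora_a and lora_b receive gradients, so a small adapter can be trained and stored per downstream task while the shared backbone stays frozen.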

Leveraging Summary Guidance on Medical Report Summarization

no code implementations • 8 Feb 2023 • Yunqi Zhu, Xuebing Yang, Yuanyuan Wu, Wensheng Zhang

This study presents three large de-identified medical text datasets, named DISCHARGE, ECHO, and RADIOLOGY, which contain 50K, 16K, and 378K report-summary pairs derived from MIMIC-III, respectively.

Abstractive Text Summarization

Differentiable N-gram Objective on Abstractive Summarization

1 code implementation • 8 Feb 2022 • Yunqi Zhu, Xuebing Yang, Yuanyuan Wu, Mingjin Zhu, Wensheng Zhang

ROUGE is a standard n-gram-based automatic evaluation metric for sequence-to-sequence tasks, while cross-entropy loss, the usual objective of neural language models, optimizes only at the unigram (per-token) level; a minimal differentiable n-gram sketch follows this entry.

Abstractive Text Summarization • Language Modelling
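The sketch below illustrates one way to make an n-gram overlap differentiable, bridging the gap the snippet above points out between unigram-level cross-entropy and n-gram-based ROUGE. It assumes teacher-forced per-step logits aligned with the reference and treats steps as independent; it is an illustrative soft bigram proxy, not the paper's exact objective.

import torch
import torch.nn.functional as F

def soft_bigram_match(logits, reference):
    # logits: (T, V) next-token logits under teacher forcing
    # reference: (T,) reference token ids (long tensor)
    # Returns the expected fraction of reference bigrams the model reproduces.
    probs = F.softmax(logits, dim=-1)
    p_first = probs[:-1].gather(1, reference[:-1].unsqueeze(1)).squeeze(1)
    p_second = probs[1:].gather(1, reference[1:].unsqueeze(1)).squeeze(1)
    expected_matches = (p_first * p_second).sum()
    return expected_matches / max(reference.numel() - 1, 1)

Because the soft match is differentiable, a term like (1 - soft_bigram_match) can be mixed with the standard unigram-level cross-entropy loss during training.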
