Search Results for author: Hyeonmok Ko

Found 4 papers, 0 papers with code

BRIDO: Bringing Democratic Order to Abstractive Summarization

no code implementations • 25 Feb 2025 • Junhyun Lee, Harshith Goka, Hyeonmok Ko

Hallucination refers to inaccurate, irrelevant, or inconsistent text generated by large language models (LLMs).

Abstractive Text Summarization • Contrastive Learning • +1
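The tags above indicate a contrastive-learning approach to summarization. As a rough illustration only, here is a generic BRIO-style pairwise ranking loss over candidate summaries; this is a sketch of the general technique, not BRIDO's actual objective, and the function name and margin value are hypothetical:

```python
# Generic contrastive ranking loss over candidate summaries
# (BRIO-style sketch; NOT the exact BRIDO objective).
import torch
import torch.nn.functional as F

def ranking_loss(scores: torch.Tensor, margin: float = 0.01) -> torch.Tensor:
    """Pairwise margin loss over candidate summaries.

    `scores` holds model log-likelihood scores for candidates already
    sorted best-to-worst by a quality metric (e.g. ROUGE). Each better
    candidate should outscore each worse one by a margin that grows
    with the rank gap.
    """
    n = scores.size(0)
    loss = scores.new_zeros(())
    for i in range(n - 1):
        for j in range(i + 1, n):
            # Require scores[i] >= scores[j] + (j - i) * margin.
            loss = loss + F.relu(scores[j] - scores[i] + (j - i) * margin)
    return loss

# Example: four candidate scores, best-first ordering assumed.
scores = torch.tensor([-1.2, -1.5, -1.4, -2.0], requires_grad=True)
print(ranking_loss(scores))  # positive: the candidate at index 2 outscores index 1
```

Because the required margin scales with the rank gap, badly mis-ranked candidates are penalized more heavily than near-misses.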

Hansel: Output Length Controlling Framework for Large Language Models

no code implementations • 18 Dec 2024 • Seoha Song, Junhyun Lee, Hyeonmok Ko

We demonstrate this by finetuning four different LLMs with Hansel and show that the mean absolute error of the output length decreases significantly for every model and dataset compared to prompt-based length-control finetuning.
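For concreteness, the reported metric can be read as the mean absolute error between the requested and produced output lengths. A minimal sketch, assuming lengths are measured in tokens; the helper name `length_mae` is illustrative, not from the Hansel codebase:

```python
# Mean absolute error (MAE) between target and produced output lengths.
# Names are illustrative; this is not code from the Hansel paper.
def length_mae(target_lens: list[int], output_lens: list[int]) -> float:
    assert len(target_lens) == len(output_lens)
    return sum(abs(t - o) for t, o in zip(target_lens, output_lens)) / len(target_lens)

# Example: three generations asked for 50, 100, and 200 tokens.
print(length_mae([50, 100, 200], [48, 110, 195]))  # (2 + 10 + 5) / 3 ≈ 5.67
```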
