Search Results for author: ChaeHun Park

Found 11 papers, 5 papers with code

PairEval: Open-domain Dialogue Evaluation with Pairwise Comparison

no code implementations • 1 Apr 2024 • ChaeHun Park, Minseok Choi, Dohyun Lee, Jaegul Choo

Recent studies have proposed evaluation metrics that assess generated responses by considering their relevance to previous dialogue histories.

Dialogue Evaluation
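
A minimal sketch of the pairwise-comparison idea named in the PairEval title, assuming a hypothetical `preference_score` model that returns the probability that one response is a better continuation than another; this illustrates the general setup only, not the paper's actual metric.

```python
import random

def preference_score(history, resp_a, resp_b):
    """Hypothetical scorer: probability that resp_a continues `history`
    better than resp_b. A real metric would back this with a trained
    comparison model rather than a random placeholder."""
    return random.random()  # placeholder

def pairwise_quality(history, candidate, comparison_responses):
    """Score a candidate response by how often it is preferred over a
    pool of comparison responses."""
    wins = [preference_score(history, candidate, other)
            for other in comparison_responses]
    return sum(wins) / len(wins)

history = ["A: How was your weekend?", "B: Pretty relaxing, I went hiking."]
candidate = "That sounds great, which trail did you take?"
others = ["I don't know.", "Nice.", "What is a weekend?"]
print(pairwise_quality(history, candidate, others))
```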

Learning to Diversify Neural Text Generation via Degenerative Model

no code implementations • 22 Sep 2023 • Jimin Hong, ChaeHun Park, Jaegul Choo

We then enhance the diversity of the second model by focusing on patterns that the first model fails to learn.

Dialogue Generation • Language Modelling
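
A minimal sketch of the idea of focusing the second model on patterns the first (degenerative) model fails to learn, assuming a simple per-token weight of 1 - p_first; the illustrative weighting is an assumption, not necessarily the paper's training objective.

```python
import torch
import torch.nn.functional as F

def reweighted_lm_loss(logits_second, logits_first, targets):
    """Cross-entropy for the second model, upweighting tokens that the
    first (frozen) model assigns low probability to. The weight
    1 - p_first is an illustrative choice only."""
    with torch.no_grad():
        p_first = F.softmax(logits_first, dim=-1)
        p_first_tgt = p_first.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        weights = 1.0 - p_first_tgt  # focus on what the first model misses
    nll = F.cross_entropy(logits_second.transpose(1, 2), targets, reduction="none")
    return (weights * nll).mean()

# Toy shapes: batch=2, seq_len=5, vocab=100.
logits1 = torch.randn(2, 5, 100)
logits2 = torch.randn(2, 5, 100, requires_grad=True)
tgts = torch.randint(0, 100, (2, 5))
print(reweighted_lm_loss(logits2, logits1, tgts))
```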

DEnsity: Open-domain Dialogue Evaluation Metric using Density Estimation

1 code implementation • 8 May 2023 • ChaeHun Park, Seungil Chad Lee, Daniel Rim, Jaegul Choo

Despite the recent advances in open-domain dialogue systems, building a reliable evaluation metric is still a challenging problem.

Contrastive Learning • Density Estimation +1
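
A minimal sketch of the density-estimation idea behind DEnsity: fit a density model on features of human responses and score a candidate by its log-density. TF-IDF features and sklearn's KernelDensity stand in here for the contrastively learned encoder and estimator used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KernelDensity

human_responses = [
    "I went hiking last weekend, the weather was perfect.",
    "Sure, let me check my calendar and get back to you.",
    "That movie was way better than I expected.",
]

# Stand-in feature extractor; the paper learns features with contrastive learning.
vectorizer = TfidfVectorizer().fit(human_responses)
features = vectorizer.transform(human_responses).toarray()

# Fit a density estimator on human-response features.
kde = KernelDensity(bandwidth=1.0).fit(features)

def density_score(response):
    """Higher log-density = more human-like under the fitted model."""
    return kde.score_samples(vectorizer.transform([response]).toarray())[0]

print(density_score("The weather was great for hiking."))
print(density_score("asdf qwerty zzz"))
```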

Pneg: Prompt-based Negative Response Generation for Dialogue Response Selection Task

no code implementations • 31 Oct 2022 • Nyoungwoo Lee, ChaeHun Park, Ho-Jin Choi, Jaegul Choo

To overcome these limitations, this paper proposes a simple but efficient method for generating adversarial negative responses by leveraging a large-scale language model.

Language Modelling • Response Generation +1
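
A minimal sketch of prompting a language model to produce adversarial negative responses for response selection; the prompt wording and the small `gpt2` checkpoint are placeholders, not the paper's setup, which relies on a much larger model.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder LM

def generate_adversarial_negatives(context, gold_response, n=3):
    """Ask the LM for responses that look plausible for the context but are
    actually incorrect, to serve as hard negatives for response selection."""
    prompt = (
        f"Dialogue context: {context}\n"
        f"Correct response: {gold_response}\n"
        "Write a response that sounds relevant but is actually wrong:\n"
    )
    outputs = generator(prompt, max_new_tokens=30, num_return_sequences=n,
                        do_sample=True, top_p=0.9)
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

negatives = generate_adversarial_negatives(
    "A: Can you recommend a place for lunch near the office?",
    "B: There's a good noodle shop two blocks away.")
print(negatives)
```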

Reweighting Strategy based on Synthetic Data Identification for Sentence Similarity

1 code implementation • COLING 2022 • Taehee Kim, ChaeHun Park, Jimin Hong, Radhika Dua, Edward Choi, Jaegul Choo

To analyze this, we first train a classifier that identifies machine-written sentences, and observe that the linguistic features of sentences identified as machine-written differ significantly from those of human-written sentences.

Sentence • Sentence Embedding +2
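
A minimal sketch of the first step described above: train a classifier that separates machine-written from human-written sentences, whose probabilities could then reweight synthetic training pairs. TF-IDF plus logistic regression is an illustrative stand-in for the classifier used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_sents = ["She grabbed her coat and ran for the last bus.",
               "Honestly, the soup needed more salt."]
machine_sents = ["The coat was grabbed by her and the bus was run for.",
                 "The soup, which is a soup, needed salt to be more salty."]

X = human_sents + machine_sents
y = [0] * len(human_sents) + [1] * len(machine_sents)  # 1 = machine-written

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(X, y)

def synthetic_weight(sentence):
    """Down-weight sentences that look machine-written: weight = P(human)."""
    return clf.predict_proba([sentence])[0][0]

print(synthetic_weight("The salt was needed by the soup."))
```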

Evaluating Predictive Uncertainty under Distributional Shift on Dialogue Dataset

no code implementations • 1 Sep 2021 • Nyoungwoo Lee, ChaeHun Park, Ho-Jin Choi

In open-domain dialogues, predictive uncertainties are mainly evaluated in a domain shift setting to cope with out-of-distribution inputs.
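
A minimal sketch of one common uncertainty measure, predictive entropy, which can be compared between in-domain and distribution-shifted dialogue inputs; the specific models and uncertainty metrics evaluated in the paper may differ.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a predictive distribution over candidate responses;
    higher entropy = more uncertain."""
    probs = np.asarray(probs, dtype=float)
    return float(-(probs * np.log(probs + 1e-12)).sum())

# Toy predictive distributions over 4 candidate responses.
in_domain = [0.85, 0.10, 0.03, 0.02]   # confident on a familiar dialogue
shifted   = [0.30, 0.28, 0.22, 0.20]   # diffuse on an out-of-distribution input

print("in-domain entropy:", predictive_entropy(in_domain))
print("shifted entropy:  ", predictive_entropy(shifted))
```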

Generating Negative Samples by Manipulating Golden Responses for Unsupervised Learning of a Response Evaluation Model

1 code implementation • NAACL 2021 • ChaeHun Park, Eugene Jang, Wonsuk Yang, Jong Park

Reference-based metrics that rely on comparisons to a set of known correct responses often fail to account for this variety, and consequently correlate poorly with human judgment.

Dialogue Evaluation
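
A minimal sketch of the idea in the title: create negative samples by corrupting the golden response (here, simple word dropping and shuffling). These manipulations are illustrative only; the paper's actual manipulation strategies are more targeted.

```python
import random

RNG = random.Random(0)

def manipulate_golden(response):
    """Produce corrupted variants of a golden response to serve as
    negatives for an unsupervised response-evaluation model."""
    words = response.split()
    dropped = [w for w in words if RNG.random() > 0.3] or words[:1]
    shuffled = words[:]
    RNG.shuffle(shuffled)
    return [" ".join(dropped), " ".join(shuffled)]

golden = "Sure, I can meet you at the cafe around noon tomorrow."
for negative in manipulate_golden(golden):
    print(negative)
```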

Unsupervised Document Expansion for Information Retrieval with Stochastic Text Generation

1 code implementation • NAACL (sdp) 2021 • Soyeong Jeong, Jinheon Baek, ChaeHun Park, Jong C. Park

In this paper, we propose an Unsupervised Document Expansion with Generation (UDEG) framework with a pre-trained language model, which generates diverse supplementary sentences for the original document without using labels on query-document pairs for training.

Information Retrieval • Language Modelling +2
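
A minimal sketch of unsupervised document expansion with stochastic decoding: sample supplementary sentences from a pre-trained language model with nucleus sampling and append them to the document before indexing. The small `gpt2` checkpoint and the bare-continuation prompt are placeholders for the paper's generation setup.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder LM

def expand_document(document, n_sentences=3):
    """Generate diverse supplementary text with nucleus sampling and append
    it to the original document (no query-document labels needed)."""
    outputs = generator(document, max_new_tokens=25, do_sample=True,
                        top_p=0.9, num_return_sequences=n_sentences)
    extra = [o["generated_text"][len(document):].strip() for o in outputs]
    return document + " " + " ".join(extra)

doc = "Transformer-based rerankers improve retrieval effectiveness on scientific documents."
print(expand_document(doc))
```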

Generating Sentential Arguments from Diverse Perspectives on Controversial Topic

1 code implementation • WS 2019 • ChaeHun Park, Wonsuk Yang, Jong Park

Considering diverse aspects of an argumentative issue is an essential step toward mitigating biased opinions and making reasonable decisions.

Retrieval
