Search Results for author: Jay-Yoon Lee

Found 12 papers, 3 papers with code

Locate&Edit: Energy-based Text Editing for Efficient, Flexible, and Faithful Controlled Text Generation

no code implementations • 30 Jun 2024 Hye Ryung Son, Jay-Yoon Lee

In this work, we propose Locate&Edit (L&E), an efficient and flexible energy-based approach to controlled text generation (CTG) that edits text outputs from a base LM using off-the-shelf energy models.

Text Generation
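The entry above describes reusing off-the-shelf energy models to steer a base LM's output. As a minimal illustrative sketch (not the paper's implementation), an "energy" function can score candidate edits so that lower energy means better constraint satisfaction, and the lowest-energy candidate is kept; the banned-word energy below is a toy stand-in for a real energy model.

```python
# Toy sketch of energy-based candidate selection (assumption: a real
# system would use a learned energy model, not a banned-word count).

BANNED = {"bad", "awful"}

def energy(text: str) -> float:
    # Lower energy = fewer constraint violations (here, banned words).
    tokens = text.lower().split()
    return float(sum(tok in BANNED for tok in tokens))

def select_edit(candidates: list[str]) -> str:
    # Keep the candidate edit with minimal energy.
    return min(candidates, key=energy)

candidates = [
    "the movie was bad",
    "the movie was fine",
    "the movie was awful and bad",
]
print(select_edit(candidates))  # -> the movie was fine
```

The same selection rule works with any scoring function, which is what makes plugging in off-the-shelf energy models flexible.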

RE-RAG: Improving Open-Domain QA Performance and Interpretability with Relevance Estimator in Retrieval-Augmented Generation

no code implementations • 9 Jun 2024 Kiseung Kim, Jay-Yoon Lee

The Retrieval-Augmented Generation (RAG) framework combines parametric and external knowledge to achieve state-of-the-art performance on open-domain question answering tasks.

Document Ranking · Natural Questions · +4
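The abstract above centers on a relevance estimator for retrieved contexts. As a hedged sketch (hypothetical helper names, not the RE-RAG code), a relevance score can gate which retrieved passages reach the generator; the term-overlap estimator here is a toy placeholder for a trained model.

```python
# Toy sketch: filter retrieved passages by an estimated relevance score
# before generation (assumption: RE-RAG trains a real estimator; this
# term-overlap score is only illustrative).

def relevance(question: str, passage: str) -> float:
    # Fraction of question terms that also appear in the passage.
    q = set(question.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def filter_contexts(question: str, passages: list[str], threshold: float = 0.5) -> list[str]:
    # Keep passages above the threshold, most relevant first.
    scored = sorted(((relevance(question, p), p) for p in passages), reverse=True)
    return [p for s, p in scored if s >= threshold]

kept = filter_contexts(
    "who wrote hamlet",
    ["shakespeare wrote hamlet", "the eiffel tower is in paris"],
)
print(kept)  # -> ['shakespeare wrote hamlet']
```

Gating contexts this way also aids interpretability: the per-passage score explains why a context was used or discarded.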

An Analysis under a Unified Formulation of Learning Algorithms with Output Constraints

no code implementations • 3 Jun 2024 Mooho Song, Jay-Yoon Lee

(2) We propose new algorithms that integrate information from the main task and from constraint injection, inspired by continual-learning algorithms.

Continual Learning · Natural Language Inference · +1

Case-Based Reasoning Approach for Solving Financial Question Answering

no code implementations • 18 May 2024 Yikyung Kim, Jay-Yoon Lee

To address this issue, we propose a novel approach to numerical reasoning problems using case-based reasoning (CBR), an artificial intelligence paradigm that provides problem-solving guidance by offering similar cases (i.e., similar questions and their corresponding logical programs).

Question Answering
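The CBR idea above, retrieving a similar stored question and reusing its logical program as guidance, can be sketched minimally as follows (hypothetical case base and similarity measure, not the paper's code):

```python
# Toy sketch of case-based retrieval for numerical QA: find the stored
# question most similar to the test question and return its program.
# (Assumption: real systems use learned embeddings, not bag-of-words.)
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical case base of (question, logical program) pairs.
CASE_BASE = [
    ("what is the percent change in revenue", "divide(subtract(r2, r1), r1)"),
    ("what is the sum of assets", "add(a1, a2)"),
]

def retrieve_case(question: str) -> tuple[str, str]:
    q = Counter(question.lower().split())
    return max(CASE_BASE, key=lambda c: cosine(q, Counter(c[0].split())))

_, program = retrieve_case("what was the percent change in net income")
print(program)  # -> divide(subtract(r2, r1), r1)
```

The retrieved program then serves as a template the model adapts to the new question's numbers.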

Multistage Collaborative Knowledge Distillation from a Large Language Model for Semi-Supervised Sequence Generation

no code implementations • 15 Nov 2023 Jiachen Zhao, Wenlong Zhao, Andrew Drozdov, Benjamin Rozonoyer, Md Arafat Sultan, Jay-Yoon Lee, Mohit Iyyer, Andrew McCallum

In this paper, we present the discovery that a student model distilled from a few-shot prompted LLM can commonly generalize better than its teacher to unseen examples on such tasks.

Constituency Parsing · Knowledge Distillation · +3

Machine Reading Comprehension using Case-based Reasoning

no code implementations • 24 May 2023 Dung Thai, Dhruv Agarwal, Mudit Chaudhary, Wenlong Zhao, Rajarshi Das, Manzil Zaheer, Jay-Yoon Lee, Hannaneh Hajishirzi, Andrew McCallum

Given a test question, CBR-MRC first retrieves a set of similar cases from a nonparametric memory and then predicts an answer by selecting the span in the test context that is most similar to the contextualized representations of answers in the retrieved cases.

Attribute · Machine Reading Comprehension

Structured Energy Network as a Dynamic Loss Function: A Case Study with Multi-label Classification

no code implementations • 29 Sep 2021 Jay-Yoon Lee, Dhruvesh Patel, Purujit Goyal, Andrew McCallum

The best version of SEAL, which uses the NCE ranking method, achieves average gains of roughly +2.85 and +2.23 F1 points over cross-entropy and INFNET, respectively, on the feature-based datasets, excluding one outlier with an excessive gain of +50.0 F1 points.

Multi-Label Classification Structured Prediction
