Search Results for author: DaeJin Jo

Found 8 papers, 5 papers with code

Hexa: Self-Improving for Knowledge-Grounded Dialogue System

no code implementations · 10 Oct 2023 · DaeJin Jo, Daniel Wontae Nam, Gunsoo Han, Kyoung-Woon On, Taehwan Kwon, Seungeun Rho, Sungwoong Kim

A common practice in knowledge-grounded dialogue generation is to explicitly use intermediate steps (e.g., web search, memory retrieval) via modular approaches.

Dialogue Generation · Retrieval
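To make the "explicit intermediate steps" concrete, here is a minimal sketch of such a modular pipeline. The function names (web_search, retrieve_memory, generate_reply) are hypothetical placeholders for illustration, not Hexa's actual API.

```python
# A minimal sketch of a modular knowledge-grounded dialogue pipeline:
# explicit intermediate steps (search, retrieval) feed a response
# generator. All functions here are illustrative stand-ins.

def web_search(query: str) -> list[str]:
    """Placeholder: return snippets for a search query."""
    return [f"snippet about {query}"]

def retrieve_memory(history: list[str]) -> list[str]:
    """Placeholder: return past dialogue turns relevant to the history."""
    return [turn for turn in history if "name" in turn]

def generate_reply(history: list[str], knowledge: list[str]) -> str:
    """Placeholder: condition a language model on dialogue + knowledge."""
    return f"Reply grounded in {len(knowledge)} knowledge pieces."

def respond(history: list[str]) -> str:
    # Each intermediate step is explicit, so it can be inspected
    # and improved separately -- the modular setup the paper builds on.
    knowledge = web_search(history[-1]) + retrieve_memory(history)
    return generate_reply(history, knowledge)

print(respond(["Hi, my name is Ada.", "What did I say my name was?"]))
```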

MAGVLT: Masked Generative Vision-and-Language Transformer

1 code implementation · CVPR 2023 · Sungwoong Kim, DaeJin Jo, Donghoon Lee, Jongmin Kim

Notably, MAGVLT achieves competitive results on both zero-shot image-to-text and text-to-image generation on MS-COCO with a single moderate-sized model (fewer than 500M parameters), even without using monomodal data or networks.

Image Captioning · Text Infilling · +1
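For context, below is a generic sketch of the masked-token parallel decoding family that masked generative transformers like MAGVLT belong to: start from an all-[MASK] sequence and iteratively commit the positions the model is most confident about. The stand-in "model", vocabulary size, and schedule are illustrative assumptions, not MAGVLT's actual components.

```python
# Generic masked-generative decoding sketch: fill an all-masked
# sequence over a fixed number of refinement steps.
import numpy as np

VOCAB, MASK, LENGTH, STEPS = 100, -1, 16, 4
rng = np.random.default_rng(0)

def model_probs(tokens):
    """Stand-in for the transformer: per-position vocab distribution."""
    logits = rng.normal(size=(len(tokens), VOCAB))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

tokens = np.full(LENGTH, MASK)
for step in range(STEPS):
    probs = model_probs(tokens)
    pred = probs.argmax(axis=-1)
    conf = probs.max(axis=-1)
    conf[tokens != MASK] = -np.inf          # only fill masked slots
    n_keep = int(np.ceil(LENGTH * (step + 1) / STEPS)) - (tokens != MASK).sum()
    for i in np.argsort(-conf)[:max(n_keep, 1)]:
        tokens[i] = pred[i]                 # commit the most confident
print(tokens)                               # fully decoded after STEPS steps
```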

LECO: Learnable Episodic Count for Task-Specific Intrinsic Reward

1 code implementation · 11 Oct 2022 · DaeJin Jo, Sungwoong Kim, Daniel Wontae Nam, Taehwan Kwon, Seungeun Rho, Jongmin Kim, Donghoon Lee

To resolve these issues, we propose a learnable hash-based episodic count, named LECO, that serves efficiently as a task-specific intrinsic reward in hard-exploration problems.

Efficient Exploration · Reinforcement Learning
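The underlying idea is easiest to see in a minimal sketch: hash each observation to a discrete code, count visits within the episode, and reward rarity. The fixed random-projection (SimHash-style) hash below is an illustrative stand-in; LECO's contribution is making this hash learnable.

```python
# Hash-based episodic count bonus: rarely visited hash codes earn a
# larger intrinsic reward, 1/sqrt(count). Illustrative sketch only.
import numpy as np
from collections import Counter

class EpisodicCountBonus:
    def __init__(self, obs_dim: int, code_bits: int = 8, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.proj = rng.normal(size=(obs_dim, code_bits))  # fixed random hash
        self.counts = Counter()

    def reset(self):
        """Counts are episodic: clear them at each episode start."""
        self.counts.clear()

    def bonus(self, obs: np.ndarray) -> float:
        code = tuple((obs @ self.proj > 0).astype(int))  # binary hash code
        self.counts[code] += 1
        return 1.0 / np.sqrt(self.counts[code])          # novelty bonus

bonus = EpisodicCountBonus(obs_dim=4)
bonus.reset()
for obs in (np.zeros(4), np.zeros(4), np.ones(4)):
    print(bonus.bonus(obs))  # repeated states earn a smaller bonus
```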

Selective Token Generation for Few-shot Natural Language Generation

1 code implementation · COLING 2022 · DaeJin Jo, Taehwan Kwon, Eun-Sol Kim, Sungwoong Kim

Natural language modeling with limited training data is a challenging problem, and many algorithms therefore make use of large-scale pretrained language models (PLMs) for their strong generalization ability.

Data-to-Text Generation · Language Modelling · +3
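A rough sketch of the selective-generation idea: at each decoding step a small selector chooses between the frozen PLM's token and a task-specific module's token, so only some positions are adapted. The two stand-in "models" and the confidence-threshold rule below are illustrative assumptions; the paper trains the selector with reinforcement learning.

```python
# Selective token generation sketch: adapt only the tokens where the
# frozen PLM is unreliable, keeping its generalization elsewhere.

def plm_next(prefix: list[str]) -> tuple[str, float]:
    """Stand-in frozen PLM: returns (token, confidence)."""
    return "the", 0.9 if prefix else 0.3

def task_next(prefix: list[str]) -> str:
    """Stand-in task-specific generator tuned on few-shot data."""
    return "value"

def generate(steps: int = 3, threshold: float = 0.5) -> list[str]:
    out: list[str] = []
    for _ in range(steps):
        token, conf = plm_next(out)
        # Selector: keep the PLM token when it is confident, otherwise
        # fall back to the task module (learned via RL in the paper).
        out.append(token if conf >= threshold else task_next(out))
    return out

print(generate())  # mix of task-specific and PLM-generated tokens
```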

Selective Token Generation for Few-shot Language Modeling

no code implementations · 29 Sep 2021 · DaeJin Jo, Taehwan Kwon, Sungwoong Kim, Eun-Sol Kim

Therefore, in this work, we develop a novel additive learning algorithm based on reinforcement learning (RL) for few-shot natural language generation (NLG) tasks.

Data-to-Text Generation · Language Modelling · +3

Token Manipulation Generative Adversarial Network for Text Generation

1 code implementation · 6 May 2020 · DaeJin Jo

MaskGAN frames conditional language modeling as a fill-in-the-blank task, generating the missing tokens between the given ones.

Conditional Text Generation · Generative Adversarial Network · +1
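A minimal sketch of the fill-in-the-blank setup such models train on: mask a contiguous span and ask a generator to reconstruct it conditioned on the surrounding tokens. The masking function below is a generic illustration, not this paper's exact token-manipulation scheme.

```python
# Span-masking sketch for fill-in-the-blank text generation training data.
import random

def mask_span(tokens: list[str], span: int = 2, seed: int = 0):
    random.seed(seed)
    start = random.randrange(len(tokens) - span + 1)
    target = tokens[start:start + span]          # what the generator must fill
    masked = tokens[:start] + ["<m>"] * span + tokens[start + span:]
    return masked, target

masked, target = mask_span("the cat sat on the mat".split())
print(masked, "->", target)
```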
