no code implementations • 10 Oct 2023 • DaeJin Jo, Daniel Wontae Nam, Gunsoo Han, Kyoung-Woon On, Taehwan Kwon, Seungeun Rho, Sungwoong Kim
A common practice in knowledge-grounded dialogue generation is to explicitly utilize intermediate steps (e.g., web search, memory retrieval) with modular approaches.
no code implementations • 23 May 2023 • Eunbi Choi, Kyoung-Woon On, Gunsoo Han, Sungwoong Kim, Daniel Wontae Nam, DaeJin Jo, Seung Eun Rho, Taehwan Kwon, Minjoon Seo
Open-domain conversation systems integrate multiple conversation skills into a single system through a modular approach.
1 code implementation • CVPR 2023 • Sungwoong Kim, DaeJin Jo, Donghoon Lee, Jongmin Kim
Particularly, MAGVLT achieves competitive results on both zero-shot image-to-text and text-to-image generation tasks from MS-COCO with a single moderate-sized model (fewer than 500M parameters), even without the use of monomodal data and networks.
1 code implementation • 11 Oct 2022 • DaeJin Jo, Sungwoong Kim, Daniel Wontae Nam, Taehwan Kwon, Seungeun Rho, Jongmin Kim, Donghoon Lee
In order to resolve these issues, in this paper we propose a learnable hash-based episodic count, named LECO, which serves efficiently as a task-specific intrinsic reward in hard-exploration problems.
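To make the underlying idea concrete, here is a minimal sketch of a *non-learnable* analogue of this mechanism: episodic state counting through a fixed random-projection hash (SimHash-style), with the count converted into a decaying intrinsic bonus. LECO itself learns its hash function, so the fixed matrix `A` and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class HashEpisodicCount:
    """Episodic count-based intrinsic reward via a fixed random-projection
    hash. A simplified stand-in for LECO's learnable hash: here the
    projection matrix A is random and fixed (an assumption for brevity)."""

    def __init__(self, obs_dim, code_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        # Random projection mapping observations to a short binary code.
        self.A = rng.standard_normal((code_bits, obs_dim))
        self.counts = {}

    def reset_episode(self):
        # Episodic counts are cleared at the start of every episode.
        self.counts = {}

    def intrinsic_reward(self, obs):
        # Binary hash code: sign pattern of the projected observation.
        code = tuple((self.A @ np.asarray(obs, dtype=float) > 0).astype(int))
        self.counts[code] = self.counts.get(code, 0) + 1
        # Bonus decays as 1/sqrt(n) with the episodic visit count n,
        # so novel (hashed) states earn the largest reward.
        return 1.0 / np.sqrt(self.counts[code])
```

A first visit to a hashed state yields a bonus of 1.0; a second visit to the same code yields 1/sqrt(2), and so on, which is the standard count-based exploration schedule this family of methods builds on.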
1 code implementation • COLING 2022 • DaeJin Jo, Taehwan Kwon, Eun-Sol Kim, Sungwoong Kim
Natural language modeling with limited training data is a challenging problem, and many algorithms make use of large-scale pretrained language models (PLMs) for this purpose due to their strong generalization ability.
1 code implementation • 22 Mar 2022 • Eric Hambro, Sharada Mohanty, Dmitrii Babaev, Minwoo Byeon, Dipam Chakraborty, Edward Grefenstette, Minqi Jiang, DaeJin Jo, Anssi Kanervisto, Jongmin Kim, Sungwoong Kim, Robert Kirk, Vitaly Kurin, Heinrich Küttler, Taehwon Kwon, Donghoon Lee, Vegard Mella, Nantas Nardelli, Ivan Nazarov, Nikita Ovsov, Jack Parker-Holder, Roberta Raileanu, Karolis Ramanauskas, Tim Rocktäschel, Danielle Rothermel, Mikayel Samvelyan, Dmitry Sorokin, Maciej Sypetkowski, Michał Sypetkowski
In this report, we summarize the takeaways from the first NeurIPS 2021 NetHack Challenge.
no code implementations • 29 Sep 2021 • DaeJin Jo, Taehwan Kwon, Sungwoong Kim, Eun-Sol Kim
Therefore, in this work, we develop a novel additive learning algorithm based on reinforcement learning (RL) for few-shot natural language generation (NLG) tasks.
1 code implementation • 6 May 2020 • DaeJin Jo
MaskGAN opened a line of inquiry into conditional language modeling by filling in the blanks between given tokens.
Conditional Text Generation • Generative Adversarial Network +1
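The fill-in-the-blank task this entry refers to can be sketched with a toy example: blanks between given tokens are completed left to right, conditioning on the visible context. The hand-built bigram table and the `<m>` mask token below are purely illustrative assumptions, not the paper's (GAN-based) model.

```python
import random

# Toy bigram table standing in for a learned conditional language
# model; every name here is illustrative.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "sat": ["on"],
    "on": ["the"],
}

def fill_blanks(tokens, rng=None):
    """Replace each "<m>" blank with a plausible continuation of the
    preceding visible token, keeping all given tokens fixed."""
    rng = rng or random.Random(0)
    out = []
    for tok in tokens:
        if tok == "<m>":
            prev = out[-1] if out else "the"
            tok = rng.choice(BIGRAMS.get(prev, ["the"]))
        out.append(tok)
    return out

# e.g. fill_blanks(["the", "<m>", "sat", "on", "the", "<m>"])
# keeps "the ... sat on the ..." fixed and fills the two blanks.
```

The point of the format is that, unlike left-to-right generation, the model must produce tokens consistent with context on *both* sides of each blank; the toy version above only conditions on the left, which is exactly the simplification a real infilling model avoids.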