no code implementations • EMNLP 2020 • Seungtaek Choi, Haeju Park, Jinyoung Yeo, Seung-won Hwang
We aim to leverage human and machine intelligence together for attention supervision.
no code implementations • Findings (ACL) 2022 • Minji Seo, YeonJoon Jung, Seungtaek Choi, Seung-won Hwang, Bei Liu
We study event understanding as a critical step towards visual commonsense tasks. We argue that current object-based event understanding is purely likelihood-based, leading to incorrect event predictions due to biased correlations between events and objects. We propose to mitigate such biases with do-calculus, proposed in causality research, while overcoming its limited robustness through an optimized aggregation with association-based prediction. We show the effectiveness of our approach intrinsically, by comparing our generated events with ground-truth event annotations, and extrinsically, on downstream commonsense tasks.
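The aggregation described above can be sketched as a weighted blend of an association-based score and an intervention-style (do-calculus-inspired) score per event. This is a minimal, hypothetical sketch: the function name, the fixed weight `alpha`, and the toy scores are assumptions, not the paper's actual estimator.

```python
def aggregate_event_scores(assoc_scores, causal_scores, alpha=0.5):
    """Blend association-based and intervention-based event scores.

    `alpha` weights the causal (do-calculus-style) estimate; in practice
    the weight would be optimized on held-out data.
    """
    events = set(assoc_scores) | set(causal_scores)
    return {
        e: alpha * causal_scores.get(e, 0.0)
           + (1 - alpha) * assoc_scores.get(e, 0.0)
        for e in events
    }

# Toy example: association alone over-predicts "cooking" whenever a knife
# is visible; the causal estimate corrects the biased correlation.
assoc = {"cooking": 0.6, "fighting": 0.4}
causal = {"cooking": 0.2, "fighting": 0.8}
blended = aggregate_event_scores(assoc, causal, alpha=0.5)
best = max(blended, key=blended.get)
```

With these toy numbers the blended score flips the prediction that the biased association-only model would have made.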
no code implementations • EMNLP (sustainlp) 2020 • Seungtaek Choi, Myeongho Jeong, Jinyoung Yeo, Seung-won Hwang
This paper studies label augmentation for training dialogue response selection.
1 code implementation • EMNLP 2021 • Jihyuk Kim, Myeongho Jeong, Seungtaek Choi, Seung-won Hwang
The second phase, encoding structure, builds a graph of keyphrases and the given document to obtain the structure-aware representation of the augmented text.
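The graph-building step described above can be illustrated with a toy construction: connect each sentence node to the keyphrases it contains, and connect keyphrases that co-occur in a sentence. This is a simplified stand-in; the paper's actual graph construction and node representations may differ.

```python
from itertools import combinations

def build_keyphrase_graph(sentences, keyphrases):
    """Build an undirected edge set linking sentences to the keyphrases
    they contain, and keyphrases that co-occur in the same sentence
    (illustrative only)."""
    edges = set()
    for idx, sent in enumerate(sentences):
        present = [kp for kp in keyphrases if kp in sent]
        # sentence node connected to each keyphrase it contains
        for kp in present:
            edges.add((f"sent:{idx}", f"kp:{kp}"))
        # keyphrases co-occurring in the same sentence are connected
        for a, b in combinations(sorted(present), 2):
            edges.add((f"kp:{a}", f"kp:{b}"))
    return edges

doc = ["neural retrieval improves search quality",
       "dense retrieval uses neural encoders"]
kps = ["neural", "retrieval", "encoders"]
graph = build_keyphrase_graph(doc, kps)
```

A structure-aware encoder (e.g. a graph neural network) would then operate over such an edge set to represent the augmented text.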
no code implementations • 3 Mar 2024 • Seo Hyun Kim, Keummin Ka, Yohan Jo, Seung-won Hwang, Dongha Lee, Jinyoung Yeo
To effectively construct memory, it is crucial to seamlessly connect past and present information, while also possessing the ability to forget obstructive information.
1 code implementation • 24 Feb 2024 • Soyoung Yoon, Eunbi Choi, Jiyeon Kim, Yireun Kim, Hyeongu Yun, Seung-won Hwang
We propose ListT5, a novel reranking approach based on Fusion-in-Decoder (FiD) that handles multiple candidate passages at both train and inference time.
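The listwise idea above can be sketched as reranking in fixed-size windows, where a scorer sees several candidates jointly. The sketch below is a toy: the word-overlap scorer stands in for the FiD model, and the repeated pick-the-best loop is a simplification of the paper's actual sorting procedure.

```python
def listwise_rerank(query, passages, score_list, window=5):
    """Rerank passages by repeatedly scoring fixed-size windows jointly,
    mimicking a model that sees several candidates at once.
    `score_list` stands in for the neural listwise scorer."""
    ranked = []
    remaining = list(passages)
    while remaining:
        # score the first `window` candidates jointly, pop the best
        batch = remaining[:window]
        scores = score_list(query, batch)
        best = batch[max(range(len(batch)), key=scores.__getitem__)]
        ranked.append(best)
        remaining.remove(best)
    return ranked

def overlap_scorer(query, batch):
    """Toy scorer: count shared tokens between query and passage."""
    q = set(query.split())
    return [len(q & set(p.split())) for p in batch]

passages = ["cats sleep a lot",
            "the capital of France is Paris",
            "Paris hosts the Louvre"]
order = listwise_rerank("what is the capital of France", passages,
                        overlap_scorer, window=2)
```

The windowed structure is what lets a fixed-input-size model handle an arbitrary number of candidates at inference time.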
no code implementations • 13 Nov 2023 • Seungjun Moon, Hyungjoo Chae, Yongho Song, Taeyoon Kwon, Dongjin Kang, Kai Tzu-iunn Ong, Seung-won Hwang, Jinyoung Yeo
Hence, the focus of our work is to leverage open-source code LLMs to generate helpful feedback with correct guidance for code editing.
1 code implementation • 24 Oct 2023 • Ye Eun Chun, Sunjae Kwon, Kyunghwan Sohn, Nakwon Sung, Junyoup Lee, Byungki Seo, Kevin Compher, Seung-won Hwang, Jaesik Choi
In this paper, we introduce CR-COPEC, the Causal Rationale of Corporate Performance Changes from financial reports.

1 code implementation • 8 Aug 2023 • Sang-eun Han, Yeonseok Jeong, Seung-won Hwang, Kyungjae Lee
Our experiments show that our framework not only ensures monotonicity, but also outperforms state-of-the-art multi-source QA methods on Natural Questions.
no code implementations • 7 Jun 2023 • Kyungjae Lee, Sang-eun Han, Seung-won Hwang, Moontae Lee
This paper studies the problem of open-domain question answering, with the aim of answering a diverse range of questions leveraging knowledge resources.
no code implementations • 6 Apr 2023 • Yongho Song, Dahyun Lee, Myungha Jang, Seung-won Hwang, Kyungjae Lee, Dongha Lee, Jinyoung Yeo
The long-standing goal of dense retrievers in abstractive open-domain question answering (ODQA) tasks is to learn to capture evidence passages among relevant passages for any given query, such that the reader produces factually correct outputs from evidence passages.
1 code implementation • 23 Oct 2022 • Minju Kim, Chaehyeong Kim, Yongho Song, Seung-won Hwang, Jinyoung Yeo
To build open-domain chatbots that are able to use diverse communicative skills, we propose a novel framework, BotsTalk, where multiple agents grounded in specific target skills participate in a conversation to automatically annotate multi-skill dialogues.
no code implementations • NAACL 2022 • Garam Lee, Minsoo Kim, Jai Hyun Park, Seung-won Hwang, Jung Hee Cheon
Embeddings, which compress information in raw text into semantics-preserving low-dimensional vectors, have been widely adopted for their efficacy.
1 code implementation • COLING 2022 • Seungone Kim, Se June Joo, Hyungjoo Chae, Chaehyeong Kim, Seung-won Hwang, Jinyoung Yeo
In this paper, we propose to leverage the unique characteristics of dialogues sharing commonsense knowledge across participants, to resolve the difficulties in summarizing them.
Ranked #2 on Text Summarization on DialogSum
no code implementations • NAACL 2022 • Yu Jin Kim, Beong-woo Kwak, Youngwook Kim, Reinald Kim Amplayo, Seung-won Hwang, Jinyoung Yeo
Towards this goal, we propose to mitigate the loss of knowledge caused by interference among the different knowledge sources, by developing a modular variant of knowledge aggregation as a new zero-shot commonsense reasoning framework.
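The modular idea above can be illustrated as keeping one expert per knowledge source and combining their answer distributions only at prediction time, so the sources never interfere during training. This is a hypothetical sketch: the averaging rule and the toy expert distributions are assumptions, not the paper's actual aggregation.

```python
def modular_aggregate(expert_probs):
    """Average answer distributions from experts, each trained on one
    knowledge source, so sources do not interfere during training
    (a simplified stand-in for modular knowledge aggregation)."""
    answers = expert_probs[0].keys()
    n = len(expert_probs)
    return {a: sum(p[a] for p in expert_probs) / n for a in answers}

# Hypothetical experts trained separately on two knowledge sources
atomic_expert = {"yes": 0.7, "no": 0.3}
conceptnet_expert = {"yes": 0.4, "no": 0.6}
combined = modular_aggregate([atomic_expert, conceptnet_expert])
```

Contrast this with pooling all sources into one training set, where conflicting knowledge can degrade what the single model retains from each source.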
1 code implementation • NAACL 2022 • Jihyuk Kim, Minsoo Kim, Seung-won Hwang
Deep learning for Information Retrieval (IR) requires a large amount of high-quality query-document relevance labels, but such labels are inherently sparse.
no code implementations • Findings (ACL) 2022 • Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee
To this end, we first propose a novel task, Continuously-updated QA (CuQA), in which multiple large-scale updates are made to LMs, and performance is measured with respect to success in adding and updating knowledge while retaining existing knowledge.
1 code implementation • ACL 2022 • Shuai Lu, Nan Duan, Hojae Han, Daya Guo, Seung-won Hwang, Alexey Svyatkovskiy
Code completion, which aims to predict the following code token(s) according to the code context, can improve the productivity of software development.
no code implementations • 11 Feb 2022 • Minju Kim, Beong-woo Kwak, Youngwook Kim, Hong-in Lee, Seung-won Hwang, Jinyoung Yeo
This paper introduces a simple yet effective data-centric approach for the task of improving persona-conditioned dialogue agents.
1 code implementation • 30 Jan 2022 • Wonpyo Park, WoongGi Chang, Donggeon Lee, Juntae Kim, Seung-won Hwang
The former loses the precision of relative positions through linearization, while the latter loses tight integration of node-edge and node-topology interactions.
Ranked #13 on Graph Regression on PCQM4Mv2-LSC
no code implementations • 26 Jan 2022 • Beong-woo Kwak, Youngwook Kim, Yu Jin Kim, Seung-won Hwang, Jinyoung Yeo
A traditional view of data acquisition is that, through iterations, knowledge from human labels and models is implicitly distilled to monotonically increase the accuracy and label consistency.
no code implementations • ACL 2021 • Kyungjae Lee, Seung-won Hwang, Sang-eun Han, Dohyeon Lee
This paper studies the bias problem of multi-hop question answering models, namely answering correctly without correct reasoning.
no code implementations • EACL 2021 • Kyungho Kim, Kyungjae Lee, Seung-won Hwang, Young-In Song, SeungWook Lee
This paper studies the problem of generating likely queries for multimodal documents with images.
no code implementations • COLING 2020 • Jihyeok Kim, Seungtaek Choi, Reinald Kim Amplayo, Seung-won Hwang
We thus propose to additionally leverage references, which are selected from a large pool of texts labeled with one of the attributes, as textual information that enriches inductive biases of given attributes.
no code implementations • 18 Oct 2020 • Shin-woo Park, Byung Jun Bae, Jinyoung Yeo, Seung-won Hwang
Graph neural networks (GNNs) have been widely used in representation learning on graphs and achieved superior performance in tasks such as node classification.
no code implementations • LREC 2020 • Gyeongbok Lee, Seung-won Hwang, Hyunsouk Cho
Existing machine reading comprehension models are reported to be brittle against adversarially perturbed questions when optimized only for accuracy, which has led to the creation of new reading comprehension benchmarks, such as SQuAD 2.0, that contain such questions.
no code implementations • IJCNLP 2019 • Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang, Juho Lee
This paper studies the problem of supporting question answering in a new language with limited training resources.
no code implementations • IJCNLP 2019 • Hojae Han, Seungtaek Choi, Haeju Park, Seung-won Hwang
This paper studies the problem of non-factoid question answering, where the answer may span over multiple sentences.
no code implementations • IJCNLP 2019 • Fuxiang Chen, Seung-won Hwang, Jaegul Choo, Jung-Woo Ha, Sunghun Kim
Here we describe NL2pSQL, a new task to generate pSQL code from natural language questions over under-specified database issues.
no code implementations • WS 2019 • Reinald Kim Amplayo, Seung-won Hwang, Min Song
We find that novelty is not a singular concept and thus inherently lacks ground-truth annotations with cross-annotator agreement, which is a major obstacle in evaluating these models.
1 code implementation • 18 Sep 2019 • Reinald Kim Amplayo, Seonjae Lim, Seung-won Hwang
We propose a state-of-the-art CLT model called Length Transfer Networks (LeTraNets) that introduces a two-way encoding scheme for short and long texts using multiple training mechanisms.
no code implementations • ACL 2019 • Haeju Park, Jinyoung Yeo, Gengyu Wang, Seung-won Hwang
Transfer learning is effective for improving the performance of related tasks, and multi-task learning (MTL) and cross-lingual learning (CLL) are important instances.
no code implementations • 6 Mar 2019 • Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, Wei Wang
Based on these templates, our QA system KBQA effectively supports binary factoid questions, as well as complex questions which are composed of a series of binary factoid questions.
2 code implementations • TACL 2019 • Jihyeok Kim, Reinald Kim Amplayo, Kyungjae Lee, Sua Sung, Minji Seo, Seung-won Hwang
The performance of text classification has improved tremendously using intelligently engineered neural-based models, especially those injecting categorical metadata as additional information, e.g., using user/product information for sentiment classification.
Ranked #4 on Sentiment Analysis on User and product information (Yelp 2013 (Acc) metric)
no code implementations • 1 Dec 2018 • Gyeongbok Lee, Sungdong Kim, Seung-won Hwang
Question answering (QA), which extracts answers from text for a given question in natural language, has been actively studied, and existing models have shown promise of outperforming human performance when trained and evaluated on the SQuAD dataset.
1 code implementation • 22 Nov 2018 • Reinald Kim Amplayo, Seung-won Hwang, Min Song
Thus, we aim to eliminate these requirements and solve the sense granularity problem by proposing AutoSense, a latent variable model based on two observations: (1) senses are represented as a distribution over topics, and (2) senses generate pairings between the target word and its neighboring word.
Ranked #2 on Word Sense Induction on SemEval 2010 WSI
no code implementations • 18 Oct 2018 • Minseok Cho, Reinald Kim Amplayo, Seung-won Hwang, Jonghyuck Park
The same question has not been asked in the table question answering (TableQA) task, where we are tasked to answer a query given a table.
no code implementations • ACL 2018 • Bill Yuchen Lin, Frank F. Xu, Kenny Zhu, Seung-won Hwang
Cross-cultural differences and similarities are common in cross-lingual natural language understanding, especially for research in social media.
1 code implementation • NAACL 2018 • Reinald Kim Amplayo, Seonjae Lim, Seung-won Hwang
To this end, we leverage an off-the-shelf entity linking system (ELS) to extract linked entities and propose Entity2Topic (E2T), a module easily attachable to a sequence-to-sequence model that transforms a list of entities into a vector representation of the topic of the summary.
Ranked #23 on Text Summarization on GigaWord
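The entity-to-topic transformation above can be illustrated with a toy pooling step: average the embeddings of the linked entities into a single topic vector. This is a simplified stand-in; the actual E2T module learns this transformation, and the 3-d embeddings below are hypothetical.

```python
def entity2topic(entity_vectors):
    """Pool linked-entity embeddings into a single topic vector by
    averaging (a simplified stand-in for the learned E2T module)."""
    dim = len(entity_vectors[0])
    n = len(entity_vectors)
    return [sum(v[i] for v in entity_vectors) / n for i in range(dim)]

# Hypothetical 3-d embeddings for two entities linked in a document
vecs = [[1.0, 0.0, 2.0], [3.0, 2.0, 0.0]]
topic = entity2topic(vecs)
```

The resulting topic vector would then condition the sequence-to-sequence decoder toward the summary's topic.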
1 code implementation • ACL 2018 • Reinald Kim Amplayo, Jihyeok Kim, Sua Sung, Seung-won Hwang
The use of user/product information in sentiment analysis is important, especially for cold-start users/products, who have very few reviews.
Ranked #4 on Sentiment Analysis on User and product information
1 code implementation • 14 Jun 2018 • Reinald Kim Amplayo, Kyungjae Lee, Jinyoung Yeo, Seung-won Hwang
We are the first to use translations as domain-free contexts for sentence classification.
Ranked #7 on Text Classification on TREC-6
1 code implementation • 14 Jun 2018 • Reinald Kim Amplayo, Seung-won Hwang
This paper aims at an aspect sentiment model for aspect-based sentiment analysis (ABSA) focused on micro reviews.
no code implementations • 20 Oct 2017 • Wanyun Cui, Xiyou Zhou, Hangyu Lin, Yanghua Xiao, Haixun Wang, Seung-won Hwang, Wei Wang
In this paper, we introduce verb patterns to represent verbs' semantics, such that each pattern corresponds to a single semantic of the verb.
no code implementations • COLING 2016 • Taesung Lee, Seung-won Hwang, Zhongyuan Wang
Besides providing relevant information, amusing users has been an important role of the web.
no code implementations • 29 Nov 2015 • Yi Zhang, Yanghua Xiao, Seung-won Hwang, Haixun Wang, X. Sean Wang, Wei Wang
This paper provides a query processing method based on the relevance models between entity sets and concepts.