no code implementations • EMNLP (ACL) 2021 • San-Hee Park, Kang-Min Kim, Seonhee Cho, Jun-Hyung Park, Hyuntae Park, Hyuna Kim, Seongwon Chung, SangKeun Lee
Warning: This manuscript contains offensive expressions.
1 code implementation • Findings (ACL) 2022 • Yong-Ho Jung, Jun-Hyung Park, Joon-Young Choi, Mingyu Lee, Junho Kim, Kang-Min Kim, SangKeun Lee
Commonsense inference poses a unique challenge: reasoning about and generating the physical, social, and causal conditions of a given event.
1 code implementation • 6 Dec 2023 • Eojin Jeon, Mingyu Lee, Juhyeong Park, Yeachan Kim, Wing-Lam Mok, SangKeun Lee
To mitigate the detrimental effect of the bias on the networks, previous works have proposed debiasing methods that down-weight the biased examples identified by an auxiliary model, which is trained with explicit bias labels.
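A minimal PyTorch sketch of this down-weighting idea, assuming an auxiliary bias model that outputs class probabilities; the tensors and the 1 − p_bias weighting below are illustrative stand-ins, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def debiased_loss(main_logits, aux_bias_probs, labels):
    """Down-weight examples that an auxiliary bias model classifies
    confidently: the higher the bias model's probability on the gold
    label, the smaller the example's weight in the main loss."""
    per_example = F.cross_entropy(main_logits, labels, reduction="none")
    # p_bias: auxiliary model's probability assigned to the gold label
    p_bias = aux_bias_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    weights = 1.0 - p_bias  # biased (easy) examples get low weight
    return (weights * per_example).mean()

# toy usage with random tensors
logits = torch.randn(8, 3)
bias_probs = torch.softmax(torch.randn(8, 3), dim=1)
labels = torch.randint(0, 3, (8,))
print(debiased_loss(logits, bias_probs, labels))
```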
no code implementations • 9 Nov 2023 • Gang Seob Jung, SangKeun Lee, Jong Youl Choi
Furthermore, the data can be transferred to other metallic systems (aluminum and niobium) without repeating the sampling and distillation processes.
1 code implementation • 17 Mar 2023 • Jun-Hyung Park, Yeachan Kim, Junho Kim, Joon-Young Choi, SangKeun Lee
In this work, we introduce a novel structure pruning method, termed dynamic structure pruning, to identify optimal pruning granularities for intra-channel pruning.
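As a rough illustration of what intra-channel pruning means, the sketch below zeroes the weakest weight groups inside each output channel of a convolution kernel instead of removing whole channels; the fixed grouping and magnitude criterion are assumptions for illustration, not the paper's learned granularities:

```python
import torch

def intra_channel_prune(conv_weight, groups=4, sparsity=0.5):
    """Zero out the lowest-magnitude weight groups *within* each output
    channel of a conv kernel (out_ch, in_ch, kH, kW), rather than
    pruning whole channels."""
    out_ch = conv_weight.shape[0]
    flat = conv_weight.reshape(out_ch, groups, -1)  # split each channel into groups
    norms = flat.norm(dim=2)                        # one score per group
    k = int(groups * sparsity)                      # groups to prune per channel
    idx = norms.argsort(dim=1)[:, :k]               # weakest groups
    mask = torch.ones_like(flat)
    mask.scatter_(1, idx.unsqueeze(-1).expand(-1, -1, flat.shape[2]), 0.0)
    return (flat * mask).reshape_as(conv_weight)

w = torch.randn(16, 8, 3, 3)  # toy conv weight
print(intra_channel_prune(w).count_nonzero(), w.numel())
```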
1 code implementation • 15 Dec 2022 • Mingyu Lee, Jun-Hyung Park, Junho Kim, Kang-Min Kim, SangKeun Lee
Masked language modeling (MLM) has been widely used for pre-training effective bidirectional representations, but incurs substantial training costs.
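For context, a minimal sketch of the standard BERT-style MLM corruption (the 80/10/10 rule) that such pre-training builds on; the token IDs, mask ID, and vocabulary size are made up:

```python
import torch

def mask_tokens(input_ids, mask_id, vocab_size, mlm_prob=0.15):
    """BERT-style masking: select ~15% of positions; of those, 80% become
    [MASK], 10% a random token, 10% stay unchanged. Unselected positions
    get label -100 so the loss ignores them."""
    labels = input_ids.clone()
    selected = torch.rand(input_ids.shape) < mlm_prob
    labels[~selected] = -100  # only predict the selected positions
    masked = selected & (torch.rand(input_ids.shape) < 0.8)
    input_ids = input_ids.clone()
    input_ids[masked] = mask_id
    random_tok = selected & ~masked & (torch.rand(input_ids.shape) < 0.5)
    input_ids[random_tok] = torch.randint(vocab_size, (int(random_tok.sum()),))
    return input_ids, labels

ids = torch.randint(5, 100, (2, 10))  # toy token IDs
corrupted, labels = mask_tokens(ids, mask_id=4, vocab_size=100)
print(corrupted, labels, sep="\n")
```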
no code implementations • EACL 2021 • Ohjoon Kwon, Dohyun Kim, Soo-Ryeon Lee, Junyoung Choi, SangKeun Lee
Word embeddings are considered essential for improving the performance of various Natural Language Processing (NLP) models.
no code implementations • Findings (EMNLP) 2020 • Kang-Min Kim, Bumsu Hyeon, Yeachan Kim, Jun-Hyung Park, SangKeun Lee
In addition, we propose a weakly supervised pretraining scheme, in which labels for text classification are obtained automatically from an existing approach.
no code implementations • ACL 2020 • Yeachan Kim, Kang-Min Kim, SangKeun Lee
However, unlike prior works that assign codes of the same length to all words, we adaptively assign a different code length to each word, learned from downstream tasks.
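A toy sketch of the underlying idea that embeddings can be composed from per-word discrete codes of varying length; the codebooks, codes, and lengths below are random stand-ins, whereas the paper learns them from downstream tasks:

```python
import torch

# Hypothetical sketch: each word's embedding is a sum of codebook vectors,
# and a per-word length decides how many codebooks contribute.
num_books, book_size, dim = 8, 16, 32
codebooks = torch.randn(num_books, book_size, dim)

def compose(codes, length):
    """codes: one index per codebook; length: how many codebooks this
    word actually uses (shorter codes -> cheaper storage)."""
    vecs = torch.stack([codebooks[b, codes[b]] for b in range(length)])
    return vecs.sum(dim=0)

frequent_word = compose(codes=torch.randint(book_size, (num_books,)), length=8)
rare_word = compose(codes=torch.randint(book_size, (num_books,)), length=3)
print(frequent_word.shape, rare_word.shape)
```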
no code implementations • LREC 2020 • Yeachan Kim, Kang-Min Kim, SangKeun Lee
In the first stage, we learn subword embeddings from the pre-trained word embeddings by using an additive composition function of subwords.
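The sketch below illustrates additive composition on a toy vocabulary: subword vectors are fit by least squares so that their sums reproduce the word vectors; the segmentations and the (random) stand-in pre-trained embeddings are fabricated for illustration:

```python
import numpy as np

# Toy vocabulary with made-up subword segmentations.
words = {"walking": ["walk", "##ing"], "talked": ["talk", "##ed"],
         "talking": ["talk", "##ing"], "walked": ["walk", "##ed"]}
subwords = sorted({s for segs in words.values() for s in segs})
dim = 8
rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=dim) for w in words}  # stand-in pre-trained vectors

# Additive-composition system: A @ S = W, one indicator row per word.
A = np.zeros((len(words), len(subwords)))
for i, w in enumerate(words):
    for s in words[w]:
        A[i, subwords.index(s)] = 1.0
W = np.stack([word_vecs[w] for w in words])
S, *_ = np.linalg.lstsq(A, W, rcond=None)  # fitted subword embeddings

# An unseen word is then the sum of its subword vectors.
oov = S[subwords.index("walk")] + S[subwords.index("##ed")]
print(np.linalg.norm(A @ S - W))  # reconstruction error of the fit
```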
no code implementations • NAACL 2019 • Byung-Ju Choi, Jun-Hyung Park, SangKeun Lee
We demonstrate the efficacy of our approach on existing CNNs through performance evaluations.
1 code implementation • 22 Aug 2018 • Deunsol Yoon, Dongbok Lee, SangKeun Lee
In this paper, we propose Dynamic Self-Attention (DSA), a new self-attention mechanism for sentence embedding.
Ranked #44 on Natural Language Inference on SNLI
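A simplified sketch of the self-attentive sentence-embedding family that DSA belongs to: score each token, softmax the scores, and pool the weighted token vectors. This is a generic stand-in, not the dynamic mechanism itself:

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Generic self-attentive sentence embedding: score each token,
    softmax the scores, return the weighted sum of token vectors."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, token_vecs):  # (batch, seq, dim)
        weights = torch.softmax(self.scorer(token_vecs).squeeze(-1), dim=1)
        return torch.einsum("bs,bsd->bd", weights, token_vecs)

pool = AttentionPooling(dim=16)
print(pool(torch.randn(2, 7, 16)).shape)  # -> torch.Size([2, 16])
```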
no code implementations • COLING 2018 • Yeachan Kim, Kang-Min Kim, Ji-Min Lee, SangKeun Lee
Unlike previous models that learn word representations from a large corpus, we take a set of pre-trained word embeddings and generalize it to word entries, including OOV words.
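One common way to realize this kind of generalization is to train a small character-level model to mimic the pre-trained vectors of known words, then apply it to OOV spellings; the sketch below shows that generic mimicking idea, not necessarily the paper's architecture (the vocabulary, target vector, and hyperparameters are made up):

```python
import torch
import torch.nn as nn

chars = "abcdefghijklmnopqrstuvwxyz"
char2id = {c: i for i, c in enumerate(chars)}
dim = 16

class CharMimic(nn.Module):
    """Reads a word character by character and predicts its embedding."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(len(chars), dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, word):
        ids = torch.tensor([[char2id[c] for c in word]])
        _, h = self.rnn(self.emb(ids))
        return h.squeeze()  # predicted word vector

model = CharMimic()
target = torch.randn(dim)  # stand-in pre-trained vector for "walked"
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):  # fit the mimic on a known word
    opt.zero_grad()
    loss = nn.functional.mse_loss(model("walked"), target)
    loss.backward()
    opt.step()
print(model("walkeds").shape)  # embedding for an OOV spelling
```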
no code implementations • 3 Apr 2018 • Kang-Min Kim, Aliyeva Dinara, Byung-Ju Choi, SangKeun Lee
However, these approaches are limited to small- or moderate-scale text classification.