Search Results for author: SangKeun Lee

Found 14 papers, 5 papers with code

Improving Bias Mitigation through Bias Experts in Natural Language Understanding

1 code implementation • 6 Dec 2023 • Eojin Jeon, Mingyu Lee, Juhyeong Park, Yeachan Kim, Wing-Lam Mok, SangKeun Lee

To mitigate the detrimental effect of the bias on the networks, previous works have proposed debiasing methods that down-weight the biased examples identified by an auxiliary model, which is trained with explicit bias labels.

Binary Classification • Multi-class Classification • +1
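
Below is a minimal, hypothetical sketch of the general down-weighting idea the snippet above describes: an auxiliary bias-only model scores each training example, and examples it already classifies confidently are given less weight in the main model's loss. The function names and the specific weighting rule (one minus the bias model's probability on the gold label) are illustrative assumptions, not the paper's method.

```python
import numpy as np

def debiasing_weights(bias_model_probs, gold_labels):
    """Down-weight examples an auxiliary bias-only model already gets right.

    bias_model_probs: (N, C) class probabilities from the bias-only model.
    gold_labels:      (N,)  integer gold labels.
    Returns per-example weights in [0, 1]; examples the bias model predicts
    confidently (high probability on the gold label) get small weights.
    """
    p_gold = bias_model_probs[np.arange(len(gold_labels)), gold_labels]
    return 1.0 - p_gold  # illustrative rule; the exact form varies by paper

def weighted_cross_entropy(main_model_probs, gold_labels, weights):
    """Cross-entropy of the main model, re-weighted per example."""
    p_gold = main_model_probs[np.arange(len(gold_labels)), gold_labels]
    return float(np.mean(-weights * np.log(p_gold + 1e-12)))

# Toy usage: 3 examples, 2 classes.
bias_probs = np.array([[0.95, 0.05], [0.40, 0.60], [0.55, 0.45]])
main_probs = np.array([[0.70, 0.30], [0.20, 0.80], [0.50, 0.50]])
labels = np.array([0, 1, 0])
w = debiasing_weights(bias_probs, labels)
print(w, weighted_cross_entropy(main_probs, labels, w))
```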

Data Distillation for Neural Network Potentials toward Foundational Dataset

no code implementations • 9 Nov 2023 • Gang Seob Jung, SangKeun Lee, Jong Youl Choi

Furthermore, the data can be translated to other metallic systems (aluminum and niobium), without repeating the sampling and distillation processes.

Active Learning

Dynamic Structure Pruning for Compressing CNNs

1 code implementation • 17 Mar 2023 • Jun-Hyung Park, Yeachan Kim, Junho Kim, Joon-Young Choi, SangKeun Lee

In this work, we introduce a novel structure pruning method, termed dynamic structure pruning, to identify optimal pruning granularities for intra-channel pruning.
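
As a rough illustration of intra-channel pruning at one fixed granularity, the sketch below zeroes the lowest-magnitude groups of input channels inside each filter. It is a generic magnitude-based baseline under assumed tensor shapes and group counts; learning the pruning granularity dynamically, which is the paper's contribution, is not shown.

```python
import numpy as np

def intra_channel_group_prune(conv_weight, num_groups=4, prune_ratio=0.5):
    """Generic magnitude-based intra-channel group pruning (illustrative only).

    conv_weight: (out_ch, in_ch, kh, kw) convolution weights.
    Splits each filter's input channels into `num_groups` groups and zeroes
    the groups with the smallest L1 norm, separately for each output filter.
    """
    out_ch, in_ch, kh, kw = conv_weight.shape
    pruned = conv_weight.copy()
    groups = np.array_split(np.arange(in_ch), num_groups)
    n_drop = int(round(prune_ratio * num_groups))
    for f in range(out_ch):
        norms = np.array([np.abs(conv_weight[f, g]).sum() for g in groups])
        for g_idx in np.argsort(norms)[:n_drop]:
            pruned[f, groups[g_idx]] = 0.0
    return pruned

# Toy usage: 8 filters, 16 input channels, 3x3 kernels.
w = np.random.randn(8, 16, 3, 3)
w_pruned = intra_channel_group_prune(w, num_groups=4, prune_ratio=0.5)
print((w_pruned == 0).mean())  # roughly half of the weights zeroed
```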

Efficient Pre-training of Masked Language Model via Concept-based Curriculum Masking

1 code implementation • 15 Dec 2022 • Mingyu Lee, Jun-Hyung Park, Junho Kim, Kang-Min Kim, SangKeun Lee

Masked language modeling (MLM) has been widely used for pre-training effective bidirectional representations, but incurs substantial training costs.

Language Modelling • Masked Language Modeling
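
For context on the MLM setup whose training cost the paper targets, here is a minimal sketch of vanilla BERT-style random token masking, not the proposed concept-based curriculum; the token ids and mask id are placeholders.

```python
import numpy as np

def random_mlm_mask(token_ids, mask_id, mask_prob=0.15, rng=None):
    """Vanilla BERT-style random masking for masked language modeling.

    Returns (masked_inputs, labels): labels are -100 (ignored) everywhere
    except at masked positions, where they keep the original token id.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    token_ids = np.asarray(token_ids)
    mask = rng.random(token_ids.shape) < mask_prob
    labels = np.where(mask, token_ids, -100)
    masked_inputs = np.where(mask, mask_id, token_ids)
    return masked_inputs, labels

# Toy usage with placeholder token ids (103 stands in for [MASK]).
inputs, labels = random_mlm_mask([101, 7592, 2088, 2003, 2307, 102], mask_id=103)
print(inputs, labels)
```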

Handling Out-Of-Vocabulary Problem in Hangeul Word Embeddings

no code implementations • EACL 2021 • Ohjoon Kwon, Dohyun Kim, Soo-Ryeon Lee, Junyoung Choi, SangKeun Lee

Word embedding is considered an essential factor in improving the performance of various Natural Language Processing (NLP) models.

Word Embeddings

Adaptive Compression of Word Embeddings

no code implementations • ACL 2020 • Yeachan Kim, Kang-Min Kim, SangKeun Lee

However, unlike prior works that assign the same length of codes to all words, we adaptively assign different lengths of codes to each word by learning downstream tasks.

Self-Driving Cars • Word Embeddings
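
A minimal sketch of the code-based embedding compression this line of work builds on: each word stores only a short list of discrete codes, and its vector is reconstructed by summing the corresponding codebook rows, so a word assigned fewer codes costs less storage. The codebook sizes and the way codes are chosen below are illustrative assumptions, not the paper's learned, task-driven assignment.

```python
import numpy as np

# Codebook-based embedding compression: each word stores a few small integers;
# its dense vector is recovered by summing the selected codebook rows.
NUM_CODEBOOKS, CODES_PER_BOOK, DIM = 8, 16, 50
rng = np.random.default_rng(0)
codebooks = rng.standard_normal((NUM_CODEBOOKS, CODES_PER_BOOK, DIM))

def decode_embedding(codes):
    """Reconstruct a word vector from its (possibly short) code sequence.

    `codes` lists one index per codebook; some words may use all
    NUM_CODEBOOKS entries while others use fewer (adaptive code length).
    """
    return sum(codebooks[i, c] for i, c in enumerate(codes))

# Toy usage: a "cheap" word with 2 codes vs. a "full" word with 8 codes.
short_vec = decode_embedding([3, 7])                     # only 2 integers stored
full_vec = decode_embedding([3, 7, 1, 0, 12, 5, 9, 14])  # 8 integers stored
print(short_vec.shape, full_vec.shape)  # both (50,)
```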

Representation Learning for Unseen Words by Bridging Subwords to Semantic Networks

no code implementations • LREC 2020 • Yeachan Kim, Kang-Min Kim, SangKeun Lee

In the first stage, we learn subword embeddings from the pre-trained word embeddings by using an additive composition function of subwords.

Representation Learning • Word Embeddings
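
The additive composition mentioned in the snippet above can be sketched as follows: subword vectors are fitted so that their sum approximates each pre-trained word vector, and an unseen word is then embedded by summing the vectors of its subwords. This is a generic ridge-regression sketch under assumed character n-gram subwords, not the paper's two-stage procedure or its semantic-network bridging.

```python
import numpy as np

def char_ngrams(word, n=3):
    """Character n-gram subwords with boundary markers, e.g. '<ca', 'cat', 'at>'."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def fit_subword_embeddings(word_vectors, n=3, reg=1e-3):
    """Fit subword vectors so that the SUM of a word's subword vectors
    approximates its pre-trained vector (additive composition), via ridge
    regression on word-by-subword counts."""
    vocab = sorted({g for w in word_vectors for g in char_ngrams(w, n)})
    index = {g: i for i, g in enumerate(vocab)}
    A = np.zeros((len(word_vectors), len(vocab)))   # word-by-subword counts
    Y = np.stack(list(word_vectors.values()))       # pre-trained vectors
    for row, w in enumerate(word_vectors):
        for g in char_ngrams(w, n):
            A[row, index[g]] += 1.0
    S = np.linalg.solve(A.T @ A + reg * np.eye(len(vocab)), A.T @ Y)
    return index, S

def embed_oov(word, index, S, n=3):
    """Embed an unseen word by summing the vectors of its known subwords."""
    vecs = [S[index[g]] for g in char_ngrams(word, n) if g in index]
    return np.sum(vecs, axis=0) if vecs else np.zeros(S.shape[1])

# Toy usage with three "pre-trained" 4-dimensional vectors.
pretrained = {"cat": np.array([1.0, 0.0, 0.0, 0.0]),
              "cats": np.array([1.0, 0.1, 0.0, 0.0]),
              "car": np.array([0.0, 1.0, 0.0, 0.0])}
index, S = fit_subword_embeddings(pretrained)
print(embed_oov("cart", index, S))  # built purely from subword vectors
```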

Learning to Generate Word Representations using Subword Information

no code implementations • COLING 2018 • Yeachan Kim, Kang-Min Kim, Ji-Min Lee, SangKeun Lee

Unlike previous models that learn word representations from a large corpus, we take a set of pre-trained word embeddings and generalize it to word entries, including OOV words.

Chunking • Language Modelling • +5
