1 code implementation • 21 Aug 2023 • Seongmin Park, Jinkyu Seo, Jihwa Lee
We open-source HyperSeg to provide a strong baseline for unsupervised topic segmentation.
1 code implementation • 11 Aug 2023 • Seongmin Park, Mincheol Yoon, Jae-woong Lee, Hogun Park, Jongwuk Lee
Inspired by this analysis, we propose Margin-aware Alignment and Weighted Uniformity (MAWU), a novel loss function that improves the design of alignment and uniformity by accounting for the unique patterns of each dataset.
1 code implementation • 22 May 2023 • Jae-woong Lee, Seongmin Park, Mincheol Yoon, Jongwuk Lee
In this paper, we propose Unbiased ConTrastive Representation Learning (uCTRL), which optimizes alignment and uniformity functions derived from the InfoNCE loss for CF models.
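The alignment and uniformity objectives referenced above are standard quantities for contrastive representations: alignment pulls positive user-item pairs together on the unit sphere, while uniformity spreads all embeddings apart. The sketch below gives the common textbook definitions in NumPy, not the paper's debiased uCTRL variants.

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit hypersphere
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def alignment(u, v, alpha=2):
    # Mean distance between normalized positive-pair embeddings;
    # lower is better (positives sit close together)
    u, v = normalize(u), normalize(v)
    return np.mean(np.sum((u - v) ** 2, axis=1) ** (alpha / 2))

def uniformity(x, t=2):
    # Log of the mean Gaussian potential over all distinct pairs;
    # lower is better (embeddings spread over the sphere)
    x = normalize(x)
    sq = np.sum((x[:, None] - x[None, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return np.log(np.mean(np.exp(-t * sq[iu])))
```

Losses like MAWU and uCTRL can be read as reweighted or debiased versions of these two terms.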
no code implementations • 12 May 2023 • Minjae Lee, Seongmin Park, Hyungmin Kim, Minyong Yoon, Janghwan Lee, Jun Won Choi, Nam Sung Kim, Mingu Kang, Jungwook Choi
3D object detection using point cloud (PC) data is essential for perception pipelines of autonomous driving, where efficient encoding is key to meeting stringent resource and latency requirements.
1 code implementation • 23 Feb 2023 • Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, Jungwook Choi
Pre-trained Transformer models such as BERT have shown great success in a wide range of applications, but at the cost of substantial increases in model complexity.
no code implementations • 21 Dec 2022 • Seongmin Park, Beomseok Kwon, Jieun Lim, Kyuyoung Sim, Tae-Ho Kim, Jungwook Choi
Uniform-precision neural network quantization has gained popularity because it simplifies the densely packed arithmetic units needed for high computing capability.
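For context, uniform-precision quantization maps every weight in a tensor through one shared scale, so all layers can use the same integer arithmetic units. This is a minimal symmetric-quantizer sketch, not the method proposed in the paper.

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    # Symmetric uniform quantizer: a single scale for the whole tensor,
    # so every layer maps to the same signed integer grid
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    # Return the dequantized tensor (for simulated quantization) and the scale
    return q * scale, scale
```

The appeal for hardware is that one bit-width and one scale per tensor keeps the multiply-accumulate datapath simple; the cost, which this line of work addresses, is accuracy loss on layers that would prefer different precisions.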
no code implementations • TU (COLING) 2022 • Seongmin Park, Dongchan Shin, Jihwa Lee
To mitigate the lack of diverse dialogue summarization datasets in academia, we present methods to utilize non-dialogue summarization data for enhancing dialogue summarization systems.
1 code implementation • COLING 2022 • Seongmin Park, Jihwa Lee
With just an off-the-shelf textual entailment model, LIME outperforms recent baselines in weakly supervised text classification and achieves state-of-the-art results on 4 benchmarks.
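The general recipe behind entailment-based weak classification is to turn each candidate label into a hypothesis sentence and pick the label whose hypothesis the input most strongly entails. The sketch below assumes a hypothetical `entail_score(premise, hypothesis)` interface; the toy word-overlap scorer stands in for a real NLI model and is not part of LIME.

```python
def classify_by_entailment(premise, labels, entail_score,
                           template="This text is about {}"):
    # Turn each label into a hypothesis and score entailment against the input;
    # entail_score(premise, hypothesis) -> float is a hypothetical interface
    # you would back with an off-the-shelf NLI model
    scores = {lab: entail_score(premise, template.format(lab))
              for lab in labels}
    return max(scores, key=scores.get)

def toy_score(premise, hypothesis):
    # Crude word-overlap stand-in for a real entailment model (illustration only)
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)
```

With a real entailment model plugged in, no labeled training data is needed, which is what makes the approach attractive for weakly supervised settings.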
1 code implementation • 26 Jul 2022 • Jae-woong Lee, Seongmin Park, Joonseok Lee, Jongwuk Lee
Implicit feedback has been widely used to build commercial recommender systems.
1 code implementation • WIT (ACL) 2022 • Seongmin Park, Jihwa Lee
We advance the state-of-the-art in unsupervised abstractive dialogue summarization by utilizing multi-sentence compression graphs.
no code implementations • 3 Dec 2021 • Joonsang Yu, Junki Park, Seongmin Park, Minsoo Kim, Sihwa Lee, Dong Hyun Lee, Jungwook Choi
Non-linear operations such as GELU, Layer normalization, and Softmax are essential yet costly building blocks of Transformer models.
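To see why these operations are costly, GELU involves the Gaussian error function, which hardware typically replaces with a cheaper approximation. The sketch below shows the exact definition next to the widely used tanh approximation; the paper's own hardware-friendly approximations are not reproduced here.

```python
import math

def gelu_exact(x):
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF via erf
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # Common tanh approximation, cheaper to realize in fixed-function hardware
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))
```

Layer normalization and softmax raise the same issue for different reasons: they need division, square roots, and exponentials, none of which map cleanly onto integer accelerator datapaths.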
1 code implementation • EMNLP (insights) 2021 • Seongmin Park, Jihwa Lee
Text variational autoencoders (VAEs) are notorious for posterior collapse, a phenomenon where the model's decoder learns to ignore signals from the encoder.
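Posterior collapse is visible directly in the KL term of the VAE objective: when the approximate posterior q(z|x) matches the standard normal prior for every input, the KL drops to zero and the latent code carries no information. A minimal sketch of the closed-form KL for a diagonal Gaussian posterior:

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims;
    # zero exactly when mu = 0 and logvar = 0, i.e. a collapsed posterior
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)
```

In a collapsed text VAE this term hovers near zero during training while the decoder, a strong autoregressive language model, reconstructs the text on its own.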
no code implementations • 4 Aug 2021 • Seongmin Park, Dongchan Shin, Sangyoun Paik, Subong Choi, Alena Kazakova, Jihwa Lee
Fine-tuning pretrained language models (LMs) is a popular approach to automatic speech recognition (ASR) error detection during post-processing.
Automatic Speech Recognition (ASR)
no code implementations • 1 Jan 2021 • Seongmin Park, Beomseok Kwon, Kyuyoung Sim, Jieun Lim, Tae-Ho Kim, Jungwook Choi
Uniform-precision neural network quantization has gained popularity thanks to its simple arithmetic units, densely packed for high computing capability.