Search Results for author: Seongmin Park

Found 14 papers, 8 papers with code

Unsupervised Dialogue Topic Segmentation in Hyperdimensional Space

1 code implementation · 21 Aug 2023 · Seongmin Park, Jinkyu Seo, Jihwa Lee

We open-source HyperSeg to provide a strong baseline for unsupervised topic segmentation.

Segmentation
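
As a rough illustration of the hyperdimensional-computing approach (a sketch of the general technique, not HyperSeg's actual implementation; the function names and the similarity threshold below are invented for this example): each token gets a fixed random bipolar hypervector, utterances are encoded by bundling their tokens' vectors, and topic boundaries are hypothesized wherever adjacent utterances stop resembling each other.

```python
import numpy as np

DIM = 10_000  # hypervectors are typically ~10k-dimensional
rng = np.random.default_rng(0)
_token_memory = {}  # token -> fixed random bipolar hypervector

def token_hv(token):
    # Each distinct token is assigned a fixed random {-1, +1} hypervector.
    if token not in _token_memory:
        _token_memory[token] = rng.choice([-1, 1], size=DIM)
    return _token_memory[token]

def utterance_hv(utterance):
    # "Bundling": sum the token hypervectors, then re-binarize with sign.
    total = np.sum([token_hv(t) for t in utterance.lower().split()], axis=0)
    return np.where(total >= 0, 1, -1)

def topic_boundaries(utterances, threshold=0.05):
    # Hypothesize a boundary where adjacent utterances diverge;
    # the threshold is an illustrative guess, not a tuned value.
    hvs = [utterance_hv(u) for u in utterances]
    return [i + 1
            for i in range(len(hvs) - 1)
            if hvs[i] @ hvs[i + 1] / DIM < threshold]
```

Because random high-dimensional bipolar vectors are nearly orthogonal, utterances that share vocabulary score noticeably higher cosine similarity than unrelated ones, which is what makes threshold-based segmentation workable without any training.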

Toward a Better Understanding of Loss Functions for Collaborative Filtering

1 code implementation · 11 Aug 2023 · Seongmin Park, Mincheol Yoon, Jae-woong Lee, Hogun Park, Jongwuk Lee

Inspired by this analysis, we propose Margin-aware Alignment and Weighted Uniformity (MAWU), a novel loss function that improves the design of alignment and uniformity by accounting for the unique patterns of each dataset.

Collaborative Filtering, Recommendation Systems
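
For readers unfamiliar with the alignment/uniformity framing: alignment pulls matched user-item embedding pairs together, while uniformity spreads embeddings over the unit hypersphere. The sketch below shows the two standard terms plus a margin and per-side weights as hypothetical stand-ins for MAWU's margin-aware and dataset-weighted variants; it is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def alignment(user_emb, item_emb, margin=0.1):
    # Pull matched user-item pairs together, but only past a margin
    # (the margin is a hypothetical stand-in for "margin-aware").
    u = F.normalize(user_emb, dim=-1)
    v = F.normalize(item_emb, dim=-1)
    dist2 = (u - v).pow(2).sum(dim=-1)
    return F.relu(dist2 - margin).mean()

def uniformity(emb, t=2.0):
    # Standard log-mean-exp uniformity over pairwise distances
    # on the unit hypersphere (Wang & Isola style).
    e = F.normalize(emb, dim=-1)
    sq_dists = torch.pdist(e, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()

def mawu_like_loss(user_emb, item_emb, w_user=1.0, w_item=1.0):
    # Alignment plus separately weighted user- and item-side uniformity;
    # the weights stand in for MAWU's dataset-dependent weighting.
    return (alignment(user_emb, item_emb)
            + w_user * uniformity(user_emb)
            + w_item * uniformity(item_emb))
```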

uCTRL: Unbiased Contrastive Representation Learning via Alignment and Uniformity for Collaborative Filtering

1 code implementation · 22 May 2023 · Jae-woong Lee, Seongmin Park, Mincheol Yoon, Jongwuk Lee

In this paper, we propose Unbiased ConTrastive Representation Learning (uCTRL), which optimizes alignment and uniformity functions derived from the InfoNCE loss function for CF models.

Causal Inference, Collaborative Filtering +1

SPADE: Sparse Pillar-based 3D Object Detection Accelerator for Autonomous Driving

no code implementations · 12 May 2023 · Minjae Lee, Seongmin Park, Hyungmin Kim, Minyong Yoon, Janghwan Lee, Jun Won Choi, Nam Sung Kim, Mingu Kang, Jungwook Choi

3D object detection using point cloud (PC) data is essential for perception pipelines of autonomous driving, where efficient encoding is key to meeting stringent resource and latency requirements.

3D Object Detection, Autonomous Driving +2

Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers

1 code implementation · 23 Feb 2023 · Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, Jungwook Choi

Pre-trained Transformer models such as BERT have shown great success in a wide range of applications, but at the cost of substantial increases in model complexity.

Knowledge Distillation, Quantization

Automatic Network Adaptation for Ultra-Low Uniform-Precision Quantization

no code implementations · 21 Dec 2022 · Seongmin Park, Beomseok Kwon, Jieun Lim, Kyuyoung Sim, Tae-Ho Kim, Jungwook Choi

Uniform-precision neural network quantization has gained popularity because it yields simple arithmetic units that can be densely packed for high computing capability.

Neural Architecture Search, Quantization
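
"Uniform precision" means one bit-width shared by every layer, so the accelerator needs only one kind of integer multiply-accumulate unit. A minimal fake-quantization sketch of that setting (symmetric, per-tensor; this is the baseline operation being made uniform, not the paper's network-adaptation method):

```python
import torch

def fake_quantize(x, num_bits=4):
    # Symmetric uniform quantization: the same bit-width everywhere
    # means one kind of integer arithmetic unit suffices in hardware.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = (x / scale).round().clamp(-qmax - 1, qmax)
    return q * scale  # dequantized values used in the forward pass
```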

Leveraging Non-dialogue Summaries for Dialogue Summarization

no code implementations · TU (COLING) 2022 · Seongmin Park, Dongchan Shin, Jihwa Lee

To mitigate the lack of diverse dialogue summarization datasets in academia, we present methods to utilize non-dialogue summarization data for enhancing dialogue summarization systems.

Document Summarization

LIME: Weakly-Supervised Text Classification Without Seeds

1 code implementation · COLING 2022 · Seongmin Park, Jihwa Lee

With just an off-the-shelf textual entailment model, LIME outperforms recent baselines in weakly-supervised text classification and achieves state-of-the-art results on four benchmarks.

Natural Language Inference, text-classification +2
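
The core recipe behind entailment-based classification, which LIME builds on, is to phrase each candidate label as an entailment hypothesis and let the NLI model score it. A minimal sketch using the off-the-shelf Hugging Face pipeline (the model choice and template are arbitrary here, and LIME's actual procedure goes beyond this baseline):

```python
from transformers import pipeline

# Off-the-shelf NLI model repurposed as a classifier: each candidate
# label becomes an entailment hypothesis and the highest-scoring wins.
classifier = pipeline("zero-shot-classification",
                      model="roberta-large-mnli")

result = classifier(
    "The team clinched the championship with a last-minute goal.",
    candidate_labels=["sports", "politics", "technology"],
    hypothesis_template="This text is about {}.",
)
print(result["labels"][0])  # highest-scoring label, e.g. "sports"
```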

Unsupervised Abstractive Dialogue Summarization with Word Graphs and POV Conversion

1 code implementation · WIT (ACL) 2022 · Seongmin Park, Jihwa Lee

We advance the state-of-the-art in unsupervised abstractive dialogue summarization by utilizing multi-sentence compression graphs.

Abstractive Dialogue Summarization, Sentence +1
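
In multi-sentence compression, sentences that use the same word share a graph node, so phrasing common to many sentences becomes a short, cheap path from start to end. A toy sketch of the word-graph idea (real systems also use frequency statistics, stopword handling, and part-of-speech constraints; the edge scoring below is deliberately simplistic):

```python
import networkx as nx

def word_graph_compress(sentences):
    # Build a word graph: identical (lowercased) words map to the same
    # node, so transitions shared across sentences accumulate counts.
    g = nx.DiGraph()
    for sent in sentences:
        tokens = ["<START>"] + sent.lower().split() + ["<END>"]
        for a, b in zip(tokens, tokens[1:]):
            if g.has_edge(a, b):
                g[a][b]["count"] += 1
            else:
                g.add_edge(a, b, count=1)
    # Frequent transitions get cheaper edges, so the shortest path
    # follows phrasing shared across the input sentences.
    path = nx.shortest_path(g, "<START>", "<END>",
                            weight=lambda u, v, d: 1.0 / d["count"])
    return " ".join(path[1:-1])
```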

NN-LUT: Neural Approximation of Non-Linear Operations for Efficient Transformer Inference

no code implementations · 3 Dec 2021 · Joonsang Yu, Junki Park, Seongmin Park, Minsoo Kim, Sihwa Lee, Dong Hyun Lee, Jungwook Choi

Non-linear operations such as GELU, Layer normalization, and Softmax are essential yet costly building blocks of Transformer models.
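
The general lookup-table idea is to replace an expensive non-linearity with piecewise-linear segments, turning each activation into a table lookup plus one multiply-add. The sketch below uses evenly spaced breakpoints for GELU; NN-LUT itself learns the approximation with a small network, so treat this as the generic technique rather than the paper's method.

```python
import torch

def build_lut(fn, lo=-4.0, hi=4.0, n_segments=16):
    # Precompute per-segment slope and intercept so the non-linearity
    # becomes an index lookup plus a single fused multiply-add.
    xs = torch.linspace(lo, hi, n_segments + 1)
    ys = fn(xs)
    slopes = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])
    intercepts = ys[:-1] - slopes * xs[:-1]
    return xs, slopes, intercepts

def lut_apply(x, xs, slopes, intercepts):
    # Pick each input's segment; inputs outside [lo, hi] reuse the
    # edge segment's line (linear extrapolation).
    clamped = x.clamp(min=xs[0].item(), max=xs[-1].item())
    idx = torch.bucketize(clamped, xs[1:-1])
    return slopes[idx] * x + intercepts[idx]

xs, slopes, intercepts = build_lut(torch.nn.functional.gelu)
x = torch.randn(8)
approx = lut_apply(x, xs, slopes, intercepts)
print((approx - torch.nn.functional.gelu(x)).abs().max())
```

Even this naive 16-segment table tracks GELU closely on the clamped range; the appeal for inference hardware is that lookup plus multiply-add is far cheaper than evaluating erf or exp.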

Finetuning Pretrained Transformers into Variational Autoencoders

1 code implementation · EMNLP (insights) 2021 · Seongmin Park, Jihwa Lee

Text variational autoencoders (VAEs) are notorious for posterior collapse, a phenomenon where the model's decoder learns to ignore signals from the encoder.

Language Modelling
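
Posterior collapse shows up as the per-dimension KL term being driven to (near) zero, at which point the decoder carries all the modeling load. One common guard, shown below, is the "free bits" floor on each latent dimension's KL; this is a standard mitigation offered for context, not necessarily the fix used in the paper.

```python
import torch

def kl_free_bits(mu, logvar, free_bits=0.25):
    # KL(q(z|x) || N(0, I)) computed per latent dimension; "free bits"
    # stops the optimizer from squeezing any dimension's KL below a
    # floor, a common guard against posterior collapse.
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1)
    return torch.clamp(kl_per_dim, min=free_bits).sum(dim=-1).mean()
```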

Uniform-Precision Neural Network Quantization via Neural Channel Expansion

no code implementations · 1 Jan 2021 · Seongmin Park, Beomseok Kwon, Kyuyoung Sim, Jieun Lim, Tae-Ho Kim, Jungwook Choi

Uniform-precision neural network quantization has gained popularity thanks to its simple arithmetic units, which can be densely packed for high computing capability.

Neural Architecture Search, Quantization
