no code implementations • 23 Feb 2024 • Jongyoon Song, Nohil Park, Bongkyu Hwang, Jaewoong Yun, Seongho Joe, Youngjune L. Gwon, Sungroh Yoon
Abstractive summarization models often generate factually inconsistent content, particularly when the parametric knowledge of the model conflicts with the knowledge in the input document.
no code implementations • 20 Apr 2023 • Hyunjin Choi, Hyunjae Lee, Seongho Joe, Youngjune L. Gwon
Embeddings for a particular modality of data occupy a high-dimensional space of their own, but they can be semantically aligned to another modality's space by a simple mapping, without training a deep neural net.
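The snippet above doesn't spell out the mapping; as a minimal sketch of the idea, a linear map fit in closed form by least squares can align one embedding space with another, with synthetic data standing in for real encoder outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired embeddings: X in modality A (e.g., text), Y in modality B
# (e.g., image). In practice these would come from pre-trained encoders.
n, d_a, d_b = 1000, 64, 48
X = rng.normal(size=(n, d_a))
true_map = rng.normal(size=(d_a, d_b))
Y = X @ true_map + 0.01 * rng.normal(size=(n, d_b))

# Fit a linear mapping W that aligns modality A to modality B by least
# squares -- a closed-form solution, no deep-net training involved.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Evaluate alignment: cosine similarity between mapped and target embeddings.
mapped = X @ W
cos = (mapped * Y).sum(axis=1) / (
    np.linalg.norm(mapped, axis=1) * np.linalg.norm(Y, axis=1)
)
print(f"mean cosine similarity after alignment: {cos.mean():.3f}")
```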
no code implementations • 19 Apr 2023 • Seongho Joe, Byoungjip Kim, Hoyoung Kang, Kyoungwon Park, Bogun Kim, Jaeseon Park, Joonseok Lee, Youngjune Gwon
Recent advances in representation learning inspire us to take on the challenging problem of unsupervised image classification in a principled way.
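The listing doesn't show the method itself; a common baseline consistent with this framing is to cluster embeddings from a self-supervised encoder into pseudo-class assignments. A minimal sketch, with synthetic features standing in for encoder outputs:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for features from a self-supervised encoder (e.g., SimCLR/MoCo
# outputs): three synthetic clusters of 128-d "image embeddings".
centers = rng.normal(scale=5.0, size=(3, 128))
features = np.vstack([c + rng.normal(size=(200, 128)) for c in centers])

# Unsupervised "classification": assign pseudo-labels by clustering.
pseudo_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(pseudo_labels))  # cluster sizes, used as class assignments
```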
no code implementations • 19 Apr 2023 • Joonseok Lee, Seongho Joe, Kyoungwon Park, Bogun Kim, Hoyoung Kang, Jaeseon Park, Youngjune Gwon
We propose a self-supervised learning method for long text documents based on contrastive learning.
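The exact view construction for long documents isn't given here; as a sketch, contrastive methods of this kind typically optimize an InfoNCE-style objective over paired views of the same document:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Simplified InfoNCE loss over two batches of paired document embeddings.

    z1[i] and z2[i] are two views of the same document (positives); all other
    pairs in the batch act as negatives.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature    # pairwise cosine similarities
    targets = torch.arange(z1.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Example: 8 documents, 256-d embeddings for two augmented views each.
z1, z2 = torch.randn(8, 256), torch.randn(8, 256)
print(info_nce(z1, z2).item())
```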
no code implementations • 1 Sep 2022 • Hyunjae Lee, Jaewoong Yun, Hyunjin Choi, Seongho Joe, Youngjune L. Gwon
We build and fine-tune an abstractive dialogue summarization model on a shared encoder-decoder architecture using the enhanced BERT.
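A hedged sketch of such a shared encoder-decoder setup with Hugging Face transformers, using the public bert-base-uncased checkpoint as a stand-in for the paper's enhanced BERT (tie_encoder_decoder=True shares weights between encoder and decoder):

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

# Stand-in checkpoint: the paper fine-tunes its own enhanced BERT;
# bert-base-uncased is used here purely for illustration.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased", tie_encoder_decoder=True
)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

dialogue = "A: Are we still on for lunch? B: Yes, noon at the usual place."
inputs = tokenizer(dialogue, return_tensors="pt")

# Output is meaningless before fine-tuning on a summarization corpus;
# this only demonstrates the architecture wiring.
summary_ids = model.generate(inputs.input_ids, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```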
no code implementations • 16 Aug 2021 • Yonghyun Jeong, Doyeon Kim, Seungjai Min, Seongho Joe, Youngjune Gwon, Jongwon Choi
Advances in generative models have a two-fold effect: they make it simple to generate realistic synthesized images, but they also increase the risk of malicious abuse of those images.
no code implementations • 27 Jan 2021 • Hyunjae Lee, Jaewoong Yoon, Bonggyu Hwang, Seongho Joe, Seungjai Min, Youngjune Gwon
A Lite BERT (ALBERT) has been introduced to scale up deep bidirectional representation learning for natural languages.
no code implementations • 26 Jan 2021 • Hyunjin Choi, Judong Kim, Seongho Joe, Seungjai Min, Youngjune Gwon
In zero-shot cross-lingual transfer, a model trained on a supervised NLP task in one language is applied directly to another language without any additional training.
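A minimal sketch of this protocol with a multilingual encoder (the checkpoint name is illustrative, not necessarily the paper's):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Zero-shot cross-lingual transfer in a nutshell: fine-tune a multilingual
# encoder on labeled data in one language, then apply it unchanged to another.
name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# ... fine-tune `model` on an English sentiment corpus here ...

# At inference time, the same weights score text in an unseen language.
korean = tokenizer("이 영화 정말 좋았어요.", return_tensors="pt")  # "I really liked this movie."
with torch.no_grad():
    logits = model(**korean).logits
print(logits.softmax(-1))  # meaningful only after the fine-tuning step above
```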
no code implementations • 26 Jan 2021 • Hyunjin Choi, Judong Kim, Seongho Joe, Youngjune Gwon
The pre-trained BERT and A Lite BERT (ALBERT) models can be fine-tuned to give state-of-the-art results in sentence-pair regression tasks such as semantic textual similarity (STS) and natural language inference (NLI).
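An SBERT-style inference sketch under these assumptions: mean-pool BERT token embeddings into sentence vectors and score pairs by cosine similarity (the fine-tuning on STS/NLI that makes the scores accurate is omitted):

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(sentence):
    """Mean-pool token embeddings into a single sentence vector."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state  # (1, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)   # ignore padding positions
    return (hidden * mask).sum(1) / mask.sum(1)

a, b = embed("A man is playing a guitar."), embed("Someone strums a guitar.")
print(torch.cosine_similarity(a, b).item())
```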
no code implementations • 16 Jan 2021 • Byoungjip Kim, Jinho Choo, Yeong-Dae Kwon, Seongho Joe, Seungjai Min, Youngjune Gwon
This paper introduces SelfMatch, a semi-supervised learning method that combines the power of contrastive self-supervised learning and consistency regularization.
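SelfMatch's exact objective isn't shown here; the consistency-regularization component in such methods commonly takes a FixMatch-style form, sketched below with random logits:

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_weak, logits_strong, threshold=0.95):
    """FixMatch-style consistency regularization on unlabeled data.

    Pseudo-labels come from predictions on weakly augmented inputs; the model
    is trained to match them on strongly augmented versions, but only where
    the pseudo-label is confident.
    """
    probs = logits_weak.detach().softmax(dim=-1)
    confidence, pseudo_labels = probs.max(dim=-1)
    mask = (confidence >= threshold).float()  # keep confident examples only
    loss = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (loss * mask).mean()

# Example with random logits for a batch of 16 unlabeled examples, 10 classes.
weak, strong = torch.randn(16, 10), torch.randn(16, 10)
print(consistency_loss(weak, strong).item())
```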