Search Results for author: Seongho Joe

Found 10 papers, 0 papers with code

Entity-level Factual Adaptiveness of Fine-tuning based Abstractive Summarization Models

no code implementations • 23 Feb 2024 • Jongyoon Song, Nohil Park, Bongkyu Hwang, Jaewoong Yun, Seongho Joe, Youngjune L. Gwon, Sungroh Yoon

Abstractive summarization models often generate factually inconsistent content, particularly when the parametric knowledge of the model conflicts with the knowledge in the input document.

Abstractive Text Summarization · Contrastive Learning (+2)

Is Cross-modal Information Retrieval Possible without Training?

no code implementations • 20 Apr 2023 • Hyunjin Choi, Hyunjae Lee, Seongho Joe, Youngjune L. Gwon

Embeddings for a particular modality of data occupy a high-dimensional space of their own, but they can be semantically aligned to another modality by a simple mapping, without training a deep neural net.

Contrastive Learning · Cross-Modal Information Retrieval (+4)
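
The abstract above suggests that two modalities' embedding spaces can be aligned with a simple mapping rather than a trained deep network. As a hedged illustration of that general idea (not the paper's exact method), the sketch below fits a least-squares linear map between two synthetic paired embedding sets and uses it for nearest-neighbor retrieval; all array names, dimensions, and the synthetic data are assumptions for the example.

```python
# Minimal sketch: align two embedding spaces with a least-squares linear map.
# Illustrates training-free (no deep net) cross-modal alignment; the synthetic
# data and dimensions below are placeholders, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)

n_pairs, d_text, d_image = 1000, 768, 512            # assumed dimensions
text_emb = rng.normal(size=(n_pairs, d_text))         # e.g. text embeddings
true_map = rng.normal(size=(d_text, d_image))
image_emb = text_emb @ true_map + 0.01 * rng.normal(size=(n_pairs, d_image))

# Solve W = argmin ||text_emb @ W - image_emb||_F by ordinary least squares.
W, *_ = np.linalg.lstsq(text_emb, image_emb, rcond=None)

def retrieve(query_text_emb, image_bank, mapping, top_k=5):
    """Map a text query into the image space, rank images by cosine similarity."""
    projected = query_text_emb @ mapping
    projected /= np.linalg.norm(projected)
    bank = image_bank / np.linalg.norm(image_bank, axis=1, keepdims=True)
    scores = bank @ projected
    return np.argsort(-scores)[:top_k]

print(retrieve(text_emb[0], image_emb, W))             # index 0 should rank first
```

An orthogonal (Procrustes-style) mapping would be a drop-in alternative to plain least squares here; either way no neural network is trained.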

BiHPF: Bilateral High-Pass Filters for Robust Deepfake Detection

no code implementations • 16 Aug 2021 • Yonghyun Jeong, Doyeon Kim, Seungjai Min, Seongho Joe, Youngjune Gwon, Jongwon Choi

Advances in generative models have a two-fold effect: they make it simple to generate realistic synthesized images, but they also increase the risk of malicious abuse of those images.

DeepFake Detection · Face Swapping (+1)

KoreALBERT: Pretraining a Lite BERT Model for Korean Language Understanding

no code implementations • 27 Jan 2021 • Hyunjae Lee, Jaewoong Yoon, Bonggyu Hwang, Seongho Joe, Seungjai Min, Youngjune Gwon

A Lite BERT (ALBERT) has been introduced to scale up deep bidirectional representation learning for natural languages.

Representation Learning · Sentence

Analyzing Zero-shot Cross-lingual Transfer in Supervised NLP Tasks

no code implementations • 26 Jan 2021 • Hyunjin Choi, Judong Kim, Seongho Joe, Seungjai Min, Youngjune Gwon

In zero-shot cross-lingual transfer, a supervised NLP task trained on a corpus in one language is directly applicable to another language without any additional training.

Language Modelling · Machine Reading Comprehension (+6)
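
The abstract above describes zero-shot cross-lingual transfer: a model fine-tuned on task data in one language is applied directly to another language with no additional training. The sketch below illustrates that recipe with a multilingual encoder from Hugging Face Transformers (`bert-base-multilingual-cased`); the toy training data, label set, and hyperparameters are assumptions for the example, not the paper's setup.

```python
# Minimal sketch of zero-shot cross-lingual transfer: fine-tune a multilingual
# encoder on English sentiment labels, then apply it unchanged to Korean input.
# The tiny dataset and hyperparameters are placeholders for illustration.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# English-only training examples (0 = negative, 1 = positive).
train_texts = ["The movie was wonderful.", "The movie was terrible."]
train_labels = torch.tensor([1, 0])

batch = tokenizer(train_texts, padding=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few toy epochs
    out = model(**batch, labels=train_labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot evaluation on a Korean sentence, with no Korean training data.
model.eval()
with torch.no_grad():
    korean = tokenizer("이 영화는 정말 좋았어요.", return_tensors="pt")
    pred = model(**korean).logits.argmax(dim=-1).item()
print("predicted label:", pred)
```

The point of the sketch is only the recipe: the same tokenizer and model weights handle both languages, so a classifier trained on English data can be queried directly with Korean text.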

Evaluation of BERT and ALBERT Sentence Embedding Performance on Downstream NLP Tasks

no code implementations • 26 Jan 2021 • Hyunjin Choi, Judong Kim, Seongho Joe, Youngjune Gwon

The pre-trained BERT and A Lite BERT (ALBERT) models can be fine-tuned to give state-of-the-art results in sentence-pair regression tasks such as semantic textual similarity (STS) and natural language inference (NLI).

Language Modelling · Natural Language Inference (+5)
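
The abstract above concerns how well BERT and ALBERT sentence embeddings perform on downstream tasks such as STS and NLI. As an illustration of the basic setup rather than the paper's evaluation protocol, the sketch below mean-pools token representations from a pre-trained encoder into fixed-size sentence vectors and scores a sentence pair by cosine similarity; the checkpoint name and example sentences are assumptions.

```python
# Minimal sketch: fixed-size sentence embeddings from a pre-trained encoder via
# mean pooling, compared with cosine similarity (an STS-style score).
# The checkpoint and example sentences are placeholders; an ALBERT checkpoint
# such as "albert-base-v2" can be swapped in the same way.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the last hidden states over non-padding tokens."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state         # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)           # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # (1, dim)

a = embed("A man is playing a guitar.")
b = embed("Someone is strumming a guitar.")
similarity = torch.cosine_similarity(a, b).item()
print(f"cosine similarity: {similarity:.3f}")
```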
