Search Results for author: Seonhoon Kim

Found 9 papers, 3 papers with code

Self-Distilled Self-Supervised Representation Learning

1 code implementation • 25 Nov 2021 • Jiho Jang, Seonhoon Kim, KiYoon Yoo, Chaerin Kong, Jangho Kim, Nojun Kwak

Through self-distillation, the intermediate layers are better suited for instance discrimination, so the performance of an early-exited sub-network degrades only slightly compared to that of the full network.
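As a rough illustration of the self-distillation idea in the snippet above (not the authors' exact method), the sketch below trains a projection at every intermediate layer to match the final layer's representation, so each possible exit point stays usable. All module names, dimensions, and the cosine-distance objective are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDistilledEncoder(nn.Module):
    """Toy backbone: MLP blocks, each followed by a projection head, so the
    network can be exited early at any block. Purely illustrative."""
    def __init__(self, dim=128, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)])
        self.heads = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])

    def forward(self, x):
        feats = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            feats.append(head(x))        # embedding available at this exit point
        return feats

def self_distillation_loss(feats):
    # Pull every intermediate embedding toward the (detached) final embedding.
    teacher = F.normalize(feats[-1].detach(), dim=-1)
    loss = 0.0
    for f in feats[:-1]:
        student = F.normalize(f, dim=-1)
        loss = loss + (1 - (student * teacher).sum(dim=-1)).mean()  # cosine distance
    return loss / (len(feats) - 1)

encoder = SelfDistilledEncoder()
loss = self_distillation_loss(encoder(torch.randn(8, 128)))  # dummy batch
loss.backward()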

Representation Learning, Self-Supervised Learning

LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory

1 code implementation • ACL 2022 • Eunhwan Park, Donghyeon Jeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na

LM-BFF achieves significant few-shot performance by using auto-generated prompts and adding demonstrations similar to an input example.

MRPC, SST-2 +1

Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information

no code implementations • 29 May 2018 • Seonhoon Kim, Inho Kang, Nojun Kwak

Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers.
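The DenseNet-style connectivity described above can be sketched roughly as follows. This is a simplified, assumed reconstruction, not the paper's exact architecture: the shared GRU per layer, the dimensions, and plain dot-product co-attention are illustrative choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

def co_attention(a, b):
    """Soft-align each position of `a` with the other sentence `b` (dot-product attention)."""
    scores = torch.bmm(a, b.transpose(1, 2))          # (batch, len_a, len_b)
    return torch.bmm(F.softmax(scores, dim=-1), b)    # attended features for `a`

class DenselyConnectedCoAttentiveRNN(nn.Module):
    """Each GRU layer sees the concatenation of the word embeddings, the hidden
    states of all preceding layers, and co-attentive features from the other
    sentence, loosely following the connectivity the snippet describes."""
    def __init__(self, emb_dim=100, hidden=100, depth=3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = emb_dim
        for _ in range(depth):
            # input = dense features so far + co-attention over same-sized features
            self.layers.append(nn.GRU(in_dim * 2, hidden, batch_first=True))
            in_dim += hidden                           # dense (concatenative) growth

    def forward(self, p, q):
        dense_p, dense_q = p, q
        for gru in self.layers:
            att_p = co_attention(dense_p, dense_q)
            att_q = co_attention(dense_q, dense_p)
            h_p, _ = gru(torch.cat([dense_p, att_p], dim=-1))
            h_q, _ = gru(torch.cat([dense_q, att_q], dim=-1))
            dense_p = torch.cat([dense_p, h_p], dim=-1)   # keep all preceding features
            dense_q = torch.cat([dense_q, h_q], dim=-1)
        return dense_p, dense_q

model = DenselyConnectedCoAttentiveRNN()
p = torch.randn(4, 12, 100)   # premise:    (batch, seq_len, emb_dim)
q = torch.randn(4, 15, 100)   # hypothesis: (batch, seq_len, emb_dim)
rep_p, rep_q = model(p, q)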

Natural Language Inference, Paraphrase Identification +2

Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension

no code implementations • ACL 2019 • Daesik Kim, Seonhoon Kim, Nojun Kwak

Moreover, ablation studies validate that both incorporating f-GCN to extract knowledge from multi-modal contexts and our newly proposed self-supervised learning process are effective for TQA problems.

Open Set Learning, Question Answering +2

Self-supervised pre-training and contrastive representation learning for multiple-choice video QA

no code implementations • 17 Sep 2020 • Seonhoon Kim, Seohyeong Jeong, Eunbyul Kim, Inho Kang, Nojun Kwak

In this paper, we propose novel training schemes for multiple-choice video question answering, with a self-supervised pre-training stage and supervised contrastive learning as an auxiliary task in the main stage.
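A hedged sketch of how supervised contrastive learning can be attached as an auxiliary objective to a multiple-choice QA classifier; the shapes, the einsum formulation, and the 0.5 weight are assumptions for illustration, not the paper's actual loss.

import torch
import torch.nn.functional as F

def qa_loss_with_contrastive_aux(context_emb, choice_embs, logits, labels, temperature=0.1):
    """Cross-entropy over answer choices plus a supervised contrastive auxiliary
    term that pulls the question/video context embedding toward the correct
    answer's embedding and away from the distractors."""
    # context_emb: (batch, dim)              question + video context representation
    # choice_embs: (batch, num_choices, dim) candidate answer representations
    # logits:      (batch, num_choices)      classification scores
    # labels:      (batch,)                  index of the correct choice
    ce = F.cross_entropy(logits, labels)

    ctx = F.normalize(context_emb, dim=-1)
    cho = F.normalize(choice_embs, dim=-1)
    sims = torch.einsum('bd,bcd->bc', ctx, cho) / temperature
    contrastive = F.cross_entropy(sims, labels)   # correct choice is the positive
    return ce + 0.5 * contrastive                 # 0.5 is an arbitrary weight

# dummy usage
B, C, D = 4, 5, 256
loss = qa_loss_with_contrastive_aux(
    torch.randn(B, D), torch.randn(B, C, D), torch.randn(B, C),
    torch.randint(0, C, (B,)))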

Auxiliary Learning, Contrastive Learning +4

Korean Language Modeling via Syntactic Guide

no code implementations • LREC 2022 • Hyeondey Kim, Seonhoon Kim, Inho Kang, Nojun Kwak, Pascale Fung

Our experimental results show that the proposed methods improve model performance on the investigated Korean language understanding tasks.

Language Modelling, POS

SISER: Semantic-Infused Selective Graph Reasoning for Fact Verification

no code implementations • COLING 2022 • Eunhwan Park, Jong-Hyeon Lee, Jeon Dong Hyeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na

This study proposes Semantic-Infused SElective Graph Reasoning (SISER) for fact verification, which introduces semantic-level graph reasoning and injects its reasoning-enhanced representation into other graph-based and sequence-based reasoning methods.

Fact Verification, Sentence

Unifying Vision-Language Representation Space with Single-tower Transformer

no code implementations • 21 Nov 2022 • Jiho Jang, Chaerin Kong, Donghyeon Jeon, Seonhoon Kim, Nojun Kwak

Contrastive learning is a form of distance learning that aims to learn invariant features from two related representations.
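The snippet refers to the generic contrastive objective over two related representations; a minimal symmetric InfoNCE sketch over paired image/text embeddings is shown below. The temperature value and the use of in-batch negatives are assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def infonce_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: matched image/text pairs in a batch are positives,
    all other pairings are negatives."""
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = img @ txt.t() / temperature            # (batch, batch) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = infonce_loss(torch.randn(16, 512), torch.randn(16, 512))  # dummy batch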

Contrastive Learning, Object Localization +3
