Search Results for author: Inho Kang

Found 8 papers, 3 papers with code

LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory

1 code implementation ACL 2022 Eunhwan Park, Donghyeon Jeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na

LM-BFF (CITATION) achieves significant few-shot performance by using auto-generated prompts and adding demonstrations similar to an input example.

MRPC · SST-2 · +1
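For intuition, here is a minimal, hypothetical sketch of the LM-BFF-style input format the abstract describes: a cloze-style prompt for the query plus verbalized demonstrations appended to it. The template and label words below are illustrative assumptions, not the authors' auto-generated ones.

```python
# Illustrative sketch of LM-BFF-style prompt construction (not the authors' code):
# a cloze template for the input plus demonstrations similar to that input.
def build_prompt(x: str, demonstrations: list[tuple[str, str]]) -> str:
    """Concatenate the query in a cloze template with verbalized demonstrations."""
    template = "{text} It was [MASK]."  # assumed template; LM-BFF auto-generates these
    parts = [template.format(text=x)]
    for demo_text, label_word in demonstrations:
        # each demonstration is a similar labeled example with its label verbalized
        parts.append(f"{demo_text} It was {label_word}.")
    return " ".join(parts)

print(build_prompt(
    "A gripping, well-acted thriller.",
    [("A dull, lifeless film.", "terrible"), ("An instant classic.", "great")],
))
```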

Korean Language Modeling via Syntactic Guide

no code implementations LREC 2022 Hyeondey Kim, Seonhoon Kim, Inho Kang, Nojun Kwak, Pascale Fung

Our experimental results show that the proposed methods improve model performance on the investigated Korean language understanding tasks.

Language Modelling · POS

SISER: Semantic-Infused Selective Graph Reasoning for Fact Verification

no code implementations COLING 2022 Eunhwan Park, Jong-Hyeon Lee, Jeon Dong Hyeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na

This study proposes Semantic-Infused SElective Graph Reasoning (SISER) for fact verification, which introduces semantic-level graph reasoning and injects its reasoning-enhanced representation into other graph-based and sequence-based reasoning methods.

Fact Verification · Sentence
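As a rough illustration of the injection step described above, the gated fusion below mixes a semantic-graph representation into another branch's representation. The module, the gating choice, and the dimensions are assumptions for illustration; this is not the SISER code.

```python
# Hedged sketch of "injecting" a semantic-graph representation into another
# reasoning branch via a learned gate. All names and shapes are assumptions.
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, semantic_repr: torch.Tensor, base_repr: torch.Tensor) -> torch.Tensor:
        # the gate decides, per dimension, how much semantic-graph signal to inject
        g = torch.sigmoid(self.gate(torch.cat([semantic_repr, base_repr], dim=-1)))
        return g * semantic_repr + (1 - g) * base_repr

fusion = SelectiveFusion(dim=768)
out = fusion(torch.randn(2, 768), torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 768])
```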

MAFiD: Moving Average Equipped Fusion-in-Decoder for Question Answering over Tabular and Textual Data

1 code implementation Conference 2023 Sung-Min Lee, Eunhwan Park, Daeryong Seo, Donghyeon Jeon, Inho Kang, Seung-Hoon Na

Transformer-based models for question answering (QA) over tables and texts confront a “long” hybrid sequence over tabular and textual elements, causing long-range reasoning problems.

Question Answering
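To give a feel for the moving-average ingredient in the title, the sketch below applies a plain exponential moving average over a long hybrid (table-plus-text) token sequence, giving each position a cheap running summary of everything before it. This is only an illustrative analogy, not the MAFiD architecture.

```python
# Illustrative exponential moving average over a long token sequence.
# The smoothing factor and shapes are assumptions for demonstration.
import torch

def ema_over_sequence(x: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """x: (seq_len, dim). Returns the EMA-smoothed sequence."""
    out = torch.empty_like(x)
    state = torch.zeros(x.shape[-1])
    for t in range(x.shape[0]):
        # blend the current token features into the running summary
        state = alpha * x[t] + (1 - alpha) * state
        out[t] = state
    return out

smoothed = ema_over_sequence(torch.randn(1024, 64))
print(smoothed.shape)  # torch.Size([1024, 64])
```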

A Versatile Framework for Evaluating Ranked Lists in terms of Group Fairness and Relevance

no code implementations 1 Apr 2022 Tetsuya Sakai, Jin Young Kim, Inho Kang

We present a simple and versatile framework for evaluating ranked lists in terms of group fairness and relevance, where the groups (i.e., possible attribute values) can be either nominal or ordinal in nature.

Attribute · Fairness
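One building block such a framework needs is a notion of group exposure in a ranked list. The toy sketch below computes per-group exposure under a log-based rank discount; the discount choice and the normalization are assumptions for illustration, not the paper's measure.

```python
# Toy sketch: each group's share of exposure in a ranked list, with a
# log-based rank discount (an assumed choice, not the paper's definition).
import math
from collections import defaultdict

def group_exposure(ranked_groups: list[str]) -> dict[str, float]:
    """Normalized exposure of each group; earlier ranks count for more."""
    exposure = defaultdict(float)
    for rank, group in enumerate(ranked_groups, start=1):
        exposure[group] += 1.0 / math.log2(rank + 1)
    total = sum(exposure.values())
    return {g: v / total for g, v in exposure.items()}

# Compare against a target distribution to judge group fairness.
print(group_exposure(["A", "A", "B", "A", "B", "C"]))
```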

Self-supervised pre-training and contrastive representation learning for multiple-choice video QA

no code implementations 17 Sep 2020 Seonhoon Kim, Seohyeong Jeong, Eunbyul Kim, Inho Kang, Nojun Kwak

In this paper, we propose novel training schemes for multiple-choice video question answering, with a self-supervised pre-training stage and supervised contrastive learning as an auxiliary objective in the main stage.

Auxiliary Learning · Contrastive Learning · +4
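The auxiliary contrastive objective can be sketched as a standard InfoNCE-style loss over matched pairs, as below. The embedding shapes, the pairing scheme, and the temperature are assumptions; this shows the generic technique, not the paper's exact formulation.

```python
# Generic InfoNCE-style contrastive loss: pull matched (video, answer) pairs
# together, push mismatched ones apart. Shapes and temperature are assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb: torch.Tensor, answer_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """video_emb, answer_emb: (batch, dim); row i of each is a positive pair."""
    v = F.normalize(video_emb, dim=-1)
    a = F.normalize(answer_emb, dim=-1)
    logits = v @ a.t() / temperature    # similarity of every video/answer pair
    targets = torch.arange(v.shape[0])  # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```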

Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information

no code implementations29 May 2018 Seonhoon Kim, Inho Kang, Nojun Kwak

Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers.

Natural Language Inference · Paraphrase Identification · +2
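The dense connectivity the abstract describes can be sketched as follows: each recurrent layer reads the concatenation of the input and all preceding layers' hidden features (the co-attentive features are omitted here for brevity). Layer sizes and counts are illustrative assumptions, not the authors' configuration.

```python
# Sketch of DenseNet-style recurrent connectivity: every layer consumes the
# concatenation of all preceding layers' outputs. Co-attention is omitted.
import torch
import torch.nn as nn

class DenselyConnectedRNN(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = input_dim
        for _ in range(num_layers):
            self.layers.append(nn.LSTM(in_dim, hidden_dim, batch_first=True))
            in_dim += hidden_dim  # the next layer also sees this layer's output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for lstm in self.layers:
            out, _ = lstm(torch.cat(features, dim=-1))
            features.append(out)  # dense connection: keep every layer's output
        return torch.cat(features, dim=-1)

model = DenselyConnectedRNN(input_dim=300, hidden_dim=100)
print(model(torch.randn(4, 20, 300)).shape)  # torch.Size([4, 20, 600])
```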
