Search Results for author: Seung-Hoon Na

Found 15 papers, 6 papers with code

LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory

1 code implementation ACL 2022 Eunhwan Park, Donghyeon Jeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na

LM-BFF (CITATION) achieves significant few-shot performance by using auto-generated prompts and adding demonstrations similar to an input example.

MRPC SST-2 +1

SISER: Semantic-Infused Selective Graph Reasoning for Fact Verification

no code implementations COLING 2022 Eunhwan Park, Jong-Hyeon Lee, Jeon Dong Hyeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na

This study proposes Semantic-Infused SElective Graph Reasoning (SISER) for fact verification, which introduces semantic-level graph reasoning and injects its reasoning-enhanced representation into other graph-based and sequence-based reasoning methods.

Fact Verification Sentence

JBNU-CCLab at SemEval-2022 Task 7: DeBERTa for Identifying Plausible Clarifications in Instructional Texts

no code implementations SemEval (NAACL) 2022 Daewook Kang, Sung-Min Lee, Eunhwan Park, Seung-Hoon Na

In this study, we examine the ability of contextualized representations from pretrained language models to distinguish whether sequences from instructional articles are plausible or implausible.

Language Modelling Multi-class Classification +1

JBNU-CCLab at SemEval-2022 Task 12: Machine Reading Comprehension and Span Pair Classification for Linking Mathematical Symbols to Their Descriptions

1 code implementation SemEval (NAACL) 2022 Sung-Min Lee, Seung-Hoon Na

This paper describes our system for SemEval-2022 Task 12, ‘linking mathematical symbols to their descriptions’, achieving first place on the leaderboard for all subtasks, which comprise named entity recognition (NER) and relation extraction (RE).

Joint Entity and Relation Extraction Machine Reading Comprehension +1

MAFiD: Moving Average Equipped Fusion-in-Decoder for Question Answering over Tabular and Textual Data

1 code implementation Conference 2023 Sung-Min Lee, Eunhwan Park, Daeryong Seo, Donghyeon Jeon, Inho Kang, Seung-Hoon Na

Transformer-based models for question answering (QA) over tables and texts confront a “long” hybrid sequence over tabular and textual elements, causing long-range reasoning problems.

Question Answering

Feature Structure Distillation with Centered Kernel Alignment in BERT Transferring

1 code implementation 1 Apr 2022 Hee-Jun Jung, Doyeon Kim, Seung-Hoon Na, Kangil Kim

To resolve this in transferring, we investigate distillation of the structures of representations, specified into three types: intra-feature, local inter-feature, and global inter-feature structures.

Knowledge Distillation Language Modelling
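
The title of the entry above refers to Centered Kernel Alignment (CKA). As a rough, self-contained illustration of how CKA compares two sets of feature representations (a sketch of the standard linear-CKA formula, not the paper's distillation objective), one could compute:

```python
import numpy as np

def _center(gram):
    """Double-center a Gram matrix: H K H with H = I - 11^T / n."""
    n = gram.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    return h @ gram @ h

def linear_cka(x, y):
    """Linear CKA between two representation matrices of shape (n_samples, dim).
    Returns a similarity in [0, 1]; higher means more similar feature structure."""
    k = _center(x @ x.T)
    l = _center(y @ y.T)
    hsic = (k * l).sum()
    return hsic / (np.linalg.norm(k) * np.linalg.norm(l))
```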

JBNU at SemEval-2020 Task 4: BERT and UniLM for Commonsense Validation and Explanation

no code implementations SEMEVAL 2020 Seung-Hoon Na, Jong-Hyeon Lee

This paper presents our contributions to SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), and reports experimental results for Subtasks B and C.

Sentence

JBNU at MRP 2020: AMR Parsing Using a Joint State Model for Graph-Sequence Iterative Inference

no code implementations CONLL 2020 Seung-Hoon Na, Jinwoo Min

This paper describes the Jeonbuk National University (JBNU) system for the 2020 shared task on Cross-Framework Meaning Representation Parsing at the Conference on Computational Natural Language Learning.

AMR Parsing

JBNU at MRP 2019: Multi-level Biaffine Attention for Semantic Dependency Parsing

no code implementations CONLL 2019 Seung-Hoon Na, Jinwoo Min, Kwanghyeon Park, Jong-Hun Shin, Young-Kil Kim

We propose a unified parsing model using biaffine attention (Dozat and Manning, 2017), consisting of 1) a BERT-BiLSTM encoder and 2) a biaffine attention decoder.

Dependency Parsing Semantic Dependency Parsing +1
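
For reference, the biaffine attention of Dozat and Manning (2017) mentioned in the entry above scores head-dependent pairs with a bilinear term plus a linear term. The sketch below is a generic PyTorch version under assumed tensor shapes, not the paper's exact multi-level parameterization:

```python
import torch
import torch.nn as nn

class BiaffineAttention(nn.Module):
    """Generic biaffine arc scorer: s_ij = h_i^T U d_j + W [h_i; d_j]."""
    def __init__(self, dim):
        super().__init__()
        self.U = nn.Parameter(torch.zeros(dim, dim))
        self.W = nn.Linear(2 * dim, 1)

    def forward(self, head, dep):
        # head, dep: (batch, seq_len, dim) encoder states (e.g., from a BERT-BiLSTM encoder)
        bilinear = torch.einsum('bid,de,bje->bij', head, self.U, dep)
        linear = self.W(torch.cat([
            head.unsqueeze(2).expand(-1, -1, dep.size(1), -1),
            dep.unsqueeze(1).expand(-1, head.size(1), -1, -1),
        ], dim=-1)).squeeze(-1)
        return bilinear + linear  # (batch, seq_len, seq_len) arc scores
```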

QE BERT: Bilingual BERT Using Multi-task Learning for Neural Quality Estimation

no code implementations WS 2019 Hyun Kim, Joon-Ho Lim, Hyun-Ki Kim, Seung-Hoon Na

Our proposed model re-purposes BERT for translation quality estimation and uses multi-task learning for the sentence-level task and word-level subtasks (i.e., source word, target word, and target gap).

Multi-Task Learning Sentence +1
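
The multi-task setup described above pairs a sentence-level quality score with word-level tags on top of a shared encoder. A minimal sketch of such heads (generic PyTorch, assumed hidden size and label set, not the paper's exact architecture):

```python
import torch.nn as nn

class QEMultiTaskHeads(nn.Module):
    """Sentence-level regression head plus token-level classification head
    sharing one encoder output -- a generic multi-task QE layout."""
    def __init__(self, hidden_size):
        super().__init__()
        self.sentence_head = nn.Linear(hidden_size, 1)  # sentence-level quality score
        self.word_head = nn.Linear(hidden_size, 2)      # per-token OK/BAD tags

    def forward(self, token_states, pooled_state):
        # token_states: (batch, seq_len, hidden); pooled_state: (batch, hidden)
        sentence_score = self.sentence_head(pooled_state).squeeze(-1)
        word_logits = self.word_head(token_states)
        return sentence_score, word_logits
```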

Concept Equalization to Guide Correct Training of Neural Machine Translation

no code implementations IJCNLP 2017 Kangil Kim, Jong-Hun Shin, Seung-Hoon Na, SangKeun Jung

Neural machine translation decoders are usually conditional language models that sequentially generate the words of target sentences.

Machine Translation NMT +1

Improving Term Frequency Normalization for Multi-topical Documents, and Application to Language Modeling Approaches

no code implementations 8 Feb 2015 Seung-Hoon Na, In-Su Kang, Jong-Hyeok Lee

Although these document characteristics should be handled differently, all previous methods of term frequency normalization have ignored these differences and used a simplified length-driven approach that decreases the term frequency based only on document length, causing unreasonable penalization.

Language Modelling Retrieval
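
The "simplified length-driven approach" criticized above can be illustrated with a pivoted-normalization-style factor (Singhal et al., 1996), where term frequency is dampened by document length alone. A small sketch of that simplification (assumed parameter names; not the method proposed in the paper):

```python
def length_normalized_tf(tf, doc_len, avg_doc_len, slope=0.2):
    """Length-driven TF normalization: the raw term frequency is divided by a factor
    that depends only on document length, regardless of how many topics the document
    covers -- the simplification the entry above argues penalizes multi-topical
    documents unreasonably."""
    return tf / ((1.0 - slope) + slope * (doc_len / avg_doc_len))
```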
