Search Results for author: Kotaro Funakoshi

Found 17 papers, 4 papers with code

Joint Learning-based Heterogeneous Graph Attention Network for Timeline Summarization

no code implementations • NAACL 2022 • Jingyi You, Dongyuan Li, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura

Previous studies on the timeline summarization (TLS) task ignored the interaction of information between sentences and dates, and adopted pre-defined, unlearnable representations for them.

Event Detection • Graph Attention +1
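
The gap the abstract points at (no learned interaction between sentence and date representations) is what a heterogeneous graph attention layer addresses. As a rough sketch only (the dimensions, single layer, and module names are assumptions, not the paper's model), date nodes attending over sentence nodes in PyTorch might look like:

    # Minimal sketch: date nodes attend over sentence nodes, so date
    # representations become learnable and interaction-aware. Sizes and
    # the single-layer design are illustrative, not the paper's network.
    import torch
    import torch.nn as nn

    class SentenceDateAttention(nn.Module):
        def __init__(self, dim=256, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, date_nodes, sent_nodes):
            # date_nodes: (1, n_dates, dim); sent_nodes: (1, n_sents, dim)
            updated, weights = self.attn(date_nodes, sent_nodes, sent_nodes)
            return updated, weights  # weights: which sentences inform each date

    layer = SentenceDateAttention()
    dates = torch.randn(1, 5, 256)   # 5 candidate timeline dates
    sents = torch.randn(1, 40, 256)  # 40 candidate sentences
    new_dates, w = layer(dates, sents)
    print(new_dates.shape, w.shape)  # (1, 5, 256) and (1, 5, 40)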

A-TIP: Attribute-aware Text Infilling via Pre-trained Language Model

no code implementations • COLING 2022 • Dongyuan Li, Jingyi You, Kotaro Funakoshi, Manabu Okumura

Text infilling aims to restore incomplete texts by filling in blanks, and has recently attracted increasing attention because of its wide applications in ancient text restoration and text rewriting.

Ancient Text Restoration • Attribute +2
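
Infilling in its simplest form can be tried with an off-the-shelf masked language model. The snippet below is a plain fill-in-the-blank baseline using the transformers library, not A-TIP's attribute-aware decoding:

    # Toy infilling baseline with a generic masked LM (assumes the
    # `transformers` package is installed; not A-TIP's model).
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    for cand in unmasker("The ancient scroll was [MASK] beyond recognition.", top_k=3):
        print(f"{cand['token_str']!r}  score={cand['score']:.3f}")

A-TIP's contribution is steering such infilling toward target attributes (e.g., sentiment or style), which a plain masked LM does not do.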

Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition

no code implementations • 18 Nov 2023 • Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura

In this paper, we propose a method for joint modality fusion and graph contrastive learning for multimodal emotion recognition (Joyful), where multimodality fusion, contrastive learning, and emotion recognition are jointly optimized.

Contrastive Learning • Multimodal Emotion Recognition
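
"Jointly optimized" means the recognition and contrastive objectives share one backward pass through a combined loss. A schematic of such an objective, with made-up weights and two pre-computed graph views as inputs:

    # Schematic joint objective: emotion classification loss plus a weighted
    # InfoNCE-style graph contrastive term. All tensors and the weight `lam`
    # are placeholders, not values from the paper.
    import torch
    import torch.nn.functional as F

    def joint_loss(logits, labels, z1, z2, temperature=0.5, lam=0.1):
        ce = F.cross_entropy(logits, labels)        # emotion recognition term
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        sim = z1 @ z2.t() / temperature             # similarity of two graph views
        contrastive = F.cross_entropy(sim, torch.arange(z1.size(0)))
        return ce + lam * contrastive

    logits = torch.randn(8, 6)                         # 8 utterances, 6 emotions
    labels = torch.randint(0, 6, (8,))
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # two augmented views
    print(joint_loss(logits, labels, z1, z2).item())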

Active Learning Based Fine-Tuning Framework for Speech Emotion Recognition

no code implementations • 30 Sep 2023 • Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura

However, existing SER methods ignore the information gap between the pre-training speech recognition task and the downstream SER task, leading to sub-optimal performance.

Active Learning • Speech Emotion Recognition +2
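
The "active learning" part of such a framework is typically a pool-based loop: score unlabeled utterances by model uncertainty, label the top ones, fine-tune, repeat. A generic uncertainty-sampling step (the scoring model and budget are illustrative, not the paper's algorithm):

    # Generic pool-based active learning: pick the utterances the current
    # model is least certain about. `predict_proba` stands in for any SER
    # model's softmax output over emotion classes.
    import math, random

    def entropy(probs):
        return -sum(p * math.log(p + 1e-12) for p in probs)

    def predict_proba(utt):
        raw = [random.random() for _ in range(4)]    # placeholder model
        total = sum(raw)
        return [r / total for r in raw]

    def select_batch(pool, budget=4):
        ranked = sorted(pool, key=lambda u: entropy(predict_proba(u)), reverse=True)
        return ranked[:budget]

    pool = [f"utt_{i:02d}" for i in range(20)]
    print(select_batch(pool))  # send these for labeling, then fine-tune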

Automatic Answerability Evaluation for Question Generation

no code implementations • 22 Sep 2023 • Zifan Wang, Kotaro Funakoshi, Manabu Okumura

This work proposes PMAN (Prompting-based Metric on ANswerability), a novel automatic evaluation metric for QG tasks that assesses whether generated questions are answerable by the reference answers.

Question Generation +1
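
The metric's core step (asking a strong LLM whether the reference answer actually answers a generated question) reduces to one prompt per question. A hypothetical template in that spirit; the paper's exact wording may differ:

    # Hypothetical prompt construction for a prompting-based answerability
    # check. The template text is an assumption, not PMAN's actual prompt.
    def answerability_prompt(question: str, reference_answer: str) -> str:
        return (
            "Decide whether the question below can be answered by the given answer.\n"
            f"Question: {question}\n"
            f"Answer: {reference_answer}\n"
            "Reply with exactly one word: Yes or No."
        )

    print(answerability_prompt("Who wrote Hamlet?", "William Shakespeare"))
    # The Yes/No judgments over a test set are then aggregated into a score.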

LATTE: Lattice ATTentive Encoding for Character-based Word Segmentation

2 code implementations • Journal of Natural Language Processing 2023 • Thodsaporn Chay-intr, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura

Our model employs a lattice structure to handle segmentation alternatives and utilizes graph neural networks with an attention mechanism to extract multi-granularity representations from the lattice that complement the character representations.

Ranked #1 on Chinese Word Segmentation on CTB6 (using extra training data)

Chinese Word Segmentation • Japanese Word Segmentation +2
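
The lattice here enumerates alternative segmentations over the character sequence. A dictionary-driven toy version of that construction (a simplification offered only as a sketch; LATTE additionally scores lattice nodes with graph neural networks and attention):

    # Build a toy segmentation lattice: an edge (i, j, word) means characters
    # i..j-1 form a candidate word. Single characters are always candidates.
    def build_lattice(chars, dictionary, max_len=4):
        edges = []
        for i in range(len(chars)):
            for j in range(i + 1, min(i + max_len, len(chars)) + 1):
                word = "".join(chars[i:j])
                if j == i + 1 or word in dictionary:
                    edges.append((i, j, word))
        return edges

    dictionary = {"研究", "研究生", "生命"}
    for edge in build_lattice(list("研究生命"), dictionary):
        print(edge)  # both 研究生/命 and 研究/生命 survive as lattice paths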

Non-Axiomatic Term Logic: A Computational Theory of Cognitive Symbolic Reasoning

no code implementations • 12 Oct 2022 • Kotaro Funakoshi

This paper presents Non-Axiomatic Term Logic (NATL) as a theoretical computational framework of humanlike symbolic reasoning in artificial intelligence.
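
Term logics, of which NATL is a non-axiomatic variant, reason by chaining subject-predicate statements. A toy syllogistic deduction in that style (plain term logic only; NATL's actual inference rules and truth values are defined in the paper and not reproduced here):

    # Toy deduction over "S -> P" (S is a kind of P) beliefs:
    # from S -> M and M -> P, derive S -> P, iterating to a fixed point.
    def deduce(beliefs):
        derived = set(beliefs)
        changed = True
        while changed:
            changed = False
            for s, m in list(derived):
                for m2, p in list(derived):
                    if m == m2 and (s, p) not in derived:
                        derived.add((s, p))
                        changed = True
        return derived

    beliefs = {("canary", "bird"), ("bird", "animal")}
    print(deduce(beliefs))  # adds ("canary", "animal")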

Towards Table-to-Text Generation with Numerical Reasoning

1 code implementation • ACL 2021 • Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura, Hiroya Takamura

In summary, our contributions are (1) a new dataset for numerical table-to-text generation, built from pairs of a table and a paragraph describing it, drawn from scientific papers and requiring richer inference, and (2) a table-to-text generation framework enriched with numerical reasoning.

Descriptive • Table-to-Text Generation
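
A common way to give a generator numerical reasoning material is to precompute arithmetic facts over the table and append them to the linearized input. The sketch below shows that general preprocessing idea; the record format is an assumption, not the paper's framework:

    # Enrich a linearized table with precomputed numeric facts (max, min,
    # difference) so a generator can verbalize comparisons.
    def linearize_with_facts(header, rows):
        cells = [f"{h}={v}" for row in rows for h, v in zip(header, row)]
        scores = [float(row[1]) for row in rows]
        facts = [
            f"max({header[1]})={max(scores)}",
            f"min({header[1]})={min(scores)}",
            f"diff={max(scores) - min(scores):.1f}",
        ]
        return " | ".join(cells + facts)

    print(linearize_with_facts(["model", "BLEU"], [["ours", "34.2"], ["base", "31.7"]]))
    # model=ours | BLEU=34.2 | model=base | BLEU=31.7 | max(BLEU)=34.2 | min(BLEU)=31.7 | diff=2.5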

Generating Weather Comments from Meteorological Simulations

1 code implementation • EACL 2021 • Soichiro Murakami, Sora Tanaka, Masatsugu Hangyo, Hidetaka Kamigaito, Kotaro Funakoshi, Hiroya Takamura, Manabu Okumura

The task of generating weather-forecast comments from meteorological simulations has the following requirements: (i) the changes in numerical values for various physical quantities need to be considered, (ii) the weather comments should be dependent on delivery time and area information, and (iii) the comments should provide useful information for users.

Informativeness
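
Requirements (i)-(iii) dictate what the generator's input must carry: value changes, delivery time and area, and the raw values a useful comment would mention. A hypothetical input-assembly step reflecting that (field names are made up):

    # Assemble a conditioned generation input: (i) changes in physical
    # quantities, (ii) delivery time and area, (iii) informative raw values.
    def build_input(series, area, delivery_time):
        temps = series["temp_c"]
        deltas = [round(b - a, 1) for a, b in zip(temps, temps[1:])]
        return {
            "area": area,                  # (ii) area conditioning
            "time": delivery_time,         # (ii) delivery-time conditioning
            "temp_c": temps,               # (iii) values worth mentioning
            "temp_deltas": deltas,         # (i) changes across simulation steps
        }

    sim = {"temp_c": [18.0, 21.5, 19.2]}
    print(build_input(sim, area="Tokyo", delivery_time="2021-02-01T06:00"))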

A POS Tagging Model Adapted to Learner English

no code implementations • WS 2018 • Ryo Nagata, Tomoya Mizumoto, Yuta Kikuchi, Yoshifumi Kawasaki, Kotaro Funakoshi

Based on the discussion of possible causes of POS tagging errors in learner English, we show that deep neural models are particularly suitable for this task.

Grammatical Error Correction • Part-Of-Speech Tagging +2
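
A minimal example of the kind of deep neural tagger the paper argues suits learner English: a generic BiLSTM tagger skeleton with placeholder sizes; the adaptation to learner English (training data and features) is the paper's contribution and is not shown here:

    # Skeleton BiLSTM POS tagger: embeddings -> BiLSTM -> per-token tag logits.
    import torch
    import torch.nn as nn

    class BiLSTMTagger(nn.Module):
        def __init__(self, vocab=10000, tags=17, dim=100, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.lstm = nn.LSTM(dim, hidden, bidirectional=True, batch_first=True)
            self.out = nn.Linear(2 * hidden, tags)

        def forward(self, token_ids):
            h, _ = self.lstm(self.emb(token_ids))
            return self.out(h)             # (batch, seq_len, n_tags)

    tagger = BiLSTMTagger()
    logits = tagger(torch.randint(0, 10000, (1, 6)))  # one 6-token sentence
    print(logits.argmax(-1))                          # predicted tag ids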
