no code implementations • NAACL 2022 • Jingyi You, Dongyuan Li, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura
Previous studies on the timeline summarization (TLS) task ignored the information interaction between sentences and dates and adopted pre-defined, non-learnable representations for them.
no code implementations • COLING 2022 • Dongyuan Li, Jingyi You, Kotaro Funakoshi, Manabu Okumura
Text infilling aims to restore incomplete texts by filling in blanks; it has recently attracted increasing attention because of its wide applications in ancient text restoration and text rewriting.
no code implementations • COLING 2022 • Jingyi You, Dongyuan Li, Manabu Okumura, Kenji Suzuki
Automated radiology report generation aims to generate paragraphs that describe fine-grained visual differences among cases, especially the differences between normal and diseased cases.
1 code implementation • 2 May 2024 • Shiyin Tan, Dongyuan Li, Renhe Jiang, Ying Zhang, Manabu Okumura
Graph augmentation has received considerable attention in recent years in graph contrastive learning (GCL) as a way to learn well-generalized node/graph representations.
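For context, the sketch below illustrates the two stochastic augmentations most commonly used in GCL, edge dropping and feature masking, which produce the paired graph views that a contrastive objective later pulls together. The drop/mask probabilities and the NumPy-only setup are assumptions for illustration; this is not the augmentation scheme proposed in the paper above.

```python
# Generic GCL-style graph augmentation sketch (illustrative, not this paper's method).
import numpy as np

def drop_edges(edge_index: np.ndarray, drop_prob: float, rng: np.random.Generator) -> np.ndarray:
    """Randomly remove a fraction of edges; edge_index has shape (2, num_edges)."""
    keep = rng.random(edge_index.shape[1]) >= drop_prob
    return edge_index[:, keep]

def mask_features(x: np.ndarray, mask_prob: float, rng: np.random.Generator) -> np.ndarray:
    """Zero out randomly chosen feature dimensions for all nodes."""
    keep = rng.random(x.shape[1]) >= mask_prob
    return x * keep  # broadcasting zeroes the masked feature columns

def make_two_views(x, edge_index, rng, drop_prob=0.2, mask_prob=0.2):
    """Produce two stochastically augmented views of the same graph,
    which a contrastive loss would then align in embedding space."""
    view1 = (mask_features(x, mask_prob, rng), drop_edges(edge_index, drop_prob, rng))
    view2 = (mask_features(x, mask_prob, rng), drop_edges(edge_index, drop_prob, rng))
    return view1, view2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 8))                           # 5 nodes, 8 features each
    edge_index = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])   # 4 directed edges
    (x1, e1), (x2, e2) = make_two_views(x, edge_index, rng)
    print(x1.shape, e1.shape, x2.shape, e2.shape)
```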
no code implementations • 1 May 2024 • Dongyuan Li, Zhen Wang, Yankai Chen, Renhe Jiang, Weiping Ding, Manabu Okumura
Active learning seeks to achieve strong performance with fewer training samples.
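As a rough illustration of this idea, the sketch below runs a generic least-confidence active-learning loop on synthetic data with a scikit-learn classifier. The query strategy, seed size, and per-round budget are assumptions made for the example; they are not the selection criterion studied in the paper above.

```python
# Minimal least-confidence active-learning loop (generic illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

labeled = list(rng.choice(len(X), size=10, replace=False))   # small seed label set
pool = [i for i in range(len(X)) if i not in labeled]        # unlabeled pool

for round_id in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)        # least confidence = low top probability
    query = [pool[i] for i in np.argsort(-uncertainty)[:10]]  # most uncertain samples
    labeled.extend(query)                        # "annotate" the queried samples
    pool = [i for i in pool if i not in query]
    print(f"round {round_id}: {len(labeled)} labels, acc={model.score(X, y):.3f}")
```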
1 code implementation • 1 May 2024 • Dongyuan Li, Ying Zhang, Yusong Wang, Kotaro Funakoshi, Manabu Okumura
To address these issues, we propose an active learning (AL)-based fine-tuning framework for SER, called AFTER, that leverages task adaptation pre-training (TAPT) and AL methods to enhance performance and efficiency.
no code implementations • 18 Nov 2023 • Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura
In this paper, we propose a method for joint modality fusion and graph contrastive learning for multimodal emotion recognition (Joyful), where multimodal fusion, contrastive learning, and emotion recognition are jointly optimized.
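The sketch below shows, in generic PyTorch, what jointly optimizing a fusion module, a contrastive objective, and an emotion classifier under a single combined loss can look like. The concatenation-based fusion, the InfoNCE-style loss, and the loss weight are assumptions for illustration; they are not Joyful's actual graph-based architecture.

```python
# Joint fusion + contrastive + classification objective (generic sketch, not Joyful itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionClassifier(nn.Module):
    def __init__(self, dim_audio=128, dim_text=128, dim_hidden=128, num_emotions=6):
        super().__init__()
        self.fuse = nn.Linear(dim_audio + dim_text, dim_hidden)  # simple concat fusion
        self.head = nn.Linear(dim_hidden, num_emotions)          # emotion classifier

    def forward(self, audio, text):
        z = torch.tanh(self.fuse(torch.cat([audio, text], dim=-1)))
        return z, self.head(z)

def info_nce(z1, z2, temperature=0.1):
    """Contrastive loss pulling two views of the same utterance together."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

model = FusionClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

audio, text = torch.randn(16, 128), torch.randn(16, 128)    # dummy utterance features
labels = torch.randint(0, 6, (16,))                         # dummy emotion labels

z1, logits = model(audio, text)
z2, _ = model(audio + 0.01 * torch.randn_like(audio), text) # lightly perturbed second view
loss = F.cross_entropy(logits, labels) + 0.5 * info_nce(z1, z2)  # joint objective
opt.zero_grad()
loss.backward()
opt.step()
```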
no code implementations • 30 Sep 2023 • Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura
However, existing SER methods ignore the information gap between the pre-training speech recognition task and the downstream SER task, leading to sub-optimal performance.