no code implementations • 30 Sep 2023 • Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura
However, existing SER methods ignore the information gap between the pre-training speech recognition task and the downstream SER task, leading to sub-optimal performance.
no code implementations • 18 Nov 2023 • Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura
In this paper, we propose a method for joint modality fusion and graph contrastive learning for multimodal emotion recognition (Joyful), where multimodality fusion, contrastive learning, and emotion recognition are jointly optimized.
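The abstract describes a single objective that jointly optimizes multimodal fusion, contrastive learning, and emotion recognition. A minimal sketch of such a joint loss, assuming a standard NT-Xent-style contrastive term between two modality embeddings plus a cross-entropy emotion-classification term (the function names, the weighting scheme `lam`, and the two-modality setup are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    # mean negative log-likelihood of the true class
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def nt_xent(za, zb, tau=0.5):
    # contrastive term: matched rows of za/zb (two modality views of the
    # same utterance) are positives, all other pairings are negatives
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / tau  # cosine similarities scaled by temperature
    labels = np.arange(len(za))
    return cross_entropy(sim, labels)

def joint_loss(logits, labels, za, zb, lam=0.1):
    # hypothetical joint objective: emotion classification loss plus a
    # weighted cross-modal contrastive loss, optimized together
    return cross_entropy(logits, labels) + lam * nt_xent(za, zb)
```

In a training loop, both terms would share the fused encoder's parameters, so gradients from the contrastive term regularize the representations used for emotion classification.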
no code implementations • NAACL 2022 • Jingyi You, Dongyuan Li, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura
Previous studies on the timeline summarization (TLS) task ignored the information interaction between sentences and dates, and adopted pre-defined, non-learnable representations for them.
no code implementations • COLING 2022 • Dongyuan Li, Jingyi You, Kotaro Funakoshi, Manabu Okumura
Text infilling aims to restore incomplete texts by filling in blanks, and has recently attracted increasing attention owing to its wide applications in ancient text restoration and text rewriting.
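To make the infilling task concrete, here is a toy sketch that fills a blank token by scoring candidate words against their left and right context with a bigram count model. This is purely illustrative of the task format (the corpus, the `[BLANK]` marker, and the bigram scoring are assumptions, not the paper's method, which would use a learned language model):

```python
from collections import Counter

def train_bigrams(corpus):
    # count adjacent word pairs over a list of whitespace-tokenized sentences
    counts = Counter()
    for sent in corpus:
        toks = sent.split()
        counts.update(zip(toks, toks[1:]))
    return counts

def fill_blank(tokens, blank_idx, candidates, bigrams):
    # score each candidate by how well it fits the neighboring words,
    # then substitute the best-scoring one into the blank position
    def score(w):
        s = 0
        if blank_idx > 0:
            s += bigrams[(tokens[blank_idx - 1], w)]
        if blank_idx < len(tokens) - 1:
            s += bigrams[(w, tokens[blank_idx + 1])]
        return s
    out = tokens[:]
    out[blank_idx] = max(candidates, key=score)
    return out
```

For example, given the corpus `["the cat sat on the mat"]` and the input `"the [BLANK] sat"`, the candidate most compatible with both neighbors would be chosen to restore the sentence.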
no code implementations • COLING 2022 • Jingyi You, Dongyuan Li, Manabu Okumura, Kenji Suzuki
Automated radiology report generation aims to generate paragraphs that describe fine-grained visual differences among cases, especially the differences between normal and diseased cases.